QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
79,272,285
| 5,028,466
|
UserWarning: DataFrame constructor is internal. Do not directly use it. in Foundry
|
<p>I'm running a Code Workbook in palantir foundry, where one of the steps is below code that takes data from a dataset:</p>
<pre><code>import pyspark.sql.functions as F
def df_kyc (JOHNNY):
JOHNNY = JOHNNY.select(*[
F.col(c).cast("string").alias(c) if t == "timestamp" else F.col(c)
for c, t in JOHNNY.dtypes])
JOHNNY = JOHNNY.toPandas()
return JOHNNY
</code></pre>
<p>however, after running the preview, I receive following log error:</p>
<p>"
/opt/spark/work-dir/<strong>environment</strong>/lib/python3.8/site-packages/pyspark/sql/dataframe.py:148: UserWarning: DataFrame constructor is internal. Do not directly use it.
warnings.warn("DataFrame constructor is internal. Do not directly use it.")
"</p>
<p>Do you know what's wrong with that please? Thanks</p>
|
<python><pyspark><palantir-foundry><foundry-code-workbooks>
|
2024-12-11 15:32:17
| 1
| 537
|
Turpan
|
79,272,268
| 447,426
|
How replace WriteOnlyCollection in DB object - "because the WriteOnlyCollection does not support implicit iteration or direct assignment"
|
<p>I have an DB object "Part" with a field:</p>
<pre><code>health_indicators: WriteOnlyMapped["HealthIndicators"] = relationship("HealthIndicators",
uselist=True, back_populates="part")
</code></pre>
<p>So per default health_indicators are never filled but are set to WriteOnlyCollection if read from DB.</p>
<p>Now i want to fill the list with a specific select statement. And provide methods in my PartRepo that return objects with filled HealthIndicators:</p>
<pre><code>def get_part_by_id_filled_latest_hi(self, part_id: str) -> Part | None:
part = self.get_part_by_id(part_id)
if part:
part.health_indicators = self.db_session.scalars(HealthIndicatorsRepo.between_dates_stmt(part_id, None, None)).all()
return part
</code></pre>
<p>The problem is that i am not allowed to replace <code>part.health_indicators</code>:</p>
<pre><code>E sqlalchemy.exc.InvalidRequestError: Collection "Part.health_indicators" does not support implicit iteration; collection replacement operations can't be used
</code></pre>
<p>I understand the meaning and idea behind this error (what if i replace and then safe to db :-P).
Is thee an easy way to provide my "Part" with filled health_indicators? Or can you suggest other way for my use case?</p>
<p>The <a href="https://docs.sqlalchemy.org/en/20/orm/large_collections.html#querying-items" rel="nofollow noreferrer">documentation</a> was not of help here.</p>
<p>What i additionally tried:</p>
<p><code>part.health_indicators.add_all(self.db_session.scalars(fill_query).all())</code></p>
<p>What does not yield an error but how to access my items from within an <a href="https://github.com/sqlalchemy/sqlalchemy/blob/860ca0b9d21874d9827f7f37cd0c330436baefb4/lib/sqlalchemy/orm/writeonly.py#L539" rel="nofollow noreferrer">"WriteOnlyCollection"</a> - it seems not an iterable or kind of list.</p>
|
<python><sqlalchemy>
|
2024-12-11 15:28:06
| 1
| 13,125
|
dermoritz
|
79,272,147
| 1,014,299
|
ModuleNotFoundError when referencing folder in Python
|
<p>I have a Python3 project arranged as follows:</p>
<p>C:\automation\framework\constants.py</p>
<p>C:\automation\tests\unit-tests\test_myunittest.py</p>
<p>In my unit test, I'm trying to call methods in framework folder, which has the required <strong>init</strong>.py file
At the start of my unit test, I have the following imports:</p>
<pre><code>import pytest
from framework import constants
</code></pre>
<p>When I call pytest, it throws the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'framework'
</code></pre>
<p>How do I fix this?</p>
|
<python><python-3.8><python-module><python-packaging>
|
2024-12-11 14:49:35
| 1
| 1,091
|
bearaman
|
79,272,105
| 5,775,358
|
VSCode ruff ignores ruff.toml
|
<p>For a project I have created a <code>ruff.toml</code> file. In this file is among other things a different line-length defined. When ruff is used from the cli <code>ruff format</code> it works as expected. But when I use <code>Ruff: format document</code> from the extension it goes back to the line length defined in my VSCode settings.</p>
<p>This a part of all my settings, but these are all settings containing <code>ruff</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"ruff.format.args": [
"--line-length=79"
],
"ruff.configurationPreference": "filesystemFirst",
"editor.defaultFormatter": "charliermarsh.ruff",
"notebook.defaultFormatter": "charliermarsh.ruff",
}
</code></pre>
<p>I expect that this line: <code>"ruff.configurationPreference": "filesystemFirst",</code> defines to first look a t the file system, if no <code>.toml</code> file is found, then it should use the default settings.</p>
|
<python><visual-studio-code><ruff>
|
2024-12-11 14:39:18
| 2
| 2,406
|
3dSpatialUser
|
79,272,102
| 11,159,734
|
Azure Document Intelligence (formrecognizer) - 'InvalidContent' when passing pdf
|
<p>I upload a pdf file to my streamlit application like this:</p>
<pre><code>import streamlit as st
uploaded_file = st.file_uploader("Upload pdf file", type="pdf")
result = analyze_general_document(uploaded_file)
</code></pre>
<p>I want to analzye this pdf using the <code>Azure Document Intelligence</code> python package like this:</p>
<pre><code>from io import BytesIO
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient
def set_client(secrets: dict):
endpoint = secrets["AI_DOCS_BASE"]
key = secrets["AI_DOCS_KEY"]
document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
return document_analysis_client
def analyze_general_document(uploaded_file, secrets: dict):
print(f"File type: {uploaded_file.type}")
print(f"File size: {uploaded_file.size} bytes")
client = set_client(secrets)
# poller = client.begin_analyze_document_from_url("prebuilt-document", formUrl)
poller = client.begin_analyze_document("prebuilt-document", document=uploaded_file)
</code></pre>
<p>I can successfully print the file type and file size as you can see in the terminal output:</p>
<pre><code>File type: application/pdf
File size: 6928426 bytes
</code></pre>
<p>Also opening the file with <code>PyMuPDF</code> works fine as well.</p>
<p>However the method <code>begin_analyze_document</code> throws the following exeception:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File "C:\Users\myuser\Documents\visual-studio-code\project\project-ai-docs\webapp\app.py", line 79, in <module>
main()
File "C:\Users\myuser\Documents\visual-studio-code\project\project-ai-docs\webapp\app.py", line 61, in main
zip_content = process_pdf(uploaded_file, secrets)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\myuser\Documents\visual-studio-code\project\project-ai-docs\webapp\app_backend.py", line 40, in process_pdf
analyze_general_document(uploaded_file, secrets)
File "C:\Users\myuser\Documents\visual-studio-code\project\project-ai-docs\webapp\az_document_intelligence.py", line 18, in analyze_general_document
poller = client.begin_analyze_document("prebuilt-document", document=uploaded_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\azure\core\tracing\decorator.py", line 105, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\azure\ai\formrecognizer\_document_analysis_client.py", line 129, in begin_analyze_document
return _client_op_path.begin_analyze_document( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\azure\core\tracing\decorator.py", line 105, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\azure\ai\formrecognizer\_generated\v2023_07_31\operations\_document_models_operations.py", line 518, in begin_analyze_document
raw_result = self._analyze_document_initial( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\myuser\AppData\Local\miniconda3\envs\projectai\Lib\site-packages\azure\ai\formrecognizer\_generated\v2023_07_31\operations\_document_models_operations.py", line 443, in _analyze_document_initial
raise HttpResponseError(response=response)
azure.core.exceptions.HttpResponseError: (InvalidRequest) Invalid request.
Code: InvalidRequest
Message: Invalid request.
Inner error: {
"code": "InvalidContent",
"message": "The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats."
}
</code></pre>
<p>Why is the pdf considered invalid?
I also tried wrapping it in a BytesIO object like this but it didn't work either:</p>
<pre><code>def analyze_general_document(uploaded_file, secrets: dict):
print(f"File type: {uploaded_file.type}")
print(f"File size: {uploaded_file.size} bytes")
# Read the file as bytes
file_bytes = uploaded_file.read()
client = set_client(secrets)
# poller = client.begin_analyze_document_from_url("prebuilt-document", formUrl)
poller = client.begin_analyze_document("prebuilt-document", document=BytesIO(file_bytes))
</code></pre>
|
<python><azure><streamlit><azure-form-recognizer>
|
2024-12-11 14:37:50
| 3
| 1,025
|
Daniel
|
79,272,101
| 453,851
|
How to type hint a decorator to dictate some parameters but not all?
|
<p>I'm looking of a way (the way) to type hint a decorator indicating the parameters that must exist only. The decorator does not change the signature of the function it wraps.</p>
<pre class="lang-py prettyprint-override"><code>from functools import wraps
def my_decorator(func):
@wraps(func)
def wrapper(a: int, b: str, *args, **kwargs) -> float:
return func(a, b, *args, **kwargs)
return wrapper
</code></pre>
<hr />
<p>I've been through a bunch of different attempts. My latest resulted in a MyPy error:</p>
<pre class="lang-py prettyprint-override"><code>_P = ParamSpec("_P")
def my_decorator(
func: Callable[[int, str, , _P], flaot]
) -> func: Callable[[int, str, , _P], flaot]:
...
</code></pre>
<p>Resulted in the error:</p>
<pre><code>error: Invalid location for ParamSpec "_P" [valid-type]
note: You can use ParamSpec as the first argument to Callable, e.g., "Callable[_P, int]"
</code></pre>
<hr />
<p>How can I type hint this, such that MyPy will warn if the decorated function has the wrong signature?</p>
|
<python><python-typing><mypy>
|
2024-12-11 14:37:35
| 1
| 15,219
|
Philip Couling
|
79,272,091
| 561,243
|
Use of subclasses of Generic in other classes
|
<p>I am again fighting against the use of bound type variable in python.</p>
<p>Have a look at this example:</p>
<pre class="lang-py prettyprint-override"><code>
from typing import Generic, TypeVar, Any
ParameterType = TypeVar('ParameterType')
class Parameter(Generic[ParameterType]):
def __init__(self, name: str, value: ParameterType) -> None:
self.name = name
self.value = value
# no mypy problem up to here
class MyClass:
def __init__(self) -> None:
self._container: dict[str, Parameter[ParameterType]] = {} # error: Type variable "parameter_type.ParameterType" is unbound [valid-type]
def load_params(self, external_data: dict[str, Any]) -> None:
for k,v in external_data.items():
self._container[k] = Parameter(k, v)
my_class = MyClass()
ext_data = {'a' : 15, 'b': 'string', 'c' : [1,2,3,4]}
my_class.load_params(ext_data)
</code></pre>
<p>This is an extremely simplified version of what I want to do in my code. I have parameter class in which the <code>value</code> can be several different types.</p>
<p>I have another class, not directly inheriting from the parameter class that is loading (from a toml file) a dictionary with key and value. These will be used to create new Parameter classes and they will be stored in a dictionary inside the class.</p>
<p>The scheme is working nicely and I have already used it several times, but now I want to do some static type checking.</p>
<p>If I run mypy --strict on the code, I get an error on the init of MyClass. Complaining that Parameter is unbound for MyClass and that I should make MyClass derive from Generic[ParameterType] as well.</p>
<p>Does this make any sense? In my project MyClass has a completely different inheritance and moreover ParameterType is the type of one parameter and not of all parameters in the MyClass container (in the example the first is int, the second is str and the third is list[int]).</p>
<p>The alternative solution would be to use Any:</p>
<pre class="lang-py prettyprint-override"><code>
class AltParameter:
def __init__(self, name: str, value: Any) -> None:
self.name = name
self.value = value
class MyAltClass:
def __init__(self) -> None:
self._container: dict[str, AltParameter] = {}
def load_params(self, external_data: dict[str, Any]) -> None:
for k,v in external_data.items():
self._container[k] = AltParameter(k, v)
my_alt_class = MyAltClass()
my_alt_class.load_params(ext_data)
</code></pre>
<p>but then I can also quit doing static typing.</p>
<p>Can you help me solving this typing puzzle?</p>
|
<python><python-typing><mypy>
|
2024-12-11 14:33:13
| 0
| 367
|
toto
|
79,271,961
| 10,306,224
|
Pydantic/Django Ninja use only existing keys (even with None)
|
<p>having an app in Django Ninja with schemas:</p>
<pre><code>class NumericalFilterSchema(Schema):
gt: Optional[int] = None
lt: Optional[int] = None
gte: Optional[int] = None
lte: Optional[int] = None
exact: Optional[int] = None
class Config(Schema.Config):
extra = "forbid"
class StringFilterSchema(Schema):
contains: Optional[str] = None
icontains: Optional[str] = None
exact: Optional[str] = None
class Config(Schema.Config):
extra = "forbid"
class InputsSchema(Schema):
major_version: Optional[NumericalFilterSchema] = None
app_name: Optional[StringFilterSchema] = None
class Config(Schema.Config):
extra = "forbid"
class InputSchema(Schema):
filters: InputsSchema
class Config(Schema.Config):
extra = "forbid"
</code></pre>
<p>which I then use in the endpoint like this:</p>
<pre><code>
@router_v1.post(
"/apps",
tags=["..."],
auth=AuthBearer(),
)
def dynamic_filter(request: HttpRequest, filters: InputsSchema):
query = Q()
# import ipdb
# ipdb.set_trace()
for key, value in filters.dict(exclude_none=True).items():
# key = translate_field(key) # just abstraction between endpoint keys to db keys
if isinstance(value, dict):
for k, v in value.items():
if v is not None:
query &= Q(**{f"{key}__{k}": v})
else:
query &= Q(**{key: value})
results = Apps.objects.filter(query)
...
</code></pre>
<p>Problem:</p>
<p>As you can see in the query building I am excluding all the None values which is fine in the most cases for example:</p>
<pre><code>{
"major_version": {
"exact": 3
},
"app_name": {
"icontains": "google"
}
}
</code></pre>
<p>this will return schema</p>
<pre><code>InputsSchema(major_version=NumericalFilterSchema(gt=None, lt=None, gte=None, lte=None, exact=3), app_name=StringFilterSchema(contains=None, icontains='google', exact=None))
</code></pre>
<p>which is great... but what if my input value is None?</p>
<p>for example:</p>
<pre><code>{
"major_version": {
"exact": null
},
"app_name": {
"icontains": "google"
}
}
</code></pre>
<p>here the <code>exact</code> key value pair will resolve to <code>"exact": None</code> which will be the same as other keys after pydantic/ninja validation:</p>
<pre><code>InputsSchema(major_version=NumericalFilterSchema(gt=None, lt=None, gte=None, lte=None, exact=None), app_name=StringFilterSchema(contains=None, icontains='google', exact=None))
</code></pre>
<p>which "sucks" for me bcs I use <code>exclude_none=True</code> which filters out all Nones - even the value I gave.</p>
<p>is there a way to avoid creating the non existing keys in created model? So after the request validation I will have something like:</p>
<pre><code>InputsSchema(major_version=NumericalFilterSchema(exact=None), app_name=StringFilterSchema(icontains='google'))
</code></pre>
<p>so I don't have to use the <code>exclude_none=True</code> and pass the <code>None</code> to the query?</p>
<p>Thanks!</p>
|
<python><django><validation><pydantic><django-ninja>
|
2024-12-11 13:50:15
| 1
| 905
|
Leemosh
|
79,271,959
| 6,571,328
|
UPOS Mappings - Tensorflow Datasets TDFS
|
<p>I am using the tensorflow tdfs dataset <a href="https://www.tensorflow.org/datasets/catalog/xtreme_pos" rel="nofollow noreferrer">extreme/pos</a> which I retrieve using the code below. It is annotated with universal part of speech <a href="https://universaldependencies.org/u/pos/" rel="nofollow noreferrer">POS</a> labels. These are int values. Its fairly easy to map them back to their part of speech by creating my own mapping (0 = ADJ, 7 = NOUN, etc.) but I was wondering if there is a way of retrieving these class mappings from the tdfs dataset?</p>
<pre><code>(orig_train, orig_dev, orig_test), ds_info = tfds.load(
'xtreme_pos/xtreme_pos_en',
split=['train', 'dev', 'test'],
shuffle_files=True,
with_info=True
)
</code></pre>
|
<python><tensorflow>
|
2024-12-11 13:50:06
| 2
| 393
|
RodP
|
79,271,953
| 4,451,315
|
cumulative sum per group in PyArrow
|
<p>In pandas I can do:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'a': [1,2,3,4,5,6], 'b': ['x']*3+['y']*3})
df.groupby('b')['a'].cumsum()
</code></pre>
<pre><code>0 1
1 3
2 6
3 4
4 9
5 15
Name: a, dtype: int64
</code></pre>
<p>How can I get the same result in PyArrow starting from</p>
<pre class="lang-py prettyprint-override"><code>pa.table({'a': [1,2,3,4,5,6], 'b': ['x']*3+['y']*3})
</code></pre>
<p><em>without</em> converting to pandas?</p>
|
<python><pyarrow>
|
2024-12-11 13:48:15
| 2
| 11,062
|
ignoring_gravity
|
79,271,832
| 3,084,842
|
Calculating negative values with numpy.log
|
<p>I'm trying to do a calculation that contains negative values inside a log (base-e) function. Python's numpy package has the <a href="https://numpy.org/doc/stable/reference/generated/numpy.log.html" rel="noreferrer"><code>log</code></a> function, MWE below:</p>
<pre><code>from numpy import exp, log
z = 1j*log(-1.1)
print(exp(-1j*z))
</code></pre>
<p>but it gives an error message:</p>
<pre><code>RuntimeWarning: invalid value encountered in log
z = 1j*log(-1.1)
</code></pre>
<p>The correct answer is <code>-1.1</code> which we can be easily verified on WolframAlpha. Is there an alternative numpy log function that can handle negative values? If not, is there a simple workaround or formula that I can use?</p>
|
<python><numpy>
|
2024-12-11 13:07:34
| 5
| 3,997
|
Medulla Oblongata
|
79,271,828
| 2,805,692
|
No module named 'awsgluedq'
|
<p>I am trying to run aws glue script in local docker environment</p>
<pre><code>import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsgluedq.transforms import EvaluateDataQuality
</code></pre>
<p>I have imports as above. All imports works except <code>awsgluedq</code>
It throws error as <code>ModuleNotFoundError: No module named 'awsgluedq'</code>
Command to execute glue script</p>
<pre><code>docker run -it -v ~/.aws:/home/glue_user/.aws -v $WORKSPACE_LOCATION:/home/glue_user/workspace/ -e AWS_PROFILE=$PROFILE_NAME -e DISABLE_SSL=true --rm -p 4040:4040 -p 18080:18080 --name glue_spark_submit amazon/aws-glue-libs:glue_libs_4.0.0_image_01-arm64 spark-submit /home/glue_user/workspace/src/$SCRIPT_FILE_NAME
</code></pre>
<p>As per documentation for awsgluedq glue version 3 or later should be used. But glue version is 4. Whats wrong here?</p>
<p>Thanks in advance</p>
|
<python><aws-glue>
|
2024-12-11 13:06:48
| 1
| 317
|
sdk
|
79,271,808
| 5,767,535
|
Python not finding files from module when not using `from`
|
<p>I have the following folder structure:</p>
<pre><code>C:
βββ dev
βββ my_scripts
βββ __init__.py
βββ my_utils.py
</code></pre>
<p>The file <code>my_utils.py</code> includes some function <code>f</code>.</p>
<p>The following code works:</p>
<pre><code>import sys
sys.path.append('C:\dev')
from my_scripts import my_utils
my_utils.f()
</code></pre>
<p>However the following does <strong>not</strong> work:</p>
<pre><code>import sys
sys.path.append('C:\dev')
import my_scripts
my_scripts.my_utils.f()
> AttributeError: module 'my_scripts' has no attribute 'my_utils'
</code></pre>
<p>Why am I getting this error?</p>
|
<python><import>
|
2024-12-11 13:00:52
| 1
| 2,343
|
Daneel Olivaw
|
79,271,750
| 16,804,841
|
Access container class from contained class with python dataclasses
|
<p>I have a parent class, that contains a child class. Both are implemented with python dataclasses. The classes look like this:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from dataclasses import dataclass
@dataclass
class Parent:
name: str
child: Child
@dataclass
class Child:
parent: Parent
</code></pre>
<p>The goal is, to access the child class from the parent class, but also the parent class from the child class. At the same time I don't want to have to annotate either of the references as Optional.</p>
<p>Since the child object only exist with a parent object, this would possible:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from dataclasses import dataclass
@dataclass
class Parent:
name: str
child: Child
def __post_init__(self):
self.child.parent = self
@dataclass
class Child:
parent: Parent = None
Parent(name="foo", child=Child())
</code></pre>
<p>However, since I am using mypy, it complains that <code>Child.parent</code> should be annotated with <code>Optional[Parent]</code>. In practice this is only true until after the <code>__post_init__</code> call. How could I get around this issue?</p>
|
<python><python-typing><mypy><python-dataclasses>
|
2024-12-11 12:39:54
| 1
| 313
|
Hazel
|
79,271,743
| 14,691,751
|
Why can't my Python interpreter or Arlpy see the Acoustics Toolbox models?
|
<p>I am trying to use the <a href="http://oalib.hlsresearch.com/AcousticsToolbox/" rel="nofollow noreferrer">Acoustics Toolbox</a> in Python, using the <code>arlpy</code> module. I am able to import <code>arlpy</code> but it cannot see any of the models:</p>
<pre><code>>>> import arlpy.uwapm as pm
>>> print(pm.models())
[]
>>>
</code></pre>
<p>I have installed the Acoustics Toolbox files onto my local drive, and have set up the Environment path appropriately.
The models are located in
<code>C:\WorkTools\AcousticToolbox\atWin10_2020_11_4\windows-bin-20201102\</code>
and I have added that path to the list of environment paths for both System and User.</p>
<p>I'm aware of <a href="https://stackoverflow.com/a/74900962/14691751">this answer</a> and <a href="https://stackoverflow.com/a/44272417/14691751">this one</a>. They do describe what I'm <em>supposed</em> to do, but do not address what to do when it doesn't work.</p>
<p>I'm using Python 3.12.4. I get the same issue whether I try to use the models from Spyder or from the command prompt.</p>
|
<python><windows><path>
|
2024-12-11 12:38:09
| 0
| 388
|
Dan Pollard
|
79,271,631
| 12,016,688
|
Why reference count of None object is fixed?
|
<p>I was experimenting with refcount of objects, and I noticed the reference count of <code>None</code> object does not change when I bind identifiers to <code>None</code>. I observed this behavior in python version <code>3.13</code>.</p>
<pre class="lang-py prettyprint-override"><code>Python 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getrefcount(None)
4294967295
>>> list_of_nones = [None for _ in range(100)]
>>> sys.getrefcount(None)
4294967295
>>> del list_of_nones
>>> sys.getrefcount(None)
4294967295
</code></pre>
<p>This behavior is in contrast with the behavior of python <code>3.10</code>:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.10.15 (main, Sep 7 2024, 00:20:06) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>>
>>> sys.getrefcount(None)
4892
>>> list_of_nones = [None for _ in range(100)]
>>> sys.getrefcount(None)
4990
>>>
>>> del list_of_nones
>>>
>>> sys.getrefcount(None)
4890
</code></pre>
<p>In the <code>3.10</code> version, the reference count of <code>None</code>, decrease and increase according to binding identifiers and deleting them. But in the 3.13, the reference count is always fixed.
Can someone explain this behavior?</p>
|
<python><python-3.10><reference-counting><python-3.13>
|
2024-12-11 11:59:52
| 1
| 2,470
|
Amir reza Riahi
|
79,271,442
| 8,831,742
|
Imported igraph graph does not correctly recognize nodes
|
<p>I'm switching from <code>networkx</code> to <code>igraph</code> for my graph analysis project and the graph i'm loading from a file is incorrectly classified as completely disconnected.</p>
<p>Here is the code i'm using:</p>
<pre><code>from sys import argv,getsizeof
from igraph import Graph
# read network file
assert len(argv)>=2, "Must specify a file containing a graph"
G = Graph.Load(argv[1],format="edgelist")
graphsize = sum(getsizeof(v) for v in G.vs) + sum(getsizeof(e) for e in G.es)
graphsize = graphsize // (1024*1024)
print(f"Graph occupies {graphsize} MBs in memory")
print(G.ecount()) # shows the correct number of edges in the graph
print("Graph is connected" if Graph.is_connected(G) else "Graph is not connected")
comps = G.connected_components()
print(max(comps.sizes()))
</code></pre>
<p>The <code>max(comps.sizes())</code> line prints a 1, meaning that each node is counted as being in its own separate component.
The same analysis with <code>networkx</code> on the same file correctly said the graph was connected. How do i fix this?</p>
|
<python><graph-theory><igraph>
|
2024-12-11 11:00:50
| 0
| 353
|
none none
|
79,271,319
| 8,868,419
|
Seleniumbase not logging
|
<p>I am trying to capture responses using Selenium and undetected, this is working normally using those two:</p>
<pre><code>import undetected_chromedriver as uc
capabilities = DesiredCapabilities.CHROME
capabilities["goog:loggingPrefs"] = {"performance": "ALL"}
driver = uc.Chrome(headless=False, desired_capabilities=capabilities)
driver.get(url)
time.sleep(5)
logs_raw = driver.get_log("performance")
</code></pre>
<p>This is working correctly and it logs all the requests and responses, however it does not work in headless mode, so I switched to use Seleniumbase</p>
<pre><code>from seleniumbase import Driver
driver = Driver(uc=True, log_cdp=True, headless=True)
driver.get(url)
time.sleep(5)
logs_raw = driver.get_log("performance")
print(logs_raw)
</code></pre>
<p>As my understanding log_cdp is same as "goog:loggingPrefs" <a href="https://github.com/seleniumbase/SeleniumBase/issues/2220" rel="nofollow noreferrer">https://github.com/seleniumbase/SeleniumBase/issues/2220</a>, however nothing gets logged, both in headless and non-headless and I am not sure what I am doing wrong.
The page gets loaded bypassing the captcha displaying all the data I need.</p>
<pre><code>seleniumbase 4.33.7
Google Chrome 131.0.6778.85
Windows 11 using WSL Ubuntu 22.0.4 LTS
</code></pre>
|
<python><selenium-webdriver><seleniumbase>
|
2024-12-11 10:22:15
| 2
| 2,722
|
Matteo
|
79,271,266
| 3,906,713
|
Scipy.optimize.minimize returns `Desired error not necessarily achieved due to precision loss.` for a toy problem
|
<pre><code>import numpy as np
from scipy.optimize import minimize
np.random.seed(42)
nDim = 24
xBase = np.random.normal(0, 1, nDim)
x0 = np.zeros(nDim)
loss = lambda x: np.linalg.norm(x - xBase)
# loss = lambda x: (x - xBase).dot(x - xBase)
res = minimize(loss, x0, method = 'BFGS', options={'gtol': 1.0E-3, 'maxiter': 1000})
</code></pre>
<p>Here, I minimize the most basic quadratic loss function in 24 dimensions. When I use squared quadratic loss (commented out), the method converges no problem. When I take square root of the objective function, the method fails, even though it is essentially the same problem.</p>
<pre><code> message: Desired error not necessarily achieved due to precision loss.
success: False
status: 2
fun: 1.8180001548836974e-07
x: [ 4.967e-01 -1.383e-01 ... 6.753e-02 -1.425e+00]
nit: 4
jac: [-7.681e-02 4.436e-02 ... 2.302e-01 2.046e-01]
hess_inv: [[ 9.825e-01 1.306e-03 ... 1.217e-02 3.625e-02]
[ 1.306e-03 9.980e-01 ... -1.150e-02 -4.421e-03]
...
[ 1.217e-02 -1.150e-02 ... 6.066e-01 7.074e-02]
[ 3.625e-02 -4.421e-03 ... 7.074e-02 8.894e-01]]
nfev: 1612
njev: 64
</code></pre>
<p>The documentation for <a href="https://docs.scipy.org/doc/scipy/reference/optimize.minimize-bfgs.html#optimize-minimize-bfgs" rel="nofollow noreferrer">BFGS</a> suggests modifying the <code>gtol</code> argument, which I have tried. However, it has no effect, unless I set it to some ridiculously high number like 1.0E+1, in which case the answer is just wrong. In all other cases for gtol within range of 1.0E-1 to 1.0E-10, the exact message as above is returned. The answer is actually correct, because the objective function is 1.0E-7, but the error message still says that optimization failed.</p>
<p>Am I doing something wrong, or is this a bug?</p>
<p>I have had a look at <a href="https://stackoverflow.com/questions/24767191/scipy-is-not-optimizing-and-returns-desired-error-not-necessarily-achieved-due">this</a> related question, but my objective function is significantly simpler than in that case, so I suspect that the concerns mentioned in the answers should not apply here.</p>
<p><strong>NOTE</strong>: My goal is not to get Scipy to work for this toy problem. My goal is to learn to manipulate the arguments of <code>minimize</code> function to increase the likelihood that it solves my actual problem, which is far more complicated.</p>
|
<python><scipy><minimization>
|
2024-12-11 10:06:38
| 1
| 908
|
Aleksejs Fomins
|
79,271,090
| 9,884,998
|
Generalizing a gaussian mix to take any number of arguments with numpy.vectorize causes performance issues
|
<p>I am optimizing a gaussian mix using maximum likelyhood estimation. Originally I used the following model:</p>
<pre><code>def normal(x, mu, sigma):
"""
Gaussian (normal) probability density function.
Args:
x (np.ndarray): Data points.
mu (float): Mean of the distribution.
sigma (float): Standard deviation of the distribution.
Returns:
np.ndarray: Probability density values.
"""
return (1 / (np.sqrt(2 * np.pi) * sigma)) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
def model(x, a, mu1, s1, mu2, s2):
return a*normal(x, mu1, s1) + (1-a)*normal(x, mu2, s2)
</code></pre>
<p>This works great and finds a good fit in under a second.
I now wanted to dynamically generate such a function for any number of peaks.</p>
<pre><code>def generate_gaussian_mix(n):
def gaussian_mix(x, *params):
if len(params) != 3 * n - 1:
print(params)
raise ValueError(f"Expected {3 * n - 1} parameters, but got {len(params)}.")
params = np.asarray(params)
mu = params[0::3] # Means
sigma = params[1::3] # Standard deviations
a = params[2::3] # Weights
a = np.hstack((a, 1 - np.sum(a)))
return np.sum((a / (np.sqrt(2 * np.pi) * sigma))*np.exp(-0.5 * ((x - mu) / sigma) ** 2))
return np.vectorize(gaussian_mix)
</code></pre>
<p>This model takes over three minutes to compute on my laptop with the same number of peaks and events. What are optimization steps I could take to reduce the magnitude of this second function? Is there a good way to avoid vectorization? Do you have any ideas to avoid the repeated slicing?</p>
<p>for completeness, this is the optimization function:</p>
<pre><code>def neg_log_event_likelyhood(model, event, theta):
x = -np.log(model(event, *theta))
return x
def fit_distribution_anneal(model, events, bounds, data_range = None, **kwargs):
def total_log_likelyhood(theta, model, events):
return np.sum(neg_log_event_likelyhood(model, events, theta))
if data_range is not None:
events = np.copy(events)
events = events[np.logical_and(events > data_range[0], events < data_range[1])]
result = dual_annealing(total_log_likelyhood, bounds, args=(model, events), **kwargs)
params = result.x
return params
</code></pre>
<p>The annealing is required as opposed to minimize due to the non convex nature of the problem.</p>
|
<python><numpy><performance><gaussian-mixture-model>
|
2024-12-11 09:05:11
| 1
| 529
|
David K.
|
79,271,050
| 14,351,788
|
logging.getLogger cannot get the logging config info
|
<p>I have two .py files: main.py and utils.py.
The main.py will import funciton from utils.py
My plan is that the log info in both .py files can be recorded in the same log file</p>
<p>Here is my code:</p>
<p>in the main.py:</p>
<pre><code>from utils import run
import logging
# log setting
log_path = './logs'
if not os.path.exists(log_path):
os.makedirs(log_path)
full_log_path = log_path + '/model-' + 'p2p_check-' + time.strftime("%Y-%m-%d", time.localtime()) + '.log'
logging.basicConfig(level=logging.DEBUG,
datefmt='%Y-%m-%d %H:%M:%S',
handlers=[logging.FileHandler(full_log_path, 'w', 'utf-8'),
logging.StreamHandler()])
run()
</code></pre>
<p>in the uitls.py:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
def run():
logger.critical('Logger handlers: %s', logger.handlers)
logger.info('This is an info message from the submodule.')
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
</code></pre>
<p>However, no log will be recode in my log file and part of log info in utils.py will output on my screen:</p>
<pre><code>Logger handlers: []
This is a warning message
This is an error message
This is a critical message
</code></pre>
<p>It seems that the getLogger function cannot get the logging config on the main.py. How to fix this issue?</p>
|
<python><logging>
|
2024-12-11 08:50:23
| 0
| 437
|
Carlos
|
79,270,968
| 414,830
|
what is the correct way to pass context to a playwright function when using python django as the basis for a webapp?
|
<p>I've got a web app that I built first with Flask, then totally rebuilt with Django, to get advantage of the admin interface.
It builds a flyer for a kids football match, using the context to overlay on a background image via css absolute position. Once the coach is happy with the image, a button calls a Generate function, which opens a browser with playwright and passes the session data to recreate the image and allow the coach to download it.</p>
<p>When the coach is viewing the image the text fields are populated from the context. this all works fine.</p>
<pre class="lang-py prettyprint-override"><code>def image(request):
... some other bits ...
for key in request.session.keys():
context[key] = request.session[key]
return render(request, 'coaches/image.html', context)
</code></pre>
<p>this is an example from the template that shows how the text is added from context. I'm using Django Template engine.</p>
<pre class="lang-html prettyprint-override"><code> {% for player in squad %}
<td>
<span class="playernumber">{{ player.number }}</span>
<span class="playerfirstname">{{ player.first }}</span>
<span class="playerlastname">{{ player.last }}</span>
</td>
{% if forloop.counter|divisibleby:2 %}
</tr>
<tr>
{% endif %}
{% endfor %}
</code></pre>
<p>the same template code doesnt work when the playwright browser view is invoked. the view is fine, the template is fine, but calling the variables from context doesnt work.</p>
<p>This is down to the different way it works. With the playwright function, you pass the whole session.</p>
<pre class="lang-py prettyprint-override"><code>def generate(request):
session_data = {
"name": "mysite",
"value": request.COOKIES.get("mysite"),
"url": request.META['HTTP_HOST']
}
target_url = request.META['HTTP_HOST'] + "/coaches/image"
image_bytes = take_screenshot_from_url(target_url, session_data)
</code></pre>
<p>with the playwright function looking like this. (I followed this article <a href="https://realpython.com/python-code-image-generator/#step-4-generate-an-image-from-the-code" rel="nofollow noreferrer">real python</a> building the code formatter, to work out if this pattern does what I want).</p>
<pre class="lang-py prettyprint-override"><code>def take_screenshot_from_url(url, session_data):
with sync_playwright() as playwright:
webkit = playwright.webkit
browser = webkit.launch()
browser_context = browser.new_context(device_scale_factor=2)
browser_context.add_cookies([session_data])
page = browser_context.new_page()
page.goto(url)
screenshot_bytes = page.locator("#squad-image").screenshot()
browser.close()
return screenshot_bytes
</code></pre>
<p>Inside the template, the variables just arent populated.
I've worked out that the session_data variable contains a key value, which is where my session variables actually live. but I'm unsure of the correct way to address them.</p>
<p>I've tried all sorts of things, including serializing/deserializing, creating a dict and passing that, even just passing one variable.</p>
<p>I'm aware ive probably got a fundamental mistake in how i understand whats going on here, would appreciates suggestions on what to read. I've tried taking this back to basics and it works, i think the flaw is in how i understand the difference between Django session storing it in cookies and Flask not doing. Then how to build the call to the playwright view with the actual cookie.</p>
|
<python><django><playwright><playwright-python>
|
2024-12-11 08:19:34
| 1
| 1,043
|
bytejunkie
|
79,270,956
| 1,769,197
|
Python: weird characters in extract_message
|
<p>I use <code>extract_message</code> from <a href="https://pypi.org/project/extract-msg%20package" rel="nofollow noreferrer">pypi.org/project/extract-msg</a> package to extract outlook messages. It works well until this outlook file causes the error message. I found out that the problem is that the email content contains some weird characters like <code>Γ’βΒͺ </code>which cause the error ? Any idea how to make ignore these weird characters ?</p>
<pre><code>msg = extract_msg.Message(email)
2024-12-11 16:02:00,865 - ERROR : [message_base.py:97 - __init__] Critical error accessing the body. File opened but accessing the body will throw an exception.
Traceback (most recent call last):
File "C:\Users\xxxxx\miniconda\envs\spy_3_10\lib\cmd.py", line 214, in onecmd
func = getattr(self, 'do_' + cmd)
AttributeError: 'InterruptiblePdb' object has no attribute 'do_msg'. Did you mean: 'do_s'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\xxxxx\miniconda\envs\spy_3_10\lib\site-packages\extract_msg\msg_classes\message_base.py", line 94, in __init__
self.body
File "C:\Users\xxxxx\miniconda\envs\spy_3_10\lib\functools.py", line 981, in __get__
val = self.func(instance)
File "C:\Users\xxxxx\miniconda\envs\spy_3_10\lib\site-packages\extract_msg\msg_classes\message_base.py", line 944, in body
if (body := self.getStringStream('__substg1.0_1000')) is not None:
File "C:\Users\xxxxx\miniconda\envs\spy_3_10\lib\site-packages\extract_msg\msg_classes\msg.py", line 738, in getStringStream
return None if tmp is None else tmp.decode(self.stringEncoding)
File "C:\Users\xxxxx\miniconda\envs\spy_3_10\lib\encodings\cp1252.py", line 15, in decode
return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2279: character maps to <undefined>
</code></pre>
|
<python><encoding>
|
2024-12-11 08:14:50
| 0
| 2,253
|
user1769197
|
79,270,937
| 7,962,284
|
Issue with Python Flask App Deployment to Azure App Service During Zip Deployment
|
<p>I'm facing an issue with deploying my Python Flask app to <strong>Azure App Service(Python 3.9)</strong> using zip deployment. Below are the details:</p>
<p><strong>Directory structure:</strong></p>
<pre><code>.
|---app.py
|---requirements.txt
</code></pre>
<p><strong>Working Code:</strong></p>
<p><em>app.py:</em></p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return "Hello, Azure!--1"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8000)
</code></pre>
<p><em>requirements.txt:</em></p>
<pre><code>Flask==3.1.0
</code></pre>
<p><strong>Non-Working Code:</strong></p>
<p><em>app.py:</em></p>
<pre><code>from flask import Flask
import openai
app = Flask(__name__)
@app.route('/')
def hello():
return "Hello, Azure!--2"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8000)
</code></pre>
<p><em>requirements.txt:</em></p>
<pre><code>Flask==3.1.0
openai==1.57.1
</code></pre>
<p><strong>Steps for deploying:</strong></p>
<p><em><strong>Create zip:</strong></em></p>
<ul>
<li>Open the directory in file explorer</li>
<li>Select the above files (app.py and requirements.txt)</li>
<li>Right click on file app.py</li>
<li>Compress to -> ZIP File</li>
</ul>
<p><em><strong>Command to deploy into Azure app service:</strong></em></p>
<pre><code>curl -k -v -X POST -H 'Content-Type: application/zip' -u "<$app_service_user>:<60character_length_userPWD>" --data-binary @app.zip https://<appservicename>.scm.dev001.ase.frk.com/api/zipdeploy
</code></pre>
<p><strong>Error in the API log stream:</strong></p>
<pre><code>Site's appCommandLine: gunicorn --bind=0.0.0.0:8000 app:app
Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'gunicorn --bind=0.0.0.0:8000 app:app'
Could not find build manifest file at '/home/site/wwwroot/oryx-manifest.toml'
Could not find operation ID in manifest. Generating an operation id...
Build Operation ID: c9b33d68-8d6d-4326-9b08-257fe923d135
Oryx Version: 0.2.20240619.2, Commit: cf006407a02b225f59dccd677986973c7889aa50, ReleaseTagName: 20240619.2
Writing output script to '/opt/startup/startup.sh'
WARNING: Could not find virtual environment directory /home/site/wwwroot/antenv.
WARNING: Could not find package directory /home/site/wwwroot/__oryx_packages__.
Booting worker with pid: 1075
[1075] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/arbiter.py", line 609, in spawn_worker
worker.init_process()
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/opt/python/3.9.19/lib/python3.9/site-packages/gunicorn/util.py", line 371, in import_app
mod = importlib.import_module(module)
File "/opt/python/3.9.19/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/site/wwwroot/app.py", line 2, in <module>
import openai
ModuleNotFoundError: No module named 'openai'
[1075] [INFO] Worker exiting (pid: 1075)
[1064] [ERROR] Worker (pid:1075) exited with code 3
[1064] [ERROR] Shutting down: Master
[1064] [ERROR] Reason: Worker failed to boot.
</code></pre>
<p>I've verified that the code works locally, but it fails when deployed to Azure App Service. Any insights or suggestions on what might be causing this issue would be greatly appreciated!</p>
<p>Thanks in advance!</p>
|
<python><azure><flask><gunicorn><zipdeploy>
|
2024-12-11 08:08:16
| 1
| 1,007
|
chiru
|
79,270,796
| 22,146,392
|
Script is being ran twice when rendering jinja2 template?
|
<p>I have a python script that renders a jinja2 template. Something like this:</p>
<p><code>basic.py</code></p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
from jinja2 import Environment, PackageLoader, select_autoescape
env = Environment(
loader=PackageLoader('basic'),
autoescape=select_autoescape()
)
print(env.get_template('basic_template.md').render({'version': 'version123', 'date': 'some-date'}))
</code></pre>
<p>My directory structure looks like this:</p>
<pre><code>.
|__ templates/
| |
| |__ basic_template.md
|
|__ basic.py
</code></pre>
<p>I'm running this in a CI pipeline. The job runs this command: <code>python /path/to/basic.py</code>.</p>
<p>Whenever it runs, the whole script runs twice. This is being caused by the <code>env</code> variable--specifically <code>PackageLoader('basic')</code>. I'm pretty sure when this is loaded it's running the <code>basic.py</code> script (again)--from within <code>basic.py</code>.</p>
<p>If I create an empty dummy script and use that in <code>PackageLoader</code>, my script works fine (only renders the template once):</p>
<pre><code>.
|__ templates/
| |
| |__ basic_template.md
|
|__ basic.py
|
|__ other_script.py
</code></pre>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
from jinja2 import Environment, PackageLoader, select_autoescape
env = Environment(
loader=PackageLoader('other_script'),
autoescape=select_autoescape()
)
print(env.get_template('basic_template.md').render({'version': 'version123', 'date': 'some-date'}))
</code></pre>
<p>If I pass a non-existent file to <code>PackageLoader</code> or if I omit <code>loader=PackageLoader()</code> all together, it throws an error.</p>
<p>So what's the right way to accomplish this? I don't want to have a random empty file lying around if I don't need it. How do I render the template without <code>PackageLoader</code> running my script a second time?</p>
|
<python><jinja2>
|
2024-12-11 07:20:32
| 2
| 1,116
|
jeremywat
|
79,270,612
| 4,248,409
|
Suppress stdout of a coroutine without affecting other coroutines in python
|
<pre class="lang-py prettyprint-override"><code>import asyncio
import contextlib
import os
async def my_coroutine():
with open(os.devnull, 'w') as f, contextlib.redirect_stdout(f):
print("This will not be printed")
await asyncio.sleep(1)
print("This will be printed")
async def my_coroutine2():
print("This will be printed from 2")
await asyncio.sleep(1)
async def main():
await asyncio.gather(
my_coroutine(),
my_coroutine2()
)
asyncio.run(main())
</code></pre>
<p>In the above case, the output will be</p>
<pre><code>This will be printed
</code></pre>
<p>Is there a way to print</p>
<pre class="lang-bash prettyprint-override"><code>This will be printed from 2
This will be printed
</code></pre>
<p>That means I want to suppress the <code>stdout</code> of only <code>my_coroutine</code> without affecting <code>my_coroutine2</code></p>
|
<python><python-asyncio>
|
2024-12-11 06:05:39
| 1
| 530
|
BAdhi
|
79,270,538
| 852,854
|
The osmnx call graph_from_bbox() reports that it takes 1 positional argument, however, help(ox.graph.graph_from_bbox) appears to expect 4
|
<p>I am writing a simple test driver to generate an html file, displaying two possible, real-world routes for a very limited area. When I run the python script, I get what I thought would be an easy error to resolve: "line 12, in <module>
graph = ox.graph.graph_from_bbox(*bbox, network_type="drive")
TypeError: graph_from_bbox() takes 1 positional argument but 4 positional arguments (and 1 keyword-only argument) were given".</p>
<p>Here is the code that uses the function:</p>
<pre><code>import osmnx as ox
import networkx as nx
import folium
# Configure timeout for HTTP requests
ox.settings.timeout = 180 # Set timeout to 180 seconds
# Define the bounding box as (north, south, east, west)
bbox = (37.5, 32.0, -94.0, -104.0) # Approximate bounding box for OK, TX, KS
# Create the graph using the bounding box
graph = ox.graph.graph_from_bbox(*bbox, network_type="drive")
# Define start and end points (Oklahoma City to Dallas)
start_point = (35.4676, -97.5164) # Oklahoma City
end_point = (32.7767, -96.7970) # Dallas
# Find the nearest nodes in the road network
start_node = ox.distance.nearest_nodes(graph, X=start_point[1], Y=start_point[0])
end_node = ox.distance.nearest_nodes(graph, X=end_point[1], Y=end_point[0])
# Calculate the shortest path using Dijkstra's algorithm
shortest_path = nx.shortest_path(graph, source=start_node, target=end_node, weight='length')
# Create a map centered between the start and end points
route_map = folium.Map(location=[(start_point[0] + end_point[0]) / 2, (start_point[1] + end_point[1]) / 2], zoom_start=7)
# Extract route geometry and plot it on the map
route_coords = [(graph.nodes[node]['y'], graph.nodes[node]['x']) for node in shortest_path]
folium.PolyLine(route_coords, color="blue", weight=5, opacity=0.8).add_to(route_map)
# Add markers for start and end points
folium.Marker(location=start_point, popup="Start: Oklahoma City").add_to(route_map)
folium.Marker(location=end_point, popup="End: Dallas").add_to(route_map)
# Save the map to an HTML file
route_map.save("real_world_route_map.html")
</code></pre>
<p>Since it is my first time working with osmnx, I thought I'd check the details of what the function expects by typing: help(ox.graph.graph_from_bbox) This is the usage text I see:</p>
<pre><code>This function uses filters to query the Overpass API: you can either
specify a pre-defined `network_type` or provide your own `custom_filter`
with Overpass QL.
Use the `settings` module's `useful_tags_node` and `useful_tags_way`
settings to configure which OSM node/way tags are added as graph node/edge
attributes. You can also use the `settings` module to retrieve a snapshot
of historical OSM data as of a certain date, or to configure the Overpass
server timeout, memory allocation, and other custom settings.
Parameters
----------
bbox
Bounding box as `(left, bottom, right, top)`. Coordinates should be in
unprojected latitude-longitude degrees (EPSG:4326).
network_type
{"all", "all_public", "bike", "drive", "drive_service", "walk"}
What type of street network to retrieve if `custom_filter` is None.
simplify
If True, simplify graph topology via the `simplify_graph` function.
retain_all
If True, return the entire graph even if it is not connected. If
If
False, retain only the largest weakly connected component.
truncate_by_edge
If True, retain nodes outside bounding box if at least one of node's
neighbors is within the bounding box.
custom_filter
A custom ways filter to be used instead of the `network_type` presets,
e.g. `'["power"~"line"]' or '["highway"~"motorway|trunk"]'`. If `str`,
the intersection of keys/values will be used, e.g., `'[maxspeed=50][lanes=2]'`
will return all ways having both maxspeed of 50 and two lanes. If
`list`, the union of the `list` items will be used, e.g.,
`['[maxspeed=50]', '[lanes=2]']` will return all ways having either
maximum speed of 50 or two lanes. Also pass in a `network_type` that
is in `settings.bidirectional_network_types` if you want the graph to
be fully bidirectional.
Returns
-------
G
</code></pre>
<p>The error I received appears to say there is only 1 positional, instead of the 4 the function help indicates. What have I missed?</p>
|
<python><geospatial><osmnx>
|
2024-12-11 05:18:50
| 1
| 703
|
plditallo
|
79,270,262
| 4,755,229
|
How do I use cygdb in Conda environment?
|
<p>I'm trying to run <code>cygdb</code> according to <a href="https://cython.readthedocs.io/en/latest/src/userguide/debugging.html" rel="nofollow noreferrer">this</a> document. However, the <code>cygdb</code> always seem to find the system python instead of the python of current conda environment, as:</p>
<pre><code>It used the Python interpreter /usr/bin/python
Traceback (most recent call last):
File "<string>", line 11, in <module>
ModuleNotFoundError: No module named 'Cython'
</code></pre>
<p>How do I force <code>cygdb</code> to use the python interpreter to use the interpreter of current environment? I tried adding <code>-- --interpreter='which python'</code> (this is not quotation but backtick, but I'm not sure how to format it in markdown), but it did not help.</p>
|
<python><gdb><cython>
|
2024-12-11 01:46:13
| 0
| 498
|
Hojin Cho
|
79,270,237
| 6,041,629
|
Modify simple interpolation function to work on a vector?
|
<p>I am doing many thousands of repetitive interpolations within an optimization algorithm in Python. I have written a function to interpolate a single value fairly efficiently. I would like to extend this to allow for 1D array inputs, as this may help speed up the computations avoiding for loops in my optimizer.</p>
<p>Sample code:</p>
<pre><code>import numpy as np
from numba import njit
#Universal Interpolation function
@njit
def calc(x0, x, y):
if x0 < x[0]:
return y[0]
elif x0 > x[-1]:
return y[-1]
else:
for i in range(len(x) - 1):
if x[i] <= x0 <= x[i + 1]:
x1, x2 = x[i], x[i + 1]
y1, y2 = y[i], y[i + 1]
return y1 + (y2 - y1) / (x2 - x1) * (x0 - x1)
WeirCurve=np.array([[749.81, 0], [749.9, 5], [750, 14.2], [751, 226], [752, 556], [753, 923.2], [754, 1155.3]])
def WeirDischCurve(x):
x = np.asarray(x)
result = calc(x, WeirCurve[:, 0], WeirCurve[:, 1])
return result
</code></pre>
<p>The function works for single value inputs:</p>
<pre><code>WeirDischCurve(751.65)
Out[10]: 440.4999999999925
</code></pre>
<p>but fails with a vector/1D array input.</p>
<pre><code>WeirDischCurve([751.65, 752.5, 753.3])
</code></pre>
<p>Any suggestions on how to do this as efficiently as possible? Scipy.interpolate features are too slow for my needs. Any suggestions would be appreciated (Note - I tried Creating a polynomial to do this but it seems to introduce a lot of error for some of the required interpolations)</p>
|
<python><interpolation>
|
2024-12-11 01:31:50
| 1
| 526
|
Kingle
|
79,270,224
| 8,176,763
|
optimizing pull and push to db with python csv and psycopg using airflow
|
<p>I have a task in airflow that invokes a helper function, the helper function reads data from oracle db and writes that data into an io buffer in batches. Then once the buffer is ready I read from that buffer and write the data. I think the reading step should be fine in terms of memory footprint if i reduce the the amount of lines in <code>read(INT)</code> . But the writing step I am not sure if is very memory intensive since io buffers are stored in memory correct ? Perhaps I should write to disk and then read from disk instead of keeping in the buffer ? What is the best approach in term of having a low memory profile in this scenario ?</p>
<p>from airflow.decorators import task</p>
<pre><code>def create_query_buffer(con: str, sql: str):
import csv
import io
import time
cur = con.cursor()
cur.prefetchrows = 17000
cur.arraysize = 20000
cur.execute(sql)
start_time = time.time()
# the buffer
output = io.StringIO()
# write into buffer csv style format
writer = csv.writer(output, lineterminator="\n")
col_names = [row[0] for row in cur.description]
writer.writerow(col_names)
while True:
rows = cur.fetchmany()
if not rows:
break
for row in rows:
writer.writerow(row)
print(
f"Time to write the oracle query into buffer is {round(time.time() - start_time,2)} seconds"
)
output.seek(0)
return output
@task
def stage_ods(
queries: list[str], table_names: list[str], postgres_con: str, oracle_con: str
) -> None:
import oracledb
import psycopg
from psycopg import sql
oracle_con_ods = oracledb.connect(oracle_con)
with psycopg.connect(postgres_con, autocommit=True) as conn:
cur = conn.cursor()
for query, table in zip(queries, table_names):
stream = create_query_buffer(oracle_con_ods, query)
columns = list(next(stream).rstrip().lower().split(","))
with conn.transaction():
cur.execute(
sql.SQL("TRUNCATE TABLE {} RESTART IDENTITY CASCADE").format(
sql.Identifier(table)
)
)
with cur.copy(
sql.SQL("COPY {} ({}) FROM STDIN WITH CSV").format(
sql.Identifier(table),
sql.SQL(", ").join(map(sql.Identifier, columns)),
)
) as copy:
while data := stream.read():
copy.write(data)
if __name__ == "__main__":
stage_ods()
</code></pre>
|
<python><io><airflow><psycopg3>
|
2024-12-11 01:21:59
| 0
| 2,459
|
moth
|
79,270,171
| 1,378,252
|
Unable to import module 'index': No module named 'pydantic_core._pydantic_core
|
<p>I'm using amplify to push my lambda. When I run it locally via <code>amplify mock function {functionName}</code> it runs and behaves as expected. When I deploy using <code>amplify push</code> it successfully deploys. However when I attempt to run the lambda from the AWS lambda ui I get the following error message.</p>
<pre><code>{
"errorMessage": "Unable to import module 'index': No module named 'pydantic_core._pydantic_core'",
"errorType": "Runtime.ImportModuleError",
"stackTrace": []
}
</code></pre>
<p>My development machine is a mac using the M3 chip. Python is version 3.8.19. I'm stuck and thus far nothing I've been able to find online has resolved my issue.</p>
|
<python><aws-lambda><runtime><aws-amplify><apple-m3>
|
2024-12-11 00:37:42
| 1
| 4,462
|
toddmetheny
|
79,270,111
| 21,935,028
|
Extract function arguments in PLSQL calls with Antlr4/Python4
|
<p>I am trying to extract the arguments in function/procedures calls in Oracle PLSQL using Antlr4 and Python3.</p>
<p>I trap the enterFunction_argument event (is the right term?) and I <strong>think</strong> the context could have the arguments under <code>arguments</code>, <code>argument</code>, <code>function_argument</code> or <code>function_arguments</code> depending of if the function has no, one or more than one argument. So I am trying to cater for all these bit I keep getting the error</p>
<pre><code>def enterFunction_argument(self, ctx:PlSqlParser.Function_argumentContext):
print(ctx.toStringTree(recog=parser))
print(dir(ctx))
print("")
args = None
if hasattr(ctx, 'arguments'):
args = ctx.arguments()
elif hasattr(ctx, 'argument'):
args = [ ctx.argument() ]
elif hasattr( ctx, 'function_argument'):
fa = ctx.function_argument()
if hasattr( fa, 'arguments'):
args = ctx.function_argument().arguments()
elif hasattr( fa, 'argument'):
args = [ ctx.function_argument().argument() ]
for arg in args:
argName = arg.expression().getChild(0).getText().lower() # Errors
argName = arg.getText().lower() # Errors
</code></pre>
<p>I get errors trying to get the arg name:</p>
<pre><code>argName = arg.getText()
AttributeError: 'list' object has no attribute 'getText'
</code></pre>
<pre><code>argName = arg.expression().getChild(0).getText().lower()
AttributeError: 'list' object has no attribute 'expression'
</code></pre>
<p>The tree view of the source code pertaining to this code is:</p>
<pre><code> β β β β β ββ function_argument
β β β β β β β "(" (LEFT_PAREN)
β β β β β β β argument
β β β β β β ββ expression
β β β β β β ββ logical_expression
β β β β β β ββ unary_logical_expression
β β β β β β ββ multiset_expression
β β β β β β ββ relational_expression
β β β β β β ββ compound_expression
β β β β β β ββ concatenation
β β β β β β ββ model_expression
β β β β β β ββ unary_expression
β β β β β β ββ atom
β β β β β β ββ general_element
β β β β β β ββ general_element_part
β β β β β β ββ id_expression
β β β β β β ββ regular_id
β β β β β β ββ "l_name" (REGULAR_ID)
β β β β β β β "," (COMMA)
β β β β β β β argument
β β β β β β ββ expression
β β β β β β ββ logical_expression
β β β β β β ββ unary_logical_expression
β β β β β β ββ multiset_expression
β β β β β β ββ relational_expression
β β β β β β ββ compound_expression
β β β β β β ββ concatenation
β β β β β β ββ model_expression
β β β β β β ββ unary_expression
β β β β β β ββ atom
β β β β β β ββ constant
β β β β β β ββ quoted_string
β β β β β β ββ "'Started'" (CHAR_STRING)
β β β β β ββ ")" (RIGHT_PAREN)
</code></pre>
<p>The PLSQL source code:</p>
<pre><code>CREATE OR REPLACE
PACKAGE BODY pa_monthly_sales_upd IS
PROCEDURE pr_upd_comp_monthly_sales ( p_date IN DATE DEFAULT TRUNC(SYSDATE-1) ) IS
l_name varchar2(30) := 'pr_upd_comp_monthly_sales';
l_fin_year NUMBER;
l_fin_month NUMBER;
BEGIN
pa_logging.pr_log_info( l_name, 'Started' ); --< This line is highlighted in this question
pa_calendar.pr_get_cal_month ( p_date, l_fin_year, l_fin_month );
pa_logging.pr_log_info( l_name, 'Completed' );
END pr_upd_comp_monthly_sales;
END pa_monthly_sales_upd;
/
</code></pre>
<p>The output I want is <code>l_name</code> and <code>"Started"</code>.</p>
|
<python><antlr4>
|
2024-12-10 23:45:27
| 1
| 419
|
Pro West
|
79,269,991
| 4,288,043
|
Python dateutil is being inconsistent with American vs British date formats
|
<p>I am using the dateutil library with its 'fuzzy' ability to parse dates out of strings. It seemed to be actually quite good at it but on careful inspection it was jumping back and forth between British and American date formats in reading in, namely whether the middle number is a month (British) or a day (American)</p>
<p>The following code will demo what is happening, to make the output more readable and obvious, I am converting the output from datetime to date then to isoformat dates.</p>
<pre><code>from dateutil.parser import parse
def customparse(x):
return parse(x, fuzzy=True, ignoretz=True).date().isoformat()
a = "Yorkshire Terrier 13-06-2013a.pdf"
b = "09-10-2014-spaniel.pdf"
print(a, " ", customparse(a))
print(b, " ", customparse(b))
</code></pre>
<p>output:</p>
<pre class="lang-none prettyprint-override"><code>Yorkshire Terrier 13-06-2013a.pdf 2013-06-13
09-10-2014-spaniel.pdf 2014-09-10
</code></pre>
<p>In each case the British format is intended which it correctly picks in the first one as it is obvious, no 13th month of the year, but then it decides to go with the American format for the second one.</p>
<p>Does anyone know how to tame its behaviour and make it be consistent?</p>
|
<python><python-dateutil>
|
2024-12-10 22:30:19
| 1
| 7,511
|
cardamom
|
79,269,976
| 1,473,517
|
Is it possible to store bytes in raw format with no space overhead?
|
<p>I am using heapq and would like the values to be raw bytes to be compared alphabetically. I want to use as little space as possible. Strings, even byte strings, unfortunately come with a space overhead. For example, take this MWE with fake data:</p>
<pre><code>from heapq import heappush
from pympler.asizeof import asizeof
L=[]
for _ in range(10_000):
heappush(L, ("abcd"*3).encode('latin1'))
print(f"{asizeof(L)} bytes")
</code></pre>
<p>This outputs 565176 bytes. I would really like it to take closer to 120000 bytes.</p>
<p>Is this possible?</p>
|
<python>
|
2024-12-10 22:22:56
| 2
| 21,513
|
Simd
|
79,269,946
| 986,618
|
How to create a custom, dynamic permalink/path for pages in Wagtail?
|
<p>I am trying to define custom permalinks for my Wagtail pages.</p>
<p>I use the following code in my model:</p>
<pre><code>def get_url_parts(self, request):
site_id, root_url, _ = super().get_url_parts(request)
return (
site_id,
root_url,
f"/{self.date.year}/{self.date.month}/{self.slug}",
)
</code></pre>
<p>According to the documentation, that should be enough. The code <code>{% pageurl post %}</code> returns what I am expecting (e.g. <code>/2024/10/this-is-a-post</code>) but the page itself returns a 404. What else do I need to do? I can't find anything anywhere saying I need to change anything in <code>urls.py</code>.</p>
|
<python><wagtail>
|
2024-12-10 22:07:56
| 0
| 7,340
|
MMM
|
79,269,724
| 480,118
|
Importing module error: module does not provide an export named default/{module name}
|
<p>I have a script named <code>/src/web/static/js/my_vue_widget.js</code>. It looks like this:</p>
<pre class="lang-js prettyprint-override"><code>const MyVueWidget = {
name: 'MyVueWidget',
...
};
export default MyVueWidget;
</code></pre>
<p>I have an HTML <code>/src/web/static/templates/index.html</code> which tries to import the Vue component from that JavaScript.</p>
<pre class="lang-js prettyprint-override"><code>import {MyVueWidget} from '/static/js/my_vue_widget.js';
##Also tried these varaitions with no better results:
#import MyVueWidget from '/static/js/my_vue_widget.js';
#import MyVueWidget from './static/js/my_vue_widget.js';
#import {MyVueWidget} from './static/js/my_vue_widget.js';
</code></pre>
<p>In my FastAPI app's <code>main.py</code> I have:</p>
<pre class="lang-py prettyprint-override"><code>app.mount("/static", StaticFiles(directory="static"), name="static")
app.mount("/templates", tmplts, name="templates")
</code></pre>
<p>When this page is being rendered, in the console I see the following JavaScript error:</p>
<pre><code>Uncaught SyntaxError: The requested module '/static/js/my_vue_widget.js' does not provide an export named 'MyVueWidget' (at (index):142:17)
</code></pre>
|
<javascript><python><vue.js><fastapi>
|
2024-12-10 20:33:15
| 0
| 6,184
|
mike01010
|
79,269,686
| 13,132,728
|
Alternate background colors in styled pandas df that also apply to MultiIndex in python pandas
|
<h1>SETUP</h1>
<p>I have the following <code>df</code>:</p>
<pre><code>import pandas as pd
import numpy as np
arrays = [
np.array(["fruit", "fruit", "fruit","vegetable", "vegetable", "vegetable"]),
np.array(["one", "two", "total", "one", "two", "total"]),
]
df = pd.DataFrame(np.random.randn(6, 4), index=arrays)
df.index.set_names(['item','count'],inplace=True)
def style_total(s):
m = s.index.get_level_values('count') == 'total'
return np.where(m, 'font-weight: bold; background-color: #D2D2D2', None)
def style_total_index(s):
return np.where(s == 'total', 'font-weight: bold; background-color: #D2D2D2','')
(df
.style
.apply_index(style_total_index)
.apply(style_total)
)
</code></pre>
<p><a href="https://i.sstatic.net/fjfsxY6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fjfsxY6t.png" alt="setup_example" /></a></p>
<h1>WHAT I WANT TO DO/DESIRED OUTPUT</h1>
<p>I would like to apply alternating background colors to each row (as well as the MultiIndex) while still keeping the separately colored and formatted <code>total</code> row. Here is a visual example of what I am trying to accomplish:</p>
<p><a href="https://i.sstatic.net/AmNqqj8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AmNqqj8J.png" alt="desired_output_example" /></a></p>
<p>As you can see, all rows as well as the MultiIndex are alternating colors with the <code>total</code> keeping its own custom formatting.</p>
<h1>WHAT I HAVE TRIED</h1>
<p>I have tried a whole bunch of things. I came across <a href="https://stackoverflow.com/questions/75013668/alternate-row-colour-for-dataframe">this question</a> and <a href="https://stackoverflow.com/questions/42576491/python3-pandas-styles-change-alternate-row-color#61009688">this question</a> that both use <code>set_table_styles()</code>, however, <a href="https://github.com/pandas-dev/pandas/issues/42276" rel="nofollow noreferrer">this issue</a> on the pandas Github says that <em>"Styler.set_table_styles is not exported to excel. This will not be change..."</em>. So, <code>set_table_styles()</code> is not an option here. How can I go about achieving the desired output?</p>
|
<python><pandas><dataframe><pandas-styles>
|
2024-12-10 20:19:41
| 1
| 1,645
|
bismo
|
79,269,677
| 13,328,553
|
KMS with encryption SDK - how to do envelope encryption?
|
<p>I am currently using the aws encryption sdk to encrypt and decrypt some of my data (encrypted at rest).</p>
<p>However, when trying to decrypt a lot of the data at once, it is very slow. On inspection, it seems that the SDK is making a HTTP call for each piece of data!</p>
<p>I found a good post on this sub that suggests that there are 2 ways to do this. The latter being envelope encryption, which seems way better for my use case.</p>
<p>However, I cannot find a clear example of how to setup this envelope encryption with the SDK. Ideally I want to just have KMS manage the key that is used to encrypt the data key. Therefore it would only be 1 HTTP call, with the encryption / decryption of the data being handled locally.</p>
<p>Is there a way to do this with the SDK? Or do I need to manually manage this?</p>
<p>here is my code:</p>
<pre class="lang-py prettyprint-override"><code>
import aws_encryption_sdk
import botocore
from aws_encryption_sdk import CachingCryptoMaterialsManager, CommitmentPolicy
from flask import current_app
def get_master_key_provider():
botocore_session = botocore.session.get_session()
key_arn = current_app.config["KMS_KEY"]
kms_kwargs = dict(key_ids=[key_arn])
if botocore_session is not None:
kms_kwargs["botocore_session"] = botocore_session
master_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
**kms_kwargs
)
return master_key_provider
def generate_cache(
MAX_CACHE_SIZE=10, MAX_ENTRY_MESSAGES=10, MAX_ENTRY_AGE_SECONDS=60.0
):
key_provider = get_master_key_provider()
cache = aws_encryption_sdk.LocalCryptoMaterialsCache(MAX_CACHE_SIZE)
caching_cmm = CachingCryptoMaterialsManager(
master_key_provider=key_provider,
cache=cache,
max_age=MAX_ENTRY_AGE_SECONDS,
max_messages_encrypted=MAX_ENTRY_MESSAGES,
)
return caching_cmm
class DecryptionSession:
def __init__(
self,
MAX_CACHE_SIZE=10,
MAX_ENTRY_MESSAGES=10,
MAX_ENTRY_AGE_SECONDS=60.0,
):
self._caching_cmm = generate_cache(
MAX_CACHE_SIZE, MAX_ENTRY_MESSAGES, MAX_ENTRY_AGE_SECONDS
)
self._client = aws_encryption_sdk.EncryptionSDKClient(
commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT,
)
def decrypt_text(self, ciphertext: str):
"""Decrypts an encrypted piece of text and returns the original text"""
ciphertext_bytes = bytes.fromhex(ciphertext)
decrypted_bytes, _ = self._client.decrypt(
source=ciphertext_bytes, materials_manager=self._caching_cmm
)
decrypted_text = decrypted_bytes.decode()
return decrypted_text
</code></pre>
<p>I am using the Python SDK if that makes any difference.</p>
|
<python><amazon-web-services><encryption><amazon-kms>
|
2024-12-10 20:16:34
| 1
| 464
|
SoftwareThings
|
79,269,600
| 13,634,560
|
unable to access vaex dataframe
|
<p>I am using Vaex for the first time. With <code>vaex.example()</code>, I can save the dataframelocal as <code>df</code>. However, with real world data, this doesn't seem possible. See screenshot below.</p>
<p><a href="https://i.sstatic.net/ENPyrpZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ENPyrpZP.png" alt="vaex df saved locally" /></a></p>
<p>I'm unable to do an MRE as it's dependent on a large-scale dataframe, but has anyone seen this before? Am I misunderstanding how vaex is meant to be used?</p>
|
<python><vaex>
|
2024-12-10 19:50:49
| 0
| 341
|
plotmaster473
|
79,269,542
| 3,765,883
|
Python/VScode 'continue' causes rest of loop code to be grayed out
|
<p>I have the following Python code (in VScode IDE):</p>
<pre><code>def parse_seeyou_waypoints(lines, bounds=None):
waypoint_list = WaypointList()
cherrypy.log('in parse_seeyou_waypoints function:')
#gfp 241210: modified to wait for header line before processing
#gfp 241210: added 'ISO-8859-2' decoding for correct cherrypy logging display
#gfp 241208 added to print out all lines in selected .CUP file
# wpnum = 0
# for byteline in lines:
# wpnum = wpnum + 1
# line = byteline.decode('ISO-8859-2')
header = 'name,code,country,lat,lon,elev,style,rwdir,rwlen,freq,desc'
wpnum = 0
for byteline in lines:
wpnum = wpnum + 1
line = byteline.decode('ISO-8859-2') #gfp 241210: added 'ISO-8859-2' decoding for correct cherrypy logging display
line = line.strip()
# cherrypy.log('in for loop: wpnum = %s line = %s' %(wpnum, line))
cherrypy.log(f'for loop row {wpnum}: {line}')
if header != line: #gfp 241210: modified to skip blank lines before header line
continue
else:
cherrypy.log(f'header line found at row {wpnum}: {line}')
continue #skip to next line (first waypoint line)
#check for other possible .cup sections?
if line == "" or line.startswith("*"):
continue
if line == "-----Related Tasks-----":
cherrypy.log('In -----Related Tasks----: line = %s' % line)
break
cherrypy.log('in for loop before line = __CSVLine(line): wpnum = %s' %wpnum)
fields = []
line = __CSVLine(line)
cherrypy.log('in for loop after line = __CSVLine(line): wpnum = %s' %wpnum)
while line.has_next():
fields.append(next(line))
cherrypy.log('in while line.has_next():line = %s' %line)
#display fields for this line
cherrypy.log('extracted fields for line = %s' %wpnum)
idx = 0
for field in fields:
cherrypy.log(' field[%s]: %s' % (wpnum, field))
</code></pre>
<p>As it stands above, all code below</p>
<pre><code>else:
cherrypy.log(f'header line found at row {wpnum}: {line}')
continue #skip to next line (first waypoint line)
</code></pre>
<p>Is grayed out (i.e. VSCode says it is unreachable), and in AFAICT doesn't execute at all. However, if I comment out the 'continue' statement in the 'else:' block, then the rest of the function is 'reachable'.</p>
<p>What am I doing wrong here?</p>
|
<python><visual-studio-code>
|
2024-12-10 19:34:46
| 1
| 327
|
user3765883
|
79,269,489
| 13,163,640
|
Celery infinite retry pattern issue
|
<p>I am using celery with AWS SQS for async tasks.</p>
<pre><code>@app.task(
autoretry_for=(Exception,),
max_retries=5,
retry_backoff=True,
retry_jitter=False,
acks_late=True,
)
@onfailure_reject(non_traced_exceptions=NON_TRACED_EXCEPTIONS)
def send_order_update_event_task(order_id, data):
.........
</code></pre>
<p>But the retry pattern is getting very much messed up when I use an <strong>integer</strong> value for the <strong>retry_backoff</strong> arg. No of tasks spawning up are getting out of control.</p>
<p>logs:</p>
<pre><code> 2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [53285c923f-79232a3856] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [1052f09663-c19b42589a] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [dd021828dd-4f6b8ae6f8] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [116bef9273-e4dbfb526b] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [913697ae7e-d4f65d45a5] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [d99e889882-a76718b549] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [d99e889882-30bac3e515] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [d7f01e5b4f-edfa22355f] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [8ba15966ae-2266247e56] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [738688f34d-34067ca58b] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [c790586783-b363d38520] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [6231986f4c-7696b7cf47] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
2024-12-10 05:16:10
ERROR [1b810665-c0b1-4527-8cd9-c142f67d6605] [e020ded4ca-f11c933d87] tasks.order_request_task - [ send_order_update_event_task] Exception for order: 700711926: Order absent 700711926, retry_count: 10
</code></pre>
<p>I am printing the retry count for each of the retries but there seems to be multiple tasks with same retry count, for example there are 20 retries for retry count 1, 40 for retry count 2 and so on. I am not sure why this is happening.
One specific queue(celery-requests-primary) is being used for performing these tasks and all these tasks are running in one deployment called <strong>celery-requests-primary</strong> which has multiple <strong>pods</strong>.
What might be causing this? Is any other information needed for this to be debugged</p>
|
<python><django><celery><amazon-sqs>
|
2024-12-10 19:15:07
| 1
| 567
|
Dev
|
79,269,336
| 6,089,311
|
How to use one field expression for multiple columns in polars
|
<p>I have some data:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import pandas as pd
df = pl.DataFrame({
"dt":pd.date_range("2024-12-01", periods=4, freq="D"),
"A": {"value":[10, 10, 20, 30], "multiple":[1,2,0,0]},
"B": {"value":[1, 2, 3, 4], "multiple":[0,5,5,0]},
})
</code></pre>
<p>I'd like to use:</p>
<pre class="lang-py prettyprint-override"><code>result = df.with_columns(
pl.exclude("dt").struct.with_fields(
result=pl.field("value") * pl.field("multiple"),
).struct.with_fields(
result2= pl.field("result") / 2
),
)
print(
result.select(
pl.exclude("dt").struct.unnest().name.prefix("<dont know how to determine col name>"),
)
)
</code></pre>
<p>But it returns a different schema than I expected. I thought that for each column (exclude "dt") it would add fields for corresponding column.</p>
<p>I'd like to obtain this output without repetition in code (and without python list comprehension like <code>*[pl.col(c).struct.with_fields(...) for c in wanted_cols]</code>):</p>
<pre class="lang-py prettyprint-override"><code>result = df.with_columns(
pl.col("A").struct.with_fields(
result=pl.field("value") * pl.field("multiple"),
).struct.with_fields(
result2= pl.field("result") / 2
),
pl.col("B").struct.with_fields(
result=pl.field("value") * pl.field("multiple"),
).struct.with_fields(
result2= pl.field("result") / 2
)
)
print(
result.select(
pl.col("A").struct.unnest().name.prefix("A_"),
pl.col("B").struct.unnest().name.prefix("B_"),
)
)
</code></pre>
<p>Output is:</p>
<pre class="lang-py prettyprint-override"><code>shape: (4, 8)
βββββββββββ¬βββββββββββββ¬βββββββββββ¬ββββββββββββ¬ββββββββββ¬βββββββββββββ¬βββββββββββ¬ββββββββββββ
β A_value β A_multiple β A_result β A_result2 β B_value β B_multiple β B_result β B_result2 β
β --- β --- β --- β --- β --- β --- β --- β --- β
β i64 β i64 β i64 β f64 β i64 β i64 β i64 β f64 β
βββββββββββͺβββββββββββββͺβββββββββββͺββββββββββββͺββββββββββͺβββββββββββββͺβββββββββββͺββββββββββββ‘
β 10 β 1 β 10 β 5.0 β 1 β 0 β 0 β 0.0 β
β 10 β 2 β 20 β 10.0 β 2 β 5 β 10 β 5.0 β
β 20 β 0 β 0 β 0.0 β 3 β 5 β 15 β 7.5 β
β 30 β 0 β 0 β 0.0 β 4 β 0 β 0 β 0.0 β
βββββββββββ΄βββββββββββββ΄βββββββββββ΄ββββββββββββ΄ββββββββββ΄βββββββββββββ΄βββββββββββ΄ββββββββββββ
</code></pre>
<p>Why the first implementation returns that wrong structs in schema than second implementation?</p>
<p>I've also tried flat df (without structs), cols like <code>A_value, A_multiple, B_value, B_multiple</code> and:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
(pl.selectors.ends_with("_value") * pl.selectors.ends_with("_multiple")).name.map(lambda c: c.split("_")[0] + "_result")
)
</code></pre>
<p>But expressions with different regexes are not implemented yet. <a href="https://github.com/pola-rs/polars/issues/8282" rel="nofollow noreferrer">Github polars issue 8282</a></p>
|
<python><dataframe><python-polars>
|
2024-12-10 18:14:35
| 0
| 586
|
Jan
|
79,269,275
| 1,938,552
|
Sub-attribute annotation syntax
|
<p>It is possible to type the following code in Python3.12:</p>
<pre><code>class B:
pass
b = B()
class A:
a: "x"
b.c: "y"
</code></pre>
<p>Without <code>b</code> declaration, this code throws NameError. After that, <code>A.__annotations__</code> shows: <code>{'a': 'x'}</code>. Annotation dicts of <code>b</code> and <code>B</code> are empty.</p>
<p>Does this mean anything? Is it just some future syntax with no effects or does the <code>b.c</code> annotation end up somewhere?</p>
|
<python><python-typing>
|
2024-12-10 17:57:58
| 1
| 1,059
|
haael
|
79,269,259
| 11,091,148
|
Azure OpenTelemetry Exporter duplicates logs
|
<p>I have a simple python script to export logs into an app insights trace table by following this <a href="https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/samples/logging/basic.py" rel="nofollow noreferrer">example</a>, however, the log entries get duplicated. What could be the reason?</p>
<p><a href="https://i.sstatic.net/odcrY4A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/odcrY4A4.png" alt="enter image description here" /></a></p>
<pre><code># example.py
import logging
if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
from azure.monitor.opentelemetry import configure_azure_monitor
configure_azure_monitor(logger_name=__name__)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info(f"Logging configured.")
</code></pre>
<p>I also already tried to disable propagation on the Azure/Otel libs, but nothing worked</p>
<pre><code>_azure_logger = logging.getLogger("azure")
_azure_logger.setLevel(logging.WARNING)
_azure_logger.propagate = False
_otel_logger = logging.getLogger("opentelemetry")
_otel_logger.setLevel(logging.WARNING)
_otel_logger.propagate = False
</code></pre>
|
<python><azure><logging><open-telemetry>
|
2024-12-10 17:54:56
| 1
| 526
|
Bennimi
|
79,269,139
| 435,563
|
python SyncManager used by remote processes: how to identify shared objects
|
<p>Using a <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.managers.SyncManager" rel="nofollow noreferrer">SyncManager</a>, if a group of processes need to share more than one <code>dict()</code> (say), the typical recipe is for the starting process to create them, and pass proxies to subprocesses.</p>
<p>However, in the case of <a href="https://docs.python.org/3/library/multiprocessing.html#using-a-remote-manager" rel="nofollow noreferrer">remote managers</a>, this method of identifying the shared dictionaries isn't available. What is the best way to establish the identity of the shared objects?</p>
<p>If each subprocess calls <code>SyncManger.dict()</code> three times, are they guaranteed to get back proxies to the same three objects in the same order?</p>
<p>Is a good way to deal with this to have one "designated" dict (say, the first one called) which is a place where other objects can be registered? Is this a good use for a <code>Namespace</code> or are they just used for sharing constants?</p>
|
<python><synchronization><python-multiprocessing>
|
2024-12-10 17:19:00
| 1
| 5,661
|
shaunc
|
79,269,129
| 11,581,214
|
<textarea> tag is not rendered properly using CSS with IronPDF
|
<p>I am attempting to convert an HTML form to a fillable PDF with IronPDF (IronPdf 2024.8.1.3) in Python (3.12.6). The HTML renders appropriately in Chrome. Tags other than the <textarea> tag render appropriately in the PDF (verifying use of the CSS). IronPDF is not using the defined textarea style to override the default rendering with Helvetica and auto-sizing of the font when the amount of text exceeds the textarea size. I have specified a font-size (12) and set the overflow-y to scroll in the style. In the PDF, the font auto-sizing produces very small text and no scrolling. The PDF can be edited manually in Adobe Acrobat Pro to change the font size from Auto to a fixed size. This works, but it is not a practical solution.</p>
<p>This exercise has been attempted with the style defined internally within the HTML using the <style> tag. I have also tried an inline style within the <textarea> tag. The final attempts, as reflected in the code below, used a link to an external CSS file within the HTML, with and without setting the RenderingOptions.CustomCssUrl to point to the CSS file.</p>
<p>All of the attempted options work in Chrome but do not work for the rendered PDF. I do have an open incident with Iron Software.</p>
<p><strong>HTML in Chrome</strong></p>
<p><a href="https://i.sstatic.net/53SKseRH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53SKseRH.png" alt="HTML in Chrome" /></a></p>
<p><strong>Rendered PDF in Chrome</strong></p>
<p><a href="https://i.sstatic.net/MDfnfLpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MDfnfLpB.png" alt="Rendered PDF in Chrome" /></a></p>
<p><strong>Questions:</strong></p>
<ul>
<li>Is there something that I have missed that would allow the CSS style to override the default settings for the <textarea> tag?</li>
<li>Is there a post-processing option that would allow me to iterate over all <textarea> (Tx) fields in the rendered PDF and manually set the font size not to auto size?</li>
</ul>
<pre><code># render HTML as PDF with IronPDF
import ironpdf
ironpdf.License.LicenseKey = "license key goes here"
html = """<!DOCTYPE html>
<html lang="en">
<head>
<title>Large Text Area Test</title>
<link rel="stylesheet" type="text/css" href="/Users/username/styles/test2.css">
</head>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
<form name="testform">
<br><input type="checkbox" name="mycheckbox">
<label for="mycheckbox">Check the box.</label><br>
<br>Enter some text:
<br><textarea id="largetextarea" name="largetextarea" rows="20" cols="80">
Here is some text between textarea tags.
</textarea>
<br>
</form>
</body>
</hmtl>
"""
renderer = ironpdf.ChromePdfRenderer()
renderer.RenderingOptions.CreatePdfFormsFromHtml = True
renderer.RenderingOptions.CustomCssUrl = '/Users/username/styles/test2.css'
pdf = renderer.RenderHtmlAsPdf(html)
pdf.SaveAs('largetextareacss.pdf')
</code></pre>
<pre><code>/*
test2.css - Style sheet settings for PDF creation from HTML
*/
h1 {
color: blue;
font-family: cambria;
font-size: 16pt;
margin-top: 2em;
}
p {
color: red;
font-family: cambria;
font-size: 12pt;
margin-left: 1em;
}
body {
color: black;
font-family: cambria;
font-size: 12pt;
}
textarea {
color: blue;
font-family: cambria;
font-size: 12pt;
overflow-y: scroll;
}
</code></pre>
|
<python><html><forms><pdf><ironpdf>
|
2024-12-10 17:15:34
| 1
| 524
|
BalooRM
|
79,269,114
| 25,413,271
|
decorate function and method with same decorator
|
<p>I have a function, a class implementing the similar method and a decorator. The current decorator signature does allow it to be used for the function but doesn't work with the method as expected.</p>
<pre><code>from functools import wraps
def nice(f):
@wraps(f)
def decorator(a, b):
result = f(a, b)
return 'result is: %s' % str(result)
return decorator
@nice
def sumup(a, b):
return a + b
class Test:
def __init__(self):
pass
@nice
def sumup(self, a, b):
return a + b
print(sumup(2, 6))
cls = Test()
print(cls.sumup(4, 8))
</code></pre>
<p>Can I somehow 'overload' decorator? In <em>C++</em> I would be able to write the same decorator again with the same name but different signature and it would work. Can I use some nice solutions in this case? Or I need to just add the same decorator with different name like:</p>
<pre><code>def nice_cls(f):
@wraps(f)
def decorator(self, a, b):
result = f(self, a, b)
return 'result is: %s' % str(result)
return decorator
</code></pre>
<p>and use it for the method?</p>
|
<python><function><class><decorator>
|
2024-12-10 17:11:34
| 2
| 439
|
IzaeDA
|
79,269,012
| 13,132,728
|
How to style all cells in a row of a specific MultiIndex value in pandas
|
<h1>SETUP</h1>
<p>I have the following <code>df</code>:</p>
<pre><code>import pandas as pd
import numpy as np
arrays = [
np.array(["fruit", "fruit", "fruit","vegetable", "vegetable", "vegetable"]),
np.array(["one", "two", "total", "one", "two", "total"]),
]
df = pd.DataFrame(np.random.randn(6, 4), index=arrays)
df.index.set_names(['item','count'],inplace=True)
</code></pre>
<h1>WHAT I AM TRYING TO DO</h1>
<p>I am trying to style <code>df</code> so that each cell where <code>count == 'total'</code> is bolded.</p>
<h1>WHAT I HAVE TRIED</h1>
<p>I was able to index all rows where <code>count == 'total'</code> with the following code:</p>
<pre><code>idx = pd.IndexSlice
totals = df.loc[idx[:, 'total'],:]
</code></pre>
<p>but when I try to apply a function:</p>
<pre><code>def df_style(val):
return "font-weight: bold"
df.style.applymap(df_style,subset=totals)
</code></pre>
<p>I get the following error:</p>
<pre><code>KeyError: 0
</code></pre>
<hr />
<h3>How can I style this <code>df</code> so that all cells where <code>count == 'total'</code> are bolded?</h3>
<p><a href="https://stackoverflow.com/questions/51938245/display-dataframe-values-in-bold-font-in-one-row-only">Here is a similar question</a>, albeit with just a regular index rather than MultiIndex.</p>
|
<python><pandas><dataframe><pandas-styles>
|
2024-12-10 16:38:02
| 1
| 1,645
|
bismo
|
79,268,684
| 16,712,729
|
Folium's "FloatImage" not displayed in Vscode notebook
|
<p>I try to use FloatImage from folium.plugin to display an image on a Folium map.</p>
<p>When I print the map on a jupyter notebook in VScode, the image is replaced by an alt text.</p>
<p><img src="https://i.sstatic.net/wiaGCs4Y.png" alt="map" /></p>
<p>However, when I save the map as an html, the image is visible.</p>
<p>How to fix the behaviour on vscode ?</p>
<p>This can be reproduced with:</p>
<pre><code>import folium
from folium.plugins.float_image import FloatImage
map = folium.Map()
path = Path('images_folder/image.png')
style = {'bottom': 50, 'left': 50, 'width': -1}
img=FloatImage(path,**style)
map.add_child(img)
</code></pre>
|
<python><visual-studio-code><folium>
|
2024-12-10 14:52:21
| 0
| 499
|
NicolasPeruchot
|
79,268,560
| 6,946,110
|
Middleware for fetching the user from database for each request
|
<p>I have a (SQLAlchemy) User model as follows:</p>
<pre><code>class User(BaseModel):
id: uuid: Mapped[UUID] = mapped_column(UUID, ...)
</code></pre>
<p>I need to check if the user exists in DB for every request. I know I should define a dependency for it, but I have more than 100 tiny and big endpoints. So, it's not a good approach to add a dependency in all of them one by one.</p>
<p>So, I need to define two middlewares:</p>
<ol>
<li><p>SessionMiddleware: Generates a SQLAlchemy session for each request</p>
</li>
<li><p>AuthenticationMiddleware: Fetches the user from the DB using the session from the previous middleware (SessionMiddleware). <br></p>
</li>
</ol>
<p>I created the two above middlewares as follows:</p>
<pre><code>class SessionMiddleware(BaseHTTPMiddleware):
async def __call__(self, scope, receive, send):
if scope["type"] == "http":
# Create a new database session
request = Request(scope, receive, send)
request.state.db_session = next(get_db())
try:
# Process the request
await self.app(scope, receive, send)
finally:
# Ensure the session is closed
if hasattr(request.state.db_session, "close"):
request.state.db_session.close()
else:
await self.app(scope, receive, send)
class AuthenticationMiddleware(BaseHTTPMiddleware):
async def __call__(self, scope, receive, send):
db_session = request.state.db_session
try:
request.state.user = User.query(db_session).find_one(name=client_name)
await self.app(scope, receive, send)
except NotFoundError:
raise JSONResponse(detail=User.object_not_found_error())
</code></pre>
<p>When I work with Swagger, it works fine and returns the response or raises the error in the right situation.</p>
<p>But the issue is when I want to write tests by using Pytest. In order to isolate tests, I want to generate a database session at the beginning of each unit test, pass it to the endpoint, and then roll back it at the end of each unit test. On the other hand, I need the same database session in the test function and also in the endpoint.</p>
<p>So, I have to override the SessionMiddleware. I tried different ways but I couldn't find the solution.</p>
<p>In summary, I need to access the same database session in the test functions and in the endpoints.</p>
|
<python><pytest><fastapi><pytest-fixtures><fastapi-middleware>
|
2024-12-10 14:10:56
| 1
| 1,553
|
msln
|
79,268,478
| 561,243
|
Understanding unbound type error with mypy
|
<p>I am new to static type checking in python and honestly I believed it was much easier that what it actually is.</p>
<p>Here is an ultra simplified version of my code:</p>
<pre class="lang-py prettyprint-override"><code>
from typing import Collection, TypeVar, Generic
ItemType = TypeVar('ItemType')
class WorkerMeta(type):
pass
class Worker(metaclass=WorkerMeta):
def __init__(self) -> None:
self.item: ItemType # error: Type variable "unbound.ItemType" is unbound [valid-type]
# (Hint: Use "Generic[ItemType]" or "Protocol[ItemType]" base class
# to bind "ItemType" inside a class)
# (Hint: Use "ItemType" in function signature to bind "ItemType"
# inside a function)
def get_collection(self) -> Collection[ItemType]:
l : Collection[ItemType] = []
return l
def run(self) -> None:
items: Collection[ItemType] = self.get_collection() # error: Type variable "unbound.ItemType" is unbound [valid-type]
# (Hint: Use "Generic[ItemType]" or "Protocol[ItemType]" base class
# to bind "ItemType" inside a class)
# (Hint: Use "ItemType" in function signature to bind "ItemType"
# inside a function)
for self.item in items:
print(self.item)
class MyWorker(Worker):
def get_collection(self) -> Collection[ItemType]:
return [1,2,3] # error: List item 0 has incompatible type "int"; expected "ItemType" [list-item]
# error: List item 1 has incompatible type "int"; expected "ItemType" [list-item]
# error: List item 2 has incompatible type "int"; expected "ItemType" [list-item]
w = MyWorker()
w.run()
# output will be
# 1
# 2
# 3
</code></pre>
<p>The worker class is defining a kind of execution scheme that is then reproduced by all its subclasses.</p>
<p>In the run method there is a loop on a collection made of elements all of the same type, but that might be different in different worker subclasses. I thought this was the task for a TypeVar.</p>
<p>The code works perfectly. But mypy is complaining a lot. I put the error messages as comment in the code.</p>
<p>Can you suggest me how to fix this issue with typing keeping my code still working?</p>
|
<python><python-typing><mypy>
|
2024-12-10 13:47:32
| 1
| 367
|
toto
|
79,268,326
| 1,473,517
|
What's wrong with my code to convert a matrix to and from a byte string?
|
<p>I have this function to convert a binary 2d array to a byte array:</p>
<pre><code>def flatten_and_pad_to_multiple_of_8(binary_matrix):
# Step 1: Calculate the size of the original flattened array
rows, cols = binary_matrix.shape
current_length = rows * cols
# Step 2: Calculate the required length that is a multiple of 8
padded_length = ((current_length + 7) // 8) * 8
# Step 3: Initialize flat_bits with the required padded length
flat_bits = np.zeros(padded_length, dtype=np.uint8)
# Step 4: Fill flat_bits with values from the binary matrix
idx = 0
for i in range(rows):
for j in range(cols):
flat_bits[idx] = binary_matrix[i, j]
idx += 1
return flat_bits
def matrix_to_ascii(matrix):
flat_bits = flatten_and_pad_to_multiple_of_8(matrix)
# Convert the flattened bits into bytes
ascii_string = ""
for i in range(0, len(flat_bits), 8):
byte = 0
for j in range(8):
byte = (byte << 1) | flat_bits[i + j]
ascii_char = chr(byte)
ascii_string += ascii_char
return ascii_string
</code></pre>
<p>If</p>
<pre><code>matrix = np.array([[0, 1, 1, 1, 1],
[1, 0, 1, 1, 1],
[1, 1, 0, 1, 1],
[1, 1, 1, 0, 1],
[1, 1, 1, 1, 0]], dtype=uint8)
</code></pre>
<p>then matrix_to_ascii(matrix) is '}Γ·Γ\x00' although it is a string. I then have to do matrix_to_ascii(matrix).encode(). My problem is in converting it back to a matrix.</p>
<p>I will first convert the string to a byte array to save space. I need to save space in my code. Here is the broken code to convert it back to a matrix:</p>
<pre><code>def ascii_to_matrix(byte_array, original_shape):
"""
ascii_string must be a bytestring before it is passed in.
"""
# Initialize the binary matrix with the original shape
rows, cols = original_shape
binary_matrix = np.zeros((rows, cols), dtype=np.uint8)
# Fill the binary matrix with bits from the byte array
bit_idx = 0
for byte in byte_array:
for j in range(8):
if bit_idx < rows * cols:
binary_matrix[bit_idx // cols, bit_idx % cols] = (byte >> (7 - j)) & 1
bit_idx += 1
else:
break
return binary_matrix
</code></pre>
<p>Unfortunately, it gives the wrong output:</p>
<pre><code>ascii_to_matrix(matrix_to_ascii(matrix).encode(), (5, 5))
array([[0, 1, 1, 1, 1],
[1, 0, 1, 1, 1],
[0, 0, 0, 0, 1],
[1, 1, 0, 1, 1],
[0, 1, 1, 1, 1]], dtype=uint8)
</code></pre>
<p>What am I doing wrong?</p>
<p>(I am not using any fancier numpy functions as I will want to speed this all up with numba. In particular, I can't use packbits or tobytes as they are not supported by numba.Β I also can't use bytes or bytearray.)</p>
|
<python><numpy><numba>
|
2024-12-10 12:53:21
| 2
| 21,513
|
Simd
|
79,268,316
| 253,954
|
How do I add a computed column to an SQLModel table?
|
<p>I am using SQLModel together with FastAPI and I have the models <code>Package</code> and <code>Download</code>s, where a <code>Package</code> has multiple <code>Download</code>s which in turn have a <code>count</code> field. My goal is to add up the <code>Download.count</code>s associated to each package in a query like this:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT package.name, SUM(download.count) as download_total
FROM package
LEFT JOIN download ON (download.package_id = package.id)
GROUP BY package.name
</code></pre>
<p>and return a model instance (e. g. <code>PackagePublic</code>) including the column <code>downloads_total</code>, so I can use it as a FastAPI reponse model, about like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.get("/packages", response_model=list[PackagePublic])
def packages(session: Session=Annotated[Session, Depends(session)]):
statement = select(PackagePublic).join(Download)
return session.exec(statement).all()
</code></pre>
<p>These are my models:</p>
<pre class="lang-py prettyprint-override"><code>class PackageBase(SQLModel): ...
class Package(PackageBase, table=True):
id: int | None = Field(default=None, primary_key=True)
downloads: list["Download"] = Relationship(back_populates="package")
class PackagePublic(PackageBase):
id: int
downloads: list["Download"] = []
downloads_total: int # <- not a database column
class Download(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
package_id: int = Field(foreign_key="package.id")
package: Package = Relationship(back_populates="downloads")
downloads: int
</code></pre>
<p>I tried adjusting the query (<code>statement = select(Package, func.sum(Download.count).label("downloads_total")).join(Download).group_by(Package.id)</code>), but this way I cannot return a FastAPI response model. I also tried adding a computed field to either the <code>Package</code> or <code>PackagePublic</code> model (using <code>column_property</code>), but failed completely.</p>
<p>Iβd be happy with either adjusting the query or adding a "virtual" field (like <code>PackagePublic.downloads_total</code> in the code above) to the model.</p>
|
<python><fastapi><sqlmodel>
|
2024-12-10 12:50:32
| 0
| 8,019
|
fqxp
|
79,268,272
| 6,195,489
|
Making predictions using numpyro and MCMC
|
<p>I am following the numpyro example in celerite2 <a href="https://celerite2.readthedocs.io/en/latest/tutorials/first/" rel="nofollow noreferrer">here</a>, which has a numpyro interface.</p>
<p>Further up using emceee they make predictions <a href="https://celerite2.readthedocs.io/en/latest/tutorials/first/" rel="nofollow noreferrer">here</a></p>
<p>I have tried to make predictions using the numpyro interface, just to make sure it is behaving correctly, but am struggling.</p>
<pre><code>from jax import config
config.update("jax_enable_x64", True)
import celerite2.jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import numpyro
import numpyro.distributions as dist
from celerite2.jax import terms as jax_terms
from jax import random
from numpyro.infer import MCMC, NUTS, Predictive
np.random.seed(42)
prior_sigma = 2.0
freq = np.linspace(1.0 / 8, 1.0 / 0.3, 500)
omega = 2 * np.pi * freq
t = np.sort(
np.append(
np.random.uniform(0, 3.8, 57),
np.random.uniform(5.5, 10, 68),
)
)
yerr = np.random.uniform(0.08, 0.22, len(t))
y = 0.2 * (t - 5) + np.sin(3 * t + 0.1 * (t - 5) ** 2) + yerr * np.random.randn(len(t))
true_t = np.linspace(0, 10, 500)
true_y = 0.2 * (true_t - 5) + np.sin(3 * true_t + 0.1 * (true_t - 5) ** 2)
def numpyro_model(t, yerr, y=None):
mean = numpyro.sample("mean", dist.Normal(0.0, prior_sigma))
log_jitter = numpyro.sample("log_jitter", dist.Normal(0.0, prior_sigma))
log_sigma1 = numpyro.sample("log_sigma1", dist.Normal(0.0, prior_sigma))
log_rho1 = numpyro.sample("log_rho1", dist.Normal(0.0, prior_sigma))
log_tau = numpyro.sample("log_tau", dist.Normal(0.0, prior_sigma))
term1 = jax_terms.SHOTerm(
sigma=jnp.exp(log_sigma1), rho=jnp.exp(log_rho1), tau=jnp.exp(log_tau)
)
log_sigma2 = numpyro.sample("log_sigma2", dist.Normal(0.0, prior_sigma))
log_rho2 = numpyro.sample("log_rho2", dist.Normal(0.0, prior_sigma))
term2 = jax_terms.SHOTerm(sigma=jnp.exp(log_sigma2), rho=jnp.exp(log_rho2), Q=0.25)
kernel = term1 + term2
gp = celerite2.jax.GaussianProcess(kernel, mean=mean)
gp.compute(t, diag=yerr**2 + jnp.exp(log_jitter), check_sorted=False)
numpyro.sample("obs", gp.numpyro_dist(), obs=y)
numpyro.deterministic("psd", kernel.get_psd(omega))
nuts_kernel = NUTS(numpyro_model, dense_mass=True)
mcmc = MCMC(
nuts_kernel,
num_warmup=1000,
num_samples=1000,
num_chains=2,
progress_bar=False,
)
rng_key = random.PRNGKey(34923)
mcmc.run(rng_key, t, yerr, y=y)
posterior_samples = mcmc.get_samples()
t_pred = jnp.linspace(0, 10, 500)
predictive = Predictive(numpyro_model, posterior_samples, return_sites=["obs"])
rng_key, rng_key_pred = random.split(rng_key)
predictions = predictive(rng_key_pred, t=t_pred, yerr=jnp.mean(yerr))
predicted_means = predictions["obs"]
mean_pred = jnp.mean(predicted_means, axis=0)
lower_ci = jnp.percentile(predicted_means, 2.5, axis=0)
upper_ci = jnp.percentile(predicted_means, 97.5, axis=0)
plt.figure(figsize=(10, 6))
plt.plot(true_t, true_y, color="green", label="True Function", linewidth=2)
plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=3, label="Observed Data")
plt.plot(t_pred, mean_pred, color="blue", label="Predicted Mean", linewidth=2)
plt.fill_between(
t_pred, lower_ci, upper_ci, color="blue", alpha=0.3, label="95% Credible Interval"
)
plt.xlabel("t")
plt.ylabel("y")
plt.title("Posterior Predictions with 95% Credible Intervals")
plt.legend()
plt.grid()
plt.show()
</code></pre>
<p>gives:</p>
<p><a href="https://i.sstatic.net/pzPC27Hf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzPC27Hf.png" alt="numpyro predictions" /></a></p>
<p>while with emcee the predictions look like:</p>
<p><a href="https://i.sstatic.net/2fJNUFNM.webp" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fJNUFNM.webp" alt="emcee predictions" /></a></p>
<p>Can anyone see what I am doing wrong, or explain who to correctly make predictions in this case?</p>
<p>Thanks</p>
|
<python><numpyro>
|
2024-12-10 12:38:44
| 0
| 849
|
abinitio
|
79,268,222
| 13,840,270
|
Pyspark: Subset Array based on other column value
|
<p>I use Pyspark in Azure Databricks to transform data before sending it to a sink. In this sink any array must at most have a length of 100. In my data I have an <code>array</code> that is always length 300 an a field specifying how many values of these are relevant (<code>n_relevant</code>).</p>
<p><code>n_relevant</code> values might be:</p>
<ul>
<li>below 100 -> then I want to keep all values</li>
<li>between 100 and 300 -> then I want to subsample based on modulo</li>
<li>above 300 -> then I want to subsample modulo 3</li>
</ul>
<p>E.g.:</p>
<pre><code>array: [1,2,3,4,5,...300]
n_relevant: 4
desired outcome: [1,2,3,4]
array: [1,2,3,4,5,...300]
n_relevant: 200
desired outcome: [1,3,5,...199]
array: [1,2,3,4,5,...300]
n_relevant: 300
desired outcome: [1,4,7,...298]
array: [1,2,3,4,5,...300]
n_relevant: 800
desired outcome: [1,4,7,...298]
</code></pre>
<p>This little program reflects the desired behavior:</p>
<pre class="lang-py prettyprint-override"><code>from math import ceil
def subsample(array:list,n_relevant:int)->list:
if n_relevant<100:
return [x for i,x in enumerate(array) if i<n_relevant]
if 100<=n_relevant<300:
mod=ceil(n_relevant/100)
return [x for i,x in enumerate(array) if i%mod==0 and i<n_relevant]
else:
return [x for i,x in enumerate(array) if i%3==0]
n_relevant=<choose n>
t1=[i for i in range(300)]
subsample(t1,n_relevant)
</code></pre>
<p>What I have tried:</p>
<p><a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.transform.html" rel="nofollow noreferrer">transforms</a> to set undesired values to <code>0</code> and remove those with <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.array_remove.html" rel="nofollow noreferrer">array_remove</a> could subset with a specific modulo BUT cannot adopt to <code>n_relevant</code>. Specifically you cannot hand a parameter to the lambda function and you cannot dynamically change the function.</p>
|
<python><arrays><pyspark><azure-databricks>
|
2024-12-10 12:23:40
| 2
| 3,215
|
DuesserBaest
|
79,268,152
| 28,063,240
|
Why does BeautifulSoup output self-closing tags in HTML?
|
<p>I've tried with 3 different parsers: <code>lxml</code>, <code>html5lib</code>, <code>html.parser</code></p>
<p>All of them output invalid HTML:</p>
<pre><code>>>> BeautifulSoup('<br>', 'html.parser')
<br/>
>>> BeautifulSoup('<br>', 'lxml')
<html><body><br/></body></html>
>>> BeautifulSoup('<br>', 'html5lib')
<html><head></head><body><br/></body></html>
>>> BeautifulSoup('<br>', 'html.parser').prettify()
'<br/>\n'
</code></pre>
<p>All of them have <code>/></code> "self-closing" void tags.</p>
<p>How can I get BeautifulSoup to output HTML that has void tags without <code>/></code>?</p>
|
<python><beautifulsoup>
|
2024-12-10 11:59:11
| 2
| 404
|
Nils
|
79,268,122
| 10,153,071
|
Any faster and memory-efficient alternative of torch.autograd.functional.jacobian(model.decoder, latent_l)?
|
<p>I have a decoder <code>model.decoder</code>, which is comprised of a series of Convolutional Batchnorm and ReLU layers. I have a latent vector <code>latent_l</code>, which is a 8 dimensional latent vector, say, has the dimension (1, 8, 1, 1), where 1 is the batch size. I am doing <code>torch.autograd.functional.jacobian(model.decoder, latent_l)</code>, which is taking a huge amount of time, is there any fast approximation for this jacobian?</p>
<p>There is <code>jacrev</code>, but I am not sure if that works for this example where we pass a decoder as a whole and compute the jacobian of the decoder with respect to the latent vector.</p>
<p>When I use <code>torch.autograd.functional.jacobian(model.decoder, latent_l, vectorize=True)</code>, the memory consumption of the GPU increases drastically, leading to the crashing of the program. Is there any efficient way of doing this using Pytorch?</p>
|
<python><pytorch><automatic-differentiation>
|
2024-12-10 11:45:49
| 1
| 536
|
Jimut123
|
79,268,106
| 2,443,525
|
Can not install latest version of PIP package
|
<p>I'm trying to install the latest version of pyworkforce in my virtual environment, but for some reasons it installs an old version:</p>
<pre><code>% python3 -m venv venv
% source venv/bin/activate
% pip install --upgrade --no-cache-dir pyworkforce
Collecting pyworkforce
Downloading pyworkforce-0.5.1-py3-none-any.whl.metadata (10 kB)
Collecting numpy>=1.23.0 (from pyworkforce)
Downloading numpy-2.2.0-cp313-cp313-macosx_14_0_arm64.whl.metadata (62 kB)
INFO: pip is looking at multiple versions of pyworkforce to determine which version is compatible with other requirements. This could take a while.
Collecting pyworkforce
Downloading pyworkforce-0.5.0-py3-none-any.whl.metadata (10 kB)
Downloading pyworkforce-0.4.0-py3-none-any.whl.metadata (9.2 kB)
Downloading pyworkforce-0.3.0-py3-none-any.whl.metadata (7.3 kB)
Downloading pyworkforce-0.2.2-py3-none-any.whl.metadata (5.2 kB)
Downloading pyworkforce-0.2.1-py3-none-any.whl.metadata (4.3 kB)
Downloading pyworkforce-0.2.0-py3-none-any.whl.metadata (3.9 kB)
Downloading pyworkforce-0.1.1-py3-none-any.whl.metadata (1.4 kB)
Downloading pyworkforce-0.1.1-py3-none-any.whl (4.5 kB)
Installing collected packages: pyworkforce
Successfully installed pyworkforce-0.1.1
</code></pre>
<p>Can someone explain what happens here? I expect PIP to install version 0.5.1 (and whatever numpy version is needed for that), but it gives me 0.1.1.</p>
|
<python><pip>
|
2024-12-10 11:42:11
| 1
| 426
|
Lodewijck
|
79,267,891
| 11,561,121
|
Run GlueJobOperator with parameters based on dag run parameter
|
<p>My Airflow dags has multiple variables as run params.
I would like to launch a Glue job with specific parameters based on one of the run params:</p>
<p>when run_type=0 I would like to launch the job with 2 workers standard type.
Otherwise I would like to launch the job with 10 workers G.2X type.</p>
<p>This is what I done:</p>
<pre><code>from datetime import datetime
from airflow.decorators import dag
from airflow.models.param import Param
from airflow.models.taskinstance import TaskInstance
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
run_type = '{{ params.run_type }}'
def determine_glue_config(task_instance: TaskInstance, **kwargs) -> None:
"""Determine Glue job configuration based on the run_type."""
print(run_type)
if int(run_type) == 0:
confs = {
"NumberOfWorkers": 2,
"WorkerType": "Standard",
}
else:
confs = {
"NumberOfWorkers": 10,
"WorkerType": "G.2X",
}
task_instance.xcom_push(key='glue_confs', value=confs)
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': '${alerting_email}',
'email_on_failure': True,
'email_on_retry': False,
'retries': 1,
'executor_config': {
'KubernetesExecutor': {'service_account_name': '${airflow_worker}', 'namespace': '${namespace}'}
},
}
@dag(
dag_id=dag_id,
default_args=default_args,
description='Pipeline for Perfmarket ETL',
tags=['perfmarket'],
schedule_interval='${dag_scheduling_time}',
start_date=datetime(2024, 10, 22),
catchup=False,
render_template_as_native_obj=True,
concurrency=20,
max_active_runs=20,
params={
'run_type': Param(default=0, type='integer'), # Default to '0' (daily) if not specified,
'start_date': Param(default=datetime.today().strftime('%Y%m%d'), type='string'),
'end_date': Param(default=datetime.today().strftime('%Y%m%d'), type='string'),
},
)
def perfmarket_dag() -> None:
"""Build the performance marketing dag."""
start_task = EmptyOperator(task_id='start')
generate_args_task = PythonOperator(
task_id="determine_glue_config",
python_callable=determine_glue_config,
)
glue_bronze_task = GlueJobOperator(
task_id='submit_bronze_glue_job',
job_name=f'perfmarket-{environment}-wizaly-bronze-compute',
wait_for_completion=True,
script_location=f's3://{wizaly_project_bucket_name}/glue/bootstrap/perfmarket_bronze.py',
s3_bucket=wizaly_project_bucket_name,
stop_job_run_on_kill=True,
run_job_kwargs="{{ task_instance.xcom_pull(task_ids='determine_glue_config', key='glue_confs') }}",
script_args={
'--S3_BRONZE_BUCKET_NAME': wizaly_bucket_name,
'--S3_IDT_ACCESS_ROLE_ARN': wizaly_bucket_role_access_arn,
'--S3_MIROR_BUCKET_NAME': miror_bucket_name,
'--S3_STAGING_BUCKET_NAME': wizaly_bucket_name,
'--S3_WIZALY_BUCKET_NAME': wizaly_project_bucket_name,
'--RUN_TYPE': f'"{run_type}"',
'--START_DATE': f'"{start_date}"',
'--END_DATE': f'"{end_date}"',
},
)
determine_glue_config >> glue_bronze_task
</code></pre>
<p>This fails with</p>
<pre><code>[2024-12-10, 11:25:11 CET] {standard_task_runner.py:110} ERROR - Failed to execute job 10315 for task determine_glue_config (invalid literal for int() with base 10: '{{ params.run_type }}'; 49)
</code></pre>
|
<python><airflow>
|
2024-12-10 10:33:11
| 0
| 1,019
|
Haha
|
79,267,768
| 2,546,099
|
Change installation of pytorch from CUDA-based torch to CPU-based torch in docker-build
|
<p>I have a poetry-based project I'd like to pack into a docker image. For that, I use the following docker script:</p>
<pre><code>FROM python:3.12 AS builder
ARG CUR_GIT_COMMIT
ENV CUR_GIT_COMMIT=$CUR_GIT_COMMIT
RUN pip install poetry==1.8.4
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 libegl-dev -y
ENV POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_IN_PROJECT=1 \
POETRY_VIRTUALENVS_CREATE=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /test_project
COPY pyproject.toml poetry.lock ./
RUN touch README.md
# RUN poetry install --without dev --no-root
# RUN poetry install --no-root
RUN poetry install --without dev --no-root && rm -rf ${POETRY_CACHE_DIR}
FROM python:3.12-slim AS runtime
ARG CUR_GIT_COMMIT
ENV CUR_GIT_COMMIT=$CUR_GIT_COMMIT
ENV VIRTUAL_ENV=/test_project/.venv \
PATH="/test_project/.venv/bin:${PATH}"
WORKDIR /test_project
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 libegl-dev -y
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY test_project ./test_project
COPY run_test_project.py ./
ENTRYPOINT ["python", "-m", "run_test_project"]
</code></pre>
<p>However, as my project contains <code>pytorch</code> with CUDA, the resulting image has a size of > 8 GByte, much larger than what is supported by my repository. Therefore, I want to switch out my CUDA-based torch with a CPU-based <code>torch</code> build, but only in the docker-image. As my current implementation of <code>torch</code> in <code>pyproject.toml</code> is defined as</p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.10,<3.13"
torch = { version = "^2.5.1+cu124", source = "pytorch" }
torchaudio = { version = "^2.1.1", source = "pytorch" }
torchvision = { version = "^0.20.0+cu124", source = "pytorch" }
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu124"
priority = "explicit"
</code></pre>
<p>the idea was to replace the package names and -url with the cpu-based url before creating the docker image, using</p>
<pre><code>RUN sed -i -e "s%whl/cu124%whl/cpu%g" ./pyproject.toml
RUN sed -i -e "s%+cu124%%g" ./pyproject.toml
RUN poetry lock --no-update
</code></pre>
<p>However, this still installs all nvidia-related packages, blowing up the image unnecessarily.</p>
|
<python><docker><python-poetry>
|
2024-12-10 09:53:15
| 0
| 4,156
|
arc_lupus
|
79,267,542
| 1,867,328
|
How to get information of a function and its arguments in Python
|
<p>I ran below to get information about the list of arguments and default values of a function/method.</p>
<pre><code>import pandas as pd
import inspect
inspect.getfullargspec(pd.drop_duplicates)
</code></pre>
<p>Results:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'pandas' has no attribute 'drop_duplicates'
>>> inspect.getfullargspec(pd.drop_duplicates)
</code></pre>
<p>What is the correct way to fetch such information from any function/method?</p>
|
<python><pandas><signature>
|
2024-12-10 08:45:56
| 1
| 3,832
|
Bogaso
|
79,267,492
| 1,317,099
|
How to Redact Sensitive Data in Debug Logs from Python Libraries (httpx, httpcore, hpack)
|
<p>I am working on an application that uses the Supabase Python client, and Iβve run into an issue with sensitive data like tokens appearing in debug logs. These logs include REST API calls that expose sensitive query parameters and headers, which I'd like to redact.</p>
<p>For example, here are some log snippets:</p>
<pre><code>[2024-12-09 14:32:00] [DEBUG] [hpack.hpack] Decoded 68, consumed 1 bytes
[2024-12-09 14:32:00] [DEBUG] [hpack.hpack] Decoded (b'content-location', b'/sensitive_table?sensitive_token=eq.abc-1234-567899888-23333-33333-333333-333333'), total consumed 70 bytes, indexed True
[2024-12-09 14:32:00] [DEBUG] [httpcore.http2] receive_response_headers.complete return_value=(200 (b'content-location', b'/sensitive_table?sensitive_token=eq.abc-1234-567899888-23333-33333-333333-333333'))
</code></pre>
<p>These logs contain sensitive information like the <code>sensitive_token</code> in query parameters and other sensitive details in the request paths. I need to ensure that these values are redacted in logs.</p>
<hr />
<h3>What I Tried:</h3>
<p>I set up a <code>logging.Filter</code> in Python to redact sensitive data from logs. Hereβs an example of the filter I implemented:</p>
<pre class="lang-py prettyprint-override"><code>class SensitiveDataFilter(logging.Filter):
def filter(self, record: logging.LogRecord) -> bool:
# Redacts sensitive data
record.msg = re.sub(r"abc-[0-9a-f\-]+", "[REDACTED-TOKEN]", record.msg)
return True
# Applying the filter
for logger_name in ["httpx", "httpcore", "hpack", "supabase"]:
logging.getLogger(logger_name).addFilter(SensitiveDataFilter())
</code></pre>
<p>Unfortunately, even with this filter, sensitive data still appears in logs, especially in debug output from libraries like <code>httpx</code>, <code>httpcore</code>, and <code>hpack</code>. It seems these libraries generate logs in a way that bypasses the filter or logs data outside the <code>msg</code> attribute of <code>LogRecord</code>.</p>
<hr />
<p>Is there a recommended way to redact sensitive data from debug logs, especially for libraries like <code>httpx</code>, <code>httpcore</code>, and <code>hpack</code>?
Are there specific hooks or middleware options for libraries like <code>httpx</code> to intercept and redact sensitive data?</p>
<p>Thanks.</p>
|
<python><supabase>
|
2024-12-10 08:26:07
| 0
| 1,345
|
Ganesh Rathinavel
|
79,267,490
| 1,334,657
|
How to add extra attributes to PyTorch labels?
|
<p>I want to train NNs for classification with PyTorch and let's say that I have data similar to here. <a href="https://i.sstatic.net/MBHxXyOp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBHxXyOp.png" alt="enter image description here" /></a></p>
<p>HW1, HW2 midterms, final, merits are features for training. Results are the actual class label which I want to predict. After that, I want to calculate pass/fail accuracy grouped by major or mapped back to ID.</p>
<p>So, how can I add those extra attributes and hide them (along with labels) for later use?</p>
<p>Maybe it doesn't have to be done this way, but how can I map back to ID and major for further calculation?</p>
<p>This is how I'd read my data (not read directly from folders assigned labels)</p>
<pre><code>import torch
import numpy as np
from torch.utils.data import TensorDataset, DataLoader
my_x = [np.array([[1.0,2],[3,4]]), np.array([[5.,6],[7,8]]), np.array([[9.,2],[0,7]])] # a list of numpy arrays
my_y = [np.array([4.]), np.array([2.]), np.array([6.])] # another list of numpy arrays (targets)
tensor_x = torch.Tensor(my_x) # transform to torch tensor
tensor_y = torch.Tensor(my_y)
my_dataset = TensorDataset(tensor_x,tensor_y) # create your datset
my_dataloader = DataLoader(my_dataset) # create your dataloader
</code></pre>
<p>Thx.</p>
|
<python><pandas><pytorch>
|
2024-12-10 08:25:39
| 0
| 3,038
|
bensw
|
79,267,176
| 219,153
|
How to enable automatic color cycling for Arc patches in matplotlib?
|
<p>In this Python 3.12 script:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Arc
fig, ax = plt.subplots()
for a1 in range(0, 360, 20):
ax.add_patch(Arc((0, 0), 10, 10, theta1=a1, theta2=a1+20))
ax.set_aspect('equal')
ax.autoscale_view()
plt.show()
</code></pre>
<p>I would like to see each <code>Arc</code> having a different color, sequentially provided by the default color cycler. How to do it?</p>
<hr />
<p>Preferably, I would like to have implicit color cycling, i.e. without specifying <code>color=</code> parameter.</p>
|
<python><matplotlib><colors>
|
2024-12-10 06:09:27
| 2
| 8,585
|
Paul Jurczak
|
79,267,083
| 4,794
|
Summarize higher dimensions in numpy
|
<p>I have a numpy array that holds board game states for all possible moves, and I want to summarize some of those moves. I'm struggling to vectorize that code and avoid a for loop when I choose which moves I want to summarize.</p>
<p>Here's a simplified example of what I'm trying to do. I create a 3x3x3x3 array that represents the spaces that could be attacked by a queen at each board square of a 3x3 chess board. In other words, the first two dimensions are the coordinates of a queen on the board, and the last two dimensions are boolean flags of whether the queen could attack that square.</p>
<p>Then I select some squares, and count up how many of those squares could attack each square of the board. That counting step is what I'm trying to do without a Python for loop.</p>
<p>Here's the example code:</p>
<pre><code>import numpy as np
size = 3
patterns = np.zeros((size, size, size, size), dtype=bool)
for i in range(size):
for j in range(size):
patterns[i, j, i, :] = True
patterns[i, j, :, j] = True
for i2 in range(size):
shift = i2 - i
j2 = j + shift
if 0 <= j2 < size:
patterns[i, j, i2, j2] = True
j3 = j - shift
if 0 <= j3 < size:
patterns[i, j, i2, j3] = True
active_positions = np.array([[0, 1, 0],
[1, 0, 0],
[0, 0, 0]], dtype=bool)
# This is the part I want to vectorize:
counts = np.zeros((size, size))
for i in range(size):
for j in range(size):
if active_positions[i, j]:
counts += patterns[i, j]
print(counts)
</code></pre>
<p>Is there a way to do that counting without using a for loop?</p>
|
<python><numpy><vectorization>
|
2024-12-10 05:18:33
| 1
| 56,676
|
Don Kirkby
|
79,267,077
| 1,040,688
|
Python matrices in GF(2)
|
<p>I want to manipulate matrices of elements of the finite field GF(2), where every element is a 1 or a 0, in Python. I see that numpy can use dtype=bool, but it uses saturating addition, not wrapping, so it doesn't implement GF(2) where <code>1 + 1 = 0</code>. Is there a way to do this in numpy, or another package? I especially need to be able to do matrix multiplication, inversion, and solve systems of linear equations within the field.</p>
|
<python><numpy><linear-algebra>
|
2024-12-10 05:15:15
| 2
| 1,321
|
Gavin Wahl
|
79,267,051
| 8,800,836
|
Efficient solution of A @ X = B with B a triangular matrix in scipy or numpy
|
<p>Consider three square matrices <code>A</code>, <code>X</code>, and <code>B</code> that form the linear system:</p>
<pre class="lang-py prettyprint-override"><code>A @ X = B
</code></pre>
<p>to be solved for a particular solution <code>X</code> (which might not be unique).</p>
<p>I know that the matrix <code>B</code> is a triangular matrix. Is there a way to use the fact that <code>B</code> is a triangular matrix to speed up the solution for <code>X</code>?</p>
<p>I am aware that <code>scipy</code> has the function <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_triangular.html" rel="nofollow noreferrer"><code>solve_triangular</code></a> for the case where the matrix <code>A</code> is triangular. But in my case, all I know <em>a priori</em> is that <code>B</code> is triangular.</p>
|
<python><numpy><scipy><linear-algebra>
|
2024-12-10 05:00:31
| 0
| 539
|
Ben
|
79,267,004
| 698,182
|
How do you enable runtime-repack in llama cpp python?
|
<p>After updating llama-cpp-python I am getting an error when trying to run an ARM optimized GGUF model <code>TYPE_Q4_0_4_4 REMOVED, use Q4_0 with runtime repacking</code>. After looking into it, the error comes from <a href="https://github.com/ggerganov/llama.cpp/pull/9921/files" rel="nofollow noreferrer">this</a> change to llama.cpp. From what I can tell, if you set the <code>GGML_CPU_AARCH64</code> cmake flag to ON then this feature should be enabled. I tried doing this with a force reinstall <code>CMAKE_ARGS="-DGGML_CPU_AARCH64=ON" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir</code>, but it did not work, and in the output I don't think this flag was enabled.</p>
<p>When I try using a Q4_0 model the performance is far worse than the Q4_0_4_4 was, how do I enable runtime repacking in llama.cpp when using llama-cpp-python so I can leverage ARM specific model optimizations?</p>
|
<python><python-3.x><pip><llama-cpp-python><llamacpp>
|
2024-12-10 04:19:30
| 0
| 1,931
|
ekcrisp
|
79,266,819
| 13,135,901
|
Faster glossary generation
|
<p>I am trying to make a table of contents for my <code>queryset</code> in Django like this:</p>
<pre class="lang-py prettyprint-override"><code>def get_toc(self):
toc = {}
qs = self.get_queryset()
idx = set()
for q in qs:
idx.add(q.title[0])
idx = list(idx)
idx.sort()
for i in idx:
toc[i] = []
for q in qs:
if q.title[0] == i:
toc[i].append(q)
return toc
</code></pre>
<p>But it has time complexity <code>O(n^2)</code>. Is there a better way to do it?</p>
<p><strong>UPDATE</strong>
I meant glossary, not table of contents.</p>
|
<python><django>
|
2024-12-10 01:31:59
| 3
| 491
|
Viktor
|
79,266,812
| 28,063,240
|
How to split an HTML string around tags?
|
<p>How can I use Python to split an html string by a specified unpaired tag? For example</p>
<pre><code>split('hello<br >there', 'br')
</code></pre>
<p>should return <code>['hello', 'there']</code>,</p>
<pre><code>split('<div id="d71">text1<br data-i="1">text2<br>text3</div>', 'br')
</code></pre>
<p>should return <code>['<div id="d71">text1', 'text2', 'text3</div>']</code> or <code>['<div id="d71">text1</div>', '<div id="d71">text2</div>', '<div id="d71">text3</div>']</code></p>
<p>I've had a look at</p>
<pre class="lang-py prettyprint-override"><code>def get_start_stop(source, tag_name):
soup = BeautifulSoup(source, 'html.parser')
return dir(soup.find(tag_name))
</code></pre>
<p>But the things I was hopeful about, <code>sourcepos</code>, <code>string</code>, <code>strings</code>, <code>self_and_descendants</code>, <code>.nextSibling.sourcepos</code> don't have the information necessary (as far as I can tell) to get the start and end indexes of the tag in the source string.</p>
<p>I've also tried things like</p>
<pre class="lang-py prettyprint-override"><code>from lxml import html
def split(input_str, tag_name):
tree = html.fromstring(input_str)
output_list = []
for element in tree.iter():
if element.tag == tag_name:
output_list.append(element.tail)
else:
output_list.append(html.tostring(element, encoding='unicode', with_tail=False))
return output_list
</code></pre>
<p>but <code>with_tail=False</code> doesn't do what I expect</p>
|
<python><beautifulsoup><lxml>
|
2024-12-10 01:23:11
| 3
| 404
|
Nils
|
79,266,809
| 11,505,680
|
Scipy minimize with conditioning
|
<p>I have a function of 3 variables that I want to optimize. Unfortunately, the variables have different orders of magnitude, meaning that the problem is very ill-conditioned. I'm handling this by multiplying the optimization variable by a conditioning matrix. (In the example below, my conditioning matrix is diagonal, so I've implemented it as a vector. There may be applications for non-diagonal conditioning matrices.)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import minimize
def penalty(x):
# do stuff
return # result
conditioner = np.array([1000, 1, 0.1])
x0 = np.array([0.003, 1.415, 9.265]) * conditioner
optim_result = minimize(penalty, x0)
bestfit = optim_result.x / conditioner
</code></pre>
<p>Is there a way to do this that doesn't require the manual application of <code>conditioner</code> or require me to keep track of which data are conditioned (e.g., <code>bestfit</code>) and which are not (e.g., <code>optim_result</code>)?</p>
<p>I suppose I could put some of the arithmetic inside the penalty function, like this:</p>
<pre class="lang-py prettyprint-override"><code>def penalty(x):
x_cond = x * conditioner
# do stuff
return # result
</code></pre>
<p>but it only solves half of the problem.</p>
|
<python><optimization><scipy>
|
2024-12-10 01:16:37
| 1
| 645
|
Ilya
|
79,266,778
| 3,765,883
|
Python: lines =file.readlines() puts extra 'b' character in front of each line?
|
<p>I'm trying to debug a web app built by somebody else. The app is supposed to parse a text file containing lines that represent geographical waypoints in a comma-separated format. When viewed in a windows app like notepad++ or the web-based text viewer <a href="https://filehelper.com" rel="nofollow noreferrer">https://filehelper.com</a>, the lines look like they should, i.e.:</p>
<pre><code>name,code,country,lat,lon,elev,style,rwdir,rwlen,freq,desc
"Abfaltersb Chp",Abfalter,,4645.518N,01232.392E,952.0m,1,,,,
"Admont Chp",Admont C,,4734.994N,01427.156E,628.0m,1,,,,
"Admont Stift Kir",Admont S,,4734.517N,01427.700E,646.0m,1,,,,
"Admonterhaus",Admonter,,4737.917N,01429.483E,1720.0m,1,,,,
"Aflenz Kirche",Aflenz K,,4732.717N,01514.467E,772.0m,1,,,,
</code></pre>
<p>but when I use 'cherrypy.log('line%s: %s' % (wpnum, line)) in a loop to display the lines in a linux terminal window, I get:</p>
<pre><code>[09/Dec/2024:23:37:08] line1: b'name,code,country,lat,lon,elev,style,rwdir,rwlen,freq,desc\r\n'
[09/Dec/2024:23:37:08] line2: b'"Abfaltersb Chp",Abfalter,,4645.518N,01232.392E,952.0m,1,,,,\r\n'
[09/Dec/2024:23:37:08] line3: b'"Admont Chp",Admont C,,4734.994N,01427.156E,628.0m,1,,,,\r\n'
[09/Dec/2024:23:37:08] line4: b'"Admont Stift Kir",Admont S,,4734.517N,01427.700E,646.0m,1,,,,\r\n'
[09/Dec/2024:23:37:08] line5: b'"Admonterhaus",Admonter,,4737.917N,01429.483E,1720.0m,1,,,,\r\n'
[09/Dec/2024:23:37:08] line6: b'"Aflenz Kirche",Aflenz K,,4732.717N,01514.467E,772.0m,1,,,,\r\n'
[09/Dec/2024:23:37:08] line7: b'"Afritz Church",Afritz C,,4643.650N,01347.983E,710.0m,1,,,,\r\n'
[09/Dec/2024:23:37:08] line8: b'"Aich Assac Chp",Aich Ass,,4725.480N,01351.460E,736.0m,1,,,,\r\n'
</code></pre>
<p>with the "b'" added to the beginning of each line.</p>
<p>so my question is - is Python's lines = file.readlines() function actually adding the "b'" to each line, or is this some artifact of the cherrypy.log() mechanism? Or, as the 1981 Memorex cassette tape commercial said, "Is it real, or is it Memorex?"</p>
<p>TIA,</p>
<p>Frank</p>
|
<python><readlines>
|
2024-12-10 00:49:10
| 1
| 327
|
user3765883
|
79,266,767
| 9,850,548
|
How to handle heterogenous GNN?
|
<p>I have created this data:</p>
<pre><code>HeteroData(
user={ x=[100, 16] },
keyword={ x=[321, 16] },
tweet={ x=[1000, 16] },
(user, follow, user)={ edge_index=[2, 291] },
(user, tweetedby, tweet)={ edge_index=[2, 1000] },
(keyword, haskeyword, tweet)={ edge_index=[2, 3752] }
)
</code></pre>
<p>And these two varibales based on that:</p>
<pre><code>x_dict:
user: torch.Size([100, 16])
keyword: torch.Size([321, 16])
tweet: torch.Size([1000, 16])
edge_index_dict:
('user', 'follow', 'user'): torch.Size([2, 291])
('user', 'tweetedby', 'tweet'): torch.Size([2, 1000])
('keyword', 'haskeyword', 'tweet'): torch.Size([2, 3752])
</code></pre>
<p>My nodes were indexed from 0 to 99 for users, from 100 to 420 for keywords, and next for tweet nodes.</p>
<p>When I want to run this model:</p>
<pre><code>class HeteroGATBinaryClassifier(torch.nn.Module):
def __init__(self, metadata, in_channels, hidden_channels, heads=1):
super().__init__()
self.metadata = metadata # Metadata about node and edge types
# Define GNN layers for each edge type
self.conv1 = HeteroConv({
edge_type: GATConv(in_channels, hidden_channels, heads=heads, add_self_loops = False)
for edge_type in metadata[1]
}, aggr='mean') # Aggregate using mean
self.conv2 = HeteroConv({
edge_type: GATConv(hidden_channels * heads, hidden_channels, heads=heads, add_self_loops = False)
for edge_type in metadata[1]
}, aggr='mean')
# Linear layer for classification (binary output)
self.classifier = Linear(hidden_channels * heads, 1) # Single output for binary classification
def forward(self, x_dict, edge_index_dict, target_node_type):
# First GAT layer
x_dict = self.conv1(x_dict, edge_index_dict)
x_dict = {key: F.elu(x) for key, x in x_dict.items()}
# Second GAT layer
x_dict = self.conv2(x_dict, edge_index_dict)
x_dict = {key: F.elu(x) for key, x in x_dict.items()}
# Apply the classifier only on the target node type
logits = self.classifier(x_dict[target_node_type]) # Logits for target node type
return torch.sigmoid(logits) # Output probabilities for binary classification
</code></pre>
<p>But I see the following error:</p>
<blockquote>
<p>IndexError: Found indices in 'edge_index' that are larger than 999 (got 1420). Please ensure that all indices in 'edge_index' point to valid indices in the interval [0, 1000) in your node feature matrix and try again.</p>
</blockquote>
<p>Do you have any idea how to handle it?</p>
|
<python><pytorch><pytorch-geometric><gnn>
|
2024-12-10 00:42:57
| 1
| 317
|
aliiiiiiiiiiiiiiiiiiiii
|
79,266,709
| 28,063,240
|
How to access container's python packages from the host?
|
<p>I would like to have local read access to the Python packages that my docker container process installs.</p>
<p>I thought the easiest way to do this would be to "symlink" a local file to a directory on the container's directory, which I thought I could do with volumes/mounts.</p>
<p>I have tried many different methods:</p>
<pre class="lang-yaml prettyprint-override"><code>services:
django:
volumes:
- /tmp/docker-django-packages:/usr/local/lib/python3.12/site-packages
</code></pre>
<p>Problem: <code>/tmp/docker-django-packages</code> is not created if it doesn't exist, but there is no docker error, however no python packages can be resolved by the container's python process. If I manually make <code>/tmp/docker-django-packages</code> on the host, then I still get the same error.</p>
<pre class="lang-yaml prettyprint-override"><code>services:
django:
volumes:
- type: bind
source: /tmp/docker-django-packages
target: /usr/local/lib/python3.11/site-packages
bind:
create_host_path: true
</code></pre>
<p>Problem: <code>/tmp/docker-django-packages</code> is not created. If I make it manually, it is not populated. The container behaviour is not affected at all.</p>
<pre class="lang-yaml prettyprint-override"><code>services:
django:
volumes:
- docker-django-packages:/usr/local/lib/python3.11/site-packages
volumes:
docker-django-packages:
driver: local
driver_opts:
type: none
o: bind
device: "/tmp/docker-django-packages"
</code></pre>
<p>Problem: host directory not created if it doesn't exist, not populated if it already exists, and the container functions as normal</p>
<pre class="lang-yaml prettyprint-override"><code>services:
django:
volumes:
- type: volume
source: docker-django-packages
target: /usr/local/lib/python3.11/site-packages
volumes:
docker-django-packages:
driver: local
driver_opts:
type: none
o: bind
device: "/tmp/docker-django-packages"
</code></pre>
<p>Problem: host directory not created, nor populated, container yet again functions as if these lines weren't in the compose file at all</p>
<pre class="lang-yaml prettyprint-override"><code>services:
django:
volumes:
- type: bind
source: docker-django-packages
target: /usr/local/lib/python3.11/site-packages
volumes:
docker-django-packages:
driver: local
driver_opts:
type: none
o: bind
device: "/tmp/docker-django-packages"
</code></pre>
<p>Problem: nothing changes on the host machine (no new directory, no new contents if I make it manually), but the container python process can no longer find. Also seems to require making the service <code>privileged</code> for the first run. Also seems to do some kind of caching, if I remove the <code>privileged</code> permission for subsequent runs, the container starts but I just get python import errors.</p>
<hr />
<p>So what am I actually meant to do so that I can mirror the container's Python packages on the host machine?</p>
|
<python><django><docker><docker-compose>
|
2024-12-09 23:40:30
| 0
| 404
|
Nils
|
79,266,572
| 1,940,534
|
Python Selenium selecting an option in a list
|
<p>I have a SELECT list</p>
<pre><code><select size="4" name="ctl00$ContentPlaceHolder1$lstAvailableClients" id="ctl00_ContentPlaceHolder1_lstAvailableClients" style="height:150px;width:250px;">
<option value="2780">2780 - R W M (self)</option>
<option value="2714">2714 - Robert (self)</option>
</select>
</code></pre>
<p>Currently I just select the first option like this:</p>
<pre><code>el = driver.find_element('id','ctl00_ContentPlaceHolder1_lstAvailableClients')
for option in el.find_elements(By.TAG_NAME,"option"):
option.click() # select() in earlier versions of webdriver
break
</code></pre>
<p>But I can get the count using</p>
<pre><code>el.find_elements(By.TAG_NAME, "option").count
</code></pre>
<p>I could ask the user to select an option 1-2 to proceed.</p>
<p>Is there a way to get the option count and then get the <code>option.click()</code> to select that index to click?</p>
|
<python><selenium-webdriver>
|
2024-12-09 22:07:15
| 2
| 1,217
|
robm
|
79,266,557
| 2,815,937
|
python: heatmap with categorical color and continuous transparency
|
<p>I want to make a heatmap in python (seaborn, matplotlib, etc) with two dimensions of information. I have a categorical value I want to assign to color, and a continuous variable (i.e. between 0-100 or 0-1) I want to assign to transparency, so each box has its own color and transparency (or intensity).</p>
<p>for example:</p>
<pre><code>colors = pd.DataFrame([['b','g','r'],['black','orange','purple'],['r','yellow','white']])
transparency = pd.DataFrame([[0.1,0.2,0.3],[0.9,0.1,0.2],[0.1,0.6,0.3]])
</code></pre>
<p>how can I make a heatmap from this data such that the top left box is blue in color and 10% transparency (or 10% opaqueness, whichever), and so on?</p>
<p>The best idea I have so far is to turn the colors into integer values, add those to the transparency values, and then make a custom colormap where each integer has a different color, ranging from white to the color in between the integer values. That sounds complicated to make and I'm hoping there's a built-in way to do this.</p>
<p>Any ideas?</p>
|
<python><matplotlib><seaborn>
|
2024-12-09 21:58:12
| 2
| 735
|
andbeonetraveler
|
79,266,516
| 12,358,733
|
Python http.client using HTTP Proxy server going to non-HTTPS site
|
<p>I'm working to retrieve API data in an environment where an outbound HTTP/HTTPS proxy server is required. When connecting to the site using HTTPS via the proxy using <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT" rel="nofollow noreferrer">Tunneling</a>, it works fine. Here's example code:</p>
<pre><code>import http.client
PROXY = {'host': "10.20.30.40", 'port': 3128}
TARGET = {'host': "example.com", 'port': 443, 'url': "/api/v1/data"}
HEADERS = {'Host': TARGET.get('host'), 'User-agent': "Python http.client"}
TIMEOUT = 10
if port := TARGET.get('port') == 443:
conn = http.client.HTTPSConnection(PROXY.get('host'), port=PROXY.get('port'), timeout=TIMEOUT)
conn.set_tunnel(host=TARGET.get('host'), port=TARGET.get('port', 443))
else:
conn = http.client.HTTPConnection(PROXY.get('host'), port=PROXY.get('port'), timeout=TIMEOUT)
conn.request(method="GET", url=TARGET.get('url', "/"), headers=HEADERS)
response = conn.getresponse()
conn.close()
print(response.status, response.reason)
</code></pre>
<p>I also want to support plain HTTP URLs, and have tried this:</p>
<pre><code>TARGET = {'host': "example.com", 'port': 80, 'url': "/api/v1/data"}
</code></pre>
<p>The proxy replies with a 400 / Bad request error. Here's the log:</p>
<pre><code>192.168.1.100 NONE_NONE/400 3885 GET /api/v1/data - HIER_NONE/- text/html
</code></pre>
<p>A test curl to the same URL shows up in the proxy as this:</p>
<pre><code>192.168.1.100 TCP_MISS/200 932 GET http://example.com/api/v1/data - HIER_DIRECT/203.0.113.132 application/json
</code></pre>
<p>This makes some sense. When using tunneling, the web server's host is configured in <a href="https://docs.python.org/3/library/http.client.html#http.client.HTTPConnection.set_tunnel" rel="nofollow noreferrer">set_tunnel()</a>. But HTTP does not require that step. I was thinking the HTTP "host" header would take care of this, but could be mistaken. What am I missing?</p>
|
<python><proxy><http-proxy><http.client>
|
2024-12-09 21:32:48
| 0
| 931
|
John Heyer
|
79,266,479
| 2,587,904
|
Python insert image into cell
|
<p>How can I use python code to insert an image (local file) into an excel cell.</p>
<p>In the Excel UI <a href="https://support.microsoft.com/en-us/office/insert-picture-in-cell-in-excel-e9317aee-4294-49a3-875c-9dd95845bab0" rel="nofollow noreferrer">https://support.microsoft.com/en-us/office/insert-picture-in-cell-in-excel-e9317aee-4294-49a3-875c-9dd95845bab0</a> there is a insert picture to cell function.</p>
<p>However, from the API I have only been able to:</p>
<ul>
<li>add any image (but not into a cell)</li>
<li>anchor with a cell - but not inside the cell</li>
<li>visually fine-tune the image size/cell size to then overlay the image over a cell (but not into)</li>
</ul>
<p>How can I replicate the Insert a Picture in a Cell from the python API?</p>
<p>I want to write a pandas dataframe of the following structure to an excel:</p>
<pre><code>key,value,image_path
foo,bar,my_image.png
bafoo,123,awesome.png
</code></pre>
<p>however in Airtyble style (like their attachments feature) I want ti display/visualize the image attachment directly from excel.</p>
<p>This is possible from the UI - so far I could not find any python API for this.</p>
<h2>clarifications</h2>
<p>commonly something like:</p>
<pre><code>import openpyxl
wb = openpyxl.Workbook()
ws = wb.worksheets[0]
#ws.merge_cells('A1:A3')
img = openpyxl.drawing.image.Image(FILENAME)
row_number = 1
col_idx = 1
cell = ws.cell(row=row_number, column=col_idx)
ws.add_image(img)
wb.save('output.xlsx')
</code></pre>
<p>is suggested - but this is just adding the image - not adding it into/iside the cell</p>
<p>I also do not want to add an image URL - just link the file as can be accomplished from the UI</p>
<h2>edit</h2>
<p>I would not want to unzip the XML and manually (or via python) edit the XML ideally.</p>
<h3>edit2:</h3>
<p>it must work with the unzip the XML and add it manually to XML. I have been exploring with some AI tools, in fact some images are now loaded into the cells - but excel complains about unreadable data. However, when clicking ignore/delete it can load the image into the cell. I now need to figure out how to write the correct XML for XLSX by hand.</p>
<p>Hopefully though there is a better way via some API.</p>
|
<python><excel><pandas><image><airtable>
|
2024-12-09 21:16:08
| 0
| 17,894
|
Georg Heiler
|
79,266,470
| 5,289,570
|
How to use virtual environments in Jupyter Notebook in VS Code
|
<p>I use Jupyter extension (v2024.10.0), Python (v2024.20.0), and Pylance (v2024.12.1) with default interpreter path, VS Code 1.95.3 on MacOS Sonoma 14.6. I want to debug Python code interactively in jupyter notebooks using the same virtual environment I use when running modules via Terminal.</p>
<p>e.g. I have a project with these files:</p>
<ul>
<li>test.ipynb</li>
<li>script.py</li>
</ul>
<p>I run my script like this:</p>
<pre><code>$ source ~/envs/my-project-env/bin/activate
(my-project-env) $ python script.py
</code></pre>
<p>I would like to use my-project-env environment in the Jupyter notebook. I click Select Kernel top right, but it doesn't appear in the drop-down.</p>
<p><a href="https://i.sstatic.net/2fHTmTlM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fHTmTlM.png" alt="enter image description here" /></a></p>
<p>Next I try the <a href="https://stackoverflow.com/a/58134257/5289570">tip here</a> and use <code>ipython kernel install --user --name=my-project-env</code>. It creates the kernel, and I can select it from the Jupyter Kernel... list instead, but it uses the wrong Python path, pointing to my global default interpreter instead.</p>
<p>Next I try <a href="https://stackoverflow.com/a/69284289/5289570">this tip</a>, added my python interpreter by adding my executable's path to the list of Python Interpreters. This works great for creating a dedicated terminal for running the script.py. No need for manually sourcing my environment anymore:</p>
<p><a href="https://i.sstatic.net/JpehQ9b2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpehQ9b2.png" alt="enter image description here" /></a></p>
<p>But the kernel is still unavailable for selection in Jupyter Notebook "Select Kernel" list. How can I run the Jupyter Notebook and get the same code auto-completion using the same environment as my script?</p>
|
<python><visual-studio-code><jupyter-notebook><virtualenv>
|
2024-12-09 21:10:25
| 2
| 1,416
|
Wassadamo
|
79,266,454
| 5,344,240
|
Spark cache inside a function
|
<p>I have this toy example function returning a cached spark DataFrame (DF):</p>
<pre><code>def foo(df):
try:
base = complicated_query(df)
base.cache() # lazy cache
base.count() # trigger cache - wrong design???
num1 = base.withColumn('number', f.lit('1'))
num2 = base.withColumn('number', f.lit('2'))
return num1.union(num2)
finally:
None
# base.unpersist()
</code></pre>
<p>The purpose of <code>foo</code> is to simply encapsulate temporary variables (DataFrames) which I don't want to have in the outer scope. <code>base</code> is some complicated DF used twice, hence I <code>cache</code> it. My questions are around the explicit <code>count</code> call (suggested by ChatGPT). This is to trigger caching but I feel like it is a wrong design.</p>
<ul>
<li>Why do we need <code>count</code> (an action) at this point? What is the gain? The actual caching would happen anyway on the first action to the return value of <code>foo</code> if I didn't call <code>count</code>.</li>
<li>I noticed that calling <code>foo</code> twice with the same input <code>df</code> has different execution times: first is slow, which is expected since a <code>count</code> is called. But the second is almost instant. Why is that? Surely <code>base</code> is already cached and then <code>count</code> is trivial, but the <code>base</code> reference in the second run is a different reference than that in the first run. How does spark know that it can reuse a cached DF from the first run? (whose memory btw I leaked I guess since after <code>foo</code> exits I cannot <code>unpersist</code> it.)</li>
<li>Do we really have memory leak when <code>foo</code> exits? How can I <code>unpersist</code> base? Do I have to unpersist at all? I know there is <code>spark.catalog.clearCache()</code> to wipe out all cached DFs but I would like to do this explicitly for <code>base</code>. That's why there is/was a <code>finally</code> clause in the function, to prevent a leak, but that was a failed attempt as in that case I was freeing up the cache before I could even use it...</li>
</ul>
<p>Can you please help resolve these?</p>
|
<python><azure><apache-spark><pyspark><azure-databricks>
|
2024-12-09 21:03:42
| 1
| 455
|
Andras Vanyolos
|
79,266,438
| 5,970,782
|
Jupyterlab alternative for pdb/ipdb
|
<p>How can I set a breakpoint in a Python script running in a Jupyter notebook such that:</p>
<ul>
<li>When the breakpoint is hit, the variables from the script are available in the Jupyter kernel.</li>
<li>I can interact with and modify those variables in the Jupyter notebook during the pause.</li>
<li>After resuming execution, the changes I made in the notebook are reflected in the script's state.</li>
</ul>
<p>The typical <code>import ipdb; ipdb.set_trace()</code> approach doesn't allow direct synchronization of the kernel's namespace with the script. Is there an alternative that provides this functionality?</p>
|
<python><debugging><jupyter><ipdb><jupyter-kernel>
|
2024-12-09 20:57:28
| 0
| 791
|
Pavel Prochazka
|
79,266,275
| 14,700,182
|
Python: FastAPI not accepting SQLModel as a return type on routes
|
<p>I'm making an API based on the <code>FastAPI</code> docs (<a href="https://fastapi.tiangolo.com/tutorial/sql-databases/#install-sqlmodel" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/sql-databases/#install-sqlmodel</a>). I have noticed this docs are kind of deprecated due to <code>@app.on_event("startup")</code> decorator, which my interpreter told me need to be replaced with lifespan events (<a href="https://fastapi.tiangolo.com/advanced/events/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/advanced/events/</a>).</p>
<p>In the API docs, the model <code>Hero</code> is created with <code>SQLModel</code> and is stated to be a valid <code>pydantic</code> model, using it as a return type on routes:</p>
<blockquote>
<p>The Hero class is very similar to a <code>pydantic</code> model (in fact, underneath, it actually is a Pydantic model).</p>
</blockquote>
<pre><code>from typing import Annotated
from fastapi import Depends, FastAPI, HTTPException, Query
from sqlmodel import Field, Session, SQLModel, create_engine, select
class Hero(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str = Field(index=True)
age: int | None = Field(default=None, index=True)
secret_name: str
# Code below omitted π
</code></pre>
<pre><code># Code above omitted π
@app.post("/heroes/")
def create_hero(hero: Hero, session: SessionDep) -> Hero:
session.add(hero)
session.commit()
session.refresh(hero)
return hero
# Code below omitted π
</code></pre>
<p>When I try to do the same, python throws this error:</p>
<blockquote>
<p>fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <module 'models.Hero' from 'C:\...\Hero.py'> is a valid Pydantic field type. If you are using a return type annotation that is not a valid Pydantic field (e.g. Union[Response, dict, None]) you can disable generating the response model from the type annotation with the path operation decorator parameter response_model=None. Read more: <a href="https://fastapi.tiangolo.com/tutorial/response-model/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/response-model/</a></p>
</blockquote>
<p>This is my project structure:</p>
<pre><code>src
βββ app.py
βββ models
β βββ Hero.py
βββ service
βββ db_config.py
</code></pre>
<p>With this command I mount my API:</p>
<pre><code>$ uvicorn app:app --reload --host 0.0.0.0 --port 8080
</code></pre>
<p><code>app.y</code></p>
<pre><code>from fastapi import FastAPI, Query
from typing import Annotated
from fastapi.responses import JSONResponse
from service.db_config import engine, SessionDep, create_db_and_tables
from contextlib import asynccontextmanager
from sqlmodel import select
from models import Hero
@asynccontextmanager
async def lifespan(app: FastAPI):
print("Startup")
create_db_and_tables()
yield
print("Shutdown")
app = FastAPI(lifespan=lifespan)
@app.get("/")
def get_welcome_message():
return JSONResponse(
content={
"message": "Welcome to my API"
},
status_code=200
)
@app.post("/heroes")
def create_hero(hero: Hero, session: SessionDep) -> Hero:
session.add(hero)
session.commit()
session.refresh(hero)
return JSONResponse(
content=hero,
status_code=200
)
@app.get("/heroes")
def get_heroes(session: SessionDep, offset: int = 0, limit: Annotated[int, Query(le=100)] = 100) -> list[Hero]:
heroes = session.exec(select(Hero).offset(offset).limit(limit)).all()
return JSONResponse(
content=heroes,
status_code=200
)
</code></pre>
<p><code>db_config.py</code></p>
<pre><code>from typing import Annotated
from fastapi import Depends
from sqlmodel import Session, create_engine, SQLModel
def get_session():
with Session(engine) as session:
yield session
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
db_engine = "postgresql"
db_user = "postgresql"
db_user_password = "ADMIN"
db_address = "127.0.0.1"
db_name = "realm"
connect_args = {"check_same_thread": False}
db_url = f"{db_engine}://{db_user}:{db_user_password}@{db_address}/{db_name}"
engine = create_engine(url=db_url, connect_args=connect_args)
SessionDep = Annotated[Session, Depends(get_session)]
</code></pre>
<p><code>Hero.py</code></p>
<pre><code>from sqlmodel import Field, SQLModel,table
class Hero(SQLModel, table=True):
id: str = Field(primary_key=True, nullable=False)
name: str = Field(nullable=False)
age: int = Field(nullable=False)
</code></pre>
<p>How can I solve this?</p>
|
<python><fastapi><uvicorn>
|
2024-12-09 19:43:34
| 0
| 334
|
Benevos
|
79,266,143
| 141,650
|
How to transitively "import" a conftest.py file?
|
<p>I have a directory structure with 1 base class, 1 test file, and 2 <code>conftest.py</code> files, like so:</p>
<pre><code>/tests/base/BUILD
py_library(
name="some_base",
srcs=["some_base.py"]
)
/tests/base/conftest.py
import pytest
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
result = outcome.get_result()
setattr(item, "result_" + result.when, result)
/tests/base/some_base.py
class SomeBase:
...
/tests/BUILD
py_test(
name="some_test",
srcs=["some_test.py"],
deps=["//tests/base:some_base"]
)
/tests/conftest.py
# empty content
/tests/some_test.py
import tests.base.some_base
class SomeTest(some_base.BaseClass):
...
</code></pre>
<p>At runtime (<code>pytest some_test...</code>), it fails in a way that demonstrates that <code>pytest_runtest_makereport</code> was never processed by pytest. If I amend <code>/tests/conftest.py</code> like so, it works:</p>
<pre><code>/tests/conftest.py
import tests.base.conftest
pytest_runtest_makereport = conftest.pytest_runtest_makereport
</code></pre>
<p>Any chance there's a cleaner way to get pytest to process the <code>conftest.py</code> files <em>transitively depended upon by the test</em>?</p>
<p>(I've read through a bunch of related SO questions (<a href="https://stackoverflow.com/search?q=conftest.py">https://stackoverflow.com/search?q=conftest.py</a>), but haven't quite landed on this use case).</p>
|
<python><pytest>
|
2024-12-09 18:49:58
| 0
| 5,734
|
Stephen Gross
|
79,266,000
| 5,489,736
|
slow vtk rendering on ec2 machine
|
<p>I am running the following code to render a 3D mesh using vtk.</p>
<p>On my MacBook Pro M2 (32GB RAM), the code runs significantly faster compared to AWS EC2 instances, including both CPU (c7i.8xlarge) and GPU (g5.2xlarge).</p>
<p>Specifically, the call to window_to_image_filter.Update() is about 100x slower on the cloud machines. I suspect this is something to do with laking of screen.</p>
<pre><code>Class MyRenderer:
def __init__(self, mesh: trimesh.Trimesh):
self.vtkmesh = create_vtk_mesh(mesh.vertices, mesh.faces) # This returns vtkPolyData mesh
self.renderer = vtk.vtkRenderer()
self.render_window = vtk.vtkRenderWindow()
self.render_window.AddRenderer(self.renderer)
self.render_window.SetOffScreenRendering(False)
self.render_window.SetSize(128, 128)
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputData(self.vtkmesh)
actor = vtk.vtkActor()
actor.SetMapper(mapper)
self.renderer.AddActor(actor)
self.camera = vtk.vtkCamera()
self.camera.ParallelProjectionOn()
self.camera.SetParallelScale(75)
self.camera.SetClippingRange(1, 400)
self.renderer.SetActiveCamera(self.camera)
self.render_window.Render()
def render_mesh(self, camera_position: np.ndarray, camera_focal_point: np.ndarray, pts_3d=None):
self.camera.SetPosition(*camera_position)
self.camera.SetFocalPoint(*camera_focal_point)
view_matrix = self.camera.GetViewTransformMatrix()
projection_matrix = self.camera.GetProjectionTransformMatrix(1, -1, 1)
view_matrix_np = np.array([view_matrix.GetElement(i, j) for i in range(4) for j in range(4)]).reshape((4, 4))
projection_matrix_np = np.array([projection_matrix.GetElement(i, j) for i in range(4) for j in range(4)]).reshape((4, 4))
rotation_det = np.linalg.det(view_matrix_np[:3, :3])
imaging_matrix = projection_matrix_np @ view_matrix_np
sensor_normal = camera_focal_point - camera_position
sensor_normal = sensor_normal / np.linalg.norm(sensor_normal)
sensor_plane = sensor_normal[0], sensor_normal[1], sensor_normal[2], -np.dot(camera_position, sensor_normal)
self.renderer.SetActiveCamera(self.camera)
self.render_window.Render()
window_to_image_filter = vtk.vtkWindowToImageFilter()
window_to_image_filter.SetInput(self.render_window)
# Bencmark:
# On Apple MAC M2 - 0.001
# AWS EC2 c7i.8xlarge Amazon linux (cpu) - 0.130
# AWS EC2 g5.2xlarge Ubuntu (GPU) - 0.134
with timing('this is slow'):
window_to_image_filter.Update()
</code></pre>
<p>Here are the benchmark results for the line window_to_image_filter.Update():</p>
<ul>
<li>Mac M2: ~0.001 seconds</li>
<li>AWS EC2 c7i.8xlarge (CPU): ~0.130 seconds AWS</li>
<li>EC2 g5.2xlarge (GPU): ~0.134 seconds</li>
</ul>
<h5>What I've Tried</h5>
<ul>
<li>Running the code with xvfb for virtual display on EC2 (must).</li>
<li>Switching from vtk-9.4.0 to vtk-osmesa, but the performance remains the same.</li>
</ul>
<p>I suspect the issue might be related to the lack of a physical screen or hardware-specific optimizations. Is there any way to make the rendering on EC2 as fast as on my Mac, or at least 10x faster?</p>
<p>Since I call <code>render_mesh()</code> on ~500 different <code>camera_position</code> in my flow, it important for me to speed it up.</p>
<p>How can I optimize the rendering pipeline on AWS EC2 to reduce the time for window_to_image_filter.Update()?</p>
<p>Are there specific configurations or hardware requirements that could resolve this performance discrepancy?</p>
<p>Any guidance would be greatly appreciated!</p>
|
<python><vtk><xvfb><osmesa>
|
2024-12-09 17:59:07
| 0
| 398
|
Cowabunga
|
79,265,951
| 7,658,051
|
configuring karrigell logging to log uncaught exceptions
|
<p>I am managing a third-part app made with python's framework <a href="https://karrigell.sourceforge.net/en/index.html" rel="nofollow noreferrer">Karrigell</a>.</p>
<p>I have added a new logger, <code>my_logger</code>, which is declared and instantiated in a file called <code>my_logging.py</code></p>
<p>I see that the logger can properly fill the log file with messages</p>
<pre><code>my_logger.info("message from my_logger.info")
</code></pre>
<p>However, it seems it cannot log tracebacks, nor of handled nor of unhandled exceptions.</p>
<p>Here are some scenarios:</p>
<p>In a html template file of my project, <code>customer_file.hip</code>, I have the following scenarios:</p>
<h2>scenario 1 - syntax error</h2>
<p><strong>customer_file.hip</strong></p>
<pre><code>from my_logging import my_logger
import sys, traceback
...
try:
my_logger.info("my_logger.info - I am going to raise a exception inside a 'try' block.")
hello, this line triggers a handled error
except Exception:
my_logger.info("exception for file {}, name {}".format(__file__, __name__))
traceback.print_exc(file=sys.stderr)
my_logger.info("Logged the exception traceback.")
</code></pre>
<p><strong>result</strong></p>
<p>the traceback is printed in the browser as I try to access the page rendered by my template, but not in my log file.</p>
<p><a href="https://i.sstatic.net/vNkcKgo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vNkcKgo7.png" alt="enter image description here" /></a></p>
<h2>scenario 2 - unhandled exception</h2>
<p><strong>customer_file.hip</strong></p>
<pre><code>from my_logging import my_logger
import sys, traceback
...
my_logger.info("my_logger.info - I am going to raise an unhandled exception.")
new_variable = undeclared_variable
my_logger.info("this line of code is never reached.")
</code></pre>
<p><strong>result</strong></p>
<p>only <code>I am going to raise an unhandled exception</code> gets printed in my log file.</p>
<h2>scenario 3 - handled exception</h2>
<p><strong>customer_file.hip</strong></p>
<pre><code>from my_logging import my_logger
import sys, traceback
...
try:
my_logger.info("my_logger.info - I am going to raise a exception inside a 'try' block.")
new_variable = undeclared_variable
my_logger.info("this line of code is never reached.")
except Exception:
my_logger.info("exception for file {}, name {}".format(__file__, __name__))
traceback.print_exc(file=sys.stderr)
my_logger.info("Logged the exception traceback.")
</code></pre>
<p><strong>result</strong></p>
<p>only <code>I am going to raise a exception inside a 'try' block</code> gets printed in my log file.</p>
<hr />
<p>Here below is the code of <code>my_logging.py</code>, how can I change it to keep track of both handled and unhandled exceptions?</p>
<h2>my_logging.py</h2>
<pre><code>import os
import datetime
import sys
import traceback
import logging
from logging.handlers import TimedRotatingFileHandler
def setup_logger():
##############
# stdout part
##############
# log filename
# --------------
log_dir = "/home/myuser/var/log/my_project"
log_filename = "my_project.log"
log_file_path = os.path.join(log_dir, log_filename)
# log directory
# ---------------
if not os.path.exists(log_dir):
try:
os.makedirs(log_dir)
except OSError as e:
if e.errno != 17: # Errno 17 = "File exists"
raise # raise exception if the error is not "file exists"
# logger name
# --------------
logger_1 = logging.getLogger("my_project_logger")
# logger level
# --------------
logger_1.setLevel(logging.DEBUG) # logger's level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
# Formatter
# -----------
formatter_1 = logging.Formatter(
"[%(asctime)s %(name)s] %(message)s"
# %(levelname)s
)
# File handler
# --------------
# define a rotating File Handler
file_handler_1 = TimedRotatingFileHandler(
log_file_path,
when="midnight",
interval=1,
backupCount=7
)
return logger_1
# instantiate the logger
my_logger = setup_logger()
</code></pre>
<h2>what I have tried</h2>
<p>I have tried to add a global handler for uncaught expressions:</p>
<p>in <code>my_logging.py</code>, at the end, after line</p>
<pre><code>my_logger = setup_logger()
</code></pre>
<p>I have added</p>
<pre><code>def handle_exception(exc_type, exc_value, exc_traceback):
if issubclass(exc_type, KeyboardInterrupt):
sys.__excepthook__(exc_type, exc_value, exc_traceback)
return
my_logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))
sys.excepthook = handle_exception
</code></pre>
<p>but the error message does not get logged.</p>
|
<python><exception><stderr><traceback><uncaught-exception>
|
2024-12-09 17:40:59
| 0
| 4,389
|
Tms91
|
79,265,874
| 3,486,684
|
Generating a dataframe of *combinations* (not permutations)?
|
<p>Suppose I have a bag of items <code>{a, b}</code>. Then I can choose pairs out of it in a variety of ways. One way might be to pick all possible permutations: <code>[a, a], [a, b], [b, a], [b, b]</code>. But I might disallow repetition, in which case the possible permutations are: <code>[a, b], [b, a]</code>. I might go further and declare that <code>[a, b]</code> is the same as <code>[b, a]</code>, i.e. I only care about the "combination" of choices, not their permutations.</p>
<p>For more about the distinction between combination vs. permutation, see: <a href="https://en.wikipedia.org/wiki/Combination" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Combination</a></p>
<p>What are the best ways to produce a combination of choices (i.e. order of elements should not matter)? My current solutions looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
choices = pl.DataFrame(
[
pl.Series("flavor", ["x"] * 2 + ["y"] * 3),
pl.Series("choice", ["a", "b"] + ["1", "2", "3"]),
]
)
# join to produce the choices
choices.join(choices, on=["flavor"]).with_columns(
# generate a 2-element list representing the choice
sorted_choice_pair=pl.concat_list("choice", "choice_right").list.sort()
).filter(pl.col.choice.eq(pl.col.sorted_choice_pair.list.first()))
</code></pre>
<pre><code>shape: (9, 4)
ββββββββββ¬βββββββββ¬βββββββββββββββ¬βββββββββββββββββββββ
β flavor β choice β choice_right β sorted_choice_pair β
β --- β --- β --- β --- β
β str β str β str β list[str] β
ββββββββββͺβββββββββͺβββββββββββββββͺβββββββββββββββββββββ‘
β x β a β a β ["a", "a"] β
β x β a β b β ["a", "b"] β
β x β b β b β ["b", "b"] β
β y β 1 β 1 β ["1", "1"] β
β y β 1 β 2 β ["1", "2"] β
β y β 2 β 2 β ["2", "2"] β
β y β 1 β 3 β ["1", "3"] β
β y β 2 β 3 β ["2", "3"] β
β y β 3 β 3 β ["3", "3"] β
ββββββββββ΄βββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ
</code></pre>
<p>So I generate all permutations, and then filter out those that where the "left element" does not match the first element of the list.</p>
|
<python><python-polars>
|
2024-12-09 17:16:11
| 1
| 4,654
|
bzm3r
|
79,265,786
| 1,802,693
|
Using DataSource in bokeh library for color, alpha and other properties
|
<p>So far in python's bokeh library I've used a patch method to draw some objects, but the current solution runs in a for loop which is not optimal for this library:</p>
<pre class="lang-py prettyprint-override"><code>for data_point in data_points:
tf_sec = pd.Timedelta(data_point['tf']) / 2.0
dt = tf_sec + datetime.strptime(data_point['dt'], '%Y-%m-%d %H:%M:%S')
p = float(data_point['p'])
p_diff = p * 0.005
p_offset = (p + p_diff * data_point['va'])
obj = plot.patch(
x = [dt, dt - tf_sec, dt + tf_sec],
y = [p, p_offset, p_offset],
color = COLOR_MAP.get(data_point.get('type'), DEFAULT_COLOR),
fill_alpha = data_point.get('color_ratio', 1.0),
line_alpha = 1.0,
)
</code></pre>
<p>Now I want to optimize this solution and draw the objects with a <strong>DataSource</strong> using <strong>pandas</strong>.</p>
<p>The X and Y columns are working fine, they accept the value, but if I try to define the other columns (e.g.: color, fill_alpha, line_alpha etc.) they don't accept the value from the datasource, but they raise an error:</p>
<pre><code>ValueError: failed to validate Patch(id='p1179', ...).fill_color: expected either None or a value of type Color, got 'color'
</code></pre>
<p>If I understand correctly, these properties are not configured to accept this referenced column of values.</p>
<p><strong>Question</strong></p>
<ul>
<li>how could I workaround this, and use the values defined in the column of the pandas dataframe?</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>each object should have a calculated color, alpha etc. what are calculated in the pandas dataframe</li>
<li>I don't want to iterate over the rows of the pandas' dataframe and create different objects</li>
<li>I want to call this patch function once for all objects</li>
</ul>
<p>My current version is:</p>
<pre class="lang-py prettyprint-override"><code>data_points_df['tf_sec'] = data_points_df.apply(lambda row: pd.Timedelta(row['tf']) / 2.0, axis=1)
data_points_df['dt'] = data_points_df.apply(lambda row: row['dt'] + row['tf_sec'], axis=1)
data_points_df['p_offset'] = data_points_df.apply(lambda row: row['p'] + 0.005 * row['p'] * row['va'], axis=1)
data_points_df['x'] = data_points_df.apply(lambda row: [row['dt'], row['dt'] - row['tf_sec'], row['dt'] + row['tf_sec']], axis=1)
data_points_df['y'] = data_points_df.apply(lambda row: [row['p'], row['p_offset'], row['p_offset']], axis=1)
data_points_df['color'] = data_points_df.apply(lambda row: COLOR_MAP.get(row['type']) if row['type'] in COLOR_MAP else DEFAULT_COLOR, axis=1)
data_points_df['fill_alpha'] = data_points_df.apply(lambda row: row['color_ratio'] if 'color_ratio' in data_points_df.columns else 1.0, axis=1)
data_points_df['line_alpha'] = 1.0
obj = plot.patch(
source = ColumnDataSource(data_points_df),
x = 'x',
y = 'y',
color = ???,
fill_alpha = ???,
line_alpha = ???,
)
</code></pre>
|
<python><pandas><bokeh>
|
2024-12-09 16:46:40
| 0
| 1,729
|
elaspog
|
79,265,466
| 9,112,151
|
Redis: Cannot release a lock that's no longer owned
|
<p>I'm trying to lock certain key.</p>
<p>I have <code>httpx.Auth</code> subclass:</p>
<p>from redis.asyncio import Redis
from httpx import AsyncClient, Auth</p>
<p>def get_redis_client():
return Redis(host=settings.redis_host, port=settings.redis_port)</p>
<p>class BearerAuth(Auth):
esm_url_get_token: str = urljoin(settings.esm_host, settings.esm_path_url_get_token)
token_redis_key = "esm:token"
token_expires_redis_key = "esm:token:expires"</p>
<pre><code>@log_esm()
async def _get_token(self) -> Response:
redis_client = get_redis_client()
async with redis_client.lock(self.token_redis_key):
if expires := await redis_client.get(self.token_expires_redis_key):
expires = int(expires)
if time.time() < expires:
return await redis_client.get(self.token_redis_key)
timeout = Timeout(settings.esm_timeout, read=None)
client = AsyncClient(verify=settings.esm_verify, timeout=timeout)
data = {"login": settings.esm_login, "password": settings.esm_password}
response = await client.post(url=self.esm_url_get_token, data=data)
token = response.json()["token"]
await redis_client.set(self.token_redis_key, token)
await redis_client.set(self.token_expires_redis_key, self._get_expires())
await asyncio.sleep(settings.esm_delay_between_get_token_and_request)
return token
def _get_expires(self):
return int(time.time()) + 55 * 60
async def async_auth_flow(self, request: HTTPXRequest) -> typing.AsyncGenerator[HTTPXRequest, Response]: # NOSONAR
response = await self._get_token()
token = response.json()["token"]
request.headers["Authorization"] = f"Bearer {token}"
yield request
</code></pre>
<p>The main goal is to get token. If token expires refresh it. With code above I got an error:</p>
<blockquote>
<p>Traceback (most recent call last):\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py",
line 435, in run_asgi\n result = await app( # type:
ignore[func-returns-value]\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py",
line 78, in <strong>call</strong>\n return await self.app(scope, receive,
send)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/uvicorn/middleware/message_logger.py",
line 86, in <strong>call</strong>\n raise exc from None\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/uvicorn/middleware/message_logger.py",
line 82, in <strong>call</strong>\n await self.app(scope, inner_receive,
inner_send)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/fastapi/applications.py",
line 289, in <strong>call</strong>\n await super().<strong>call</strong>(scope, receive,
send)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/applications.py",
line 122, in <strong>call</strong>\n await self.middleware_stack(scope, receive,
send)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in <strong>call</strong>\n raise exc\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in <strong>call</strong>\n await self.app(scope, receive, _send)\n
File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/prometheus_fastapi_instrumentator/middleware.py",
line 169, in <strong>call</strong>\n raise exc\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/prometheus_fastapi_instrumentator/middleware.py",
line 167, in <strong>call</strong>\n await self.app(scope, receive,
send_wrapper)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/base.py",
line 108, in <strong>call</strong>\n response = await
self.dispatch_func(request, call_next)\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/partners_logging/middlewares.py",
line 24, in logger_middleware\n return await call_next(request)\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/base.py",
line 84, in call_next\n raise app_exc\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/base.py",
line 70, in coro\n await self.app(scope, receive_or_disconnect,
send_no_error)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py",
line 79, in <strong>call</strong>\n raise exc\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py",
line 68, in <strong>call</strong>\n await self.app(scope, receive, sender)\n
File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py",
line 20, in <strong>call</strong>\n raise e\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py",
line 17, in <strong>call</strong>\n await self.app(scope, receive, send)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/routing.py",
line 718, in <strong>call</strong>\n await route.handle(scope, receive, send)\n
File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/routing.py",
line 276, in handle\n await self.app(scope, receive, send)\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/starlette/routing.py",
line 66, in app\n response = await func(request)\n<br />
^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/fastapi/routing.py",
line 273, in app\n raw_response = await run_endpoint_function(\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/fastapi/routing.py",
line 190, in run_endpoint_function\n return await
dependant.call(**values)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n
File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/web/api/test.py",
line 47, in check_lock\n await client.create_request(request,
user)\n File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/services/integrations/client.py",
line 151, in create_request\n response = await
self.raw_request(action="post", url=self.esm_url_post, data=data)\n
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n
File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/services/integrations/client.py",
line 44, in wrapped\n result: Response = await func(self, *args,
**kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/services/integrations/client.py",
line 141, in raw_request\n response = await getattr(client,
action)(url=url, json=data)\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/httpx/_client.py",
line 1848, in post\n return await self.request(\n<br />
^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/httpx/_client.py",
line 1533, in request\n return await self.send(request, auth=auth,
follow_redirects=follow_redirects)\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n
File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/httpx/_client.py",
line 1620, in send\n response = await self._send_handling_auth(\n<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/httpx/_client.py",
line 1645, in _send_handling_auth\n request = await
auth_flow.<strong>anext</strong>()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n
File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/services/integrations/client.py",
line 110, in async_auth_flow\n response = await self._get_token()\n
^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/services/integrations/client.py",
line 44, in wrapped\n result: Response = await func(self, *args,
**kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
"/Users/albertaleksandrov/PycharmProjects/kz-esm/app/services/integrations/client.py",
line 88, in _get_token\n async with
redis_client.lock(self.token_redis_key):\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/redis/asyncio/lock.py", line 165, in <strong>aexit</strong>\n await self.release()\n File
"/Users/albertaleksandrov/Library/Caches/pypoetry/virtualenvs/esm-Xqw79VLb-py3.11/lib/python3.11/site-packages/redis/asyncio/lock.py", line 262, in do_release\n raise LockNotOwnedError("Cannot release
a lock that's no longer owned")\nredis.exceptions.LockNotOwnedError:
Cannot release a lock that's no longer owned</p>
</blockquote>
<p>How to fix it?</p>
|
<python><redis>
|
2024-12-09 15:04:36
| 0
| 1,019
|
ΠΠ»ΡΠ±Π΅ΡΡ ΠΠ»Π΅ΠΊΡΠ°Π½Π΄ΡΠΎΠ²
|
79,265,302
| 25,413,271
|
Sum up column values by special logic
|
<p>Say we have an array like:</p>
<pre><code>a = np.array([
[k11, k12, k13, k14, k15, k16, k17, k18],
[k21, k22, k23, k24, k25, k26, k27, k28],
[k31, k32, k33, k34, k35, k36, k37, k38],
[k41, k42, k43, k44, k45, k46, k47, k48]
])
const = C
</code></pre>
<p>I need to create a vector from this array like this (runge kutta 4):</p>
<pre><code>result = np.array([
const * (k11 + 2*k21 + 2*k31 + k41),
const * (k12 + 2*k22 + 2*k32 + k42),
const * (k13 + 2*k23 + 2*k33 + k43),
....
const * (k18 + 2*k28 + 2*k38 + k48)
])
</code></pre>
<p>I am able to do this in cycle, but I am pretty sure numpy methods allow this in vectorised form.</p>
|
<python><numpy>
|
2024-12-09 14:20:56
| 3
| 439
|
IzaeDA
|
79,265,067
| 12,890,458
|
Why does the output change after putting code in a function in a simpy process?
|
<p>In a simpy process I handle a simpy interrupt. I do this in two places and so I want to put the handling code in a function. Before putting it in a function all functions well, after putting it in a function it yields different results.</p>
<p>Below the written out code</p>
<pre><code>def offer(env, simpy_car, simpy_tankcar, route, timestep):
action = None
t_prev = -1
for lat in route:
try:
# fuel consumption during timestep unequal to one hour
# it is assumed that each simpy timestep equals one hour
t = env.now
fuel = simpy_car.power * (t - t_prev) * 3600 / (E_OIL * EFFICIENCY)
t_prev = t
yield env.process(simpy_car.consume(env, timestep, fuel, lat))
except sp.Interrupt as e:
action = e.cause
while True:
try:
if action == 'weather':
yield env.process(simpy_car.refuel(env, simpy_tankcar))
elif action == 'tanking':
yield env.process(simpy_car.tanking(env, simpy_tankcar))
simpy_car.status = 'DRIVING'
elif action == 'refuelling':
simpy_tankcar.status = 'DRIVING'
simpy_car.status = 'DRIVING'
action = None
break
except sp.Interrupt as e:
action = e.cause
try:
yield env.process(simpy_car.check_fuel(env, simpy_tankcar))
except sp.Interrupt as e:
action = e.cause
while True:
try:
if action == 'weather':
yield env.process(simpy_car.refuel(env, simpy_tankcar))
elif action == 'tanking':
yield env.process(simpy_car.tanking(env, simpy_tankcar))
simpy_car.status = 'DRIVING'
elif action == 'refuelling':
simpy_tankcar.status = 'DRIVING'
simpy_car.status = 'DRIVING'
action = None
break
except sp.Interrupt as e:
action = e.cause
yield from ()
</code></pre>
<p>Below the code with the function and function calls</p>
<pre><code>def process_interrupt(action, simpy_car, simpy_tankcar):
"""
State machine
"""
while True:
try:
if action == 'weather':
yield env.process(simpy_car.refuel(env, simpy_tankcar))
elif action == 'tanking':
yield env.process(simpy_car.tanking(env, simpy_tankcar))
simpy_car.status = 'DRIVING'
elif action == 'refuelling':
simpy_tankcar.status = 'DRIVING'
simpy_car.status = 'DRIVING'
action = None
break
except sp.Interrupt as e:
action = e.cause
return action
def offer(env, simpy_car, simpy_tankcar, route, timestep):
action = None
t_prev = -1
for lat in route:
try:
t = env.now
fuel = simpy_car.power * (t - t_prev) * 3600 / (E_OIL * EFFICIENCY)
t_prev = t
yield env.process(simpy_car.consume(env, timestep, fuel, lat))
except sp.Interrupt as e:
action = e.cause
action = process_interrupt(action, simpy_car, simpy_tankcar)
try:
yield env.process(simpy_car.check_fuel(env, simpy_tankcar))
except sp.Interrupt as e:
action = e.cause
action = process_interrupt(action, simpy_car, simpy_tankcar)
yield from ()
</code></pre>
<p>I expect both versions to yield the same results, but they don't. What am I doing wrong? Why does the function version yield different results?</p>
|
<python><simpy>
|
2024-12-09 13:01:59
| 1
| 460
|
Frank Tap
|
79,264,799
| 1,276,622
|
Classes uses the same properties
|
<p>I have a simple piece of code. In it I make 2 animals a cat and a spider. They have a difference number of legs and eyes. I just want to print these data.</p>
<pre><code>class Legs:
def __init__(self, amount):
self.amount = amount
def legsInfo(self):
return f"{self.amount} legs"
class Eyes:
def __init__(self, amount):
self.amount = amount
def eyesInfo(self):
return f"{self.amount} eyes"
class Animal(Legs, Eyes):
def __init__(self, name, legs_amount, eyes_amount):
self.name = name
Legs.__init__(self, legs_amount)
Eyes.__init__(self, eyes_amount)
def legsInfo(self):
return super().legsInfo()
def eyesInfo(self):
return super().eyesInfo()
# Objects
cat = Animal("Tom", 4, 2)
spider = Animal("Webster", 8, 6)
# Test de output
print(cat.legsInfo()) # 2 legs ERROR
print(cat.eyesInfo()) # 2 eyes
print(spider.legsInfo()) # 6 legs ERROR
print(spider.eyesInfo()) # 6 eyes
</code></pre>
<p>It appears that because of the property amount is used in both classes, this value is shared. Which I think is very strange, it are different classes, so there is no need to share.</p>
<p>Is this a Python bug or some kind of "special feature"?</p>
<p>I have changed the name amount to legsAmount and eyesAmount and then it works fine. I want to define / specify / set the amount variable as "only for this class" or something.</p>
|
<python><class>
|
2024-12-09 11:27:55
| 4
| 4,410
|
Vincent
|
79,264,683
| 7,344,164
|
Error loading Pytorch model checkpoint: _pickle.UnpicklingError: invalid load key, '\x1f'
|
<p>I'm trying to load the weights of a Pytorch model but getting this error: <code>_pickle.UnpicklingError: invalid load key, '\x1f'.</code></p>
<p>Here is the weights loading code:</p>
<pre><code>import os
import torch
import numpy as np
# from data_loader import VideoDataset
import timm
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Device being used:', device)
mname = os.path.join('./CDF2_0.pth')
checkpoints = torch.load(mname, map_location=device)
print("Checkpoint loaded successfully.")
model = timm.create_model('legacy_xception', pretrained=True, num_classes=2).to(device)
model.load_state_dict(checkpoints['state_dict'])
model.eval()
</code></pre>
<p>I have tried with different Pytorch versions. I have tried to inspect the weights by changing the extension to .zip and opening with archive manager but can't fix the issue. Here is a <a href="https://drive.google.com/file/d/1wGsqX1LGkdmbH5tNxieX_rm2IcViH3d6/view?usp=sharing" rel="nofollow noreferrer">public link</a> to the weights .pth file, I'm trying to load. Any help is highly appreciated as I have around 40 trained models that took around one month for training!</p>
|
<python><deep-learning><pytorch><pickle><torch>
|
2024-12-09 10:47:29
| 1
| 14,299
|
DevLoverUmar
|
79,264,675
| 3,400,076
|
Often Misused: File Upload - Fortify
|
<pre><code><input id="field" name="upload" type="file" />
</code></pre>
<p>Fortify flagged the above line of code and indicated as Medium Priority, with the title, Often Misused: File Upload.</p>
<p>The recommendation indicated that</p>
<blockquote>
<p>Do not allow file uploads if they can be avoided. If a program must
accept file upoads, then restrict the abilitiy of an attacker to
supply malicious content by only accepting the specific types of
content the program expects. Most attacks that rely on uploaded
content require that attackers be able to supply content of their
choosing. Placing restrictions on the content the program will accept
will greatly limit the range of possible attacks. Check file names,
extension and file content to make sure they are all expected and
acceptable for use by the application.</p>
</blockquote>
<p>I did the check file names, extension and file content in the form submit action, but the fortify still flagged out. May I know what is the right fix for this flagged out?</p>
|
<python><jinja2><fortify>
|
2024-12-09 10:45:57
| 0
| 519
|
xxestter
|
79,264,662
| 7,600,014
|
Showing package specifiers
|
<p>In my project, which I manage with <code>uv</code>, some transitive dependency is restricting numpy to <code>numpy==2.0</code>; even though the newest version is <code>numpy=2.2</code>.</p>
<p>Is there any convenient way to make <code>uv</code> tell me what the cause of this restriction in my dependency tree is?</p>
<p>I can get halfway there by <code>uv tree --invert --package numpy</code>, resulting in (redacted):</p>
<pre class="lang-bash prettyprint-override"><code>numpy v2.0.2
βββ numba v0.60.0
| βββ my_main_project v0.1.0
βββ matplotlib v3.9.3
βββ scipy v1.14.1
</code></pre>
<p>However, this is forcing me to walk the respective packages by hand to find the restriction.</p>
<p>Much more convenient here (<code>pip install pipdeptree</code>):</p>
<pre class="lang-bash prettyprint-override"><code>$ pipdeptree -r -p numpy
numpy==2.0.2
βββ numba==0.60.0 [requires: numpy>=1.22,<2.1]
β βββ my_main_project v0.1.0 [requires: numba>=0.60]
βββ matplotlib==3.9.3 [requires: numpy>=1.23]
...
βββ scipy==1.14.1 [requires: numpy>=1.23.5,<2.3]
</code></pre>
<p>Is there any way to produce this with native <code>uv</code>?</p>
|
<python><uv>
|
2024-12-09 10:42:35
| 1
| 487
|
mzoll
|
79,264,514
| 6,239,971
|
Data retrieving and SQL database update
|
<p>I'm trying to retrieve some data from an API and save them to a local database I created. All data come from Google Ads campaigns, and I need to make two separate calls because of their docs, but that's good. I take the data, mix them in a dataframe, and then call a second function to check if that specific row is already in the database to add or update the row properly.</p>
<p>To check if the row is already in the database, I need to look at these fields: <code>['date', 'campaign', 'keyword_text']</code>. The problem is that <code>keyword_text</code> can be a value or be <code>None</code> (while the other two are always a value and can't be empty).</p>
<p>The save_to_database function is working for every other retrieving function, but when it faces the <code>None</code> value of <code>keyword_text</code>, it recognizes it's already there, but then it duplicates the entry in the database.</p>
<p>That's the function to retrieve the data:</p>
<pre><code>def google_ads_retrieving(start_date, end_date):
data_source = "google_ads"
fields = FIELDS['google_ads']
r = api_retrieving(data_source, fields, start_date, end_date)
if r.status_code == 200:
data_json = json.loads(r.text)
else:
st.error(f"Errore nel recupero dei dati da {data_source}: {str(r.text)}")
return
df = pd.json_normalize(data_json, record_path=["data"])
df["date"] = pd.to_datetime(df["date"]).dt.date
fields_conv = FIELDS['google_ads_conv']
r_conv = api_retrieving(data_source, fields_conv, start_date, end_date)
if r_conv.status_code == 200:
data_conv_json = json.loads(r_conv.text)
else:
st.error(f"Errore nel recupero delle conversioni da {data_source}: {str(r.text)}")
return
df_conv = pd.json_normalize(data_conv_json, record_path=["data"])
df_conv["date"] = pd.to_datetime(df_conv["date"]).dt.date
df_conv['conversions'] = pd.to_numeric(df_conv['conversions'], errors='coerce').fillna(0).astype(int)
df = df.merge(df_conv, on=["datasource", "source", "account_id", "account_name", "date", "campaign", "keyword_text"], how="left")
df['keyword_text'] = df['keyword_text'].fillna('').astype(str).str.strip()
try:
save_to_database(df, f"{data_source}_data", is_api=True)
st.success(f"Dati da {data_source} salvati correttamente")
except Exception as e:
st.error(f"Errore nel salvare i dati da {data_source}: {str(e)}")
</code></pre>
<p>That's the function to save data in the database:</p>
<pre><code>def save_to_database(df, table_name, is_api=True):
conn = sqlite3.connect('local_data.db')
cursor = conn.cursor()
if is_api:
if table_name == 'facebook_data':
key_columns = ['date', 'campaign', 'adset_name', 'ad_name', 'age', 'gender']
elif table_name == 'google_ads_data':
key_columns = ['date', 'campaign', 'keyword_text']
elif table_name == 'tiktok_data':
key_columns = ['date', 'campaign', 'ad_group_name', 'ad_name']
elif table_name == 'googleanalytics4_data':
key_columns = ['date', 'campaign']
else:
st.warning(f"Tabella {table_name} non supportata nella funzione save_to_database. I dati potrebbero non essere salvati correttamente.")
key_columns = ['date', 'campaign']
else:
key_columns = ['id']
if 'keyword_text' in key_columns:
df['keyword_text'] = df['keyword_text'].fillna('').astype(str).str.strip()
cursor.execute(f"SELECT {', '.join(key_columns)} FROM {table_name}")
existing_data = set()
for row in cursor.fetchall():
existing_key = tuple(k if k is not None else '' for k in row)
existing_data.add(existing_key)
for _, row in df.iterrows():
key = tuple(row[col] for col in key_columns)
key_str = tuple(k if pd.notnull(k) else '' for k in key)
if key_str in existing_data:
update_query = f"UPDATE {table_name} SET "
update_query += ", ".join([f"{col} = ?" for col in df.columns if col not in key_columns])
update_query += f" WHERE {' AND '.join([f'{col} = ?' for col in key_columns])}"
update_values = [row[col] for col in df.columns if col not in key_columns] + list(key)
cursor.execute(update_query, update_values)
else:
insert_query = f"INSERT INTO {table_name} ({', '.join(df.columns)}) VALUES ({', '.join(['?' for _ in df.columns])})"
cursor.execute(insert_query, row.tolist())
existing_data.add(key_str)
conn.commit()
conn.close()
</code></pre>
<p>What am I missing here to make this work?</p>
|
<python><pandas><dataframe><sqlite3-python>
|
2024-12-09 09:51:44
| 0
| 454
|
Davide
|
79,264,444
| 4,211,520
|
How to automate ffmpeg to split and merge parts of video, and keep the audio in sync?
|
<p>I have a Python script that automates trimming a large video (2 hours) into smaller segments and then concatenating them without re-encoding, to keep the process fast. The script runs these ffmpeg commands:</p>
<pre><code>import subprocess
# Extract chunks
segments = [(0, 300), (300, 600), (600, 900)] # example segments in seconds
for i, (start, length) in enumerate(segments):
subprocess.run([
"ffmpeg", "-i", "input.mp4", "-ss", str(start), "-t", str(length),
"-c", "copy", "-reset_timestamps", "1", "-y", f"chunk_{i}.mp4"
], check=True)
# Create concat list
with open("list.txt", "w") as f:
for i in range(len(segments)):
f.write(f"file 'chunk_{i}.mp4'\n")
# Concatenate
subprocess.run([
"ffmpeg", "-f", "concat", "-safe", "0",
"-i", "list.txt", "-c", "copy", "-y", "merged_output.mp4"
], check=True)
</code></pre>
<p>All chunks come from the same source video, with identical codecs, resolution, and bitrate. Despite this, the final merged_output.mp4 sometimes has audio out of syncβespecially after the first chunk.</p>
<p>Iβve tried using -ss before -i to cut on keyframes, but the issue persists.</p>
<p><strong>Question</strong>: How can I ensure correct A/V sync in the final concatenated video when programmatically segmenting and merging via ffmpeg without fully re-encoding? Is there a way to adjust the ffmpeg commands or process to avoid audio desynchronization?</p>
|
<python><video><ffmpeg>
|
2024-12-09 09:29:22
| 1
| 31,711
|
Tree
|
79,264,326
| 597,742
|
Use `structlog` in a library while respecting `stdlib` log configuration
|
<p>I'm looking to use <code>structlog</code> in a Python library, but would also like to transparently support the logging configuration set up by a containing application. In particular:</p>
<ul>
<li>if a log level is set (e.g. via <code>logging.basicConfig</code>), respect that</li>
<li>support the standard <code>pytest</code> logging fixture (including the configured log level)</li>
</ul>
<p>As this is a library, making global changes to the <code>structlog</code> or <code>logging</code> configuration isn't appropriate.</p>
<p>So far, the best I have been able to come up with is to expose a public API along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import structlog.stdlib
def use_stdlib_logging(log_level: int|None = logging.INFO):
"""Configure SDK logging to use stdlib logging with the given log level.
Note: this alters the default global structlog configuration.
"""
structlog.stdlib.recreate_defaults(log_level=log_level)
</code></pre>
<p>While this would work, it means every embedding application needs to know to call that API - by default, the library would completely ignore the standard library's logging config.</p>
|
<python><structlog>
|
2024-12-09 08:46:27
| 1
| 41,806
|
ncoghlan
|
79,264,247
| 12,813,584
|
similarity from word to sentence after doing words Embedding
|
<p>I have dataframe with 1000 text rows.</p>
<p>I did word2vec .</p>
<p>Now I want to create a new field which give me the distance from each sentence to the word that i want, lets say the word "king".</p>
<p>I thought about taking in each sentence the 4 closet words to the word king and make average of them.
maybe by using <code>model.wv.similarity</code>.
the avg of each sentnce will be in the field df['king']</p>
<p>I will glad to know how to do that or to hear about another method.</p>
<p>example data:</p>
<pre><code> data = {
'text': [
"The king sat on the throne with wisdom.",
"A queen ruled the kingdom alongside the king.",
"Knights were loyal to their king.",
"The empire prospered under the rule of a wise monarch."
]
}
df = pd.DataFrame(data)
df['text']=df['text'].str.split()
model = Word2Vec(df['text'], vector_size=100, window=2, min_count=1 )
model.wv.similarity('Knights','king')
</code></pre>
<p><strong>edit</strong>:</p>
<p>My mission is:</p>
<p>I have 1000 text rows (people that complain about something)
I want to catalog them into 4 words.
Lets say that word 1 is king. Word 2 is castleβ¦
I want to know about each sentence which word from the 4 words most represent the sentence.
In order to do that I thought about taking each word from the 4 words and calculate <code>model.wv.similarity</code> to all of the words in df['text'].
After that, for each sentence, take the 3 words that have the highest score to word king (and to the word castle and ets..) .
calculate mean of the 3 highest score and that would be the value of df['king'] for the sentence</p>
|
<python><nlp><text-mining><word2vec><similarity>
|
2024-12-09 08:14:04
| 1
| 469
|
rafine
|
79,264,125
| 6,301,394
|
Casting expression result
|
<p>Would it be possible to cast an expression into a specific datatype without casting the source columns?</p>
<p>Consider the following frame:</p>
<pre><code>df = pl.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, schema={'A': pl.UInt8, 'B': pl.UInt8})
βββββββ¬ββββββ
β A β B β
β --- β --- β
β u8 β u8 β
βββββββͺββββββ‘
β 1 β 4 β
β 2 β 5 β
β 3 β 6 β
βββββββ΄ββββββ
</code></pre>
<p>Now using an expression:</p>
<pre><code>df = df.with_columns((pl.col("A") - pl.col("B")).alias("C"))
βββββββ¬ββββββ¬ββββββ
β A β B β C β
β --- β --- β --- β
β u8 β u8 β u8 β
βββββββͺββββββͺββββββ‘
β 1 β 4 β 253 β
β 2 β 5 β 253 β
β 3 β 6 β 253 β
βββββββ΄ββββββ΄ββββββ
</code></pre>
<p>This is obviously correct in comp-sci terms but not practical. Can I only resolve this by first casting "A" and "B" to a signed integer or is there a better/more concise way?</p>
|
<python><python-polars>
|
2024-12-09 07:14:32
| 1
| 2,613
|
misantroop
|
79,263,708
| 14,205,874
|
Spotipy (python) functions run forever with OAuth
|
<pre><code> import requests
import json
import spotipy
from credentials import CLIENT_ID, CLIENT_SECRET, REDIRECT_URI
from spotipy.oauth2 import SpotifyOAuth
# auth
print('starting auth')
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(client_id=CLIENT_ID, redirect_uri=REDIRECT_URI, client_secret=CLIENT_SECRET, scope='playlist-modify-public user-library-read'))
print('finished auth')
taylor_uri = 'spotify:artist:06HL4z0CvFAxyc27GXpf02'
print('searching')
results = sp.artist_albums(taylor_uri, album_type='album')
print('done searching')
albums = results['items']
while results['next']:
results = sp.next(results)
albums.extend(results['items'])
for album in albums:
print(album['name'])
</code></pre>
<pre><code> results = sp.artist_albums(taylor_uri, album_type='album')
</code></pre>
<p>the line above leaves the code hanging, i have tried many other functions but will just load forever...</p>
<p><a href="https://i.sstatic.net/vT8Cwu7o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vT8Cwu7o.png" alt="enter image description here" /></a></p>
|
<python><spotipy>
|
2024-12-09 02:29:14
| 0
| 407
|
Tony
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.