| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,253,276
| 10,132,474
|
How can I write a PyTorch model using diamond inheritance?
|
<p>I need to implement a complex model, and diamond inheritance seems like the best way to satisfy the different requirements. Here is a toy example showing what I am trying to do:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as modelzoo
import torch.distributed as dist
class A(nn.Module):
def __init__(self, ratio=4, *args, **kwargs):
super().__init__()
self.conv_base = nn.Conv2d(3, 3 * ratio, 3, 1, 1)
class B(A):
def __init__(self, b_args, **kwargs):
super().__init__(ratio=4)
self.conv1 = nn.Conv2d(4, 3, 1, 1, 0)
class C(A):
def __init__(self, c_args, **kwargs):
super().__init__(ratio=4)
self.conv2 = nn.Conv2d(4, 3, 1, 1, 0)
class D(B, C):
def __init__(self, b_args, c_args):
super().__init__(b_args=b_args, c_args=c_args)
self.conv3 = nn.Conv2d(4, 3, 1, 1, 0)
b_args = dict(a=1)
c_args = dict(b=2)
model = D(b_args, c_args)
</code></pre>
<p>I get this error:</p>
<pre><code>Traceback (most recent call last):
File "tmp.py", line 95, in <module>
model = D(b_args, c_args)
File "tmp.py", line 88, in __init__
super().__init__(b_args=b_args, c_args=c_args)
File "tmp.py", line 72, in __init__
super().__init__(ratio=4)
TypeError: C.__init__() missing 1 required positional argument: 'c_args'
</code></pre>
<p>How can I make this work?</p>
<p>By the way, I need <code>B</code> and <code>C</code> to be usable on their own, which means they are not only parents of <code>D</code> but also classes that can be instantiated directly.</p>
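<p>For reference, this is the cooperative pattern I'm currently considering, where every <code>__init__</code> accepts and forwards <code>**kwargs</code> so that the MRO can route the arguments. It is only a sketch of my own idea, and the <code>b_args</code>/<code>c_args</code> handling is just illustrative:</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn

class A(nn.Module):
    def __init__(self, ratio=4, **kwargs):
        super().__init__(**kwargs)  # reaches nn.Module with kwargs already consumed
        self.conv_base = nn.Conv2d(3, 3 * ratio, 3, 1, 1)

class B(A):
    def __init__(self, b_args=None, **kwargs):
        super().__init__(**kwargs)  # calls the next class in the MRO, not necessarily A
        self.b_args = b_args
        self.conv1 = nn.Conv2d(4, 3, 1, 1, 0)

class C(A):
    def __init__(self, c_args=None, **kwargs):
        super().__init__(**kwargs)
        self.c_args = c_args
        self.conv2 = nn.Conv2d(4, 3, 1, 1, 0)

class D(B, C):
    def __init__(self, b_args=None, c_args=None):
        super().__init__(b_args=b_args, c_args=c_args, ratio=4)
        self.conv3 = nn.Conv2d(4, 3, 1, 1, 0)

model = D(dict(a=1), dict(b=2))
standalone_b = B(dict(a=1))  # B and C remain instantiable on their own
standalone_c = C(dict(b=2))
</code></pre>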
|
<python><inheritance><pytorch>
|
2024-12-05 03:00:17
| 1
| 1,147
|
coin cheung
|
79,253,234
| 1,492,613
|
How does area interpolation for downsampling work when the scale factor is not an integer?
|
<p>By definition, area interpolation just weights each input pixel by the area it contributes to an output pixel.</p>
<p>So I suppose that if the scale factor is 1.5, output pixel (0, 0) contains all of input pixel (0, 0), half of (0, 1) and (1, 0), and a quarter of (1, 1), with the weight being <code>in_pixel_area/(1.5)^2</code>.</p>
<p>However, this does not seem to be the case:</p>
<pre class="lang-py prettyprint-override"><code>x = torch.tensor(
[[3, 106, 107, 40, 148, 112, 254, 151],
[62, 173, 91, 93, 33, 111, 139, 25],
[99, 137, 80, 231, 101, 204, 74, 219],
[240, 173, 85, 14, 40, 230, 160, 152],
[230, 200, 177, 149, 173, 239, 103, 74],
[19, 50, 209, 82, 241, 103, 3, 87],
[252, 191, 55, 154, 171, 107, 6, 123],
[7, 101, 168, 85, 115, 103, 32, 11]],
dtype=torch.float).unsqueeze(0).unsqueeze(0)
print(x.shape, x.sum())
for scale in [8/6, 2]:
pixel_area = scale**2
y = F.interpolate(x, scale_factor=1/scale, mode="area")
print(y.shape, y, y.sum()*pixel_area)
print((3 + 106*(scale-1) + 62*(scale-1) + 173*(scale-1)**2)/pixel_area)
print((11 + 123*(scale-1) + 32*(scale-1) + 6*(scale-1)**2)/pixel_area)
</code></pre>
<p>the output is:</p>
<pre><code>torch.Size([1, 1, 8, 8]) tensor(7707.)
torch.Size([1, 1, 6, 6]) tensor([[[[ 86.0000, 119.2500, 82.7500, 101.0000, 154.0000, 142.2500],
[117.7500, 120.2500, 123.7500, 112.2500, 132.0000, 114.2500],
[162.2500, 118.7500, 102.5000, 143.7500, 167.0000, 151.2500],
[124.7500, 159.0000, 154.2500, 189.0000, 112.0000, 66.7500],
[128.0000, 126.2500, 125.0000, 155.5000, 54.7500, 54.7500],
[137.7500, 128.7500, 115.5000, 124.0000, 62.0000, 43.0000]]]]) tensor(7665.7778)
43.99999999999999
35.62499999999999
torch.Size([1, 1, 4, 4]) tensor([[[[ 86.0000, 82.7500, 101.0000, 142.2500],
[162.2500, 102.5000, 143.7500, 151.2500],
[124.7500, 154.2500, 189.0000, 66.7500],
[137.7500, 115.5000, 124.0000, 43.0000]]]]) tensor(7707.)
86.0
43.0
</code></pre>
<p>We can see that when scale = 2 it works fine,
but when scale = 8/6 it gives a strange result.</p>
<p>First, <code>y.sum()*pixel_area</code> does not equal <code>x.sum()</code>.
Second, when I directly calculate the output pixel value from the weights, I get 44 instead of 86.
Third, I would expect the output (0, 0) pixel to differ between the two scales, but apparently it is still 86. Why?</p>
<p><strong>update</strong>
On closer look, it seems that when scale = 8/6 it simply applies a 2x2 averaging kernel with stride 1x1. But isn't this against the definition of area interpolation?</p>
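<p>For what it's worth, my current way of probing this is to compare against adaptive average pooling, which I assume (from how the numbers line up, not from reading the source) is what <code>mode="area"</code> boils down to:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

x = torch.rand(1, 1, 8, 8)

y_area = F.interpolate(x, scale_factor=6 / 8, mode="area")
y_pool = F.adaptive_avg_pool2d(x, output_size=(6, 6))

# if mode="area" really is adaptive average pooling, this should print True
print(torch.allclose(y_area, y_pool))
</code></pre>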
|
<python><pytorch><interpolation>
|
2024-12-05 02:19:16
| 1
| 8,402
|
Wang
|
79,253,146
| 1,487,336
|
What is the difference between `cached_property` and `field(init=False)` in Python dataclass?
|
<p>I have a dataset that needs to be loaded from a database. I'm wondering what the difference is between the following two ways of handling it.</p>
<pre><code>import pandas as pd
from dataclasses import dataclass, field
@dataclass
class A:
df: pd.DataFrame = field(init=False)
def load_df(self):
self.df = query_from_database()
</code></pre>
<p>and</p>
<pre><code>import pandas as pd
from dataclasses import dataclass, field
from functools import cached_property
@dataclass
class A:
@cached_property
def df(self):
df = query_from_database()
return df
</code></pre>
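<p>To make the usage difference concrete for myself, I put together this small comparison; the <code>Eager</code>/<code>Lazy</code> names and the stubbed <code>query_from_database</code> are just my own illustration:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from dataclasses import dataclass, field
from functools import cached_property

def query_from_database() -> pd.DataFrame:  # stand-in for the real query
    print("querying...")
    return pd.DataFrame({"x": [1, 2, 3]})

@dataclass
class Eager:  # field(init=False): loading is explicit
    df: pd.DataFrame = field(init=False)
    def load_df(self):
        self.df = query_from_database()

@dataclass
class Lazy:  # cached_property: loading happens on first attribute access
    @cached_property
    def df(self) -> pd.DataFrame:
        return query_from_database()

e = Eager()
# accessing e.df here raises AttributeError; nothing is loaded until load_df() runs
e.load_df()   # prints "querying..."

lazy_a = Lazy()
_ = lazy_a.df  # prints "querying..." on first access
_ = lazy_a.df  # cached; no second query
</code></pre>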
|
<python><python-dataclasses>
|
2024-12-05 01:18:04
| 2
| 809
|
Lei Hao
|
79,252,985
| 11,197,957
|
Is there a sane way of writing an external Python library with mssparkutils calls?
|
<p>I have been tasked with clarifying and simplifying a pretty knotty codebase, which exists at present as a series of <strong>"runbooks"</strong> in <strong>Azure Synapse</strong>. As part of that process, I thought it would be good to put some of the more convoluted data analysis into <strong>an external Python library</strong>. That way, we can do any software development locally, which tends to be a lot quicker; we can also carry out unit testing locally, which makes it quicker to track down bugs and fix them.</p>
<p>I ran into a problem early on, however, with anything related to the <strong><code>mssparkutils</code></strong> object. You can call this object freely within the runbooks themselves, but, if you attempt to call that object within a library which is imported into a runbook, your code will crash.</p>
<p>I discovered the <code>dummy-notebookutils</code> package earlier today, and I thought it might be my salvation, but now it seems that I'm out of luck. I use it like this in the library:</p>
<pre class="lang-py prettyprint-override"><code>from notebookutils import mssparkutils
from pathlib import Path
def copy_file_to_folder(path_to_file: str, path_to_folder: str) -> str:
mssparkutils.fs.cp(path_to_file, f"file:{path_to_folder}")
# Return path to which file was copied.
result = str(Path(path_to_folder)/Path(path_to_file).name)
return result
</code></pre>
<p>This certainly does what I want locally, i.e. <code>mssparkutils</code> does nothing. But I run into seemingly-insurmountable problems when I try to import this library into a runbook. Specifically, if I try to run this line:</p>
<pre class="lang-py prettyprint-override"><code>props = json.loads(mssparkutils.credentials.getPropertiesAll(linked_service))
</code></pre>
<p>It gives me this exception:</p>
<pre><code>NameError: name 'mssparkutils' is not defined
</code></pre>
<p>If I try to fix the problem by placing an import above it:</p>
<pre><code>from notebookutils import mssparkutils
</code></pre>
<p>Then <code>mssparkutils.credentials.getPropertiesAll(linked_service)</code> returns an empty string (which it was not doing before), and therefore <code>json.loads()</code> crashes.</p>
<p>It seems that just the presence of that <code>dummy-notebookutils</code> import is interfering with the built-in <code>mssparkutils</code>, and is thus causing havoc.</p>
<p>Is there a sane way of writing external libraries for use in Synapse like this? Or is my whole approach wrong? Is there a way of using <code>dummy-notebookutils</code> so that the runbook doesn't crash?</p>
<p>I have been thinking about wrapping the whole library in a big class and then, within the notebook, passing the <code>mssparkutils</code> object into the constructor. That would probably work, but it would disrupt the existing structure of the codebase, which I am reluctant to do if there is any viable alternative.</p>
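<p>For completeness, this is the kind of constructor-injection wrapper I had in mind; the <code>FileOps</code> name is just my own sketch, not an established Synapse pattern:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path

class FileOps:
    """Wraps the utilities object so the library never imports mssparkutils itself."""

    def __init__(self, utils):
        # in a runbook: FileOps(mssparkutils); in unit tests: FileOps(MagicMock())
        self._utils = utils

    def copy_file_to_folder(self, path_to_file: str, path_to_folder: str) -> str:
        self._utils.fs.cp(path_to_file, f"file:{path_to_folder}")
        return str(Path(path_to_folder) / Path(path_to_file).name)
</code></pre>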
|
<python><azure><python-import><azure-synapse>
|
2024-12-04 23:14:37
| 1
| 734
|
Tom Hosker
|
79,252,957
| 754,136
|
Bars almost disappear when I layer and facet charts
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import altair as alt
np.random.seed(0)
model_keys = ['M1', 'M2']
data_keys = ['D1', 'D2']
scene_keys = ['S1', 'S2']
layer_keys = ['L1', 'L2']
ys = []
models = []
dataset = []
layers = []
scenes = []
for sc in scene_keys:
for m in model_keys:
for d in data_keys:
for l in layer_keys:
for s in range(10):
data_y = list(np.random.rand(10) / 10)
ys += data_y
scenes += [sc] * len(data_y)
models += [m] * len(data_y)
dataset += [d] * len(data_y)
layers += [l] * len(data_y)
df = pd.DataFrame({
'Y': ys,
'Model': models,
'Dataset': dataset,
'Layer': layers,
'Scenes': scenes},
)
bars = alt.Chart(df, width=100, height=90).mark_bar(tooltip=True).encode(
x=alt.X("Scenes:N"),
y=alt.Y("mean(Y):Q"),
color=alt.Color("Scenes:N"),
opacity=alt.Opacity(
"Dataset:N",
scale=alt.Scale(
domain=['D1', 'D2'],
),
legend=alt.Legend(
labelExpr="datum.label == 'D1' ? 'D1 - transparent' : 'D2 - full'"
),
),
xOffset=alt.XOffset("Dataset:N"),
column=alt.Column('Layer:N'),
row=alt.Row("Model:N")
)
bars.save('test.html')
</code></pre>
<p>This script first generates some random data where each configuration is determined by four values: model, dataset, layer, scene. It then stores the data in a dataframe and makes a chart. This works fine and gives me this:</p>
<p><a href="https://i.sstatic.net/8XpbfJTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8XpbfJTK.png" alt="enter image description here" /></a></p>
<p>But I need to add error bars and text, and here is where things go wrong. First, I have to remove <code>row</code> and <code>column</code> from the chart or I can't layer it. Then I make an error-bar chart and a text chart, layer them, and facet the result by <code>row</code> and <code>column</code> again.</p>
<pre class="lang-py prettyprint-override"><code>bars = alt.Chart(df, width=100, height=90).mark_bar(tooltip=True).encode(
x=alt.X("Scenes:N"),
y=alt.Y("mean(Y):Q"),
color=alt.Color("Scenes:N"),
opacity=alt.Opacity(
"Dataset:N",
scale=alt.Scale(
domain=['D1', 'D2'],
),
legend=alt.Legend(
labelExpr="datum.label == 'D1' ? 'D1 - transparent' : 'D2 - full'"
),
),
xOffset=alt.XOffset("Dataset:N"),
# column=alt.Column('Layer:N'),
# row=alt.Row("Model:N")
)
error_bars = alt.Chart(df).mark_errorbar(extent='ci').encode(
x=alt.X('Scenes:N'),
y=alt.Y('Y:Q'),
)
text = alt.Chart(df).mark_text(align='center',
baseline='line-bottom',
color='black',
dy=-5, # y-shift
).encode(
x=alt.X('Scenes:N'),
y=alt.Y('mean(Y):Q'),
text=alt.Text('mean(Y):Q', format='.1f'),
)
combined = alt.layer(bars, error_bars, text).facet(
column=alt.Column('Layer:N'),
row=alt.Row("Model:N"),
spacing={"row": 0, "column": 15},
).resolve_scale(x='independent')
combined.save('test.html')
</code></pre>
<p>The aggregation works, but the bars suddenly become extremely thin. How can I fix it?</p>
<p><a href="https://i.sstatic.net/t5gQVZyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t5gQVZyf.png" alt="enter image description here" /></a></p>
|
<python><altair>
|
2024-12-04 22:58:13
| 1
| 5,474
|
Simon
|
79,252,863
| 1,473,517
|
Is it possible to speed up my set implementation?
|
<p>I am trying to make a fast and space-efficient set implementation for 64-bit unsigned ints. I don't want to use <code>set()</code>, as that converts everything into Python ints, which use much more space than 8 bytes per int. Here is my effort:</p>
<pre><code>import numpy as np
from numba import njit
class HashSet:
def __init__(self, capacity=1024):
self.capacity = capacity
self.size = 0
self.EMPTY = np.uint64(0xFFFFFFFFFFFFFFFF) # 2^64 - 1
self.DELETED = np.uint64(0xFFFFFFFFFFFFFFFE) # 2^64 - 2
self.table = np.full(capacity, self.EMPTY) # Initialize with a special value indicating empty
def insert(self, key):
if self.size >= self.capacity:
raise RuntimeError("Hash table is full")
if not self._insert(self.table, key, self.capacity, self.EMPTY, self.DELETED, self._hash):
print(f"Key already exists: {key}")
else:
self.size += 1
def contains(self, key):
return self._contains(self.table, key, self.capacity, self.EMPTY, self.DELETED, self._hash)
def remove(self, key):
if self._remove(self.table, key, self.capacity, self.EMPTY, self.DELETED, self._hash):
self.size -= 1
def __len__(self):
return self.size
@staticmethod
@njit
def _hash(key, capacity):
return key % capacity
@staticmethod
@njit
def _insert(table, key, capacity, EMPTY, DELETED, hash_func):
index = hash_func(key, capacity)
while table[index] != EMPTY and table[index] != DELETED and table[index] != key:
index = (index + 1) % capacity
if table[index] == key:
return False # Key already exists
table[index] = key
return True
@staticmethod
@njit
def _contains(table, key, capacity, EMPTY, DELETED, hash_func):
index = hash_func(key, capacity)
while table[index] != EMPTY:
if table[index] == key:
return True
index = (index + 1) % capacity
return False
@staticmethod
@njit
def _remove(table, key, capacity, EMPTY, DELETED, hash_func):
index = hash_func(key, capacity)
while table[index] != EMPTY:
if table[index] == key:
table[index] = DELETED
return True
index = (index + 1) % capacity
return False
</code></pre>
<p>I am using numba wherever I can to speed things up, but it still isn't very fast. For example:</p>
<pre><code>hash_set = HashSet(capacity=204800)
keys = np.random.randint(0, 2**64, size=100000, dtype=np.uint64)
def insert_and_remove(hash_set, key):
hash_set.insert(np.uint64(key))
hash_set.remove(key)
%timeit insert_and_remove(hash_set, keys[0])
</code></pre>
<p>This gives:</p>
<pre><code>16.9 μs ± 407 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
</code></pre>
<p>I think the main cause is the code that I have failed to wrap with numba.</p>
<p>How can this be sped up?</p>
<p><strong>EDIT</strong></p>
<p>@ken suggested defining _hash as a global function outside the class. This speeds things up so now it is only ~50% slower than set().</p>
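<p>For reference, this is roughly how I applied that suggestion; only <code>_insert</code> is shown, and passing <code>capacity</code> in as <code>np.uint64</code> so the modulo arithmetic stays unsigned is my own choice:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import njit

@njit
def _hash(key, capacity):
    return key % capacity

@njit
def _insert(table, key, capacity, EMPTY, DELETED):
    # calls the module-level jitted _hash directly instead of receiving it as an argument
    index = _hash(key, capacity)
    one = np.uint64(1)
    while table[index] != EMPTY and table[index] != DELETED and table[index] != key:
        index = (index + one) % capacity
    if table[index] == key:
        return False  # key already present
    table[index] = key
    return True

# inside the class, e.g.:
# _insert(self.table, np.uint64(key), np.uint64(self.capacity), self.EMPTY, self.DELETED)
</code></pre>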
|
<python><performance><numba>
|
2024-12-04 22:06:10
| 1
| 21,513
|
Simd
|
79,252,813
| 12,334,204
|
How to "inject" a local variable when importing a python module?
|
<p>Is it possible to declare a variable in a module <em>before</em> that module is imported?</p>
<p>For example to get this code to run as expected:</p>
<pre class="lang-py prettyprint-override"><code># a.py
# do magic here to make b.foo = "bar"
import b
b.printlocal()
</code></pre>
<pre class="lang-py prettyprint-override"><code># b.py
# imagine this file cannot be changed (e.g., for dumb political reasons)
local_var = foo
def printlocal():
print(local_var)
</code></pre>
<p>Running <code>a.py</code> should print "bar". How can this be accomplished without changing <code>b.py</code>?</p>
<h2>What I've Tried so Far</h2>
<h4>A. Patching</h4>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import patch
with patch("b.foo"):
import b
b.printlocal()
</code></pre>
<p>Result: <code>NameError: name 'foo' is not defined</code></p>
<p>Thoughts: I think this doesn't work because the module is patched <em>after</em> the import completes.</p>
<h4>B. Namespace manipulation</h4>
<pre class="lang-py prettyprint-override"><code># a.py
import sys
module_name = "b"
module_globals = sys.modules[module_name].__dict__
module_globals["foo"] = "bar"
import b
b.printlocal()
</code></pre>
<p>Result: <code>KeyError: 'b'</code></p>
<p>Thoughts: This fails because the module cannot be modified before it's available in <code>sys.modules</code>. I haven't tried it directly, but I think that even if the module were created manually with those locals it wouldn't work, since a module won't be imported a second time.</p>
<h4>C. Using the locals argument of <code>__import__</code></h4>
<pre class="lang-py prettyprint-override"><code># a.py
import importlib
b = importlib.__import__(
"b",
locals={"__builtins__": __builtins__, "foo": "bar"},
# I also tried using globals here with same error
)
b.printlocal()
</code></pre>
<p>Result: <code>NameError: name 'foo' is not defined</code></p>
<p>Thoughts: I think this fails because the globals and locals arguments of <code>__import__</code> are only used for resolving the import itself. Documentation on this function is sparse.</p>
<h2>Motivation: Why attempt this "unpythonic" monstrosity?</h2>
<p>Databricks notebooks have <code>spark</code> and <code>dbutils</code> variables at the top level, so there are hundreds (just in my repos) of Python files that cannot be imported, making unit testing difficult. I would love to be able to initialize these as MagicMocks while importing, to make testing possible and to avoid side effects.</p>
<p>In the example above, <code>foo</code> is a stand-in for <code>spark</code> and <code>dbutils</code>. <code>b.py</code> is a notebook which contains some spark code with side effects and also functions which should be tested. <code>printlocal()</code> is a stand-in for a function to be tested. Here's a quick example:</p>
<pre class="lang-py prettyprint-override"><code># Databricks notebook source
from pyspark.sql.functions import to_timestamp, concat, col
curr_date = dbutils.widgets.get("CurrentDate") # will fail here if imported directly "dbutils not defined"
env = dbutils.widgets.get("Environment")
def dateStringToTimestampExpression(date_str): # want to unittest this in a.py
    return to_timestamp(concat(col(date_str)), "yyyy-MM-dd")
df = spark.table.read(f"{env}.table_a")
df = df.withColumn("updated_on", dateStringToTimestampExpression(curr_date))
df.write.saveAsTable("table_a_enhanced") # undesirable side-effect
</code></pre>
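<p>One direction I'm now experimenting with, based on the fact that module-level name lookups fall back to builtins, is injecting the names into <code>builtins</code> before the import. I'm not sure whether this counts as "sane":</p>
<pre class="lang-py prettyprint-override"><code># a.py
import builtins

# module-level name lookups that miss a module's globals fall back to builtins,
# so b.py sees `foo` even though it never defines it
builtins.foo = "bar"

import b
b.printlocal()  # prints "bar"
</code></pre>
<p>In my real case I would presumably set <code>builtins.spark</code> and <code>builtins.dbutils</code> to <code>MagicMock()</code> instances and remove them again after the import, but I haven't verified that end to end.</p>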
|
<python><python-3.x><databricks><python-unittest><python-unittest.mock>
|
2024-12-04 21:40:40
| 1
| 589
|
Brett S
|
79,252,785
| 9,492,730
|
Troubleshooting consuming a Python module in an Azure Function
|
<p>I have the following code structure in my Python (3.12.3) application:</p>
<pre><code>**ROOT**
-- .venv
-- azure_functions
---- function_app.py
---- host.json
---- local.settings.json
---- requirements.json
-- src
---- __init__.py
---- main.py
-- tests
---- test_action.py
-- .gitignore
-- requirements.txt
-- setup.py
</code></pre>
<p>where:</p>
<ul>
<li><code>venv</code> - virtual env directory that has the <code>requirements.json</code> deps from the <code>root</code> and <code>azure_functions</code> folders installed</li>
<li><code>azure_functions</code> - the folder that has files related to azure function (http triggers)</li>
<li><code>function_app.py</code> - a problematic http trigger that can't call <code>llm_call</code> function from <code>main.py</code></li>
</ul>
<p><strong>For the record</strong>, the trigger inside the azure_functions directory can be deployed and called successfully if it only returns a printed string, so the Azure Function related configuration is correct.</p>
<ul>
<li><p><code>azure_functions/requirements.json</code> - consists of the single line <code>azure-functions</code>.</p>
</li>
<li><p><code>src/__init__.py</code> (empty file) / <code>src/main.py</code> - the module and its definition; the methods (like <code>llm_call</code>) inside <code>main.py</code> can be called successfully from the <code>tests</code> folder, but not from the <code>azure_functions</code> directory</p>
</li>
<li><p><code>requirements.txt</code> - deps related to main.py files</p>
</li>
<li><p><code>tests</code> - the folder with tests, <code>test_action.py</code> can successfully call methods from <code>main.py</code> just with simple configuration:</p>
<pre><code>from src.main import llm_call
</code></pre>
</li>
<li><p><code>setup.py</code>:</p>
<pre><code># a simple package/module related configuration
from setuptools import setup, find_packages
setup(name="main", version="1.0", packages=find_packages())
</code></pre>
</li>
</ul>
<p><strong>The issue</strong> I have is that when I consume methods from the <code>main</code> package in <code>function_app.py</code> via:</p>
<pre><code> from src.main import llm_call # just with this single new line, no actual usage of this below
</code></pre>
<p>I see no issues in VS Code (no highlighted lines), but when I deploy the application to Azure, the deployment seems to fail (even though I see no errors in the log other than the line <code>AM appname-func: No HTTP triggers found.</code> in the output). I'm deploying it from VS Code and see that my function disappears right after this from the Azure Resources UI (<code>Function App</code> => <code>Functions</code>):
<a href="https://i.sstatic.net/mdPWaWUD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdPWaWUD.png" alt="enter image description here" /></a> and is no longer visible in the Azure UI.</p>
<p>As soon as I remove the line <code>from src.main import llm_call</code> from the trigger and deploy it one more time, all the logic works again and I can see the deployed function as in the screenshot above. It looks like the issue is how I consume methods from the <code>main.py</code> module in <code>function_app.py</code>, but I'm not sure how to troubleshoot it.
Any help would be appreciated.</p>
<p><strong>UPDATE:</strong>
main.py:</p>
<pre><code>from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
ChatPromptTemplate
)
from typing import Callable
from langchain.chains import LLMChain
from langchain.schema import BaseOutputParser
import os
model_to_use="gpt-4o-mini"
class CustomOutputParser(BaseOutputParser):
def parse(self, text: str):
return text
def llm_call(model, system_template, human_template, param_provider: Callable[[], dict], custom_output_parser) -> str:
try:
api_key = os.environ['OPENAI_API_KEY']
temperature=0.7
max_tokens=2048
chat_model = ChatOpenAI(model=model, api_key=api_key, temperature=temperature, max_tokens=max_tokens)
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(
llm=chat_model,
prompt=chat_prompt,
output_parser=custom_output_parser
)
params = param_provider()
return chain.run(**params)
except Exception as e:
print(f"An error occurred while calling the LLM: {e}")
raise e
</code></pre>
<p>function code:</p>
<pre><code>import azure.functions as func
import logging
from src.main import llm_call
app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)
@app.route(route="llm_httptrigger")
def llmhttptrigger(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request!')
</code></pre>
<p>test code:</p>
<pre><code>import unittest
from src.main import llm_call
class TestCallLLMFunction(unittest.TestCase):
def test_call_llm(self):
category = "category1"
..
</code></pre>
<p><strong>UPDATE2:</strong>
Thanks @PravallikaKV for the suggestion about moving the function-related code into the root folder; unfortunately it didn't help. I still don't see a function in the portal and still see the following in the deploy output:</p>
<blockquote>
<p>5:26:37 PM appname-func: No HTTP triggers found.</p>
</blockquote>
<p><em>For the record1</em>, it looks like the problem is not about the folder structure, since this structure was generated by the VS Code wizard, where I chose the <code>azure_functions</code> directory as the place where the function code should be saved. And if I remove the calls to functions taken from <code>main.py</code>, it works well and I can call this ("hello world" like) HTTP trigger via a web API request, i.e. it's deployed successfully in this case.</p>
<p><em>For the record2</em>, this code I use to create <code>venv</code> folder:</p>
<pre><code>1. python -m venv .venv
2. .venv\Scripts\activate
2.1. Configure python interpreter path from .venv in VS Code
3. pip.exe install -r .\requirements.txt
4. pip.exe install -r .\azure_functions\requirements.txt
5. pip install -e .
</code></pre>
<p>I feel that some of the <code>venv</code> configuration is not visible on the Azure Function side (not sure how it should be configured), but I can't really see how to proceed here since no errors are raised during the deployment.</p>
<p><strong>UPDATE3:</strong></p>
<p>If I start creating an Azure Function from scratch, one of the wizard steps is to configure the virtual environment:
<a href="https://i.sstatic.net/Id9czTWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Id9czTWk.png" alt="enter image description here" /></a>
Previously I skipped this part; now I'm trying to configure it and I see the following error:
<a href="https://i.sstatic.net/ribQfxkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ribQfxkZ.png" alt="enter image description here" /></a>
<code>3.12.3</code> is the Python version I use on my machine and in the venv (providing the absolute path to the Python executable in the <code>.venv</code> folder gives the same error); I'm not sure where this error comes from.</p>
<p><strong>UPDATE4:</strong>
Apparently Python 3.12, which I used before, is not supported by Azure. Switching to 3.11 resolves the points in <strong>UPDATE3</strong>, but it didn't help with the actual problem either.</p>
|
<python><azure><azure-functions><python-module>
|
2024-12-04 21:29:04
| 1
| 4,783
|
dododo
|
79,252,670
| 13,413,858
|
How is memory handled for intermediate values in Python?
|
<p>For example, if you need to construct a new object from two lists, it seems to be irrelevant whether you concatenate them and then iterate, or chain one list after the other, even though intuitively concatenation should allocate extra memory for the temporary value.</p>
<pre class="lang-py prettyprint-override"><code>from itertools import pairwise, chain
from random import choice
from string import ascii_lowercase
import tracemalloc
def measure_allocation():
s = [choice(ascii_lowercase) for _ in range(1_000_000)]
tracemalloc.start()
# HERE
res1 = tuple(['^']+s)
snapshot1 = tracemalloc.take_snapshot()
top_stats = snapshot1.statistics('lineno')
for stat in top_stats[:10]:
print(stat)
tracemalloc.stop()
print('*'*80)
tracemalloc.start()
# AND HERE
res2 = tuple(chain(['^'],s))
snapshot2 = tracemalloc.take_snapshot()
top_stats = snapshot2.statistics('lineno')
for stat in top_stats[:10]:
print(stat)
tracemalloc.stop()
if __name__ == "__main__":
import sys
print(sys.version)
print('*'*80)
measure_allocation()
</code></pre>
<p>Result:</p>
<pre><code>3.12.6 (main, Sep 6 2024, 19:03:47) [GCC 13.3.0]
********************************************************************************
/tmp/tmp.MbQI2VsYCZ/foo.py:13: size=7813 KiB, count=1, average=7813 KiB
********************************************************************************
/tmp/tmp.MbQI2VsYCZ/foo.py:24: size=7813 KiB, count=1, average=7813 KiB
</code></pre>
<p>As you can see, both values are identical. Perhaps I am testing it incorrectly?</p>
<p>Just to clarify why this is unexpected for me: the way I see it, for <code>res1</code> Python would:</p>
<ul>
<li>Evaluate the inner expression first (<code>["^"] + s</code>)</li>
<li>Make a new list in memory, which would be the same size as <code>s</code> since list sizes are increased in growing chunks.</li>
<li>At this point we have both <code>s</code> and the temp list in memory, so memory usage should be doubled</li>
<li>Now we iterate and construct the tuple, <code>temp</code> gets discarded.</li>
</ul>
<p>The results, however, suggest that we only had to allocate memory for the tuple, not for the intermediate list.</p>
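<p>In case it matters, my follow-up test was to look at the peak traced memory instead of a snapshot taken after the statement, on the assumption that by snapshot time the temporary list has already been freed:</p>
<pre class="lang-py prettyprint-override"><code>import tracemalloc
from itertools import chain
from random import choice
from string import ascii_lowercase

s = [choice(ascii_lowercase) for _ in range(1_000_000)]

for label, make in [("concat", lambda: tuple(['^'] + s)),
                    ("chain", lambda: tuple(chain(['^'], s)))]:
    tracemalloc.start()
    res = make()
    current, peak = tracemalloc.get_traced_memory()  # peak should include freed temporaries
    tracemalloc.stop()
    print(label, f"current={current / 2**20:.1f} MiB", f"peak={peak / 2**20:.1f} MiB")
</code></pre>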
|
<python><memory>
|
2024-12-04 20:40:26
| 1
| 494
|
Mathias Sven
|
79,252,655
| 2,326,896
|
Breakpoint in a callback function from ipywidgets in a Jupyter Notebook not breaking in VS Code
|
<p>I created a Jupyter Notebook cell with this code</p>
<pre><code>from ipywidgets import widgets
def test_fun(button):
for i in range(10): # breakpoint in this line
print(i)
# Button to split in 12 months
test_button = widgets.Button(description="Click Me!")
test_button.on_click(test_fun)
display(test_button)
</code></pre>
<p>Then I set a breakpoint on the line <code>for i in range(10)</code>. I expected the debugger to stop there when I clicked <code>test_button</code>, but that didn't happen. You can see in the screenshot that the loop runs uninterrupted.</p>
<p><a href="https://i.sstatic.net/3GVxZQOl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GVxZQOl.png" alt="VsCode screenshot" /></a></p>
<p>I tried this in JupyterLab without a problem. I also tried the <code>"jupyter.debugJustMyCode": false</code> option, although it doesn't seem applicable to this case.</p>
<p>Also, breakpoints outside the <code>test_fun</code> work as expected.</p>
<p><strong>Versions:</strong></p>
<ul>
<li>Visual Studio Code: 1.95.3</li>
<li>jupyter_server: 2.14.2</li>
<li>ipywidgets: 8.1.1</li>
</ul>
|
<python><visual-studio-code><debugging><jupyter-notebook><ipywidgets>
|
2024-12-04 20:35:19
| 0
| 891
|
Fernando César
|
79,252,519
| 16,606,223
|
PyTorch > TorchScript tensor.view() alternative?
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>@torch.jit.script
def my_function(t: torch.Tensor) -> torch.Tensor:
if t.dtype != torch.bfloat16:
raise ValueError("Input tensor must be of dtype torch.bfloat16")
int_type = torch.int16
t_bits = t.view(int_type)  # this line raises the error
</code></pre>
<p>That throws:</p>
<pre><code>RuntimeError: shape '[2]' is invalid for input of size 1
</code></pre>
<p>Because, given the documentation:</p>
<pre><code>.. warning::
This overload is not supported by TorchScript, and using it in a
Torchscript program will cause undefined behavior.
</code></pre>
<p>So the question is: what's the alternative to use?</p>
|
<python><pytorch><torchscript>
|
2024-12-04 19:37:03
| 1
| 1,631
|
Yahor Barkouski
|
79,252,493
| 12,466,687
|
How to create geom_segment() plot on Date x-axis using plotnine in python?
|
<p>I'm not able to figure out how to use <code>geom_segment()</code> on a <code>Date</code> <code>x-axis</code> in <code>python</code> using <code>plotnine</code>. I have tried some code but I'm getting errors:</p>
<pre><code>import pandas as pd
from plotnine import ggplot, aes, geom_segment, theme, element_text
from datetime import date
# this is a sample dataset (created using ai, feel free to tweak or replace if needed)
data = pd.DataFrame({
'date': pd.date_range('2022-01-01', periods=12, freq="ME"),
'category': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'],
'value': [10, 20, 30, 15, 25, 35, 20, 30, 40, 25, 35, 45]
})
# Create the plot
(ggplot(data, aes(x='date', y='category', xend='date', yend='value')) +
geom_segment(
size=8.5,
color="#edece3") +
theme(axis_text_x=element_text(angle=45, hjust=1)))
</code></pre>
<pre><code>(ggplot(data, aes(x='date', y='category', xend='date', yend='category')) +
geom_segment(
size=8.5,
color="#edece3") +
theme(axis_text_x=element_text(angle=45, hjust=1)))
</code></pre>
<p>Even when I try to specify the date manually, it doesn't work:</p>
<pre><code>ggplot(data, aes(x='date', y='category', xend=date(2022,12,1), yend='value'))
</code></pre>
<pre><code>ggplot(data, aes(x='date', y='category', xend=date(2022,12,1), yend='category'))
</code></pre>
<p>All I am trying to do is create a <code>#edece3</code>-colored segment that extends from the start date to the end date for each category.</p>
<p>I'm taking the image below as an example/inspiration and trying to replicate part of it, but I'm stuck at the start:</p>
<p><a href="https://i.sstatic.net/HhmN7vOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HhmN7vOy.png" alt="enter image description here" /></a></p>
|
<python><ggplot2><plotnine><geom-segment>
|
2024-12-04 19:26:12
| 1
| 2,357
|
ViSa
|
79,252,320
| 5,763,413
|
Pip fails with ReadTimeoutError on internal Artifactory
|
<p>TLDR: Pip/mamba/conda keep failing with a timeout issue on arbitrary packages. <strong>Pip is set up to pull from an internal artifactory.</strong></p>
<p>The contents of pip.conf</p>
<pre class="lang-none prettyprint-override"><code>[global]
index-url = https://user:password@artifactory.myURL.com/artifactory/api/pypi/primary-pypi/simple
trusted-host = artifactory.myURL.com
ssl_verify: /usr/local/share/ca_certificates/my_cert.crt
[search]
index = https://user:password@artifactory.myURL.com/artifactory/api/pypi/primary-pypi/simple
cache-dir = ''
</code></pre>
<p>I cannot get it to fail consistently on a certain package. However, the main offenders seem to be larger packages like pytorch, numpy, scipy, etc.</p>
<p>If I run the pip install command multiple times in a row, it will fail on a different package each time.</p>
<p>I watch the network traffic in nload/btm/Task Manager. Every time it does the same thing: traffic stays strong for several packages, then on a random package it drops to zero about halfway to two-thirds of the way through the download, and then fails with:</p>
<pre><code>14.31 Collecting scipy (from spacenet_processing==0.0.1)
14.35 Downloading https://artifactory.myURL.com/artifactory/api/pypi/primary-pypi/path/to/scipy-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (41.2 MB)
31.62 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 31.7/41.2 MB 14.1 MB/s eta 0:00:01
31.81 TimeoutError: The read operation timed out
###### Long portion of stack trace removed but it's the standard stack trace from a timeout on pip #####
31.81 pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='artifactory.myURL.com', port=443): Read timed out.
------
</code></pre>
<p>This is <em>not</em> an internet speed issue. I can take the failed download file and curl it with no issues immediately after. Adjusting the timeout does not help either, it just hangs for the amount of time I set the timeout to.</p>
<p>I am trying to track down what the issue could be in the internally hosted Artifactory.</p>
|
<python><pip><artifactory><connection-timeout><timeoutexception>
|
2024-12-04 18:16:16
| 1
| 2,125
|
blackbrandt
|
79,252,306
| 8,163,071
|
Testing passport.js login with selenium in CI fashion
|
<p>I have a set of existing Selenium (Selenium WebDriver 4, Python) tests for a website that will henceforth be authenticated with <a href="https://www.passportjs.org/packages/passport-google-oauth20/" rel="nofollow noreferrer">passport.js Google OAuth2</a>. I want to continue automating the end-to-end workflow starting with the browser launch, so I need to get through the Passport auth in these tests.</p>
<p>On my local machine, I have found that manually authenticating in Chrome and then telling ChromeDriver to use my Chrome profile lets the tests log in:</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('--user-data-dir=<abs path to chrome user data>')
options.add_argument('--profile-name=Default')
driver = webdriver.Chrome(options=options)
</code></pre>
<p>However in CI no such directory is available.</p>
<p>Is there an existing solution to authenticate in a CI pipeline? I am currently using CircleCI but open to changing that.</p>
|
<python><authentication><selenium-webdriver><oauth-2.0><passport.js>
|
2024-12-04 18:11:03
| 0
| 3,719
|
C. Peck
|
79,252,301
| 1,390,272
|
Regex match only when there is no printable characters anywhere behind it
|
<p>As an example, I want to match the letter A if there are zero or more whitespace characters before it and no printable characters. I want a solution that will broadly work in the default Python, Java, Golang, and other regex engines.</p>
<p>Match this:</p>
<pre><code> A
</code></pre>
<p>Not this:</p>
<pre><code> x A
</code></pre>
<p>I can do this in PHP with the syntax below, but <code>\K</code> is not supported in many engines (default Python, Java, Golang):</p>
<pre><code>^\s*\KA(?=\s*$)
</code></pre>
<p>I can do this:</p>
<pre><code>^(?:\s*)A
</code></pre>
<p>but it will match on all the spaces before the A as well, which is not what I want.</p>
<p>I feel like the solution should be pretty obvious, but I'm blanking. I know in a <em>lookahead</em> I can easily do this, for example, if I wanted to match 'A' and not any printable characters after, then I would simply do:</p>
<pre><code>A(?=\s*$)
</code></pre>
<p>to match the 'A' here:</p>
<pre><code>A
</code></pre>
<p>and not here:</p>
<pre><code>A x
</code></pre>
<p>But I'm not sure how to do this with a lookbehind, because in most engines a <code>*</code> quantifier inside a lookbehind makes it non-fixed-width, which is an error. <strong>I understand that in JavaScript this is valid syntax.</strong></p>
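<p>The closest portable workaround I have so far is to drop <code>\K</code> and take a capture group instead, which I believe behaves the same in Python, Java and Go, though I'd still prefer a pure lookaround solution:</p>
<pre class="lang-py prettyprint-override"><code>import re

pattern = re.compile(r"^\s*(A)\s*$")  # group 1 is just the 'A'; the leading spaces stay outside it

for line in ["   A", " x A", "A"]:
    m = pattern.match(line)
    print(repr(line), "->", m.group(1) if m else None)
# '   A' -> 'A', ' x A' -> None, 'A' -> 'A'
</code></pre>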
|
<python><java><regex><go>
|
2024-12-04 18:09:01
| 1
| 1,673
|
Goku
|
79,252,297
| 46,521
|
Why does Pandas .floor() throw AmbiguousTimeError for this Timestamp that's already unambiguous?
|
<p>The following fails:</p>
<pre><code>def round_in_tz(epoch_sec: int, tz_name: str, freq_spec: str):
"""Round the given epoch timestamp to the nearest pandas time frequency spec in the given timezone."""
return pd.to_datetime(epoch_sec, unit='s', utc=True).tz_convert(tz_name).floor(freq_spec)
print(round_in_tz(1730610000, "US/Eastern", freq_spec="86400s")) # works OK
# raises pytz.exceptions.AmbiguousTimeError: Cannot infer dst time from 2024-11-03 01:00:00, try using the 'ambiguous' argument
print(round_in_tz(1730610000, "US/Eastern", freq_spec="300s"))
</code></pre>
<ol>
<li>Is there truly any ambiguity here? I'm aware of DST, but using epoch times avoids any ambiguity, right? Is this a bug in Pandas?</li>
<li>What's the best way to rewrite it? I hacked together the following:</li>
</ol>
<pre><code>def round_in_tz(epoch_sec: int, tz_name: str, freq_spec: str):
"""Round the given epoch timestamp to the nearest pandas time frequency spec in the given timezone."""
t = pd.to_datetime(epoch_sec, unit='s', utc=True).tz_convert(tz_name)
is_dst = bool(t.timetuple().tm_isdst)
return t.floor(freq_spec, ambiguous=is_dst)
</code></pre>
<p>NOTE: We cannot use <code>.floor(freq_spec).tz_convert(tz_name)</code> because it changes the behavior when freq_spec="86400s". (I want Midnight US/Eastern, not Midnight UTC.)</p>
<p>I've found several previous questions on DST ambiguity, but they all involve truly ambiguous situations (such as parsing a timestamp from a string during the doubly-occurring DST hour.) In contrast, this question uses the epoch timestamp 1730610000, which I would not expect to be ambiguous.</p>
|
<python><pandas><dst>
|
2024-12-04 18:07:53
| 1
| 6,651
|
tba
|
79,252,248
| 2,270,043
|
How to unit test AWS Glue scripts using pytest, Dynamic frames and Data frames?
|
<p>I want to unit test my AWS Glue scripts. I am using Python and PySpark. I want to unit test functions that utilise DynamicFrames and DataFrames. I don't need to interface with AWS or data on AWS; I essentially want to assert that the content of the frames is correct for these small testable functions.</p>
<p>I was hoping to set up a pytest fixture that would be a Spark session with a GlueContext. I was hoping I could do this using just Python dependencies, e.g. pyspark and awsglue-dev, but I am running into issues.</p>
<p>My conftest fixture looks like this:</p>
<pre><code>@pytest.fixture(scope="session")
def spark_context():
spark_ctx = SparkContext.getOrCreate()
spark_ctx.setLogLevel("WARN")
return spark_ctx
@pytest.fixture
def glue_context(spark_context):
glue_ctx = GlueContext(spark_context)
return glue_ctx
</code></pre>
<p>The errors I am getting look like this:</p>
<pre><code>test setup failed
spark_context = <SparkContext master=local[*] appName=pyspark-shell>
@pytest.fixture
def glue_context(spark_context):
> glue_ctx = GlueContext(spark_context)
if min_partitions is None:
> return self._jvm.GlueContext(self._jsc.sc())
E TypeError: 'JavaPackage' object is not callable
</code></pre>
<p>Can anyone please advise on what the issue is here? Does my AWS Glue local dev/test setup need to be different or more than just installing the Glue and PySpark dependencies via PDM/PyPi?</p>
<p>Thanks,</p>
<p>Sam</p>
|
<python><pyspark><pytest><aws-glue>
|
2024-12-04 17:54:31
| 0
| 781
|
SamBrick
|
79,252,245
| 2,893,712
|
Pandas Filter and Sum but Apply to All Rows
|
<p>I have a dataframe that has user ID, code, and value.</p>
<pre><code>user code value
0001 P 10
0001 P 20
0001 N 10
0002 N 40
0002 N 30
0003 P 10
</code></pre>
<p>I am trying to add a new column that groups by user ID, filters for code = <code>P</code>, and sums the value. However, I want this value to be applied to every row.</p>
<p>So for the example above, the output I'm looking for would be:</p>
<pre><code>user code value Sum_of_P
0001 P 10 30
0001 P 20 30
0001 N 10 30
0002 N 40 0
0002 N 30 0
0003 P 10 10
</code></pre>
<p>I tried <code>df['Sum_of_P'] = df.loc[df['code'] == 'P', 'value'].groupby(df['user']).transform('sum')</code>, but this applies only to the rows with code = <code>P</code>. Is there a way to have this apply to all rows?</p>
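<p>For reference, one variant I'm experimenting with masks the non-<code>P</code> values to 0 before the grouped transform; it seems to produce the desired output on the toy data above, but I'm not sure it's idiomatic:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({
    "user": ["0001", "0001", "0001", "0002", "0002", "0003"],
    "code": ["P", "P", "N", "N", "N", "P"],
    "value": [10, 20, 10, 40, 30, 10],
})

# zero out the non-P rows, then sum per user and broadcast back onto every row
df["Sum_of_P"] = (df["value"].where(df["code"].eq("P"), 0)
                             .groupby(df["user"])
                             .transform("sum"))
print(df)
</code></pre>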
|
<python><pandas>
|
2024-12-04 17:53:18
| 2
| 8,806
|
Bijan
|
79,252,195
| 5,167,277
|
Vercel --prebuilt Issue with FastAPI and Github Actions
|
<p>I'm trying to deploy a FastAPI app to Vercel using GitHub Actions. I can successfully deploy it without the <code>--prebuilt</code> flag. However, it seems to me that the <code>--prebuilt</code> flag prevents the whole source code from being shared/deployed to Vercel, according to the docs <a href="https://vercel.com/docs/cli/build" rel="nofollow noreferrer">https://vercel.com/docs/cli/build</a>. The issue is that, apparently, Vercel can't detect the <code>.vercel/output/builds.json</code> file even though it looks like the <code>vercel build</code> command does create it.</p>
<p><strong>Command</strong></p>
<p><code>vercel deploy --prebuilt --token=${{ secrets.VERCEL_TOKEN }}</code></p>
<p><strong>Error</strong></p>
<p><code>Error: ENOENT: no such file or directory, lstat '/vercel/path0/.vercel/output/builds.json'</code></p>
<p><strong>Project Setup</strong></p>
<pre><code>api
|-- routes
| |-- middleware.py
| |-- auth_route.py
| |-- ...
|-- main.py
tests
|-- unit_tests
| |-- test_main.py
| |-- test_auth_route.py
| |-- ...
requirements.txt
README.md
vercel.json
</code></pre>
<p><code>vercel.json</code></p>
<pre><code>{
"routes": [
{
"src": "/(.*)",
"dest": "api/main.py"
}
]
}
</code></pre>
<p><code>ci.yml</code></p>
<pre><code>name: Run Tests and Deploy to Vercel
on:
push:
branches:
- main
jobs:
run_tests:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11.5'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run tests
run: |
pytest
# deploy to vercel
- name: Install Vercel CLI
run: npm install --global vercel@latest
- name: Pull Vercel Environment Information
run: vercel pull --yes --environment=preview --token=${{ secrets.VERCEL_TOKEN }}
- name: Build Project Artifacts
run: vercel build --token=${{ secrets.VERCEL_TOKEN }}
- name: Deploy Project Artifacts to Vercel
run: vercel deploy --prebuilt --token=${{ secrets.VERCEL_TOKEN }}
</code></pre>
<p><strong>Full Log</strong></p>
<pre><code>Run vercel build --token=***
vercel build --token=***
shell: /usr/bin/bash -e {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.11.5/x64
PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.11.5/x64/lib/pkgconfig
Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.11.5/x64/lib
Vercel CLI 39.1.2
WARN! Due to `builds` existing in your configuration file, the Build and Development Settings defined in your Project Settings will not apply. Learn More: https://vercel.link/unused-build-settings
Installing required dependencies...
Build Completed in .vercel/output [4s]
Run vercel deploy --prebuilt --token=***
vercel deploy --prebuilt --token=***
shell: /usr/bin/bash -e {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.11.5/x64
PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.11.5/x64/lib/pkgconfig
Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.11.5/x64/lib
Vercel CLI 39.1.2
Retrieving project…
Deploying redhorseacademy-projects/fastapi_demo
Uploading [--------------------] (0.0B/8.4MB)
(node:1865) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)
Uploading [=====---------------] (2.1MB/8.4MB)
Uploading [==========----------] (4.2MB/8.4MB)
Uploading [===============-----] (6.3MB/8.4MB)
Uploading [====================] (8.4MB/8.4MB)
Inspect: https://vercel.com/redhorseacademy-projects/fastapi_demo/6e2Q9vtYnY25YHu7rtBXviBYiQUb [10s]
Preview: https://fastapidemo-juxg522yb-redhorseacademy-projects.vercel.app [10s]
Queued
Building
Error: ENOENT: no such file or directory, lstat '/vercel/path0/.vercel/output/builds.json'
https://fastapidemo-juxg522yb-redhorseacademy-projects.vercel.app
Error: Process completed with exit code 1.
</code></pre>
<p>I tried building/deploying locally using Vercel CLI and it turns out I also run into the same issue despite the file <code>.vercel/output/builds.json</code> being in the local directory.</p>
<pre><code>(fastapi_env) (base) redhorseacademyPC:fastapi_demo redhorseacademy$ vercel build
Vercel CLI 39.1.2
WARN! Due to `builds` existing in your configuration file, the Build and Development Settings defined in your Project Settings will not apply. Learn More: https://vercel.link/unused-build-settings
Installing required dependencies...
✅ Build Completed in .vercel/output [9s]
(fastapi_env) (base) redhorseacademyPC:fastapi_demo redhorseacademy$ vercel deploy --prebuilt --archive=tgz
Vercel CLI 39.1.2
🔍 Inspect: https://vercel.com/redhorseacademy-projects/fastapi_demo/D7sF6esFWwpRLEbhCRdibYFNKyCn [6m]
✅ Preview: https://fastapidemo-38s604pe3-redhorseacademy-projects.vercel.app [6m]
Error: ENOENT: no such file or directory, lstat '/vercel/path0/.vercel/output/builds.json'
</code></pre>
<p><code>.vercel/output/builds.json</code></p>
<pre><code>{
"//": "This file was generated by the `vercel build` command. It is not part of the Build Output API.",
"target": "preview",
"argv": [
"/usr/local/bin/node",
"/usr/local/bin/vercel",
"build"
],
"builds": [
{
"require": "@vercel/python",
"requirePath": "/usr/local/lib/node_modules/vercel/node_modules/@vercel/python/dist/index.js",
"apiVersion": 3,
"src": "app/main.py",
"use": "@vercel/python"
}
]
}
</code></pre>
<p>Lastly, I've made the changes suggested here <a href="https://github.com/vercel/vercel/issues/12683" rel="nofollow noreferrer">https://github.com/vercel/vercel/issues/12683</a> as well as checked this discussion <a href="https://github.com/orgs/vercel/discussions/7092" rel="nofollow noreferrer">https://github.com/orgs/vercel/discussions/7092</a> (which is the exact same issue, but I couldn't quite understand the solution beyond removing <code>--prebuilt</code> in the yaml file), all to no avail.</p>
<p>Any help is appreciated :)</p>
|
<python><github-actions><fastapi><vercel>
|
2024-12-04 17:36:27
| 1
| 1,163
|
gmartinsnull
|
79,251,914
| 16,389,095
|
Convert list of integers into a specific string format
|
<p>I'm dealing with a list of integers that represent the pages on which a keyword was found. I would like to build a block of code that converts this list into a string with a specific format following some simple rules: single integers are converted to strings; runs of consecutive integers are treated as intervals (left bound, hyphen, right bound) and then converted to strings; the conversions are comma separated. Here is an example of the input and desired output:</p>
<pre><code>input = [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139]
desired_output = "4-10, 22, 23, 26, 62, 63, 113, 137-139"
</code></pre>
<p>I wrote this code:</p>
<pre><code>res = [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139]
if len(res)>0:
resStr = str(res[0])
isConsecutive = False
for index in range(1, len(res)):
diff = res[index] - res[index-1]
if diff == 1:
isConsecutive = True
if index == len(res)-1:
resStr = resStr + "-" + str(res[index])
continue
else:
if isConsecutive:
isConsecutive = False
resStr = resStr + "-" + str(res[index-1]) + ", " + str(res[index])
else:
resStr = resStr + ", " + str(res[index])
print(res)
print(resStr)
</code></pre>
<p>This code gives me as a result:</p>
<pre><code>4-10, 22-23, 26, 62-63, 113, 137-139
</code></pre>
<p>It doesn't recognize that a run of only two consecutive numbers should not be treated as an interval: "22, 23" rather than "22-23", and likewise "62, 63" rather than "62-63".
How can this be solved? Is there a simpler or more efficient way to perform the conversion?</p>
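<p>For comparison, I also adapted the classic <code>itertools.groupby</code> trick of grouping by value minus index, special-casing runs of length two; I'm not sure it's the cleanest approach:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import groupby

def to_page_string(pages):
    parts = []
    # value minus position is constant within a run of consecutive integers
    for _, grp in groupby(enumerate(pages), key=lambda t: t[1] - t[0]):
        run = [value for _, value in grp]
        if len(run) > 2:
            parts.append(f"{run[0]}-{run[-1]}")        # longer runs become intervals
        else:
            parts.extend(str(value) for value in run)  # one or two values stay separate
    return ", ".join(parts)

print(to_page_string([4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139]))
# 4-10, 22, 23, 26, 62, 63, 113, 137-139
</code></pre>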
|
<python><string><list><integer>
|
2024-12-04 16:04:06
| 2
| 421
|
eljamba
|
79,251,900
| 4,776,977
|
CUDA out of memory while using Llama3.1-8B for inference
|
<p>I have written a simple Python script that uses the HuggingFace <code>transformers</code> library along with <code>torch</code> to run <code>Llama3.1-8B-instruct</code> purely for inference, after feeding in some long-ish bits of text (about 10k-20k tokens). It runs fine on my laptop, which has a GPU with 12GB RAM, but can also access up to 28GB total (I guess from the main system RAM?)</p>
<p>However, when I run the code on a "Standard NC4as T4 v3" Windows Virtual Machine, with a single Tesla T4 GPU with 16GB RAM, it very quickly throws this error: <code>CUDA out of memory. Tried to allocate XXX GiB. GPU</code></p>
<p>According to my calculations, this code should run fine given the available RAM. Nevertheless, I've tried to make the script more memory efficient:</p>
<ul>
<li>Changing the attention mechanism - by setting the <code>attn_implementation</code> when instantiating the model - first to "sdpa". Then tried to adopt flash attention, but found it impossible to install the package <code>flashattn</code> on Windows</li>
<li>Using <code>xformers</code> to run <code>enable_xformers_memory_efficient_attention</code> - think this is also about the attention mechanism, but couldn't install / run this</li>
<li>Using <code>torch.inference_mode()</code></li>
<li>Setting flags like <code>low_cpu_mem_usage = True</code> during model instantiation.</li>
<li>Explicitly setting <code>torch_dtype</code> to <code>torch.float16</code> or <code>torch.bfloat16</code> during model instantiation.</li>
<li>Using <code>BitsAndBytesConfig</code> to trigger 8-bit and 4-bit quantization.</li>
</ul>
<p>The last step was the only one to have any effect - the model manages to generate one or two responses, but still fails to complete the loop, and still fails on the very first inference. If I've understood the documentation properly, 4-bit quantization should drastically reduce memory requirements and make it almost trivial to do a forward pass with this model size with this GPU.</p>
<p>So I'm suspicious about trying to further optimise the code - the next step apparently being to manually map all the model layers to the GPU and CPU respectively by defining a <code>device_map</code>.</p>
<p>So my questions are:</p>
<ol>
<li>Is this likely to be truly a memory issue, or is it a red herring?</li>
<li>If it's memory, what could I do beyond what I've tried?</li>
<li>If it's not, are there obvious things to check? (I'm suspecting to do with system configuration or package installations...)</li>
</ol>
|
<python><pytorch><huggingface-transformers><llama>
|
2024-12-04 16:00:36
| 0
| 1,768
|
Tom Wagstaff
|
79,251,882
| 18,206,974
|
What operations are thread-safe in Python?
|
<p>I am trying to understand why some particular code snippets behave thread-safe.</p>
<p>There's a <a href="https://docs.python.org/3/faq/library.html#what-kinds-of-global-value-mutation-are-thread-safe" rel="nofollow noreferrer">Python's FAQ page</a> which says what mutations are considered thread safe (though I still don't understand why atomicity here implies thread-safety)</p>
<p>There's also a <a href="https://docs.python.org/3/glossary.html#term-global-interpreter-lock" rel="nofollow noreferrer">glossary page</a> which tells a little bit about GIL and states:</p>
<blockquote>
<p>This [GIL] simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access.</p>
</blockquote>
<p>So my current understanding is the following:</p>
<ol>
<li>The GIL ensures that only one VM (bytecode) instruction, not one Python statement, is executed at any given moment</li>
<li>Some operations on built-in types compile down to exactly one bytecode instruction and can therefore be considered thread-safe (in a given context)</li>
</ol>
<p>But this does not match what I see. The following Python function:</p>
<pre class="lang-py prettyprint-override"><code>def inc(stats):
stats["ok"] = stats["ok"] + 1
</code></pre>
<p>is decompiled into (under <code>Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin</code>)</p>
<pre><code> 0 LOAD_FAST 0 (stats)
2 LOAD_CONST 1 ('ok')
4 BINARY_SUBSCR
6 LOAD_CONST 2 (1)
8 BINARY_ADD
10 LOAD_FAST 0 (stats)
12 LOAD_CONST 1 ('ok')
14 STORE_SUBSCR
16 LOAD_CONST 0 (None)
18 RETURN_VALUE
</code></pre>
<p>where we can clearly see that the operation is broken down into three corresponding bytecode instructions (get, add, set): <code>BINARY_SUBSCR</code>, <code>BINARY_ADD</code>, <code>STORE_SUBSCR</code>.</p>
<p>Despite that, running this function multiple times using <code>concurrent.futures.ThreadPoolExecutor</code> always gives consistent results (i.e. no race conditions)</p>
<p>The same happens for other suspiciously non-thread safe operations like <code>x = x + 1</code> (which also give consistent results)</p>
<p>If my understanding is correct we should expect race conditions occasionally.</p>
<p>I suspect one or more of the following:</p>
<ol>
<li>My understanding is not correct</li>
<li>There's a special treatment in the CPython for dictionary operations</li>
<li>The operation happens to be fast enough so Python thread scheduler unable to preempt thread in the middle of the executing of the statement (not bytecode)</li>
</ol>
<p>UPD: I tried breaking up <code>stats["ok"] = stats["ok"] + 1</code> into two statements through an intermediate variable, inserted short sleeps between the two, and observed data races. So the 3rd intuition is likely right, and <code>stats["ok"] = stats["ok"] + 1</code> is also inherently not thread-safe, as it consists of multiple bytecode instructions.</p>
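<p>For reference, this is essentially that experiment spelled out; the sleep only widens the window in which another thread can interleave between the read and the write:</p>
<pre class="lang-py prettyprint-override"><code>import time
from concurrent.futures import ThreadPoolExecutor

stats = {"ok": 0}

def inc_split(_):
    # the same read-modify-write, split so a thread switch can land in the middle
    current = stats["ok"]
    time.sleep(0.000001)  # widen the window between the read and the write
    stats["ok"] = current + 1

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(inc_split, range(1000)))

print(stats["ok"])  # below 1000 whenever updates are lost
</code></pre>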
|
<python><thread-safety><gil>
|
2024-12-04 15:54:54
| 1
| 336
|
Vladyslav Chaikovskyi
|
79,251,873
| 1,592,427
|
Haystack 2 with Elasticsearch complex document
|
<p>I have Elasticsearch storage with a <code>movies</code> index which is populated with documents like this:</p>
<pre class="lang-json prettyprint-override"><code>{
_id: ObjectId,
title: "Some film",
actors: [ { _id: ObjectId, title: "Actor A" }, { _id: ObjectId, title: "Actor B" } ]
}
</code></pre>
<p>Can I use Haystack to run queries like "Who are the actors in Some film?" or "What films did Actor A play in?"? Haystack 1 had a <code>search_fields</code> argument in <code>ElasticsearchDocumentStore</code>, but it was removed in version 2.
Sorry for the dumb question, I'm very new to this.</p>
|
<python><elasticsearch><rag><haystack>
|
2024-12-04 15:53:06
| 1
| 414
|
Andrew
|
79,251,582
| 4,692,635
|
RISE in Jupyter truncates the height of fenced syntax-highlighted code
|
<p>I added the following code into a jupyter notebook cell:</p>
<pre><code># this is just a demo
# line 2
# line 3
# line 4
def fib(n):
if n<1:
return 1
else:
return fib(n-1)+fib(n-2)
if __name__ == '__main__':
print(fib(5))
</code></pre>
<p>The code was rendered correctly:</p>
<p><a href="https://i.sstatic.net/Xe7xSNcg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xe7xSNcg.jpg" alt="correct code block in a cell" /></a></p>
<p>However, when I launched RISE to present my notebook as a slideshow, the code was truncated and only a few lines were shown:</p>
<p><a href="https://i.sstatic.net/fZQ2Nk6t.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fZQ2Nk6t.jpg" alt="incorrect code representation" /></a></p>
<p>Is there a way to make the mentioned code visible in RISE?</p>
<hr />
<p>P.S. I am using rise-5.7.1 with jupyter 1.0.0 and jupyter_core-4.6.3 under Python 3.6.10 | Anaconda custom (64-bit) on 64-bit Windows. In the presentation view, only 10 lines of code are ever displayed; if I want to see more lines, I have to use the mouse wheel.</p>
|
<python><jupyter-notebook><slider><jupyter><rise>
|
2024-12-04 14:32:09
| 0
| 567
|
oyster
|
79,251,417
| 3,414,663
|
In Polars, how can you update several columns simultaneously?
|
<p>Suppose we have a Polars frame something like this</p>
<pre><code>lf = pl.LazyFrame([
pl.Series("a", ...),
pl.Series("b", ...),
pl.Series("c", ...),
pl.Series("i", ...)
])
</code></pre>
<p>and a function something like this</p>
<pre><code>def update(a, b, c, i):
s = a + b + c + i
a /= s
b /= s
c /= s
return a, b, c
</code></pre>
<p>that depends on elements of columns <code>a, b, c</code> and also <code>i</code>.</p>
<p>How can we update each row of a frame using the function?</p>
<p>We could use <code>with_columns</code> to update each column independently (as in the sketch below), but how can we do it with the dependency between columns?</p>
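<p>For illustration, the independent case I mean would be something like this (a rough sketch with a toy frame, just scaling each column on its own):</p>
<pre><code>import polars as pl

lf = pl.LazyFrame({"a": [1.0, 2.0], "b": [3.0, 4.0], "c": [5.0, 6.0], "i": [7.0, 8.0]})

# each column is transformed on its own, no dependency between columns
print(lf.with_columns(pl.col("a") / 2, pl.col("b") / 2, pl.col("c") / 2).collect())
</code></pre>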
<p><strong>Edit</strong></p>
<p>In response to comments from @roman let's tighten up the question.</p>
<p>Use this LazyFrame</p>
<pre><code>lf = pl.LazyFrame(
[
pl.Series("a", [1, 2, 3, 4], dtype=pl.Int8),
pl.Series("b", [5, 6, 7, 8], dtype=pl.Int8),
pl.Series("c", [9, 0, 1, 2], dtype=pl.Int8),
pl.Series("i", [3, 4, 5, 6], dtype=pl.Int8),
pl.Series("o", [7, 8, 9, 0], dtype=pl.Int8),
]
)
</code></pre>
<p>We want to update columns a, b & c in a way that depends on column i. Column o should be unaffected. We have a function that takes the values a, b, c & i and returns a, b, c & i, where the first three have been updated but i remains the same as the input. After updating, all columns should have the same dtype as before.</p>
<p>The closest we can get is using an update function like this.</p>
<pre><code>def update(args):
s = args["a"] + args["b"] + args["c"] + args["i"]
args["a"] /= s
args["b"] /= s
args["c"] /= s
return args.values()
</code></pre>
<p>and applying it like this</p>
<pre><code>(
lf.select(
pl.struct(pl.col("a", "b", "c", "i"))
.map_elements(update, return_dtype=pl.List(pl.Float64))
.list.to_struct(fields=["a", "b", "c", "i"])
.alias("result"),
)
.unnest("result")
.collect()
)
</code></pre>
<p>But this has some problems.</p>
<ol>
<li>We have lost column o.</li>
<li>Column i has become Float64</li>
<li>It's pretty ugly.</li>
</ol>
<p>Is there a better way?</p>
|
<python><python-polars>
|
2024-12-04 13:49:42
| 1
| 589
|
user3414663
|
79,251,181
| 7,054,480
|
PyCharm typing warning inconsistency for abstract base classes
|
<p>In the following code, PyCharm is issuing a typing warning for <code>Base.foo</code> ("Expected to return 'int', got no return") but not for <code>Base.bar</code>, which has exactly the same typing hint and return type:</p>
<pre class="lang-py prettyprint-override"><code>import abc
class Base(abc.ABC):
@abc.abstractmethod
def foo(self) -> int:
print('fooing')
pass
@abc.abstractmethod
def bar(self) -> int:
pass
class Derived(Base):
def foo(self) -> int:
return 42
def bar(self) -> int:
return 42
</code></pre>
<p>In practice, neither <code>Base.foo</code> nor <code>Base.bar</code> can be called directly, so I'd expect no typing warnings at all there. If, on the other hand, one were to <code>return super().foo()</code> in <code>Derived.foo</code>, then a typing warning should indeed be issued, but the same should apply to <code>Derived.bar</code>, so I still don't understand the different behavior. Can someone explain, or is this a bug in PyCharm?</p>
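<p>For clarity, the hypothetical variant I mean is this (not my actual code, reusing the <code>Base</code> class defined above):</p>
<pre class="lang-py prettyprint-override"><code>class Derived(Base):
    def foo(self) -> int:
        return super().foo()  # Base.foo has no return, so a warning makes sense here

    def bar(self) -> int:
        return super().bar()  # same situation, so it should warn here as well
</code></pre>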
|
<python><pycharm><python-typing><abc>
|
2024-12-04 12:34:16
| 1
| 332
|
Fidel I. Schaposnik
|
79,250,910
| 4,738,644
|
Adjust images to screen size in tensorboard instead of maximizing
|
<p>I have a grid of images (2 rows x 4 columns) in Tensorboard. Each image is 640 x 480 and I have a sequence of those grids, representing the prediction changes in each epoch.</p>
<p>My problem is that the image is either too small...</p>
<p><a href="https://i.sstatic.net/nAWbOSPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nAWbOSPN.png" alt="enter image description here" /></a></p>
<p>... or too big when I use the "toogle actual size" button...</p>
<p><a href="https://i.sstatic.net/wiKDIc3Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiKDIc3Y.png" alt="enter image description here" /></a></p>
<p>... and each time I change the epoch, the images become tiny again (as shown in image 1).</p>
<p>I could of course create the images with a smaller size, but it's also important for me to keep the size, in case I want to check in detail how the predictions were made. Someone on Stack Overflow in another question <a href="https://stackoverflow.com/questions/50935523/zoom-in-to-tensorboard-graphs-using-chrome">proposed mouse zooming</a>, but it is annoying to do this each time I run TensorBoard. I also saw the option of <a href="https://stackoverflow.com/questions/76492136/show-tensorboard-time-series-for-large-images">downgrading TensorBoard</a>, but I'm not allowed to do that since I don't have the privileges.</p>
<p>I also checked the arguments that tb.SummaryWriter accepts, but there is nothing related to this. In their code there is only the suggestion of using torchvision.utils.make_grid(), which I already do.</p>
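<p>For reference, this is roughly how I log the grid at the moment (a simplified sketch; the tag, shapes and writer path are illustrative):</p>
<pre><code>import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/example")
images = torch.rand(8, 3, 480, 640)                 # 2 rows x 4 columns of 640x480 predictions
grid = torchvision.utils.make_grid(images, nrow=4)  # arrange as a 2x4 grid
writer.add_image("predictions", grid, global_step=0)  # one entry per epoch in the real code
writer.close()
</code></pre>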
<p>Is there an option in TensorBoard (either programmatically or in the page itself) to make the grid adjust to the screen without having to do mouse zooming?</p>
|
<python><pytorch><tensorboard>
|
2024-12-04 11:08:19
| 0
| 421
|
Diego Alejandro Gómez Pardo
|
79,250,681
| 1,826,241
|
Add custom overload to Python method with existing type hints
|
<p>Is it possible to extend existing type hints for a Python package <em>in my application code</em> (i.e., not touching the upstream type hints) by adding custom overload for a method? (See also <a href="https://github.com/dlukes/pandas-apply-set" rel="nofollow noreferrer">repo with minimal reproducible example</a>.)</p>
<p>For instance, take the type hints for <code>pandas.DataFrame.apply</code> defined in <a href="https://github.com/pandas-dev/pandas-stubs/blob/4e21a311794196670835fd9a0151ae13c5f57a9a/pandas-stubs/core/frame.pyi#L1355-L1496" rel="nofollow noreferrer"><code>pandas-stubs/core/frame.pyi</code></a>. As defined, they do not allow for the applied function to return a set, like so:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
"a": list("abc"),
"b": list("def"),
"c": list("aaa"),
}
)
print(df.apply(lambda row: set(x for x in row if not pd.isna(x)), axis=1))
</code></pre>
<p>With <code>pandas-stubs</code> installed, Pyright reports:</p>
<pre><code>/Users/david/repos/eva/pandas-apply-set/reprex.py
/Users/david/repos/eva/pandas-apply-set/reprex.py:10:7 - error: No overloads for "apply" match the provided arguments (reportCallIssue)
/Users/david/repos/eva/pandas-apply-set/reprex.py:10:16 - error: Argument of type "(row: Unknown) -> set[Unknown]" cannot be assigned to parameter "f" of type "(...) -> Series[Any]" in function "apply"
Type "(row: Unknown) -> set[Unknown]" is not assignable to type "(...) -> Series[Any]"
Function return type "set[Unknown]" is incompatible with type "Series[Any]"
"set[Unknown]" is not assignable to "Series[Any]" (reportArgumentType)
2 errors, 0 warnings, 0 informations
</code></pre>
<p>But everything is actually fine at run time -- the script prints out:</p>
<pre><code>0 {a, d}
1 {e, a, b}
2 {c, f, a}
dtype: object
</code></pre>
<p>So this is likely a bug and the type hints are overly restrictive. Analogous bugs have been reported against the type hints for <code>pandas.Series.apply</code>, concerning both <a href="https://github.com/pandas-dev/pandas-stubs/issues/598" rel="nofollow noreferrer"><code>set</code></a> and <a href="https://github.com/pandas-dev/pandas-stubs/issues/871" rel="nofollow noreferrer"><code>frozenset</code></a>, and fixed.</p>
<p>My question is: is there a way I can extend the existing type hints for <code>DataFrame.apply</code> in my own code, adding an <code>@overload</code> that allows returning <code>set</code> from the applied function?</p>
<p>I know I can override the entirety of <code>frame.pyi</code> by creating a <code>typings/pandas-stubs/core/frame.pyi</code> in my own project, based on a copy of the upstream version, where I'd add <code>set</code> to the union of returned types in the appropriate overload. But that would require keeping the rest of the file in sync with upstream as it changes.</p>
<p>Instead, I'd like to keep using upstream <code>frame.pyi</code>, just extend the type hints for <code>DataFrame.apply</code> in my custom code to allow returning <code>set</code> from the applied function.</p>
<p>The Claude LLM suggests an approach along the following lines:</p>
<pre><code>import typing as t
import pandas as pd
class DataFrameExt(pd.DataFrame):
@t.overload
def apply(
self,
f: t.Callable[..., set[t.Any]],
raw: bool = ...,
result_type: None = ...,
args: t.Any = ...,
*,
axis: t.Literal[1],
**kwargs: t.Any,
) -> pd.Series[t.Any]: ...
pd.DataFrame.apply = DataFrameExt.apply
</code></pre>
<p>But this particular version yields errors of its own.</p>
<p>Is it even possible to make it work? Or should I just cast the <code>DataFrame</code> at the <code>.apply</code> call site to a custom unrelated class which has the correct type hint for <code>.apply</code>? Something like:</p>
<pre><code>import typing as t
import pandas as pd
class DataFrameApplyOverride:
def apply(
self,
f: t.Callable[..., set[t.Any]],
raw: bool = ...,
result_type: None = ...,
args: t.Any = ...,
*,
axis: t.Literal[1],
**kwargs: t.Any,
) -> pd.Series[t.Any]: ...
# And then, at the call site of .apply:
print(
t.cast(DataFrameApplyOverride, df).apply(
lambda row: set(x for x in row if not pd.isna(x)), axis=1
)
)
</code></pre>
<p>As mentioned initially, I've also set up a minimal reproducible example in <a href="https://github.com/dlukes/pandas-apply-set" rel="nofollow noreferrer">this repo</a> which includes the code above alongside dependencies for easy experimenting.</p>
|
<python><pandas><overloading><python-typing>
|
2024-12-04 10:05:50
| 1
| 1,873
|
dlukes
|
79,250,667
| 14,681,038
|
Refactoring marshmallow schema to pydantic 2
|
<p>I have been struggling to refactor this marshmallow schema to Pydantic 2. The problem is with the <code>AttributeValue</code> field: when I refactor it to Pydantic, I always get strange errors regarding this field. The field is dynamic and can accept many value types, as you can see in the <code>_get_field</code> function. I cannot seem to find the correct way to define a similar field in Pydantic. Any suggestions?</p>
<pre><code>import copy
from dataclasses import dataclass
from marshmallow import fields, schema, ValidationError, validates_schema, post_load, Schema
@dataclass(frozen=True, kw_only=True)
class Attribute:
"""
A dataclass for storing the attributes of a segment.
"""
id: str
value: str | list[str]
def get_validation_error_kwargs(field_name=None):
kwargs = {}
if field_name is not None:
kwargs["field_name"] = field_name
return kwargs
def make_single_option_validator(options, field_name=None):
validation_error_kwargs = get_validation_error_kwargs(field_name)
def validate(value):
if value not in options:
raise ValidationError(
f'"{value}" is not a valid option', **validation_error_kwargs
)
return validate
def make_multiple_option_validator(options, field_name=None):
def validate(values):
invalid_values = set(values).difference(options)
if invalid_values:
invalid = ", ".join(sorted(f'"{value}"' for value in invalid_values))
raise ValueError(f"The following values are not valid options: {invalid} ")
return validate
class AttributeValue(fields.Field):
"""
A field for validating and serialising attribute values.
"""
type_fields = {
"single": fields.String(),
"multi": fields.List(fields.String),
"boolean": fields.Boolean(),
"range": fields.Integer(),
"account": fields.String(),
"supplier": fields.List(fields.String),
}
option_validator_factories = {
"single": make_single_option_validator,
"account": make_single_option_validator,
"multi": make_multiple_option_validator,
"supplier": make_multiple_option_validator,
}
# Used to produce ``None`` when the attribute ID isn't valid and a
# type-specific field can't be found. This is required because
# even though the ID is validated elsewhere, this field's
# validation will always run.
null_field = fields.Constant(None)
def _get_field(self, attribute_id):
manifest = self.parent.manifest # type: ignore[attr-defined]
manifest_attribute = manifest.attributes.get(attribute_id)
if manifest_attribute:
attribute_type = manifest_attribute["type"]
field = self.type_fields[attribute_type]
# Check if the attribute type requires that its value is
# validated against a set of options
if (
attribute_type in self.option_validator_factories
and "option_labels" in manifest_attribute
):
field = copy.deepcopy(field)
if not self.parent.context.get("skip_attributes_validation", False):
# Proceed with attribute's option validation
# only if the campaign isn't finalized (frozen).
# For finalized (frozen) campaing,
# there will be 'skip_attributes_validation' == True.
field.validators = [
self.option_validator_factories[attribute_type](
manifest_attribute["option_labels"]
)
]
return field
return self.null_field
def _serialize(self, value, attr, obj, **kwargs):
field = self._get_field(obj.id)
return field.serialize(attr, obj)
def _deserialize(self, value, attr, data, **kwargs):
field = self._get_field(data["id"])
return field.deserialize(value, attr, data)
class AttributeSchema(Schema):
"""
A schema for validating and serialising attributes.
"""
id = fields.String()
value = AttributeValue(allow_none=True)
@property
def manifest(self):
return self.context["manifest"]
@validates_schema
def validate_values(self, data, **_):
if data.get("value") is None:
raise ValidationError(f"{data['id']!r} value may not be null.")
@post_load
def make_attribute(self, data, **kwargs):
return Attribute(**data)
</code></pre>
<p>And this is example test case:</p>
<pre><code>from datetime import date
import pytest
from marshmallow import ValidationError
from .pricing import Attribute, DateRange
from .schema import AttributeSchema
@pytest.fixture
def schema(manifest):
"""
Return a ``AttributeSchema`` instance bound to Manifest.
"""
return AttributeSchema(context={"manifest": manifest})
TYPE_INPUTS = [
# Single
(
{"id": "industry", "value": "appliances"},
Attribute(id="industry", value="appliances"),
),
# Multi
(
{"id": "day_part", "value": ["am", "pm"]},
Attribute(id="day_part", value=["am", "pm"]),
),
# Boolean
({"id": "restricted", "value": True}, Attribute(id="restricted", value=True)),
]
@pytest.mark.parametrize("payload, expected_attribute", TYPE_INPUTS)
def test_deserialisiaton_succeeds(schema, payload, expected_attribute):
"""
Each supported attribute value type should be appropriately
deserialised.
"""
attribute = schema.load(payload)
assert attribute == expected_attribute
</code></pre>
<p>As you can see, the attribute value can be one of multiple types and should be validated correctly.</p>
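<p>For context, this is roughly the direction I have been trying in Pydantic 2 (a simplified sketch; the manifest lookup and option validation are omitted and the names are illustrative):</p>
<pre><code>from pydantic import BaseModel, model_validator

class AttributeModel(BaseModel):
    id: str
    value: str | list[str] | bool | int | None = None

    @model_validator(mode="after")
    def check_value(self) -> "AttributeModel":
        if self.value is None:
            raise ValueError(f"{self.id!r} value may not be null.")
        # here I would like to validate `value` against the manifest entry
        # for `id`, the way AttributeValue._get_field does above
        return self
</code></pre>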
<p>Any suggestions or guidance will be appreciated.</p>
|
<python><refactoring><python-dataclasses><marshmallow><pydantic-v2>
|
2024-12-04 10:02:56
| 0
| 643
|
Sharmiko
|
79,250,415
| 19,500,571
|
Calculate relative difference of elements in a 1D numpy array
|
<p>Say I have a 1D numpy array given by <code>np.array([1,2,3])</code>.</p>
<p>Is there a built-in command for calculating the relative difference between each pair of elements and displaying the result in a 2D array? The result would then be given by</p>
<p><code>np.array([[0,-50,-100*2/3], [100,0,-100*1/3], [200,50,0]])</code></p>
<p>Otherwise I would have to use a for-loop, like the one below.</p>
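<p>For reference, this is the loop version I am trying to avoid (percentages, hence the factor 100):</p>
<pre><code>import numpy as np

a = np.array([1, 2, 3])
n = len(a)
result = np.empty((n, n))
for i in range(n):
    for j in range(n):
        result[i, j] = (a[i] - a[j]) / a[j] * 100  # relative difference in percent
print(result)
</code></pre>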
|
<python><numpy>
|
2024-12-04 08:44:20
| 1
| 469
|
TylerD
|
79,250,306
| 11,630,148
|
Can't get data from Companies House and HMRC API that have more than £2m in turnover data
|
<p>I've been dealing with this issue for a day now and I'm having trouble getting company data from the HMRC and Companies House APIs. I mainly need companies that have more than £1m turnover. I have set <code>MIN_TURNOVER = 1_000_000</code> in my code to filter out companies that have less than £1m turnover.</p>
<p>This is my code</p>
<pre class="lang-py prettyprint-override"><code>import os
import requests
import logging
import urllib.parse
import json
import csv
from datetime import datetime
import time
from hmrc_client import HMRCClient
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Configure logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
# Constants
COMPANIES_API_KEY = os.getenv('COMPANIES_API_KEY')
BASE_URL = 'https://api.companieshouse.gov.uk'
RATE_LIMIT_DELAY = 0.6 # Minimum delay between requests (in seconds)
MIN_DIRECTOR_AGE = 50 # Minimum age for directors
MIN_TURNOVER = 1_000_000 # £1 million minimum turnover
class CompaniesHouseClient:
def __init__(self):
"""Initialize the Companies House client."""
self.api_key = os.getenv('COMPANIES_API_KEY')
if not self.api_key:
raise ValueError("COMPANIES_API_KEY not found in environment variables")
# Set up session with authentication
self.session = requests.Session()
self.session.auth = (self.api_key, '')
self.session.headers.update({
'Accept': 'application/json',
'User-Agent': 'CompanyDataRetrieval/1.0'
})
# Configure rate limiting
self.last_request_time = time.time()
self.request_times = [] # Keep track of request timestamps
self.max_requests_per_minute = 500 # Conservative limit
self.min_request_interval = 0.15 # Minimum time between requests in seconds
self.hmrc_client = HMRCClient()
logger.info("Initialized Companies House client")
def _rate_limit(self):
"""Implement rate limiting for API requests."""
current_time = time.time()
# Remove request timestamps older than 1 minute
self.request_times = [t for t in self.request_times if current_time - t <= 60]
# If we've made too many requests in the last minute, wait
if len(self.request_times) >= self.max_requests_per_minute:
sleep_time = 60 - (current_time - self.request_times[0])
if sleep_time > 0:
logger.info(f"Rate limit approaching, waiting {sleep_time:.2f} seconds...")
time.sleep(sleep_time)
self.request_times = [] # Reset after waiting
# Ensure minimum interval between requests
time_since_last_request = current_time - self.last_request_time
if time_since_last_request < self.min_request_interval:
time.sleep(self.min_request_interval - time_since_last_request)
self.last_request_time = time.time()
self.request_times.append(self.last_request_time)
def make_request(self, url, params=None):
"""Make a request to the Companies House API with retry logic"""
max_retries = 3
base_delay = 2 # Base delay for exponential backoff
for attempt in range(max_retries):
try:
self._rate_limit() # Apply rate limiting
logger.debug(f"Making request to {url}")
response = self.session.get(
url,
params=params,
timeout=30
)
if response.status_code == 429: # Rate limit exceeded
retry_after = int(response.headers.get('Retry-After', base_delay * (2 ** attempt)))
logger.warning(f"Rate limit exceeded. Waiting {retry_after} seconds...")
time.sleep(retry_after)
continue
response.raise_for_status()
return response.json()
except requests.exceptions.RequestException as e:
logger.error(f"Request failed: {str(e)}")
if attempt < max_retries - 1:
sleep_time = base_delay * (2 ** attempt)
logger.info(f"Retrying in {sleep_time} seconds...")
time.sleep(sleep_time)
else:
return None
return None
def search_companies(self, sic_code):
"""Search for companies with specific SIC code"""
companies = []
items_per_page = 100
max_results = 20000
processed_companies = set()
# Search terms optimized for each SIC code
search_terms = {
# General cleaning
'81210': [f'"{sic_code}" cleaning'],
'81200': [f'"{sic_code}" cleaning'],
# Specialized cleaning
'81220': [f'"{sic_code}" cleaning'],
'81221': [f'"{sic_code}" window cleaning'],
'81222': [f'"{sic_code}" specialized cleaning'],
'81223': [f'"{sic_code}" chimney cleaning'],
'81229': [f'"{sic_code}" specialized cleaning'],
# Other cleaning
'81290': [f'"{sic_code}" cleaning'],
'81291': [f'"{sic_code}" disinfecting'],
'81299': [f'"{sic_code}" cleaning'],
# Additional services
'81300': [f'"{sic_code}" landscaping'],
'82990': [f'"{sic_code}" cleaning'],
# Waste management
'38110': [f'"{sic_code}" waste'],
'38210': [f'"{sic_code}" waste treatment'],
'38220': [f'"{sic_code}" hazardous waste'],
'38320': [f'"{sic_code}" recycling']
}
terms = search_terms.get(sic_code, [f'"{sic_code}"'])
for term in terms:
logger.info(f"Searching with term: {term}")
start_index = 0
while start_index < max_results:
try:
params = {
'q': term,
'items_per_page': items_per_page,
'start_index': start_index,
'restrictions': 'active'
}
response_data = self.make_request(f"{BASE_URL}/search/companies", params)
if not response_data or 'items' not in response_data:
break
items = response_data['items']
if not items:
break
total_items = response_data.get('total_results', 0)
logger.info(f"Processing {len(items)} companies from index {start_index}. Total available: {total_items}")
# Process companies in batches
for company in items:
company_number = company.get('company_number')
if not company_number or company_number in processed_companies:
continue
# Get basic company details first
company_details = {
'company_number': company_number,
'company_name': company.get('company_name', ''),
'company_status': company.get('company_status', ''),
'date_of_creation': company.get('date_of_creation', ''),
'company_type': company.get('type', '')
}
# Only get full details if basic criteria are met
if company_details['company_status'].lower() == 'active':
full_details = self.get_company_details(company_number)
if full_details:
company_details.update(full_details)
companies.append(company_details)
processed_companies.add(company_number)
logger.debug(f"Found matching company: {company_details['company_name']}")
start_index += len(items)
if start_index >= min(total_items, max_results):
break
except Exception as e:
logger.error(f"Error processing search term {term} at index {start_index}: {str(e)}")
break
logger.info(f"Found {len(companies)} unique companies for SIC code {sic_code}")
return companies
def get_company_details(self, company_number):
"""Get detailed information about a company"""
if not company_number:
return None
url = f"{BASE_URL}/company/{company_number}"
data = self.make_request(url)
if data:
# Add the company number to the data if not present
data['company_number'] = company_number
# Clean up the company name
if 'company_name' not in data and 'title' in data:
data['company_name'] = data['title']
# Ensure SIC codes are present
if 'sic_codes' not in data:
data['sic_codes'] = []
return data
def get_company_officers(self, company_number):
"""Get officers of a company"""
if not company_number:
return None
url = f"{BASE_URL}/company/{company_number}/officers"
params = {
'items_per_page': 100,
'status': 'active' # Only get active officers
}
return self.make_request(url, params)
def get_company_accounts(self, company_number):
"""Get company accounts information"""
if not company_number:
return None
url = f"{BASE_URL}/company/{company_number}/filing-history"
data = self.make_request(url)
if not data or 'items' not in data:
logger.warning(f"No filing history found for company {company_number}")
return None
# Look for the most recent full accounts
for filing in data.get('items', []):
if filing.get('category') in ['accounts', 'accounts-with-accounts-type-full', 'accounts-with-accounts-type-small']:
accounts_data = filing.get('data', {})
# Try different possible turnover fields
turnover_fields = ['turnover', 'revenue', 'total_turnover', 'uk_turnover']
for field in turnover_fields:
if field in accounts_data:
try:
turnover_str = str(accounts_data[field])
# Handle different formats
if isinstance(turnover_str, (int, float)):
return float(turnover_str)
# Remove currency symbols and commas
turnover_str = turnover_str.replace('£', '').replace(',', '').strip()
# Handle ranges like "1000000-5000000"
if '-' in turnover_str:
lower, upper = map(str.strip, turnover_str.split('-'))
try:
# Try to get both bounds
lower_val = float(lower)
upper_val = float(upper)
# Use the higher value if both are valid
return max(lower_val, upper_val)
except ValueError:
# If upper bound fails, use lower bound
return float(lower)
# Handle text-based ranges
turnover_bands = {
'over £500m': 500_000_000,
'over £100m': 100_000_000,
'over £50m': 50_000_000,
'over £25m': 25_000_000,
'over £10m': 10_000_000,
'over £5m': 5_000_000,
'over £2m': 2_000_000,
'over £1m': 1_000_000,
}
for band, value in turnover_bands.items():
if band.lower() in turnover_str.lower():
return value
# Try direct conversion
return float(turnover_str)
except (ValueError, AttributeError) as e:
logger.warning(f"Could not parse turnover value '{accounts_data[field]}' for company {company_number}: {e}")
continue
logger.warning(f"No turnover information found in filing history for company {company_number}")
return None
def process_companies(self):
"""Process companies and save to CSV"""
# Define SIC codes for cleaning and waste management
sic_codes = {
"Cleaning": [
'81210', # General cleaning of buildings
'81229', # Other specialized cleaning activities
'81220', # Other building and industrial cleaning activities
'81222', # Specialized cleaning activities
'81221', # Window cleaning services
'81223', # Chimney cleaning services
'81299', # Other cleaning services n.e.c.
'81290', # Other cleaning activities
'81291', # Disinfecting and exterminating services
'81200', # General cleaning activities
'81300', # Landscaping activities
'82990', # Other business support activities
],
"Waste Management": [
'38110', # Collection of non-hazardous waste
'38320', # Recovery of sorted materials
'38220', # Treatment and disposal of hazardous waste
'38210', # Treatment and disposal of non-hazardous waste
]
}
# Create output file with timestamp
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
output_file = f'filtered_companies_{timestamp}.csv'
# Define CSV fields
fieldnames = [
'company_number', 'company_name', 'company_status',
'incorporation_date', 'sic_codes', 'registered_office_address',
'active_directors_over_50', 'company_type', 'companies_house_turnover',
'hmrc_turnover', 'last_accounts_date', 'category', 'vat_number'
]
processed_count = 0
saved_count = 0
with open(output_file, 'w', newline='', encoding='utf-8') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for category, codes in sic_codes.items():
logger.info(f"Processing {category} companies...")
for sic_code in codes:
logger.info(f"Searching for companies with SIC code {sic_code}")
companies = self.search_companies(sic_code)
if not companies:
logger.warning(f"No companies found for SIC code {sic_code}")
continue
# Process companies in batches
batch_size = 50 # Reduced batch size for better handling
for i in range(0, len(companies), batch_size):
batch = companies[i:i + batch_size]
logger.info(f"Processing batch {i//batch_size + 1} of {len(companies)//batch_size + 1}")
for company in batch:
processed_count += 1
company_number = company.get('company_number')
company_name = company.get('company_name', 'Unknown')
try:
# Get turnover information
ch_turnover = self.get_company_accounts(company_number)
# Get VAT number and HMRC turnover
vat_info = self.hmrc_client.get_vat_info(company_number)
hmrc_turnover = None
vat_number = None
if vat_info:
vat_number = vat_info.get('vatNumber')
if vat_number:
hmrc_turnover = self.hmrc_client.get_company_turnover(vat_number)
# Check if either turnover meets our criteria (£1M or more)
turnover_ok = False
# Check Companies House turnover
if ch_turnover and ch_turnover >= MIN_TURNOVER:
turnover_ok = True
# Check HMRC turnover if Companies House turnover wasn't sufficient
elif hmrc_turnover and hmrc_turnover >= MIN_TURNOVER:
turnover_ok = True
# Only proceed if we have a valid turnover of £1M or more
if not turnover_ok:
continue
# Save companies that have £1M+ turnover
company_data = {
'company_number': company_number,
'company_name': company_name,
'company_status': company.get('company_status', ''),
'incorporation_date': company.get('date_of_creation', ''),
'sic_codes': ', '.join(company.get('sic_codes', [])),
'registered_office_address': self._format_address(company.get('registered_office_address', {})),
'active_directors_over_50': '',
'company_type': company.get('type', ''),
'companies_house_turnover': f"£{ch_turnover:,.2f}" if ch_turnover else 'Not available',
'hmrc_turnover': f"£{hmrc_turnover:,.2f}" if hmrc_turnover else 'Not available',
'last_accounts_date': (
company.get('last_accounts', {}).get('made_up_to', 'Not available')
),
'category': category,
'vat_number': vat_number or 'Not available'
}
writer.writerow(company_data)
csvfile.flush() # Force write to disk
saved_count += 1
logger.info(f"Saved data for company {company_name}")
except Exception as e:
logger.error(f"Error processing company {company_name}: {str(e)}")
continue
# Add a small delay between companies
time.sleep(RATE_LIMIT_DELAY)
logger.info(f"Completed batch. Total processed: {processed_count}, Total saved: {saved_count}")
logger.info(f"Completed SIC code {sic_code}. Total processed: {processed_count}, Total saved: {saved_count}")
logger.info(f"Completed category {category}. Total processed: {processed_count}, Total saved: {saved_count}")
logger.info(f"Processing complete. Processed {processed_count} companies, saved {saved_count} to CSV")
return output_file
def _format_address(self, address_dict):
"""Format address dictionary into a string"""
if not address_dict:
return ''
address_parts = [
address_dict.get('address_line_1', ''),
address_dict.get('address_line_2', ''),
address_dict.get('locality', ''),
address_dict.get('region', ''),
address_dict.get('postal_code', ''),
address_dict.get('country', '')
]
return ', '.join(part for part in address_parts if part)
def calculate_age(self, date_of_birth):
"""Calculate age from date of birth dictionary"""
if not date_of_birth or 'year' not in date_of_birth:
return None
try:
# Create a date object using year and month (if available)
year = int(date_of_birth['year'])
month = int(date_of_birth.get('month', 1))
day = 1 # Default to first of the month
birth_date = datetime(year, month, day)
today = datetime.now()
age = today.year - birth_date.year
# Adjust age if birthday hasn't occurred this year
if today.month < birth_date.month:
age -= 1
return age
except (ValueError, TypeError):
logger.error(f"Error calculating age for date of birth: {date_of_birth}")
return None
def main():
try:
logger.info("Starting company data retrieval process")
client = CompaniesHouseClient()
output_file = client.process_companies()
if output_file:
logger.info(f"Data has been saved to {output_file}")
else:
logger.error("No data was saved")
except Exception as e:
logger.error(f"An error occurred: {str(e)}")
raise
if __name__ == "__main__":
main()
</code></pre>
|
<python><web-scraping>
|
2024-12-04 08:12:58
| 1
| 664
|
Vicente Antonio G. Reyes
|
79,250,200
| 6,862,601
|
Make ssh run a local script when invoked by subprocess.Popen
|
<p>I have this code that executes shell commands on remote hosts:</p>
<pre><code>cmd_list = shlex.split(cmd)
proc = subprocess.Popen(cmd_list, stdin=sys.stdin, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
</code></pre>
<p>I am trying to run this command (based on this <a href="https://stackoverflow.com/a/2732991/6862601">answer</a>):</p>
<pre><code>'ssh', '<hostname>', 'sudo', 'tar', 'xzf', '<path_to_tar_file>', '-C', '<some_directory>', '&&', 'bash', '-s', '<', '/Users/myself/.../myscript.sh'
</code></pre>
<p>However, it fails with this error:</p>
<pre><code>bash: /Users/myself/.../myscript.sh: No such file or directory
</code></pre>
<p>How do I make it read from my local file and feed it to <code>bash -s</code> on the remote host?</p>
|
<python><ssh>
|
2024-12-04 07:25:53
| 1
| 43,763
|
codeforester
|
79,250,060
| 7,364,894
|
Pymemcache OSError: [Errno 99] Cannot assign requested address
|
<p>Context:</p>
<p>We have a Django application running inside a container on our cloud instance. We recently started seeing errors when we try to access a value from the Django cache in an API endpoint.</p>
<pre><code>cache.get('key')
</code></pre>
<p>This api endpoint is very frequently accessed by our users.</p>
<p>The full error that we are seeing is attached below.</p>
<p>Error trace</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/srv/www/iridize/tipcms/views.py", line 2141, in cross_app_new
cache_value = cache.get(cache_key, {})
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/cache/backends/memcached.py", line 75, in get
return self._cache.get(key, default)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/hash.py", line 347, in get
return self._run_cmd("get", key, default, default=default, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/hash.py", line 322, in _run_cmd
return self._safely_run_func(client, func, default_val, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/hash.py", line 211, in _safely_run_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/base.py", line 1494, in get
return client.get(key, default)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/base.py", line 687, in get
return self._fetch_cmd(b"get", [key], False, key_prefix=self.key_prefix).get(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/base.py", line 1133, in _fetch_cmd
self._connect()
File "/usr/local/lib/python3.11/site-packages/pymemcache/client/base.py", line 424, in _connect
sock.connect(sockaddr)
OSError: [Errno 99] Cannot assign requested address
</code></pre>
<p>We are using memcached for caching purposes and our cache config in django looks like this.</p>
<pre><code>CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
'LOCATION': os.environ.get('MEMCACHE_SERVICE', '127.0.0.1:11211'),
'OPTIONS': {
"use_pooling": True
}
}
}
</code></pre>
<p>My colleague suspected that we might be running out of free ports on our pod, and he did some investigation:</p>
<blockquote>
<p>It can be tested easily by counting the busy ports in the pod with:</p>
<p>cat /proc/net/tcp|wc -l In pods we observed 25k busy
ports or even more, when the max port number is around 30k.</p>
<p>Also a more specific command searching for the memcache port number
(11211 = hex 2BCB) can be done with this, Also added a 03: filter,
that is counting the non-released ports only (these are not busy
anymore, but the WAIT_TIME is still in progress, so they are not
released yet)</p>
<p>cat /proc/net/tcp|grep 2BCB|grep " 03:"|wc -l</p>
</blockquote>
<p>Some context about the above block: if we hit our server with several hundred requests, the number of ports linked to memcached keeps increasing.</p>
<p>Question:</p>
<p>Why does pymemcache open a new connection for every <code>cache.get</code> request? Since our server is hit with a huge number of requests, we are running out of ports because connections are opened very frequently. We thought pymemcache would open some connections and reuse them, instead of opening new connections every time. We tried to control this with the <code>max_pool_size</code> and <code>pool_idle_timeout</code> parameters, where a timeout value of 0 should mean that no connections are discarded, so we assumed some reuse of connections would happen, but that is not what we observe (doc link: <a href="https://pymemcache.readthedocs.io/en/latest/apidoc/pymemcache.client.hash.html" rel="nofollow noreferrer">https://pymemcache.readthedocs.io/en/latest/apidoc/pymemcache.client.hash.html</a>). We are still not able to prevent pymemcache from opening many connections.</p>
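<p>For reference, this is roughly the pooled variant we tried (the pool values are just examples):</p>
<pre><code>import os

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': os.environ.get('MEMCACHE_SERVICE', '127.0.0.1:11211'),
        'OPTIONS': {
            'use_pooling': True,
            'max_pool_size': 100,    # cap on pooled connections
            'pool_idle_timeout': 0,  # 0 should mean idle connections are never discarded
        }
    }
}
</code></pre>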
<p>Software versions: Django==4.2.15, pymemcache==4.0.0</p>
|
<python><django><network-programming><memcached><pymemcached>
|
2024-12-04 06:13:35
| 0
| 3,648
|
Thakur Karthik
|
79,249,502
| 298,511
|
No name 'Main' in module 'juliacall'
|
<p>I have an interesting problem where the line <code>from juliacall import Main as jl</code> shows an error in both Pylint and Pylance (VSCode). Is this just a linter issue and if so, can it be resolved (other than disabling)?</p>
<pre><code>No name 'Main' in module 'juliacall' Pylint(E0611:no-name-in-module)
"Main" is unknown import symbol Pylance(reportAttributeAccessIssue)
</code></pre>
<p>Furthermore, if the import statement is placed at the top of the file, the code also hangs at runtime (no error messages). Putting the import statement inside each function resolves this, but I have no idea why.</p>
<pre><code>(.venv) C:\myproj>pip show JuliaCall
Name: juliacall
Version: 0.9.23
Summary: Julia and Python in seamless harmony
Home-page: http://github.com/JuliaPy/PythonCall.jl
Author:
Author-email:
License:
Location: c:\myproj\.venv\Lib\site-packages
Requires: juliapkg
Required-by:
(.venv) C:\myproj>python --version
Python 3.12.4
</code></pre>
|
<python><julia><juliacall>
|
2024-12-03 23:49:04
| 1
| 2,918
|
Mike Lowery
|
79,249,475
| 405,017
|
In Dash can I update an output value but disable further updates from that change?
|
<p>I have two inputs, and an output. Both inputs, when changed, affect the output.
When the user types a value in one input, I want the value in the other to be cleared...but to NOT invoke the follow-on callback that would update the output. Is this possible?</p>
<pre class="lang-py prettyprint-override"><code>app.layout = html.Div([
dcc.Input(id="foo"),
dcc.Input(id="bar"),
html.Div(id="out")
])
@callback(
Output("out", "children", allow_duplicate=True),
Output("bar", "value"), # changing this shouldn't trigger from_bar
Input("foo", "value"),
prevent_initial_call=True,
)
def from_foo(value: str):
return (f"foo={value}", "")
@callback(
Output("out", "children", allow_duplicate=True),
Output("foo", "value"), # changing this shouldn't trigger from_foo
Input("bar", "value"),
prevent_initial_call=True,
)
def from_bar(value: str):
return (f"bar={value}", "")
</code></pre>
<p>I know there's a solution in simple cases where you put all the inputs and outputs into a single callback, use <code>callback_context</code> to determine which input changed, adjust the logic accordingly, and return <code>dash.no_update</code> for the outputs that shouldn't change (roughly like the sketch below). But I'm asking directly whether there's a way to set a flag or something that says "skip callbacks when processing the outputs of just this function".</p>
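<p>For reference, this is roughly the single-callback workaround I am referring to (an untested sketch):</p>
<pre class="lang-py prettyprint-override"><code>from dash import Input, Output, callback, callback_context, no_update

@callback(
    Output("out", "children"),
    Output("foo", "value"),
    Output("bar", "value"),
    Input("foo", "value"),
    Input("bar", "value"),
    prevent_initial_call=True,
)
def combined(foo_value, bar_value):
    # figure out which input triggered the callback
    trigger = callback_context.triggered[0]["prop_id"].split(".")[0]
    if trigger == "foo":
        return f"foo={foo_value}", no_update, ""
    return f"bar={bar_value}", "", no_update
</code></pre>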
|
<python><plotly-dash><circular-dependency>
|
2024-12-03 23:34:30
| 0
| 304,256
|
Phrogz
|
79,249,439
| 6,929,343
|
Tkinter menu command, set string to None
|
<p>I have this code I'd like to condense if possible:</p>
<pre class="lang-py prettyprint-override"><code>def ForgetPassword():
""" Clear sudo password for extreme caution """
global SUDO_PASSWORD
SUDO_PASSWORD = None
self.tools_menu.add_command(label="Forget sudo password", underline=0,
font=g.FONT, command=ForgetPassword, state=tk.DISABLED)
</code></pre>
<p>Rather than using <code>command=ForgetPassword</code>, is it possible to use something like <code>command=SUDO_PASSWORD=None</code>?</p>
|
<python><tkinter><variable-assignment>
|
2024-12-03 23:10:30
| 0
| 2,005
|
WinEunuuchs2Unix
|
79,248,888
| 363,028
|
How to embed python scripts into LibreOffice Calc (v.24.8.3.2)
|
<p>This is a repeat of <a href="https://stackoverflow.com/questions/70501673/how-to-embed-python-scripts-into-libreoffice-calc">How to embed python scripts into LibreOffice Calc</a> because <a href="https://stackoverflow.com/a/70510410">https://stackoverflow.com/a/70510410</a> does not work for me.</p>
<p>Specifically step 2:</p>
<blockquote>
<p>With APSO installed, go to Tools -> Macros -> Organize python scripts.</p>
</blockquote>
<p>Nothing happens when I click Organize python scripts</p>
<p><a href="https://i.sstatic.net/Olzp3Jn1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Olzp3Jn1.png" alt="Menu organize pythong scripts" /></a></p>
<p>Version: 24.8.3.2 (X86_64) / LibreOffice Community</p>
|
<python><libreoffice-calc>
|
2024-12-03 19:09:35
| 1
| 34,146
|
Ole Tange
|
79,248,883
| 3,652,584
|
Submit too many commands from different files as a single batch job
|
<p>I want to use bash to run a batch job on an HPC. The commands to be executed are saved to a text file. Previously, I used the following to run each line of the text file separately as a batch job.</p>
<pre><code>File=Commands.txt
head -n $SLURM_ARRAY_TASK_ID $File | tail -n 1 | bash
</code></pre>
<p>I need to extend this approach to find all files in a certain directory (and its single-level subdirectories) that follow certain patterns:</p>
<pre><code>BaseDir="Folder1/Folder2/Folder3"
FolderPattern="FolderPattern*"
FilePatterns=("FilePattern1_*.txt" "FilePattern2_*.txt")
</code></pre>
<p>There should be multiple candidate folders, and in each there should be at least one file of each pattern. I need to read the lines in these files without saving them to a new file. The combined commands should be distributed into at most 210 batch jobs. For example, if there are 1000 commands to be executed, there should be ~5 commands to run sequentially in each batch job.</p>
<p>I need to ensure that the loaded HPC modules and the Python location are inherited by all running jobs:</p>
<pre><code>#!/bin/bash
#SBATCH --account=XXXX
#SBATCH --ntasks=1
#SBATCH --job-name=Process
#SBATCH --output=%x-%A-%a.out
#SBATCH --error=%x-%A-%a.err
#SBATCH --partition=XXX
#SBATCH --time=00:30:00
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=200G
#SBATCH --gpus-per-node=1
#SBATCH --ntasks-per-node=1
#SBATCH --array=1-210
#SBATCH --export=ALL
# Load required modules
module load tensorflow
# Ensure the virtual environment paths are correctly set as environment variables.
export PATH=TF_Virt_Env/bin:$PATH
export PYTHONPATH=TF_Virt_Env/lib/python3.10/site-packages:$PYTHONPATH
# Activate the Python virtual environment
source TF_Virt_Env/bin/activate
</code></pre>
<p>Example commands</p>
<pre><code>/usr/bin/time -v python3 Script.py --arg1 val1 --arg2 val2 --arg3 val3
</code></pre>
<p>Is this possible without splitting the commands into separate files and submitting them separately?</p>
<p>Thanks in advance</p>
|
<python><bash><slurm><hpc>
|
2024-12-03 19:06:42
| 0
| 537
|
Ahmed El-Gabbas
|
79,248,780
| 2,590,981
|
Python: Can one call of random.uniform (a,b) influence the outcome of a second call?
|
<p>I triple-checked this: an earlier call of the function in my code has an influence on the outcome of a second call (with different variables involved) later on in my code. The random seed is not changed in between. I'm puzzled.</p>
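<p>A minimal illustration of the kind of effect I am seeing (a sketch, not my actual code):</p>
<pre><code>import random

random.seed(0)
first = random.uniform(0, 1)     # first call
second = random.uniform(10, 20)  # second call

random.seed(0)
alone = random.uniform(10, 20)   # the "second" call without the first one

print(second, alone)  # different values, even though the arguments are identical
</code></pre>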
|
<python><random><seed>
|
2024-12-03 18:20:44
| 1
| 653
|
JohnDoe
|
79,248,710
| 2,475,195
|
Reshaping out tensor in pytorch produces weird behavior
|
<p>I was going through <a href="https://github.com/parrt/fundamentals-of-deep-learning/blob/main/notebooks/3.train-test-diabetes.ipynb" rel="nofollow noreferrer">https://github.com/parrt/fundamentals-of-deep-learning/blob/main/notebooks/3.train-test-diabetes.ipynb</a> as an exercise, but forgot to reshape y tensors in these lines</p>
<pre><code>y_train = torch.tensor(y_train).float().reshape(-1,1) # column vector
y_test = torch.tensor(y_test).float().reshape(-1,1)
</code></pre>
<p>And my model just stopped learning early on; the loss was not improving during training. Does anyone understand what the effect of those <code>reshape()</code> calls is, and how can I avoid this bug in the future?</p>
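<p>For what it's worth, here is a small shape check with dummy tensors whose sizes match my train split:</p>
<pre><code>import torch

y_pred = torch.zeros(353, 1)  # model output: a column vector
y_flat = torch.zeros(353)     # target without reshape
y_col = torch.zeros(353, 1)   # target with reshape(-1, 1)

print((y_pred - y_flat).shape)  # torch.Size([353, 353]), broadcasting kicks in
print((y_pred - y_col).shape)   # torch.Size([353, 1])
</code></pre>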
<p>Full code and comments below:</p>
<pre><code>def train1(model, X_train, X_test, y_train, y_test,
learning_rate = .5, nepochs=2000):
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
history = [] # track training and validation loss
for epoch in range(nepochs+1):
y_pred = model(X_train)
loss = torch.mean((y_pred - y_train)**2)
y_pred_test = model(X_test)
loss_test = torch.mean((y_pred_test - y_test)**2)
history.append((loss, loss_test))
if epoch % (nepochs//10) == 0:
print(f"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}")
optimizer.zero_grad()
loss.backward() # autograd computes w1.grad, b1.grad, ...
optimizer.step()
return torch.tensor(history)
ncols = X_train.shape[1]
n_neurons = 150
model2 = torch.nn.Sequential(
torch.nn.Linear(ncols, n_neurons),
torch.nn.ReLU(),
torch.nn.Linear(n_neurons, 1)
)
d = load_diabetes()
df = pd.DataFrame(d.data, columns=d.feature_names)
df['disease'] = d.target # "quantitative measure of disease progression one year after baseline"
print (df.head(3))
np.random.seed(1) # set a random seed for consistency across runs
n = len(df)
X = df.drop('disease',axis=1).values
y = df['disease'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
m = np.mean(X_train,axis=0)
std = np.std(X_train,axis=0)
X_train = (X_train-m)/std
X_test = (X_test-m)/std
X_train = torch.tensor(X_train).float()
X_test = torch.tensor(X_test).float()
# HERE !!!!!!
# without reshape: train loss doesn't emprove beyond epoch 800, loss=6074
# y_train = torch.tensor(y_train).float()
# y_test = torch.tensor(y_test).float()
# print (y_train.shape, y_test.shape) # torch.Size([353]) torch.Size([89])
# with reshape, train loss goes down to 7
y_train = torch.tensor(y_train).float().reshape(-1,1) # column vector
y_test = torch.tensor(y_test).float().reshape(-1,1)
print (y_train.shape, y_test.shape) # torch.Size([353]) torch.Size([89])
########################################################################
history = train1(model2, X_train, X_test, y_train, y_test,
learning_rate=.02, nepochs=8000)
# Epoch 0 MSE train loss 29603.037 test loss 26998.922
# Epoch 800 MSE train loss 2133.840 test loss 3174.325
# Epoch 1600 MSE train loss 1423.420 test loss 4316.454
# Epoch 2400 MSE train loss 375.720 test loss 7257.883
# Epoch 3200 MSE train loss 120.477 test loss 9051.368
# Epoch 4000 MSE train loss 57.527 test loss 10240.634
# Epoch 4800 MSE train loss 31.486 test loss 10784.966
# Epoch 5600 MSE train loss 16.044 test loss 11113.780
# Epoch 6400 MSE train loss 8.490 test loss 11283.872
# Epoch 7200 MSE train loss 6.594 test loss 11503.454
# Epoch 8000 MSE train loss 3.513 test loss 11644.484
</code></pre>
|
<python><deep-learning><pytorch><neural-network><tensor>
|
2024-12-03 17:54:12
| 1
| 4,355
|
Baron Yugovich
|
79,248,600
| 1,114,872
|
an import that works on python, but not on jupyter lab
|
<p>I have installed a library (<a href="https://pypi.org/project/plot-likert/" rel="nofollow noreferrer">https://pypi.org/project/plot-likert/</a>) in a venv, but curiously it works in vanilla Python and does not work in <code>jupyter lab</code>.</p>
<p>On vanilla python:</p>
<pre><code>(venv_jupyter) lucas@karaboudjan:~/.instalados/venv_jupyter$ which python
/home/lucas/.instalados/venv_jupyter/bin/python
(venv_jupyter) lucas@karaboudjan:~/.instalados/venv_jupyter$ python
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import plot_likert
>>>
</code></pre>
<p>On <code>jupyter lab</code> (even after a kill -9 -1 and running everything again)</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[2], line 4
2 import matplotlib.pyplot as plt
3 import seaborn as sns
----> 4 import plot_likert
5 health = pd.read_csv('health_concerns.csv')
6 health.head()
ModuleNotFoundError: No module named 'plot_likert'
</code></pre>
<p>I have no idea what might be the issue! Other libraries are working as intended after a <code>pip install</code>.</p>
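<p>One thing I have not compared yet is which interpreter the Jupyter kernel actually runs; a quick check I could run in a notebook cell (sketch) would be:</p>
<pre><code>import sys

print(sys.executable)  # expected: /home/lucas/.instalados/venv_jupyter/bin/python
print(sys.path)        # which site-packages directories the kernel searches
</code></pre>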
|
<python><pip><jupyter-lab>
|
2024-12-03 17:15:29
| 0
| 1,512
|
josinalvo
|
79,248,554
| 885,486
|
Tensorflow - Time Series Tutorial - How Does Data Windowing Works?
|
<p>I am following the <a href="https://www.tensorflow.org/tutorials/structured_data/time_series#linear_model" rel="nofollow noreferrer">Time Series tutorial</a> from TensorFlow, which takes in weather data and predicts the future value of temperature.</p>
<p>I have a hard time wrapping my head around how the Window Generator works and how each variable behaves.</p>
<p>This is the code to generate a window:</p>
<blockquote>
<p>wide_window = WindowGenerator(input_width=24, label_width=1,
shift=1, label_columns=['T (degC)'])</p>
</blockquote>
<p>Now, per the tutorial, this means that the model takes in 24 hours of weather data, and predicts the next 1 hour (a.k.a. 1 value) of temperature. This works.</p>
<p>HOWEVER, if I change the <strong>shift</strong> (how far in the future to make a prediction) to any value that's not 1:</p>
<blockquote>
<p>wide_window = WindowGenerator(input_width=24, label_width=1,
shift=3, label_columns=['T (degC)'])</p>
</blockquote>
<p>Then the model returns no prediction, which makes the Plot function of WindowGenerator crash.</p>
<p>LIKEWISE, if I change the <strong>label_width</strong> (how many predictions to make from the shift backwards by the looks of it) to any value that's not 24 or 1:</p>
<blockquote>
<p>wide_window = WindowGenerator(input_width=24, label_width=3,
shift=1, label_columns=['T (degC)'])</p>
</blockquote>
<p>Then the model.fit method crashes with error:</p>
<pre><code>Exception has occurred: ValueError
Dimensions must be equal, but are 3 and 24 for '{{node compile_loss/mean_squared_error/sub}} = Sub[T=DT_FLOAT](data_1, sequential_1/dense_1/Add)' with input shapes: [?,3,1], [?,24,1].
</code></pre>
<p>My question is: why? Why can I only predict if it's 24 predictions or 1 prediction, but not 3 predictions? And why can I predict 1 step in the future but no further?</p>
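<p>For concreteness, this is how I understand the index bookkeeping for the failing shift=3 case, following the WindowGenerator code below (just a sanity check, not part of the tutorial):</p>
<pre><code># input_width=24, label_width=1, shift=3
import numpy as np

input_width, label_width, shift = 24, 1, 3
total_window_size = input_width + shift                                  # 27
input_indices = np.arange(total_window_size)[slice(0, input_width)]     # 0 .. 23
label_start = total_window_size - label_width                           # 26
label_indices = np.arange(total_window_size)[slice(label_start, None)]  # [26]
print(total_window_size, input_indices[-1], label_indices)
</code></pre>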
<p>This is the relevant part of the code (I've omitted the downloading and formatting of data from the tutorial to keep this code snippet tight):</p>
<pre><code># WINDOW GENERATOR CLASS
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
# SPLIT FUNCTION
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
# PLOT FUNCTION
def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(max_n, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, self.label_indices[0]-1:self.label_indices[-1], label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
# ERROR WHEN SHIFT IS NOT 1 BECAUSE NO PREDICTION:
# Exception has occurred: ValueError. x and y must be the same size
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
plt.show()
# MAKE DATASET FUNCTION
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = tf.keras.utils.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=BATCH_SIZE,)
ds = ds.map(self.split_window)
return ds
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
val_performance = {}
performance = {}
# WIDE WINDOW
wide_window = WindowGenerator(
input_width=24, label_width=1, shift=1,
label_columns=['T (degC)'])
# print(wide_window)
# print('Input shape:', wide_window.example[0].shape)
# print('Output shape:', baseline(wide_window.example[0]).shape)
# LINEAR MODEL
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
# print('Input shape:', single_step_window.example[0].shape)
# print('Output shape:', linear(single_step_window.example[0]).shape)
# COMPILE AND FIT FUNCTION
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.keras.losses.MeanSquaredError(),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.keras.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
# # ERROR WHEN label_width IS NOT 1 OR 24 (WHY THESE VALUES?):
# Exception has occurred: ValueError
# Dimensions must be equal, but are 3 and 24 for '{{node compile_loss/mean_squared_error/sub}} = Sub[T=DT_FLOAT](data_1, sequential_1/dense_1/Add)' with input shapes: [?,3,1], [?,24,1].
return history
# COMPILE AND FIT THE LINEAR MODEL ONTO THE WIDE WINDOW
history = compile_and_fit(linear, wide_window)
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
val_performance['Linear'] = linear.evaluate(wide_window.val, return_dict=True)
performance['Linear'] = linear.evaluate(wide_window.test, verbose=0, return_dict=True)
wide_window.plot(linear)
</code></pre>
|
<python><tensorflow><neural-network>
|
2024-12-03 17:00:43
| 1
| 1,448
|
Erken
|
79,248,486
| 6,447,123
|
How to reverse the tokenizer.apply_chat_template()
|
<pre class="lang-py prettyprint-override"><code># Chat template example
prompt = [
{ "role": "user", "content": "Random prompt."},
]
# Applying chat template
prompt = tokenizer.apply_chat_template(prompt)
</code></pre>
<p>Is there any way to convert the prompt string back to the original array?</p>
<p>My problem: if I don't use <code>pipeline</code>, the response is a string and I need to parse it manually. I'm trying to figure out how <code>pipeline</code> parses it under the hood.</p>
|
<python><huggingface-transformers>
|
2024-12-03 16:39:55
| 0
| 4,309
|
A.A
|
79,248,481
| 653,966
|
Finding Python Errors - Async function not Awaited
|
<p>I am using Python in Visual Studio 2022 - not Code, on Win11. I had a bug recently where an async function was not awaited. It occurred to me that it would be great to find these types of errors at design time, not run time.</p>
<p>I have tried installing PyLint, but none of the PyLint options are available in the Tools Menu. I have Python installed as a workload in VS 2022.</p>
<p>Can anyone tell me how to get VS2022 set up so that I can trap async functions that are called without an await prefix?</p>
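<p>As a point of reference, this is the pattern such checkers flag; a minimal sketch, not specific to VS2022 or PyLint (mypy reports the un-awaited call under its <code>unused-coroutine</code> error code, and CPython prints a "coroutine ... was never awaited" RuntimeWarning at run time):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def work() -> int:
    return 42

async def main() -> None:
    work()                # bug: creates a coroutine object that is never awaited
    print(await work())   # correct usage

asyncio.run(main())
</code></pre>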
|
<python><asynchronous><pylint>
|
2024-12-03 16:38:39
| 0
| 2,195
|
Steve Hibbert
|
79,248,406
| 2,617,144
|
Not JSON serializable when using SQLAlchemy
|
<p>Given the following code below</p>
<pre><code>from sqlalchemy.orm import mapped_column, Mapped, DeclarativeBase, relationship
from sqlalchemy import String, Integer, ForeignKey, Column
class Base(DeclarativeBase):
pass
class CModel(Base) :
__tablename__ = 'c'
id: Mapped[int] = mapped_column(Integer, primary_key=True)
cnp: Mapped[str] = mapped_column(String)
cn: Mapped[str] = mapped_column(String)
cs_id = mapped_column(Integer, ForeignKey("cs.id"))
cs = relationship("CSModel", foreign_keys=[cs_id])
def to_dict(self):
return {
"id": self.id,
"cnp": self.cnp,
"cn": self.cn,
"cs": self.cs
}
class CSModel(Base):
__tablename__ = 'cs'
id: Mapped[int] = mapped_column(Integer, primary_key=True)
s: Mapped[str] = mapped_column(String)
so: Mapped[int] = mapped_column(Integer)
</code></pre>
<p>I have followed the documentation found <a href="https://docs.sqlalchemy.org/en/20/orm/join_conditions.html" rel="nofollow noreferrer">here</a>. I have checked my database schema to make sure the models match correctly. I have looked over the documentation thrice and am still getting <code>TypeError: Object of type CSModel is not JSON serializable</code>. What am I missing?</p>
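<p>A hedged sketch of one way around the error, reusing the models above: serialise the related object as well, instead of returning the ORM instance, which the <code>json</code> encoder cannot handle.</p>
<pre class="lang-py prettyprint-override"><code># replaces the CSModel definition from above with one that can dump itself
class CSModel(Base):
    __tablename__ = 'cs'
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    s: Mapped[str] = mapped_column(String)
    so: Mapped[int] = mapped_column(Integer)

    def to_dict(self):
        return {"id": self.id, "s": self.s, "so": self.so}

# and in CModel.to_dict(), return a plain dict for the relationship too:
#     "cs": self.cs.to_dict() if self.cs is not None else None
</code></pre>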
|
<python><sqlalchemy>
|
2024-12-03 16:20:46
| 0
| 1,851
|
Mark Hill
|
79,248,350
| 1,473,320
|
Is it possible to install the Google Cloud CLI on alpine Linux Without it installing a bundled version of python?
|
<h2>Problem Background</h2>
<p>I am building an Alpine Linux docker image to use as a (relatively) lightweight CI image. In this image, I need to have the Google Cloud <code>gcloud</code> CLI installed.</p>
<p>I am installing Python from my <code>Dockerfile</code> like this:</p>
<pre><code>RUN apk add python3
</code></pre>
<p>On Alpine Linux 3.20, (on 2024/12/03,) this installs Python 3.12.7. Installing some version of python appears to be a pre-requisite for even running the Google Cloud CLI install script.</p>
<pre class="lang-bash prettyprint-override"><code>/ # cat /etc/alpine-release
3.20.0
/ # python --version
Python 3.12.7
</code></pre>
<p>Google Cloud CLI claims to be compatible with Python 3.8 to 3.13 - <a href="https://cloud.google.com/sdk/docs/install" rel="nofollow noreferrer">source, under the "Linux" tab</a>.</p>
<p>I am installing the Google Cloud CLI using a script like this, run from inside my container being built:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir -p /usr/local/gcloud
curl -s https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz | tar -xz -C /usr/local/gcloud
/usr/local/gcloud/google-cloud-sdk/install.sh --quiet --usage-reporting false
</code></pre>
<p>When I run this script as part of my Docker image build, I see this:</p>
<pre><code>#17 8.452 Your current Google Cloud CLI version is: 502.0.0
#17 8.452 Installing components from version: 502.0.0
#17 8.452
#17 8.495 ┌─────────────────────────────────────────────────────────────────────────────┐
#17 8.495 │ These components will be installed. │
#17 8.496 ├─────────────────────────────────────────────────────┬────────────┬──────────┤
#17 8.496 │ Name │ Version │ Size │
#17 8.497 ├─────────────────────────────────────────────────────┼────────────┼──────────┤
#17 8.497 │ BigQuery Command Line Tool │ 2.1.9 │ 1.7 MiB │
#17 8.499 │ BigQuery Command Line Tool (Platform Specific) │ 2.1.8 │ < 1 MiB │
#17 8.501 │ Bundled Python 3.11 (Platform Specific) │ 3.11.9 │ 74.4 MiB │
#17 8.503 │ Cloud Storage Command Line Tool │ 5.31 │ 11.3 MiB │
#17 8.504 │ Cloud Storage Command Line Tool (Platform Specific) │ 5.30 │ < 1 MiB │
#17 8.506 │ Google Cloud CLI Core Libraries (Platform Specific) │ 2024.08.30 │ < 1 MiB │
#17 8.507 │ Google Cloud CRC32C Hash Tool (Platform Specific) │ 1.0.0 │ 1.3 MiB │
#17 8.509 │ gcloud cli dependencies (Platform Specific) │ 2021.04.16 │ < 1 MiB │
#17 8.510 └─────────────────────────────────────────────────────┴────────────┴──────────┘
</code></pre>
<p>The line that caught my eye is the ~75MiB "Bundled Python 3.11" line. <strong>I would like to prevent the installer script from downloading and using a Bundled version of Python.</strong> Is this possible?</p>
<h2>Things I've Tried</h2>
<h3><code>--install-python false</code> Flag</h3>
<p>When I run the <code>--help</code> on the script to see options, one catches my eye:</p>
<pre class="lang-bash prettyprint-override"><code>/usr/local/gcloud/google-cloud-sdk/install.sh --help
# ....
--install-python INSTALL_PYTHON
(true/false) Attempt to install Python. MacOS only.
</code></pre>
<p>Sure enough, this flag appears to not do anything on Alpine Linux (I still see the install script downloading and installing Python 3.11).</p>
<h3>Setting <code>CLOUDSDK_PYTHON</code> environment variable</h3>
<p>After some Googling around, I attempted to set the <code>CLOUDSDK_PYTHON</code> environment variable in my <code>Dockerfile</code> in two ways:</p>
<pre><code>ENV CLOUDSDK_PYTHON="/usr/bin/python"
ENV CLOUDSDK_PYTHON="/usr/bin/python3"
</code></pre>
<p>Neither worked (I still see the install script downloading and installing Python 3.11).</p>
<h3>Not a solution for me: Google Provided Docker Container</h3>
<p>I understand that <a href="https://cloud.google.com/sdk/docs/downloads-docker" rel="nofollow noreferrer">Google provides a base image that already has the <code>gcloud</code> CLI installed</a>. For reasons that aren't fully relevant to the main thrust of this question, I can't use that to elide this problem. I have to use a different Alpine Linux based base image.</p>
|
<python><docker><gcloud><alpine-linux>
|
2024-12-03 16:06:17
| 1
| 472
|
kashev
|
79,248,235
| 4,653,423
|
Validate kubernetes memory & cpu resources input strings in python
|
<p>I am writing an API which takes inputs for kubernetes container resources. The API takes only the 4 values as string inputs and not the entire <code>resources</code> block.</p>
<p>Examples:<br />
For memory inputs: <code>4Gi</code>, <code>8G</code>, <code>512M</code><br />
For cpu limits: <code>1</code>, <code>250m</code>, <code>0.5</code></p>
<p>I will use this string value input received in the API to configure one of the deployment's container(s) during the course of processing the request.</p>
<p>I need to validate the string inputs received in the API so that I can make sure at an early stage that my k8s deployment will not fail because of bad input strings. I also want to put a maximum limit on the resources a requester can specify.</p>
<p>I would like to know if there is a library in Python that can easily convert these strings to their lowest unit and help me compare them with a <code>max_threshold</code>.</p>
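<p>A small self-contained sketch of the kind of normalisation meant above, with no external dependency (the official <code>kubernetes</code> client also ships a quantity parser, if memory serves, so that is worth checking too):</p>
<pre class="lang-py prettyprint-override"><code>import re

_SUFFIXES = {
    "": 1, "m": 10**-3,                               # cpu: cores / millicores
    "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,  # decimal memory units
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
}

def parse_quantity(value: str) -> float:
    """Convert a kubernetes quantity string to its base unit (bytes or cores)."""
    match = re.fullmatch(r"(\d+(?:\.\d+)?)([A-Za-z]*)", value.strip())
    if not match or match.group(2) not in _SUFFIXES:
        raise ValueError(f"invalid quantity: {value!r}")
    return float(match.group(1)) * _SUFFIXES[match.group(2)]

assert parse_quantity("512M") < parse_quantity("4Gi")  # memory compared in bytes
assert parse_quantity("250m") < parse_quantity("1")    # cpu compared in cores
</code></pre>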
|
<python><python-3.x><kubernetes>
|
2024-12-03 15:36:26
| 1
| 1,369
|
Mukund Jalan
|
79,248,152
| 1,818,122
|
AWS CDK and Multiple Chalice Apps with CDK-Chalice
|
<p>My goal is to create an infra stack and create a few lambdas using Chalice that would use the resources on that stack.</p>
<p>There is a tutorial given on <a href="https://aws.amazon.com/blogs/developer/aws-chalice-adds-support-for-the-aws-cdk/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/developer/aws-chalice-adds-support-for-the-aws-cdk/</a>. I followed the steps and it worked fine. It's using cdk-chalice.</p>
<p>What about adding multiple Chalice apps on top of the same infrastructure stack? Is that possible with cdk-chalice?</p>
<p>Any suggestions are also appreciated.</p>
<p>Thanks</p>
|
<python><amazon-web-services><aws-cdk><chalice>
|
2024-12-03 15:11:02
| 0
| 1,350
|
The_Cute_Hedgehog
|
79,248,001
| 13,392,257
|
object has no attribute '__fields_set__'. Did you mean: '__fields__'?
|
<p>I am using <code>pydantic-settings==2.4.0</code>
and trying to configure my settings class with a yaml file.</p>
<p>My code:</p>
<pre><code>import os
import yaml
from pydantic.v1 import AnyHttpUrl, BaseSettings, EmailStr, validator, BaseModel, Field
from typing import List, Optional, Union, Any
from pathlib import Path
...
class ServiceSettingsYaml(BaseSettings):
backend_cors_origins: List[AnyHttpUrl] = []
kafka_host: str = None
kafka_port: str = None
kafka_topics: str = None
kafka_instance: str = None
file_encoding: str = "utf-8"
db_user: str = None
db_password: str = None
db_host: str = None
db_port: str = None
db_name: str = None
sqlalchemy_database_url: str = None
def __init__(self, config_file_path: str = "configs/dev-config.yaml"):
config = load_yaml_config(config_file_path)
print("AAA_ ", config.kafka_consumer) # prints {'host': 'kafka', 'port': 9092, 'kafka_topics': 'my_topic'}
self.kafka_host: str = config.kafka_consumer["host"]
</code></pre>
<p>I have an error in the last code from my listing:</p>
<pre><code>File "/usr/src/app/app/main.py", line 12, in <module>
51_fetcher_new-consumer-1 | from app.core.config import settings
51_fetcher_new-consumer-1 | File "/usr/src/app/app/core/config.py", line 144, in <module>
51_fetcher_new-consumer-1 | settings = ServiceSettingsYaml()
51_fetcher_new-consumer-1 | File "/usr/src/app/app/core/config.py", line 131, in __init__
51_fetcher_new-consumer-1 | self.kafka_host: str = config.kafka_consumer["host"]
51_fetcher_new-consumer-1 | File "/usr/local/lib/python3.10/site-packages/pydantic/v1/main.py", line 405, in __setattr__
51_fetcher_new-consumer-1 | self.__fields_set__.add(name)
51_fetcher_new-consumer-1 | AttributeError: 'ServiceSettingsYaml' object has no attribute '__fields_set__'. Did you mean: '__fields__'?
</code></pre>
<p>How do I properly configure my class with the help of a yaml file?</p>
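<p>For reference, a minimal sketch of one way to avoid the error, assuming the <code>load_yaml_config</code> helper from above: let pydantic run its own <code>__init__</code> (which sets up <code>__fields_set__</code> and the rest of the model internals) instead of assigning attributes before it has run.</p>
<pre class="lang-py prettyprint-override"><code>class ServiceSettingsYaml(BaseSettings):
    kafka_host: str = None
    kafka_port: str = None
    # ... remaining fields as above ...

    def __init__(self, config_file_path: str = "configs/dev-config.yaml", **kwargs):
        config = load_yaml_config(config_file_path)
        # pass the values through pydantic's own constructor
        super().__init__(
            kafka_host=config.kafka_consumer["host"],
            kafka_port=str(config.kafka_consumer["port"]),
            **kwargs,
        )
</code></pre>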
|
<python><pydantic>
|
2024-12-03 14:28:58
| 1
| 1,708
|
mascai
|
79,247,843
| 4,190,098
|
Cannot crop correctly with openCV in Python
|
<p>I want to crop the identity card off of this scan
<a href="https://i.sstatic.net/QsYdqgRn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsYdqgRn.png" alt="enter image description here" /></a>
Note that there is a lot of white space below the ID (the scan extends down to here).</p>
<p>First, I preprocess the image:</p>
<pre><code>def preprocess_before_crop_2(scan_path, output_dir):
# Read the image
original_image = cv2.imread(scan_path)
# Grayscale
gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
# Histogram Equalization on grayscale image
equalized = cv2.equalizeHist(gray)
# Initial Denoising
denoised = cv2.fastNlMeansDenoising(equalized, None, h=20, templateWindowSize=7, searchWindowSize=21)
# Sharpening kernel
kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
sharpened = cv2.filter2D(denoised, -1, kernel)
# Bilateral filter
bilateral_filtered = cv2.bilateralFilter(sharpened, d=9, sigmaColor=75, sigmaSpace=75)
# Increase Contrast
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
contrast = clahe.apply(bilateral_filtered)
# Apply slight Gaussian blur before binarization for anti-aliasing
blurred = cv2.GaussianBlur(contrast, (3, 3), 0)
# Binary conversion with Otsu's thresholding
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Adaptive thresholding
adaptive_thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
return adaptive_thresh
</code></pre>
<p>then I use it to crop the image:</p>
<pre><code>def crop_document(scan_path, output_dir):
original_image = cv2.imread(scan_path)
# preprocess image
    preprocessed_image = preprocess_before_crop_2(scan_path, output_dir)
contours, hierarchy = cv2.findContours(preprocessed_image,cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# Find object with the biggest bounding box
mx = (0,0,0,0) # biggest bounding box so far
mx_area = 0
areas = []
for cont in contours:
x,y,w,h = cv2.boundingRect(cont)
area = w*h
ratio = float(w) / float(h)
areas.append((area,ratio))
if area > mx_area and ratio > 1:
mx = x,y,w,h
mx_area = area
x,y,w,h = mx
# Crop and save
cropped_image=original_image[y:y+h,x:x+w]
return cropped_image
</code></pre>
<p>However, the resulting cropped image looks like this: (I added a red rectangle of what I would like to crop instead)
<a href="https://i.sstatic.net/AJX4LWJ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJX4LWJ8.png" alt="enter image description here" /></a></p>
<p>I tried removing or changing some of the preprocessing steps, but to no avail.</p>
<p>Edit:
This was the image I originally uploaded, sorry if anyone was confused.
<a href="https://i.sstatic.net/F0e886NV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0e886NV.png" alt="enter image description here" /></a></p>
|
<python><opencv><image-processing><computer-vision><crop>
|
2024-12-03 13:41:21
| 2
| 949
|
Selphiron
|
79,247,768
| 1,719,931
|
Define __dict__ in slotted dataclass
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass(slots=True)
class Circle:
radius:int = 2
@property
def myslots(self):
return self.__slots__
@property
def __dict__(self):
return self.__slots__
c = Circle()
print(c.myslots)
print(c.__dict__)
</code></pre>
<p>which results in:</p>
<pre><code>❯ python test.py
('radius',)
Traceback (most recent call last):
File "/home/raffaele/Downloads/test.py", line 21, in <module>
print(c.__dict__)
^^^^^^^^^^
AttributeError: 'Circle' object has no attribute '__dict__'. Did you mean: '__dir__'?
</code></pre>
<p>How can I define a <code>__dict__</code> property in a <code>dataclass</code> with <code>slots=True</code>?</p>
|
<python><python-dataclasses><slots>
|
2024-12-03 13:20:30
| 1
| 5,202
|
robertspierre
|
79,247,457
| 2,270,043
|
How to set ENV VARs that can be used during PyCharm local development in AWS Glue Interactive Session Magics
|
<p>I'm trying to make an ENV VAR available to use in an AWS Glue Interactive Session Magic. The ENV VAR would exist in a .env file. Essentially I want to do this:</p>
<p><code>%extra_py_files { SOME_PATH_TO_S3_DEPENDENCIES }</code></p>
<p>When the Interactive Session eventually kicks off it has no knowledge of the ENV VAR. I assume this is because the session is running as a Spark Cluster in AWS and its environment knows nothing about my local ENV VARs.</p>
<p>I have tried using the %env magic, but it doesn't work. I have tried using PyCharm's Jupyter Notebook server configuration to pass in the ENV VAR, but it doesn't work. I have tried using os.getenv(), but this messes up the magics and the flow of the script. It is as if nothing can be executed before the magics.</p>
<p>Does anyone know how to make this work?</p>
<p>Thanks,</p>
<p>Sam</p>
|
<python><jupyter-notebook><pycharm><aws-glue>
|
2024-12-03 11:46:51
| 0
| 781
|
SamBrick
|
79,247,386
| 3,324,294
|
HKDF function in Python and NodeJS give different results, why?
|
<p>While implementing encryption/decryption between a Python backend server and a NodeJS frontend, my decryption attempts on the frontend were failing. I noticed that the HKDF results generated by the <code>cryptography</code> library in Python and by <code>crypto</code> in NodeJS are not the same.</p>
<p>Here's the code I used to test this:</p>
<p>Python code (with output):</p>
<pre class="lang-py prettyprint-override"><code>import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
input = b'helloworld'
hkdf = HKDF(
algorithm=hashes.SHA256(),
length=48, # 32 bytes for key + 16 bytes for IV
salt=b'\x00' * 32, # Explicit salt
info=b'',
)
output = hkdf.derive(input)
print(base64.b16encode(output))
# Output: b'E76D8FF8CE3E6FBFA6EBDD3BCE19766940316D2973503BB7B174C3F667EDE0AA65C9A74686D38E5B3FF8411A6E8354A8'
</code></pre>
<p>NodeJS code (with output):</p>
<pre class="lang-js prettyprint-override"><code>import crypto from 'crypto';
const input = Buffer.from("helloworld");
const salt = Buffer.alloc(32, 0); // Explicit salt
const output = crypto.hkdfSync('sha256', salt, input, Buffer.alloc(0), 48);
console.log(Buffer.from(output).toString("hex"));
// Output: 10523f4571d67851f2e3549a6071cef99db6cc88619a30e7d0419b38054ef63873409a0dbf4e5f4e66b693af44c3e393
</code></pre>
<p>Why? What am I missing?</p>
|
<python><node.js><cryptography><hkdf>
|
2024-12-03 11:24:51
| 1
| 374
|
ZeroByter
|
79,247,378
| 3,264,895
|
twisted reactor not launching tasks on raspberry pi
|
<p>I don't know why this code works on my windows computer, but just blocks on my raspberry pi.</p>
<pre><code>from twisted.internet import reactor, ssl, threads
from autobahn.twisted.websocket import WebSocketClientFactory, WebSocketClientProtocol
import queue
from queue import Empty
import threading
import time
connection_signal = threading.Event()
class MyWebSocketClientProtocol(WebSocketClientProtocol):
def onOpen(self):
print("WebSocket connection opened.")
self.factory.client_protocol = self
def onConnect(self, response):
print(f"Server connected: {response.peer}")
connection_signal.set()
def onClose(self, wasClean, code, reason):
print(f"WebSocket connection closed: {reason}")
def onMessage(self, payload, isBinary):
print(f"Message received: {payload}")
class WebSocketManager:
def __init__(self, queue):
self.queue = queue
self.client_protocol = None
def connect(self, host, port):
self.factory = WebSocketClientFactory(f"wss://{host}:{port}")
self.factory.protocol = MyWebSocketClientProtocol
context_factory = ssl.ClientContextFactory()
reactor.connectSSL(host, port, self.factory, context_factory)
connection_signal.wait()
def disconnect(self):
self.factory.client_protocol.sendCloseFrame()
def send_message(self, message):
self.factory.client_protocol.sendMessage(message.encode("utf-8"))
def reactor_thread(queue):
manager = WebSocketManager(queue)
def process_queue():
while True:
try:
msg = queue.get(timeout=0.1)
if msg:
if msg == "connect":
manager.connect("192.168.2.98", 8000)
time.sleep(0.1) # allow the connection to be established
elif msg == "disconnect":
manager.disconnect()
else:
manager.send_message(msg)
except Empty:
pass
reactor.callInThread(process_queue)
reactor.run(installSignalHandlers=False)
def main():
message_queue = queue.Queue()
reactor_thread_instance = threading.Thread(
target=reactor_thread, args=(message_queue,)
)
reactor_thread_instance.start()
message_queue.put("connect")
message_queue.put("Hello WebSocket!")
message_queue.put("disconnect")
time.sleep(5)
# reactor.stop()
# reactor_thread_instance.join()
if __name__ == "__main__":
main()
</code></pre>
<p>I've seen that my computer runs the select reactor while the Raspberry Pi runs the epoll reactor, but I tried forcing select on the Raspberry Pi and epoll on the computer and nothing changes.</p>
<p>I've also noticed that the Raspberry Pi never calls the __init__ of the class MyWebSocketClientProtocol, while the computer does.</p>
<p>In both venvs, the libraries and Python are the same versions (autobahn == 23.6.2, twisted == 23.10.0, pyopenssl == 24.2.1, python3.12).</p>
|
<python><python-3.x><raspberry-pi><twisted>
|
2024-12-03 11:23:33
| 1
| 864
|
Spoutnik16
|
79,247,356
| 12,297,666
|
Why i got this error trying to import Parametric UMAP
|
<p>I am trying to use <a href="https://umap-learn.readthedocs.io/en/latest/transform_landmarked_pumap.html" rel="nofollow noreferrer">Parametric UMAP</a>, using this code:</p>
<pre><code>from umap import ParametricUMAP
p_embedder = ParametricUMAP()
</code></pre>
<p>And this error happens:</p>
<pre><code>C:\Users\ldsp_\miniconda3\envs\tf-gpu\lib\site-packages\umap\__init__.py:16: UserWarning: The umap.parametric_umap package requires Tensorflow > 2.0 to be installed.
You can install Tensorflow at https://www.tensorflow.org/install
or you can install the CPU version of Tensorflow using
pip install umap-learn[parametric_umap]
warn(
Traceback (most recent call last):
Cell In[7], line 1
p_embedder = ParametricUMAP()
File ~\miniconda3\envs\tf-gpu\lib\site-packages\umap\__init__.py:26 in __init__
raise ImportError(
ImportError: umap.parametric_umap requires Tensorflow >= 2.0
</code></pre>
<p>But I have tensorflow 2.8.0 installed, as <code>pip list</code> shows:</p>
<pre><code>Package Version
----------------------------- -------------------
tensorboard 2.8.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.8.0
tensorflow-estimator 2.5.0
tensorflow-io-gcs-filesystem 0.31.0
Note: you may need to restart the kernel to use updated packages.
</code></pre>
<p>I searched around, but did not find anything related to this issue in the UMAP GitHub repository. Any ideas what could be causing this error and how to fix it?</p>
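<p>A quick diagnostic sketch worth running from the very same kernel, since the warning is raised when <code>import tensorflow</code> fails inside umap's <code>__init__</code>:</p>
<pre class="lang-py prettyprint-override"><code>import sys
print(sys.executable)   # should point into the tf-gpu environment

import tensorflow as tf
print(tf.__version__)   # the version umap sees at import time
</code></pre>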
|
<python><umap>
|
2024-12-03 11:16:35
| 1
| 679
|
Murilo
|
79,247,309
| 244,297
|
How to clean up thread-local data after using ThreadPoolExecutor?
|
<p>I want to use <a href="https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor" rel="nofollow noreferrer"><code>ThreadPoolExecutor</code></a> to parallelize some (legacy) code with database access. I would like to avoid creating a new database connection for each thread, so I use <a href="https://docs.python.org/3/library/threading.html#threading.local" rel="nofollow noreferrer"><code>threading.local()</code></a> to keep a single connection per thread, e.g.:</p>
<pre><code>thread_local = threading.local()
def get_thread_db():
if not hasattr(thread_local, "db"):
thread_local.db = get_database_connection()
return thread_local.db
def do_stuff():
db_conn = get_thread_db()
# ...
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
futures = [executor.submit(do_stuff, *params) for params in data]
# ...
</code></pre>
<p>How do I ensure that all threads' connections are properly closed after all the submitted tasks are finished? It appears that <code>ThreadPoolExecutor</code> has the <code>initializer</code> parameter, but no parameter for an exit handler.</p>
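<p>One approach I can think of is to record every connection as it is created and close them all once the executor's <code>with</code> block has exited, at which point all workers are done; a sketch building on the code above:</p>
<pre class="lang-py prettyprint-override"><code>import concurrent.futures
import threading

thread_local = threading.local()
_all_connections = []
_connections_lock = threading.Lock()

def get_thread_db():
    if not hasattr(thread_local, "db"):
        conn = get_database_connection()   # from the original code
        thread_local.db = conn
        with _connections_lock:
            _all_connections.append(conn)  # remember it for cleanup
    return thread_local.db

with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
    futures = [executor.submit(do_stuff, *params) for params in data]

# safe here: the with-block only exits after every submitted task has finished
for conn in _all_connections:
    conn.close()
_all_connections.clear()
</code></pre>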
|
<python><multithreading><database-connection><threadpool><python-multithreading>
|
2024-12-03 11:01:23
| 2
| 151,764
|
Eugene Yarmash
|
79,247,299
| 726,730
|
What is the right way to close a pair of QThread/Process?
|
<p>In my main pyqt5 program I instantiate some classes.</p>
<p>Each of these classes spawns a QThread and a Process.</p>
<p>I am having trouble closing all these pairs of QThreads/Processes.</p>
<p>Here is what i do:</p>
<p>main close event:</p>
<pre class="lang-py prettyprint-override"><code> def closeEvent(self,event):
try:
if self.closeEvent_isSet == False:
self.closeEvent_isSet = True
                # ensure that the record deck is not recording
if self.record_deck_instance.deck_status != "stopped":
self.record_deck_instance.stop_and_close()
event.ignore()
return None
                # ensure that no ip call is recording
for i in range(0, 3):
if self.ip_calls_record_deck_instance.ip_calls[i]["deck_status"] == "recording":
self.ip_calls_record_deck_instance.stop_and_close()
event.ignore()
return None
event.ignore()
self.final_slice_instance.put_to_plot = False
self.final_slice_instance.put_to_pyaudio = False
self.final_slice_instance.put_to_record = False
self.deck_1_instance.put_to_q = False
self.deck_2_instance.put_to_q = False
self.music_clip_deck_instance.put_to_q = False
# stop information frame
self.information_frame_instance.close()
# stop deck 1
self.deck_1_instance.close()
# stop deck 2
self.deck_2_instance.close()
# stop speackers deck
self.speackers_deck_instance.close()
# stop music clip deck
self.music_clip_deck_instance.close()
# stop final slice
self.final_slice_instance.close()
# stop final slice plot
self.final_slice_plot_instance.close()
# stop final slice pyaudio
self.final_slice_pyaudio_instance.close()
# stop record deck
self.record_deck_instance.close()
# stop ip call 1
if self.ui.ip_call_1_frame.isVisible():
self.ui.ip_call_1_dismiss.click()
self.ip_call_1_instance.close()
# stop ip call 2
if self.ui.ip_call_2_frame.isVisible():
self.ui.ip_call_2_dismiss.click()
self.ip_call_2_instance.close()
# stop ip call 3
if self.ui.ip_call_3_frame.isVisible():
self.ui.ip_call_3_dismiss.click()
self.ip_call_3_instance.close()
# stop ip calls (aiortc)
self.ip_calls_instance.close()
# stop speackers deck secondary
self.speackers_deck_secondary_instance.close()
# stop secondary slice
self.secondary_slice_instance.close()
# stop secondary slice pyaudio
self.secondary_slice_pyaudio_instance.close()
# ip calls record deck
self.ip_calls_record_deck_instance.close()
# player list
self.player_list_instance.close()
self.close_timer = QtCore.QTimer()
self.close_timer.timeout.connect(lambda: self.MainWindow.close())
self.close_timer.setSingleShot(True)
self.close_timer.start(20)
return None
else:
if all(not item for item in self.processes.values()):
# stop sync processes thread
self.sync_processes_instance.close()
event.accept()
if self.restart:
QtCore.QProcess.startDetached(sys.executable, sys.argv)
else:
try:
self.close_timer.disconnect()
except:
pass
self.close_timer = QtCore.QTimer()
self.close_timer.timeout.connect(lambda: self.MainWindow.close())
self.close_timer.setSingleShot(True)
self.close_timer.start(20)
event.ignore()
return None
except:
print(traceback.format_exc())
</code></pre>
<p>close method of one class instance:</p>
<pre class="lang-py prettyprint-override"><code> def close(self):
try:
print("close")
if self.is_closing == False:
self.is_closing = True
self.queue.put({"type":"close"})
print("queue put close...")
else:
try:
self.child_process.terminate()
self.emitter.quit()
except:
pass
while(self.queue.qsize()!=0):
_ = self.queue.get()
manage_processes_class.deinit_process(self)
self.main_self.processes["information_frame_instance_process_state"] = False
except:
self.main_self.processes["information_frame_instance_process_state"] = False
error_message = traceback.format_exc()
#self.main_self.open_information_frame_error_window(error_message)
</code></pre>
<p>After the first run of this close() method, one close item is put into the process queue. The process sends a close message via the emitter to the QThread and then returns None. The QThread then emits a close signal to the parent class instance and returns None. Finally, the parent class instance calls close_now() from the emitted signal.</p>
<p>close_now method:</p>
<pre class="lang-py prettyprint-override"><code> def close_now(self):
try:
print("close_now")
self.is_closing = True
self.close()
except:
self.is_closing = True
self.close()
</code></pre>
<p>This workaround finally works, but what if an error occurs in the QThread? Then the close signal will never be emitted, so close_now will never be called and the QThread/Process will never be stopped.</p>
<p>I want a modern solution using <code>terminate()</code>, <code>join()</code> and <code>quit()</code>, and it should be as simple as possible.</p>
<p><strong>Edit:</strong> I changed the main closeEvent:</p>
<pre class="lang-py prettyprint-override"><code> def closeEvent(self,event):
try:
self.MainWindow.hide()
            # ensure that the record deck is not recording
if self.record_deck_instance.deck_status != "stopped":
self.record_deck_instance.stop_and_close()
event.ignore()
return None
            # ensure that no ip call is recording
for i in range(0, 3):
if self.ip_calls_record_deck_instance.ip_calls[i]["deck_status"] == "recording":
self.ip_calls_record_deck_instance.stop_and_close()
event.ignore()
return None
self.final_slice_instance.put_to_plot = False
self.final_slice_instance.put_to_pyaudio = False
self.final_slice_instance.put_to_record = False
self.deck_1_instance.put_to_q = False
self.deck_2_instance.put_to_q = False
self.music_clip_deck_instance.put_to_q = False
# stop information frame
self.information_frame_instance.close()
# stop deck 1
self.deck_1_instance.close()
# stop deck 2
self.deck_2_instance.close()
# stop speackers deck
self.speackers_deck_instance.close()
# stop music clip deck
self.music_clip_deck_instance.close()
# stop final slice
self.final_slice_instance.close()
# stop final slice plot
self.final_slice_plot_instance.close()
# stop final slice pyaudio
self.final_slice_pyaudio_instance.close()
# stop record deck
self.record_deck_instance.close()
# stop ip call 1
if self.ui.ip_call_1_frame.isVisible():
self.ui.ip_call_1_dismiss.click()
self.ip_call_1_instance.close()
# stop ip call 2
if self.ui.ip_call_2_frame.isVisible():
self.ui.ip_call_2_dismiss.click()
self.ip_call_2_instance.close()
# stop ip call 3
if self.ui.ip_call_3_frame.isVisible():
self.ui.ip_call_3_dismiss.click()
self.ip_call_3_instance.close()
# stop ip calls (aiortc)
self.ip_calls_instance.close()
# stop speackers deck secondary
self.speackers_deck_secondary_instance.close()
# stop secondary slice
self.secondary_slice_instance.close()
# stop secondary slice pyaudio
self.secondary_slice_pyaudio_instance.close()
# ip calls record deck
self.ip_calls_record_deck_instance.close()
# player list
self.player_list_instance.close()
# stop sync processes thread
self.sync_processes_instance.close()
event.accept()
if self.restart:
QtCore.QProcess.startDetached(sys.executable, sys.argv)
except:
print(traceback.format_exc())
</code></pre>
<p>Now one close method is:</p>
<pre class="lang-py prettyprint-override"><code> def close(self):
try:
self.queue.put({"type":"close"})
try:
self.child_process.join()
self.child_process.terminate()
self.emitter.quit()
except:
pass
while(self.queue.qsize()!=0):
_ = self.queue.get()
manage_processes_class.deinit_process(self)
except:
error_message = traceback.format_exc()
self.main_self.open_information_frame_error_window(error_message)
</code></pre>
<p>But it takes 12 seconds to accept the event... :/</p>
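<p>For what it's worth, a minimal sketch of the close step with bounded waits (assuming, as above, that <code>self.child_process</code> is a <code>multiprocessing.Process</code> and <code>self.emitter</code> is a <code>QThread</code>); the 12-second stall suggests an unbounded <code>join()</code> on a child that never exits:</p>
<pre class="lang-py prettyprint-override"><code>def close(self):
    self.queue.put({"type": "close"})
    self.child_process.join(timeout=2)   # give it a chance to exit cleanly
    if self.child_process.is_alive():
        self.child_process.terminate()   # force-kill only if it did not exit
        self.child_process.join()
    self.emitter.quit()
    self.emitter.wait()                  # QThread.wait() blocks until the thread is gone
    try:
        while True:
            self.queue.get_nowait()      # drain whatever is left
    except Exception:
        pass
</code></pre>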
|
<python><process><qthread><terminate>
|
2024-12-03 10:59:33
| 0
| 2,427
|
Chris P
|
79,247,249
| 11,571,390
|
Dynamic module aliasing in __init__.py works at runtime but is not recognized by Pylance in VSCode
|
<p>I’m trying to create a clean alias for a submodule in my Python package by dynamically importing it in <code>__init__.py</code> and exposing it as a different name. While this works perfectly at runtime, Pylance in VSCode does not recognize the alias, and I lose autocomplete and type checking.</p>
<p>Here’s a minimal example of my setup:</p>
<pre><code>my_package/
├── my_package/
│ ├── __init__.py
│ ├── sub_module/
│ │ ├── example.py
└── test_script.py
</code></pre>
<p>Code:</p>
<p>my_package/sub_module/example.py</p>
<pre><code>def example_function():
return "Hello from example_function!"
</code></pre>
<p>my_package/__init__.py</p>
<pre><code>import importlib
import sys
from types import ModuleType
# Dynamically import sub_module.example
example_module = importlib.import_module("my_package.sub_module.example")
# Create an alias `my_package.alias`
alias_module = ModuleType("my_package.alias")
alias_module.example_function = example_module.example_function
# Add the alias to sys.modules for dynamic imports
sys.modules["my_package.alias"] = alias_module
</code></pre>
<p>test_script.py</p>
<pre><code>from my_package.alias import example_function
print(example_function())
</code></pre>
<p>Expected Behaviour:
Running test_script.py works perfectly and prints:
<code>Hello from example_function!</code></p>
<p><strong>Problem:</strong>
Pylance in VSCode shows the following error:
Import "my_package.alias" could not be resolved Pylance(reportMissingImports)</p>
<p>Additionally, Pylance does not provide autocomplete or type checking for the alias.</p>
<p><strong>Question:</strong>
How can I configure my project so that Pylance recognizes my_package.alias and provides type checking/autocomplete? I’d like to avoid using stub files or polluting the package structure if possible.</p>
<p>EDIT:</p>
<p>Just for additional clarity:
I've tried lots of combinations of things like this in the __init__ too:</p>
<pre><code># Import the actual module or function
import my_package.sub_module.example as alias
# Declare it as part of the package API
__all__ = ["alias"]
</code></pre>
<p>In the test script, this sort of thing just means I can do</p>
<pre><code>import my_package
print(my_package.alias.example_function())
</code></pre>
<p>But nothing I do allows me to do a 'from' import of the alias, i.e.</p>
<pre><code>from my_package.alias import whatever
</code></pre>
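<p>One static alternative that Pylance can resolve (a sketch, trading the dynamic construction for a real module file named <code>alias.py</code> inside <code>my_package/</code>):</p>
<pre class="lang-py prettyprint-override"><code># my_package/alias.py -- a plain re-export module instead of a sys.modules entry
from my_package.sub_module.example import example_function

__all__ = ["example_function"]
</code></pre>
<p>With that file on disk, <code>from my_package.alias import example_function</code> resolves both at runtime and in the type checker.</p>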
|
<python><visual-studio-code><pylance>
|
2024-12-03 10:42:12
| 0
| 595
|
Gary Frewin
|
79,247,128
| 1,504,082
|
Pandas pyarrow types and sum aggfunc in pivot tables
|
<p>I stumbled over the issue that the summation aggfunc in pandas does not behave as expected when using <code>bool[pyarrow]</code> instead of <code>bool</code>.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'count': [False] * 12, 'index': [0, 1] * 6, 'cols': ['a', 'b', 'c'] * 4})
# count index cols
# 0 False 0 a
# 1 False 1 b
# 2 False 0 c
# 3 False 1 a
# 4 False 0 b
# 5 False 1 c
# 6 False 0 a
# 7 False 1 b
# 8 False 0 c
# 9 False 1 a
# 10 False 0 b
# 11 False 1 c
# Returns ints
# cols a b c
# index
# 0 0 0 0
# 1 0 0 0
df.pivot_table(values='count', aggfunc=np.sum, columns='cols', index='index')
df.pivot_table(values='count', aggfunc='sum', columns='cols', index='index')
df.pivot_table(values='count', aggfunc=lambda x: x.sum(), columns='cols', index='index')
# Now change to pyarrow bools
df['count'] = df['count'].astype('bool[pyarrow]')
# Returns ints
# cols a b c
# index
# 0 0 0 0
# 1 0 0 0
df.pivot_table(values='count', aggfunc=np.sum, columns='cols', index='index')
df.pivot_table(values='count', aggfunc='sum', columns='cols', index='index')
# Returns a boolean
# cols a b c
# index
# 0 False False False
# 1 False False False
df.pivot_table(values='count', aggfunc=lambda x: x.sum(), columns='cols', index='index')
</code></pre>
<p>What is the reason for this behavior?</p>
<p>I am using pandas 2.2.0 and pyarrow 15.0.0.</p>
<p>Update: I found a pandas issue which describes a similar use case: <a href="https://github.com/pandas-dev/pandas/issues/53030" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/53030</a></p>
<p>It seems to be related to pandas' internal handling of pyarrow types.</p>
<p>A possible workaround could be:</p>
<pre class="lang-py prettyprint-override"><code>df.pivot_table(values='count', aggfunc=lambda x: x.astype(int).sum(), columns='cols', index='index')
</code></pre>
|
<python><pandas><pyarrow>
|
2024-12-03 10:08:46
| 0
| 4,475
|
maggie
|
79,247,116
| 19,576,917
|
Pyinstaller isn't properly converting a python script with alot of dependencies to an exe file
|
<p>I have written a python script that enables users to upload pdf files and ask questions about them. The script uses Gemini for smart responses and PyQt5 for the GUI. I am trying to convert this script into an executable file that is portable and can run on other windows devices. The script has the following imports:</p>
<pre><code>import sys
import os
import re
import google.generativeai as genai
import numpy as np
from PyQt5.QtWidgets import (QApplication, QMainWindow, QVBoxLayout, QHBoxLayout,
QTextEdit, QLineEdit, QPushButton, QFileDialog, QWidget,
QMessageBox, QInputDialog)
from PyQt5.QtGui import QFont
from PyQt5.QtCore import Qt
from sentence_transformers import SentenceTransformer
from langchain.text_splitter import MarkdownTextSplitter
import fitz
</code></pre>
<p>I tried <code>pyinstaller</code>, but after taking a long time to convert the script to an exe, the resulting exe file didn't work and kept showing this error: <code>fatal error: failed to extract .... decompression resulted in return code -1</code>. I think the problem might be with hidden imports, but I couldn't solve it.</p>
|
<python><python-3.x><pyinstaller><desktop-application><auto-py-to-exe>
|
2024-12-03 10:05:41
| 0
| 488
|
Chandler Bong
|
79,247,088
| 9,261,745
|
telegram bot to trigger share_to window with checkbox
|
<p>I am using python-telegram-bot module to create a InlineKeyboardButton called invite. I would like the user to click the invite button then a share_to window with checkbox showing up like this <a href="https://i.sstatic.net/yrCEOMI0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrCEOMI0.png" alt="enter image description here" /></a>.</p>
<p>But when I use switch_inline_query, what I get is like this <a href="https://i.sstatic.net/AGWQSl8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AGWQSl8J.png" alt="enter image description here" /></a>.</p>
<pre><code>async def handle_start(update: Update, context: CallbackContext) -> None:
"""Handle the /start command and send an inline keyboard."""
keyboard = [
[InlineKeyboardButton("Play Now", callback_data="play_now")],
[InlineKeyboardButton("Follow X", callback_data="follow_x"),
InlineKeyboardButton("Join Troops", url=f"https://t.me/{BOT_USERNAME}")],
# Use switch_inline_query to open the "Share to" window
[InlineKeyboardButton("Invite", switch_inline_query="Invite your friends!")]
]
reply_markup = InlineKeyboardMarkup(keyboard)
await update.message.reply_text(
"good luck!\n\n",
reply_markup=reply_markup
)
</code></pre>
<p>How can I get the share_to window with checkboxes to show up?</p>
|
<python><telegram-bot><python-telegram-bot>
|
2024-12-03 09:59:42
| 0
| 457
|
Youshikyou
|
79,246,860
| 14,333,315
|
common folder with template files for gitlab python projects
|
<p>I have a folder with Word template files which are common to several projects.
If a template is changed, I have to copy the new file into that folder in every one of those projects.</p>
<p>Is there a way to automate the procedure?</p>
<p>I thought about creating a separate project "word_templates" and making changes only in that project. But how can I force another project "my_project" to update its folder with the files from "word_templates"?</p>
|
<python><git><gitlab>
|
2024-12-03 08:50:21
| 1
| 470
|
OcMaRUS
|
79,246,676
| 3,104,974
|
Plot contours from discrete data in matplotlib
|
<p>How do I make a <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.contourf.html" rel="nofollow noreferrer"><code>contourf</code></a> plot where the areas are supposed to be discrete (integer array instead of float)?</p>
<p>The values should be discretely mapped to color indices. Instead, matplotlib just scales the result across the whole set of colors.</p>
<p>Example:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
axes = (np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
xx, yy = np.meshgrid(*axes, indexing="xy")
fig, ax = plt.subplots()
z = np.abs(xx * yy).astype(int) # values 0, 1, 2, 3, 4
z[z==0] = 4
ax.contourf(xx, yy, z, cmap="Set1")
</code></pre>
<p><a href="https://i.sstatic.net/kAoTwyb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kAoTwyb8.png" alt="contourf plot" /></a></p>
<p><a href="https://i.sstatic.net/pz6l2YVf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pz6l2YVf.png" alt="discrete color scale" /></a></p>
|
<python><matplotlib><contourf>
|
2024-12-03 07:42:04
| 1
| 6,315
|
ascripter
|
79,245,922
| 21,935,028
|
How to get parameter name, type and default value of Oracle PLSQL function body?
|
<p>I have the following PLSQL code which I am processing with Antlr4 in Python. I am having trouble extracting the function parameter names and related details.</p>
<pre><code>CREATE OR REPLACE
FUNCTION getcost ( p_prod_id IN VARCHAR2 , p_date IN DATE) RETURN number AS
</code></pre>
<p>The ParseTree output for this:</p>
<pre><code>╚═ sql_script
╠═ unit_statement
║ ╚═ create_function_body
║ ╠═ "CREATE" (CREATE)
║ ╠═ "OR" (OR)
║ ╠═ "REPLACE" (REPLACE)
║ ╠═ "FUNCTION" (FUNCTION)
║ ╠═ function_name
║ ║ ╚═ identifier
║ ║ ╚═ id_expression
║ ║ ╚═ regular_id
║ ║ ╚═ "getcost" (REGULAR_ID)
║ ╠═ "(" (LEFT_PAREN)
║ ╠═ parameter
║ ║ ╠═ parameter_name
║ ║ ║ ╚═ identifier
║ ║ ║ ╚═ id_expression
║ ║ ║ ╚═ regular_id
║ ║ ║ ╚═ "p_prod_id" (REGULAR_ID)
║ ║ ╠═ "IN" (IN)
║ ║ ╚═ type_spec
║ ║ ╚═ datatype
║ ║ ╚═ native_datatype_element
║ ║ ╚═ "VARCHAR2" (VARCHAR2)
║ ╠═ "," (COMMA)
║ ╠═ parameter
║ ║ ╠═ parameter_name
║ ║ ║ ╚═ identifier
║ ║ ║ ╚═ id_expression
║ ║ ║ ╚═ regular_id
║ ║ ║ ╚═ "p_date" (REGULAR_ID)
║ ║ ╠═ "IN" (IN)
║ ║ ╚═ type_spec
║ ║ ╚═ datatype
║ ║ ╚═ native_datatype_element
║ ║ ╚═ "DATE" (DATE)
║ ╠═ ")" (RIGHT_PAREN)
║ ╠═ "RETURN" (RETURN)
║ ╠═ type_spec
║ ║ ╚═ datatype
║ ║ ╚═ native_datatype_element
║ ║ ╚═ "number" (NUMBER)
║
</code></pre>
<p>The python code is:</p>
<pre><code> def enterParameter(self, ctx:PlSqlParser.ParameterContext):
print(ctx.toStringTree(recog=parser))
param_name = None
if ctx.parameter_name:
if ctx.parameter_name.identifier() and ctx.parameter_name.identifier().id_expression():
param_name = ctx.parameter_name.identifier().id_expression().regular_id().getText()
param_type = None
if ctx.type_spec and ctx.type_spec.datatype:
if ctx.type_spec.datatype.native_datatype_element():
param_type = ctx.type_spec.datatype.native_datatype_element().getText()
default_value = None
print(f"Parameter: {param_name}, Type: {param_type}, Def: {default_value}")
</code></pre>
<p>The output from the print statement is:</p>
<pre><code>(parameter (parameter_name (identifier (id_expression (regular_id p_prod_id)))) IN (type_spec (datatype (native_datatype_element VARCHAR2))))
</code></pre>
<p>But I get the following error:</p>
<pre><code> if ctx.parameter_name.identifier() and ctx.parameter_name.identifier().id_expression():
AttributeError: 'function' object has no attribute 'identifier'
</code></pre>
<p>If I use <code>ctx.getChild(0).getText()</code> to get the parameter name it works, but I don't want to rely on hardcoded indices, and I also do not understand why this isn't working.</p>
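<p>For comparison, a sketch of the listener with the sub-rules accessed as method calls (the ANTLR Python target generates <code>parameter_name()</code>, <code>type_spec()</code> etc. as methods, which is why <code>ctx.parameter_name</code> without parentheses is the bound method itself):</p>
<pre class="lang-py prettyprint-override"><code>def enterParameter(self, ctx: PlSqlParser.ParameterContext):
    param_name = None
    if ctx.parameter_name() is not None:
        param_name = ctx.parameter_name().getText()
    param_type = None
    if ctx.type_spec() is not None and ctx.type_spec().datatype() is not None:
        param_type = ctx.type_spec().datatype().getText()
    # a default-value sub-rule, if the grammar defines one, is reached the same way
    print(f"Parameter: {param_name}, Type: {param_type}")
</code></pre>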
|
<python><antlr4>
|
2024-12-03 00:04:03
| 1
| 419
|
Pro West
|
79,245,905
| 11,061,827
|
Sagemaker's SklearnModel requirements.txt not getting installed
|
<p>This is my code:</p>
<pre><code>from sagemaker.sklearn import SKLearnModel
role = sagemaker.get_execution_role()
model = SKLearnModel(
model_data= f"s3://{default_bucket}/{prefix}/model.tar.gz",
role=role,
entry_point="inference.py",
framework_version="1.2-1",
py_version="py3",
)
predictor = model.deploy(
instance_type="ml.c5.large",
initial_instance_count=1,
container_startup_health_check_timeout=180
)
</code></pre>
<p><code>s3://{default_bucket}/{prefix}/model.tar.gz</code> contains:</p>
<pre><code>Contents of the tarball (I also tried putting requirements.txt in code/, as advised in the documentation on tarball structure for PyTorch models):
?rw-r--r-- sagemaker-user/users 4349839 2024-11-29 19:22:21 model.pkl
?rw-r--r-- sagemaker-user/users 24 2024-12-02 14:43:26 inference.py
?rw-r--r-- sagemaker-user/users 44212 2024-11-29 19:23:17 explainer
?rw-r--r-- sagemaker-user/users 24 2024-12-02 14:43:26 requirements.txt
</code></pre>
<p><code>requirements.txt</code> contains:</p>
<pre><code>dill
pandas
joblib
</code></pre>
<p>(and I even know pandas is installed by default, checking the aws code)</p>
<p>When I try to deploy, I get an error because inference.py has <code>import dill</code> on its first line and the module is not found; in CloudWatch I see that only inference 1.0.0 is installed (which I assume is my script).</p>
<p>I know I could probably create a subprocess within inference.py and call pip there, but I want to do this properly.</p>
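<p>A hedged sketch of one alternative: let the SDK package the script and requirements itself by pointing <code>source_dir</code> at a local <code>code/</code> directory containing <code>inference.py</code> and <code>requirements.txt</code> (the directory name here is an assumption).</p>
<pre class="lang-py prettyprint-override"><code>model = SKLearnModel(
    model_data=f"s3://{default_bucket}/{prefix}/model.tar.gz",
    role=role,
    entry_point="inference.py",
    source_dir="code",          # local dir with inference.py + requirements.txt
    framework_version="1.2-1",
    py_version="py3",
)
</code></pre>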
|
<python><amazon-web-services><scikit-learn><amazon-sagemaker>
|
2024-12-02 23:51:31
| 1
| 631
|
Daniele Gentili
|
79,245,886
| 15,842
|
How can I efficiently scan multiple remote parquet files in parallel?
|
<p>Suppose I have <code>urls</code>, a list of <code>s3</code> Parquet urls (on S3).</p>
<p>I observe that this <code>collect_all</code> runs in O(len(urls)) time, as if the scans were executed one after another.</p>
<p>Is there a better way to parallelize this task?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.collect_all((
pl.scan_parquet(url).filter(expr) for url in urls)
))
</code></pre>
|
<python><python-polars>
|
2024-12-02 23:37:46
| 1
| 21,402
|
Gregg Lind
|
79,245,770
| 472,610
|
Where is scipy.stats.dirichlet_multinomial.rvs?
|
<p>I wanted to draw samples from a <a href="https://en.wikipedia.org/wiki/Dirichlet-multinomial_distribution" rel="nofollow noreferrer">Dirichlet-multinomial</a> distribution using SciPy. Unfortunately it seems that <code>scipy.stats.dirichlet_multinomial</code> <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.dirichlet_multinomial.html#scipy.stats.dirichlet_multinomial" rel="nofollow noreferrer">does not define</a> the <code>rvs</code> method that other distributions use to generate random samples.</p>
<p>I think this would be equivalent to the following for a single sample:</p>
<pre class="lang-py prettyprint-override"><code>import scipy.stats as sps
def dirichlet_multinomial_sample(alpha, n, **kwargs):
kwargs['size'] = 1 # force size to 1 for simplicity
p = sps.dirichlet.rvs( alpha=alpha, **kwargs )
return sps.multinomial.rvs( n=n, p=p.ravel(), **kwargs )
</code></pre>
<p>Multiple samples (i.e. <code>size > 1</code>) could be drawn similarly with a little bit more work to make it efficient. This seems easy enough to implement. My two questions are:</p>
<ul>
<li>Is the above implementation correct?</li>
<li>If it is, how can I suggest this enhancement to SciPy developers?</li>
</ul>
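<p>For what it's worth, a sketch of the multi-sample version under the same compound-distribution reading as the code above (each multinomial draw uses its own Dirichlet-drawn probability vector):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.stats as sps

def dirichlet_multinomial_rvs(alpha, n, size=1, seed=None):
    rng = np.random.default_rng(seed)
    # one probability vector per requested sample
    p = sps.dirichlet.rvs(alpha=alpha, size=size, random_state=rng)
    return np.stack([rng.multinomial(n, pi) for pi in p])
</code></pre>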
|
<python><scipy><statistics>
|
2024-12-02 22:25:51
| 1
| 8,002
|
Jonathan H
|
79,245,689
| 1,843,101
|
How to Configure tufup URLs for new Updates and Fetching Metadata/Targets?
|
<p>I'm using the python <a href="https://github.com/dennisvang/tufup" rel="nofollow noreferrer">tufup library</a> to manage updates for my application. It seems that I need to specify two different URLs, but I'm struggling to understand from the <a href="https://github.com/dennisvang/tufup-example" rel="nofollow noreferrer">docs/examples</a> where and how each one gets configured, or what I am misunderstanding here.</p>
<p><strong>URL to Check for Updates:</strong> This URL is where the application checks for new releases or updates. ex: "https://github.com/USER/PROJECT/releases/latest" (<a href="https://docs.github.com/en/repositories/releasing-projects-on-github/linking-to-releases" rel="nofollow noreferrer">doc</a>)</p>
<p><strong>URL to Get Metadata and Targets:</strong> This URL is where the tufup client fetches the metadata and target files required for verifying and applying updates. These files include root.json and other metadata files. This URL (I assume) should point to a secure server that I control. ex: "https://myCustomURL.com/verification_files"</p>
<p>Here is my current configuration:</p>
<pre><code>__version__ = "0.1"
import customtkinter as ctk
import os, shutil, time
from pathlib import Path
from tufup.client import Client
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
# Define settings (these should ideally be in a separate settings.py file)
APP_NAME = "myAppName"
APP_VERSION = __version__
INSTALL_DIR = Path(os.getenv("INSTALL_DIR", Path.home() / "myAppName"))
METADATA_DIR = INSTALL_DIR / "metadata"
METADATA_BASE_URL = os.getenv("METADATA_BASE_URL", "https://myCustomURL.com/verification_files/metadata")
TARGET_DIR = INSTALL_DIR / "targets"
TARGET_BASE_URL = os.getenv("TARGET_BASE_URL", "https://myCustomURL.com/verification_files/targets")
TRUSTED_ROOT_SRC = Path(os.getenv("TRUSTED_ROOT_SRC", "update_repo/metadata/root.json"))
TRUSTED_ROOT_DST = METADATA_DIR / "root.json"
class App(ctk.CTk):
def __init__(self):
super().__init__()
# Check for updates
self.check_for_updates()
def progress_hook(self, bytes_downloaded: int, bytes_expected: int):
progress_percent = bytes_downloaded / bytes_expected * 100
time.sleep(0.2) # Simulate slow or large download
if progress_percent >= 100:
print("Download complete.")
def check_for_updates(self):
print("Checking for updates...")
try:
# Ensure directories exist
for dir_path in [INSTALL_DIR, METADATA_DIR, TARGET_DIR]:
dir_path.mkdir(exist_ok=True, parents=True)
# Ensure trusted root metadata exists
if not TRUSTED_ROOT_DST.exists():
if not TRUSTED_ROOT_SRC.exists():
raise FileNotFoundError(f"Trusted root metadata file not found: {TRUSTED_ROOT_SRC}")
shutil.copy(src=TRUSTED_ROOT_SRC, dst=TRUSTED_ROOT_DST)
print('Trusted root metadata copied to cache.')
# Create update client
client = Client(
app_name=APP_NAME,
app_install_dir=INSTALL_DIR,
current_version=APP_VERSION,
metadata_dir=METADATA_DIR,
metadata_base_url=METADATA_BASE_URL,
target_dir=TARGET_DIR,
target_base_url=TARGET_BASE_URL,
refresh_required=False,
)
# Perform update
new_update = client.check_for_updates(pre=None)
if new_update:
# Apply the update
client.download_and_apply_update(
skip_confirmation=True,
progress_hook=self.progress_hook,
purge_dst_dir=False,
exclude_from_purge=None,
log_file_name='install.log',
)
print("Update installed. Please restart the application.")
else:
print("You are up to date!")
except Exception as e:
print(f"Update check failed: {e}")
</code></pre>
|
<python><tkinter><updates><tuf><tufup>
|
2024-12-02 21:46:48
| 1
| 1,015
|
Michael Romrell
|
79,245,514
| 2,889,716
|
Error with Celery+S3 celery.exceptions.ImproperlyConfigured: Missing bucket name
|
<p>This is my script to have a celery on SQS and S3 as the result backend:</p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery
from celery.backends.s3 import S3Backend
# Set your SQS queue URL
sqs_queue_url = 'https://sqs.eu-central-1.amazonaws.com/938891507445/test-sqs-queue'
app = Celery('tasks')
app.conf.update(
broker='sqs://',
broker_transport_options={
'region': 'eu-central-1',
'predefined_queues': {
'celery': {
'url': sqs_queue_url,
},
},
'task_create_missing_queues': False,
},
result_backend='s3://', # Use a string URL format for the backend
result_backend_transport_options={
'region_name': 'eu-central-1',
'bucket_name': 'celery-back',
'prefix': 'celery-results',
'expiration': 3600,
}
)
# Define a simple Celery task
@app.task
def add(x, y):
return x + y
# Example usage: calling the task
if __name__ == '__main__':
result = add.delay(4, 6)
print(f'Task result: {result.get(timeout=10)}')
</code></pre>
<p>But when I try to run it using celery 5.4, like this:</p>
<pre><code>poetry run celery -A cel worker --loglevel=info
</code></pre>
<p>And error says:</p>
<pre><code>root@ip:/app# poetry run celery -A cel worker --loglevel=info
/app/.venv/lib/python3.9/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2024-12-02 21:09:28,126: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-12-02 21:09:28,145: CRITICAL/MainProcess] Unrecoverable error: ImproperlyConfigured('Missing bucket name')
Traceback (most recent call last):
File "/app/.venv/lib/python3.9/site-packages/celery/worker/worker.py", line 202, in start
self.blueprint.start(self)
File "/app/.venv/lib/python3.9/site-packages/celery/bootsteps.py", line 112, in start
self.on_start()
File "/app/.venv/lib/python3.9/site-packages/celery/apps/worker.py", line 135, in on_start
self.emit_banner()
File "/app/.venv/lib/python3.9/site-packages/celery/apps/worker.py", line 169, in emit_banner
' \n', self.startup_info(artlines=not use_image))),
File "/app/.venv/lib/python3.9/site-packages/celery/apps/worker.py", line 231, in startup_info
results=self.app.backend.as_uri(),
File "/app/.venv/lib/python3.9/site-packages/celery/app/base.py", line 1303, in backend
self._backend = self._get_backend()
File "/app/.venv/lib/python3.9/site-packages/celery/app/base.py", line 971, in _get_backend
return backend(app=self, url=url)
File "/app/.venv/lib/python3.9/site-packages/celery/backends/s3.py", line 47, in __init__
raise ImproperlyConfigured('Missing bucket name')
celery.exceptions.ImproperlyConfigured: Missing bucket name
</code></pre>
<p>I ran a script to put a file in S3 and send a message to SQS from the same environment, so there is no problem with the bucket name or permissions.</p>
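<p>For reference, a hedged sketch of the configuration style the S3 backend appears to expect (based on the <code>s3_*</code> result-backend settings in the Celery docs rather than <code>result_backend_transport_options</code>; please double-check the exact setting names there):</p>
<pre class="lang-py prettyprint-override"><code>app.conf.update(
    result_backend='s3://',
    s3_bucket='celery-back',          # read by celery.backends.s3.S3Backend
    s3_base_path='celery-results/',
    s3_region='eu-central-1',
)
</code></pre>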
|
<python><amazon-s3><celery><amazon-sqs>
|
2024-12-02 20:26:32
| 0
| 4,899
|
ehsan shirzadi
|
79,245,287
| 6,862,601
|
W0622: Redefining built-in 'help' (redefined-builtin)
|
<p>I have this method that causes a pylint issue:</p>
<pre><code>def add_ssh_config_path(self,
help="Enter ssh-config path for ssh connection to the host.",
required=False, metavar="<SSH_CONFIG_PATH>"):
self._parser.add_argument("--ssh-config-path", metavar=metavar,
required=required, help=help, type=str)
return self
</code></pre>
<p>The issue is:</p>
<pre><code>W0622: Redefining built-in 'help' (redefined-builtin)
</code></pre>
<p>I am not able to understand the issue. Why is <code>help</code> a built-in in the context of a method definition? How do I get around this issue?</p>
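<p>For illustration, one common way around the warning is simply to rename the parameter so the built-in <code>help()</code> is no longer shadowed inside the function body (a sketch of the same method):</p>
<pre class="lang-py prettyprint-override"><code>def add_ssh_config_path(self,
                        help_text="Enter ssh-config path for ssh connection to the host.",
                        required=False, metavar="<SSH_CONFIG_PATH>"):
    self._parser.add_argument("--ssh-config-path", metavar=metavar,
                              required=required, help=help_text, type=str)
    return self
</code></pre>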
|
<python><pylint>
|
2024-12-02 19:07:44
| 1
| 43,763
|
codeforester
|
79,245,159
| 1,449,982
|
How to run Stable Diffusion 3.5 Large on 4090?
|
<p>I have 4090 and this code:</p>
<pre><code>login(HF_TOKEN)
self.pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16)
self.pipe = self.pipe.to("cuda")
</code></pre>
<p>This fails to load into VRAM due to an out-of-memory error.
What is the correct way to run it?</p>
<p>I tried other float dtypes, but that did not work.</p>
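<p>A sketch of one memory-saving option I would try (assuming the <code>accelerate</code> package is installed, since diffusers' offload hooks rely on it): keep the weights on the CPU and move components to the GPU only while they run, instead of calling <code>.to("cuda")</code> on the whole pipeline.</p>
<pre class="lang-py prettyprint-override"><code>login(HF_TOKEN)
self.pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)
self.pipe.enable_model_cpu_offload()  # do not also call self.pipe.to("cuda")
</code></pre>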
|
<python><huggingface><stable-diffusion>
|
2024-12-02 18:19:09
| 1
| 8,652
|
Dima
|
79,245,038
| 3,723,031
|
Python typing error when incrementing a dict value: 'Unsupported operand types for + ("object" and "int")'
|
<p>I have this code:</p>
<pre><code>result = {"AAA": "a", "BBB": "b", "Count": 0}
result["Count"] += 1
</code></pre>
<p>mypy generates this error message:</p>
<pre class="lang-none prettyprint-override"><code>Unsupported operand types for + ("object" and "int")
</code></pre>
<p>What are some easy ways to fix this?</p>
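<p>One easy fix, as a sketch: describe the per-key types with a <code>TypedDict</code> so mypy no longer infers <code>object</code> for the mixed str/int values.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict

class Result(TypedDict):
    AAA: str
    BBB: str
    Count: int

result: Result = {"AAA": "a", "BBB": "b", "Count": 0}
result["Count"] += 1   # mypy now knows "Count" is an int
</code></pre>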
|
<python><python-typing><mypy>
|
2024-12-02 17:34:00
| 2
| 1,322
|
Steve
|
79,245,015
| 2,801,669
|
Pydantic objects as elements in a Polars dataframe get automatically converted to dicts (structs)?
|
<p>What puzzles me is that when running</p>
<pre class="lang-py prettyprint-override"><code>class Cat(pydantic.BaseModel):
name: str
age: int
cats = [Cat(name="a", age=1), Cat(name="b", age=2)]
df = pl.DataFrame({"cats": cats})
df = df.with_columns(pl.lit(0).alias("acq_num"))
def wrap(batch):
return Cat(name="c", age=3)
df = df.group_by("acq_num").agg(pl.col("cats").map_batches(wrap, return_dtype=pl.Struct).alias("cats"))
type(df["cats"][0][0])
# dict
</code></pre>
<p>the resulting entries are dicts, even though the function "wrap" returns a Cat.
So polars automatically converts it to a dict, presumably by calling pydantic's model_dump?</p>
<p>Changing to</p>
<pre class="lang-py prettyprint-override"><code>df = df.group_by("acq_num").agg(pl.col("cats").map_batches(wrap, return_dtype=pl.Object).alias("cats"))
</code></pre>
<p>results in the error:</p>
<pre><code>SchemaError: expected output type 'Object("object", None)', got 'Struct([Field { name: "name", dtype: String }, Field { name: "age", dtype: Int64 }])'; set `return_dtype` to the proper datatype
</code></pre>
<p>I am confused by this conversion happening.
How can I prevent it?</p>
|
<python><dataframe><python-polars><pydantic>
|
2024-12-02 17:22:51
| 1
| 1,080
|
newandlost
|
79,245,011
| 5,115,928
|
Cassandra Python driver custom retryPolicy doesn't catch ReadFailure
|
<p>I created a custom retryPolicy to implement backoff for my Keyspaces cluster.</p>
<p>It works for read timeouts; I do see logs with <code>retry on_read_timeout</code>. But it doesn't retry on <code>ReadFailure</code>, and my understanding is that it should, thanks to the <code>on_request_error</code> method. Instead it fails immediately with the following error: <code>Error from server: code=1300 [Replica(s) failed to execute read] message=\\"Operation failed - received 0 responses and 1 failures\\" info={\'consistency\': \'LOCAL_QUORUM\', \'required_responses\': 2, \'received_responses\': 0, \'failures\': 1}</code></p>
<p>cassandra-driver==3.25.0
cassandra-sigv4==4.0.2</p>
<p>Here is my retry policy:</p>
<pre><code>from cassandra.policies import RetryPolicy
class KeyspacesRetryPolicy(RetryPolicy):
def __init__(self, RETRY_MAX_ATTEMPTS=5, base_delay=0.5, max_delay=10):
# retry_num starts at 0
self.max_retry_num = RETRY_MAX_ATTEMPTS-1
self.base_delay = base_delay
self.max_delay = max_delay
def __backoff(self, retry_num):
# exponential backoff delay
delay = min(self.base_delay * (2 ** (retry_num + 1)), self.max_delay)
print(f"Backing off for {delay} seconds (retry number {retry_num})")
time.sleep(delay)
def on_read_timeout(self, query, consistency, required_responses, received_responses, data_retrieved, retry_num):
if retry_num <= self.max_retry_num:
self.__backoff(retry_num)
print("retry on_read_timeout")
return self.RETRY, consistency
else:
return self.RETHROW, None
def on_write_timeout(self, query, consistency, write_type, required_responses, received_responses, retry_num):
if retry_num <= self.max_retry_num:
self.__backoff(retry_num)
print("retry on_write_timeout")
return self.RETRY, consistency
else:
return self.RETHROW, None
def on_unavailable(self, query, consistency, required_replicas, alive_replicas, retry_num):
if retry_num <= self.max_retry_num:
self.__backoff(retry_num)
print("retry on_unavailable")
return self.RETRY, consistency
else:
return self.RETHROW, None
def on_request_error(self, query, consistency, error, retry_num):
if retry_num <= self.max_retry_num:
self.__backoff(retry_num)
print("retry on_request_error")
return self.RETRY, consistency
else:
return self.RETHROW, None
</code></pre>
|
<python><cassandra><amazon-keyspaces><datastax-python-driver>
|
2024-12-02 17:21:05
| 1
| 306
|
sechstein
|
79,244,988
| 1,430,607
|
Unable to integrate asyncio and scrapy
|
<p>I have this spider:</p>
<pre class="lang-py prettyprint-override"><code>import logging
from scrapy import Spider, Request, settings
from dummy import settings
from dummy.items import DummyItem
LOGGER = logging.getLogger(__name__)
class DummySpider(Spider):
name = 'DummySpider'
def start_requests(self) -> Request:
LOGGER.info("!!!!!!!!!!!!!!!!!!!!!!!!!! run start_requests")
from twisted.internet import reactor
LOGGER.info(f'reactor_class: {reactor.__class__}')
LOGGER.info(f'scrapy_twisted_reactor: {settings.TWISTED_REACTOR}')
yield Request('https://google.com', dont_filter=True)
async def parse(self, _, **kwargs) -> DummyItem:
LOGGER.info("!!!!!!!!!!!!!!!!!!!!!!!!!! run parse")
item = DummyItem()
item['id'] = 42
yield item
</code></pre>
<p>If I run it with scrapy:</p>
<pre class="lang-py prettyprint-override"><code>2024-12-03 13:54:19 [scrapy.core.engine] INFO: Spider opened
2024-12-03 13:54:19 [dummy.spiders.dummy] INFO: !!!!!!!!!!!!!!!!!!!!!!!!!! run start_requests
2024-12-03 13:54:19 [dummy.spiders.dummy] INFO: reactor_class: <class 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'>
2024-12-03 13:54:19 [dummy.spiders.dummy] INFO: scrapy_twisted_reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-12-03 13:54:19 [dummy.spiders.dummy] INFO: !!!!!!!!!!!!!!!!!!!!!!!!!! run parse
2024-12-03 13:54:19 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/>
{'id': 42}
2024-12-03 13:54:19 [scrapy.core.engine] INFO: Closing spider (finished)
</code></pre>
<p>Now, if I try to use CrawlerRunner instead, it just hangs:</p>
<pre class="lang-py prettyprint-override"><code>from scrapy.utils.reactor import install_reactor
install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor')
import asyncio
import logging
from scrapy.crawler import CrawlerRunner
from scrapy.utils.defer import deferred_to_future
from twisted.internet import reactor
from dummy.spiders.dummy import DummySpider
LOGGER = logging.getLogger(__name__)
class TaskError(Exception):
pass
async def run_scraper() -> None:
crawler_runner = CrawlerRunner()
deferred = crawler_runner.crawl(DummySpider)
print(deferred)
future = deferred_to_future(deferred)
print(future)
print(reactor.__class__)
await deferred_to_future(deferred)
if __name__ == "__main__":
asyncio.run(run_scraper())
</code></pre>
<pre><code><Deferred at 0x71cf0c6c3e90>
<Future pending cb=[Deferred.asFuture.<locals>.checkCancel() at .venv/lib/python3.12/site-packages/twisted/internet/defer.py:1214]>
<class 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'>
</code></pre>
|
<python><scrapy><python-asyncio><twisted>
|
2024-12-02 17:13:55
| 2
| 31,741
|
Luis Sieira
|
79,244,802
| 19,916,174
|
pytest-django: Model class sites.models.Site doesn't declare an explicit app_label
|
<p>There have been lots of similar questions, but most of them are irrelevant to <code>pytest</code>. Running <code>manage.py runserver</code> works correctly for me; it only fails with pytest.</p>
<p>When running <code>pytest --collectonly --verbose -n0</code>, <code>pytest</code> fails with the following error:</p>
<pre><code>============================= test session starts ==============================
platform linux -- Python 3.12.6, pytest-8.3.3, pluggy-1.5.0 -- /home/jasongrace2282/repos/website5/.venv/bin/python3
cachedir: .pytest_cache
django: version: 5.1.3, settings: director.settings (from ini)
rootdir: /home/jasongrace2282/repos/website5/manager
configfile: pyproject.toml
testpaths: director
plugins: django-4.9.0, anyio-4.6.2.post1, xdist-3.6.1
collecting ... collected 1 item / 1 error
<Dir manager>
<Package director>
<Dir apps>
<Package auth>
<Module tests.py>
<Function test_password_login>
Make sure logging in via password on the dev env works.
==================================== ERRORS ====================================
________________ ERROR collecting director/apps/sites/tests.py _________________
director/apps/sites/tests.py:3: in <module>
from . import parser
director/apps/sites/parser.py:6: in <module>
from .models import DockerAction, DockerImage, Domain
director/apps/sites/models.py:39: in <module>
class Site(models.Model):
../.venv/lib/python3.12/site-packages/django/db/models/base.py:134: in __new__
raise RuntimeError(
E RuntimeError: Model class sites.models.Site doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
=========================== short test summary info ============================
ERROR director/apps/sites/tests.py - RuntimeError: Model class sites.models.S...
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
====================== 1 test collected, 1 error in 0.17s ======================
</code></pre>
<p>My <code>director/apps/sites/apps.py</code> looks like:</p>
<pre class="lang-py prettyprint-override"><code>class SitesConfig(AppConfig):
default_auto_field = "django.db.models.BigAutoField"
name = "director.apps.sites"
</code></pre>
<p>And my <code>settings.INSTALLED_APPS</code> has the following content:</p>
<pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"debug_toolbar",
"django_browser_reload",
"django_linear_migrations",
"social_django",
"director.apps.auth",
"director.apps.users",
"director.apps.sites",
"heroicons",
]
if DEBUG:
INSTALLED_APPS += [
"django_extensions",
]
</code></pre>
<p>Additionally, my directory structure looks like this</p>
<pre class="lang-none prettyprint-override"><code>.
├── conftest.py
├── director
│ ├── apps
│ │ ├── auth
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── __init__.py
│ │ │ ├── migrations
│ │ │ │ └── __init__.py
│ │ │ ├── models.py
│ │ │ ├── oauth.py
│ │ │ ├── tests.py
│ │ │ ├── urls.py
│ │ │ └── views.py
│ │ ├── sites
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── __init__.py
│ │ │ ├── migrations
│ │ │ │ ├── 0001_initial.py
│ │ │ │ ├── 0002_alter_dockerimage_name.py
│ │ │ │ ├── __init__.py
│ │ │ │ └── max_migration.txt
│ │ │ ├── models.py
│ │ │ ├── parser.py
│ │ │ ├── tests.py
│ │ │ ├── urls.py
│ │ │ └── views.py
│ ├── asgi.py
│ ├── djtypes.py
│ ├── __init__.py
│ ├── settings
│ │ ├── __init__.py
│ ├── urls.py
│ └── wsgi.py
├── manage.py
└── pyproject.toml
</code></pre>
<p>With a pytest config of</p>
<pre class="lang-ini prettyprint-override"><code>[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "director.settings"
python_files = ["tests.py", "test_*.py"]
</code></pre>
|
<python><django><testing><pytest><pytest-django>
|
2024-12-02 16:17:06
| 1
| 344
|
Jason Grace
|
79,244,782
| 3,792,852
|
Why is this asynchronous event loop blocked?
|
<p>I have a python application that handles remote control of a drone. I mainly employ traditional multithreading, but I have to support asynchronous programming too since some third party libs insist on using it. Currently, I have the following situation:</p>
<p>The current flying mission is temporarily halted, which runs an asynchronous while loop that constantly checks for a resume flag and then waits a few milliseconds using <code>asyncio.sleep</code>. During that time, a websocket gets a command to adjust the drone's yaw angle. This starts a new thread which runs the appropriate drone code. However, this code is never executed; the while loop seems to block all execution of further asynchronous code, despite the constant calls to <code>asyncio.sleep</code>.</p>
<p>The following sscce illustrates the process:</p>
<pre><code>import asyncio
import threading
import time
from concurrent.futures import ThreadPoolExecutor
class Drone():
def __init__(self):
self.ex = ThreadPoolExecutor(max_workers=5)
self.base_loop = asyncio.get_event_loop()
async def run(self):
print("Mission was temporarily halted, starting waiting loop...")
while True:
await asyncio.sleep(.2)
async def adjust_yaw(self):
print("Yaw angle adjusted.")
def sender(self):
time.sleep(2)
print("adjusting yaw angle...")
asyncio.run_coroutine_threadsafe(self.adjust_yaw(), self.base_loop)
if __name__ == '__main__':
drone = Drone()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
th = threading.Thread(target=drone.sender, name="MissionRunner", args=[])
th.start()
loop.run_until_complete(drone.run())
</code></pre>
<p>As you can see, <code>"Yaw angle adjusted."</code> is never shown.</p>
<p>What would be the best way to unblock this loop?</p>
<p>It would be great if the changes could be limited to the <code>sender</code> method. As already mentioned, I'm just including asyncio because I have to and I wouldn't want to make big adjustments in my working codebase just because of this one library.</p>
<p><strong>Addendum</strong></p>
<p>I tried jsbueno's answers, which work perfectly in the example provided. However, putting them into my actual code, the event loop still refuses to run anything in the sender method. I don't know which other parts in my code are relevant to this behaviour and showing all code involved would blow this thread out of proportion.</p>
<p>The only working solution so far is not to care. I don't memorize or set any loops, and in the sender method, I simply await the methods with whatever event loop is currently running. The relevant methods will throw errors in the form</p>
<pre><code>RuntimeError: Task <Task pending coro=<main() running at /home/pi/mission/cpa/socket_listener.py:264> cb=[_run_until_complete_cb() at /usr/lib/python3.7/asyncio/base_events.py:158]> got Future <Future pending cb=[wrap_future.<locals>._call_check_cancel() at /home/pi/venv_mission/lib/python3.7/site-packages/aiogrpc/utils.py:52]> attached to a different loop
</code></pre>
<p>But before that, all necessary interactions with the drone are completed. So the program complains, but it works. An unsatisfying solution, but a solution nonetheless...</p>
|
<python><python-asyncio>
|
2024-12-02 16:10:51
| 1
| 355
|
user3792852
|
79,244,745
| 2,473,382
|
Python layer in lambda via cdk
|
<p><strong>TL;DR</strong>: the requirements.txt file ends up in the layer, not the actual modules.</p>
<p>I have a very simple Lambda, created by the CDK.</p>
<pre class="lang-py prettyprint-override"><code>from aws_cdk import aws_lambda_python_alpha as aws_lambda_python
from aws_cdk import aws_iam, aws_lambda, Duration, Stack
from constructs import Construct
class Lambda(Stack):
def __init__(self, scope: Construct, construct_id: str):
super().__init__(scope, construct_id)
aws_lambda_python.PythonFunction(
self,
"LambdaFunction",
entry="src/lambda",
index="size.py",
handler="lambda_handler",
runtime=aws_lambda.Runtime.PYTHON_3_9,
layers=[
aws_lambda_python.PythonLayerVersion(
self,
"lambda-layer",
entry="src/layers/",
compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_9],
)
],
)
</code></pre>
<p>This works, as in I see the lambda in the console, as well as the content of <code>src/lambda</code>, and a layer.</p>
<p>The file <code>src/layers/requirements.txt</code> exists, and is very small:</p>
<pre><code>boto3
botocore
jsons
structlog
</code></pre>
<p>When I look at the layer: <code>aws lambda get-layer-version-by-arn --arn <arn copied from the lambda console></code> and download/unzip Content.Location, I only get requirements.txt, not the actual modules. And of course, when running the lambda I get <code>module jsons not found</code> (for instance).</p>
<p>What do I need to do to get the actual dependencies into the layer, not just the requirements file?</p>
|
<python><aws-lambda><aws-cdk><aws-lambda-layers>
|
2024-12-02 15:59:03
| 0
| 3,081
|
Guillaume
|
79,244,724
| 18,744,117
|
Either incorrect spacing or incorrect size using matplotlib shared y axis and aspect equal
|
<p><strong>Context</strong></p>
<p>I'm trying to display some timing data of differing algorithms which depend on some input size and another parameter. The goal is to have <code>n</code> plots showing the various algorithms and one summary plot.</p>
<p>The algorithm plots have Parameter to Execution Time axes and different lines for each input size, whilst the summary plot shows the Input Size to Minimal Execution Time (parameter selected optimally).</p>
<p><strong>Problem</strong></p>
<p>For this I have the following code</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.patheffects as pe
data = {
"Case1": {
10: ([10,20,30], [ 100, 20, 80 ]),
1000: ([10,20,30], [ 200, 40, 100 ]),
},
"Case2": {
10: ([10,20,30], [ 50, 60, 70 ]),
1000: ([10,20,30], [ 100, 200, 300 ]),
}
}
minimum_data = {
"Case1": ([10,1000],[20, 40]),
"Case2": ([10,1000],[50, 100])
}
minimum = 10
maximum = 400
plot_count = 2 + 1
fig, axs = plt.subplots(1, plot_count, figsize=(6*plot_count, 6), sharey=True)
*individual_plots, group_plot = axs
for ax, ( category, grouped ) in zip( individual_plots, data.items() ):
for group, datum in grouped.items():
ax.loglog(datum[0], datum[1])
ax.text(datum[0][0], datum[1][0], f'size {group}', path_effects=[pe.withStroke(linewidth=4, foreground="white")])
ax.set_aspect("equal", adjustable="box")
ax.set_xlabel("parameter")
ax.set_ylabel("total")
ax.set_title(category)
ax.set_ylim([minimum,maximum])
for category, data in minimum_data.items():
group_plot.loglog(data[0], data[1], label=category)
group_plot.set_aspect("equal", adjustable="box")
group_plot.set_xlabel("size")
group_plot.set_ylabel("total")
group_plot.set_title("minimal totals")
group_plot.set_ylim([minimum,maximum])
fig.legend(
title="categories",
loc='outside center right'
)
plt.show()
</code></pre>
<p>Which produces something along the lines of</p>
<p><a href="https://i.sstatic.net/bmKtLqcU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmKtLqcU.png" alt="bad spacing and bad y axis" /></a></p>
<p>Where the subplots are too far apart and the shared y axis does not seem to be working.</p>
<p>Stretching the figure "fixes" the shared y axis, but makes the spacing worse.</p>
<p><a href="https://i.sstatic.net/TMBFnRuJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMBFnRuJ.png" alt="acceptable y axis, terrible spacing" /></a></p>
<p>Conversely, shrinking the image does the opposite.</p>
<p><a href="https://i.sstatic.net/J0TUes2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J0TUes2C.png" alt="acceptable spacing, terrible y axis" /></a></p>
<p><strong>Question</strong></p>
<p>How would I keep the shared y axis and make the margins between the different plots identical (also working for more than 3 plots in total)?</p>
|
<python><matplotlib><plot>
|
2024-12-02 15:52:44
| 0
| 683
|
Sam Coutteau
|
79,244,701
| 9,640,384
|
How to get name of the script used in a job on Cloudera ML platform
|
<p>I want to programmatically retrieve the name of the script used in the current job that runs a python script on the Cloudera ML platform.</p>
<ul>
<li>The <code>__file__</code> magic variable doesn't work, because in the background our Cloudera runtime uses ipython to execute the script</li>
<li>I could get the script associated with the specific job via an API call retrieving meta-information about the job, but I fail to find the job ID in the environment variables</li>
</ul>
<p>What else could I try?</p>
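<p>For reference, this is roughly the probing I did so far (a minimal sketch; the <code>"JOB"</code> substring filter is just a guess on my part):</p>
<pre><code>import os
# __file__ is not defined because the runtime executes the script through ipython
try:
    print(__file__)
except NameError:
    print("__file__ is not available in this runtime")
# look for anything in the environment that might be the job id
for key, value in os.environ.items():
    if "JOB" in key.upper():
        print(key, value)
</code></pre>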
|
<python><cloudera>
|
2024-12-02 15:45:20
| 2
| 3,263
|
Mischa Lisovyi
|
79,244,510
| 2,753,095
|
xfce4-terminal cannot be killed
|
<p>From a Python program, I start <code>xfce4-terminal</code> as a subprocess. When calling <code>.terminate()</code>, xfce4-terminal is still there. Not nice. So I tried to kill it more abruptly - surprisingly, the process with the corresponding PID is not there anymore... Some ancient and dangerous black magic is happening there (?). See for yourself:</p>
<pre><code>>>> import subprocess
>>> proc=subprocess.Popen(["xfce4-terminal"])
>>> #you should see the terminal is started...
>>> proc.terminate()
>>> #it is still there, alive and kicking...
>>> import os
>>> os.kill(proc.pid, 9) #let's try something more radical (SIGKILL)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ProcessLookupError: [Errno 3] No such process
>>> # WTF???
</code></pre>
<p>I was curious about this behaviour, so I reproduced the issue in bash; it has nothing to do with Python's subprocess, of course:</p>
<pre><code>(base) [matias@kashyyyk bec]$ xfce4-terminal &
[1] 167148
(base) [matias@kashyyyk bec]$ kill 167148
bash: kill: (167148) - No such process
</code></pre>
<p>(the terminal is still there)</p>
<p>I am really puzzled. What's going on?</p>
|
<python><bash><subprocess><kill><xfce>
|
2024-12-02 14:49:52
| 1
| 7,998
|
mguijarr
|
79,244,459
| 20,591,261
|
How to Use Aggregation Functions as an Index in a Polars DataFrame?
|
<p>I have a Polars DataFrame, and I want to create a summarized view where aggregated values (e.g., unique IDs, total sends) are displayed in a format that makes comparison across months easier. Here's an example of my dataset:</p>
<p>My example dataframe:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
"Channel": ["X", "X", "Y", "Y", "X", "X", "Y", "Y", "X", "X", "Y", "Y", "X", "X", "Y", "Y"],
"ID": ["a", "b", "b", "a", "e", "b", "g", "h", "a", "a", "k", "a", "b", "n", "o", "p"],
"Month": ["1", "2", "1", "2", "1", "2", "1", "2", "1", "2", "1", "2", "1", "2", "1", "2"]
})
</code></pre>
<p>Currently, I use the following <code>group_by()</code> approach to calculate the number of unique IDs and the total number of sends for each Month and Channel:</p>
<pre><code>(
df
.group_by(
pl.col("Month"),
pl.col("Channel")
)
.agg(
pl.col("ID").n_unique().alias("Uniques ID"),
pl.col("ID").len().alias("Total sends")
)
)
shape: (4, 4)
┌───────┬─────────┬────────────┬─────────────┐
│ Month ┆ Channel ┆ Uniques ID ┆ Total sends │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ u32 ┆ u32 │
╞═══════╪═════════╪════════════╪═════════════╡
│ 1 ┆ X ┆ 3 ┆ 4 │
│ 1 ┆ Y ┆ 4 ┆ 4 │
│ 2 ┆ X ┆ 3 ┆ 4 │
│ 2 ┆ Y ┆ 3 ┆ 4 │
└───────┴─────────┴────────────┴─────────────┘
</code></pre>
<p>However, my actual dataset is much larger and has more aggregation functions, so I want a format that better highlights comparisons across months. Ideally, I want the output to look like this:</p>
<pre><code>| Channels | agg_func | months | months |
|----------|--------------|--------|--------|
| | | 1 | 2 |
| X | Uniques ID | 3 | 3 |
| X | Total sends | 4 | 4 |
| Y | Uniques ID | 4 | 3 |
| Y | Total sends | 4 | 4 |
</code></pre>
<p>I believe I could use <code>.pivot()</code> and pass the aggregation functions as part of the index. But, I'm not sure how to implement this directly without creating an auxiliary DataFrame. Any suggestions?</p>
|
<python><python-polars>
|
2024-12-02 14:33:51
| 1
| 1,195
|
Simon
|
79,244,353
| 1,600,821
|
Syn dependency when trying to use maturin on a Mac
|
<p>I am trying to follow the <a href="https://www.maturin.rs/tutorial" rel="nofollow noreferrer">quickstart of maturin</a>. However, after creating a project and copying the <code>Cargo.toml</code> over I always get this issue when I run <code>cargo build</code>.</p>
<pre><code>> cargo build
warning: unused manifest key: build-system
warning: unused manifest key: tool
Updating crates.io index
error: failed to select a version for the requirement `syn = "^2.0.59"`
candidate versions found which didn't match: 2.0.56, 2.0.55, 2.0.54, ...
location searched: crates.io index
required by package `pyo3-macros-backend v0.23.1`
... which is depended on by `pyo3-macros v0.23.1`
... which is depended on by `pyo3 v0.23.1`
... which is depended on by `guessing-game v0.1.0 (/Users/vincent/Development/guessing-game)`
</code></pre>
<p>I tried installing Rust (I had a version locally that was multiple years old at this point), but it did not help. Google also did not return anything useful. Anybody know what is going on here?</p>
|
<python><rust><maturin>
|
2024-12-02 13:56:32
| 1
| 35,145
|
cantdutchthis
|
79,244,267
| 2,852,466
|
Python code to get all the subclass objects of a given base class and call the function defined in the base class
|
<h2>Edit:</h2>
<p>The question has been closed as a duplicate of <a href="https://stackoverflow.com/questions/328851/printing-all-instances-of-a-class">Printing all instances of a class</a>.</p>
<p>However, there are two differences here.</p>
<ol>
<li>I want to create objects (instances) of all the subclasses.</li>
<li>I can't modify the BaseClass to collect all the objects into a list. I need some mechanism to do this outside the BaseClass.</li>
</ol>
<h2>Original question</h2>
<p>I am new to Python (coming from a Java background) and am experimenting with a few things. One of them is to get all the subclasses (the subclasses of the subclasses as well) and call a function on each subclass object. The function is defined in the base class.</p>
<p>Python modules structure:</p>
<p><a href="https://i.sstatic.net/Jph5qQA2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jph5qQA2.png" alt="Modules structure" /></a></p>
<p>Code in <code>base_class.py</code></p>
<pre><code>class BaseClass:
def __init__(self, entity_name):
self.entity_name = entity_name
def get_entity_name(self) -> str:
return self.entity_name
</code></pre>
<p>Code in <code>test_module.py</code></p>
<pre><code>from test.sub_classes import SubClass1, SubClass2
from test.sub_classes_scanner import SubClassesScanner
sub_class_1 = SubClass1('SubClass_1_test')
sub_class_2 = SubClass2('SubClass_2_test')
print(sub_class_1.get_entity_name())
print(sub_class_2.entity_name)
all_subclasses = SubClassesScanner.get_all_existing_sub_classes()
for subclass in all_subclasses:
print(subclass)
try:
print(subclass.get_entity_name())
except Exception as e:
print(e)
try:
print(subclass.entity_name)
except Exception as e:
print(e)
</code></pre>
<p>Code in <code>sub_classes_scanner.py</code></p>
<pre><code>from typing import Optional, Type, Set
from test.base_class import BaseClass
from test.sub_classes import SubClass1, SubClass2
class SubClassesScanner:
@staticmethod
def get_all_existing_sub_classes(cls: Optional[Type[BaseClass]]=None,) -> Set[Type[BaseClass]]:
if not cls:
cls = BaseClass
return set(cls.__subclasses__()).union(
[
s
for c in cls.__subclasses__()
for s in SubClassesScanner.get_all_existing_sub_classes(c)
]
)
# return {SubClass1(), SubClass2()}
</code></pre>
<p>When I run the <code>test_module.py</code> code, I get the following output</p>
<pre><code><class 'test.sub_classes.SubClass1'>
BaseClass.get_entity_name() missing 1 required positional argument: 'self'
type object 'SubClass1' has no attribute 'entity_name'
<class 'test.sub_classes.SubClass2'>
BaseClass.get_entity_name() missing 1 required positional argument: 'self'
type object 'SubClass2' has no attribute 'entity_name'
</code></pre>
<p>However, creating the object directly and invoking the function <code>get_entity_name()</code> works fine without errors (i.e., it prints the entity name passed as the parameter).</p>
<p>What should be done to make it work as intended? The goal is to scan all the subclasses (there could be as many as 100 subclasses) from different modules and packages dynamically (maybe tens of them typically), invoke the function, and get a proper response for each subclass.</p>
|
<python><python-3.x>
|
2024-12-02 13:26:47
| 0
| 1,001
|
Ravi
|
79,244,260
| 11,023,821
|
Python behave: get list of selected features in before_all
|
<p>For a pre-processing (cross-cutting) task, I would like to obtain a list of all selected "Features" already in the before_all hook. However, I only get very basic data via the Context there, like "Context.config", where, as far as I can see, only an in-memory representation of the run configuration (command-line parameters) is accessible.</p>
<p>I mean, behave must have such a list eventually since this is what it processes. How can I access it?</p>
<p>Background: I want to extract some data out of all selected Features and perform a central, expensive pre-processing in the before_all hook.</p>
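<p>For illustration, this is roughly what I have in <code>environment.py</code>; the attribute in the commented part is made up, it is just the kind of thing I am hoping exists somewhere:</p>
<pre><code>def before_all(context):
    # context.config only exposes the run configuration (command-line options),
    # e.g. the raw paths given on the command line:
    print(context.config.paths)
    # What I would like instead (hypothetical attribute):
    # for feature in context.selected_features:
    #     expensive_preprocessing(feature.filename)
</code></pre>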
|
<python><bdd><python-behave>
|
2024-12-02 13:24:22
| 0
| 1,719
|
Ralf Ulrich
|
79,244,250
| 880,783
|
How can I convert a TypeVarTuple to a ParamSpec?
|
<p>(I am almost certain the wording of the question does not make sense, but I have yet to find a better one.)</p>
<p>I have the following code that I want to type-check:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Callable
from typing import Generic, TypeVar, TypeVarTuple
C = TypeVar("C")
T = TypeVar("T")
Ts = TypeVarTuple("Ts")
class Decorator1(Generic[T]):
def __init__(self, /, type: type[T]) -> None:
super().__init__()
def __call__(self, function: Callable[[C, T], None], /) -> Callable[[C, T], None]:
return function
class DecoratorNa(Generic[T]):
# Q: how to prevent "T" from becoming "object" here?
def __init__(self, /, *type: type[T]) -> None:
super().__init__()
def __call__(self, function: Callable[[C, T], None], /) -> Callable[[C, T], None]:
return function
class DecoratorNb(Generic[*Ts]):
def __init__(self, /, *types: *Ts) -> None:
super().__init__()
# Q: how to convert "*Ts" from "type[int], type[str]" to "int, str" here?
def __call__(
self, function: Callable[[C, *Ts], None], /
) -> Callable[[C, *Ts], None]:
return function
class MyClass:
# This is given and supposed to type-check:
@Decorator1(int)
def print_number(self, number: int) -> None: ...
@Decorator1(str)
def print_string(self, string: str) -> None: ...
# # This is supposed to error:
# @Decorator1(int)
# def print_another_string(self, string: str) -> None: ...
# This is supposed to type-check:
@DecoratorNa(int, str)
def print_number_and_string_a(self, number: int, string: str) -> None: ...
# This is supposed to type-check:
@DecoratorNb(int, str)
def print_number_and_string_b(self, number: int, string: str) -> None: ...
</code></pre>
<p>The constructor of <code>Decorator</code> accepts one (or more) types at runtime (compare <code>PySide6.QtCore.Slot()</code>), and the decorated function is supposed to accept one (or more) parameters <em>of that type</em>.</p>
<p>This works fine with a single type using <code>type[T]</code>, yet I fail to find the same thing for multiple types. I tried <code>*type: type[T]</code> but that evaluates to <code>T = object</code>. With a <code>TypeVarTuple</code>, I end up not seeing how I can convert <code>*Ts</code> to something like a <code>ParamSpec</code>. I know <code>typing.get_args</code>, but I am not sure how this is supposed to be applied at type-check time.</p>
|
<python><python-typing><python-decorators>
|
2024-12-02 13:19:24
| 1
| 6,279
|
bers
|
79,244,109
| 1,542,011
|
Worker thread is started multiple times when running Dash App
|
<p>I'm trying to (a) have a worker thread that does some background work and (b) monitor its progress using Dash. For some reason, when executing the code, the script is loaded twice, and two worker threads end up running. The code below reproduces the situation.</p>
<pre><code>import threading
import time
from threading import Thread
from dash import Dash, html
print("Load script")
def worker():
print(f"Worker started; Thread {threading.get_ident()}")
i = 0
while True:
time.sleep(5)
print(f"Thread {threading.get_ident()}, iteration {i}")
i += 1
if __name__ == "__main__":
app = Dash("App")
app.layout = html.Div([html.Div(id='dummy')])
worker_thread = Thread(target=worker)
worker_thread.start()
app.run(debug=True)
</code></pre>
<p>The output is:</p>
<pre><code>Load script
Worker started; Thread 6179811328
Dash is running on http://127.0.0.1:8050/
* Serving Flask app 'App'
* Debug mode: on
Load script
Worker started; Thread 6121779200
Thread 6179811328, iteration 0
Thread 6121779200, iteration 0
</code></pre>
<p>As the output shows, the script is loaded a 2nd time, and another worker thread is created. I wonder why and how I can prevent it.</p>
|
<python><multithreading><plotly-dash>
|
2024-12-02 12:35:32
| 1
| 1,490
|
Christian
|
79,243,949
| 8,000,016
|
Connection error from GCP Composer (Airflow) to GCP PubSub
|
<p>I'm trying to subscribe to a GCP Pub/Sub subscription from GCP Composer to get the messages, but I got the following error:</p>
<pre><code>airflow.exceptions.AirflowException: The conn_id `google_cloud_default` isn't defined
</code></pre>
<p>My DAG code is the following:</p>
<pre><code>from airflow.providers.google.cloud.sensors.pubsub import PubSubPullSensor
subscribe_pubsub_task = PubSubPullSensor(
task_id="subscribe_pubsub",
project_id=GCP_REPORTING_ID,
subscription=PUBSUB_SUBSCRIPTION,
ack_messages=True,
deferrable=True,
)
</code></pre>
<p>I've reviewed the Airflow docs: <a href="https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/sensors/pubsub/index.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/sensors/pubsub/index.html</a></p>
<p>How do I get the connection ID so that it authenticates using a GCP service account?</p>
|
<python><google-cloud-platform><airflow><google-cloud-pubsub>
|
2024-12-02 11:45:20
| 1
| 1,264
|
Alberto Sanmartin Martinez
|
79,243,808
| 5,350,089
|
Python Tkinter grid column width not expanding; need first row not scrollable
|
<p>I have three columns with a scrollbar. The columns need to expand and stretch to the window size, but I am unable to increase the column width according to the screen size. I also need the top (first) row to stay fixed, like a heading. The code is below:</p>
<pre><code>import tkinter as tk
from tkinter import *
from tkinter import ttk
root = tk.Tk()
root.title("TEST")
root.geometry("800x600")
frame=ttk.Frame(root)
frame.pack(expand=1, fill=BOTH)
canvas = tk.Canvas(frame)
scrollbar = ttk.Scrollbar(frame, orient="vertical", command=canvas.yview)
canvas.configure(yscrollcommand=scrollbar.set)
frame3s = ttk.Frame(canvas)
frame3s.bind("<Configure>", lambda e: canvas.configure(scrollregion=canvas.bbox("all")))
RCnt = 0
DataRow = {}
LabCnt = 0;
for gh in range(40):
DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"),cursor="hand2", justify=tk.CENTER,relief="solid")
DataRow[LabCnt].grid(row=RCnt, column=0, sticky='ew')
LabCnt += 1
DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"),cursor="hand2", justify=tk.CENTER,relief="solid")
DataRow[LabCnt].grid(row=RCnt, column=1, sticky='ew')
LabCnt += 1
DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"),cursor="hand2", justify=tk.CENTER,relief="solid")
DataRow[LabCnt].grid(row=RCnt, column=2, sticky='ew')
LabCnt += 1
RCnt += 1
frame3s.columnconfigure(gh, weight=1)
frame3s.rowconfigure(gh, weight=1)
frame.columnconfigure(0, weight=1)
frame.rowconfigure(0, weight=1)
canvas.create_window((0, 0), window=frame3s, anchor="nw")
canvas.grid(row=0, column=0, sticky="nsew")
scrollbar.grid(row=0, column=1, sticky="ns")
def _on_mousewheel(event):
canvas.yview_scroll(int(-1 * (event.delta / 120)), "units")
canvas.bind_all("<MouseWheel>", _on_mousewheel)
while True:
root.update()
</code></pre>
<p>Please guide me on how to make the column widths expand according to the screen size.</p>
|
<python><tkinter><scrollbar><tkinter-canvas>
|
2024-12-02 11:04:22
| 1
| 445
|
Sathish
|
79,243,713
| 7,495,742
|
How to replace multiple special chars in a string
|
<p>I am trying to replace special characters in a string, but there are different occurrences and different characters. The actual code will only replace the first special character encountered, whereas I would like to replace all special characters in the string.</p>
<p>Edit: All the answers I get are specific to this string, whereas the answer needs to work for any string! Thanks everyone, problem solved!</p>
<pre><code>titre = "4K ULtra HD | SAMSUNG UHD Demo׃ LED TV"
print("original : "+ titre)
chars=["?",":","*","~","|","#","/","\""]
for i in chars:
if i in titre:
titre=titre.replace(i,"-")
print("modified : "+ titre)
#Output : modified : 4K ULtra HD - SAMSUNG UHD Demo׃ LED TV
</code></pre>
<p>This will only replace the "|" and not the ":", expected result is:</p>
<pre><code>4K ULtra HD - SAMSUNG UHD Demo- LED TV
</code></pre>
|
<python>
|
2024-12-02 10:36:04
| 2
| 357
|
Garbez François
|
79,243,572
| 189,777
|
Agent override given tool result
|
<p>I am writing the following agent conversation:</p>
<pre><code>from autogen import ConversableAgent
from typing import Annotated
from model_config import get_model_config
llm_config = get_model_config('GPT-4o')
# Define simple calculator functions
def add_numbers(
a: Annotated[int, "First number"], b: Annotated[int, "Second number"]
) -> str:
print(f"Adding {a} and {b}")
return f"The sum of {a} and {b} is {a - b}." ### Mistake is in purpose!!! ###
# Define the assistant agent that suggests tool calls.
assistant = ConversableAgent(
name="CalculatorAssistant",
system_message="You are a helpful AI calculator. Return 'TERMINATE' when the task is done.",
llm_config=llm_config,
)
# The user proxy agent is used for interacting with the assistant agent and executes tool calls.
user_proxy = ConversableAgent(
name="User",
is_termination_msg=lambda msg: msg.get("content") is not None
and "TERMINATE" in msg["content"],
human_input_mode="NEVER",
)
# Register the tool signatures with the assistant agent.
assistant.register_for_llm(name="add_numbers", description="Add two numbers")(
add_numbers
)
# Register the tool functions with the user proxy agent.
user_proxy.register_for_execution(name="add_numbers")(add_numbers)
user_proxy.initiate_chat(assistant, message="What is the sum of 9 and 6?")
</code></pre>
<p>I expect that the result will be: <code>The sum of 9 and 6 is 3.</code></p>
<p>But I get the following result:</p>
<pre class="lang-none prettyprint-override"><code>User (to CalculatorAssistant):
What is the sum of 9 and 6?
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
CalculatorAssistant (to User):
***** Suggested tool call (call_IRs1aIYp0h7zv5dJReyAlLT0): add_numbers *****
Arguments:
{"a":9,"b":6}
****************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION add_numbers...
Adding 9 and 6
User (to CalculatorAssistant):
User (to CalculatorAssistant):
***** Response from calling tool (call_IRs1aIYp0h7zv5dJReyAlLT0) *****
The sum of 9 and 6 is 3.
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
CalculatorAssistant (to User):
**The sum of 9 and 6 is 15. TERMINATE**
</code></pre>
<p>It seems that the agent does not rely on the tool result and has a "second opinion" on it.</p>
<p>Is it possible to force the tool result on the agent?</p>
|
<python><artificial-intelligence><ms-autogen>
|
2024-12-02 09:42:23
| 0
| 837
|
Ronen
|
79,243,356
| 240,566
|
How do I convert a csv file to an apache arrow ipc file with dictionary encoding
|
<p>I am trying to use pyarrow to convert a csv to an apache arrow ipc with dictionary encoding turned on. The following appears to convert the csv to an arrow ipc file:</p>
<pre><code>file = "./in.csv"
arrowFile = "./out.arrow"
with pa.OSFile(arrowFile, 'wb') as arrow:
with pa.csv.open_csv(file) as reader:
with pa.RecordBatchFileWriter(arrow, reader.schema) as writer:
for batch in reader:
writer.write_batch(batch)
</code></pre>
<p>I tried the following to use dictionary encoding:</p>
<pre><code>convert_options = pa.csv.ConvertOptions(auto_dict_encode = True)
with pa.OSFile(arrowFile, 'wb') as arrow:
with pa.csv.open_csv(file, convert_options=convert_options) as reader:
with pa.RecordBatchFileWriter(arrow, reader.schema) as writer:
for batch in reader:
writer.write_batch(batch)
</code></pre>
<p>But I get the following error:</p>
<pre><code>File "pyarrow/ipc.pxi", line 507, in pyarrow.lib._CRecordBatchWriter.write_batch
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Dictionary replacement detected when writing IPC file format. Arrow IPC files only support a single non-delta dictionary for a given field across all batches.
</code></pre>
<p>How do I fix the code to use dictionary encoding?</p>
|
<python><pyarrow><apache-arrow>
|
2024-12-02 08:13:55
| 1
| 10,496
|
Jay Askren
|
79,243,348
| 587,587
|
How to connect to an Oracle7 database
|
<p>I have an ancient Oracle7 database (version 7.3.4.5.0) that I need to extract some data from.</p>
<p>I mainly work with Python, so naturally I started looking at <a href="https://pypi.org/project/oracledb/" rel="nofollow noreferrer">oracledb</a>. However, it looks like the oldest version of oracledb only supports Oracle 9 and up (I got error DPY-4011 when attempting a connection).</p>
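<p>For reference, this is roughly the connection attempt that fails (connection details are placeholders):</p>
<pre><code>import oracledb
# thin mode; I also tried thick mode via oracledb.init_oracle_client()
conn = oracledb.connect(user="scott", password="tiger", dsn="myhost:1521/ORCL")
# -> fails with DPY-4011 against the Oracle 7.3.4.5.0 server
</code></pre>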
<p>So I started looking at <a href="https://pypi.org/project/cx-Oracle/" rel="nofollow noreferrer">cx_Oracle</a> instead. It looks like I should probably use cx_Oracle 3.0 or older.</p>
<p>However I can't find any binary distributables for it. Are they available anywhere? If not, can I install it directly from PyPi somehow?</p>
<p>If all else fails, is there any other documented way to write a script for Oracle7 (VBScript? JScript? VBA?)?</p>
|
<python><oracle-database><python-2.x><oracle7>
|
2024-12-02 08:09:53
| 1
| 492
|
Anton Lahti
|
79,243,106
| 931,625
|
How to extract the parent JSON property name using JSONPath?
|
<p>I would like to extract the <code>content_filter_results</code> property names that have been filtered, e.g., <code>foo</code> here:</p>
<pre class="lang-json prettyprint-override"><code>{
"choices": [
{
"content_filter_results": {
"foo": {
"filtered": true,
"severity": "safe"
},
"bar": {
"filtered": false,
"severity": "safe"
},
"baz": {
"filtered": false,
"severity": "safe"
}
}
}
]
}
</code></pre>
<p>This JSONPath expression <code>$.choices[*].content_filter_results[?(@.filtered==true)]</code> selects its children properties, e.g.,</p>
<pre class="lang-json prettyprint-override"><code>[
{
"filtered": true,
"severity": "safe"
}
]
</code></pre>
<p>but I would like to be able to get <code>foo</code>.</p>
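<p>In other words, for the document above the desired result would be:</p>
<pre class="lang-json prettyprint-override"><code>["foo"]
</code></pre>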
|
<python><json><jsonpath>
|
2024-12-02 06:17:08
| 1
| 4,120
|
Kel Solaar
|
79,243,015
| 4,515,046
|
Pythonanywhere and flask app requiring Playwright, does not install
|
<p>I have a Flask app that uses crawl4ai, which has a dependency on Playwright, listed here; despite not directly interacting with the browser object, we still need to install Playwright:</p>
<p><a href="https://crawl4ai.com/mkdocs/basic/installation/" rel="nofollow noreferrer">https://crawl4ai.com/mkdocs/basic/installation/</a></p>
<p>Pythonanywhere however, say this is not possible, and the only workaround I can find is about directly using their browser version pre-installed:</p>
<p><a href="https://help.pythonanywhere.com/pages/Playwright/" rel="nofollow noreferrer">https://help.pythonanywhere.com/pages/Playwright/</a></p>
<p><code>The next step would normally to install the browsers using playwright install. However, that won't work on PythonAnywhere</code></p>
<p>This does not help, because I'm interacting with crawl4ai, not the Playwright browser directly.</p>
<p>What are my options here, since I want to use crawl4ai and it has a dependency that I simply cannot install? Do I need to find another Python hosting provider?</p>
|
<python><flask><pythonanywhere>
|
2024-12-02 05:22:02
| 0
| 3,546
|
yoyoma
|
79,242,541
| 13,175,203
|
In Polars, how do you create a group counter / group ID?
|
<p>How do you get a <code>group_id</code> column like this, grouping by columns <code>col1</code> and <code>col2</code> ?</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>group_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Z</td>
<td>1</td>
</tr>
<tr>
<td>A</td>
<td>Y</td>
<td>2</td>
</tr>
<tr>
<td>A</td>
<td>Z</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>Z</td>
<td>3</td>
</tr>
</tbody>
</table></div>
<p>based on such a DataFrame :</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
'col1': ['A', 'A', 'A', 'B'],
'col2': ['Z', 'Y', 'Z', 'Z']}
)
</code></pre>
<p>In other words, I'm looking for the polars equivalent of R <em>data.table</em>'s <code>.GRP</code> (<code>df[, group_id:=.GRP, by = .(col1, col2)]</code>). Thanks!</p>
<p><em>Context:</em> I want to build an event ID because, in my data, I have many detailed rows for one event. Once the event ID is created, I will use it to perform various window operations. I prefer to have this event ID rather than keeping a list of grouping variables.</p>
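<p>For illustration, this is the kind of window operation I currently run by passing the full list of grouping columns (recent polars syntax), which I would rather express over a single <code>group_id</code> column:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
    pl.len().over(["col1", "col2"]).alias("rows_per_event")
)
</code></pre>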
|
<python><dataframe><python-polars>
|
2024-12-01 21:44:44
| 2
| 491
|
Samuel Allain
|
79,242,476
| 3,431,557
|
Optimising requests in my Python code using asynchronicity
|
<p>I’m currently attempting to optimise a Python script I wrote that makes multiple HTTP requests using the <code>aiohttp</code> library. I want to take advantage of asynchronous programming in order to handle requests concurrently, without blocking the execution of the rest of the program. I have written the following code with the goal of improving performance by making non-blocking HTTP requests, but after running the new asynchronous code, I don't observe any significant performance improvements when compared to the previous, synchronous version of the script.</p>
<p>I suspect there might be something I'm missing or doing incorrectly. I'm hoping to understand what could be going wrong with my implementation: whether it's a problem with how I'm structuring the asynchronous code, how I'm managing concurrency, or something else that could be affecting the performance.</p>
<pre><code>import traceback
import requests
import aiohttp
from bs4 import BeautifulSoup
# Helper function to get BeautifulSoup object
async def get_soup(session, url, headers, cookies):
async with session.get(url, headers=headers, cookies=cookies, timeout=30) as response:
if response.status == 200:
html_text = await response.text()
return BeautifulSoup(html_text, "html.parser")
else:
raise Exception(f"Failed to retrieve {url}, status code: {response.status}")
# Worker function for processing each page
async def process_page(session, site_index, start_page_num, end_page_num, new_listings):
site = sites[site_index]
cur_url = site['url']
site_headers = site['headers']
cassing = site['casings'][0]['cassingIdentity']
link_MD = site['casings'][0]['link']
sitename = site['pageTitle']
individual_Scraper_Metadata = site['individualCasings'][0]
COUNTER = site['URL_Counter']
# Prepare URL for first page
current_page_num = start_page_num
end_page_num *= COUNTER
async with aiohttp.ClientSession() as session:
try:
while True:
# Format the URL dynamically based on the page number
if cur_url.count('{}') > 1:
cur_url2 = cur_url.format(current_page_num-1, current_page_num-1)
else:
cur_url2 = cur_url.format(current_page_num)
print(f"Requesting URL: {cur_url2}")
# Retrieve and parse HTML
soup = await get_soup(session, cur_url2, site_headers, COOKIEJAR)
# Get the main listings from the page
raw_listings = soup.find_all(cassing["tag"], class_=cassing["classname"])
print(f"Found {len(raw_listings)} raw listings on page {current_page_num}")
if len(raw_listings) == 0:
print(f"No listings found at {cur_url2}")
break # Exit if no listings are found
# Extract data from each listing
for raw_listing in raw_listings:
page_listing = individualListingScraper(individual_Scraper_Metadata, raw_listing, link_MD)
page_listing['pageNo'] = current_page_num
page_listing['siteName'] = sitename
new_listings.append(page_listing)
# Terminate the loop based on the page number
if end_page_num != -1 and current_page_num >= end_page_num:
print(f"Reached last page {current_page_num}")
break
current_page_num += COUNTER
return finalReturn(sitename, new_listings)
except Exception as e:
print(f"Error processing page {current_page_num}: {str(e)}")
traceback.print_exc()
# Main function to coordinate scraping tasks
async def autoScraper(SITE_INDEX, START_PAGE_NUM=1, END_PAGE_NUM=-1):
global COOKIEJAR
new_listings = []
if SITE_INDEX >= len(sites):
return "site number not found"
site = sites[SITE_INDEX]
try:
# Set up cookies for subsequent requests
initial_response = await aiohttp.ClientSession().get("https://www.google.com/")
COOKIEJAR = initial_response.cookies
# Start scraping process
await process_page(aiohttp.ClientSession(), SITE_INDEX, START_PAGE_NUM, END_PAGE_NUM, new_listings)
return new_listings
except Exception as e:
print(f"Error in autoScraper: {str(e)}")
return new_listings
def individual_listing_scraper(individual_metadata, raw_listing_html, link_metadata, site_headers, cookie_jar, timeout_sec):
try:
# Extract the listing URL
listing_url_tag = raw_listing_html.find(link_metadata['tag'], class_=link_metadata['classname'])
if not listing_url_tag:
print("Error: Listing URL not found in the raw HTML.")
return {}
url = listing_url_tag.get(link_metadata['attrib'])
if not url:
print("Error: URL attribute not found in the tag.")
return {}
url = link_metadata['baseurlLink'] + url
# Fetch the HTML of the individual listing page
response = requests.get(url, headers=site_headers, cookies=cookie_jar, timeout=timeout_sec, verify=False)
response.raise_for_status() # Raise an error for bad HTTP responses
individual_page_html = BeautifulSoup(response.text, 'html.parser')
# Extract the listing data based on metadata
tag = individual_metadata['cassingIdentity']['tag']
classname = individual_metadata['cassingIdentity']['classname']
_id = individual_metadata['cassingIdentity']['id']
index = individual_metadata['cassingIdentity']['index']
listing_raw_html = individual_page_html.find_all(tag, class_=classname, id=_id)
if not listing_raw_html or len(listing_raw_html) <= index:
print(f"Error: Listing data not found or index out of range.")
return {}
listing_data = listing_raw_html[index]
# Assuming `bulklisting_scraper` function processes this data further
listing = bulklisting_scraper(individual_metadata, listing_data)
listing['link'] = url
return listing
except requests.exceptions.RequestException as e:
print(f"Request error: {e}")
except Exception as e:
print(f"Unexpected error: {e}")
traceback.print_tb(e.__traceback__)
return {}
</code></pre>
|
<python><beautifulsoup><aiohttp>
|
2024-12-01 21:06:30
| 0
| 637
|
user119264
|
79,242,432
| 298,288
|
await asyncio.to_thread blocks complete event loop?
|
<p>I am trying to create a Python program that - when triggered via a websocket - does some blocking work in a thread. I would expect my program to be able to continue to react to the stop signal also received via websocket, but it does not.</p>
<p>It works when I change from <code>result = await blocking</code> to <code>task = asyncio.create_task(blocking)</code>, but I don't understand why it does not work with <code>result = await blocking</code>. Can someone please explain that? Thanks!</p>
<pre><code>import asyncio
import json
import websockets
class MweThreading:
def __init__(self):
self.continue_blocking_work = False
def do_blocking_work(self):
print("do_blocking_work was called")
while self.continue_blocking_work:
print("doing blocking work ...")
print("Ended blocking work (after while loop)")
async def handle_websocket(self,websocket):
async for message in websocket:
payload = json.loads(message)
if payload["action"] == "start_blocking_work":
print("handle websocket action start_blocking_work")
self.continue_blocking_work = True
blocking = asyncio.to_thread(self.do_blocking_work)
# this does not work - BUT WHY? <=========================================
result = await blocking
# this works
# task = asyncio.create_task(blocking)
print("after blocking")
elif payload["action"] == "stop_blocking_work":
print("handle websocket action stop_blocking_work")
self.continue_blocking_work = False
else:
raise Exception("error handling JSON payload")
async def main(self):
async with websockets.serve(self.handle_websocket, "localhost", 8765):
print("Websocket server started on ws://localhost:8765")
await asyncio.Future()
if __name__ == "__main__":
mwe = MweThreading()
asyncio.run(mwe.main())
</code></pre>
|
<python><python-asyncio><python-multithreading>
|
2024-12-01 20:40:20
| 0
| 18,394
|
me.at.coding
|
79,242,365
| 14,230,633
|
Create custom mlflow PythonModel with `predict()` function that takes no arguments
|
<p>I'd like to create a custom mlflow <a href="https://mlflow.org/docs/latest/_modules/mlflow/pyfunc/model.html#PythonModel" rel="nofollow noreferrer">PythonModel</a> whose predict function does not take <em>any</em> arguments. It looks like <code>mlflow.pyfunc.load_model</code> (which I use to recover the custom model) returns a <a href="https://mlflow.org/docs/latest/_modules/mlflow/pyfunc.html#PyFuncModel" rel="nofollow noreferrer">PyFuncModel</a> whose <code>.predict</code> method requires a <code>data</code> argument. Is there any clever way to get around that?</p>
<p>Side note: I know it's odd to have a model whose predict function doesn't require arguments, but I have my reasons!</p>
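<p>A minimal sketch of what I mean (the class and its return value are made up for illustration):</p>
<pre class="lang-py prettyprint-override"><code>import mlflow.pyfunc

class NoInputModel(mlflow.pyfunc.PythonModel):
    # PythonModel forces the (context, model_input) signature on me,
    # even though the prediction does not depend on any input
    def predict(self, context, model_input):
        return {"answer": 42}

# After logging and re-loading:
# loaded = mlflow.pyfunc.load_model(model_uri)
# loaded.predict(None)  # <- having to pass a dummy data argument is what I'd like to avoid
</code></pre>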
|
<python><mlflow><mlops>
|
2024-12-01 19:45:03
| 0
| 567
|
dfried
|
79,242,289
| 7,373,353
|
Azure Container-App Post-Request with PDF-Upload returns "502 Bad Gateway", "507 Insufficient Storage" or "503 Service Unavailable"
|
<p>I have an Azure Container App containing a Docker image of a FastAPI application that is supposed to alter an uploaded PDF and return the altered PDF. On my localhost the application runs fine; however, after deployment the POST request leads to the above responses. It says:
<code>upstream connect error or disconnect/reset before headers. reset reason: connection termination</code></p>
<p>However, the simple hello-world GET request works just fine, so the deployment itself should have worked.</p>
<p>The log stream does not contain any useful information.</p>
<p>I am thankful for any help, since I am not super experienced using Azure.</p>
<p>main.py</p>
<pre><code>from fastapi import FastAPI, HTTPException, File, UploadFile
import logging
import io
from app.redact_pdf import redact_pdf
from fastapi.responses import StreamingResponse
app = FastAPI()
@app.post("/http_trigger")
async def http_trigger(file: UploadFile = File(...), language: str = None):
logging.info('FastAPI HTTP trigger function processed a request.')
if file.content_type == 'application/pdf':
input_pdf_stream = io.BytesIO(await file.read())
if not language:
raise HTTPException(status_code=400, detail="The request does not contain language parameter.")
output_pdf_stream = redact_pdf(input_pdf_stream, language)
return StreamingResponse(
io.BytesIO(output_pdf_stream.read()),
media_type="application/pdf"
)
else:
raise HTTPException(status_code=400, detail="The request does not contain PDF data.")
@app.get("/hello")
async def read_root():
return {"message": "hello world"}
</code></pre>
<p>redact_pdf.py</p>
<pre><code>import pymupdf
import re
import io
import cv2
import numpy as np
from flair.data import Sentence
from flair.models import SequenceTagger
def redact_pdf(input_pdf_stream, language):
if language not in ["EN", "DE"]:
raise ValueError("Invalid language selection. Please choose 'EN' for English or 'DE' for German.")
if language == "EN":
tagger = SequenceTagger.load("flair/ner-english-large")
else:
tagger = SequenceTagger.load("flair/ner-german-large")
# Open PDF from the BytesIO stream
doc = pymupdf.open("pdf", input_pdf_stream)
zip_code_pattern = re.compile(r'\b\d{5}\b') # Example: 12345
email_pattern = re.compile(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b')
phone_patterns = [
re.compile(r'\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b'), # 123-456-7890 or 123 456 7890
re.compile(r'\(\d{3}\)\s*\d{3}[-.\s]?\d{4}\b'), # (123) 456-7890
re.compile(r'\b\d{4}[-.\s]?\d{3}[-.\s]?\d{3}\b'), # 1234-567-890
]
for page_num in range(len(doc)):
page = doc.load_page(page_num)
text = page.get_text()
sentence = Sentence(text)
tagger.predict(sentence)
entities = [(entity.start_position, entity.end_position, entity.text) for entity in sentence.get_spans('ner') if entity.tag in ["PER", "ORG", "LOC"]]
for match in zip_code_pattern.finditer(text):
start, end = match.span()
entities.append((start, end, match.group()))
for pattern in phone_patterns:
for match in pattern.finditer(text):
start, end = match.span()
entities.append((start, end, match.group()))
for match in email_pattern.finditer(text):
start, end = match.span()
entities.append((start, end, match.group()))
print(entities)
for start, end, entity_text in entities:
search_results = page.search_for(entity_text)
if search_results:
for rect in search_results:
page.add_redact_annot(rect, text="REDACTED", fill=(0, 0, 0))
page.apply_redactions()
pix = page.get_pixmap()
img = np.frombuffer(pix.samples, dtype=np.uint8).reshape(pix.height, pix.width, pix.n)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
if 30 < w < 400 and 5 < h < 150:
page.add_redact_annot(pymupdf.Rect(x, y, x+w, y+h), text="REDACTED", fill=(0, 0, 0))
page.apply_redactions()
output_pdf_stream = io.BytesIO()
doc.save(output_pdf_stream, garbage=4, deflate=True, clean=True)
output_pdf_stream.seek(0)
return output_pdf_stream
</code></pre>
<p>Dockerfile</p>
<pre><code># Start with the official Python image
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
gcc \
build-essential \
libgl1-mesa-glx \
libglib2.0-0 \
&& apt-get clean
# Install python packages
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# Expose the port the app runs on
EXPOSE 8000
# Command to run the application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
|
<python><azure><docker><fastapi><azure-container-apps>
|
2024-12-01 19:05:35
| 1
| 395
|
leabum
|