QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string)
|---|---|---|---|---|---|---|---|---|
79,401,639
| 498,537
|
I am attempting to use Django Ninja for the first time and running into a strange error
|
<p>I can't quite understand the error I am receiving. I am simply trying to set up a model schema for my model. I am an old Django hand, but Ninja is new to me. What am I doing wrong here? I would love some help and feedback.</p>
<p>My model is this</p>
<pre><code>class Program(models.Model):
mentor = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
description = models.TextField()
start_date = models.DateField()
end_date = models.DateField()
attendees = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="attendees")
participants = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="participants")
created_on = models.DateTimeField(auto_now_add = True)
updated_on = models.DateTimeField(auto_now = True)
</code></pre>
<p>My api.py has the following definition</p>
<pre><code>class MentorOutSchema(ModelSchema):
class Meta:
model = Program
fields = [
"mentor",
"description",
"start_date",
"end_date",
"attendees",
"participants",
]
</code></pre>
<p>My endpoint is this</p>
<pre><code>@router.get('/programs')
async def mentor_programs(request, response=list[MentorOutSchema]):
return Program.objects.filter(mentor=request.user)
</code></pre>
<p>When I start the server, I get the following error</p>
<pre class="lang-none prettyprint-override"><code> @router.get('/programs')
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/router.py", line 268, in decorator
self.add_api_operation(
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/router.py", line 319, in add_api_operation
path_view.add_operation(
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/operation.py", line 426, in add_operation
operation = OperationClass(
^^^^^^^^^^^^^^^
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/operation.py", line 331, in __init__
super().__init__(*args, **kwargs)
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/operation.py", line 82, in __init__
self.signature = ViewSignature(self.path, self.view_func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/signature/details.py", line 87, in __init__
self.models: TModels = self._create_models()
^^^^^^^^^^^^^^^^^^^^^
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/ninja/signature/details.py", line 171, in _create_models
model_cls = type(cls_name, (base_cls,), attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 219, in __new__
set_model_fields(cls, bases, config_wrapper, ns_resolver)
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 537, in set_model_fields
fields, class_vars = collect_model_fields(cls, bases, config_wrapper, ns_resolver, typevars_map=typevars_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/pydantic/_internal/_fields.py", line 220, in collect_model_fields
_warn_on_nested_alias_in_annotation(ann_type, ann_name)
File "/home/vivekv/.cache/pypoetry/virtualenvs/pifnow-Zp6OiFzb-py3.12/lib/python3.12/site-packages/pydantic/_internal/_fields.py", line 258, in _warn_on_nested_alias_in_annotation
for anno_arg in args:
^^^^
TypeError: 'member_descriptor' object is not iterable
</code></pre>
|
<python><django><django-ninja>
|
2025-01-31 02:16:26
| 1
| 2,308
|
vivekv
|
79,401,525
| 9,951,273
|
Overloaded method invocation causes type check error
|
<p>I posted a <a href="https://stackoverflow.com/questions/79401185/overload-a-method-based-on-init-variables">question earlier</a> asking how to overload Python methods using generics.</p>
<p>The solution provided is shown below.</p>
<pre><code>from __future__ import annotations
from typing import Literal, overload
class WoodData: ...
class ConcreteData: ...
class Foo[T: Literal["wood", "concrete"]]:
def __init__(self, data_type: T) -> None:
self.data_type = data_type
@overload
def get_data(self: Foo[Literal["wood"]]) -> WoodData: ...
@overload
def get_data(self: Foo[Literal["concrete"]]) -> ConcreteData: ...
def get_data(self):
if self.data_type == "wood":
return WoodData()
return ConcreteData()
foo = Foo("concrete")
x = foo.get_data()
</code></pre>
<p>This example works as expected, but the problem arises when we try to call <code>get_data</code> from another method of the same class.</p>
<p>For example, I add the <code>bar</code> method below.</p>
<pre><code>from __future__ import annotations
from typing import Literal, overload
class WoodData: ...
class ConcreteData: ...
class Foo[T: Literal["wood", "concrete"]]:
def __init__(self, data_type: T) -> None:
self.data_type = data_type
@overload
def get_data(self: Foo[Literal["wood"]]) -> WoodData: ...
@overload
def get_data(self: Foo[Literal["concrete"]]) -> ConcreteData: ...
def get_data(self):
if self.data_type == "wood":
return WoodData()
return ConcreteData()
def bar(self):
self.get_data()
</code></pre>
<p>This gives the typing error:</p>
<pre><code>Cannot access attribute "get_data" for class "Foo[T@Foo]*"
Could not bind method "get_data" because "Self@Foo[T@Foo]" is not assignable to parameter "self"
"Foo[T@Foo]*" is not assignable to "Foo[Literal['wood']]"
Type parameter "T@Foo" is covariant, but "T@Foo" is not a subtype of "Literal['wood']"
Type "Literal['wood', 'concrete']" is not assignable to type "Literal['wood']"
"Literal['concrete']" is not assignable to type "Literal['wood']"
Could not bind method "get_data" because "Self@Foo[T@Foo]" is not assignable to parameter "self"
"Foo[T@Foo]*" is not assignable to "Foo[Literal['concrete']]"
Type parameter "T@Foo" is covariant, but "T@Foo" is not a subtype of "Literal['concrete']"
</code></pre>
<p>How can I access the <code>get_data</code> function from another method without getting a typing error?</p>
|
<python><python-typing>
|
2025-01-31 00:10:57
| 1
| 1,777
|
Matt
|
79,401,467
| 10,969,942
|
How to trace the caller of an async function in VS Code Debug mode when the call stack is lost?
|
<p>I am debugging a Python <code>async</code> function in VS Code, and I have hit a breakpoint inside the function. However, the call stack in the debug panel does not show the parent function that called it. Because the coroutine is resumed by the event loop rather than called synchronously, the synchronous call stack is cleared and I cannot see who originally invoked the function.</p>
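<p>A minimal sketch of the situation described, assuming the coroutine was scheduled as a task rather than awaited directly by its caller (names are hypothetical):</p>
<pre><code>import asyncio


async def worker():
    # A breakpoint here shows only event-loop frames: the loop resumes worker(),
    # so schedule_work() is no longer on the synchronous call stack.
    await asyncio.sleep(0)


async def schedule_work():
    task = asyncio.create_task(worker())  # caller frame is gone once the loop runs the task
    await task


asyncio.run(schedule_work())
</code></pre>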
|
<python><python-3.x><visual-studio-code><python-asyncio><vscode-debugger>
|
2025-01-30 23:28:37
| 1
| 1,795
|
maplemaple
|
79,401,282
| 1,165,268
|
Unable to import azure.kusto.ingest
|
<p>I'm trying to import azure.kusto.ingest in my Python script, but I'm getting an ImportError stating that the module cannot be found.</p>
<p>I've already installed the required package using:</p>
<p><code>pip install azure-kusto-ingest</code></p>
<p>To confirm that the package is installed, I ran:</p>
<p><code>pip show azure-kusto-ingest</code></p>
<p>And it correctly displays the package details. However, when I try to import it I get the error below:</p>
<p><a href="https://i.sstatic.net/7oU1Bnue.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oU1Bnue.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/itpfvzLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itpfvzLj.png" alt="enter image description here" /></a></p>
|
<python><azure-data-explorer>
|
2025-01-30 21:33:50
| 1
| 1,242
|
Robert
|
79,401,186
| 3,245,855
|
Kivy Dynamic Image Grid View
|
<p>I'm using Python Kivy.
I want to display cards containing a title with an image underneath, in a grid with 5 columns.</p>
<p>The behavior of the following snippet is 90% of what I want.
The images always fit the same space, leaving empty space above/below or left/right, so they maintain their aspect ratio.
The only problem is that all the rows are scaled to fit the window. I want the height of the space where the image should fit to be a custom percentage of the window height, and to be able to scroll up and down to see the rows that don't fit in the current window.</p>
<pre><code>from kivy.app import App
from kivy.core.window import Window
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.image import Image
from kivy.uix.scrollview import ScrollView
# `records` and `image_cache` come from the app's own data layer (not shown here).
class MyApp(App):
def build(self):
scroll_view = ScrollView(do_scroll_x=False, do_scroll_y=True, smooth_scroll_end=15, scroll_wheel_distance=20, size=(Window.width, Window.height))
grid = GridLayout(cols=5, size_hint_x=1)
index = 0
for uuid, record in records.items():
image_texture = image_cache.get_image(uuid)
card = BoxLayout(orientation='vertical')
card.add_widget(Button(
text=record.name,
size_hint=(1, 0.2), # Stretch to fit column width (1 means full width)
on_press=lambda x: print(f"Button pressed: {x.text}")
))
card.add_widget(Image(
texture=image_texture,
size_hint_x=1,
))
grid.add_widget(card)
index += 1
if index >= 20:
break
scroll_view.add_widget(grid)
return scroll_view
if __name__ == '__main__':
MyApp().run()
</code></pre>
|
<python><kivy>
|
2025-01-30 20:39:58
| 1
| 1,119
|
Andrew
|
79,401,185
| 9,951,273
|
Overload a method based on init variables
|
<p>How can I overload the <code>get_data</code> method below to return the correct type based on the init value of <code>data_type</code> instead of returning a union of both types?</p>
<pre><code>from typing import Literal
DATA_TYPE = Literal["wood", "concrete"]
class WoodData: ...
class ConcreteData: ...
class Foo:
def __init__(self, data_type: DATA_TYPE) -> None:
self.data_type = data_type
def get_data(self) -> WoodData | ConcreteData:
if self.data_type == "wood":
return WoodData()
return ConcreteData()
</code></pre>
<p>I was thinking this could be done by specifying a generic for <code>Foo</code>, but I'm unsure of the implementation details.</p>
<p>I'd prefer not to pass <code>WoodData</code>/<code>ConcreteData</code> directly as a generic. This is because I have many methods returning conditional data types depending on whether the init var is <code>wood</code> or <code>concrete</code>.</p>
<p>To illustrate that last point, I know I could add a generic that takes one of the two return types like so:</p>
<pre><code>from typing import Literal
DATA_TYPE = Literal["wood", "concrete"]
class WoodData: ...
class ConcreteData: ...
class Foo[MY_RETURN_TYPE: WoodData | ConcreteData]:
def __init__(self, data_type: DATA_TYPE) -> None:
self.data_type = data_type
def get_data(self) -> MY_RETURN_TYPE:
if self.data_type == "wood":
return WoodData()
return ConcreteData()
</code></pre>
<p>But imagine I have tons of methods conditionally returning different types based on the value of <code>data_type</code>. I don't want to specify each of these as generics. I'd rather overload the methods on the class and have return types accurately inferred.</p>
<p>Lastly, I know I could split this into two separate subclasses, but it would be nice to keep them as one class if possible.</p>
|
<python><python-typing>
|
2025-01-30 20:39:37
| 1
| 1,777
|
Matt
|
79,401,030
| 5,312,606
|
Good way to define new Protocol for collection types
|
<p>I would like to define a new structural type for my code that is basically a non-mutable, ordered set.</p>
<p>The following code captures the gist of it</p>
<pre class="lang-py prettyprint-override"><code>from ordered_set import OrderedSet
from typing import Protocol, TypeVar
from collections.abc import Set, Sequence
T = TypeVar("T", covariant=True)
class ConstOrderedSet(Protocol, Set[T], Sequence[T]): # type: ignore[misc]
pass
x: ConstOrderedSet[int] = OrderedSet([1, 2, 3])
x.append(5) # correctly fails
x[1] # correctly passes
x[1] = 2 # correctly fails
</code></pre>
<p>and correctly accepts/rejects valid/invalid constructs.
<strong>But</strong> apparently only <code>Protocol</code>s are allowed in the inheritance tree of a <code>Protocol</code>, and <code>Set</code> and <code>Sequence</code> are not Protocols.
The code does what I want with <code>mypy</code>, but I wonder how to do this correctly <strong>without</strong> having to implement a full <code>ConstOrderedSet</code> Protocol on my own.</p>
|
<python><containers><python-typing>
|
2025-01-30 19:28:13
| 0
| 1,897
|
mcocdawc
|
79,400,993
| 11,062,613
|
Choosing an optimal partitioning strategy for a delta lake table on a laptop
|
<p>I'm working on a data pipeline using Polars and Delta Lake to store and update a grouped time series on a laptop with 16GB RAM.
The dataset consists of a daily time series with ~1,000 unique groups and a history of 125 years.
I cannot manage the entire dataset in a single Parquet file, so I chose Delta Lake as the storage method.</p>
<p>Right now, I'm trying to decide on the best partitioning strategy to balance query performance(!), file management and write efficiency.</p>
<p>My current pipeline:</p>
<p>Initial dataset</p>
<ul>
<li>group: String (~1,000 unique values).</li>
<li>date: Daily from 1900 to 2025.</li>
<li>5 numeric value columns.</li>
<li>Stored in a Delta Table.</li>
<li>Estimated size: 2β5 GB.</li>
</ul>
<p>Daily upserts:</p>
<ul>
<li>A small dataset (1β30 days) is merged daily, weekly, or monthly into the Delta table.</li>
</ul>
<p>Optimization steps:</p>
<ul>
<li>Z-Ordering on group and date.</li>
<li>Vacuuming to remove unnecessary files.</li>
</ul>
<p>A logical partitioning would be year/month/day, but that would create too many small files.
I'm considering two options:</p>
<p>Partition by year:</p>
<ul>
<li>Better for incremental updates and deletes.</li>
<li>Results in 125 partitions, leading to more files.</li>
</ul>
<p>Partition by decade:</p>
<ul>
<li>Reduces partitions to 13, meaning fewer small files.</li>
<li>Worse for incremental updates and deletes.</li>
</ul>
<p>Since Delta Lake requires partition columns to be stored, both strategies would require adding an extra column (year or decade) to the dataset.
However, I typically filter by date when reading the dataset, not year or decade, so query by partition is not useful in this case.</p>
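<p>A minimal sketch of that date-filtered read pattern (assuming <code>pl.scan_delta</code> is used; the date range is a placeholder):</p>
<pre><code>from datetime import date

import polars as pl

# Plain date filter on the Delta table; partition pruning only helps if the
# partition column (year/decade) can be derived from this predicate.
lf = pl.scan_delta("./table/").filter(
    pl.col("date").is_between(date(2020, 1, 1), date(2020, 12, 31))
)
df_2020 = lf.collect()
</code></pre>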
<p>My main questions: are there better partitioning strategies for Delta Lake in my case?
How can I efficiently manage file sizes without sacrificing query performance?</p>
<p>Here is a similar example where no partitioning is applied, which results in a single large parquet file:</p>
<pre><code>import os
import shutil
import psutil
from pathlib import Path
from datetime import date, timedelta
import polars as pl
from deltalake import DeltaTable
def generate_data(ngroups, ndates, ncols=5, start=date(1900, 1, 1), value=1.0, eager=True) -> pl.DataFrame | pl.LazyFrame:
groups = pl.LazyFrame({'group': pl.arange(1, ngroups+1, dtype=pl.Int64, eager=True).cast(pl.String)})
dates = pl.LazyFrame({'date': pl.date_range(start, start+timedelta(days=ndates-1), "1d", eager=True)})
lf = (
groups.join(dates, how='cross')
.with_columns(
[pl.lit(i*value, dtype=pl.Float64).alias(f'val_{i}') for i in range(1, ncols+1)])
)
return lf.collect() if eager else lf
print('Generating initial "large" dataset...')
df = generate_data(ngroups=1000, ndates=125*365, value=1.0, eager=True)
print(df.tail(3))
# ┌───────┬────────────┬───────┬───────┬───────┬───────┬───────┐
# │ group ┆ date       ┆ val_1 ┆ val_2 ┆ val_3 ┆ val_4 ┆ val_5 │
# │ ---   ┆ ---        ┆ ---   ┆ ---   ┆ ---   ┆ ---   ┆ ---   │
# │ str   ┆ date       ┆ f64   ┆ f64   ┆ f64   ┆ f64   ┆ f64   │
# ╞═══════╪════════════╪═══════╪═══════╪═══════╪═══════╪═══════╡
# │ 1000  ┆ 2024-11-28 ┆ 1.0   ┆ 2.0   ┆ 3.0   ┆ 4.0   ┆ 5.0   │
# │ 1000  ┆ 2024-11-29 ┆ 1.0   ┆ 2.0   ┆ 3.0   ┆ 4.0   ┆ 5.0   │
# │ 1000  ┆ 2024-11-30 ┆ 1.0   ┆ 2.0   ┆ 3.0   ┆ 4.0   ┆ 5.0   │
# └───────┴────────────┴───────┴───────┴───────┴───────┴───────┘
size = df.estimated_size("gb")
memory = psutil.virtual_memory().total/1024**3
print(f' size/memory => {size:.3}gb/{memory:.3}gb => {int(100*size/memory)}%')
# size/memory => 1.99gb/15.5gb => 12%
print('Saving initial "large" "dataset to delta table...')
delta_path = Path('./table/').resolve() # <= Path to delta table
if os.path.exists(delta_path):
shutil.rmtree(delta_path)
df.write_delta(delta_path, mode="overwrite",
delta_write_options={"partition_by": []})
df = None # free memory
print('Upserting delta table by "small" dataset...')
(
generate_data(ngroups=1000, ndates=5, start=date(2024, 11, 30), value=10.0, eager=True)
.write_delta(
delta_path,
mode="merge",
delta_write_options={"partition_by": []},
delta_merge_options={
"predicate": "s.group = t.group AND s.date = t.date",
"source_alias": "s",
"target_alias": "t"})
.when_matched_update_all()
.when_not_matched_insert_all().execute()
)
print("Optimize zorder to sort by group & date...")
DeltaTable(delta_path).optimize.z_order(["group", "date"])
print("Remove unused files...")
DeltaTable(delta_path).vacuum(
retention_hours=0,
dry_run=False,
enforce_retention_duration=False,
)
print("Update complete.")
# shutil.rmtree(delta_path)
</code></pre>
<p>Edit:</p>
<p>Another approach could be to limit the number of rows per Parquet file using max_rows_per_file instead of partitioning.
However, if the dataset is ordered by group and date, most Parquet files would probably be completely rewritten during any incremental update.
If the dataset is stored by date and group instead, then only the latest file would be updated, reducing rewrites. Unfortunately, this would require additional sorting by group and date when reading the dataset.
Alternatively, you could set the compressed target_file_size to ~10 MB, which might be more straightforward.</p>
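<p>A hedged sketch of that row-capped write (the option names are assumptions about the installed <code>deltalake</code> writer and should be checked against its version):</p>
<pre><code># Assumed deltalake writer options forwarded via polars' delta_write_options;
# verify the exact parameter names for your deltalake version.
df.write_delta(
    delta_path,
    mode="overwrite",
    delta_write_options={
        "max_rows_per_file": 5_000_000,   # cap rows per parquet file (assumed option)
        "max_rows_per_group": 1_000_000,  # row-group size within each file (assumed option)
    },
)
</code></pre>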
|
<python><parquet><python-polars><delta-lake>
|
2025-01-30 19:11:47
| 0
| 423
|
Olibarer
|
79,400,976
| 8,365,359
|
I'm using the polars library but getting "NameError: name 'PyLazyFrame' is not defined". I'm using Python 3.9 and polars 1.21.0
|
<p>I'm trying to get data from a parquet file using SQL syntax.</p>
<p>The code:</p>
<pre><code>import polars as pl
# Ensure the variables are defined
parquet_file = r'C:\Users\Kperez\Documents\md_curated_20250115.parquet'
date = "2024-12-13"
df3 = pl.read_parquet(parquet_file)
# Register the DataFrame as a table and execute the SQL query
ctx = pl.SQLContext()
ctx.register("data", df3)
result = ctx.execute(f"SELECT Identifier, IQ_CLOSEPRICE FROM data where Date = '{date}'").collect()
print(result)
</code></pre>
<p>But I got an error like this:</p>
<p><em>NameError: name 'PyLazyFrame' is not defined</em></p>
<p>View:
<a href="https://i.sstatic.net/IYaMxj5W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYaMxj5W.png" alt="Error code with descrpiption" /></a></p>
<p>Any ideas?</p>
|
<python><parquet><python-polars><python-3.9><polars>
|
2025-01-30 19:05:37
| 2
| 430
|
qarly_blue
|
79,400,971
| 8,972,038
|
How to check total no. of tokens while using Langchain's 'create_pandas_dataframe_agent'?
|
<p>I am using langchain's <code>create_pandas_dataframe_agent</code> agent to analyse a dataframe. The code looks like below:</p>
<pre><code>from langchain_experimental.agents import create_pandas_dataframe_agent
import pandas as pd
from langchain_openai import AzureOpenAI
df = pd.read_csv("file_path")
llm = AzureOpenAI(
deployment_name=name, # I have a variable 'name'
temperature=0.0,
)
agent_executor = create_pandas_dataframe_agent(
llm,
df,
# Few other params
)
prompt = """ Some Text """
agent_executor.invoke(prompt)
</code></pre>
<p>Now, as per my understanding, when the agent's <code>invoke</code> is called, both the prompt and the <code>df</code> are passed to the LLM.</p>
<p><strong>Note:</strong> I am using <code>gpt 3.5 turbo instruct</code> as my LLM.</p>
<p>Now I want to check how many tokens are consumed when I run this code. Any idea how to find that out, preferably using some code?</p>
<p>I tried checking Azure Dashboard, but it's difficult to isolate tokens from a single request.</p>
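<p>A hedged sketch of one common way to surface token usage, assuming the OpenAI/Azure OpenAI callback in <code>langchain_community</code> is available for your LangChain version:</p>
<pre><code>from langchain_community.callbacks import get_openai_callback

# Everything invoked inside the context manager has its token usage accumulated on `cb`.
with get_openai_callback() as cb:
    agent_executor.invoke(prompt)

print(f"Prompt tokens:     {cb.prompt_tokens}")
print(f"Completion tokens: {cb.completion_tokens}")
print(f"Total tokens:      {cb.total_tokens}")
</code></pre>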
|
<python><large-language-model><py-langchain><langchain-agents>
|
2025-01-30 19:03:15
| 1
| 418
|
Ankush Pandit
|
79,400,928
| 13,971,251
|
Server error when trying to access webpage with Requests, but no error with Selenium
|
<p>I have a script that uses the <code>requests</code> module to check a webpage for news updates. It worked up until a week ago, but now I am having a weird error. I am able to access the base url (<a href="http://www.example.com" rel="nofollow noreferrer">www.example.com</a>) using <code>requests</code>, but when I go to <a href="http://www.example.com/news/123456" rel="nofollow noreferrer">www.example.com/news/123456</a> it appears to return a 500 error. This is what it says:</p>
<blockquote>
<p><strong>Internal Server Error</strong><br> Sorry, There were some technical issues while processing your request.</p>
</blockquote>
<p>However, I do not have this problem with <code>selenium</code>. Does anyone know what may be causing this error which is only happening with <code>requests</code>?</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'www.example.com/news/123456/'
#Headers that I got from the (working) selenium request
headers = {'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/90.0.4430.212 Safari/537.36'}
response = requests.get(url, headers=headers)
webpage_content = response.text
soup = BeautifulSoup(webpage_content, 'html.parser')
raw_lines = soup.find_all()
lines = '\n'.join([line.get_text() for line in raw_lines])
lines = lines.splitlines()
for line in lines:
print(line)
</code></pre>
|
<python><selenium-webdriver><python-requests>
|
2025-01-30 18:46:32
| 0
| 1,181
|
Kovy Jacob
|
79,400,827
| 2,079,306
|
Python FLASK Webapp zips a number of large files based on user selection, but the user doesn't know if it's hung
|
<p>I'm running a Flask Python web application that zips up a number of files and serves them to the user based on the user's filters.</p>
<p>The issue: after the user clicks download, the backend pulls all the files and zip creation starts, but this can take minutes. The user won't know if it's hung.</p>
<p>I decided streaming the zip file as it's being created gets the file to the user quicker, and it also lets the user know that the web app is working on it. The issue with this is in order to use the browser's download section (the little pop up or the download page with progress bars), you need to provide the content-length header, but we don't know the size of the zip file because it hasn't finished being created yet. I've tried my best to estimate the size of the zip file once it's complete, and I thought it would have been easy as my zip is just ZIP_STORED, but there is internal zip structure that I'm not able to accurately measure. The browser just ends up rejecting the download with ERR_CONTENT_LENGTH_MISMATCH.</p>
<p>I can provide a Server-Sent Events (SSE) route to make my own progress bar by reading the number of bytes sent and polling it in a separate /progress route (a minimal sketch of such a route follows the code below), but I really had my heart set on using the browser's download section, and it's a point of pride for me at this point. I could also just not stream it, then use SSE to provide updates while the zip is being created, and once it's finished send it with a content-length header... Not quite as nice as I'd like it to be though.</p>
<pre><code>import os
from datetime import datetime
# ZIP_64 and get_file_info come from the app's zip-streaming setup (not shown in the question).
def calculate_total_size(filenames):
total_size = 0
for file in filenames:
matching_filepath, _ = get_file_info(file)
if matching_filepath:
total_size += os.path.getsize(matching_filepath)
# Add overhead for ZIP file structure (22 bytes per file + 22 bytes for the central directory)
total_size += 22 * (len(filenames) + 1)
return total_size
def generate_file_entries(filenames):
for file in filenames:
matching_filepath, filename = get_file_info(file)
if matching_filepath:
file_stat = os.stat(matching_filepath)
modified_at = datetime.utcfromtimestamp(file_stat.st_mtime)
with open(matching_filepath, 'rb') as f:
chunk = f.read()
if isinstance(chunk, bytes): # Ensure only bytes are yielded
yield filename, modified_at, 0o600, ZIP_64, [chunk]
else:
print(f"Unexpected data type for file contents: {type(chunk)}")
</code></pre>
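<p>A minimal sketch of the SSE <code>/progress</code> fallback mentioned above (hypothetical names; the shared counter would be incremented by the zip generator as it yields chunks):</p>
<pre><code>import json
import time

from flask import Flask, Response

app = Flask(__name__)
progress = {"bytes_sent": 0}  # hypothetical shared state updated by the zip generator


@app.route("/progress")
def progress_stream():
    # Server-Sent Events: emit one "data:" frame per second with the current byte count.
    def stream():
        while True:
            yield f"data: {json.dumps(progress)}\n\n"
            time.sleep(1)

    return Response(stream(), mimetype="text/event-stream")
</code></pre>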
|
<python><web-applications><http-headers><zip><content-length>
|
2025-01-30 18:03:30
| 1
| 1,123
|
john stamos
|
79,400,733
| 2,130,515
|
How to access Gradio components created dynamically in a global function
|
<p>My goal:</p>
<ol>
<li>Drop a list of files.</li>
<li>Read the files and extract their columns.</li>
<li>Dynamically generate dropdown components so the user can select the source and target columns.</li>
<li>Return (globally) all the selections, so I can use them with the process_files() function.</li>
</ol>
<p>I think (3) should be solved using gr.State([]) but I am not sure how to update it.
Here is my code:</p>
<h2>Function to process files</h2>
<pre><code>import gradio as gr
import pandas as pd
def process_files(files, selections):
results = []
for file, selection in zip(files, selections):
file_name = file.name
source_col, target_col, source_lang, target_lang = selection
try:
if file_name.endswith(".csv"):
df = pd.read_csv(file.name)
elif file_name.endswith(".xlsx"):
df = pd.read_excel(file.name, engine="openpyxl")
else:
continue # Skip unsupported files
# Extract selected columns
extracted_data = df[[source_col, target_col]]
results.append(
    f"✅ Processed {file_name} ({source_lang} → {target_lang})")
except Exception as e:
results.append(f"Error processing {file_name}: {str(e)}")
return "\n".join(results)
</code></pre>
<h2>Gradio Interface</h2>
<pre><code>with gr.Blocks() as demo:
gr.Markdown("## Upload Multiple Files and Select Columns")
file_input = gr.File(file_types=[
".csv", ".xlsx"], label="Drop your files here", interactive=True, file_count="multiple")
all_selections = gr.State([])
@gr.render(inputs=[file_input], triggers=[file_input.change])
def generate_column_selectors(files):
if not files:
return "## No Files Uploaded", [] # Reset selections
with gr.Column():
for file in files:
try:
if file.name.endswith(".csv"):
df = pd.read_csv(file.name)
elif file.name.endswith(".xlsx"):
df = pd.read_excel(file.name, engine="openpyxl")
else:
continue
with gr.Row():
gr.Markdown(f"**{file.name}**") # Display filename
source_col = gr.Dropdown(
df.columns.tolist(), label="Source Column", interactive=True)
target_col = gr.Dropdown(
df.columns.tolist(), label="Target Column", interactive=True)
source_lang = gr.Textbox(
label="Source Language", placeholder="e.g., English", interactive=True)
target_lang = gr.Textbox(
label="Target Language", placeholder="e.g., French", interactive=True)
all_selections.append(
(source_col, target_col, source_lang, target_lang)) # this does not work.
except Exception as e:
gr.Markdown(f"Error processing {file.name}: {str(e)}")
submit_button = gr.Button("Submit & Process", interactive=True)
output_text = gr.Markdown()
submit_button.click(process_files, inputs=[
file_input, all_selections], outputs=[output_text])
demo.launch()
</code></pre>
|
<python><gradio>
|
2025-01-30 17:29:54
| 0
| 1,790
|
LearnToGrow
|
79,400,679
| 4,767,670
|
Plotting lambda functions in Python and mpmath.plot
|
<p>I'm using the <code>mpmath</code> plot function (which simply uses pyplot, as far as I understood).</p>
<p>Consider the following code:</p>
<pre><code>from math import cos, sin
import mpmath as mp
mp.plot([sin, cos], [0, 3]) # this is fine
l = [sin, cos]
mp.plot([lambda x: f(2*x) for f in l], [0, 3])
# this only plots sin(2x)!
</code></pre>
<p>Is there anything I'm missing here, or is it a bug in the <code>plot</code> function?</p>
|
<python><lambda><mpmath>
|
2025-01-30 17:10:58
| 1
| 1,807
|
NYG
|
79,400,579
| 13,046,093
|
How to create a column with randomly generated values in a pandas dataframe
|
<p>I want to assign a random float (from 0 to 1) to each unique value in a column of a Pandas dataframe.</p>
<p>Below is a dataframe with unique values of "region"; I want to create a new column with a unique randomly generated float (between 0 and 1) corresponding to each region.</p>
<p>I used the random function to generate a random number, but I couldn't figure out how to assign these random numbers to each region and make them a new column.</p>
<p>The goal also includes making sure the random number assigned to each region doesn't change in case of a re-run, so I set a seed.</p>
<pre><code>import pandas as pd
import numpy as np
import random
list_reg = ['region1', 'region2', 'region3', 'region4', 'region5', 'region6']
df_test = pd.DataFrame({
'region': list_reg,
'product1': [100, 250, 350, 555, 999999, 200000],
'product2': [41, 111, 12.14, 16.18, np.nan, 200003],
'product3': [7.04, 2.09, 11.14, 2000320, 22.17, np.nan],
'product4': [236, 249, 400, 0.56, 359, 122],
'product5': [None, 1.33, 2.54, 1, 0.9, 3.2]})
# in case of a re-run, make sure the randomly generated number doesn't change
random.seed(123)
random_genator = random.uniform(0.0001, 1.0000)
</code></pre>
<p>The desired goal would be something like below</p>
<p><a href="https://i.sstatic.net/jQH2EmFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jQH2EmFd.png" alt="enter image description here" /></a></p>
|
<python><pandas><random><random-seed>
|
2025-01-30 16:31:05
| 1
| 460
|
user032020
|
79,400,574
| 4,725,074
|
How to enforce string type when reading CSVs in DuckDB?
|
<p>I'm trying to read a CSV in DuckDB (Python) but one of the records has a string ID while the rest of the file has the ID field as an integer.</p>
<p>I'd like to force reading it as a string, how to do it?</p>
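<p>A hedged sketch of forcing a column to VARCHAR via <code>read_csv</code> options (the <code>types</code> argument is an assumption to check against your DuckDB version; <code>all_varchar=true</code> is the blunter alternative):</p>
<pre><code>import duckdb

# Override the inferred type for the id column only (assumed `types` option).
rel = duckdb.sql("SELECT * FROM read_csv('data.csv', types = {'id': 'VARCHAR'})")
print(rel.df().dtypes)
</code></pre>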
|
<python><csv><duckdb>
|
2025-01-30 16:30:39
| 2
| 5,892
|
Vzzarr
|
79,400,538
| 1,025,702
|
Import module that defines SQLAlchemy ORM models, dynamically injecting a base class, but with static type checking
|
<p>I'm refactoring a Python codebase to be modular, with each module defining its own SQLAlchemy models. Because the models are all defined in separate schema files which can be included or excluded based on an envfile, there needs to be a single <code>declarative_base()</code> which they all use.</p>
<p>The way I've approached this is to have each <code>schema.py</code> file define a function that takes a <code>Base</code> and then uses that as the parent for the class inheritance:</p>
<pre class="lang-py prettyprint-override"><code># path/to/my/module/schema.py
def schema(Base):
class File(Base):
__tablename__ = "files"
id: Mapped[str] = mapped_column(String(36))
return (File,)
</code></pre>
<p>Then, in my higher up application, I can dynamically load it in:</p>
<pre class="lang-py prettyprint-override"><code># main.py
Base = declarative_base()
for schema in schemas:
schema(Base)
</code></pre>
<p>This works fine. The issue is that because I'm returning a class from a function, it no longer shows up for Pylance static type checking.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code># path/to/my/module/Files.py
from .schema import schema
class Files():
files: List[File] = [] # <-- "File" is not defined
def __init__(self, Base):
(File,) = schema(Base)
self.File = File
some_files: List[File] = [] # <-- Variable not allowed in type expression
</code></pre>
<p>It's the "Variable not allowed in type expression" that is the real kicker. It won't allow me to use it because it's not available at runtime. It's no longer a "class" type, but a variable, because it's not static.</p>
<p>The only thing that has worked for both ways is to literally just duplicate the class definitions - once at a static level with a dummy <code>Base</code> and once returned from the function. Of course that's an awful idea, but it seems mad to not be able to do the equivalent in a normal DRY way.</p>
<p>I asked our robot overlords and they all seemed to hallucinate something to do with modifying the <code>__bases__</code> property but no dice.</p>
<p>Of course, the base could just be created at a root level and imported backwards from the deeper modules, but then that feels like data is flowing in the wrong direction; the vision being that everything flows one way.</p>
<p>Hopefully the above, highly abridged code explains the slight catch-22 with what I'm trying to achieve. Any ideas welcomed; thanks in advance.</p>
|
<python><sqlalchemy><python-typing>
|
2025-01-30 16:18:26
| 0
| 9,400
|
Matt Fletcher
|
79,400,512
| 22,638,257
|
Attempting to preserve statefulness for a `st.dataframe` widget in a multipage Streamlit app
|
<h1>Background</h1>
<p>I have a multipage Streamlit app with the following project directory structure:</p>
<pre><code>├── Home.py
├── utils.py
├── Pages
│   └── Data_Entry.py
</code></pre>
<p>This is a multi-page app where I'm trying to preserve the statefulness of the widgets in the Data_Entry.py script.</p>
<p>For the sake of this discussion, I have two widgets, a text input widget and a dataframe selector widget, in <code>Data_Entry.py</code>. I was able to figure out how to preserve the statefulness of the text input widget when navigating between <code>Home.py</code> and <code>Data_Entry.py</code>.</p>
<p>However, the approach I'm using for preserving the statefulness of the text input widget isn't working for the <code>st.dataframe</code> widget.</p>
<p>Here's the relevant code for the text input widget, for clarity:</p>
<p><strong>Data_Entry.py</strong></p>
<pre><code>import streamlit as st
import utils as ut
ut.init_session_state()
def main():
st.markdown("### ProcessName")
ut.load_value("process_name_widget")
st.text_input(
"Enter the process' name here.",
help="Enter a concise name for the process here.",
max_chars=50,
key="_process_name_widget",
on_change=ut.store_value,
args=["process_name_widget"],
)
process_name= st.session_state._process_name_widget
if __name__ == "__main__":
main()
</code></pre>
<p>Here's the code from <code>utils.py</code>:</p>
<pre><code>import streamlit as st ## Streamlit
def init_session_state():
"""
This function initiates the session state for the Streamlit app.
"""
if "process_name_widget" not in st.session_state:
st.session_state.process_name_widget= ""
load_value("process_name_widget")
def store_value(key):
st.session_state[key] = st.session_state[f"_{key}"]
def load_value(key):
st.session_state[f"_{key}"] = st.session_state[key]
</code></pre>
<p>This approach for the text input widget's statefulness works perfectly.</p>
<p>My issue comes when I'm trying to do the same thing to preserve the statefulness of the <code>st.dataframe</code> widget. Here's the code snippet for the dataframe widget:</p>
<p><code>Data_Entry.py</code>:</p>
<pre><code>### Same as for the previous snippet for Data_entry.py
import streamlit as st
import utils as ut
ut.init_session_state()
def main():
### The text_input widget code precedes this section
ut.load_value("team_selection_widget")
team_selection = st.dataframe(
teams_result_df,
use_container_width= True,
hide_index=True,
on_select=ut.store_value,
column_order=("team", "context"),
column_config={"team": "Team Abbreviation", "context": "Team Context"},
selection_mode="single-row",
key = "_team_selection_widget"
)
if __name__ == "__main__":
main()
</code></pre>
<p>Here's the modified code for utils.py:</p>
<pre><code>import streamlit as st ## Streamlit
def init_session_state():
"""
This function initiates the session state for the Streamlit app.
"""
if "process_name_widget" not in st.session_state:
st.session_state.process_name_widget= ""
if "team_selection_widget" not in st.session_state:
st.session_state.team_selection_widget = ""
load_value("process_name_widget")
load_value("team_selection_widget")
def store_value(key):
st.session_state[key] = st.session_state[f"_{key}"]
def load_value(key):
st.session_state[f"_{key}"] = st.session_state[key]
</code></pre>
<p>The only change is that I updated init_session_state(). I'm getting this error:</p>
<pre><code>StreamlitAPIException: Values for the widget with key '_team_selection_widget' cannot be set using st.session_state.
Traceback:
File "/home/Data_Entry.py", line 342, in <module>
main()
File "/home/Data_Entry.py", line 67, in main
team_selection = st.dataframe(
^^^^^^^^^^^^^
</code></pre>
<p>I'm not sure why the approach that works for the text_input widget doesn't work for the <code>st.dataframe</code> widget. Whatever approach I use to store the session state, this error never stops. What am I doing wrong? It's particularly confusing because I have a working approach for another widget.</p>
<h2>Attempted Fixes</h2>
<h3>Passing an arg to the <code>store_value</code> callable</h3>
<p>I attempt to pass an argument to <code>store_value</code> as follows:</p>
<pre><code>
def main():
### The text_input widget code precedes this section
ut.load_value("team_selection_widget")
team_selection = st.dataframe(
teams_result_df,
use_container_width= True,
hide_index=True,
on_select=ut.store_value(key="team_selection_widget"),
column_order=("team", "context"),
column_config={"team": "Team Abbreviation", "context": "Team Context"},
selection_mode="single-row",
key = "_team_selection_widget"
)
if __name__ == "__main__":
main()
</code></pre>
<p><code>utils.py</code> remains the same. This produces a different error:</p>
<pre><code>StreamlitAPIException: You have passed None to on_select. But only 'ignore', 'rerun', or a callable is supported.
Traceback:
File "/home/Data_Entry.py", line 342, in <module>
main()
File "/home/Data_Entry.py", line 67, in main
team_selection = st.dataframe(
^^^^^^^^^^^^^
</code></pre>
<h2>Previous Questions and their answers</h2>
<p>None of the previous questions talk specifically about the <code>st.dataframe</code> widget. Plus, I have an approach that works for one widget while not working for another widget, which seems to be unique.</p>
|
<python><streamlit>
|
2025-01-30 16:07:57
| 0
| 476
|
AcrKnight
|
79,400,489
| 29,295,031
|
How to compare two dataframes and retrieve only the extra rows using pandas
|
<p>I have a use case where I have two dfs:</p>
<p>The first, df1:</p>
<p><a href="https://i.sstatic.net/ZpP5DdmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZpP5DdmS.png" alt="enter image description here" /></a></p>
<p>The second, df2:</p>
<p><a href="https://i.sstatic.net/LbMqnJdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LbMqnJdr.png" alt="enter image description here" /></a></p>
<p>What I'm trying to do is compare df1 and df2 and get only the rows which are in df1 and do not exist in df2, which in this use case gives:</p>
<p><a href="https://i.sstatic.net/F13HB5Vo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F13HB5Vo.png" alt="enter image description here" /></a></p>
<p>Could you please help me figure out how to achieve this with Pandas?</p>
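<p>A hedged sketch of the usual "rows in df1 but not in df2" pattern with a left merge and <code>indicator=True</code> (column names here are hypothetical, since the real ones are only in the screenshots):</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({"id": [1, 2, 3, 4], "value": ["a", "b", "c", "d"]})
df2 = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# Rows marked "left_only" exist in df1 but not in df2.
extra_rows = (
    df1.merge(df2, how="left", indicator=True)
       .query('_merge == "left_only"')
       .drop(columns="_merge")
)
print(extra_rows)
</code></pre>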
|
<python><pandas><dataframe><streamlit>
|
2025-01-30 15:58:39
| 0
| 401
|
user29295031
|
79,400,433
| 1,406,168
|
azure function app python - Encountered an error (InternalServerError) from host runtime
|
<p>I have created a very simple Azure Function in Python but keep getting the following error when trying to run it. I can't find anything in App Insights:</p>
<p><a href="https://i.sstatic.net/gYkFA0iI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYkFA0iI.png" alt="enter image description here" /></a></p>
<p>Environment variables:</p>
<pre><code>[
{
"name": "APPINSIGHTS_PROFILERFEATURE_VERSION",
"value": "1.0.0",
"slotSetting": false
},
{
"name": "APPINSIGHTS_SNAPSHOTFEATURE_VERSION",
"value": "1.0.0",
"slotSetting": false
},
{
"name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
"value": "InstrumentationKey=xx-xx-xx-xx-xx;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/;LiveEndpoint=https://westeurope.livediagnostics.monitor.azure.com/;ApplicationId=xx-xx-xx-xx-xx",
"slotSetting": false
},
{
"name": "ApplicationInsightsAgent_EXTENSION_VERSION",
"value": "~3",
"slotSetting": false
},
{
"name": "xx",
"value": "xx",
"slotSetting": false
},
{
"name": "xx",
"value": "https://xx.blob.core.windows.net/",
"slotSetting": false
},
{
"name": "BUILD_FLAGS",
"value": "UseExpressBuild",
"slotSetting": false
},
{
"name": "ENABLE_ORYX_BUILD",
"value": "true",
"slotSetting": false
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~4",
"slotSetting": false
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "python",
"slotSetting": false
},
{
"name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
"value": "1",
"slotSetting": false
},
{
"name": "WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED",
"value": "1",
"slotSetting": false
},
{
"name": "XDG_CACHE_HOME",
"value": "/tmp/.cache",
"slotSetting": false
},
{
"name": "XDT_MicrosoftApplicationInsights_Mode",
"value": "recommended",
"slotSetting": false
}
]
</code></pre>
<p>Code:</p>
<pre><code>import azure.functions as func
import datetime
import logging
import requests
app = func.FunctionApp()
@app.timer_trigger(schedule="0 * * * * *", arg_name="myTimer", run_on_startup=True,
use_monitor=False)
def timer_trigger(myTimer: func.TimerRequest) -> None:
if myTimer.past_due:
print('The timer is past due!')
print("Hello, World!")
</code></pre>
<p>Configurations:
<a href="https://i.sstatic.net/vTqJUdfo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTqJUdfo.png" alt="enter image description here" /></a></p>
<p>Any pointers on how to troubleshoot this?</p>
|
<python><azure><azure-functions>
|
2025-01-30 15:43:07
| 1
| 5,363
|
Thomas Segato
|
79,399,929
| 1,221,812
|
How to sample Pandas DataFrame using a normal distribution by using random_state and numpy Generators
|
<p>I am trying to write Pandas code that would allow me to sample a DataFrame using a normal distribution. The most convenient way is to use the random_state parameter of the sample method to draw random samples, but somehow employ numpy.random.Generator.normal to draw random samples using a normal (Gaussian) distribution.</p>
<pre><code>import pandas as pd
import numpy as np
import random
# Generate a list of unique random numbers
temp = random.sample(range(1, 101), 100)
df = pd.DataFrame({'temperature': temp})
# Sample normal
rng = np.random.default_rng()
triangle_df.sample(n=10, random_state=rng.normal())
</code></pre>
<p><strong>This obviously doesn't work. There is an issue with <em>random_state=rng.normal().</em></strong></p>
|
<python><pandas><numpy><random><sample>
|
2025-01-30 12:38:50
| 1
| 473
|
pjercic
|
79,399,822
| 8,477,566
|
Dynamic argument types in Python + argparse
|
<p>I want to create a CLI using Python and <code>argparse</code>. The CLI should have options to specify a list of values, and also to dynamically specify the <em>type</em> of the values (<code>str</code>, <code>int</code>, <code>float</code>, etc.) in that list (all arguments in the list have the same type). The values in the list must be converted to the specified type.</p>
<p>I have the following baseline implementation, which does work, but it feels a bit clunky, especially when adding more complex types (or even functions which process the input list of arguments). I was wondering if there is a built-in/smoother/more canonical way to do this?</p>
<p><code>script.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
arg_type_dict = {t.__name__: t for t in [str, int, float]}
def main(
sweep_arg_type: str,
sweep_arg_vals: list,
):
arg_type = arg_type_dict[sweep_arg_type]
val_list = [arg_type(val_str) for val_str in sweep_arg_vals]
print(val_list)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--sweep_arg_vals", required=True, nargs="+")
parser.add_argument(
"--sweep_arg_type",
required=True,
choices=sorted(arg_type_dict.keys()),
)
args = parser.parse_args()
main(
args.sweep_arg_type,
args.sweep_arg_vals,
)
</code></pre>
<p>Usage examples:</p>
<pre class="lang-bash prettyprint-override"><code>python script.py -h
python script.py --sweep_arg_type int --sweep_arg_vals 0 1 10 -3
python script.py --sweep_arg_type float --sweep_arg_vals 0 1 10 -3
python script.py --sweep_arg_type float --sweep_arg_vals 1.2 3.4
python script.py --sweep_arg_type str --sweep_arg_vals abc def lmnop
</code></pre>
|
<python><command-line-interface><argparse>
|
2025-01-30 11:56:17
| 2
| 1,950
|
Jake Levi
|
79,399,788
| 8,596,146
|
503 error when running a python application on planet hoster / N0C
|
<p>I'm trying to create a simple Python application on PlanetHoster, which uses the N0C interface (instead of cPanel). I didn't add any code (run.py contains only the template for the 200 OK message).
I tried with several Python versions just in case (3.8, 3.9, 3.11, 3.12) but keep getting the same 503 error when I try to launch the app in the browser.
<a href="https://i.sstatic.net/Cbsj1x7r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cbsj1x7r.png" alt="enter image description here" /></a></p>
<p>I use this command to source the environment: <code>source /home/[...]/virtualenv/my_application/3.11/bin/activate</code></p>
<p>When I check the stderr.log file I see this error: <code>lscgid: execve():/usr/local/lsws/fcgi-bin/lswsgi_wrapper: No such file or directory</code></p>
<p>I have seen that it might be related to the LiteSpeed version, and that I should have at least version 5.4.10, but when I do <code>pip install LiteSpeed==5.4.10</code> I get this message:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement LiteSpeed==5.4.10 (from versions: 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.2.0)
ERROR: No matching distribution found for LiteSpeed==5.4.10
</code></pre>
<p>Do you have an idea what could solve this problem? Thank you very much in advance.</p>
|
<python><cpanel><wsgi><http-status-code-503>
|
2025-01-30 11:42:46
| 0
| 333
|
GG24
|
79,399,699
| 13,343,354
|
Is cv.findEssentialMat using matlab in someway?
|
<p>I was looking for a way to get the essential matrix in OpenCV.</p>
<p>I found these two links for the function:
<a href="https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga0b166d41926a7793ab1c351dbaa9ffd4" rel="nofollow noreferrer">https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga0b166d41926a7793ab1c351dbaa9ffd4</a></p>
<p>and</p>
<p><a href="https://amroamroamro.github.io/mexopencv/matlab/cv.findEssentialMat.html" rel="nofollow noreferrer">https://amroamroamro.github.io/mexopencv/matlab/cv.findEssentialMat.html</a></p>
<p>In the second link you can see at the top "cv.findEssentialMat - MATLAB File Help". I was confused. Is it using MATLAB somehow? Because then I would rather not use it.</p>
<p>Thanks!</p>
|
<python><matlab><opencv><image-processing><computer-vision>
|
2025-01-30 11:13:52
| 1
| 309
|
Shaig Hamzaliyev
|
79,399,419
| 5,558,497
|
VS code no longer displays value of variable when mouse-hovering it
|
<p>Some time ago, I remember that when I hovered over a declared variable in a Python script (not in debug mode), I could see its value. That was very handy, as (I) I could see the value of the variable right away and (II) I could copy the value if necessary.</p>
<p>I regularly update VS code as well as the python extension, so my suspicion is that this default behaviour changed in any of the updates.</p>
<p>I have tried a few things as listed below, but nothing helped. When I hover over a variable (e.g. <code>reads = ['R1', 'R2']</code>), I see <code>(variable) reads: list[str]</code>, whereas in the past I could see the list with its values, i.e. <code>R1</code> and <code>R2</code>:</p>
<ol>
<li>My language sever is Pylance</li>
<li>I have enabled hover tooltips (setting is <code>python.analysis.inlayHints.variableTypes</code>)</li>
<li>I have played around with <code>python.analysis.typeCheckingMode</code>: from <code>basic</code> to <code>strict</code></li>
</ol>
<p>How can I re-enable that feature inside a python script in VSCode?</p>
|
<python><visual-studio-code>
|
2025-01-30 09:38:55
| 2
| 2,249
|
BCArg
|
79,399,353
| 859,141
|
Insert or update when importing from JSON
|
<p>My SQLAlchemy ORM model is populated by a JSON file that occasionally changes. The JSON file does not provide an integer primary key but has a unique alphanumeric ProductCode. My model:</p>
<pre class="lang-py prettyprint-override"><code>class ProductDescriptor(Base):
__tablename__ = 'product_descriptor'
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
ProductCode: Mapped[str] = mapped_column(String(50), unique=True)
DisplayName: Mapped[str] = mapped_column(String(50))
Description: Mapped[str] = mapped_column(String(1000))
...
</code></pre>
<p><a href="https://stackoverflow.com/a/78702757">This answer</a> makes sense until this line:</p>
<pre><code>on_duplicate_stmt = insert_stmt.on_duplicate_key_update(dict(txt=insert_stmt.inserted.txt))
</code></pre>
<p>Because the incoming data lacks a key I have nothing to compare. Do I need to change the ProductCode definition? I am using <code>unique=True</code>. My code:</p>
<pre class="lang-py prettyprint-override"><code>product_list = []
for product in products:
# Create filtered list of dicts to be send to the DB
product_list.append({
'ProductName': product.get('ProductName'),
'DisplayName': product.get('DisplayName'),
'Description': product.get('Description'),
... more columns
})
insert_stmt = insert(ProductDescriptor).values(product_list)
# This is where it goes wrong
on_duplicate_stmt = insert_stmt.on_duplicate_key_update()
# Trying to deal with integrity errors and rollback requests
for product in product_list:
try:
self.session.add(resource)
self.session.commit()
except IntegrityError:
pass
</code></pre>
<ul>
<li>How can I efficiently create an update or create function for bulk records?</li>
<li>Do I need to turn my unique field into a key field?</li>
<li>Can I keep an autoincrement key field as well?</li>
</ul>
<p>In Django I would be using the <code>update_or_create</code> method where I can specify the key field and provide a <code>defaults</code> dictionary:</p>
<pre class="lang-py prettyprint-override"><code>Profile.objects.update_or_create(custid=user_profile.custid, defaults=defaults)
</code></pre>
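<p>A hedged sketch of the SQLite-dialect equivalent of that linked answer, assuming SQLite (as tagged) and that the unique <code>ProductCode</code> column is the conflict target (a unique constraint is enough; it does not have to be the primary key):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.dialects.sqlite import insert

stmt = insert(ProductDescriptor).values(product_list)
stmt = stmt.on_conflict_do_update(
    index_elements=["ProductCode"],  # the unique column drives conflict detection
    set_={
        "DisplayName": stmt.excluded.DisplayName,
        "Description": stmt.excluded.Description,
    },
)
# `session` is whatever Session you already use (self.session in the snippet above).
session.execute(stmt)
session.commit()
</code></pre>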
|
<python><sqlite><sqlalchemy><upsert>
|
2025-01-30 09:20:53
| 1
| 1,184
|
Byte Insight
|
79,398,889
| 9,477,704
|
Issue in running pipelines associated with multiple function apps in the same ADF
|
<p>I'm implementing an Azure queuing mechanism to trigger multiple pipelines at the same time. I have two function apps,
function_app1 and function_app2. In each of these function apps, I have two pipelines created: one for current customers and one for new customers. My requirement is that only one pipeline should be running in each function app at any time, irrespective of the customer type. Other triggered pipelines should wait in the queue until one of the function apps becomes available.</p>
<p>Suppose I trigger a current customer: it should execute one of the current customer pipelines in either function app. If I trigger another current customer, it should execute the corresponding pipeline in the other function app. And if I then trigger a new customer, since both function apps are in use, it should wait in the queue.</p>
<p>This is my PipelineManager code:</p>
<pre><code>import json
import logging
import random
from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

logger = logging.getLogger(__name__)
# SUBSCRIPTION_ID, RESOURCE_GROUP_NAME and DATA_FACTORY_NAME are defined elsewhere in the app.
class PipelineManager:
def __init__(self):
self.data_factory_client = DataFactoryManagementClient(
credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID
)
# Reorganize pipeline configurations by function app
self.pipeline_configs = {
"function_app1": {
"max_concurrent": 1, # Only one pipeline per function app
"pipelines": {
"new_customer": {
"name": "new customer report with validation",
"priority": 1,
},
"current_customer": {
"name": "final current customer optimization",
"priority": 1,
},
}
},
"function_app2": {
"max_concurrent": 1, # Only one pipeline per function app
"pipelines": {
"new_customer": {
"name": "new customer pipeline 02",
"priority": 1,
},
"current_customer": {
"name": "current customer pipeline 02",
"priority": 1,
},
}
}
}
# Track running pipelines with timestamps and status
self.running_pipelines = {}
# Track pipeline usage history for load balancing
self.pipeline_usage = {} # {pipeline_name: last_used_timestamp}
# Track function app usage
self.function_app_running = {} # {function_app: {run_id, start_time}}
def _cleanup_running_pipelines(self):
"""Clean up completed or old pipeline entries and function app tracking"""
current_time = datetime.now()
to_remove = []
for run_id, info in self.running_pipelines.items():
if info["status"] in ["Succeeded", "Failed", "Cancelled"]:
if current_time - info["start_time"] > timedelta(hours=1):
to_remove.append(run_id)
# Clean up function app tracking
for func_app, running_info in self.function_app_running.items():
if running_info.get("run_id") == run_id:
del self.function_app_running[func_app]
elif current_time - info["start_time"] > timedelta(hours=24): # Safety cleanup
to_remove.append(run_id)
for run_id in to_remove:
del self.running_pipelines[run_id]
def _get_function_app_availability(self):
"""Check which function apps are available"""
self._cleanup_running_pipelines()
available_apps = {}
for func_app in self.pipeline_configs.keys():
is_available = func_app not in self.function_app_running or \
self.function_app_running[func_app]["run_id"] not in self.running_pipelines
available_apps[func_app] = is_available
return available_apps
def _calculate_pipeline_score(self, func_app, pipeline_name, priority):
"""Calculate a score for pipeline selection based on multiple factors"""
current_time = datetime.now()
# Time since last use score (0-1, higher is better)
last_used = self.pipeline_usage.get(pipeline_name, current_time - timedelta(hours=24))
time_since_use = (current_time - last_used).total_seconds()
time_score = min(time_since_use / 3600, 1.0) # Cap at 1 hour
# Priority score (0-1, lower is better)
priority_score = priority / 10 # Assuming max priority is 10
# Function app usage score (0-1, higher is better)
app_last_used = self.function_app_running.get(func_app, {}).get("start_time", current_time - timedelta(hours=24))
app_time_score = min((current_time - app_last_used).total_seconds() / 3600, 1.0)
# Add small random factor for tie-breaking (0-0.1)
randomization = random.uniform(0, 0.1)
# Combine scores (lower is better)
final_score = (
(1 - time_score) * 0.3 + # 30% weight to time since last use
priority_score * 0.2 + # 20% weight to priority
(1 - app_time_score) * 0.4 + # 40% weight to function app availability
randomization * 0.1 # 10% weight to randomization
)
return final_score
def get_available_pipeline(self, analysis_type):
"""Choose the most suitable pipeline based on function app availability"""
logger.info(f"\nSelecting pipeline for {analysis_type}")
# Get function app availability
available_apps = self._get_function_app_availability()
logger.info("Function app availability:")
for app, available in available_apps.items():
logger.info(f" {app}: {'Available' if available else 'In Use'}")
available_pipelines = []
for func_app, config in self.pipeline_configs.items():
if not available_apps[func_app]:
continue
if analysis_type in config["pipelines"]:
pipeline_config = config["pipelines"][analysis_type]
score = self._calculate_pipeline_score(
func_app,
pipeline_config["name"],
pipeline_config["priority"]
)
available_pipelines.append({
"name": pipeline_config["name"],
"score": score,
"function_app": func_app
})
if not available_pipelines:
logger.warning("No available pipelines found!")
return None
# Sort by score (lower is better)
available_pipelines.sort(key=lambda x: x["score"])
selected = available_pipelines[0]
logger.info(f"\nSelected pipeline: {selected['name']}")
logger.info(f" Function App: {selected['function_app']}")
logger.info(f" Score: {selected['score']:.3f}")
# Update usage timestamps
self.pipeline_usage[selected['name']] = datetime.now()
return selected["name"]
def get_pipeline_status(self, run_id):
"""Get the status of a pipeline run"""
try:
run_response = self.data_factory_client.pipeline_runs.get(
resource_group_name=RESOURCE_GROUP_NAME,
factory_name=DATA_FACTORY_NAME,
run_id=run_id,
)
status = run_response.status
# Update status in our tracking
if run_id in self.running_pipelines:
self.running_pipelines[run_id]["status"] = status
# If pipeline is complete, clean up function app tracking
if status in ["Succeeded", "Failed", "Cancelled"]:
for func_app, running_info in self.function_app_running.items():
if running_info.get("run_id") == run_id:
del self.function_app_running[func_app]
return status
except Exception as e:
logger.error(f"Failed to get pipeline status: {e}")
if run_id in self.running_pipelines:
# Clean up tracking on error
func_app = self.running_pipelines[run_id].get("function_app")
if func_app and func_app in self.function_app_running:
del self.function_app_running[func_app]
del self.running_pipelines[run_id]
return "Failed"
def run_pipeline(self, analysis_type, parameters):
try:
# Log current pipeline loads and metrics
logger.info("\nStarting pipeline run")
logger.info("Current pipeline metrics:")
metrics = self.get_pipeline_metrics()
logger.info(json.dumps(metrics, indent=2))
pipeline_name = self.get_available_pipeline(analysis_type)
if not pipeline_name:
raise ValueError(f"No available pipeline for analysis type: {analysis_type}")
# Find which function app this pipeline belongs to
function_app = None
for app, config in self.pipeline_configs.items():
if any(p["name"] == pipeline_name for p in config["pipelines"].values()):
function_app = app
break
run_response = self.data_factory_client.pipelines.create_run(
resource_group_name=RESOURCE_GROUP_NAME,
factory_name=DATA_FACTORY_NAME,
pipeline_name=pipeline_name,
parameters=parameters,
)
# Track the running pipeline
self.running_pipelines[run_response.run_id] = {
"pipeline_name": pipeline_name,
"start_time": datetime.now(),
"status": "InProgress",
"function_app": function_app
}
# Track function app usage
self.function_app_running[function_app] = {
"run_id": run_response.run_id,
"start_time": datetime.now()
}
logger.info(f"Started pipeline '{pipeline_name}' on {function_app} with run_id: {run_response.run_id}")
return run_response.run_id
except Exception as e:
logger.error(f"Failed to start pipeline: {e}")
return None
def get_pipeline_metrics(self):
"""Get current metrics for all function apps and pipelines"""
self._cleanup_running_pipelines()
metrics = {}
for func_app, config in self.pipeline_configs.items():
metrics[func_app] = {
"is_available": func_app not in self.function_app_running,
"current_run": None,
"pipelines": {}
}
if func_app in self.function_app_running:
run_id = self.function_app_running[func_app]["run_id"]
if run_id in self.running_pipelines:
metrics[func_app]["current_run"] = {
"pipeline": self.running_pipelines[run_id]["pipeline_name"],
"run_id": run_id,
"start_time": self.running_pipelines[run_id]["start_time"].isoformat(),
"status": self.running_pipelines[run_id]["status"]
}
for analysis_type, pipeline_config in config["pipelines"].items():
pipeline_name = pipeline_config["name"]
last_used = self.pipeline_usage.get(pipeline_name)
metrics[func_app]["pipelines"][analysis_type] = {
"name": pipeline_name,
"last_used": last_used.isoformat() if last_used else None
}
return metrics
</code></pre>
<p>I'm using this scoring mechanism so that a pipeline is only started when its function app is available. However, I'm running into an issue where pipelines belonging to the same function app still end up running at the same time, and I'm not sure where the problem is. Please help!</p>
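<p>For reference, a minimal sketch of the first thing worth ruling out (an assumption about the cause, not a confirmed diagnosis): if <code>run_pipeline()</code> can be called concurrently, the function app is only marked busy after <code>create_run</code> returns, so two callers can pick the same app in that window. Reserving the app under a lock before starting the run closes that gap; the <code>tracker</code> argument below stands in for the orchestrator instance and is purely illustrative.</p>
<pre><code>import threading
from datetime import datetime

_reservation_lock = threading.Lock()

def reserve_function_app(tracker, func_app, run_id="pending"):
    """Atomically mark func_app as busy; return False if it is already taken."""
    with _reservation_lock:
        if func_app in tracker.function_app_running:
            return False  # someone else reserved it between scoring and starting
        tracker.function_app_running[func_app] = {
            "run_id": run_id,
            "start_time": datetime.now(),
        }
        return True
</code></pre>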
|
<python><python-3.x><azure-functions><azure-pipelines><azure-data-factory>
|
2025-01-30 05:38:14
| 0
| 499
|
Lahari
|
79,398,651
| 6,884,080
|
Packages do not show up in the list after running `pipenv install`
|
<p>Just came across a very weird behavior of pipenv that I don't quite understand.</p>
<p>Here is my environment</p>
<ul>
<li>Running terminal and iTerm2 under Rosetta</li>
<li>Python 3.11.9 installed via Pyenv (2.4.8) and set as global <code>pyenv global 3.9.11</code></li>
<li>Another Python 3.11.9 installed using the package downloaded directly from python's official website</li>
</ul>
<p>I set up an experiment project organized as follows:</p>
<pre><code>/dependencyExp_01/
|--app.py
|--Pipfile
</code></pre>
<p>where <code>Pipfile</code> is</p>
<pre><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
packaging = "*"
typing_extensions = "*"
[dev-packages]
</code></pre>
<p>and <code>app.py</code> is</p>
<pre><code>from packaging.version import Version
print("something")
</code></pre>
<p>I installed the dependencies using <code>$ pipenv install</code></p>
<p>-------- <strong>All the commands down below are executed under virtual environment</strong> --------</p>
<p>Then I ran <code>$ pip list</code>, and you can see that <code>packaging</code> is not in the list:</p>
<pre><code>Package Version
---------- -------
pip 24.3.1
setuptools 75.6.0
wheel 0.45.1
</code></pre>
<p>Also, in the <code>$ pipenv graph</code> output, <code>packaging</code> does appear:</p>
<pre><code>importlib-metadata==8.0.0
- zipp [required: >=0.5, installed: 3.19.2]
jaraco.collections==5.1.0
- jaraco.text [required: Any, installed: 3.12.1]
- autocommand [required: Any, installed: 2.2.2]
- inflect [required: Any, installed: 7.3.1]
- more-itertools [required: >=8.5.0, installed: 10.3.0]
- typeguard [required: >=4.0.1, installed: 4.3.0]
- typing-extensions [required: >=4.10.0, installed: 4.12.2]
- jaraco.context [required: >=4.1, installed: 5.3.0]
- backports.tarfile [required: Any, installed: 1.2.0]
- jaraco.functools [required: Any, installed: 4.0.1]
- more-itertools [required: Any, installed: 10.3.0]
- more-itertools [required: Any, installed: 10.3.0]
packaging==24.2
tomli==2.0.1
</code></pre>
<p>However, when I try to run <code>app.py</code> I get this:</p>
<pre><code>(dependencyExp_01) jingwang@jing-MacBook-Pro dependencyExp_01 % python app.py
Traceback (most recent call last):
File "/Users/jingwang/Documents/PythonProject/dependencyExp_01/app.py", line 1, in <module>
from packaging.version import Version
ModuleNotFoundError: No module named 'packaging'
</code></pre>
<p>Then I installed <code>packaging</code> using <code>$ pip install packaging</code></p>
<p>Now <code>$ pip list</code> shows that <code>packaging</code> appears in the list:</p>
<pre><code>Package Version
---------- -------
packaging 24.2
pip 24.3.1
setuptools 75.6.0
wheel 0.45.1
</code></pre>
<p>Also, the <code>$ pipenv graph</code> output is the same as before the <code>pip install</code>:</p>
<pre><code>importlib-metadata==8.0.0
- zipp [required: >=0.5, installed: 3.19.2]
jaraco.collections==5.1.0
- jaraco.text [required: Any, installed: 3.12.1]
- autocommand [required: Any, installed: 2.2.2]
- inflect [required: Any, installed: 7.3.1]
- more-itertools [required: >=8.5.0, installed: 10.3.0]
- typeguard [required: >=4.0.1, installed: 4.3.0]
- typing-extensions [required: >=4.10.0, installed: 4.12.2]
- jaraco.context [required: >=4.1, installed: 5.3.0]
- backports.tarfile [required: Any, installed: 1.2.0]
- jaraco.functools [required: Any, installed: 4.0.1]
- more-itertools [required: Any, installed: 10.3.0]
- more-itertools [required: Any, installed: 10.3.0]
packaging==24.2
tomli==2.0.1
</code></pre>
<p>Now when I run the <code>app.py</code> again, the error is gone and I can see the text printed out</p>
<pre><code>(dependencyExp_01) jingwang@jing-MacBook-Pro dependencyExp_01 % python app.py
something
(dependencyExp_01) jingwang@jing-MacBook-Pro dependencyExp_01
</code></pre>
<p>Currently the workaround is to install the missing packages separately using <code>$ pip install <whatever></code>, but that is quite annoying. It would be less hassle if <code>pipenv install</code> could take care of it. I don't know why it behaves like this; can anyone help here? Thanks</p>
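<p>For what it's worth, a minimal diagnostic sketch (an assumption about the cause rather than a fix): the symptoms would be explained if the <code>pip</code>/<code>python</code> on the shell's PATH and the virtualenv that pipenv installs into are not the same interpreter, which can be checked like this:</p>
<pre><code>pipenv --venv                                     # the virtualenv pipenv installs into
pipenv run python -c "import sys; print(sys.executable)"
python -c "import sys; print(sys.executable)"     # inside the activated shell; should match
python -m pip list                                # the interpreter's own pip, not a bare `pip`
</code></pre>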
|
<python><packaging><pipenv><pipfile>
|
2025-01-30 02:23:20
| 0
| 529
|
infinity_coding7
|
79,398,606
| 4,000,073
|
Install & use Python with any IDE offline
|
<p>I teach Python in a small org. where internet is disabled in the lab-systems due to some mis-usage/security issues. So pip installations OR installing any extension for any IDE is not possible.</p>
<p>I have copied (using a USB drive) and installed Python v3.13 and VS Code. I am facing problems executing Python scripts from VS Code.</p>
<p>NOTE: I do not want to use IDLE.</p>
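<p>For reference, a minimal sketch of the usual offline workflow (package names and the extension file name below are placeholders): download wheels on an internet-connected machine with the same OS and Python version, carry them over on the USB drive, and install without touching the network. VS Code extensions can be side-loaded the same way from <code>.vsix</code> files.</p>
<pre><code># on the internet-connected machine (same OS / Python version as the lab)
pip download --dest ./wheelhouse requests numpy

# on the offline lab machine, after copying ./wheelhouse via USB
python -m pip install --no-index --find-links ./wheelhouse requests numpy

# a VS Code extension downloaded as a .vsix file from the marketplace website
code --install-extension ms-python.python.vsix
</code></pre>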
|
<python><ide><offline>
|
2025-01-30 01:53:09
| 2
| 311
|
Venkata
|
79,398,599
| 6,702,598
|
Python testing: How can I verify that no error logs were thrown?
|
<p>In my test I want to verify that no error log was done during the test. How do I do that?</p>
<p>I tried <code>with self.assertLogs("", level="ERROR") as error_logs:</code>. This can capture the error logs, but it will also raise if there were no error logs at all.</p>
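<p>For reference, a minimal sketch assuming Python 3.10 or newer, where <code>unittest</code> gained <code>assertNoLogs</code> (the function under test is a placeholder):</p>
<pre><code>import unittest

class ExampleTest(unittest.TestCase):
    def test_no_error_logs(self):
        # fails the test if any record at ERROR level or above is emitted
        with self.assertNoLogs(level="ERROR"):
            do_the_thing_under_test()  # hypothetical code under test
</code></pre>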
|
<python><python-3.x><unit-testing><python-unittest>
|
2025-01-30 01:45:55
| 1
| 3,673
|
DarkTrick
|
79,398,519
| 6,141,238
|
Why do I receive an InvalidOperationError "file encoding is not UTF-8" when trying to write_csv with Polars?
|
<p>The code in the accepted answer of <a href="https://stackoverflow.com/questions/75272909">this</a> question is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.DataFrame({"a": [1, 2], "b": [3 ,4]})
df2 = pl.DataFrame({"a": [5, 6], "b": [7 ,8]})
with open("out.csv", mode="a") as f:
df1.write_csv(f)
df2.write_csv(f, include_header=False)
</code></pre>
<p>In VSCode on my Windows 10 machine, the line <code>df1.write_csv(f)</code> generates the error "polars.exceptions.InvalidOperationError: file encoding is not UTF-8." Can there really be a problem with the encoding or is something else wrong?</p>
<p>More broadly, I would like to append to a <code>.csv</code> file using <code>polars</code> as described in the linked question. Is the toy example above still the recommended approach for doing this?</p>
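<p>For reference, the variant I would test first (an assumption about the cause rather than a confirmed diagnosis: on Windows, <code>open()</code> in text mode defaults to the locale code page such as cp1252, and recent Polars versions appear to check the handle's encoding). Opening the file in binary append mode side-steps the text encoding entirely; forcing <code>encoding="utf-8"</code> in text mode is the other obvious thing to try.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df1 = pl.DataFrame({"a": [1, 2], "b": [3, 4]})
df2 = pl.DataFrame({"a": [5, 6], "b": [7, 8]})

# binary append mode: no text-layer encoding involved
with open("out.csv", mode="ab") as f:
    df1.write_csv(f)
    df2.write_csv(f, include_header=False)
</code></pre>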
|
<python><dataframe><csv><utf-8><python-polars>
|
2025-01-30 00:25:03
| 0
| 427
|
SapereAude
|
79,398,514
| 8,037,521
|
Adjusting (center & rotation) of oriented bounding box to best fit points
|
<p>I have 3D points from noisy detection of one planar surface. I want to fit an oriented bounding box for these points.</p>
<p>I assume (it should hold in my case) that the edges are much lighter than the inner parts (i.e. higher intensity), so I approximately filter it by intensity to get only points close to the edges. But the detection is pretty noisy, there are also some surfaces and lidar noise around the edges so that makes the oriented bounding box a bit shifted and a bit rotated.</p>
<p>I want to apply an optimization procedure that shifts and rotates this original estimate so that it fits as many boundary points as possible. However, I am having trouble constructing such an optimization. I use <code>scipy.optimize.minimize</code>: with the Powell method the bounding box seems to jump randomly all over the place, with the final result not matching the structure of interest, and with a non-Powell method the bounding box does not move at all and stays at the original estimate.</p>
<p>As I cannot by rules just dump my file here, I tried to generate toy example to show what I mean (I try to simulate both random noise within the structure + some neighboring structure; I also simulate my original non-perfect estimation of the bounding box by rotating and shifting ground truth bounding box):</p>
<pre><code>import numpy as np
import open3d as o3d
def generate_noisy_boundary_pcd(center, width, height, depth, margin, noise_level, num_points):
"""
Generate a 3D Open3D point cloud with noisy points only on the boundary of a 3D rectangle.
Args:
center (tuple): Center of the rectangle (x, y, z).
width (float): Width of the rectangle in the X direction.
height (float): Height of the rectangle in the Y direction.
depth (float): Thickness of the rectangle in the Z direction.
margin (float): Thickness of the boundary region to retain.
noise_level (float): Standard deviation of Gaussian noise.
num_points (int): Number of points to generate.
Returns:
open3d.geometry.PointCloud: Open3D point cloud containing only noisy boundary points.
"""
# Generate rectangle points in 3D space
x_vals = np.linspace(-width / 2, width / 2, int(np.sqrt(num_points)))
y_vals = np.linspace(-height / 2, height / 2, int(np.sqrt(num_points)))
x_grid, y_grid = np.meshgrid(x_vals, y_vals)
# Front and back face at Β±depth/2
z_front = np.full_like(x_grid, depth / 2)
z_back = np.full_like(x_grid, -depth / 2)
# Create front and back faces
rect_points = np.column_stack((x_grid.ravel(), y_grid.ravel(), z_front.ravel()))
# Filter only boundary points (margin region)
boundary_mask = (
(np.abs(x_grid) >= (width / 2 - margin)) | # Vertical boundary
(np.abs(y_grid) >= (height / 2 - margin)) # Horizontal boundary
)
boundary_points = rect_points[boundary_mask.ravel()]
# Apply noise
noise = np.random.normal(0, noise_level, boundary_points.shape)
noisy_boundary_points = boundary_points + noise
# Shift to the desired center
noisy_boundary_points += np.array(center)
# Convert to Open3D point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(noisy_boundary_points)
return pcd
# Generate noisy boundary point cloud
pcd_noisy_boundary = generate_noisy_boundary_pcd(center=(0, 0, 0), width=0.8, height=1.0, depth=0.1, margin=0.1, noise_level=0.02, num_points=500)
# Visualize with Open3D
o3d.visualization.draw_geometries([pcd_noisy_boundary])
def apply_random_shift_and_rotation(obb, max_shift=0.05, max_rotation=np.radians(5)):
"""
Apply a small random shift and rotation to an oriented bounding box.
Args:
obb (o3d.geometry.OrientedBoundingBox): The original oriented bounding box.
max_shift (float): Maximum translation shift in each direction.
max_rotation (float): Maximum rotation in radians for each axis.
Returns:
o3d.geometry.OrientedBoundingBox: The transformed OBB.
"""
# Generate a small random shift
shift = np.random.uniform(-max_shift, max_shift, 3)
# Generate small random rotations around each axis
delta_angle_x = np.random.uniform(-max_rotation, max_rotation)
delta_angle_y = np.random.uniform(-max_rotation, max_rotation)
delta_angle_z = np.random.uniform(-max_rotation, max_rotation)
# Compute the new rotation matrix
delta_Rx = o3d.geometry.get_rotation_matrix_from_xyz((delta_angle_x, 0, 0))
delta_Ry = o3d.geometry.get_rotation_matrix_from_xyz((0, delta_angle_y, 0))
delta_Rz = o3d.geometry.get_rotation_matrix_from_xyz((0, 0, delta_angle_z))
# Apply rotation perturbation on top of existing rotation
R_new = obb.R @ (delta_Rz @ delta_Ry @ delta_Rx)
# Apply translation shift
center_new = obb.center + shift
# Create the transformed OBB
transformed_obb = o3d.geometry.OrientedBoundingBox(center_new, R_new, obb.extent)
return transformed_obb
# Compute the initial oriented bounding box from the noisy boundary point cloud
obb_original = o3d.geometry.OrientedBoundingBox.create_from_points(pcd_noisy_boundary.points)
obb_original.color = (1, 0, 0) # Red for original
# Apply random shift and rotation
obb_transformed = apply_random_shift_and_rotation(obb_original)
obb_transformed.color = (0, 1, 0) # Green for transformed
# Visualize both original and transformed bounding boxes
o3d.visualization.draw_geometries([pcd_noisy_boundary, obb_original, obb_transformed])
</code></pre>
<p>All above is just toy data generation. Below is code I am trying for optimization:</p>
<pre><code>from scipy.optimize import minimize
def fit_obb_to_points(params, pcd, obb, margin=0.1):
"""
Optimize the OBB center and refine its rotation so that it encloses the most points.
Args:
params (list): [cx, cy, cz, delta_angle_x, delta_angle_y, delta_angle_z].
pcd (o3d.geometry.PointCloud): Input 3D point cloud.
obb (o3d.geometry.OrientedBoundingBox): Initial OBB.
expand_factor (float): Factor to expand the bounding box slightly.
Returns:
int: Negative count of points inside the OBB (to be minimized).
"""
# Extract parameters
cx, cy, cz, delta_angle_x, delta_angle_y, delta_angle_z = params
center = np.array([cx, cy, cz])
# Compute refined rotation by applying small delta angles to the existing rotation
delta_Rx = o3d.geometry.get_rotation_matrix_from_xyz((delta_angle_x, 0, 0))
delta_Ry = o3d.geometry.get_rotation_matrix_from_xyz((0, delta_angle_y, 0))
delta_Rz = o3d.geometry.get_rotation_matrix_from_xyz((0, 0, delta_angle_z))
# Apply small rotation adjustments on top of the original OBB rotation
R_new = obb.R @ (delta_Rz @ delta_Ry @ delta_Rx)
expanded_obb = o3d.geometry.OrientedBoundingBox(center, R_new, [obb.extent[0] + margin, obb.extent[1] + margin, obb.extent[2]])
o3d.visualization.draw_geometries([expanded_obb, pcd])
# Count inliers inside both bounding boxes
inliers_large = expanded_obb.get_point_indices_within_bounding_box(pcd.points)
# Compute number of boundary inliers
boundary_inliers = len(inliers_large)
print(boundary_inliers)
return -boundary_inliers
def optimize_bounding_box(pcd, initial_obb):
"""
Adjusts the center and fine-tunes the rotation of the OBB so that it fits as many points as possible.
Args:
pcd (o3d.geometry.PointCloud): The point cloud.
initial_obb (o3d.geometry.OrientedBoundingBox): Initial bounding box (rotation should be preserved).
Returns:
o3d.geometry.OrientedBoundingBox: Optimized OBB.
"""
# Initial parameters: [center_x, center_y, center_z, delta_angle_x, delta_angle_y, delta_angle_z]
initial_params = [
*initial_obb.center,
0, 0, 0 # Start with no change to rotation
]
# Optimize the bounding box parameters
result = minimize(
fit_obb_to_points, initial_params,
args=(pcd, initial_obb),
# method="Powell"
)
# Extract optimized parameters
optimized_center = np.array(result.x[:3])
delta_angles = result.x[3:]
# Compute the final refined rotation
delta_Rx = o3d.geometry.get_rotation_matrix_from_xyz((delta_angles[0], 0, 0))
delta_Ry = o3d.geometry.get_rotation_matrix_from_xyz((0, delta_angles[1], 0))
delta_Rz = o3d.geometry.get_rotation_matrix_from_xyz((0, 0, delta_angles[2]))
optimized_rotation = initial_obb.R @ (delta_Rz @ delta_Ry @ delta_Rx)
# Create the final optimized OBB
optimized_obb = o3d.geometry.OrientedBoundingBox(optimized_center, optimized_rotation, initial_obb.extent)
return optimized_obb
optimized_obb = optimize_bounding_box(pcd_noisy_boundary, obb_transformed)
obb_transformed.color = (1, 0, 0)
optimized_obb.color = (0, 1, 0)
o3d.visualization.draw_geometries([pcd_noisy_boundary, obb_transformed, optimized_obb])
</code></pre>
<p>If you leave <code>o3d.visualization.draw_geometries([expanded_obb, pcd])</code> as it is in the optimization code, you will see that either bounding box is jumping a lot, sometimes even completely away from any points (Powell method), or is just stuck for a non-Powell method.</p>
<p>It seems clear that this method is not really suitable, but I cannot think of an alternative. In a 2D image I would apply contour-finding methods, but I cannot find something comparable for 3D points. There are of course plane detection methods in <code>open3d</code> <strong>but</strong> those are exactly where I get my original non-perfect estimate from. What I want is a way to refine the estimate based on what those <code>open3d</code> methods found.</p>
<ol>
<li>Is it possible to fix the approach I am using? (One possible direction is sketched below.)</li>
<li>Or if it is completely unsuitable, which approach should I pursue?</li>
</ol>
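<p>For reference, a minimal sketch of one possible direction (an assumption, not a verified fix): the inlier <em>count</em> is piecewise constant, so gradient-based SciPy methods see a zero gradient and never move, while Powell can jump across the flat regions. Replacing the count with a smooth penalty on how far each point lies outside the box keeps the same idea but gives the optimizer a slope to follow; it can then be minimized with <code>scipy.optimize.minimize</code> (e.g. Nelder-Mead) over the same six parameters.</p>
<pre><code>import numpy as np

def smooth_obb_cost(params, points, base_R, extent):
    """Sum of squared distances by which points stick out of the candidate box."""
    cx, cy, cz, ax_, ay_, az_ = params
    # small delta rotation applied on top of the initial OBB rotation
    Rx = np.array([[1, 0, 0], [0, np.cos(ax_), -np.sin(ax_)], [0, np.sin(ax_), np.cos(ax_)]])
    Ry = np.array([[np.cos(ay_), 0, np.sin(ay_)], [0, 1, 0], [-np.sin(ay_), 0, np.cos(ay_)]])
    Rz = np.array([[np.cos(az_), -np.sin(az_), 0], [np.sin(az_), np.cos(az_), 0], [0, 0, 1]])
    R = base_R @ (Rz @ Ry @ Rx)
    # express the points in the box frame and penalise overshoot past the half-extents
    local = (points - np.array([cx, cy, cz])) @ R
    overshoot = np.maximum(np.abs(local) - np.asarray(extent) / 2.0, 0.0)
    return np.sum(np.linalg.norm(overshoot, axis=1) ** 2)
</code></pre>
<p>Here <code>points</code> would be <code>np.asarray(pcd.points)</code>, and <code>base_R</code>/<code>extent</code> come from the initial OBB, as in the code above.</p>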
|
<python><open3d>
|
2025-01-30 00:19:36
| 1
| 1,277
|
Valeria
|
79,398,347
| 6,024,187
|
EvalCallback hangs in stable-baselines3
|
<p>I'm trying to train an A2C model in stable-baselines3 and the EvalCallback appears to freeze when it is called. I cannot figure out why. Below you will find a script that recreates this problem. Before that, here is the output of the various print statements spread throughout to monitor where the program is:</p>
<pre><code>Creating environments for SubprocVecEnv...
Defining the A2C model...
Making an evaluation callback...
Training the A2C model...
Number of time steps at training start: 0
_on_step: CustomEvalCallback has been called 1 times
Number of time steps at rollout end: 3
_on_step: CustomEvalCallback has been called 2 times
Number of time steps at rollout end: 6
_on_step: CustomEvalCallback has been called 3 times
Number of time steps at rollout end: 9
_on_step: CustomEvalCallback has been called 4 times
Number of time steps at rollout end: 12
_on_step: CustomEvalCallback has been called 5 times
eval will be performed now...
</code></pre>
<p>The script will sit here for as long as I will let it. Why is this happening and how do I fix it?</p>
<p>The script:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import gymnasium as gym
import torch as th
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import SubprocVecEnv
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.callbacks import BaseCallback, EvalCallback
from typing import Any
class DummyVecEnv(gym.Env):
def __init__(self, n_controls: int = 1):
super().__init__()
self.n_controls = n_controls
self.observation_space = gym.spaces.Box(
low=0, high=1e4, shape=(1,),
dtype=np.float32
)
self.action_space = gym.spaces.Box(
low=-1e4, high=1e4, shape=(self.n_controls,),
dtype=np.float32
)
def _get_obs(self) -> np.ndarray:
return self.observation_space.sample()
def reset(
self,
*,
seed: int | None = None,
config: dict[str, Any] = {},
) -> tuple[np.ndarray, dict[str, Any]]:
return self._get_obs(), {}
def step(
self,
action: np.ndarray
) -> (np.ndarray, float, bool, bool, dict):
obs = self._get_obs()
reward = np.random.rand()
terminated = False
truncated = False
return obs, reward, terminated, truncated, {}
class MakeDummyEnv:
def __init__(self, n_controls: int = 1, is_eval_env: bool = False):
self.n_controls = n_controls
self.is_eval_env = is_eval_env
def __call__(self):
env = DummyVecEnv(n_controls=self.n_controls)
if self.is_eval_env:
return Monitor(env)
else:
return env
class TensorboardCallback(BaseCallback):
"""
Custom callback for plotting additional values in tensorboard.
"""
def __init__(self, verbose=0):
super().__init__(verbose)
def _on_rollout_end(self) -> None:
""" the rollout is lost before _on_step is called, log returns here """
if hasattr(self.model, "num_timesteps"):
print('Number of time steps at rollout end:', self.model.num_timesteps)
if hasattr(self.model, "rollout_buffer"):
rollout_data = next(self.model.rollout_buffer.get(batch_size=None))
self.logger.record('train/buffer_size', len(rollout_data.actions))
self.logger.record('train/return_mean', rollout_data.returns.mean().item())
self.logger.record('train/return_std', rollout_data.returns.std().item())
self.logger.record('train/first_env_temp',
rollout_data.observations.flatten()[0].item()
)
if not hasattr(self.model, "n_iterations"):
self.model.n_iterations = 0
self.model.n_iterations += 1
def _on_step(self) -> bool:
""" log the std of each output here """
if hasattr(self.model, "num_timesteps"):
self.logger.record('train/num_timesteps', self.model.num_timesteps)
if hasattr(self.model.policy, "log_std"):
for i in range(len(self.model.policy.log_std)):
self.logger.record(
f"train/std_{i:d}",
th.exp(self.model.policy.log_std[i]).item()
)
return True
def _on_training_start(self) -> None:
if hasattr(self.model, "num_timesteps"):
print('Number of time steps at training start:', self.model.num_timesteps)
def _on_training_end(self) -> None:
if hasattr(self.model, "num_timesteps"):
print('Number of time steps at training end:', self.model.num_timesteps)
class CustomEvalCallback(EvalCallback):
def _on_step(self) -> bool:
""" this is called by self.on_step which is called when EvalCallback is triggered """
print(f"_on_step: CustomEvalCallback has been called {self.n_calls:d} times")
if self.eval_freq > 0 and self.n_calls % self.eval_freq == 0:
print('\teval will be performed now...')
super()._on_step()
return True
if __name__ == "__main__":
print('Creating environments for SubprocVecEnv...')
vec_env = SubprocVecEnv(
env_fns=[MakeDummyEnv(n_controls=3) for _ in range(3)]
)
print('Defining the A2C model...')
model = A2C(
policy="MlpPolicy",
env=vec_env,
n_steps=1,
verbose=0,
device='cpu',
tensorboard_log="./logs"
)
print('Making an evaluation callback...')
n_eval_env = 2
eval_env = SubprocVecEnv(
env_fns=[
MakeDummyEnv(n_controls=3, is_eval_env=True)
for _ in range(n_eval_env)
]
)
eval_callback = CustomEvalCallback(
eval_env=eval_env,
callback_on_new_best=None,
callback_after_eval=None,
n_eval_episodes=n_eval_env,
eval_freq=5, # this appears to be number of iterations, not time steps
log_path=None, # leave this as None
best_model_save_path='saves', # saves as 'best_model' here
deterministic=True,
render=False,
verbose=0,
warn=True,
)
print('Training the A2C model...')
model.learn(
total_timesteps=25,
progress_bar=False,
log_interval=1,
callback=[TensorboardCallback(), eval_callback],
tb_log_name='dummy_log',
)
</code></pre>
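<p>For reference, a minimal sketch of the first thing I would check (an assumption about the cause, not a confirmed diagnosis): <code>DummyVecEnv.step</code> never returns <code>terminated</code> or <code>truncated</code> as <code>True</code>, and the <code>evaluate_policy</code> call inside <code>EvalCallback</code> only returns once <code>n_eval_episodes</code> episodes have actually ended, so the evaluation would spin forever. Adding a step limit to the toy environment lets episodes finish; the class below mirrors the names in the script above.</p>
<pre class="lang-py prettyprint-override"><code>class DummyVecEnvWithLimit(DummyVecEnv):
    """Same toy environment, but episodes are truncated after max_steps."""

    def __init__(self, n_controls: int = 1, max_steps: int = 20):
        super().__init__(n_controls=n_controls)
        self.max_steps = max_steps
        self._steps = 0

    def reset(self, **kwargs):
        self._steps = 0
        return super().reset(**kwargs)

    def step(self, action):
        obs, reward, terminated, truncated, info = super().step(action)
        self._steps += 1
        truncated = self._steps >= self.max_steps  # let evaluation episodes end
        return obs, reward, terminated, truncated, info
</code></pre>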
|
<python><python-3.x><reinforcement-learning><stablebaseline3>
|
2025-01-29 22:34:12
| 1
| 897
|
Finncent Price
|
79,398,345
| 12,609,881
|
How to determine if checksums are present in parquet file?
|
<p>I recently came across an error when reading a parquet table in pyspark:</p>
<pre><code>Caused by: org.apache.parquet.io.ParquetDecodingException: could not verify page integrity, CRC checksum verification failed
</code></pre>
<p>This table was transferred over FTP. The error was resolved when I deleted the table in the destination and transferred it again from the source. Therefore, one if not more of the parquet files must have been corrupted when transferring over the network during the first attempt.</p>
<p>I am concerned that the generic job used to transfer this table is not robust to data corruption in transit. I want to inspect other tables that were transferred using this same generic job. However, I suspect that the underlying parquet files of most of these tables do not contain checksums. Otherwise chances are that my team would have run into this error before.</p>
<p>Sources I have come across lead me to believe that checksums in parquet page headers are optional and not enabled by default. If a checksum is present in the file it would be an integer and otherwise <code>NA</code></p>
<ul>
<li><a href="https://www.waitingforcode.com/apache-parquet/data-storage-apache-parquet/read" rel="nofollow noreferrer">https://www.waitingforcode.com/apache-parquet/data-storage-apache-parquet/read</a></li>
<li><a href="https://nanoparquet.r-lib.org/reference/parquet_pages.html" rel="nofollow noreferrer">https://nanoparquet.r-lib.org/reference/parquet_pages.html</a></li>
<li><a href="https://lists.apache.org/thread/16oqolrntt6qgf4nksc62smzc8m6ln0k" rel="nofollow noreferrer">https://lists.apache.org/thread/16oqolrntt6qgf4nksc62smzc8m6ln0k</a></li>
</ul>
<p>Is there any way in python to read the crc of parquet pages directly or at least determine if one is present indirectly? The stackoverflow question and answer below seems to suggest it cannot be done in <code>pyarrow</code>, unfortunately.</p>
<p><a href="https://stackoverflow.com/questions/75196568/how-do-i-get-page-level-data-of-a-parquet-file-with-pyarrow">How do I get page level data of a parquet file with pyarrow?</a></p>
<hr />
<p><strong>Update 2025-01-30</strong></p>
<p>I have not found a solution in python after further digging. Python packages <a href="https://pypi.org/project/parquet-tools/" rel="nofollow noreferrer">parquet-tools</a> and <a href="https://pypi.org/project/parquet-cli/" rel="nofollow noreferrer">parquet-cli</a> are not granular enough. However, the java version of <a href="https://rangareddy.github.io/ParquetTools/" rel="nofollow noreferrer">parquet-tools</a> is appropriate. The easiest way I was able to get the java version running is by leveraging a docker version: <a href="https://github.com/rm3l/docker-parquet-tools" rel="nofollow noreferrer">https://github.com/rm3l/docker-parquet-tools</a></p>
<pre><code>$docker container run -v ./local/path/to/parquet:/container/path/to/parquet --rm -t rm3l/parquet-tools:latest dump -n /container/path/to/parquet/test.parquet
</code></pre>
<p>I ran the two experiments below to determine if I can reliably detect corrupted parquet files.</p>
<p>Test A:</p>
<pre><code>data = [("John", 28), ("Anna", 23), ("Mike", 35), ("Sara", 30), ("David", 40)]
columns = ["Name", "Age"]
df = spark.createDataFrame(data, columns)
df.repartition(1).write.option("parquet.page.write-checksum.enabled", "true").parquet(...)
</code></pre>
<p>Then using parquet-tools produced output including the line below:</p>
<blockquote>
<p>page 0: DLE:RLE RLE:BIT_PACKED VLE:PLAIN ST:[no stats for this
column] <strong>CRC:[PAGE CORRUPT]</strong> SZ:47 VC:5</p>
</blockquote>
<p>Test B:</p>
<pre><code>data = [("John", 28), ("Anna", 23), ("Mike", 35), ("Sara", 30), ("David", 40)]
columns = ["Name", "Age"]
df = spark.createDataFrame(data, columns)
df.repartition(1).write.option("parquet.page.write-checksum.enabled", "false").parquet(...)
</code></pre>
<p>Then using parquet-tools produced output including the line below:</p>
<blockquote>
<p>page 0: DLE:RLE RLE:BIT_PACKED VLE:PLAIN ST:[no stats for this
column] <strong>CRC:[none]</strong> SZ:47 VC:5</p>
</blockquote>
<p>The results of Test A are very unexpected because it should not have resulted in a corrupted file and yet I am reading CRC:[PAGE CORRUPT] rather than an expected integer value.</p>
|
<python><pyspark><parquet><pyarrow>
|
2025-01-29 22:33:01
| 0
| 911
|
Matthew Thomas
|
79,398,317
| 2,955,608
|
Violin plot - Concatenating two violins in one
|
<p>I have a dataframe for which I am plotting a violin plot. The dataframe has two categories, A and B, with highly imbalanced data (one group has 1000 samples/rows, the other 200). I want to compare both categories, and I believe a violin plot will give more information about the distributions. I can plot two violins for the separate groups; however, I want the left side to represent category A and the right side category B. I am able to plot that, but I also want the boxes, whiskers, interquartile ranges etc. to stay separate for each group. My plot has just one box in the center (image 1); I want something like image 2 (ref: <a href="https://python-graph-gallery.com/54-grouped-violinplot/" rel="nofollow noreferrer">https://python-graph-gallery.com/54-grouped-violinplot/</a>).</p>
<p>Setting split=False creates two separate violins.</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Sample data for df1 and df2
df1 = pd.DataFrame({
'category': ['A', 'A', 'B', 'B', 'B'],
'Value': [10, 200, 300, 40, 50]
})
df2 = pd.DataFrame({
'category': ['A', 'A', 'B', 'B', 'B'],
'Value': [3, 4, 5, 4, 1]
})
# Add group labels to each dataframe
df1['day'] = 'thurs'
df2['day'] = 'thurs'
# Combine the dataframes
combined_df = pd.concat([df1, df2])
# Plot violin chart
plt.figure(figsize=(10, 6))
sns.violinplot(x='day', y='Value', hue = 'category', data=combined_df, split=True)
plt.title('Violin Plot of df1 and df2')
plt.show()
</code></pre>
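<p>For reference, a minimal sketch of what I have been experimenting with (assuming seaborn 0.13 or newer, where <code>violinplot</code> accepts a <code>gap</code> argument; treat this as a sketch rather than a confirmed fix): with <code>inner="quart"</code> each half shows its own quartile lines instead of one shared box in the centre, and <code>gap</code> visually separates the two halves.</p>
<pre><code># using combined_df built above
sns.violinplot(
    x='day', y='Value', hue='category', data=combined_df,
    split=True, gap=0.1, inner='quart',
)
</code></pre>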
|
<python><seaborn><violin-plot>
|
2025-01-29 22:10:03
| 0
| 585
|
eshaa
|
79,398,246
| 1,172,907
|
What is meant with validation for django imagefield?
|
<p>The <a href="https://docs.djangoproject.com/en/5.1/ref/models/fields/#django.db.models.ImageField" rel="nofollow noreferrer">ImageField docu</a> states:</p>
<blockquote>
<p>Inherits all attributes and methods from FileField, but also validates that the uploaded object is a valid image.</p>
</blockquote>
<p>Yet any string is accepted</p>
<pre class="lang-py prettyprint-override"><code>class Foo(Model):
pic = models.ImageField(upload_to='files')
</code></pre>
<p>e.g. I can save this without error and <strong>nothing is uploaded</strong> to <code>files</code> (not even with a correct file)</p>
<pre class="lang-py prettyprint-override"><code>fooinstance.pic="bogus"
fooinstance.save()
fooinstance.full_clean()
fooinstance.pic.__dict__
</code></pre>
<pre><code>{'_file': None,
'name': 'bogus',
'instance': <Foo:...>,
'field': <django.db.models.fields.files.ImageField: pic>,
'storage': <django.core.files.storage.filesystem.FileSystemStorage at 0x721add4903d0>,
'_committed': True}
</code></pre>
<p>Meanwhile the FileField works/uploads perfectly fine</p>
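<p>For comparison, a minimal sketch of the flow where something actually reaches <code>files/</code> (the local path and file name are hypothetical; my understanding, which may be incomplete, is that assigning a plain string only sets the stored name, that <code>full_clean()</code> must run before <code>save()</code> to do any validating at all, and that the Pillow-based "is this really an image" check lives in <code>forms.ImageField</code> rather than in the model field):</p>
<pre class="lang-py prettyprint-override"><code>from django.core.files import File

foo = Foo()
with open("/tmp/cat.jpg", "rb") as fh:       # hypothetical local file
    foo.pic = File(fh, name="cat.jpg")       # a real file object, not a bare string
    foo.full_clean()                         # model validation runs here, not in save()
    foo.save()                               # the upload to files/ happens on save
</code></pre>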
|
<python><django>
|
2025-01-29 21:31:43
| 1
| 605
|
jjk
|
79,398,217
| 9,393,102
|
InaccessibleTensorError when using multiple TensorArrays in a list
|
<p>I am implementing a fairly complex network structure where I need to loop over a (possibly variable) time dimension, compute values at each step, and collect all these values, and do this for multiple layers. The values at each layer could have different shapes. I want to compile to a <code>tf.function</code> to speed things up. I will first provide a minimal example of the problematic code. TF 2.15, Python 3.11.</p>
<p>This code works:</p>
<pre><code>@tf.function
def test(inputs):
one_array = tf.TensorArray(tf.float32, size=tf.shape(inputs)[0])
for step in tf.range(tf.shape(inputs)[0]):
one_array = one_array.write(step, inputs[step])
return one_array.stack()
dummy_input = tf.random.normal([5])
test(dummy_input)
</code></pre>
<p>However this does not:</p>
<pre><code>@tf.function
def test2(inputs):
n_layers = 2 # number doesn't matter
arrays = [tf.TensorArray(tf.float32, size=tf.shape(inputs)[0]) for _ in range(n_layers)]
for step in tf.range(tf.shape(inputs)[0]):
for ind in range(n_layers):
arrays[ind] = arrays[ind].write(step, inputs[step])
for ind in range(n_layers):
arrays[ind] = arrays[ind].stack()
return arrays # returning this directly without the stack() in the line above also crashes
test2(dummy_input)
</code></pre>
<p>raises:</p>
<pre><code>---------------------------------------------------------------------------
InaccessibleTensorError Traceback (most recent call last)
Cell In[99], line 12
9 arrays[ind] = arrays[ind].stack()
10 return arrays
---> 12 test2(dummy_input)
File /project/jens/tf311/lib/python3.11/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File /project/jens/tf311/lib/python3.11/site-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py:52, in py_func_from_autograph.<locals>.autograph_handler(*args, **kwargs)
50 except Exception as e: # pylint:disable=broad-except
51 if hasattr(e, "ag_error_metadata"):
---> 52 raise e.ag_error_metadata.to_exception(e)
53 else:
54 raise
InaccessibleTensorError: in user code:
File "/tmp/ipykernel_96618/173506222.py", line 9, in test2 *
arrays[ind] = arrays[ind].stack()
File "/project/jens/tf311/lib/python3.11/site-packages/tensorflow/core/function/capture/capture_container.py", line 144, in capture_by_value
graph._validate_in_scope(tensor) # pylint: disable=protected-access
InaccessibleTensorError: <tf.Tensor 'while/TensorArrayV2Write/TensorListSetItem:0' shape=() dtype=variant> is out of scope and cannot be used here. Use return values, explicit Python locals or TensorFlow collections to access it.
Please see https://www.tensorflow.org/guide/function#all_outputs_of_a_tffunction_must_be_return_values for more information.
<tf.Tensor 'while/TensorArrayV2Write/TensorListSetItem:0' shape=() dtype=variant> was defined here:
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel_launcher.py", line 17, in <module>
File "/project/jens/tf311/lib/python3.11/site-packages/traitlets/config/application.py", line 1075, in launch_instance
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/kernelapp.py", line 739, in start
File "/project/jens/tf311/lib/python3.11/site-packages/tornado/platform/asyncio.py", line 205, in start
File "/usr/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
File "/usr/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
File "/usr/lib/python3.11/asyncio/events.py", line 80, in _run
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 542, in dispatch_queue
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 531, in process_one
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 437, in dispatch_shell
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 359, in execute_request
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 775, in execute_request
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 446, in do_execute
File "/project/jens/tf311/lib/python3.11/site-packages/ipykernel/zmqshell.py", line 549, in run_cell
File "/project/jens/tf311/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3051, in run_cell
File "/project/jens/tf311/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3106, in _run_cell
File "/project/jens/tf311/lib/python3.11/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
File "/project/jens/tf311/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3311, in run_cell_async
File "/project/jens/tf311/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3493, in run_ast_nodes
File "/project/jens/tf311/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
File "/tmp/ipykernel_96618/173506222.py", line 12, in <module>
File "/tmp/ipykernel_96618/173506222.py", line 5, in test2
File "/tmp/ipykernel_96618/173506222.py", line 6, in test2
File "/tmp/ipykernel_96618/173506222.py", line 7, in test2
The tensor <tf.Tensor 'while/TensorArrayV2Write/TensorListSetItem:0' shape=() dtype=variant> cannot be accessed from FuncGraph(name=test2, id=139903595515616), because it was defined in FuncGraph(name=while_body_262957, id=139903595521664), which is out of scope.
</code></pre>
<p>I am not sure what to do here. Using <code>range</code> instead of <code>tf.range</code> works if the input length is known in advance, but it will compile the "unrolled" loop into a graph, which takes very long for longer sequences. I need <code>TensorArray</code> because that's the dynamic data structure Tensorflow offers in such loops. But I also need to collect the <code>TensorArrays</code> in another data structure; the only alternative I see is to define one Python variable per array for each layer, which would require changing the code every time I change the number of layers -- not really an option.</p>
<p>I may be able to stack everything into one big <code>TensorArray</code> and then take it apart into the per-layer components afterwards, but I wanted to see if there is something I am missing here, first.</p>
<p>Note: I did read through the link given in the stack trace; it wasn't really helpful.</p>
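<p>For reference, a minimal sketch of the "one big <code>TensorArray</code>" workaround mentioned above (this only works this simply if all layers produce values of the same shape, which conflicts with the different-shapes case mentioned at the top): write layer <code>ind</code> at a flattened index inside the loop, then split per layer afterwards.</p>
<pre><code>@tf.function
def test3(inputs):
    n_layers = 2
    n_steps = tf.shape(inputs)[0]
    big = tf.TensorArray(tf.float32, size=n_steps * n_layers)
    for step in tf.range(n_steps):
        for ind in range(n_layers):          # Python loop, unrolled at trace time
            big = big.write(step * n_layers + ind, inputs[step])
    stacked = tf.reshape(big.stack(), [n_steps, n_layers])
    return [stacked[:, i] for i in range(n_layers)]   # one tensor per layer

test3(dummy_input)
</code></pre>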
|
<python><tensorflow><tensorflow2.0><tensorflow-autograph>
|
2025-01-29 21:19:47
| 1
| 10,639
|
xdurch0
|
79,398,178
| 581,467
|
Variable type for list and list[str] values
|
<p>What type should I set instead of TYPE1 and TYPE2 for x, y values?</p>
<pre><code>x: TYPE1 = list
y: TYPE2 = list[int]
</code></pre>
<p>For TYPE1 case, if I set</p>
<pre><code>type T = int | str | bool
x: type[list[T]] = list
</code></pre>
<p>mypy returns no errors, but why is everything OK here when the assigned value, <code>list</code>, has no type parameters specified?</p>
<p>And for TYPE2 I have no idea what it should be set to</p>
<p>Ok, so I end up with</p>
<pre><code>type TT = int | str | bool | float
type GenericIterableType[T: TT] = list[T] | tuple[T, ...] | set[T]
t: type[GenericIterableType[int] | GenericIterableType[str]] = list[str]
</code></pre>
<p>But is it possible to not repeat GenericIterableType for every type in TT ?</p>
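<p>For reference, the plain annotations that, as far as I can tell, type-check for the two original assignments (a sketch of my current understanding rather than a definitive answer): a bare <code>list</code> is implicitly <code>type[list[Any]]</code>, and <code>Any</code> is compatible with any parametrisation, which is why mypy accepts the first case without complaint.</p>
<pre><code>x: type[list[int | str | bool]] = list   # bare `list` is type[list[Any]]; Any matches
y: type[list[int]] = list[int]           # a parametrised alias is a type[...] value too
</code></pre>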
|
<python><python-typing>
|
2025-01-29 20:59:22
| 1
| 896
|
sector119
|
79,398,160
| 1,115,716
|
Running two subprocesses together
|
<p>I need to gather some performance data on some GPU renders in Blender, some of these are very short, like under 5 seconds. I currently run the render via my benchmarking script like so:</p>
<pre><code>render_command_to_run = '{} {} --background --python {} -- --number {} --save {}'.format(
render_executable, file_to_render, massager_script, num_spheres, new_filename_full_path)
</code></pre>
<p>Which basically launches Blender in headless mode to run a render. In order to gather some performance data, I'm using <code>nvidia-smi</code> in another terminal like so:</p>
<pre><code>nvidia-smi --query-gpu=memory.used --format=csv -l 1
</code></pre>
<p>and gathering what I need that way, but I would ideally like to have this run as a python subprocess while the other is running so that I can parse the output as needed. How can I do that?</p>
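<p>For reference, a minimal sketch of how I would wire this up (it reuses <code>render_command_to_run</code> from the snippet above): start the <code>nvidia-smi</code> sampler with <code>Popen</code> so it runs concurrently, block on the render, then stop the sampler and parse its CSV output.</p>
<pre><code>import subprocess

smi_cmd = ["nvidia-smi", "--query-gpu=memory.used",
           "--format=csv,noheader,nounits", "-l", "1"]

smi = subprocess.Popen(smi_cmd, stdout=subprocess.PIPE, text=True)  # runs in the background
subprocess.run(render_command_to_run, shell=True, check=True)       # blocks until render ends
smi.terminate()                                                      # stop sampling

samples = [int(line) for line in smi.communicate()[0].splitlines() if line.strip()]
print(f"peak GPU memory during render: {max(samples)} MiB")
</code></pre>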
|
<python><nvidia><blender>
|
2025-01-29 20:53:16
| 0
| 1,842
|
easythrees
|
79,398,117
| 595,305
|
How does sys.getswitchinterval() work with PyQt?
|
<p>From <a href="https://docs.python.org/3/library/sys.html#sys.setswitchinterval" rel="nofollow noreferrer">the docs</a>:
<code>setswitchinterval</code></p>
<blockquote>
<p>Set the interpreter's thread switch interval (in seconds). This
floating-point value determines the ideal duration of the "timeslices"
allocated to concurrently running Python threads. Please note that the
actual value can be higher, especially if long-running internal
functions or methods are used. Also, which thread becomes scheduled at
the end of the interval is the operating system's decision. The
interpreter doesn't have its own scheduler.</p>
</blockquote>
<p>(<code>getswitchinterval()</code> retrieves the value. The standard value is 0.005 seconds).</p>
<p>This dates from Python 3.2, in 2009. I read <a href="https://realpython.com/python-gil/" rel="nofollow noreferrer">here</a> that this mechanism was introduced originally to prevent CPU-bound threads hogging the GIL.</p>
<blockquote>
<p>This problem was fixed in Python 3.2 in 2009 by Antoine Pitrou who
added a mechanism of looking at the number of GIL acquisition requests
by other threads that got dropped and not allowing the current thread
to reacquire GIL before other threads got a chance to run.</p>
</blockquote>
<p>Originally it was called <code>sys.setcheckinterval()</code>/<code>sys.getcheckinterval()</code> (now both deprecated) and gave a number of machine instructions rather than a time interval.</p>
<p>In PyQt, when you have an intensive worker thread and you want to prevent it from freezing the GUI in the event thread, you have to introduce <code>time.sleep(...)</code> calls (there may be other mechanisms). In my experience, without this a worker thread (<code>QThread</code>) will simply never let go until it finishes.</p>
<p>But does anyone know why this Python 3.2+ "switch interval" mechanism doesn't provide a way of "letting the event thread breathe", out of the box? I'm guessing that the answer may be that PyQt sets up its own thread-switching architecture which overrides the basic Python one. But if that's the case it seems odd that something as basic as <code>time.sleep(..)</code> should "fix" the problem of providing event thread responsiveness. Obviously <code>time.sleep(..)</code> is not doing the same job as this switch interval mechanism: but in what way then is it different, at least in a PyQt context?</p>
<p>Incidentally, I'm using PyQt5... I presume it's the same with later versions.</p>
|
<python><multithreading><pyqt><sleep><gil>
|
2025-01-29 20:34:11
| 0
| 16,076
|
mike rodent
|
79,398,024
| 7,553,746
|
How to render a Jinja2 template field with correct capitalisation depending on field location?
|
<p>I am trying to render an HTML field with the same string; in this example it's "pots and pans". I need to ensure that if {{ example_field }} is at the beginning of a sentence it is capitalised, and if not it's lowercase.</p>
<p>I've tried numerous iterations of the following, and this is where I got to; it does not work.</p>
<pre><code>from jinja2 import Environment, PackageLoader, select_autoescape
environment = Environment()
EXAMPLE_BODY_1 = """
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Example</title>
</head>
<body>
<div><h2>Example Contract Term</h2></div>
<div>
<p>
Sometimes the example_field is in the middle so needs to be lowercase,
{{ example_field }} is a good example.
</p>
<ul>
<i>All terms in this example are binding.</i>
<i>The terms are made up as this is an example.</i>
<i>Terms can always change.</i>
</ul>
</div>
<div><h2>Example Contract Other Term</h2></div>
<div>
<p>
{{ example_field }} can also be at the start of a sentence to therefore
needs to be capitalised.
</p>
<ul>
<i>Sometimes it's like the first example.</i>
<i>The terms are made up as this is an example.</i>
<i>Terms can always change.</i>
</ul>
</div>
</body>
</html>
"""
def capitalize_if_start_of_html_sentence(value, html_content):
# Normalize whitespace and remove HTML tags
content = ' '.join(html_content.split())
# Split the content into sentences using common sentence-ending punctuation
sentences = content.split('. ')
for sentence in sentences:
# Further split by other sentence-ending punctuation
sub_sentences = sentence.split('? ')
for sub_sentence in sub_sentences:
sub_sub_sentences = sub_sentence.split('! ')
for sub_sub_sentence in sub_sub_sentences:
# Check if the sub-sentence starts with the example_field placeholder
if sub_sub_sentence.startswith('{{ example_field }}'):
return value.capitalize()
return value.lower()
# Add the custom filter to the environment
environment.filters['capitalize_if_start_of_html_sentence'] = lambda value: capitalize_if_start_of_html_sentence(value, EXAMPLE_BODY_1)
# Create a template from string
template = environment.from_string(EXAMPLE_BODY_1)
example_merge_field = "pots and pans"
# Render the template with the custom filter applied
print(template.render(example_field=example_merge_field))
# Apply the filter manually to the example_field
capitalized_example_field = capitalize_if_start_of_html_sentence(example_merge_field, EXAMPLE_BODY_1)
# Render the template with the manually capitalized field
print(template.render(example_field=capitalized_example_field))
</code></pre>
<p>I also tried using HTMLParser, since the parsed data still contains the literal {{ example_field }} placeholder followed by the extracted text, so in principle I could check whether a text node starts with the placeholder, but I couldn't work it out.</p>
<pre><code>from html.parser import HTMLParser
class MyHTMLParser(HTMLParser):
def handle_starttag(self, tag, attrs):
print("Encountered a start tag:", tag)
def handle_data(self, data):
print("Data :", data)
</code></pre>
<p>This is what I am trying to achieve:</p>
<pre><code># Expected output
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Example</title>
</head>
<body>
<div><h2>Example Contract Term</h2></div>
<div>
<p>
Sometimes the example_field is in the middle so needs to be lowercase,
pots and pans is a good example.
</p>
<ul>
<i>All terms in this example are binding.</i>
<i>The terms are made up as this is an example.</i>
<i>Terms can always change.</i>
</ul>
</div>
<div><h2>Example Contract Other Term</h2></div>
<div>
<p>Pots and pans can also be at the start of a sentence to therefore
needs to be capitalised.
</p>
<ul>
<i>Sometimes it's like the first example.</i>
<i>The terms are made up as this is an example.</i>
<i>Terms can always change.</i>
</ul>
</div>
</body>
</html>
</code></pre>
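<p>For reference, a minimal sketch of the simplest direction I can see (it assumes the template itself can be edited wherever the placeholder opens a sentence, rather than detecting sentence starts programmatically): apply Jinja2's built-in <code>capitalize</code> filter to just those occurrences.</p>
<pre><code>from jinja2 import Environment

env = Environment()
template = env.from_string(
    "Sometimes it is in the middle, {{ example_field }} is a good example. "
    "{{ example_field | capitalize }} can also start a sentence."
)
print(template.render(example_field="pots and pans"))
# -> ...pots and pans is a good example. Pots and pans can also start a sentence.
</code></pre>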
|
<python><jinja2>
|
2025-01-29 19:49:30
| 1
| 3,326
|
Johnny John Boy
|
79,398,020
| 2,779,479
|
command not found when trying to use rsync on Vercel
|
<p>I have a python script which runs as part of <code>yarn build</code> for my static-site project deployed to Vercel. The script uses <code>rsync</code> to do some file copying, using <code>os.system</code> to make the call to rsync from within the python script: <code>os.system(f"rsync -a {full_path} dest_path")</code>. This works great locally, however when I try to deploy to Vercel I get the error:</p>
<pre><code>sh: line 1: rsync: command not found
</code></pre>
<p>I tried a few other approaches as well:</p>
<ul>
<li>using <code>subprocess.run(["npx", "rsync", "-a", full_path, dest_path], check=True)</code> to try get npx to run the command.</li>
</ul>
<p>Vercel build error:</p>
<pre><code>npm error could not determine executable to run
</code></pre>
<ul>
<li><p>Tried adding rsync to my developer dependencies in package.json and retrying.</p>
</li>
<li><p>Finally, I also tried using <code>rsync_path = subprocess.check_output(["which", "rsync"]).decode("utf-8").strip()</code> followed by <code>subprocess.run([rsync_path, "-a", full_path, dest_path], check=True)</code>.</p>
</li>
</ul>
<p>Vercel build error:</p>
<pre><code>error Command failed with exit code 1.
</code></pre>
<p>Is there a simple way for me to use rsync during my Vercel deployment? Does it even exist on the deployment machine? If not how can I set it up to use <code>rsync</code>?</p>
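<p>For reference, a minimal sketch of the workaround I would reach for (an assumption: the Vercel build image simply does not ship an <code>rsync</code> binary, so avoiding the external command entirely is the path of least resistance). <code>shutil.copytree</code> only approximates <code>rsync -a</code> (no delta transfer; permissions and symlinks are handled by the standard library), but for copying files during a static build that is usually enough.</p>
<pre><code>import shutil

# roughly equivalent to `rsync -a full_path dest_path` for a one-way directory copy
shutil.copytree(full_path, dest_path, dirs_exist_ok=True, symlinks=True)
</code></pre>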
|
<python><vercel>
|
2025-01-29 19:48:09
| 0
| 360
|
Blargian
|
79,397,851
| 10,658,339
|
How to resize twin axis matplotlib
|
<p>I'm trying to overlay two figures in a plot, but when I use the twin axes as default, the resulting plot doesn't make much sense. I need to move the second figure (the one to be overlaid) to the top of the first graph. How can I adjust the y-axis so that it also moves to the top, without losing the information, ensuring that the values of the bar plot still correspond correctly to the real data?</p>
<p><a href="https://i.sstatic.net/TMp51lcJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMp51lcJ.png" alt="enter image description here" /></a></p>
<p>Below is my code, which includes synthetic data:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Sample data for demonstration
np.random.seed(42)
dates = pd.date_range(start='2023-01-01', periods=50, freq='D')
precipitation_values = np.random.randint(0, 100, size=50)
precipitacao = pd.DataFrame({'Data Hora': dates, 'Precipitacao Total(mm)': precipitation_values})
y_fake = np.arange(precipitacao.shape[0])
# Define the figure size
fig, ax = plt.subplots(figsize=(12, 8)) # Larger figsize for added space around the plot area
# Sample line plot
ax.plot(precipitacao['Data Hora'], y_fake, label='Example Line')
# Customize labels and ticks
ax.set_xlabel('X-axis Label', fontsize=12)
ax.set_ylabel('Y-axis Label', fontsize=12)
ax.set_title('Plot Title', fontsize=14)
# Twin axis for precipitation
ax2 = ax.twinx()
# Define new precipitation axis limits to make bars occupy only the upper half
precip_max = precipitacao['Precipitacao Total(mm)'].max()
offset = 75
ax2.set_ylim(0, precip_max) # Keep the same scale but shift position
# Shift the bars up by adding an offset equal to half of the plot height
ax2.bar(precipitacao['Data Hora'], precipitacao['Precipitacao Total(mm)'],
color='blue', linewidth=1, alpha=0.5, zorder=12, bottom=offset)
# Adjust tick positions and labels
ax2.set_yticks(np.linspace(0, precip_max, 5)) # Define ticks only for the upper half
ax2.set_yticklabels([f"{int(tick)}" for tick in np.linspace(0, precip_max, 5)])
# Remove lower part of twin y-axis to avoid clutter
ax2.spines['bottom'].set_visible(False)
ax2.spines['right'].set_visible(True) # Keep right spine for precipitation
ax2.set_ylabel('Precipitação Total (mm)', fontsize=10)
# Place the legend outside of the plot area
legend = ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.3), ncol=3, fontsize=10)
# Adjust the subplots to ensure there's enough space for the legend
plt.subplots_adjust(top=0.9, bottom=0.2)
# Show the plot
plt.show()
</code></pre>
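<p>For reference, a minimal sketch of one common way to do this for precipitation-style plots (a sketch, not necessarily the only fix): keep the bars anchored at zero, then enlarge and invert the twin y-axis so that zero sits at the top; the bars hang downward into the upper part of the axes while their tick labels still map to the real values.</p>
<pre><code># replaces the twin-axis section above; everything else stays the same
ax2 = ax.twinx()
ax2.bar(precipitacao['Data Hora'], precipitacao['Precipitacao Total(mm)'],
        color='blue', linewidth=1, alpha=0.5, zorder=12)   # no `bottom` offset
ax2.set_ylim(precip_max * 4, 0)   # inverted: 0 at the top, bars occupy roughly the top quarter
ax2.set_ylabel('Precipitação Total (mm)', fontsize=10)
</code></pre>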
|
<python><matplotlib><axis>
|
2025-01-29 18:44:39
| 0
| 527
|
JCV
|
79,397,808
| 17,837,614
|
"socket" has no attribute "accept"
|
<p>Why is mypy complaining?</p>
<blockquote>
<p>"socket" has no attribute "accept" [attr-defined]</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import socket
def a() -> socket.SocketType:
...
b = a()
c, _ = b.accept()
</code></pre>
<p>It seems like <a href="https://github.com/python/typeshed/blob/89b49d207b93b66e9663df8c3f8a6ced570e35b0/stdlib/socket.pyi#L1281" rel="nofollow noreferrer">the <code>accept</code> method is present in stubs</a>.</p>
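<p>For reference, the annotation that does satisfy mypy for me (my understanding, which may be incomplete, is that <code>socket.SocketType</code> resolves in the stubs to the low-level <code>_socket.socket</code> class, which does not define the wrapper-level <code>accept</code>; the high-level class is <code>socket.socket</code>):</p>
<pre class="lang-py prettyprint-override"><code>import socket

def a() -> socket.socket:     # annotate with the high-level wrapper class
    ...

b = a()
conn, addr = b.accept()       # accept is defined on socket.socket
</code></pre>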
|
<python><python-typing><mypy><python-sockets>
|
2025-01-29 18:30:40
| 1
| 405
|
Sujay
|
79,397,701
| 14,632,200
|
Unable to find candidates for torch
|
<p>So I am trying to install torch on one of my projects and my specs are as below</p>
<ul>
<li>Python 3.12.3</li>
<li>pip (latest)</li>
<li>Poetry 2.0.1</li>
<li>Apple M3 Max</li>
</ul>
<p>Here is my <em>pyproject.toml</em></p>
<pre><code>torch = { version = "^2.5.1+cpu", source = "pytorch-cpu" }
torchvision = { version = "^0.20.1+cpu", source = "pytorch-cpu" }
torchaudio = { version = "^2.5.1+cpu", source = "pytorch-cpu" }
[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
</code></pre>
<p>Adding a screenshot for more context
<a href="https://i.sstatic.net/82zj96dT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82zj96dT.png" alt="enter image description here" /></a></p>
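<p>For reference, a sketch of the pyproject change I would try on Apple Silicon (my assumption is that the <code>+cpu</code> local versions on the cpu index only exist as Linux/Windows wheels, while the macOS arm64 builds are published to PyPI without the suffix, so the constraint above can never be satisfied on an M-series Mac):</p>
<pre><code># hypothetical: plain versions without the +cpu suffix or explicit source on macOS,
# where the default PyPI wheels are already CPU/MPS builds
torch = "^2.5.1"
torchvision = "^0.20.1"
torchaudio = "^2.5.1"
</code></pre>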
|
<python><macos><pytorch><python-poetry>
|
2025-01-29 17:50:17
| 1
| 459
|
Olsi Hoxha
|
79,397,624
| 17,837,614
|
Missing type parameters for generic type "StrictRedis"
|
<pre><code>import redis
a = redis.StrictRedis(host="a", port=123, db=0, decode_responses=True)
</code></pre>
<p>Mypy is giving error
<code>Missing type parameters for generic type "StrictRedis" [type-arg]</code></p>
<p>Python version: <code>3.9.20</code></p>
<p>Mypy version: <code>1.14.1</code></p>
|
<python><python-typing><mypy>
|
2025-01-29 17:22:31
| 2
| 405
|
Sujay
|
79,397,519
| 1,615,108
|
A solution to multiprocessing in Python without the if __name__ == '__main__' guard
|
<p><strong>TLDR</strong></p>
<p>Normally, any Python module that internally uses multiprocessing causes all scripts that import it to have to include the "guard" clause.</p>
<p>With the trick described, this guard is no longer necessary. People importing such a module in their scripts can write short and simple code - as if there was no multiprocessing involved.</p>
<p><strong>Summarizing the problem</strong></p>
<p>There are literally <a href="https://stackoverflow.com/search?q=Python%20multiprocessing%20An%20attempt%20has%20been%20made%20to%20start%20a%20new%20process%20before">dozens of questions</a> here on StackOverflow about spawning a process in Python using the <em>multiprocessing</em> module without using the</p>
<pre><code>if __name__ == '__main__'
...
</code></pre>
<p>"guard" clause in the main Python 3 script.</p>
<p>Without this clause, <em>multiprocessing</em> will report a RuntimeError. (Otherwise, the main script running in the spawned process would create an infinite chain of its copies; this would quickly cause the program to eat up all RAM and freeze.) Thus it is usually recommended to accept the guard clause as a necessity for platform-independent multiprocessing in Python.</p>
<p>However, there are use cases where this "guard" is unacceptable. For example, I am developing a module which facilitates <a href="https://github.com/FilipDominec/rp2daq" rel="nofollow noreferrer">hardware control from Python using a Raspberry Pi pico device</a>. On startup, it needs to spawn a separate process for efficient pipelining of USB communication, but such technical complexities shall be hidden inside the module so that it only exposes its high-level, user-friendly interface.</p>
<p>The problem is that the "guard" (and its indentation) has to be propagated into every Python executable that explicitly or implicitly uses <em>multiprocessing</em>. Writing short Python scripts to automate laboratory tasks can be quite convenient even for non-programmers, but I cannot ask my fellow scientists to include this awkward, unpythonic feature in every single script they write.</p>
<p>Below I suggest a solution as a first answer; I am looking forward to your alternatives and comments. Most importantly:</p>
<ol>
<li><p>Do you think this feature should once be added to the official <em>multiprocessing</em> module, too?</p>
</li>
<li><p>If so, what's the canonical way to do it? If not, what should I do otherwise?)</p>
</li>
<li><p>Do you think it's okay to link this solution in the numerous related questions on StackOverflow? It can help others, but I don't know the right policy regarding this.</p>
</li>
</ol>
|
<python><python-3.x><multiprocessing><spawn>
|
2025-01-29 16:49:26
| 1
| 414
|
dominecf
|
79,397,447
| 7,971,750
|
PyCharm Flask configuration uses the wrong folder as working directory?
|
<p>For no apparent reason, Flask decided to use a folder one level above the actual project directory.</p>
<p>Project details:</p>
<ul>
<li>Working directory in Run/Debug is set correctly</li>
<li>Python console in PyCharm starts in the correct directory</li>
<li>The parent folder is mentioned once -- in the "Paths to .env files" field, where a single .env file is included</li>
<li>Settings -> Project: projectname -> Project Structure shows the correct structure (intended working directory is set as project root)</li>
<li>Just in case, I set the working directory for both Python and Flask consoles in Settings -> Build, Execution, Deployment -> Console to project root</li>
</ul>
<p>Despite the working directory being set in every place I could think of, when I try to use the project, I get an error because it tries to load a file from the parent directory (e.g. <code>~/file.yml</code> instead of <code>~/projectname/file.yml</code>).</p>
<p>Running the project manually via the default Python configuration with the same working directory and env file path settings works fine.</p>
<p>What am I missing?</p>
|
<python><flask><pycharm>
|
2025-01-29 16:24:37
| 1
| 322
|
bqback
|
79,397,184
| 8,622,976
|
pip install from Git repository fails with "does not appear to be a Python project" when using branch with subdirectory
|
<h3>pip install Fails with "does not appear to be a Python project" When Installing from Git Repository with Subdirectory</h3>
<p>I'm attempting to install a Python package from a specific branch of a Git repository using the following command:</p>
<pre class="lang-bash prettyprint-override"><code>pip install git+ssh://git@bitbucket.org/project/repository.git@branch-name#egg=package-name&subdirectory=subdirectory
</code></pre>
<p>But I am encountering this error:</p>
<pre class="lang-bash prettyprint-override"><code>ERROR: package-name from git+ssh://git@bitbucket.org/project/repository.git@branch-name#egg=package-name does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
</code></pre>
<p>What I have tried:</p>
<p>Checked for <code>pyproject.toml</code>: The subdirectory contains the necessary Python files, and <code>pyproject.toml</code> is located in the root directory. To verify, I ran:</p>
<pre class="lang-bash prettyprint-override"><code>cd package_directory && pip install -e .
</code></pre>
<p>This worked successfully.</p>
<p>Question:
How can I correctly install this Python package from the Git repository when pip is not recognizing the pyproject.toml file?</p>
<p>Here's my pyproject.toml file (I'm using UV as the project manager):</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "name"
version = "1.0.0"
description = "description"
authors = [{ name = "name", email = "email@gmail.com" }]
requires-python = ">3.9,<4.0"
dependencies = [
"pydantic>=2.9.2,<3",
"boto3>=1.35.54,<2",
"structlog>=24.4.0,<25",
"pymongo>=4.10.1,<5",
"opentelemetry-sdk>=1.28.2,<2",
"httpx>=0.27.0,<0.28",
]
[dependency-groups]
dev = [
"mypy>=1.14.1",
"pre-commit>=4.0.1",
"pytest>=8.3.4",
"respx>=0.22.0",
"ruff>=0.9.2",
]
</code></pre>
|
<python><pip>
|
2025-01-29 15:06:43
| 1
| 2,103
|
Alon Barad
|
79,397,169
| 5,873,325
|
Hide sidebar buttons when switching tabs
|
<p>Let's say I have this small Streamlit application with a sidebar that contains 4 buttons (Button 1, ..., Button 4) and, in the main panel, two tabs (Tab 1 and Tab 2).</p>
<p>Now I want this:</p>
<ul>
<li><p>When I select Tab 1, <strong>ONLY</strong> display Button 1 and Button 2 and hide Button 3 and Button 4.</p>
</li>
<li><p>When I select Tab 2, <strong>ONLY</strong> display Button 3 and Button 4 and hide Button 1 and Button 2.</p>
</li>
</ul>
<p>I tried this with <strong>session_state</strong> to store the current tab, but it's not working. Any ideas?</p>
<pre><code>import streamlit as st
def display_buttons(tab):
if tab == "Tab 1":
st.sidebar.button("Button 1")
st.sidebar.button("Button 2")
elif tab == "Tab 2":
st.sidebar.button("Button 3")
st.sidebar.button("Button 4")
if 'active_tab' not in st.session_state:
st.session_state['active_tab'] = 'Tab 1'
tab1, tab2 = st.tabs(["Tab 1", "Tab 2"])
if tab1:
st.session_state['active_tab'] = 'Tab 1'
if tab2:
st.session_state['active_tab'] = 'Tab 2'
with tab1:
st.write("This is Tab 1")
if st.session_state['active_tab'] == 'Tab 1':
display_buttons("Tab 1")
with tab2:
st.write("This is Tab 2")
if st.session_state['active_tab'] == 'Tab 2':
display_buttons("Tab 2")
</code></pre>
|
<python><streamlit>
|
2025-01-29 15:02:13
| 0
| 640
|
Mejdi Dallel
|
79,397,040
| 857,741
|
Does using tempfile.mkstemp()[1] leak open file handles?
|
<p>I'm using <code>tempfile.mkstemp()[1]</code> frequently in some class. I do some operations on the created file and then delete it.</p>
<p>I'm not using a context manager (<code>with</code>) or anything like that, just the filename.</p>
<p><code>tempfile.mkstemp()</code> returns a tuple of an OS-level file descriptor and the filename. I'm only using the filename and never touch the descriptor.</p>
<p>I'm wondering whether that leaks an open file handle every time execution leaves the method, or is the descriptor somehow silently closed?</p>
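<p>For reference, a minimal sketch of the usage pattern described above (the surrounding function is hypothetical):</p>
<pre><code>import os
import tempfile

def process_data(data: bytes) -> None:
    # only the filename (index 1) is used; the file descriptor returned at index 0 is ignored
    filename = tempfile.mkstemp()[1]
    try:
        with open(filename, "wb") as f:
            f.write(data)
        # ... some operations on the created file ...
    finally:
        os.remove(filename)  # the file itself is deleted afterwards
</code></pre>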
|
<python><resource-leak>
|
2025-01-29 14:19:40
| 1
| 6,914
|
LetMeSOThat4U
|
79,396,950
| 19,369,310
|
Converting a pandas dataframe in wide format to long format
|
<p>I have a Pandas dataframe in wide format that looks like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Class_ID': {0: 432, 1: 493, 2: 32},
'f_proba_1': {0: 3, 1: 8, 2: 6},
'f_proba_2': {0: 4, 1: 9, 2: 9},
'f_proba_3': {0: 2, 1: 4, 2: 1},
'p_proba_1': {0: 3, 1: 82, 2: 36},
'p_proba_2': {0: 2, 1: 92, 2: 96},
'p_proba_3': {0: 8, 1: 41, 2: 18},
'Meeting_ID': {0: 27, 1: 23, 2: 21}})
df
Class_ID f_proba_1 f_proba_2 f_proba_3 p_proba_1 p_proba_2 p_proba_3 Meeting_ID
0 432 3 4 2 3 2 8 27
1 493 8 9 4 82 92 41 23
2 32 6 9 1 36 96 18 21
</code></pre>
<p>and I would like to convert to long format:</p>
<pre><code> Class_ID Student_ID f_proba p_proba Meeting_ID
0 432 1 3 3 27
1 432 2 4 2 27
2 432 3 2 8 27
3 493 1 8 82 23
4 493 2 9 92 23
5 493 3 4 41 23
6 32 1 6 36 21
7 32 2 9 96 21
8 32 3 1 18 21
</code></pre>
<p>So I have tried <code>.melt</code> in Pandas and here is my code</p>
<pre><code>out = pd.melt(df,
id_vars = ['Class_ID', 'Meeting_ID'],
value_vars = ['f_proba_1','f_proba_2','f_proba_3','p_proba_1','p_proba_2','p_proba_3'],
var_name = 'Student_ID',
value_name = ['f_proba', 'p_proba'])
out
</code></pre>
<p>but it didn't work.</p>
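<p>For illustration, a rough sketch of the intended reshaping using <code>pd.wide_to_long</code> (one possible approach, not verified against the exact expected column order):</p>
<pre><code>out = (
    pd.wide_to_long(
        df,
        stubnames=["f_proba", "p_proba"],  # the repeated measurement prefixes
        i=["Class_ID", "Meeting_ID"],      # identifier columns kept as-is
        j="Student_ID",                    # the numeric suffix becomes this column
        sep="_",
    )
    .reset_index()
)
print(out)
</code></pre>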
|
<python><pandas><dataframe><pandas-melt>
|
2025-01-29 13:50:27
| 1
| 449
|
Apook
|
79,396,916
| 6,573,902
|
Forbidden (403) CSRF verification failed. Request aborted. while logging in in Label Studio
|
<p>Once <code>label-studio</code> is updated to the version listed below and run from the command line:</p>
<pre><code>LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 HOST='https://...' nohup label-studio -b --data-dir /label-studio/data/ -db /label-studio/db.sqlite --log-level INFO --internal-host localhost -p 8080 --host https://... --agree-fix-sqlite > label-studio.out 2> label-studio.err &
</code></pre>
<p>On login attempt it results in an issue:</p>
<blockquote>
<p>Forbidden (403) CSRF verification failed. Request aborted.</p>
</blockquote>
<p>The setup is Python <code>3.11.11</code> with the following relevant modules:</p>
<pre><code>label-studio==1.15.0
label-studio-converter==0.0.59
label-studio-sdk==1.0.8
label-studio-tools==0.0.4
</code></pre>
|
<python><python-3.x><django><label-studio>
|
2025-01-29 13:35:26
| 1
| 17,233
|
sophros
|
79,396,714
| 3,873,510
|
How to add a format script in a Poetry project that runs the Black formatter over a specific directory?
|
<p>I would like to add a script to my Poetry project that runs the Black formatter over a specific directory. I wrote this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.scripts]
format-scripts = "black:main scripts/**/*.py"
</code></pre>
<p>But it doesn't work. I am getting this error when running <code>poetry run format-scripts</code>:</p>
<pre><code>Warning: 'format-scripts' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`.
The support to run uninstalled scripts will be removed in a future release.
Run `poetry install` to resolve and get rid of this message.
File "<string>", line 1
import sys; from importlib import import_module; sys.argv = ['format-scripts']; sys.exit(import_module('black').main scripts/**/*.py())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?
</code></pre>
<p>I tried removing the <code>scripts/**/*.py</code> path from the script, like so:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.scripts]
format-scripts = "black:main"
</code></pre>
<p>This made it work, but I'm still getting the warning:</p>
<pre><code>poetry run format-scripts scripts/**/*.py
Warning: 'format-scripts' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`.
The support to run uninstalled scripts will be removed in a future release.
Run `poetry install` to resolve and get rid of this message.
All done! β¨ π° β¨
4 files left unchanged.
</code></pre>
<p>However, not including the path in the script definition defeats the original purpose.</p>
<p>So, is it possible to add a format script in a Poetry project that runs the Black formatter over a specific directory?</p>
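<p>For illustration, one possible workaround (only a sketch, with a hypothetical module path) would be to point the entry point at a small wrapper callable, e.g. <code>format-scripts = "myproject.tasks:format_scripts"</code>, where <code>myproject/tasks.py</code> contains:</p>
<pre class="lang-py prettyprint-override"><code># myproject/tasks.py (hypothetical helper module)
import subprocess
import sys

def format_scripts() -> None:
    # run Black over the scripts/ directory; Black recurses into it and picks up the *.py files itself
    sys.exit(subprocess.call(["black", "scripts/"]))
</code></pre>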
|
<python><python-poetry>
|
2025-01-29 12:22:50
| 0
| 22,246
|
Paul Razvan Berg
|
79,396,708
| 1,046,007
|
How to best change the colour of the Python prompt?
|
<p>I have an alias for python3 and one for a Python calculator (<code>pm</code> for Python Math):</p>
<pre class="lang-bash prettyprint-override"><code>alias p3='python3 -q'
alias pm='python3 -qic"from math import *; import sys; sys.ps1=\"] \"; sys.ps2=\"β¦ \""'
</code></pre>
<p>Since Python 3.13, the CLI prompt is coloured (ANSI bold purple on my Mac, I don't know elsewhere). This applies also to my <code>pm</code> command, which means that the colour is not encoded as an ANSI sequence in <code>sys.ps1</code> and <code>sys.ps2</code> (<a href="https://docs.python.org/3/library/sys.html#sys.ps1" rel="nofollow noreferrer">docs</a>). A <code>help(sys)</code> doesn't tell me where the colour(s) could be stored, and anyways, the same colour seems to apply to error and other messages, while the error location has a separate colour (bold red).</p>
<p>The following works for only modifying the prompts (cyan <code>0;36</code> in this case) (see <a href="https://gist.github.com/JBlond/2fea43a3049b38287e5e9cefc87b2124" rel="nofollow noreferrer">here</a> for more ANSI codes, but <code>\e</code>β<code>\033</code>):</p>
<pre class="lang-bash prettyprint-override"><code>alias pm='python3 -qic"from math import *; import sys; sys.ps1=\"\033[0;36m] \033[0m\"; sys.ps2=\"\033[0;36m⦠\033[0m\""'
</code></pre>
<p>but I cannot do the same for my <code>p3</code> command without losing the capability of using <code>p3</code> for launching a script (because <code>-c</code> makes it exit on completion, while <code>-i</code> forces it to be interactive).</p>
<p>Is there a "cleaner" way of changing the colour of the prompts (maybe without also changing the colour of error messages)?</p>
|
<python><python-3.x><read-eval-print-loop><ansi-escape>
|
2025-01-29 12:19:05
| 1
| 12,702
|
Walter Tross
|
79,396,461
| 8,354,130
|
FAISS Db Vector Search
|
<p>I am combining two embeddings; both are numerical embeddings with shape (1, 1).</p>
<pre><code>combined_embeddings = np.hstack((price_embeddings, location_embeddings))
index = faiss.IndexFlatL2(combined_embeddings.shape[1])
index.add(np.array(combined_embeddings, dtype=np.float32))
</code></pre>
<p>Now if I give these two embeddings as input then it works perfectly</p>
<pre><code>input_combined_embedding = np.hstack((input_price,input_location))
distances, indices = index.search(np.array(input_combined_embedding, dtype=np.float32), k=k)
</code></pre>
<p>It compares by the Euclidean distance between the vectors.</p>
<p>But what I want to implement is to decide at run time (dynamically) which embeddings the search should be based on.
For example, in the above scenario, if I want to search only on the basis of input_price, FAISS doesn't let me do that because it needs an input vector of the same dimension as the index.
And I also cannot add padding, because that numerical value would affect the distance calculation.</p>
<p>Is there a way to achieve this in FAISS or in any other vector DB?</p>
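<p>For illustration, the simplest fallback would be separate per-field indexes chosen at query time (a sketch reusing the variable names from above):</p>
<pre><code>price_index = faiss.IndexFlatL2(price_embeddings.shape[1])
price_index.add(np.asarray(price_embeddings, dtype=np.float32))

location_index = faiss.IndexFlatL2(location_embeddings.shape[1])
location_index.add(np.asarray(location_embeddings, dtype=np.float32))

# at query time, pick the index that matches the field(s) the user actually supplied
distances, indices = price_index.search(np.asarray(input_price, dtype=np.float32), k=k)
</code></pre>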
|
<python><faiss><vector-search>
|
2025-01-29 10:43:44
| 1
| 374
|
Nikhil
|
79,396,399
| 7,076,616
|
Checking for variable when evaluating When inside .agg
|
<p>I have a function <code>check_fun</code> where the user inputs one <code>table</code> and additional arguments with column names. The function then evaluates some checks depending on which variables have been passed in the function call. This works fine, but each evaluation and append takes a lot of time. How can I rewrite it in one call?</p>
<p>Data</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import sum, col, when, count
import pandas as pd
# Sample data
data = [(1, 12, 1, 5), (6, 8, 1, 6), (7, 15, 1, 7), (4, 9, 1, 12), (10, 11, 1, 9)]
columns = ["a", "b", "c", "d"]
df = spark.createDataFrame(data, columns)
</code></pre>
<p>The original function</p>
<pre><code>def check_fun(df,
a_input: str = None,
b_input: str = None,
c_input: str = None,
d_input: str = None):
columns = ['check', 'description', 'count']
all_res = pd.DataFrame(columns = columns)
# check1
if a_input is not None and b_input is not None:
check1_count = df.filter(col(a_input) > col(b_input)).count()
check_1_res = pd.DataFrame([['check1', 'a > b', check1_count]],
columns = columns)
all_res = all_res.append(check_1_res)
# check2
if a_input is not None and c_input is not None:
check2_count = df.filter(col(a_input) > col(c_input)).count()
check_2_res = pd.DataFrame([['check2', 'a > c', check2_count]],
columns = columns)
all_res = all_res.append(check_2_res)
# check3
if a_input is not None and d_input is not None:
check3_count = df.filter(col(a_input) > col(d_input)).count()
check_3_res = pd.DataFrame([['check3', 'a > d', check3_count]],
columns=columns)
all_res = all_res.append(check_3_res)
# check4
if b_input is not None and c_input is not None:
check4_count = df.filter(col(a_input) < col(d_input)).count()
check_4_res = pd.DataFrame([['check4', 'a < d', check4_count]],
columns=columns)
all_res = all_res.append(check_4_res)
return(all_res)
</code></pre>
<p>How I tried solving it:</p>
<pre><code> a = "a"
b = "b"
c = "c"
d = "d"
df.agg(
when(a is not None and b is not None, sum(when(col(a) > col(b), 1).otherwise(0)).otherwise(None).alias('check1')
)).show()
</code></pre>
<p>But this returns an error...</p>
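<p>For illustration, a rough, untested sketch of the single-call direction: build only the applicable conditions first, then run one <code>agg</code> over them:</p>
<pre><code>from pyspark.sql.functions import col, sum, when

checks = []
if a_input is not None and b_input is not None:
    checks.append(("check1", "a > b", col(a_input) > col(b_input)))
if a_input is not None and c_input is not None:
    checks.append(("check2", "a > c", col(a_input) > col(c_input)))
# ... remaining checks appended the same way ...

# one pass over the data: every check becomes a conditional sum inside a single agg
agg_exprs = [sum(when(cond, 1).otherwise(0)).alias(name) for name, _, cond in checks]
row = df.agg(*agg_exprs).collect()[0]

all_res = pd.DataFrame(
    [[name, desc, row[name]] for name, desc, _ in checks],
    columns=["check", "description", "count"],
)
</code></pre>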
|
<python><pyspark>
|
2025-01-29 10:24:52
| 1
| 2,579
|
MLEN
|
79,396,219
| 865,169
|
Is there (a Python implementation of) an average-consistent interpolation method?
|
<p>I have a time series of values at low temporal resolution where the values represent an average of the respective surrounding intervals. Let's use hourly resolution for a simple example (in Python):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
s_resample_test = pd.Series([0, 1, 2, 1, 0, -1, -2, -1], index=pd.date_range("2025-01-01 00:30:00", periods=8, freq="1h"))
</code></pre>
<p>So the value 0 at 00:30:00 represents an average for the interval 00:00:00 to 01:00:00, etc.</p>
<p>I wish to up-sample this sequence to a higher temporal resolution in such a way that the average of the up-sampled values is equal to the corresponding value from the original time series.<br />
For example, when up-sampling the above time series, I want the average of the up-sampled values in the interval from 00:00:00 to 01:00:00 to be 0, the values in the interval from 01:00:00 to 02:00:00 to average to 1, etc.</p>
<p>Which interval do the points at the boundaries (00:00:00, 01:00:00, ...) belong to? I would consider splitting them between both intervals and half-weighting them in the average. To check solutions, I want my interpolated time series to obey (<code>avg_mtx</code> is the most precise way I can express my average constraint):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
init_mtx = np.eye(32)[0::4,:]
avg_mtx = np.empty((init_mtx.shape[0], init_mtx.shape[1] + 4))
for idx in range(init_mtx.shape[0]):
avg_mtx[idx, :] = np.convolve([.5, 1, 1, 1, .5], init_mtx[idx,:])
avg_mtx /= avg_mtx.sum(axis=1).reshape(-1,1)
avg_mtx = avg_mtx[1:-1,2:-5] # ignore boundary effects
s_resampled = desired_resampling_method(s_resample_test)
assert np.allclose(avg_mtx @ s_resampled, s_resample_test[1:-1])
</code></pre>
<p>For simplicity we can just say that the intervals are half-open: [00:00:00, 01:00:00), etc. This may facilitate a solution.</p>
<p>If I for example up-sample to quarterly resolution using:</p>
<pre class="lang-py prettyprint-override"><code>s_resample_test.resample("15min").interpolate(method="polynomial", order=2)
</code></pre>
<p>or:</p>
<pre class="lang-py prettyprint-override"><code>s_resample_test.resample("15min").interpolate(method="linear")
</code></pre>
<p>the result does not generally obey this average property. Neither does Fourier-based resampling using <code>scipy.signal.resample</code>.</p>
<p>Does an algorithm for the kind of interpolation I am looking for exist? And is there an implementation of it in Python?</p>
<p>A piecewise constant solution is probably the simplest to achieve, but I would like something smoother if possible. For example, I am speculating whether this can be formulated as a constrained smoothing spline approximation problem.</p>
<p>If not, I would also appreciate suggestions for a re-normalisation approach. Also, let us not necessarily worry about edge effects for now.</p>
|
<python><time-series><resampling>
|
2025-01-29 09:19:26
| 0
| 1,372
|
Thomas Arildsen
|
79,396,162
| 1,982,032
|
Smart way to append a dataframe to an existing OpenOffice .ods spreadsheet?
|
<p>We can write a dataframe into a new Excel file:</p>
<pre><code>x1 = np.random.randn(100, 2)
df1 = pd.DataFrame(x1)
with pd.ExcelWriter('/tmp/sample.xlsx') as writer:
df1.to_excel(writer, sheet_name='x1')
</code></pre>
<p><code>/tmp/sample.xlsx</code> is a newly created xlsx file containing <code>df1</code>.</p>
<p>Now we can append another dataframe to the existing <code>/tmp/sample.xlsx</code>, which already contains sheet <code>x1</code>.</p>
<pre><code>x2 = np.random.randn(100, 2)
df2 = pd.DataFrame(x2)
with pd.ExcelWriter('/tmp/sample.xlsx', engine='xlsxwriter',mode='a') as writer:
df2.to_excel(writer, sheet_name='x2')
</code></pre>
<p><code>engine</code> can be <code>xlsxwriter</code> or <code>openpyxl</code> for <code>xlsx</code> type file .</p>
<p>Replacing <code>xlsx</code> with <code>ods</code>, we can't append a dataframe to an existing OpenOffice <code>ods</code> file.</p>
<p>Creating a new <code>ods</code> file containing a dataframe works fine:</p>
<pre><code>x1 = np.random.randn(100, 2)
df1 = pd.DataFrame(x1)
with pd.ExcelWriter('/tmp/sample.ods') as writer:
df1.to_excel(writer, sheet_name='x1')
</code></pre>
<p>Now try to add a new sheet to '/tmp/sample.ods'.</p>
<pre><code>x2 = np.random.randn(100, 2)
df2 = pd.DataFrame(x2)
with pd.ExcelWriter('/tmp/sample.ods', engine='xlsxwriter',mode='a') as writer:
df2.to_excel(writer, sheet_name='x2')
</code></pre>
<p>It encounters an error:</p>
<pre><code>ValueError: Append mode is not supported with xlsxwriter!
</code></pre>
<p>Try another engine <code>odf</code>:</p>
<pre><code>x2 = np.random.randn(100, 2)
df2 = pd.DataFrame(x2)
with pd.ExcelWriter('/tmp/sample.ods', engine='odf',mode='a') as writer:
df2.to_excel(writer, sheet_name='x2')
</code></pre>
<p>Same error:</p>
<pre><code>ValueError: Append mode is not supported with odf!
</code></pre>
<p>Without the <code>mode='a'</code> argument, sheet <code>x1</code> is overwritten and only sheet <code>x2</code> is left.
Is there a smart way to append a dataframe to an existing OpenOffice <code>ods</code> file?</p>
|
<python><dataframe><ods>
|
2025-01-29 08:56:07
| 0
| 355
|
showkey
|
79,396,002
| 3,825,996
|
Inheriting from NamedTuple causes "Overloaded function signatures 1 and 2 overlap with incompatible return types" in mypy
|
<p>I think this is a bug in mypy but I'm not sure:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload, NamedTuple
class A(NamedTuple):
pass
class B(NamedTuple):
pass
@overload
def frobnicate(arg: A) -> A: ...
@overload
def frobnicate(arg: B) -> B: ...
def frobnicate(arg: A | B) -> A | B:
if isinstance(arg, A):
return A()
elif isinstance(arg, B):
return B()
else:
raise TypeError()
</code></pre>
<p>This code causes mypy to emit:</p>
<pre><code>error: Overloaded function signatures 1 and 2 overlap with incompatible return types [overload-overlap]
</code></pre>
<p>Play with it here: <a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=38e025ef049b6a1121cdd92c998e84a9" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&python=3.12&gist=38e025ef049b6a1121cdd92c998e84a9</a></p>
<p>This happens specifically when inheriting from <code>NamedTuple</code>, not from <code>tuple</code> or <code>dict</code> or <code>object</code>.</p>
<p>Should I file an issue on mypy or is that expected behavior due to something I did not consider?</p>
<p>EDIT:</p>
<p>In my actual code I have f(A)->B and f(B)->A, so I can't just use a TypeVar. I didn't want to make the minimal example here confusing.</p>
|
<python><overloading><python-typing><mypy><namedtuple>
|
2025-01-29 07:29:24
| 0
| 766
|
mqnc
|
79,395,963
| 9,194,121
|
Can I call a Python script from C#?
|
<p>I have a C# project which is a simple console application, which takes an input from the user and returns an output.</p>
<p>In this solution I have created a Python file named app.py along with a virtual environment named env and a requirements.txt file. This Python file takes an input and returns an output.</p>
<p>Using the following commands I can run the Python file:</p>
<blockquote>
<p>python -m venv env</p>
<p>.\env\Scripts\activate</p>
<p>pip list</p>
<p>python -m pip install -U pip</p>
<p>python -m pip install -U groq</p>
<p>python -m pip install -U python-dotenv</p>
</blockquote>
<p>Although it seems I didn't use the requirements.txt file (I'm new to Python)</p>
<p>Can I use this Python file with all its dependencies in C#? In a way that I can take input from the user using C# (console application), pass it to the Python file, and receive the output that the Python file gives in C# and display it to the user?</p>
<p>The version is .NET 9 and Python 3.12.</p>
<p>I saw a number of libraries like ironpython, but apparently they are for using Python code in C# or vice versa.</p>
<p><a href="https://i.sstatic.net/tr0OGsmy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tr0OGsmy.jpg" alt="pysharp" /></a></p>
|
<python><c#>
|
2025-01-29 07:14:10
| 1
| 909
|
arman
|
79,395,716
| 7,088,487
|
Solving Rotor Output Issues in Enigma Machine Emulation
|
<p>I'm implementing an Enigma machine in Python for a course assignment. For a specific case, Iβm unable to get the expected output result as explained in one of the test cases.</p>
<p>To investigate further, I emulated the input and settings using the online emulator at <a href="https://www.101computing.net/enigma-machine-emulator/" rel="nofollow noreferrer">Enigma Machine Emulator</a>, which produces the expected output. However, the output at each step, particularly the rotor output, is confusing for me.</p>
<p>According to the test case: "<strong>With rotors I II III, reflector B, ring settings 01 01 01, and initial positions Q E V, encoding an A produces an L.</strong>" Hereβs my understanding (please correct me if Iβm wrong):</p>
<ol>
<li>Rotor III is the rightmost rotor and is executed first.</li>
<li>The initial setting for Rotor III is "<strong>V</strong>", which is also the notch position. Since the ring setting is 01, it has no significant impact.</li>
<li>For the remaining Rotors II and I, the initial settings are also set at the notch points. Therefore, by entering the first character, all three rotors will rotate.</li>
<li>Since Rotor III's initial position is set to V (position 21), pressing the input key will advance it by one position. According to the table mapping "<strong>BDFHJLCPRTXVZNYEIWGAKMUSQO</strong>," this should result in "U," but the Enigma emulator outputs "Y."</li>
</ol>
<p>If I can solve this discrepancy, I will be able to resolve the other issues.</p>
<p>To reproduce this in the Enigma Machine Emulator, please enable encryption scripts and configure the rotors as I, II, III with initial positions Q E V and Reflector B.</p>
<p>When pressing A, the emulator's Rotor III outputs "Y", as shown in the screenshot.</p>
<p><a href="https://i.sstatic.net/kZ18i1tb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZ18i1tb.png" alt="Enigma Rotor III output for letter "A"" /></a></p>
|
<python><enigma2>
|
2025-01-29 04:24:35
| 0
| 700
|
Kamran Asim
|
79,395,514
| 10,262,805
|
Python Socket Communication Breaks When Removing print Statement
|
<p>I send a multi-line Python string over a socket:</p>
<pre><code> elif command=="help":
help_options='''
download <path> -> Download a File from Target Machine
upload <path> -> Upload a File to Targe
get <url> -> Download a File to Target Machine from Internet
start <program> -> Start Program on Target Machine
screenshot -> Take screenshot of the Target Screen
check -> Check for Administrator Privileges
'''
lines=help_options.splitlines()
for line in lines:
send_payload(line.encode("utf-8"))
send_payload("END".encode("utf-8"))
</code></pre>
<p>from the client this is how I am receiving:</p>
<pre><code> elif command=="help":
complete_message=""
while True:
line=receive_payload()
# ---- FIRST TRY FAILED
# if not line:
# time.sleep(0.01)
# continue
# --- SECOND TRY FAILED
# if not line:
# continue
# --- THIRD TRY FAILED ---
# time.sleep(0.01)
# ------- IT WORKS ONLY WITH THIS PRINT STATEMENT
# print(f"line {line}")
if line=="END":
break
if isinstance(line,bytes):
complete_message += line.decode('utf-8') + "\n"
else:
complete_message+=line+"\n"
print(complete_message)
</code></pre>
<p>By accident, I found that the text is received successfully when I add this debugging line:</p>
<pre><code> print(f"line {line}")
</code></pre>
<p><a href="https://i.sstatic.net/T10YBSJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T10YBSJj.png" alt="enter image description here" /></a></p>
<p>Does anyone know why the loop only works when printing the received data? Is there something going on with the socket buffer or the way the data is being handled in chunks?</p>
<p>Any insights or suggestions would be greatly appreciated!</p>
<p>The <code>receive_payload</code> and <code>send_payload</code> functions work successfully on their own; both are shown below.</p>
<p>This is <code>send_payload</code>:</p>
<pre><code>def send_payload(data):
if isinstance(data, bytes):
# If data is in binary form (like a file), encode it in base64
data = base64.b64encode(data).decode('utf-8')
padding = len(data) % 4
if padding != 0:
data += "=" * (4 - padding) # Manually add padding if necessary
# By converting the entire data object to a JSON string, the function ensures that all data is sent as a single, self-contained message.
# This reduces the risk of partial or malformed messages due to network issues or incorrect handling of the raw data.
json_data = json.dumps({"data": data}) # Package the data in a dictionary and then serialize it
sock.send(json_data.encode("utf-8"))
print("data is sent from target machine")
</code></pre>
<p>this is <code>receive_payload</code>:</p>
<pre><code>def receive_payload():
json_data = ""
while True:
try:
chunk = target.recv(1024).decode("utf-8")
print(f"Chunk received: {chunk}")
if not chunk:
break
json_data += chunk # Collect chunks
try:
data = json.loads(json_data) # Load JSON data
file_data = data.get("data")
if file_data:
# Ensure correct padding for base64 data
padding = len(file_data) % 4
if padding != 0:
print(f"Adding {4 - padding} padding to base64 data.")
file_data += "=" * (4 - padding)
# Decode the base64 data
file_data = base64.b64decode(file_data)
return file_data # Return decoded data
except base64.binascii.Error as e:
print(f"Error during base64 decoding: {e}")
pass
return None # If it's not base64-encoded, return None
except json.JSONDecodeError:
# Handle incomplete JSON
print("Incomplete JSON received. Waiting for more data.")
continue
</code></pre>
|
<python><sockets><printing><python-sockets>
|
2025-01-29 00:47:22
| 2
| 50,924
|
Yilmaz
|
79,395,477
| 4,577,688
|
How to save a Dataset in multiple shards using `tf.data.Dataset.save`
|
<p>How can I save a <code>tf.data.Dataset</code> in multiple shards using <code>tf.data.Dataset.save()</code>? I am reading in my dataset from CSV using <code>tf.data.experimental.make_csv_dataset</code>.</p>
<p>The <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#save" rel="nofollow noreferrer">TF docs here</a> are not very helpful. There is a <code>shard_func</code> argument, but the examples given aren't helpful and it's not clear how to map an element to an <code>int</code> in a deterministic way. Using random <code>int</code>s doesn't seem to work either.</p>
<p>The solution in a similar question <a href="https://stackoverflow.com/questions/77067982/how-to-use-the-shard-func-in-tensorflows-tf-data-dataset-save">here</a> generates an error for me</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for %: 'collections.OrderedDict' and 'int'</p>
</blockquote>
<p><strong>Single Shard</strong> (works)</p>
<p>This code successfully saves to a single shard:</p>
<pre><code>import pandas as pd
import numpy as np
import tensorflow as tf
# gen data
n=10000
pd.DataFrame(
{'label': np.random.randint(low=0, high=2, size=n),
'f1': np.random.random(n),
'f2': np.random.random(n),
'f3': np.random.random(n),
'c1': np.random.randint(n),
'c2': np.random.randint(n)}
).to_csv('tmp.csv')
# load data into a tf.data.Dataset
data_ts = tf.data.experimental.make_csv_dataset(
'tmp.csv', 1, label_name='label', num_epochs=1)
data_ts.save('tmp.data') # single shard, works!
</code></pre>
<p><strong>Multiple shards using <code>randint</code></strong> (saves single shard)</p>
<p>Trying to save to multiple shards using a random number still only saves to a single shard, albeit with a random int in the file name.</p>
<pre><code># Try sharding, using random numbers.
def random_shard_function(features, label):
return np.int64(np.random.randint(10))
data_ts.save('tmp2.data', shard_func=random_shard_function)
</code></pre>
<p><a href="https://i.sstatic.net/9QlzVVHKm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QlzVVHKm.png" alt="image of filesystem" /></a></p>
<p><strong>Modulo shard</strong> (error)</p>
<p>Trying the solution from <a href="https://stackoverflow.com/questions/77067982/how-to-use-the-shard-func-in-tensorflows-tf-data-dataset-save">this question</a>.</p>
<pre><code>def modulo_shard_function(features, label):
return x % 10
data_ts.save('tmp2.data', shard_func=modulo_shard_function)
</code></pre>
<blockquote>
<p>TypeError: unsupported operand type(s) for &: 'collections.OrderedDict' and 'int'</p>
</blockquote>
<p><strong>Debugging</strong> - no idea how <code>shard_func</code> works</p>
<p>If I print out the inputs, it seems that the shard func is only run once, and the tensors are <code>SymbolicTensors</code></p>
<pre><code>def debug_shard_function(features, label):
for val in features.items():
print(f'{val=}')
print(f'{label=}')
print(f'{type(val[1])}')
return np.int64(10)
data_ts.save('tmp2.data', shard_func=debug_shard_function)
</code></pre>
<p>Output - still saves to a single shard:</p>
<pre><code>val=('', <tf.Tensor 'args_0:0' shape=(None,) dtype=int32>)
val=('f1', <tf.Tensor 'args_3:0' shape=(None,) dtype=float32>)
val=('f2', <tf.Tensor 'args_4:0' shape=(None,) dtype=float32>)
val=('f3', <tf.Tensor 'args_5:0' shape=(None,) dtype=float32>)
val=('c1', <tf.Tensor 'args_1:0' shape=(None,) dtype=int32>)
val=('c2', <tf.Tensor 'args_2:0' shape=(None,) dtype=int32>)
label=<tf.Tensor 'args_6:0' shape=(None,) dtype=int32>
<class 'tensorflow.python.framework.ops.SymbolicTensor'>
</code></pre>
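<p>Given the symbolic tensors above, <code>shard_func</code> seems to be traced into a graph, so the NumPy random value is baked in as a constant. An untested sketch of a graph-friendly variant:</p>
<pre><code>import tensorflow as tf

def tf_random_shard_function(features, label):
    # evaluated per element inside the traced graph, unlike np.random.randint
    return tf.random.uniform(shape=[], minval=0, maxval=10, dtype=tf.int64)

data_ts.save('tmp3.data', shard_func=tf_random_shard_function)
</code></pre>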
|
<python><tensorflow><machine-learning>
|
2025-01-29 00:08:54
| 1
| 3,840
|
dule arnaux
|
79,395,319
| 4,907,639
|
'NoneType' object has no attribute 'encode' when loading tokenizer
|
<p>Error occurs when trying to load Pegasus model for text summarization</p>
<pre><code>from transformers import pipeline, set_seed
pipe = pipeline("summarization", model="google/pegasus-cnn_dailymail")
</code></pre>
<p>Error: <code>AttributeError: 'NoneType' object has no attribute 'encode' </code></p>
<pre><code> 1588 logger.info("Converting from Tiktoken")
1589 return TikTokenConverter(
1590 vocab_file=transformer_tokenizer.vocab_file,
1591 additional_special_tokens=transformer_tokenizer.additional_special_tokens,
-> 1592 ).converted()
1593 except Exception:
</code></pre>
<p>I've installed <code>tiktoken</code> and restarted the kernel, but the same error occurs</p>
|
<python><huggingface-transformers><summarization>
|
2025-01-28 22:29:49
| 1
| 2,109
|
coolhand
|
79,395,296
| 1,738,399
|
How to select multiple columns to produce expressions using Polars LazyFrame
|
<p>After building up a computation with polars.LazyFrame, I have pairs of columns that I want to multiply together. The exact names can vary depending on inputs, but the pattern is consistent. For example, given</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
shape: (5, 4)
βββββββββββ¬βββββββββββ¬ββββββββββββ¬βββββββββββββ
β red_low β red_high β green_low β green_high β
β --- β --- β --- β --- β
β i64 β i64 β i64 β i64 β
βββββββββββͺβββββββββββͺββββββββββββͺβββββββββββββ‘
β 1 β 10 β 2 β 20 β
β 2 β 20 β 3 β 30 β
β 3 β 30 β 4 β 40 β
β 4 β 40 β 5 β 50 β
β 5 β 50 β 6 β 60 β
βββββββββββ΄βββββββββββ΄ββββββββββββ΄βββββββββββββ
""").lazy()
</code></pre>
<p>I want to, say, add each of the sets of <code>_low</code> + <code>_high</code> columns, giving me a result like</p>
<pre><code>βββββββββββ¬βββββββββββ¬ββββββββββββ¬βββββββββββββ¬ββββββββββ¬ββββββββββββ
β red_low β red_high β green_low β green_high β red_sum β green_sum β
β --- β --- β --- β --- β --- β --- β
β i64 β i64 β i64 β i64 β i64 β i64 β
βββββββββββͺβββββββββββͺββββββββββββͺβββββββββββββͺββββββββββͺββββββββββββ‘
β 1 β 10 β 2 β 20 β 11 β 22 β
β 2 β 20 β 3 β 30 β 22 β 33 β
β 3 β 30 β 4 β 40 β 33 β 44 β
β 4 β 40 β 5 β 50 β 44 β 55 β
β 5 β 50 β 6 β 60 β 55 β 66 β
βββββββββββ΄βββββββββββ΄ββββββββββββ΄βββββββββββββ΄ββββββββββ΄ββββββββββββ
</code></pre>
<p>I see a variety of ways of using <code>pl.col()</code> or <code>polars.selectors</code> to select columns in expressions, but I can't find a way to do an operation like this on sets of columns.</p>
<p>I can, of course, pull columns out with <code>df.columns</code>, but newer Polars versions rightly warn us</p>
<pre><code>PerformanceWarning: Determining the column names of a LazyFrame requires resolving its schema, which is a potentially expensive operation.
</code></pre>
<p>I've managed to eliminate most of the operations that scan column names with selectors, but not this one.</p>
|
<python><python-polars>
|
2025-01-28 22:13:46
| 1
| 2,234
|
Jay Miller
|
79,395,117
| 5,312,606
|
Mapping cast in python
|
<p>What is a valid type of <code>T</code> in the following function?</p>
<pre class="lang-py prettyprint-override"><code>def map_cast(T, seq: Sequence) -> Sequence[T]:
"""Cast a sequence of elements to a sequence of elements of given type."""
return [cast(T, x) for x in seq]
</code></pre>
<p>Neither <code>T: typing.TypeVar</code> nor <code>T: type</code> works (rejected by <code>mypy</code> 1.14.1).</p>
<hr />
<p>To avoid an xy-problem and why I want this function.</p>
<p>I want to explicitly change the type of elements in a Sequence. If possible, I want to still have type-checking for the "sequence" character of seq. Yes, I can do <code>cast(Sequence[SomeType], seq)</code> as suggested in the comments but that would also accept if seq is not actually a Sequence, e.g. <code>cast(Sequence[SomeType], 5)</code> would work and I don't want that. If possible I want to avoid a copy, which my function currently does.</p>
<p>The problem of how to cleanly cast the elements of a sequence, preferably without a copy, is not explained in the other question.</p>
|
<python><python-typing>
|
2025-01-28 20:33:47
| 1
| 1,897
|
mcocdawc
|
79,395,026
| 11,092,636
|
Different SSIM Results for 3D Microscopy Slices Using torchmetrics and skimage: Which One Should I Use?
|
<p>I am working with 3D microscopy data (e.g., stacks of 2D images) and trying to compute the Structural Similarity Index (SSIM) between two 3D images. The goal is to evaluate the similarity between an input stack and a ground truth stack. They both have the same dimensions.</p>
<p>Here is a MRE with random tensors:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import numpy as np
from skimage.metrics import structural_similarity
from torchmetrics.image import StructuralSimilarityIndexMeasure
# Simulating 10 256x256 numpy images randint from 0 to 2**16
input_array = np.random.randint(0, 2**16, (10, 256, 256), dtype=np.uint16)
output_array = np.random.randint(0, 2**16, (10, 256, 256), dtype=np.uint16)
input_tensor = torch.tensor(input_array, dtype=torch.float32)
output_tensor = torch.tensor(output_array, dtype=torch.float32)
# Computing SSIM
data_range = max(input_tensor.max(), output_tensor.max()) - min(input_tensor.min(), output_tensor.min())
ssim_metric = StructuralSimilarityIndexMeasure(data_range=data_range)
ssim_score = ssim_metric(
input_tensor.unsqueeze(0), # Add batch dimension
output_tensor.unsqueeze(0) # Add batch dimension
)
print(f"torchmetrics - SSIM score: {ssim_score}")
all_slices_ssim = []
for i in range(input_tensor.shape[0]):
slice_ssim = ssim_metric(
input_tensor[i].unsqueeze(0).unsqueeze(0), # Add channel and batch dimensions
output_tensor[i].unsqueeze(0).unsqueeze(0) # Add channel and batch dimensions
)
all_slices_ssim.append(float(slice_ssim))
mean_ssim = sum(all_slices_ssim) / len(all_slices_ssim)
print(f"torchmetrics - Mean SSIM score (slice-wise): {mean_ssim}")
ssim_3d_scikit = structural_similarity(
input_array,
output_array,
data_range=data_range.item(),
channel_axis=None
)
print(f"skimage - 3D SSIM score: {ssim_3d_scikit}")
ssim_3d_scikit_slices: list[float] = []
for i in range(input_tensor.shape[0]):
slice_ssim = structural_similarity(
input_array[i],
output_array[i],
data_range=data_range.item(),
channel_axis=None
)
ssim_3d_scikit_slices.append(slice_ssim)
print(f"skimage - Mean SSIM score (slice-wise): {sum(ssim_3d_scikit_slices) / len(ssim_3d_scikit_slices)}")
</code></pre>
<p>When I run this code, I get different results for the SSIM scores:</p>
<pre><code>torchmetrics - SSIM score: 0.00599062442779541
torchmetrics - Mean SSIM score (slice-wise): 0.005990624928381294
skimage - 3D SSIM score: 0.006143948174795598
skimage - Mean SSIM score (slice-wise): 0.005819291556411987
</code></pre>
<p>I'm not really sure which one I should use (I feel like the slice-wise mean is a no-go).</p>
|
<python><scikit-learn><torch><scikit-image><ssim>
|
2025-01-28 19:56:52
| 0
| 720
|
FluidMechanics Potential Flows
|
79,395,012
| 1,020,908
|
For every identifier select only rows with largest order column
|
<p>Consider the following table <code>source</code></p>
<pre class="lang-none prettyprint-override"><code>id: LONG
name: STRING
ornum: LONG
</code></pre>
<p>The goal is to create a new table <code>target</code> which, for each distinct id in <code>source</code>, selects the row with the largest <code>ornum</code>.</p>
<p>The following works fine for a reasonable number of distinct <code>id</code>s:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE OR REPLACE TABLE target AS SELECT DISTINCT ON(id) * FROM test ORDER BY ornum DESC;
</code></pre>
<p>Is there a way to handle a large number (e.g. 1B) of distinct values with limited memory (say 4 GB)?</p>
|
<python><duckdb>
|
2025-01-28 19:51:17
| 1
| 848
|
pseudo
|
79,394,869
| 10,349,888
|
tkinter events - shift modifier with mouse wheel?
|
<p>Is it possible to bind <code>Shift</code> and the mouse wheel? I would have thought so, but I'm doing something wrong.</p>
<p>For example, the following code binds <code>Shift</code>, <code>Control</code>, <code>Alt</code> and no-modifier to the left mouse button (<code>Button-1</code>) and the scroll wheel turning up (<code>Button-4</code>).</p>
<pre><code>import tkinter as tk
root = tk.Tk()
label = tk.Label(root)
label.grid(row=0, column=0, sticky='nsew')
label.config(text='Mouse Wheel with Modifiers')
def show_event(event, desc=''):
label.config(text=desc + ' ' + str(event))
print(desc + ' ' + str(event))
label.bind('<Shift-Button-1>', lambda event, desc='Shift+Click': show_event(event, desc))
label.bind('<Control-Button-1>', lambda event, desc='Control+Click': show_event(event, desc))
label.bind('<Alt-Button-1>', lambda event, desc='Alt+Click': show_event(event, desc))
label.bind('<Button-1>', lambda event, desc='[]+Click': show_event(event, desc))
label.bind('<Shift-Button-4>', lambda event, desc='Shift+Wheel': show_event(event, desc))
label.bind('<Control-Button-4>', lambda event, desc='Control+Wheel': show_event(event, desc))
label.bind('<Alt-Button-4>', lambda event, desc='Alt+Wheel': show_event(event, desc))
label.bind('<Button-4>', lambda event, desc='[]+Wheel': show_event(event, desc))
root.mainloop()
</code></pre>
<p>If I run through the actions in order given I get</p>
<pre><code>Shift+Click <ButtonPress event state=Shift|Mod2|0x2000 num=1 x=107 y=9>
Control+Click <ButtonPress event state=Control|Mod2|0x2000 num=1 x=107 y=9>
Alt+Click <ButtonPress event state=Mod1|Mod2|0x2000 num=1 x=107 y=9>
[]+Click <ButtonPress event state=Mod2|0x2000 num=1 x=107 y=9>
### []+Wheel <ButtonPress event state=Mod2|0x2000 num=4 x=107 y=9>
Control+Wheel <ButtonPress event state=Control|Mod2|0x2000 num=4 x=107 y=9>
Alt+Wheel <ButtonPress event state=Mod1|Mod2|0x2000 num=4 x=107 y=9>
[]+Wheel <ButtonPress event state=Mod2|0x2000 num=4 x=107 y=9>
</code></pre>
<p>where the line prefixed with <code>###</code> was <code>Shift</code> held down with the wheel turned up. I would have expected to see <code>Shift+Wheel</code> here.</p>
<p>This suggests that I'm doing something wrong with the <code>'<Shift-Button-4>'</code> binding, but I can't see it. It's clearly possible to bind a mouse event to <code>Shift</code>, and also to bind other modifiers to the mouse wheel.</p>
<p><strong>Update</strong></p>
<p>Testing on a different machine (ubuntu 24.04, python 3.12, tkinter 8.6) shows the correct result (that is, <code><Shift-Button-4></code>) is received. So this does work on Ubuntu, at least in some configurations. Python 3.13 on the problem machine has the original issue.</p>
<p>Using <code>xev</code> on the problem machine shows that <code>Scroll</code> and <code>Control+Scroll</code> are coming through as <code>ButtonPress event</code>s with <code>state 0x2010, button5</code> and <code>state 0x2014, button 5</code> respectively. (I was scrolling the other way while testing, sorry). <strong>However</strong>, <code>Shift+Scroll</code> was coming through as <code>state 0x2011, button 7</code>.</p>
<p>Some more information about the "problem machine": it is ubuntu 20.04, running as a VM (VirtualBox 7.0.6 with guest additions) on a Mac OS host (11.7.1). This starts me wondering if <code>Shift+Scroll</code> (which on a mac would be <code><Shift-MouseWheel></code>) is getting garbled in the translation.</p>
<p>Finally, there is a (9-year-old, still open) <a href="https://www.virtualbox.org/ticket/15269" rel="nofollow noreferrer">report</a> that horizontal scrolling doesn't work on a linux guest/mac os host with virtualbox.</p>
<p>I cannot bind to <code><Button-6></code> or <code><Button-7></code>. I can bind to <code><MouseWheel></code> and <code><Shift-MouseWheel></code>, but these don't catch anything.</p>
|
<python><tkinter><event-handling>
|
2025-01-28 18:33:46
| 0
| 698
|
wtw
|
79,394,835
| 3,906,786
|
Changing bar color in chart based on condition
|
<p>Currently I'm trying to visualize a simple dataframe with Streamlit, but I'm failing to change the bar colors based on a condition, i.e. whether a specific value is True or False.</p>
<p>The dataframe <code>df_to_display</code>:</p>
<pre><code> PRODUKT CURRENT_DELAY OVERDUE
6 C 546 False
5 A 76 False
17 X 27 True
12 T 21 True
11 S 3 True
3 M 3 True
14 V 3 False
7 R 3 True
</code></pre>
<p>The color of the bar should stay green if OVERDUE is False and should be red if OVERDUE is True. There's also a column named MAX_DELAY in the original dataframe which can be used in this frame as well, if needed.</p>
<p>Currently my code to visualize the bar chart is this line:</p>
<pre class="lang-py prettyprint-override"><code>st.altair_chart(
alt.Chart(df_to_display)
.mark_bar()
.encode(
x=alt.X("CURRENT_DELAY", axis=alt.Axis(title="Current Delay (days)")),
y=alt.Y(
"PRODUKT",
axis=alt.Axis(title="Product"),
sort=alt.EncodingSortField(field="CURRENT_DELAY", order="descending"),
),
),
use_container_width=True,
)
</code></pre>
<p>I tried several tips I found on the internet, but none of them helped. I'm not able to figure out how to color the bars differently based on whether they're overdue or not.</p>
<p>Can anyone give me a hint on this?</p>
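<p>For reference, a sketch of the kind of conditional colour encoding in question, using <code>alt.condition</code> (untested against this exact dataframe):</p>
<pre class="lang-py prettyprint-override"><code>st.altair_chart(
    alt.Chart(df_to_display)
    .mark_bar()
    .encode(
        x=alt.X("CURRENT_DELAY", axis=alt.Axis(title="Current Delay (days)")),
        y=alt.Y(
            "PRODUKT",
            axis=alt.Axis(title="Product"),
            sort=alt.EncodingSortField(field="CURRENT_DELAY", order="descending"),
        ),
        color=alt.condition(
            alt.datum.OVERDUE == True,  # Altair expression, evaluated per row
            alt.value("red"),           # colour when overdue
            alt.value("green"),         # colour when on time
        ),
    ),
    use_container_width=True,
)
</code></pre>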
|
<python><python-3.x><streamlit><altair>
|
2025-01-28 18:18:13
| 1
| 983
|
brillenheini
|
79,394,822
| 4,271,491
|
Can you write a pandas dataframe into an Avro file without specifying the schema?
|
<p>I read an Excel file into a dataframe using pandas and now I have a requirement to save it as an Avro file for further processing by a different step in a script. <br>
As an input to each step I specify a number of parameters in a JSON, but I don't have a real schema, only an array of column names. <br>
Is there a possibility to write the dataframe to an Avro file without specifying the schema, or by just using the array of column names?</p>
|
<python><pandas><dataframe><avro>
|
2025-01-28 18:12:44
| 0
| 528
|
Aleksander Lipka
|
79,394,750
| 15,307,950
|
how to enable seperate action for double click and drag and drop for python files
|
<p>I have a simple Python script and I want to enable dragging and dropping files onto it in Windows 11. This works, but only if I set the Python launcher as the default application for Python file types.
However, I want to launch my editor when double-clicking.
My idea is to register a batch file as the default application for Python files: it launches my editor if no parameters are passed, and launches the script with the Python launcher if parameters are passed.
I cannot get it to fully work: when opening the editor, a command window is created that doesn't close automatically.</p>
<p><code>test drag.py</code>:</p>
<pre><code>import sys
import os
if __name__ == "__main__":
test_file_name = os.path.join(os.path.split(sys.argv[0])[0], "test.txt")
if len(sys.argv) > 1:
with open(test_file_name, 'w', newline='\n') as f:
f.write("file names:")
f.write("\n")
for file_path in sys.argv[1:]:
f.write(file_path)
f.write("\n")
else:
with open(test_file_name, 'w', newline='\n') as f:
f.write("no file names")
f.write("\n")
</code></pre>
<p><code>C:\drag\python_drag.cmd</code>:</p>
<pre><code>@echo off
if not "%~2"=="" (
:: Arguments detected, so files were dragged on it. Calling it with python.
START /B python.exe %*
) else (
:: No arguments detected, so double clicked. Opening it with editor (vscode)
START /B code %1
)
</code></pre>
<p>Open command key is updated via command line:</p>
<pre><code>reg add "HKEY_CLASSES_ROOT\Python.File\shell\open\command" /ve /t REG_EXPAND_SZ /d "\"C:\drag\python_drag.cmd\" \"%L\" %*" /f
</code></pre>
<p>Set drophandler if needed (if installing latest python version with py launcher it should be set). <code>enable_python_drag_and_drop.reg</code>:</p>
<pre><code>Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
@="{60254CA5-953B-11CF-8C96-00AA00B8708C}"
</code></pre>
<p>When starting Python, a window briefly opens (you can only see the outline) and then it closes automatically.</p>
<p>When starting VSCODE it opens an additional window that doesn't close. How to fix it?</p>
<p>Edit:
I figured out how to at least minimize the window:</p>
<pre><code>START /w /min cmd /c "call code %1"
</code></pre>
<p>or:</p>
<pre><code>START /w /min cmd /c "call code.cmd %1"
</code></pre>
|
<python><visual-studio-code><batch-file>
|
2025-01-28 17:48:00
| 1
| 726
|
elechris
|
79,394,722
| 1,419,224
|
Alternatives to run a .exe file with python
|
<p>I'm attempting to run the SWAT+ model executable (swatplus-61.0.2-ifx-win_amd64-Rel.exe) from Python using the subprocess module. However, I encounter the following error when running the executable through Python:</p>
<pre><code>The procedure entry point for_sop_core_quiet could not be located in the dynamic link library 'swatplus-61.0.2-ifx-win_amd64-Rel.exe'.
</code></pre>
<p>Interestingly, the executable runs perfectly when invoked manually from the Command Prompt, but fails when trying to execute it using subprocess in Python.</p>
<pre><code>import os
import subprocess
exe_file = r"C:\Users\mcva\Desktop\swat_plus_gango - Copy\swatplus-61.0.2-ifx-win_amd64-Rel.exe"
# Attempt to run the executable using subprocess
try:
subprocess.run([exe_file], check=True)
print("SWAT+ ran successfully")
except subprocess.CalledProcessError as e:
print(f"Error while running SWAT+: {e}")
print(f"Exit Code: {e.returncode}")
print(f"Error Output: {e.stderr}")
</code></pre>
<p>Troubleshooting steps I've tried:</p>
<ul>
<li><strong>Manually running the executable:</strong> The executable runs perfectly from the Command Prompt, but fails when invoked from Python.</li>
<li><strong>Checking paths:</strong> I verified that the path to the executable is correct. The file exists at C:\Users\mcva\Desktop\swat_plus_gango - Copy\swatplus-61.0.2-ifx-win_amd64-Rel.exe and is accessible.</li>
<li><strong>Error code 3221225785:</strong> The error code 3221225785 (0xC0000139, "entry point not found") suggests a DLL/entry-point problem, but this doesn't appear when running manually.</li>
<li><strong>Dependencies and DLLs:</strong> I confirmed that the SWAT+ executable works in a manual run, so it should have all the necessary dependencies.</li>
<li><strong>Running the executable from Python:</strong> When trying to run the executable via Python's <code>subprocess.run()</code> function, I encounter the error mentioned above. The output is not informative: it just mentions that the entry point for <code>for_sop_core_quiet</code> is missing.</li>
</ul>
<p>What I need help with:</p>
<ul>
<li>Why does SWAT+ run manually but not when invoked via Python?</li>
<li>How can I resolve the <code>for_sop_core_quiet</code> entry point issue when running via Python?</li>
<li>Are there any environment setup steps or configuration tweaks needed for running SWAT+ from Python?</li>
<li>How can I ensure that all necessary DLLs or dependencies are correctly loaded when running from Python?</li>
</ul>
<p>Any insights or suggestions would be greatly appreciated!</p>
|
<python>
|
2025-01-28 17:40:18
| 0
| 1,265
|
mcva
|
79,394,583
| 1,942,868
|
How to get the form-data from bytes objects
|
<p>I have this API</p>
<pre><code>from fastapi import FastAPI,UploadFile, File,Depends,status,Body,Request, Form
@app.post('/myapi')
async def getmyapi(body=Body(...)):
print(body)
</code></pre>
<p>This prints raw bytes like this:</p>
<pre><code>b'------WebKitFormBoundaryAVwbJDu8StWnEwnW\r\nContent-Disposition: form-data; name="metadata"\r\n\r\n{"sections":[{"sec.....
</code></pre>
<p>I can see the <code>form-data</code> and its structure.</p>
<p>I would like to pick up the metadata from the form-data.</p>
<p>I tried this, but in vain:</p>
<pre><code>print(body.name)
->AttributeError: 'bytes' object has no attribute 'name'
</code></pre>
<p>How can I make it work?</p>
<hr />
<p>I changed it like this and solved it, thanks to @Chris's comment:</p>
<pre><code>#async def getmyapi(body=Body(...)):
async def getmyapi(metadata: str = Form(...), file: List[UploadFile] = File(...)):
print(json.loads(metadata)) #it shows the dict!
</code></pre>
|
<python><fastapi>
|
2025-01-28 16:46:10
| 0
| 12,599
|
whitebear
|
79,394,556
| 13,806,869
|
How to run a SQL merge statement within a Python loop?
|
<p>I have some SQL code:</p>
<pre><code>CREATE TABLE TABLE_1 AS
(
SELECT ACCOUNT, 0 AS REDEEMER
FROM ACCOUNTS
)
WITH DATA
IN MY_SCHEMA
COMPRESS YES
;
MERGE INTO TABLE_1 A
USING
(
SELECT ACCOUNT
FROM REDEEMERS
WHERE CATEGORY = 'x'
) B
ON A.ACCOUNT = B.ACCOUNT
WHEN MATCHED THEN UPDATE
SET A.REDEEMER = 1
;
</code></pre>
<p>Let's say table_1 looks like this when created, with redeemer set to 0 for all rows:</p>
<pre><code>account | redeemer
------------------
a | 0
b | 0
c | 0
</code></pre>
<p>If accounts a and c have redeemed category x, the merge should make table_1 look like this:</p>
<pre><code>account | redeemer
------------------
a | 1
b | 0
c | 1
</code></pre>
<p>This works as intended. I now want to do this for multiple categories in turn by embedding the SQL into a for loop in Python using sqlalchemy, like this:</p>
<pre><code>connection.execute(
f'''
CREATE TABLE TABLE_1 AS
(
SELECT ACCOUNT, 0 AS REDEEMER
FROM ACCOUNTS
)
WITH DATA
IN MY_SCHEMA
COMPRESS YES
;''')
categories = [
"x",
"y",
"z"
]
for i in categories:
#Set redeemer column to 1 for redeemers of the current category
connection.execute(
f'''
MERGE INTO TABLE_1 A
USING
(
SELECT ACCOUNT
FROM REDEEMERS
WHERE CATEGORY = '{i}'
) B
ON A.ACCOUNT = B.ACCOUNT
WHEN MATCHED THEN UPDATE
SET A.REDEEMER = 1
;''')
#Code to do some stuff with the data in table_1 goes here
#Set redeemer column back to 0 in preparation for the next loop
connection.execute(
f'''
UPDATE TABLE_1
SET REDEEMER = 0
;''')
</code></pre>
<p>However, this doesn't achieve the desired result. There's no error message, but the redeemer column does not seem to be getting updated by the merge. I've added a sum statement right after the merge which confirms redeemer is still set to 0 for all rows.</p>
<p>Database is DB2 if that helps shed any light on things.</p>
<p>Why does this code not work when run through Python?</p>
<p>I've also tried explicitly committing the merge statement like this, but the result is the same:</p>
<pre><code>#Set redeemer column to 1 for redeemers of the current category
with Session(connection) as session:
connection.execute(
f'''
MERGE INTO TABLE_1 A
USING
(
SELECT ACCOUNT
FROM REDEEMERS
WHERE CATEGORY = '{i}'
) B
ON A.ACCOUNT = B.ACCOUNT
WHEN MATCHED THEN UPDATE
SET A.REDEEMER = 1
;''')
session.commit()
</code></pre>
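<p>For reference, a sketch of running the statement inside an explicit transaction with a bound parameter (SQLAlchemy 1.4+ style; <code>engine</code> here is an assumed engine object, which is not shown above). Whether this addresses the problem is not certain:</p>
<pre><code>from sqlalchemy import text

with engine.begin() as connection:  # opens a transaction and commits it on success
    connection.execute(
        text("""
            MERGE INTO TABLE_1 A
            USING (SELECT ACCOUNT FROM REDEEMERS WHERE CATEGORY = :cat) B
            ON A.ACCOUNT = B.ACCOUNT
            WHEN MATCHED THEN UPDATE SET A.REDEEMER = 1
        """),
        {"cat": i},
    )
</code></pre>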
|
<python><merge><sqlalchemy>
|
2025-01-28 16:33:41
| 0
| 521
|
SRJCoding
|
79,394,290
| 10,767,161
|
How to optimize regex search across multiple patterns in Python?
|
<p>I have a Python script that uses multiple regular expressions to search through a file's content. Here's the relevant code snippet:</p>
<pre><code>PATTERNS = {
"email": re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),
"phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
"date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b")
}
def readfilecontent(filepath):
with open(filepath,'r', encoding='utf-8', errors='ignore') as file:
return file.read()
filecontent = readfilecontent("path/ToFile")
for key, pattern in PATTERNS.items():
matches = pattern.findall(filecontent)
if matches:
for match in matches:
print(match)
</code></pre>
<p><code>PATTERNS</code> is a dictionary where the key is the pattern's name, and the value is a precompiled regex object (created using re.compile).</p>
<p><code>filecontent</code> contains the file's content as a single string.
The script works correctly but takes several minutes to process each file due to the large size of the files and the number of patterns.</p>
<p>Is there a way to speed this up, either by restructuring the code or using a more efficient library or approach? For example, would combining patterns, parallel processing, or other methods improve performance?</p>
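<p>For concreteness, the "combining patterns" idea mentioned above could look roughly like this (a sketch reusing <code>PATTERNS</code> and <code>filecontent</code> from above, not benchmarked):</p>
<pre><code>import re

combined = re.compile(
    "|".join(f"(?P<{name}>{pattern.pattern})" for name, pattern in PATTERNS.items())
)

for match in combined.finditer(filecontent):
    # match.lastgroup tells us which named alternative matched
    print(match.lastgroup, match.group())
</code></pre>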
|
<python><regex>
|
2025-01-28 15:08:07
| 4
| 747
|
Metin Bulak
|
79,394,276
| 1,416,001
|
How to limit the memory usage of a Python process?
|
<p>I'm trying to limit the memory usage of a Python service running in Kubernetes. I'm currently testing with Python 3.10 running in WSL2. I want the service to be aware of limitations set by Kubernetes, so it can throw a MemoryError when it's trying to allocate too much memory and handle that error in the code. I'm trying to set RLIMIT_AS as described <a href="https://carlosbecker.com/posts/python-docker-limits/" rel="nofollow noreferrer">here</a>. Kubernetes has a limit of 500 MB for the process. But when I set the RLIMIT_AS to 500 MB, the service does not even start. I then wrote a very simple script and checked how low I can set the RLIMIT_AS.</p>
<p>Script:</p>
<pre><code>import resource
limit = 1000 * 1024 * 1024 # 1000 MB
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
print(f"Setting memory limit to {limit} bytes.")
bytearray(1 * 1024 * 1024) # Allocate 1 MB
print("Successfully allocated memory.")
</code></pre>
<p>Output:</p>
<pre><code>Setting memory limit to 1048576000 bytes.
Traceback (most recent call last):
File "/mnt/c/Users/xxx/foobar.py", line 6, in <module>
bytearray(1 * 1024 * 1024) # Allocate 1 MB
MemoryError
Process finished with exit code 1
</code></pre>
<p>I have to set the limit to 1048 MB for the script to be successful. "htop" in WSL2 is showing me a VIRT of 1047 MB for the Python script, so it seems RLIMIT_AS has to be greater than VIRT, which is already more than 1 GB for the most simple script.</p>
<pre><code>PID USER PRI NI VIRT RES SHR S CPU%β½ MEM% TIME+ Command
56232 xxx 20 0 1046M 81112 26156 S 0.0 0.2 0:00.10 /home/xxx/.virtualenvs/xxx/bin/python3 /mnt/c/Users/xxx/foobar.py
</code></pre>
<p>Why is VIRT so high? How can I limit the real memory usage of the process?</p>
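<p>For what it's worth, one alternative I have been considering is <code>RLIMIT_DATA</code>, which (as far as I understand) does not count the interpreter's large virtual mappings the way <code>RLIMIT_AS</code> does. I am not sure how well it maps onto what Kubernetes actually enforces, so treat this as a sketch:</p>
<pre class="lang-py prettyprint-override"><code>import resource

limit = 500 * 1024 * 1024  # 500 MB, matching the Kubernetes limit
resource.setrlimit(resource.RLIMIT_DATA, (limit, limit))
print(f"Set RLIMIT_DATA to {limit} bytes.")

data = bytearray(100 * 1024 * 1024)  # should still succeed well below the limit
print("Successfully allocated memory.")
</code></pre>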
|
<python><kubernetes><memory><windows-subsystem-for-linux><setrlimit>
|
2025-01-28 15:04:51
| 1
| 985
|
John
|
79,394,058
| 5,462,551
|
Test that module is NOT imported
|
<p>I need to test that specific imports of my library don't import along some other heavy modules (e.g. tensorflow - we'll use it in this question). Here's a detailed description:</p>
<p>The test should run some imports and check that TensorFlow is NOT imported. Preferably, it will check for each import on its own, so we can know which import brought tensorflow with it (in case the test fails).</p>
<p><strong>IMPORTANT:</strong> the test must also work when running with pytest-xdist. This means that we need to bypass python's module caching.</p>
<p>Here's a general template of what this test might look like:</p>
<pre class="lang-py prettyprint-override"><code>def test_not_importing_tensorflow():
# possible setup
import mylib.some_module
# maybe some more imports - again, it's better to isolate each imported module if possible
# check that tensorflow is not imported
    # possible cleanup
</code></pre>
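<p>For what it's worth, here is the kind of subprocess-based check I have in mind; running each import in a fresh interpreter sidesteps the module cache, so it should also behave the same under pytest-xdist (<code>mylib.some_module</code> as in the template above):</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import sys

def assert_does_not_import_tensorflow(import_stmt: str) -> None:
    # Run the single import in a fresh interpreter and inspect sys.modules there.
    code = (
        f"import sys; {import_stmt}; "
        "sys.exit(1 if 'tensorflow' in sys.modules else 0)"
    )
    result = subprocess.run([sys.executable, "-c", code])
    assert result.returncode == 0, f"{import_stmt!r} pulled in tensorflow"

def test_not_importing_tensorflow():
    assert_does_not_import_tensorflow("import mylib.some_module")
</code></pre>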
|
<python><pytest><python-import>
|
2025-01-28 13:51:40
| 2
| 4,161
|
noamgot
|
79,393,978
| 17,580,381
|
Unable to utilise Hadoop with PySpark
|
<p>Let me say first of all that I have no particular need for Hadoop. I'm just trying to understand what the problem is here.</p>
<p>Consider this MRE:</p>
<pre><code>from pyspark.sql import SparkSession
from os import environ as ENV
from pathlib import Path
HADOOP_DIR = Path("/Volumes/Spare/hadoop-3.4.0")
HADOOP_LIB = HADOOP_DIR / "lib/native"
def update_env(env: str, val: str) -> None:
try:
ENV[env] = ENV[env] + ":" + val
except KeyError:
ENV[env] = val
ENV["HADOOP_HOME"] = str(HADOOP_DIR)
update_env("LD_LIBRARY_PATH", str(HADOOP_LIB))
if __debug__:
print("Running assertions")
assert HADOOP_DIR.is_dir()
assert HADOOP_LIB.is_dir()
SparkSession.builder.appName("MySparkApp").getOrCreate()
</code></pre>
<p>When I run this program, I see the following warning:</p>
<blockquote>
<p>25/01/28 13:06:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable.</p>
</blockquote>
<p>I am running in debug mode and neither of the assertions fail.</p>
<p>My runtime platform is MacOS 15.2 (M2) with Python 3.13.1.</p>
<p>I have downloaded Hadoop 3.4.0 (hadoop-3.4.0-aarch64.tar.gz) and extracted its contents into /Volumes/Spare/hadoop-3.4.0</p>
<p>I have read multiple articles concerning this matter and from what I can tell, it's important to have the LD_LIBRARY_PATH and HADOOP_HOME environment variables set appropriately which I believe I have done.</p>
<p>And so my question is simply... Why is PySpark "Unable to load native-hadoop library"?</p>
|
<python><hadoop><pyspark>
|
2025-01-28 13:22:32
| 0
| 28,997
|
Ramrab
|
79,393,920
| 567,059
|
How to ensure hash of zip file created locally is identical to when created during CI/CD, using exactly the same contents
|
<p>How can I use <code>shutil.make_archive()</code> to make zip files that are absolutely identical, including the hash?</p>
<p>I am creating a zip file locally, and then through CI/CD, and I expect them to be identical, but the hashes differ. This causes CI/CD to think that the files are different and it attempts an update, which is undesirable as this would result in an update being made on every occasion, even when no change has been made.</p>
<p>The following precautions have been taken to ensure the zipped packages are identical, however I'm clearly missing something.</p>
<ul>
<li>The directory where packages will be installed is first removed, to ensure a clean packages directory.</li>
<li>Packages are installed by the root user.</li>
<li>All <code>__pycache__</code> directories are deleted.</li>
<li>The last accessed and modified times of each file are changed to ensure they match.</li>
</ul>
<p>Is there something more (or different) I can do to make sure the hashes of the zip files will always be the same, given the contents are exactly the same?</p>
<p>Please note that it is Terraform generating a hash of the zip file, using <code>filebase64sha256("packages.zip")</code>. I don't believe this makes a difference, but thought it worth mentioning anyway.</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
REQUIREMENTS_FILE="requirements.txt"
REQUIREMENTS_DIR="packages/"
REQUIREMENTS_ZIP="packages.zip"
rm -rf $REQUIREMENTS_DIR
pip install --upgrade --force-reinstall --no-cache-dir -r $REQUIREMENTS_FILE -t $REQUIREMENTS_DIR
find $REQUIREMENTS_DIR | grep -E "(__pycache__|\.pyc$)" | xargs rm -rf
find $REQUIREMENTS_DIR -exec touch -t 200010101010 {} +
# Call out to zip.py to create the archive.
python $(dirname "$0")/assets/zip.py $REQUIREMENTS_DIR $REQUIREMENTS_ZIP
</code></pre>
<h3>zip.py</h3>
<pre class="lang-py prettyprint-override"><code>import argparse
import logging
import os
import zipfile
###
# Create a zip archive.
# It is intended that this script will be executed by the 'create_archives.sh'
# script, as part of the 'py:install' CI/CD pipeline job.
###
# Configure logging
logging.basicConfig(level=logging.INFO, format='[%(levelname)s] %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# Configure args
parser = argparse.ArgumentParser(prog='zip.py',
add_help=False,
allow_abbrev=False,
description='Create a zip archive.')
parser.add_argument('src', help='Path to the source of the archive to create.')
parser.add_argument('dst', help='Path to the destination of the new archive.')
parser.add_argument('-p', dest='include_path', action='store_true',
help='Whether to include the path to `src` in the archive')
def zip(src: str, dst: str, include_path: bool=False) -> None:
'''Create a zip archive.
Args:
src (str): Path to the source of the archive to create.
dst (str): Path to the destination of the new archive.
include_path (bool=False): Whether to include the path to `src` in the
archive.
'''
with zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zf:
abs_src = os.path.abspath(src)
for dirname, _, files in os.walk(src):
for filename in files:
absname = os.path.abspath(os.path.join(dirname, filename))
arcname = absname if include_path else absname[len(abs_src) + 1:]
zf.write(absname, arcname)
if __name__ == '__main__':
# Parse args.
args = parser.parse_args()
# Create zip archive.
logger.info("Creating zip archive '%s' from source '%s'.",
args.dst, args.src)
zip(**vars(args))
</code></pre>
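<p>For reference, this is the kind of deterministic variant of <code>zip.py</code> I have been sketching. It sorts the walk order and pins each entry's timestamp and permissions, which are the remaining sources of byte-level drift I am aware of (the timestamp mirrors the <code>touch -t 200010101010</code> above; fixed 0644 permissions are an assumption):</p>
<pre class="lang-py prettyprint-override"><code>import os
import zipfile

def zip_deterministic(src: str, dst: str) -> None:
    '''Write entries in sorted order with fixed metadata so the archive bytes
    depend only on file names and contents.'''
    fixed_date = (2000, 10, 10, 10, 10, 0)
    abs_src = os.path.abspath(src)
    entries = []
    for dirname, _, files in os.walk(src):
        for filename in files:
            entries.append(os.path.abspath(os.path.join(dirname, filename)))
    with zipfile.ZipFile(dst, "w") as zf:
        for absname in sorted(entries):
            arcname = absname[len(abs_src) + 1:]
            info = zipfile.ZipInfo(arcname, date_time=fixed_date)
            info.compress_type = zipfile.ZIP_DEFLATED
            info.external_attr = 0o644 << 16  # fixed permissions
            with open(absname, "rb") as fh:
                zf.writestr(info, fh.read())
</code></pre>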
|
<python><hash><zip>
|
2025-01-28 12:58:56
| 0
| 12,277
|
David Gard
|
79,393,859
| 13,339,621
|
Get pathlib Path with symlink
|
<p>Let's say I open a python console inside <code>/home/me/symlink/dir</code>, which is actually a shortcut to <code>/home/me/path/to/the/dir</code>. In this console I execute the following code:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
path = Path.cwd()
print(path) # "/home/me/path/to/the/dir"
</code></pre>
<p>So <code>cwd()</code> actually resolves the symlinks and gets the absolute path automatically, and I am searching for a way to avoid this behavior.
If the path with the symlink is provided at Path object instantiation, it seems to work as I would like:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
path = Path("/home/me/symlink/dir")
print(path) # "/home/me/symlink/dir"
</code></pre>
<p>Is there a way to get the current working directory <strong>keeping</strong> the symlink, other than providing the path at object instantiation?</p>
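<p>For context, the closest thing I have found so far relies on the shell keeping the logical (symlink-preserving) path in the <code>PWD</code> environment variable, which obviously assumes the process was started from such a shell:</p>
<pre class="lang-py prettyprint-override"><code>import os
from pathlib import Path

# Fall back to the resolved cwd when PWD is not set.
logical_cwd = Path(os.environ.get("PWD", os.getcwd()))
print(logical_cwd)  # "/home/me/symlink/dir" when launched from that shell
</code></pre>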
|
<python><pathlib>
|
2025-01-28 12:41:52
| 1
| 1,549
|
matleg
|
79,393,845
| 10,108,726
|
How to limit admin django session and renew time after user action
|
<p>I am improving my Django admin session security and I want to do two things:</p>
<ol>
<li>Limit my admin session time</li>
<li>Renew my session time after every user action</li>
</ol>
<p>What is the best way to do this?</p>
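<p>For reference, the direction I have been looking at is the session settings below (values are illustrative; I am not sure this is the recommended approach, hence the question):</p>
<pre class="lang-py prettyprint-override"><code># settings.py
SESSION_COOKIE_AGE = 15 * 60            # expire sessions after 15 minutes
SESSION_SAVE_EVERY_REQUEST = True       # re-save the session on every request, extending its expiry
SESSION_EXPIRE_AT_BROWSER_CLOSE = True  # drop the session when the browser closes
</code></pre>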
|
<python><django>
|
2025-01-28 12:36:57
| 1
| 654
|
Germano
|
79,393,833
| 1,554,114
|
QDialog prevent closing when pressing return QDoubleSpinBox in focus
|
<p>I am trying to build an application based on QDialog.
It contains a 'Close' button, which always seems to go back into focus when I type into a QSpinBox.</p>
<p><code>QSpinBox.keyboardTracking = false</code> so I can process the final value.<br />
Now clicking somewhere else or pressing enter within the spin box will actually accept the new value via the <code>valueChanged</code> event.
But with enter, at the same time a 'click' is sent to the Close button.</p>
<p>I tried to override the keypress event:</p>
<pre><code>class MyDoubleSpinBox(QDoubleSpinBox):
def __init__(self, *args, **kwargs):
super(MyDoubleSpinBox, self).__init__(*args, **kwargs)
def keyPressEvent(self, event):
key = event.key()
#Catch keys
if key == Qt.Key_Return or key == Qt.Key_Enter:
# Process current item here
print(self.value())
#self.nextInFocusChain().setFocus()
#import pdb;pdb.set_trace()
#self.valueChanged(self.value())
pass
else:
super(MyDoubleSpinBox, self).keyPressEvent(event)
</code></pre>
<p>But this will also disable the valueChanged event.</p>
<p>How can I prevent triggering the close button when editing a SpinBox without masking its valueChanged event as well?</p>
<p>EDIT:
Or is QDialog the wrong class for the main window? :)</p>
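<p>For what it's worth, the workaround I have been experimenting with is to stop the Close button from being the dialog's default button, so that Return inside the spin box no longer triggers it. <code>button_box</code> here is a hypothetical <code>QDialogButtonBox</code> holding the Close button:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QDialogButtonBox

close_button = button_box.button(QDialogButtonBox.StandardButton.Close)
close_button.setAutoDefault(False)  # don't let Return "click" this button
close_button.setDefault(False)
</code></pre>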
|
<python><qt><pyside6><qt6>
|
2025-01-28 12:33:16
| 1
| 4,731
|
Nippey
|
79,393,350
| 14,649,310
|
langchain_community OllamaEndpointNotFoundError when trying to access phi3:mini Model
|
<p>I have my ollama service running in a container. If I query it from the terminal like:</p>
<pre><code>curl -X POST http://localhost:11434/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "phi3:mini",
"prompt": "Test query"
}'
</code></pre>
<p>it works and I get a response. I also verified the model exists in my Ollama service. But when I try to send a request from my python app with</p>
<pre><code>from langchain_community.llms.ollama import Ollama
model = Ollama(name=model_version, base_url=LLAMA_BASE_URL)
accumulated_text = ""
for chunk in model.stream(input_text):
    accumulated_text += chunk
</code></pre>
<p>where:
<code>model_version="phi3:mini"</code> and <code>LLAMA_BASE_URL="http://ollama:11434"</code> I get:</p>
<pre><code>langchain_community.llms.ollama.OllamaEndpointNotFoundError:
Ollama call failed with status code 404.
Maybe your model is not found and you should pull the model with `ollama pull llama2`.
</code></pre>
<p>If I install the <code>llama2</code> model in Ollama, however, it works and I don't get this error. Why? Does the <code>Ollama</code> client from <code>langchain_community</code> only work with specific models?</p>
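<p>For what it's worth, the variant I have been testing passes the model name through the <code>model</code> field instead of <code>name</code>, on the assumption that <code>name</code> is not what selects the model and the client then falls back to its <code>llama2</code> default:</p>
<pre class="lang-py prettyprint-override"><code>from langchain_community.llms.ollama import Ollama

model = Ollama(model="phi3:mini", base_url="http://ollama:11434")
print(model.invoke("Test query"))
</code></pre>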
|
<python><langchain><ollama>
|
2025-01-28 09:45:54
| 1
| 4,999
|
KZiovas
|
79,393,342
| 2,173,750
|
Is it possible in a Mako template to import objects from a Python/Module block in another Mako template?
|
<p>I have a Mako template with Python and Module blocks, like this:</p>
<pre><code><%
l1 = ['a','b','c']
%>
<%!
l2 = ['d','e','f']
%>
</code></pre>
<p>Is it possible to import <code>l1</code> or <code>l2</code> from another Mako template? <code><%namespace</code>, <code><%import</code> and other traditional ways to import other Mako templates do not seem to work.</p>
|
<python><mako>
|
2025-01-28 09:44:18
| 1
| 3,404
|
durum
|
79,393,186
| 2,879,720
|
Convolving with a gaussian kernel vs Gaussian blur
|
<p>While looking for a way to generate spatially varying noise, I came across <a href="https://stackoverflow.com/a/63868276/2879720">this answer</a>, which is able to do what I wanted. But I am getting confused about how the code works.</p>
<p>From what I understand, the first step of the code generates a gaussian kernel:</p>
<pre><code>import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
# Compute filter kernel with radius correlation_scale (can probably be a bit smaller)
correlation_scale = 150
x = np.arange(-correlation_scale, correlation_scale)
y = np.arange(-correlation_scale, correlation_scale)
X, Y = np.meshgrid(x, y)
print(X.shape,Y.shape)
dist = np.sqrt(X*X + Y*Y)
filter_kernel = np.exp(-dist**2/(2*correlation_scale))
</code></pre>
<p>which when visualized looks as follows:</p>
<p><a href="https://i.sstatic.net/eKAkmTvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eKAkmTvI.png" alt="" /></a></p>
<p>The second step of the code generates a random noise grid:</p>
<pre><code>n = 512
noise = np.random.randn(n, n)
</code></pre>
<p>that looks like:</p>
<p><a href="https://i.sstatic.net/OlmQ5Ql1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlmQ5Ql1.png" alt="enter image description here" /></a></p>
<p>The third step convolves the random noise generated in step 2 with the filter generated in step 1.</p>
<pre><code>noise1 = scipy.signal.fftconvolve(noise, filter_kernel, mode='same')
</code></pre>
<p>and the output of this step looks like this:</p>
<p><a href="https://i.sstatic.net/naew62PN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/naew62PN.png" alt="enter image description here" /></a></p>
<p>My question is how does the output of step 3 end up looking like this instead of a smoothed out version of the random noise? Isn't convolution with a Gaussian kernel the same as applying a Gaussian blur? For instance, if I apply a Gaussian filter on the random noise generated, the output would look like this:</p>
<pre><code>from scipy.ndimage import gaussian_filter
noise = gaussian_filter(noise , sigma=1, radius=10)
</code></pre>
<p><a href="https://i.sstatic.net/6HQDI0XB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HQDI0XB.png" alt="enter image description here" /></a></p>
<p>Why are the last two images so different from each other?</p>
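<p>For what it's worth, a like-for-like comparison I have been trying matches the widths: the kernel <code>exp(-dist**2/(2*correlation_scale))</code> corresponds to a variance of <code>correlation_scale</code>, i.e. sigma ≈ sqrt(150) ≈ 12.2, which is far wider than the <code>sigma=1</code> used above:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.ndimage import gaussian_filter

# Same noise as above, blurred with a comparable width (the amplitude will
# still differ because the hand-built kernel is not normalised).
noise3 = gaussian_filter(noise, sigma=correlation_scale**0.5)
</code></pre>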
|
<python><numpy><convolution><gaussianblur>
|
2025-01-28 08:49:17
| 1
| 373
|
MollieVX
|
79,393,140
| 21,049,944
|
How to transform pyarrow integer column to categorical without mapping every value
|
<p>I have a pyarrow table with an int column. Is it possible to transform it to a categorical string column without having to call <code>col.map(dict)</code> -> <code>cast("categorical")</code>?</p>
<p>I would basically like to just pass the dict to the table so that it knows how to interpret the integers.</p>
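<p>For reference, here is the kind of thing I am after, written out explicitly with <code>DictionaryArray.from_arrays</code>. This assumes the integers are, or can be remapped to, positions into a list of labels; the values are illustrative:</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa

codes = pa.array([0, 2, 1, 0], type=pa.int32())   # the existing int column
labels = pa.array(["red", "green", "blue"])        # the dict, as a list of labels
dict_arr = pa.DictionaryArray.from_arrays(codes, labels)
print(dict_arr.type)  # a dictionary (categorical) type
</code></pre>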
|
<python><pyarrow>
|
2025-01-28 08:25:56
| 1
| 388
|
Galedon
|
79,393,122
| 4,706,952
|
Transforming Data with implicit categories in header with pandas
|
<p>I have a table like:</p>
<pre><code> | 2022 | 2022 | 2021 | 2021
class | A | B | A | B
-----------|------|------|------|------
X | 1 | 2 | 3 | 4
Y | 5 | 6 | 7 | 8
</code></pre>
<p>How can I transform it to following form?</p>
<pre><code>year | category | class | value
----------------------------------
2022 | A | X | 1
2022 | A | Y | 5
2022 | B | X | 2
2022 | B | Y | 6
2021 | A | X | 3
2021 | A | Y | 7
2021 | B | X | 4
2021 | B | Y | 8
</code></pre>
<p>I tried various combinations of <code>pd.melt</code> with no success.</p>
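<p>For reference, here is roughly what I was aiming for, with the table rebuilt as a two-level column header; the reconstruction below is just to make the example self-contained:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame(
    [[1, 2, 3, 4], [5, 6, 7, 8]],
    index=pd.Index(["X", "Y"], name="class"),
    columns=pd.MultiIndex.from_product(
        [[2022, 2021], ["A", "B"]], names=["year", "category"]
    ),
)

out = (
    df.stack(["year", "category"])   # move both column levels into rows
      .rename("value")
      .reset_index()
      [["year", "category", "class", "value"]]
)
print(out)
</code></pre>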
<p>Thx in advance!</p>
|
<python><pandas>
|
2025-01-28 08:20:39
| 2
| 7,517
|
symbolrush
|
79,392,747
| 5,312,606
|
Confusion about type-checking nested Sequence
|
<p>I have code that in a minimal example looks like this.</p>
<pre class="lang-py prettyprint-override"><code>from ordered_set import OrderedSet
from collections.abc import Sequence
from typing import NewType, TypeAlias
SeqOverFrag: TypeAlias = Sequence
MotifIdx = NewType("MotifIdx", int)
AOIdx = NewType("AOIdx", int)
x = []
x.append({MotifIdx(1): OrderedSet([AOIdx(1), AOIdx(2)])})
y: Sequence[dict[MotifIdx, Sequence[AOIdx]]] = x
</code></pre>
<p>which raises the error</p>
<pre class="lang-none prettyprint-override"><code>Incompatible types in assignment (expression has type "list[dict[MotifIdx, OrderedSet[AOIdx]]]", variable has type "Sequence[dict[MotifIdx, Sequence[AOIdx]]]")
</code></pre>
<p>If I explicitly declare <code>x</code></p>
<pre class="lang-py prettyprint-override"><code>x: list[dict[MotifIdx, Sequence[AOIdx]]] = []
</code></pre>
<p>or if I initialize it directly</p>
<pre class="lang-py prettyprint-override"><code>x = [{MotifIdx(1): OrderedSet([AOIdx(1), AOIdx(2)])}]
</code></pre>
<p>then the error goes away.</p>
<p>The fixes suggest that there is something wrong with the type deduction; on the other hand, the error message shows that the type deduction worked (?). Is this a bug in <code>mypy</code>?</p>
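<p>For what it's worth, here is a stripped-down version without the NewTypes or OrderedSet that I would expect mypy to reject for the same reason; dict being invariant in its value type is my working theory, which is partly why I am asking:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence

x = []
x.append({1: [1, 2]})  # inferred as list[dict[int, list[int]]]
y: Sequence[dict[int, Sequence[int]]] = x  # rejected just like the original
</code></pre>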
|
<python><python-typing><mypy>
|
2025-01-28 04:33:37
| 0
| 1,897
|
mcocdawc
|
79,392,684
| 10,024,860
|
Pytest chain mocking between sync and async functions
|
<p>I want to mock:</p>
<pre><code>async with aiohttp.ClientSession() as session:
async with session.post("xyz.com") as resp:
x = await resp.json()
</code></pre>
<p>Currently, I am doing:</p>
<pre><code># patched aiohttp as mock_aiohttp
mock_session = Mock()
mock_session_post_cm = AsyncMock()
mock_aiohttp.ClientSession.return_value.__aenter__.return_value = mock_session
mock_session.post.return_value = mock_session_post_cm
mock_session_post_cm.__aenter__.return_value.json.return_value = "ABC"
</code></pre>
<p>However, it is very verbose and the intent becomes unclear, since I just want to mock the value of <code>x</code> and have no use for the intermediate mocks.</p>
<p>The challenge comes from the mix of sync and async function calls (mock_session is sync, everything else is async). If everything was either a property or a sync function, e.g. <code>x = a().b().c.d.e()</code>, then I could mock it in one line with <code>mock_a.return_value.b.return_value.c.d...</code>, and similarly it would be one line if every function was async, but every time it mixes, I have to create an intermediate variable. Is there a way around this?</p>
<p>For example it would be great if I could somehow do</p>
<pre><code>mock_aiohttp.ClientSession.return_value_async.__aenter__.return_value_async
.post.return_value_sync.__aenter__.return_value_async
.json.return_value_async = "ABC"
</code></pre>
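<p>For reference, the closest I have come to reducing the verbosity is a tiny helper for the async-context-manager hops. This still assumes <code>mock_aiohttp</code> is the patched module from the fixture:</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import AsyncMock, MagicMock

def async_cm(result):
    # MagicMock already provides __aenter__/__aexit__ as AsyncMocks (Python 3.8+).
    cm = MagicMock()
    cm.__aenter__.return_value = result
    return cm

resp = MagicMock()
resp.json = AsyncMock(return_value="ABC")
session = MagicMock()
session.post.return_value = async_cm(resp)
mock_aiohttp.ClientSession.return_value = async_cm(session)
</code></pre>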
|
<python><unit-testing><pytest><python-unittest>
|
2025-01-28 03:32:42
| 1
| 491
|
Joe C.
|
79,392,672
| 4,330,537
|
Trying to use chrome_undetected selenium driver and portable chrome
|
<p>I have the code below, and I have Chrome Portable installed at
<code>C:\temp\GoogleChromePortable64\App\Chrome-bin\chrome.exe</code>.
How do I tell undetected_chromedriver to use it?</p>
<pre><code># Import the required libraries
import undetected_chromedriver as uc
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from webdriver_manager.chrome import ChromeDriverManager
import time
# Define Chrome options
options = uc.ChromeOptions()
# Set headless to False to run in non-headless mode
options.headless = False
# Chrome driver
driver = uc.Chrome(use_subprocess=True, options=options)
# Go to the desired URL
driver.get("https://library.usask.ca/#gsc.tab=0")
# Wait until the input field is visible
wait = WebDriverWait(driver, 10)
q_field = wait.until(EC.presence_of_element_located((By.ID, "primoQueryTemp")))
# Use JavaScript to focus the element
driver.execute_script("arguments[0].focus();", q_field)
# Initiate typing into the field
q_field.send_keys("_")
# Click the field to trigger any other events
q_field.click()
# Keep the browser open for observation
time.sleep(566)
# Close the driver
driver.quit()
</code></pre>
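<p>For what it's worth, this is the direction I have been trying: pointing the options at the portable binary (and, if I have understood the library correctly, <code>uc.Chrome</code> also accepts a <code>browser_executable_path</code> argument for the same purpose):</p>
<pre class="lang-py prettyprint-override"><code>import undetected_chromedriver as uc

chrome_path = r"C:\temp\GoogleChromePortable64\App\Chrome-bin\chrome.exe"

options = uc.ChromeOptions()
options.headless = False
options.binary_location = chrome_path  # standard Selenium way to pick the browser binary

driver = uc.Chrome(
    options=options,
    browser_executable_path=chrome_path,  # uc-specific argument, as far as I can tell
    use_subprocess=True,
)
</code></pre>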
|
<python><google-chrome><selenium-webdriver>
|
2025-01-28 03:18:38
| 1
| 835
|
RobM
|
79,392,607
| 3,063,547
|
install of python-for-android failing: "No module named 'setuptools" when using "python3 setup.py build"
|
<p>I want to install python-for-android. I downloaded the source via a git clone of the repo and tried <code>python3 setup.py build</code>, but I get:</p>
<pre><code>"Traceback (most recent call last):
File "/home/me/python-for-android/setup.py", line 7, in <module>
from setuptools import setup, find_packages
ModuleNotFoundError: No module named 'setuptools'
</code></pre>
<p>Next tried <code>pip3 install setuptools</code>, but get:</p>
<pre><code>"Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: No matching distribution found for setuptools".
</code></pre>
<p>How can I fix this?</p>
|
<python><android>
|
2025-01-28 02:08:22
| 0
| 853
|
user3063547
|
79,392,590
| 8,458,083
|
How to select a NixOS WSL Python interpreter(from a virtual environment) in Visual Studio Code on Windows
|
<p>I'm using NixOS on WSL2 and Visual Studio Code on Windows with the WSL extension. I've created a virtual environment in NixOS, and I want to use the python command of the virtual environment as the Python interpreter in VS Code. However, I'm encountering issues selecting the correct interpreter.</p>
<p>My setup:</p>
<p>NixOS installed on WSL2</p>
<p>Visual Studio Code on Windows with the WSL extension (to access all file in WSL)</p>
<p>Python virtual environment created in NixOS (see flake at the end of the question)</p>
<p>When I run which python inside my virtual environment in NixOS, I get:</p>
<blockquote>
<p>/nix/store/24w9ckmkq0asaif83c13z53yraq584lm-python3-3.14.0a4/bin/python</p>
</blockquote>
<p>I've tried to select this path as my interpreter in VS Code by adapting it to the Windows WSL path format:</p>
<blockquote>
<p>\\wsl.localhost\NixOS\nix\store\24w9ckmkq0asaif83c13z53yraq584lm-python3-3.14.0a4\bin\python</p>
</blockquote>
<p>However, VS Code only allows me to choose executable (.exe) files. This makes sense, as the Linux executable can't be run directly from Windows.</p>
<p>Is it possible to select this NixOS WSL Python interpreter for use in VS Code on Windows? If so, how can I do it? I'm hoping there's a way to configure VS Code to use the WSL Python interpreter seamlessly. Any guidance on how to properly set this up would be greatly appreciated.</p>
<p>My flake file adds nothing to understanding the problem; it is included only to make the problem reproducible:</p>
<pre><code>{
description = "Python and Manim development environment";
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
outputs = { self, nixpkgs, flake-utils }@inputs:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
in {
devShells.default = pkgs.mkShell {
buildInputs = with pkgs; [
python314
manim
];
shellHook = ''
echo "Development environment for Python and Manim"
echo "Run 'python' to start Python interpreter"
echo "Run 'manim' to start Manim"
'';
};
}
);
}
</code></pre>
|
<python><visual-studio-code><windows-subsystem-for-linux><nix><nix-flake>
|
2025-01-28 01:47:09
| 1
| 2,017
|
Pierre-olivier Gendraud
|
79,392,577
| 1,997,852
|
How to make paho-mqtt loop timeout?
|
<p>Using python paho-mqtt, I'm trying to receive exactly 1 message from a topic and then exit. If no messages are received in 5 seconds then I want to timeout and exit. The problem I have is that this code always waits 5 seconds, even if the message was received sooner. If I remove the <code>sleep()</code> line then it doesn't receive any message at all.</p>
<pre class="lang-py prettyprint-override"><code>import paho.mqtt.client as mqtt
import logging
import time
def on_message(client, userdata, message):
logging.info(f"Received message: {message.payload}")
client.loop_stop()
logging.basicConfig(level=logging.DEBUG, format="%(asctime)s [%(levelname)s] %(message)s")
mqttc = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
mqttc.enable_logger()
mqttc.on_message = on_message
mqttc.connect("mqtt.eclipseprojects.io")
mqttc.subscribe("#")
mqttc.loop_start()
time.sleep(5)
logging.info('Complete!')
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>2025-01-28 01:39:33,511 [DEBUG] Sending CONNECT (u0, p0, wr0, wq0, wf0, c1, k60) client_id=b''
2025-01-28 01:39:33,511 [DEBUG] Sending SUBSCRIBE (d0, m1) [(b'#', 0)]
2025-01-28 01:39:33,632 [DEBUG] Received CONNACK (0, 0)
2025-01-28 01:39:33,634 [DEBUG] Received SUBACK
2025-01-28 01:39:33,634 [DEBUG] Received PUBLISH (d0, q0, r1, m0), 'AhoyDTU0600-Streusser/version', ... (6 bytes)
2025-01-28 01:39:33,634 [INFO] Received message: b'0.8.83'
2025-01-28 01:39:38,517 [INFO] Complete!
</code></pre>
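<p>For reference, the variant I have been considering replaces the <code>sleep()</code> with a <code>threading.Event</code> that the callback sets, so the wait returns as soon as the first message arrives and 5 seconds is only the upper bound (untested sketch):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import threading

import paho.mqtt.client as mqtt

got_message = threading.Event()

def on_message(client, userdata, message):
    logging.info(f"Received message: {message.payload}")
    got_message.set()  # wake the main thread immediately

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s [%(levelname)s] %(message)s")
mqttc = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
mqttc.enable_logger()
mqttc.on_message = on_message
mqttc.connect("mqtt.eclipseprojects.io")
mqttc.subscribe("#")
mqttc.loop_start()

got_message.wait(timeout=5)  # returns early once the event is set
mqttc.loop_stop()
logging.info('Complete!')
</code></pre>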
|
<python><mqtt><paho>
|
2025-01-28 01:38:35
| 1
| 1,217
|
Elliott B
|
79,392,549
| 2,061,886
|
can't figure out how to shrink docker image after pip install
|
<p>I have a docker file that looks like this:</p>
<pre><code>FROM python:3.12-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends git \
&& apt-get purge -y --auto-remove \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 install --no-cache-dir --upgrade \
pip poetry
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt \
&& rm -rf /root/.cache/pip
# ENV POETRY_VIRTUALENVS_CREATE=false
# COPY pyproject.toml .
# RUN poetry install
COPY . .
EXPOSE 8501
ENV PYTHONUNBUFFERED=1
CMD streamlit run ./app.py
</code></pre>
<p>my requirements file looks like this:</p>
<pre><code>streamlit>=1.41.1
vertexai>=1.71.1
google==3.0.0
google-genai>=0.6.0
google-generativeai==0.8.4
st_cookie==1.0.0
extra-streamlit-components==0.1.71
</code></pre>
<p>For some reason, the pip install step increases the Docker image by 850 MB! I've tried using Poetry as well to see if that helps, and even when I include just one of the dependencies (any one of them), the image still grows by at least 600 MB. Any thoughts on why that is and how I can reduce the size? Do I just need to manually go through something like <code>pipdeptree</code> and run <code>pip uninstall</code> on large packages?</p>
|
<python><docker>
|
2025-01-28 01:10:24
| 1
| 3,347
|
lightweight
|
79,392,545
| 626,664
|
What is giving Azure document intelligence a TypeError?
|
<p>I am using Azure Doc Intelligence to analyze PDFs. I am trying the code from <a href="https://learn.microsoft.com/en-us/python/api/overview/azure/ai-documentintelligence-readme?view=azure-python" rel="nofollow noreferrer">here.</a></p>
<p>Code:</p>
<pre><code>import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeOutputOption, AnalyzeResult
endpoint = os.environ["DOCUMENTINTELLIGENCE_ENDPOINT"]
key = os.environ["DOCUMENTINTELLIGENCE_API_KEY"]
document_intelligence_client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))
with open(path_to_sample_documents, "rb") as f:
poller = document_intelligence_client.begin_analyze_document(
"prebuilt-read",
body=f,
output=[AnalyzeOutputOption.PDF],
)
result: AnalyzeResult = poller.result()
operation_id = poller.details["operation_id"]
response = document_intelligence_client.get_analyze_result_pdf(model_id=result.model_id, result_id=operation_id)
with open("analyze_result.pdf", "wb") as writer:
writer.writelines(response)
</code></pre>
<p>I am getting a TypeError as follows:</p>
<blockquote>
<p>TypeError: Session.request() got an unexpected keyword argument 'body'</p>
</blockquote>
<p>How can I resolve it?</p>
|
<python><azure><typeerror><azure-document-intelligence>
|
2025-01-28 01:06:21
| 1
| 1,559
|
Droid-Bird
|
79,392,330
| 7,425,756
|
Is there an add-element-to-a-set method returning whether the element was actually added?
|
<p>I would like to achieve in Python the following semantics which is present in Kotlin:</p>
<pre><code>myset = set()
for elem in arr:
if not myset.add(elem): return 'duplicate found'
</code></pre>
<p>I am looking for a way to get a slight performance boost by 'hijacking' the add-to-set operation and analysing the returned value. It should be better than the following approach I already know in Python:</p>
<pre><code>myset = set()
for elem in arr:
if elem in myset: return 'duplicate found'
myset.add(elem)
</code></pre>
<p>Yet as far as I can observe, the add() method of set returns None no matter what the outcome of the add is.</p>
<p>Is there such a method in Python 3?
Looking through the documentation, I have not managed to find an add-to-set method that returns such a result.</p>
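<p>For reference, the closest single-pass equivalent I know of compares the set's size before and after the <code>add()</code>, since <code>add()</code> itself always returns <code>None</code>:</p>
<pre class="lang-py prettyprint-override"><code>def has_duplicate(arr):
    myset = set()
    for elem in arr:
        before = len(myset)
        myset.add(elem)
        if len(myset) == before:  # nothing was added, so elem was already present
            return 'duplicate found'
    return 'no duplicate'
</code></pre>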
|
<python><python-3.x>
|
2025-01-27 22:31:16
| 4
| 528
|
Alexander Ites
|
79,392,160
| 1,903,416
|
Sphinxcontrib.Autoprogram Module Not Found with Python
|
<p>I'm using <a href="https://pythonhosted.org/sphinxcontrib-autoprogram/" rel="nofollow noreferrer">https://pythonhosted.org/sphinxcontrib-autoprogram/</a> to try to generate documentation for my CLI automatically from my configuration of argparse.</p>
<p>It looks like it should work, but after I do everything as the doc suggests, when I try to run the command:</p>
<pre><code>sphinx-build -M html . docs/ -v
</code></pre>
<p>I get an error:</p>
<pre><code>lib/python3.10/site-packages/sphinxcontrib/autoprogram.py", line 141, in import_object
mod = __import__(module_name)
ModuleNotFoundError: No module named 'main'
</code></pre>
<p>It seems no matter what <code>module.py</code> I supply to it, regardless of the normal "dir has <code>__init__.py</code> in it" rules I've followed, it can never find it.</p>
<p>Are there troubleshooting steps one can take? This may be a generic Python module location question, but I never have problems with modules outside Sphinx, so it's possibly some problem with how Sphinx works.</p>
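<p>For what it's worth, the first thing I plan to check is the <code>sys.path</code> tweak in <code>conf.py</code>, on the assumption that Sphinx simply cannot see the directory containing <code>main.py</code> (the relative path below is illustrative):</p>
<pre class="lang-py prettyprint-override"><code># conf.py
import os
import sys

sys.path.insert(0, os.path.abspath('..'))  # directory that contains main.py

extensions = ['sphinxcontrib.autoprogram']
</code></pre>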
<p>As a higher-level meta question, I am willing to entertain I should be generating docs for my Python CLI some other way, though I'd strongly prefer whatever way that is, it uses the argparse calls I'm already making.</p>
|
<python><module><python-sphinx>
|
2025-01-27 20:54:30
| 0
| 497
|
time e. less
|
79,392,129
| 2,140,256
|
How to make an asynchronous python call from within scala
|
<p>I am trying to access the <a href="https://github.com/dydxprotocol/v4-clients/tree/main/v4-client-py-v2" rel="nofollow noreferrer">Python Client V4 for DyDx</a> from within a scala project. I integrated the previous V3 using <a href="https://scalapy.dev/" rel="nofollow noreferrer">scalapy</a> library. But V4 contains asynchronous calls which I don't know how I should handle. So for example the <a href="https://github.com/dydxprotocol/v4-clients/blob/main/v4-client-py-v2/examples/short_term_order_composite_example.py" rel="nofollow noreferrer">short term order composite example's</a> test function begins with the following lines:</p>
<pre><code>node = await NodeClient.connect(TESTNET.node)
indexer = IndexerClient(TESTNET.rest_indexer)
</code></pre>
<p>I can access the connect call as follows in my scala code:</p>
<pre><code> val network = py.module("dydx_v4_client.network")
val client = py.module("dydx_v4_client.node.client")
</code></pre>
<p>If I print out the value for client it gives me</p>
<blockquote>
<p><coroutine object NodeClient.connect at 0x7fe18e87a340></p>
</blockquote>
<p>The connect call in NodeClient is defined as asynchronous:</p>
<pre><code>async def connect(config: NodeConfig) -> Self:
...
</code></pre>
<p>How can I actually execute this coroutine object?</p>
|
<python><scala>
|
2025-01-27 20:43:40
| 2
| 5,402
|
user79074
|