| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,291,346
| 8,458,083
|
Is there a method analogous to `__args__` for finding the return type in Python?
|
<p>I'm working with Python's typing system and I've found that I can use <code>__args__</code> to get the argument types of a callable. For example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable
callable_type = Callable[[int, str], float]
print(callable_type.__args__) # Output: (<class 'int'>, <class 'str'>, <class 'float'>)
</code></pre>
<p>In this case, <code>__args__</code> returns a tuple containing both the argument types and the return type. Is there a similar method or attribute that specifically gives me the return type of a callable? I'm looking for something like:</p>
<pre class="lang-py prettyprint-override"><code>return_type = callable_type.__return_type__ # Hypothetical attribute
print(return_type) # Desired output: <class 'float'>
</code></pre>
<p>Is there a built-in way to achieve this in Python's typing system, or do I need to implement my own solution? (I would rather use a built-in way, to keep my code simple.)</p>
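<p>For reference, a minimal sketch of the workaround I would fall back to if nothing nicer exists (it assumes the return type keeps being stored as the last element of the tuple):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, get_args

callable_type = Callable[[int, str], float]

# The return type is always the last element, both in __args__
# and in the public typing.get_args() helper.
print(callable_type.__args__[-1])   # <class 'float'>
print(get_args(callable_type)[-1])  # <class 'float'>
</code></pre>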
|
<python>
|
2024-12-18 13:55:53
| 0
| 2,017
|
Pierre-olivier Gendraud
|
79,291,269
| 10,342,604
|
Pandas DataFrame Cannot use assign function - Why?
|
<p>I am encountering some odd behavior in pandas, and I am hoping someone could shed some light on specifics of the <code>df.assign(...)</code> function on a pandas DataFrame. I am getting a <code>ValueError</code> when trying to assign to a column, despite the function being valid.</p>
<pre class="lang-py prettyprint-override"><code>def is_toc_row(row):
m_sig = m_df.loc[m_df.signature == row.signature]
pct = (~pd.isnull(m_sig.line_type)).sum() / m_sig.shape[0]
return (not pd.isnull(row.line_type)) or (pct < .5)
m_df = m_df.assign(is_toc_row=is_toc_row)
</code></pre>
<p>Gives:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>But this works totally fine:</p>
<pre><code>for ind, row in m_df.iterrows():
m_df.at[ind, 'is_toc_row'] = is_toc_row(row)
</code></pre>
<p>Is there some issue with referencing the rest of the DataFrame in the function? All I see in the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer">docs</a> is that the subject df cannot change, which it does not.</p>
<p>Of course I am capable of building a workaround, I just want to understand why this does not work for future use.</p>
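<p>For completeness, this is the row-wise workaround I would fall back to (a sketch assuming <code>apply</code> with <code>axis=1</code> is acceptable performance-wise), though I still want to understand why <code>assign</code> behaves differently:</p>
<pre class="lang-py prettyprint-override"><code># Evaluate the function once per row instead of handing it to assign()
m_df['is_toc_row'] = m_df.apply(is_toc_row, axis=1)
</code></pre>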
<p>A few rows of data</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>index</th>
<th>signature</th>
<th>line_type</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>WYcxXTjq27YAP4uJOcLeRLelyUixNJaOwFwf2qqfpM4</td>
<td>NaN</td>
</tr>
<tr>
<td>1</td>
<td>WYcxXTjq27YAP4uJOcLeRLelyUixNJaOwFwf2qqfpM4</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>WYcxXTjq27YAP4uJOcLeRLelyUixNJaOwFwf2qqfpM4</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>WYcxXTjq27YAP4uJOcLeRLelyUixNJaOwFwf2qqfpM4</td>
<td>2</td>
</tr>
<tr>
<td>4</td>
<td>WYcxXTjq27YAP4uJOcLeRLelyUixNJaOwFwf2qqfpM4</td>
<td>2.4</td>
</tr>
</tbody>
</table></div>
|
<python><python-3.x><pandas><dataframe>
|
2024-12-18 13:32:58
| 2
| 393
|
Chase
|
79,291,114
| 569,229
|
How to get BCP-47 locale in Python on Windows?
|
<p>According to <a href="https://learn.microsoft.com/en-us/globalization/locale/other-locale-names" rel="nofollow noreferrer">Microsoft's documentation</a>, Windows has used IETF BCP-47 locale names since Vista (in other words, they should be used on all currently-supported versions of Windows). These names typically look like "en-US" or "fr-FR".</p>
<p>When on Windows 10 I install Python 3.13.1 from python.org, I get the following when I start it in <code>cmd.exe</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> import locale
>>> locale.setlocale(locale.LC_ALL, "")
'French_France.1252'
</code></pre>
<p>Putting aside for a moment the 8-bit codepage, where does <code>French_France</code> come from? I can't find this documented anywhere! And how can I convert it to a BCP-47 name?</p>
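<p>One approach I am considering is calling the Win32 API directly via <code>ctypes</code> (a sketch that bypasses the <code>locale</code> module entirely; I would prefer something built in):</p>
<pre class="lang-py prettyprint-override"><code>import ctypes

LOCALE_NAME_MAX_LENGTH = 85
buf = ctypes.create_unicode_buffer(LOCALE_NAME_MAX_LENGTH)
# GetUserDefaultLocaleName fills buf with a BCP-47 name such as 'fr-FR'
ctypes.windll.kernel32.GetUserDefaultLocaleName(buf, LOCALE_NAME_MAX_LENGTH)
print(buf.value)
</code></pre>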
|
<python><windows><locale>
|
2024-12-18 12:36:38
| 0
| 756
|
Reuben Thomas
|
79,291,035
| 3,450,026
|
robin_stocks Robinhood Authentication Stopped Working
|
<p>Robinhood authentication through MFA as described here <a href="https://robin-stocks.readthedocs.io/en/latest/quickstart.html#with-mfa-entered-programmatically-from-time-based-one-time-password-totp" rel="nofollow noreferrer">https://robin-stocks.readthedocs.io/en/latest/quickstart.html#with-mfa-entered-programmatically-from-time-based-one-time-password-totp</a> was working until Dec 17. I have tried multiple times but have been unsuccessful. Is anyone experiencing a similar issue?</p>
<p>The error message is not very helpful.</p>
<pre><code>Traceback (most recent call last):
File "rhs.py", line 962, in f2
rbh_ = rbh.authentication.login (username="....", password="....", mfa_code=totp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mycomputer/anaconda3/lib/python3.12/site-packages/robin_stocks/robinhood/authentication.py", line 198, in login
raise Exception(data['detail'])
~~~~^^^^^^^^^^
KeyError: 'detail'
</code></pre>
<p>I tried to print what <code>data</code> contains and obtained the following. (Note: <code>data = request_post(url, payload)</code> within the <code>robin_stocks/robinhood/authentication.py</code>).</p>
<p><code>{'verification_workflow': {'id': '***5d74a-****-****-9721-cb00a6d69***', 'workflow_status': 'workflow_status_internal_pending'}}</code></p>
<p>Could it be that the <code>'client_id': 'c82SH0WZOsabOXGP2sxqcj34FxkvfnWRZBKlBjFS',</code> used in the <code>payload</code> used for authentication (within the <code>robin_stocks/robinhood/authentication.py</code> <a href="https://github.com/jmfernandes/robin_stocks/blob/master/robin_stocks/robinhood/authentication.py" rel="nofollow noreferrer">file</a>) is no longer allowed by robinhood?</p>
<pre><code> url = login_url()
payload = {
'client_id': 'c82SH0WZOsabOXGP2sxqcj34FxkvfnWRZBKlBjFS',
'expires_in': expiresIn,
'grant_type': 'password',
'password': password,
'scope': scope,
'username': username,
'challenge_type': challenge_type,
'device_token': device_token
}
</code></pre>
<p>Any help will be greatly appreciated.</p>
|
<python><authentication><stock>
|
2024-12-18 12:11:55
| 2
| 697
|
emmanuelsa
|
79,290,968
| 28,837,255
|
'super' object has no attribute '__sklearn_tags__'
|
<p>I am encountering an AttributeError while fitting an XGBRegressor using RandomizedSearchCV from Scikit-learn. The error message states:</p>
<pre><code>'super' object has no attribute '__sklearn_tags__'.
</code></pre>
<p>This occurs when I invoke the <code>fit</code> method on the RandomizedSearchCV object. I suspect it could be related to compatibility issues between Scikit-learn and XGBoost or Python version. I am using Python 3.12, and both Scikit-learn and XGBoost are installed with their latest versions.</p>
<p>I attempted to tune the hyperparameters of an XGBRegressor using RandomizedSearchCV from Scikit-learn. I expected the model to fit the training data without issues and provide the best parameters after cross-validation.</p>
<p>I also checked for compatibility issues, ensured the libraries were up-to-date, and reinstalled Scikit-learn and XGBoost, but the error persists.</p>
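<p>A minimal sketch of the setup that triggers the error for me (the parameter grid and data here are hypothetical placeholders, just to show the call pattern):</p>
<pre><code>import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBRegressor

X, y = np.random.rand(100, 5), np.random.rand(100)
search = RandomizedSearchCV(
    XGBRegressor(),
    param_distributions={"n_estimators": [50, 100], "max_depth": [3, 5]},
    n_iter=2,
    cv=3,
)
search.fit(X, y)  # AttributeError: 'super' object has no attribute '__sklearn_tags__'
</code></pre>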
|
<python><machine-learning><scikit-learn><xgboost>
|
2024-12-18 11:45:52
| 4
| 333
|
Varshith
|
79,290,851
| 1,059,860
|
make matplotlib plot interactive with Dash
|
<p>I found <a href="https://stackoverflow.com/a/78402943/1059860">this</a> thread, which shows how to create and visualise a directed weighted graph.
I improved slightly on the answer provided there:</p>
<pre><code>import random
import networkx as nx
import matplotlib.pyplot as plt
from netgraph import MultiGraph
def create_graph_from_data(nodes, edges):
G = nx.MultiDiGraph()
# Add nodes from the provided list
G.add_nodes_from(nodes)
# Add edges from the provided list of tuples (node1, node2, weight)
for (node1, node2, weight) in edges:
G.add_edge(node1, node2, weight=weight)
return G
# Example usage:
nodes = [0, 1, 2, 3, 4] # Custom list of nodes
edges = [
(0, 1, 5), # Edge from node 0 to node 1 with weight 5
(1, 2, 3), # Edge from node 1 to node 2 with weight 3
(2, 3, 2), # Edge from node 2 to node 3 with weight 2
(3, 4, 4), # Edge from node 3 to node 4 with weight 4
(4, 0, 6), # Edge from node 4 to node 0 with weight 6
(0, 2, 7), # Edge from node 0 to node 2 with weight 7
(1, 4, 8) # Edge from node 1 to node 4 with weight 8
]
# Create the graph from the custom nodes and edges
G = create_graph_from_data(nodes, edges)
# Display the graph
MultiGraph(
G,
node_labels=True,
node_color="skyblue",
edge_color="gray",
edge_labels=nx.get_edge_attributes(G, 'weight'),
edge_label_fontdict=dict(fontsize=8),
arrows=True,
)
plt.show()
</code></pre>
<p>Is there a way to make this interactive on Dash? I want to eventually be able to say: show me only nodes 0, 2 and 3.</p>
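<p>To illustrate the kind of filtering I mean (not Dash-specific, just a sketch of drawing a node subset with networkx's <code>subgraph</code>):</p>
<pre><code>selected_nodes = [0, 2, 3]      # e.g. chosen from a Dash control
H = G.subgraph(selected_nodes)  # view of G restricted to those nodes
MultiGraph(H, node_labels=True, arrows=True)
plt.show()
</code></pre>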
|
<python><matplotlib><plotly-dash>
|
2024-12-18 11:05:17
| 0
| 2,258
|
tandem
|
79,290,498
| 4,451,315
|
Specify ddof in 'std' in agg
|
<p>If I have a pandas dataframe and am performing various aggregations, for example</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'a': [1,1,2], 'b': [4,5,6]})
df.groupby('a').agg({'b': ['sum', 'mean', 'std']})
</code></pre>
<pre><code> b
sum mean std
a
1 9 4.5 0.707107
2 6 6.0 NaN
</code></pre>
<p>then how can I specify the <code>ddof</code> to <code>std</code>?</p>
<p>I'm aware that I can do it when doing <code>df.groupby('a')['b'].std(ddof=0)</code>, but how can I do it in <code>.agg</code> where I specify multiple aggregations? Or is there no way?</p>
<hr />
<p>The solution suggested in <a href="https://stackoverflow.com/questions/25915225/pandas-dataframe-groupby-to-calculate-population-standard-deviation">Pandas dataframe groupby to calculate population standard deviation</a> is not suitable because it gives a warning</p>
<pre><code><ipython-input-25-3ae33ed0991c>:1: FutureWarning: The provided callable <function std at 0x7f528458cd60> is currently using SeriesGroupBy.std. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "std" instead.
</code></pre>
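<p>For reference, the closest thing I have found so far is passing an anonymous function in the list (a sketch; the resulting column then gets an auto-generated lambda name instead of <code>std</code>):</p>
<pre class="lang-py prettyprint-override"><code>df.groupby('a').agg({'b': ['sum', 'mean', lambda s: s.std(ddof=0)]})
</code></pre>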
|
<python><pandas>
|
2024-12-18 08:51:21
| 1
| 11,062
|
ignoring_gravity
|
79,290,432
| 12,870,651
|
Python Async Function not updating timestamp
|
<p>I have a machine running some scripts that I need to keep an eye on.</p>
<p>I have set up a Python script to send me an email every hour.</p>
<p>I would like to add a timestamp to it so that I can see at a glance when the last message was sent from the machine, to know it's running. When testing this by sending an email every 120 seconds, I found that the email-sending part works perfectly but the timestamp does not update. I can't figure out what I am doing wrong.</p>
<p>The print statement returns the same timestamp after each loop and so does the email.</p>
<pre class="lang-py prettyprint-override"><code>Email Sent at 18 December 2024 13:16:50
Email Sent at 18 December 2024 13:16:50
Email Sent at 18 December 2024 13:16:50
</code></pre>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
#Main Event loop to send emails
async def send_email(timestamp):
    import datetime as dt
    timestamp = dt.datetime.now()
    import socket
    hostname = socket.gethostname()
    import getpass
    user = getpass.getuser()
    timestamp_string = timestamp.strftime("%d %B %Y %H:%M:%S")
    subject = f"Machine Status Check - {timestamp_string}"
    html_body = f"""
    Machine Name: {hostname}\n
    User ID: {user}
    """
    to = "email.address@domain.com"
    while True:
        import win32com.client
        outlook = win32com.client.gencache.EnsureDispatch('Outlook.Application')
        new_email = outlook.CreateItem(0)
        new_email.To = to
        new_email.Subject = subject
        new_email.HTMLBody = html_body
        new_email.Send()
        print(f"Email Sent at {timestamp_string}")
        await asyncio.sleep(120)

#Run the asyncio event loop
async def main():
    await send_email()

#Start the event loop
asyncio.run(main=main())
</code></pre>
|
<python><python-asyncio><pywin32><python-datetime>
|
2024-12-18 08:31:17
| 1
| 439
|
excelman
|
79,290,411
| 5,923,374
|
HuggingFace Dataset: Load datasets with different set of columns
|
<p>This is how I load my train and test datasets with HF:</p>
<pre class="lang-py prettyprint-override"><code>dataset = {name.replace('/', '.'): f'{name}/*.parquet' for name in ["train", "test"]}
dataset = load_dataset("parquet", data_files=dataset)
</code></pre>
<p>However, there is a problem, since my <code>train</code> dataset has extra columns that <code>test</code> doesn't have (and I need to load them in order to prepare training data).</p>
<p>Since there is a column mismatch, I get an error on load:</p>
<pre class="lang-none prettyprint-override"><code>datasets.table.CastError: Couldn't cast
column1: string
column2: list<item: string>
child 0, item: string
column3: list<item: int32>
child 0, item: int32
__index_level_0__: int64
-- schema metadata --
pandas: '{"index_columns": ["__index_level_0__"], "column_indexes": [{"na' + 1438
to
{'column1': Value(dtype='string', id=None), 'column2': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '__index_level_0__': Value(dtype='int64', id=None)}
</code></pre>
<p>because column names don't match (<code>column3</code> is missing).</p>
<p>What is the correct way to load train and test with different set of columns?</p>
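<p>The workaround I am currently leaning towards is loading each split separately so the schemas are never merged (a sketch, assuming the same parquet layout as above):</p>
<pre class="lang-py prettyprint-override"><code>from datasets import load_dataset

train = load_dataset("parquet", data_files={"train": "train/*.parquet"})["train"]
test = load_dataset("parquet", data_files={"test": "test/*.parquet"})["test"]
</code></pre>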
|
<python><huggingface-datasets>
|
2024-12-18 08:20:11
| 1
| 1,538
|
Ford O.
|
79,290,374
| 1,000,466
|
How do I highlight the VIX plot above 30 level with red color?
|
<p>I want to highlight the parts of a VIX plot where the index exceeds 30 points in red.</p>
<p>The following example code is as below:</p>
<pre><code>import yfinance as yf
import matplotlib.pyplot as plt
# Fetch VIX data
vix_data = yf.download('^VIX', start='2020-01-01', end='2024-12-31')
# Extract the dates and VIX closing values
dates = vix_data.index
vix_close = vix_data['Close']
# Plot the VIX data
plt.figure(figsize=(12, 6))
plt.plot(dates, vix_close, label='VIX Close', color='blue', linewidth=1.5)
# Highlight regions where VIX is above 30
# TODO
plt.fill_between(dates, vix_close, 30, where=(vix_close > 30), color='red', alpha=0.5, label='Above 30')
# Add threshold lines
plt.axhline(y=30, color='red', linestyle='--', label='Threshold (30 pts)')
plt.axhline(y=20, color='orange', linestyle='--', label='Threshold (20 pts)')
# Add labels and title
plt.title('VIX Index with Highlights Above 30', fontsize=16)
plt.xlabel('Date', fontsize=12)
plt.ylabel('VIX Level', fontsize=12)
plt.legend(fontsize=10)
plt.grid(True)
# Show the plot
plt.show()
</code></pre>
<p>My code could not pass beyond the fill_between function, so I marked it with TODO.</p>
<p>I am new to matplotlib. Could you advise how I might be able to fix this Python code?</p>
<p>My tools list used as below:</p>
<pre><code>matplotlib 3.10.0rc1
yfinance 0.2.50
</code></pre>
<p>My code breaks at these lines:</p>
<pre><code> File "C:...\venv\Lib\site-packages\matplotlib\axes\_axes.py", line 5626, in fill_between
return self._fill_between_x_or_y(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:...\venv\Lib\site-packages\matplotlib\axes\_axes.py", line 5611, in _fill_between_x_or_y
collection = mcoll.FillBetweenPolyCollection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
|
<python><matplotlib><yfinance>
|
2024-12-18 08:07:13
| 2
| 745
|
Benny Khoo
|
79,290,203
| 8,040,369
|
Groupby a df column based on more than 3 columns
|
<p>I have an df which has 3 columns: Region, Country and AREA_CODE.</p>
<pre><code>Region Country AREA_CODE AREA_SUB_CODE_1 AREA_SUB_CODE_2
===========================================================================
AMER US A1 A1_US_1 A1_US_2
AMER CANADA A1 A1_CA_1 A1_CA_2
AMER US B1 B1_US_1 B1_US_2
AMER US A1 A1_US_1 A1_US_2
</code></pre>
<p>Is there a way to output both AREA_SUB_CODE_1 and AREA_SUB_CODE_2 as lists nested under each of the preceding column values, something like the below?</p>
<pre><code>{
"AREA_SUB_CODE_1": {
"AMER": {
"US": {
"A1": ["A1_US_1"],
"B1": ["B1_US_1"]
},
"CANADA": {
"A1": ["A1_CA_1"],
}
}
},
"AREA_SUB_CODE_2": {
"AMER": {
"US": {
"A1": {
"A1_US_1": ["A1_US_2"]
},
"B1": {
"B1_US_1": ["B1_US_2"]
},
"CANADA": {
"A1": {
"A1_CA_1": ["A1_CA_2"],
}
}
}
},
}
</code></pre>
<p>So far I have tried grouping by 3 columns, which works:</p>
<pre><code>for (k1, k2), v in df.groupby(['Region', 'Country'])['AREA_CODE']:
tTmp.setdefault(k1, {})[k2] = sorted(v.unique())
</code></pre>
<p>But when I try to group by 4 columns, it throws an error:</p>
<blockquote>
<p>too many values to unpack (expected 2)</p>
</blockquote>
<pre><code>for (k1, k2), v in df.groupby(['Region', 'Country', 'AREA_CODE'])['AREA_SUB_CODE_1']:
tTmp.setdefault(k1, {})[k2] = sorted(v.unique())
</code></pre>
<p>How can I apply groupby to 4 or 5 columns? Or is there any other way to achieve this?</p>
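<p>For the 4-column case, this is the kind of nesting I am after (a sketch based on unpacking three keys, though I am not sure it scales cleanly to 5 columns):</p>
<pre><code>tTmp = {}
for (k1, k2, k3), v in df.groupby(['Region', 'Country', 'AREA_CODE'])['AREA_SUB_CODE_1']:
    tTmp.setdefault(k1, {}).setdefault(k2, {})[k3] = sorted(v.unique())
</code></pre>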
|
<python><pandas><dataframe>
|
2024-12-18 06:48:49
| 3
| 787
|
SM079
|
79,290,194
| 6,702,598
|
How to run globally installed coverage on a package inside a virtual environment?
|
<p>I'd like to run a coverage check with the following configuration:</p>
<ul>
<li><code>coverage</code> is installed globally on the system (via pipx).</li>
<li><code>pytest</code> is installed inside the virtual environment of the project.</li>
</ul>
<p>(Inside the venv) When I run <code>coverage run -m pytest</code> I get the error <code>No module named 'pytest'</code>.</p>
<p>How to reproduce</p>
<pre><code>pipx install coverage
python -m venv venv
source ./venv/bin/activate
pip install pytest
coverage run -m pytest
</code></pre>
<p>Here are some other outputs (inside the venv)</p>
<ul>
<li>command: <code>pytest --version</code>.
<ul>
<li>result: <em>pytest 8.3.4</em></li>
</ul>
</li>
<li>command: <code>coverage run --source=. ./venv/bin/python -m pytest</code>
<ul>
<li>result:
<ul>
<li><em>CoverageWarning: No data was collected.</em></li>
<li><em>Couldn't run '[...]/venv/bin/python' as Python code: SyntaxError: invalid or missing encoding declaration</em></li>
</ul>
</li>
</ul>
</li>
</ul>
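<p>The obvious fallback I am trying to avoid is installing coverage into the venv itself (sketch):</p>
<pre><code>pip install coverage
coverage run -m pytest
</code></pre>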
|
<python><python-3.x><coverage.py>
|
2024-12-18 06:43:32
| 0
| 3,673
|
DarkTrick
|
79,290,131
| 2,218,321
|
How to create an instance of a Pydantic model with a subset of fields
|
<p>This is the model</p>
<pre><code>class UsersInformationModel(BaseDataModel):
    id: Optional[int] = None
    is_vip: bool = False
    creation_tag: Optional[dict] = None  # JSON field mapped to a dictionary
    user_id: int  # Not nullable
    created_at: Optional[datetime] = None
    updated_at: Optional[datetime] = None
    branch_id: Optional[int] = None
    setting: Optional[dict] = None  # JSON field mapped to a dictionary
    vip_type: int = 0
    medication_history: Optional[bool] = None
    other_disease: Optional[str] = None
    tobacco_alcohol_use: Optional[bool] = None
    consumed_medications: Optional[str] = None
    consumed_medications_amount: Optional[str] = None
</code></pre>
<p>and BaseDataModel</p>
<pre><code>from pydantic import BaseModel, PrivateAttr, root_validator
from typing import Dict, Any
class BaseDataModel(BaseModel):
    _explicit_fields: set = PrivateAttr(default_factory=set)

    class Config:
        # Pydantic settings
        populate_by_name = True
        exclude_unset = True

    @root_validator(pre=True)
    def validate_fields(cls, values):
        # This ensures that only explicitly set fields are retained in the model
        for field in values.copy():
            if field not in cls.__fields__:
                del values[field]
        return values

    def dict(self, **kwargs):
        # Ensure fields are excluded if not set (only non-None fields are included)
        return super().model_dump(exclude_unset=True, **kwargs)
</code></pre>
<p>The problem is when I write:</p>
<pre><code> data = {
'user_id': self.current_user.id,
'medication_history': self.data.medication_history,
'other_disease': self.data.other_disease,
'tobacco_alcohol_use': self.data.tobacco_alcohol_use,
'consumed_medications': self.data.consumed_medications,
'branch_id': None,
'consumed_medications_amount': self.data.consumed_medications_amount,
}
data = UsersInformationModel(**data)
data_dict = data.dict()
</code></pre>
<p>The <code>data</code> instance contains all of the model's fields, with the unmentioned ones set to None, but <code>data_dict</code> is OK and only contains the fields included in <code>data</code>.</p>
<p>How can I change <code>BaseDataModel</code> so that <code>data</code> only contains the mentioned fields? No matter if the value is None, I want it to have only explicitly included fields.</p>
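<p>In other words, I would like something with this behaviour (a sketch of the expected result, not working code):</p>
<pre><code>data = UsersInformationModel(user_id=1, branch_id=None)
print(data)
# expected: user_id=1 branch_id=None   (only the fields I passed in)
# actual:   user_id=1 branch_id=None is_vip=False vip_type=0 ...   (every field)
</code></pre>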
|
<python><pydantic>
|
2024-12-18 06:09:12
| 0
| 2,189
|
M a m a D
|
79,289,826
| 865,220
|
Parenting the BaseMesh to the rig with automatic weights is not working
|
<p>I am on Blender 3.6.</p>
<p>The goal is to simulate a running/walking animation of a BaseMesh loaded from an OBJ file via Python.</p>
<p>Here is my python code:</p>
<pre><code>import bpy
import math
def import_basemesh(filepath):
# Import the BaseMesh OBJ file
bpy.ops.import_scene.obj(filepath=filepath)
basemesh = bpy.context.selected_objects[0]
basemesh.name = "BaseMesh"
return basemesh
def create_rig():
# Create an armature
bpy.ops.object.armature_add(enter_editmode=True, location=(0, 0, 0))
armature = bpy.context.object
armature.name = "Rig"
armature.show_in_front = True
# Add bones in Edit Mode
bones = armature.data.edit_bones
# Rename default bone to "Spine"
bones["Bone"].head = (0, 0, 18)
bones["Bone"].tail =(0, 0, 11)
bones["Bone"].name = "Spine"
# Add head bone
head_bone = bones.new("Head")
head_bone.head = (0, 0, 20)
head_bone.tail = (0, 0, 18)
head_bone.parent = bones["Spine"]
# Add left leg bone
left_leg_bone = bones.new("Left_Leg")
left_leg_bone.head = (0, 0, 11)
left_leg_bone.tail = (0, -2, 1)
left_leg_bone.parent = bones["Spine"]
# Add right leg bone
right_leg_bone = bones.new("Right_Leg")
right_leg_bone.head = (0, 0, 11)
right_leg_bone.tail = (0, 2, 1)
right_leg_bone.parent = bones["Spine"]
# Add left arm bone
left_arm_bone = bones.new("Left_Arm")
left_arm_bone.head = (0, 0, 18)
left_arm_bone.tail = (0, -6, 9)
left_arm_bone.parent = bones["Spine"]
# Add right arm bone
right_arm_bone = bones.new("Right_Arm")
right_arm_bone.head = (0, 0, 18)
right_arm_bone.tail = (0, 6, 9)
right_arm_bone.parent = bones["Spine"]
bpy.ops.object.mode_set(mode='OBJECT')
return armature
def rig_basemesh(basemesh, armature):
"""
Parent the BaseMesh to the rig with automatic weights.
"""
# Select the BaseMesh and the Armature
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = armature
bpy.ops.object.select_all(action='DESELECT')
basemesh.select_set(True)
armature.select_set(True)
# Set Parent to Armature with Automatic Weights
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
print(f"Parented '{basemesh.name}' to '{armature.name}' with Automatic Weights.")
def rotate_basemesh(basemesh, angle_degrees):
"""
Rotates the BaseMesh around the Z-axis by a given angle (in degrees).
"""
import math
# Convert degrees to radians
angle_radians = math.radians(angle_degrees)
# Apply rotation
basemesh.rotation_euler[2] += angle_radians # Z-axis rotation (XY plane)
print(f"Rotated '{basemesh.name}' by {angle_degrees} degrees counterclockwise on the XY plane.")
def animate_running(armature):
"""
Create a running animation for the rig's bones.
"""
# Ensure animation data exists
if not armature.animation_data:
armature.animation_data_create()
# Switch to Pose Mode
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
pose_bones = armature.pose.bones
# Verify the bones exist
required_bones = ["Left_Leg", "Right_Leg", "Left_Arm", "Right_Arm"]
missing_bones = [bone for bone in required_bones if bone not in pose_bones]
if missing_bones:
print(f"ERROR: Missing bones: {missing_bones}")
return
print("Available bones in armature:")
for bone in pose_bones:
print(bone.name)
# Set animation frame range
frame_start = 1
frame_end = 200
bpy.context.scene.frame_start = frame_start
bpy.context.scene.frame_end = frame_end
# Insert keyframes for each bone
print("Inserting keyframes for running animation...")
try:
while frame_start<frame_end:
# Animate the Right Leg's location (opposite phase)
# Set the starting rotation
pose_bones["Left_Leg"].rotation_quaternion = (1, 0, 0, math.sin(math.radians(45/2)))
pose_bones["Left_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start)
pose_bones["Left_Leg"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Left_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 5)
pose_bones["Left_Leg"].rotation_quaternion = (1, 0, 0, -math.sin(math.radians(45/2)))
pose_bones["Left_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 10)
pose_bones["Left_Leg"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Left_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 15)
# Animate the Right Leg's location (opposite phase)
pose_bones["Right_Leg"].rotation_quaternion = (1, 0, 0, -math.sin(math.radians(45/2)))
pose_bones["Right_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start)
pose_bones["Right_Leg"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Right_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 5)
pose_bones["Right_Leg"].rotation_quaternion = (1, 0, 0, math.sin(math.radians(45/2)))
pose_bones["Right_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 10)
pose_bones["Right_Leg"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Right_Leg"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 15)
# Animate the Left Arm's location (opposite to legs)
pose_bones["Left_Arm"].rotation_quaternion = (1, 0, 0, math.sin(math.radians(45/2)))
pose_bones["Left_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start)
pose_bones["Left_Arm"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Left_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 5)
pose_bones["Left_Arm"].rotation_quaternion = (1, 0, 0, -math.sin(math.radians(45/2)))
pose_bones["Left_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 10)
pose_bones["Left_Arm"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Left_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 15)
# Animate the Right Arm's location (opposite phase to Left Arm)
pose_bones["Right_Arm"].rotation_quaternion = (1, 0, 0, -math.sin(math.radians(45/2)))
pose_bones["Right_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start)
pose_bones["Right_Arm"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Right_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 5)
pose_bones["Right_Arm"].rotation_quaternion = (1, 0, 0, math.sin(math.radians(45/2)))
pose_bones["Right_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 10)
pose_bones["Right_Arm"].rotation_quaternion = (1, 0, 0, 0)
pose_bones["Right_Arm"].keyframe_insert(data_path="rotation_quaternion", frame=frame_start + 15)
print("Location keyframes successfully inserted.")
frame_start += 20
except Exception as e:
print(f"ERROR: Failed to insert keyframes: {e}")
bpy.ops.object.mode_set(mode='OBJECT')
def main(filepath):
basemesh = import_basemesh(filepath)
armature = create_rig()
rig_basemesh(basemesh, armature)
basemesh = bpy.data.objects["BaseMesh"]
rotate_basemesh(basemesh, 90)
animate_running(armature)
obj_filepath = "C:/Users/hp/Downloads/fdx54mtvuz28-FinalBaseMesh/FinalBaseMesh.obj"
main(obj_filepath)
</code></pre>
<p>The expectation is for the rig bones to attach to the corresponding body parts; instead they seem to be attached randomly, as the entire body sways back and forth rather than producing a running animation.
Here is the obj file of the basemesh:</p>
<p><a href="https://sketchfab.com/3d-models/fdx54mtvuz28-final-base-mesh-bfa060fcbce64195be4fe5098b88b1a7" rel="nofollow noreferrer">https://sketchfab.com/3d-models/fdx54mtvuz28-final-base-mesh-bfa060fcbce64195be4fe5098b88b1a7</a></p>
|
<python><animation><blender><bpy>
|
2024-12-18 02:23:28
| 0
| 18,382
|
ishandutta2007
|
79,289,748
| 3,398,324
|
Combining a runnable sequence pipeline with a constitutional chain
|
<p>After updating my code to replace LLMChain (deprecated) with the new pipeline approach, I am getting an error because ConstitutionalChain expects the old LLMChain format. Can anyone suggest a solution? Is there a newer way to do this?</p>
<pre><code>#from langchain.chains import LLMChain
from langchain.prompts import ChatPromptTemplate
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
# Initialize the model
llm = ChatGoogleGenerativeAI(
google_api_key=GEMINI_API_KEY, model="gemini-1.5-flash", temperature=0.3)
# Create a chat chain for creating text.
#chat_chain = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template("{query}"))
# Create a runnable sequence for the chat chain
chat_chain = ChatPromptTemplate.from_template("{query}") | llm | StrOutputParser()
# Create a principle for our constitutional chain.
principle = ConstitutionalPrinciple(
name="Fear of Spiders",
critique_request="The model should not include spiders in stories it writes.",
revision_request="Modify the story to be about animals other than spiders.",
)
constitutional_chain = ConstitutionalChain.from_llm(
chain=chat_chain,
constitutional_principles=[principle],
llm=llm
)
# Set the input query for the chat chain.
query = {"query": "Please give me the main events of a story about three household pets."}
# Run the constitutional chain using the query as the first input.
result = constitutional_chain.invoke(query)
print(result["output"])
</code></pre>
<p>This is the error:</p>
<pre><code>AttributeError: 'RunnableSequence' object has no attribute 'get'
</code></pre>
|
<python><langchain><py-langchain>
|
2024-12-18 01:28:21
| 0
| 1,051
|
Tartaglia
|
79,289,725
| 26,579,940
|
Multiply the value determined for each section
|
<p>I am in a situation where I need to multiply several columns by the value corresponding to each interval.</p>
<p>I'm using pandas, but currently I use a for loop to look up the value, as follows.</p>
<p>I think there is a better way than this; can someone give me some advice?</p>
<p>sample:</p>
<pre><code>import pandas as pd
d = {
    5: 5,
    4: 7,
    3: 2,
    2: -4,
    1: -5,
}

def mul(value):
    if value < 1: return value * 0.01
    elif 5 < value: return value * 20
    for k, v in d.items():
        if value <= k:
            val = value * v
            break
    return val

a = [
    {'a': 5, 'b': 10, 'c': 2},
    {'a': 4, 'b': 0.5, 'c': 4},
    {'a': 3.5, 'b': 1.5, 'c': 5},
    {'a': 2.1, 'b': 5, 'c': 6},
    {'a': 0.1, 'b': 1, 'c': 7},
]
df = pd.DataFrame(a)

a = []
b = []
for i in df.values:
    a.append(mul(i[0]))
    b.append(mul(i[1]))

df['a1'] = a
df['b1'] = b
print(df.head())
"""
a b c a1 b1
0 5.0 10.0 2.0 25.000 200.000
1 4.0 0.5 4.0 20.000 0.005
2 3.5 1.5 5.0 17.500 7.500
3 2.1 5.0 6.0 10.500 25.000
4 0.1 1.0 7.0 0.001 5.000
"""
</code></pre>
|
<python><pandas>
|
2024-12-18 01:09:05
| 2
| 404
|
white.seolpyo.com
|
79,289,579
| 11,117,255
|
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='my-opensearch-domain.com', port=443): Read timed out. (read timeout=15)
|
<p>I’m running a Python script that bulk-indexes documents into an OpenSearch domain. Sometimes the bulk requests fail with ReadTimeoutError exceptions, similar to what you see below:</p>
<pre><code>Traceback (most recent call last):
File "/.../urllib3/connectionpool.py", line 536, in _make_request
response = conn.getresponse()
...
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='my-opensearch-domain.com', port=443): Read timed out. (read timeout=15)
During handling of the above exception, another exception occurred:
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='my-opensearch-domain.com', port=443): Read timed out. (read timeout=15)
</code></pre>
<p>I see these timeouts intermittently during _bulk indexing operations. Some batches succeed, while others time out after the default read_timeout.</p>
<p>What I’ve tried:</p>
<ul>
<li>Ensuring my internet connection is stable.</li>
<li>Running smaller batch sizes (e.g., 100 docs at a time).</li>
<li>Confirmed the OpenSearch cluster is healthy and has enough resources.</li>
<li>Verified that indexing works most of the time, except under heavier load.</li>
</ul>
<p>Questions:</p>
<ul>
<li>How can I prevent these intermittent ReadTimeoutError exceptions?</li>
<li>Are there configuration options (timeouts, retries) I can tweak in the OpenSearch Python client or requests library?</li>
<li>Are there best practices for large bulk indexing jobs in OpenSearch?</li>
</ul>
<p>Any guidance on resolving these timeouts or best practices for handling large bulk indexing jobs would be appreciated.</p>
<pre><code>import json
import logging
import time
import boto3
from requests_aws4auth import AWS4Auth
from opensearchpy import OpenSearch, RequestsHttpConnection
# Configure basic logging (ERROR only to reduce verbosity)
logger = logging.getLogger(__name__)
logger.setLevel(logging.ERROR)
# Configuration (use placeholders, no PII)
AWS_REGION = "us-east-1" # Replace as needed
OPENSEARCH_ENDPOINT = "your-opensearch-domain.us-east-1.es.amazonaws.com" # Replace with your endpoint
INDEX_NAME = "test-index"
def create_opensearch_client(timeout=15, max_retries=3, pool_maxsize=10):
session = boto3.Session(region_name=AWS_REGION)
credentials = session.get_credentials()
awsauth = AWS4Auth(
credentials.access_key,
credentials.secret_key,
AWS_REGION,
"es",
session_token=credentials.token,
)
return OpenSearch(
hosts=[{"host": OPENSEARCH_ENDPOINT, "port": 443}],
http_auth=awsauth,
use_ssl=True,
verify_certs=True,
timeout=timeout,
max_retries=max_retries,
retry_on_timeout=True,
connection_class=RequestsHttpConnection,
pool_maxsize=pool_maxsize,
)
def bulk_index_docs(opensearch_client, docs, batch_size=100):
# Index documents in batches
for i in range(0, len(docs), batch_size):
batch = docs[i:i+batch_size]
bulk_body = []
for doc in batch:
doc_id = doc.get("id", f"doc_{i}")
bulk_body.append({"index": {"_index": INDEX_NAME, "_id": doc_id}})
bulk_body.append(doc)
try:
response = opensearch_client.bulk(body=bulk_body)
if response.get("errors"):
logger.error("Some documents failed to index.")
else:
# In a minimal reproducible example, we won't log successes to stdout
pass
except Exception as e:
logger.error(f"Bulk indexing failed: {str(e)}")
def main():
# Minimal example documents
docs = [
{"id": "1", "name": "Alice", "updated_at": "2024-12-17T10:00:00Z"},
{"id": "2", "name": "Bob", "updated_at": "2024-12-17T10:05:00Z"},
]
client = create_opensearch_client(timeout=15, max_retries=3, pool_maxsize=5)
bulk_index_docs(client, docs, batch_size=1)
if __name__ == "__main__":
main()
</code></pre>
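<p>For reference, the kind of tweak I have been experimenting with on top of the code above (it assumes the opensearch-py client accepts a per-request <code>request_timeout</code>, which I have not confirmed):</p>
<pre><code>client = create_opensearch_client(timeout=60, max_retries=5, pool_maxsize=10)
# inside bulk_index_docs, override the read timeout per call:
response = opensearch_client.bulk(body=bulk_body, request_timeout=60)
</code></pre>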
|
<python><search><connection><opensearch>
|
2024-12-17 23:04:06
| 0
| 2,759
|
Cauder
|
79,289,546
| 14,122
|
Type hints lost when a decorator is wrapped as a classmethod?
|
<p>Consider the following code:</p>
<pre><code>from typing import Any, Callable, Coroutine
class Cache[**P, R]:
    @classmethod
    def decorate(cls, **params):
        def decorator(f: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]]:
            return f  # in the real world, we instantiate a Cache here
        return decorator

@Cache.decorate()
async def some_function(i: int) -> int:
    return i + 1

cached_function = Cache.decorate()(some_function)
</code></pre>
<hr />
<p>If I ask pyright the type of <code>Cache.decorate</code> <em>before</em> the <code>@classmethod</code> wrapper (inspecting the word <code>decorate</code> in the above code), it returns:</p>
<pre><code>(method) def decorate(
cls: type[Self@Cache[P@Cache, R@Cache]],
**params: Unknown
) -> ((f: ((**P@Cache) -> (Coroutine[Any, Any, R@Cache])) -> ((**P@Cache) -> Coroutine[Any, Any, R@Cache]))
</code></pre>
<p>That looks to me like it understands that <code>P</code> (the argument types) and <code>R</code> (the return types) are plumbed through correctly.</p>
<p>However, if I ask it to introspect <code>Cache.decorate</code> in the context where it's being used as a decorator, it returns:</p>
<pre><code>(method) def decorate(**params: Unknown) -> ((f: ((...) -> Coroutine[Any, Any, Unknown])) -> ((...) -> Coroutine[Any, Any, Unknown]))
</code></pre>
<p>...which is to say, the relationship between input types and output types has been entirely discarded!</p>
|
<python><python-typing><type-parameter><pyright><pep-695>
|
2024-12-17 22:40:41
| 2
| 299,045
|
Charles Duffy
|
79,289,405
| 16,988,223
|
Why can't my code detect any triangle, square or other shape using OpenCV?
|
<p>I'm trying to detect various shapes from a video source using OpenCV with Python; however, my code only detects "ghost circles", and I don't know why this behavior happens. I'm using <a href="https://drive.google.com/file/d/1seJRch_NwhYmInVIJIH712K8ul8-M0RM/view" rel="nofollow noreferrer">this video</a> in my code.</p>
<p>Here is my code, fully commented:</p>
<pre><code>import cv2
# Initialize counters for detected shapes
triangle_count = 0
quadrilateral_count = 0
pentagon_count = 0
hexagon_count = 0
circle_count = 0
# Horizontal reference line (middle of the frame)
line_y = 240 # Adjust according to the height of the video
# Read the video from file
video_path = 'video.mp4' # Video path
cap = cv2.VideoCapture(video_path)
# Check if the video is loaded correctly
if not cap.isOpened():
print("Error opening video.")
exit()
# Process the video frame by frame
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break # Exit when the video ends
# Create a copy of the original frame to use later
original = frame.copy()
# Convert the frame from BGR to HSV
hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Convert the HSV image to grayscale
gray_image = cv2.cvtColor(hsv_image, cv2.COLOR_BGR2GRAY)
# Apply Otsu thresholding to binarize the image
ret, otsu = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Apply the mask to the original image
image = cv2.bitwise_and(original, original, mask=otsu)
# Find contours in the binary image
contours, _ = cv2.findContours(otsu, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw the horizontal reference line
cv2.line(frame, (0, line_y), (frame.shape[1], line_y), (0, 255, 0), 2)
# Process each contour
for i, contour in enumerate(contours):
if i == 0: # Ignore the largest outer contour
continue
# Calculate the area of the contour
contour_area = cv2.contourArea(contour)
# Filter out small objects based on area
if contour_area < 300: # Adjust the minimum area value
continue
# Approximate the shape of the contour
epsilon = 0.01 * cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, epsilon, True)
# Calculate the center of the object (bounding box coordinates)
x, y, w, h = cv2.boundingRect(approx)
center_y = y + h // 2 # Y coordinate of the object's center
# Check if the object crosses the horizontal reference line
if line_y - 10 <= center_y <= line_y + 10:
# Classify the shape based on the number of vertices
if len(approx) == 3:
cv2.putText(frame, "Triangle", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
triangle_count += 1
elif len(approx) == 4:
cv2.putText(frame, "Quadrilateral", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
quadrilateral_count += 1
elif len(approx) == 5:
cv2.putText(frame, "Pentagon", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
pentagon_count += 1
elif len(approx) == 6:
cv2.putText(frame, "Hexagon", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
hexagon_count += 1
else:
cv2.putText(frame, "Circle", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 0), 2)
circle_count += 1
# Draw the detected contour
cv2.drawContours(frame, [approx], 0, (0, 0, 0), 2)
# Display the counters in the top left corner
cv2.putText(frame, f"Triangles: {triangle_count}", (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
cv2.putText(frame, f"Quadrilaterals: {quadrilateral_count}", (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
cv2.putText(frame, f"Pentagons: {pentagon_count}", (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.putText(frame, f"Hexagons: {hexagon_count}", (10, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
cv2.putText(frame, f"Circles: {circle_count}", (10, 100), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 0), 2)
# Show the processed frame
cv2.imshow("Shape Detection", frame)
# Exit with the 'q' key
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release resources
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p><a href="https://i.sstatic.net/TMXKX7HJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMXKX7HJ.png" alt="enter image description here" /></a></p>
<p>As we can see, when a shape crosses the horizontal line it should be classified. I would appreciate any help fixing this problem; the video linked above can be used to test my code, which only requires OpenCV and Python installed on your machine.</p>
<p><strong>UPDATE - SECOND APPROACH.</strong></p>
<p>It seems that the contour area for these shapes is > 5000.</p>
<p><a href="https://i.sstatic.net/6btsH6BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6btsH6BM.png" alt="enter image description here" /></a></p>
<p>code:</p>
<pre><code>import math
import numpy as np
import cv2
# Initialize the camera or video
cap = cv2.VideoCapture("video.mp4")
print("Press 'q' to exit")
# Function to calculate the angle between three points
def angle(pt1, pt2, pt0):
dx1 = pt1[0][0] - pt0[0][0]
dy1 = pt1[0][1] - pt0[0][1]
dx2 = pt2[0][0] - pt0[0][0]
dy2 = pt2[0][1] - pt0[0][1]
return float((dx1 * dx2 + dy1 * dy2)) / math.sqrt(float((dx1 * dx1 + dy1 * dy1)) * (dx2 * dx2 + dy2 * dy2) + 1e-10)
# Initialize a dictionary to count the detected shapes
shape_counts = {
'TRI': 0,
'RECT': 0,
'PENTA': 0,
'HEXA': 0,
'CIRC': 0
}
# Main loop
while(cap.isOpened()):
# Capture frame by frame
ret, frame = cap.read()
if ret:
# Convert to grayscale
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Apply the Canny detector
canny = cv2.Canny(gray, 80, 240, 3)
# Find contours
contours, hierarchy = cv2.findContours(canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Draw a horizontal line in the center of the image
line_y = int(frame.shape[0] / 2)
cv2.line(frame, (0, line_y), (frame.shape[1], line_y), (0, 255, 0), 2)
# Shape detection counter
for i in range(len(contours)):
# Approximate the contour with precision proportional to the perimeter of the contour
approx = cv2.approxPolyDP(contours[i], cv2.arcLength(contours[i], True) * 0.02, True)
#print(cv2.contourArea(contours[i]))
# Filter small or non-convex objects
if abs(cv2.contourArea(contours[i])) < 5000 or not cv2.isContourConvex(approx):
continue
# Classify the shapes based on the number of vertices
x, y, w, h = cv2.boundingRect(contours[i])
if y + h / 2 > line_y: # Only classify if the shape crosses the line
if len(approx) == 3:
# Triangle
shape_counts['TRI'] += 1
cv2.putText(frame, 'TRI', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2, cv2.LINE_AA)
elif 4 <= len(approx) <= 6:
# Polygon classification
vtc = len(approx)
cos = []
# Calculate the angles between the vertices using the angle() function
for j in range(2, vtc + 1):
cos.append(angle(approx[j % vtc], approx[j - 2], approx[j - 1]))
# Sort the angles and determine the type of figure
cos.sort()
mincos = cos[0]
maxcos = cos[-1]
# Classify based on the number of vertices
if vtc == 4:
shape_counts['RECT'] += 1
cv2.putText(frame, 'RECT', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2, cv2.LINE_AA)
elif vtc == 5:
shape_counts['PENTA'] += 1
cv2.putText(frame, 'PENTA', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2, cv2.LINE_AA)
elif vtc == 6:
shape_counts['HEXA'] += 1
cv2.putText(frame, 'HEXA', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2, cv2.LINE_AA)
else:
# Detect and label circle
area = cv2.contourArea(contours[i])
radius = w / 2
if abs(1 - (float(w) / h)) <= 2 and abs(1 - (area / (math.pi * radius * radius))) <= 0.2:
shape_counts['CIRC'] += 1
cv2.putText(frame, 'CIRC', (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2, cv2.LINE_AA)
# Display the number of each detected shape at the top of the image
offset_y = 30
for shape, count in shape_counts.items():
cv2.putText(frame, f'{shape}: {count}', (10, offset_y), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2, cv2.LINE_AA)
offset_y += 30
# Display the resulting frame
cv2.imshow('Frame', frame)
cv2.imshow('Canny', canny)
# Exit if 'q' is pressed
if cv2.waitKey(1) == ord('q'):
break
# Once finished, release the capture
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>However, it is detecting many rectangles and pentagons, while my video only has triangles and rectangles/squares. It seems that I need to classify only shapes with area greater than 5000 and try to close their vertices, because sometimes the contour is not complete.</p>
<p>Thanks in advance.</p>
|
<python><opencv><computer-vision>
|
2024-12-17 21:26:20
| 1
| 429
|
FreddicMatters
|
79,289,373
| 7,124,155
|
Are there alternatives to a for loop when parsing free text in Python/PySpark?
|
<p>I have to read in data in Databricks with Python/PySpark, but the format is not the usual CSV or JSON, so I iterate over it with a for loop. As a result it's very slow.</p>
<p>The data looks like this, for millions of rows. It's not the same format each row, although there are certain common formats:</p>
<p>HEADER0123 a bunch of spaces ACCTNUM999787666 more numbers ABC2XYZ some text</p>
<p>So to parse I read from s3 as text and get the data by character position:</p>
<pre><code>raw_text = (spark.read
            .format("text")
            .option("mode", "PERMISSIVE")
            .option("header", "false")
            .option("inferSchema", "false")
            .load(my_path))

my_list = []
input = raw_text.collect()
for row in input:
    line = row[0].strip()
    header = line[0:6]
    acct = line[6:9]
    my_list.append((header, acct))
    # etc.
</code></pre>
<p>Then later I create dataframes:</p>
<pre><code>df = spark.createDataFrame(my_list, "header string, acct int")
</code></pre>
<p>Even though I have experience with Spark dataframes this is the only way I can think of due to the unusual format.</p>
<p>Is there a way to leverage Spark to process this kind of data? Or a way that doesn't require a for loop?</p>
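<p>To make the intent concrete, this is the kind of column-based parsing I was hoping exists (a sketch with <code>substring</code>; the positions mirror the slicing in my loop above):</p>
<pre><code>from pyspark.sql import functions as F

df = (raw_text
      .select(F.trim(F.col("value")).alias("line"))
      .select(
          F.substring("line", 1, 6).alias("header"),            # chars 1-6
          F.substring("line", 7, 3).cast("int").alias("acct"),  # chars 7-9
      ))
</code></pre>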
|
<python><pyspark><databricks>
|
2024-12-17 21:12:12
| 1
| 1,329
|
Chuck
|
79,289,215
| 1,056,174
|
Hypothesis: test defensive programming
|
<p>I'm writing tests with Hypothesis to test functions with restricted range. Consider the following toy example:</p>
<pre><code>import math
from hypothesis import assume, example, given, strategies as st

def inc(value: float) -> float:
    assert not math.isnan(value)
    return value + 1

@given(st.floats())
def test_inc(value: float):
    assume(not math.isnan(value))
    if value == math.inf or value == -math.inf:
        assert inc(value) == value
    else:
        assert inc(value) > value
</code></pre>
<p>Here I'm testing the behaviour for finite and infinite inputs, but not NaN. I would like to test that out-of-range values, such as NaN, correctly throw an <code>AssertionError</code>.</p>
<p>Is there a pattern built into Hypothesis where I can test this? Ideally I'd like an approach where I don't compromise the coverage of within-range values.</p>
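<p>The naive version I can think of is a separate example-based test (a sketch; it loses the property-based coverage I would like to keep):</p>
<pre><code>import pytest

def test_inc_rejects_nan():
    with pytest.raises(AssertionError):
        inc(math.nan)
</code></pre>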
|
<python><python-hypothesis><defensive-programming><property-based-testing>
|
2024-12-17 19:53:27
| 2
| 6,651
|
Carl Patenaude Poulin
|
79,289,117
| 620,679
|
Pandoc filter to treat (Python) script output as Markdown
|
<p>I need a <a href="https://pandoc.org/" rel="nofollow noreferrer">Pandoc</a> filter that takes a Python code block, runs it, and then treats the script's output as Markdown. Something like:</p>
<pre><code># Heading
```{.pyscript}
from datetime import datetime
print(f"The current date and time are {datetime.now()}.")
```
</code></pre>
<p>and then running pandoc like this:</p>
<pre><code>$ pandoc --filter pyscript file.md -o file.html
</code></pre>
<p>(presuming that "pyscript" is the name of the filter) to turn the above into this:</p>
<pre><code># Heading
The current date and time are 2024-12-17 14:11:28.684357
</code></pre>
<p>which pandoc will then format appropriately (as HTML in this particular case). Does any such pandoc filter exist?</p>
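<p>For what it's worth, this is a sketch of what I imagine such a filter would look like if I had to write it myself (untested, using the panflute helper library):</p>
<pre><code>#!/usr/bin/env python
"""pyscript: run {.pyscript} code blocks and parse their stdout as Markdown."""
import subprocess
import panflute as pf

def action(elem, doc):
    if isinstance(elem, pf.CodeBlock) and "pyscript" in elem.classes:
        out = subprocess.run(
            ["python", "-c", elem.text], capture_output=True, text=True
        ).stdout
        # Re-parse the script output as Markdown and splice it into the document
        return pf.convert_text(out)

if __name__ == "__main__":
    pf.run_filter(action)
</code></pre>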
|
<python><markdown><pandoc>
|
2024-12-17 19:15:59
| 3
| 4,041
|
Scott Deerwester
|
79,288,904
| 4,886,441
|
Unable to match string in SQLite database
|
<p>I am trying to search for a string in the entire database using Python.</p>
<p>The specific code is</p>
<pre><code>q = 'dopamine'
curr = cur.execute("""SELECT * FROM {} WHERE {} MATCH ? """.format(table_name, table_name), (q,))
</code></pre>
<p>When I try running the same command by loading the database on the command line, I use</p>
<pre><code>SELECT * FROM umls WHERE umls MATCH 'dopamine';
</code></pre>
<p>In both cases I get the error</p>
<pre><code>Error: no such column: umls
</code></pre>
<p>I get the same error for any table in the database. What am I doing wrong?</p>
|
<python><sqlite>
|
2024-12-17 17:47:34
| 2
| 377
|
Misha
|
79,288,829
| 8,874,732
|
I have a fastapi server running on ec2 and frontend deployed over amplify. I get CORS error for the first couple of times I call the API and then not
|
<p>I have a FastAPI server running on EC2 and a React frontend deployed on Amplify. The backend API works fine when I call it through Postman, but gives a CORS error when used from the frontend. The error goes away if I make the request 2 or 3 times in a row.</p>
<p>I did some research and found it has to do with caching the CORS configuration. My CORS config is shown in the code below. How do I fix this so it works from the 1st API call?</p>
<p>In the picture I have made 3 calls, and on 3rd call it works fine.</p>
<pre><code>import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from src.views.index import router as root_router
app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=os.getenv("trusted_origins").split(","),
    allow_credentials=True,
    allow_methods=["GET", "POST", "OPTIONS"],
    allow_headers=["*"]
)

app.include_router(root_router)

@app.get("/api/v0")
def read_root():
    return {"success": {"message": "healthy", "code": 200}}
</code></pre>
<pre><code>trusted_origins=http://localhost:3002,https://api.dwellner.ca
</code></pre>
<p><a href="https://i.sstatic.net/e82Kbuzv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e82Kbuzv.png" alt="enter image description here" /></a></p>
|
<python><amazon-ec2><cors><fastapi>
|
2024-12-17 17:18:49
| 0
| 717
|
Dholu
|
79,288,828
| 1,660,121
|
Combine Python f-string with printf style formatting?
|
<p>I need to format a command line where some parameters come from simple variables, and others are the result of a longer expression. Python f-strings work well for the variables, and I'm using a printf-style <code>%s</code> and providing the long expression outside the string because it would just clutter the command template too much if placed inline:</p>
<pre class="lang-py prettyprint-override"><code>run(f"""zstd -dc {diff_file} |
git apply -p2 {exclude_args} --directory='%s'""" % (os.getcwd() + "/output"))
</code></pre>
<p>I didn't store the directory parameter in a named variable because it only gets used once as opposed to <code>diff_file</code> and <code>exclude_args</code>.</p>
<p>Is there a cleaner way to avoid putting <code>os.getcwd() + "/output"</code> directly in the f-string, and also avoid mixing old printf <code>%s</code> syntax and new f-strings ?</p>
<p>Are there downsides to doing it like above other than it being a bit confusing to someone who reads the code ?</p>
<p>Edit: to clarify, my question is how to put placeholders in f-strings without declaring additional named variables, not how to inline calls to <code>getcwd()</code> in f-strings.</p>
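<p>For comparison, this is the <code>str.format</code>-only version I am weighing against the hybrid above (a sketch):</p>
<pre class="lang-py prettyprint-override"><code>run("""zstd -dc {diff_file} |
    git apply -p2 {exclude_args} --directory='{out_dir}'""".format(
    diff_file=diff_file,
    exclude_args=exclude_args,
    out_dir=os.getcwd() + "/output",
))
</code></pre>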
|
<python><formatting><f-string>
|
2024-12-17 17:18:48
| 3
| 956
|
patraulea
|
79,288,705
| 3,003,072
|
Speed up string assignments to list in Cython
|
<p>I used <code>re</code> to split a long string into groups and put the obtained substrings into an output list based on conditions specified by a <code>dict</code>. I found a significant speed improvement using Cython compared to Python, but would expect a further improvement.
A simple code example in <code>temp_test.pyx</code> is:</p>
<pre><code>cimport cython
import re
@cython.wraparound(False)
@cython.boundscheck(False)
def cleave(str long_str, str rules, dict term_char):
"""
Cleaves a long string into substrings.
"""
cdef:
object compiler = re.compile(rules)
Py_ssize_t i, ns
Py_ssize_t nk = 0
Py_ssize_t nq = <ssize_t> len(long_str)
int p, t
list split_str = compiler.split(long_str)
list substrs = nq * [None]
str s
ns = <ssize_t> len(split_str)
for i in range(ns):
if split_str[i] is not None:
s = split_str[i]
p = len(s)
if p == 1 and s in term_char:
t = term_char[s]
if t == 0:
substrs[nk] = split_str[i - 1]
else:
substrs[nk] = split_str[i] + split_str[i + 1]
nk += 1
return substrs[:nk]
</code></pre>
<p>with the test code:</p>
<pre><code>from temp_test import cleave
from typing import Dict
def test_cleave():
    long_str: str = r"ABCEDFRFR"
    rules: str = r"([BD])"
    term_char: Dict[str, int] = {"B": 0, "D": 1}
    subs = cleave(long_str, rules, term_char)
    print(subs)
</code></pre>
<p>After using <code>cython -a</code> to annotate <code>temp_test.pyx</code>, I found nearly all lines were highlighted. Since <code>cleave</code> will be called a large number of times, and there are lots of assignments of strings from one list (<em>i.e.</em>, <code>split_str</code>) to another (<em>i.e.</em>, <code>substrs</code>) in the function, I found there always exist checks like:</p>
<pre><code>if (unlikely(__pyx_v_split_str == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(0, 24, __pyx_L1_error)
}
__pyx_t_5 = __Pyx_PyList_GET_SIZE(__pyx_v_split_str); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(0, 24, __pyx_L1_error)
__pyx_v_ns = ((Py_ssize_t)__pyx_t_5);
</code></pre>
<ol>
<li>I think disabling those checks could improve performance. How can I achieve this?</li>
<li>I found that <code>@cython.wraparound(False)</code> was highlighted too, with lots of checks (too long to show here). Is this because I check for the key in the dict, <em>i.e.</em>, <code>s in term_char</code>? Is there a better alternative?</li>
</ol>
<p>I very often use <code>memoryview</code>, <code>malloc</code>, <code>calloc</code>, <em>etc.</em> for numeric operations and calculations, but I am not familiar with string operations in Cython; any suggestions are much appreciated.</p>
|
<python><cython>
|
2024-12-17 16:41:55
| 1
| 616
|
Elkan
|
79,288,622
| 1,971,089
|
Django - VS Code - custom model manager method typing
|
<p>I'm trying custom model managers to add annotations to querysets.<br />
My problem, which started as a little annoyance but I now realize can be an actual problem, is that VS Code does not recognise the methods defined in the custom model manager/queryset.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from rest_framework.generics import ListAPIView
# models.py
class CarQuerySet(models.QuerySet):
    def wiht_wheels(self):  # NOTE: intentional typo
        pass  # assume this does some annotation

class Car(models.Model):
    objects = CarQuerySet.as_manager()

# views.py
class ListCarsView(ListAPIView):
    def get_queryset(self):
        return Car.objects.wiht_weels()  # <--- white instead of yellow
</code></pre>
<p>At first, I was just annoyed by the fact that <code>wiht_weels</code> is printed in white as opposed to the usual yellow for methods/functions.</p>
<p>Then I was more annoyed because this means VS Code will not give me any hints as to what args the method expects or what it returns.</p>
<p>Finally, I accidentally made a typo on a name of one of these custom model methods, I hit refactor->rename, but it only renamed it in place, not on the places where it is used (views), probably because VS Code doesn't understand that method is being used anywhere.</p>
<p>Is there a solution to this?</p>
|
<python><django><visual-studio-code><django-models>
|
2024-12-17 16:16:25
| 3
| 474
|
Marco Castanho
|
79,288,467
| 4,245,867
|
Python selenium: Impossible to close a frame using xpath or class_name
|
<p>I'm trying to close a frame on <a href="https://www.bvc.com.co/variable-income-local-market/cemargos?tab=issuer-information" rel="nofollow noreferrer">this page</a>.</p>
<p>What I want is to click in here:</p>
<p><a href="https://i.sstatic.net/LJCbiKdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LJCbiKdr.png" alt="enter image description here" /></a></p>
<p>It seems to be easy, but so far the following code (which should work) has failed:</p>
<pre><code>import selenium.webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = selenium.webdriver.Chrome()
driver.maximize_window()
driver.get('https://www.bvc.com.co/variable-income-local-market/cemargos?tab=issuer-information')
#X(close) bvc frame
xpath = '//*[@id="__next"]/div/div[1]/div/div[1]/div/div/div/div/div[3]/div/div/div/div[3]/div[2]/span'
class_name = 'sc-843139d2-14 iVPGqd'
# Trying with XPath
if 1:
try:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, xpath))).click()
except:
driver.find_element(By.XPATH, xpath).click()
# Trying with class_name
if 1:
try:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CLASS_NAME, class_name))).click()
except:
driver.find_element(By.CLASS_NAME, class_name).click()
</code></pre>
<p>The output using XPath:</p>
<pre><code>raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
#0 0x64a95375031a <unknown>
...
</code></pre>
<p>The output using class_name:</p>
<pre><code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".sc-843139d2-14 iVPGqd"}
</code></pre>
|
<python><html><selenium-webdriver>
|
2024-12-17 15:32:47
| 3
| 615
|
Ivan Castro
|
79,288,430
| 4,050,510
|
Install python package with pip, respecting current installation
|
<p>I want to use a singularity container with jax preinstalled e.g. based on <a href="https://catalog.ngc.nvidia.com/orgs/nvidia/containers/jax" rel="nofollow noreferrer">this image</a>. On top of that, I want to install PyMC with numpyro, to get NUTS sampling with GPU acceleration.
My attempt is to use the following def file:</p>
<pre><code>Bootstrap: docker
From: nvcr.io/nvidia/jax:23.04-py3
%post
pip install pymc numpyro
%test
python -c "import jax;print(jax.devices())"
</code></pre>
<p>The test fails, since the post step uninstalls the local jax packages and installs a new version from PyPI. That is not acceptable, since the docker image comes with some very specific locally compiled jaxlib versions and the like.</p>
<p>Can I give some instructions to pip to "install these two packages, and select some version that is compatible with what is already installed"?</p>
|
<python><pip><jax><singularity-container>
|
2024-12-17 15:21:13
| 1
| 4,934
|
LudvigH
|
79,288,333
| 13,280,838
|
How to get input from user for ad hoc manual runs in Snowpark
|
<p>We have a small piece of code using Snowpark with Python, to be run from inside Snowflake. We need to get input from the user for ad hoc manual runs.</p>
<p>For example, consider code that takes <code>start_date</code> and <code>end_date</code> as input from the user. In normal scheduled runs they would be empty and would be calculated in the code: <code>start_date</code> will be <code>current_date - 2</code> and <code>end_date</code> will be <code>current_date - 1</code>. If we need to make an ad hoc run for a specific date range, <code>start_date</code> and <code>end_date</code> would be the values given by the user, for example <code>start_date = 2024-12-01</code> and <code>end_date = 2024-12-02</code>.</p>
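<p>To make the intended logic concrete, here is a minimal sketch of the date handling I have in mind (the variable names are just for illustration):</p>
<pre><code>from datetime import date, timedelta

# values supplied by the user for an ad hoc run; empty/None for scheduled runs
user_start_date = None
user_end_date = None

# fall back to the computed defaults when the user gives nothing
start_date = user_start_date or (date.today() - timedelta(days=2))
end_date = user_end_date or (date.today() - timedelta(days=1))
</code></pre>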
<p>I have tried <code>input()</code>, but the code seems to run indefinitely.
In Databricks I usually use widgets to get the input - how do I do it here?</p>
|
<python><snowflake-cloud-data-platform>
|
2024-12-17 14:48:07
| 0
| 669
|
rainingdistros
|
79,288,237
| 1,145,808
|
Linking GMenuModel to actions for context menus in Python Gtk3
|
<p>I've built an application using Gtk3 and Gtk.Builder, using GMenuModel for the menus. So far so good. Now I'd like to add context (i.e. right-click) menus.</p>
<p>The menus themselves appear, but as I can't seem to find the right incantation to link the actions, they are always ghosted. Here is my toy code example:</p>
<pre><code>import sys
import os
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import GLib, Gio, Gtk
_curr_dir = os.path.split(__file__)[0]
# This would typically be its own file
MENU_XML = """
<?xml version="1.0" encoding="UTF-8"?>
<interface>
<menu id="context_menu">
<section>
<item>
<attribute name="label" translatable="yes">Option 1</attribute>
<attribute name="action">app.option1</attribute>
</item>
<item>
<attribute name="label" translatable="yes">Option 2</attribute>
<attribute name="action">app.option2</attribute>
</item>
</section>
</menu>
</interface>
"""
class AppWindow(Gtk.ApplicationWindow):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.set_default_size(400, 300)
self.connect("destroy", Gtk.main_quit)
# Load the UI from the Glade file
builder = Gtk.Builder()
builder = Gtk.Builder.new_from_string(MENU_XML, -1)
# Get the context menu defined in the XML
self.context_menu = Gtk.Menu.new_from_model(builder.get_object("context_menu"))
# Add a simple label
self.label = Gtk.Label(label="Right-click to open context menu")
self.add(self.label)
self.label.show()
# Connect the button press event (right-click)
self.connect("button-press-event", self.on_right_click)
def on_right_click(self, widget, event):
# Check if it's the right-click (button 3)
if event.button == 3:
# Show the context menu
# self.context_menu.popup(None, None, None, None, event.button, event.time)
self.context_menu.show_all()
self.context_menu.popup_at_pointer(event)
class Application(Gtk.Application):
def __init__(self, *args, **kwargs):
super().__init__(
*args,
application_id="org.example.myapp",
**kwargs
)
self.window = None
def do_startup(self):
Gtk.Application.do_startup(self)
action = Gio.SimpleAction.new("option1", None)
action.connect("activate", self.on_option1_activated)
self.add_action(action)
action = Gio.SimpleAction.new("option2", None)
action.connect("activate", self.on_option2_activated)
self.add_action(action)
builder = Gtk.Builder.new_from_string(MENU_XML, -1)
self.set_app_menu(builder.get_object("app-menu"))
def do_activate(self):
# We only allow a single window and raise any existing ones
if not self.window:
# Windows are associated with the application
# when the last one is closed the application shuts down
self.window = AppWindow(application=self, title="Main Window")
self.window.present()
def on_quit(self, action, param):
self.quit()
def on_option1_activated(self, widget):
print("Option 1 selected")
def on_option2_activated(self, widget):
print("Option 2 selected")
if __name__ == "__main__":
app = Application()
app.run()
</code></pre>
<p>How can I link up the actions so that I can use the menu?</p>
|
<python><gtk3>
|
2024-12-17 14:20:26
| 2
| 829
|
DobbyTheElf
|
79,288,127
| 1,834,164
|
How to Set Dask Dashboard Address with SLURMRunner (Jobqueue) and Access It via SSH Port Forwarding?
|
<p>I am trying to run a <strong>Dask Scheduler and Workers</strong> on a remote cluster using <code>SLURMRunner</code> from <code>dask-jobqueue</code>. I want to bind the Dask <strong>dashboard</strong> to <code>0.0.0.0</code> (so it’s accessible via port forwarding) and access it from my local machine.</p>
<p>However, the dashboard always binds to the default IP and port (<code>10.x.x.x:8787</code>), and none of my attempts to configure it seem to work. Here’s what I’ve tried so far:</p>
<hr />
<h3>1. <strong>Environment Variable Approach</strong></h3>
<p>Before running the script, I set:</p>
<pre class="lang-bash prettyprint-override"><code>export DASK_DISTRIBUTED__SCHEDULER__DASHBOARD__ADDRESS="0.0.0.0:8789"
python my_script.py
</code></pre>
<p>But when I check the logs, the dashboard is still bound to the default IP and port (<code>8787</code>).</p>
<hr />
<h3>2. <strong>Programmatically Setting the Dashboard Address</strong></h3>
<p>I tried setting the dashboard address using <code>dask.config.set()</code> before initializing the <code>SLURMRunner</code>:</p>
<pre class="lang-py prettyprint-override"><code>import dask
from dask_jobqueue import SLURMRunner
from dask.distributed import Client
dask.config.set({"distributed.scheduler.dashboard.address": "0.0.0.0:8789"})
runner = SLURMRunner()
client = Client(runner)
print(client.dashboard_link)
</code></pre>
<p>Despite setting this, the logs still show the dashboard binding to <code>10.x.x.x:8787</code>.</p>
<hr />
<h3>3. <strong>Using a Configuration File</strong></h3>
<p>I created a <code>dask.yaml</code> file at <code>~/.config/dask/dask.yaml</code> with the following content:</p>
<pre class="lang-yaml prettyprint-override"><code>distributed:
scheduler:
dashboard:
address: "0.0.0.0:8789"
</code></pre>
<p>I checked that this configuration file is being picked up, but the scheduler still binds to <code>10.x.x.x:8787</code>.</p>
<hr />
<h3>Question:</h3>
<p>How can I <strong>force the Dask Scheduler</strong> launched by <code>SLURMRunner</code> to bind the dashboard to <code>0.0.0.0:8789</code> so that I can access it via SSH port forwarding?</p>
<p>Is there another way to configure or override this behavior in <code>SLURMRunner</code>?</p>
<hr />
<h3>Notes:</h3>
<ul>
<li><code>Python</code> 3.10.10, <code>dask</code> 2024.9.1, <code>dask-jobqueue</code> 0.9.0</li>
<li>SLURMRunner is part of a larger script, so I’d prefer to stick with it rather than switching to manual <code>dask-scheduler</code> launches.</li>
</ul>
<p>Any help or suggestions would be greatly appreciated!</p>
|
<python><dask><slurm><dask-distributed><dask-jobqueue>
|
2024-12-17 13:51:08
| 1
| 397
|
user1834164
|
79,287,991
| 13,634,560
|
Why does scipy.sparse_matrix not accept values directly from a dataframe?
|
<p>I create a scipy sparse matrix that works fine</p>
<pre><code>X = csr_matrix(
(df["rating"], (item_idx, user_idx)),
shape=(M, N)
)
</code></pre>
<p>But to simplify, noting that roughly</p>
<pre><code>item_idx ~= df[item_id]
</code></pre>
<p>holds, I do the following:</p>
<pre><code>csr_matrix(
(df["points"],
(df["user_id"], df["item_id"])])),
shape=(N, M)
)
</code></pre>
<p>which returns an error about index and size of sparse matrix. Note that this error occurs even when the "id" is mapped to exactly the number of unique names.</p>
<pre><code>ValueError: axis 0 index 99 exceeds matrix dimension 97
</code></pre>
<p>Can anyone help me understand why an index error would be returned here, when only the order of the values changes? The range is consistent. Does it have to do with sparsity?</p>
<p>MRE:</p>
<pre><code>df = pd.DataFrame({
"user_id": np.random.randint(0, 75, 300),
"item_id": np.random.randint(0, 100, 300),
"rating": np.random.randint(0, 10, 300)
})
def mappings(df, user_id, item_id):
user_id_idx_mapping = dict(zip( df[user_id].unique(), range(df[user_id].nunique()) ))
user_idx_id_mapping = dict(zip( range(df[user_id].nunique()), df[user_id].unique() ))
item_id_idx_mapping = dict(zip( df[item_id].unique(), range(df[item_id].nunique()) ))
item_idx_id_mapping = dict(zip( range(df[item_id].nunique()), df[item_id].unique() ))
return user_id_idx_mapping, user_idx_id_mapping, item_id_idx_mapping, item_idx_id_mapping
user_id_idx_mapping, user_idx_id_mapping, item_id_idx_mapping, item_idx_id_mapping = mappings(df, "user_id", "item_id")
N = df["user_id"].nunique()
M = df["item_id"].nunique()
user_idx = [user_id_idx_mapping[int(j)] for j in df["user_id"]]
item_idx = [item_id_idx_mapping[int(l)] for l in df["item_id"]]
true = dict(zip([int(k) for k in list(df["user_id"].values)], user_idx))
X_works = csr_matrix(
(df["rating"], (item_idx, user_idx)),
shape=(M, N)
)
X_doesnt = csr_matrix(
(df["rating"], (df["item_id"], df["user_id"])),
shape=(M, N)
)
print(true)
</code></pre>
|
<python><scipy>
|
2024-12-17 13:06:25
| 2
| 341
|
plotmaster473
|
79,287,821
| 14,551,577
|
Python CosmosDB upsert_item deletes document
|
<p>I am going to update a document using <code>upsert_item</code> function of CosmosDB Python SDK.</p>
<p>Here is a script:</p>
<pre><code>from dotenv import load_dotenv
from azure.cosmos import CosmosClient
import os
import uuid
def update():
load_dotenv()
# Initialize the Cosmos DB client
endpoint = os.getenv("COSMOSDB_ENDPOINT")
key = os.getenv("COSMOSDB_KEY")
client = CosmosClient(endpoint, key)
# Specify your database and container (collection) names
database_name = os.getenv("COSMOSDB_DB")
container_name = os.getenv("COSMOSDB_CONTAINER")
# Retrieve an item by its ID
container = client.get_database_client(database_name).get_container_client(container_name)
id = str(uuid.uuid4())
url = "https://google.com/"
container.create_item(
{
"id": id,
"status": "pending", # => started, analyzing, finished
"url": url,
"categories": ["7149b375-8cb2-4180-ae03-27fd0da409d0"],
"doctype": "url"
}
)
query_text = f"SELECT * FROM c WHERE c.id='{id}'"
query_items_response = container.query_items(query=query_text, enable_cross_partition_query=True)
data = list(query_items_response)
print(len(data)) # returns 1
for item in data:
print(f"Updating {item['url']}")
item["categories"] = []
item["doctype"] = "pdfDocument"
container.upsert_item(item)
updated_query_items_response = container.query_items(query=query_text, enable_cross_partition_query=True)
updated_data = list(updated_query_items_response)
print(updated_data)
print(len(updated_data)) # returns 0
# Confirm created data using CosmosDB data explorer.
if __name__ == "__main__":
update()
</code></pre>
<p>After creating a document, I updated it using the <code>upsert_item</code> method. Sometimes, when I query the data by its ID, I can see the document.</p>
<p>However, in reality, it has been removed. Even after waiting for some time, I am unable to retrieve the document, both through the code and in the Cosmos DB Data Explorer.</p>
<p>And I found a strange problem.
If I change <code>item["doctype"] = "pdfDocument"</code> to <code>item["doctype"] = "site"</code> or others, it works as expected.</p>
<p>It is quite strange.</p>
|
<python><azure-cosmosdb><azure-cosmosdb-sqlapi><upsert>
|
2024-12-17 12:16:34
| 1
| 644
|
bcExpt1123
|
79,287,799
| 9,658,774
|
How to correctly identify entity types for tokens using spaCy using python?
|
<p>I'm using spaCy to extract and identify entity types (like ORG, GPE, DATE, etc.) from a text description. However, I am noticing some incorrect results, and I'm unsure how to fix this.</p>
<p>Here is the code I am using:</p>
<pre><code>import spacy
nlp = spacy.load("en_core_web_sm")
def getPayeeName(description):
description = description.replace("-", " ").replace("/", " ").strip()
doc = nlp(description)
for token in doc:
print(f"Token: {token.text}, Entity: {token.ent_type_ if token.ent_type_ else 'None'}")
# Example input
description = "UPI DR 400874707203 BENGALORE 08 JAN 2024 14:38:56 MEDICAL LTD HDFC 50200"
getPayeeName(description)
</code></pre>
<pre><code>Token: UPI, Entity: ORG
Token: DR, Entity: ORG
Token: 400874707203, Entity: None
Token: BENGALORE, Entity: None
Token: 08, Entity: DATE
Token: JAN, Entity: DATE
Token: 2024, Entity: DATE
Token: 14:38:56, Entity: None
Token: MEDICAL, Entity: ORG
Token: LTD, Entity: ORG
Token: HDFC, Entity: ORG
Token: 50200, Entity: ORG
</code></pre>
<ul>
<li><p>50200 is identified as ORG, but it is just a number.</p>
</li>
<li><p>BENGALORE is a city, but it is not recognized as a GPE or location
(returns None).</p>
</li>
<li><p>UPI and DR are acronyms/abbreviations, but they are incorrectly
identified as ORG.</p>
</li>
</ul>
<p>I want the entity recognition to be more accurate and reliable.
How can I fix these issues? Are there additional spaCy configurations, custom rules, or pre-trained models I should use to improve the entity recognition?</p>
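<p>For context, one thing I considered is adding rule-based patterns with spaCy's <code>EntityRuler</code> before the statistical NER component, roughly as sketched below - but I am not sure whether hand-written patterns are the right approach here or whether a different model/configuration would be better:</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")

# rule-based entity ruler that runs before the statistical NER,
# so that known tokens get a fixed label (patterns are just examples)
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "GPE", "pattern": "BENGALORE"},
    {"label": "ORG", "pattern": "HDFC"},
])

doc = nlp("UPI DR 400874707203 BENGALORE 08 JAN 2024 MEDICAL LTD HDFC 50200")
for ent in doc.ents:
    print(ent.text, ent.label_)
</code></pre>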
<p>Note: I tried ChatGPT as well, but still this issue is not solved.</p>
|
<python><machine-learning><nlp><spacy>
|
2024-12-17 12:09:49
| 1
| 921
|
PrakashT
|
79,287,757
| 243,031
|
mock motor AsyncIOMotorClient with uuid settings
|
<p>I have <code>AsyncIOMotorClient</code> code as below.</p>
<pre><code>def get_client():
"""
Get Client.
Get the client for the database connection.
"""
return motor.motor_asyncio.AsyncIOMotorClient(
os.getenv(
"DB_CONN_STR",
"mongodb://user:pass@127.0.0.1:27017"
),
uuidRepresentation='standard',
)
</code></pre>
<p>In this, I am setting <code>uuidRepresentation='standard'</code> to make sure UUIDs are parsed properly.</p>
<p>I use <a href="https://pypi.org/project/mongomock-motor/" rel="nofollow noreferrer">mongomock-motor</a> and mock the client as below.</p>
<pre><code>from mongomock_motor import AsyncMongoMockClient
class MockSession():
client = AsyncMongoMockClient(
uuidRepresentation='standard',
)
def override_get_session():
return MockSession()
app.dependency_overrides[get_session] = override_get_session
</code></pre>
<p>When I run a test case that needs a UUID, it fails with the error below.</p>
<pre><code>app_client = <starlette.testclient.TestClient object at 0x10344a9d0>
def test_valid_payload(app_client):
payload = {
"country": "ARE",
"email": "test@test.com",
"first_name": "TEst",
"last_name": "Surname",
"password": "PAssword@123",
"phone": "+911234567890"
}
> response = app_client.post("/doctor/signup", json=payload)
tests/integration/doctor/test_post_signup.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python3.11/site-packages/starlette/testclient.py:597: in post
return super().post(
venv/lib/python3.11/site-packages/httpx/_client.py:1144: in post
return self.request(
venv/lib/python3.11/site-packages/starlette/testclient.py:488: in request
return super().request(
venv/lib/python3.11/site-packages/httpx/_client.py:825: in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
venv/lib/python3.11/site-packages/httpx/_client.py:914: in send
response = self._send_handling_auth(
venv/lib/python3.11/site-packages/httpx/_client.py:942: in _send_handling_auth
response = self._send_handling_redirects(
venv/lib/python3.11/site-packages/httpx/_client.py:979: in _send_handling_redirects
response = self._send_single_request(request)
venv/lib/python3.11/site-packages/httpx/_client.py:1014: in _send_single_request
response = transport.handle_request(request)
venv/lib/python3.11/site-packages/starlette/testclient.py:381: in handle_request
raise exc
venv/lib/python3.11/site-packages/starlette/testclient.py:378: in handle_request
portal.call(self.app, scope, receive, send)
venv/lib/python3.11/site-packages/anyio/from_thread.py:290: in call
return cast(T_Retval, self.start_task_soon(func, *args).result())
/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py:456: in result
return self.__get_result()
/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py:401: in __get_result
raise self._exception
venv/lib/python3.11/site-packages/anyio/from_thread.py:221: in _call_func
retval = await retval_or_awaitable
venv/lib/python3.11/site-packages/fastapi/applications.py:1054: in __call__
await super().__call__(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/applications.py:113: in __call__
await self.middleware_stack(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/middleware/errors.py:187: in __call__
raise exc
venv/lib/python3.11/site-packages/starlette/middleware/errors.py:165: in __call__
await self.app(scope, receive, _send)
venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py:62: in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/_exception_handler.py:62: in wrapped_app
raise exc
venv/lib/python3.11/site-packages/starlette/_exception_handler.py:51: in wrapped_app
await app(scope, receive, sender)
venv/lib/python3.11/site-packages/starlette/routing.py:715: in __call__
await self.middleware_stack(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/routing.py:735: in app
await route.handle(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/routing.py:288: in handle
await self.app(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/routing.py:76: in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
venv/lib/python3.11/site-packages/starlette/_exception_handler.py:62: in wrapped_app
raise exc
venv/lib/python3.11/site-packages/starlette/_exception_handler.py:51: in wrapped_app
await app(scope, receive, sender)
venv/lib/python3.11/site-packages/starlette/routing.py:73: in app
response = await f(request)
venv/lib/python3.11/site-packages/fastapi/routing.py:301: in app
raw_response = await run_endpoint_function(
venv/lib/python3.11/site-packages/fastapi/routing.py:212: in run_endpoint_function
return await dependant.call(**values)
src/noveraapi/routers/doctor.py:64: in create_doctor
new_doctor = await collection.insert_one(
venv/lib/python3.11/site-packages/mongomock_motor/__init__.py:43: in wrapper
return getattr(proxy_source, method_name)(*args, **kwargs)
venv/lib/python3.11/site-packages/mongomock/collection.py:497: in insert_one
return InsertOneResult(self._insert(document, session), acknowledged=True)
venv/lib/python3.11/site-packages/mongomock_motor/patches.py:58: in wrapper
return fn(data, *args, **kwargs)
venv/lib/python3.11/site-packages/mongomock/collection.py:552: in _insert
BSON.encode(data, check_keys=check_keys)
venv/lib/python3.11/site-packages/bson/__init__.py:1429: in encode
return cls(encode(document, check_keys, codec_options))
venv/lib/python3.11/site-packages/bson/__init__.py:1050: in encode
return _dict_to_bson(document, check_keys, codec_options)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'bson.binary.Binary'>, uuid = UUID('097cb7d0-1b2a-43a5-8686-a3525615d254'), uuid_representation = 0
@classmethod
def from_uuid(
cls: Type[Binary], uuid: UUID, uuid_representation: int = UuidRepresentation.STANDARD
) -> Binary:
"""Create a BSON Binary object from a Python UUID.
Creates a :class:`~bson.binary.Binary` object from a
:class:`uuid.UUID` instance. Assumes that the native
:class:`uuid.UUID` instance uses the byte-order implied by the
provided ``uuid_representation``.
Raises :exc:`TypeError` if `uuid` is not an instance of
:class:`~uuid.UUID`.
:param uuid: A :class:`uuid.UUID` instance.
:param uuid_representation: A member of
:class:`~bson.binary.UuidRepresentation`. Default:
:const:`~bson.binary.UuidRepresentation.STANDARD`.
See :ref:`handling-uuid-data-example` for details.
.. versionadded:: 3.11
"""
if not isinstance(uuid, UUID):
raise TypeError("uuid must be an instance of uuid.UUID")
if uuid_representation not in ALL_UUID_REPRESENTATIONS:
raise ValueError(
"uuid_representation must be a value from bson.binary.UuidRepresentation"
)
if uuid_representation == UuidRepresentation.UNSPECIFIED:
> raise ValueError(
"cannot encode native uuid.UUID with "
"UuidRepresentation.UNSPECIFIED. UUIDs can be manually "
"converted to bson.Binary instances using "
"bson.Binary.from_uuid() or a different UuidRepresentation "
"can be configured. See the documentation for "
"UuidRepresentation for more information."
)
E ValueError: cannot encode native uuid.UUID with UuidRepresentation.UNSPECIFIED. UUIDs can be manually converted to bson.Binary instances using bson.Binary.from_uuid() or a different UuidRepresentation can be configured. See the documentation for UuidRepresentation for more information.
venv/lib/python3.11/site-packages/bson/binary.py:272: ValueError
</code></pre>
<p>How can I make <code>uuidRepresentation</code> work in the mock client?</p>
|
<python><mongodb><python-asyncio><pymongo>
|
2024-12-17 11:54:02
| 0
| 21,411
|
NPatel
|
79,287,551
| 28,063,240
|
Changes detected after squashmigrations
|
<p><code>python manage.py showmigrations</code> shows:</p>
<pre class="lang-none prettyprint-override"><code>mainapp
[X] 0001_initial
...
[X] 0240_employer_data
[X] 0241_person_metadata
[X] 0242_personemployer_employerworkplace
[X] 0243_personemployer_employed_personemployer_stage
[X] 0244_remove_employerworkplace_and_more
</code></pre>
<p>I ran:</p>
<pre class="lang-none prettyprint-override"><code>python manage.py squashmigrations mainapp 0244
</code></pre>
<p>and now <code>showmigrations</code> shows:</p>
<pre class="lang-none prettyprint-override"><code>mainapp
[-] 0001_squashed_0244_remove_employerworkplace_and_more (244 squashed migrations) Run 'manage.py migrate' to finish recording.
</code></pre>
<p>But <code>python manage.py migrate</code> reports the errors:</p>
<pre class="lang-none prettyprint-override"><code>No migrations to apply.
Your models in app(s): 'mainapp' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
</code></pre>
<p>How can there be changes detected immediately after running <code>squashmigrations</code> without doing anything in between?</p>
<p><code>python manage.py makemigrations --dry-run</code> shows many "Alter field" lines like</p>
<pre class="lang-none prettyprint-override"><code>Migrations for 'mainapp':
mainapp/migrations/0001_squashed_0244_remove_employerworkplace_and_more.py
~ Alter field field on employergroup
~ Alter field image_alt on employee
~ Alter field context on employer
...
</code></pre>
<p>I would expect that after I run <code>squashmigrations</code>, <code>makemigrations</code> would show "No changes detected"</p>
<p>So what happened? What might have caused this bizarre situation? How might I fix it?</p>
|
<python><django><django-migrations><makemigrations>
|
2024-12-17 10:39:32
| 1
| 404
|
Nils
|
79,287,524
| 4,828,720
|
How to handle ever-changing password in psycopg2?
|
<p>Similar to <a href="https://stackoverflow.com/questions/57562372/how-to-handle-ever-changing-password-in-sqlalchemypsycopg2">how to handle ever-changing password in sqlalchemy+psycopg2?</a> I am faced with a PostgreSQL setup where passwords are rotated frequently. My application runs longer than a password is valid, and I need to use the new password for any new connections made by the application.</p>
<p>I am using a <a href="https://www.psycopg.org/docs/pool.html#psycopg2.pool.ThreadedConnectionPool" rel="nofollow noreferrer">psycopg2.pool.ThreadedConnectionPool</a>.</p>
<p>How can I update the pool to use the new password <em>for any new connection it makes</em> after I acquired a new password?</p>
<p>I cannot and do not need to kill existing connections; they keep working as long as they are alive. I just need to make sure that new connections use the new password.</p>
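<p>For context, the pool is currently created roughly like this (all connection details here are placeholders); the password is baked in at construction time, which is exactly the problem:</p>
<pre><code>from psycopg2.pool import ThreadedConnectionPool

# placeholder credentials - in reality the password comes from a rotation service
pool = ThreadedConnectionPool(
    minconn=1,
    maxconn=10,
    host="db.example.com",
    dbname="mydb",
    user="app_user",
    password="current-password",
)
</code></pre>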
|
<python><postgresql><psycopg2>
|
2024-12-17 10:34:30
| 1
| 1,190
|
bugmenot123
|
79,287,522
| 6,930,340
|
Compute percentage of positive rows in a group_by polars DataFrame
|
<p>I need to compute the percentage of positive values in the <code>value</code> column grouped by the <code>group</code> column.</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"group": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
"value": [2, -1, 3, 1, -2, 1, 2, -1, 3, 2],
}
)
shape: (10, 2)
┌───────┬───────┐
│ group ┆ value │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═══════╡
│ A ┆ 2 │
│ A ┆ -1 │
│ A ┆ 3 │
│ A ┆ 1 │
│ A ┆ -2 │
│ B ┆ 1 │
│ B ┆ 2 │
│ B ┆ -1 │
│ B ┆ 3 │
│ B ┆ 2 │
└───────┴───────┘
</code></pre>
<p>In group <code>A</code> there are 3 out of 5 positive values (60%), while in group <code>B</code> there are 4 out of 5 positive values (80%).</p>
<p>Here's the expected dataframe.</p>
<pre><code>┌────────┬──────────────────┐
│ group ┆ positive_percent │
│ --- ┆ --- │
│ str ┆ f64 │
╞════════╪══════════════════╡
│ A ┆ 0.6 │
│ B ┆ 0.8 │
└────────┴──────────────────┘
</code></pre>
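<p>For reference, this is roughly how I would compute the same thing in pandas (on a copy of the data called <code>pdf</code>) - shown only to clarify the expected semantics, not as the polars solution I am after:</p>
<pre><code>import pandas as pd

pdf = pd.DataFrame(
    {
        "group": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
        "value": [2, -1, 3, 1, -2, 1, 2, -1, 3, 2],
    }
)

# fraction of strictly positive values per group
positive_percent = pdf.groupby("group")["value"].apply(lambda s: (s > 0).mean())
print(positive_percent)  # A -> 0.6, B -> 0.8
</code></pre>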
|
<python><python-polars>
|
2024-12-17 10:34:13
| 2
| 5,167
|
Andi
|
79,287,505
| 3,439,054
|
Vectorize objects in python Jax
|
<p>I am unsure what the best way is to vectorize objects in Python JAX.
In particular, I want to write code that handles calling a method both from a single instantiation of a class and from multiple (vectorized) instantiations of the class.
In the following, I write a simple example of what I would like to achieve.</p>
<pre><code>import jax
import jax.numpy as jnp
import jax.random as random
class Dummy:
def __init__(self, x, key):
self.x = x
self.key = key
def to_pytree(self):
return (self.x, self.key), None
def get_noisy_x(self):
self.key, subkey = random.split(self.key)
return self.x + random.normal(subkey, self.x.shape)
@staticmethod
def from_pytree(auxiliary, pytree):
return Dummy(*pytree)
jax.tree_util.register_pytree_node(Dummy,
Dummy.to_pytree,
Dummy.from_pytree)
</code></pre>
<p>The class <code>Dummy</code> contains some information, <code>x</code> and <code>key</code>, and has a method, <code>get_noisy_x</code>. The following code works as expected:</p>
<pre><code>key = random.PRNGKey(0)
dummy = Dummy(jnp.array([1., 2., 3.]), key)
dummy.get_noisy_x()
</code></pre>
<p>I would like <code>get_noisy_x</code> to work also on a vectorized version of the object <code>Dummy</code>.</p>
<pre><code>key = random.PRNGKey(0)
key, subkey = random.split(key)
key_batch = random.split(subkey, 100)
dummy_vmap = jax.vmap(lambda x: Dummy(jnp.array([1., 2., 3.]), x))(key_batch)
</code></pre>
<p>I would expect <code>dummy_vmap</code> to be an array of <code>Dummy</code> objects; instead, <code>dummy_vmap</code> turns out to be a single <code>Dummy</code> with vectorized <code>x</code> and <code>key</code>. This is not ideal for me because it modifies the behavior of the code. For example, if I call <code>dummy_vmap.get_noisy_x()</code>, I get an error saying that <code>self.key, subkey = random.split(self.key)</code> does not work because <code>self.key</code> is not a single key. While this error could be solved in several ways - and actually, in this example, vectorization is not really needed - my goal is to understand how to write code in an object-oriented way that correctly handles both</p>
<pre><code>dummy = Dummy(jnp.array([1., 2., 3.]), key)
dummy.get_noisy_x()
</code></pre>
<p>and</p>
<pre><code>vectorized_dummy = .... ?
vectorized_dummy.get_noisy_x()
</code></pre>
<p>Notice that the example I have made could work in several ways without involving vectorization. What I am looking for, however, is a more generic way to deal with vectorization in much more complicated scenarios.</p>
<h2>Update</h2>
<p>I have found out that I need to vectorize get_noisy_x as well.</p>
<pre class="lang-py prettyprint-override"><code>dummy_vmap = jax.vmap(lambda x: Dummy(jnp.array([1., 2., 3.]), x))(key_batch)
jax.vmap(lambda self: Dummy.get_noisy_x(self))(dummy_vmap) # this function call works exactly as expected.
</code></pre>
<p>However, this solution seems a bit counter-intuitive, and not really scalable, as in a larger project I would need to vectorize all functions of interest.</p>
|
<python><vectorization><jax>
|
2024-12-17 10:29:50
| 1
| 324
|
Sam
|
79,287,422
| 774,133
|
Wrong padding tokens in HF model prediction
|
<p>Please consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>from datasets import load_dataset_builder, load_dataset
import numpy as np
import os
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments
class QGenMetrics:
def __init__(self, tokenizer, ignore_index=-100):
self.tokenizer = tokenizer
self.ignore_index = ignore_index
def clean_labels(self, labels):
labels[labels == self.ignore_index] = self.tokenizer.pad_token_id
return labels
def compute_metrics_validation(self, eval_preds):
predictions = eval_preds.predictions
labels = eval_preds.label_ids
# predictions, labels = eval_preds
#
try:
labels = self.clean_labels(labels)
predictions = self.tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = self.tokenizer.batch_decode(labels, skip_special_tokens=True)
except Exception as e:
print(e)
print("PREDS")
print(predictions)
print("LABELS")
print(labels)
for i, pred in enumerate(predictions):
if -100 in pred:
print(f"preds[{i}]: {pred}")
assert False
res = {"metric":1.0}
return res
def compute_metrics_test(self, test_preds):
res = {"metric":1.0}
return res
#<
def actual_encoding(examples, tokenizer, max_source_len=None, max_target_len=None, ignore_label=-100):
# no padding and no truncation: a collator will do the job
# prompts_enc = tokenizer(examples["prompt"], padding="max_length", truncation=True, max_length=200)
# targets_enc = tokenizer(text_target=examples["question"], padding="max_length", truncation=True, max_length=200) # max_length=target_length, padding="max_length", truncation=True, return_tensors="pt")
prompts_enc = tokenizer(examples["question"],
truncation = True if max_source_len else False,
max_length = max_source_len if max_source_len else None,
padding = "max_length" if max_source_len else False,
return_tensors="pt" if max_source_len else None)
targets_enc = tokenizer(examples["text"],
truncation = True if max_target_len else False,
padding = "max_length" if max_target_len else False,
max_length = max_target_len if max_target_len else None,
return_tensors="pt" if max_source_len else None)
print(type(prompts_enc))
examples["input_ids"] = prompts_enc["input_ids"]
examples["attention_mask"] = prompts_enc["attention_mask"]
# unused
# labels = []
# for ex_labels in targets_enc["input_ids"]:
# proc_labels = [label if label != 0 else ignore_label for label in ex_labels]
# labels.append(proc_labels)
examples["labels"] = targets_enc["input_ids"] # labels
return examples
#< actual_encoding
# download Bilkies/QuestionGeneration from HF hub
# https://huggingface.co/datasets/Bilkies/QuestionGeneration
ds_name = 'Bilkies/QuestionGeneration'
ds_builder = load_dataset_builder(ds_name)
print(ds_builder.info)
dataset = load_dataset(ds_name)
display(dataset)
train_ds = dataset['train']
print("number of original training point", len(train_ds))
# subsample train_ds
train_ds = train_ds.select(range(1000))
print("after sampling", len(train_ds))
test_ds = dataset['validation'].select(range(500))
# split training_ds in 80/20 for training and validation
train_ds = train_ds.train_test_split(test_size=0.2)
valid_ds = train_ds['test']
train_ds = train_ds['train']
model_id = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
max_source_len = 31
max_target_len = 50
train_ds = train_ds.map(actual_encoding, fn_kwargs={"tokenizer": tokenizer, "max_source_len":max_source_len, "max_target_len":max_target_len}, batched=True, num_proc=2)
valid_ds = valid_ds.map(actual_encoding, fn_kwargs={"tokenizer": tokenizer, "max_source_len": max_source_len, "max_target_len": max_target_len}, batched=True, num_proc=2)
test_ds = test_ds.map(actual_encoding, fn_kwargs={"tokenizer": tokenizer, "max_source_len": max_source_len, "max_target_len": max_target_len}, batched=True, num_proc=2)
for ds in [train_ds, valid_ds, test_ds]:
ds_in_lens = [len(ex["input_ids"]) for ex in ds.iter(batch_size=1)]
ds_lab_lens = [len(ex["labels"]) for ex in ds.iter(batch_size=1)]
check = np.array([l == ds_in_lens[0] for l in ds_in_lens[1:]]).all() and np.array([l == ds_lab_lens[0] for l in ds_lab_lens[1:]]).all()
assert check, "check lengths in {ds}"
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model = model.to(torch.device("cuda:0"))
bs = 1
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, padding="longest", label_pad_token_id=-100)
evaluator = QGenMetrics(tokenizer)
trainer = Seq2SeqTrainer(
model=model,
train_dataset=train_ds,
eval_dataset=valid_ds,
data_collator=data_collator,
compute_metrics=evaluator.compute_metrics_validation,
args=Seq2SeqTrainingArguments(
output_dir="./_remove",
gradient_accumulation_steps=1,
per_device_train_batch_size=bs,
per_device_eval_batch_size=bs,
num_train_epochs=1,
seed = 3,
data_seed = 4,
predict_with_generate=True,
eval_strategy="epoch",
report_to="none"
) #< training args
) #< trainer
trainer.train()
</code></pre>
<p>Which I explain here:</p>
<pre class="lang-py prettyprint-override"><code># download Bilkies/QuestionGeneration from HF hub
# https://huggingface.co/datasets/Bilkies/QuestionGeneration
ds_name = 'Bilkies/QuestionGeneration'
ds_builder = load_dataset_builder(ds_name)
print(ds_builder.info)
dataset = load_dataset(ds_name)
display(dataset)
train_ds = dataset['train']
print("number of original training point", len(train_ds))
# subsample train_ds
train_ds = train_ds.select(range(1000))
print("after sampling", len(train_ds))
test_ds = dataset['validation'].select(range(500))
# split training_ds in 80/20 for training and validation
train_ds = train_ds.train_test_split(test_size=0.2)
valid_ds = train_ds['test']
train_ds = train_ds['train']
</code></pre>
<p>After downloading the dataset <code>Bilkies/QuestionGeneration</code>, I keep only the first 1000 examples for "speed" purposes. Then I create three data partitions:</p>
<ul>
<li>the test dataset <code>test_ds</code>, that corresponds to the partition <code>valid</code> of the HF dataset</li>
<li><code>train_ds</code> and <code>valid_ds</code> that correspond to 80% and 20% of the training data, respectively.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>model_id = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
max_source_len = 31
max_target_len = 50
train_ds = train_ds.map(actual_encoding, fn_kwargs={"tokenizer": tokenizer, "max_source_len":max_source_len, "max_target_len":max_target_len}, batched=True, num_proc=2)
valid_ds = valid_ds.map(actual_encoding, fn_kwargs={"tokenizer": tokenizer, "max_source_len": max_source_len, "max_target_len": max_target_len}, batched=True, num_proc=2)
test_ds = test_ds.map(actual_encoding, fn_kwargs={"tokenizer": tokenizer, "max_source_len": max_source_len, "max_target_len": max_target_len}, batched=True, num_proc=2)
for ds in [train_ds, valid_ds, test_ds]:
ds_in_lens = [len(ex["input_ids"]) for ex in ds.iter(batch_size=1)]
ds_lab_lens = [len(ex["labels"]) for ex in ds.iter(batch_size=1)]
check = np.array([l == ds_in_lens[0] for l in ds_in_lens[1:]]).all() and np.array([l == ds_lab_lens[0] for l in ds_lab_lens[1:]]).all()
assert check, "check lengths in {ds}"
</code></pre>
<p>I download a model and the related tokenizer, then tokenize the three datasets using <code>actual_encoding</code> with pre-determined max lengths for <code>input_ids</code> and <code>labels</code>. The <code>for</code> loop at the end checks that all the fields used in training have the same length.</p>
<p>Then I train the model, performing a validation step at the end of every training epoch. The validation is based on the generated <code>token_id</code>s and not on the logits (<code>predict_with_generate=True</code>)</p>
<pre class="lang-py prettyprint-override"><code>bs = 1
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, padding="longest", label_pad_token_id=-100)
evaluator = QGenMetrics(tokenizer)
trainer = Seq2SeqTrainer(
model=model,
train_dataset=train_ds,
eval_dataset=valid_ds,
data_collator=data_collator,
compute_metrics=evaluator.compute_metrics_validation,
args=Seq2SeqTrainingArguments(
output_dir="./_remove",
gradient_accumulation_steps=1,
per_device_train_batch_size=bs,
per_device_eval_batch_size=bs,
num_train_epochs=1,
seed = 3,
data_seed = 4,
predict_with_generate=True,
eval_strategy="epoch",
report_to="none"
) #< training args
) #< trainer
trainer.train()
</code></pre>
<p>The model is evaluated by the <code>QGenMetrics</code> class, specifically by its <code>compute_metrics_validation</code> method:</p>
<pre class="lang-py prettyprint-override"><code>def compute_metrics_validation(self, eval_preds):
predictions = eval_preds.predictions
labels = eval_preds.label_ids
try:
labels = self.clean_labels(labels)
predictions = self.tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = self.tokenizer.batch_decode(labels, skip_special_tokens=True)
except Exception as e:
print(e)
print("PREDS")
print(predictions)
print("LABELS")
print(labels)
for i, pred in enumerate(predictions):
if -100 in pred:
print(f"preds[{i}]: {pred}")
assert False
res = {"metric":1.0}
return res
</code></pre>
<p>The labels are cleaned and their padding value -100 is substituted by the padding token provided by the tokenizer. Then both the <code>predictions</code> and the <code>labels</code> are decoded by the tokenizer.</p>
<p>The problem is that the tokenizer finds the value -100 in the predicted tokens:</p>
<pre><code>Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
[[repeated for every validation sample]]
out of range integral type conversion attempted
[...]
preds[0]: [ 0 3 9 28376 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 -100]
[...]
preds[199]: [ 0 328 1944 3 9 467 13 3 2951 7 16 8 3 2951
7 5892 5 1 0 0 -100]
</code></pre>
<p>As you can see:</p>
<ul>
<li><p>Trainer.tokenizer is used, but I cannot see where because I explicitly call the tokenizer when I need it.</p>
</li>
<li><p>Every validation prediction has a final -100, which should not be there.</p>
</li>
</ul>
<p>I am sure the error is caused by a very silly step I made in my code, but I cannot see which one.</p>
<p>Is there anyone who can help me?</p>
<p>I have prepared a notebook on colab, but I am not sure it runs fine as I cannot get access to a runtime with a GPU.</p>
<p><a href="https://colab.research.google.com/drive/1Ms_gcI_loWXKDPVkEtjG_W3xnsZH0YTT?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1Ms_gcI_loWXKDPVkEtjG_W3xnsZH0YTT?usp=sharing</a></p>
|
<python><pytorch><huggingface-tokenizers><huggingface-trainer>
|
2024-12-17 10:04:58
| 0
| 3,234
|
Antonio Sesto
|
79,287,318
| 1,383,029
|
Visual Studio does not accept launch.json (pydebug)
|
<p>I want to debug my python project using a remote debugger, so I created this launch.json file in the .vscode folder of my project.</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Remote Attach",
"type": "debugpy",
"request": "attach",
"connect": {
"host": "localhost",
"port": 5678
}
}
]
}
</code></pre>
<p>This is the code in my main.py file:</p>
<pre><code>import debugpy
debugpy.listen(("0.0.0.0", 5678))
debugpy.wait_for_client()
</code></pre>
<p>But when I hit F5, I only get this dialog:</p>
<p><a href="https://i.sstatic.net/ojFb6mA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ojFb6mA4.png" alt="enter image description here" /></a></p>
<p>When I click on "Open 'launch.json', it opens the launch.json I created. No errors in file.</p>
<p>Any idea what could be wrong? The error message is really undescriptive 🙂</p>
|
<python><visual-studio-code><debugging><remote-debugging>
|
2024-12-17 09:33:48
| 0
| 2,155
|
user1383029
|
79,287,256
| 7,256,443
|
Is egg-info still the recommended way to manage python packages installed in editable mode?
|
<p>I am installing local python package in editable mode, using a <code>pyproject.toml</code> file to configure it. The <code>pyproject.toml</code> looks like this:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools", "setuptools-scm"]
build-backend = "setuptools.build_meta"
[project]
dependencies = [
"numpy==1.26.4",
"pandas==2.2.3",
]
name = "pkg_name"
requires-python = ">=3.11"
dynamic = ["version"]
[project.optional-dependencies]
dev = ["pytest"]
</code></pre>
<p>My install command is this: <code>python -m pip install -e .</code></p>
<p>I noticed that this creates a <code>.egg-info</code> directory which contains the metadata for the package. My question is, is this still the recommended way to create python packages in editable mode despite the fact that <code>.egg</code> files have been deprecated, or is there a more modern alternative (perhaps more closely related to the <code>.whl</code> format) to an <code>.egg-info</code> directory which I should be using instead? If so, how should this be specified in the config for my package?</p>
|
<python><python-wheel><egg>
|
2024-12-17 09:08:25
| 0
| 1,033
|
Ben Jeffrey
|
79,287,225
| 6,912,069
|
python polars in jupyter lab leads to error due to infer_schema_length
|
<p>I often run into data fetching errors when I'm working in JupyterLab and trying to use <code>polars</code> instead of <code>pandas</code> as the dataframe library.</p>
<p>I do this by running the statement</p>
<pre><code>%config SqlMagic.autopolars = True
</code></pre>
<p>However, when I try to fetch some data with the <code>%%sql</code> magic command, I often get an error like this one:</p>
<pre><code>ComputeError: could not append value: "Dr." of type: str to the builder;
make sure that all rows have the same schema or consider increasing `infer_schema_length`
it might also be that a value overflows the data-type's capacity
</code></pre>
<p>I know that polars' default length for schema inference is 100, and normally I would just increase it in the following way:</p>
<pre><code>df = pl.read_database(query=qry, connection=con, infer_schema_length=10_000)
</code></pre>
<p>I'd like to set the option globally, so that I can continue to profit from the convenience of Jupyter's magic commands, but I didn't find an obvious way to do so using the <a href="https://docs.pola.rs/api/python/stable/reference/config.html" rel="nofollow noreferrer">polars.Config</a> object.</p>
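<p>Ideally I could do something along these lines - note that this exact setter is hypothetical, I could not find a real equivalent in the Config API, which is precisely my question:</p>
<pre><code>import polars as pl

# hypothetical global setting (I have not found a real equivalent in polars.Config)
pl.Config.set_infer_schema_length(10_000)
</code></pre>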
<p>Help would be much appreciated</p>
|
<python><jupyter-notebook><python-polars><magic-command>
|
2024-12-17 08:57:06
| 0
| 686
|
N. Maks
|
79,286,793
| 4,423,300
|
Error in pip install pypiwin32 in Dockerfile
|
<p>I am trying to add python modules in Docker. My Dockerfile looks like:</p>
<pre><code>FROM python:3.9
ADD ./src ./src
# ADD main.py .
RUN pip install --upgrade pip
RUN pip install pyyaml
RUN pip install coloredlogs
RUN pip install pypiwin32
# RUN python -m pip install --upgrade pywin32
WORKDIR ./src
CMD ["python", "./main.py"]
</code></pre>
<p>While building the image, I am getting the following error:
<a href="https://i.sstatic.net/WxU6Iu6w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxU6Iu6w.png" alt="error" /></a></p>
<p>I tried RUN pip install pywin32 as well, but that also did not work. Thanks.</p>
|
<python><python-3.x><docker><dockerfile><pywin32>
|
2024-12-17 05:51:23
| 1
| 637
|
SheCodes
|
79,286,671
| 1,870,832
|
Open pdf in pdf-js viewer from streamlit app
|
<p>I have a streamlit app, and I want it to display a pdf in an iframe. My functionality requirements for my pdf viewer/iframe are:</p>
<ul>
<li>I want the pdf to open to a particular (parameterizable) page</li>
<li>I want the pdf to open with particular (parameterizable) quote/text already highlighted</li>
<li>I want the pdf to be scrollable in the viewer/iframe</li>
</ul>
<p>The requirements above led me to go with (or attempt to go with) pdf.js, instead of the streamlit-pdf-viewer custom component.</p>
<p>I've stripped down my streamlit app to the following minimal <code>app.py</code>, which just includes three buttons/links that unsuccessfully attempt to display the pdf viewer per my requirements:</p>
<pre><code>
import streamlit as st
import streamlit.components.v1 as components
import urllib
def main():
st.title("Hello from streamlit-n-pdfjs!")
# Locations of my (public) r2 bucket and the document in it I want to view
BUCKET_URL = "https://pub-ec8aa50844b34a22a2e6132f8251f8b5.r2.dev"
DOCUMENT_NAME = "FINAL_REPORT.pdf"
# "Media stored in the folder ./static/ relative to the running app file is served at path app/static/[filename]"
# ^ https://docs.streamlit.io/develop/concepts/configuration/serving-static-files
local_pdfjs_path = "./app/static/pdfjs-4-9-155-dist/web/viewer.html"
# Attempt to link to the doc using the pdf.js viewer
PAGENUM = 100
HIGHLIGHT_QUOTE = 'exited the stage'
ENCODED_QUOTE = urllib.parse.quote(HIGHLIGHT_QUOTE)
FULL_DOC_URL = f"{BUCKET_URL}/{DOCUMENT_NAME}#page={PAGENUM}&search={ENCODED_QUOTE}"
pdfjs_viewer = f"{local_pdfjs_path}?file={FULL_DOC_URL}"
# Clicking the link below opens the correct pdf to the correct page, but does not search/highlight the quote text,
# ...and of course does not open in an iframe
st.markdown(f"[link to the doc I can't get to open in iframe w/ buttons below]({FULL_DOC_URL})") # opens doc in a new tab but doesn't search/highlight quote
# clicking button below says "404: Not Found"
if st.button("Show PDF in iframe with highlights, via pdfjs viewer"):
components.iframe(pdfjs_viewer, height=800, scrolling=True)
# Clicking the button below opens an iframe border, but
# just says "this page has been blocked by chrome" inside
if st.button("Show PDF in iframe with highlights, via regular url w/ encoded params"):
components.iframe(FULL_DOC_URL, height=800, scrolling=True)
if __name__ == "__main__":
main()
</code></pre>
<p>...and the latest (unzipped) release/code of pdf.js vendor'd into my repo under <code>vendor/</code> like so:</p>
<pre><code>
|streamlit_n_pdfjs
|--app.py
|--vendor
|----pdfjs-4-9-155-dist
|------web
|--------viewer.html
</code></pre>
<p>Finally, I have a <code>.streamlit/config.toml</code> with:</p>
<pre><code>[server]
enableStaticServing = true
</code></pre>
<p>and I launch my streamlit app locally with:</p>
<pre><code>PYTHONPATH=. streamlit run app.py
</code></pre>
<p>What I get is documented in the code above, but for clarity there are three links/buttons shown in the app (none of which work according to my requirements):
<a href="https://i.sstatic.net/k15wlLb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k15wlLb8.png" alt="enter image description here" /></a></p>
<ol>
<li>A link which opens the pdf file in a new tab to a given page, but does not use an iframe (deliberately, this is mainly testing the pdf/doc is available at the url), and not with the desired text/quote highlighted successfully</li>
<li>A button that attempts to launch an iframe with the document using pdf.js, but just triggers a 404 message</li>
<li>A button that attempts to launch an iframe with the document just via default browser pdf rendering from the url, but which triggers a message "this page has been blocked by Chrome"</li>
</ol>
|
<javascript><python><pdf><streamlit><pdf.js>
|
2024-12-17 04:04:45
| 0
| 9,136
|
Max Power
|
79,286,464
| 19,366,064
|
UV Python Packaging: How to set environment variables in virtual environments
|
<p>How do I set environment variables in a virtual environment created by UV? I tried setting them in <code>.venv/scripts/activate_this.py</code> and it doesn't work.</p>
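<p>What I put in <code>activate_this.py</code> was roughly the following (the variable name is only an example):</p>
<pre><code>import os

# example variable - this is roughly what I added to activate_this.py
os.environ["MY_APP_SETTING"] = "some-value"
</code></pre>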
|
<python><uv>
|
2024-12-17 01:29:35
| 1
| 544
|
Michael Xia
|
79,286,346
| 8,094,926
|
Return something from onclick?
|
<p>I have a function that creates a chart with pandas & matplotlib. In the function there is an <code>onclick</code> handler that creates a table and then shows it with <code>plt.show()</code> upon click.</p>
<p>When the entire function ends with <code>plt.show()</code>, the chart displays and <code>onclick</code> works. But if I change the code to return an encoded binary of the fig object, then send it to a UI, <code>onclick</code> does <em>not</em> work.</p>
<p>How can I get onclick to work when I return the encoded binary of the fig?</p>
<pre><code>def create_chart():
# other code
df = pd.DataFrame(
data = {"A": A, "B": B, "C": C},
index=D,
)
fig, ax = plt.subplots()
legend_elements = [...]
ax.legend(handles=legend_elements, loc="best")
# Line
ax.plot(df["A"], marker="o",color='DodgerBlue',linewidth=4.5,markersize=6)
plt.tight_layout(rect=(-.05, 0, 1, 1))
fig.set_size_inches(get_chart_size(B),4.8)
# Experiment with onclick
def onclick(event):
""" onclick """
if event.inaxes:
# Create a new figure for the table
table_fig, table_ax = plt.subplots()  # plt.subplots() returns (figure, axes)
table_ax.axis('off')
table = table_ax.table(cellText=df.values, colLabels=df.columns, loc='center')
table.auto_set_font_size(False)
table.set_fontsize(10)
table.scale(1.2, 1.2)
plt.show() # should I return something here?
fig.canvas.mpl_connect('button_press_event', onclick)
# plt.show() # onclick works locally if i do this as the end of function
buf = BytesIO()
fig.savefig(buf, format="png")
buf.seek(0)
# Encode the image as base64
img = base64.b64encode(buf.read()).decode("ascii")
return img
</code></pre>
|
<python><python-3.x><pandas><matplotlib>
|
2024-12-16 23:39:05
| 0
| 468
|
chocalaca
|
79,286,118
| 5,868,293
|
How to calculate daily weights which satisfy certain conditions
|
<p>I have the following pandas dataframe which represents the consumption of 7 days (<code>day_0</code> is today, <code>day_-1</code> is yesterday etc) of 10 people (<code>ids</code>):</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(8, 15, size=(10, 7)))
df.columns = ['day_0', 'day_-1', 'day_-2', 'day_-3', 'day_-4', 'day_-5', 'day_-6']
df.index.name = 'id'
print(df.reset_index())
id day_0 day_-1 day_-2 day_-3 day_-4 day_-5 day_-6
0 0 10 10 14 8 14 14 14
1 1 10 13 11 11 8 10 10
2 2 10 12 9 12 9 10 10
3 3 12 12 9 11 9 12 13
4 4 12 13 8 12 8 11 9
5 5 13 9 8 13 9 12 10
6 6 8 9 8 14 8 13 14
7 7 13 10 14 12 8 9 11
8 8 8 8 10 12 11 14 14
9 9 14 13 13 9 11 14 13
</code></pre>
<p>I would like to find daily weights (so in total 7 weights: <code>w_0, w_-1, w_-2, w_-3, w_-4, w_-5, w_-6</code>) which need to have the following properties:</p>
<ol>
<li><code>w_0</code> > <code>w_-1</code> > <code>w_-2</code> > ... > <code>w_-6</code> > 0</li>
<li><code>w_0</code> + <code>w_-1</code> + <code>w_-2</code> + ... + <code>w_-6</code> = 7</li>
<li>the weighted average for <strong>exactly</strong> 7 out of 10 <code>ids</code> to be below a threshold (e.g. 11)</li>
</ol>
<p>I can achieve prerequisites 1 & 2 by using the exponential decay function and later normalizing:</p>
<pre><code>import numpy as np
n = 7
_lambda = 0.5
# Calculate the weights using exponential decay
weights = np.exp(-_lambda * np.arange(n))
# Normalize the weights so that their sum is equal to the length of the time series
weights *= n / np.sum(weights)
</code></pre>
<p>But I don't know how I could also satisfy prerequisite 3.</p>
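<p>To make prerequisite 3 concrete, here is how I would check it for a given set of weights, reusing <code>df</code> and <code>weights</code> from above (this is only the check itself, not a way to find such weights):</p>
<pre><code>threshold = 11

# weighted average per id; the weights sum to 7, so divide by 7
weighted_avg = (df * weights).sum(axis=1) / 7

# prerequisite 3: exactly 7 of the 10 ids must be below the threshold
print((weighted_avg < threshold).sum() == 7)
</code></pre>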
<p>Is that possible? How can I do that in python?</p>
|
<python><pandas><numpy><scipy><scipy-optimize>
|
2024-12-16 21:29:30
| 1
| 4,512
|
quant
|
79,286,117
| 4,663,089
|
Use attribute name as column name for sqlalchemy in pandas read_sql()
|
<p>I am using <code>pd.read_sql()</code> together with SQLAlchemy. However, it does not use the attribute name of the mapping for the columns in the pandas dataframe, but the original SQL column name:</p>
<pre><code>class DBtable(Base):
name: mapped_column("firstname")
</code></pre>
<p>So in such case, after calling</p>
<pre><code>stmt = select(DBTable)
df = pd.read_sql(stmt, engine)
</code></pre>
<p>I would get a df with 'firstname' as the column name.</p>
<p>I use a workaround based on <code>column_property</code>, but is there maybe a nicer way to achieve this?</p>
<pre><code>class DBtable(Base):
name: Mapped[str] = column_property(mapped_column("firstname"))
</code></pre>
<p>I don't want to use the alias inside the query or rename the dataframe column later, as this renaming should be transparent to the user and the goal is to unify naming across several tables and databases...</p>
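<p>For completeness, continuing the snippet above, this is the kind of per-query aliasing I want to avoid having to repeat for every column and table:</p>
<pre><code># per-query aliasing that I would rather not repeat everywhere
stmt = select(DBtable.name.label("name"))
df = pd.read_sql(stmt, engine)
</code></pre>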
|
<python><pandas><sqlalchemy><orm>
|
2024-12-16 21:29:15
| 0
| 344
|
ju.
|
79,286,066
| 13,392,257
|
Count redis queue length in docker
|
<p>I have a Celery+Redis python application.</p>
<p>I have to find the number of tasks in the queue. I am trying this command, but the output is zero,
even though I am confident that the queue is not empty.</p>
<pre class="lang-none prettyprint-override"><code>redis-cli -n 1 -h localhost -p 6379 llen celery
(integer) 0
</code></pre>
<p>I think I am using invalid arguments.
My redis + celery configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>celery:
build: ./project
command: celery -A core_app worker --loglevel=info --concurrency=15 --max-memory-per-child=1000000
volumes:
- ./project:/usr/src/app
- ./project/media:/project/media
- ./project/logs:/project/logs
env_file:
- .env
environment:
# environment variables declared in the environment section override env_file
- DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=redis://redis:6379/0
depends_on:
- django
- redis
redis:
build:
context: ./redis_customization
dockerfile: Dockerfile
image: redis:7.2-alpine
restart: always
ports:
- 6379:6379
# Run the init script
command: sh -c "./init.sh"
# Run as privileged to allow the container to change the vm.overcommit_memory setting
privileged: true
</code></pre>
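<p>One thing worth double-checking (an observation based on the compose file above, not a definitive diagnosis): the broker URL points at Redis database 0 (<code>redis://redis:6379/0</code>), while the <code>redis-cli</code> command queries database 1 (<code>-n 1</code>). Checking the same database the broker uses might already explain the zero:</p>
<pre class="lang-none prettyprint-override"><code># query the DB the broker URL actually uses (db 0 in the compose file)
redis-cli -h localhost -p 6379 -n 0 llen celery

# or inspect which keys exist there to see where the queue lives
redis-cli -h localhost -p 6379 -n 0 keys '*'
</code></pre>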
|
<python><redis><celery>
|
2024-12-16 21:02:45
| 2
| 1,708
|
mascai
|
79,285,844
| 8,618,242
|
2D stacking and optimizing
|
<p>I have a big rectangle (pallet) and would like to stack boxes (smaller rectangles) in a way that optimizes space usage. Currently my problem is that I'm not sure how to rotate the last row/column properly to maximize the utilization of the available space.</p>
<p>I have tried:</p>
<h3>Rect class</h3>
<pre class="lang-py prettyprint-override"><code>from typing import List, Tuple
import matplotlib.pyplot as plt
import matplotlib.patches as patches
class Rect:
def __init__(self, corner: Tuple[float, float], size: Tuple[float, float]):
self.length = max(size)
self.width = min(size)
self.center = (corner[0] + self.length/2, corner[1] + self.width/2)
self.size = (self.length, self.width)
@property
def min_corner(self) -> Tuple[float, float]:
return (self.center[0] - self.size[0]/2, self.center[1] - self.size[1]/2)
@property
def max_corner(self) -> Tuple[float, float]:
return (self.center[0] + self.size[0]/2, self.center[1] + self.size[1]/2)
@property
def area(self) -> float:
return self.length * self.width
def collides_with(self, other: 'Rect') -> bool:
"""Checks if this rectangle collides with another rectangle."""
self_min_x, self_min_y = self.min_corner
self_max_x, self_max_y = self.max_corner
other_min_x, other_min_y = other.min_corner
other_max_x, other_max_y = other.max_corner
# Check for overlap
return (
(
(self_max_x < other_max_x and self_max_y < other_max_y) and
(self_max_x > other_min_x and self_max_y > other_min_y)
)
or
(
(other_max_x < self_max_x and other_max_y < self_max_y) and
(other_max_x > self_min_x and other_max_y > self_min_y)
)
)
def get_patch(self):
"""Returns a matplotlib Rectangle patch for visualization."""
x, y = self.min_corner
rect_width, rect_height = self.size
return patches.Rectangle(
(x, y),
rect_width,
rect_height,
edgecolor='red',
facecolor='lightgreen',
linewidth=1
)
</code></pre>
<h2>Pallet Class</h2>
<pre class="lang-py prettyprint-override"><code>class Pallet:
def __init__(self, size: Tuple[float, float]):
self.size = size
self.length = max(size)
self.width = min(size)
self.rects: List[Rect] = []
def add_rect(self, rect: Rect) -> bool:
"""Attempts to add a rectangle to the pallet. Returns True if successful, False otherwise."""
if rect.area > self.length * self.width:
return False
# Check if the rectangle's corners are inside the pallet size
max_corner = rect.max_corner
min_corner = rect.min_corner
x_max, y_max = max_corner
x_min, y_min = min_corner
if (not (0 <= x_max <= self.length and 0 <= y_max <= self.width)) or (x_min< 0 or y_min<0):
print("Out of Pallet")
return False
for r in self.rects:
if r.collides_with(rect):
print("Collision")
return False
self.rects.append(rect)
return True
def fill_with_rects(self, rect_size: Tuple[float, float]):
rect_length = rect_size[0]
rect_width = rect_size[1]
rows_x = int(self.length // rect_length)
cols_y = int(self.width // rect_width)
for i in range(rows_x):
for j in range(cols_y):
cx = rect_length * (i)
cy = rect_width * (j)
corner = (cx, cy)
box = Rect(corner, (rect_length, rect_width))
                box_added = self.add_rect(box)  # use self, not the global pallet
def visualize(self):
fig, ax = plt.subplots(figsize=(10, 8))
ax.set_xlim(0, self.length)
ax.set_ylim(0, self.width)
ax.set_aspect('equal')
for box in self.rects:
box_patch = box.get_patch()
ax.add_patch(box_patch)
ax.set_xlabel("Pallet Length")
ax.set_ylabel("Pallet Width")
plt.grid(True)
plt.show()
</code></pre>
<h3>Test</h3>
<pre class="lang-py prettyprint-override"><code>if __name__=="__main__":
# Filling a pallet
pallet = Pallet(size=(120, 100))
pallet.fill_with_rects((32, 17))
print("Number of rectangles in pallet:", len(pallet.rects))
pallet.visualize()
</code></pre>
<h3>Current result:</h3>
<p><a href="https://i.sstatic.net/MHIyJgpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MHIyJgpB.png" alt="enter image description here" /></a></p>
<h3>Desired result:</h3>
<p>An additional column of boxes at the end, as there is still space to contain more rectangles without collision.</p>
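<p>A hedged sketch of one way to get there (helper names are hypothetical): note that <code>Rect</code> forces the longer side along x via <code>max()</code>/<code>min()</code>, so a rotated box cannot currently be represented; a variant that keeps the size as passed allows filling the leftover strip on the right with rotated boxes.</p>
<pre class="lang-py prettyprint-override"><code>class OrientedRect(Rect):
    """Like Rect, but keeps the (length, width) orientation exactly as given."""
    def __init__(self, corner, size):
        self.length, self.width = size
        self.center = (corner[0] + self.length / 2, corner[1] + self.width / 2)
        self.size = (self.length, self.width)

def fill_leftover_column(pallet, rect_size):
    rect_length, rect_width = rect_size
    used_x = (pallet.length // rect_length) * rect_length   # right edge of the main grid
    if pallet.length - used_x >= rect_width:                # rotated boxes fit in the strip
        for j in range(int(pallet.width // rect_length)):
            pallet.add_rect(OrientedRect((used_x, j * rect_length),
                                         (rect_width, rect_length)))
</code></pre>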
|
<python><matplotlib><optimization><bin-packing>
|
2024-12-16 19:21:43
| 1
| 4,115
|
Bilal
|
79,285,708
| 7,422,352
|
How does the GIL insert the interrupted thread and waiting threads back into the Ready Queue in Python?
|
<p><strong>Context:</strong></p>
<p>I am executing a Python program that creates 5 threads as follows:</p>
<pre class="lang-py prettyprint-override"><code>import time
from threading import Thread
def do_work(thread_number):
print(f"Starting thread {thread_number}")
time.sleep(1)
print(f"Ending thread {thread_number}")
for i in range(5):
t = Thread(target=do_work, args=(i,))
t.start()
</code></pre>
<p>And I got the following outputs when I repeatedly observed the same program:</p>
<p><strong>Output 1:</strong></p>
<pre><code>Starting thread 0
Starting thread 1
Starting thread 2
Starting thread 3
Starting thread 4
Ending thread 1
Ending thread 0
Ending thread 4
Ending thread 3
Ending thread 2
</code></pre>
<p><strong>Output 2:</strong></p>
<pre><code>Starting thread 0
Starting thread 1
Starting thread 2
Starting thread 3
Starting thread 4
Ending thread 4
Ending thread 2
Ending thread 3
Ending thread 0
Ending thread 1
</code></pre>
<p><strong>Output 3:</strong></p>
<pre><code>Starting thread 0
Starting thread 1
Starting thread 2
Starting thread 3
Starting thread 4
Ending thread 4
Ending thread 1
Ending thread 3
Ending thread 2
Ending thread 0
</code></pre>
<p><strong>My Question:</strong></p>
<p>It is clear from the output that the GIL is putting the interrupted thread back at the end of the waiting queue.</p>
<p>And this is the reason why <code>Starting thread <thread number></code> is printed every time before any <code>Ending thread <thread number></code>.</p>
<p>But since the threads are started in order, i.e., first the 0th, then the 1st, then the 2nd, ..., then the 4th, and they are put back at the end of the Ready Queue, why aren't they ending in the same order?</p>
<p>What factors are changing this order?</p>
|
<python><python-3.x><multithreading><gil>
|
2024-12-16 18:24:56
| 1
| 5,381
|
Deepak Tatyaji Ahire
|
79,285,657
| 5,334,367
|
Python Mako: JS not working in rendered html
|
<p>I have got the following folder structure</p>
<pre><code>daemon.py
template.mako
html/
main.html
javascript/
script.js
</code></pre>
<p>Now my daemon is running a wsgi server, using bottle. The endpoint does the following:</p>
<pre class="lang-py prettyprint-override"><code>@route("/main")
def main():
lookup = TemplateLookup(directories=["html", "html/javascript"])
return Template(filename="template.mako", lookup=lookup).render()
</code></pre>
<p>The mako template now includes main.html (which contains the header) and does some looped function execution, rendering a bulma card item for each turn of the loop. A button is supposed to expand the card via script.js, which in turn is included via <code><script type="text/javascript" src="javascript/script.js"></script></code>.</p>
<p>I've tried many different ways of providing the right path in <code>src</code> and also moving the <code><script></code> element from main.html to template.mako. The site itself is rendering just fine, correctly including the header. But besides the 200 for the site itself, I always get a 404 for the script upon loading the page.</p>
<p>Does anyone have an idea, what I am doing wrong?</p>
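<p>A likely missing piece (a sketch; the root path is an assumption based on the folder structure above): bottle only serves what a route handles, so the JS file needs its own static-file route, otherwise the browser's request for <code>javascript/script.js</code> has nothing to answer it and 404s.</p>
<pre class="lang-py prettyprint-override"><code>from bottle import route, static_file

@route("/javascript/<filename>")
def serve_js(filename):
    # serve files from html/javascript relative to the daemon's working directory
    return static_file(filename, root="html/javascript")
</code></pre>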
|
<javascript><python><mako>
|
2024-12-16 18:06:00
| 1
| 673
|
Robert
|
79,285,637
| 13,812,982
|
How to get the default parameters of a postGres connection?
|
<p>I am experiencing an issue where my Postgres client connection object goes 'stale' if I have not run a query on it for a while (say 1 hour). I believe it probably involves the idle timeout or keep-alive process. Attempting to use the connection object causes the code to block for around 5 minutes before throwing an exception with this text:</p>
<blockquote>
<p>SQL Error: server closed the connection unexpectedly This probably
means the server terminated abnormally before or while processing the
request.</p>
</blockquote>
<p>When I look <a href="https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS" rel="nofollow noreferrer">here</a> it suggests that the <code>psycopg2.connect()</code> function will recognize various parameters for the <code>connect()</code> method, some of which seem related to timeouts.</p>
<p>For example <code>keepalives_idle</code>, the doc says that "Controls the number of seconds of inactivity after which TCP should send a keepalive message to the server. <em>A value of zero uses the system default</em>. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where TCP_KEEPIDLE or an equivalent socket option is available, and on Windows; on other systems, it has no effect." (I am on Windows).</p>
<p>But how do I find out which <code>system default</code> is being used?</p>
<p>If I try:</p>
<p><code>conn.info.parameter_status('keepalives_idle')</code></p>
<p>The result is <code>None</code>.</p>
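<p>For what it's worth, the libpq keepalive parameters can also be passed explicitly to <code>psycopg2.connect()</code>, which sidesteps the question of what the system default is. The values below are illustrative, not recommendations, and the connection details are placeholders:</p>
<pre><code>import psycopg2

conn = psycopg2.connect(
    host="localhost",
    dbname="mydb",           # hypothetical connection details
    user="me",
    password="secret",
    keepalives=1,            # enable TCP keepalives
    keepalives_idle=60,      # seconds of inactivity before the first keepalive
    keepalives_interval=10,  # seconds between keepalives
    keepalives_count=3,      # dropped keepalives before the connection is considered dead
)
</code></pre>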
|
<python><postgresql><psycopg2>
|
2024-12-16 17:58:30
| 0
| 4,331
|
DS_London
|
79,285,538
| 2,329,474
|
Add auth to FastAPI via Middleware and include in OpenAPI
|
<p><em>Basic goal</em>: A means of authenticating most (but not all) routes in one place (a separate file), seamlessly so that I don't have to add code/documentation to every one of the routes in order to get auth or openapi to work.</p>
<p>I would like to implement a basic <code>Bearer</code> Authorization in FastAPI, but do so in class-based middleware. I've done this successfully, but I have not seen how to do this in a way that OpenAPI is aware of it (i.e., there's no way to set auth tokens in the docs).</p>
<pre class="lang-py prettyprint-override"><code>class AuthMiddleware:
def __init__(self, settings):
self.settings = settings
async def __call__(self, request: Request, call_next):
if self.is_protected_url(request.method, request.url.path):
auth = request.headers.get("Authorization")
scheme, data = (auth or " ").split(" ", 1)
if scheme != "Bearer":
return JSONResponse(None, 401, {"WWW-Authenticate": "Bearer"})
if data != self.settings.auth_token:
return JSONResponse(None, 401, {"WWW-Authenticate": "Bearer"})
return await call_next(request)
def is_protected_url(self, method, url):
""" does what it does """
</code></pre>
<p>In the main file:</p>
<pre class="lang-py prettyprint-override"><code>from path.to.auth import AuthMiddleware
auth_middleware = AuthMiddleware(settings)
app.middleware("http")(auth_middleware)
</code></pre>
<p>There seem to be some existing auth classes to use, but I can't figure out how to properly get them to work in class-based middleware that lives in a separate file (<code>Depends</code> doesn't resolve).</p>
<p>See</p>
<ul>
<li><a href="https://stackoverflow.com/a/74088523">https://stackoverflow.com/a/74088523</a></li>
<li><a href="https://stackoverflow.com/a/71107848">https://stackoverflow.com/a/71107848</a></li>
<li><a href="https://stackoverflow.com/a/73464007">https://stackoverflow.com/a/73464007</a></li>
<li><a href="https://github.com/florimondmanca/awesome-asgi?tab=readme-ov-file#authentication" rel="nofollow noreferrer">https://github.com/florimondmanca/awesome-asgi?tab=readme-ov-file#authentication</a></li>
<li><a href="https://fastapi.tiangolo.com/reference/security/#fastapi.security.APIKeyCookie--example" rel="nofollow noreferrer">https://fastapi.tiangolo.com/reference/security/#fastapi.security.APIKeyCookie--example</a></li>
</ul>
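<p>One pattern that keeps the auth check in one place and shows up in OpenAPI (a sketch of one approach, not necessarily the only one; the token constant is a placeholder for your settings object): express the check as a dependency built on <code>fastapi.security.HTTPBearer</code> and attach it to a router that carries the protected routes, while unprotected routes stay on the app directly.</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import APIRouter, Depends, FastAPI, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

AUTH_TOKEN = "change-me"   # placeholder for settings.auth_token
bearer = HTTPBearer(auto_error=False)

def require_token(creds: HTTPAuthorizationCredentials = Depends(bearer)):
    if creds is None or creds.credentials != AUTH_TOKEN:
        raise HTTPException(status.HTTP_401_UNAUTHORIZED,
                            headers={"WWW-Authenticate": "Bearer"})

protected = APIRouter(dependencies=[Depends(require_token)])

@protected.get("/secret")
def secret():
    return {"ok": True}

app = FastAPI()
app.include_router(protected)   # /docs now shows an Authorize button for these routes
</code></pre>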
|
<python><authorization><fastapi><openapi><middleware>
|
2024-12-16 17:22:28
| 1
| 373
|
dmcblue
|
79,285,449
| 17,142,551
|
Find average rate per group in specific years using groupby transform
|
<p>I'm trying to find a better/faster way to do this. I have a rather large dataset (~200M rows) with individual dates per row. I want to find the average yearly rate per group from 2018 to 2019. I know I could create a small df with the results and merge it back in, but I was trying to find a way to use transform. Not sure if it would just be faster to merge. Extra points for one-liners.</p>
<p>Sample data</p>
<pre><code>rng = np.random.default_rng(seed=123)
df = pd.DataFrame({'group':rng.choice(list('ABCD'), 100),
'date':[(pd.to_datetime('2018')+pd.Timedelta(days=x)).normalize() for x in rng.integers(0, 365*5, 100)],
'foo':rng.integers(1, 100, 100),
'bar':rng.integers(50, 200, 100)})
df['year'] = df['date'].dt.year
</code></pre>
<p>This works</p>
<pre><code>#find average 2018 and 2019 'foo' and 'bar'
for col in ['foo', 'bar']:
for y in [2018, 2019]:
df[col+'_'+str(y)+'_total'] = df.groupby('group')['year'].transform(lambda x: df.loc[x.where(x==y).dropna().index, col].sum())
#find 2018 and 2019 rates
for y in [2018, 2019]:
df['rate_'+str(y)] = df['foo_'+str(y)+'_total'].div(df['bar_'+str(y)+'_total'])
#find average rate
df['2018_2019_avg_rate'] = df[['rate_2018', 'rate_2019']].mean(axis=1)
</code></pre>
<p>Things I've tried that don't quite work (I'm using <code>apply</code> to test if it works before switching to <code>transform</code>)</p>
<pre><code>#gives yearly totals for each year and each column, but further 'apply'ing to find rates then averaging isn't working after I switch to transform
df.groupby(['group', 'year'])['year'].apply(lambda x: df.loc[x.where(x.between(2018, 2019)).dropna().index, ['foo', 'bar']].sum())
#close but is averaging too early
df.groupby(['group', 'year'])['year'].apply(lambda x: df.loc[i, 'foo'].sum()/denom if (denom:=df.loc[i:=x.where(x.between(2018, 2019)).dropna().index, 'bar'].sum())>0 else np.nan)
</code></pre>
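<p>For comparison, a sketch of the merge-based route, using the sample df above; whether it beats the transform approach at 200M rows would need measuring, but it avoids the per-group lambdas:</p>
<pre><code># aggregate once per (group, year), compute the yearly rates,
# average 2018/2019, then merge the result back onto the original frame
sub = df[df['year'].between(2018, 2019)]
rates = (sub.groupby(['group', 'year'])[['foo', 'bar']].sum()
            .assign(rate=lambda t: t['foo'] / t['bar'])['rate']
            .unstack('year'))
avg = rates[[2018, 2019]].mean(axis=1).rename('2018_2019_avg_rate')
df = df.merge(avg, left_on='group', right_index=True, how='left')
</code></pre>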
|
<python><pandas>
|
2024-12-16 16:55:33
| 2
| 1,842
|
amance
|
79,285,328
| 9,476,917
|
API Call working with Postman, failing with Python's request
|
<p>If I test a Postman POST request, I receive a status code 200 and a valid response, as expected. When I try to use Python's <code>requests</code> library, I get the following error code:</p>
<blockquote>
<p>requests.exceptions.ConnectionError:
HTTPSConnectionPool(host='myurl.com', port=443): Max retries exceeded
with url: /RestApi/v1/Authentication/RequestToken (Caused by
NameResolutionError("<urllib3.connection.HTTPSConnection object at
0x000001A95F827220>: Failed to resolve 'myurl.com' ([Errno 11001]
getaddrinfo failed)"))</p>
</blockquote>
<p>I tried my own implementation of the python request as well as the suggested code by postman:</p>
<pre><code>import requests
import json
url = "https://myurl.com/RestApi/v1/Authentication/RequestToken"
payload = json.dumps({
"Credentials": {
"Username": "myUser",
"Password": "myPassword"
}
})
headers = {
'Prefer': 'respond-async',
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
</code></pre>
<p>The requests are made from the same client (my laptop). Any idea why postman works while python fails?</p>
|
<python><post><python-requests><request><postman>
|
2024-12-16 16:06:45
| 0
| 755
|
Maeaex1
|
79,285,301
| 13,546,726
|
Whoosh/searching.py name "q" is not defined
|
<p>I'm using the Whoosh library, version 2.7.4.
When trying to run a search:</p>
<pre><code>query = rule['parse_tree'].to_whoosh_query()
hits = searcher.search(query, limit = None)
</code></pre>
<p>I'm getting an error inside site-packages/whoosh/searching.py:
name 'q' is not defined</p>
<p>What could be causing this error?</p>
<p><strong>stack trace:</strong></p>
<pre><code>NameError Traceback (most recent call last)
<ipython-input-22-caefc8e0229a> in <module>
----> 1 search_results = search_rules(parsed_rules_by_level, df_full)
<ipython-input-5-1a9f0507c3e6> in search_rules(parsed_rules_by_level, df_full, search_dates)
369 continue
370 query = rule['parse_tree'].to_whoosh_query()
--> 371 hits = searcher.search(query, limit=None)
372 for hit in hits:
373 session_id = hit['unique_session_id']
~/.local/lib/python3.6/site-packages/whoosh/searching.py in search(self, q, **kwargs)
784 c = self.collector(**kwargs)
785 # Call the lower-level method to run the collector
--> 786 self.search_with_collector(q, c)
787 # Return the results object from the collector
788 return c.results()
~/.local/lib/python3.6/site-packages/whoosh/searching.py in search_with_collector(self, q, collector, context)
817 collector.prepare(self, q, context)
818
--> 819 collector.run()
820
821 def correct_query(self, q, qstring, correctors=None, terms=None, maxdist=2,
~/.local/lib/python3.6/site-packages/whoosh/query/spans.py in estimate_size(self, ixreader)
558
559 def estimate_size(self, ixreader):
--> 560 return self._and_query().estimate_size(ixreader)
561
562 def estimate_min_size(self, ixreader):
~/.local/lib/python3.6/site-packages/whoosh/query/spans.py in _and_query(self)
555
556 def _and_query(self):
--> 557 return q.And(self.qs)
558
559 def estimate_size(self, ixreader):
NameError: name 'q' is not defined
</code></pre>
|
<python><whoosh>
|
2024-12-16 15:59:32
| 0
| 309
|
Sam324
|
79,285,272
| 4,714,567
|
ModuleNotFoundError: No module named 'torch', but torch is installed
|
<p>I'm trying to use (and have successfully installed) Layout Parser, which requires detectron2 for certain functionality. While trying to install detectron2, I ran into the following error:</p>
<pre><code>> python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
[snip]
ModuleNotFoundError: No module named 'torch'
[end of output]
[/snip]
</code></pre>
<p>Setting the flags as specified in the <a href="https://detectron2.readthedocs.io/en/latest/tutorials/install.html" rel="nofollow noreferrer">installation instructions</a> does nothing:</p>
<pre><code>CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" pip3 install 'git+https://github.com/facebookresearch/detectron2.git@v0.4#egg=detectron2'
[same output]
</code></pre>
<ul>
<li>I have torch (2.4.1) and torchvision (0.19.1) installed, and the versions match.</li>
<li>I'm on macOS Sequoia 15.1.1</li>
<li>I'm using Python 3.10.14 and pip 24.2</li>
</ul>
<p>Full output of the installation command is at <a href="https://pastebin.com/YbZ3u3A5" rel="nofollow noreferrer">this pastebin</a>.</p>
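<p>One thing that may be worth trying (an assumption about the cause, not a certainty): modern pip builds packages in an isolated environment that does not see the already-installed torch, which can produce exactly this <code>ModuleNotFoundError</code> at build time. Disabling build isolation makes the build use the current environment:</p>
<pre><code># make sure `python -c "import torch"` works in this environment first, then:
python -m pip install --no-build-isolation 'git+https://github.com/facebookresearch/detectron2.git'
</code></pre>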
|
<python><installation><pytorch><pip>
|
2024-12-16 15:50:24
| 1
| 1,639
|
Niek
|
79,285,243
| 320,475
|
Python typing and Redis asyncio: telling the type checker to expect an Awaitable
|
<p>Minimal example:</p>
<pre><code>import redis.asyncio as redis
client = redis.Redis()
client.hset(key, mapping=...)
</code></pre>
<p>Pylance complains about it like so:</p>
<pre><code>"int" is not awaitable
"int" is incompatible with protocol "Awaitable[_T_co@Awaitable]"
"__await__" is not presentPylancereportGeneralTypeIssues
</code></pre>
<p><code>hset</code> is declared as <code>(Awaitable[int] | int)</code>.</p>
<p>The simplest solution seems to be to <code>cast(Awaitable, client.hset(...))</code>. Is there any other, DRY-er way to tell the type checker that I am expecting all Redis commands to return an <code>Awaitable</code>, so I don't have to <code>cast()</code> to <code>Awaitable</code> every time I touch Redis?</p>
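<p>A small helper can at least centralize the cast so call sites stay clean (a sketch; it only narrows the type for the checker and does not change runtime behaviour):</p>
<pre><code>from typing import Awaitable, TypeVar, Union, cast

T = TypeVar("T")

def awaited(result: Union[Awaitable[T], T]) -> Awaitable[T]:
    """Tell the type checker this Redis call returns an Awaitable (async client)."""
    return cast(Awaitable[T], result)

# usage:
# await awaited(client.hset(key, mapping=...))
</code></pre>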
|
<python><redis><python-asyncio><python-typing><redis-py>
|
2024-12-16 15:43:23
| 0
| 6,157
|
maligree
|
79,285,068
| 4,436,517
|
Setting slice of column to list of values on polars dataframe
|
<p>In the code below I'm creating a polars- and a pandas dataframe with identical data. I want to select a set of rows based on a condition on column <code>A</code>, then update the corresponding rows for column <code>C</code>. I've included how I would do this with the pandas dataframe, but I'm coming up short on how to get this working with polars. The closest I've gotten is by using <code>when-then-otherwise</code>, but I'm unable to use anything other than a single value in <code>then</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import polars as pl
df_pd = pd.DataFrame({'A': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y'],
'B': [1, 1, 2, 2, 1, 1, 2, 2],
'C': [1, 2, 3, 4, 5, 6, 7, 8]})
df_pl = pl.DataFrame({'A': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y'],
'B': [1, 1, 2, 2, 1, 1, 2, 2],
'C': [1, 2, 3, 4, 5, 6, 7, 8]})
df_pd.loc[df_pd['A'] == 'x', 'C'] = [-1, -2, -3, -4]
df_pl ???
</code></pre>
<p>Expected output:</p>
<pre><code>┌─────┬─────┬─────┐
│ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ x ┆ 1 ┆ -1 │
│ x ┆ 1 ┆ -2 │
│ x ┆ 2 ┆ -3 │
│ x ┆ 2 ┆ -4 │
│ y ┆ 1 ┆ 5 │
│ y ┆ 1 ┆ 6 │
│ y ┆ 2 ┆ 7 │
│ y ┆ 2 ┆ 8 │
└─────┴─────┴─────┘
</code></pre>
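<p>A deliberately plain sketch (not the idiomatic polars expression API): build the replacement column in Python and attach it with <code>with_columns</code>; the replacement values are consumed in row order wherever the condition holds.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

new_values = iter([-1, -2, -3, -4])
updated = [next(new_values) if a == "x" else c
           for a, c in zip(df_pl["A"], df_pl["C"])]
df_pl = df_pl.with_columns(pl.Series("C", updated))
</code></pre>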
|
<python><dataframe><python-polars>
|
2024-12-16 14:48:27
| 4
| 1,159
|
rindis
|
79,285,066
| 6,038,082
|
Need help on fixing tkinter error on editing tk.Entry , move button up and saving in file from gui inputs
|
<p>I have a tkinter GUI in Python which reads an input cfg file that looks like the one below:<br></p>
<p>Input cfg file <br></p>
<pre><code>[job1]
EXEC_PATH = /usr/path
RUNIT = echo job1
TYPES = ['schedule' , 'tumbling']
[job2]
EXEC_PATH = /usr/path
RUNIT = echo job2
PARENTS = job1
TYPES = [ 'tumbling']
[job3]
EXEC_PATH = /usr/path
RUNIT = echo job3
PARENTS = job2
</code></pre>
<p>The tkinter GUI shows the order of the jobs based on the PARENTS option defined in the config file.</p>
<p>Problems<br></p>
<ol>
<li>I have a context menu defined on each node/job in the GUI. If I consecutively click on 'Show job details' and 'Edit job details', or sometimes even on a single click on one of the menu entries, the text below appears in the right-side tk.Entry box:</li>
</ol>
<blockquote>
<p>.!panedwindow.!frame.!frame2.!frame.!entry</p>
</blockquote>
<ol start="2">
<li><p>After editing in the right-side tk.Entry box, if I press 'Save' the edited information should be saved to a cfg file on disk, which is not happening.</p>
</li>
<li><p>Can I move the 'Validate' button up, so that it sits just beneath where the job/node params end?</p>
</li>
<li><p>Instead of a dialog box validating the input file, can I use a label with green and red text on success and failure respectively?</p>
</li>
</ol>
<p>Below is my code :</p>
<pre><code>import tkinter as tk
from tkinter import ttk, filedialog, messagebox
from argparse import ArgumentParser
import configparser
import os
import subprocess
def parse():
parser = ArgumentParser()
parser.add_argument("-input_file", help="Input input file.", required=False)
parser.add_argument("-type", help="Filter sections by types.", choices=['schedule', 'tumbling'], required=False)
args = parser.parse_args()
return args
class JobLevels:
def __init__(self, input_file=None, type_filter=None):
self.input_file = input_file
self.type_filter = type_filter
self.deps_dict = {}
self.level_dict = {}
self.visited_files = set()
self.sections = {}
def parse_input_file(self, input_file):
if input_file in self.visited_files:
return
self.visited_files.add(input_file)
config = configparser.ConfigParser(allow_no_value=True)
config.optionxform = str # Preserve case sensitivity
config.read(input_file)
for section in config.sections():
if section == 'include':
for include_file in config.options(section):
include_file_path = os.path.join(os.path.dirname(input_file), include_file)
self.parse_input_file(include_file_path)
else:
self.sections[section] = {}
self.sections[section]['EXEC_PATH'] = config.get(section, 'EXEC_PATH', fallback=None)
self.sections[section]['RUNIT'] = config.get(section, 'RUNIT', fallback=None)
self.sections[section]['PARENTS'] = config.get(section, 'PARENTS', fallback=None)
self.sections[section]['TYPES'] = config.get(section, 'TYPES', fallback=None)
self.sections[section]['RATING'] = config.get(section, 'RATING', fallback='0')
def filter_sections_by_type(self):
if not self.type_filter:
return
filtered_sections = {}
for section, attributes in self.sections.items():
types = attributes['TYPES']
if types:
types_list = [type.strip() for type in types.strip('[]').split(',')]
if self.type_filter in types_list:
filtered_sections[section] = attributes
else:
# If TYPES key is not present
filtered_sections[section] = attributes
self.sections = filtered_sections
def build_deps_dict(self):
for section, attributes in self.sections.items():
deps = attributes['PARENTS']
if deps:
deps_list = [dep.strip() for dep in deps.split(',')]
self.deps_dict[section] = deps_list
def assign_level(self, section):
if section not in self.level_dict:
if section not in self.deps_dict or not self.deps_dict[section]:
self.level_dict[section] = 0
else:
levels = []
for dep_section in self.deps_dict[section]:
temp_level = self.assign_level(dep_section)
levels.append(temp_level)
self.level_dict[section] = max(levels) + 1
return self.level_dict[section]
def start_proc(self):
if self.input_file:
self.visited_files.clear()
self.sections.clear()
self.deps_dict.clear()
self.level_dict.clear()
self.parse_input_file(self.input_file)
self.filter_sections_by_type()
self.build_deps_dict()
for section in self.deps_dict:
self.assign_level(section)
class Viewer:
def __init__(self, root, node_order_finder):
self.root = root
self.node_order_finder = node_order_finder
self.main_paned_window = ttk.PanedWindow(root, orient=tk.HORIZONTAL)
self.main_paned_window.pack(fill=tk.BOTH, expand=True)
self.left_paned_window = ttk.PanedWindow(self.main_paned_window, orient=tk.VERTICAL)
self.main_paned_window.add(self.left_paned_window, weight=3)
self.right_panel = tk.Frame(self.main_paned_window, width=200, bg="lightgray")
self.main_paned_window.add(self.right_panel, weight=1)
self.top_frame = tk.Frame(self.left_paned_window)
self.left_paned_window.add(self.top_frame, weight=3)
self.bottom_panel = tk.Frame(self.left_paned_window, height=200, bg="lightgray")
self.left_paned_window.add(self.bottom_panel, weight=1)
self.canvas = tk.Canvas(self.top_frame, bg="white")
self.h_scrollbar = tk.Scrollbar(self.top_frame, orient=tk.HORIZONTAL, command=self.canvas.xview)
self.v_scrollbar = tk.Scrollbar(self.top_frame, orient=tk.VERTICAL, command=self.canvas.yview)
self.scrollable_frame = tk.Frame(self.canvas)
self.scrollable_frame.bind(
"<Configure>",
lambda e: self.canvas.configure(
scrollregion=self.canvas.bbox("all")
)
)
self.canvas.create_window((0, 0), window=self.scrollable_frame, anchor="nw")
self.canvas.configure(xscrollcommand=self.h_scrollbar.set, yscrollcommand=self.v_scrollbar.set)
self.h_scrollbar.pack(side=tk.BOTTOM, fill=tk.X)
self.v_scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
self.canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
self.type_var = tk.StringVar(value="All")
self.create_widgets()
self.draw_graph()
def create_widgets(self):
control_frame = tk.Frame(self.right_panel)
control_frame.pack(side=tk.TOP, fill=tk.X)
type_label = tk.Label(control_frame, text="Event:")
type_label.pack(side=tk.LEFT, padx=5, pady=5)
type_options = ["All", "schedule", "tumbling"]
type_menu = ttk.Combobox(control_frame, textvariable=self.type_var, values=type_options)
type_menu.pack(side=tk.LEFT, padx=5, pady=5)
type_menu.bind("<<ComboboxSelected>>", self.on_type_change)
browse_button = tk.Button(control_frame, text="Browse input File", command=self.browse_file)
browse_button.pack(side=tk.LEFT, padx=5, pady=5)
save_button = tk.Button(control_frame, text="Save", command=self.save_input_file)
save_button.pack(side=tk.LEFT, padx=5, pady=5)
self.stage_params_frame = tk.Frame(self.right_panel)
self.stage_params_frame.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
self.validate_button = tk.Button(self.right_panel, text="Validate", command=self.validate_input)
self.validate_button.pack(side=tk.TOP, padx=5, pady=5)
def browse_file(self):
input_file = filedialog.askopenfilename(filetypes=[("input files", "*.input")])
if input_file:
self.node_order_finder.input_file = input_file
self.node_order_finder.start_proc()
self.draw_graph()
def save_input_file(self):
save_file = filedialog.asksaveasfilename(defaultextension=".input", filetypes=[("input files", "*.input")])
if save_file:
config = configparser.ConfigParser()
for section, attributes in self.node_order_finder.sections.items():
config[section] = {k: v.get() if isinstance(v, tk.Entry) else v for k, v in attributes.items()}
with open(save_file, 'w') as configfile:
config.write(configfile)
self.node_order_finder.input_file = save_file
self.node_order_finder.start_proc()
self.draw_graph()
def validate_input(self):
try:
result = subprocess.run(['ls', self.node_order_finder.input_file], capture_output=True, text=True)
if result.returncode == 0:
messagebox.showinfo("Validation", "Validation successful")
else:
messagebox.showerror("Validation", f"Error: {result.stderr}")
except Exception as e:
messagebox.showerror("Validation", f"Error: {str(e)}")
def on_type_change(self, type):
self.node_order_finder.type_filter = self.type_var.get() if self.type_var.get() != "All" else None
self.node_order_finder.start_proc()
self.draw_graph()
def draw_graph(self):
for widget in self.scrollable_frame.winfo_children():
widget.destroy()
self.level_frames = {}
levels = {}
for section, level in self.node_order_finder.level_dict.items():
if level not in levels:
levels[level] = []
levels[level].append(section)
colors = ["lightblue", "lightgreen", "lightyellow", "lightpink", "lightgray"]
for level, nodes in sorted(levels.items()):
level_frame = tk.Frame(self.scrollable_frame, bg=colors[level % len(colors)], bd=2, relief=tk.SOLID)
level_frame.pack(fill=tk.X, padx=10, pady=5)
self.level_frames[level] = level_frame
level_label = tk.Label(level_frame, text=f"Level {level}", bg=colors[level % len(colors)], font=("Arial", 12, "bold"), anchor="w")
level_label.pack(side=tk.TOP, fill=tk.X)
for node in nodes:
self.draw_node(level_frame, node)
def draw_node(self, parent, node):
level = self.node_order_finder.level_dict.get(node, 0)
label = f'{node}({level})'
if node in self.node_order_finder.sections:
if self.node_order_finder.sections[node]['RATING'] == '1':
color = 'lightblue'
else:
color = 'skyblue'
fg_color = 'darkblue'
node_label = tk.Label(parent, text=label, bg=color, fg=fg_color, font=("Arial", 10), bd=1, relief=tk.SOLID, padx=5, pady=5)
node_label.pack(side=tk.LEFT, padx=5, pady=5)
node_label.bind("<Button-3>", lambda type, node=node: self.show_context_menu(type, node))
def show_context_menu(self, type, node):
context_menu = tk.Menu(self.root, tearoff=0)
context_menu.add_command(label="Show job details", command=lambda: self.show_stage_params(node, readonly=True))
context_menu.add_command(label="Edit job details", command=lambda: self.show_stage_params(node, readonly=False))
context_menu.add_command(label="Show Upward Dependency", command=lambda: self.show_upward_dependency(node))
context_menu.add_command(label="Show Downward Dependency", command=lambda: self.show_downward_dependency(node))
context_menu.post(type.x_root, type.y_root)
self.root.bind("<Button-1>", lambda e: context_menu.unpost())
def show_stage_params(self, node, readonly):
for widget in self.stage_params_frame.winfo_children():
widget.destroy()
selected_node_label = tk.Label(self.stage_params_frame, text=f"Stage: {node}", font=("Arial", 12, "bold"))
selected_node_label.pack(anchor="w")
params = self.node_order_finder.sections[node]
for param, value in params.items():
param_frame = tk.Frame(self.stage_params_frame)
param_frame.pack(fill=tk.X, pady=2)
param_label = tk.Label(param_frame, text=f"{param}:")
param_label.pack(side=tk.LEFT)
param_entry = tk.Entry(param_frame)
param_entry.insert(0, value)
param_entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
if readonly:
param_entry.config(state='readonly')
params[param] = param_entry
if not readonly:
add_button = tk.Button(self.stage_params_frame, text="+", command=lambda: self.add_param(node))
add_button.pack(side=tk.TOP, padx=5, pady=5)
self.validate_button.pack(side=tk.TOP, padx=5, pady=5)
def add_param(self, node):
param_frame = tk.Frame(self.stage_params_frame)
param_frame.pack(fill=tk.X, pady=2)
param_label = tk.Entry(param_frame)
param_label.pack(side=tk.LEFT)
param_entry = tk.Entry(param_frame)
param_entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
remove_button = tk.Button(param_frame, text="-", command=lambda: self.remove_param(param_frame))
remove_button.pack(side=tk.LEFT, padx=5, pady=5)
self.stage_params_frame.params[param_label] = param_entry
def remove_param(self, param_frame):
param_frame.destroy()
def show_upward_dependency(self, node):
for widget in self.bottom_panel.winfo_children():
widget.destroy()
selected_node_label = tk.Label(self.bottom_panel, text=f"Stage: {node}", font=("Arial", 12, "bold"))
selected_node_label.pack(anchor="w")
upward_deps = [dep for dep, deps in self.node_order_finder.deps_dict.items() if node in deps]
self.display_dependencies(upward_deps, "Upward Dependencies")
def show_downward_dependency(self, node):
for widget in self.bottom_panel.winfo_children():
widget.destroy()
selected_node_label = tk.Label(self.bottom_panel, text=f"Stage: {node}", font=("Arial", 12, "bold"))
selected_node_label.pack(anchor="w")
downward_deps = self.node_order_finder.deps_dict.get(node, [])
self.display_dependencies(downward_deps, "Downward Dependencies")
def display_dependencies(self, dependencies, title):
title_label = tk.Label(self.bottom_panel, text=title, font=("Arial", 12, "bold"))
title_label.pack(anchor="w")
for dep in dependencies:
dep_label = tk.Label(self.bottom_panel, text=dep, font=("Arial", 10))
dep_label.pack(anchor="w")
if __name__ == '__main__':
args = parse()
job_obj = JobLevels(args.input_file, args.type)
job_obj.start_proc()
root = tk.Tk()
root.title("Job Visualization")
app = Viewer(root, job_obj)
root.geometry("800x600")
root.mainloop()
</code></pre>
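<p>Regarding problem 1, one thing that stands out in <code>show_stage_params</code> is <code>params[param] = param_entry</code>: because <code>params</code> is <code>self.node_order_finder.sections[node]</code>, this overwrites the stored string values with the Entry widgets themselves, so the next time the menu is used <code>param_entry.insert(0, value)</code> inserts <code>str(widget)</code> — which is exactly the <code>.!panedwindow...!entry</code> text, and it also breaks the save. A sketch of one way to keep the widgets separate (the attribute name is hypothetical):</p>
<pre class="lang-py prettyprint-override"><code># Sketch for inside show_stage_params: keep Entry widgets in a separate dict so
# the section values stay plain strings (self.param_entries is a new attribute).
self.param_entries = {}
for param, value in params.items():
    param_frame = tk.Frame(self.stage_params_frame)
    param_frame.pack(fill=tk.X, pady=2)
    tk.Label(param_frame, text=f"{param}:").pack(side=tk.LEFT)
    entry = tk.Entry(param_frame)
    entry.insert(0, value)
    entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
    self.param_entries[param] = entry          # widgets here, not in `params`

# and in save_input_file, read the edits back as strings, e.g.:
# config[section] = {k: e.get() for k, e in self.param_entries.items()}
</code></pre>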
|
<python><tkinter>
|
2024-12-16 14:47:48
| 0
| 1,014
|
A.G.Progm.Enthusiast
|
79,285,011
| 4,788,260
|
How does Py4J single-threading model work with FastAPI in a high-concurrency scenario?
|
<p>I’m integrating Py4J with a FastAPI application where multiple concurrent API requests need to communicate with a Java application. I’ve read the Py4J <a href="https://www.py4j.org/advanced_topics.html#the-single-threading-model" rel="nofollow noreferrer">single-threading model</a> documentation, but I have questions about connection management:</p>
<ol>
<li>Does the <code>client_server.jvm</code> object always use the same thread for requests originating from a single Python thread? If so, does this behavior persist when multiple FastAPI requests share the same <code>client_server</code> instance?</li>
</ol>
<pre class="lang-py prettyprint-override"><code>client_server = ClientServer(java_parameters=JavaParameters(address=remote_host))
client_server.shutdown_callback_server()
</code></pre>
<ol start="2">
<li><p>Is it better to share a single <code>ClientServer</code> instance across requests, or should I create a new instance for each request? Would a connection pool with multiple <code>ClientServer</code> instances improve performance in high-concurrency scenarios?</p>
</li>
<li><p>If I reuse the <code>client_server</code> instance in FastAPI across multiple requests, does it retain any persistent resources (e.g., imports, variables) between calls that could cause issues? If the same FastAPI thread is reused for a future request, does the import remain, or how is it collected by GC? Are there any risks of conflicts when the same <code>ClientServer</code> instance is reused for multiple requests?</p>
</li>
</ol>
<pre class="lang-py prettyprint-override"><code>jvm = client_server.jvm
java_import(jvm, "<java class>")
</code></pre>
<p>Any advice on making this setup scalable and performant would be greatly appreciated!</p>
|
<python><multithreading><fastapi><py4j>
|
2024-12-16 14:31:56
| 0
| 4,065
|
Sanjay Sharma
|
79,284,760
| 6,572,639
|
PyMongo Async client not raising exception when connection fails
|
<p>It seems that a pymongo 4.10 async client does not raise an exception when there is a problem with the connection.</p>
<p>Taken from <a href="https://pymongo.readthedocs.io/en/stable/async-tutorial.html#making-a-connection-with-asyncmongoclient" rel="nofollow noreferrer">the doc</a>, a test without any mongo DB running locally yields:</p>
<pre><code>>>> import asyncio
>>> from pymongo import AsyncMongoClient
>>> client = AsyncMongoClient('mongodb://localhost:27017/')
>>> asyncio.run(client.aconnect())
# no errors
</code></pre>
<p>When activating debug logs I see the connection being refused but I would expect an exception to be raised.</p>
<pre><code>>>> import logging
>>> logging.basicConfig(level='DEBUG')
>>> asyncio.run(client.aconnect())
DEBUG:asyncio:Using selector: KqueueSelector
DEBUG:pymongo.topology:{"topologyId": {"$oid": "676020be62e71d3fe6f27721"}, "serverHost": "localhost", "serverPort": 27017, "awaited": false, "durationMS": 2.786167000522255, "failure": "\"AutoReconnect('localhost:27017: [Errno 61] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')\"", "message": "Server heartbeat failed"}
</code></pre>
<p>I would expect the error in the DEBUG log to surface as an exception. Am I misunderstanding something about the async client?</p>
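<p>For what it's worth, a connection problem does surface as an exception once an actual operation (e.g. a ping) is awaited; whether <code>aconnect()</code> itself is supposed to raise is the open question, but this is a sketch of a way to fail fast (timeout value is illustrative):</p>
<pre><code>import asyncio
from pymongo import AsyncMongoClient
from pymongo.errors import ConnectionFailure

async def check():
    client = AsyncMongoClient("mongodb://localhost:27017/",
                              serverSelectionTimeoutMS=2000)  # fail fast
    try:
        await client.admin.command("ping")   # forces server selection
    except ConnectionFailure as exc:
        print("MongoDB unreachable:", exc)

asyncio.run(check())
</code></pre>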
|
<python><mongodb><asynchronous><pymongo>
|
2024-12-16 12:58:33
| 1
| 1,352
|
Plup
|
79,284,739
| 520,556
|
Jupyter Notebook shortcut keys are not working anymore
|
<p>I am running <code>Python</code> 3.11.9 (conda version 24.11.0) on <code>macOS Sonoma</code> 14.7.1. It seems that I cannot use the <code>Jupyter Notebook</code> (<code>ipython</code> version 8.29.0; <code>jupyter notebook</code> version 7.0.6) shortcut keys anymore: I cannot insert a new cell above (A) or below (B) or delete it (D) etc. Nothing works. There are suggestions to use <code>JupyterLab</code>, which seems to work, but that's really not the answer to the question: How can this issue be fixed? Can I downgrade? And if so, what is the recommended version?</p>
<p>Thanks!</p>
|
<python><jupyter-notebook><hotkeys>
|
2024-12-16 12:51:13
| 0
| 1,598
|
striatum
|
79,284,718
| 6,322,924
|
Easiest way to run SLURM on multiple files
|
<p>I have a Python script that processes approximately 10,000 FITS files one by one. For each file, the script generates an output in the same directory as the input files and creates a single CSV file to record statistics about the processed files.</p>
<p>Previously, I parallelized the script using async with multiprocessing pools, but now I have access to a SLURM cluster and would like to run it using SLURM.</p>
<p>What is the simplest way to achieve this?
All the files are stored in the same directory, and there’s no specific order in which they need to be processed.
EDIT: I also need to activate a conda environment before running the Python script. The script should accept a filename and start processing it; I usually pass the filename via args.
Thanks</p>
<p>EDIT update:
I managed to make it work.<br />
First, I created a bash script for submitting jobs:</p>
<pre><code>#!/bin/bash
# Define the directory containing FITS files
INPUT_DIR="input_dir"
LOG_DIR="${INPUT_DIR}/logs"
# Ensure the logs directory exists
mkdir -p "$LOG_DIR"
# List all FITS files and write their paths to a temporary file
find "$INPUT_DIR" -name "*.fits" > file_list.txt
# Loop through each FITS file and submit a SLURM job
while IFS= read -r filepath; do
sbatch run2.sh "$filepath"
done < file_list.txt
</code></pre>
<p>That script calls the run2.sh script, which contains the following:</p>
<pre><code>#!/bin/bash
#SBATCH -p long
#SBATCH -J test
#SBATCH -n 1
#SBATCH -t 00:05:00
#SBATCH --output=file.out
#SBATCH --error=file.err
source miniconda3/bin/activate my_env
# Define variables
# EVENT_PATH="directory_path"
# Run Python script
python3 -u my_python_code.py "$1" "False" 3
</code></pre>
<p>My next concern is that in this way I am creating 10k jobs, because I have 10k images to analyze, although analyzing each image only takes a few seconds.
Maybe there is a smarter way to do it.</p>
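<p>On the 10k-jobs concern, a common pattern is a SLURM job array: one <code>sbatch</code> submission whose tasks each pick a line from <code>file_list.txt</code>, optionally throttled with <code>%N</code>. The sketch below assumes the same file list, conda environment and log directory as above, and tasks could also be batched so each one processes many files.</p>
<pre><code>#!/bin/bash
#SBATCH -p long
#SBATCH -J fits_array
#SBATCH -n 1
#SBATCH -t 00:30:00
#SBATCH --array=1-10000%200        # 10000 tasks, at most 200 running at once
#SBATCH --output=logs/%A_%a.out

source miniconda3/bin/activate my_env

# pick the file for this array task from the pre-built list
FILE=$(sed -n "${SLURM_ARRAY_TASK_ID}p" file_list.txt)
python3 -u my_python_code.py "$FILE" "False" 3
</code></pre>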
|
<python><jobs><slurm><hpc><sbatch>
|
2024-12-16 12:43:24
| 1
| 607
|
Falco Peregrinus
|
79,284,623
| 9,669,142
|
Reindexing only valid with uniquely valued index objects
|
<p>There are a couple of articles about this issue already, but none of them solve my issue.</p>
<p>I have two sets of Python dataframes (df_A1, df_A2 and df_B1, df_B2) and I want to combine the A's together and the B's together. I can concat the A's just fine, but the B's give me the error 'Reindexing only valid with uniquely valued index objects'. There are a couple of causes that I found online but that won't work for me.</p>
<ul>
<li><p>Duplicate column names.
The B dataframes do indeed have columns with the same name, but so do the A dataframes, and those concatenate just fine in the same code, so this shouldn't be the issue by itself. Also, I used df.index.is_unique, which returns True for all dataframes, but of course df.columns.is_unique returns False.</p>
</li>
<li><p>Empty column names.
I had those as well and made sure all empty column names are now filled using <code>[item or '_empty_' for item in <list_columns>]</code></p>
</li>
<li><p>Using <code>ignore_index=True</code>.
Did that for the A's and B's, the A's are still working with concat and the B's still don't.</p>
</li>
</ul>
<p>I have the traceback here:</p>
<pre><code>Traceback (most recent call last):
File ~\AppData\Local\anaconda3\Lib\site-packages\spyder_kernels\customize\utils.py:209 in exec_encapsulate_locals
exec_fun(compile(code_ast, filename, "exec"), globals)
File c:\<path>\python_script.py:168
df_B = pd.concat([df_B1 , df_B2], axis=0, ignore_index=True)
File ~\AppData\Local\anaconda3\Lib\site-packages\pandas\core\reshape\concat.py:395 in concat
return op.get_result()
File ~\AppData\Local\anaconda3\Lib\site-packages\pandas\core\reshape\concat.py:680 in get_result
indexers[ax] = obj_labels.get_indexer(new_labels)
File ~\AppData\Local\anaconda3\Lib\site-packages\pandas\core\indexes\base.py:3885 in get_indexer
raise InvalidIndexError(self._requires_unique_msg)
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>Does anyone have an idea how to fix this?</p>
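<p>One quick diagnostic worth running (a sketch, and a hedged explanation: concat along axis=0 only needs to reindex the columns — and thus trips over duplicates — when the two frames' column layouts differ, which may be why the A frames get away with it):</p>
<pre><code># show duplicated column names and whether the two frames' columns differ
for name, d in [("df_B1", df_B1), ("df_B2", df_B2)]:
    dupes = d.columns[d.columns.duplicated()].tolist()
    print(name, "duplicate columns:", dupes)
print("same column layout:", list(df_B1.columns) == list(df_B2.columns))
</code></pre>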
|
<python><python-3.x><pandas><concatenation>
|
2024-12-16 12:09:43
| 0
| 567
|
Fish1996
|
79,284,591
| 5,837,773
|
Signature in mexc API - websocket access
|
<p>I am trying to access the WebSocket user data streams in the MEXC API. I have read the <a href="https://mexcdevelop.github.io/apidocs/spot_v3_en/#websocket-user-data-streams" rel="nofollow noreferrer">documentation</a> but still have no luck connecting. I believe I need to create a listen key first, but I have had no luck making a valid signature. This is my code:</p>
<pre><code>def create_listen_key(api_key, api_secret):
url = "https://api.mexc.com/api/v3/userDataStream"
timestamp = str(int(time.time() * 1000))
params = {"please_sign_it": None}
    signature = generate_signature(api_key=api_key, api_secret=api_secret, sign_params=params,
                                   req_time=timestamp)
paramsWithSignature = {
"please_sign_it": None,
"timestamp": timestamp,
"signature": signature
}
headers = {
"X-MEXC-APIKEY": api_key,
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=paramsWithSignature)
if response.status_code == 200:
return json.loads(response.content)
else:
raise Exception(f"Failed to create listen key: {response.text}")
</code></pre>
<p>and this is a function for generating signature</p>
<pre><code>def generate_signature(api_key, api_secret, req_time, sign_params=None):
ic("def generate_signature")
if sign_params:
sign_params = urlencode(sign_params, quote_via=quote)
ic(sign_params)
to_sign = f"{api_key}{req_time}{sign_params}"
else:
to_sign = f"{api_key}{req_time}"
sign = hmac.new(api_secret.encode('utf-8'), to_sign.encode('utf-8'), hashlib.sha256).hexdigest()
return sign
</code></pre>
<p>I believe I have tried all the variants, but no matter what I do:</p>
<pre><code>Exception: Failed to create listen key: {"code":700002,"msg":"Signature for this request is not valid."}
</code></pre>
<p>If someone can suggest how this should be done, I would appreciate it a lot.</p>
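<p>For reference, a sketch of the Binance-style scheme that the MEXC spot v3 REST docs describe (worth double-checking against the docs before relying on it): the signature is an HMAC-SHA256 of the request's parameter string (including <code>timestamp</code>), keyed by the secret, with the API key sent only in the <code>X-MEXC-APIKEY</code> header rather than concatenated into the signed string.</p>
<pre><code>import hashlib
import hmac
import time
from urllib.parse import urlencode

import requests

def signed_post(url, api_key, api_secret, params=None):
    params = dict(params or {})
    params["timestamp"] = str(int(time.time() * 1000))
    query = urlencode(params)
    params["signature"] = hmac.new(api_secret.encode(), query.encode(),
                                   hashlib.sha256).hexdigest()
    headers = {"X-MEXC-APIKEY": api_key}
    return requests.post(url, headers=headers, params=params)

# e.g. signed_post("https://api.mexc.com/api/v3/userDataStream", key, secret)
</code></pre>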
|
<python><python-requests><bots><signature>
|
2024-12-16 11:59:30
| 1
| 409
|
Gregor Sotošek
|
79,284,479
| 13,280,838
|
Snowflake - Error while creating Temp View from snowpark dataframe
|
<p>Hope you are all doing well.</p>
<p>I am facing a weird issue in Snowpark (Python) while creating a temp view from a DataFrame.
I have searched online and, while I have had hits, there is no proper solution.</p>
<p><strong>Issue:</strong></p>
<ul>
<li>I have a python list of lists containing date, directory name, file name. I am creating a dataframe from it. The dataframe is then used to create or replace a temp view.</li>
<li>The code is being run on a python worksheet from within snowflake.</li>
<li>I am randomly receiving an error while creating the view.
<pre class="lang-bash prettyprint-override"><code>snowflake.snowpark.exceptions.SnowparkCreateViewException: (1203): Your dataframe may include DDL or DML operations. Creating a view from this DataFrame is currently not supported.
</code></pre>
</li>
<li>The error occurs very randomly, but I think it happens when the number of records increases. That is just a guess, since the same process works for me with fewer than 200 records but fails with more. Maybe it is just a coincidence.</li>
</ul>
<p><strong>Approaches Tried:</strong></p>
<ul>
<li>Have removed the part where I create additional columns.</li>
<li>Have tried creating a copy of the dataframe using <code>df.select("*")</code> and then using that to create the view.</li>
<li>Have tried caching the dataframe before creating the view.</li>
</ul>
<p><strong>Actual Code:</strong></p>
<pre class="lang-py prettyprint-override"><code> # assume the list is being created in a different process.
cur_run_files_list.append((date, dir_name, file_name))
tst_vw_nm_fq = "DB_NAME.SCHEMA_NAME.TEST_VIEW"
# --------------------------------------------------------------------------------
df_cur_run_files_list = session.create_dataframe(
cur_run_files_list,
schema=["EXP_DATE", "DIR_NAME", "FILE_NAME"],
)
df_cur_run_files_list.create_or_replace_temp_view(tst_vw_nm_fq)
</code></pre>
<p><strong>Test Code - As requested:</strong>
Note: This code fails intermittently only - values hard coded for testing...</p>
<pre class="lang-py prettyprint-override"><code># assume the list is being created in a different process.
# cur_run_files_list.append((date, dir_name, file_name))
cur_run_files_list = [
("2024-12-01", "dir_name01", "file_name01.txt"),
("2024-12-01", "dir_name02", "file_name02.txt"),
("2024-12-01", "dir_name03", "file_name03.txt"),
("2024-12-01", "dir_name04", "file_name04.txt"),
("2024-12-01", "dir_name05", "file_name05.txt"),
("2024-12-01", "dir_name06", "file_name06.txt"),
("2024-12-01", "dir_name07", "file_name07.txt"),
("2024-12-01", "dir_name08", "file_name08.txt"),
("2024-12-01", "dir_name09", "file_name09.txt"),
("2024-12-01", "dir_name10", "file_name10.txt"),
]
tst_vw_nm_fq = "DB_NAME.SCHEMA_NAME.TEST_VIEW"
# --------------------------------------------------------------------------------
df_cur_run_files_list = session.create_dataframe(
cur_run_files_list,
schema=["EXP_DATE", "DIR_NAME", "FILE_NAME"],
)
df_cur_run_files_list.create_or_replace_temp_view(tst_vw_nm_fq)
return df_cur_run_files_list # Added since snowpark needs a return value
</code></pre>
|
<python><dataframe><snowflake-cloud-data-platform>
|
2024-12-16 11:17:11
| 0
| 669
|
rainingdistros
|
79,284,460
| 1,826,066
|
Generate function signature in python from dataclass
|
<p>I want to have a single file that defines a bunch of constants so that I don't need to hardcode them later on.</p>
<p>For example, I would do this:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True)
class CONSTANTS:
STATUS_SUCCESS = 200
STATUS_ERROR = 400
SI_UNIT_MASS = "kg"
SI_UNIT_LENGTH = "m"
</code></pre>
<p>Now let's say I have the following function:</p>
<pre class="lang-py prettyprint-override"><code>def func():
something = True
if something:
return CONSTANTS.STATUS_SUCCESS
else:
return CONSTANTS.STATUS_ERROR
</code></pre>
<p>I would like to have type hints for <code>func</code>.
So I thought I could just add this:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True)
class TYPES:
STATUS = Literal[CONSTANTS.STATUS_SUCCESS, CONSTANTS.STATUS_ERROR]
SI_UNIT = Literal[CONSTANTS.SI_UNIT_MASS, CONSTANTS.SI_UNIT_LENGTH]
</code></pre>
<p>and update the functions signature:</p>
<pre class="lang-py prettyprint-override"><code>def func() -> TYPES.STATUS:
</code></pre>
<p>But here I get this error:</p>
<pre class="lang-bash prettyprint-override"><code>Variable not allowed in type expressionPylancereportInvalidTypeForm
</code></pre>
<p>The error goes away when I write:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True)
class TYPES:
STATUS = Literal[200, 400]
SI_UNIT = Literal["kg", "m"]
</code></pre>
<p>But this has the downside that I might forget to add a new unit or status code to the type. I'd like this to be dynamic so that I don't have to repeat myself.</p>
<p>Is there a standard approach to do this?
I also think I would like to rewrite <code>CONSTANTS</code> in a way where I would have</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True)
class CONSTANTS:
STATUS = STATUS
SI_UNIT = SI_UNIT
</code></pre>
<p>with</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True)
class STATUS:
SUCCESS = 200
ERROR = 400
...
</code></pre>
<p>So my thought was that I should maybe have <code>SI_UNIT</code> and <code>STATUS</code> inherit from a base class that defines something like a <code>TYPE</code> property, but I can't get an example to work that feels natural.</p>
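<p>One standard way to keep the values and their "type" in a single place (a sketch of an alternative technique, swapping the frozen dataclass of bare constants for enums rather than building Literals dynamically): the enum class itself then serves as the annotation, so newly added members are picked up automatically.</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum, IntEnum

class Status(IntEnum):
    SUCCESS = 200
    ERROR = 400

class SIUnit(str, Enum):
    MASS = "kg"
    LENGTH = "m"

def func() -> Status:
    something = True
    return Status.SUCCESS if something else Status.ERROR
</code></pre>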
|
<python><python-typing>
|
2024-12-16 11:12:14
| 1
| 1,351
|
Thomas
|
79,284,333
| 21,099,067
|
Fast static vector implementation in Python
|
<p>Consider the following two naive implementations of 3D static vectors in Python:</p>
<pre class="lang-py prettyprint-override"><code>class Vec1(tuple):
def __new__(c,x,y,z):
return super().__new__(c,(x,y,z))
def __add__(s,o):
return s.__class__(s[0]+o[0],s[1]+o[1],s[2]+o[2])
def __matmul__(s,o):
return s[0]*o[0]+s[1]*o[1]+s[2]*o[2]
class Vec2:
def __init__(s,x,y,z):
s.c=(x,y,z)
def __add__(s,o):
sc=s.c
oc=o.c
return Vec2(sc[0]+oc[0],sc[1]+oc[1],sc[2]+oc[2])
def __matmul__(s,o):
sc=s.c
oc=o.c
return sc[0]*oc[0]+sc[1]*oc[1]+sc[2]*oc[2]
</code></pre>
<p>A clean implementation is <code>Vec1</code> which inherits methods from <code>tuple</code>. However, the initialization of <code>Vec1</code> (and, consequently, addition <code>__add__</code>) is 1.5 times slower than that of <code>Vec2</code> (on my machine). I suppose, the reason for this is the call of <code>tuple.__new__()</code>. I'm aware that the initialization is still very fast (a fraction of a microsecond) and that I can improve the performance using Cython, Numba, or PyPy. But I'm interested in the question of how far I can get without using any third-party modules. Is it possible to initialize a <code>Vec1</code> instance some other way? E.g. by directly writing to the memory location of the tuple elements?</p>
|
<python><performance>
|
2024-12-16 10:23:28
| 0
| 337
|
V T
|
79,283,991
| 8,057,071
|
Celery max_retries with acks_late=True and reject_on_worker_lost=True
|
<p>I want to ask a question regarding the usage of <code>max_retries</code> with the options <code>reject_on_worker_lost=True</code> and <code>acks_late=True</code>.</p>
<p>If I enable the above settings below for a task</p>
<pre><code>@shared_task(acks_late=True, reject_on_worker_lost=True, max_retries=4)
</code></pre>
<p>will it still count the max retries (i.e., will <code>max_retries</code> still apply) when the task is rejected on worker loss, preventing infinite requeueing? I didn't find a specific area in the documentation about this either.</p>
|
<python><celery>
|
2024-12-16 08:05:31
| 0
| 6,127
|
Sathiraumesh
|
79,283,820
| 5,846,366
|
Retrieve specific metrics from AWS Cloudwatch with boto3
|
<p>I have a serverless application, which consists of numerous lambda functions. Each lambda function has a tag "STAGE", which is set either to "production" or "staging". What I need now is to retrieve the data on overall invocations for each production lambda function for the last calendar month (30 days or 4 weeks), with a period of 1 day. I managed to achieve that in the AWS Console (Cloudwatch > Explorer), as shown on the screenshot below:</p>
<p><a href="https://i.sstatic.net/M640unSp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M640unSp.png" alt="enter image description here" /></a></p>
<p>But the thing is that I need to retrieve this data programmatically to pass it down an ETL pipeline of sorts that will generate a report in a specific format. I've come up with the following script to retrieve this data:</p>
<pre><code>import datetime
import boto3
client = boto3.client("cloudwatch", region_name="eu-west-1")
from_date = datetime.date.today()
month_ago = from_date - datetime.timedelta(days=30)
response = client.get_metric_data(
MetricDataQueries=[{
"Id": "test",
"MetricStat": {
"Metric": {
"Namespace": "AWS/Lambda",
"MetricName": "Invocations",
"Dimensions": [{
"Name": "STAGE",
"Value": "production"
}]
},
"Period": 86400,
"Stat": "Sum"
},
"ReturnData": True,
}],
StartTime=month_ago.strftime("%Y-%m-%dT00:00:00Z"),
EndTime=from_date.strftime("%Y-%m-%dT00:00:00Z")
)
</code></pre>
<p>But all it returns me with these params is an empty dataset: <code>{ 'MetricDataResults': [{'Id': 'test', 'Label': 'Invocations', 'Timestamps': [], 'Values': [], 'StatusCode': 'Complete'}] ...}</code></p>
<p>And trying to run with the following Expression <code>"SELECT SUM(Invocations) FROM SCHEMA(\"AWS/Lambda\") WHERE \"STAGE\" = 'production' GROUP BY FunctionName"</code> instead of a MetricStat dict raises a <code>MaxQueryTimeRangeExceed</code> exception, since we're allowed to <code>SELECT</code> the data only on the last 3 hours.</p>
<p>How would I build the query correctly to retrieve the same data with the same filters as I do in the AWS Console in this case? Thanks in advance!</p>
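<p>One thing worth checking (an assumption about the cause, not a certainty): in the AWS/Lambda namespace the published dimensions are things like FunctionName and Resource, and resource tags such as STAGE are not CloudWatch dimensions — the console's Metrics Explorer resolves tags to resources for you, while <code>get_metric_data</code> does not, which would explain the empty result for a STAGE dimension filter. A sketch for inspecting which dimensions actually exist:</p>
<pre><code>import boto3

client = boto3.client("cloudwatch", region_name="eu-west-1")
paginator = client.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/Lambda", MetricName="Invocations"):
    for metric in page["Metrics"]:
        print(metric["Dimensions"])   # shows the dimension names CloudWatch has
</code></pre>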
|
<python><amazon-web-services><boto3><amazon-cloudwatch>
|
2024-12-16 06:53:16
| 0
| 1,209
|
AlexNikolaev94
|
79,283,770
| 6,320,608
|
How to force Python multiple processes to share the same memory for one import
|
<p>I am using the CPU version of PyTorch and need to spawn multiple processes with it. However, whenever I do <code>import torch</code>, a large chunk of memory is consumed (roughly 200 MB as of PyTorch 2.4.0+cpu). This is problematic if I would like to spawn a large number of processes that run PyTorch.</p>
<p>How could I avoid the memory waste by forcing multiple processes to share the same <code>import torch</code>, or alternatively, how could I find out what is consuming so much memory when I do <code>import torch</code>?</p>
<p>Given below is a small example that I have been using:</p>
<pre><code>import multiprocessing as mp
import resource
m1 = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
import torch
m2 = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
delta = m2 - m1
print(delta) # roughly 200 MB for torch==2.4.0+cpu
def f(x):
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss) # roughly 134 MB
return x * x
if __name__ == "__main__":
mp.set_start_method("fork")
with mp.Pool(5) as p:
print(p.map(f, [1, 2, 3]))
</code></pre>
|
<python><pytorch><python-import>
|
2024-12-16 06:06:54
| 0
| 2,917
|
Bojian Zheng
|
79,283,639
| 11,581,214
|
Can PyMuPDF be used to make a form field visible and printable?
|
<p>Is it possible to change the setting (programmatically with PyMuPDF) that allows PDF form fields to be printable? I have been able to read and manipulate form field content, style, and appearance but have been unable to change the field setting to "Visible" from the apparent default value of "Visible but doesn't print." I can change this setting manually with Adobe Acrobat Pro (see below), but have been unable to identify an automated way to do this.</p>
<p>Flattening the PDF makes the file contents printable but renders the form no longer fillable. These forms will be provided to users who will submit data and may also wish to print the form.</p>
<p>Any solution that is supported by PyMuPDF or is automated and works within Adobe Acrobat Pro would be welcome. Thank you.</p>
<p><a href="https://i.sstatic.net/OT03yt18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OT03yt18.png" alt="form field properties in Adobe Acrobat Pro" /></a></p>
<p>EDIT</p>
<p>Building on @KJ's example file with four Text form fields, each reflecting one of the possible values. PyMuPDF (1.24.10) returns widget.field_flags = 0 for all of them.</p>
<pre><code>import fitz

fname = 'fieldstates.pdf'
pdf = fitz.open(fname)
for page in pdf:
    widgets = page.widgets()
    for widget in widgets:
        print('********************')
        print('field_name\t', widget.field_name)
        print('field_flags\t', widget.field_flags)
</code></pre>
<pre>
********************
field_name Text1
field_flags 0
********************
field_name Text2
field_flags 0
********************
field_name Text3
field_flags 0
********************
field_name Text4
field_flags 0
</pre>
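<p>One low-level direction I have been considering (a sketch only, not verified across viewers): the "Visible but doesn't print" behaviour comes from the widget annotation's <code>/F</code> flags rather than the field flags, so writing the Print bit (value 4) directly into each widget's xref might do it:</p>
<pre><code>import fitz

doc = fitz.open('fieldstates.pdf')
for page in doc:
    for widget in page.widgets():
        # /F holds the annotation flags; bit 3 (value 4) is "Print",
        # which should make the field both visible and printable.
        doc.xref_set_key(widget.xref, "F", "4")
doc.save('fieldstates_printable.pdf')
</code></pre>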
|
<python><pdf-generation><acrobat><pymupdf>
|
2024-12-16 04:08:31
| 0
| 524
|
BalooRM
|
79,283,566
| 1,762,447
|
Why am I getting "Part number must be an integer between 1 and 10000" error in S3 multipart upload?
|
<p>I’m working on uploading a large database dump (~85 GB) to an Amazon S3 bucket using a multipart upload via boto3. However, I keep encountering this error:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (InvalidArgument) when calling the UploadPart operation: Part number must be an integer between 1 and 10000, inclusive.
</code></pre>
<p>Here’s the relevant part of my code:</p>
<pre><code>from boto3.s3.transfer import TransferConfig

def upload_encrypted_dbdump(self, ciphertext_stream):
    s3 = self.s3_session_client()
    prev_total_size = 77309411328  # Previous file size, around 77 GB

    # Amazon S3's maximum number of parts for multipart upload
    max_parts = 10000
    # Calculate the new size with a 10% increase
    adjusted_size = int(prev_total_size * 1.1)
    # Calculate the part size with rounding up
    part_size = (adjusted_size + max_parts - 1) // max_parts
    print(f"Calculated part size: {part_size} bytes")

    s3_response = s3.upload_fileobj(
        Fileobj=ciphertext_stream,
        Bucket=self.s3_bucket,
        Key=f'{self.s3_folder}/{self.s3_dbdump_name}',
        ExtraArgs={'ACL': 'bucket-owner-full-control'},
        Config=TransferConfig(multipart_chunksize=part_size)
    )
</code></pre>
<p><strong>Steps I Tried:</strong></p>
<p>I calculated the part size dynamically based on the size of the previous backup, increasing it by 10%.</p>
<p>Example: If the previous size is 77 GB, I calculate a new size as adjusted_size = int(77309411328 * 1.1) → 85040352460 bytes.
Then, I divide this size by max_parts = 10000 to calculate the part size.</p>
<p>part_size = (adjusted_size + max_parts - 1) // max_parts
For this example, it calculates as part_size = 85040352460 / 10000 ≈ 8504036 bytes.
When I run the code, I encounter the "Part number must be an integer between 1 and 10000, inclusive" error.</p>
<p><strong>What I Observed:</strong></p>
<p>When I manually set max_parts to 1,000 it worked fine, but when I changed it to 10,000 or even 9,000 I got the same error.</p>
<p>I also tried math.ceil(adjusted_size / max_parts) for part size calculation, but the issue persists.</p>
<p><strong>Questions:</strong></p>
<p>Why am I getting the "Part number must be an integer between 1 and 10000" error when the calculated part_size seems valid?</p>
<p>Any help would be appreciated! Thank you.</p>
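<p>For context, this is the direction I am considering for the part-size calculation (a sketch based on my understanding that the upload fails once the stream needs more than 10,000 parts, e.g. when the real size exceeds my estimate): add head-room and respect the 5 MiB minimum part size.</p>
<pre><code>import math
from boto3.s3.transfer import TransferConfig

MAX_PARTS = 10000
MIN_PART_SIZE = 5 * 1024 * 1024  # S3 minimum part size (5 MiB)

def choose_part_size(expected_bytes, safety_factor=1.5):
    # Leave enough head-room so that even if the stream grows well beyond
    # the estimate, the number of parts stays under the 10,000 limit.
    part_size = math.ceil(expected_bytes * safety_factor / MAX_PARTS)
    return max(part_size, MIN_PART_SIZE)

config = TransferConfig(multipart_chunksize=choose_part_size(85 * 1024**3))
</code></pre>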
|
<python><amazon-s3><boto3>
|
2024-12-16 03:01:19
| 1
| 3,206
|
tkyass
|
79,283,509
| 899,954
|
How to create a Python object that represents a simple hierarchy of self-typed children
|
<p>I have a simple class in mind, called Page. Page has an ID and a Title. A Page can also have child Pages, stored in the Page's <code>children</code> attribute.</p>
<p>I thought about doing this:</p>
<pre><code>class Page:
    def __init__(self, id: int, title: str):
        self.id = id
        self.title = title
        self.children = list[Page]

    def add_child(self, child: Page):  # Python doesn't like type hint of "Page"
        self.children.append(child)
</code></pre>
<p>2 questions:</p>
<ol>
<li>Do I have the <code>children</code> attribute done right? It is supposed to be a list of other Pages.</li>
<li>How can I provide the proper type hint in <code>add_child</code>?</li>
</ol>
<p>Would love to learn if there are other suggestions for this kind of pattern.</p>
<p>EDIT:
Currently, <code>add_child</code> gives the following error:</p>
<pre><code>TypeError: descriptor 'append' for 'list' objects doesn't apply to a 'Page' object
</code></pre>
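<p>This is the variant I have been experimenting with (a sketch — I am not sure it is the idiomatic pattern): annotate <code>children</code> as an actual empty list instead of assigning the <code>list[Page]</code> type itself, and use postponed annotations so the class can reference itself in the hint.</p>
<pre><code>from __future__ import annotations


class Page:
    def __init__(self, id: int, title: str):
        self.id = id
        self.title = title
        self.children: list[Page] = []   # an empty list, typed as holding Pages

    def add_child(self, child: Page) -> None:
        self.children.append(child)
</code></pre>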
|
<python><object><hierarchy>
|
2024-12-16 02:07:43
| 0
| 783
|
HanSooloo
|
79,283,414
| 4,330,537
|
Python and Selenium script and setting focus not working
|
<p>I have some code that opens a web page and then tries to set focus on a text box so a user can start typing. However, when the page opens, even though the cursor is flashing in the correct text box, anything the user types goes into the URL bar instead. How do I get the typing to go into the text box?</p>
<pre><code> # import the required libraries
import undetected_chromedriver as uc
# define Chrome options
options = uc.ChromeOptions()
# set headless to False to run in non-headless mode
options.headless = False
# set up uc.Chrome(use_subprocess=True, options=options)
from arcgis.gis import GIS
from arcgis.geometry import Point, Polyline, Polygon
import datetime
import os
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.keys import Keys
from webdriver_manager.chrome import ChromeDriverManager
import time
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))
driver = uc.Chrome(use_subprocess=True, options=options)
driver.get("https://library.usask.ca/#gsc.tab=0")
q_field = driver.find_element("id", "primoQueryTemp")
q_field.send_keys("_")
q_field.click()
time.sleep(566)
</code></pre>
<p>What I tried is shown in the code above: <code>find_element</code>, <code>send_keys</code>, and even <code>.click()</code>, but focus still defaults to the URL box.</p>
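<p>For reference, this is what I am planning to try next (a sketch, not something I have confirmed works on this particular site): wait until the box is clickable, click it, and as a fallback move focus explicitly with JavaScript.</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 15)
q_field = wait.until(EC.element_to_be_clickable((By.ID, "primoQueryTemp")))
q_field.click()
# Fallback: ask the browser to focus the element directly
driver.execute_script("arguments[0].focus();", q_field)
</code></pre>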
|
<python><selenium-webdriver>
|
2024-12-16 00:28:05
| 3
| 835
|
RobM
|
79,283,320
| 12,149,993
|
Python signal handler doesn't terminate process before program exits
|
<p>I want to terminate my Python script cleanly by using a signal handler. But on a keyboard interrupt the handler doesn't finish before the program exits.</p>
<p>Any hints on how to get the handler to do the termination?</p>
<pre><code>import sys, subprocess, signal

def signal_handler(sig, frame):
    zipProc.terminate()
    zipProc.wait()
    sys.exit('Stopped')

test_dir = '/mnt/Z/testdir'
out_file = '/mnt/Z/testdir.7z'
cmd = ['7z', 'a', '-t7z', out_file, test_dir]
zipProc = subprocess.Popen(cmd)
zipProc.communicate()

for sig in [signal.SIGTERM, signal.SIGINT, signal.SIGQUIT, signal.SIGHUP]:
    signal.signal(sig, signal_handler)
</code></pre>
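<p>For what it is worth, the variant I intend to try is installing the handlers <em>before</em> blocking on the child process — in the code above they are only registered after <code>communicate()</code> returns, so they are never active while 7z runs. A sketch:</p>
<pre><code>import sys, subprocess, signal

test_dir = '/mnt/Z/testdir'
out_file = '/mnt/Z/testdir.7z'
cmd = ['7z', 'a', '-t7z', out_file, test_dir]
zipProc = subprocess.Popen(cmd)

def signal_handler(sig, frame):
    zipProc.terminate()
    zipProc.wait()
    sys.exit('Stopped')

# Register the handlers before waiting, so they are active while 7z runs
for sig in [signal.SIGTERM, signal.SIGINT, signal.SIGQUIT, signal.SIGHUP]:
    signal.signal(sig, signal_handler)

zipProc.communicate()
</code></pre>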
|
<python><subprocess><signals>
|
2024-12-15 22:55:51
| 1
| 494
|
Bret Hess
|
79,283,291
| 14,503,336
|
Virtual environments for Python not working in VSCode
|
<p>I use the VSCode debugger for all of my Python projects extensively. But for one of my projects, the debugger just doesn't work.</p>
<p>Whenever I try to debug something, it seemingly tries to activate the virtual environment. Then, the debugger just hangs until failure.</p>
<p><a href="https://i.sstatic.net/BLAv5tzu.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BLAv5tzu.gif" alt="debugger hanging" /></a></p>
<p>For any other project of mine, using debug with files, or modules, or whatever other options there are for Python, works fine. For this specific project, it has this awkward behavior.</p>
<p>I've tried:</p>
<ul>
<li>Running it over and over again.</li>
<li>Completely deleting the folder and recloning the repository with Git</li>
<li>Using a completely new <code>launch.json</code>.</li>
<li>Reinstalling all Python extensions, and switching to pre-releases.</li>
<li>Restarting my computer.</li>
<li>Emptying <code>~/.zshrc</code>.</li>
<li>Deleting every single file in the directory except for a single testing <code>start.py</code>, and running that. The error persists.</li>
<li>Making sure all my dependencies are functioning, and successfully running the file I want to run with <code>python</code> on the CLI.</li>
</ul>
<p>Nothing has worked.</p>
<p>Every other project I have tested works fine. It seemed the only deciding factor of the issue was, absurdly, the name of the folder I run the code from. (This paragraph is wrong; I was being obtuse. I've edited below to point out what the real deciding factor was.)</p>
<p><strong>Edit:</strong> I have finally gotten the project to work by <em>not</em> using a virtual environment. Upon further testing, other projects have the same issue when using a virtual environment.</p>
<p>I've never had this issue before with virtual environments, so I don't know why it's acting up now. While the issue has been narrowed down, my problems persist as I need a virtual environment for my project.</p>
|
<python><python-3.x><visual-studio-code><debugging><python-venv>
|
2024-12-15 22:33:46
| 1
| 599
|
Anonyo Noor
|
79,282,843
| 9,482,200
|
VS Code and Python: move directory refactor
|
<p>When I move a .py file in VS Code, the Python extension automatically triggers a refactor, editing imports throughout the project. But it doesn't trigger if I move the entire directory with the .py files.</p>
<p>I've been able to move the entire directory correctly only by dragging and dropping files one by one, or by text replacement of imports across the entire project.</p>
<p>Refactor triggers only if you drag and drop only one file. In addition, there may be subdirectories.</p>
<p>Is there any way to refactor the imports when moving the whole directory, just like for a single .py file?</p>
|
<python><visual-studio-code>
|
2024-12-15 17:47:48
| 0
| 360
|
Anton Ivanov
|
79,282,812
| 4,983,969
|
uasyncio.Event not resuming main loop immediately after set on Raspberry Pi Pico
|
<p>I'm programming in Micropython for the Raspberry Pi Pico. In my main loop I await for an uasyncio.Event, which I successfully set from an interrupt context.</p>
<p>I expected the code to pause and then resume soon after the event was set, but instead:</p>
<ul>
<li>If the event was already set, it continues as expected</li>
<li>If the event was not set beforehand and there is no other tasks running, the program terminates without warning</li>
<li>If I create a keep_alive task that simply loops, the main loop will resume from the event wait just before the keep_alive tasks wakes up from sleep</li>
</ul>
<p>Of course, setting the keep_alive sleep cycle very short minimizes the issue, but that is very hacky. It feels like waiting on an Event doesn't keep the loop alive by itself, and setting the event won't resume execution unless something else wakes the loop (both of which were unexpected to me). Should I not be using Events to start with?</p>
<p>Finally, I also tried to trigger a wake-up by creating a trivial task from the IRQ context using schedule, but the task won't even start until the loop wakes on its own from a pre-existing sleep.</p>
<p>In the following code, 4 seconds pass between "set" and "end". I reckon that's because the main loop code doesn't resume when the Event is set, but when the uasyncio scheduler wakes up for the keep_alive task. Also, awake never prints because it resumes the main loop before that and terminates.</p>
<pre><code>import uasyncio
from machine import Timer

async def keep_awake():
    while True:
        print("awake")
        await uasyncio.sleep(5)

def set(_=None):
    global event
    event.set()
    print("set")

async def main():
    print("start")
    # The timer will call set() after a second
    Timer(-1).init(mode=Timer.ONE_SHOT, period=1000, callback=set)
    print("waiting...")
    global event
    await event.wait()
    print("end")

event = uasyncio.Event()
uasyncio.create_task(keep_awake())
uasyncio.run(main())
</code></pre>
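<p>For completeness, this is the alternative I am looking at (a sketch based on my reading that <code>ThreadSafeFlag</code> is meant for signalling from an IRQ/timer context and wakes the scheduler itself):</p>
<pre><code>import uasyncio
from machine import Timer

tsf = uasyncio.ThreadSafeFlag()

def set_flag(_=None):
    tsf.set()              # intended to be safe from a timer/IRQ callback

async def main():
    print("start")
    Timer(-1).init(mode=Timer.ONE_SHOT, period=1000, callback=set_flag)
    print("waiting...")
    await tsf.wait()       # should resume as soon as the flag is set
    print("end")

uasyncio.run(main())
</code></pre>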
|
<python><events><interrupt><micropython><raspberry-pi-pico>
|
2024-12-15 17:28:38
| 0
| 1,082
|
Emilio Martinez
|
79,282,810
| 8,124,392
|
GAN results fluctuating
|
<p>I have the following GAN architecture:</p>
<pre><code>import torch, os, torchvision
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms, datasets, utils
class MappingNetwork(nn.Module):
def __init__(self, latent_dim, style_dim):
super(MappingNetwork, self).__init__()
self.mapping = nn.Sequential(
nn.Linear(latent_dim, style_dim),
nn.ReLU(),
nn.Linear(style_dim, style_dim)
)
def forward(self, z):
return self.mapping(z)
class PixelNorm(nn.Module):
def __init__(self, epsilon=1e-8):
super(PixelNorm, self).__init__()
self.epsilon = epsilon
def forward(self, x):
return x / torch.sqrt(torch.mean(x**2, dim=1, keepdim=True) + self.epsilon)
class NoiseInjection(nn.Module):
def __init__(self):
super(NoiseInjection, self).__init__()
self.weight = nn.Parameter(torch.zeros(1))
def forward(self, x, noise=None):
if noise is None:
batch, _, height, width = x.size()
noise = torch.randn(batch, 1, height, width, device=x.device)
return x + self.weight * noise
class MinibatchStdDev(nn.Module):
def forward(self, x):
batch_std = torch.std(x, dim=0, keepdim=True)
batch_std = batch_std.mean().expand(x.size(0), 1, x.size(2), x.size(3))
return torch.cat([x, batch_std], dim=1)
class MinibatchDiscrimination(nn.Module):
def __init__(self, num_features, num_kernels, kernel_dim):
super(MinibatchDiscrimination, self).__init__()
self.T = nn.Parameter(torch.randn(num_features, num_kernels * kernel_dim))
self.num_kernels = num_kernels
self.kernel_dim = kernel_dim
def forward(self, x):
# Compute minibatch discrimination
x = x @ self.T
x = x.view(-1, self.num_kernels, self.kernel_dim)
diffs = x.unsqueeze(0) - x.unsqueeze(1)
abs_diffs = torch.abs(diffs).sum(-1)
minibatch_features = torch.exp(-abs_diffs).sum(1)
return minibatch_features
class Discriminator(nn.Module):
def __init__(self, resolution, input_mask=False, minibatch_features=100, kernel_dim=5):
super(Discriminator, self).__init__()
self.input_mask = input_mask
input_channels = 3 + 1 if self.input_mask else 3
# Apply spectral normalization for stability
self.from_rgb = nn.utils.spectral_norm(nn.Conv2d(input_channels, 256, kernel_size=1))
self.blocks = nn.ModuleList()
res = resolution
while res > 4:
self.blocks.append(
nn.Sequential(
nn.utils.spectral_norm(nn.Conv2d(256, 256, kernel_size=3, padding=1)),
nn.LeakyReLU(0.2),
nn.AvgPool2d(kernel_size=2)
)
)
res //= 2
self.minibatch_stddev = MinibatchStdDev()
self.intermediate_features = nn.Sequential(
nn.Flatten(),
nn.Linear(4 * 4 * (256 + 1), 128), # +1 channel from minibatch stddev
nn.LeakyReLU(0.2)
)
self.minibatch_discrimination = MinibatchDiscrimination(128, minibatch_features, kernel_dim)
# Final Layer
self.final = nn.Sequential(
nn.Linear(128 + minibatch_features, 1), # Append minibatch features
nn.Tanh() # Limit discriminator outputs to [-1, 1] for stability
)
self.apply(self.init_weights)
def init_weights(self, m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
nn.init.xavier_normal_(m.weight)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def forward(self, img, mask=None):
if self.input_mask and mask is not None:
x = torch.cat((img, mask), dim=1)
else:
x = img
x = self.from_rgb(x)
for block in self.blocks:
x = block(x)
x = self.minibatch_stddev(x)
x = self.intermediate_features(x)
minibatch_features = self.minibatch_discrimination(x)
x = torch.cat([x, minibatch_features], dim=1)
return self.final(x)
class Generator(nn.Module):
def __init__(self, latent_dim, style_dim, resolution, output_mask=False):
super(Generator, self).__init__()
self.output_mask = output_mask
self.mapping = MappingNetwork(latent_dim, style_dim)
self.initial = nn.Sequential(
nn.Linear(style_dim, 4 * 4 * 512),
nn.LeakyReLU(0.2),
nn.Unflatten(1, (512, 4, 4)),
PixelNorm() # Pixel normalization for stability
)
self.blocks = nn.ModuleList()
self.noise_injections = nn.ModuleList()
self.to_rgb = nn.Sequential(
nn.Conv2d(256, 3, kernel_size=1),
nn.Tanh() # Scale outputs to [-1, 1]
)
if self.output_mask:
self.to_mask = nn.Conv2d(256, 1, kernel_size=1)
in_channels = 512
res = 4
while res < resolution:
out_channels = max(256, in_channels // 2)
self.blocks.append(
nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
nn.LeakyReLU(0.2),
nn.Upsample(scale_factor=2),
PixelNorm() # Add PixelNorm
)
)
self.noise_injections.append(NoiseInjection())
in_channels = out_channels
res *= 2
self.apply(self.init_weights)
def init_weights(self, m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
nn.init.kaiming_normal_(m.weight, nonlinearity='leaky_relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def forward(self, z):
style = self.mapping(z)
x = self.initial(style)
for block, noise in zip(self.blocks, self.noise_injections):
x = block(x)
x = noise(x)
img = self.to_rgb(x)
if self.output_mask:
mask = self.to_mask(x)
return img, mask
return img
# Hyperparameters
latent_dim = 128
style_dim = 512
image_size = 64 # Resolution of images (e.g., 64x64)
batch_size = 16
num_epochs = 50
learning_rate_gen = 2e-4
learning_rate_disc = 1e-4
ema_decay = 0.999 # Decay rate for EMA
gp_weight = 0.5 # Weight for gradient penalty
lambda_r1 = 0.1 # Regularization weight for generator
initial_noise_std = 0.1 # Initial discriminator input noise
final_noise_std = 0.01 # Final noise after decay
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
generator = Generator(latent_dim, style_dim, resolution=image_size).to(device)
ema_generator = Generator(latent_dim, style_dim, resolution=image_size).to(device)
ema_generator.load_state_dict(generator.state_dict())
discriminator = Discriminator(resolution=image_size).to(device)
optimizer_G = optim.Adam(generator.parameters(), lr=learning_rate_gen, betas=(0.0, 0.99))
optimizer_D = optim.Adam(discriminator.parameters(), lr=learning_rate_disc, betas=(0.0, 0.99))
scheduler_G = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer_G, T_max=150)
scheduler_D = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer_D, T_max=150)
transform = transforms.Compose([
transforms.Resize((image_size, image_size)),
transforms.RandomHorizontalFlip(),
transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),
transforms.RandomCrop(image_size, padding=4),
transforms.GaussianBlur(kernel_size=(3, 3)),
transforms.ToTensor(),
transforms.Normalize([0.5] * 3, [0.5] * 3),
transforms.RandomErasing(p=0.5, scale=(0.02, 0.2), ratio=(0.3, 3.3))
])
dataset_path = os.path.join(os.getcwd(), 'dataset', 'sub-data')
dataset = datasets.ImageFolder(root=dataset_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
output_dir = os.path.join(os.getcwd(), 'output')
def gradient_penalty(discriminator, real_images, fake_images, device):
alpha = torch.rand(real_images.size(0), 1, 1, 1).to(device)
interpolates = (alpha * real_images + (1 - alpha) * fake_images).requires_grad_(True)
disc_interpolates = discriminator(interpolates)
grad_outputs = torch.ones_like(disc_interpolates)
gradients = torch.autograd.grad(
outputs=disc_interpolates,
inputs=interpolates,
grad_outputs=grad_outputs,
create_graph=True,
retain_graph=True,
only_inputs=True,
)[0]
gradients = gradients.view(gradients.size(0), -1)
penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
return penalty
def r1_regularization(discriminator, real_images, device):
real_images.requires_grad_(True)
outputs = discriminator(real_images)
grad_outputs = torch.ones_like(outputs, device=device)
gradients = torch.autograd.grad(
outputs=outputs,
inputs=real_images,
grad_outputs=grad_outputs,
create_graph=True,
retain_graph=True,
only_inputs=True,
)[0]
penalty = gradients.pow(2).sum(dim=(1, 2, 3)).mean()
return penalty
g_losses, d_losses = [], []
for epoch in range(num_epochs):
noise_std = initial_noise_std * (1 - epoch / num_epochs) + final_noise_std * (epoch / num_epochs) # Gradual noise decay
for i, (real_images, _) in enumerate(dataloader):
real_images = real_images.to(device)
if real_images.size(0) == 0: # Skip empty batches
continue
# Train Discriminator
optimizer_D.zero_grad()
real_images.requires_grad_(True)
real_images = real_images + torch.randn_like(real_images) * noise_std
z1 = torch.randn(real_images.size(0), latent_dim, device=device)
z2 = torch.randn(real_images.size(0), latent_dim, device=device)
mix_ratio = torch.rand(1).item()
mixed_z = mix_ratio * z1 + (1 - mix_ratio) * z2
fake_images = generator(mixed_z).detach()[:real_images.size(0)]
fake_images = fake_images + torch.randn_like(fake_images) * noise_std
real_loss = torch.relu(1 - discriminator(real_images)).mean()
fake_loss = torch.relu(1 + discriminator(fake_images)).mean()
gp = gradient_penalty(discriminator, real_images, fake_images, device)
r1_penalty = lambda_r1 * r1_regularization(discriminator, real_images, device)
d_loss = real_loss + fake_loss + gp_weight * gp + r1_penalty
d_loss.backward()
optimizer_D.step()
# Train Generator
for _ in range(3): # Train generator three times per discriminator step
optimizer_G.zero_grad()
mix_ratio = torch.rand(1).item() # Style mixing
mixed_latent = mix_ratio * z1 + (1 - mix_ratio) * z2
fake_images = generator(mixed_latent)
g_loss = -discriminator(fake_images).mean()
g_loss.backward()
torch.nn.utils.clip_grad_norm_(generator.parameters(), max_norm=10)
optimizer_G.step()
torch.nn.utils.clip_grad_norm_(discriminator.parameters(), max_norm=10)
g_losses.append(g_loss.item())
d_losses.append(d_loss.item())
if i % 100 == 0:
print(f"Epoch [{epoch}/{num_epochs}], Step [{i}/{len(dataloader)}], "
f"D-Loss: {d_loss.item():.4f}, G-Loss: {g_loss.item():.4f}, "
f"G-LR: {scheduler_G.get_last_lr()[0]:.5f}, "
f"D-LR: {scheduler_D.get_last_lr()[0]:.5f}")
with torch.no_grad():
test_z = torch.randn(16, latent_dim, device=device)
test_images = generator(test_z)
utils.save_image((test_images + 1) / 2, os.path.join(output_dir, f"generated_step_{i}.png"), nrow=4)
# Update EMA Generator
with torch.no_grad():
for ema_param, gen_param in zip(ema_generator.parameters(), generator.parameters()):
ema_param.data = ema_decay * ema_param.data + (1 - ema_decay) * gen_param.data
# Save generated images and models periodically
if epoch % 5 == 0:
with torch.no_grad():
z = torch.randn(16, latent_dim, device=device)
sample_images = ema_generator(z)
sample_images = (sample_images + 1) / 2 # Rescale to [0, 1]
os.makedirs(output_dir, exist_ok=True)
utils.save_image(sample_images, os.path.join(output_dir, f"generated_epoch_{epoch}.png"), nrow=4)
# Save models
model_dir_gen = os.path.join(output_dir, 'checkpoints', 'generator')
model_dir_disc = os.path.join(output_dir, 'checkpoints', 'discriminator')
os.makedirs(model_dir_gen, exist_ok=True)
os.makedirs(model_dir_disc, exist_ok=True)
torch.save(generator.state_dict(), os.path.join(model_dir_gen, f"generator_epoch_{epoch}.pth"))
torch.save(discriminator.state_dict(), os.path.join(model_dir_disc, f"discriminator_epoch_{epoch}.pth"))
# Step the learning rate schedulers
scheduler_G.step()
scheduler_D.step()
# Save final generator and discriminator
torch.save(generator.state_dict(), os.path.join(model_dir_gen, "generator_final.pth"))
torch.save(discriminator.state_dict(), os.path.join(model_dir_disc, "discriminator_final.pth"))
print("Training completed.")
</code></pre>
<p>When I train on a batch of 100 images just to test it out, my output looks like this:</p>
<pre><code>Epoch [0/50], Step [0/7], D-Loss: 2.5000, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [1/50], Step [0/7], D-Loss: 2.5000, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [2/50], Step [0/7], D-Loss: 2.2945, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [3/50], Step [0/7], D-Loss: 0.2880, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [4/50], Step [0/7], D-Loss: 0.4982, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [5/50], Step [0/7], D-Loss: 0.5007, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [6/50], Step [0/7], D-Loss: 0.4992, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [7/50], Step [0/7], D-Loss: 0.4939, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [8/50], Step [0/7], D-Loss: 0.4971, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [9/50], Step [0/7], D-Loss: 0.2764, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [10/50], Step [0/7], D-Loss: 0.4993, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [11/50], Step [0/7], D-Loss: 0.4992, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [12/50], Step [0/7], D-Loss: 0.5003, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [13/50], Step [0/7], D-Loss: 0.5000, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [14/50], Step [0/7], D-Loss: 0.5000, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [15/50], Step [0/7], D-Loss: 0.5000, G-Loss: 1.0000, G-LR: 0.00020, D-LR: 0.00010
Epoch [16/50], Step [0/7], D-Loss: 0.5000, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00010
Epoch [17/50], Step [0/7], D-Loss: 0.5000, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00010
Epoch [18/50], Step [0/7], D-Loss: 2.5000, G-Loss: -0.9990, G-LR: 0.00019, D-LR: 0.00010
Epoch [19/50], Step [0/7], D-Loss: 2.4753, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00010
Epoch [20/50], Step [0/7], D-Loss: 2.4670, G-Loss: -0.9871, G-LR: 0.00019, D-LR: 0.00010
Epoch [21/50], Step [0/7], D-Loss: 2.2677, G-Loss: -0.9488, G-LR: 0.00019, D-LR: 0.00010
Epoch [22/50], Step [0/7], D-Loss: 0.2090, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00009
Epoch [23/50], Step [0/7], D-Loss: 0.2786, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00009
Epoch [24/50], Step [0/7], D-Loss: 0.3120, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00009
Epoch [25/50], Step [0/7], D-Loss: 0.3474, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00009
Epoch [26/50], Step [0/7], D-Loss: 0.4567, G-Loss: 1.0000, G-LR: 0.00019, D-LR: 0.00009
Epoch [27/50], Step [0/7], D-Loss: 0.3537, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [28/50], Step [0/7], D-Loss: 0.3916, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [29/50], Step [0/7], D-Loss: 0.2660, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [30/50], Step [0/7], D-Loss: 0.3715, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [31/50], Step [0/7], D-Loss: 0.2640, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [32/50], Step [0/7], D-Loss: 0.3432, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [33/50], Step [0/7], D-Loss: 0.3162, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [34/50], Step [0/7], D-Loss: 0.2603, G-Loss: 1.0000, G-LR: 0.00018, D-LR: 0.00009
Epoch [35/50], Step [0/7], D-Loss: 0.1340, G-Loss: 1.0000, G-LR: 0.00017, D-LR: 0.00009
Epoch [36/50], Step [0/7], D-Loss: 0.2660, G-Loss: 1.0000, G-LR: 0.00017, D-LR: 0.00009
Epoch [37/50], Step [0/7], D-Loss: 0.5936, G-Loss: 1.0000, G-LR: 0.00017, D-LR: 0.00009
Epoch [38/50], Step [0/7], D-Loss: 0.8096, G-Loss: 0.9763, G-LR: 0.00017, D-LR: 0.00008
Epoch [39/50], Step [0/7], D-Loss: 2.4998, G-Loss: -1.0000, G-LR: 0.00017, D-LR: 0.00008
Epoch [40/50], Step [0/7], D-Loss: 2.5080, G-Loss: -0.9832, G-LR: 0.00017, D-LR: 0.00008
Epoch [41/50], Step [0/7], D-Loss: 1.7576, G-Loss: 0.9975, G-LR: 0.00017, D-LR: 0.00008
Epoch [42/50], Step [0/7], D-Loss: 0.8342, G-Loss: 1.0000, G-LR: 0.00016, D-LR: 0.00008
Epoch [43/50], Step [0/7], D-Loss: 0.7444, G-Loss: 0.4705, G-LR: 0.00016, D-LR: 0.00008
Epoch [44/50], Step [0/7], D-Loss: 0.2642, G-Loss: 0.9983, G-LR: 0.00016, D-LR: 0.00008
Epoch [45/50], Step [0/7], D-Loss: 0.3112, G-Loss: 1.0000, G-LR: 0.00016, D-LR: 0.00008
Epoch [46/50], Step [0/7], D-Loss: 0.3192, G-Loss: 0.9999, G-LR: 0.00016, D-LR: 0.00008
Epoch [47/50], Step [0/7], D-Loss: 0.2842, G-Loss: 1.0000, G-LR: 0.00016, D-LR: 0.00008
Epoch [48/50], Step [0/7], D-Loss: 0.2764, G-Loss: 1.0000, G-LR: 0.00015, D-LR: 0.00008
Epoch [49/50], Step [0/7], D-Loss: 0.4195, G-Loss: 1.0000, G-LR: 0.00015, D-LR: 0.00008
Training completed.
</code></pre>
<p><a href="https://i.sstatic.net/gZLhBKIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gZLhBKIz.png" alt="enter image description here" /></a></p>
<p>Where am I going wrong? I don't think this is due to simply using a small dataset.</p>
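<p>One detail I am unsure about (an assumption on my part, not a verified fix): the hinge-style losses <code>relu(1 - D(x))</code> / <code>relu(1 + D(G(z)))</code> are normally paired with an unbounded discriminator output, while my final <code>nn.Tanh()</code> clamps the scores to [-1, 1], which would explain the losses sticking to plateaus like 0.5, 2.5 and ±1 in the log. The head without the squashing would look like this:</p>
<pre><code># Hypothetical change to the discriminator head: hinge loss expects raw scores
self.final = nn.Sequential(
    nn.Linear(128 + minibatch_features, 1)   # no Tanh on the critic output
)
</code></pre>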
|
<python><deep-learning><pytorch><neural-network><generative-adversarial-network>
|
2024-12-15 17:24:46
| 0
| 3,203
|
mchd
|
79,282,792
| 6,038,082
|
How to drag and drop items across frames in tkinter?
|
<p>I have a tkinter gui which displays some nodes in levels.
The levels are split into frames and on each frame the nodes are placed as per a level dictionary defined in the class Viewer.
I want to drag and drop these nodes from one level to another, and as I do that the levels should adjust themselves.<br><br>
For example, if a level has only one node and that node is moved up, then that level effectively ceases to exist and the levels below should move up to keep the level hierarchy consistent.
However, the drag and drop is not working as expected.<br> Below is my code:<br>
Some notes about the code :</p>
<ol>
<li><p>level_dict in Viewer defines the levels. The DEPS key denotes a node's dependency on another.</p>
</li>
<li><p>DragDrop class implements the drag and drop feature on clicking.</p>
<pre><code> import tkinter as tk
from tkinter import ttk, filedialog
import os
class DragDrop(tk.Label):
def __init__(self, parent, text, app, **kwargs):
super().__init__(parent, text=text, **kwargs)
self.parent = parent
self.app = app
self.text = text
self.bind("<Button-1>", self.on_click)
self.bind("<B1-Motion>", self.on_drag)
self.bind("<ButtonRelease-1>", self.on_release)
self._drag_data = {"x": 0, "y": 0, "item": None}
def on_click(self, event):
self._drag_data["item"] = self
self._drag_data["x"] = event.x
self._drag_data["y"] = event.y
def on_drag(self, event):
x = self.winfo_x() - self._drag_data["x"] + event.x
y = self.winfo_y() - self._drag_data["y"] + event.y
self.place(x=x, y=y)
def on_release(self, event):
self._drag_data = {"x": 0, "y": 0, "item": None}
self.app.update_node_position(self)
class Viewer:
def __init__(self, root):
self.root = root
self.level_dict = {'JK': 0, 'pun': 1, 'utp': 1, 'pun utp': 0, 'utk': 1, 'gjr': 2, 'wbk': 3, 'nest': 4, 'mahm': 5, 'ksl': 6, 'krtk': 5}
self.sections = {'JK': {'DEPS': None, 'TYPES': None, 'RATING': '0'}, 'pun': {'DEPS': 'JK', 'TYPES': None, 'RATING': '0'}, 'utp': {'DEPS': 'JK', 'TYPES': None, 'RATING': '0'}, 'utk': {'DEPS': 'pun utp', 'TYPES': None, 'RATING': '0'}, 'gjr': {'DEPS': 'utk', 'TYPES': None, 'RATING': '0'}, 'wbk': {'DEPS': 'gjr', 'TYPES': None, 'RATING': '0'}, 'nest': {'DEPS': 'wbk', 'TYPES': None, 'RATING': '0'}, 'mahm': {'DEPS': 'nest', 'TYPES': None, 'RATING': '0'}, 'ksl': {'DEPS': 'mahm', 'TYPES': None, 'RATING': '0'}, 'krtk': {'DEPS': 'nest', 'TYPES': None, 'RATING': '0'}}
self.canvas = tk.Canvas(root, bg="white")
self.h_scrollbar = tk.Scrollbar(root, orient=tk.HORIZONTAL, command=self.canvas.xview)
self.v_scrollbar = tk.Scrollbar(root, orient=tk.VERTICAL, command=self.canvas.yview)
self.scrollable_frame = tk.Frame(self.canvas)
self.scrollable_frame.bind(
"<Configure>",
lambda e: self.canvas.configure(
scrollregion=self.canvas.bbox("all")
)
)
self.canvas.create_window((0, 0), window=self.scrollable_frame, anchor="nw")
self.canvas.configure(xscrollcommand=self.h_scrollbar.set, yscrollcommand=self.v_scrollbar.set)
self.h_scrollbar.pack(side=tk.BOTTOM, fill=tk.X)
self.v_scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
self.canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
self.event_var = tk.StringVar(value="All")
self.create_widgets()
self.draw_graph()
def create_widgets(self):
control_frame = tk.Frame(self.root)
control_frame.pack(side=tk.TOP, fill=tk.X)
event_label = tk.Label(control_frame, text="Select Types:")
event_label.pack(side=tk.LEFT, padx=5, pady=5)
event_options = ["All", "zones", "states"]
event_menu = ttk.Combobox(control_frame, textvariable=self.event_var, values=event_options)
event_menu.pack(side=tk.LEFT, padx=5, pady=5)
event_menu.bind("<<ComboboxSelected>>", self.on_event_change)
browse_button = tk.Button(control_frame, text="Browse data_file", command=self.browse_file)
browse_button.pack(side=tk.LEFT, padx=5, pady=5)
save_button = tk.Button(control_frame, text="Save", command=self.save_data_file_file)
save_button.pack(side=tk.LEFT, padx=5, pady=5)
def browse_file(self):
data_file_file = filedialog.askopenfilename(filetypes=[("data_file files", "*.data_file")])
if data_file_file:
self.data_file_file = data_file_file
self.main_func()
self.draw_graph()
def save_data_file_file(self):
save_file = filedialog.asksaveasfilename(defaultextension=".data_file", filetypes=[("data_file files", "*.data_file")])
if save_file:
config = configparser.ConfigParser()
for section, attributes in self.sections.items():
config[section] = attributes
config[section]['LEVEL'] = str(self.level_dict[section])
with open(save_file, 'w') as configfile:
config.write(configfile)
def on_event_change(self, event):
self.event_filter = self.event_var.get() if self.event_var.get() != "All" else None
self.main_func()
self.draw_graph()
def draw_graph(self):
for widget in self.scrollable_frame.winfo_children():
widget.destroy()
self.level_frames = {}
levels = {}
for section, level in self.level_dict.items():
if level not in levels:
levels[level] = []
levels[level].append(section)
colors = ["lightblue", "lightgreen", "lightyellow", "lightpink", "lightgray"]
for level, nodes in sorted(levels.items()):
level_frame = tk.Frame(self.scrollable_frame, bg=colors[level % len(colors)], bd=2, relief=tk.SOLID)
level_frame.pack(fill=tk.X, padx=10, pady=5)
self.level_frames[level] = level_frame
level_label = tk.Label(level_frame, text=f"Level {level}", bg=colors[level % len(colors)], font=("Arial", 12, "bold"), anchor="w")
level_label.pack(side=tk.TOP, fill=tk.X)
for node in nodes:
self.draw_node(level_frame, node)
def draw_node(self, parent, node):
level = self.level_dict.get(node, 0)
label = f'{node}({level})'
if node in self.sections:
if self.sections[node]['RATING'] == '1':
color = 'lightblue'
else:
color = 'skyblue'
fg_color = 'darkblue'
node_label = DragDrop(parent, text=label, app=self, bg=color, fg=fg_color, font=("Arial", 10), bd=1, relief=tk.SOLID, padx=5, pady=5)
node_label.pack(side=tk.LEFT, padx=5, pady=5)
def update_node_position(self, node_label):
node_text = node_label.cget("text")
node_name = node_text.split('(')[0]
old_level = self.level_dict[node_name]
for level, frame in self.level_frames.items():
if node_label.winfo_y() >= frame.winfo_y() and node_label.winfo_y() < frame.winfo_y() + frame.winfo_height():
if old_level != level:
self.level_dict[node_name] = level
self.draw_graph()
break
# Remove empty levels and adjust subsequent levels
self.adjust_levels()
def adjust_levels(self):
levels = sorted(self.level_frames.keys())
for i, level in enumerate(levels):
if not any(node in self.level_dict and self.level_dict[node] == level for node in self.sections):
del self.level_frames[level]
for node in self.level_dict:
if self.level_dict[node] > level:
self.level_dict[node] -= 1
self.draw_graph()
if __name__ == '__main__':
root = tk.Tk()
root.title("Order Viewer")
app = Viewer(root)
root.geometry("800x600")
root.mainloop()
</code></pre>
</li>
</ol>
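<p>One thing I suspect (an assumption on my part) is a coordinate-system mismatch in <code>update_node_position</code>: the label's <code>winfo_y()</code> is relative to its own parent frame, while each level frame's <code>winfo_y()</code> is relative to the scrollable frame, so the drop test rarely matches. A sketch of the comparison done in screen coordinates instead:</p>
<pre><code>def update_node_position(self, node_label):
    node_name = node_label.cget("text").split('(')[0]
    drop_y = node_label.winfo_rooty()          # screen coordinate of the label
    for level, frame in self.level_frames.items():
        top = frame.winfo_rooty()              # screen coordinate of the frame
        if top <= drop_y < top + frame.winfo_height():
            if self.level_dict[node_name] != level:
                self.level_dict[node_name] = level
                self.draw_graph()
            break
    self.adjust_levels()
</code></pre>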
|
<python><python-3.x><tkinter>
|
2024-12-15 17:10:14
| 2
| 1,014
|
A.G.Progm.Enthusiast
|
79,282,707
| 2,649,384
|
Why does the Python datetime.replace() method return a new object instead of mutate the existing object?
|
<p>Calling the .replace() method on a Python DateTime object returns a new instance of a DateTime object instead of mutating the existing object. This seems to be a common pitfall when learning the method.</p>
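<p>A quick illustration of the behaviour in question:</p>
<pre><code>from datetime import datetime

dt = datetime(2024, 1, 1, 12, 0)
dt.replace(hour=15)        # returns a new datetime; dt itself is unchanged
print(dt.hour)             # 12

dt = dt.replace(hour=15)   # rebinding the name is required to keep the change
print(dt.hour)             # 15
</code></pre>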
|
<python><datetime>
|
2024-12-15 16:17:07
| 1
| 4,998
|
Izzo
|
79,282,555
| 4,767,610
|
VScode activate python in powershell terminal and then execute line/selection in the same terminal
|
<p>I recently reinstalled VScode and I am struggling with the following:</p>
<p>I have set the python.execSelectionInTerminal to Shift+Enter.
Afterwards, I open the default terminal (powershell) and then I activate python by running <code>python</code> in the shell.</p>
<p>My issue is that whenever I execute a line from the open script using <code>Shift+Enter</code>, VS Code opens a new terminal, even though I already have an active Python terminal open. Is there any way to stop this?</p>
|
<python><visual-studio-code>
|
2024-12-15 14:53:15
| 0
| 3,092
|
User2321
|
79,282,214
| 601,314
|
Fails to FadeOut a text in Manim
|
<p>The FadeOut method fails with the object effectifs_text; I tried tinkering with scale, without success.</p>
<pre><code>from manim import *
import numpy as np
class NormalDistributionNumbers(Scene):
def construct(self):
# Étape 1: Générer une liste de 100 nombres suivant une loi normale N(175, 20), arrondis à l'unité
random_numbers = np.random.normal(175, 15, 100).round().astype(int)
# Étape 2: Afficher ces nombres
numbers_text = VGroup(*[Text(f"{num}") for num in random_numbers])
numbers_text.arrange_in_grid(rows=10, cols=10, buff=0.5)
self.play(Write(numbers_text.scale(0.5)))
self.wait(2)
# Étape 3: Trier les nombres dans l'ordre croissant
sorted_numbers = sorted(random_numbers)
sorted_numbers_text = VGroup(
*[Text(f"{num}") for num in sorted_numbers])
sorted_numbers_text.arrange_in_grid(rows=10, cols=10, buff=0.5)
self.play(Transform(numbers_text.scale(0.5),
sorted_numbers_text.scale(0.5)))
self.wait(2)
# Étape 4: Colorier les nombres inférieurs à 175 en rouge et ceux supérieurs ou égaux à 175 en bleu
# Étape 5: Partitionner la série en 8 classes de même amplitude entière et labelliser les valeurs avec 8 couleurs
min_val = min(sorted_numbers)
max_val = max(sorted_numbers)
amplitude = max_val - min_val
# Assure que class_width est un entier
class_width = (amplitude + 7) // 8
colors = [PURE_RED, ORANGE, YELLOW,
PURE_GREEN, PURE_BLUE, PURPLE, PINK, TEAL]
for num_text, num in zip(numbers_text, sorted_numbers):
# Assure que class_index est dans les limites
class_index = min((num - min_val) // class_width, 7)
num_text.set_color(colors[class_index])
self.wait(2)
# Étape 6: Fusionner les nombres de chaque classe et les remplacer par leur effectif
class_counts = [0] * 8
for num in sorted_numbers:
# Assure que class_index est dans les limites
class_index = min((num - min_val) // class_width, 7)
class_counts[class_index] += 1
effectifs_text = VGroup(
*[Text(f"{count}").set_color(colors[i]) for i, count in enumerate(class_counts)])
effectifs_text.arrange(RIGHT, buff=1)
self.play(Transform(numbers_text.scale(
0.5), effectifs_text.scale(0.5)))
self.wait(2)
self.play(FadeOut(effectifs_text.scale(0.0005)))
print(type(effectifs_text))
# Étape 7: Construire un tableau d'effectifs pour chacune des classes de même amplitude
table_data = [["Taille (cm)", "Effectif"]]
for i, count in enumerate(class_counts):
class_label = f"[{min_val + i * class_width}, {min_val + (i + 1) * class_width} ["
table_data.append([class_label, str(count)])
table = Table(
table_data,
include_outer_lines=True
)
# Colorier les cellules du tableau
for i, row in enumerate(table_data[1:], start=1):
table.add_to_back(table.get_cell(
(i, 0), color=colors[(i-1) % len(colors)]))
table.add_to_back(table.get_cell(
(i, 1), color=colors[(i-1) % len(colors)]))
self.play(FadeIn(table.scale(0.5)))
self.wait(2)
self.play(table.animate.shift(3*LEFT))
# Pour exécuter la scène
if __name__ == "__main__":
from manim import config
config.media_width = "75%"
scene = NormalDistributionNumbers()
scene.render()
</code></pre>
|
<python><manim>
|
2024-12-15 11:23:47
| 1
| 1,871
|
Jean-Pat
|
79,282,130
| 16,869,946
|
Split a Pandas column of lists with different lengths into multiple columns
|
<p>I have a Pandas DataFrame that looks like:</p>
<pre><code>ID result
1 [.1,.5]
2 [.4,-.2,-.3,.1,0]
3 [0,.1,.6]
</code></pre>
<p>How can I split this column of lists into multiple columns?</p>
<p>Desired result:</p>
<pre><code>ID result_1 result_2 result_3 result_4 result_5
1 .1 .5 NaN NaN NaN
2 .4 -.2 -.3 .1 0
3 0 .1 .6 NaN NaN
</code></pre>
<p>I have digged into it a little and found this: <a href="https://stackoverflow.com/questions/35491274/split-a-pandas-column-of-lists-into-multiple-columns">Split a Pandas column of lists into multiple columns</a></p>
<p>but this only seems to apply to list with a constant number of elements.</p>
<p>Thank you so much in advance.</p>
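<p>For reference, this is the sketch I am experimenting with (not sure it is the recommended approach): build a new frame from the lists, since the DataFrame constructor pads shorter rows with NaN.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"ID": [1, 2, 3],
                   "result": [[.1, .5], [.4, -.2, -.3, .1, 0], [0, .1, .6]]})

expanded = pd.DataFrame(df["result"].tolist(), index=df.index)
expanded.columns = [f"result_{i + 1}" for i in expanded.columns]
out = pd.concat([df[["ID"]], expanded], axis=1)
print(out)
</code></pre>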
|
<python><pandas>
|
2024-12-15 10:17:38
| 2
| 592
|
Ishigami
|
79,282,088
| 8,831,742
|
Turning a grid map array into a shapely object
|
<p>I have a (2000x2000) python array of 0's and 1's that represent whether a cell is free or not. I'd like to turn this dense representation into a <code>shapely</code> object.
For example, an array of the type:</p>
<pre><code>0 0 0 0
0 1 1 0
0 1 1 0
0 0 0 0
</code></pre>
<p>would become a square.
I tried building one cell at a time and joining it to the larger shape, or building an array of individual squares and joining them together, but both are proving too slow. How can I obtain the shape in a more efficient way?</p>
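<p>For what it is worth, this is the vectorised variant I have been sketching (Shapely 2.x assumed; I do not know yet how well it scales to the full 2000×2000 grid):</p>
<pre><code>import numpy as np
import shapely

grid = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])

rows, cols = np.nonzero(grid)
# shapely.box is vectorised in Shapely 2.x, so all cell squares are built at
# once; unary_union then dissolves the shared edges into a single geometry.
cells = shapely.box(cols, rows, cols + 1, rows + 1)
shape = shapely.unary_union(cells)
print(shape)
</code></pre>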
|
<python><performance><shapely>
|
2024-12-15 09:37:53
| 1
| 353
|
none none
|
79,281,712
| 9,357,484
|
Gudhi python library did not show the output for dimension 1
|
<p>The following code is for "Homology Groups". I used this <a href="https://github.com/GUDHI/TDA-tutorial/blob/master/Tuto-GUDHI-simplex-Trees.ipynb" rel="nofollow noreferrer">library</a> for the implementation.</p>
<pre><code>import gudhi

# Create a simplicial complex
simplex_tree = gudhi.SimplexTree()
simplex_tree.insert([1, 2, 3])
simplex_tree.insert([2, 3, 4])
simplex_tree.insert([3, 4, 5])

# Compute homology
homology = simplex_tree.persistence()

print("Homology groups:")
for interval in homology:
    dim, (birth, death) = interval
    print(f"Dimension {dim}: birth = {birth}, death = {death}")
</code></pre>
<p>The output I get</p>
<pre><code>Homology groups:
Dimension 0: birth = 0.0, death = inf
</code></pre>
<p>Why did I not get any output for dimension 1?</p>
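<p>As a side note, this is the small check I am thinking of running (I may be misreading the library): the three triangles are inserted as filled 2-simplices glued along edges, so there should be no 1-dimensional hole, and the Betti numbers ought to reflect that.</p>
<pre><code>import gudhi

st = gudhi.SimplexTree()
st.insert([1, 2, 3])
st.insert([2, 3, 4])
st.insert([3, 4, 5])

st.compute_persistence()
print(st.betti_numbers())   # expecting one connected component, no 1-dimensional holes
</code></pre>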
|
<python><topology>
|
2024-12-15 02:30:37
| 2
| 3,446
|
Encipher
|
79,281,613
| 14,149,761
|
How to transpose and modify a pandas dataframe based on column values
|
<p>I have a set of data like below:</p>
<pre><code>name A B C
foo 1 0 0
bar 0 1 0
coo 0 0 1
</code></pre>
<p>That I am trying to alter to look like the table below:</p>
<pre><code> name
A foo
B bar
C coo
</code></pre>
<p>I've done some research but have had no luck, and I'm not sure if it is possible within Python. Do I need to manually change this table to get the result I am looking for?</p>
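<p>For reference, the closest I have come is something like the following sketch (I am not sure it is the idiomatic way):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"name": ["foo", "bar", "coo"],
                   "A": [1, 0, 0],
                   "B": [0, 1, 0],
                   "C": [0, 0, 1]})

# For every indicator column, pick the name of the row holding the 1
out = df.set_index("name").idxmax(axis=0).to_frame("name")
print(out)
</code></pre>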
|
<python><pandas>
|
2024-12-15 00:32:01
| 5
| 303
|
skoleosho97
|
79,281,608
| 9,547,278
|
Simple crawler to scrape ICD-11 database using API requests
|
<p>I tried to make this simple crawler to crawl down the entire ICD-11 database (<a href="https://icd.who.int/browse/2024-01/foundation/en#455013390" rel="nofollow noreferrer">https://icd.who.int/browse/2024-01/foundation/en#455013390</a>) and collect all the titles and descriptions of all diseases, but it keeps stopping at the very top level of the link tree and does not go down the tree into the various branches and their sub-branches and so forth. Where is this going wrong?</p>
<pre><code>import requests

# Configuration
token_endpoint = 'https://icdaccessmanagement.who.int/connect/token'
api_base_url = 'http://id.who.int/icd/entity/1435254666'
client_id = 'XXXX' # Replace with your client_id or set it as an environment variable
client_secret = 'XXX' # Replace with your client_secret or set it as an environment variable
scope = 'icdapi_access'
grant_type = 'client_credentials'

# Get the OAUTH2 token
# Set data to post
payload = {'client_id': client_id, 'client_secret': client_secret, 'scope': scope, 'grant_type': grant_type}
# Make request
r = requests.post(token_endpoint, data=payload, verify=False).json()
token = r['access_token']

# Access ICD API
uri = 'https://id.who.int/icd/entity/455013390'

def crawler(uri):
    # HTTP header fields to set
    headers = {'Authorization': 'Bearer '+token, 'Accept': 'application/json', 'Accept-Language': 'en', 'API-Version': 'v2'}
    # Make request
    r = requests.get(uri, headers=headers, verify=False).json()
    # Print the result
    try: print(r['title']['@value'], r['fullySpecifiedName']['@value'])
    except: pass
    try: print(r['definition']['@value'])
    except: pass
    try: print(r['child'])
    except: pass
    # Return children
    try: return r['child']
    except: return None

url_list = crawler(uri)
for url in url_list: url_list += crawler(url)
</code></pre>
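<p>The restructuring I am considering (a sketch): walk the tree with an explicit queue instead of appending to the list being iterated over, and guard against <code>crawler()</code> returning <code>None</code> for leaf entities.</p>
<pre><code>from collections import deque

def crawl_all(root_uri):
    queue = deque([root_uri])
    seen = set()
    while queue:
        current = queue.popleft()
        if current in seen:
            continue
        seen.add(current)
        children = crawler(current) or []   # crawler() returns None for leaves
        queue.extend(children)

crawl_all(uri)
</code></pre>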
|
<python><web-crawler>
|
2024-12-15 00:17:52
| 0
| 474
|
Legion
|
79,281,233
| 11,092,636
|
Unexpected Unresolved attribute reference 'all' for class 'bool'
|
<p>When I do a matrix multiplication with <code>numpy</code> (Python 3.12.1, numpy 1.26.4, PyCharm 2024.3.1 (Professional Edition)), I get this warning which I believe is wrong:</p>
<blockquote>
<p>Unresolved attribute reference 'all' for class 'bool'</p>
</blockquote>
<p>Proof:
<a href="https://i.sstatic.net/6kQMNnBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6kQMNnBM.png" alt="enter image description here" /></a></p>
<p>Minimial Reproducible Example</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a_matrix = np.array([[1, 2], [3, 4]])
b_matrix = np.array([[10], [20]])
a = [5.0]
b = [2.0]
if ((a_matrix @ np.array([[round(a[0], 0)], [round(b[0], 0)]])) == b_matrix).all():
    print("Success")
</code></pre>
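<p>As a workaround I am considering (which also sidesteps the inspection warning), comparing with <code>np.array_equal</code> instead of chaining <code>.all()</code> on the comparison result:</p>
<pre class="lang-py prettyprint-override"><code>if np.array_equal(a_matrix @ np.array([[round(a[0], 0)], [round(b[0], 0)]]), b_matrix):
    print("Success")
</code></pre>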
|
<python><numpy><pycharm>
|
2024-12-14 19:20:08
| 1
| 720
|
FluidMechanics Potential Flows
|
79,281,225
| 8,185,618
|
Cannot change the default indexType from AutoIndex to HNSW when creating an index in the Milvus collection via the RESTful API
|
<p>I have successfully created a Milvus collection and inserted data <strong>directly</strong> without any noticeable challenges. Now, I want to create a Milvus collection using the <strong>RESTful API</strong> as detailed in the documentation at <a href="https://milvus.io/api-reference/restful/v2.5.x/v2/Collection%20(v2)/Create.md" rel="nofollow noreferrer">Create Collection</a>.</p>
<p>I utilized the following code, which I adapted from the provided link, to create the collection, configure it, insert data, and then create an index.</p>
<pre class="lang-py prettyprint-override"><code>import time
import requests
import json
import numpy as np
def generate_random_vector():
float_array = np.random.uniform(-1, 1, 512)
normalized_array = float_array / np.linalg.norm(float_array, ord=2)
return normalized_array
def create_collection(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/collections/create"
payload = json.dumps({
"collectionName": collection_name,
"dimension": 512,
"metricType": "COSINE",
"vectorFieldName": "Embedding_Features",
"primaryFieldName": "IDs_Features",
"idType": "Int64",
"indexType": "HNSW",
"primaryKey": {"name": "IDs_Features", "type": "INT64", "primaryKey": True},
"vectorField": {
"name": "Embedding_Features",
"type": "FLOAT_VECTOR",
"primaryKey": False,
},
"indexes": [{
"fieldName": "Embedding_Features",
"indexName": "Embedding_Features",
"metricType": "COSINE",
}],
"auto_index": False,
})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
def validate_collection(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/collections/describe"
payload = json.dumps({"collectionName": collection_name})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
def drop_collection(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/collections/drop"
payload = json.dumps({"collectionName": collection_name})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
def insert_data_into_collection(
collection_name: str,
start_id: int,
n: int,
ids_name: str,
features_name: str,
server_address: str,
):
url = f"{server_address}/v2/vectordb/entities/insert"
data = [{"IDs_Features": start_id + i + 1, "Embedding_Features": list(generate_random_vector())} for i in range(n)]
payload = json.dumps({"data": data, "collectionName": collection_name})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
def describe_collection(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/collections/describe"
payload = json.dumps({"collectionName": collection_name})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
def describe_index(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/indexes/describe"
payload = json.dumps({"indexName": "Embedding_Features", "collectionName": collection_name})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
def create_index(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/indexes/create"
payload = json.dumps({
"collectionName": collection_name,
"indexParams": [{
"metricType": "COSINE",
"index_type": "HNSW",
"fieldName": "Embedding_Features",
"params": {"M": 128, "efConstruction": 256},
}],
})
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
return response.json()
</code></pre>
<p>When I attempted to create the collection using:</p>
<pre class="lang-py prettyprint-override"><code>create_collection(collection_name=collection_name, server_address=server_address)
</code></pre>
<p>I received the following message:</p>
<pre class="lang-py prettyprint-override"><code>{'code': 0, 'data': {}}
</code></pre>
<p>Subsequently, when I described the collection:</p>
<pre class="lang-py prettyprint-override"><code>describe_collection(collection_name=collection_name, server_address=server_address)
</code></pre>
<p>I got this response:</p>
<pre class="lang-py prettyprint-override"><code>{'code': 0,
'data': {'aliases': [],
'autoId': False,
'collectionID': 454176377651168636,
'collectionName': 'collection1',
'consistencyLevel': 'Bounded',
'description': '',
'enableDynamicField': True,
'fields': [{'autoId': False,
'clusteringKey': False,
'description': '',
'id': 100,
'name': 'IDs_Features',
'nullable': False,
'partitionKey': False,
'primaryKey': True,
'type': 'Int64'},
{'autoId': False,
'clusteringKey': False,
'description': '',
'id': 101,
'name': 'Embedding_Features',
'nullable': False,
'params': [{'key': 'dim', 'value': '512'}],
'partitionKey': False,
'primaryKey': False,
'type': 'FloatVector'}],
'functions': None,
'indexes': [{'fieldName': 'Embedding_Features',
'indexName': 'Embedding_Features',
'metricType': 'COSINE'}],
'load': 'LoadStateLoading',
'partitionsNum': 1,
'properties': [],
'shardsNum': 1},
'message': ''}
</code></pre>
<p>When I described the collection index:</p>
<pre class="lang-py prettyprint-override"><code>describe_index(collection_name=collection_name, server_address=server_address)
</code></pre>
<p>I received this message:</p>
<pre class="lang-py prettyprint-override"><code>{'code': 0,
'data': [{'failReason': '',
'fieldName': 'Embedding_Features',
'indexName': 'Embedding_Features',
'indexState': 'Finished',
'indexType': 'AUTOINDEX',
'indexedRows': 0,
'metricType': 'COSINE',
'pendingRows': 0,
'totalRows': 0}]}
</code></pre>
<p>This indicated that the index type was set to <strong>AUTOINDEX</strong>, despite my configuration specifying <strong>HNSW</strong>, which caused issues with indexing.</p>
<p>Afterwards, I inserted 10,000 rows into the collection:</p>
<pre class="lang-py prettyprint-override"><code>number_vectors = 10000
for i in range(0, number_vectors, 500):
response = insert_data_into_collection(
collection_name=collection_name,
start_id=i,
n=500,
ids_name="IDs_Features",
features_name="Embedding_Features",
server_address=server_address,
)
if response["data"]["insertCount"] == 500:
print(f"Great! inserted ids {i} to {i+500} successfully")
else:
print(f"There are some errors for {i}")
time.sleep(1)
</code></pre>
<p>Finally, when I tried to create the index:</p>
<pre class="lang-py prettyprint-override"><code>create_index(collection_name=collection_name, server_address=server_address)
</code></pre>
<p>I encountered the following error:</p>
<pre class="lang-py prettyprint-override"><code>{'code': 65535, 'message': 'only metric type can be passed when use AutoIndex'}
</code></pre>
<p>Is it possible for me to set the <strong>indexType</strong> to <strong>HNSW</strong> and either ignore or remove the default <strong>AutoIndex</strong> value?</p>
<p>By adjusting the <code>create_index</code> function:</p>
<pre class="lang-py prettyprint-override"><code>def create_index(collection_name: str, server_address: str):
url = f"{server_address}/v2/vectordb/indexes/create"
payload = json.dumps(
{
"collectionName": collection_name,
"indexParams": [
{
"metricType": "COSINE",
"fieldName": "Embedding_Features",
"params": {"M": 128, "efConstruction": 256, "index_type": "HNSW"},
}
],
}
)
# Example: {'metric_type': 'IP', 'index_type': 'HNSW', 'params': {'nlist': 1024, 'efConstruction': 40, 'M': 1024}}
headers = {
"x-api-key": "YOUR_API_KEY",
"Content-Type": "application/json",
}
response = requests.post(url, headers=headers, data=payload)
response = json.loads(response.text)
return response
</code></pre>
<p>I encountered the following error:</p>
<pre class="lang-py prettyprint-override"><code>{'code': 65535,
'message': 'CreateIndex failed: at most one distinct index is allowed per field'}
</code></pre>
|
<python><rest><vector-database><milvus>
|
2024-12-14 19:15:01
| 0
| 978
|
BarzanHayati
|
79,281,123
| 1,440,839
|
How to convert contourf values and plot to surface of 3d sphere
|
<p>I am trying to use matplotlib contourf to generate a contour plot of temperature data. I would then like to map this data onto a 3d sphere. I am using vpython to render the image. The issue I have is that the polygons are not generated across the surface of the sphere, and also there are many gaps in the data. Can someone please explain how this can be achieved ?</p>
<p>Please note I do not wish to solve this by generating a texture from the contour and then wrapping the sphere with the texture. I wish to translate the polygon paths generated by contourf, and directly translate those paths onto the sphere.</p>
<pre class="lang-py prettyprint-override"><code>from vpython import vector
import numpy as np
from matplotlib import pyplot as plt
from vpython import triangle, vertex, vec
from scipy.interpolate import griddata
def spherical_to_cartesian(lat, lon, radius=3):
lons = np.radians(lon)
lats = np.radians(lat)
x = radius * np.cos(lats) * np.cos(lons)
y = radius * np.cos(lats) * np.sin(lons)
z = radius * np.sin(lats)
return np.array([x, y, z])
shape = (721, 1440)
lats = np.linspace(-90, 90, shape[0])
lons = np.linspace(-180, 180, shape[1])
min_temp = -30
max_temp = 50
temps = np.random.uniform(min_temp, max_temp, size=shape)
lons_grid, lats_grid = np.meshgrid(lons, lats)
new_lons = np.linspace(lons.min(), lons.max(), 72) # 72 points in longitude
new_lats = np.linspace(lats.min(), lats.max(), 36) # 36 points in latitude
new_lons_grid, new_lats_grid = np.meshgrid(new_lons, new_lats)
radius = 3
lats_flat = lats_grid.flatten()
lons_flat = lons_grid.flatten()
temps_flat = temps.flatten()
coarse_temps = griddata(
(lons_flat, lats_flat),
temps_flat,
(new_lons_grid, new_lats_grid),
method='linear' # Use 'linear', 'nearest', or 'cubic' as needed
)
norm = plt.Normalize(coarse_temps.min(), vmax=coarse_temps.max())
cmap = plt.get_cmap("inferno", 100)
levels = 100
levels = np.linspace(coarse_temps.min(), coarse_temps.max(), levels + 1)
bucketed_data = np.digitize(coarse_temps, levels) - 1
fig, ax = plt.subplots()
contours = ax.contourf(new_lons_grid, new_lats_grid, bucketed_data, levels=np.arange(len(levels)))
plt.close(fig)
def create_polygon(region, temp_color):
if len(region) < 3:
# Can't form a polygon with fewer than 3 points
return
# Calculate the centroid of the region
centroid = vec(sum(p[0] for p in region) / len(region),
sum(p[1] for p in region) / len(region),
sum(p[2] for p in region) / len(region))
# Tessellate the region into triangles
for i in range(len(region) - 1):
v0 = vec(*region[i])
v1 = vec(*region[i + 1])
triangle(
v0=vertex(pos=v0, color=temp_color),
v1=vertex(pos=v1, color=temp_color),
v2=vertex(pos=centroid, color=temp_color)
)
v0 = vec(*region[-1])
v1 = vec(*region[0])
triangle(
v0=vertex(pos=v0, color=temp_color),
v1=vertex(pos=v1, color=temp_color),
v2=vertex(pos=centroid, color=temp_color)
)
def split_contours(segs, kinds=None):
if kinds is None:
return segs # nothing to be done
new_segs = []
for i, seg in enumerate(segs):
segkinds = kinds[i]
boundaries = [0] + list(np.nonzero(segkinds == 79)[0])
for b in range(len(boundaries) - 1):
new_segs.append(seg[boundaries[b] + (1 if b > 0 else 0):boundaries[b + 1]])
return new_segs
allsegs = contours.allsegs
allkinds = contours.allkinds
colors = cmap(norm(coarse_temps))
rgb_colors = [
tuple(int(c * 255) for c in color[:3])
for color in colors.reshape(-1, colors.shape[-1])
]
for clev in range(len(contours.allsegs)):
kinds = None if allkinds is None else allkinds[clev]
segs = split_contours(allsegs[clev], kinds)
rgb = cmap(clev)[:3]
coords = [spherical_to_cartesian(lat, lon, radius=3) for seg in segs for lon, lat in seg]
temp_color = vector(*rgb)
create_polygon(coords, temp_color)
import time
while True:
time.sleep(0.03)
</code></pre>
<p>This is how the sphere is rendered. I would like the contours to be rendered on the surface of the sphere, as if the 2d contours were projected onto the 3d surface.</p>
<p><a href="https://i.sstatic.net/82HEa6dT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82HEa6dT.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><scipy><delaunay><scipy-spatial>
|
2024-12-14 17:53:17
| 0
| 401
|
John
|
79,280,931
| 1,478,191
|
Override Django Admin Template In 3rd Party Package?
|
<p>I am developing a Python package which includes a Django addon app. The package is related to authentication so I wanted to extend the <code>django.contrib.admin</code> login page.</p>
<p>Is it possible for my third-party package to override templates shipped by another installed app, in this case <code>django.contrib.admin</code>?</p>
<p>The template for the login page is under <code>django/contrib/admin/templates/admin/login.html</code>. Which means the template is registered as <code>admin/login.html</code>.</p>
<p>I put my overridden template in my package's <code>templates/admin/login.html</code> directory but the template does not get overridden.</p>
<p>Changing the order of my package and <code>django.contrib.admin</code> in <code>INSTALLED_APPS</code> doesn't seem to help, and neither does having <code>APP_DIRS = True</code> in the template configuration.</p>
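<p>For reference, this is roughly what my setup looks like. It is only a sketch: <code>myauthpkg</code> is a placeholder for my package's app name, and the settings are the standard Django defaults.</p>
<pre class="lang-py prettyprint-override"><code># settings.py (sketch) -- "myauthpkg" stands in for my package's app name.
INSTALLED_APPS = [
    "myauthpkg",                    # my package, listed before admin
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,           # app template directories are searched
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

# Package layout (as a comment, not code):
#   myauthpkg/
#       templates/
#           admin/
#               login.html   <- intended to shadow admin/login.html
</code></pre>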
|
<python><django><templates><jinja2>
|
2024-12-14 15:49:07
| 1
| 4,290
|
recoup8063
|
79,280,838
| 459,745
|
After `uv init`, adding script to `[project.scripts]` does not work
|
<p>I created a new project with uv 0.5.7:</p>
<pre class="lang-bash prettyprint-override"><code>uv init myproject
cd myproject
uv sync
</code></pre>
<p>And my project looks like this:</p>
<pre class="lang-none prettyprint-override"><code>├── hello.py
├── pyproject.toml
├── README.md
└── uv.lock
</code></pre>
<p>At this point, the following works:</p>
<pre class="lang-none prettyprint-override"><code>uv run python hello.py
uv run python -m hello
</code></pre>
<p>I would like to make a script named <code>um</code> by adding the following to <code>pyproject.toml</code>:</p>
<pre class="lang-none prettyprint-override"><code>[project.scripts]
um = "hello:main"
</code></pre>
<p>However, that script is not found when I run it:</p>
<pre class="lang-none prettyprint-override"><code>$ uv run um [umbala:main:]
error: Failed to spawn: `um`
Caused by: No such file or directory (os error 2)
</code></pre>
<p>I need help creating this <code>um</code> script. What did I miss?</p>
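<p>In case it matters, <code>hello.py</code> is still the file generated by <code>uv init</code>, which (if I remember the template correctly) looks roughly like this:</p>
<pre class="lang-py prettyprint-override"><code># hello.py -- approximately what `uv init` generated; the entry point I am
# targeting with "hello:main" is the main() function below.
def main():
    print("Hello from myproject!")


if __name__ == "__main__":
    main()
</code></pre>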
|
<python><uv>
|
2024-12-14 14:53:24
| 3
| 41,381
|
Hai Vu
|
79,280,773
| 2,475,195
|
RuntimeError: Trying to backward through the graph a second time on loss tensor
|
<p>I have the following training code. I am quite sure I call <code>loss.backward()</code> just once, and yet I am getting the error from the title. What am I doing wrong? Note that <code>X_train_tensor</code> is the output of another graph calculation, so it has <code>requires_grad=True</code>, as you can see in the print statement. Is this the source of the problem, and if so, how can I change it? It won't allow me to toggle it directly on the tensor.</p>
<pre><code>for iter in range(max_iters):
start_ix = 0
loss = None
while start_ix < len(X_train_tensor):
loss = None
end_ix = min(start_ix + batch_size, len(X_train_tensor))
out, loss, accuracy = model(X_train_tensor[start_ix:end_ix], y_train_tensor[start_ix:end_ix])
# every once in a while evaluate the loss on train and val sets
if (start_ix==0) and (iter % 10 == 0 or iter == max_iters - 1):
out_val, loss_val, accuracy_val = model(X_val_tensor, y_val_tensor)
print(f"step {iter}: train loss={loss:.2f} train_acc={accuracy:.3f} | val loss={loss_val:.2f} val_acc={accuracy_val:.3f} {datetime.datetime.now()}")
optimizer.zero_grad(set_to_none=True)
print (iter, start_ix, X_train_tensor.requires_grad, y_train_tensor.requires_grad, loss.requires_grad)
loss.backward()
optimizer.step()
start_ix = end_ix + 1
</code></pre>
<p>This is the error:</p>
<pre><code>RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
</code></pre>
<p>Update: this is where the model input tensors come from; they are the output of another (autoencoder) model:</p>
<pre><code>autoencoder.eval()
with torch.no_grad(): # it seems like adding this line solves the problem?
X_train_encoded, loss = autoencoder(X_train_tensor)
X_val_encoded, loss = autoencoder(X_val_tensor)
X_test_encoded, loss = autoencoder(X_test_tensor)
</code></pre>
<p>Adding the <code>with torch.no_grad()</code> line above solves the issue, but I don't understand why. Does it actually change how the outputs are generated? How does that work?</p>
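<p>For comparison, the alternative I considered instead of <code>torch.no_grad()</code> was to detach the encoder outputs before training on them; I'm not sure whether the two are equivalent. Sketch below, reusing the variables from the snippet above:</p>
<pre><code># Alternative sketch: detach the encoder outputs so that the downstream
# model's backward pass cannot reach back into the autoencoder's graph.
autoencoder.eval()
X_train_encoded, loss = autoencoder(X_train_tensor)
X_val_encoded, loss = autoencoder(X_val_tensor)
X_test_encoded, loss = autoencoder(X_test_tensor)

X_train_encoded = X_train_encoded.detach()
X_val_encoded = X_val_encoded.detach()
X_test_encoded = X_test_encoded.detach()

print(X_train_encoded.requires_grad)  # expected: False
</code></pre>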
|
<python><deep-learning><pytorch><tensor><autograd>
|
2024-12-14 14:11:59
| 2
| 4,355
|
Baron Yugovich
|
79,280,663
| 310,370
|
How to install FFmpeg libraries into a Python venv (with pip or manually) for torchaudio
|
<p>torchaudio requires avutil and other FFmpeg binary DLL files.</p>
<p>Source: <a href="https://pytorch.org/audio/2.3.0/installation.html" rel="nofollow noreferrer">https://pytorch.org/audio/2.3.0/installation.html</a></p>
<p>However, the instructions there only give an example for Anaconda.</p>
<p>I am not using Anaconda; I am using a Python 3.10.11 pip venv.</p>
<p>I can't find these DLL files anywhere.</p>
<p>How can I find them and install them into the Python venv?</p>
<p>Since I can't find them, I am getting the error below:</p>
<pre><code>Traceback (most recent call last):
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\gradio\queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\gradio\blocks.py", line 2047, in process_api
result = await self.call_function(
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\gradio\blocks.py", line 1594, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\gradio\utils.py", line 869, in wrapper
response = f(*args, **kwargs)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "R:\MMAudio_v1\MMAudio\gradio_demo.py", line 60, in video_to_audio
clip_frames, sync_frames, duration = load_video(video, duration)
File "R:\MMAudio_v1\MMAudio\mmaudio\eval_utils.py", line 178, in load_video
reader = StreamingMediaDecoder(video_path)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\io\_streaming_media_decoder.py", line 526, in __init__
self._be = ffmpeg_ext.StreamingMediaDecoder(os.path.normpath(src), format, option)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 25, in __getattr__
self._import_once()
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 39, in _import_once
self.module = self.import_func()
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 143, in _init_ffmpeg
ext = _find_ffmpeg_extension(ffmpeg_vers)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 122, in _find_ffmpeg_extension
raise ImportError(
ImportError: Failed to intialize FFmpeg extension. Tried versions: ['6', '5', '4', '']. Enable DEBUG logging to see more details about the error.
</code></pre>
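<p>What I was planning to try, once I actually find the DLLs, is to make the folder containing them visible to the process before importing torchaudio. This is only a sketch and the path is a made-up example:</p>
<pre><code># Windows-only sketch: expose a folder with the FFmpeg shared DLLs
# (avutil-*.dll, avcodec-*.dll, ...) before importing torchaudio.
# The path below is hypothetical.
import os

ffmpeg_bin = r"R:\tools\ffmpeg-shared\bin"
os.add_dll_directory(ffmpeg_bin)  # Python 3.8+ on Windows
os.environ["PATH"] = ffmpeg_bin + os.pathsep + os.environ.get("PATH", "")

import torchaudio  # import after the DLL directory is registered
print(torchaudio.__version__)
</code></pre>
<p>I don't know yet whether this is enough, because I still need the DLLs themselves.</p>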
|
<python><ffmpeg><pip><python-venv><torchaudio>
|
2024-12-14 12:57:27
| 1
| 23,982
|
Furkan Gözükara
|