QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars) |
|---|---|---|---|---|---|---|---|---|
79,481,158
| 4,451,315
|
Keep rows where a field of a list[struct] column contains a message
|
<p>Say I have the following data:</p>
<pre class="lang-py prettyprint-override"><code>import duckdb
rel = duckdb.sql("""
FROM VALUES
([{'a': 'foo', 'b': 'bta'}]),
([]),
([{'a': 'jun', 'b': 'jul'}, {'a':'nov', 'b': 'obt'}])
df(my_col)
SELECT *
""")
</code></pre>
<p>which looks like this:</p>
<pre class="lang-py prettyprint-override"><code>ββββββββββββββββββββββββββββββββββββββββββββββββ
β my_col β
β struct(a varchar, b varchar)[] β
ββββββββββββββββββββββββββββββββββββββββββββββββ€
β [{'a': foo, 'b': bta}] β
β [] β
β [{'a': jun, 'b': jul}, {'a': nov, 'b': obt}] β
ββββββββββββββββββββββββββββββββββββββββββββββββ
</code></pre>
<p>I would like to keep all rows where, for any of the structs in <code>'my_col'</code>, field <code>'b'</code> contains the substring <code>'bt'</code>.</p>
<p>So, expected output:</p>
<pre><code>┌──────────────────────────────────────────────┐
│                    my_col                    │
│        struct(a varchar, b varchar)[]        │
├──────────────────────────────────────────────┤
│ [{'a': foo, 'b': bta}]                       │
│ [{'a': jun, 'b': jul}, {'a': nov, 'b': obt}] │
└──────────────────────────────────────────────┘
</code></pre>
<p>How can I write a SQL query to do that?</p>
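<p>For reference, a sketch of the direction I'm thinking of (untested, and it assumes <code>list_filter</code> with a lambda plus <code>contains</code> behave the way I expect):</p>
<pre class="lang-py prettyprint-override"><code>import duckdb

rel = duckdb.sql("""
    FROM VALUES
        ([{'a': 'foo', 'b': 'bta'}]),
        ([]),
        ([{'a': 'jun', 'b': 'jul'}, {'a':'nov', 'b': 'obt'}])
        df(my_col)
    SELECT *
""")

# Untested sketch: keep rows where at least one struct's field 'b' contains 'bt'.
filtered = rel.filter("len(list_filter(my_col, x -> contains(x['b'], 'bt'))) > 0")
print(filtered)
</code></pre>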
|
<python><duckdb>
|
2025-03-03 14:03:52
| 1
| 11,062
|
ignoring_gravity
|
79,481,080
| 1,195,909
|
Special many to many relation in SQLAlchemy
|
<p>I am trying to use <code>sqlalchemy</code> to model a database consisting of two classes <code>A</code> and <code>B</code>. The <code>B</code> class has two fields: <code>B.a</code> (1 to n) and <code>B.alist</code> (n to n).</p>
<p>I am trying to follow the <a href="https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html#setting-bi-directional-many-to-many" rel="nofollow noreferrer">Setting Bi-Directional Many-to-many</a> example (see the code below), but I get the following warning:</p>
<blockquote>
<p>SAWarning: relationship 'A.blist' will copy column a.id to column b.a_id, which conflicts with relationship(s): 'A.bs' (copies a.id to b.a_id). If this is not the intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. To silence this warning, add the parameter 'overlaps="bs"' to the 'A.blist' relationship. (Background on this warning at: <a href="https://sqlalche.me/e/20/qzyx" rel="nofollow noreferrer">https://sqlalche.me/e/20/qzyx</a>) (This warning originated from the <code>configure_mappers()</code> process, which was invoked automatically in response to a user-initiated operation.)</p>
</blockquote>
<p>How should I model this to have a <code>B</code> class with one instance of <code>A</code> in <code>B.a</code> and other instances in <code>B.alist</code>?:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from sqlalchemy import ForeignKey, Table, Column, Integer
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
class Base(DeclarativeBase):
pass
associative_table = Table(
"associative_table",
Base.metadata,
Column("a_id", Integer, ForeignKey("a.id"), primary_key=True),
Column("b_id", Integer, ForeignKey("b.id"), primary_key=True),
)
class A(Base):
__tablename__ = 'a'
id: Mapped[int] = mapped_column(primary_key=True)
bs: Mapped[List["B"]] = relationship(back_populates="a")
blist: Mapped[List["B"]] = relationship(back_populates="alist")
class B(Base):
__tablename__ = 'b'
id: Mapped[int] = mapped_column(primary_key=True)
a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
a: Mapped[A] = relationship(back_populates="bs")
alist: Mapped[List["A"]] = relationship(back_populates="blist")
engine = create_engine("sqlite+pysqlite:///:memory:", echo=True)
Base.metadata.create_all(engine)
with Session(engine) as session:
a1 = A()
a2 = A()
a3 = A()
session.add_all([a1, a2, a3])
b1 = B(
a = a1,
)
session.add(b1)
# b1.alist.append(a2) # It raises `AttributeError` because b1.alist is None.
session.commit()
</code></pre>
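<p>From re-reading the warning, my current guess (an untested sketch) is that the list relationships need to go through the association table explicitly via <code>secondary=</code>, so they stop reusing the <code>b.a_id</code> foreign key:</p>
<pre class="lang-py prettyprint-override"><code># Untested sketch: same models as above, but the many-to-many side declares
# secondary=associative_table so it no longer overlaps with A.bs / B.a.
class A(Base):
    __tablename__ = 'a'
    id: Mapped[int] = mapped_column(primary_key=True)
    bs: Mapped[List["B"]] = relationship(back_populates="a")
    blist: Mapped[List["B"]] = relationship(
        secondary=associative_table, back_populates="alist"
    )

class B(Base):
    __tablename__ = 'b'
    id: Mapped[int] = mapped_column(primary_key=True)
    a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
    a: Mapped[A] = relationship(back_populates="bs")
    alist: Mapped[List["A"]] = relationship(
        secondary=associative_table, back_populates="blist"
    )
</code></pre>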
|
<python><database-design><sqlalchemy>
|
2025-03-03 13:25:13
| 0
| 3,463
|
msampaio
|
79,481,011
| 4,752,874
|
Unable to Upload csv into Azure Blob Storage with LocationParseError label empty or too long
|
<p>I am trying to load a dataframe as a CSV into Azure Blob Storage but am getting the error below. I have had a look online but can't see what I am doing wrong. Some advice would be much appreciated.</p>
<blockquote>
<p>'DefaultEndpointsProtocol=https;AccountName=test;AccountKey=Test==;EndpointSuffix=core.windows.net', label empty or too long</p>
</blockquote>
<pre><code>import pandas as pd
from azure.storage.blob import ContainerClient
from azure.storage.blob import BlobServiceClient
client = "DefaultEndpointsProtocol=https;AccountName=test;AccountKey=Test==;EndpointSuffix=core.windows.net"
container_name = "test"
blob_name = "testCsv.csv"
container = ContainerClient.from_connection_string(client, container_name)
blob_service_client = BlobServiceClient(client)
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
df = pd.DataFrame(data)
csv_string = df.to_csv(encoding='utf-8', index=False, header=True)
csv_bytes = str.encode(csv_string)
blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)
blob_client.upload_blob(csv_bytes, overwrite=True)
</code></pre>
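<p>For what it's worth, here is a trimmed sketch of what I thought should work (untested; it assumes <code>BlobServiceClient.from_connection_string</code> is the right factory for a connection string, rather than passing the string straight to the constructor):</p>
<pre><code>import pandas as pd
from azure.storage.blob import BlobServiceClient

# Untested sketch: build the client from the connection string instead of passing
# the connection string where an account URL is expected.
conn_str = "DefaultEndpointsProtocol=https;AccountName=test;AccountKey=Test==;EndpointSuffix=core.windows.net"
blob_service_client = BlobServiceClient.from_connection_string(conn_str)
blob_client = blob_service_client.get_blob_client(container="test", blob="testCsv.csv")

df = pd.DataFrame({"calories": [420, 380, 390], "duration": [50, 40, 45]})
csv_bytes = df.to_csv(index=False, header=True).encode("utf-8")
blob_client.upload_blob(csv_bytes, overwrite=True)
</code></pre>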
|
<python><azure><azure-blob-storage>
|
2025-03-03 12:42:48
| 1
| 349
|
CGarden
|
79,480,448
| 20,770,190
|
What is the default value of temperature parameter in ChatOpenAI in Langchain?
|
<p>What is the default value of the temperature parameter when I create an instance of <code>ChatOpenAI</code> in Langchain?</p>
<pre class="lang-py prettyprint-override"><code>from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o")
print(llm.temperature) # it is None!
llm=ChatOpenAI(model="gpt-4o", temperature=0.1)
print(llm.temperature) # it is 0.1
</code></pre>
<p>When I don't explicitly set a value for <code>temperature</code>, it is <code>None</code>! So what does that mean? Is it zero or one in this case?</p>
|
<python><openai-api><langchain>
|
2025-03-03 08:01:16
| 1
| 301
|
Benjamin Geoffrey
|
79,480,411
| 6,006,584
|
External API call in async handler
|
<p>I have the following code in a FastAPI route handler. <code>client</code> is an <code>aiohttp.ClientSession()</code>. The service is a singleton, meaning all requests use the same class instance where I have this client.</p>
<pre><code>async def handler():
log...
async with client.post(
f"{config.TTS_SERVER_ENDPOINT}/v2/models/{self.MODEL_NAME}/infer",
json=request_payload,
) as response:
        response_data = await response.json()  # get the JSON response
log...
</code></pre>
<p>I am load testing the system, and both the logs and the JMeter results show that I am only handling 2-3 requests per second; is that reasonable?</p>
<p>I would expect to see a lot of "start" messages and then a lot of "finish" messages, but this is not the case.</p>
<p>I see that the interval between the start and finish logs keeps getting larger, from 0.5 seconds up to 5-6 seconds; what could be the bottleneck here?</p>
<p>I am running FastAPI in Docker with one CPU, 2G memory and with this command :</p>
<pre><code>CMD ["uv", "run", "gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-w", "4", "--threads", "16","--worker-connections", "2000", "-b", "0.0.0.0:8000","--preload", "src.main:app"]
</code></pre>
<p>where <code>uv</code> is the package manager I am using.</p>
<p>What is going on here? It does not seem reasonable to me that I am handling only this number of requests.</p>
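<p>For reference, this is roughly how I could restructure the session creation per worker (an untested sketch; the connection-pool limit value is arbitrary):</p>
<pre><code>from contextlib import asynccontextmanager

import aiohttp
from fastapi import FastAPI

# Untested sketch: one ClientSession per worker process/event loop, created at startup,
# with an explicit connection-pool limit (100 here is arbitrary).
@asynccontextmanager
async def lifespan(app: FastAPI):
    app.state.client = aiohttp.ClientSession(connector=aiohttp.TCPConnector(limit=100))
    yield
    await app.state.client.close()

app = FastAPI(lifespan=lifespan)
</code></pre>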
|
<python><fastapi>
|
2025-03-03 07:41:29
| 0
| 1,638
|
ddor254
|
79,480,260
| 275,002
|
TA-lib is not properly detecting Engulfing candle
|
<p>As you see in the attached image, there was a Bearish Engulfing candle on December 18.
<a href="https://i.sstatic.net/iVfTUclj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVfTUclj.png" alt="enter image description here" /></a></p>
<p>But when I look at the same data and the computed engulfing value, it shows something else, in fact <code>0</code>.</p>
<p>Below is the function that computes the engulfing candle pattern:</p>
<pre><code>def detect_engulfing_pattern(tsla_df):
df = tsla_df.rename(columns={"Open": "open", "High": "high", "Low": "low", "Close": "close"})
df['engulfing'] = talib.CDLENGULFING(df['open'], df['high'], df['low'], df['close'])
return df
</code></pre>
<p><a href="https://i.sstatic.net/fHqRcV6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fHqRcV6t.png" alt="enter image description here" /></a></p>
|
<python><ta-lib><candlesticks>
|
2025-03-03 05:53:25
| 1
| 15,089
|
Volatil3
|
79,480,121
| 1,141,601
|
Bitunix API Signature Failure with Python
|
<p>I have a code snippet I am trying to prove works and allows me to connect to the bitunix futures API. I have an idea for a bot I want to write.</p>
<p>Instructions for creating the signature: [https://openapidoc.bitunix.com/doc/common/sign.html]</p>
<p>Api I want to call: [https://openapidoc.bitunix.com/doc/account/get_single_account.html]</p>
<p>Here is the code I have written so far. I have a valid API key & secret (redacted below), but it continues to fail with the following error, which I can reproduce in the script as well as with Postman:</p>
<blockquote>
<p>Response: {'code': 10007, 'data': None, 'msg': 'Signature Error'}</p>
</blockquote>
<pre><code>import requests
import hashlib
import uuid
import time
from urllib.parse import urlencode
def sha256_hex(input_string):
return hashlib.sha256(input_string.encode('utf-8')).hexdigest()
def generate_signature(api_key, secret_key, nonce, timestamp, query_params):
"""
Generate the double SHA-256 signature as per Bitunix API requirements.
"""
# Concatenate nonce, timestamp, api_key, query_params, and an empty body
digest_input = nonce + timestamp + api_key + query_params + ""
# First SHA-256 hash
digest = sha256_hex(digest_input)
# Append secret_key to the digest
sign_input = digest + secret_key
# Second SHA-256 hash
signature = sha256_hex(sign_input)
return signature
def get_account_info(api_key, secret_key, margin_coin):
"""
Call the Bitunix 'Get Single Account' endpoint with proper signing.
"""
# Generate nonce: 32-character random string
nonce = uuid.uuid4().hex
# Generate timestamp in YYYYMMDDHHMMSS format
timestamp = time.strftime("%Y%m%d%H%M%S", time.gmtime())
# Set query parameters
params = {"marginCoin": margin_coin}
query_string = urlencode(params)
# Generate the signature
signature = generate_signature(api_key, secret_key, nonce, timestamp, query_string)
headers = {
"api-key": api_key,
"sign": signature,
"timestamp": timestamp,
"nonce": nonce,
"Content-Type": "application/json"
}
base_url = "https://fapi.bitunix.com"
request_path = "/api/v1/futures/account"
url = f"{base_url}{request_path}?{query_string}"
print(f"Query String: {query_string}")
print(f"Digest Input: {nonce + timestamp + api_key + query_string + ''}") # Debugging digest input
print(f"Signature: {signature}")
print(f"URL: {url}")
response = requests.get(url, headers=headers)
return response.json()
# Example usage
if __name__ == "__main__":
api_key = "your_api_key" # Replace with your actual API key
secret_key = "your_secret_key" # Replace with your actual secret key
margin_coin = "USDT" # Example margin coin
result = get_account_info(api_key, secret_key, margin_coin)
print("Response:", result)
</code></pre>
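<p>One thing I am not sure about is how <code>queryParams</code> is supposed to be encoded before hashing; my current reading of the docs (which may well be wrong) is that it should be the sorted keys and values concatenated with no <code>=</code> or <code>&amp;</code>, so I plan to try something like this untested sketch:</p>
<pre><code># Untested sketch, based on my (possibly wrong) reading of the signing docs:
# sort the parameters and concatenate key+value pairs with no separators.
def build_query_params_for_sign(params: dict) -> str:
    return "".join(f"{key}{value}" for key, value in sorted(params.items()))

# e.g. {"marginCoin": "USDT"} -> "marginCoinUSDT"
</code></pre>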
|
<python><digital-signature><sha256>
|
2025-03-03 03:41:20
| 0
| 431
|
S.A.Jay
|
79,480,120
| 9,983,652
|
why result of scaling each column always equal to zero?
|
<p>I am using <code>MinMaxScaler</code> to scale each column, but the scaled result for each column is always all zeros. For example, below, the values of <code>df_test_1</code> after scaling are all zero. Yet even with all values being zero, <code>inverse_transform</code> can still recover the original values. Why are the scaled results all zero?</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MinMaxScaler
df_dict={'A':[-1,-0.5,0,1],'B':[2,6,10,18]}
df_test=pd.DataFrame(df_dict)
print('original scale data')
print(df_test)
scaler_model_list=[]
df_test_1=df_test.copy()
for col in df_test.columns:
scaler = MinMaxScaler()
scaler_model_list.append(scaler)  # save the scaler for each column, since they differ, in case we want to use inverse_transform() later
df_test_1.loc[:,col]=scaler.fit_transform(df_test_1.loc[:,col].values.reshape(1,-1))[0]
print('after finishing scaling')
print(df_test_1)
print('after inverse transformation')
print(scaler_model_list[0].inverse_transform(df_test_1.iloc[:,0].values.reshape(1,-1)))
print(scaler_model_list[1].inverse_transform(df_test_1.iloc[:,1].values.reshape(1,-1)))
original scale data
A B
0 -1.0 2
1 -0.5 6
2 0.0 10
3 1.0 18
after finishing scaling
A B
0 0.0 0
1 0.0 0
2 0.0 0
3 0.0 0
after inverse transformation
[[-1. -0.5 0. 1. ]]
[[ 2. 6. 10. 18.]]
</code></pre>
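<p>For comparison, here is an untested sketch of the per-column scaling done with each column treated as n samples of one feature (i.e. shaped <code>(-1, 1)</code> instead of <code>(1, -1)</code>):</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Untested sketch: fit each column as (n_samples, 1) so min and max differ per column.
df_test = pd.DataFrame({'A': [-1, -0.5, 0, 1], 'B': [2, 6, 10, 18]})
df_test_1 = df_test.copy()
scaler_model_list = []
for col in df_test.columns:
    scaler = MinMaxScaler()
    scaler_model_list.append(scaler)
    df_test_1[col] = scaler.fit_transform(df_test_1[[col]]).ravel()
print(df_test_1)
</code></pre>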
|
<python><pandas>
|
2025-03-03 03:41:18
| 1
| 4,338
|
roudan
|
79,480,081
| 398,348
|
Getting Numpy error on VSCode (Windows11) even though numpy is installed
|
<p>reference:
<a href="https://code.visualstudio.com/docs/python/python-tutorial#_start-vs-code-in-a-project-workspace-folder" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/python-tutorial#_start-vs-code-in-a-project-workspace-folder</a></p>
<p>I created file hello.py</p>
<pre><code>import numpy as np
msg = "Roll a dice!"
print(msg)
print(np.random.randint(1,9))
</code></pre>
<p>In VS Code I go to the Play button at the top and click one of the options, for instance "Run Python File In...". They all give the same error.</p>
<pre><code> line 1, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>Following the instructions, I ran:</p>
<blockquote>
<p>py -m pip install numpy<br />
WARNING: Ignoring invalid distribution ~
(C:\Python312\Lib\site-packages) WARNING: Ignoring invalid
distribution ~ip (C:\Python312\Lib\site-packages) WARNING: Ignoring
invalid distribution ~ (C:\Python312\Lib\site-packages) WARNING:
Ignoring invalid distribution ~ip (C:\Python312\Lib\site-packages)
Requirement already satisfied: numpy in c:\python312\lib\site-packages
(2.1.3) WARNING: Ignoring invalid distribution ~
(C:\Python312\Lib\site-packages) WARNING: Ignoring invalid
distribution ~ip (C:\Python312\Lib\site-packages) WARNING: Ignoring
invalid distribution ~ (C:\Python312\Lib\site-packages) WARNING:
Ignoring invalid distribution ~ip (C:\Python312\Lib\site-packages)</p>
</blockquote>
<p>Running it again, I still get</p>
<pre><code>ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>In the terminal window, I run:</p>
<pre><code>python -c "import numpy; print(numpy.__version__)"
2.1.3
</code></pre>
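<p>In case it is useful, I could also add a quick diagnostic at the top of <code>hello.py</code> (a small sketch) to see which interpreter and <code>site-packages</code> VS Code is actually running, compared with the <code>C:\Python312</code> install that pip reported:</p>
<pre><code># Quick diagnostic sketch: print the interpreter and search path used at run time.
import sys

print(sys.executable)
print([p for p in sys.path if "site-packages" in p])
</code></pre>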
|
<python><numpy><visual-studio-code>
|
2025-03-03 03:02:33
| 1
| 3,795
|
likejudo
|
79,480,032
| 12,883,179
|
In numpy find a percentile in 2d with some condition
|
<p>I have this kind of array</p>
<pre><code>a = np.array([[-999, 9, 7, 3],
[2, 1, -999, 1],
[1, 5, 4, 6],
[0, 6, -999, 9],
[1, -999, -999, 6],
[8, 4, 4, 8]])
</code></pre>
<p>I want to get the 40th percentile of each row in that array, ignoring values equal to -999.</p>
<p>If I use <code>np.percentile(a, 40, axis=1)</code> I will get <code>array([ 3.8, 1. , 4.2, 1.2, -799. , 4.8])</code>, which still includes the -999 values.</p>
<p>The output I want would be like this:</p>
<pre><code>[
6.2, # 3 or 7 also ok
1,
4.2, # 4 or 5 also ok
4.8, # 0 or 6 also ok
1,
4
]
</code></pre>
<p>Thank you</p>
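<p>The closest I have come so far is a plain Python loop over the rows (a sketch that works for this small example, though it is not vectorised):</p>
<pre><code>import numpy as np

# Sketch: mask out the -999 values per row, then take the 40th percentile of the rest.
a = np.array([[-999, 9, 7, 3],
              [2, 1, -999, 1],
              [1, 5, 4, 6],
              [0, 6, -999, 9],
              [1, -999, -999, 6],
              [8, 4, 4, 8]])

result = [np.percentile(row[row != -999], 40) for row in a]
print(result)
</code></pre>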
|
<python><numpy>
|
2025-03-03 02:06:50
| 1
| 492
|
d_frEak
|
79,479,888
| 651,174
|
How to initialize dict cursor in mysql.connector
|
<p>I used to be able to do something like:</p>
<pre><code>self.conn = MySQLdb.connect(user=...)
self.dict_cursor = self.conn.cursor(MySQLdb.cursors.DictCursor)
</code></pre>
<p>However, it's not so simple with the newer mysql-connector-python package. How would I use a similar pattern to initialize a dict cursor?</p>
<pre><code>self.conn = mysql.connector.connect(user=...)
self.dict_cursor = self.conn ???
</code></pre>
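<p>The closest thing I have found so far (untested on my side) is passing a keyword argument when creating the cursor, along these lines:</p>
<pre><code>import mysql.connector

# Sketch: mysql-connector-python exposes a dictionary cursor via a keyword argument.
conn = mysql.connector.connect(user="...", password="...", host="...", database="...")
dict_cursor = conn.cursor(dictionary=True)
</code></pre>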
|
<python><mysql-connector>
|
2025-03-02 23:03:47
| 1
| 112,064
|
David542
|
79,479,705
| 8,774,513
|
How to make a custom diagram with python for data analysis
|
<p>I am looking for an idea of how to change a diagram so that it matches this picture:</p>
<p><a href="https://i.sstatic.net/cwVZTYig.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwVZTYig.png" alt="enter image description here" /></a></p>
<p>Maybe it would make sense to put the output in a table where I can click a row and show it in the diagram. But I have no idea which library I can use.</p>
<p>Here is the data I have already converted:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Read the CSV file
df = pd.read_csv('Data/DrivingRange.csv', sep=',')
df = df.drop(index=0)
# Carry-Distanz
df['Carry-Distanz'] = pd.to_numeric(df['Carry-Distanz'], errors='coerce')
# Club speed in km/h
df['Schl.gsch.'] = pd.to_numeric(df['Schl.gsch.'], errors='coerce')
# Club Path
df['Schwungbahn'] = pd.to_numeric(df['Schwungbahn'], errors='coerce')
# Face Angle
df['Schlagfläche'] = pd.to_numeric(df['Schlagfläche'], errors='coerce')
# Face to Path
df['Schlagflächenstellung'] = pd.to_numeric(df['Schlagflächenstellung'], errors='coerce')
# Club speed in mph
df['Schl_gsch_mph'] = df['Schl.gsch.'] * 0.621371
# Sequence of clubs
sequence = ['Lob-Wedge', 'Sandwedge', 'Eisen 9', 'Eisen 8', 'Eisen 7', 'Eisen 6', 'Eisen 5', 'Hybrid 4', 'Driver']
df['Schlägerart'] = pd.Categorical(df['Schlägerart'], categories=sequence, ordered=True)
# Sort DataFrame by category
df_sorted = df.sort_values(by='Schlägerart')
df_sorted_filtered = df_sorted[['Schlägerart','Schl_gsch_mph', 'Carry-Distanz', 'Schwungbahn', 'Schlagfläche', 'Schlagflächenstellung']]
df_sorted_filtered.rename(columns={'Schlägerart': 'Club', 'Schl_gsch_mph': 'Club Speed', 'Carry-Distanz' : 'Carry Distance', 'Schwungbahn': 'Club Path', 'Schlagfläche' : 'Face Angle', 'Schlagflächenstellung' : 'Face to Path'}, inplace=True)
print(df_sorted_filtered)
</code></pre>
<p>Output:</p>
<pre><code> Club Club Speed Carry Distance Club Path Face Angle Face to Path
30 Lob-Wedge 67.779147 68.842956 -3.89 4.12 8.01
31 Lob-Wedge 67.712039 58.803586 5.71 20.79 15.08
1 Sandwedge 62.611826 76.651350 -1.75 -0.25 1.50
2 Sandwedge 62.007853 60.411199 2.02 -2.80 -4.82
3 Sandwedge 67.197544 93.361768 6.17 -5.94 -12.11
</code></pre>
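<p>As a starting point I have sketched a very rough plot of one row with matplotlib (just an idea, not the final design; the angles are drawn as simple lines from the origin):</p>
<pre class="lang-py prettyprint-override"><code>import math
import matplotlib.pyplot as plt

# Rough sketch: draw Club Path and Face Angle of one selected row as lines from the origin.
club_path_deg, face_angle_deg = -3.89, 4.12   # example values from the first Lob-Wedge row

fig, ax = plt.subplots()
ax.axvline(0, color="grey", linewidth=0.8, label="Target line")
for angle, label in [(club_path_deg, "Club Path"), (face_angle_deg, "Face Angle")]:
    ax.plot([0, math.sin(math.radians(angle))], [0, math.cos(math.radians(angle))], label=label)
ax.set_aspect("equal")
ax.legend()
plt.show()
</code></pre>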
|
<python><pandas><data-analysis><mathlib>
|
2025-03-02 20:27:34
| 3
| 419
|
0x53olution
|
79,479,426
| 11,277,108
|
Create a function to use elimination logic to split a list of strings into substrings
|
<p>I have a list of strings where each string represents the names of two tennis players that played in a match. The list of strings collectively is all known matches in a tournament. An example of a string would be:</p>
<pre><code>"halep-simona-williams-serena"
</code></pre>
<p>I'd like to split each string into a tuple of two strings:</p>
<pre><code>("halep-simona", "williams-serena")
</code></pre>
<p>Given a single string this clearly isn't possible, but given a list of strings representing a tournament of matches where players have played each other, I'm hopeful there's a way of using elimination logic to split the names.</p>
<p>I've had a go at this with some success:</p>
<pre><code>def split_into_names(urls: list[str]) -> list[tuple[str | None, str | None]]:
matchups = list()
for idx, url in enumerate(urls):
other_urls = urls[:idx] + urls[idx + 1 :]
full_name_combo = get_name_combo(
url=url,
other_urls=other_urls,
)
matchups.append(full_name_combo)
return matchups
def get_name_combo(url: str, other_urls: list[str]) -> tuple[str | None, str | None]:
full_name_combos = generate_potential_name_combos(url)
for full_name_combo in full_name_combos:
if check_name_combo(
full_name_combo=full_name_combo,
url=url,
other_urls=other_urls,
):
return full_name_combo
else:
return (None, None)
def generate_potential_name_combos(url: str) -> list[tuple[str, str]]:
parts = url.split("-")
full_names = list()
for i in range(1, len(parts)):
p1 = "-".join(parts[:i])
p2 = "-".join(parts[i:])
full_names.append((p1, p2))
return full_names
def check_name_combo(
full_name_combo: tuple[str, str],
url: str,
other_urls: list[str],
) -> bool:
for full_name in full_name_combo:
for idx, check_url in enumerate(other_urls):
# can't eliminate url name options if both players have
# played each other previously in the same order
if check_url == url:
continue
check_url_part = check_url.partition(full_name)
# can't check url if name not present
if not check_url_part[1]:
continue
# can't be correct if full name splits the url into three
elif check_url_part[0] and check_url_part[2]:
continue
# this is the correct split so need to exit here with the answer
elif check_url_part[0] and not check_url_part[2]:
return True
elif not check_url_part[0] and check_url_part[2]:
clean_potential_name = check_url_part[2].strip("-")
remaining_urls = other_urls[:idx] + other_urls[idx + 1 :]
for remaining_url in remaining_urls:
if remaining_url == check_url:
continue
elif clean_potential_name in remaining_url:
return True
else:
pass
return False
</code></pre>
<p>If I run the above I'm able to match a list of real life sample strings. However, if I start to build some more complex edge cases then I run into trouble. Here is the code to test the real life and edge case lists:</p>
<pre><code>def main():
names = [
"halep-simona-williams-serena",
"radwanska-agnieszka-halep-simona",
"williams-serena-wozniacki-caroline",
"halep-simona-ivanovic-ana",
"kvitova-petra-wozniacki-caroline",
"sharapova-maria-radwanska-agnieszka",
"williams-serena-bouchard-eugenie",
"sharapova-maria-kvitova-petra",
"radwanska-agnieszka-wozniacki-caroline",
"bouchard-eugenie-ivanovic-ana",
"williams-serena-halep-simona",
"kvitova-petra-radwanska-agnieszka",
"sharapova-maria-wozniacki-caroline",
"halep-simona-bouchard-eugenie",
"williams-serena-ivanovic-ana",
]
result = split_into_names(names)
# success :-)
assert result == [
("halep-simona", "williams-serena"),
("radwanska-agnieszka", "halep-simona"),
("williams-serena", "wozniacki-caroline"),
("halep-simona", "ivanovic-ana"),
("kvitova-petra", "wozniacki-caroline"),
("sharapova-maria", "radwanska-agnieszka"),
("williams-serena", "bouchard-eugenie"),
("sharapova-maria", "kvitova-petra"),
("radwanska-agnieszka", "wozniacki-caroline"),
("bouchard-eugenie", "ivanovic-ana"),
("williams-serena", "halep-simona"),
("kvitova-petra", "radwanska-agnieszka"),
("sharapova-maria", "wozniacki-caroline"),
("halep-simona", "bouchard-eugenie"),
("williams-serena", "ivanovic-ana"),
]
names = [
"test-test-test-test-test", # I don't think this can ever be split...
"test-test-test-test-test-phil",
"test-test-test-test-matt",
"test-test-test-test-test-james",
"test-test-kris-test-test",
"test-test-bob-test-test-phil",
"test-test-rich-test-test-matt",
]
result = split_into_names(names)
# fail :-(
assert result == [
("test-test", "test-test-test"),
("test-test-test", "test-test-phil"),
("test-test", "test-test-matt"),
("test-test-test", "test-test-james"),
("test-test-kris", "test-test"),
("test-test-bob", "test-test-phil"),
("test-test-rich", "test-test-matt"),
]
if __name__ == "__main__":
main()
</code></pre>
<p><strong>Important: I don't need a less complex solution for the real life cases. I need a solution for the edge cases.</strong></p>
<p>Is what I'm trying to achieve feasible given the high number of permutations of names? If yes, then how would I adapt my code so that the edge cases pass?</p>
<p>If it helps then I'm ok if the answer involves returning <code>(None, None)</code> for combinations that could never be matched (I have identified one edge case already where this is almost certainly needed). However, I can't have any returned tuples that are incorrect.</p>
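<p>For small tournaments I can at least state the elimination idea as a brute-force objective, which I sketch below (untested and exponential, so it is only meant to describe the goal: pick the combination of splits that reuses names as much as possible, i.e. minimises the number of distinct players):</p>
<pre><code>from itertools import product

# Untested sketch: enumerate every combination of candidate splits and keep the
# assignment that uses the fewest distinct player names. Exponential in the number
# of matches, so this is a statement of the objective, not a practical solution.
def candidate_splits(url: str) -> list[tuple[str, str]]:
    parts = url.split("-")
    return [("-".join(parts[:i]), "-".join(parts[i:])) for i in range(1, len(parts))]

def split_by_min_players(urls: list[str]) -> list[tuple[str, str]]:
    best, best_count = None, float("inf")
    for assignment in product(*(candidate_splits(u) for u in urls)):
        players = {name for pair in assignment for name in pair}
        if len(players) < best_count:
            best, best_count = list(assignment), len(players)
    return best
</code></pre>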
|
<python><text-parsing>
|
2025-03-02 16:58:08
| 1
| 1,121
|
Jossy
|
79,479,405
| 12,663,117
|
Why is my virtual environment not working?
|
<p>I have a Python virtual environment (created with venv) kept in a directory in my project. I created it and sourced the activate script. I am sure that it worked, because my terminal prompt starts with <code>(venv)</code>. Running <code>which pip</code> points to the pip installed in the venv; however, when I run <code>which python3</code>, I get <code>/usr/bin/python3</code> as the result. Also, it is still using my system site-packages directory for module sourcing. Why is this so?</p>
<p>OS: Ubuntu 20.04</p>
<p>When I try to install modules, I get the following error:</p>
<pre><code> bash:
/home/ntolb/CODING_PROJECTS/python_workspaces/webscraping-projects_workspaces/scr-proj_ws1/venv/bin/pip: /home/ntolb/CODING_PROJECTS/python_workspaces/webscraping-projects_workspaces/scr-proj_ws1/venv/bin/python:
bad interpreter: No such file or directory
</code></pre>
<p>even though I can clearly see it in the VS Code file explorer.</p>
|
<python><linux><python-venv><venv>
|
2025-03-02 16:44:25
| 2
| 869
|
Nate T
|
79,479,392
| 2,302,485
|
What frame rate to expect using Python sounddevice.InputStream on macbook?
|
<p>I record audio on a macbook using this code:</p>
<pre class="lang-py prettyprint-override"><code> def callback(indata, frame_count, time_info, status):
if status:
print('Error:', status)
frames.append(indata.copy())
...
with sd.InputStream(callback=callback, channels=1, samplerate=1600) as stream:
while True:
sd.sleep(100)
</code></pre>
<p>After 5 seconds have passed, I get only ~5000 frames in total and about 100 frames per sleep call. I don't understand why the frame count is so low; shouldn't it be 16000 frames per second?</p>
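<p>For my next test I wrote this small counting sketch (untested as shown), mainly to compare the total number of samples against the <code>samplerate</code> I actually pass in, since above I request <code>1600</code> rather than <code>16000</code>:</p>
<pre class="lang-py prettyprint-override"><code>import sounddevice as sd

# Untested sketch: record for ~5 seconds and count the samples delivered to the callback.
frames = []

def callback(indata, frame_count, time_info, status):
    if status:
        print('Error:', status)
    frames.append(indata.copy())

with sd.InputStream(callback=callback, channels=1, samplerate=16000):
    sd.sleep(5000)

total_samples = sum(len(block) for block in frames)
print(total_samples, "samples recorded, expected roughly 5 * samplerate")
</code></pre>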
|
<python><macos><python-sounddevice>
|
2025-03-02 16:31:32
| 0
| 402
|
egor10_4
|
79,479,376
| 405,017
|
How to output an empty attribute in FastHTML?
|
<p>In <a href="https://fastht.ml/" rel="nofollow noreferrer">FastHTML</a> I want to output <code><option value="">(select)</option></code> for a <a href="https://docs.fastht.ml/explains/explaining_xt_components.html" rel="nofollow noreferrer">FastTag component</a>. However, this codeβ¦</p>
<pre class="lang-py prettyprint-override"><code>from fasthtml.core import to_xml
from fasthtml.components import Option
opt = Option("(select)", value="")
print(to_xml(opt))
</code></pre>
<p>β¦results in <code><option>(select)</option></code>. This causes the browser to use the value <code>(select)</code> when this option is chosen. I don't want to use <code>0</code> or <code>-</code> or something else, I want a truly empty value. How do I get it?</p>
|
<python><html><fasthtml>
|
2025-03-02 16:15:40
| 1
| 304,256
|
Phrogz
|
79,478,980
| 7,729,531
|
Use Flask-SocketIO to send messages continuously
|
<p>I would like to build an application to monitor files on my local computer through the browser. I have written a minimalistic example to illustrate this.</p>
<p>The frontend should be able to request messages from the backend on events. In my example below, I am calling these "standard messages". They are triggered in the frontend by a button click and a <code>fetch</code> call, and handled in the backend by a <code>route</code>.</p>
<p>The backend should also send messages every 5 seconds. I am calling these "continuous messages" and I am trying to send them using Flask-SocketIO. They are sent by the backend with socketio <code>start_background_task</code> and <code>emit</code>, and are received by the frontend with <code>socket.on()</code>.</p>
<p>The "standard messages" part works fine if I comment the socketio part (and replace <code>socketio.run</code> with <code>app.run</code>). But when I uncomment the socketio part to have "continuous messages", the communication fails. The backend doesn't receive the "standard messages" calls, and the frontend doesn't receive the "continuous messages".</p>
<p>It looks like I am not writing the socketio part correctly. What do I need to add so that both messages type work correctly?</p>
<p>Backend</p>
<pre class="lang-py prettyprint-override"><code>import time
import logging
from flask import Flask
from flask_cors import CORS
from flask_socketio import SocketIO
app = Flask(__name__)
CORS(app)
logging.basicConfig(level=logging.DEBUG)
# Continuous messages ##########################################
socketio = SocketIO(app, cors_allowed_origins="*", logger=True, engineio_logger=True)
@socketio.on('connect')
def handle_connect():
print('Client connected')
socketio.start_background_task(send_continuous_messages)
def send_continuous_messages():
while True:
print("Backend sends a continuous message")
socketio.emit("continuous_message", {"data": "New continuous message"})
time.sleep(5)
################################################################
@app.route("/standard-message", methods=["GET"])
def generate_standard_message():
print("Backend sends a standard message")
return {"data": "New standard message"}
if __name__ == "__main__":
print("Starting app")
socketio.run(app, debug=True)
</code></pre>
<p>Frontend</p>
<pre class="lang-js prettyprint-override"><code>const myButton = document.getElementById('my-button');
const BASE_URL = "http://localhost:5000";
localStorage.debug = '*';
// Continuous messages #########################################
const socket = io(BASE_URL);
socket.on('connect', () => {
console.log('WebSocket connected');
});
socket.on('disconnect', () => {
console.log('WebSocket disconnected');
});
socket.on('continuous_message', (data) => {
console.log('Frontend received a continuous message:', data);
});
// #############################################################
async function receiveStandardMessage() {
console.log('Frontend is requesting a standard message');
const response = await fetch(`${BASE_URL}/standard-message`);
const data = await response.json();
console.log('Frontend received a standard message:', data);
}
myButton.addEventListener('click', receiveStandardMessage);
</code></pre>
<p>index.html</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My app</title>
<script src="https://cdn.socket.io/4.0.0/socket.io.min.js"></script>
</head>
<body>
<div id="app">
<button id="my-button">Send standard message</button>
</div>
<script src="app.js"></script>
</body>
</html>
</code></pre>
<p>Server logs</p>
<pre><code>Server initialized for eventlet.
INFO:engineio.server:Server initialized for eventlet.
Starting app
INFO:werkzeug: * Restarting with stat
Server initialized for eventlet.
INFO:engineio.server:Server initialized for eventlet.
Starting app
WARNING:werkzeug: * Debugger is active!
INFO:werkzeug: * Debugger PIN: 141-243-050
(6318) wsgi starting up on http://127.0.0.1:5000
(6318) accepted ('127.0.0.1', 52009)
GotWW8oC3BEz4J3jAAAA: Sending packet OPEN data {'sid': 'GotWW8oC3BEz4J3jAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000, 'maxPayload': 1000000}
INFO:engineio.server:GotWW8oC3BEz4J3jAAAA: Sending packet OPEN data {'sid': 'GotWW8oC3BEz4J3jAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000, 'maxPayload': 1000000}
127.0.0.1 - - [06/Mar/2025 17:40:49] "GET /socket.io/?EIO=4&transport=polling&t=PLiLCOB HTTP/1.1" 200 335 0.001486
GotWW8oC3BEz4J3jAAAA: Received packet MESSAGE data 0
INFO:engineio.server:GotWW8oC3BEz4J3jAAAA: Received packet MESSAGE data 0
Client connected
GotWW8oC3BEz4J3jAAAA: Sending packet MESSAGE data 0{"sid":"w_DlD83bH5OBQRbiAAAB"}
INFO:engineio.server:GotWW8oC3BEz4J3jAAAA: Sending packet MESSAGE data 0{"sid":"w_DlD83bH5OBQRbiAAAB"}
127.0.0.1 - - [06/Mar/2025 17:40:50] "POST /socket.io/?EIO=4&transport=polling&t=PLiLCOy&sid=GotWW8oC3BEz4J3jAAAA HTTP/1.1" 200 202 0.003777
Backend sends a continuous message
emitting event "continuous_message" to all [/]
INFO:socketio.server:emitting event "continuous_message" to all [/]
GotWW8oC3BEz4J3jAAAA: Sending packet MESSAGE data 2["continuous_message",{"data":"New continuous message"}]
INFO:engineio.server:GotWW8oC3BEz4J3jAAAA: Sending packet MESSAGE data 2["continuous_message",{"data":"New continuous message"}]
Backend sends a continuous message
emitting event "continuous_message" to all [/]
INFO:socketio.server:emitting event "continuous_message" to all [/]
GotWW8oC3BEz4J3jAAAA: Sending packet MESSAGE data 2["continuous_message",{"data":"New continuous message"}]
INFO:engineio.server:GotWW8oC3BEz4J3jAAAA: Sending packet MESSAGE data 2["continuous_message",{"data":"New continuous message"}]
</code></pre>
<p>Client logs</p>
<pre><code>Frontend is requesting a standard message
Firefox can't establish a connection to the server at ws://localhost:5000/socket.io/?EIO=4&transport=websocket&sid=GotWW8oC3BEz4J3jAAAA.
WebSocket disconnected
</code></pre>
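<p>One thing I am planning to try (untested sketch below) is replacing <code>time.sleep</code> inside the background task with the Socket.IO server's own <code>sleep</code>, in case the blocking sleep is starving the eventlet worker:</p>
<pre class="lang-py prettyprint-override"><code># Untested sketch: same background task as above, but with a cooperative sleep.
def send_continuous_messages():
    while True:
        print("Backend sends a continuous message")
        socketio.emit("continuous_message", {"data": "New continuous message"})
        socketio.sleep(5)
</code></pre>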
|
<javascript><python><flask><flask-socketio>
|
2025-03-02 11:11:34
| 2
| 440
|
tvoirand
|
79,478,834
| 13,742,665
|
ModuleNotFoundError: No module named 'bs4' but in pip list is already installed
|
<p>Here I try to install BeautifulSoup using pip, but the module is not found.</p>
<p>I tried to verify the Python path and pip on my computer; here is the result:</p>
<pre class="lang-bash prettyprint-override"><code>/Users/macbookairm2/Code/Dev/Fiverr/Scraping/project/AAMUProject/RosterBio/venv/bin/python
venv ➜ RosterBio git:(dockerized) ✗ which pip
/Users/macbookairm2/Code/Dev/Fiverr/Scraping/project/AAMUProject/RosterBio/venv/bin/pip
venv ➜ RosterBio git:(dockerized) ✗
</code></pre>
<p>and here is my pip list</p>
<pre class="lang-bash prettyprint-override"><code> pip list
Package Version
------------------------- -----------
alembic 1.14.1
amqp 5.3.1
annotated-types 0.7.0
anyio 4.4.0
attrs 24.2.0
automaton 3.2.0
autopage 0.5.2
azure-functions 1.21.3
bcrypt 4.2.1
beautifulsoup4 4.13.3
cachetools 5.5.2
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.3.2
click 8.1.7
cliff 4.8.0
cmd2 2.5.11
cryptography 44.0.1
debtcollector 3.0.0
decorator 5.2.0
dnspython 2.7.0
dogpile.cache 1.3.4
eventlet 0.39.0
fastapi 0.115.8
fasteners 0.19
futurist 3.0.0
gnureadline 8.2.13
greenlet 3.1.1
h11 0.14.0
httpcore 1.0.5
httpx 0.27.2
idna 3.8
importlib_metadata 8.6.1
iso8601 2.1.0
Jinja2 3.1.5
jmespath 1.0.1
jsonpatch 1.33
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
keystoneauth1 5.10.0
keystonemiddleware 10.9.0
kombu 5.4.2
legacy-cgi 2.6.2
logutils 0.3.5
magnum 19.0.0
Mako 1.3.9
mangum 0.19.0
markdown-it-py 3.0.0
MarkupSafe 3.0.2
mdurl 0.1.2
msgpack 1.1.0
netaddr 1.3.0
networkx 3.4.2
numpy 2.2.3
openstacksdk 4.3.0
os-client-config 2.1.0
os-service-types 1.7.0
osc-lib 3.2.0
oslo.cache 3.10.1
oslo.concurrency 7.1.0
oslo.config 9.7.1
oslo.context 5.7.1
oslo.db 17.2.0
oslo.i18n 6.5.1
oslo.log 7.1.0
oslo.messaging 16.1.0
oslo.metrics 0.11.0
oslo.middleware 6.3.1
oslo.policy 4.5.1
oslo.reports 3.5.1
oslo.serialization 5.7.0
oslo.service 4.1.0
oslo.upgradecheck 2.5.0
oslo.utils 8.2.0
oslo.versionedobjects 3.6.0
outcome 1.3.0.post0
packaging 24.1
pandas 2.2.3
Paste 3.10.1
PasteDeploy 3.1.0
pbr 6.1.1
pecan 1.5.1
pip 25.0.1
platformdirs 4.3.6
prettytable 3.14.0
prometheus_client 0.21.1
psutil 7.0.0
pycadf 4.0.1
pycparser 2.22
pydantic 2.10.6
pydantic_core 2.27.2
pydot 3.0.4
Pygments 2.18.0
PyJWT 2.10.1
pyOpenSSL 25.0.0
pyparsing 3.2.1
pyperclip 1.9.0
PySocks 1.7.1
python-barbicanclient 7.0.0
python-cinderclient 9.6.0
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-glanceclient 4.7.0
python-heatclient 4.1.0
python-keystoneclient 5.5.0
python-neutronclient 11.4.0
python-novaclient 18.8.0
python-octaviaclient 3.9.0
python-openstackclient 7.3.1
python-swiftclient 4.6.0
pytz 2024.2
PyYAML 6.0.2
referencing 0.36.2
repoze.lru 0.7
requests 2.32.3
requestsexceptions 1.4.0
rfc3986 2.0.0
rich 13.8.0
Routes 2.5.1
rpds-py 0.23.1
selenium 4.25.0
setuptools 75.8.0
shellingham 1.5.4
simplegeneric 0.8.1
six 1.16.0
sniffio 1.3.1
sortedcontainers 2.4.0
soupsieve 2.6
SQLAlchemy 2.0.38
starlette 0.45.3
statsd 4.0.1
stevedore 5.4.1
taskflow 5.11.0
tenacity 9.0.0
testresources 2.0.1
testscenarios 0.5.0
testtools 2.7.2
trio 0.26.2
trio-websocket 0.11.1
typer 0.12.5
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
uvicorn 0.34.0
vine 5.1.0
warlock 2.0.1
wcwidth 0.2.13
webdriver-manager 4.0.2
WebOb 1.8.9
websocket-client 1.8.0
Werkzeug 3.1.3
wrapt 1.17.2
WSME 0.12.1
wsproto 1.2.0
yappi 1.6.10
zipp 3.21.0
venv ➜ RosterBio git:(dockerized)
</code></pre>
<p><code>beautifulsoup4</code> is already installed on my computer, but the module is still not found:</p>
<pre class="lang-bash prettyprint-override"><code>uvicorn manage:app --host 0.0.0.0 --port 8000 --reload
INFO: Will watch for changes in these directories: ['/Users/macbookairm2/Code/Dev/Fiverr/Scraping/nivyschedulescraper/AAMUProject/RosterBio/src']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [62922] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started
target(sockets=sockets)
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/config.py", line 434, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/importer.py", line 22, in import_from_string
raise exc from None
File "/opt/homebrew/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/Users/macbookairm2/Code/Dev/Fiverr/Scraping/nivyschedulescraper/AAMUProject/RosterBio/src/manage.py", line 2, in <module>
from core import create_app
File "/Users/macbookairm2/Code/Dev/Fiverr/Scraping/nivyschedulescraper/AAMUProject/RosterBio/src/core/__init__.py", line 2, in <module>
from api.v1.routers.rosterbio import router
File "/Users/macbookairm2/Code/Dev/Fiverr/Scraping/nivyschedulescraper/AAMUProject/RosterBio/src/api/v1/routers/rosterbio.py", line 4, in <module>
from services.rosterbio_service import RosterBioService
File "/Users/macbookairm2/Code/Dev/Fiverr/Scraping/nivyschedulescraper/AAMUProject/RosterBio/src/services/rosterbio_service.py", line 5, in <module>
from modules.spider.rosterbio import AamuSportRosterSpider
File "/Users/macbookairm2/Code/Dev/Fiverr/Scraping/nivyschedulescraper/AAMUProject/RosterBio/src/modules/spider/rosterbio.py", line 3, in <module>
from bs4 import BeautifulSoup
ModuleNotFoundError: No module named 'bs4'
</code></pre>
<p>Can anyone help? For BeautifulSoup I refer to this <a href="https://pypi.org/project/beautifulsoup4/" rel="nofollow noreferrer">documentation</a>. Thanks!</p>
|
<python><python-3.x><beautifulsoup><pip><modulenotfounderror>
|
2025-03-02 09:11:36
| 1
| 833
|
perymerdeka
|
79,478,810
| 2,604,247
|
How Come Memory Footprint of a Tensorflow Model is Increasing After tflite Conversion?
|
<p>I trained a simple TensorFlow model containing some LSTM and Dense feed-forward layers. After training, I am quantising and converting the model to the <code>tf.lite</code> format for edge deployment. Here is the relevant part of the code.</p>
<pre class="lang-py prettyprint-override"><code>...
model_size:int = sum(weight.numpy().nbytes for weight in model.trainable_weights)
print(f'Model size: {model_size / (1024):.2f} KB')
tf_lite_converter = tf.lite.TFLiteConverter.from_keras_model(model=model)
tf_lite_converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tf_lite_converter.target_spec.supported_types = [tf.float16]
tflite_model:bytes = tf_lite_converter.convert()
print(f'Size of the tf lite model is {len(tflite_model)/1024} kB')
</code></pre>
<p>As the <code>tf_lite</code> is just a byte array, I am just dividing its length by 1024 to get the size in memory.</p>
<p>Original model size: 33.4 kB
After compression and quantisation: 55 kB</p>
<p>So how is that possible or even beneficial if the tf_lite converter increases the size in memory? Or, am I measuring the sizes (in memory) wrongly? Any clue how to get a fair comparison then?</p>
<h4>Note</h4>
<p>I already compared the sizes on disk (as I persist the models) and yes, tflite shows a clear compression benefit. But is that where the benefit is to be found?</p>
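<p>For a fairer comparison I am considering measuring both artefacts the same way, e.g. all weights (not only the trainable ones) against the serialized flatbuffer, continuing from the snippet above (untested sketch):</p>
<pre class="lang-py prettyprint-override"><code># Untested sketch, continuing from the snippet above: compare total weight bytes
# (trainable and non-trainable) with the size of the serialized TFLite flatbuffer.
total_param_bytes = sum(w.numpy().nbytes for w in model.weights)
print(f'All weights: {total_param_bytes / 1024:.2f} KB')
print(f'TFLite flatbuffer: {len(tflite_model) / 1024:.2f} KB')
</code></pre>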
|
<python><tensorflow><machine-learning><tflite>
|
2025-03-02 08:44:09
| 0
| 1,720
|
Della
|
79,478,793
| 11,748,924
|
Change placeholder based on latest chat input streamlit
|
<p>I have this code that is supposed to change the placeholder dynamically based on the latest user input:</p>
<pre><code>def main():
last_prompt = st.session_state.get(
"last_prompt", "A cat detective interrogating a suspicious goldfish in a bubble-filled aquarium courtroom.")
prompt_input = st.chat_input(last_prompt)
if prompt_input:
st.session_state.last_prompt = prompt_input
</code></pre>
<p>But it's not working as I expected; the placeholder still shows the default placeholder instead of the latest chat input.
<a href="https://i.sstatic.net/e9R8wuvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e9R8wuvI.png" alt="enter image description here" /></a></p>
|
<python><streamlit>
|
2025-03-02 08:23:17
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
79,478,562
| 6,312,979
|
Polars import from list of list into Colums
|
<p>I am trying to import a list of lists into Polars and get the data in separate columns.</p>
<p>Example.</p>
<pre><code>numbers = [['304-144635', 0], ['123-091523', 7], ['305-144931', 12], ['623-101523', 16], ['305-145001', 22], ['623-111523', 27], ['9998-2603', 29]]
</code></pre>
<p>It does not work like Pandas, and this does not work:</p>
<pre><code>df = pl.DataFrame(numbers)
</code></pre>
<p>I'm new to Polars and excited to start using it. Thanks.</p>
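<p>The direction I am currently experimenting with (untested sketch) is passing the rows with <code>orient="row"</code> and a schema:</p>
<pre><code>import polars as pl

# Sketch: treat each inner list as one row and name the two columns explicitly.
numbers = [['304-144635', 0], ['123-091523', 7], ['305-144931', 12], ['623-101523', 16],
           ['305-145001', 22], ['623-111523', 27], ['9998-2603', 29]]
df = pl.DataFrame(numbers, schema=["id", "value"], orient="row")
print(df)
</code></pre>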
|
<python><dataframe><python-polars>
|
2025-03-02 02:47:34
| 2
| 2,181
|
diogenes
|
79,478,541
| 18,769,241
|
Longest palindrome substring algorithm not working properly for even palindromes
|
<p>The algorithm I wrote in Python seems to work fine for extracting longest palindromes that have an even number of characters, but not for their odd counterparts:</p>
<pre><code>some_input_2 = "abdbaabeeba"
size_of_some_input_2 = len(some_input_2)
max = 0
final_max = 0
idx_center_letter = 0
final_idx_center_letter = 0
for j in range(2):
for idx, letter in enumerate(some_input_2):
i = 0
while i <= int(size_of_some_input_2 / 2) + 1 and not idx - i < 0 and not idx + i+j >= size_of_some_input_2:
upper_idx = idx + i +j
bottom_idx = idx - i
if some_input_2[bottom_idx] == some_input_2[upper_idx]:
i = i + 1
else:
break
if max < i :
max = i #-j
idx_center_letter = idx + 1
if final_max <= max:
final_max = max
final_idx_center_letter = idx_center_letter
print(max)
print(idx_center_letter)
print(some_input_2[final_idx_center_letter - final_max:final_idx_center_letter + final_max])
</code></pre>
<p>Here the output is <code>abdbaa</code>, which is wrong. If the input were <code>some_input_2 = "abbaabeeba"</code> then the output would be <code>abeeba</code>, which is correct. Any idea what is wrong here?</p>
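<p>For comparison, here is a small reference implementation of the usual expand-around-centre approach (a sketch for cross-checking expected outputs, handling odd and even centres explicitly):</p>
<pre><code># Sketch of the standard expand-around-centre approach, with odd and even centres
# handled explicitly, for cross-checking the loop above.
def longest_palindrome(s: str) -> str:
    best = ""
    for centre in range(len(s)):
        for left, right in ((centre, centre), (centre, centre + 1)):
            while left >= 0 and right < len(s) and s[left] == s[right]:
                left -= 1
                right += 1
            candidate = s[left + 1:right]
            if len(candidate) > len(best):
                best = candidate
    return best

print(longest_palindrome("abdbaabeeba"))  # abeeba
</code></pre>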
|
<python><algorithm>
|
2025-03-02 02:14:02
| 3
| 571
|
Sam
|
79,478,025
| 5,320,122
|
ActiveMQ Artemis with STOMP: Same message from a queue is consumed by two consumers
|
<p>ActiveMQ Artemis version 2.27</p>
<p>Consumers are spun up as pods automatically by KEDA when it sees unread messages in the ActiveMQ broker. Each consumer processes one message, sends an ACK, and disconnects. Quite frequently, I am seeing the same message read by 2 or more consumers even though an ACK is sent. What settings should I enable on the broker side so that every consumer gets a different message?</p>
<p>ActiveMQ is deployed as a pod in Kubernetes. The <code>connection_id</code> is always the same. Here is my consumer side code to connect to the broker:</p>
<pre><code>def connect_subscribe(conn, connection_id, user_id, user_pw, queue_name):
conn.start()
conn.connect(user_id, user_pw, wait=True)
conn.subscribe(destination=queue_name, id=connection_id, ack='client-individual',
headers = {'subscription-type': 'ANYCAST','consumer-window-size': 0})
if conn.is_connected():
logger.info('Connected to broker.')
</code></pre>
<p>This method is called after STOMP listener is initialized. And, this is how a message is consumed. Sample code:</p>
<pre><code> def process_message(self, headers, body):
res_dict = json.loads(body)
job_id = res_dict['job_id']
vm_uuid = res_dict['vm_uuid']
logger.info('Processing the request with message body: {}'.format(body))
logger.info('To Be implemented!')
sleep_in_seconds = os.getenv('SLEEP_IN_SECONDS', default=1)
time.sleep(int(sleep_in_seconds))
self.send_ack_and_disconnect(headers['ack'])
</code></pre>
<p>Here is the broker.xml:</p>
<pre><code><?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>ROAD</name>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 35.71 writes per millisecond
on the current journal configuration.
That translates as a sync write every 28000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>28000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>80000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large mesasge control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;sslEnabled=true;keyStorePath=/certs/broker.ks;keyStorePassword=ENC(xxxx);trustStorePath=/certs2/broker.ts;trustStorePassword=ENC(xxxx);needClientAuth=true;enabledProtocols=TLSv1.2</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
<address name="model_supervised">
<anycast>
<queue name="model_supervised" />
</anycast>
</address>
</addresses>
</core>
</configuration>
</code></pre>
<p>Application Logs:</p>
<p>Consumer 1:</p>
<pre><code>2025-02-07 12:16:52,277 DEBUG ActiveMQclient:69 - On Message: {"vm_uuid": "422f6867-f9e5-1682-8c8f-0173b81a5b93", "job_id": "Job-pRHNIWrFpBuuGPdh19CQVwBD"}
2025-02-07 12:16:52,292 INFO model_supervised_consumer:30 - Process data in message body {"vm_uuid": "422f6867-f9e5-1682-8c8f-0173b81a5b93", "job_id": "Job-pRHNIWrFpBuuGPdh19CQVwBD"}
2025-02-07 12:16:52,379 DEBUG databasewrapper:98 - Initialized DB Wrapper for - sallydb
2025-02-07 12:16:52,379 DEBUG databasewrapper:487 - Getting status of job: Job-pRHNIWrFpBuuGPdh19CQVwBD
2025-02-07 12:16:52,381 DEBUG databasewrapper:491 - Query responded with result: ('FE_SUPDONE',)
2025-02-07 12:16:52,381 DEBUG databasewrapper:505 - The status of the job Job-pRHNIWrFpBuuGPdh19CQVwBD is FE_SUPDONE.
2025-02-07 12:16:52,381 INFO model_supervised_consumer:369 - Updating status of job: Job-pRHNIWrFpBuuGPdh19CQVwBD to MODEL_SUPRVD_DONE
2025-02-07 12:16:52,405 DEBUG ActiveMQclient:93 - Ack message 4028128
2025-02-07 12:16:52,406 DEBUG ActiveMQclient:35 - Disconnected from broker
</code></pre>
<p>Consumer 2 comes up after 6 seconds and gets the same message:</p>
<pre><code>2025-02-07 12:16:57,550 DEBUG ActiveMQclient:69 - On Message: {"vm_uuid": "422f6867-f9e5-1682-8c8f-0173b81a5b93", "job_id": "Job-pRHNIWrFpBuuGPdh19CQVwBD"}
2025-02-07 12:16:57,568 INFO model_supervised_consumer:30 - Process data in message body {"vm_uuid": "422f6867-f9e5-1682-8c8f-0173b81a5b93", "job_id": "Job-pRHNIWrFpBuuGPdh19CQVwBD"}
2025-02-07 12:16:57,656 DEBUG databasewrapper:98 - Initialized DB Wrapper for - sallydb
2025-02-07 12:16:57,656 DEBUG databasewrapper:487 - Getting status of job: Job-pRHNIWrFpBuuGPdh19CQVwBD
2025-02-07 12:16:57,658 DEBUG databasewrapper:491 - Query responded with result: ('MODEL_SUDONE',)
2025-02-07 12:16:57,658 DEBUG databasewrapper:505 - The status of the job Job-pRHNIWrFpBuuGPdh19CQVwBD is MODEL_SUDONE.
2025-02-07 12:16:57,658 DEBUG model_supervised_consumer:40 - The job status is not FE_SUPDONE. Hence do not process the request.
2025-02-07 12:16:57,659 DEBUG ActiveMQclient:97 - NAck message 4028128
2025-02-07 12:16:57,659 DEBUG ActiveMQclient:35 - Disconnected from broker
</code></pre>
<p>There are checks within the application code to see whether the job is in the right state to process the message. If not, I send a NAck. This is what is saving me right now.</p>
<p>I expected that ActiveMQ would keep track of which messages were sent to which consumers. But, looking at the behaviour, that does not seem to be the case.</p>
<p>Please advise.</p>
<p>UPDATE:
This is the STOMP listener's <code>on_message</code> method that receives the messages. It forks a process to call <code>process_message()</code>, because <code>on_message</code> returns within a second; to handle the long-running operation, a separate Python process is spawned. Does this help?</p>
<pre><code>def on_message(self, headers, body):
logger.debug(f'Message received: {str(body)}, with ID: {str(headers["ack"])}' )
# create process
p = Process(target=self.process_message, args=(headers, body), )
# start process
p.start()
# wait for process to return
p.join()
</code></pre>
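<p>For reference, the guard inside <code>process_message()</code> looks roughly like this. This is only an untested sketch; <code>DatabaseWrapper</code>, <code>get_job_status</code>, the client ack/nack helpers, and the status names are placeholders based on the logs above, not the exact code:</p>
<pre><code>import json
import logging

logger = logging.getLogger(__name__)

def process_message(self, headers, body):
    data = json.loads(body)
    db = DatabaseWrapper("sallydb")              # placeholder for my DB wrapper
    status = db.get_job_status(data["job_id"])   # placeholder helper
    if status != "FE_SUPDONE":
        logger.debug("Job %s is not in FE_SUPDONE, sending NAck", data["job_id"])
        self.client.nack(headers)                # placeholder for my ActiveMQclient call
        return
    # ... long-running work, update the job status, then ack
    self.client.ack(headers)                     # placeholder for my ActiveMQclient call
</code></pre>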
|
<python><activemq-artemis><stomp>
|
2025-03-01 17:53:54
| 1
| 496
|
Amit
|
79,477,980
| 12,663,117
|
How can I install PyQt5 in python venv?
|
<p>I am on Ubuntu 20.04 and have made several attempts to install using pip. For some reason, pip just errors out. I have 3 different versions of pip on my system, and when I first tried installing PyQt5 into the system's main site-packages folder, I couldn't get any of them to install it successfully. I finally got it installed using the Apt package manager, but this is obviously not an option for a virtual environment.</p>
<p>Below is the full trace that I get when trying to install with pip:</p>
<pre><code>ERROR: Command errored out with exit status 1:
command: /home/ntolb/CODING_PROJECTS/python_workspaces/webscraping-projects_workspaces/scr-proj_ws1/bin/python /tmp/tmpuo_qeg4k prepare_metadata_for_build_wheel /tmp/tmpo5bkbzi5
cwd: /tmp/pip-install-9g1_3u0b/pyqt5
complete output (36 lines):
Querying qmake about your Qt installation...
Traceback (most recent call last):
File "/tmp/tmpuo_qeg4k", line 126, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel'
    During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/tmpuo_qeg4k", line 280, in <module>
main()
File "/tmp/tmpuo_qeg4k", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/tmp/tmpuo_qeg4k", line 130, in prepare_metadata_for_build_wheel
return _get_wheel_metadata_from_wheel(backend, metadata_directory,
File "/tmp/tmpuo_qeg4k", line 159, in _get_wheel_metadata_from_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/sipbuild/api.py", line 28, in build_wheel
project = AbstractProject.bootstrap('wheel',
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/sipbuild/abstract_project.py", line 74, in bootstrap
project.setup(pyproject, tool, tool_description)
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/sipbuild/project.py", line 608, in setup
self.apply_user_defaults(tool)
File "/tmp/pip-install-9g1_3u0b/pyqt5/project.py", line 68, in apply_user_defaults
super().apply_user_defaults(tool)
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/pyqtbuild/project.py", line 51, in apply_user_defaults
super().apply_user_defaults(tool)
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/sipbuild/project.py", line 237, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/pyqtbuild/builder.py", line 58, in apply_user_defaults
self._get_qt_configuration()
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/pyqtbuild/builder.py", line 483, in _get_qt_configuration
for line in project.read_command_pipe([self.qmake, '-query']):
File "/tmp/pip-build-env-gbcn5_fc/overlay/lib/python3.8/site-packages/sipbuild/project.py", line 575, in read_command_pipe
raise UserException(
sipbuild.exceptions.UserException
----------------------------------------
ERROR: Command errored out with exit status 1: /home/ntolb/CODING_PROJECTS/python_workspaces/webscraping-projects_workspaces/scr-proj_ws1/bin/python /tmp/tmpuo_qeg4k prepare_metadata_for_build_wheel /tmp/tmpo5bkbzi5 Check the logs for full command output.
</code></pre>
<p>As I've said, I've tried multiple versions of pip (which leads me to believe that the problem is with the package itself). I've also tried updating pip, to no avail.</p>
<p>In summary, how can I install PyQt5 into my venv in this situation?</p>
<p><em><strong>EDIT</strong></em></p>
<p>I messed around with the version using:</p>
<pre><code>pip install pyqt5=={version}
</code></pre>
<p>The current version is 5.15.11. I tried 5.15.10, 9, and 8 with no success. Next, I skipped back to 5.15.5, and it worked! This gives me a usable version of PyQt5 in my virtual environment, but I would rather be using the current one. I am leaving the question open in the hope that someone knows why the current version is not working.</p>
|
<python><linux><pip><pyqt><pyqt5>
|
2025-03-01 17:29:38
| 1
| 869
|
Nate T
|
79,477,784
| 12,357,035
|
Fatal Python error: GC object already tracked when invoking set constructor
|
<p>I have the following Python 2 function in a big Python-Cython project (print statements added for debugging):</p>
<pre><code>import gc
def e_closure (startstate):
"""Return all states reachable from startstates on epsilon transitions.
"""
print('eclos1', startstate)
print(len(gc.get_objects()))
sys.stdout.flush()
work = [startstate]
set()
print('eclos2')
print(len(gc.get_objects()))
sys.stdout.flush()
result = set()
print('eclos3')
sys.stdout.flush()
while work:
s = work.pop()
result.add(s)
for n in s.get_transitions(state.EPSILON):
if n not in result:
work.append(n)
x = sorted(result, key=lambda s: s.name)
return x
</code></pre>
<p>This is crashing with the error <code>Fatal Python error: GC object already tracked</code> when the <code>set()</code> constructor is called, and not on the first invocation but exactly on the 10th invocation. If I add an additional <code>set()</code> call on an earlier line, it crashes there instead. But it does not crash if I add <code>dict()</code> or <code>list()</code> calls instead.</p>
<p>Sample logs:</p>
<pre><code>('eclos1', start)
6961
eclos2
6962
eclos3
('eclos1', d0)
6962
eclos2
6963
eclos3
('eclos1', d2)
6963
eclos2
6964
eclos3
('eclos1', d3)
6964
eclos2
6965
eclos3
('eclos1', d4)
6965
eclos2
6966
eclos3
('eclos1', d1)
6966
eclos2
6967
eclos3
('eclos1', c0)
6967
eclos2
6968
eclos3
('eclos1', c2)
6968
eclos2
6969
eclos3
('eclos1', c3)
6969
eclos2
6970
eclos3
('eclos1', c4)
6970
</code></pre>
<p>In this entire function, only <code>get_transitions</code> is a Cython function. Also, it was working before and started causing issues only after cythonization. But I'm unable to understand what the issue is or how to debug it.</p>
<p>Python: <code>Python 2.7.12</code><br />
Cython: <code>Cython version 0.23.4</code></p>
|
<python><garbage-collection><cython><cythonize>
|
2025-03-01 14:55:53
| 0
| 3,414
|
Sourav Kannantha B
|
79,477,586
| 2,881,548
|
How to Convert 'data.ilearner' Model to 'model.pkl' in Azure ML?
|
<p>I created a pipeline in Azure ML that trains a model using <strong>Boosted Decision Tree Regression</strong>. From my understanding, the model is saved as <strong><code>data.ilearner</code></strong>.</p>
<p>However, I am unable to convert this model into a <strong><code>model.pkl</code></strong> format that can be loaded using <code>joblib</code>.</p>
<h3><strong>Questions:</strong></h3>
<ol>
<li>How can I create a <code>model.pkl</code> file in <strong>Azure ML</strong> for a Boosted Decision Tree Regression model?</li>
<li>How can I convert <code>data.ilearner</code> to <code>model.pkl</code>?</li>
</ol>
<p><a href="https://i.sstatic.net/KnoJYUyG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnoJYUyG.png" alt="enter image description here" /></a></p>
<p>I attempted to use the following Python script to load and convert the model:</p>
<pre class="lang-py prettyprint-override"><code>import lightgbm as lgb
import joblib
# Load the LightGBM model
model = lgb.Booster(model_file="data.ilearner")
# Save as a Pickle file
joblib.dump(model, "model.pkl")
</code></pre>
<p>But when running the script, I get the following error:</p>
<pre><code>% python3 convert_to_model_pkl.py
[LightGBM] [Fatal] Unknown model format or submodel type in model file data.ilearner
Traceback (most recent call last):
File "/Users/tomasz.olchawa/ng/ml/convert_to_model_pkl.py", line 5, in <module>
model = lgb.Booster(model_file="data.ilearner")
File "/Users/tomasz.olchawa/ng/ml/myenv/lib/python3.13/site-packages/lightgbm/basic.py", line 3697, in __init__
_safe_call(
~~~~~~~~~~^
_LIB.LGBM_BoosterCreateFromModelfile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
)
^
)
^
File "/Users/../ng/ml/myenv/lib/python3.13/site-packages/lightgbm/basic.py", line 313, in _safe_call
raise LightGBMError(_LIB.LGBM_GetLastError().decode("utf-8"))
lightgbm.basic.LightGBMError: Unknown model format or submodel type in model file data.ilearner
</code></pre>
|
<python><azure><machine-learning>
|
2025-03-01 12:45:07
| 1
| 681
|
THM
|
79,477,559
| 4,306,541
|
Eliminating list elements having string from another list
|
<p>I'm trying to get the subset of elements of L1 that do not match any element of another list.</p>
<pre><code>>>> L1 = ["apple", "banana", "cherry", "date"]
>>> L2 = ["bananaana", "datedate"]
>>>
</code></pre>
<p>For instance, from these two lists, I want to create a list ['apple', 'cherry']</p>
<pre><code>>>> [x for x in L1 for y in L2 if x in y]
['banana', 'date']
>>>
</code></pre>
<p>The above nested list comprehension helps to get the elements which match, but not the other way around.</p>
<pre><code>>>> [x for x in L1 for y in L2 if x not in y]
['apple', 'apple', 'banana', 'cherry', 'cherry', 'date']
>>>
^^^^^^^^ I was expecting here ['apple', 'cherry']
</code></pre>
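<p>To make the intent concrete, this is the kind of check I'm after (untested sketch, assuming "does not match" means the L1 element is not a substring of any L2 element):</p>
<pre><code>&gt;&gt;&gt; [x for x in L1 if all(x not in y for y in L2)]
['apple', 'cherry']
</code></pre>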
|
<python><list><list-comprehension>
|
2025-03-01 12:30:19
| 4
| 469
|
Susanta Dutta
|
79,477,454
| 5,722,359
|
Why are the greyscale & B&W images created by Pillow giving a numpy.ndarray that has a 3D shape?
|
<p>Input:</p>
<pre><code>import numpy as np
from PIL import Image
with Image.open("testimage.png") as img:
img.convert('L') # convert to greyscale
#img.convert('1') # convert to B&W
image_pil = np.asarray(img)
print(f'{image_pil.shape=}')
</code></pre>
<p>Output:</p>
<pre><code>image_pil.shape=(266, 485, 3)
</code></pre>
<p>Question:</p>
<p>Why is the shape of <code>image_pil</code> not <code>(266, 485)</code> or <code>(266, 485, 1)</code>? I was expecting a 2D greyscale image, but it gave a shape of <code>(266, 485, 3)</code>, indicating that it is still an RGB image. Also, <code>.convert('1')</code> had the same outcome. Why are they not shaped <code>(266, 485)</code>?</p>
|
<python><numpy><python-imaging-library>
|
2025-03-01 11:01:36
| 1
| 8,499
|
Sun Bear
|
79,477,395
| 8,040,369
|
Getting count of unique column values based on another column in df
|
<p>I have df as below</p>
<pre><code>User Action
AA Page1
AA Page1
AA Page2
BB Page2
BB Page3
CC Page3
CC Page3
</code></pre>
<p>Is there a way to get the count of the different Pages for each user as a df? Something like below:</p>
<pre><code>User Page1 Page2 Page3
AA 2 1 0
BB 0 1 1
CC 0 0 2
</code></pre>
<p>So far I have tried to get the overall count of all the actions per user:</p>
<pre><code>df['User'].value_counts().reset_index(name='Action')
</code></pre>
<p>What should I do to get the unique values of the Action column as separate columns of a df and their counts as values?</p>
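<p>In other words, I'm after something along the lines of this untested sketch, cross-tabulating User against Action:</p>
<pre><code>import pandas as pd

counts = pd.crosstab(df['User'], df['Action']).reset_index()
print(counts)
</code></pre>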
|
<python><pandas>
|
2025-03-01 10:06:27
| 1
| 787
|
SM079
|
79,476,892
| 1,980,208
|
Pandas groupby make all elements 0 if first element is 1
|
<p>I have the following df:</p>
<pre><code>| day | first mover |
| -------- | -------------- |
| 1 | 1 |
| 2 | 1 |
| 3 | 0 |
| 4 | 0 |
| 5 | 0 |
| 6 | 1 |
| 7 | 0 |
| 8 | 1 |
</code></pre>
<p>I want to group this DataFrame in order from bottom to top with a frequency of 4 rows. Furthermore, if the first row of a group is 1, make all other entries 0. Desired output:</p>
<pre><code>| day | first mover |
| -------- | -------------- |
| 1 | 1 |
| 2 | 0 |
| 3 | 0 |
| 4 | 0 |
| 5 | 0 |
| 6 | 0 |
| 7 | 0 |
| 8 | 0 |
</code></pre>
<p>I have accomplished the first half. I am confused about how to make the other entries 0 if the first entry in each group is 1.</p>
<pre><code>N=4
(df.iloc[::-1].groupby(np.arange(len(df))//N
</code></pre>
|
<python><pandas>
|
2025-02-28 23:29:23
| 5
| 439
|
prem
|
79,476,789
| 11,609,834
|
How to get the dot product of inner dims in numpy array
|
<p>I would like to compute the dot product (matmul) of the inner dimensions of two 3D arrays.
In the following example, I have an array of 10 2x3 matrices (X) and an array of 8 1x3 matrices (Y). The result Z should be a 10-element array of 8 x 2 matrices (you might also think of this as a 10 x 8 array of 2-d vectors).</p>
<pre class="lang-py prettyprint-override"><code>X = np.arange(0, 10 * 2 * 3).reshape(10, 2, 3)
Y = np.arange(0, 8 * 1 * 3).reshape(8, 1, 3)
# write to Z, which has the dot-product of internal dims of X and Y
Z = np.empty((10, 8, 2))
for i, x_i in enumerate(X):
for j, y_j in enumerate(Y):
z = x_i @ y_j.T
        # subscript z to flatten it.
Z[i, j, :] = z[:, 0]
</code></pre>
<p>This is correct, but I would like a vectorized solution.</p>
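<p>For example, I suspect a single <code>einsum</code> call could express the contraction; this untested sketch is my best guess at the index mapping from the loop above:</p>
<pre class="lang-py prettyprint-override"><code># Z[i, j, k] = sum over d of X[i, k, d] * Y[j, 0, d]
Z_vec = np.einsum('ikd,jd-&gt;ijk', X, Y[:, 0, :])
assert np.allclose(Z_vec, Z)
</code></pre>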
|
<python><numpy>
|
2025-02-28 22:04:35
| 1
| 1,013
|
philosofool
|
79,476,753
| 62,225
|
Apache Beam: Use S3 bucket for Snowflake CSV output when using apache_beam.io.snowflake.ReadFromSnowflake in Python
|
<p>I'm trying to figure out how to work around an issue I've encountered with the Snowflake connector in Python.</p>
<p>Basically, the issue boils down to the fact that to use Snowflake in Python it seems you're required to use Google Cloud Storage to store the temporary CSV files that get generated. However, my requirements are that I need to use AWS S3 for this -- GCS is out of the question.</p>
<p>Right now I'm just running my pipeline using the <code>DirectRunner</code>, but eventually I'll need to be able to run Beam on Spark <strong>or</strong> Flink. So having a solution that works for all three runners would be fantastic.</p>
<p>I've got everything else set up properly, using something like this:</p>
<pre class="lang-py prettyprint-override"><code>def learning(
beam_options: Optional[S3Options] = None,
test: Callable[[beam.PCollection], None] = lambda _: None,
) -> None:
with beam.Pipeline(options=beam_options) as pipeline:
snowflakeRows = pipeline | "Query snowflake" >> ReadFromSnowflake(
server_name="<server>",
database="production_data",
schema="public",
staging_bucket_name="s3://my-bucket/",
storage_integration_name="s3_integration_data_prod",
query="select * from beam_poc_2025",
csv_mapper=snowflake_csv_mapper,
username="<work-email>",
password='<password>',
)
snowflakeRows | "Printing Snowflake" >> beam.Map(print)
</code></pre>
<p>I know that this successfully authenticates to Snowflake because I get a Duo push notification to approve the login. It just errors out later on with this stack trace:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/home/sean/Code/beam-poc/main.py", line 43, in <module>
app.learning(beam_options=beam_options)
File "/home/sean/Code/beam-poc/example/app.py", line 205, in learning
with beam.Pipeline(options=beam_options) as pipeline:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/pipeline.py", line 620, in __exit__
self.result = self.run()
^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/pipeline.py", line 594, in run
return self.runner.run_pipeline(self, self._options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/direct/direct_runner.py", line 184, in run_pipeline
return runner.run_pipeline(pipeline, options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 195, in run_pipeline
self._latest_run_result = self.run_via_runner_api(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 221, in run_via_runner_api
return self.run_stages(stage_context, stages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 468, in run_stages
bundle_results = self._execute_bundle(
^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 793, in _execute_bundle
self._run_bundle(
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 1032, in _run_bundle
result, splits = bundle_manager.process_bundle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/.local/share/mise/installs/python/3.12.9/lib/python3.12/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 1398, in process_bundle
raise RuntimeError(result.error)
RuntimeError: org.apache.beam.sdk.util.UserCodeException: java.lang.IllegalArgumentException: No filesystem found for scheme s3
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:39)
at org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:810)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:348)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:275)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1837)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$3100(FnApiDoFnRunner.java:145)
at org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.output(FnApiDoFnRunner.java:2695)
at org.apache.beam.sdk.transforms.MapElements$2.processElement(MapElements.java:151)
at org.apache.beam.sdk.transforms.MapElements$2$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:810)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:348)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:275)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1837)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$3100(FnApiDoFnRunner.java:145)
at org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.outputWindowedValue(FnApiDoFnRunner.java:2725)
at org.apache.beam.sdk.transforms.Reshuffle$RestoreMetadata$1.processElement(Reshuffle.java:187)
at org.apache.beam.sdk.transforms.Reshuffle$RestoreMetadata$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:810)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:348)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:275)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1837)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$3100(FnApiDoFnRunner.java:145)
at org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.output(FnApiDoFnRunner.java:2695)
at org.apache.beam.sdk.transforms.Reshuffle$1.processElement(Reshuffle.java:123)
at org.apache.beam.sdk.transforms.Reshuffle$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:810)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:348)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:275)
at org.apache.beam.fn.harness.BeamFnDataReadRunner.forwardElementToConsumer(BeamFnDataReadRunner.java:213)
at org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.multiplexElements(BeamFnDataInboundObserver.java:172)
at org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.awaitCompletion(BeamFnDataInboundObserver.java:136)
at org.apache.beam.fn.harness.control.ProcessBundleHandler.processBundle(ProcessBundleHandler.java:552)
at org.apache.beam.fn.harness.control.BeamFnControlClient.delegateOnInstructionRequestType(BeamFnControlClient.java:150)
at org.apache.beam.fn.harness.control.BeamFnControlClient$InboundObserver.lambda$onNext$0(BeamFnControlClient.java:115)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at org.apache.beam.sdk.util.UnboundedScheduledExecutorService$ScheduledFutureTask.run(UnboundedScheduledExecutorService.java:163)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1570)
Caused by: java.lang.IllegalArgumentException: No filesystem found for scheme s3
at org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:557)
at org.apache.beam.sdk.io.FileSystems.match(FileSystems.java:129)
at org.apache.beam.sdk.io.FileSystems.match(FileSystems.java:150)
at org.apache.beam.sdk.io.FileSystems.match(FileSystems.java:162)
at org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn.process(FileIO.java:753)
</code></pre>
<p>Which seems to be trying to say that the expansion service doesn't have <code>s3</code> registered as a valid filesystem scheme.</p>
<p>Is there a way to register <code>s3</code> as a filesystem scheme for the expansion service from within a Python pipeline? Or will this require writing my own custom expansion service?</p>
<p>My preferred solution would be that I add some code to the Python pipeline so that it registers <code>s3</code> as a schema. Mostly because I'm working on a proof-of-concept right now, and writing a custom expansion service seems like it'd take longer than I want to spend on a PoC.</p>
|
<python><amazon-s3><snowflake-cloud-data-platform><apache-beam>
|
2025-02-28 21:47:44
| 0
| 752
|
Sean Hagen
|
79,476,729
| 7,373,440
|
How to launch a FastAPI webserver from Dagster?
|
<p>I am creating an MLOps pipeline which processes data, trains a model, and deploys the model into an inference service. I am trying to create all of this without cloud services such as S3. I am using Dagster for the automated training pipeline.</p>
<p>I have managed to process the data, train the model, and output model artifacts, all written as Dagster assets. When I try to deploy the model as an inference service using a Dagster asset and Python's subprocess module, it keeps failing. It seems that when the Dagster job quits, the launched service quits too.</p>
<p>Here is my inference service deployment code (as a Dagster asset):</p>
<pre><code>@dg.asset(group_name="ml_deploy", deps=[train_model])
def deploy_inference(train_model):
print('Start inference service..')
subprocess.Popen([
"mlops-pipelines/start_inference.sh"
], start_new_session=True)
return "Inference Service deployed successfully"
</code></pre>
<p><code>start_inference.sh</code> is my FastAPI deployment script as below:</p>
<pre><code>#!/bin/bash
# Define the log file and PID file
mkdir -p uvicorn_logs
LOG_FILE="uvicorn_logs/uvicorn.log"
PID_FILE="uvicorn_logs/uvicorn.pid"
# Start Uvicorn and save the PID
nohup uvicorn inference:app --host 0.0.0.0 --port 8001 > "$LOG_FILE" 2>&1 &
# Save the process ID
echo $! > "$PID_FILE"
echo "Uvicorn started. PID: $(cat $PID_FILE)"
</code></pre>
<p>How can I launch a web service from Dagster? Or is this not the correct way to do it?</p>
|
<python><mlops><dagster>
|
2025-02-28 21:29:42
| 0
| 3,071
|
addicted
|
79,476,711
| 536,538
|
Avoid having my custom button bound to multiple instances of that button when pressed
|
<p>My example seems to pass <code>self</code> as the button object (based on printing <code>my_id</code> to debug, and it matches the <code>my_id</code> for each button), but when I click on "Option A" all 3 buttons animate. When I click on "Option B", B and C animate.</p>
<p>Why is the on_touch_down event triggering other buttons of the same <code>RoundedButton</code> class on the same page? (These are in a screen manager, not shown in the code.)</p>
<pre><code><RoundedButton@Button>:
style: "elevated"
background_color: (0, 0, 0, 0)
background_normal: ""
background_down: ""
canvas.before:
Color:
rgba: (0.72, 0.68, 0.92, 1) if self.state == 'normal' else (0.33, 0.48, 0.92, 1)
RoundedRectangle:
size: self.size
pos: self.pos
radius: [20]
Color:
rgba: (0.72, 0.68, 0.92, 1)
Line:
rounded_rectangle: (self.pos[0], self.pos[1], self.size[0], self.size[1], 20)
width: 1 if self.state == 'normal' else 5
</code></pre>
<p>...</p>
<pre><code>
MDBoxLayout:
orientation: 'horizontal'
RoundedButton:
my_id: 'optionA'
text: 'Option A'
size_hint: 0.15, 0.35
on_touch_down: app.animate_darken(self)
RoundedButton:
my_id: 'optionB'
text: 'Option B'
size_hint: 0.15, 0.35
on_touch_down: app.animate_darken(self)
RoundedButton:
my_id: 'optionC'
text: 'Option C'
size_hint: 0.15, 0.35
on_touch_down: app.animate_darken(self)
</code></pre>
<p>And elsewhere, that animate_darken function:</p>
<pre><code> def animate_darken(self, widget, fade_length=0.5, *args):
print(widget.my_id)
BG_color = widget.background_color
A = Animation(
background_color=(0,0,0,1),
duration=fade_length #seconds
)
A += Animation(x=0, y=0, duration=0.3)
A += Animation(background_color=BG_color) # reset it
A.start(widget)
</code></pre>
<p>Q: How do I bind each RoundedButton only to that button, and animate it without affecting other buttons on the screen?</p>
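<p>My working guess (untested sketch) is that I need to pass the touch through from kv, e.g. <code>on_touch_down: app.animate_darken(self, args[1])</code>, and then bail out when the touch is not inside the widget:</p>
<pre><code>    def animate_darken(self, widget, touch, fade_length=0.5, *args):
        # on_touch_down is dispatched to every widget in the tree, so ignore
        # touches that do not land inside this particular button
        if not widget.collide_point(*touch.pos):
            return
        print(widget.my_id)
        BG_color = widget.background_color
        A = Animation(background_color=(0, 0, 0, 1), duration=fade_length)
        A += Animation(background_color=BG_color)  # reset it
        A.start(widget)
</code></pre>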
|
<python><python-3.x><kivy><kivymd>
|
2025-02-28 21:14:46
| 1
| 4,741
|
Marc Maxmeister
|
79,476,701
| 11,354,959
|
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType. Huggingface model locally
|
<p>I'm trying to run an LM locally, and I don't really have much knowledge of Hugging Face.
What I did was create an account there and then create a token that can read/write.
I created a project and did <code>pip install transformers tensorflow</code>.
After that I created a file test.py:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import pipeline
HF_TOKEN ='MY_TOKEN_VALUE_HERE'
hugging_face_model = 'mistralai/Mistral-Small-24B-Instruct-2501'
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline(
"text-generation",
model=hugging_face_model,
token=HF_TOKEN,
max_length=200
)
pipe(messages)
</code></pre>
<p>And I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/louis/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 71, in <module>
cli.main()
File "/home/louis/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 501, in main
run()
File "/home/louis/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 351, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/louis/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
File "/home/louis/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
_run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File "/home/louis/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
exec(code, run_globals)
File "/home/louis/zero-to-million/backend/test.py", line 13, in <module>
pipe = pipeline(
File "/home/louis/zero-to-million/backend/venv/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 940, in pipeline
framework, model = infer_framework_load_model(
File "/home/louis/zero-to-million/backend/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 290, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/louis/zero-to-million/backend/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/louis/zero-to-million/backend/venv/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 2909, in from_pretrained
resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
File "/home/louis/zero-to-million/backend/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 1010, in get_checkpoint_shard_files
if not os.path.isfile(index_filename):
File "/usr/lib/python3.10/genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
</code></pre>
<p>I understand the error, but I am not sure which path it's referring to.</p>
|
<python><tensorflow><huggingface>
|
2025-02-28 21:07:43
| 0
| 370
|
El Pandario
|
79,476,595
| 4,175,822
|
Is it possible to type hint a callable that takes positional arguments only using a generic for the positional type?
|
<p>In Python, is it possible to type hint a callable that takes positional arguments only, using a generic for the positional type?</p>
<p>The context is that I want to ingest positional args only.
See this temporalio code:</p>
<pre><code>@overload
async def execute_activity(
activity: Callable[..., Awaitable[ReturnType]],
arg: None,
*,
args: Sequence[Any],
task_queue: Optional[str] = None,
schedule_to_close_timeout: Optional[timedelta] = None,
schedule_to_start_timeout: Optional[timedelta] = None,
start_to_close_timeout: Optional[timedelta] = None,
heartbeat_timeout: Optional[timedelta] = None,
retry_policy: Optional[temporalio.common.RetryPolicy] = None,
cancellation_type: ActivityCancellationType = ActivityCancellationType.TRY_CANCEL,
activity_id: Optional[str] = None,
versioning_intent: Optional[VersioningIntent] = None,
) -> ReturnType: ...
</code></pre>
<p>The doc is present <a href="https://docs.temporal.io/develop/python/core-application#activity-execution" rel="nofollow noreferrer">here</a></p>
<p>The activity is a callable where any positional arguments are allowed.
I want a generic for the <code>...</code> input, so I can use it to pass the typed arguments of the activity through the <code>args</code> parameter.</p>
<p>It looks like I could use <a href="https://docs.python.org/3/library/typing.html#typing.ParamSpec" rel="nofollow noreferrer">ParamSpec</a>, but that allows keyword arguments too, and it appears that it would not raise the needed errors when keyword args are defined.
Is there a way to do this in Python, where the positional argument types of <code>activity</code> can be used as the type of <code>args</code> in that signature? How can I do this?</p>
<p>My goal would be some pseudocode like:</p>
<pre><code>@overload
async def execute_activity(
activity: Callable[GenericIterable, Awaitable[ReturnType]],
arg: None,
*,
args: GenericIterable,
task_queue: Optional[str] = None,
schedule_to_close_timeout: Optional[timedelta] = None,
schedule_to_start_timeout: Optional[timedelta] = None,
start_to_close_timeout: Optional[timedelta] = None,
heartbeat_timeout: Optional[timedelta] = None,
retry_policy: Optional[temporalio.common.RetryPolicy] = None,
cancellation_type: ActivityCancellationType = ActivityCancellationType.TRY_CANCEL,
activity_id: Optional[str] = None,
versioning_intent: Optional[VersioningIntent] = None,
) -> ReturnType: ...
# where the below function could be input
def some_activity(a: int, b: str) -> float:
# sample allowed activity definition
return 3.14
</code></pre>
<p>The solution may be: don't do that, use one input and output type so that generics/type hints will do their job here.</p>
<h3>Working answer is below</h3>
<p>If you want this in Temporal, upvote this feature request: <a href="https://github.com/temporalio/sdk-python/issues/779" rel="nofollow noreferrer">https://github.com/temporalio/sdk-python/issues/779</a></p>
|
<python><python-typing><temporal-workflow>
|
2025-02-28 20:14:06
| 1
| 2,821
|
spacether
|
79,476,468
| 11,092,636
|
mypy exclude option is not working - bug?
|
<p>MRE:</p>
<ul>
<li>Make a folder (let's call it <code>MyProjectFolder</code>).</li>
<li>Make a <code>helloroot.py</code> empty file.</li>
<li>Make a subfolder named <code>FolderNotIgnored</code>.</li>
<li>Make a <code>hello.py</code> in that subfolder.</li>
<li>Make a <code>mypy.ini</code> file containing:</li>
</ul>
<pre><code>[mypy]
exclude=FolderNotIgnored/
</code></pre>
<p>The tree will look like this:</p>
<pre><code>β helloroot.py
β mypy.ini
ββββFolderNotIgnored
hello.py
</code></pre>
<p>Open the terminal.
Run <code>pytest --mypy</code>.</p>
<p>We expect mypy to ignore <code>FolderNotIgnored\hello.py</code>.</p>
<p>However, I get this:</p>
<pre class="lang-py prettyprint-override"><code>================================================================= test session starts ==================================================================
platform win32 -- Python 3.12.5, pytest-8.3.4, pluggy-1.5.0
rootdir: C:\Users\CENSORED\Documents\Challenges\MyProjectFolder
plugins: anyio-4.8.0, hydra-core-1.3.2, mypy-0.10.3
collected 3 items
FolderNotIgnored\hello.py .. [ 66%]
helloroot.py . [100%]
========================================================================= mypy =========================================================================
Success: no issues found in 2 source files
================================================================== 3 passed in 0.10s ===================================================================
</code></pre>
|
<python><mypy>
|
2025-02-28 19:04:31
| 1
| 720
|
FluidMechanics Potential Flows
|
79,476,439
| 5,029,763
|
Groupby column with relativedelta value
|
<p>I have a dataframe with a column of the <code>dateutil.relativedelta</code> type. When I try grouping by this column, I get the error <code>TypeError: '&lt;' not supported between instances of 'relativedelta' and 'relativedelta'</code>. I didn't find anything in the pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">page</a> that indicated that there are restrictions on the types of columns that can be used to group the dataframe. Am I missing something?</p>
<p>Minimal example below:</p>
<pre><code>from dateutil.relativedelta import relativedelta
import pandas as pd
import itertools
def expand_grid(data_dict):
rows = itertools.product(*data_dict.values())
return pd.DataFrame.from_records(rows, columns=data_dict.keys())
ref_dates = pd.date_range(start="2024-06", end="2025-02", freq="MS").tolist()
windows = [relativedelta(months=-30),relativedelta(months=-18)]
max_horizon = [relativedelta(months=2)]
params = expand_grid({'ref_date': ref_dates, 'max_horizon': max_horizon, 'window': windows})
for name, group in params.groupby(by=['window']): print(name)
</code></pre>
|
<python><pandas><python-dateutil>
|
2025-02-28 18:48:58
| 1
| 1,935
|
user5029763
|
79,476,420
| 5,431,734
|
Finding original indices for rearranged arrays
|
<p>Given two numpy arrays of equal shape, I want to track how elements from the first array have moved in the second array. Specifically, for each element in the second array, I want to find its original position in the first array. The arrays are not sorted.</p>
<p><strong>Example 1: All elements present</strong></p>
<pre><code>a1 = np.array([1, 2, 3, 4]) # original array
a2 = np.array([2, 1, 3, 4]) # new array
# Result: [1, 0, 2, 3]
# Explanation:
# - 2 was originally at index 1
# - 1 was originally at index 0
# - 3 was originally at index 2
# - 4 was originally at index 3
</code></pre>
<p><strong>Example 2: With new elements</strong></p>
<pre><code>a1 = np.array([1, 2, 3, 4])
a2 = np.array([2, 1, 33, 4])
# Result: [1, 0, -1, 3]
# The value 33 wasn't in the original array, so it gets -1
</code></pre>
<p>My solution is:</p>
<pre><code>[a1.tolist().index(v) if v in a1 else -1 for v in a2]
</code></pre>
<p>or <code>np.where(a2[:, None] == a1)[1]</code> but this will not work in example 2</p>
<p>Is there a better way to do this? In real life my arrays have millions of rows, and not that many columns (fewer than 10).</p>
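<p>One vectorized direction I'm considering is a sort-and-lookup approach, roughly like this untested sketch (it assumes the values in <code>a1</code> are unique):</p>
<pre><code>import numpy as np

def original_indices(a1, a2):
    order = np.argsort(a1)                       # sort a1 once
    pos = np.searchsorted(a1, a2, sorter=order)  # candidate position of each a2 value
    pos = np.clip(pos, 0, len(a1) - 1)
    idx = order[pos]
    # values of a2 that are not in a1 get -1
    return np.where(a1[idx] == a2, idx, -1)

a1 = np.array([1, 2, 3, 4])
a2 = np.array([2, 1, 33, 4])
print(original_indices(a1, a2))  # [ 1  0 -1  3]
</code></pre>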
|
<python><numpy>
|
2025-02-28 18:42:00
| 4
| 3,725
|
Aenaon
|
79,476,158
| 6,003,629
|
How can I perform a calculation on a rolling window over a partition in polars?
|
<p>I have a dataset containing GPS coordinates of a few planes. I would like to calculate the bearing of each plane at every point in time.</p>
<p>The dataset has, among others, these columns:</p>
<ol>
<li><code>event_uid</code></li>
<li><code>plane_no</code></li>
<li><code>timestamp</code></li>
<li><code>gps_lat</code></li>
<li><code>gps_lon</code></li>
</ol>
<p>I can calculate the bearing for a single <code>plane_no</code> by doing something as follows:</p>
<pre class="lang-py prettyprint-override"><code>lf.sort("timestamp")
.with_row_index('new_index')
.with_columns(
pl.struct(pl.col('gps_lat'), pl.col('gps_lon'))
.rolling('new_index', period='2i')
.shift(-1)
.alias("latlon")
).with_columns(
pl.col("latlon")
.map_elements(lambda val: calculate_bearing(val[0]['gps_lat'], val[0]['gps_lon'], val[1]['gps_lat'], val[1]['gps_lon']), return_dtype=pl.Float64)
.alias("bearing")
).drop(["latlon", "new_index"])
</code></pre>
<p>In order to work on my entire dataset, though, I need to run this on every <code>plane_no</code> as a partition. I can't combine <code>rolling</code> with <code>over</code> in polars. Is there an alternative? Any possible improvement ideas or overall comments are also welcome.</p>
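<p>Since the bearing only needs the next point of the same plane, I am also wondering whether something like this untested sketch with <code>shift(-1).over("plane_no")</code> would be a valid alternative to <code>rolling</code> (it reuses my <code>calculate_bearing</code> from above):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

lf = (
    lf.sort("plane_no", "timestamp")
    .with_columns(
        pl.col("gps_lat").shift(-1).over("plane_no").alias("next_lat"),
        pl.col("gps_lon").shift(-1).over("plane_no").alias("next_lon"),
    )
    .with_columns(
        pl.struct("gps_lat", "gps_lon", "next_lat", "next_lon")
        .map_elements(
            lambda s: calculate_bearing(s["gps_lat"], s["gps_lon"], s["next_lat"], s["next_lon"]),
            return_dtype=pl.Float64,
        )
        .alias("bearing")
    )
    .drop("next_lat", "next_lon")
)
</code></pre>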
|
<python><window-functions><python-polars>
|
2025-02-28 16:32:47
| 2
| 385
|
jimfawkes
|
79,476,105
| 7,391,480
|
segment-geospatial installation breaks jupyter lab in Ubuntu
|
<p>I would like to install jupyter inside a conda environment and then install <code>segment-geospatial</code>, a python library from <a href="https://samgeo.gishub.org/" rel="nofollow noreferrer">samgeo</a>.</p>
<p>However, installing <code>segment-geospatial</code> breaks something that causes Jupyter notebooks to stop working. Jupyter lab works before I install the package. After I install it, I receive the following error when I try to start jupyter lab:</p>
<pre><code>ModuleNotFoundError: No module named 'pysqlite2'
</code></pre>
<p>Any ideas on how to fix this?</p>
<p>Here's the steps I took:</p>
<pre><code>conda create -c conda-forge -n MY_ENV_NAME python pandas jupyter ipykernel
conda activate MY_ENV_NAME
jupyter lab
# jupyter lab opens fine
conda install segment-geospatial
#install completes fine
jupyter lab
# jupyter lab does not open, I receive the error:
ModuleNotFoundError: No module named 'pysqlite2'
</code></pre>
<p>I'm using Ubuntu 22.04</p>
|
<python><jupyter-notebook><conda><geospatial><pysqlite>
|
2025-02-28 16:12:25
| 1
| 1,364
|
edge-case
|
79,476,089
| 310,298
|
managing multi-component project with uv package manager
|
<p>I have an application that is made of crawler, api, and data sub-projects. The crawler populates data in a DB, and an API serves data from that DB, so both depend on the data project. I am new to the uv package manager and thought this is a good use case for workspaces. My dir structure is as follows (simplified here):</p>
<pre><code>myapp
|- pyproject.toml - define main project myapp, no dependencies. designate crawler, api, data as workspace members and set their source to workspace.
|- crawler
| |- pyproject.toml - define project crawler and its dependencies (including data, from workspace)
| |- main.py
|- api
| |- pyproject.toml - define project api and its dependencies (including data, from workspace)
| |- main.py
|- data
| |- pyproject.toml - define project data and its dependencies
| |- main.py
</code></pre>
<p>I'd expect to have a .venv for each project, but uv seems to create a .venv for the myapp root project regardless of where I run a uv command (in a sub-project or in the root), and regardless of whether I specify the --project flag (which I thought would run the command in that project's venv).</p>
<p>Appreciate some guidance on how to manage a project like this, including how the pyprojects should be managed and how the uv commands should be executed. Thanks.</p>
|
<python><uv>
|
2025-02-28 16:04:59
| 0
| 6,354
|
itaysk
|
79,476,088
| 20,895,654
|
Python dictionary vs dataclass (and metaclasses) for dynamic attributes
|
<p>Less object oriented snippet (1):</p>
<pre><code>class Trickjump(TypedDict):
id: int
name: str
difficulty: str
...
</code></pre>
<p>Dataclass snippet (2):</p>
<pre><code>@dataclass
class Trickjump():
id: IdAttr
name: NameAttr
difficulty: DifficultyAttr
...
</code></pre>
<p>My issue lies in the fact that I can't decide which approach would be better in my case. I need attributes that are dynamically creatable with a name and custom functionality, where the user gives me the attribute's name as a string and the value as a string. I want to then use the attribute's custom functionality with the given value to produce a result.</p>
<p>In the first code snippet, which I find is less object oriented, I could get the custom functionality of an attribute and use it with the value in the following way:</p>
<p><code>Attributes.get("attr_name").sort_key(value) </code></p>
<p>In the second code snippet, on the other hand, I would have to dynamically create classes with <code>type(...)</code> to act as attributes, each of which would also directly hold its value, probably best done using metaclasses.</p>
<p>I think the first way is way less work, but I feel like the second option is something more advanced but possibly also a bit over the top. Which way do you think is the best for my scenario? Or is there another solution?</p>
|
<python><attributes><metaprogramming><python-dataclasses>
|
2025-02-28 16:04:09
| 1
| 346
|
JoniKauf
|
79,475,564
| 3,577,502
|
Powershell and CMD combining command-line filepath arguments to Python
|
<p>I was making user-entered variables configurable via command-line parameters & ran into this weird behaviour:</p>
<pre><code>PS D:> python -c "import sys; print(sys.argv)" -imgs ".\Test V4\Rilsa\" -nl 34
['-c', '-imgs', '.\\Test V4\\Rilsa" -nl 34']
PS D:> python -c "import sys; print(sys.argv)" -imgs ".\TestV4\Rilsa\" -nl 34
['-c', '-imgs', '.\\TestV4\\Rilsa\\', '-nl', '34']
</code></pre>
<p>If the name of my folder is <code>Test V4</code> with a space character, then all following parameters end up in the same argument element <code>'.\\Test V4\\Rilsa" -nl 34'</code>. There is also a trailing <code>"</code> quote after the directory name. I tried this again in CMD, thinking it was a Powershell quirk & experienced the same behaviour.</p>
<p>What's going on here? I'm assuming it has something to do with backslashes in Powershell -- though it's the default in Windows for directory paths -- but why do I get diverging behaviour depending on space characters & what's a good way to handle this assuming Windows paths are auto-completed into this form by the shell (i.e. trailing <code>\</code>)?</p>
|
<python><powershell><command-line>
|
2025-02-28 12:42:04
| 1
| 887
|
Lovethenakedgun
|
79,475,440
| 2,266,881
|
How can I convert a Polars dataframe to a column-oriented JSON object?
|
<p>I'm trying to convert a Polars dataframe to a JSON object, but I can't seem to find a way to change the format of it between row/col orientation. In Pandas, by default, it creates a column-oriented JSON object, and it can be changed easily, but in Polars it's row-oriented and I can't seem to find a way to change that, since the <code>write_json</code> method doesn't actually receive any argument other than the target filename if you want to save it directly.</p>
<p>Any idea how to achieve that?</p>
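<p>What I have in mind is something like this untested sketch, going through a column-oriented dict first:</p>
<pre class="lang-py prettyprint-override"><code>import json

import polars as pl

df = pl.DataFrame({"a": [1, 2], "b": ["x", "y"]})
column_oriented = json.dumps(df.to_dict(as_series=False))
print(column_oriented)  # {"a": [1, 2], "b": ["x", "y"]}
</code></pre>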
|
<python><json><dataframe><python-polars>
|
2025-02-28 11:54:07
| 2
| 1,594
|
Ghost
|
79,475,378
| 3,437,721
|
Azure function Open Telemetry
|
<p>I currently have an Azure APIM which uses the emit-metric policy to send custom metrics to Application Insights:</p>
<pre><code><emit-metric name="appid-info" namespace="app-stats">
<dimension name="api-name" value="@(context.Api.Name)" />
<dimension name="app-id" value="@(context.Variables.GetValueOrDefault<string>("appid"))" />
</emit-metric>
</code></pre>
<p>This is just a small sample.</p>
<p>I am starting to use a function app which is called from the APIM, and I want to emit the same metrics in the Python function app as I do in the APIM:</p>
<p><a href="https://learn.microsoft.com/en-us/python/api/overview/azure/monitor-opentelemetry-exporter-readme?view=azure-python-preview#metric-instrument-usage" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/overview/azure/monitor-opentelemetry-exporter-readme?view=azure-python-preview#metric-instrument-usage</a></p>
<p>I start by using the import:</p>
<pre><code>from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter
</code></pre>
<p>Then I created a function which can be called:</p>
<pre><code>def log_to_app_insights():
"""Log data to Azure Application Insights"""
try:
exporter = AzureMonitorMetricExporter(
connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)
except Exception as e:
logging.error(f"Failed to log to Application Insights: {str(e)}")
</code></pre>
<p>Note that I have set the connection string as an env variable on the function.</p>
<p>What do I need to do next? I want to be able to call this function at the end of the function execution (preferably async).</p>
<p>But the MS example above mentions periodic readers, timers, etc. I just want to emit the custom metrics to my namespace from the function so that they appear the same way as the emit-metric ones.</p>
<p>Is this possible?</p>
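<p>My rough (untested) understanding from that example is that the exporter has to be wired into a meter provider, and the policy dimensions become attributes on the instrument; whether the meter name maps to my <code>app-stats</code> namespace is exactly what I'm unsure about:</p>
<pre><code>import os
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter

exporter = AzureMonitorMetricExporter(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=60000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("app-stats")
appid_counter = meter.create_counter("appid-info")

def record_call(api_name, app_id):
    # the dimensions from the APIM policy become attributes here
    appid_counter.add(1, {"api-name": api_name, "app-id": app_id})
</code></pre>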
|
<python><azure><azure-functions><open-telemetry>
|
2025-02-28 11:31:28
| 0
| 2,329
|
user3437721
|
79,475,261
| 4,759,736
|
Slack sdk OAuth 2 redirect url not working
|
<p>I created a Slack app and I'm trying to get OAuth 2.0 to work.
I'm using Python with flask to try and exchange tokens but I can't seem to get the redirect url to open properly.</p>
<p>Slack requires the redirect to be https, and I'm using localhost to redirect at <code>http://127.0.0.1:5000/slack/oauth/callback</code>. It seems like Slack won't accept an http url, and flask doesn't work with https. Is there something I'm missing to get this working?</p>
<p>I saw some other solutions to try and apply a <a href="https://www.reddit.com/r/flask/comments/11luchr/redirect_is_using_http_instead_of_https/" rel="nofollow noreferrer">proxy fix</a>, but that didn't work for me.</p>
<p>Thanks in advance and let me know if you need any other details.</p>
<p>Here's what I'm currently doing:</p>
<pre class="lang-py prettyprint-override"><code>import html
from slack_sdk.oauth import AuthorizeUrlGenerator
from slack_sdk.oauth.installation_store import FileInstallationStore
from slack_sdk.oauth.state_store import FileOAuthStateStore
from flask import Flask
SLACK_CLIENT_ID = "MY_CLIENT_ID"
state_store = FileOAuthStateStore(expiration_seconds=300, base_dir="./data")
installation_store = FileInstallationStore(base_dir="./data")
# Build https://slack.com/oauth/v2/authorize with sufficient query parameters
authorize_url_generator = AuthorizeUrlGenerator(
client_id=SLACK_CLIENT_ID,
scopes=["app_mentions:read", "chat:write"],
user_scopes=["bookmarks:read", "bookmarks:write", "canvases:read", "canvases:write", "chat:write", "groups:history",
"groups:read", "groups:write", "groups:write.invites", "groups:write.topic", "pins:read", "pins:write"],
)
app = Flask(__name__)
@app.route("/slack/install", methods=["GET"])
def oauth_start():
state = state_store.issue()
url = authorize_url_generator.generate(state)
return f'<a href="{html.escape(url)}">' \
f'<img alt=""Add to Slack"" height="40" width="139" src="https://platform.slack-edge.com/img/add_to_slack.png" srcset="https://platform.slack-edge.com/img/add_to_slack.png 1x, https://platform.slack-edge.com/img/add_to_slack@2x.png 2x" /></a>'
from slack_sdk.web import WebClient
@app.route("/slack/oauth/callback", methods=["GET"])
def oauth_callback():
print("Hello world!") # Doesn't reach this
# Redirect is https://127.0.0.1:5000/slack/oauth/callback
if __name__ == "__main__":
print("install at 'http://127.0.0.1:5000/slack/install'") # Opens ok
app.run()
</code></pre>
|
<python><flask><slack-api>
|
2025-02-28 10:45:31
| 2
| 4,772
|
Green Cell
|
79,475,225
| 12,825,882
|
Azure document intelligence python SDK doesn't separate pages
|
<p>When trying to extract content from a MS Word .docx file using Azure Document Intelligence, I expected the returned response to contain a page element for each page in the document and for each of those page elements to contain multiple lines in <a href="https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/prebuilt/layout?view=doc-intel-4.0.0&tabs=rest%2Csample-code#pages" rel="nofollow noreferrer">line with the documentation</a>.</p>
<p>Instead, I always receive a single page with no (<code>None</code>) lines and the entire document's contents as a list of words.</p>
<p>Sample document:
<a href="https://i.sstatic.net/nuCQgvYP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuCQgvYP.png" alt="enter image description here" /></a></p>
<p>Minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import DocumentAnalysisFeature, AnalyzeResult, AnalyzeDocumentRequest
def main():
client = DocumentIntelligenceClient(
'MY ENDPOINT',
AzureKeyCredential('MY KEY')
)
document = 'small_test_document.docx'
with open(document, "rb") as f:
poller = client.begin_analyze_document(
"prebuilt-layout",
analyze_request=f,
content_type="application/octet-stream"
)
result = poller.result()
print(f'Found {len(result.pages)} page(s)')
for page in result.pages:
print(f'Page #{page.page_number}')
print(f' {page.lines=}')
print(f' {len(page.words)=}')
if __name__ == '__main__':
main()
</code></pre>
<p>Expected output:</p>
<pre class="lang-none prettyprint-override"><code>Found 2 page(s)
Page #1
page.lines=6
len(page.words)=58
Page #2
page.lines=1
len(page.words)=8
</code></pre>
<p>Actual output:</p>
<pre class="lang-none prettyprint-override"><code>Found 1 page(s)
Page #1
page.lines=None
len(page.words)=66
</code></pre>
<p>My question is: Why, and what should I do differently to get the expected output?</p>
|
<python><azure><azure-document-intelligence>
|
2025-02-28 10:29:35
| 1
| 624
|
PangolinPaws
|
79,475,116
| 6,265,680
|
UnicodeDecodeError when using SHAP 0.42 and xgboost 2.1.1
|
<p>I am trying to explain my xgboost (v 2.1.1) model with shap (v 0.42) but I get this error:</p>
<pre><code>UnicodeDecodeError Traceback (most recent call last)
Cell In[53], line 1
----> 1 shap_explainer = shap.TreeExplainer(model)
File ~/anaconda3/envs/geoproc/lib/python3.8/site-packages/shap/explainers/_tree.py:149, in Tree.__init__(self, model, data, model_output, feature_perturbation, feature_names, approximate, **deprecated_options)
147 self.feature_perturbation = feature_perturbation
148 self.expected_value = None
--> 149 self.model = TreeEnsemble(model, self.data, self.data_missing, model_output)
150 self.model_output = model_output
151 #self.model_output = self.model.model_output # this allows the TreeEnsemble to translate model outputs types by how it loads the model
File ~/anaconda3/envs/geoproc/lib/python3.8/site-packages/shap/explainers/_tree.py:859, in TreeEnsemble.__init__(self, model, data, data_missing, model_output)
857 self.original_model = model.get_booster()
858 self.model_type = "xgboost"
--> 859 xgb_loader = XGBTreeModelLoader(self.original_model)
860 self.trees = xgb_loader.get_trees(data=data, data_missing=data_missing)
861 self.base_offset = xgb_loader.base_score
File ~/anaconda3/envs/geoproc/lib/python3.8/site-packages/shap/explainers/_tree.py:1444, in XGBTreeModelLoader.__init__(self, xgb_model)
1442 self.read_arr('i', 29) # reserved
1443 self.name_obj_len = self.read('Q')
-> 1444 self.name_obj = self.read_str(self.name_obj_len)
1445 self.name_gbm_len = self.read('Q')
1446 self.name_gbm = self.read_str(self.name_gbm_len)
File ~/anaconda3/envs/geoproc/lib/python3.8/site-packages/shap/explainers/_tree.py:1566, in XGBTreeModelLoader.read_str(self, size)
1565 def read_str(self, size):
-> 1566 val = self.buf[self.pos:self.pos+size].decode('utf-8')
1567 self.pos += size
1568 return val
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x98 in position 1155: invalid start byte
</code></pre>
<p>I am running:</p>
<blockquote>
<p>model = XGBRegressor().fit(X_train, y_train)<br />
explainer = shap.TreeExplainer(model)</p>
</blockquote>
<p>Is this an incompatibility between the versions? Older fixes as suggested here (<a href="https://stackoverflow.com/questions/61928198/getting-unicodedecodeerror-when-using-shap-on-xgboost">Getting UnicodeDecodeError when using shap on xgboost</a>) don't work unfortunately.</p>
<p>I can read in the SHAP releases that version 0.45.0 "fixed XGBoost model load". Could that be it? However, if there is a fix without upgrading, I'd prefer that.</p>
|
<python><xgboost><shap>
|
2025-02-28 09:48:32
| 1
| 321
|
Christin Abel
|
79,475,093
| 5,615,873
|
How to get the duration of a note as a number in music21?
|
<p>In <strong>music21</strong>, if I ask for the duration of a note, I get an object of the form <em>'music21.duration.Duration {number}'</em>. I can't find anywhere how to get this {number} on its own.
Is there some music21 property or method (rather than writing my own) that provides such a value?</p>
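<p>To illustrate, this is what I currently see (untested sketch of a minimal example):</p>
<pre><code>from music21 import note

n = note.Note('C4', quarterLength=1.5)
print(n.duration)  # prints something like &lt;music21.duration.Duration 1.5&gt;; I want just the 1.5
</code></pre>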
|
<python><music21>
|
2025-02-28 09:41:59
| 1
| 3,537
|
Apostolos
|
79,475,051
| 6,666,611
|
What's the difference between uv lock --upgrade and uv sync?
|
<p>Being a total newbie in the Python ecosystem, I'm discovering uv and was wondering if there is a difference between the following commands:<br />
<code>uv lock --upgrade</code><br />
and<br />
<code>uv sync</code></p>
<p>If there is, what is the exact usage of each of them?</p>
|
<python><dependency-management><uv>
|
2025-02-28 09:28:12
| 1
| 1,176
|
n3wbie
|
79,474,670
| 338,479
|
How can I wait for any one thread out of several to finish?
|
<p>Say I want to launch multiple threads to perform tasks, process their work units as they finish, and optionally launch more threads to do more work units.</p>
<p>Is there any standard way to tell Python that I want to wait for any one of my threads to complete? Bonus points if it's easy to know which one.</p>
<p>Googling the question produced no useful results. All the answers were how to wait for <em>all</em> the threads, or how to wait for the first thread. Neither of these are what I'm looking for.</p>
<p>I've toyed with using a counting semaphore, but is there a better way?</p>
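<p>The kind of pattern I'm after, as an untested sketch with <code>concurrent.futures</code> (where it's also easy to know which task finished):</p>
<pre><code>from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def work(unit):
    return unit * 2  # placeholder for the real task

with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {pool.submit(work, u): u for u in range(8)}
    while pending:
        # returns as soon as any one future finishes
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            unit = pending.pop(fut)
            print(unit, fut.result())
            # optionally submit more work units here
</code></pre>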
|
<python><multithreading>
|
2025-02-28 06:12:30
| 1
| 10,195
|
Edward Falk
|
79,474,514
| 9,743,391
|
How to remove xarray plot bad value edge colour
|
<p>I know <code>set_bad</code> can colour bad pixels with a specific colour, but in my example I only want an edge colour for the blue and grey pixels that have values, and not for the bad (red) pixels.</p>
<pre><code>import matplotlib.pyplot as plt
import xarray as xr
import numpy as np
from matplotlib import colors
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
# provide example data array with a mixture of floats and nans in them
data = xr.DataArray([np.random.rand(10, 10)]) # Example data
# set a few nans
data[0, 1, 1] = np.nan
data[0, 1, 2] = np.nan
data[0, 1, 3] = np.nan
data[0, 2, 1] = np.nan
data[0, 2, 2] = np.nan
data[0, 2, 3] = np.nan
data[0, 3, 1] = np.nan
data[0, 3, 2] = np.nan
data[0, 3, 3] = np.nan
cmap = colors.ListedColormap(['#2c7bb6', "#999999"])
# colour bad (NaN) pixels red
cmap.set_bad(color = 'red')
data.plot(edgecolor = "grey", cmap = cmap)
</code></pre>
<p><a href="https://i.sstatic.net/L5eHmHdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L5eHmHdr.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><python-xarray>
|
2025-02-28 04:28:47
| 1
| 626
|
Jane
|
79,474,503
| 10,262,805
|
Why does my Keras model show 4 parameters instead of 2?
|
<p>I'm training a simple Keras model with one Dense layer, but when I call model.summary(), it shows 4 parameters instead of 2.</p>
<pre><code>import tensorflow as tf
import numpy as np
# Generate dummy data
X_train = np.random.rand(40) # Shape (40,)
y_train = np.random.rand(40)
# Define a simple model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# Compile the model
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# Train the model
model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100)
# Check model summary
model.summary()
</code></pre>
<p>Since Dense(1) should have 1 weight + 1 bias, I expected 2 parameters, not 4.</p>
<p>Why does passing a 1D input (shape=(40,)) cause Keras to use 4 parameters instead of 2?
Is Keras automatically handling the input shape in a way that duplicates the parameters?</p>
<p>this is the result:</p>
<p><a href="https://i.sstatic.net/5YVb6dHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5YVb6dHO.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2025-02-28 04:18:51
| 2
| 50,924
|
Yilmaz
|
79,474,354
| 188,331
|
Debug a dead kernel using ipykernel kernel --debug
|
<p>I just added a new kernel to Jupyter Notebook using:</p>
<pre><code>python -m ipykernel install --user --name=venv3.12
</code></pre>
<p>and edited the Python path to point to the Python executable inside a <code>venv</code> folder. The contents of the <code>kernel.json</code> are as follows:</p>
<pre><code>{
"argv": [
"/home/jupyter-raptor/venv3.12/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3.12",
"language": "python",
"metadata": {
"debugger": true
}
}
</code></pre>
<p>Then, I checked the kernel is installed successfully or not by listing:</p>
<pre><code>$ jupyter kernelspec list
Available kernels:
venv3.12 /home/jupyter-raptor/.local/share/jupyter/kernels/venv3.12
python3 /opt/tljh/user/share/jupyter/kernels/python3
</code></pre>
<p>However, the <code>venv3.12</code> kernel cannot be started and is listed as Disconnected in a new Jupyter Notebook. Therefore, I want to debug the problem. I know the command <code>python -m kernel --debug</code> can debug a kernel, but how can I specify the kernel <code>venv3.12</code> to debug?</p>
<p>Thanks.</p>
|
<python><ubuntu><jupyter-notebook><python-venv>
|
2025-02-28 02:11:59
| 0
| 54,395
|
Raptor
|
79,474,175
| 10,034,073
|
Why does RawPy/Libraw increase the brightness range when postprocessing?
|
<p>I'm processing a raw photo<sup>1</sup> with <code>RawPy</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import rawpy
with rawpy.imread('my_picture.nef') as rpy:
raw = rpy.raw_image
print('Raw:', raw.shape, np.min(raw), np.max(raw))
image = rpy.postprocess(
output_bps=16,
no_auto_bright=True,
no_auto_scale=True,
user_wb=(1, 1, 1, 1),
gamma=(1, 1),
user_black=0,
user_sat=65535
)
print('Processed:', image.shape, np.min(image), np.max(image))
</code></pre>
<p>Which produces this output:</p>
<pre><code>Raw: (4016, 6016) 157 4087
Processed: (4016, 6016, 3) 0 5589
</code></pre>
<p>My understanding is that the call to <code>rpy.postprocess()</code> is configured to not adjust the brightness at all, except as a side effect of demosaicing.<sup>2</sup> So I have two questions:</p>
<ol>
<li><p><strong>Why did the min/max values widen from 157β4087 to 0β5589?</strong></p>
</li>
<li><p><strong>Is there any way to predict the min/max values after postprocessing?</strong> Ideally, I want a formula like "if your black/saturation levels are ___ and ___, then the widest possible range after postprocessing is ___ to ___."</p>
</li>
</ol>
<h2>Other Attempts and Demosaicing</h2>
<p>I understand that 4087 is the max value of the Bayer-patterned pixels in the raw image, while 5589 is the maximum R, G, or B value of some pixel. Thus, I tried adding the color channels together:</p>
<pre class="lang-py prettyprint-override"><code>>>> image = np.sum(image, axis=-1)
>>> np.min(image), np.max(image)
452 10743
</code></pre>
<p>This didn't help much, as I don't see a clear relationship between 157β4087 and 452β10743. But maybe that's because there's nothing solid white or black in the image? Thus, I also tried things like this:</p>
<pre class="lang-py prettyprint-override"><code>with rawpy.imread('my_picture.nef') as rpy:
rpy.raw_image[:] = 4095 # Or 0, or some other integer
# OR
rpy.raw_image[:] = np.random.randint(157, 4088, size=(4016, 6016))
# Postprocessing like before ...
</code></pre>
<p>Interestingly, when the raw image is all the same pixel value (1000, 4095, etc.) the output values aren't scaled. The min/max stay about the same. But using random numbers is basically equivalent to the real photo. I don't understand this phenomenon either.</p>
<hr />
<p><sup>1</sup> From a Nikon D3400, if that matters.</p>
<p><sup>2</sup> There's also the <code>adjust_maximum_thr</code> parameter, which I don't fully understand, but messing with it didn't seem to change anything here.</p>
|
<python><image-processing><libraw><demosaicing><rawpy>
|
2025-02-27 23:37:28
| 0
| 444
|
kviLL
|
79,473,967
| 23,260,297
|
How to merge two dataframes based on condition
|
<p>I have 2 dataframes:</p>
<p>df1:</p>
<pre><code>BA Product FixedPrice Vintage DeliveryPeriod
A foo 10.00 Vintage 2025 Mar25
B foo 11.00 Vintage 2025 Dec25
B foo 12.00 Vintage 2024 Sep25
C bar 2.00 None Nov25
</code></pre>
<p>df2:</p>
<pre><code>Service DeliveryPeriod FloatPrice
ICE - Vintage 2025 Mar25 5.00
ICE - Vintage 2025 Dec25 4.00
ICE - Vintage 2024 Sep25 6.00
ICE - Vintage 2023 Nov25 1.00
</code></pre>
<p>How can I get a result like this:</p>
<pre><code>BA Product FixedPrice Vintage DeliveryPeriod FloatPrice
A foo 10.00 Vintage 2025 Mar25 5.00
B foo 11.00 Vintage 2025 Dec25 4.00
B foo 12.00 Vintage 2024 Sep25 6.00
C bar 2.00 None Nov25 0.00
</code></pre>
<p>I was going to use <code>merge</code> but the columns don't match and neither do the values. I need to get the float price based on whether <code>Service</code> contains the value from <code>Vintage</code> and the delivery periods match.</p>
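<p>For context, the naive merge I was going to try looks like this; it ignores the Vintage/Service condition entirely, so e.g. the Nov25 row would wrongly pick up the Vintage 2023 price instead of 0.00 (frames rebuilt here from the tables above):</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({
    "BA": ["A", "B", "B", "C"],
    "Product": ["foo", "foo", "foo", "bar"],
    "FixedPrice": [10.0, 11.0, 12.0, 2.0],
    "Vintage": ["Vintage 2025", "Vintage 2025", "Vintage 2024", None],
    "DeliveryPeriod": ["Mar25", "Dec25", "Sep25", "Nov25"],
})
df2 = pd.DataFrame({
    "Service": ["ICE - Vintage 2025", "ICE - Vintage 2025",
                "ICE - Vintage 2024", "ICE - Vintage 2023"],
    "DeliveryPeriod": ["Mar25", "Dec25", "Sep25", "Nov25"],
    "FloatPrice": [5.0, 4.0, 6.0, 1.0],
})

# naive attempt: join on DeliveryPeriod only, ignoring whether Service contains Vintage
out = df1.merge(df2[["DeliveryPeriod", "FloatPrice"]], on="DeliveryPeriod", how="left")
print(out)
</code></pre>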
|
<python><pandas>
|
2025-02-27 21:22:11
| 2
| 2,185
|
iBeMeltin
|
79,473,943
| 1,028,270
|
How do I get the _env_file param of an instantiated BaseSettings object?
|
<p>When instantiating a class like this:</p>
<pre><code>settings = Settings(_env_file='some/path/prod.env', _env_file_encoding='utf-8')
</code></pre>
<p>I want to store <code>some/path/prod.env</code> in a property somewhere. Does pydantic settings track this? If not, how can I do this? I shouldn't have to override init just to capture this value and store it in a property.</p>
|
<python><pydantic><pydantic-settings>
|
2025-02-27 21:09:10
| 1
| 32,280
|
red888
|
79,473,884
| 2,986,153
|
how to set truncated t-distribution prior in bambi
|
<pre><code>%pip install bambi==0.15.0
%pip install numpy==1.26.4
</code></pre>
<pre><code>import bambi as bmb
import polars as pl
from polars import col, lit
import pandas as pd
import numpy as np
import arviz as az
np.random.seed(42)
num_rows = int(10e3)
# Generate random data
shape, scale = 10, 6 # Gamma parameters to achieve mean of 60
revenue = np.random.gamma(shape, scale, size=num_rows)
# Create Polars DataFrame
pl_a = pl.DataFrame({
"recipe": "A",
"revenue": revenue
})
# Generate random data
shape, scale = 11, 6 # Gamma parameters to achieve mean of 66
revenue = np.random.gamma(shape, scale, size=num_rows)
# Create Polars DataFrame
pl_b = pl.DataFrame({
"recipe": "B",
"revenue": revenue
})
pl_ab = pl.concat([pl_a, pl_b])
m_lm = bmb.Model(
"revenue ~ recipe",
pl_ab.to_pandas(),
family="gaussian",
priors={
"Intercept": bmb.Prior("Normal", mu=60, sigma=2),
"recipe": bmb.Prior("Normal", mu=0, sigma=2),
"sigma": bmb.Prior("StudentT", nu=3, mu=20, sigma=5)
# bmb.Prior("HalfStudentT", nu=3, sigma=5)
}
)
m_lm.fit()
</code></pre>
<p>In the above code I can specify a <code>StudentT</code> or <code>HalfStudentT</code> prior for <code>sigma</code>, but I am unsuccessful in setting a <code>truncated StudentT</code> with a specified <code>mu</code>.</p>
<p>I have failed to set a truncated prior in every way I could imagine:</p>
<pre><code>m_trunc = bmb.Model(
"revenue ~ recipe",
pl_ab.to_pandas(),
family="gaussian",
priors={
"Intercept": bmb.Prior("Normal", mu=60, sigma=2),
"recipe": bmb.Prior("Normal", mu=0, sigma=2),
"sigma": bmb.Prior(
"Truncated",
prior=bmb.Prior("StudentT", nu=3, mu=20, sigma=5),
lower=0, upper=np.inf
)
}
)
m_trunc.fit()
</code></pre>
<p><code>TypeError: Truncated.dist() missing 1 required positional argument: 'dist'</code></p>
<pre><code>m_trunc = bmb.Model(
"revenue ~ recipe",
pl_ab.to_pandas(),
family="gaussian",
priors={
"Intercept": bmb.Prior("Normal", mu=60, sigma=2),
"recipe": bmb.Prior("Normal", mu=0, sigma=2),
"sigma": bmb.Prior(
"Truncated",
dist="StudentT",
nu=3, mu=0, sigma=1,
lower=0, upper=np.inf
)
}
)
m_trunc.fit()
</code></pre>
<p><code>TypeError: 'str' object is not callable</code></p>
<pre><code>m_trunc = bmb.Model(
"revenue ~ recipe",
pl_ab.to_pandas(),
family="gaussian",
priors={
"Intercept": bmb.Prior("Normal", mu=60, sigma=2),
"recipe": bmb.Prior("Normal", mu=0, sigma=2),
"sigma": bmb.Prior(
"StudentT", nu=3, mu=0, sigma=1, lower=0, upper=5 # Truncation applied here
)
}
)
m_trunc.fit()
</code></pre>
<p><code>TypeError: RandomVariable.make_node() got an unexpected keyword argument 'lower'</code></p>
<pre><code>import pymc as pm
m_trunc = bmb.Model(
"revenue ~ recipe",
pl_ab.to_pandas(),
family="gaussian",
priors={
"Intercept": bmb.Prior("Normal", mu=60, sigma=2),
"recipe": bmb.Prior("Normal", mu=0, sigma=2),
"sigma": bmb.Prior(
pm.Bound(pm.StudentT, lower=0, upper=5), # Bound the StudentT prior
nu=3, mu=0, sigma=1 # Parameters for StudentT
)
}
)
m_trunc.fit()
</code></pre>
<p><code>AttributeError: module 'pymc' has no attribute 'Bound'</code></p>
|
<python><bambi>
|
2025-02-27 20:36:53
| 0
| 3,836
|
Joe
|
79,473,854
| 2,804,645
|
Type narrowing via exception in function
|
<p>I'm trying to understand why an exception raised based on the type of a variable doesn't narrow down the type of this variable.</p>
<p>I'd like to do something like this:</p>
<pre class="lang-py prettyprint-override"><code>def ensure_int(obj: int | str) -> None:
if isinstance(obj, str):
raise ValueError("obj cannot be str")
def f(x: int | str) -> int:
ensure_int(x)
return x
</code></pre>
<p>I would've thought that calling <code>ensure_int</code> in <code>f</code> would narrow the type of <code>x</code> down to <code>int</code>, but it doesn't. <a href="https://mypy-play.net/?mypy=latest&python=3.13&gist=1cbd4dd92a72f96415d815e53bd64f6e" rel="nofollow noreferrer">Mypy gives</a>:</p>
<pre class="lang-none prettyprint-override"><code>error: Incompatible return value type (got "Union[int, str]", expected "int") [return-value]
</code></pre>
<p>Why does this not work? If I inline the code of <code>ensure_int</code> into <code>f</code> then the error goes away.</p>
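<p>For comparison, this is the inlined variant that mypy does accept, and which I'd like to avoid repeating everywhere:</p>
<pre class="lang-py prettyprint-override"><code>def f_inlined(x: int | str) -> int:
    if isinstance(x, str):
        raise ValueError("obj cannot be str")
    return x  # mypy narrows x to int here, no error
</code></pre>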
<p>So, my questions are:</p>
<ol>
<li>Are there scenarios where calling <code>ensure_int</code> would not guarantee that the type of <code>x</code> is <code>int</code>?</li>
<li>Is there a way of fixing this with additional annotations or similar? I read about <code>TypeGuard</code> and <code>TypeIs</code> but they only work with functions which return <code>bool</code>.</li>
</ol>
|
<python><python-typing><mypy>
|
2025-02-27 20:19:30
| 2
| 410
|
Stan
|
79,473,710
| 11,318,930
|
Can polars have a boolean in a 'with_columns' statement?
|
<p>I am using polars to hash some columns in a data set. One column contains lists of strings and the other column contains strings. My approach is to cast each column to a string type and then hash the columns. The problem I am having is with the type casting.</p>
<p>I am using the with_columns method as follows:</p>
<pre><code>list_of_lists = [
['base', 'base.current base', 'base.current base.inventories - total', 'ABCD'],
['base', 'base.current base', 'base.current base.inventories - total', 'DEFG'],
['base', 'base.current base', 'base.current base.inventories - total', 'ABCD'],
['base', 'base.current base', 'base.current base.inventories - total', 'HIJK']
]
list_of_strings = ['(bobbyJoe460)',
'bobby, Joe (xx866e)',
'137642039575',
'mamamia']
pl_df_1 = pl.DataFrame({'lists': list_of_lists,'stris':list_of_strings}, strict=False)
pl_df_1.with_columns(pl.col(['lists','stris'])
.cast(pl.List(pl.Categorical))
.hash(seed=140)
.name.suffix('_hashed')
)
</code></pre>
<p>Note that the cast is <code>pl.List(pl.Categorical)</code>. If I omit the <code>pl.List</code>, the cast fails with an error.</p>
<p>With the inclusion of <code>pl.List</code> the code gives:</p>
<pre><code>lists stris lists_hashed stris_hashed
list[str] str u64 u64
["base", "base.current base", β¦ "ABCD"] "(bobbyJoe460)" 11845069150176100519 594396677107
["base", "base.current base", β¦ "DEFG"] "bobby, Joe (xx866e)" 6761150988783483050 594396677107
["base", "base.current base", β¦ "ABCD"] "137642039575" 11845069150176100519 594396677107
["base", "base.current base", β¦ "HIJK"] "mamamia" 8290133271651710679 594396677107
</code></pre>
<p>Note that the rows of the string column all have the same hash value. Ideally I would like a boolean expression in the <code>with_columns</code> that detects the column type and uses <code>pl.List(pl.Categorical)</code> if it is a List, and just <code>pl.Categorical</code> if it is a String. Is that possible?</p>
|
<python><python-polars><polars>
|
2025-02-27 19:19:14
| 1
| 1,287
|
MikeB2019x
|
79,473,667
| 118,195
|
Would it ever make sense to use multithreading for I/O bound operations in modern Python?
|
<p>I often hear that in some circumstances it makes sense to use multi-threading for I/O operations (file operations, network communications, etc.).
But why not just use asyncio for all I/O bound operations and stick to threads/processes for CPU-bound operations?</p>
<p>Are there any scenarios you can think of when multithreading would be better to use than asyncio in modern python (i.e. 3.10+)?</p>
|
<python>
|
2025-02-27 18:55:23
| 1
| 2,107
|
Zoman
|
79,473,549
| 996,309
|
Aborted gRPC requests erroring in the frontend with "RpcError: missing status"
|
<p>I'm using python (3.12), grpcio (1.64.0), and @protobuf-ts (2.9.3).</p>
<p>I'm trying to set up auth in my python gRPC server, following the example here: <a href="https://github.com/grpc/grpc/blob/master/examples/python/auth/token_based_auth_server.py" rel="nofollow noreferrer">https://github.com/grpc/grpc/blob/master/examples/python/auth/token_based_auth_server.py</a>. Relevant code copied below:</p>
<pre class="lang-py prettyprint-override"><code> def __init__(self):
def abort(ignored_request, context):
context.abort(grpc.StatusCode.UNAUTHENTICATED, "Invalid signature")
self._abort_handler = grpc.unary_unary_rpc_method_handler(abort)
def intercept_service(self, continuation, handler_call_details):
# Example HandlerCallDetails object:
# _HandlerCallDetails(
# method=u'/helloworld.Greeter/SayHello',
# invocation_metadata=...)
expected_metadata = (_AUTH_HEADER_KEY, _AUTH_HEADER_VALUE)
if expected_metadata in handler_call_details.invocation_metadata:
return continuation(handler_call_details)
else:
return self._abort_handler
</code></pre>
<p>However, whenever a request is aborted, the frontend throws an error: <code>RpcError("missing status")</code>, which I can see is happening in the internals of protobuf-ts.</p>
<p>My expectation is to see it throw a different error with the response status of "UNAUTHENTICATED".</p>
<p>Has anyone seen this before? Am I missing something?</p>
<hr />
<p>Closely related sub-question: Is there any reason that gRPC over HTTP doesn't map gRPC status codes to HTTP status codes when there's a clear mapping? It would help integration with frameworks that inspect HTTP status - e.g. Supertokens, which triggers the auth flow when it sees a 401 response.</p>
|
<python><authentication><grpc>
|
2025-02-27 18:00:22
| 1
| 3,719
|
Tom McIntyre
|
79,473,547
| 4,198,514
|
Tkinter - resize parent after grid_remove()
|
<p>Either I did misread some of the documentation or I still do not quite understand some concepts of Tkinter.</p>
<p>If I add widgets (e.g. Labels) to a frame using <code>grid()</code> the frame gets resized to fit them. (so far as expected)</p>
<p>If I call grid_remove on all the children (using <code>frame.winfo_children()</code> to address all of them) the frame does not get resized again (which is what I would like to happen).</p>
<p>Is there any specific reason why after these actions</p>
<ol>
<li>add 20 elements</li>
<li><code>grid_remove()</code> all of them</li>
<li>add 10 again</li>
</ol>
<p>these results occur?</p>
<ol>
<li>The frame resizes to fit 20.</li>
<li>Does not resize on removal.</li>
<li>Shrinks after adding 10 to only fit them?</li>
</ol>
<hr />
<h3>Edit</h3>
<p>I am using an extension of this code (which is more than the <code>MCVE</code>) to <code>collapse</code> or <code>expand</code> a <code>ttk.LabelFrame</code> triggered by click onto the Label. (Thus hiding the children with the intention of keeping subsequent <code>.grid()</code> calls without parameters available).</p>
<p>Possibly I am running on an inefficient approach here - remarks on that are welcome as well.</p>
<p><a href="https://stackoverflow.com/users/11106801/thelizzard">@TheLizzard</a>s approach of rescaling the frame to 1 and then to 0 does not really work in this case (The Label of the LabelFrame disappears) but I do quite like the approach.</p>
<p><a href="https://stackoverflow.com/users/8512262/jriggles">@JRiggles</a> approach works with the given restraints but is quite ugly when elements are added to the collapsed Labelframe.</p>
<p>At the current stage I did play around using both approaches, already tried <a href="https://stackoverflow.com/users/7432/bryan-oakley">@BryanOakley</a>s answer that I did see in a different post but it seems me binding <code>'<Expose>'</code> resulted in an infinityloop.</p>
<hr />
<p>Replication Code:</p>
<pre><code>
import tkinter as tk
from tkinter import ttk
root = tk.Tk()
root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(0, weight=1)
app = ttk.Frame(root)
app.grid(row=0, column=0, columnspan=2, sticky=tk.NW+tk.SE)
def add(start=None):
start = start or len(app.winfo_children())
for i in range(start,start+10):
ttk.Label(app, text=f"{i}").grid(row=i, column=0, sticky=tk.NW+tk.SE)
def rem():
for _f in app.grid_slaves():
_f.grid_remove()
ttk.Button(root, text='+', command=add).grid(row=1, column=0)
ttk.Button(root, text='-', command=rem).grid(row=1, column=1)
root.mainloop()
</code></pre>
|
<python><tkinter><grid><resize><autoresize>
|
2025-02-27 17:59:30
| 4
| 2,210
|
R4PH43L
|
79,473,519
| 1,422,058
|
DB connections stay in idle state using psycopg2 Postgres driver with Python
|
<p>When running the following code to insert data into a database table, a DB connection appears in <code>pg_stat_activity</code> each time and remains there in the idle state:</p>
<pre class="lang-none prettyprint-override"><code>column_names = ", ".join(columns)
query = f"INSERT INTO {table_name} ({column_names}) VALUES %s"
values = [tuple(record.values()) for record in records]
with psycopg2.connect(dbname=dbname, user=user, password=password, host=host, port=port, application_name=app_name) as conn:
with conn.cursor() as c:
psycopg2.extras.execute_values(cur=c, sql=query, argslist=values, page_size=batch_size)
</code></pre>
<p><strong>UPDATE:</strong> Code looks like this now, after input from Adrian:</p>
<pre><code>conn = psycopg2.connect(dbname=dbname, user=user, password=password, host=host, port=port, application_name=app_name)
with conn:
with conn.cursor() as c:
psycopg2.extras.execute_values(cur=c, sql=query, argslist=values, page_size=self._batch_size)
conn.commit()
conn.close()
del conn
</code></pre>
<p>When running multiple times, the following error is suddenly thrown, indicating that the connections are exhausted:</p>
<pre><code>E psycopg2.OperationalError: connection to server at "localhost" (::1), port 5446 failed: Connection refused (0x0000274D/10061)
E Is the server running on that host and accepting TCP/IP connections?
E connection to server at "localhost" (127.0.0.1), port 5446 failed: FATAL: remaining connection slots are reserved for roles with the SUPERUSER attribute
</code></pre>
<p>Might it be that this happens when connecting to the DB through an SSH tunnel? I am using port forwarding via VSCode to a DB in the Azure cloud.
I experience the same behaviour when using the DBeaver DB client.</p>
|
<python><python-3.x><postgresql>
|
2025-02-27 17:45:15
| 2
| 1,029
|
Joysn
|
79,473,274
| 3,663,124
|
FastAPI hide sql session in object
|
<p>I want to hide the usage of sql model behind a class, let's call it Repository. So that an endpoint would be something like:</p>
<pre class="lang-py prettyprint-override"><code>@router.get("/")
def list() -> list[Resource]:
return repository.list()
</code></pre>
<p>And then have a Repository class with the proper dependency</p>
<pre class="lang-none prettyprint-override"><code>engine = create_engine(f"sqlite:///{SQLITE_FILE_PATH}")
def get_session() -> Generator[Session, None, None]:
with Session(engine) as session:
yield session
SessionDep = Annotated[Session, Depends(get_session)]
class Repository:
def list(self, session: SessionDep):
query = select(Resource)
resources = session.exec(query)
return resources
</code></pre>
<p>How can I achieve that?</p>
<p>How can I make the definition of the endpoints depend on the repository?
How can I make the repository the same for each endpoint and request, but a new session per request?
How can I do all this so that I can separately test, using pytest-mock:
<ul>
<li>the endpoints mocking the dependency to the repository</li>
<li>the repository with a mocked in memory DB</li>
</ul>
|
<python><pytest><fastapi>
|
2025-02-27 16:06:03
| 0
| 1,402
|
fedest
|
79,473,246
| 4,463,825
|
Automate initializing random integers for various columns in data frame Python
|
<p>I am trying to reduce the following code to a for loop (or something else with fewer lines of code), as I have 14 similar columns. What would be a pythonic way to do it?</p>
<pre><code>df['fan1_rpm'] = np.random.randint(5675, 5725, 30)
df['fan2_rpm'] = np.random.randint(5675, 5725, 30)
df['fan3_rpm'] = np.random.randint(5675, 5725, 30)
df['fan4_rpm'] = np.random.randint(5675, 5725, 30)
.
.
.
.
</code></pre>
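<p>For reference, the straightforward loop I had in mind looks like the sketch below (the empty frame is only a stand-in for my real 30-row frame); I'm wondering whether there is something cleaner, e.g. building all the columns in one go:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(index=range(30))     # stand-in for my real 30-row frame
for i in range(1, 15):                 # fan1_rpm ... fan14_rpm
    df[f'fan{i}_rpm'] = np.random.randint(5675, 5725, 30)
print(df.shape)                        # (30, 14)
</code></pre>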
|
<python><pandas><numpy>
|
2025-02-27 15:55:43
| 2
| 993
|
Jesh Kundem
|
79,473,192
| 5,666,203
|
Disagreement between SciPy quaternion and Wolfram
|
<p>I'm calculating rotation quaternions from Euler angles in Python using SciPy and trying to validate against an external source (Wolfram Alpha).</p>
<p>This Scipy code gives me one answer:</p>
<pre><code>from scipy.spatial.transform import Rotation as R
rot = R.from_euler('xyz', [30,45,60], degrees=1)
quat = rot.as_quat()
print(quat[3], quat[0], quat[1], quat[2]) # w, x, y, z
(w,x,y,z) = 0.8223, 0.0222, 0.4396, 0.3604
</code></pre>
<p>while Wolfram Alpha gives a different answer</p>
<pre><code>(w,x,y,z) = 0.723, 0.392, 0.201, 0.532
</code></pre>
<p><a href="https://i.sstatic.net/Jnp7oV2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jnp7oV2C.png" alt="Wolfram Alpha solution" /></a></p>
<p>Why the difference? Is it a difference in the paradigm for how the object is rotated (e.g. extrinsic vs. intrinsic rotations)?</p>
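<p>In case the convention is the issue, this is how I would compare the two in SciPy (lowercase axis letters are extrinsic, uppercase are intrinsic, as far as I understand the docs):</p>
<pre><code>from scipy.spatial.transform import Rotation as R

angles = [30, 45, 60]
extrinsic = R.from_euler('xyz', angles, degrees=True).as_quat()  # extrinsic x-y-z
intrinsic = R.from_euler('XYZ', angles, degrees=True).as_quat()  # intrinsic X-Y'-Z''
print("extrinsic (x, y, z, w):", extrinsic)
print("intrinsic (x, y, z, w):", intrinsic)
</code></pre>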
|
<python><validation><scipy><quaternions><wolframalpha>
|
2025-02-27 15:38:36
| 1
| 1,144
|
AaronJPung
|
79,473,140
| 8,223,979
|
Add edges to colorbar in seaborn heatmap
|
<p>I have the following heatmap:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
# Create a sample heatmap
data = np.random.rand(10, 12)
ax = sns.heatmap(data)
plt.show()
</code></pre>
<p>How can I add a black edge to the colorbar? Is there any way to do this without needing to use subplots?</p>
|
<python><seaborn><heatmap><colorbar><color-scheme>
|
2025-02-27 15:22:21
| 1
| 1,097
|
Caterina
|
79,473,134
| 8,223,979
|
Extend bbox in heatmap
|
<p>I have the following heatmap:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
# Create a sample heatmap
data = np.random.rand(10, 12)
ax = sns.heatmap(data)
# Set the x-axis label
ax.set_xlabel('X Axis Label')
# Change the background color of the x-axis label
ax.xaxis.label.set_bbox(dict(facecolor='lightgrey', edgecolor='none', boxstyle='square,pad=0.3'))
plt.show()
</code></pre>
<p>I would like to extend the grey box from the beginning to the end of the x-axis (the edges of the heatmap). How can I do that?</p>
|
<python><matplotlib><seaborn><heatmap><x-axis>
|
2025-02-27 15:19:54
| 0
| 1,097
|
Caterina
|
79,473,066
| 3,663,124
|
Pytest mocker not mocking
|
<p>I have the following structure for my code which is a fastapi app:</p>
<pre><code>/repo
/src
main.app
/routes
/resource
resource.py
/repository
repository.py
/tests
/routes
/resource
resource_test.py
</code></pre>
<p>/src/routes/resource/resource.py</p>
<pre class="lang-py prettyprint-override"><code>def get_repository() -> Repository:
return repository
RepositoryDep = Annotated[Repository, Depends(get_repository)]
router = APIRouter()
@router.get("/")
def list(repo: RepositoryDep) -> list[Alumno]:
return repo.list()
</code></pre>
<p>/src/repository/repository.py</p>
<pre class="lang-py prettyprint-override"><code>class Repository:
...
repository = Repository()
</code></pre>
<p>Then I have some tests for my router like this
tests/routes/resource/resource.py</p>
<pre class="lang-py prettyprint-override"><code>
client = TestClient(app)
def test_get_resources(mocker) -> None:
mock_repo = MagicMock(Repository)
mock_repo.list.return_value = [
Resource(id=1, name="Something", age=20),
Resource(id=2, name="Something2", age=22)
]
mocker.patch('src.routes.resource.resource.get_repository', return_value=mock_repo)
response = client.get(
"/resources/",
)
assert response.status_code == 200
content = response.json()
assert len(content) == 2
</code></pre>
<p>But this test fails: the repository used (checked by debugging) is not the mock I set up, it's the real object.</p>
<p>Why is it not mocking?</p>
|
<python><mocking><pytest>
|
2025-02-27 14:58:06
| 1
| 1,402
|
fedest
|
79,473,058
| 4,048,657
|
PyTorch - Efficiently computing the per pixel gradient of an image with respect to some parameter?
|
<p>I have a function that takes a single parameter (alpha) as an input and outputs an N by N image (2048x2048). I want to obtain the gradient of this image with respect to the parameter (alpha). I'm not talking about a Sobel filter; I'm looking to see how my image changes as I change alpha. The function takes around 2 seconds to evaluate.</p>
<pre class="lang-py prettyprint-override"><code>grad_img = torch.autograd.functional.jacobian(render_with_alpha, alpha).squeeze(-1)
</code></pre>
<p>This works, and does exactly what I want. However, it takes minutes for a 64x64 image, and I terminated the 1024x1024 after 16h before it finished (I'm looking to compute 2048x2048).</p>
<p>Another approach, which is certainly too slow is use <code>backward()</code> for each pixel. To get this working, I had to rerun the forward pass every time, which makes this method impractical (is there a way around this?).</p>
<p>Two alternative methods from the pytorch documentation appear to be jacrev and jacfwd. In my case, since I have a single input and a large output, it appears that <code>jacfwd</code> would be ideal. However, I cannot get it to work with my code. If I understand correctly, it does not use autograd and when I use it, the code has errors about missing storage.</p>
<blockquote>
<p>RuntimeError: Cannot access data pointer of Tensor that doesn't have storage</p>
</blockquote>
<p>I have this error every time I use a view, or use detach(). I do both of these quite often in my code.</p>
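<p>For reference, this is roughly the shape of what I attempted with <code>jacfwd</code> (using <code>torch.func</code>; the renderer here is only a trivial stand-in for my real one, which is why this simplified version does not hit the storage error):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.func import jacfwd

def render_with_alpha(alpha):
    # stand-in for the real 2-second renderer: any differentiable map alpha -> (N, N) image
    n = 64
    grid = torch.linspace(0.0, 1.0, n)
    return torch.sin(alpha * grid)[:, None] * torch.cos(alpha * grid)[None, :]

alpha = torch.tensor(0.5)
grad_img = jacfwd(render_with_alpha)(alpha)  # d(image)/d(alpha), one value per pixel
print(grad_img.shape)                        # torch.Size([64, 64])
</code></pre>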
<p>So how can I efficiently compute the per pixel gradient of the image?</p>
|
<python><pytorch>
|
2025-02-27 14:55:22
| 0
| 1,239
|
Cedric Martens
|
79,473,015
| 1,745,001
|
How to gracefully stop an asyncio server in python 3.8
|
<p>As part of learning <code>python</code> and <code>asyncio</code> I have a simple TCP client/server architecture using <code>asyncio</code> (I have reasons why I need to use that) where I want the server to completely exit when it receives the string 'quit' from the client. The server, stored in <code>asyncio_server.py</code>, looks like this:</p>
<pre><code>import socket
import asyncio
class Server:
def __init__(self, host, port):
self.host = host
self.port = port
async def handle_client(self, reader, writer):
# Callback from asyncio.start_server() when
# a client tries to establish a connection
addr = writer.get_extra_info('peername')
print(f'Accepted connection from {addr!r}')
request = None
try:
while request != 'quit':
request = (await reader.read(255)).decode('utf8')
print(f"Received {request!r} from {addr!r}")
response = 'ok'
writer.write(response.encode('utf8'))
await writer.drain()
print(f'Sent {response!r} to {addr!r}')
print('----------')
except Exception as e:
print(f"Error handling client {addr!r}: {e}")
finally:
print(f"Connection closed by {addr!r}")
writer.close()
await writer.wait_closed()
print(f'Closed the connection with {addr!r}')
asyncio.get_event_loop().stop() # <<< WHAT SHOULD THIS BE?
async def start(self):
server = await asyncio.start_server(self.handle_client, self.host, self.port)
async with server:
print(f"Serving on {self.host}:{self.port}")
await server.serve_forever()
async def main():
server = Server(socket.gethostname(), 5000)
await server.start()
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
<p>and when the client sends <code>quit</code> the connection is closed and the server exits but always with the error message:</p>
<pre><code>Traceback (most recent call last):
File "asyncio_server.py", line 54, in <module>
asyncio.run(main())
File "C:\Python38-32\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Python38-32\lib\asyncio\base_events.py", line 614, in run_until_complete
raise RuntimeError('Event loop stopped before Future completed.')
RuntimeError: Event loop stopped before Future completed.
</code></pre>
<p>What do I need to do instead of or in addition to calling <code>asyncio.get_event_loop().stop()</code> and/or <code>server.serve_forever()</code> to have the server exit gracefully with no error messages?</p>
<p>I've tried every alternative I can find with google, including calling <code>cancel()</code> on the server, using separate <code>loop</code> construct in <code>main()</code>, trying alternatives to <code>stop()</code>, alternatives to <code>run_forever()</code>, etc., etc. and cannot figure out what I'm supposed to do to gracefully stop the server and exit the program without error messages when it receives a <code>quit</code> message from the client.</p>
<p>I'm using python 3.8.10 and cannot upgrade to a newer version due to managed environment constraints. I'm also calling python from <code>git bash</code> in case that matters.</p>
<hr />
<p>Additional Information:</p>
<p>The client code, stored in <code>asyncio_client.py</code>, is below in case that's useful.</p>
<pre><code>import socket
import asyncio
class Client:
def __init__(self, host, port):
self.host = host
self.port = port
async def handle_server(self, reader, writer):
addr = writer.get_extra_info('peername')
print(f'Connected to from {addr!r}')
request = input('Enter Request: ')
while request.lower().strip() != 'quit':
writer.write(request.encode())
await writer.drain()
print(f'Sent {request!r} to {addr!r}')
response = await reader.read(1024)
print(f'Received {response.decode()!r} from {addr!r}')
print('----------')
request = input('Enter Request: ')
writer.write(request.encode())
await writer.drain()
print(f'Sent {request!r} to {addr!r}')
writer.close()
await writer.wait_closed()
print(f'Closed the connection with {addr!r}')
async def start(self):
reader, writer = await asyncio.open_connection(host=self.host, port=self.port)
await self.handle_server(reader, writer)
async def main():
client = Client(socket.gethostname(), 5000)
await client.start()
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
|
<python><python-3.x><python-asyncio>
|
2025-02-27 14:35:52
| 2
| 208,382
|
Ed Morton
|
79,472,665
| 1,987,477
|
how to access a dictionary key storing a list in a list of lists and dictionaries
|
<p>I have the following list:</p>
<pre><code>plates = [[], [], [{'plate ID': '193a', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [{'plate ID': '194b', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [], [], [], [], [], [], [], [], [], []]
</code></pre>
<p>I need to loop over <code>plates</code>, find the key <code>'sources'</code>, and store some of the data in another list.</p>
<pre><code>import pandas as pd
matched_plates = []
matches_sources_ra = []
matches_sources_dec =[]
plates = [[], [], [{'plate ID': '193a', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [{'plate ID': '194b', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [], [], [], [], [], [], [], [], [], []]
plates_df = pd.DataFrame(plates)
for idx, row in plates_df.iterrows():
if 'sources' in row.keys():
print(row["plate ID"])
matched_plates.append([row["plate ID"],len(row["sources"])])
matches_sources_ra.append(row["sources"][0][1])
matches_sources_dec.append(row["sources"][0][2])
</code></pre>
<p>This code never enters the <code>if</code> branch; what am I doing wrong?</p>
<p>Thank you for your help</p>
|
<python><pandas>
|
2025-02-27 12:21:27
| 2
| 1,325
|
user123892
|
79,472,606
| 125,673
|
Cannot install pysqlite3
|
<p>Trying to install pysqlite3 in VS Code and I get this error;</p>
<pre><code>PS E:\Gradio> pip install pysqlite3 --user
WARNING: Ignoring invalid distribution ~ip (C:\Python312\Lib\site-packages)
DEPRECATION: Loading egg at c:\python312\lib\site-packages\vboxapi-1.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330Collecting pysqlite3
Using cached pysqlite3-0.5.4.tar.gz (40 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: pysqlite3
Building wheel for pysqlite3 (setup.py) ... error
error: subprocess-exited-with-error
Γ python setup.py bdist_wheel did not run successfully.
β exit code: 1
β°β> [14 lines of output]
running bdist_wheel
running build
running build_py
creating build\lib.win-amd64-cpython-312\pysqlite3
copying pysqlite3\dbapi2.py -> build\lib.win-amd64-cpython-312\pysqlite3
copying pysqlite3\__init__.py -> build\lib.win-amd64-cpython-312\pysqlite3
running build_ext
Builds a C extension linking against libsqlite3 library
building 'pysqlite3._sqlite3' extension
creating build\temp.win-amd64-cpython-312\Release\src
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL
/DNDEBUG /MD -DMODULE_NAME=\"pysqlite3.dbapi2\" -I/usr/include -IC:\Python312\include -IC:\Python312\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files
(x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcsrc\blob.c /Fobuild\temp.win-amd64-cpython-312\Release\src\blob.obj
blob.c
C:\Users\hijik\AppData\Local\Temp\pip-install-h74k21cy\pysqlite3_229de85cbf4142f49709615fc2e65b74\src\blob.h(4): fatal error C1083: Cannot open include file: 'sqlite3.h': No such file or directory
xe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pysqlite3
Running setup.py clean for pysqlite3
Failed to build pysqlite3
ERROR: Failed to build installable wheels for some pyproject.toml based projects (pysqlite3)
PS E:\Gradio> pip install wheel --user
WARNING: Ignoring invalid distribution ~ip (C:\Python312\Lib\site-packages)
DEPRECATION: Loading egg at c:\python312\lib\site-packages\vboxapi-1.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330Requirement already satisfied: wheel in c:\users\hijik\appdata\roaming\python\python312\site-packages (0.43.0)
WARNING: Ignoring invalid distribution ~ip (C:\Python312\Lib\site-packages)
PS E:\Gradio>
</code></pre>
|
<python><pip><pysqlite>
|
2025-02-27 12:01:16
| 0
| 10,241
|
arame3333
|
79,472,181
| 10,873,394
|
Page number in PyMuPDF multiprocessing with extract_text
|
<p>The pymupdf documentation states that <a href="https://pymupdf.readthedocs.io/en/latest/recipes-multiprocessing.html#multiprocessing" rel="nofollow noreferrer">PyMuPDF does not support running on multiple threads</a>.</p>
<p>So they use multiprocessing instead, and in the example code they do this odd slicing into segments:</p>
<pre class="lang-py prettyprint-override"><code> seg_size = int(num_pages / cpu + 1)
seg_from = idx * seg_size
seg_to = min(seg_from + seg_size, num_pages)
for i in range(seg_from, seg_to): # work through our page segment
page = doc[i]
# page.get_text("rawdict") # use any page-related type of work here, eg
</code></pre>
<p>Why not load the document first, get the number of pages, and then pass the page number to the handler function, instead of using segments as in the example code? Would this cause issues?</p>
<pre class="lang-py prettyprint-override"><code>def extract_text_from_page(args: Tuple[bytes, int]) -> Tuple[int, str]:
pdf_stream, page_num = args
# Open a new Document instance in this process
doc = pymupdf.open(stream=pdf_stream)
page = doc.load_page(page_num) # Load the specific page
text = page.get_text(sort=True) # Extract text with sorting
doc.close() # Clean up
return (page_num, text)
</code></pre>
|
<python><python-3.x><pdf><multiprocessing><pymupdf>
|
2025-02-27 09:26:24
| 0
| 424
|
MichaΕ Darowny
|
79,472,150
| 5,049,813
|
How to extend type-hinting for a class in a stub file
|
<p>I have this code, and it annoys me that I have to cast <code>f</code> twice:</p>
<pre class="lang-py prettyprint-override"><code> with h5py.File(computed_properties_path, "r") as f:
# get the set of computed metrics
computed_metrics = set()
# iterating through the file iterates through the keys which are dataset names
f = cast(Iterable[str], f)
dataset_name: str
for dataset_name in f:
# re-cast it as a file
f = cast(h5py.File, f)
dataset_group = index_hdf5(f, [dataset_name], h5py.Group)
for metric_name in dataset_group:
logger.info(f"Dataset: {dataset_name}, Metric: {metric_name}")
</code></pre>
<p>I just want to be able to tell the static type checker that if I iterate through a file, I'll get back strings (which are keys to the groups and datasets in the file).</p>
<p>I've tried creating a <code>.pyi</code> stub that defines a class doing this, but I get an error saying that File is not defined. My guess is that this is because Pylance now relies solely on my stub, rather than looking up extra definitions in the original file.</p>
<p>I've <a href="https://claude.ai/share/accffc34-87b9-42a9-959b-80fede48d9af" rel="nofollow noreferrer">tried a lot of different options through Claude</a> and ChatGPT, but can't quite seem to figure out how to extend the type-hinting so that Pylance knows that iterating through an <code>h5py.File</code> object will yield strings.</p>
|
<python><python-typing><h5py>
|
2025-02-27 09:15:50
| 1
| 5,220
|
Pro Q
|
79,472,096
| 2,516,892
|
Parsing of CSV-File leads to "NBSP" and "SPA" Characters in Windows
|
<p>I am parsing a CSV document to write a specification in SysML v2, which is essentially just a simple text file.
Parsing it on Linux I get the result I desire. I use a SysOn-Docker to display my SysML v2 file and everything works as it's supposed to.</p>
<p>However, when I create the file on Windows, special characters appear:</p>
<ul>
<li><code>NBSP</code></li>
<li><code>SPA</code></li>
</ul>
<p><a href="https://i.sstatic.net/i3AIywj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i3AIywj8.png" alt="NBSP and SPA Characters" /></a></p>
<p>It seems that, due to these characters, the SysOn Docker can't properly read the file on Windows (under Linux there are no issues at all).</p>
<p>I have tried several ways to write the file differently:</p>
<pre><code>import codecs
import io

with open(filename, "w") as text_file:
text_file.write(systring)
with codecs.open(filename, "w", "utf-8-sig") as text_file:
text_file.write(systring)
with io.open(filename, 'w', encoding="utf-8") as text_file:
    text_file.write(systring)
</code></pre>
<p>However, they all give the same result: the file doesn't change.</p>
<p>Right now I'm really considering removing all of these special characters manually with a <code>.replace</code> - but it doesn't seem to be the proper way?</p>
|
<python><python-3.x><encoding><character-encoding><sysml2>
|
2025-02-27 08:55:56
| 1
| 1,661
|
Qohelet
|
79,471,816
| 2,072,676
|
Working directory structure for a Django-Python project used as an API backend only
|
<p>I'm a newbie in Django Python and I need to create an API backend for a frontend done in React. I'm forced by the customer to use Django Python, no options on this!</p>
<p>The project is very simple: it needs to expose <strong>~15 endpoints</strong>, use Django <strong>ORM</strong> to connect to a <strong>PostgreSQL</strong> database, and have basic business logic. I must have a minimum of <strong>80% unit tests coverage</strong> of the code and expose a <strong>swagger</strong> UI.</p>
<p>My problem is to create a "standard", well-defined, wisely organized structure of the working directory structure as defined by the best practice.</p>
<p>In .NET I put my endpoints in the <em>controllers</em>, then I create the <em>business logic</em> with <em>interfaces</em> and the <em>data layer</em> where I have the <em>model</em> and the <em>entities</em>, so everything is all well organized and separated.</p>
<p>How can I achieve a similar organization in Django?</p>
<p>I've seen that some people use <strong>DRF</strong> (Django REST Framework). Is it good for my purpose? Is it a best practice for projects like mine?</p>
|
<python><django><directory-structure>
|
2025-02-27 06:53:34
| 1
| 5,183
|
Giox
|
79,471,693
| 191,064
|
Difference in output of sympy.binomial vs scipy.special.comb
|
<p>Why does the following code that calls <code>comb</code> produce the wrong output?</p>
<pre><code>from scipy.special import comb
from sympy import binomial
def coup(n=100, m=10):
expectation = 0
for j in range(1, n + 1):
coeff = ((-1)**(j-1)) * comb(n, j)
denominator = 1 - (comb(n - j, m) / comb(n, m))
expectation += coeff / denominator
print(expectation)
# output: 1764921496446.9807
# correct output when using binomial: 49.9445660566641
</code></pre>
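<p>For completeness, this is the <code>sympy</code> variant that produces the value I expect (only the <code>comb</code> calls are swapped for <code>binomial</code>, so everything stays in exact rational arithmetic until the final conversion):</p>
<pre><code>from sympy import binomial

def coup_exact(n=100, m=10):
    expectation = 0
    for j in range(1, n + 1):
        coeff = ((-1)**(j - 1)) * binomial(n, j)
        denominator = 1 - (binomial(n - j, m) / binomial(n, m))
        expectation += coeff / denominator
    print(float(expectation))  # 49.9445660566641

coup_exact()
</code></pre>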
|
<python><scipy><sympy><numerical-methods>
|
2025-02-27 05:47:02
| 1
| 2,599
|
bettersayhello
|
79,471,516
| 243,755
|
Intermittent SSLCertVerificationError when using requests
|
<p>It is very weird to me that this SSLCertVerificationError does not always happen; it only happens sometimes (e.g. the first request runs fine, but the second request may hit this issue).</p>
<p>I have installed <code>certifi</code>, and the env variable <code>SSL_CERT_FILE</code> is also set. I am not sure why this error happens intermittently; does anyone have any ideas? Thanks.</p>
<p>Here's the full stacktrace</p>
<pre><code>Traceback (most recent call last):
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 214, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/starlette/concurrency.py", line 39, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 962, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/dde/llm/wiki/app.py", line 48, in get_table_metadata
page_content = get_page_content(space, table_name.upper())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/backoff/_sync.py", line 105, in retry
ret = target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/dde/utils/wiki.py", line 22, in get_page_content
page = confluence.get_page_by_title(space, title, expand='body.storage', start=None, limit=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/atlassian/confluence.py", line 334, in get_page_by_title
response = self.get(url, params=params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/atlassian/rest_client.py", line 436, in get
response = self.request(
^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/atlassian/rest_client.py", line 383, in request
response = self._session.request(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jianfezhang/github/metadata-repo/data-discovery-exp/.venv/lib/python3.11/site-packages/requests/adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='wiki.corp.xxx**strong text**.com', port=443): Max retries exceeded with url: /rest/api/content?type=page&expand=body.storage&spaceKey=DBC&title=SRCH_SRP_TOOL_EVENT_FACT (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
</code></pre>
|
<python><python-requests><ssl-certificate>
|
2025-02-27 03:00:00
| 0
| 29,674
|
zjffdu
|
79,471,221
| 3,310,237
|
In Python GDAL, why I get two EPSG attributes in ExportToWkt?
|
<p>I'm processing some satellite <a href="https://lpdaac.usgs.gov/products/mod13q1v006/" rel="nofollow noreferrer">MOD13Q1</a> images. If I print the ExportToWkt output, I get:</p>
<p><code>PROJCS["unnamed",GEOGCS["Unknown datum based upon the custom spheroid",DATUM["Not_specified_based_on_custom_spheroid",SPHEROID["Custom spheroid",6371007.181,0]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]]],PROJECTION["Sinusoidal"],PARAMETER["longitude_of_center",0],PARAMETER["false_easting",0],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]</code></p>
<p>Why do I have two EPSG tags?</p>
|
<python><gdal><projection>
|
2025-02-26 23:06:02
| 0
| 465
|
Mauro Assis
|
79,471,135
| 2,904,824
|
Generating Keys
|
<p>Using PyJWT, how do you generate a new key from scratch?</p>
<p>I cannot find a single example of how to do this in PyJWT. I can find examples with jwcrypto, but porting that looks tedious at best.</p>
<p>EDIT: I'm deep in OIDC, and I need asymmetric crypto and actual JWKs.</p>
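<p>For context, the <code>jwcrypto</code> version I'd rather not have to port looks roughly like this (a sketch, assuming I read its docs correctly):</p>
<pre><code>from jwcrypto import jwk

key = jwk.JWK.generate(kty='RSA', size=2048)   # fresh asymmetric key pair
private_jwk = key.export(private_key=True)     # JSON string with the private JWK
public_jwk = key.export_public()               # JSON string with the public JWK
print(public_jwk)
</code></pre>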
|
<python><pyjwt>
|
2025-02-26 22:12:27
| 1
| 667
|
AstraLuma
|
79,471,079
| 3,745,677
|
How to handle malformed API request in Flask?
|
<p>There is quite an old game (that no longer works) that has to make some API calls in order to be playable. I am creating a Flask mock server to handle those requests, however it turned out that the requests are not compliant with HTTP standard and are malformed. For example:</p>
<pre><code>Get /config.php http/1.1
</code></pre>
<p>to which flask reports <code>code 400, message Bad request version ('http/1.1')</code>.</p>
<p>After searching for various solutions, here is what I tried (and none worked):</p>
<ol>
<li><code>before_request</code></li>
<li>Python decorator</li>
<li><code>wsgi_app</code> middleware</li>
</ol>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request
from functools import wraps
app = Flask(__name__)
class RequestMiddleware:
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
print('Middleware', environ['REQUEST_METHOD'], environ['SERVER_PROTOCOL'])
return self.app(environ, start_response)
def RequestDecorator(view):
@wraps(view)
def decorated(*args, **kwargs):
print('Decorator', args, kwargs)
return view(*args, **kwargs)
return decorated
@app.before_request
def RequestHook():
print('Before request: url: %s, path: %s' % (request.url, request.path))
app.wsgi_app = RequestMiddleware(app.wsgi_app)
@app.route("/config.php", methods=["GET"])
@RequestDecorator
def get_config():
return ("{}", 200)
</code></pre>
<p>Example output:</p>
<pre><code>Middleware GET HTTP/1.1
Before request: url: http://localhost/config.php, path: /config.php
Decorator () {}
"GET /config.php HTTP/1.1" 200 -
code 400, message Bad request version ('http/1.1')
"Get /config.php http/1.1" 400 -
</code></pre>
<p>The malformed request does not produce output from any of the solutions. My goal was to intercept the request before it is rejected, in order to string-replace <code>Get</code> with <code>GET</code> and <code>http/1.1</code> with <code>HTTP/1.1</code>. Is that even possible?</p>
|
<python><flask>
|
2025-02-26 21:37:45
| 1
| 802
|
lolbas
|
79,470,854
| 2,005,559
|
supply extra parameter as function argument for scipy optimize curve_fit
|
<p>I am defining a <code>piecewise</code> function for some data,</p>
<pre><code>def fit_jt(x, e1, e2, n1, E1, E2, N1, N2):
a = 1.3
return np.piecewise(x, [x <= a, x > a], [
lambda x: 1 / e1 +
(1 - np.float128(np.exp(-e2 * x / n1))) / e2, lambda x: 1 / E1 +
(1 - np.float128(np.exp(-E2 * x / N1))) / E2 + x / N2
])
</code></pre>
<p>which is called in <code>main</code> as:</p>
<pre><code> popt_jt, pcov_jt = optimize.curve_fit(fit_jt,
time.values,
jt.values,
method='trf')
</code></pre>
<p>Now, the problem here is that <code>a</code> is hardcoded in the function <code>fit_jt</code>. Is it possible to supply the value of <code>a</code> from <code>main</code> (without making a lot of changes)?</p>
|
<python><scipy><scipy-optimize>
|
2025-02-26 19:31:13
| 1
| 3,260
|
BaRud
|
79,470,828
| 2,893,712
|
Nodriver Cannot Start Headless Mode
|
<p>I found <a href="https://github.com/ultrafunkamsterdam/nodriver/" rel="nofollow noreferrer">Nodriver</a>, which is the successor to <a href="https://github.com/ultrafunkamsterdam/undetected-chromedriver/" rel="nofollow noreferrer">Undetected-Chromedriver</a>. I am trying to run it in headless mode but am having problems.</p>
<pre><code>import nodriver as uc
async def main():
browser = await uc.start(headless=True)
page = await browser.get('https://bot.sannysoft.com/')
if __name__ == '__main__':
uc.loop().run_until_complete(main())
</code></pre>
<p>However, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\no_drive_test.py", line 21, in <module>
uc.loop().run_until_complete(main())
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\asyncio\base_events.py", line 721, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\no_drive_test.py", line 5, in main
browser = await uc.start(headless=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\util.py", line 95, in start
return await Browser.create(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\browser.py", line 90, in create
await instance.start()
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\browser.py", line 393, in start
await self.connection.send(cdp.target.set_discover_targets(discover=True))
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\connection.py", line 413, in send
await self._prepare_headless()
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\connection.py", line 492, in _prepare_headless
response, error = await self._send_oneshot(
^^^^^^^^^^^^^^^
TypeError: cannot unpack non-iterable NoneType object
</code></pre>
<p>I tried to create an issue on the Nodriver GitHub page, but it looks like that is only available to collaborators of the project.</p>
|
<python><python-3.x><undetected-chromedriver><nodriver>
|
2025-02-26 19:19:47
| 2
| 8,806
|
Bijan
|
79,470,747
| 4,363,864
|
Combine dataframe columns elements by pair
|
<p>I have this dataframe, already sorted by ID then SEQ:</p>
<pre><code>x = pd.DataFrame({
'ID': [1, 1, 1, 1, 2, 2, 2],
'SEQ': [1, 2, 3, 4, 1, 3, 4]
})
</code></pre>
<p>Inline:</p>
<pre><code> ID SEQ
0 1 1
1 1 2
2 1 3
3 1 4
4 2 1
5 2 3
6 2 4
</code></pre>
<p>And I want to combine the next <code>SEQ</code> value with the current one, if and only if they share the same ID. So <code>ROW[y].SEQ + ROW[y+1].SEQ</code>.</p>
<p>So at the moment I do:</p>
<pre><code>x['SEQ'] = x['SEQ'].astype('str')
x['ID_NEXT'] = x['ID'].shift(-1)
x['SEQ_NEXT'] = x['SEQ'].shift(-1)
x['COMBINE'] = x['SEQ']+' - '+x['SEQ_NEXT']
x = x[x['ID']==x['ID_NEXT']]
</code></pre>
<p>And I obtain what I want:</p>
<pre><code> ID SEQ ID_NEXT SEQ_NEXT COMBINE
0 1 1 1 2 1 - 2
1 1 2 1 3 2 - 3
2 1 3 1 4 3 - 4
4 2 1 2 3 1 - 3
5 2 3 2 4 3 - 4
</code></pre>
<p>Is there a more efficient way to do this?</p>
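<p>One variant I have been experimenting with (just a sketch) keeps the shift but restricts it to each ID with <code>groupby</code>, so the explicit <code>ID_NEXT</code> comparison is no longer needed:</p>
<pre><code>import pandas as pd

x = pd.DataFrame({'ID': [1, 1, 1, 1, 2, 2, 2],
                  'SEQ': [1, 2, 3, 4, 1, 3, 4]})

# Shift within each ID, so the last row of a group gets NaN instead of the next ID's value.
nxt = x.groupby('ID')['SEQ'].shift(-1)
out = x.assign(SEQ_NEXT=nxt).dropna(subset=['SEQ_NEXT'])
out['COMBINE'] = out['SEQ'].astype(str) + ' - ' + out['SEQ_NEXT'].astype(int).astype(str)
print(out)
</code></pre>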
|
<python><pandas><dataframe>
|
2025-02-26 18:31:18
| 1
| 10,820
|
obchardon
|
79,470,598
| 1,552,080
|
Writing complex Pandas DataFrame to HDF5 using h5py
|
<p>I have a Pandas DataFrame with mixed scalar and array-like data of different raw types (int, float, str). The DataFrame's types look like this:</p>
<pre><code>'col1', dtype('float64')
'col2', dtype('O') <-- array, item type str
'col3', dtype('int64')
'col4', dtype('bool')
...
'colA', dtype('O') <-- array, item type str
...
'colB', dtype('O') <-- array, item type float
...
'colC', dtype('O') <-- scalar, str
...
'colD', dtype('O') <-- array, item type int
...
some more mixed data type columns
</code></pre>
<p>The length of the numeric array-like data varies from DataFrame row to row.</p>
<p>Currently, I naively try to write this DataFrame to an HDF5 file with:</p>
<pre><code>with h5py.File(self.file_path, 'w') as f:
f.create_dataset(dataset_name, data=dataframe)
</code></pre>
<p>This operation fails with the error message</p>
<pre><code> tid = h5t.py_create(dtype, logical=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "h5py/h5t.pyx", line 1669, in h5py.h5t.py_create
File "h5py/h5t.pyx", line 1693, in h5py.h5t.py_create
File "h5py/h5t.pyx", line 1753, in h5py.h5t.py_create
TypeError: Object dtype dtype('O') has no native HDF5 equivalent
</code></pre>
<p>What is a way / the best way to write this data to HDF5?</p>
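<p>The workaround I am experimenting with at the moment (only a sketch; <code>df_to_hdf5</code> and the per-column handling are my own guesses, not an established recipe) writes each column as its own dataset inside a group, using <code>h5py.string_dtype()</code> for strings, <code>h5py.vlen_dtype()</code> for the ragged numeric arrays, and joining ragged string arrays into delimited strings:</p>
<pre><code>import h5py
import numpy as np
import pandas as pd

def df_to_hdf5(df: pd.DataFrame, path: str, group: str) -> None:
    str_dt = h5py.string_dtype(encoding="utf-8")
    with h5py.File(path, "w") as f:
        grp = f.create_group(group)
        for col in df.columns:
            values = df[col].to_numpy()
            sample = next((v for v in values if v is not None), None)
            if isinstance(sample, (list, np.ndarray)):
                arrays = [np.asarray(v) for v in values]
                if arrays and arrays[0].dtype.kind in ("U", "O"):
                    # Ragged string arrays: join into one delimited string per row.
                    joined = ["\x1f".join(map(str, v)) for v in values]
                    grp.create_dataset(col, data=joined, dtype=str_dt)
                else:
                    # Ragged numeric arrays: variable-length dtype.
                    vlen = h5py.vlen_dtype(arrays[0].dtype)
                    data = np.empty(len(arrays), dtype=object)
                    data[:] = arrays
                    grp.create_dataset(col, data=data, dtype=vlen)
            elif df[col].dtype == object:
                # Scalar strings.
                grp.create_dataset(col, data=values.astype(str), dtype=str_dt)
            else:
                # Plain numeric / boolean columns.
                grp.create_dataset(col, data=values)
</code></pre>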
|
<python><arrays><pandas><hdf5><h5py>
|
2025-02-26 17:31:44
| 1
| 1,193
|
WolfiG
|
79,470,526
| 12,162,229
|
Grouped Rolling Mean in Polars
|
<p>Similar question is asked <a href="https://stackoverflow.com/questions/76164821/how-to-group-by-and-rolling-in-polars">here</a></p>
<p>However, it didn't seem to work in my case.</p>
<p>I have a dataframe with 3 columns: date, groups, prob. What I want is a 3-day rolling mean of the <code>prob</code> values, grouped by <code>groups</code> and <code>date</code>. However, following the answer linked above, I got all nulls returned.</p>
<pre><code>import polars as pl
from datetime import date
import numpy as np
dates = pl.date_range(date(2024, 12, 1), date(2024, 12, 30), "1d", eager=True).alias(
"date")
len(dates)
days = pl.concat([dates,dates])
groups = pl.concat([pl.select(pl.repeat("B", n = 30)).to_series(),
pl.select(pl.repeat("A", n = 30)).to_series()]).alias('groups')
data = pl.DataFrame([days, groups])
data2 = data.with_columns(pl.lit(np.random.rand(data.height)).alias("prob"))
data2.with_columns(
rolling_mean =
pl.col('prob')
.rolling_mean(window_size = 3)
.over('date','groups')
)
"""
shape: (60, 4)
ββββββββββββββ¬βββββββββ¬βββββββββββ¬βββββββββββββββ
β date β groups β prob β rolling_mean β
β --- β --- β --- β --- β
β date β str β f64 β f64 β
ββββββββββββββͺβββββββββͺβββββββββββͺβββββββββββββββ‘
β 2024-12-01 β B β 0.938982 β null β
β 2024-12-02 β B β 0.103133 β null β
β 2024-12-03 β B β 0.724672 β null β
β 2024-12-04 β B β 0.495868 β null β
β 2024-12-05 β B β 0.621124 β null β
β β¦ β β¦ β β¦ β β¦ β
β 2024-12-26 β A β 0.762529 β null β
β 2024-12-27 β A β 0.766366 β null β
β 2024-12-28 β A β 0.272936 β null β
β 2024-12-29 β A β 0.28709 β null β
β 2024-12-30 β A β 0.403478 β null β
ββββββββββββββ΄βββββββββ΄βββββββββββ΄βββββββββββββββ
""""
</code></pre>
<p>In the documentation I found <code>.rolling_mean_by</code> and tried it instead, but rather than computing a rolling mean it seems to just return the <code>prob</code> value for each row.</p>
<pre><code>data2.with_columns(
rolling_mean =
pl.col('prob')
.rolling_mean_by(window_size = '3d', by = 'date')
.over('groups', 'date')
)
"""
shape: (60, 4)
ββββββββββββββ¬βββββββββ¬βββββββββββ¬βββββββββββββββ
β date β groups β prob β rolling_mean β
β --- β --- β --- β --- β
β date β str β f64 β f64 β
ββββββββββββββͺβββββββββͺβββββββββββͺβββββββββββββββ‘
β 2024-12-01 β B β 0.938982 β 0.938982 β
β 2024-12-02 β B β 0.103133 β 0.103133 β
β 2024-12-03 β B β 0.724672 β 0.724672 β
β 2024-12-04 β B β 0.495868 β 0.495868 β
β 2024-12-05 β B β 0.621124 β 0.621124 β
β β¦ β β¦ β β¦ β β¦ β
β 2024-12-26 β A β 0.762529 β 0.762529 β
β 2024-12-27 β A β 0.766366 β 0.766366 β
β 2024-12-28 β A β 0.272936 β 0.272936 β
β 2024-12-29 β A β 0.28709 β 0.28709 β
β 2024-12-30 β A β 0.403478 β 0.403478 β
ββββββββββββββ΄βββββββββ΄βββββββββββ΄βββββββββββββββ
""""
</code></pre>
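<p>The variant I am testing now (a sketch; it assumes the frame is already sorted by <code>date</code> within each group, as it is here) keeps <code>rolling_mean_by</code> but partitions only by <code>groups</code>, so the window can actually span several dates:</p>
<pre><code>out = data2.with_columns(
    rolling_mean=pl.col('prob')
    .rolling_mean_by(by='date', window_size='3d')
    .over('groups')   # partition by group only; 'date' stays the rolling axis
)
print(out)
</code></pre>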
|
<python><dataframe><python-polars><rolling-computation><polars>
|
2025-02-26 17:08:21
| 1
| 317
|
AColoredReptile
|
79,470,488
| 3,267,763
|
bazel build and run grpc server with python
|
<p>I am trying to setup a larger bazel infrastructure to</p>
<ol>
<li>Create .py files from a <code>.proto</code> file for gRPC using <code>rules_proto_grpc_python</code></li>
<li>Run the server code that implements the interfaces defined in 1 via a <code>py_binary</code></li>
</ol>
<p>However, I am getting the error</p>
<pre class="lang-bash prettyprint-override"><code>google.protobuf.runtime_version.VersionError: Detected incompatible Protobuf Gencode/Runtime versions
when loading helloworld/hello.proto: gencode 5.29.1 runtime 5.27.2. Runtime version
cannot be older than the linked gencode version. See Protobuf version
guarantees at https://protobuf.dev/support/cross-version-runtime-guarantee.
</code></pre>
<p>indicating that the protobuf version that <code>rules_proto_grpc_python</code> uses to generate the .py files is different from the version used at runtime. I tried to require <code>protobuf>=5.29.1</code> for the <code>py_binary</code>, but that is not possible due to conflicts with other packages in the project. In this minimal example, it <em>is</em> possible, but it issues several warnings about conflicting symlinks, which seems concerning as well. Is there a way to either</p>
<ol>
<li>Specify the protobuf version to use for <code>rules_proto_grpc_python</code></li>
<li>Generate the <code>.py</code> files from the <code>.proto</code> in another way?</li>
</ol>
<h1>Edit 2025-02-27</h1>
<p>I managed to get this to work by playing around with the requirements in my larger repository. Specifying the minimum protobuf version causes it to work.</p>
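<p>Concretely, that meant adding a constraint along these lines to the requirements input that feeds <code>requirements_lock.txt</code> (sketch only; the exact file layout differs per repo, and the lock file has to be regenerated with your usual pip-compile / rules_python workflow):</p>
<pre><code># requirements.in (sketch): keep the runtime protobuf at least as new as the gencode
protobuf>=5.29.1
grpcio
</code></pre>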
<h1>Here are the relevant files to make a minimal example</h1>
<p>The top-level MODULE.bazel:</p>
<pre><code># MODULE.bazel
bazel_dep(name = "aspect_bazel_lib", version = "2.7.8")
bazel_dep(name = "aspect_rules_py", version = "1.3.2")
bazel_dep(name = "rules_python", version = "1.1.0")
bazel_dep(name = "rules_proto", version = "7.1.0")
bazel_dep(name = "rules_proto_grpc_python", version = "5.0.0")
# deps for python projects.
_PY_MINOR_VER = 10
_PY_VERSION = "3.%s" % _PY_MINOR_VER
_PY_VER_ = "python_3_%s" % _PY_MINOR_VER
python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(
python_version = _PY_VERSION,
is_default = True,
)
# You can use this repo mapping to ensure that your BUILD.bazel files don't need
# to be updated when the python version changes to a different `3.12` version.
use_repo(
python,
"python_3_%s" % _PY_MINOR_VER,
)
pip = use_extension("@rules_python//python/extensions:pip.bzl", "pip")
pip.parse(
hub_name = "pypi",
# We need to use the same version here as in the `python.toolchain` call.
python_version = _PY_VERSION,
requirements_lock = "//:requirements_lock.txt",
)
use_repo(pip, "pypi")
</code></pre>
<p>Inside a directory /proto_library/proto/hello.proto</p>
<pre><code># proto_library/proto/hello.proto
syntax = "proto3";
package helloworld;
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
</code></pre>
<p>The BUILD.bazel alongside the proto file</p>
<pre><code># proto_library/proto/BUILD.bazel
load("@rules_proto//proto:defs.bzl", "proto_library")
load("@rules_proto_grpc_python//:defs.bzl", "py_proto_library", "py_grpc_library")
proto_library(
name = "hello_proto",
srcs = ["hello.proto"],
# Add these to control Python package structure
strip_import_prefix = "/proto_library/proto",
import_prefix = "helloworld",
visibility = ["//visibility:public"],
)
py_proto_library(
name = "hello_py_proto",
protos = [":hello_proto"],
visibility = ["//visibility:public"],
)
py_grpc_library(
name = "hello_py_grpc",
protos = [":hello_proto"],
visibility = ["//visibility:public"],
deps = [":hello_py_proto"],
)
</code></pre>
<p>The python code to implement the server</p>
<pre><code># proto_library/src/hello_server.py
import logging
import sys
from concurrent import futures
import grpc
from helloworld import hello_pb2, hello_pb2_grpc
class Greeter(hello_pb2_grpc.GreeterServicer):
def SayHello(self, request, context):
return hello_pb2.HelloReply(message="Hello, %s!" % request.name)
def serve():
port = "50051"
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
hello_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
server.add_insecure_port("[::]:" + port)
server.start()
print("Server started, listening on " + port)
server.wait_for_termination()
if __name__ == "__main__":
logging.basicConfig()
serve()
</code></pre>
<p>And finally, the BUILD file inside proto_library</p>
<pre><code># proto_library/BUILD.bazel
load("@aspect_rules_py//py:defs.bzl", "py_binary", "py_library")
load("@pypi//:requirements.bzl", "requirement")
py_binary(
name = "hello_test",
srcs = ["src/hello_server.py"],
deps = [
"//proto_library/proto:hello_py_proto",
"//proto_library/proto:hello_py_grpc",
requirement("grpcio"),
requirement("protobuf"),
],
# Add these to handle Python packaging
imports = [".."], # Make parent directories importable
visibility = ["//visibility:public"],
package_collisions = "warning",
# Create proper Python package structure
main = "hello_server.py",
)
</code></pre>
|
<python><grpc><bazel><grpc-python><bazel-python>
|
2025-02-26 16:54:53
| 1
| 682
|
kgully
|
79,470,483
| 1,778,537
|
Tkinter: Canvas item color change on state change
|
<p>The Tkinter code below displays a root window containing a canvas and a blue rectangle.
Clicking on the rectangle toggles its state between 'normal' and 'disabled'.</p>
<p>I'd like to also toggle its color. I know how to do it using <code>canvas.itemconfig(rect, fill='red')</code> for example, by checking its state.</p>
<p>However, given that <code>fill</code> and <code>disabledfill</code> were defined, I expected the color to change automatically when the state changes. This does not happen.</p>
<p>Is it possible to achieve this color change simply by altering the state?</p>
<p><strong>PS:</strong> the color changes as I expect when clicking on the canvas, but not when clicking on the rectangle.</p>
<p><strong>MWE</strong></p>
<pre><code>import tkinter as tk
# toggle rect state
def toggle_rect(event):
item = canvas.find_closest(event.x, event.y)
current_state = canvas.itemcget(item, "state")
print(f'current state: {current_state}')
new_state = "disabled" if current_state == "normal" else "normal"
canvas.itemconfig(item, state=new_state)
print(f'new state: {canvas.itemcget(item, "state")}')
root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400, bg='white')
canvas.pack()
rect = canvas.create_rectangle(50, 50, 150, 150, fill='blue', disabledfill='red', state='normal')
canvas.bind("<Button-1>", toggle_rect)
root.mainloop()
</code></pre>
|
<python><tkinter><tkinter-canvas>
|
2025-02-26 16:51:14
| 1
| 354
|
Sigur
|
79,470,417
| 1,628,971
|
How to add a group-specific index to a polars dataframe with an expression instead of a map_groups user-defined function?
|
<p>I am curious whether I am missing something in the Polars expression library that would let this be done more efficiently. I have a dataframe of protein sequences, from which I would like to create k-long substrings (k-mers), like the <code>kmerize</code> function below.</p>
<pre class="lang-py prettyprint-override"><code>def kmerize(sequence, ksize):
kmers = [sequence[i : (i + ksize)] for i in range(len(sequence) - ksize + 1)]
return kmers
</code></pre>
<p>Within Polars, I did a <code>group_by</code> on the sequence, where within <a href="https://docs.pola.rs/api/python/stable/reference/lazyframe/api/polars.lazyframe.group_by.LazyGroupBy.map_groups.html#polars.lazyframe.group_by.LazyGroupBy.map_groups" rel="nofollow noreferrer"><code>map_groups</code></a>, the sequence was repeated by its length and exploded, then a row index was added. This row index was used to slice the sequences into k-mers, and then filtered by only keeping k-mers of the correct size.</p>
<p>Here is a minimally reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>from io import StringIO
import polars as pl
s = """sequence_name,sequence,length
sp|O43236|SEPT4_HUMAN Septin-4 OS=Homo sapiens OX=9606 GN=SEPTIN4 PE=1 SV=1,MDRSLGWQGNSVPEDRTEAGIKRFLEDTTDDGELSKFVKDFSGNASCHPPEAKTWASRPQVPEPRPQAPDLYDDDLEFRPPSRPQSSDNQQYFCAPAPLSPSARPRSPWGKLDPYDSSEDDKEYVGFATLPNQVHRKSVKKGFDFTLMVAGESGLGKSTLVNSLFLTDLYRDRKLLGAEERIMQTVEITKHAVDIEEKGVRLRLTIVDTPGFGDAVNNTECWKPVAEYIDQQFEQYFRDESGLNRKNIQDNRVHCCLYFISPFGHGLRPLDVEFMKALHQRVNIVPILAKADTLTPPEVDHKKRKIREEIEHFGIKIYQFPDCDSDEDEDFKLQDQALKESIPFAVIGSNTVVEARGRRVRGRLYPWGIVEVENPGHCDFVKLRTMLVRTHMQDLKDVTRETHYENYRAQCIQSMTRLVVKERNRNKLTRESGTDFPIPAVPPGTDPETEKLIREKDEELRRMQEMLHKIQKQMKENY,478
sp|O43521|B2L11_HUMAN Bcl-2-like protein 11 OS=Homo sapiens OX=9606 GN=BCL2L11 PE=1 SV=1,MAKQPSDVSSECDREGRQLQPAERPPQLRPGAPTSLQTEPQGNPEGNHGGEGDSCPHGSPQGPLAPPASPGPFATRSPLFIFMRRSSLLSRSSSGYFSFDTDRSPAPMSCDKSTQTPSPPCQAFNHYLSAMASMRQAEPADMRPEIWIAQELRRIGDEFNAYYARRVFLNNYQAAEDHPRMVILRLLRYIVRLVWRMH,198
sp|O60238|BNI3L_HUMAN BCL2/adenovirus E1B 19 kDa protein-interacting protein 3-like OS=Homo sapiens OX=9606 GN=BNIP3L PE=1 SV=1,MSSHLVEPPPPLHNNNNNCEENEQSLPPPAGLNSSWVELPMNSSNGNDNGNGKNGGLEHVPSSSSIHNGDMEKILLDAQHESGQSSSRGSSHCDSPSPQEDGQIMFDVEMHTSRDHSSQSEEEVVEGEKEVEALKKSADWVSDWSSRPENIPPKEFHFRHPKRSVSLSMRKSGAMKKGGIFSAEFLKVFIPSLFLSHVLALGLGIYIGKRLSTPSASTY,219
sp|O95197|RTN3_HUMAN Reticulon-3 OS=Homo sapiens OX=9606 GN=RTN3 PE=1 SV=2,MAEPSAATQSHSISSSSFGAEPSAPGGGGSPGACPALGTKSCSSSCADSFVSSSSSQPVSLFSTSQEGLSSLCSDEPSSEIMTSSFLSSSEIHNTGLTILHGEKSHVLGSQPILAKEGKDHLDLLDMKKMEKPQGTSNNVSDSSVSLAAGVHCDRPSIPASFPEHPAFLSKKIGQVEEQIDKETKNPNGVSSREAKTALDADDRFTLLTAQKPPTEYSKVEGIYTYSLSPSKVSGDDVIEKDSPESPFEVIIDKAAFDKEFKDSYKESTDDFGSWSVHTDKESSEDISETNDKLFPLRNKEAGRYPMSALLSRQFSHTNAALEEVSRCVNDMHNFTNEILTWDLVPQVKQQTDKSSDCITKTTGLDMSEYNSEIPVVNLKTSTHQKTPVCSIDGSTPITKSTGDWAEASLQQENAITGKPVPDSLNSTKEFSIKGVQGNMQKQDDTLAELPGSPPEKCDSLGSGVATVKVVLPDDHLKDEMDWQSSALGEITEADSSGESDDTVIEDITADTSFENNKIQAEKPVSIPSAVVKTGEREIKEIPSCEREEKTSKNFEELVSDSELHQDQPDILGRSPASEAACSKVPDTNVSLEDVSEVAPEKPITTENPKLPSTVSPNVFNETEFSLNVTTSAYLESLHGKNVKHIDDSSPEDLIAAFTETRDKGIVDSERNAFKAISEKMTDFKTTPPVEVLHENESGGSEIKDIGSKYSEQSKETNGSEPLGVFPTQGTPVASLDLEQEQLTIKALKELGERQVEKSTSAQRDAELPSEEVLKQTFTFAPESWPQRSYDILERNVKNGSDLGISQKPITIRETTRVDAVSSLSKTELVKKHVLARLLTDFSVHDLIFWRDVKKTGFVFGTTLIMLLSLAAFSVISVVSYLILALLSVTISFRIYKSVIQAVQKSEEGHPFKAYLDVDITLSSEAFHNYMNAAMVHINRALKLIIRLFLVEDLVDSLKLAVFMWLMTYVGAVFNGITLLILAELLIFSVPIVYEKYKTQIDHYVGIARDQTKSIVEKIQAKLPGIAKKKAE,1032
sp|P10415|BCL2_HUMAN Apoptosis regulator Bcl-2 OS=Homo sapiens OX=9606 GN=BCL2 PE=1 SV=2,MAHAGRTGYDNREIVMKYIHYKLSQRGYEWDAGDVGAAPPGAAPAPGIFSSQPGHTPHPAASRDPVARTSPLQTPAAPGAAAGPALSPVPPVVHLTLRQAGDDFSRRYRRDFAEMSSQLHLTPFTARGRFATVVEELFRDGVNWGRIVAFFEFGGVMCVESVNREMSPLVDNIALWMTEYLNRHLHTWIQDNGGWDAFVELYGPSMRPLFDFSWLSLKTLLSLALVGACITLGAYLGHK,239"""
ksize = 24
df = pl.scan_csv(StringIO(s))
df.group_by("sequence").map_groups(
lambda group_df: group_df.with_columns(kmers=pl.col("sequence").repeat_by("length"))
.explode("kmers")
.with_row_index(),
schema={"index": pl.UInt32, "sequence_name": pl.String, "sequence": pl.String, "length": pl.Int64, "kmers": pl.String},
).with_columns(pl.col("kmers").str.slice("index", ksize)).filter(
pl.col("kmers").str.len_chars() == ksize
).rename(
{"index": "start"}
).collect()
</code></pre>
<p>This code produces this dataframe:</p>
<p><a href="https://i.sstatic.net/UmPCilqE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmPCilqE.png" alt="Kmer dataframe produced by code above" /></a></p>
<p>Is there a more efficient way to do this in Polars? I will be using dataframes with ~250k sequences, each ~100-1000 letters long, so I'd like to do this as low-resource as possible.</p>
<p>Thank you and have a beautiful day!</p>
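<p>Edit: the closest expression-only version I have found so far (a sketch; it relies on <code>pl.int_ranges</code> and on <code>str.slice</code> accepting expressions for the offset, which newer Polars versions do) builds the start offsets per row and explodes them, with no <code>group_by</code>/<code>map_groups</code>:</p>
<pre class="lang-py prettyprint-override"><code>kmers = (
    df.with_columns(
        start=pl.int_ranges(0, pl.col("sequence").str.len_chars() - ksize + 1)
    )
    .explode("start")
    .with_columns(kmers=pl.col("sequence").str.slice(pl.col("start"), ksize))
    .select("sequence_name", "start", "kmers")
    .collect()
)
</code></pre>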
|
<python><dataframe><bioinformatics><python-polars><polars>
|
2025-02-26 16:25:20
| 2
| 1,694
|
Olga Botvinnik
|
79,470,369
| 2,123,706
|
when creating a table using sql alchemy in python, why do string columns become varchar(max) in SQL?
|
<p>I have a dataframe that I want to write to a table in SQL Server. I use SQL alchemy:</p>
<pre><code>import pandas as pd
import sqlalchemy
from sqlalchemy import create_engine, text
server =
database =
driver =
database_con = f'mssql://@{server}/{database}?driver={driver}'
engine = create_engine(database_con, fast_executemany=True)
con = engine.connect()
pd.DataFrame({'col1':['a','b','c','dd'], 'col2':[11,22,33,44]}).to_sql(
name='test',
con=con,
if_exists='append',
index=None)
con.commit()
</code></pre>
<p>This writes and creates the table successfully in SSMS. But when I examine the column definitions, I see that the string column has the data type <code>varchar(max)</code>:</p>
<p><a href="https://i.sstatic.net/AJylxsP8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJylxsP8.png" alt="enter image description here" /></a></p>
<p>This causes issues later on if we want to create indexes using one of these varchar columns (cannot create index on varchar max column).</p>
<p>Is there a way to specify when creating the table that I want a <code>varchar(10)</code> for <code>col1</code>, and not default to <code>varchar(max)</code>?</p>
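<p><code>to_sql</code> accepts a <code>dtype</code> argument mapping column names to SQLAlchemy types, which looks like the hook for this; a minimal sketch of what I mean (the length 10 is arbitrary):</p>
<pre><code>from sqlalchemy.types import VARCHAR

pd.DataFrame({'col1':['a','b','c','dd'], 'col2':[11,22,33,44]}).to_sql(
    name='test',
    con=con,
    if_exists='append',
    index=None,
    dtype={'col1': VARCHAR(10)})  # col1 becomes varchar(10) instead of varchar(max)
con.commit()
</code></pre>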
|
<python><sqlalchemy>
|
2025-02-26 16:09:46
| 1
| 3,810
|
frank
|
79,470,149
| 7,615,872
|
Custom ValidationError message for extra fields
|
<p>I have defined a model using Pydantic V2 as follows:</p>
<pre><code>from pydantic import BaseModel
class Data(BaseModel):
valid: bool = False
name: str | None = None
model_config = ConfigDict(extra="forbid")
</code></pre>
<p>When I use this model to validate some data using <code>model_validate</code> with some extra field</p>
<pre><code>Data.model_validate({"valid": True, "name": "John", "age": 30})
</code></pre>
<p>It raises this expected error:</p>
<pre><code>pydantic_core._pydantic_core.ValidationError: 1 validation error for Data
age
Extra inputs are not permitted [type=extra_forbidden, input_value=30, input_type=int]
For further information visit https://errors.pydantic.dev/2.10/v/extra_forbidden
</code></pre>
<p>What I want to do is replace the default message <code>Extra inputs are not permitted</code> with a custom message, <code>Unknown field.</code>. To do so, I overrode the <code>model_validate</code> method so I can customize the message. My model is now defined as follows:</p>
<pre><code>from pydantic import BaseModel, ConfigDict
from pydantic_core import InitErrorDetails, ValidationError
class Data(BaseModel):
valid: bool = False
name: str | None = None
model_config = ConfigDict(extra="forbid") # Forbid extra fields
@classmethod
def model_validate(cls, value):
try:
return super().model_validate(value)
except ValidationError as e:
modified_errors = []
for error in e.errors():
if error['type'] == 'extra_forbidden':
modified_errors.append(
InitErrorDetails(
type=error['type'],
loc=error['loc'],
input=error['input'],
ctx={"error": "Unknown field."} # Set custom message in `ctx`
)
)
else:
modified_errors.append(error)
raise ValidationError.from_exception_data(
title="Validation failed",
line_errors=modified_errors
)
</code></pre>
<p>But still, when I run <code>Data.model_validate({"valid": True, "name": "John", "age": 30})</code>, it does not show the custom message <code>"Unknown field."</code>. What's wrong with my validator?</p>
|
<python><pydantic><pydantic-v2>
|
2025-02-26 14:51:09
| 1
| 1,085
|
Mehdi Ben Hamida
|
79,470,120
| 459,888
|
How to override txnVersion in Spark?
|
<p>Following the instructions for idempotency in <a href="https://docs.delta.io/latest/delta-streaming.html" rel="nofollow noreferrer">Idempotent table writes in foreachBatch</a>, I set the txnVersion and txnApp options. It works as intended. However, those options keep their values even after foreachBatch ends.</p>
<p>How to override txnVersion and txnApp after foreachBatch ends (meaning no new data to write)?</p>
|
<python><scala><apache-spark><databricks><delta>
|
2025-02-26 14:43:58
| 0
| 786
|
pgrandjean
|
79,470,048
| 2,840,125
|
Pandas DataFrame conditional formatting function with additional inputs
|
<p>I developed a Qt GUI that reads data from a database and displays it on the screen in a table. The table includes conditional formatting which colors cell backgrounds based on cell contents. For example, if the cell contains the letter 'G' for good, then the cell background gets colored green.</p>
<p>I'm currently working on a function that will export that table to an HTML document which I can convert to PDF. I have the data itself in <code>dfhealth</code> and the colors defined in <code>dfdefs</code>, both of which come from <code>pd.read_sql</code> statements.</p>
<pre><code>dfhealth
NAME STATUS
0 A G
1 B N
dfdefs
STATUS COLOR
0 G GREEN
1 N YELLOW
</code></pre>
<p>My issue is with the function for defining the conditional formatting of the DataFrame that will later be converted to HTML:</p>
<pre><code>def condFormat(s, df=dfdefs): # <-- Error on this line
dcolors = {"GREEN": "rgb(146, 208, 80)",
"YELLOW": "rgb(255, 255, 153)",
"RED": "rgb(218, 150, 148)",
None: "rgb(255, 255, 255)"}
dftemp = df.query("STATUS == @s")
scolor = dcolors[dftemp["COLOR"].iloc[0]]
return "background-color: " + scolor
def main():
dfhealth = pd.read_sql("SELECT name, status FROM health", conn)
dfdefs = pd.read_sql("SELECT status, color FROM definitions", conn)
dfhealth.style.applymap(condFormat)
html = dfhealth.to_html()
return html
</code></pre>
<p>I get the following error on the line shown above: "NameError: name 'dfdefs' is not defined". I can't figure out how to tell the <code>condFormat</code> function that it needs to compare the contents of each cell to the STATUS column of <code>dfdefs</code> to get the color. Thank you in advance for your help.</p>
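<p>Edit: rereading my code, I suspect two separate problems: the default argument <code>df=dfdefs</code> is evaluated when the <code>def</code> line runs, and at that point <code>dfdefs</code> only exists inside <code>main</code>; and the styled result is discarded, because <code>to_html()</code> is called on the plain DataFrame rather than on the Styler. Here is a sketch of one way around both, passing <code>dfdefs</code> through a lambda (<code>applymap</code> is renamed <code>map</code> in newer pandas, and restricting it to the STATUS column is my own choice):</p>
<pre><code>def condFormat(s, dfdefs):
    dcolors = {"GREEN": "rgb(146, 208, 80)",
               "YELLOW": "rgb(255, 255, 153)",
               "RED": "rgb(218, 150, 148)",
               None: "rgb(255, 255, 255)"}
    match = dfdefs.loc[dfdefs["STATUS"] == s, "COLOR"]
    color = dcolors[match.iloc[0]] if not match.empty else dcolors[None]
    return "background-color: " + color

def main():
    dfhealth = pd.read_sql("SELECT name, status FROM health", conn)
    dfdefs = pd.read_sql("SELECT status, color FROM definitions", conn)
    styler = dfhealth.style.applymap(lambda s: condFormat(s, dfdefs),
                                     subset=["STATUS"])
    return styler.to_html()  # keep the Styler; DataFrame.to_html() drops the styles
</code></pre>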
|
<python><pandas><dataframe><formatting>
|
2025-02-26 14:18:04
| 2
| 477
|
Kes Perron
|
79,469,894
| 4,444,546
|
polars cum sum to create a set and not actually sum
|
<p>I'd like a function like <code>cum_sum</code>, but one that builds the set of all values seen in the column up to each row, rather than summing them.</p>
<pre><code>df = pl.DataFrame({"a": [1, 2, 3, 4]})
df["a"].cum_sum()
shape: (4,)
Series: 'a' [i64]
[
1
3
6
10
]
</code></pre>
<p>but I'd like to have something like</p>
<pre><code>df["a"].cum_sum()
shape: (4,)
Series: 'a' [i64]
[
{1}
{1, 2}
{1, 2, 3}
{1, 2, 3, 4}
]
</code></pre>
<p>Also note that I'm working on big DataFrames (several million rows), so I'd like to avoid indexing and <code>map_elements</code> (as I've read that it slows things down a lot).</p>
|
<python><python-polars><cumsum>
|
2025-02-26 13:24:52
| 1
| 5,394
|
ClementWalter
|
79,469,787
| 243,031
|
Change required fields to non-required fields
|
<p>We are using <a href="https://docs.pydantic.dev/latest/" rel="nofollow noreferrer"><code>pydantic</code></a> to validate our API request payloads. We have <code>insert</code> and <code>update</code> endpoints.</p>
<p>Both endpoints have the same payload requirements. We created a model for that:</p>
<pre><code>from pydantic import Field
from pydantic import BaseModel
class Client(BaseModel):
first_name: str
last_name: str
</code></pre>
<p>When we use this <code>Client</code> model for the insert endpoint, we want <code>first_name</code> and <code>last_name</code> to be required.</p>
<p>When the user sends a payload to the <code>update</code> endpoint, they can pass only <code>first_name</code>, only <code>last_name</code>, or both, depending on what they want to update.</p>
<p>For now, we are creating new model like:</p>
<pre><code>from pydantic import Field
from pydantic import BaseModel
class ClientUpdate(BaseModel):
first_name: str | None = Field(default=None)
last_name: str | None = Field(default=None)
</code></pre>
<p>The issue is that whenever we add a new field, we have to update both models. Is there any way to reuse the <code>Client</code> model but pass some flag that disables the <code>required</code> constraint on <code>first_name</code> and <code>last_name</code>?</p>
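<p>The direction we have been exploring (a sketch; <code>make_partial</code> is our own helper, and per-field constraints/descriptions are not carried over) derives the update model from <code>Client</code> with <code>create_model</code>, so a new field only has to be added once:</p>
<pre><code>from typing import Optional
from pydantic import BaseModel, create_model

class Client(BaseModel):
    first_name: str
    last_name: str

def make_partial(model: type[BaseModel], name: str) -> type[BaseModel]:
    # Copy every field, but make it optional with a default of None.
    fields = {
        field_name: (Optional[field.annotation], None)
        for field_name, field in model.model_fields.items()
    }
    return create_model(name, **fields)

ClientUpdate = make_partial(Client, "ClientUpdate")
</code></pre>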
|
<python><pydantic><pydantic-v2>
|
2025-02-26 12:50:17
| 0
| 21,411
|
NPatel
|
79,469,031
| 12,036,671
|
Does `clickhouse_connect.get_client` return a new client instance every time?
|
<p>As the question mentions, does <code>clickhouse_connect.get_client</code> in the python client return a new client instance every time it is called? I can't seem to find if it is explicitly mentioned as such in the documentation, but it seems implied. I'm a little confused because of the name <code>get_client</code> (instead of say <code>create_client</code>).</p>
|
<python><clickhouse><clickhouse-client>
|
2025-02-26 08:40:02
| 1
| 825
|
Sandil Ranasinghe
|
79,468,879
| 680,074
|
How to fetch the records using ZOHO CRM API filtering on Created_time Criteria
|
<p>I am trying to fetch deals from the Zoho CRM API, but unfortunately filtering on the Created_Time criteria is not working: it returns 200 records every time, while in reality only 3 or 4 records have a Created_Time greater than the date below. When I pass the If-Modified-Since header instead, it works perfectly.</p>
<pre><code>criteria = "(Created_Time:greater_than:2025-02-25T00:00:00+00:00)"
headers = {
"Authorization": f"Zoho-oauthtoken {access_token}"
}
params = {
"criteria": criteria
}
response = requests.get(url, headers=headers, params=params)
if response.status_code == 200:
data = response.json()
# Extract records from the response
deals = data.get("data", [])
if deals:
df = pd.DataFrame(deals)
# Print fetched data and record count
print(f"Number of records fetched: {len(df)}")
print(df.head()) # Display first few rows of the dataframe
else:
print("No records found for the specified Created_Time.")
elif response.status_code == 204:
print("No content returned for the specified criteria.")
else:
print(f"Error: {response.status_code} - {response.text}")
</code></pre>
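<p>The next thing I plan to try (a sketch; I have not yet confirmed in the Zoho docs whether <code>Created_Time</code> is allowed in search criteria) is the Search Records endpoint, on the assumption that the plain records endpoint simply ignores <code>criteria</code>:</p>
<pre><code>search_url = "https://www.zohoapis.com/crm/v2/Deals/search"
params = {"criteria": "(Created_Time:greater_than:2025-02-25T00:00:00+00:00)"}
headers = {"Authorization": f"Zoho-oauthtoken {access_token}"}

# requests URL-encodes the params, including the "+" in the timezone offset.
response = requests.get(search_url, headers=headers, params=params)
print(response.status_code)
</code></pre>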
|
<python><crm><zoho>
|
2025-02-26 07:40:58
| 0
| 1,672
|
Tayyab Vohra
|