| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,336,731
| 15,804,190
|
Mock date.today() but leave other date methods alone
|
<p>I am trying to test some python code that involves setting/comparing dates, and so I am trying to leverage <code>unittest.mock</code> in my testing (using <code>pytest</code>). The current problem I'm hitting is that using <code>patch</code> appears to override all the other methods for the patched class (<code>datetime.date</code>) and so causes other errors because my code is using other methods of the class.</p>
<p>Here is a simplified version of my code.</p>
<pre class="lang-py prettyprint-override"><code>#main.py
from datetime import date, timedelta, datetime
def date_distance_from_today(dt: str | date) -> timedelta:
if not isinstance(dt, date):
dt = datetime.strptime(dt, "%Y-%m-%d").date()
return date.today() - dt
</code></pre>
<pre class="lang-py prettyprint-override"><code>#tests.py
from datetime import date, timedelta
from unittest.mock import patch
from mock_experiment import main
def test_normal(): # passes fine today, Jan 7
assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(6)
def test_normal_2(): # passes fine today, Jan 7
assert main.date_distance_from_today("2025-01-01") == timedelta(6)
def test_with_patch_on_date(): # exception thrown
with patch("mock_experiment.main.date") as patch_date:
patch_date.today.return_value = date(2025, 1, 2)
assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(1)
</code></pre>
<p>When I run these tests, the first two pass but the third gets the following exception:</p>
<pre><code>def func1(dt: str | date) -> timedelta:
> if not isinstance(dt, date):
E TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union
</code></pre>
<p>This makes sense to me (although not what I want) since I borked out the <code>date</code> object and turned it into a MagicMock and so it doesn't get handled how I want in this <code>isinstance</code> call.</p>
<p>I also tried patching <code>date.today</code>, which also failed as shown below:</p>
<pre class="lang-py prettyprint-override"><code>def test_with_mock_on_today():
with patch("mock_experiment.main.date.today") as patch_today:
patch_today.return_value = date(2025, 1, 2)
assert main.distance_from_today(date(2025, 1, 1)) == timedelta(1)
</code></pre>
<p>Exception</p>
<pre><code>TypeError: cannot set 'today' attribute of immutable type 'datetime.date'
</code></pre>
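<p>Not something mentioned above, but a commonly suggested route for exactly this situation is the third-party <code>freezegun</code> package, whose fake <code>date</code> is built so that <code>isinstance</code> checks against real dates keep working. A minimal sketch, assuming <code>freezegun</code> may be added as a test dependency:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import date, timedelta
from freezegun import freeze_time
from mock_experiment import main

@freeze_time("2025-01-02")
def test_with_freezegun():
    # date.today() inside main now returns 2025-01-02, while isinstance(dt, date)
    # and ordinary date arithmetic continue to behave normally
    assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(1)
</code></pre>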
|
<python><date><mocking><pytest><python-unittest>
|
2025-01-07 16:41:32
| 3
| 3,163
|
scotscotmcc
|
79,336,604
| 20,302,906
|
Failed creating mock folders with pyfakefs
|
<p>I'm working on a project that uses <a href="https://pytest-pyfakefs.readthedocs.io/en/stable/usage.html#test-scenarios" rel="nofollow noreferrer">pyfakefs</a> to mock my filesystem to test folder creation and missing folders in a previously defined tree structure. I'm using <strong>Python 3.13</strong> on <strong>Windows</strong> and get this output from the terminal after running my test:</p>
<p><em>Terminal output:</em></p>
<p>(Does anyone have a tip for formatting terminal output without getting automatic syntax highlighting?)</p>
<pre><code>E
======================================================================
ERROR: test_top_folders_exist (file_checker.tests.file_checker_tests.TestFolderCheck.test_top_folders_exist)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\juank\dev\projects\python\gamedev_eco\file_checker\tests\file_checker_tests.py", line 20, in test_top_folders_exist
self.fs.create_dir(Path.cwd() / "gdd")
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 2191, in create_dir
dir_path = self.absnormpath(dir_path)
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 1133, in absnormpath
path = self.replace_windows_root(path)
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 1418, in replace_windows_root
if path and self.is_windows_fs and self.root_dir:
^^^^^^^^^^^^^
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 357, in root_dir
return self._mount_point_dir_for_cwd()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 631, in _mount_point_dir_for_cwd
if path.startswith(str_root_path) and len(str_root_path) > len(mount_path):
^^^^^^^^^^^^^^^
AttributeError: 'WindowsPath' object has no attribute 'startswith'
----------------------------------------------------------------------
Ran 1 test in 0.011s
FAILED (errors=1)
</code></pre>
<p><em>Test:</em></p>
<pre><code>from pyfakefs.fake_filesystem_unittest import TestCase
class TestFolderCheck(TestCase):
"""Test top folders = gdd marketing business"""
@classmethod
def setUp(cls):
cls.setUpClassPyfakefs()
cls.fake_fs().create_dir(Path.cwd() / "gamedev_eco")
cls.fake_fs().cwd = Path.cwd() / "gamedev_eco"
def test_top_folders_exist(self):
self.fs.create_dir(Path.cwd() / "gdd")
</code></pre>
<p>What is confusing for me is that the Setup class method can create a folder and change cwd to that new folder but I'm not able to create a folder inside a test.</p>
<p>Does anyone have experience working with pyfakefs?</p>
<p>Can anyone lend me a hand with this issue please?</p>
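<p>I can't verify this against the exact setup, but since the traceback fails on <code>path.startswith(...)</code> with a <code>WindowsPath</code>, the assignment <code>cls.fake_fs().cwd = Path.cwd() / "gamedev_eco"</code> is one suspect: it stores a <code>Path</code> object as the fake filesystem's cwd. A sketch of an alternative setup that uses the instance-level pyfakefs API and changes directory through the patched <code>os.chdir</code> instead (an assumption, not a confirmed fix):</p>
<pre><code>import os
from pathlib import Path
from pyfakefs.fake_filesystem_unittest import TestCase

class TestFolderCheck(TestCase):
    def setUp(self):
        self.setUpPyfakefs()
        self.fs.create_dir(Path.cwd() / "gamedev_eco")
        # the patched os.chdir updates the fake filesystem's cwd for us
        os.chdir(Path.cwd() / "gamedev_eco")

    def test_top_folders_exist(self):
        self.fs.create_dir(Path.cwd() / "gdd")
</code></pre>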
|
<python><mocking><python-unittest><pyfakefs>
|
2025-01-07 16:01:21
| 1
| 367
|
wavesinaroom
|
79,336,594
| 2,236,794
|
uwsgi with https getting socket option missing
|
<p>I am running a Flask application on Docker with uWSGI. I have been running it for years now, but we need to add HTTPS to it. I know I can use HAProxy and do SSL offloading, but in our current setup we can't do it this way, at least not right now. We need to do the SSL directly on the application. I have tried multiple options and I keep getting "The -s/--socket option is missing and stdin is not a socket." Not sure what else to try. The server is uWSGI==2.0.26. Please help; below is my uwsgi.ini file.</p>
<pre><code>[uwsgi]
module = wsgi:app
master = true
processes = 5
enable-threads = true
single-interpreter = true
buffer-size = 32768
# protocol = http
# socket = 0.0.0.0:5000
# protocol = https
shared-socket = 0.0.0.0:5000
https = 0,/app/ssl/app_cert.crt,/app/ssl/app_cert.key
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stdout
stderr_logfile_maxbytes = 0
chmod-socket = 660
vacuum = true
die-on-term = true
py-autoreload = 1
</code></pre>
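<p>For comparison, the HTTPS example in the uWSGI documentation attaches the <code>https</code> option to the shared socket by index using <code>=0</code> rather than <code>0</code>. Adapted to the certificate paths above (a sketch only, not verified against this container):</p>
<pre><code>[uwsgi]
module = wsgi:app
master = true
processes = 5
enable-threads = true
buffer-size = 32768

# bind the shared socket, then attach the HTTPS router to it by index
shared-socket = 0.0.0.0:5000
https = =0,/app/ssl/app_cert.crt,/app/ssl/app_cert.key

die-on-term = true
vacuum = true
</code></pre>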
|
<python><flask><uwsgi><dockerpy>
|
2025-01-07 15:57:52
| 1
| 561
|
user2236794
|
79,336,492
| 6,808,709
|
Is it possible to use XOAUTH2 through imaplib to access Google Workspace Gmail without IMAP being enabled on the domain?
|
<p>I have a task to build an integration with Gmail that will collect emails from a user's inbox and store them within a Django system. The client I'm working with does not allow IMAP to be enabled on their Google Workspace account.</p>
<p>The following code runs but fails with <code>An error occurred: [ALERT] IMAP access is disabled for your domain. Please contact your domain administrator for questions about this feature. (Failure)</code>.</p>
<p>I've looked all over the Google Workspace admin but haven't found any way to enable access for this.</p>
<p>Is this even possible? I've been told that since XOAUTH2 is an authentication method I'm getting blocked at the domain level and there is going to be no way this will ever work.</p>
<p>My current (rough) code looks like this:</p>
<pre><code>from imaplib import IMAP4_SSL
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
import os.path
import pickle
def get_gmail_oauth_credentials():
SCOPES = ['https://mail.google.com/']
creds = None
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as token:
creds = pickle.load(token)
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json',
SCOPES
)
# This will open the browser for user consent
creds = flow.run_local_server(port=0)
# Save the credentials for future runs
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
return creds
def connect_to_gmail_imap(user_email):
# Get OAuth2 credentials
creds = get_gmail_oauth_credentials()
auth_string = f'user={user_email}\1auth=Bearer {creds.token}\1\1'
# Connect to Gmail's IMAP server
imap_conn = IMAP4_SSL('imap.gmail.com')
# Authenticate using XOAUTH2
imap_conn.authenticate('XOAUTH2', lambda x: auth_string)
return imap_conn
if __name__ == '__main__':
# Replace with your Gmail address
USER_EMAIL = 'test@example.com'
try:
imap = connect_to_gmail_imap(USER_EMAIL)
print("Successfully connected to Gmail!")
# Example: List all mailboxes
typ, mailboxes = imap.list()
if typ == 'OK':
print("\nAvailable mailboxes:")
for mailbox in mailboxes:
print(mailbox.decode())
imap.select('INBOX')
typ, messages = imap.search(None, 'RECENT')
if typ == 'OK':
print(f"\nFound {len(messages[0].split())} recent messages")
imap.logout()
except Exception as e:
print(f"An error occurred: {e}")
</code></pre>
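<p>For reference, and only as a suggestion (whether it is acceptable depends on the client's policy): as far as I can tell the Gmail REST API is governed separately from the IMAP toggle, so the same OAuth credentials can often read the mailbox through <code>google-api-python-client</code> without IMAP being enabled. A rough sketch reusing <code>get_gmail_oauth_credentials()</code>:</p>
<pre><code>from googleapiclient.discovery import build

def list_inbox_via_gmail_api(creds):
    # Uses the Gmail REST API instead of IMAP; requires the Gmail API to be
    # enabled for the OAuth client's Google Cloud project.
    service = build('gmail', 'v1', credentials=creds)
    resp = service.users().messages().list(
        userId='me', labelIds=['INBOX'], maxResults=10).execute()
    for ref in resp.get('messages', []):
        msg = service.users().messages().get(
            userId='me', id=ref['id'], format='metadata').execute()
        print(msg.get('snippet', ''))
</code></pre>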
|
<python><imap><google-workspace><imaplib>
|
2025-01-07 15:21:14
| 0
| 723
|
Pete Dermott
|
79,336,443
| 10,190,983
|
Python: BeautifulSoup scraping yield data
|
<p>I am trying to scrape Yield tables for several countries and several maturities from a website.
So far I only get empty tables:</p>
<p><a href="https://i.sstatic.net/md6bhS2D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/md6bhS2D.png" alt="enter image description here" /></a></p>
<p>while it should rather look like:</p>
<p><a href="https://i.sstatic.net/Tp2yhYJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tp2yhYJj.png" alt="enter image description here" /></a></p>
<p>So far I have been doing the following:</p>
<pre><code>import time
import datetime as dt
import pandas as pd
from bs4 import BeautifulSoup
from dateutil.relativedelta import relativedelta
import requests
import re
import os
path = os.getcwd()
def ZCCWord(Date,country):
# Site URL
url="http://www.worldgovernmentbonds.com/country/"+country
html_content = requests.get(url).text
soup = BeautifulSoup(html_content, "lxml")
#gdp = soup.find_all("table", attrs={"class": "w3-table w3-white table-padding-custom w3 small font-family-arial table-valign-middle"})
gdp = soup.find_all("table") # , attrs={"class": "w3-table money pd44 -f15"})
table1 = gdp[0]
body = table1.find_all("tr")
body_rows = body[1:]
all_rows = [] # will be a list for list for all rows
for row_num in range(len(body_rows)): # A row at a time
row = [] # this will old entries for one row
for row_item in body_rows[row_num].find_all("td"): #loop through all row entries
aa = re.sub("(\xa0)|(\n)|,","",row_item.text)
#append aa to row - note one row entry is being appended
row.append(aa)
# append one row to all_rows
all_rows.append(row)
AAA = pd.DataFrame(all_rows)
ZCC = pd.DataFrame()
ZCC = AAA[1].str.extract('([^a-zA-Z]+)([a-zA-Z]+)', expand=True).dropna().reset_index(drop=True)
ZCC.columns = ['TENOR', 'PERIOD']
ZCC['TENOR'] = ZCC['TENOR'].str.strip() # Remove leading/trailing spaces
#ZCC = ZCC[ZCC['TENOR'].str.isdigit()]
ZCC['TENOR'] = ZCC['TENOR'].astype(int)
ZCC['RATES'] = AAA[2].str.extract(r'([0-9.]+)', expand=True).dropna().reset_index(drop=True).astype(float)
ZCC['RATES'] = ZCC['RATES']/100
row2 = []
for i in range(len(ZCC)):
if ZCC['PERIOD'][i]=='month' or ZCC['PERIOD'][i]=='months':
b = ZCC['TENOR'][i]
bb = Date + relativedelta(months = b)
row2.append(bb)
else:
b = ZCC['TENOR'][i]
bb = Date + relativedelta(years = b)
row2.append(bb)
ZCC['DATES'] = pd.DataFrame(row2)
ZCC = ZCC.reindex(['TENOR','PERIOD','DATES','RATES'], axis=1)
return ZCC
LitsCountries = ['spain','portugal','latvia','ireland','united-kingdom',
'germany', 'france','italy','sweden','finland','greece',
'poland','romania','hungary','netherlands']
todays_date = path+'\\WorldYields' +str(dt.datetime.now().strftime("%Y-%m-%d-%H-%M") )+ '.xlsx'
writer = pd.ExcelWriter(todays_date, engine='xlsxwriter',engine_kwargs={'options':{'strings_to_urls': False}})
dictYield = {}
for i in range(len(LitsCountries)):
country = LitsCountries[i]
Date = pd.to_datetime('today').date()
country = LitsCountries[i]
ZCC = ZCCWord(Date,country)
dictYield[i] = ZCC
ZCC.to_excel(writer, sheet_name=country)
writer.close()
time.sleep(60) # wait one minute
</code></pre>
<p>I would also be fine with other websites, solutions or methods which provide similar output.
Any ideas?</p>
<p>Thanks in advance!</p>
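<p>It isn't clear from here whether the empty tables come from bot blocking or from JavaScript-rendered content, so as a first diagnostic it may be worth checking what the server actually returns for one country, e.g. by sending a browser-like User-Agent (an assumed header value) and letting <code>pandas.read_html</code> report the parsed tables. A sketch:</p>
<pre><code>import requests
import pandas as pd

url = "http://www.worldgovernmentbonds.com/country/spain"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}  # assumed UA string
html_content = requests.get(url, headers=headers, timeout=30).text

tables = pd.read_html(html_content)   # raises ValueError if no tables are parsed
print(f"{len(tables)} tables found")
print(tables[0].head())
</code></pre>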
|
<python><pandas><web-scraping><beautifulsoup>
|
2025-01-07 15:05:55
| 1
| 609
|
Luca91
|
79,336,434
| 5,118,757
|
Python polars: pass named row to pl.DataFrame.map_rows
|
<p>I'm looking for a way to apply a user defined function taking a dictionary, and not a tuple, of arguments as input when using <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.map_rows.html" rel="nofollow noreferrer"><code>pl.DataFrame.map_rows</code></a>.</p>
<p>Trying something like</p>
<pre><code>df.map_rows(lambda x: udf({k:v for k, v in zip(df.columns, x)}))
</code></pre>
<p>I'm getting a <code>RuntimeError: Already mutably borrowed</code></p>
<p>In the doc it is said that :</p>
<blockquote>
<p>The frame-level map_rows cannot track column names (as the UDF is a black-box that may arbitrarily drop, rearrange, transform, or add new columns); if you want to apply a UDF such that column names are preserved, you should use the expression-level map_elements syntax instead.</p>
</blockquote>
<p>But how does this prevent polars from passing a dict rather than a tuple to the <code>udf</code>, just like calling <code>df.row(i, named=True)</code> does? Why can't the struct be named?</p>
<p>I know I can iterate through <code>df.rows()</code> and do my user-defined stuff, then convert back to <code>pl.DataFrame</code>, but I would have liked a way to do this without leaving the polars API.</p>
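<p>For what it's worth, the expression-level route the documentation points to does receive named values: wrapping the columns in a struct and using <code>map_elements</code> passes each row to the UDF as a dict. A small sketch with a toy <code>udf</code>:</p>
<pre><code>import polars as pl

df = pl.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

def udf(row: dict) -> int:
    # each row arrives as a dict like {"a": 1, "b": 10}
    return row["a"] + row["b"]

out = df.with_columns(
    pl.struct(pl.all()).map_elements(udf, return_dtype=pl.Int64).alias("udf_result")
)
print(out)
</code></pre>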
|
<python><dataframe><user-defined-functions><python-polars>
|
2025-01-07 15:02:37
| 1
| 320
|
paulduf
|
79,336,420
| 8,150,186
|
Unable to construct a DueDate String for the Invoice API call
|
<p>I am trying to make an API call using python and <code>httpx</code> to the Xero Invoice Api. I use the following api method:</p>
<pre class="lang-py prettyprint-override"><code>async def create_invoice(
invoice_detail: InvoiceCreateRequest,
token_manager=Depends(oauth_manager)
) -> InvoiceCreateResponse:
base_url = 'https://api.xero.com/api.xro/2.0/Invoices'
headers = {
'Xero-Tenant-Id': get_settings().TENANT_ID,
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': f'Bearer {await token_manager.access_token}'
}
data = invoice_request_to_dict(invoice_detail)
async with httpx.AsyncClient() as client:
response = await client.request(
method='POST',
url=base_url,
headers=headers,
data=data
)
</code></pre>
<p>The <code>data</code> object looks like this:</p>
<pre class="lang-py prettyprint-override"><code>{
"Type": "ACCREC",
"Contact": {
"ContactID": "3ed357da-0988-4ea1-b1b7-96829e0dde69"
},
"DueDate": r"\/Date(1518685950940+0000)\/",
"LineItems": [
{
"Description": "Services as agreed",
"Quantity": "4",
"UnitAmount": "100.00",
"AccountCode": "200"
}
],
"Status": "AUTHORISED"
}
</code></pre>
<p>This leaves me with the following error:</p>
<pre><code>b'{\r\n "ErrorNumber": 14,\r\n "Type": "PostDataInvalidException",\r\n "Message": "Invalid Json data"\r\n}'
</code></pre>
<p>As far as I can figure out, it has to do with the way the <code>DueDate</code> is passed to the Xero API. They use the MS .NET date format but python adds escape characters before the <code>\</code> to give this:</p>
<p><code>'\\/Date(1518685950940+0000)\\/'</code></p>
<p>I have set up a native API call via Postman to the same Xero endpoint with the same payload and it works fine. Changing the <code>DueDate</code> object to look like the one above generates the same type of error:</p>
<pre class="lang-bash prettyprint-override"><code>JSON for post data was invalid,Could not convert string to DateTime: \/Date(1518685950940+0000)\/. Path 'DueDate', line 6, position 45.</Message>
</code></pre>
<p>I have not managed to find a way to reformat the string to get rid of the additional escape characters. Is there a specific way this is to be done? I have not been able to find anything regarding this in the dev docs.</p>
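<p>One assumption worth checking (I can't confirm it against Xero from here): <code>httpx</code>'s <code>data=</code> keyword sends a form-encoded body rather than JSON, which alone can trigger <code>PostDataInvalidException</code>. With <code>json=</code> the client serializes the dict itself, and since <code>\/</code> in the .NET date format is just JSON's escaped forward slash, the Python string itself needs no backslashes at all. A sketch:</p>
<pre class="lang-py prettyprint-override"><code>import httpx

async def post_invoice(base_url: str, headers: dict, data: dict) -> httpx.Response:
    # DueDate can be kept as a plain string such as "/Date(1518685950940+0000)/";
    # json= sets the Content-Type and escapes the payload for us.
    async with httpx.AsyncClient() as client:
        return await client.post(base_url, headers=headers, json=data)
</code></pre>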
|
<python><xero-api>
|
2025-01-07 14:57:43
| 1
| 1,032
|
Paul
|
79,336,338
| 4,581,311
|
Lifetime of local variable in Python if its reference was copied
|
<p>I did some experiments with accessing and using a local variable of a function
at the caller level, by saving a reference to the local variable into a variable defined at the caller level rather than using the return statement.
I checked this case in a member function of a class as well.
In both cases I found that the memory of the local variable remained valid even if I explicitly deleted the local variable inside the function.
My goal would be to use simple local variables in member functions of classes and, before the end of the function, to save references to some of the local variables into a variable at a higher level, to avoid a lot of 'self.' text in the source code.</p>
<pre><code>cat b.py
b=[0,0]
print('\nIn function:')
def f():
global b
x=[i for i in range(10)] # here may be a lots of statements to compute x var
b=x # end of the above I save the reference of local x var in global var b
print('f1: b, id(b), id(x) ==', b, id(b), id(x))
del x # deleting local var
print('f2: b, id(b) ==', b, id(b))
print('m1: b, id(b) ==', b, id(b))
f()
print('m2: b, id(b) ==', b, id(b))
print('\nIn class:')
class c1:
def f(s, n):
# ----here is a lots of statements to compute x var ---
x=n
# -------------
s.y=x # end of the above I save the reference of local x var in object var y
print('c1: id(x), id(s.y) ==', id(x), id(s.y))
del x # deleting local var
print('c2: id(s.y) ==', id(s.y))
o=c1()
o.f([1,2,3])
print('m3: id(o.y), o.y ==', id(o.y), o.y)
>>> exec(open('b.py').read())
In function:
m1: b, id(b) == [0, 0] 140281480270856
f1: b, id(b), id(x) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 140281547613640 140281547613640
f2: b, id(b) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 140281547613640
m2: b, id(b) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 140281547613640
In class:
c1: id(x), id(s.y) == 140281480270344 140281480270344
c2: id(s.y) == 140281480270344
m3: id(o.y), o.y == 140281480270344 [1, 2, 3]
</code></pre>
<p>Would this also be a legal way to use local variables of functions?</p>
<p>Update (08 Jan 2025):</p>
<p>I knew that the memory of an object remains valid as long as there is more than zero references to it in the same scope,
but I didn't know that this also holds across different scopes.
Coming from C, I thought that the storage of a function's local variable would be destroyed
after the function has finished, even if there is a reference to it from a parent scope.
Now I think this situation probably also occurred when I used a global
variable in a function, because Python doesn't copy the values, and
probably with a return statement as well.
But because of these special circumstances (return, global) I never started to think about what goes on behind the scenes.
I wanted to create an object/instance with the following goal:
after an invocation of o.f(par) I can use o.var as the result of o.f(par),
and I repeat this several times with a new parameter for o.f(par).
So I had to figure out how to pass the information from the local memory of o.f(par)
to o.var, and I was not sure that this local memory would remain valid once o.f() had finished and the only reference to it was outside that scope.</p>
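<p>For what it's worth, CPython only frees an object once its reference count reaches zero, no matter which scope the surviving references live in; <code>del x</code> removes the name, not the object. A tiny sketch of the same effect:</p>
<pre><code>import sys

def f():
    x = [1, 2, 3]   # created in f's local scope
    return x        # the caller's name will keep the list alive

b = f()             # f's frame (and the name x) are gone, the list is not
print(b)                   # [1, 2, 3]
print(sys.getrefcount(b))  # > 1: b itself plus the temporary argument reference
</code></pre>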
|
<python>
|
2025-01-07 14:30:22
| 1
| 749
|
László Szilágyi
|
79,336,266
| 5,213,451
|
Can I make an asynchronous function indifferentiable from a synchronous one?
|
<p>If I can write a function in either synchronous or asynchronous fashion (e.g. for retrieving some data from an API), I would like to ideally only implement it once as asynchronous, and run it as synchronous whenever I implement a non-<code>async</code> function.</p>
<p>I'm therefore looking for a <code>run</code> function with the following signature:</p>
<pre class="lang-py prettyprint-override"><code>def run(coro: Coroutine[Any, Any, _ReturnT]) -> _ReturnT: ...
</code></pre>
<p>An easy solution would be to simply use the run function defined in <code>asyncio</code>:</p>
<pre class="lang-py prettyprint-override"><code>run1 = asyncio.run
</code></pre>
<p>The problem is that if I wrap this function into a synchronous version:</p>
<pre class="lang-py prettyprint-override"><code>def sync_f():
return run(async_f())
</code></pre>
<p>Then I can't use <code>sync_f</code> exactly like a synchronous function.
To see this, imagine another (sync) module builds on <code>sync_f</code> to do other synchronous stuff:</p>
<pre class="lang-py prettyprint-override"><code>def sync_g():
print("Doing some synchronous things")
res = sync_f()
print("Doing some other synchronous things")
return res
</code></pre>
<p>And that finally some asynchronous function <code>async_h</code> wants to use the <code>sync_g</code> logic:</p>
<pre class="lang-py prettyprint-override"><code>async def async_h():
print("Doing some asynchronous things")
res = sync_g()
print("Doing some other asynchronous things")
return res
</code></pre>
<p>Then running that last function, for instance in a <code>__main__</code> block with <code>asyncio.run(async_h())</code>, will result in the following error: <code>RuntimeError: asyncio.run() cannot be called from a running event loop</code>.</p>
<p>I tried to be a bit smarter with my definition of <code>run</code>, trying to see if a higher-level loop is currently running, and running my coroutine in <em>that</em>:</p>
<pre class="lang-py prettyprint-override"><code>def run2(coro: Coroutine[Any, Any, _ReturnT]) -> _ReturnT:
try:
loop = asyncio.get_running_loop()
except RuntimeError:
return asyncio.run(coro)
else:
return loop.run_until_complete(coro)
</code></pre>
<p>But to no avail: <code>RuntimeError: This event loop is already running</code>.
Which makes sense, but I couldn't find anything that would reuse the currently running loop (something like <code>loop.wait_until_complete(coro)</code>).</p>
<p>Isn't there any way to wrap an asynchronous function into a normal one that will work exactly as a synchronous one, without having the implementation detail of the asynchronous version leaking into the higher contexts?</p>
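<p>One frequently suggested workaround, shown here only as a sketch: when a loop is already running in the current thread, run the coroutine on a fresh loop in a worker thread and block on the result. It avoids both errors above, at the price of blocking the outer loop's thread while the inner work runs:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import concurrent.futures
from typing import Any, Coroutine, TypeVar

_ReturnT = TypeVar("_ReturnT")

def run3(coro: Coroutine[Any, Any, _ReturnT]) -> _ReturnT:
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # no loop in this thread: plain asyncio.run is fine
        return asyncio.run(coro)
    # a loop is already running here: give the coroutine its own loop in a
    # worker thread and block until it finishes (this blocks the outer loop!)
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
</code></pre>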
|
<python><python-asyncio>
|
2025-01-07 14:00:05
| 4
| 1,000
|
Thrastylon
|
79,336,182
| 12,642,588
|
Sample Size for A/B Test
|
<p>I want to test if a feature increases conversion.</p>
<p>I want to detect a 5% increase, and my baseline is 80%. However, my code says that I need a sample size of 903 per group. Is there another method of sample calculation, perhaps for non-parametric tests, that can handle a smaller sample size?</p>
<pre><code>import statsmodels.stats.api as sms
# Proportions
p1 = 0.80
p2 = 0.85
# Calculate effect size using the proportions
effect_size = sms.proportion_effectsize(p1, p2)
# Desired power and significance level
power = 0.80
alpha = 0.05
# Calculate the required sample size per group
sample_size_per_group = sms.NormalIndPower().solve_power(effect_size, power=power,alpha=alpha, ratio=1)
print(f"Required sample size per group: {sample_size_per_group}")
</code></pre>
|
<python><statistics><hypothesis-test>
|
2025-01-07 13:30:39
| 0
| 485
|
bakun
|
79,336,132
| 357,313
|
What does IPython expect after an underscore in a numeric literal?
|
<p>I know that <code>_</code>s can be used as digit separator within numeric literals, between (groups of) digits, for instance <code>1_000</code> means <code>1000</code> and <code>0_0</code> means <code>00</code> or just <code>0</code>. Today I accidentally typed a <em>letter</em> after the underscore, say <code>0_z</code>. After pressing Enter, IPython showed the continuation prompt and the cursor: it expected more! But what? Example:</p>
<pre class="lang-none prettyprint-override"><code>Python 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: 0_z
...:
...:
...: |
</code></pre>
<p>The vanilla Python prompt gives an expected syntax error instead:</p>
<pre class="lang-none prettyprint-override"><code>Python 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 0_z
File "<stdin>", line 1
0_z
^
SyntaxError: invalid decimal literal
</code></pre>
<p>Of course <kbd>Ctrl</kbd>+<kbd>C</kbd> gets me out of the problem. But exactly what (feature?) got me into it?</p>
|
<python><ipython>
|
2025-01-07 13:14:55
| 0
| 8,135
|
Michel de Ruiter
|
79,336,036
| 10,944,175
|
How to remove Holoviews BoxWhisker duplicate legend?
|
<p>I'm using holoviews and panel in python with a bokeh backend to create a boxplot.
Unfortunately, the legend in the boxplot shows all the entries twice. I don't know if I am doing something wrong or if it is a bug, but I would like to get rid of the duplicate entries.</p>
<p>Here is a minimal working example:</p>
<pre><code>import holoviews as hv
import panel as pn
import numpy as np
import pandas as pd
hv.extension('bokeh')
pn.extension()
np.random.seed(42)
values = np.random.uniform(10, 20, size=100)
names = np.random.choice(['Name_A', 'Name_B', 'Name_C'], size=100, replace=True)
df = pd.DataFrame({'value': values, 'name': names})
boxplot = hv.BoxWhisker(df, kdims='name', vdims='value').opts(
box_color='name',
cmap='Set1')
box_plot_pane = pn.pane.HoloViews(boxplot.opts(show_legend=True))
box_plot_pane.show()
</code></pre>
<p>which results in the following image, showing the duplicate legend entries:</p>
<p><img src="https://i.sstatic.net/xFa769nim.png" alt="Boxplot with duplicate legend entries" /></p>
<p>Is there a workaround in case of a bug or am I doing something wrong?</p>
|
<python><bokeh><panel><holoviews>
|
2025-01-07 12:41:13
| 1
| 549
|
Freya W
|
79,336,023
| 7,959,614
|
Forward fill Numpy matrix / mask with values based on condition
|
<p>I have the following matrix</p>
<pre><code>import numpy as np
A = np.array([
[0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]
]).astype(bool)
</code></pre>
<p>How do I fill all the rows column-wise after a column is <code>True</code>?</p>
<p>My desired output:</p>
<pre><code> [0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0]
</code></pre>
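<p>For reference, "everything after the first <code>True</code> in each row" can be expressed as a running OR along the row axis; a sketch:</p>
<pre><code>import numpy as np

A = np.array([
    [0, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0]
]).astype(bool)

# running OR along each row: once a True is seen, the rest of the row stays True
filled = np.logical_or.accumulate(A, axis=1)
print(filled.astype(int))
</code></pre>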
|
<python><numpy>
|
2025-01-07 12:36:21
| 3
| 406
|
HJA24
|
79,335,929
| 13,840,270
|
Difference between bool(prod( ... )) and all( ... )
|
<p>Since I am not "native" in Python, I used <code>bool(prod( ... )</code> to check if all elements in a list of boolean are <code>True</code>. I made use of the property, that boolean are mapped to <code>0</code> and <code>1</code> and since any <code>0</code> in a product makes the whole product <code>0</code> it served my needs.</p>
<p>Now stumbled across <code>all( ... )</code>. As can be seen from this program, they return the equivalent output:</p>
<pre class="lang-py prettyprint-override"><code>from random import randint
from math import prod
# create a array where each element has a 10% chance of being False, else True
l_bool=[randint(0,9)>0 for _ in range(10)]
# check if all elements are True
bool(prod(l_bool)),all(l_bool)
</code></pre>
<p>I understand that <a href="https://docs.python.org/3/library/math.html#math.prod" rel="nofollow noreferrer"><code>prod()</code></a> is more versatile, as it needs to store the intermediate result of the product and it "is intended specifically for use with numeric values".</p>
<p>From the documentation of <a href="https://docs.python.org/3/library/functions.html#all" rel="nofollow noreferrer"><code>all()</code></a> it can be seen that it iterates through input until catching a <code>False</code> and is probably optimized for exactly this task.</p>
<p>However, I would assume that <code>prod()</code> internally also returns <code>0</code> as soon as any <code>0</code> is encountered, and my understanding of iterables is that it should not produce a large overhead to keep the intermediate multiplication result in memory.</p>
<p>I get that <code>all()</code> should be used here, but is there any <em>significant</em> difference I am overlooking? I think I must be, since <code>bool(prod( ... ))</code> is significantly (~45%) slower:</p>
<pre class="lang-py prettyprint-override"><code>import time
n=100_000
t0 = time.time()
for _ in range(n):
bool(prod(randint(0,9)>0 for _ in range(10)))
t_bool_prod=time.time()-t0
t1=time.time()
for _ in range(n):
all(randint(0,9)>0 for _ in range(10))
t_all=time.time()-t1
print(f"bool(prod( ... )) takes {round(100*((t_bool_prod/t_all)-1),2)}% longer!")
</code></pre>
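<p>One concrete difference: <code>all()</code> stops at the first falsy element, while <code>math.prod()</code> always consumes the whole iterable before <code>bool()</code> is applied. A sketch that isolates just that effect, with the random generation taken out of the timed part (timings will vary by machine):</p>
<pre class="lang-py prettyprint-override"><code>from math import prod
from timeit import timeit

data = [True] * 9 + [False] + [True] * 990   # one False early in a long list

# all() can stop at index 9; prod() multiplies all 1000 elements
print("all:        ", timeit(lambda: all(data), number=100_000))
print("bool(prod): ", timeit(lambda: bool(prod(data)), number=100_000))
</code></pre>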
|
<python>
|
2025-01-07 12:02:44
| 2
| 3,215
|
DuesserBaest
|
79,335,633
| 8,031,956
|
"OSError: [Errno 9] Bad file descriptor" with pandas.to_excel() function
|
<p>I am using the function <code>.to_excel()</code> in pandas,</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4]])
with pd.ExcelWriter('output.xlsx') as writer:
df.to_excel(writer, sheet_name="MySheet")
</code></pre>
<p>This results in the error <code>OSError: [Errno 9] Bad file descriptor</code></p>
<p>The excel file already exists, but is not open elsewhere.</p>
<p>I am using python 3.10.11, pandas 2.2.3, openpyxl 3.1.5</p>
<p><strong>Traceback:</strong></p>
<pre><code> with pd.ExcelWriter('output.xlsx') as writer:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\excel\_base.py:1353 in __exit__
self.close()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\excel\_base.py:1357 in close
self._save()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\excel\_openpyxl.py:110 in _save
self.book.save(self._handles.handle)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\workbook\workbook.py:386 in save
save_workbook(self, filename)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\writer\excel.py:294 in save_workbook
writer.save()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\writer\excel.py:275 in save
self.write_data()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\writer\excel.py:60 in write_data
archive.writestr(ARC_APP, tostring(props.to_tree()))
File ~\AppData\Local\Programs\Python\Python310\lib\zipfile.py:1816 in writestr
with self.open(zinfo, mode='w') as dest:
File ~\AppData\Local\Programs\Python\Python310\lib\zipfile.py:1180 in close
self._fileobj.seek(self._zinfo.header_offset)
OSError: [Errno 9] Bad file descriptor
</code></pre>
|
<python><pandas><openpyxl>
|
2025-01-07 10:26:51
| 1
| 559
|
Rémi Baudoux
|
79,335,629
| 4,247,881
|
Transpose dataframe with List elements
|
<p>I have a dataframe like</p>
<pre class="lang-none prettyprint-override"><code>┌─────┬────────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┬───┬───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┐
│ rul ┆ 647833 ┆ 64783 ┆ 64783 ┆ 64783 ┆ 64720 ┆ 64783 ┆ 64783 ┆ 64783 ┆ 64682 ┆ … ┆ 64681 ┆ 64681 ┆ 64681 ┆ 64719 ┆ 64681 ┆ 64681 ┆ 64682 ┆ 64682 ┆ 64682 ┆ 64682 │
│ e ┆ --- ┆ 0 ┆ 4 ┆ 1 ┆ 1 ┆ 2 ┆ 5 ┆ 6 ┆ 8 ┆ ┆ 7 ┆ 2 ┆ 3 ┆ 8 ┆ 5 ┆ 6 ┆ 1 ┆ 3 ┆ 2 ┆ 4 │
│ --- ┆ list[s ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ tr] ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ ┆ list[ │
│ ┆ ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] ┆ str] │
╞═════╪════════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╪═══════╡
│ cs_ ┆ ["Info ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ ["Cri ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ … ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ ["Cri ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ ["Inf ┆ ["Inf │
│ rul ┆ ", ┆ o", ┆ o", ┆ o", ┆ tical ┆ o", ┆ o", ┆ o", ┆ o", ┆ ┆ o", ┆ o", ┆ o", ┆ tical ┆ o", ┆ o", ┆ o", ┆ o", ┆ o", ┆ o", │
│ e_d ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ ", ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ ", ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] ┆ "Ok"] │
│ ata ┆ ┆ ┆ ┆ ┆ "No ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ "No ┆ ┆ ┆ ┆ ┆ ┆ │
│ _ch ┆ ┆ ┆ ┆ ┆ data ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ data ┆ ┆ ┆ ┆ ┆ ┆ │
│ eck ┆ ┆ ┆ ┆ ┆ avail ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ avail ┆ ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ ┆ ┆ ┆ iab… ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ iab… ┆ ┆ ┆ ┆ ┆ ┆ │
└─────┴────────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┘
</code></pre>
<p>I am simply trying to do a transpose so the rules become columns and the current columns become rows.
As polars' transpose doesn't seem to work with List types, I am trying to convert the lists to comma-separated strings like</p>
<pre><code>df = site_df.with_columns([pl.all().exclude("cs_rule_data_check").arr.join(",")])
</code></pre>
<p>but I keep getting</p>
<pre class="lang-none prettyprint-override"><code>polars.exceptions.SchemaError: invalid series dtype: expected `FixedSizeList`, got `list[str]`
</code></pre>
<p>An example taking from answer below is</p>
<pre><code>df = pl.DataFrame({
"rule": "cs_rule_data_check",
"647833": [["Info","Ok"]],
"647201": [["Critical"]]
})
print(df)
#df = df.transpose(include_header=True, header_name="variable", column_names="rule")
df.unpivot(index="rule").pivot(on="rule", index="variable")
</code></pre>
<p>Both transpose and unpivot give this error:</p>
<pre><code>pyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: InvalidOperation(ErrString("cannot cast List type (inner: 'String', to: 'String')"))
</code></pre>
<p>I am running the latest polars</p>
<p>How can I transpose my dataframe?</p>
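<p>A hedged, version-dependent observation: on recent polars releases the <code>.arr</code> namespace is for fixed-size <code>Array</code> columns, which is what the <code>expected FixedSizeList</code> message hints at, while plain <code>list[str]</code> columns are handled by the <code>.list</code> namespace. A sketch on the small example above, after which the commented-out <code>transpose</code> call should no longer hit the List cast:</p>
<pre><code>import polars as pl

df = pl.DataFrame({
    "rule": ["cs_rule_data_check"],
    "647833": [["Info", "Ok"]],
    "647201": [["Critical"]],
})

# join the list[str] cells into plain strings, then transpose
df = df.with_columns(pl.all().exclude("rule").list.join(","))
print(df.transpose(include_header=True, header_name="variable", column_names="rule"))
</code></pre>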
|
<python><dataframe><python-polars><polars>
|
2025-01-07 10:25:39
| 1
| 972
|
Glenn Pierce
|
79,335,580
| 10,722,752
|
Getting strange output when using group by apply with np.select function
|
<p>I am working with time series data in which I am trying to perform outlier detection using the IQR method.</p>
<p>Sample Data:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'datecol' : pd.date_range('2024-1-1', '2024-12-31'),
'val' : np.random.randint(low = 100, high = 5000, size = 366)})
</code></pre>
<p>my function:</p>
<pre><code>def is_outlier(x):
iqr = x.quantile(.75) - x.quantile(.25)
outlier = (x <= x.quantile(.25) - 1.5*iqr) | (x >= x.quantile(.75) + 1.5*iqr)
return np.select([outlier], [1], 0)
df.groupby(df['datecol'].dt.weekday)['val'].apply(is_outlier)
</code></pre>
<p>to which the output is something like below:</p>
<pre><code>0 [1,1,0,0,....
1 [1,0,0,0,....
2 [1,1,0,0,....
3 [1,0,1,0,....
4 [1,1,0,0,....
5 [1,1,0,0,....
6 [1,0,0,1,....
</code></pre>
<p>I am expecting a single series as output which I can add back to the original <code>dataframe</code> as a flag column.</p>
<p>Can someone please help me with this?</p>
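<p>A sketch of one way to get an index-aligned flag column, reusing the <code>df</code> above: return a pandas Series (so each group keeps its original index) and use <code>transform</code> instead of <code>apply</code>:</p>
<pre><code>def outlier_flag(x):
    iqr = x.quantile(.75) - x.quantile(.25)
    outlier = (x <= x.quantile(.25) - 1.5*iqr) | (x >= x.quantile(.75) + 1.5*iqr)
    return outlier.astype(int)   # Series aligned on x's index, unlike np.select

df['outlier'] = df.groupby(df['datecol'].dt.weekday)['val'].transform(outlier_flag)
</code></pre>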
|
<python><pandas><numpy>
|
2025-01-07 10:07:29
| 1
| 11,560
|
Karthik S
|
79,335,533
| 11,863,823
|
How to type Polars' Series in Python?
|
<p>I'm trying to type my functions in Python for <code>polars.Series</code> objects of a specific <code>dtype</code>.</p>
<p>For instance, in a MWE, a function could look like:</p>
<pre class="lang-py prettyprint-override"><code>import typing as tp
import polars as pl
u = pl.Series(range(5))
def f(L: tp.Sequence[int]) -> int:
return len(L) + sum(L)
print(f(u))
</code></pre>
<p>It runs as expected but Mypy complains about this:</p>
<pre><code>error: Argument 1 to "f" has incompatible type "Series"; expected "Sequence[int]" [arg-type]
</code></pre>
<p>I would have expected <code>pl.Series</code> to be recognized as a <code>tp.Sequence</code>, but it is not the case, even though <code>pl.Series</code> have the required <code>__len__</code> and <code>__getitem__</code> methods.</p>
<p>A way to do this would be to extend my typing:</p>
<pre class="lang-py prettyprint-override"><code>import typing as tp
import polars as pl
u = pl.Series(range(5))
def f(L: tp.Sequence[int] | pl.Series) -> int:
return len(L) + sum(L)
print(f(u)) # should pass (u is a pl.Series, result is an int)
print(f(u+0.1)) # should fail (u is a pl.Series but result is not an int)
</code></pre>
<p>but Mypy finds no issue in this code, where it should flag the second call to <code>f</code> (this is because <code>sum</code> is not type-checked, which is beyond the scope of this MWE; my issue still stands, as I wanted to ensure that my <code>pl.Series</code> is full of integers and the <code>sum</code> call was a simple proxy for this. I just wanted to demonstrate that typing my input as <code>pl.Series</code> does not fix the issue).</p>
<p>Last attempt, I tried to subscript the <code>pl.Series</code> type as it has a <code>dtype</code> attribute that describes the data inside of it (in my case <code>u.dtype</code> is <code>Int64</code>, an internal Polars type).</p>
<pre class="lang-py prettyprint-override"><code>import typing as tp
import polars as pl
u = pl.Series(range(5))
def f(L: tp.Sequence[int] | pl.Series[int]) -> int:
return len(L) + sum(L)
</code></pre>
<p>but Series cannot be subscripted like this:</p>
<pre><code>error: "Series" expects no type arguments, but 1 given [type-arg]
</code></pre>
<p>I could not find any further leads on the proper way to constrain the inner type of my Polars Series, so any hint is welcome, thanks!</p>
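<p>As far as I can tell <code>pl.Series</code> is simply not generic at type-check time (hence the "expects no type arguments" error), so one pragmatic pattern, shown only as a sketch, is to accept <code>Sequence[int] | pl.Series</code> and validate the dtype at runtime:</p>
<pre class="lang-py prettyprint-override"><code>import typing as tp
import polars as pl

INT_DTYPES = (pl.Int8, pl.Int16, pl.Int32, pl.Int64,
              pl.UInt8, pl.UInt16, pl.UInt32, pl.UInt64)

def f(L: tp.Sequence[int] | pl.Series) -> int:
    # the element type can only be enforced at runtime, not by Mypy
    if isinstance(L, pl.Series) and L.dtype not in INT_DTYPES:
        raise TypeError(f"expected an integer Series, got {L.dtype}")
    return len(L) + int(sum(L))
</code></pre>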
|
<python><python-typing><mypy><python-polars>
|
2025-01-07 09:53:10
| 0
| 628
|
globglogabgalab
|
79,335,164
| 19,546,216
|
How can I set a cookie using Selenium on a site BEFORE accessing it?
|
<p>When I try to access our test environment LOCALLY (<a href="https://www.test.net" rel="nofollow noreferrer">https://www.test.net</a>), we are immediately redirected to Google SSO.</p>
<p>How can we set a cookie for a site <strong>BEFORE</strong> accessing it?</p>
<p><a href="https://i.sstatic.net/7wLAfzeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7wLAfzeK.png" alt="" /></a></p>
<p>I read from <a href="https://www.selenium.dev/documentation/webdriver/interactions/cookies/" rel="nofollow noreferrer">https://www.selenium.dev/documentation/webdriver/interactions/cookies/</a> that we have to be on the page (loaded) before we can set the cookie.</p>
<p>I tried the following but cannot pass through:</p>
<p>Trial ## (Getting <code>Message: invalid cookie domain: Cookie 'domain' mismatch</code> because it's redirected to Google SSO):</p>
<pre><code>def get_url(self, url):
if os.getenv('GITHUB_ACTIONS', 'false').lower() == 'true':
return
else:
self.driver.get('https:test.net/')
time.sleep(3)
cookie = {
'name': '_oauth2_proxy_qs_staging',
'value': self.cookie,
'domain': 'test.net'
}
self.driver.add_cookie(cookie)
self.driver.refresh()
self.driver.get('https:test.net/')
time.sleep(3)
</code></pre>
<p>Other trials #1 (Stays in Google SSO):</p>
<pre><code>def get_url(self, url):
if os.getenv('GITHUB_ACTIONS', 'false').lower() == 'true':
return
else:
self.driver.get('https:test.net/')
time.sleep(3)
cookie = {
'name': '_oauth2_proxy_qs_staging',
'value': self.cookie,
}
self.driver.add_cookie(cookie)
self.driver.refresh()
self.driver.get('https:test.net/')
time.sleep(3)
</code></pre>
<p>Other trials #2 (<code>Getting selenium.common.exceptions.InvalidCookieDomainException: Message: invalid cookie domain</code>, even if we retain the <code>'domain'</code>, same error, because it's redirected to Google SSO)</p>
<pre><code>def get_url(self, url):
if os.getenv('GITHUB_ACTIONS', 'false').lower() == 'true':
return
else:
self.driver.get('about:blank')
time.sleep(3)
cookie = {
'name': '_oauth2_proxy_qs_staging',
'value': self.cookie,
'domain': 'test.net'
}
self.driver.add_cookie(cookie)
self.driver.refresh()
self.driver.get('https:test.net/')
time.sleep(3)
</code></pre>
<p>Other trials #3 (Working BUT SOLUTION WAS NOT ACCEPTED DUE TO A SITE DEPENDENCY, this works because a subdomain of <code>test.net</code> was loaded before we can set the cookie)</p>
<pre><code>def get_url(self, url):
# THIS IS NOT AN ACCEPTED SOLUTION
if os.getenv('GITHUB_ACTIONS', 'false').lower() == 'true':
return
else:
self.driver.get('https:test.test.net/')
time.sleep(3)
cookie = {
'name': '_oauth2_proxy_qs_staging',
'value': self.cookie,
'domain': 'test.net'
}
self.driver.add_cookie(cookie)
self.driver.refresh()
self.driver.get('https:test.net/')
time.sleep(3)
</code></pre>
<p>Is there another way, besides using a VPN, for this case? It's only a problem when we run locally; in GitHub Actions we have our own runner that doesn't require Google SSO.</p>
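<p>If the local browser is Chrome or Chromium, one option that does not require loading the site first is setting the cookie through the DevTools protocol before navigation; a sketch using the placeholder domain <code>test.net</code> from above:</p>
<pre><code>def add_cookie_before_navigation(self):
    # Chrome/Chromium only: CDP can set a cookie for a domain that has not
    # been visited yet, so the redirect to Google SSO never gets a chance to matter
    self.driver.execute_cdp_cmd("Network.enable", {})
    self.driver.execute_cdp_cmd("Network.setCookie", {
        "name": "_oauth2_proxy_qs_staging",
        "value": self.cookie,
        "domain": "test.net",
        "path": "/",
        "secure": True,
    })
    self.driver.get("https://test.net/")
</code></pre>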
|
<python><selenium-webdriver><cookies><google-sso>
|
2025-01-07 07:24:28
| 0
| 321
|
Faith Berroya
|
79,335,053
| 8,206,716
|
Replace substring if key is found in another file
|
<p>I have files associated with people scattered around different directories. I can find them with a master file. Some of them need to be pulled into my working directory. Once I've pulled them, I need to update the master file to reflect the change. To keep track of which files were moved, I record the person's name in another file.</p>
<p>master.txt</p>
<pre><code>Bob "/home/a/bob.txt"
Linda "/home/b/linda.txt"
Joshua "/home/a/josh.txt"
Sam "/home/f/sam.txt"
</code></pre>
<p>moved.txt</p>
<pre><code>Linda
Sam
</code></pre>
<p>Expected result of master.txt</p>
<pre><code>Bob "/home/a/bob.txt"
Linda "/workingdir/linda.txt"
Joshua "/home/a/josh.txt"
Sam "/workingdir/sam.txt"
</code></pre>
<p>I've tried</p>
<pre><code>grep -f moved.txt master.txt | sed "s?\/.*\/?"`pwd`"\/?"
grep -f moved.txt master.txt | sed "s?\/.*\/?"`pwd`"\/?" master.txt
grep -f moved.txt master.txt | sed -i "s?\/.*\/?"`pwd`"\/?"
</code></pre>
<p>As an added complication, this is going to execute as part of a python script, so it needs to be able to work within a subprocess.run(cmd).</p>
<hr />
<p>Update 1:</p>
<p>Based on some questions, here is what the relevant section of my Python code looks like. I'm trying to figure out what the next step should be in order to update the paths of the flagged files in master.</p>
<pre><code>commands = ['program finder.exe "flaggedfile" > master.txt'
,'sed "\#"`pwd`"#d" list.txt | sed "s/:.*//" > moved.txt'
,'program mover.exe moved.txt .'
#,'cry'
]
for cmd in commands:
status = subprocess.run(cmd
,cwd=folder
,shell=True
,stdout=subprocess.DEVNULL
,stderr=subprocess.DEVNULL
)
</code></pre>
<p>"program" is a program that I work with, and "finder.exe" and "mover.exe" are executables for that program, which I'm using to locate flagged files and move into the working directory.</p>
|
<python><string><sed>
|
2025-01-07 06:24:45
| 3
| 535
|
David Robie
|
79,334,708
| 12,961,237
|
Is there any way to limit the operations TVMs autoscheduler can use when creating a schedule?
|
<p>I'm building my own auto tuner for TVM schedules. I would like to test it against TVM's built-in <code>auto_scheduler</code>. However, the <code>auto_scheduler</code> uses a lot of advanced scheduling operations that my tuner does not yet support. Is there any way to restrict <code>auto_scheduler</code> to only use the operations Split, Tile and Reorder?</p>
|
<python><gpu><apache-tvm>
|
2025-01-07 01:13:34
| 0
| 1,192
|
Sven
|
79,334,568
| 1,324,833
|
How to use subprocess.run when the subprocess requires an <ENTER> to complete
|
<p>I have a Windows console executable provided by a third party. It's a file converter and requires an input and output file. On completion it shows a "Press any key to exit" prompt. I'm trying to run it from within my program using subprocess.run. It works, but it shows the prompt and an unhandled exception error in the console.</p>
<pre><code>def load_mydata(self, file):
_output = file.with_suffix(".csv")
# delete output file if it already exists
_output.unlink(missing_ok=True)
_args = 'MyDataConverter.exe input="{}" output="{}"'.format(file, _output)
print(_args)
subprocess.run(_args)
print('tada')
</code></pre>
<p>returns this between the print statements:</p>
<pre><code>Press any key to exit.
Unhandled Exception: System.InvalidOperationException: Cannot read keys when either application does not have a console or when console input has been redirected from a file. Try Console.Read.
at System.Console.ReadKey(Boolean intercept)
at Converter.Program.ProgramExit(TextWriter output, Int32 returnValue)
at Converter.Program.Main(String[] args)
</code></pre>
<p>I've tried this:</p>
<pre><code>subprocess.run(_args, input=b'\n')
</code></pre>
<p>and</p>
<pre><code>with open('MyResponsefile.txt', 'r') as fIn:
subprocess.run(_args, stdin=fIn)
</code></pre>
<p>where the response file just contained a character and a newline.</p>
<p>I even tried</p>
<pre><code>try:
subprocess.run(_args)
except:
pass
</code></pre>
<p>but I get the same result every time.</p>
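<p>If the converter still writes the output file despite the exception (worth checking via the return code and the file itself), one pragmatic sketch is simply to keep its console noise out of yours; it does not stop the third-party tool from raising on <code>Console.ReadKey</code>:</p>
<pre><code>import subprocess

def run_converter(args: str) -> None:
    # capture the child's output so the prompt and the .NET stack trace
    # never reach our console; inspect them only when something went wrong
    result = subprocess.run(args, capture_output=True, text=True)
    if result.returncode != 0:
        print("converter reported:", result.stderr.strip())
</code></pre>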
|
<python>
|
2025-01-06 23:23:54
| 1
| 1,237
|
marcp
|
79,334,475
| 4,755,229
|
Writing Cython class with template?
|
<p><strong>Background</strong>: I am fairly new to Cython, and I am not experienced with C++. I am aware of the C++ template feature, but I am not really sure how to write a proper class in C++.</p>
<p><strong>Objective</strong>: I am trying to write a class in Cython that mimics behaviors of a dictionary. But, instead of strings, I want to use enum, or other user-defined types as keys.</p>
<p><strong>Previous Research</strong>: Based on <a href="https://stackoverflow.com/a/44530741/4755229">1</a>, <a href="https://github.com/cython/cython/wiki/WrappingCPlusPlus#templates" rel="nofollow noreferrer">2</a>, and <a href="https://cython-docs2.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html" rel="nofollow noreferrer">3</a>, it seems like I can use template in Cython classes. But I was not able to find any other documentation on this.</p>
<p><strong>My Attempts</strong>: Given these, what I wrote is just a simple modification to a Cython class I wrote without template, which reads,</p>
<pre class="lang-py prettyprint-override"><code>cdef class ScalarDict[T]:
cdef:
public long N
public long ndata
size_t size
long __current_buffer_len
T * __keys
double ** __values
def __cinit__(self, long ndata, long N=4):
cdef long i
self.__current_buffer_len = N
self.N = 0
self.ndata = ndata
self.size = ndata*sizeof(double)
self.__keys = <T *>malloc(self.__current_buffer_len*sizeof(T))
self.__values = <double **>malloc(self.__current_buffer_len*sizeof(double *))
def __dealloc__(self):
cdef long i
if self.N > 0:
for i in range(self.N):
if self.__values[i] != NULL:
free(self.__values[i])
if self.__values != NULL:
free(self.__values)
if self.__keys != NULL:
free(self.__keys)
# other methods...
</code></pre>
<p>But this fails to compile, with error messages including <code>Name options only allowed for 'public', 'api', or 'extern' C class</code> and <code>Expected 'object', 'type' or 'check_size'</code> on the line with <code>cdef class ScalarDict[T]:</code>.</p>
<p>What is the correct way of doing this? Or, is it not possible?</p>
|
<python><c++><cython>
|
2025-01-06 22:12:08
| 0
| 498
|
Hojin Cho
|
79,334,435
| 425,893
|
How do I post directly to a specific URL with Python Mechanize?
|
<p>With Perl's mechanize module, nearly anything seems possible. But I'm not using Perl; still, I expected to be able to do what was needed with Python. The login page that I have to automate can't be reached directly. The visible URL in the browser does not contain the web form (nor does anything; in web 2.0 fashion it's just a bunch of XHR requests and built-in JavaScript), but there is a URL that I can post to directly with curl. The documentation suggests that <code>browser.open(url, data=whatever)</code> might intelligently decide, but it appears that it is still using the GET HTTP method. Or possibly it is using POST, but with <code>application/x-www-form-urlencoded</code> instead of JSON.</p>
<p>Code in question's pretty basic:</p>
<p><code>r = br.open('https://major-retailer.com/api/client/experience/v1/load', data=json.dumps(j))</code></p>
<p>All this does is give me a generic bad request response:</p>
<p><code>mechanize._response.get_seek_wrapper_class.<locals>.httperror_seek_wrapper: HTTP Error 400: Bad Request</code></p>
<p>I can make this work with requests, but only for this initial post; after that it becomes a little too much to manage cookies and referrers and so on. What little I can find on Python mechanize suggests that it may be insufficient for any sophisticated task; I'm hoping that's not the case.</p>
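<p>Not verified against that endpoint, but mechanize keeps the urllib2-style <code>Request</code> object, which allows a raw JSON body plus an explicit <code>Content-Type</code> while still going through the browser's cookie handling; a sketch reusing the <code>j</code> payload from above:</p>
<pre><code>import json
import mechanize

br = mechanize.Browser()
req = mechanize.Request(
    "https://major-retailer.com/api/client/experience/v1/load",
    data=json.dumps(j).encode("utf-8"),               # raw JSON body, not form-encoded
    headers={"Content-Type": "application/json",
             "Accept": "application/json"},
)
r = br.open(req)   # cookies collected by br on earlier requests still apply
</code></pre>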
|
<python><python-3.x><mechanize-python>
|
2025-01-06 21:52:30
| 1
| 5,645
|
John O
|
79,334,421
| 9,415,280
|
Tensorflow dataset splitted sizing parameter problem: Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
|
<p>I'm pretty new to data generators and datasets in TensorFlow. I struggle with sizing the batch, epochs and steps... I can't figure out the right setup to remove the error "Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence".</p>
<p>I tried using the size of a chunk of my data as returned by the data generator, the complete size of my whole dataset, and the size of the split dataset, but none of them seem to work.</p>
<p>Here is simplified code from my last try:</p>
<pre><code>def data_generator(df, chunk_size):
total_number_sample = 10000
for start_idx in range(1, total_number_sample , chunk_size):
end_idx = start_idx + chunk_size-1
df_subset = df.where(col('idx').between(start_idx, end_idx))
feature = np.array(df_subset.select("vector_features_scaled").rdd.map(lambda row: row[0].toArray()).collect())
label = df_subset.select("ptype_s_l_m_v").toPandas().values.flatten()
yield feature, label
</code></pre>
<pre><code>dataset = tf.data.Dataset.from_generator(
lambda: data_generator(df, chunk_size),
output_signature=(
tf.TensorSpec(shape=(None, 24), dtype=tf.float32),
tf.TensorSpec(shape=(None, 4), dtype=tf.float32)
))
</code></pre>
<p>I split and batch my data this way for training/validation:</p>
<pre><code>batch_sz = 100
split_ratio = .9
split_size = math.floor((chunk_size*10) * split_ratio)
train_dataset = dataset.take(split_size).batch(batch_sz)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
test_dataset = dataset.skip(split_size).batch(batch_sz)
test_dataset = test_dataset.prefetch(tf.data.experimental.AUTOTUNE)
steps_per_epoch = math.ceil((10000 * split_ratio) / batch_sz)
validation_steps = math.ceil((10000 - split_size) / batch_sz)
model.fit(train_dataset,
steps_per_epoch=steps_per_epoch,
epochs=3,
validation_data=test_dataset,
validation_steps=validation_steps,
verbose=2)
results = model.evaluate(dataset.batch(batch_sz))
</code></pre>
<p>Without batching everything works great (model.fit() and model.evaluate()),</p>
<p>but when I use batching I get this error:</p>
<pre><code>W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node IteratorGetNext}}]]
/usr/lib/python3.11/contextlib.py:155: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
</code></pre>
<p>I have seen a lot of threads about steps_per_epoch, epochs and batch size, but I'm not finding a solution that works when applied to split data.</p>
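<p>Two hedged observations, sketched on the variables above: each yield of <code>data_generator</code> is one dataset element (a whole chunk), so <code>take</code>/<code>skip</code> and <code>steps_per_epoch</code> have to be counted in chunks rather than rows, and the warning's own suggestion is to add <code>.repeat()</code> to the training stream. Assuming each chunk is meant to act as one batch (which is how the unbatched version behaves):</p>
<pre><code># counts are in generator yields (chunks), not individual samples
n_chunks = math.ceil(10000 / chunk_size)
n_train_chunks = math.floor(n_chunks * split_ratio)

train_dataset = dataset.take(n_train_chunks).repeat()   # repeat, per the warning
test_dataset = dataset.skip(n_train_chunks)

model.fit(train_dataset,
          steps_per_epoch=n_train_chunks,
          epochs=3,
          validation_data=test_dataset,
          verbose=2)
</code></pre>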
|
<python><tensorflow><tensorflow-datasets>
|
2025-01-06 21:44:06
| 1
| 451
|
Jonathan Roy
|
79,334,351
| 3,614,460
|
How to convert UNIX time to Datetime when using Pandas.DataFrame.from_dict?
|
<p>I am reading from a JSON data file and loading it into a dictionary. It has key:value pairs like below.</p>
<pre><code>"1707195600000":1,"1707282000000":18,"1707368400000":1,"1707454800000":13,"1707714000000":18,"1707800400000":12,"1707886800000":155,"1707973200000":1"
</code></pre>
<p>Code Snippet:</p>
<pre><code>with open('data.json', 'r') as json_file:
data_pairs = json.load(json_file)
dataframe = pd.DataFrame.from_dict(data_pairs, orient='index')
</code></pre>
<p>Can it be done with <code>Pandas.DataFrame.from_dict</code>? Or should I convert all the keys in the dictionary before using <code>from_dict</code>?</p>
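<p>As far as I know <code>from_dict</code> has no hook for converting keys, so converting the resulting index afterwards is the usual route; a sketch, assuming the keys are Unix epochs in milliseconds (which the sample values suggest):</p>
<pre><code>import json
import pandas as pd

with open('data.json', 'r') as json_file:
    data_pairs = json.load(json_file)

# 'count' is just an assumed column name for the values
dataframe = pd.DataFrame.from_dict(data_pairs, orient='index', columns=['count'])
dataframe.index = pd.to_datetime(dataframe.index.astype('int64'), unit='ms')
print(dataframe.head())
</code></pre>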
|
<python><pandas>
|
2025-01-06 21:11:43
| 2
| 442
|
binary_assemble
|
79,334,315
| 6,151,828
|
Python: minimize function in respect to i-th variable
|
<p>I have a function <code>func(x)</code> where the argument is a vector of length <code>n</code>. I would like to minimize it with respect to the <code>i</code>-th component of <code>x</code> while keeping the other components fixed. So to express it as a function of a single component I would do something like</p>
<pre><code>import numpy as np
from scipy.optimize import minimize_scalar
def func(x):
#do some calculations
return function_value
def func_i(x_i, x0, i):
x = np.copy(x0)
x[i] = x_i
return func(x)
res = minimize_scalar(func_i, args=(x0, i))
</code></pre>
<p>Is there a more efficient way of doing this? This kind of calculation will be done repeatedly, cycling over variables, and I worry that <code>x = np.copy(x0)</code>, <code>x[i] = x_i</code> slow down the calculation. (The problem emerges in the context of Gibbs sampling, so minimizing with respect to all the variables simultaneously is not what I want.)</p>
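<p>The per-call <code>np.copy</code> can be avoided by keeping a single working buffer per sweep and writing the trial value into it; a sketch (it assumes <code>func</code> does not keep a reference to the array it is given):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize_scalar

def minimize_along_axis(func, x, i):
    """Minimize func over component i in place, reusing x as the work buffer."""
    def func_i(x_i):
        x[i] = x_i          # overwrite only the i-th slot
        return func(x)
    res = minimize_scalar(func_i)
    x[i] = res.x            # leave the minimizer in place for the next coordinate
    return res

# usage inside a Gibbs-style sweep:
# x = np.array(x0, dtype=float)       # one copy up front
# for i in range(len(x)):
#     minimize_along_axis(func, x, i)
</code></pre>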
|
<python><optimization><vectorization>
|
2025-01-06 20:56:07
| 1
| 803
|
Roger V.
|
79,334,065
| 1,788,656
|
geopandas.read_file of a shapefile gives error if crs parameter is specified
|
<p>All,
I use the ESRI World Countries Generalized shapefile, which is available <a href="https://hub.arcgis.com/datasets/esri::world-countries-generalized/explore?location=18.531425%2C34.576759%2C5.67" rel="nofollow noreferrer">here</a>,
and read it using GeoPandas:</p>
<pre><code>shp_file =gpd.read_file('World_Countries/World_Countries_Generalized.shp')
print(shp_file.crs)
</code></pre>
<p>The CRS I get is <code>EPSG:3857</code>, yet once I pass the CRS to gpd.read_file as follows</p>
<pre><code>shp_file1 =gpd.read_file('../../Downloads/World_Countries/World_Countries_Generalized.shp',crs='EPSG:3857')
</code></pre>
<p>I get the following warning:<br />
<code>/opt/anaconda3/envs/geo_env/lib/python3.12/site-packages/pyogrio/raw.py:198: RuntimeWarning: driver ESRI Shapefile does not support open option CRS return ogr_read(</code></p>
<p>Do you know why I get this warning, and does it mean the file is not read correctly?</p>
<p>Thanks</p>
|
<python><python-3.x><geopandas><shapefile>
|
2025-01-06 19:05:41
| 1
| 725
|
Kernel
|
79,333,808
| 1,802,693
|
Data model duplication issue in bokeh (the library is possibly not prepared for this use case)
|
<p>I want to implement a stepping logic for a multi-timeframe charting solution with bokeh library. I need to export a static HTML file with all the data, therefore the size of the HTML export matters.</p>
<p>In this solution multiple timeframes can be visualized at the same time; the charts of different timeframes are drawn on top of each other, BUT <strong>only up to a given point in time</strong> (let's call this <code>trigger_date</code>).</p>
<pre class="lang-py prettyprint-override"><code>time_tracker = ColumnDataSource(data=dict(trigger_date=[max_dt]))
</code></pre>
<p>I'm using <code>ColumnDataSource</code> to store the data for each timeframe and I need to generate at least 2 buttons (backward step, forward step) for each timeframe.
To each button I need to pass all the data, which means every data source of all timeframes and every dataframe of all timeframes for implementing the step logic (to show the charts until a given point in time only and <code>emit()</code> changes properly).</p>
<p><strong>Important</strong>: The number and the values of the timeframes (aggregation units) are not static and are not known in advance. It's determined by the user's input only. A timeframe can be literally anything. <strong>CAN NOT</strong> be listed as a constant list, e.g.: <code>30s, 1m, 5m, 15m, 30m, 1h, 4h, 1d, ...</code></p>
<p>Therefore it's simply impossible to define such thing:</p>
<pre class="lang-py prettyprint-override"><code>step_buttons[timeframe]['prev'].js_on_click(CustomJS(
args = dict(
time_tracker = time_tracker,
direction = -1,
min_dt = min_dt,
max_dt = max_dt,
# ALL TIMEFRAMES CAN NOT BE LISTED HERE BECAUSE IT'S UNKNOWN
datasource_5m = ...,
datasource_15m = ...,
datasource_30m = ...,
datasource_1h = ...,
datasource_4h = ...,
?????????????? = ...,
),
code = JS_CODE_STEP_LOGIC,
))
</code></pre>
<p>Therefore I <strong>HAVE TO</strong> wrap all possible data sources for each timeframe into a more dynamic and more complex data structure (with arbitrary number of keys representing the timeframes), that can be passed to the button as a single argument:</p>
<pre class="lang-py prettyprint-override"><code>js_data[timeframe] = {
'data_source' : ColumnDataSource(dataframes[timeframe]),
'candle_data' : dataframes[timeframe].to_dict(orient="list"),
}
# or
# data_sources = {}
# candle_data = {}
# for timeframe in dataframes.keys():
# data_sources[timeframe] = ColumnDataSource(dataframes[timeframe])
# candle_data[timeframe] = dataframes[timeframe].to_dict(orient="list")
# ...
for tf in timeframes:
# I'VE A LOT OF THESE BUTTONS
# THE ARGUMENT LIST CAN NOT BE FIXED HERE
# I'VE TO PUT DATA SOURCES INTO A HIERARCHY WITH TIMEFRAME KEYS (candle_data_and_sources )
# AND IMPLEMENT A DYNAMIC LOGIC ON JS SIDE
# THE TIMEFRAMES ARE NOT KNOWN IN ADVANCE
# THIS IS WHAT DUPLICATES THE DATA
# AND INCREASES THE SIZE OF THE GENERATED HTML
step_buttons[tf]['prev'].js_on_click(CustomJS(
args = dict(
candle_data_and_sources = js_data, # need to use complex structure here
# or
# data_sources = data_sources
# candle_data = candle_data
time_tracker = time_tracker,
direction = -1,
timeframe = tf,
min_dt = min_dt,
max_dt = max_dt,
),
code = JS_CODE_STEP_LOGIC,
))
step_buttons[tf]['next'] = ...
</code></pre>
<p><strong>BUT</strong> in this case the model will be obviously duplicated and will result a much larger file size then required. This will cause performance issues during visualization. The browser will fail to open this file.</p>
<p><strong>QUESTIONS</strong>:</p>
<ul>
<li><strong>How could I pass all available data only once to each button without duplicating the model here?</strong></li>
<li>Do I feel correctly that hardcoding all possible timeframes into the button's arguments feels not the good way to implement this (and in my case it might be even impossible)..</li>
</ul>
<p><strong>Additional information 1</strong>:<br />
I've tried to work around this limitation by setting this complex data structure as a global variable on the JS side, but I could not find a working solution.
See the details here: <a href="https://stackoverflow.com/questions/79332523/initialize-global-variable-in-bokeh-and-use-it-in-handler-code">Initialize global variable in bokeh and use it in handler code</a>?</p>
<p><strong>Additional information 2</strong>:<br />
The step logic being used is similar to this:</p>
<pre class="lang-py prettyprint-override"><code>JS_CODE_STEP_LOGIC = """
const trigger_date = new Date(time_tracker.data['trigger_date'][0]);
let new_date = new Date(trigger_date);
new_date.setDate(new_date.getDate() + 1 * direction * get_tf_value(timeframe));
if (direction < 0){
new_date = new Date(Math.max(min_dt, new_date));
} else if (direction > 0){
new_date = new Date(Math.min(max_dt, new_date));
}
time_tracker.data['trigger_date'][0] = new_date.toISOString();
// I NEED TO DO THE FOLLOWING LOGIC FOR EACH TIMEFRAME
// THE NUMBER/VALUE OF TIMEFRAMES HERE ARE DYNAMIC
// THEREFORE THEY ARE ADDRESSING THE DATASOURCE IN THE HIERARCHY
for (const [timeframe, data] of Object.entries(candle_data_and_sources)) {
const filtererd_obejcts = {};
for (const [key, value] of Object.entries(data['candle_data'])) {
if(!filtererd_obejcts[key]){
filtererd_obejcts[key] = [];
}
}
for (let i = 0; i < data['candle_data'].trigger_dt.length; i++) {
if (new Date(data['candle_data'].trigger_dt[i]) <= new_date) {
for (const [key, value] of Object.entries(data['candle_data'])) {
filtererd_obejcts[key].push(value[i]);
}
}
}
data['data_source'].data = filtererd_obejcts;
data['data_source'].change.emit();
}
time_tracker.change.emit();
"""
</code></pre>
|
<javascript><python><bokeh>
|
2025-01-06 17:13:44
| 0
| 1,729
|
elaspog
|
79,333,765
| 1,547,004
|
Type Hinting and Type Checking for IntEnum custom types
|
<p>Qt has several <code>IntEnum</code>'s that support custom , user-specified types or roles. A few examples are:</p>
<ul>
<li><a href="https://doc.qt.io/qt-6/qt.html#ItemDataRole-enum" rel="nofollow noreferrer"><code>QtCore.Qt.ItemDataRole.UserRole</code></a></li>
<li><a href="https://doc.qt.io/qt-6/qevent.html#Type-enum" rel="nofollow noreferrer"><code>QtCore.QEvent.Type.User</code></a></li>
</ul>
<p>In both cases, a user type/role is created by choosing an integer >= the User type/role</p>
<pre class="lang-py prettyprint-override"><code>myType = QtCore.QEvent.Type.User + 1
</code></pre>
<p>The problem is that all of the functions that deal with these type/roles expect an instance of the <code>IntEnum</code>, not an <code>int</code>, and mypy will report an error.</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtCore import QEvent
class MyEvent(QEvent):
def __init__(self) -> None:
super().__init__(QEvent.Type.User + 1)
</code></pre>
<p>Mypy error:</p>
<pre><code>No overload variant of "__init__" of "QEvent" matches argument type "int"
</code></pre>
<p>Integrated type checking in VS code with Pylance gives a similar error:</p>
<pre><code>No overloads for "__init__" match the provided arguments PylancereportCallIssue
QtCore.pyi(2756, 9): Overload 2 is the closest match
Argument of type "int" cannot be assigned to parameter "type" of type "Type" in function "__init__"
"int" is not assignable to "Type" PylancereportArgumentType
</code></pre>
<p>What type hinting can I do from my end to satisfy mypy? Is this something that needs to be changed in Qt type hinting?</p>
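<p>For reference, the workaround I have been considering is silencing the checkers with <code>typing.cast</code>, which is a no-op at runtime (just a sketch; I am not sure it is the intended approach):</p>
<pre class="lang-py prettyprint-override"><code>from typing import cast

from PySide6.QtCore import QEvent

# tell the type checker to treat the computed int as a QEvent.Type
MY_EVENT_TYPE = cast(QEvent.Type, int(QEvent.Type.User) + 1)

class MyEvent(QEvent):
    def __init__(self) -> None:
        super().__init__(MY_EVENT_TYPE)
</code></pre>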
|
<python><qt><pyside><mypy><pyside6>
|
2025-01-06 16:56:32
| 2
| 37,968
|
Brendan Abel
|
79,333,553
| 2,085,376
|
Trying to install an older version of Jax
|
<p>Trying to add a specific version of jax and jaxlib</p>
<pre><code>pip install -U jaxlib==0.4.10
ERROR: Ignored the following yanked versions: 0.4.32
ERROR: Could not find a version that satisfies the requirement jaxlib==0.4.10 (from versions: 0.4.17, 0.4.18, 0.4.19, 0.4.20, 0.4.21, 0.4.22, 0.4.23, 0.4.24, 0.4.25, 0.4.26, 0.4.27, 0.4.28, 0.4.29, 0.4.30, 0.4.31, 0.4.33, 0.4.34, 0.4.35, 0.4.36, 0.4.38)
ERROR: No matching distribution found for jaxlib==0.4.10
</code></pre>
<p>Looks like my old app needs jax to be '<=0.4.10'</p>
<p>Not sure how to move forward</p>
|
<python><jax>
|
2025-01-06 15:38:54
| 1
| 4,586
|
NoIdeaHowToFixThis
|
79,333,402
| 494,739
|
How to resize dimensions of video through ffmpeg-python?
|
<p>I'm trying to resize a video file which a user has uploaded to Django, by using <a href="https://github.com/kkroening/ffmpeg-python" rel="nofollow noreferrer"><code>ffmpeg-python</code></a>. The documentation isn't very easy to understand, so I've tried to cobble this together from various sources.</p>
<p>This method is run in a celery container, in order to not slow the experience for the user. The problem I'm facing is that I can't seem to resize the video file. I've tried two different approaches:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from io import BytesIO
from myapp.models import MediaModel
def resize_video(mypk: str) -> None:
instance = MediaModel.objects.get(pk=mypk)
media_instance: models.FileField = instance.media
media_output = "test.mp4"
buffer = BytesIO()
for chunk in media_instance.chunks():
buffer.write(chunk)
stream_video = ffmpeg.input("pipe:").video.filter("scale", 720, -1) # resize to 720px width
stream_audio = ffmpeg.input("pipe:").audio
process = (
ffmpeg.output(stream_video, stream_audio, media_output, acodec="aac")
.overwrite_output()
.run_async(pipe_stdin=True, quiet=True)
)
buffer.seek(0)
process_out, process_err = process.communicate(input=buffer.getbuffer())
# (pdb) process_out
# b''
# attempting to use `.concat` instead
process2 = (
ffmpeg.concat(stream_video, stream_audio, v=1, a=1)
.output(media_output)
.overwrite_output()
.run_async(pipe_stdin=True, quiet=True)
)
buffer.seek(0)
process2_out, process2_err = process2.communicate(input=buffer.getbuffer())
# (pdb) process2_out
# b''
</code></pre>
<p>As we can see, no matter which approach chosen, the output is an empty binary. The <code>process_err</code> and <code>process2_err</code> both generate the following message:</p>
<pre><code>ffmpeg version N-111491-g31979127f8-20230717 Copyright (c) 2000-2023 the
FFmpeg developers
built with gcc 13.1.0 (crosstool-NG 1.25.0.196_227d99d)
configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static
--pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64
--target-os=mingw32 --enable-gpl --enable-version3 --disable-debug
--disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2
--enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp
--enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl
--disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib
--enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth
--enable-chromaprint --enable-libdav1d --enable-libdavs2
--disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r
--enable-libgme --enable-libkvazaar --enable-libass --enable-libbluray
--enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist
--enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp
--enable-lv2 --enable-libvpl --enable-openal --enable-libopencore-amrnb
--enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg
--enable-libopenmpt --enable-librav1e --enable-librubberband
--enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt
--enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm
--disable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc
--enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2
--enable-libxvid --enable-libzimg --enable-libzvbi
--extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags=
--extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp
--extra-version=20230717
libavutil 58. 14.100 / 58. 14.100
libavcodec 60. 22.100 / 60. 22.100
libavformat 60. 10.100 / 60. 10.100
libavdevice 60. 2.101 / 60. 2.101
libavfilter 9. 8.102 / 9. 8.102
libswscale 7. 3.100 / 7. 3.100
libswresample 4. 11.100 / 4. 11.100
libpostproc 57. 2.100 / 57. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2020-11-10T15:01:09.000000Z
Duration: 00:00:04.16, start: 0.000000, bitrate: N/A
Stream #0:0[0x1](eng): Video: h264 (Main) (avc1 / 0x31637661),
yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 2649 kb/s, 25 fps, 25
tbr, 25k tbn (default)
Metadata:
creation_time : 2020-11-10T15:01:09.000000Z
handler_name : ?Mainconcept Video Media Handler
vendor_id : [0][0][0][0]
encoder : AVC Coding
Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz,
stereo, fltp, 317 kb/s (default)
Metadata:
creation_time : 2020-11-10T15:01:09.000000Z
handler_name : #Mainconcept MP4 Sound Media Handler
vendor_id : [0][0][0][0]
Stream mapping:
Stream #0:0 (h264) -> scale:default (graph 0)
scale:default (graph 0) -> Stream #0:0 (libx264)
Stream #0:1 -> #0:1 (aac (native) -> aac (native))
[libx264 @ 00000243a23a1100] using SAR=1/1
[libx264 @ 00000243a23a1100] using cpu capabilities: MMX2 SSE2Fast SSSE3
SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 00000243a23a1100] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 00000243a23a1100] 264 - core 164 - H.264/MPEG-4 AVC codec -
Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: cabac=1
ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00
mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11
fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1
sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0
constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1
weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40
intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0
qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'aa37f8d7685f4df9af85b1cdcd95997e.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
encoder : Lavf60.10.100
Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(tv, progressive),
800x450 [SAR 1:1 DAR 16:9], q=2-31, 25 fps, 12800 tbn
Metadata:
encoder : Lavc60.22.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo,
fltp, 128 kb/s (default)
Metadata:
creation_time : 2020-11-10T15:01:09.000000Z
handler_name : #Mainconcept MP4 Sound Media Handler
vendor_id : [0][0][0][0]
encoder : Lavc60.22.100 aac
frame=    0 fps=0.0 q=0.0 size=       0kB time=N/A bitrate=N/A
speed=N/A
frame=   21 fps=0.0 q=28.0 size=       0kB time=00:00:02.75 bitrate=
0.1kbits/s speed=4.75x
[out#0/mp4 @ 00000243a230bd80] video:91kB audio:67kB subtitle:0kB other
streams:0kB global headers:0kB muxing overhead: 2.838559%
frame= 104 fps=101 q=-1.0 Lsize= 162kB time=00:00:04.13 bitrate=
320.6kbits/s speed=4.02x
[libx264 @ 00000243a23a1100] frame I:1 Avg QP:18.56 size: 2456
[libx264 @ 00000243a23a1100] frame P:33 Avg QP:16.86 size: 1552
[libx264 @ 00000243a23a1100] frame B:70 Avg QP:17.55 size: 553
[libx264 @ 00000243a23a1100] consecutive B-frames: 4.8% 11.5% 14.4%
69.2%
[libx264 @ 00000243a23a1100] mb I I16..4: 17.3% 82.1% 0.6%
[libx264 @ 00000243a23a1100] mb P I16..4: 5.9% 15.2% 0.4% P16..4: 18.3%
0.9% 0.4% 0.0% 0.0% skip:58.7%
[libx264 @ 00000243a23a1100] mb B I16..4: 0.8% 0.3% 0.0% B16..8: 15.4%
1.0% 0.0% direct: 3.6% skip:78.9% L0:34.2% L1:64.0% BI: 1.7%
[libx264 @ 00000243a23a1100] 8x8 transform intra:68.2% inter:82.3%
[libx264 @ 00000243a23a1100] coded y,uvDC,uvAC intra: 4.2% 18.4% 1.2% inter:
1.0% 6.9% 0.0%
[libx264 @ 00000243a23a1100] i16 v,h,dc,p: 53% 25% 8% 14%
[libx264 @ 00000243a23a1100] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 6% 70% 1%
1% 1% 1% 0% 0%
[libx264 @ 00000243a23a1100] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 46% 21% 15% 2%
5% 4% 3% 3% 1%
[libx264 @ 00000243a23a1100] i8c dc,h,v,p: 71% 15% 13% 1%
[libx264 @ 00000243a23a1100] Weighted P-Frames: Y:30.3% UV:15.2%
[libx264 @ 00000243a23a1100] ref P L0: 46.7% 7.5% 34.6% 7.3% 3.9%
[libx264 @ 00000243a23a1100] ref B L0: 88.0% 10.5% 1.5%
[libx264 @ 00000243a23a1100] ref B L1: 98.1% 1.9%
[libx264 @ 00000243a23a1100] kb/s:177.73
[aac @ 00000243a23a2e00] Qavg: 1353.589
</code></pre>
<p>I'm at a loss right now, would love any feedback/solution.</p>
|
<python><django><ffmpeg><ffmpeg-python>
|
2025-01-06 14:42:57
| 2
| 772
|
kunambi
|
79,333,342
| 11,460,896
|
Can't Connect to MongoDB Replica Set Running on Docker with Python: "Temporary failure in name resolution" Error
|
<p>I'm trying to set up a MongoDB replica set on Docker and connect to it using Python. However, I'm encountering the following error:</p>
<h4>Error Message</h4>
<pre><code>ServerSelectionTimeoutError: mongo3:27017: [Errno -3] Temporary failure in name resolution (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),mongo2:27017: [Errno -3] Temporary failure in name resolution (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),mongo1:27017: [Errno -3] Temporary failure in name resolution (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 30s, Topology Description: <TopologyDescription id: 677bdb27131f81fe29981b4d, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('mongo1', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongo1:27017: [Errno -3] Temporary failure in name resolution (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>, <ServerDescription ('mongo2', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongo2:27017: [Errno -3] Temporary failure in name resolution (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>, <ServerDescription ('mongo3', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongo3:27017: [Errno -3] Temporary failure in name resolution (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
</code></pre>
<h4>Docker Configuration</h4>
<p>I created a Docker network:</p>
<pre><code>docker network create mongo-net
</code></pre>
<p>My <code>docker-compose.yml</code> file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>services:
mongo1:
image: mongo:latest
container_name: mongo1
hostname: mongo1
networks:
- mongo-net
ports:
- "27017:27017"
command: ["--replSet", "myReplicaSet"]
mongo2:
image: mongo:latest
container_name: mongo2
hostname: mongo2
networks:
- mongo-net
ports:
- "27018:27017"
command: ["--replSet", "myReplicaSet"]
mongo3:
image: mongo:latest
container_name: mongo3
hostname: mongo3
networks:
- mongo-net
ports:
- "27019:27017"
command: ["--replSet", "myReplicaSet"]
networks:
mongo-net:
driver: bridge
</code></pre>
<p>I started the replica set:</p>
<pre><code>docker exec -it mongo1 mongosh
rs.initiate({
_id: "myReplicaSet",
members: [
{ _id: 0, host: "mongo1:27017" },
{ _id: 1, host: "mongo2:27017" },
{ _id: 2, host: "mongo3:27017" }
]
})
</code></pre>
<p>Replica set status:</p>
<pre><code>rs.status()
</code></pre>
<p>Output:</p>
<pre><code>{
set: 'myReplicaSet',
date: ISODate('2025-01-06T13:26:53.825Z'),
myState: 1,
term: Long('1'),
...
members: [
{
_id: 0,
name: 'mongo1:27017',
stateStr: 'PRIMARY',
...
},
{
_id: 1,
name: 'mongo2:27017',
stateStr: 'SECONDARY',
...
},
{
_id: 2,
name: 'mongo3:27017',
stateStr: 'SECONDARY',
...
}
],
...
}
</code></pre>
<h4>Python Code</h4>
<pre class="lang-py prettyprint-override"><code>from pymongo import MongoClient
uri = "mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=myReplicaSet"
client = MongoClient(uri)
db = client.my_database
collection = db.my_collection
document = {"name": "Replica Test", "value": 42}
result = collection.insert_one(document)
</code></pre>
<p>After trying the above configurations and code, I get a "Temporary failure in name resolution" error when attempting to connect.</p>
<p>How can I resolve this issue?</p>
<hr />
|
<python><mongodb><docker>
|
2025-01-06 14:23:37
| 0
| 307
|
birdalugur
|
79,333,087
| 1,021,819
|
How do I resolve SnowparkSQLException: ... User is empty in function for Snowpark python SPROC invocation?
|
<p>When invoking a Snowpark-registered SPROC, I get the following error:</p>
<pre><code>SnowparkSQLException: (1304): <uuid>: 100357 (P0000): <uuid>: Python Interpreter Error:
snowflake.connector.errors.ProgrammingError: 251005: User is empty in function MY_FUNCTION with handler compute
</code></pre>
<p>for the following python code and invocation:</p>
<pre class="lang-py prettyprint-override"><code>def my_function(session: Session,
                input_table: str,
                limit: int) -> None:
    # Even doing nothing doesn't work!
    return

sproc_my_function = my_session.sproc.register(func=my_function,
                                              name='my_function',
                                              is_permanent=True,
                                              replace=True,
                                              stage_location='@STAGE_LOC',
                                              execute_as="owner")

input_table = 'x.y.MY_INPUT_TABLE'

sproc_my_function(my_session,
                  input_table,
                  100,
                  )
</code></pre>
<p>I can't find a reference to this exception and "User is empty in function" anywhere on the internet - which makes me wonder if its a drop-through of some sort. I also can't find a way to pass a user to the register method (this is already done successfully when my_session is set up).</p>
<p>Please help!</p>
|
<python><stored-procedures><snowflake-cloud-data-platform>
|
2025-01-06 12:50:23
| 1
| 8,527
|
jtlz2
|
79,332,999
| 1,712,287
|
Point halving in elliptic curve cryptography
|
<p>In elliptic curve cryptography there is scalar multiplication using point addition and point doubling.</p>
<p>Is there anyway of point halving. To make it simple, if a point is P, then is there any way of getting point P/2</p>
<p>I have used the following Python code. Could you help me modify it so that I get a proper point-halving operation?</p>
<pre><code>from sympy import mod_inverse, isprime
# secp256k1 parameters
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
a = 0
b = 7
# Elliptic curve point doubling formula
def point_doubling(x, y, p):
    # Slope lambda for point doubling
    lambda_ = (3 * x**2 + a) * mod_inverse(2 * y, p) % p
    x_r = (lambda_**2 - 2 * x) % p
    y_r = (lambda_ * (x - x_r) - y) % p
    return x_r, y_r

# Point halving function
def point_halving(x_P, y_P, p):
    # Solve for x_Q such that x_P = lambda^2 - 2x_Q (mod p)
    for x_Q in range(p):
        # Check if y_Q exists
        y_Q_squared = (x_Q**3 + a * x_Q + b) % p
        y_Q = pow(y_Q_squared, (p + 1) // 4, p)  # Modular square root
        # Check if 2Q = P
        if y_Q and point_doubling(x_Q, y_Q, p) == (x_P, y_P):
            return x_Q, y_Q
    return None
# Generator point G (compressed coordinates from secp256k1 spec)
x_G = 55066263022277343669578718895168534326250603453777594175500187360389116729240
y_G = 32670510020758816978083085130507043184471273380659243275938904335757337482424
# Halve the generator point
Q = point_halving(x_G, y_G, p)
print("Halved point Q:", Q)
</code></pre>
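<p>For comparison, a common way to express halving without the brute-force search is to multiply by the modular inverse of 2 taken modulo the group order <code>n</code>, since 2 * (inv2 * P) = P. Below is a sketch using plain double-and-add (the point at infinity is represented by <code>None</code>, and <code>n</code> is the secp256k1 group order):</p>
<pre><code>n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
inv2 = pow(2, -1, n)  # scalar k such that 2*k == 1 (mod n)

def point_add(P, Q, p):
    # affine addition on y^2 = x^3 + 7; None is the point at infinity
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def scalar_mult(k, P, p):
    # simple (non-constant-time) double-and-add
    R = None
    while k:
        if k & 1:
            R = point_add(R, P, p)
        P = point_add(P, P, p)
        k >>= 1
    return R

half_G = scalar_mult(inv2, (x_G, y_G), p)
print("Halved point Q:", half_G)
# point_doubling(*half_G, p) should give back (x_G, y_G)
</code></pre>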
|
<python><cryptography><elliptic-curve>
|
2025-01-06 12:20:22
| 0
| 1,238
|
Asif Iqbal
|
79,332,996
| 2,523,860
|
Wait-Until strategy in PyTest
|
<p>I'm testing asynchronous code using PyTest.</p>
<p>Can't find any existing libraries that provide simple function like "Await until" or "Polling":</p>
<pre class="lang-py prettyprint-override"><code>operation.run_in_background()
await_until(lambda: operation.is_ready(), interval=1, timeout=10)
</code></pre>
<p>It's easy to implement such function, but I believe there is a ready library for this.</p>
<p>Thank you.</p>
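<p>For reference, the hand-rolled version I would otherwise write looks roughly like this (a sketch, not a library):</p>
<pre class="lang-py prettyprint-override"><code>import time

def await_until(condition, interval=1.0, timeout=10.0):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")
</code></pre>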
|
<python><asynchronous><pytest><assert>
|
2025-01-06 12:19:03
| 1
| 909
|
Aleks Ya
|
79,332,886
| 3,241,486
|
Query IMF data via smdx using pandasdmx
|
<p>I would like to send this exact query <a href="http://dataservices.imf.org/REST/SDMX_JSON.svc/CompactData/PCPS/M.W00.PZINC.?startPeriod=2021&endPeriod=2022" rel="nofollow noreferrer">http://dataservices.imf.org/REST/SDMX_JSON.svc/CompactData/PCPS/M.W00.PZINC.?startPeriod=2021&endPeriod=2022</a> using the <a href="https://pandasdmx.readthedocs.io/en/v1.0/index.html" rel="nofollow noreferrer"><code>pandasdmx</code></a> package.</p>
<p>Neither this <a href="https://datahelp.imf.org/knowledgebase/articles/1952905-sdmx-2-0-and-sdmx-2-1-restful-web-service" rel="nofollow noreferrer">IMF SDMX doc</a> nor <a href="https://sdmxcentral.imf.org/overview.html" rel="nofollow noreferrer">this</a> one solved my issue... Maybe I'm just too confused about how SDMX works...</p>
<pre><code>import pandasdmx as sdmx
imf = sdmx.Request('IMF')
flow_response = imf.dataflow()
# HTTPError: 404 Client Error: Not Found for url:
# https://sdmxcentral.imf.org/ws/public/sdmxapi/rest/dataflow/IMF/latest
</code></pre>
<p>Does anyone understand how to retrieve SDMX data from the IMF? (Using a different package is also viable.)</p>
|
<python><sdmx>
|
2025-01-06 11:32:03
| 1
| 2,533
|
chamaoskurumi
|
79,332,523
| 1,802,693
|
Initialize global variable in bokeh and use it in handler code
|
<p>What I want to achieve here:</p>
<ul>
<li>I want to generate static HTML and do the data initialization exactly once</li>
<li>I want to pass a complex data structure to the document and use it by multiple buttons / UI elements</li>
<li>I don't want to pass the same complex data structure to each button / UI element as <code>source</code> property, because it will generate larger HTML file</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from bokeh.models import Div, CustomJS, Button
from bokeh.layouts import column
from bokeh.plotting import show
from bokeh.io import curdoc
dummy_div = Div(text="")
init_code = CustomJS(code="""
window.sharedData = { initialized: true };
console.log("Data initialized in Div change");
""")
#dummy_div.js_on_change("text", init_code)
button1 = Button(label="Log Complex Data", button_type="success")
button1.js_on_click(CustomJS(code="""
console.log("Current shared data:", window.sharedData);
"""))
# button_N = ...
layout = column(dummy_div, button1)
curdoc().add_root(layout)
curdoc().on_event('document_ready', lambda event: init_code.execute(curdoc()))
show(layout)
</code></pre>
<p>Is it possible to implement something like this with this library?</p>
<p><strong>Context</strong>:<br />
<strong>This part is not needed for answering the above question, I've just wanted to show the use-case</strong> because some people wish not to give a simple answer without this</p>
<p>I've a complex hierarchy of ColumnDataSource-es and other data for a very specific step logic stored in form of a dict, what I need to use on JS side. I can not pass the ColumnDataSource objects separately, because the number of ColumnDataSource-s to be used is unknown in advance. There is a dynamic logic how the buttons should be generated and how they should read this hierarchy, the logic is dependent on a number of timeframe keys within the dict. I need to pass this dict to each generated button. Since the DataSource is wrapped, duplication occurs.</p>
<p>This is how I <strong>need to organize</strong> the data for the step logic:</p>
<pre class="lang-py prettyprint-override"><code>js_data[aggr_interval] = {
'data_source' : ColumnDataSource(dataframe),
'candle_data' : dataframe.to_dict(orient="list"),
}
</code></pre>
<p>This is the step logic:</p>
<pre><code> time_tracker = ColumnDataSource(data=dict(trigger_date=[max_dt]))
# I'VE A LOT OF THESE BUTTONS
# THE ARGUMENT LIST CAN NOT BE FIXED HERE
# I'VE TO PUT DATA SOURCES INTO A HIERARCHY WITH TIMEFRAME KEYS (candle_data_and_sources )
# AND IMPLEMENT A DYNAMIC LOGIC ON JS SIDE
# THE TIMEFRAMES ARE NOT KNOWN IN ADVANCE
# THIS IS WHAT DUPLICATES THE DATA
# AND INCREASES THE SIZE OF THE GENERATED HTML
step_buttons['prev'].js_on_click(CustomJS(
args = dict(
candle_data_and_sources = candle_data_and_sources,
time_tracker = time_tracker,
direction = -1,
min_dt = min_dt,
max_dt = max_dt,
),
code = JS_CODE_STEP_LOGIC,
))
</code></pre>
<pre><code>JS_CODE_STEP_LOGIC = """
const trigger_date = new Date(time_tracker.data['trigger_date'][0]);
let new_date = new Date(trigger_date);
new_date.setDate(new_date.getDate() + 1 * direction);
if (direction < 0){
new_date = new Date(Math.max(min_dt, new_date));
} else if (direction > 0){
new_date = new Date(Math.min(max_dt, new_date));
}
time_tracker.data['trigger_date'][0] = new_date.toISOString();
// I NEED TO DO THE FOLLOWING LOGIC FOR EACH TIMEFRAME
// THE NUMBER/VALUE OF TIMEFRAMES HERE ARE DYNAMIC
// THEREFORE THEY ARE ADDRESSING THE DATASOURCE IN THE HIERARCHY
for (const [timeframe, data] of Object.entries(candle_data_and_sources)) {
const filtererd_obejcts = {};
for (const [key, value] of Object.entries(data['candle_data'])) {
if(!filtererd_obejcts[key]){
filtererd_obejcts[key] = [];
}
}
for (let i = 0; i < data['candle_data'].trigger_dt.length; i++) {
if (new Date(data['candle_data'].trigger_dt[i]) <= new_date) {
for (const [key, value] of Object.entries(data['candle_data'])) {
filtererd_obejcts[key].push(value[i]);
}
}
}
data['data_source'].data = filtererd_obejcts;
data['data_source'].change.emit();
}
time_tracker.change.emit();
"""
</code></pre>
|
<javascript><python><bokeh>
|
2025-01-06 08:40:58
| 1
| 1,729
|
elaspog
|
79,332,328
| 11,770,390
|
pydantic model: How to exclude field from being hashed / eq-compared?
|
<p>I have the following hashable pydantic model:</p>
<pre><code>class TafReport(BaseModel, frozen=True):
    download_date: dt
    icao: str
    issue_time: dt
    validity_time_start: dt
    validity_time_stop: dt
    raw_report: str
</code></pre>
<p>Now I don't want these reports to be considered different just because their download date is different (I insert that with <code>datetime.now()</code>). How can I exclude <code>download_date</code> from being considered in the <code>__hash__</code> and <code>__eq__</code> functions so that I can do stunts like:</p>
<pre><code>tafs = list(set(tafs))
</code></pre>
<p>and have a unique set of <code>tafs</code> even though two might have differing download date? I'm looking for a solution where I don't have to overwrite the <code>__hash__</code> and <code>__eq__</code> methods...</p>
<p>I checked out <a href="https://stackoverflow.com/questions/70587513/pydantic-exclude-multiple-fields-from-model">this</a> topic but it only answers how to exclude a field from the model in general (so it doesn't show up in the json dumps), but I do want it to show up in the json dump.</p>
|
<python><pydantic>
|
2025-01-06 07:06:19
| 1
| 5,344
|
glades
|
79,331,937
| 3,511,656
|
Why Are plot_gate_map and plot_error_map Showing Empty Plots in Jupyter Notebook with Qiskit?
|
<p>so I've been playing around with Qiskit recently and have had success not only logging into the IBM machines but also executing circuits on their backends. I recently tried to plot the gate_map and error_map of each backend, but when I try to do so the plots are just empty in my Jupyter notebook, e.g.</p>
<p><a href="https://i.sstatic.net/VxiH3qth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VxiH3qth.png" alt="enter image description here" /></a></p>
<p>Am I missing something here? Are you unable to plot these things for their machines now? I'm seeing several references online that seem to indicate that you can somehow. Does anyone have any tips how I can get this plotting working? My code is below. Thank you for the help in advance!</p>
<pre><code>from qiskit_ibm_runtime import QiskitRuntimeService
from qiskit.visualization import plot_gate_map, plot_error_map
import matplotlib.pyplot as plt
# Read the API token from the file.
with open('ibm_quantum_token.txt', 'r') as token_file:
api_token = token_file.read().strip()
# Save the token to your Qiskit account configuration.
QiskitRuntimeService.save_account(channel="ibm_quantum", token=api_token, set_as_default=True, overwrite=True)
# Load saved credentials
service = QiskitRuntimeService()
print("Account loaded successfully!")
from qiskit.visualization import plot_gate_map, plot_error_map
import matplotlib.pyplot as plt
backends = service.backends()
print("Available backends:")
for backend in backends:
# Print the name of each backend.
print(backend)
fig_gate_map = plt.figure(figsize=(8, 6))
plot_gate_map(backend)
plt.title(f"Gate Map of {backend.name}", fontsize=16)
plt.show()
fig_error_map = plt.figure(figsize=(8, 6))
plot_error_map(backend)
plt.title(f"Gate Error of {backend.name}", fontsize=16)
plt.savefig("gate_error:"+ str(backend.name) + str(".pdf"))
plt.show()
</code></pre>
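<p>One thing I am not sure about: if <code>plot_gate_map</code> and <code>plot_error_map</code> return their own matplotlib <code>Figure</code> instead of drawing on the currently active one, then the figure I create with <code>plt.figure()</code> would stay empty. A sketch of what I would try instead, assuming the functions do return a <code>Figure</code>:</p>
<pre><code># assumes plot_gate_map / plot_error_map return a matplotlib Figure
fig_gate_map = plot_gate_map(backend)
fig_gate_map.suptitle(f"Gate Map of {backend.name}", fontsize=16)
fig_gate_map.savefig(f"gate_map_{backend.name}.pdf")

fig_error_map = plot_error_map(backend)
fig_error_map.savefig(f"gate_error_{backend.name}.pdf")
</code></pre>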
|
<python><qiskit>
|
2025-01-06 02:23:24
| 0
| 1,133
|
Eigenvalue
|
79,331,481
| 3,614,460
|
How to line plot Pandas Dataframe as sub graphs?
|
<p>I have a DataFrame that looks like this:
<code>{"1578286800000":71,"1578373200000":72,"1578459600000":72,"1578546000000":74,"1578632400000":7,"1578891600000":7,"1578978000000":6,"1579064400000":7,"1579150800000":6}</code></p>
<p>The format is:
<code>Datetime:int</code></p>
<p>I want to create subplots out of the data: graph one would show the first 5 data pairs and graph two the rest.</p>
<p>I've tried to graph the entire dataframe but keeps getting this error:
<code>ValueError: If using all scalar values, you must pass an index</code></p>
<p>As you can see the dataframe doesn't have an index, and I don't know how to specify <code>Datetime</code> as the x axis and <code>int</code> as the y axis.</p>
<p>Edit 1 (with code):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_json("somedata.json")
df.plot.line()
plt.show()
</code></pre>
<p><code>somedata.json</code> contains the same data as mentioned at the beginning of the question.</p>
<p>Edit 2:</p>
<pre><code>with open('temp.json', 'r') as json_file:
    data_pairs = json.load(json_file)
dataframe = pd.DataFrame.from_dict(data_pairs, orient='index')
fig, axes = plt.subplots(2, 1)
dataframe[0:5].plot(ax=axes[0], legend=False)
_ = plt.xticks(rotation=45)
dataframe[5:].plot(ax=axes[1], legend=False)
_ = plt.xticks(rotation=45)
</code></pre>
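<p>For reference, a sketch of how the keys could be turned into a proper datetime index before plotting; the assumption that they are milliseconds since the epoch is mine:</p>
<pre><code>import json
import pandas as pd
import matplotlib.pyplot as plt

with open('somedata.json') as f:
    data = json.load(f)

s = pd.Series(data)
s.index = pd.to_datetime(s.index.astype('int64'), unit='ms')  # string keys -> DatetimeIndex

fig, axes = plt.subplots(2, 1)
s.iloc[:5].plot(ax=axes[0])
s.iloc[5:].plot(ax=axes[1])
plt.show()
</code></pre>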
|
<python><pandas><matplotlib>
|
2025-01-05 20:20:49
| 2
| 442
|
binary_assemble
|
79,331,430
| 7,468,566
|
pd.testing.assert_frame_equal has AssertionError with same dtypes
|
<pre><code>import pandas as pd
import pyarrow as pa
import numpy as np
from datetime import datetime, timedelta
df = pd.DataFrame({
"product_id": pd.Series(
["PROD_" + str(np.random.randint(1000, 9999)) for _ in range(100)],
dtype=pd.StringDtype(storage="pyarrow")
),
"transaction_timestamp": pd.date_range(
start=datetime.now() - timedelta(days=30),
periods=100,
freq='1H'
),
"sales_amount": pd.Series(
np.round(np.random.normal(500, 150, 100), 2),
dtype=pd.Float64Dtype()
),
"customer_segment": pd.Series(
np.random.choice(['Premium', 'Standard', 'Basic'], 100),
dtype=pd.StringDtype(storage="pyarrow")
),
"is_repeat_customer": pd.Series(
np.random.choice([True, False], 100, p=[0.3, 0.7])
)
})
def types_mapper(pa_type):
    if pa_type == pa.string():
        return pd.StringDtype("pyarrow")
df = df.convert_dtypes(dtype_backend="pyarrow")
df_pa = pa.Table.from_pandas(df).to_pandas(types_mapper=types_mapper)
pd.testing.assert_frame_equal(df, df_pa)
</code></pre>
<p>The dtypes are seemingly the same but I get the following error.</p>
<pre><code>AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="product_id") are different
Attribute "dtype" are different
[left]: string[pyarrow]
[right]: string[pyarrow]
</code></pre>
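<p>A sketch of how I have been inspecting the two dtypes further, since their string form is identical (my assumption is that one is a <code>pd.StringDtype</code> and the other a <code>pd.ArrowDtype</code>, which happen to print the same way):</p>
<pre><code>print(type(df["product_id"].dtype))     # which dtype class is it really?
print(type(df_pa["product_id"].dtype))
print(df["product_id"].dtype == df_pa["product_id"].dtype)
</code></pre>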
|
<python><pandas><pyarrow>
|
2025-01-05 19:43:02
| 2
| 2,583
|
itstoocold
|
79,331,372
| 11,439,134
|
Find location of tcl/tk header files
|
<p>I'm trying to find a cross-platform way to find tcl.h and tk.h without having to search the entire system. I'm just wondering if there's a way to find this from Tcl or tkinter? <code>root.tk.exprstring('$tcl_pkgPath')</code> doesn't seem to include the right directories. Thanks!</p>
|
<python><tkinter><gcc><tcl>
|
2025-01-05 19:09:10
| 1
| 1,058
|
Andereoo
|
79,331,102
| 4,098,506
|
QStyledItemDelegate in QTableView is misaligned
|
<p>I want to show a list of files with star rating in a QTableView. For this I use the following delegate:</p>
<pre><code>class StarRatingDelegate(QStyledItemDelegate):
def __init__(self, parent=None):
super().__init__(parent)
def paint(self, painter, option, index):
file: File = index.data(Qt.UserRole)
star_rating_widget = StarRatingWidget(10, self.parent())
star_rating_widget.set_rating(file.rating)
star_rating_widget.render(painter, option.rect.topLeft())
</code></pre>
<p><code>StarRatingWidget</code> is a simple QWidget that contains 5 QLabels in a QHBoxLayout.</p>
<p>This all works so far, but all StarRatingWidgets are shifted to the top left:
<a href="https://i.sstatic.net/F2zRCMVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F2zRCMVo.png" alt="Outcome" /></a></p>
<p>The first column shows the rating as number. You can see that all stars are shifted slightly to the left and a little bit more than one row height to the top.</p>
<p>Tests revealed that <code>option.rect</code> returns the coordinates with (0, 0) being the top left corner of the first cell, but <code>star_rating_widget.render</code> treats the coordinates with (0, 0) being the top left of the window. So the widgets are shifted by the space between the table and the window border and additionally by the height of the table header.</p>
<p>Before someone asks, here is the full code. It requires pyside6 to run.</p>
<pre><code>#!/usr/bin/env python
from typing import List

from PySide6.QtCore import Qt, Signal, QAbstractItemModel, QModelIndex, QEvent
from PySide6.QtGui import QMouseEvent
from PySide6.QtWidgets import QApplication, QLabel, QTableView, QMainWindow, QSizePolicy, QHBoxLayout, QWidget, QStyledItemDelegate
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setGeometry(100, 100, 250, 600)
self.central_widget = QWidget()
self.main_layout = QHBoxLayout()
self.central_widget.setLayout(self.main_layout)
self.setCentralWidget(self.central_widget)
self.list = QTableView()
self.list.setSelectionBehavior(QTableView.SelectionBehavior.SelectRows)
self.list.setSelectionMode(QTableView.SelectionMode.SingleSelection)
        self.list.horizontalHeader().setStretchLastSection(True)
        self.list.verticalHeader().hide()
        self.list.setShowGrid(False)
self.list.setItemDelegateForColumn(1, StarRatingDelegate(self.list))
self.list.setModel(ListModel())
self.main_layout.addWidget(self.list)
class ListModel(QAbstractItemModel):
def __init__(self):
super().__init__()
self.horizontal_header_labels = ['Number', 'Stars']
def rowCount(self, parent=QModelIndex()):
return 50
def columnCount(self, parent=QModelIndex()):
return len(self.horizontal_header_labels)
def data(self, index, role):
if not index.isValid():
return None
if role == Qt.DisplayRole:
rating = (index.row() - 2) % 7
return None if rating >= 5 else rating + 1
return None
def headerData(self, section, orientation, role):
if orientation == Qt.Horizontal and role == Qt.DisplayRole:
return self.horizontal_header_labels[section]
return None
def index(self, row, column, parent=QModelIndex()):
if self.hasIndex(row, column, parent):
return self.createIndex(row, column)
return QModelIndex()
def parent(self, index):
return QModelIndex()
class StarRatingWidget(QWidget):
rating_changed = Signal(int)
def __init__(self, font_size, parent=None):
super().__init__(parent)
self.rating = 0
self.hovered_star: int|None = None
self.stars: List[QLabel] = []
self.font_size: int = font_size
self.init_ui()
def star_mouse_event(self, i: int):
def event(event: QMouseEvent):
if event.type() == QEvent.Enter:
self.hovered_star = i
self.update()
elif event.type() == QEvent.Leave:
self.hovered_star = None
self.update()
return event
def init_ui(self):
layout = QHBoxLayout()
for i in range(5):
star = QLabel()
star.mousePressEvent = lambda _, i=i: self.set_rating(i + 1)
star.enterEvent = self.star_mouse_event(i)
star.leaveEvent = self.star_mouse_event(i)
star.setSizePolicy(QSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed))
layout.addWidget(star)
self.stars.append(star)
self.setLayout(layout)
self.update()
def set_rating(self, rating: int|None):
if rating != self.rating:
self.rating = rating
self.update()
self.rating_changed.emit(rating)
def update(self):
for i, star in enumerate(self.stars):
rating = self.rating if self.rating is not None else 0
if i < rating:
star.setText('★')
else:
star.setText('☆')
if self.rating is None:
color = 'gray'
weight = 'normal'
elif i == self.hovered_star:
color = 'blue'
weight = 'bold'
else:
color = 'yellow'
weight = 'normal'
star.setStyleSheet(f'font-size: {self.font_size}px; color: {color}; font-weight: {weight}')
class StarRatingDelegate(QStyledItemDelegate):
def __init__(self, parent=None):
super().__init__(parent)
def paint(self, painter, option, index):
rating = index.data()
star_rating_widget = StarRatingWidget(10, self.parent())
star_rating_widget.set_rating(rating)
star_rating_widget.render(painter, option.rect.topLeft())
def main():
app = QApplication([])
main_window = MainWindow()
main_window.show()
QApplication.exec()
if __name__ == '__main__':
main()
</code></pre>
|
<python><pyside6><qstyleditemdelegate>
|
2025-01-05 16:32:59
| 1
| 662
|
Mr. Clear
|
79,331,045
| 5,480,536
|
How to prevent users from installing my Python package on Windows?
|
<p>I have a Python package that I publish to PyPI.</p>
<p>However, it does not support Windows, and I want to prevent users from installing it on Windows.</p>
<p>Can this be done using just <code>pyproject.toml</code>?</p>
<hr />
<p>Note:</p>
<p>I am using <code>pyproject.toml</code> and <code>setup.py</code>, but I understand that using <code>cmdclass</code> in <code>setup.py</code> is no longer advised.</p>
|
<python><windows><python-packaging>
|
2025-01-05 16:02:46
| 0
| 1,476
|
icpp-pro
|
79,330,953
| 3,621,464
|
Lemma of puncutation in spacy
|
<p>I'm using spacy for some downstream tasks, mainly noun phrase extraction. My texts contain a lot of parentheses, and while applying the lemma, I noticed all the punctuation that doesn't end sentences becomes <code>--</code>:</p>
<pre><code>import spacy
nlp = spacy.load("de_core_news_sm")
doc = nlp("(Das ist ein Test!)")
for token in doc:
    print(f"Text: '{token.text}', Lemma: '{token.lemma_}'")
</code></pre>
<p>Output:</p>
<pre><code>Text: '(', Lemma: '--'
Text: 'Das', Lemma: 'der'
Text: 'ist', Lemma: 'sein'
Text: 'ein', Lemma: 'ein'
Text: 'Test', Lemma: 'Test'
Text: '!', Lemma: '--'
Text: ')', Lemma: '--'
</code></pre>
<p>Is that normal, and if yes, why, and what can I do to keep the parentheses?</p>
<p>I'm on 3.7.4 with Python 3.11</p>
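<p>For reference, the workaround I am currently considering is falling back to the surface form for punctuation (just a sketch):</p>
<pre><code>for token in doc:
    lemma = token.text if token.is_punct else token.lemma_
    print(f"Text: '{token.text}', Lemma: '{lemma}'")
</code></pre>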
|
<python><spacy><lemmatization>
|
2025-01-05 15:02:04
| 1
| 4,481
|
MERose
|
79,330,931
| 16,869,946
|
GridSearchCV with data indexed by time
|
<p>I am trying to use the <code>GridSearchCV</code> from <code>sklearn.model_selection</code>. My data is a set of classification that is indexed by time. As a result, when doing cross validation, I want the training set to be exclusively the data with time all before the data in the test set.</p>
<p>So my training set <code>X_train, y_train</code> looks like</p>
<pre><code>Time feature_1 feature_2 result
2020-01-30 3 6 1
2020-02-01 4 2 0
2021-03-02 7 1 0
</code></pre>
<p>and the test set <code>X_test, y_test</code> looks like</p>
<pre><code>Time feature_1 feature_2 result
2023-01-30 3 6 1
2023-02-01 4 2 0
2024-03-02 7 1 0
</code></pre>
<p>Suppose I am using a model such as <code>xgboost</code>, then to optimise the hyperparameters, I used <code>GridSearchCV</code> and the code looks like</p>
<pre><code>param_grid = {
'max_depth': [1,2,3,4,5],
'min_child_weight': [0,1,2,3,4,5],
'gamma': [0.5, 1, 1.5, 2, 5],
'colsample_bytree': [0.6, 0.8, 1.0],
}
clf = XGBClassifier(learning_rate=0.02,
n_estimators=600,
objective='binary:logistic',
silent=True,
nthread=1)
grid_search = GridSearchCV(
estimator=clf,
param_grid=param_grid,
scoring='accuracy',
n_jobs= -1)
grid_search.fit(X_train, y_train)
</code></pre>
<p>However, how should I set the <code>cv</code> in <code>grid_search</code>? Thank you so much in advance.</p>
<p><strong>Edit</strong>: So I tried to set <code>cv=0</code> since I want my training data to be strictly "earlier" than the test data, and I got the following error: <code>InvalidParameterError: The 'cv' parameter of GridSearchCV must be an int in the range [2, inf), an object implementing 'split' and 'get_n_splits', an iterable or None. Got 0 instead.</code></p>
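<p>For reference, a sketch of what I have been looking at: scikit-learn's <code>TimeSeriesSplit</code>, which always puts earlier rows in the training folds (this assumes the rows are sorted by <code>Time</code>):</p>
<pre><code>from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)

grid_search = GridSearchCV(
    estimator=clf,
    param_grid=param_grid,
    scoring='accuracy',
    n_jobs=-1,
    cv=tscv,  # each split trains on earlier rows and tests on later ones
)
grid_search.fit(X_train, y_train)
</code></pre>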
|
<python><gridsearchcv>
|
2025-01-05 14:49:27
| 1
| 592
|
Ishigami
|
79,330,907
| 16,383,578
|
How to flatten a bunch of asynchronous generators asynchronously?
|
<p>I have a bunch of webpages to scrape, these webpages have addresses that differ only in page number thus can be processed in parallel using <code>aiohttp</code>.</p>
<p>Now I am using an asynchronous function to process these webpages, each call takes one address as argument and returns a flat list of strings. I am passing these urls all at once, I want a flat list of all the strings from each function call, I don't care about the order of these strings, I want a string as soon as it is yielded, regardless of whether other function calls have completed, and I don't want to concatenate the results.</p>
<p>I just can't make it work.</p>
<p>This is a Minimal Reproducible Example that illustrates the same problem:</p>
<pre><code>import asyncio

async def test(n):
    await asyncio.sleep(0.5)
    for i in range(1, 11):
        yield n * i

async def run_test():
    ls = []
    for i in range(10):
        async for j in test(i):
            ls.append(j)
    return ls

asyncio.run(run_test())
</code></pre>
<p>The above code runs, but doesn't produce the expected result. It waits 5 seconds instead of 0.5 seconds, and every time I run it the output is the same.</p>
<p>I have tried this:</p>
<pre><code>async def run_test():
    ls = []
    for t in asyncio.as_completed([test(i) for i in range(10)]):
        for i in await t:
            ls.append(i)
    return ls
</code></pre>
<p>But it also doesn't work:</p>
<pre><code>TypeError: An asyncio.Future, a coroutine or an awaitable is required
</code></pre>
<p>This doesn't work, either:</p>
<pre><code>import asyncio

async def test(n):
    await asyncio.sleep(0.5)
    for i in range(1, 11):
        yield n * i

async def run_test():
    ls = []
    for x in await asyncio.gather(*(test(i) for i in range(10))):
        for j in x:
            ls.append(j)
    return ls

asyncio.run(run_test())
</code></pre>
<pre><code>TypeError: An asyncio.Future, a coroutine or an awaitable is required
</code></pre>
<p>I know I can do it like this:</p>
<pre><code>import asyncio

async def test(n):
    await asyncio.sleep(0.5)
    return [n * i for i in range(1, 11)]

async def run_test():
    ls = []
    for x in asyncio.as_completed([test(i) for i in range(10)]):
        ls.extend(await x)
    return ls

asyncio.run(run_test())
</code></pre>
<p>But as I specifically stated above, I want to use asynchronous generators.</p>
<p>So how can I yield from asynchronous generators concurrently?</p>
<hr />
<p>Perhaps my wording isn't specific enough.</p>
<p>I meant that in the first example every time I ran <code>run_test()</code> it outputs:</p>
<pre><code>[
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
2, 4, 6, 8, 10, 12, 14, 16, 18, 20,
3, 6, 9, 12, 15, 18, 21, 24, 27, 30,
4, 8, 12, 16, 20, 24, 28, 32, 36, 40,
5, 10, 15, 20, 25, 30, 35, 40, 45, 50,
6, 12, 18, 24, 30, 36, 42, 48, 54, 60,
7, 14, 21, 28, 35, 42, 49, 56, 63, 70,
8, 16, 24, 32, 40, 48, 56, 64, 72, 80,
9, 18, 27, 36, 45, 54, 63, 72, 81, 90,
]
</code></pre>
<p>And that would be the expected result if the code were to run synchronously.</p>
<p>I guess people assumed that I wanted numbers from 0 to 99, I don't know what gave people that idea.</p>
<p>Of course I can do this:</p>
<pre><code>[
10 * i + j
for i in range(10)
for j in range(10)
]
</code></pre>
<p>But why would I use that over <code>list(range(100))</code>?</p>
<p>The point is the output of each function doesn't matter, I just want to collect the entries as soon as they become available.</p>
<p>This is a slightly more complicated example, it gives a different output each time it is run, it is synchronous of course, but it demonstrates what I wanted to achieve asynchronously:</p>
<pre><code>import random

def test(n):
    for i in range(1, 11):
        yield n * i

def run_test():
    gens = [test(i) for i in range(10)]
    ls = []
    while gens:
        gen = random.choice(gens)
        try:
            ls.append(next(gen))
        except StopIteration:
            gens.remove(gen)
    return ls

run_test()
</code></pre>
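<p>For reference, the direction I have been experimenting with is a small merge helper built on <code>asyncio.Queue</code>: each generator is drained by its own task, and items are yielded as soon as any of them produces one (a sketch; error handling is omitted):</p>
<pre><code>import asyncio

async def merge(*agens):
    queue = asyncio.Queue()
    done = object()  # sentinel marking that one generator is exhausted

    async def drain(agen):
        async for item in agen:
            await queue.put(item)
        await queue.put(done)

    tasks = [asyncio.create_task(drain(a)) for a in agens]
    finished = 0
    while finished < len(tasks):
        item = await queue.get()
        if item is done:
            finished += 1
        else:
            yield item

async def run_test():
    return [x async for x in merge(*(test(i) for i in range(10)))]
</code></pre>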
|
<python><python-3.x><python-asyncio>
|
2025-01-05 14:36:20
| 2
| 3,930
|
Ξένη Γήινος
|
79,330,857
| 1,447,207
|
Find straight lines in image
|
<p>I would like to find the locations of some lines in a series of images. The images are exported from GoPro videos, with the camera looking down onto a sloping floor (a beach, so to speak) in a wave tank. My goal (but not the topic of this question) is to analyse the images for the position of particles before and after the wave has passed. The point of this question is that I want to identify the lines that separate the walls of the tank from the floor, so that I can transform the section of the image that shows the floor to be a rectangle, and stitch together the images from multiple cameras to shows the whole floor. The transformation and stitching works well when I have manually identified the lines separating the floor and the walls, but the cameras have been moved between experiments, so I thought an automated procedure might be time saving.</p>
<p>An example image, where I have manually identified the lines I want, is shown here:</p>
<p><a href="https://i.sstatic.net/HgSY8ROy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HgSY8ROy.png" alt="enter image description here" /></a></p>
<p>The camera has been moved slightly between the images, which is why I would like to automate the process. But I know that the lines I want always go all the way across the image, from the left edge to the right edge, and at an angle fairly close to horizontal.</p>
<p>I've tried using Hough transform from skimage, both directly on the image, and on the result of edge detection, but with very limited success. Here is a code example, and the results, of using Hough transform, and plotting the most prominent lines, as per the example <a href="https://scikit-image.org/docs/0.24.x/auto_examples/edges/plot_line_hough_transform.html#" rel="nofollow noreferrer">here</a>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from skimage.transform import hough_line, hough_line_peaks
from skimage.filters import scharr
from skimage.color import rgb2lab

im = mpimg.imread('image.png')
edges = scharr(rgb2lab(im[:,:,:])[:,:,0])  # Performing edge detection on the L-channel
angles = np.linspace(0.95*np.pi/2, 1.05*np.pi/2, 100)  # Using a fairly narrow range of possible angles
h, theta, d = hough_line(edges, angles)
peaks = hough_line_peaks(h, theta, d, threshold=0.99*np.amax(h))  # Using a high threshold, default is 0.5*max(h)

plt.imshow(im)
for _, angle, dist in zip(*peaks):
    (x0, y0) = dist * np.array([np.cos(angle), np.sin(angle)])
    plt.axline((x0, y0), slope=np.tan(angle + np.pi / 2), lw=0.5, alpha=0.75, c='r')
</code></pre>
<p><a href="https://i.sstatic.net/F0EJ8YJV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0EJ8YJV.png" alt="enter image description here" /></a></p>
<p>Lots and lots of lines have been (mis)identified, but not the ones I want.</p>
<p>I also tried the probabilistic Hough transform:</p>
<pre><code>from skimage.transform import probabilistic_hough_line
lines = probabilistic_hough_line(edges, threshold=10, line_length=5000, theta=angles)
plt.imshow(im)
for l in lines:
    plt.plot([l[0][0], l[1][0]], [l[0][1], l[1][1]], c='r', alpha=0.75, lw=0.5)
</code></pre>
<p><a href="https://i.sstatic.net/JfSJUBr2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfSJUBr2.png" alt="enter image description here" /></a></p>
<p>This time it looks like the lines I want have actually been identified, but more or less, but also loads of other lines. I tried changing the threshold parameter, but to no obvious effect.</p>
<p>I realise that I might not be able to automatically find exactly the lines I want, as there are also other lines in the image, but what I've found so far just looks totally bonkers. Any suggestions are most appreciated.</p>
<p><strong>Edit:</strong></p>
<p>I've found that if I use a slight Gaussian blur, and the <a href="https://scikit-image.org/docs/stable/auto_examples/edges/plot_canny.html" rel="nofollow noreferrer">Canny edge detector</a> instead of Scharr, it seems to work much better:</p>
<pre><code>from skimage.feature import canny
from skimage.filters import gaussian

l = rgb2lab(im[:,:,:])[:,:,0]
edges = canny(gaussian(l, 3), 1, 10, 20)
</code></pre>
<p><a href="https://i.sstatic.net/pzgqKilf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzgqKilf.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/rnap9nkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rnap9nkZ.png" alt="enter image description here" /></a></p>
<p><strong>Update:</strong> Here is the original image: <a href="https://nordam.folk.ntnu.no/gopro1_GX010111_001.png" rel="nofollow noreferrer">https://nordam.folk.ntnu.no/gopro1_GX010111_001.png</a></p>
|
<python><image-processing><computer-vision><scikit-image>
|
2025-01-05 14:02:56
| 0
| 803
|
Tor
|
79,330,788
| 1,060,344
|
Disable specific mypy checks for some files matching a naming pattern
|
<p>We have a Python project wherein we are using mypy. In our project, the test files live next to the source as opposed to a separate package for tests. Their names follow the pattern <code>test_<module>.py</code>. Something like this:</p>
<pre><code>src/
    package_1/
        __init__.py
        package_1.py
        test_package_1.py
    package_2/
        __init__.py
        package_2.py
        test_package_2.py
</code></pre>
<p>In mypy, I want to disable the <code>union-attr</code> rule, but only for the test files. So far I have only managed either to disable that rule for all files or to not run mypy on the test files at all. What I want is to disable just <code>union-attr</code> for the test files.</p>
<p>Is that possible at all?</p>
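<p>For reference, the direction I have been exploring is mypy's per-module overrides in <code>pyproject.toml</code>; this is only a sketch, and I am not sure the module pattern below actually matches test files that live inside packages:</p>
<pre><code>[[tool.mypy.overrides]]
module = ["*.test_*", "test_*"]
disable_error_code = ["union-attr"]
</code></pre>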
|
<python><mypy>
|
2025-01-05 13:18:33
| 1
| 2,213
|
vaidik
|
79,330,764
| 1,142,881
|
How can I silence `UndefinedMetricWarning`?
|
<p>How can I silence the following warning while running <code>GridSearchCV(model, params, cv=10, scoring='precision', verbose=1, n_jobs=20, refit=True)</code>?</p>
<pre><code>/opt/dev/myenv/lib/python3.9/site-packages/sklearn/metrics/_classification.py:1531: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
</code></pre>
<p>I have tried without success:</p>
<pre><code>import os, warnings
warnings.simplefilter("ignore")
warnings.filterwarnings("ignore")
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
os.environ["PYTHONWARNINGS"] = "ignore"
</code></pre>
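<p>For reference, two things I have not tried yet (a sketch; my assumption is that the warning is raised inside the <code>n_jobs=20</code> worker processes, which would not see filters installed after they start):</p>
<pre><code>import os
import warnings

from sklearn.exceptions import UndefinedMetricWarning

# set before building the search so that joblib worker processes inherit it
os.environ["PYTHONWARNINGS"] = "ignore::sklearn.exceptions.UndefinedMetricWarning"
warnings.filterwarnings("ignore", category=UndefinedMetricWarning)
</code></pre>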
|
<python><scikit-learn>
|
2025-01-05 13:03:49
| 1
| 14,469
|
SkyWalker
|
79,330,637
| 11,516,350
|
Inconsistent behavior between Flask real session storage and test_request_context during tests
|
<p>I’m using Flask’s session storage to temporarily save a list of dataclass objects. Here’s an example of the Transaction class and my TransactionMemoryRepository:</p>
<pre><code>@dataclass
class Transaction:
    transaction_date: datetime.date
    amount: decimal.Decimal
    concept: str
</code></pre>
<p>The repository has methods to save and retrieve transactions from the session:</p>
<pre><code>from flask import session
class TransactionMemoryRepository:
@staticmethod
def save_transactions(transactions: list[Transaction]):
session['transactions'] = transactions
@staticmethod
def get_transactions() -> list[Transaction]:
tmp_transactions = session.get('transactions', [])
transactions = []
for tmp_transaction in tmp_transactions:
transaction = Transaction(
transaction_date=datetime.strptime(tmp_transaction['transaction_date'], '%a, %d %b %Y %H:%M:%S %Z'),
amount=Decimal(tmp_transaction['amount']),
concept=tmp_transaction['concept'],
category=tmp_transaction.get('category'),
id=tmp_transaction.get('id')
)
transactions.append(transaction)
return transactions
</code></pre>
<p>The issue is that in real execution, Flask’s session storage saves the list of Transaction objects as a list of dictionaries. This is why I need to map each dictionary back to a Transaction object when reading from the session.</p>
<p>However, during tests using test_request_context, <strong>the behavior is different:
The session stores the objects as actual Transaction instances</strong>, which causes the read method to fail with the error:</p>
<pre><code>TypeError: 'Transaction' object is not subscriptable
</code></pre>
<p>Here’s my test setup using pytest:</p>
<pre><code>@pytest.fixture
def flask_app():
    app = Flask(__name__)
    app.secret_key = "test_secret_key"
    return app

@pytest.fixture
def flask_request_context(flask_app):
    with flask_app.test_request_context():
        yield
</code></pre>
<p>Then, I use this fixture on my test:</p>
<pre><code>def test_save_and_get_transactions(self, flask_request_context):
    transactions = [
        Transaction(amount=Decimal(100), concept="Concept 1",
                    transaction_date=datetime.now()),
        Transaction(amount=Decimal(200), concept="Concept 2",
                    transaction_date=datetime.now())
    ]

    TransactionMemoryRepository.save_transactions(transactions)
    result = TransactionMemoryRepository.get_transactions()
    # asserts ...
</code></pre>
<p>The issue: In production, session['transactions'] <strong>becomes a list of dictionaries</strong>, but during tests, <strong>it stores actual Transaction objects</strong>. As a result, the get_transactions() method works fine in the real application but fails in tests, because I'm accessing attributes as if they were dictionaries.</p>
<p><strong>Question:</strong></p>
<p>Why is there a difference between how Flask's session behaves during real execution and in tests using test_request_context?
How can I ensure the session behaves the same way in both environments so that my tests reflect the actual behavior?</p>
<p>The temporary solution, for now, is mapping to a dict when save but this is a workaround.</p>
<p>I’m attaching two images to show the debugging results. You can see that during normal execution, the session returns a Transaction object with properly typed attributes, while during tests, it returns a dict.</p>
<p><a href="https://i.sstatic.net/3KJnz2Rl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KJnz2Rl.png" alt="REAL execution is reading DICT instances" /></a></p>
<p><a href="https://i.sstatic.net/IYikCSZW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYikCSZW.png" alt="TEST execution is reading real DATACLASS Transaction instances" /></a></p>
|
<python><flask><testing><pytest><flask-session>
|
2025-01-05 11:37:19
| 0
| 1,347
|
UrbanoJVR
|
79,330,620
| 1,142,881
|
How to specify the levels to iterate in a grid search with an ensemble classifier?
|
<p>I have the following setup, but I can't find a way to pass the hyperparameter values to explore in the grid search for the svm* and mlp* estimators:</p>
<pre><code>steps = [('preprocessing', StandardScaler()),
('feature_selection', SelectKBest(mutual_info_classif, k=15)),
('clf', VotingClassifier(estimators=[("mlp1", mlp1),
("mlp2", mlp2),
("mlp3", mlp3),
("svm1", svm1),
("svm2", svm2)
], voting='soft'))
]
model = Pipeline(steps=steps)
params = [{
'preprocessing': [StandardScaler(), MinMaxScaler(), MaxAbsScaler()],
'feature_selection__score_func': [f_classif, mutual_info_classif]
}]
grid_search = GridSearchCV(model, params, cv=10, scoring='balanced_accuracy', verbose=1, n_jobs=20, refit=True)
</code></pre>
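<p>For illustration, a minimal sketch of the parameter-naming convention that reaches the nested estimators, continuing from the setup above: <code>GridSearchCV</code> routes parameters through <code>set_params</code>, so each level is joined with a double underscore — pipeline step (<code>clf</code>), then the estimator name inside the <code>VotingClassifier</code> (<code>mlp1</code>, <code>svm1</code>, …), then the estimator's own parameter. The concrete parameters below assume mlp* are <code>MLPClassifier</code> instances and svm* are <code>SVC</code> instances, and the values are placeholders.</p>
<pre class="lang-py prettyprint-override"><code>params = [{
    'preprocessing': [StandardScaler(), MinMaxScaler(), MaxAbsScaler()],
    'feature_selection__score_func': [f_classif, mutual_info_classif],
    # nested: <pipeline step>__<voting estimator name>__<estimator parameter>
    'clf__mlp1__hidden_layer_sizes': [(50,), (100,)],
    'clf__mlp2__alpha': [1e-4, 1e-3],
    'clf__svm1__C': [0.1, 1, 10],
    'clf__svm2__gamma': ['scale', 'auto'],
}]

grid_search = GridSearchCV(model, params, cv=10, scoring='balanced_accuracy',
                           verbose=1, n_jobs=20, refit=True)
</code></pre>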
|
<python><machine-learning><scikit-learn>
|
2025-01-05 11:22:57
| 0
| 14,469
|
SkyWalker
|
79,330,475
| 12,016,688
|
Why ctypes.c_long.from_address.value yields reference count of an object?
|
<p>In this <a href="https://youtu.be/ydf2sg2C6qQ?t=280" rel="nofollow noreferrer">pycon conference</a> the presenter says that <code>ctypes.c_long.from_address(id(SOME_OBJECT)).value</code> would yields the reference count of an object.
I tried to find this in the documentation but found nothing that helps me. The <a href="https://docs.python.org/3/library/ctypes.html#ctypes._CData.from_address" rel="nofollow noreferrer">doc</a> says:</p>
<blockquote>
<p><strong>from_address</strong>(address)</p>
<p>This method returns a ctypes type instance using the memory specified by address which must be an integer.</p>
<p>This method, and others that indirectly call this method, raises an auditing event ctypes.cdata with argument address.</p>
</blockquote>
<p>It doesn't mention the reference count (at least I can't find it). I tried it and this seems correct:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes, sys
>>> name = "some random name 123"
>>> sys.getrefcount(name)
2
>>> ctypes.c_long.from_address(id(name)).value
1
>>> x, y = name, name
>>> sys.getrefcount(name)
4
>>> ctypes.c_long.from_address(id(name)).value
3
</code></pre>
<p>Can someone explain this?</p>
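<p>For reference, a small sketch of the CPython implementation detail involved (this is not documented behaviour of <code>ctypes</code> itself): in CPython, <code>id(obj)</code> is the object's memory address, and the first field of every object header is the reference count (<code>ob_refcnt</code>, a <code>Py_ssize_t</code>), so reading a signed machine word at that address happens to return it. <code>sys.getrefcount</code> reports one more because the call itself holds a temporary reference. This can change between versions (for example, newer CPython releases have immortal objects with fixed counts), so it is only useful for poking around.</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
import sys

class PyObjectHead(ctypes.Structure):
    # Mirrors only the first field of CPython's PyObject layout.
    _fields_ = [("ob_refcnt", ctypes.c_ssize_t)]

name = "some random name 123"
print(sys.getrefcount(name))                          # includes the temporary reference
print(PyObjectHead.from_address(id(name)).ob_refcnt)  # the raw field itself
</code></pre>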
|
<python><memory><ctypes><reference-counting>
|
2025-01-05 09:25:08
| 1
| 2,470
|
Amir reza Riahi
|
79,330,420
| 9,970,706
|
How to get maximum average of subarray?
|
<p>I have been working this leet code questions</p>
<p><a href="https://leetcode.com/problems/maximum-average-subarray-i/description/" rel="nofollow noreferrer">https://leetcode.com/problems/maximum-average-subarray-i/description/</a></p>
<p>I have been able to create a solution after understanding the sliding window algorithm. I was wondering where my logic is going wrong in my code; I think my issue is in this section of the code, but I am unable to pinpoint why.</p>
<pre class="lang-py prettyprint-override"><code> while temp > k:
temp -= nums[left]
left += 1
ans = temp / (curr - left + 1)
</code></pre>
<p>While I do appreciate other solutions and other ways of solving this problem, I want to understand and get my solution working first before I start looking at different approaches; this way I get a better understanding of the algorithm.</p>
<p>Full code reference</p>
<pre class="lang-py prettyprint-override"><code>def findMaxAverage(self, nums, k):
"""
:type nums: List[int]
:type k: int
:rtype: float
"""
left = 0
ans = 0
temp = 0
for curr in range(len(nums)):
temp += nums[curr]
curr += 1
while temp > k:
temp -= nums[left]
left += 1
ans = temp / (curr - left + 1)
return ans
</code></pre>
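<p>For comparison, here is a minimal fixed-size sliding-window sketch (a hypothetical standalone function, not the LeetCode class method); note that the window here is maintained by its <em>length</em> <code>k</code> rather than by comparing the running sum against <code>k</code>:</p>
<pre class="lang-py prettyprint-override"><code>def max_average_of_window(nums, k):
    # Sum of the first window of length k.
    window_sum = sum(nums[:k])
    best = window_sum
    for i in range(k, len(nums)):
        # Slide right: take in nums[i], drop nums[i - k].
        window_sum += nums[i] - nums[i - k]
        best = max(best, window_sum)
    return best / float(k)

print(max_average_of_window([1, 12, -5, -6, 50, 3], 4))  # 12.75
</code></pre>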
|
<python><python-3.x><logic><average><sub-array>
|
2025-01-05 08:34:18
| 2
| 781
|
Zubair Amjad
|
79,330,304
| 10,000,669
|
Optimizing sieving code in the Self Initializing Quadratic Sieve for PyPy
|
<p>I've coded up the Self Initializing Quadratic Sieve (<a href="https://www.rieselprime.de/ziki/Self-initializing_quadratic_sieve" rel="nofollow noreferrer">SIQS</a>) in Python, written to be as fast as possible under PyPy (not CPython).</p>
<p>Here is the complete code:</p>
<pre><code>import logging
import time
import math
from math import sqrt, ceil, floor, exp, log2, log, isqrt
from rich.live import Live
from rich.table import Table
from rich.console import Console
import random
import sys
LOWER_BOUND_SIQS = 1000
UPPER_BOUND_SIQS = 4000
logging.basicConfig(
format='[%(levelname)s] %(asctime)s - %(message)s',
level=logging.INFO
)
def get_gray_code(n):
gray = [0] * (1 << (n - 1))
gray[0] = (0, 0)
for i in range(1, 1 << (n - 1)):
v = 1
j = i
while (j & 1) == 0:
v += 1
j >>= 1
tmp = i + ((1 << v) - 1)
tmp >>= v
if (tmp & 1) == 1:
gray[i] = (v - 1, -1)
else:
gray[i] = (v - 1, 1)
return gray
MULT_LIST = [
1, 2, 3, 5, 7, 9, 10, 11, 13, 14,
15, 17, 19, 21, 23, 25, 29, 31,
33, 35, 37, 39, 41, 43, 45,
47, 49, 51, 53, 55, 57, 59, 61, 63,
65, 67, 69, 71, 73, 75, 77, 79, 83,
85, 87, 89, 91, 93, 95, 97, 101, 103, 105, 107,
109, 111, 113, 115, 119, 121, 123, 127, 129, 131, 133,
137, 139, 141, 143, 145, 147, 149, 151, 155, 157, 159,
161, 163, 165, 167, 173, 177, 179, 181, 183, 185, 187,
191, 193, 195, 197, 199, 201, 203, 205, 209, 211, 213,
215, 217, 219, 223, 227, 229, 231, 233, 235, 237, 239,
241, 249, 251, 253, 255
]
def create_table(relations, target_relations, num_poly, start_time):
end = time.time()
elapsed = end - start_time
relations_per_second = len(relations) / elapsed if elapsed > 0 else 0
poly_per_second = num_poly / elapsed if elapsed > 0 else 0
percent = (len(relations) / target_relations) * 100 if target_relations > 0 else 0
percent_per_second = percent / elapsed if elapsed > 0 else 0
remaining_percent = 100.0 - percent
seconds = int(remaining_percent / percent_per_second) if percent_per_second > 0 else 0
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
table = Table(title="Processing Status")
table.add_column("Metric", style="cyan", no_wrap=True)
table.add_column("Value", style="magenta")
table.add_row("Relations per second", f"{relations_per_second:,.2f}")
table.add_row("Poly per second", f"{poly_per_second:,.2f}")
table.add_row("Percent", f"{percent:,.2f}%")
table.add_row("Percent per second", f"{percent_per_second:,.4f}%")
table.add_row("Estimated Time", f"{h:d}:{m:02d}:{s:02d}")
return table
class QuadraticSieve:
def __init__(self, M, B=None, T=2, prime_limit=20, eps=30, lp_multiplier=20, multiplier=None):
self.logger = logging.getLogger(__name__)
self.prime_log_map = {}
self.root_map = {}
self.M = M
self.B = B
self.T = T
self.prime_limit = prime_limit
self.eps = eps
self.lp_multiplier = lp_multiplier
self.multiplier = multiplier
self.console = Console()
print(f"B: {B}")
print(f"M: {M}")
print(f"prime_limit: {prime_limit}")
print(f"eps: {eps}")
print(f"lp_multiplier: {lp_multiplier}")
@staticmethod
def gcd(a, b):
a, b = abs(a), abs(b)
while a:
a, b = b % a, a
return b
@staticmethod
def legendre(n, p):
val = pow(n, (p - 1) // 2, p)
return val - p if val > 1 else val
@staticmethod
def jacobi(a, m):
a = a % m
t = 1
while a != 0:
while a % 2 == 0:
a //= 2
if m % 8 in [3, 5]:
t = -t
a, m = m, a
if a % 4 == 3 and m % 4 == 3:
t = -t
a %= m
return t if m == 1 else 0
@staticmethod
def modinv(n, p):
n = n % p
x, u = 0, 1
while n:
x, u = u, x - (p // n) * u
p, n = n, p % n
return x
def factorise_fast(self, value, factor_base):
factors = set()
if value < 0:
factors ^= {-1}
value = -value
for factor in factor_base[1:]:
while value % factor == 0:
factors ^= {factor}
value //= factor
return factors, value
@staticmethod
def tonelli_shanks(a, p):
a %= p
if p % 8 in [3, 7]:
x = pow(a, (p + 1) // 4, p)
return x, p - x
if p % 8 == 5:
x = pow(a, (p + 3) // 8, p)
if pow(x, 2, p) != a % p:
x = (x * pow(2, (p - 1) // 4, p)) % p
return x, p - x
d = 2
symb = 0
while symb != -1:
symb = QuadraticSieve.jacobi(d, p)
d += 1
d -= 1
n = p - 1
s = 0
while n % 2 == 0:
n //= 2
s += 1
t = n
A = pow(a, t, p)
D = pow(d, t, p)
m = 0
for i in range(s):
i1 = pow(2, s - 1 - i)
i2 = (A * pow(D, m, p)) % p
i3 = pow(i2, i1, p)
if i3 == p - 1:
m += pow(2, i)
x = (pow(a, (t + 1) // 2, p) * pow(D, m // 2, p)) % p
return x, p - x
@staticmethod
def prime_sieve(n):
sieve = [True] * (n + 1)
sieve[0], sieve[1] = False, False
for i in range(2, int(n**0.5) + 1):
if sieve[i]:
for j in range(i * 2, n + 1, i):
sieve[j] = False
return [i for i, is_prime in enumerate(sieve) if is_prime]
def find_b(self, N):
x = ceil(exp(0.5 * sqrt(log(N) * log(log(N)))))
return x
def choose_multiplier(self, N, B):
prime_list = self.prime_sieve(B)
if self.multiplier is not None:
self.logger.info("Using multiplier k = %d", self.multiplier)
return prime_list
NUM_TEST_PRIMES = 300
LN2 = math.log(2)
num_primes = min(len(prime_list), NUM_TEST_PRIMES)
log2n = math.log(N)
scores = [0.0 for _ in MULT_LIST]
num_multipliers = 0
for i, curr_mult in enumerate(MULT_LIST):
knmod8 = (curr_mult * (N % 8)) % 8
logmult = math.log(curr_mult)
scores[i] = 0.5 * logmult
if knmod8 == 1:
scores[i] -= 2 * LN2
elif knmod8 == 5:
scores[i] -= LN2
elif knmod8 in (3, 7):
scores[i] -= 0.5 * LN2
num_multipliers += 1
for i in range(1, num_primes):
prime = prime_list[i]
contrib = math.log(prime) / (prime - 1)
modp = N % prime
for j in range(num_multipliers):
curr_mult = MULT_LIST[j]
knmodp = (modp * curr_mult) % prime
if knmodp == 0 or self.legendre(knmodp, prime) == 1:
if knmodp == 0:
scores[j] -= contrib
else:
scores[j] -= 2 * contrib
best_score = float('inf')
best_mult = 1
for i in range(num_multipliers):
if scores[i] < best_score:
best_score = scores[i]
best_mult = MULT_LIST[i]
self.multiplier = best_mult
self.logger.info("Using multiplier k = %d", best_mult)
return prime_list
def get_smooth_b(self, N, B, prime_list):
factor_base = [-1, 2]
self.prime_log_map[2] = 1
for p in prime_list[1:]:
if self.legendre(N, p) == 1:
factor_base.append(p)
self.prime_log_map[p] = round(log2(p))
self.root_map[p] = self.tonelli_shanks(N, p)
return factor_base
def decide_bound(self, N, B=None):
if B is None:
B = self.find_b(N)
self.B = B
self.logger.info("Using B = %d", B)
return B
def build_factor_base(self, N, B, prime_list):
fb = self.get_smooth_b(N, B, prime_list)
self.logger.info("Factor base size: %d", len(fb))
return fb
def new_poly_a(self, factor_base, N, M, poly_a_list):
small_B = 1024
lower_polypool_index = 2
upper_polypool_index = small_B - 1
poly_low_found = False
for i in range(small_B):
if factor_base[i] > LOWER_BOUND_SIQS and not poly_low_found:
lower_polypool_index = i
poly_low_found = True
if factor_base[i] > UPPER_BOUND_SIQS:
upper_polypool_index = i - 1
break
# Compute target_a and bit threshold
target_a = int(math.sqrt(2 * N) / M)
target_mul = 0.9
target_bits = int(target_a.bit_length() * target_mul)
too_close = 10
close_range = 5
min_ratio = LOWER_BOUND_SIQS
while True:
poly_a = 1
afact = []
qli = []
while True:
found_a_factor = False
while(found_a_factor == False):
randindex = random.randint(lower_polypool_index, upper_polypool_index)
potential_a_factor = factor_base[randindex]
found_a_factor = True
if potential_a_factor in afact:
found_a_factor = False
poly_a = poly_a * potential_a_factor
afact.append(potential_a_factor)
qli.append(randindex)
j = target_a.bit_length() - poly_a.bit_length()
if j < too_close:
poly_a = 1
s = 0
afact = []
qli = []
continue
elif j < (too_close + close_range):
break
a1 = target_a // poly_a
if a1 < min_ratio:
continue
mindiff = 100000000000000000
randindex = 0
for i in range(small_B):
if abs(a1 - factor_base[i]) < mindiff:
mindiff = abs(a1 - factor_base[i])
randindex = i
found_a_factor = False
while not found_a_factor:
potential_a_factor = factor_base[randindex]
found_a_factor = True
if potential_a_factor in afact:
found_a_factor = False
if not found_a_factor:
randindex += 1
if randindex > small_B:
continue
poly_a = poly_a * factor_base[randindex]
afact.append(factor_base[randindex])
qli.append(randindex)
diff_bits = (target_a - poly_a).bit_length()
if diff_bits < target_bits:
if poly_a in poly_a_list:
if target_bits > 1000:
print("SOMETHING WENT WRONG")
sys.exit()
target_bits += 1
continue
else:
break
poly_a_list.append(poly_a)
return poly_a, sorted(qli), set(afact)
def generate_first_polynomial(self, factor_base, N, M, poly_a_list):
a, qli, factors_a = self.new_poly_a(factor_base, N, M, poly_a_list)
s = len(qli)
B = []
for l in range(s):
p = factor_base[qli[l]]
r1 = self.root_map[p][0]
aq = a // p
invaq = self.modinv(aq, p)
gamma = r1 * invaq % p
if gamma > p // 2:
gamma = p - gamma
B.append(aq * gamma)
b = sum(B) % a
c = (b * b - N) // a
soln_map = {}
Bainv = {}
for p in factor_base:
Bainv[p] = []
if a % p == 0 or p == 2:
continue
ainv = self.modinv(a, p)
# store bainv
for j in range(s):
Bainv[p].append((2 * B[j] * ainv) % p)
# store roots
r1, r2 = self.root_map[p]
r1 = ((r1 - b) * ainv) % p
r2 = ((r2 - b) * ainv) % p
soln_map[p] = [r1, r2]
return a, b, c, B, Bainv, soln_map, s, factors_a
def sieve(self, N, B, factor_base, M):
# ------------------------------------------------
# 1) TIMING
# ------------------------------------------------
start = time.time()
# ------------------------------------------------
# 2) FACTOR BASE & RELATED
# ------------------------------------------------
fb_len = len(factor_base)
fb_map = {val: i for i, val in enumerate(factor_base)}
target_relations = fb_len + self.T
large_prime_bound = B * self.lp_multiplier
# ------------------------------------------------
# 3) THRESHOLD & MISC
# ------------------------------------------------
threshold = int(math.log2(M * math.sqrt(N)) - self.eps)
lp_found = 0
ind = 1
matrix = [0] * fb_len
relations = []
roots = []
partials = {}
num_poly = 0
interval_size = 2 * M + 1
grays = get_gray_code(20)
poly_a_list = []
poly_ind = 0
sieve_values = [0] * interval_size
r1 = 0
r2 = 0
def process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a):
nonlocal ind
val = sieve_values[x]
sieve_values[x] = 0
lpf = 0
if val > threshold:
xval = x - M
relation = a * xval + b
poly_val = a * xval * xval + 2 * b * xval + c
local_factors, value = self.factorise_fast(poly_val, factor_base)
local_factors ^= factors_a
if value != 1:
if value < large_prime_bound:
if value in partials:
rel, lf, pv = partials[value]
relation *= rel
local_factors ^= lf
poly_val *= pv
lpf = 1
else:
partials[value] = (relation, local_factors, poly_val * a)
return 0
else:
return 0
for fac in local_factors:
idx = fb_map[fac]
matrix[idx] |= ind
ind = ind + ind
relations.append(relation)
roots.append(poly_val * a)
return lpf
with Live(console=self.console) as live:
while len(relations) < target_relations:
if num_poly % 10 == 0:
live.update(create_table(relations, target_relations, num_poly, start))
if poly_ind == 0:
a, b, c, B, Bainv, soln_map, s, factors_a = self.generate_first_polynomial(factor_base, N, M, poly_a_list)
end = 1 << (s - 1)
poly_ind += 1
else:
v, e = grays[poly_ind]
b = (b + 2 * e * B[v])
c = (b * b - N) // a
poly_ind += 1
if poly_ind == end:
poly_ind = 0
v, e = grays[poly_ind] # v, e for next iteration
for p in factor_base:
if p < self.prime_limit or a % p == 0:
continue
log_p = self.prime_log_map[p]
r1, r2 = soln_map[p]
soln_map[p][0] = (r1 - e * Bainv[p][v]) % p
soln_map[p][1] = (r2 - e * Bainv[p][v]) % p
amx = r1 + M
bmx = r2 + M
apx = amx - p
bpx = bmx - p
k = p
while k < M:
sieve_values[apx + k] += log_p
sieve_values[bpx + k] += log_p
sieve_values[amx - k] += log_p
sieve_values[bmx - k] += log_p
k += p
num_poly += 1
x = 0
while x < 2 * M - 6:
# for some reason need to do all this for max performance gain in PyPy3
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
print(f"\n{num_poly} polynomials sieved")
print(f"{lp_found} relations from partials")
print(f"{target_relations - lp_found} normal smooth relations")
print(f"{target_relations} total relations\n")
return matrix, relations, roots
def solve_bits(self, matrix, n):
self.logger.info("Solving linear system in GF(2).")
lsmap = {lsb: 1 << lsb for lsb in range(n)}
# GAUSSIAN ELIMINATION
m = len(matrix)
marks = []
cur = -1
# m -> number of primes in factor base
# n -> number of smooth relations
mark_mask = 0
for row in matrix:
if cur % 100 == 0:
print("", end=f"{cur, m}\r")
cur += 1
lsb = (row & -row).bit_length() - 1
if lsb == -1:
continue
marks.append(n - lsb - 1)
mark_mask |= 1 << lsb
for i in range(m):
if matrix[i] & lsmap[lsb] and i != cur:
matrix[i] ^= row
marks.sort()
# NULL SPACE EXTRACTION
nulls = []
free_cols = [col for col in range(n) if col not in marks]
k = 0
for col in free_cols:
shift = n - col - 1
val = 1 << shift
fin = val
for v in matrix:
if v & val:
fin |= v & mark_mask
nulls.append(fin)
k += 1
if k == self.T:
break
return nulls
def extract_factors(self, N, relations, roots, null_space):
n = len(relations)
for vector in null_space:
prod_left = 1
prod_right = 1
for idx in range(len(relations)):
bit = vector & 1
vector = vector >> 1
if bit == 1:
prod_left *= relations[idx]
prod_right *= roots[idx]
idx += 1
sqrt_right = isqrt(prod_right)
prod_left = prod_left % N
sqrt_right = sqrt_right % N
factor_candidate = self.gcd(N, prod_left - sqrt_right)
if factor_candidate not in (1, N):
other_factor = N // factor_candidate
self.logger.info("Found factors: %d, %d", factor_candidate, other_factor)
return factor_candidate, other_factor
return 0, 0
def factor(self, N, B=None):
overall_start = time.time()
self.logger.info("========== Quadratic Sieve V4 Start ==========")
self.logger.info("Factoring N = %d", N)
step_start = time.time()
B = self.decide_bound(N, self.B)
step_end = time.time()
self.logger.info("Step 1 (Decide Bound) took %.3f seconds", step_end - step_start)
step_start = time.time()
prime_list = self.choose_multiplier(N, self.B)
step_end = time.time()
self.logger.info("Step 2 (Choose Multiplier) took %.3f seconds", step_end - step_start)
kN = self.multiplier * N
if kN.bit_length() < 140:
LOWER_BOUND_SIQS = 3
step_start = time.time()
factor_base = self.build_factor_base(kN, B, prime_list)
step_end = time.time()
self.logger.info("Step 3 (Build Factor Base) took %.3f seconds", step_end - step_start)
step_start = time.time()
matrix, relations, roots = self.sieve(kN, B, factor_base, self.M)
step_end = time.time()
self.logger.info("Step 4 (Sieve Interval) took %.3f seconds", step_end - step_start)
n = len(relations)
step_start = time.time()
null_space = self.solve_bits(matrix, n)
step_end = time.time()
self.logger.info("Step 5 (Solve Dependencies) took %.3f seconds", step_end - step_start)
step_start = time.time()
f1, f2 = self.extract_factors(N, relations, roots, null_space)
step_end = time.time()
self.logger.info("Step 6 (Extract Factors) took %.3f seconds", step_end - step_start)
if f1 and f2:
self.logger.info("Quadratic Sieve successful: %d * %d = %d", f1, f2, N)
else:
self.logger.warning("No non-trivial factors found with the current settings.")
overall_end = time.time()
self.logger.info("Total time for Quadratic Sieve: %.10f seconds", overall_end - overall_start)
self.logger.info("========== Quadratic Sieve End ==========")
return f1, f2
if __name__ == '__main__':
## 60 digit number
#N = 373784758862055327503642974151754627650123768832847679663987
#qs = QuadraticSieve(B=111000, M=400000, T=10, prime_limit=45, eps=34, lp_multiplier=20000)
### 70 digit number
N = 3605578192695572467817617873284285677017674222302051846902171336604399
qs = QuadraticSieve(B=300000, M=350000, prime_limit=47, eps=40, T=10, lp_multiplier=256)
## 80 digit number
#N = 4591381393475831156766592648455462734389 * 1678540564209846881735567157366106310351
#qs = QuadraticSieve(B=700_000, M=600_000, prime_limit=52, eps=45, T=10, lp_multiplier=256)
factor1, factor2 = qs.factor(N)
</code></pre>
<p>Now, the main running time comes from the following section, which is basically one giant sieving process:</p>
<pre><code> def sieve(self, N, B, factor_base, M):
start = time.time()
fb_len = len(factor_base)
fb_map = {val: i for i, val in enumerate(factor_base)}
target_relations = fb_len + self.T
large_prime_bound = B * self.lp_multiplier
threshold = int(math.log2(M * math.sqrt(N)) - self.eps)
lp_found = 0
ind = 1
matrix = [0] * fb_len
relations = []
roots = []
partials = {}
num_poly = 0
interval_size = 2 * M + 1
grays = get_gray_code(20)
poly_a_list = []
poly_ind = 0
sieve_values = [0] * interval_size
r1 = 0
r2 = 0
def process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a):
nonlocal ind
val = sieve_values[x]
sieve_values[x] = 0
lpf = 0
if val > threshold:
xval = x - M
relation = a * xval + b
poly_val = a * xval * xval + 2 * b * xval + c
local_factors, value = self.factorise_fast(poly_val, factor_base)
local_factors ^= factors_a
if value != 1:
if value < large_prime_bound:
if value in partials:
rel, lf, pv = partials[value]
relation *= rel
local_factors ^= lf
poly_val *= pv
lpf = 1
else:
partials[value] = (relation, local_factors, poly_val * a)
return 0
else:
return 0
for fac in local_factors:
idx = fb_map[fac]
matrix[idx] |= ind
ind = ind + ind
relations.append(relation)
roots.append(poly_val * a)
return lpf
with Live(console=self.console) as live:
while len(relations) < target_relations:
if num_poly % 10 == 0:
live.update(create_table(relations, target_relations, num_poly, start))
if poly_ind == 0:
a, b, c, B, Bainv, soln_map, s, factors_a = self.generate_first_polynomial(factor_base, N, M, poly_a_list)
end = 1 << (s - 1)
poly_ind += 1
else:
v, e = grays[poly_ind]
b = (b + 2 * e * B[v])
c = (b * b - N) // a
poly_ind += 1
if poly_ind == end:
poly_ind = 0
v, e = grays[poly_ind] # v, e for next iteration
for p in factor_base:
if p < self.prime_limit or a % p == 0:
continue
log_p = self.prime_log_map[p]
r1, r2 = soln_map[p]
soln_map[p][0] = (r1 - e * Bainv[p][v]) % p
soln_map[p][1] = (r2 - e * Bainv[p][v]) % p
amx = r1 + M
bmx = r2 + M
apx = amx - p
bpx = bmx - p
k = p
while k < M:
sieve_values[apx + k] += log_p
sieve_values[bpx + k] += log_p
sieve_values[amx - k] += log_p
sieve_values[bmx - k] += log_p
k += p
num_poly += 1
x = 0
while x < 2 * M - 6:
# for some reason need to do all this for max performance gain in PyPy3
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a)
x += 1
print(f"\n{num_poly} polynomials sieved")
print(f"{lp_found} relations from partials")
print(f"{target_relations - lp_found} normal smooth relations")
print(f"{target_relations} total relations\n")
return matrix, relations, roots
</code></pre>
<p>I have "optimized" the code as much as I can so far, and it runs pretty fast in PyPy, but I am wondering if there is anything I am missing that can tweak out more performance gain. I haven't really been able to get anything meaningful out of profiling the code because of the way that PyPy works, but through a bit of testing have made various improvements that have cut down the time a lot like some loop unrolling, precision reduction, and the way I work with the arrays.</p>
<p>Unfortunately, this sieving code is still not fast enough to be feasible in factoring the size of numbers which I'm targeting(100 to 115 digit semiprimes). By my initial estimates with semi-decent parameter selection and without multiprocessing, the sieving code itself would taken around 70 hours when compiled using PyPy3 on my device.</p>
<p>Is there anything I can do to better optimize this code for the PyPy JIT?</p>
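<p>One thing that may help narrow this down without a full profiler is pulling the innermost sieve loop into a tiny standalone harness, so different array representations (plain <code>list</code>, <code>array('l')</code>, and so on) can be timed under PyPy in isolation. The numbers below are made up and only the loop shape matches the real code — it is a measuring aid, not a claim that any particular variant is faster:</p>
<pre class="lang-py prettyprint-override"><code>import time
from array import array

M = 350_000
INTERVAL = 2 * M + 1
PRIMES = list(range(1_009, 20_000, 37))   # stand-in values, not a real factor base
LOG_P = 5

def sieve_pass(values):
    # Same access pattern as the inner loop in sieve(): stride by p, add log_p.
    for p in PRIMES:
        k = p
        while k < M:
            values[k] += LOG_P
            values[2 * M - k] += LOG_P
            k += p

for make in (lambda: [0] * INTERVAL, lambda: array('l', [0]) * INTERVAL):
    values = make()
    t0 = time.perf_counter()
    for _ in range(5):
        sieve_pass(values)
    print(type(values).__name__, time.perf_counter() - t0)
</code></pre>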
|
<python><performance><optimization><pypy><sieve-algorithm>
|
2025-01-05 06:30:44
| 1
| 1,429
|
J. Doe
|
79,330,288
| 4,586,008
|
Pandas: Get moving average by weekday and hour
|
<p>I have a Series which is hourly sampled:</p>
<pre><code>Datetime
2023-01-01 11:00:00 6
2023-01-01 12:00:00 60
2023-01-01 13:00:00 53
2023-01-01 14:00:00 14
2023-01-01 17:00:00 4
2023-01-01 18:00:00 66
2023-01-01 19:00:00 38
2023-01-01 20:00:00 28
2023-01-01 21:00:00 0
2023-01-02 11:00:00 9
2023-01-02 12:00:00 32
2023-01-02 13:00:00 44
2023-01-02 14:00:00 12
2023-01-02 18:00:00 42
2023-01-02 19:00:00 43
2023-01-02 20:00:00 34
2023-01-02 21:00:00 9
2023-01-03 11:00:00 19
...
</code></pre>
<p>(We can treat missing hours as 0s.)</p>
<p>I want to compute a 4-week moving average of the series, using the past 4 weeks' values <strong>on the same weekday and hour</strong>.</p>
<p>For example, to compute the average on <code>2023-01-31 01:00:00</code> (Tuesday), take the average of the hours on
<code>2023-01-03 01:00:00</code>, <code>2023-01-10 01:00:00</code>, <code>2023-01-17 01:00:00</code>, and <code>2023-01-24 01:00:00</code>.</p>
<p>The series is quite long. What is an efficient way to do this?</p>
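<p>For illustration, a minimal sketch of one efficient approach, assuming the Series (<code>s</code> here) is first reindexed to a complete hourly range with missing hours filled with 0, as allowed above. With a complete hourly index, "same weekday and hour, <em>w</em> weeks ago" is simply a shift of <code>w * 168</code> hours, so the 4-week average is the mean of four shifted copies:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# s is the hourly Series from the question.
full_index = pd.date_range(s.index.min(), s.index.max(), freq="h")
s = s.reindex(full_index, fill_value=0)

# Average over the same weekday/hour in the previous 4 weeks (168 hours per week).
ma = sum(s.shift(w * 168) for w in range(1, 5)) / 4
</code></pre>
<p>The first four weeks come out as <code>NaN</code> because there is no full history yet; a groupby-based version (<code>s.groupby([s.index.dayofweek, s.index.hour])</code> with a shifted <code>rolling(4).mean()</code>) gives the same result but is usually slower on long series.</p>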
|
<python><pandas><datetime>
|
2025-01-05 06:14:52
| 2
| 640
|
lpounng
|
79,330,224
| 2,893,496
|
Mismatch in shapes of Input and Prediction in Tensorflow
|
<p>I am compiling the TensorFlow model as follows:</p>
<pre><code>model = tf.keras.Sequential()
model.add( tf.keras.layers.InputLayer(shape=(2,)) )
model.add( tf.keras.layers.Dense(1024) )
model.add( tf.keras.layers.Dense(1024) )
model.add( tf.keras.layers.Dense(units=1) )
model.compile(loss="mean_squared_error", optimizer="adam", metrics=["mse"])
</code></pre>
<p>I expect it to "learn" to take two floating point numbers and predict one floating point number.</p>
<p>When i train it with</p>
<pre><code>model.fit(x=trainData, y=trainRes, epochs=12, batch_size=100)
</code></pre>
<p><code>trainData</code> and <code>trainRes</code> are both numpy arrays.</p>
<p><code>trainData.shape</code> is <code>(10000, 2)</code> and <code>trainRes.shape</code> is <code>(10000,)</code>. It seems to run the epochs, and even <code>model.evaluate(x=testData, y=testRes)</code> runs (although it outputs a huge MSE), but when I attempt to run:</p>
<pre><code>res = model.predict(testData[0])
</code></pre>
<p>I get an error:</p>
<pre><code>Invalid input shape for input Tensor("data:0", shape=(2,), dtype=float32). Expected shape (None, 2), but input has incompatible shape (2,)
Arguments received by Sequential.call():
• inputs=tf.Tensor(shape=(2,), dtype=float32)
• training=False
• mask=None
</code></pre>
<p>For whatever reason the following works:</p>
<pre><code>res = model.predict(testData[0:1])
</code></pre>
<p>However, rather than a single value it returns 1x1 array.</p>
<p>My best guess is that somehow Keras is interpreting the entire array as a single unit for training purposes, rather than going "line by line". This also explains why the training is nonsense, and it does not approach anything sensible.</p>
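<p>For reference, a minimal sketch of the batch-dimension workaround, using the arrays from the question: <code>predict</code> always expects a batch (shape <code>(None, 2)</code>), so a single sample has to be wrapped in a batch of one, and the <code>(1, 1)</code> output indexed back down to a scalar.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

single = np.expand_dims(testData[0], axis=0)  # shape (2,) -> (1, 2)
res = model.predict(single)[0, 0]             # unwrap the (1, 1) output to a float
</code></pre>
<p>Note that <code>fit</code> with <code>trainData.shape == (10000, 2)</code> already treats each row as one sample, so the huge MSE is likely a separate issue (for example unscaled targets, or the lack of activations making the network purely linear) rather than the batching one.</p>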
|
<python><tensorflow>
|
2025-01-05 05:06:57
| 1
| 5,886
|
v010dya
|
79,330,156
| 506,824
|
Why has VS Code stopped discovering my python tests?
|
<p>I'm using</p>
<ul>
<li>VS Code Version: 1.96.2</li>
<li>Python 3.11.8</li>
<li>MacOS Sonoma 14.5</li>
</ul>
<p>I have a project that's been working for a long time, but now VS Code is failing to discover/run my tests. When I go into the TESTING tab, I have a progress indicator that keeps running forever and the test output shows 0/0 tests run. It looks like it's stuck in test discovery. Clicking the refresh button doesn't help; I just get:</p>
<pre><code>2025-01-04 23:02:05.066 [info] Running discovery for pytest using the new test adapter.
2025-01-04 23:02:05.066 [error] Test discovery already in progress, not starting a new one.
</code></pre>
<p>in the output tab. There are no problems listed in the problems tab. If I go into the terminal window and run the tests manually, all is good:</p>
<pre><code>(.venv) roysmith@Roys-Mac-Mini dyk-tools % pytest
============================================================================== test session starts ===============================================================================
platform darwin -- Python 3.11.8, pytest-8.3.3, pluggy-1.5.0
rootdir: /Users/roysmith/dev/dyk-tools
plugins: socket-0.7.0, mock-3.14.0
collected 150 items
src/tests/bot/test_dykbot.py .............. [ 9%]
src/tests/db/test_models.py ... [ 11%]
src/tests/test___init__.py . [ 12%]
src/tests/test_conftest.py . [ 12%]
src/tests/web/test_core.py ...... [ 16%]
src/tests/web/test_data.py ...... [ 20%]
src/tests/wiki/test_article.py ........................ [ 36%]
src/tests/wiki/test_hook.py ............................. [ 56%]
src/tests/wiki/test_hook_set.py ......... [ 62%]
src/tests/wiki/test_nomination.py ............x...................... [ 85%]
src/tests/wiki/test_nomination_list.py ...................... [100%]
========================================================================= 149 passed, 1 xfailed in 4.15s =========================================================================
</code></pre>
<p>The only thing I can think of is that I've been refactoring my .zsh startup files so perhaps I messed up something in my environment, but given that the tests run manually, that seems unlikely. Any ideas what I might have broken?</p>
|
<python><visual-studio-code><pytest>
|
2025-01-05 04:04:21
| 1
| 2,177
|
Roy Smith
|
79,330,093
| 18,125,194
|
Applying SMOTE-Tomek to nested cross validation with timeseries data
|
<p>I want to perform a nested cross-validation for a classification problem while ensuring that the model is not exposed to future data. Since the data is time-series, I plan to use a time-aware splitting strategy. In the inner loop of the nested cross-validation, I aim to apply SMOTE-Tomek, but I'm not sure how to do this.</p>
<p>This is my sample dataframe.</p>
<pre><code># Test data
data = pd.DataFrame({
"Date": pd.date_range(start="2023-01-01", periods=100, freq='D'),
"Feature1": np.random.rand(100),
"Feature2": np.random.rand(100),
'Category': [random.choice(['A', 'B', 'C']) for _ in range(100)],
"Target": np.random.choice([0, 1], size=100)
})
</code></pre>
<p>and this is my code so far; I'm not sure if my approach is correct:</p>
<pre><code>!pip install imbalanced-learn
import pandas as pd
import numpy as np
import random
from sklearn.model_selection import cross_validate, GridSearchCV, TimeSeriesSplit
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
import category_encoders as ce
from imblearn.combine import SMOTETomek
# Defining X and y
encoder = ce.OneHotEncoder(handle_missing='value', handle_unknown='value', use_cat_names=True)
X0 = data[['Feature1', 'Feature2', 'Category']]
X_enc = encoder.fit_transform(X0)
y = data[['Target']]
X = np.array(X_enc)
y = np.array(y)
# Start nested cross-validation
# Defining each Model
RFC = RandomForestClassifier(random_state=0)
LR = LogisticRegression(random_state=0)
# Defining the cross-validation function with TimeSeriesSplit
def CrossValidate(model, X, y, print_model=False):
# Using TimeSeriesSplit for splitting by date
tscv = TimeSeriesSplit(n_splits=10)
cv = cross_validate(model, X, y, scoring='f1_macro', cv=tscv)
# Join scores and calculate the mean
scores = ' + '.join(f'{s:.2f}' for s in cv["test_score"])
mean_ = cv["test_score"].mean()
# Message formatting for classification model output
msg = f'Cross-validated F1 score: ({scores}) / 10 = {mean_:.2f}'
if print_model:
msg = f'{model}:\n\t{msg}\n'
print(msg)
# Inner loops
# Logistic Regression inner loop
LR_grid = GridSearchCV(LogisticRegression(random_state=0),
param_grid={'C': [10, 100]})
CrossValidate(LR_grid, X, y, print_model=False)
LR_grid.fit(X, y)
print('The best Parameters for Logistic Regression are:', LR_grid.best_params_)
# Random Forest inner loop
RFC_grid = GridSearchCV(RandomForestClassifier(random_state=0),
param_grid={'n_estimators': [2, 3],
'max_depth': [3, 5]})
CrossValidate(RFC_grid, X, y, print_model=False)
RFC_grid.fit(X, y)
print('The best Parameters for Random Forest are:', RFC_grid.best_params_)
</code></pre>
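<p>For illustration, a minimal sketch of the usual way to fold SMOTE-Tomek into the inner loop, assuming <code>X</code> and <code>y</code> as built above (with <code>y</code> flattened to 1-D): use an <code>imblearn</code> <code>Pipeline</code>, which applies the resampler only while fitting on each training fold and leaves validation folds untouched, and wrap the grid search in an outer <code>TimeSeriesSplit</code> for the nested evaluation. The parameter grids are placeholders.</p>
<pre class="lang-py prettyprint-override"><code>from imblearn.pipeline import Pipeline
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit, cross_validate

y = y.ravel()  # GridSearchCV expects a 1-D target

pipe = Pipeline([
    ("resample", SMOTETomek(random_state=0)),
    ("clf", RandomForestClassifier(random_state=0)),
])

inner_cv = TimeSeriesSplit(n_splits=5)
outer_cv = TimeSeriesSplit(n_splits=5)

grid = GridSearchCV(pipe,
                    param_grid={"clf__n_estimators": [50, 100],
                                "clf__max_depth": [3, 5]},
                    cv=inner_cv, scoring="f1_macro")

nested = cross_validate(grid, X, y, cv=outer_cv, scoring="f1_macro")
print(nested["test_score"])
</code></pre>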
|
<python><cross-validation><imblearn>
|
2025-01-05 03:06:44
| 0
| 395
|
Rebecca James
|
79,330,009
| 343,215
|
Pipenv not working after Debian system upgrade--maybe partial uninstall?
|
<p>Seems <code>pipenv</code> was uninstalled (not intentionally) during a system update and some config directories were left behind causing it to not run properly? If I try to run pipenv, I get:</p>
<pre><code>> $ pipenv
Traceback (most recent call last):
File "/home/hostname/username/.local/bin/pipenv", line 5, in <module>
from pipenv import cli
ModuleNotFoundError: No module named 'pipenv'
</code></pre>
<p>--</p>
<p>I have a project Django website running on a Debian box at home. After updating Debian to the latest stable release (bookworm), I'm gearing up to update Python and Django. The NGINX version of the project is running fine, but I can't get <code>pipenv</code> to work in the dev version. (see error above)</p>
<p>I found an issue ticket with the same problem. This user resolved the issue by uninstalling and reinstalling: <a href="https://github.com/pypa/pipenv/issues/5609" rel="nofollow noreferrer">I can't use pipenv after upgrading my Linux Mint to the latest version (Vera). #5609</a></p>
<p>Strangely, I can't find an installation of pipenv on this box post-update (no clue what I could have done). First I checked pip, as it's the <a href="https://pipenv.pypa.io/en/latest/installation.html#preferred-installation-of-pipenv" rel="nofollow noreferrer">"preferred" install method</a>. I searched <code>pip list</code>; there's nothing.</p>
<pre><code>> $ pip list | grep pipenv
</code></pre>
<p>If I try uninstalling, there's nothing:</p>
<pre><code>> $ python -m pip uninstall pipenv
error: externally-managed-environment
[...]
</code></pre>
<p>Debian also has versions of some python packages. The proper name to search for is confirmed with apt search:</p>
<pre><code>> $ sudo apt search pipenv
Sorting... Done
Full Text Search... Done
pipenv/stable 2022.12.19+ds-1 all
Python package manager based on virtualenv and Pipfiles
</code></pre>
<p>Checking installed packages I get that strange Traceback error, but clearly module not found:</p>
<pre><code>> $ sudo apt list --installed | pipenv
Traceback (most recent call last):
File "/home/hostname/username/.local/bin/pipenv", line 5, in <module>
from pipenv import cli
ModuleNotFoundError: No module named 'pipenv'
</code></pre>
<p>One more try:</p>
<pre><code>> $ sudo apt --purge remove pipenv
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package 'pipenv' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
</code></pre>
<p>Evidently there's nothing to uninstall, so I try to reinstall it with pipx:</p>
<pre><code>> $ pipx install pipenv
⚠️ File exists at /home/hostname/username/.local/bin/pipenv and points to /home/hostname/username/.local/bin/pipenv,
not /home/hostname/username/.local/pipx/venvs/pipenv/bin/pipenv. Not modifying.
⚠️ File exists at /home/hostname/username/.local/bin/pipenv-resolver and points to
/home/hostname/username/.local/bin/pipenv-resolver, not
/home/hostname/username/.local/pipx/venvs/pipenv/bin/pipenv-resolver. Not modifying.
installed package pipenv 2024.4.0, installed using Python 3.11.2
- pipenv (symlink missing or pointing to unexpected location)
- pipenv-resolver (symlink missing or pointing to unexpected location)
done! ✨ 🌟 ✨
</code></pre>
<p>It will install (<code>done</code>), but it still won't work right.
I'm guessing the Traceback is showing me that pipenv is <strong>configured</strong>, but <strong>not installed</strong>?</p>
<p>What's the solution?</p>
<p><strong>Should I start by deleting these files the pipx install script complained about in ~/.local?</strong></p>
<pre><code>/home/hostname/username/.local/bin/pipenv
/home/hostname/username/.local/bin/pipenv-resolver
</code></pre>
<p>PS. This whole machine is my personal mirror of a project machine at work. That other machine has pipenv installed <em>both</em> with pip and with apt.</p>
|
<python><pipenv-install>
|
2025-01-05 01:52:24
| 0
| 2,967
|
xtian
|
79,329,985
| 9,388,056
|
DeltaTable map type
|
<p>Using Spark, I can create a delta table with a map column type: <code>MAP<STRING, TIMESTAMP></code><br />
How do I create a delta table with a map type without Spark?<br />
I have tried multiple approaches and none of them are working.</p>
<pre><code>import pyarrow as pa
from deltalake import write_deltalake
# Create a sample Arrow Table with a map type
data = {
"id": pa.array([1, 2, 3]),
"name": pa.array(["Alice", "Bob", "Charlie"]),
"attributes": pa.array([
pa.array([("age", 30)], type=pa.map_(pa.string(), pa.int32())),
pa.array([("age", 25)], type=pa.map_(pa.string(), pa.int32())),
pa.array([("age", 35)], type=pa.map_(pa.string(), pa.int32())),
])
}
# Create an Arrow Table
table = pa.Table.from_pydict(data)
# Define the path where the Delta table will be stored
delta_table_path = "./tmp/delta_map"
# Write the Arrow Table to a Delta table
write_deltalake(delta_table_path, data=table, mode="overwrite")
</code></pre>
<p>pyarrow throws: <code>pyarrow.lib.ArrowTypeError: Could not convert 'a' with type str: was expecting tuple of (key, value) pair</code></p>
<pre><code>from deltalake import Schema, Field, DeltaTable, WriterProperties, write_deltalake
from deltalake.schema import PrimitiveType, MapType
# Define the schema for the Delta table
schema = Schema([
Field("id",PrimitiveType("string")),
Field("data", MapType("integer", "string", value_contains_null=False))
])
# Create a list of data to write to the Delta table
data = [
{"id": "1", "data": {"key1": "value1", "key2": "value2"}},
{"id": "2", "data": {"key3": "value3", "key4": "value4"}}
]
# Create a Delta table
delta_table = write_deltalake(table_or_uri="./tmp/delta_map", data=data,
schema=schema,mode="append",
writer_properties=WriterProperties(compression="ZSTD")
)
# Write the data to the Delta table
delta_table.write_data(data)
</code></pre>
<p>deltalake throws: <code>NotImplementedError: ArrowSchemaConversionMode.passthrough is not implemented to work with DeltaSchema, skip passing a schema or pass an arrow schema.</code>
Thx</p>
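<p>Regarding the first error above: <code>pyarrow</code> builds a map column from a list of <code>(key, value)</code> tuples per row, so the rows can be passed directly to a single <code>pa.array(...)</code> call with an explicit map type rather than nesting pre-built arrays inside another <code>pa.array</code>. A minimal sketch of just the Arrow side (whether <code>write_deltalake</code> then accepts the map column depends on the <code>deltalake</code> version and writer engine):</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa

attributes = pa.array(
    [[("age", 30)], [("age", 25)], [("age", 35)]],   # one list of (key, value) pairs per row
    type=pa.map_(pa.string(), pa.int32()),
)

table = pa.Table.from_arrays(
    [pa.array([1, 2, 3]), pa.array(["Alice", "Bob", "Charlie"]), attributes],
    names=["id", "name", "attributes"],
)
print(table.schema)
</code></pre>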
|
<python><python-polars><delta-lake><delta>
|
2025-01-05 01:28:03
| 1
| 620
|
Frank
|
79,329,672
| 12,357,696
|
Sympy: Define custom derivative on symbol
|
<p>I'm trying to create a custom symbol in sympy that behaves like r = √(x<sup>2</sup> + y<sup>2</sup> + z<sup>2</sup>) -- specifically, ∂r/∂x = x/r:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Symbol, symbols
x, y, z = symbols("x y z")
class _r(Symbol):
def __new__(self):
r = super().__new__(self, "r")
return r
def diff(self, var):
assert var in [x, y, z]
return 1 / self * var
r = _r()
</code></pre>
<p>The first derivative works as intended:</p>
<pre class="lang-py prettyprint-override"><code>>>> r.diff(x)
x/r
</code></pre>
<p>The second derivative should return -x<sup>2</sup>/r<sup>3</sup> + 1/r, but sympy doesn't differentiate the <code>r</code> symbol again:</p>
<pre><code>>>> r.diff(x).diff(x)
1/r
</code></pre>
<p>I inspected the intermediate result and found the following:</p>
<pre class="lang-py prettyprint-override"><code>>>> d = r.diff(x)
>>> type(d)
sympy.core.mul.Mul
>>> d.free_symbols
{r, x}
>>> for s in d.free_symbols:
>>> print(s, type(s))
r <class '__main__._r'>
x <class 'sympy.core.symbol.Symbol'>
</code></pre>
<p>So it <em>does</em> still recognize <code>r</code> as my custom object, but somehow that gets lost when calling <code>.diff</code> of the resulting <code>Mul</code> object.</p>
<p>How do I get this to work? Am I even on the right track by subclassing <code>Symbol</code>?</p>
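<p>For comparison, here is a minimal sketch of the <code>Function</code>-subclass route, which sidesteps the problem because the chain rule is then applied automatically: <code>fdiff</code> defines the derivative of the undefined function with respect to each of its arguments, and repeated differentiation works as expected. Note that this makes <code>r</code> print as <code>r(x, y, z)</code> rather than as a bare symbol:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Function, symbols

x, y, z = symbols("x y z")

class r(Function):
    def fdiff(self, argindex=1):
        # d r / d arg_i = arg_i / r
        return self.args[argindex - 1] / self

R = r(x, y, z)
print(R.diff(x))     # x/r(x, y, z)
print(R.diff(x, 2))  # 1/r(x, y, z) - x**2/r(x, y, z)**3  (possibly ordered differently)
</code></pre>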
|
<python><sympy>
|
2025-01-04 21:52:55
| 1
| 752
|
Antimon
|
79,329,653
| 6,395,618
|
How to sub-class LangGraph's MessageState or use Pydantic for channel separation
|
<p>I am trying to create a Hierarchical LLM Agent workflow using <code>LangGraph</code>. The workflow is intended to be setup where the <code>research_team</code> conducts the research and the <code>writing_team</code> writes the report. The entire workflow should be executed for each sub-section of the main report request. Below is the image of the setup:</p>
<p><a href="https://i.sstatic.net/OlO7Aih1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlO7Aih1.png" alt="Hierarchical Agentic Workflow" /></a></p>
<p>Each <strong>Research Team</strong> and the <strong>Writing Team</strong> follows the same structure, with a <code>Team Supervisor</code> node and <code>Worker</code> nodes. I want the whole graph to execute in a way that both teams communicate on the same channels (<code>states</code>) while researching and writing the sub-sections of the report. This requires the <code>Team Supervisor</code> LLM to produce <code>structured output</code> to feed the worker <code>Prompts</code>, and the workers to update the channel <code>States</code> after completing execution. All the resources out there use <code>MessageState</code> or some kind of <code>Pydantic</code> or <code>TypedDict</code> approach, but I can't find a good explanation of how the whole graph communicates on the same channels while working on a particular sub-section. I'd much appreciate your help in figuring this out. Below is the code representing what I'm trying to do:</p>
<p><strong>Supervisor Node Function:</strong></p>
<pre><code>class SupervisorInput(MessagesState):
"""User request."""
main_topic: Annotated[str, ..., "The main topic of the request"]
section_topic: Annotated[Optional[str], "Sub-section topic of the main topic"]
section_content: Annotated[Optional[str], "Sub-section topic content"]
def make_supervisor_node(llm: BaseChatModel, system_prompt: str | None, members: List[str]) -> str:
options = ["FINISH"] + members
if system_prompt is None:
system_prompt = (
"You are a supervisor tasked with managing a conversation between the"
f" following teams: {members}. Given the user request,"
" respond with the team to act next. Each team will perform a"
" task and respond with their results and status. You should verify"
" the task performed by the teams to ensure it statisfies user request."
" When finished, respond with FINISH."
)
class SupervisorAction(TypedDict):
"""Supervisor action."""
# main_topic: SupervisorInput
section_topic: Annotated[str, "Sub-section topic of the main topic"]
section_search_query: Annotated[Optional[str], "Search query for the sub-section topic"]
next: Literal[*options]
def supervisor_node(state: SupervisorInput) -> Command[Literal[*members, "__end__"]]:
"""An LLM-based decision maker."""
# print(f"Supervisor Node State: {state}")
messages = [
{"role": "system", "content": system_prompt},
] + state["messages"]
response = llm.with_structured_output(SupervisorAction).invoke(messages)
print(f"Supervisor reponse: {response}")
goto = response["next"]
print(f"Going to node: {goto}")
if goto == "FINISH":
goto = END
return Command(goto=goto)
return supervisor_node
</code></pre>
<p><strong>Research Team Graph:</strong></p>
<pre><code>## Define tools
research_tools = [TavilySearchResults(max_results=5), PubmedQueryRun(), SemanticScholarQueryRun()]
## Define LLM
research_llm=ChatOpenAI(model="gpt-4o-mini", temperature=0)
tavily_agent = create_react_agent(research_llm, tools=research_tools)
def tavily_node(state: SupervisorInput) -> Command[Literal["supervisor"]]:
result = tavily_agent.invoke(state)
return Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="tavily")
]
},
# We want our workers to ALWAYS "report back" to the supervisor when done
goto="supervisor",
)
research_supervisor_prompt = ''.join(open("./prompts/research_supervisor_prompt.txt","r").readlines())
# print(research_supervisor_prompt)
research_supervisor_node = make_supervisor_node(research_llm, research_supervisor_prompt,
["tavily"])
## Define Research Team
research_team = StateGraph(SupervisorInput)
research_team.add_node("supervisor", research_supervisor_node)
research_team.add_node("tavily", tavily_node)
research_team.add_edge(START, "supervisor")
research_graph = research_team.compile()
</code></pre>
<p>The above code works, but the LLM output is disjointed: it hits <code>__end__</code> without completing the research for all the sub-sections. Also, this code just keeps appending to the list of <code>messages</code>, which doesn't seem to work effectively; hence I want separate <strong>channels</strong> (<code>states</code>) for updating <code>section_topic</code> and <code>section_content</code> that the <code>research team</code> and <code>writing team</code> can collaborate on while researching and writing. So the question probably is: <strong>how do I sub-class <code>MessageState</code> to have separate channels for communication and keep updating them while working on a task?</strong></p>
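<p>For context, a minimal sketch of the state sub-classing part I have in mind (the field names are placeholders): <code>MessagesState</code> is a <code>TypedDict</code> whose <code>messages</code> channel appends, so extra last-value channels can be added by sub-classing it, and nodes can then update them through <code>Command(update=...)</code> just like <code>messages</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
from langgraph.graph import MessagesState

class ReportState(MessagesState):
    # Shared channels for both teams; plain fields keep only the latest value.
    main_topic: str
    section_topic: Optional[str]
    section_content: Optional[str]
</code></pre>
<p>A node would then return, for example, <code>Command(update={"section_content": draft, "messages": [...]}, goto="supervisor")</code>, and both sub-graphs would be built over the same <code>ReportState</code> so they read and write the same channels.</p>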
|
<python><openai-api><langchain><large-language-model><langgraph>
|
2025-01-04 21:37:58
| 0
| 2,606
|
Krishnang K Dalal
|
79,329,522
| 2,929,914
|
Find nearest following row with values greater than or equal to current row
|
<p>Starting with this DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df_1 = pl.DataFrame({
'name': ['Alpha', 'Alpha', 'Alpha', 'Alpha', 'Alpha'],
'index': [0, 3, 4, 7, 9],
'limit': [12, 18, 11, 5, 9],
'price': [10, 15, 12, 8, 11]
})
</code></pre>
<p>I need to add a new column ("min_index") to tell me at which index (greater than the current one) the price is equal or higher than the current limit.</p>
<p>With this example above, the expected output is:</p>
<pre><code>┌───────┬───────┬───────┬───────┬───────────┐
│ name ┆ index ┆ limit ┆ price ┆ min_index │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═══════╪═══════╪═══════╪═══════╪═══════════╡
│ Alpha ┆ 0 ┆ 12 ┆ 10 ┆ 3 │
│ Alpha ┆ 3 ┆ 18 ┆ 15 ┆ null │
│ Alpha ┆ 4 ┆ 11 ┆ 12 ┆ 9 │
│ Alpha ┆ 7 ┆ 5 ┆ 8 ┆ 9 │
│ Alpha ┆ 9 ┆ 9 ┆ 11 ┆ null │
└───────┴───────┴───────┴───────┴───────────┘
</code></pre>
<p>Explaining the "min_index" column results:</p>
<ul>
<li>1st row, where the limit is 12: from the 2nd row onwards, the minimum index whose price is equal or greater than 12 is 3.</li>
<li>2nd row, where the limit is 18: from the 3rd row onwards, there is no index whose price is equal or greater than 18.</li>
<li>3rd row, where the limit is 11: from the 4th row onwards, the minimum index whose price is equal or greater than 11 is 9.</li>
<li>4th row, where the limit is 5: from the 5th row onwards, the minimum index whose price is equal or greater than 5 is 9.</li>
<li>5th row, where the limit is 9: as this is the last row, there is no further index whose price is equal or greater than 9.</li>
</ul>
<p>My solution is shown below - but what would be a neat Polars way of doing it? I was able to solve it in 8 steps, but I'm sure there is a more effective way of doing it.</p>
<pre class="lang-py prettyprint-override"><code># Import Polars.
import polars as pl
# Create a sample DataFrame.
df_1 = pl.DataFrame({
'name': ['Alpha', 'Alpha', 'Alpha', 'Alpha', 'Alpha'],
'index': [0, 3, 4, 7, 9],
'limit': [12, 18, 11, 5, 9],
'price': [10, 15, 12, 8, 11]
})
# Group by name, so that we can vertically stack all row's values into a single list.
df_2 = df_1.group_by('name').agg(pl.all())
# Put the lists with the original DataFrame.
df_3 = df_1.join(
other=df_2,
on='name',
suffix='_list'
)
# Explode the dataframe to long format by exploding the given columns.
df_3 = df_3.explode([
'index_list',
'limit_list',
'price_list',
])
# Filter the DataFrame for the condition we want.
df_3 = df_3.filter(
(pl.col('index_list') > pl.col('index')) &
(pl.col('price_list') >= pl.col('limit'))
)
# Get the minimum index over the index column.
df_3 = df_3.with_columns(
pl.col('index_list').min().over('index').alias('min_index')
)
# Select only the relevant columns and drop duplicates.
df_3 = df_3.select(
pl.col(['index', 'min_index'])
).unique()
# Finally join the result.
df_final = df_1.join(
other=df_3,
on='index',
how='left'
)
print(df_final)
</code></pre>
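<p>For illustration, a more compact sketch of the same idea, continuing from <code>df_1</code> above and assuming a recent Polars release (≥ 1.7) that provides <code>join_where</code> for inequality joins; the renamed helper columns (<code>idx2</code>, <code>price2</code>) are only there to avoid name collisions:</p>
<pre class="lang-py prettyprint-override"><code>candidates = df_1.join_where(
    df_1.select(pl.col("index").alias("idx2"), pl.col("price").alias("price2")),
    pl.col("idx2") > pl.col("index"),
    pl.col("price2") >= pl.col("limit"),
)

df_final = (
    df_1.join(
        candidates.group_by("index").agg(pl.col("idx2").min().alias("min_index")),
        on="index",
        how="left",
    )
    .sort("index")
)
print(df_final)
</code></pre>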
|
<python><dataframe><python-polars>
|
2025-01-04 19:58:52
| 1
| 705
|
Danilo Setton
|
79,329,444
| 7,709,727
|
How to explicitly define a function's domain in sympy
|
<p>In sympy, I want to define a piecewise function within [1, 3], and I want to explicitly disallow values out of this range.</p>
<p>My current code is as below. I am using <code>nan</code> to denote values out of the range.</p>
<pre><code>from sympy import *
x = symbols('x')
f = Piecewise(
(x * 2 - 3, (x <= 2) & (x >= 1)),
(5 - x * 2, (x <= 3) & (x >= 2)),
(nan, True),
)
</code></pre>
<p>However, I am encountering two problems. First, I cannot use <code>f</code> in the condition of another piecewise function, because sympy cannot compare nan with 0.</p>
<pre><code>>>> g = Piecewise((2, f < 0), (3, True))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/sympy/functions/elementary/piecewise.py", line 136, in __new__
pair = ExprCondPair(*getattr(ec, 'args', ec))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/functions/elementary/piecewise.py", line 28, in __new__
cond = piecewise_fold(cond)
^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/functions/elementary/piecewise.py", line 1156, in piecewise_fold
new_args.append((expr.func(*e), And(*c)))
^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/core/relational.py", line 836, in __new__
raise TypeError("Invalid NaN comparison")
TypeError: Invalid NaN comparison
>>>
</code></pre>
<p>Second, I cannot plot <code>f</code>:</p>
<pre><code>>>> plot(f)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 1873, in plot
plots.show()
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 251, in show
self._backend.show()
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 1549, in show
self.process_series()
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 1546, in process_series
self._process_series(series, ax, parent)
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 1367, in _process_series
x, y = s.get_data()
^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 605, in get_data
points = self.get_points()
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/plotting/plot.py", line 779, in get_points
f_start = f(self.start)
^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/plotting/experimental_lambdify.py", line 176, in __call__
result = complex(self.lambda_func(args))
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sympy/plotting/experimental_lambdify.py", line 272, in __call__
return self.lambda_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <lambda>
NameError: name 'nan' is not defined. Did you mean: 'NaN'?
>>>
</code></pre>
<p>Is there a sympy native way to specify the domain of <code>f</code>? Or do I have to track the domain manually?</p>
|
<python><math><sympy>
|
2025-01-04 19:11:20
| 1
| 1,570
|
Eric Stdlib
|
79,329,385
| 123,070
|
Converting Document Docx with Comments to markit using markitdown
|
<p>There is a new open-source Python library from Microsoft, markitdown: <a href="https://github.com/microsoft/markitdown" rel="nofollow noreferrer">https://github.com/microsoft/markitdown</a>.
It basically works fine on my .docx documents (if anyone uses it, make sure you run it on Python 3.10 or higher; on 3.9 it won't work).</p>
<p>However, it fails to convert any comments from the document.
Does anyone know if there is an option to include comments in the results?</p>
<pre><code>from markitdown import MarkItDown
md = MarkItDown()
result = md.convert("my_doc.docx")
with open("my_doc.md", "w", encoding='utf-8') as file:
file.write(result.text_content)
</code></pre>
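<p>I don't know of a markitdown option for this, but as a stop-gap the comments can be pulled straight out of the .docx (which is a zip archive, with comments stored in <code>word/comments.xml</code>) and appended to the Markdown output. A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>import zipfile
from xml.etree import ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

comments = []
with zipfile.ZipFile("my_doc.docx") as z:
    if "word/comments.xml" in z.namelist():
        root = ET.fromstring(z.read("word/comments.xml"))
        for c in root.findall(f"{{{W}}}comment"):
            author = c.get(f"{{{W}}}author", "")
            text = "".join(t.text or "" for t in c.iter(f"{{{W}}}t"))
            comments.append(f"> **{author}**: {text}")

with open("my_doc.md", "a", encoding="utf-8") as file:
    file.write("\n\n## Comments\n" + "\n".join(comments))
</code></pre>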
|
<python><markdown><docx><document-conversion><markitdown>
|
2025-01-04 18:24:23
| 1
| 3,356
|
Bogdan_Ch
|
79,329,171
| 1,142,881
|
Can't run pytest with a `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-env-3fagdb36/normal/bin/ninja' `
|
<p>I have tried many things, like upgrading pip and pytest, but nothing works. I have a fork of <code>scikit-learn</code>, am working on some new functionality, and would like to run all the unit tests, but I can't:</p>
<pre><code>(myenv) xxx@Thor:~/code/scikit-learn$ pytest
ImportError while loading conftest '/home/xxx/code/scikit-learn/sklearn/conftest.py'.
/opt/dev/myenv/lib/python3.9/site-packages/_scikit_learn_editable_loader.py:311: in find_spec
tree = self._rebuild()
/opt/dev/myenv/lib/python3.9/site-packages/_scikit_learn_editable_loader.py:345: in _rebuild
subprocess.run(self._build_cmd, cwd=self._build_path, env=env, stdout=subprocess.DEVNULL, check=True)
/opt/dev/miniconda3/lib/python3.9/subprocess.py:505: in run
with Popen(*popenargs, **kwargs) as process:
/opt/dev/miniconda3/lib/python3.9/subprocess.py:951: in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
/opt/dev/miniconda3/lib/python3.9/subprocess.py:1821: in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-env-3fagdb36/normal/bin/ninja'
</code></pre>
<p>I have attempted the following (existing SO answers) but none work:</p>
<pre><code>sudo apt update
sudo apt install -y ninja-build
</code></pre>
<p>and then also:</p>
<pre><code>CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
</code></pre>
|
<python><scikit-learn><pytest><runtime-error>
|
2025-01-04 16:19:50
| 0
| 14,469
|
SkyWalker
|
79,329,141
| 524,587
|
Time different methods to recurse through directories in Linux
|
<p>I wanted to find out the most efficient method to recursively count subdirectories and files, and came up with the below tests. Some seem to work, but the results are inconsistent.</p>
<ul>
<li>How can I fix the inconsistency where every method reports a different number of directories and a different number of files (each test should report precisely the same counts as every other test)?</li>
<li>How can I fix the broken tests, as shown in the output below?</li>
<li>Are there better / faster / more efficient techniques than the below?</li>
</ul>
<p><em>I guess this post straddles the line between StackOverflow and SuperUser, but it does relate to a script, so I guess this is the right place.</em></p>
<pre><code>#!/bin/bash
# Default to home directory if no argument is provided
dir="${1:-$HOME}"
echo "Analyzing directories and files in: $dir"
echo
# Function to time and run a command, and print the count
time_command() {
local description="$1"
local command="$2"
echo "$description"
echo "Running: $command"
start_time=$(date +%s.%N)
result=$(eval "$command")
end_time=$(date +%s.%N)
duration=$(echo "$end_time - $start_time" | bc)
echo "Count: $result"
echo "Time: $duration seconds"
}
# Methods to count directories
dir_methods=(
"Directory Method 1 (find): find '$dir' -type d | wc -l"
"Directory Method 2 (tree): tree -d '$dir' | tail -n 1 | awk '{print \$3}'"
"Directory Method 3 (du): echo 'deprecated: usually around double length of ''find'' command'"
"Directory Method 4 (ls): ls -lR '$dir' | grep '^d' | wc -l"
"Directory Method 5 (bash loop): count=0; for d in \$(find '$dir' -type d); do count=\$((count + 1)); done; echo \$count"
"Directory Method 6 (perl): perl -MFile::Find -le 'find(sub { \$File::Find::dir =~ /\\/ and \$n++ }, \"$dir\"); print \$n'"
"Directory Method 7 (python): python3 -c 'import os; print(sum([len(dirs) for _, dirs, _ in os.walk(\"$dir\")]))'"
)
# Methods to count files
file_methods=(
"File Method 1 (find): find '$dir' -type f | wc -l"
"File Method 2 (tree): tree -fi '$dir' | grep -E '^[├└─] ' | wc -l"
"File Method 3 (ls): ls -lR '$dir' | grep -v '^d' | wc -l"
"File Method 4 (bash loop): count=0; for f in \$(find '$dir' -type f); do count=\$((count + 1)); done; echo \$count"
"File Method 5 (perl): perl -MFile::Find -le 'find(sub { -f and \$n++ }, \"$dir\"); print \$n'"
"File Method 6 (python): python3 -c 'import os; print(sum([len(files) for _, _, files in os.walk(\"$dir\")]))'"
)
# Run and time each directory counting method
echo "Counting directories..."
echo
for method in "${dir_methods[@]}"; do
description="${method%%:*}"
command="${method#*: }"
if [[ "$description" == *"(du)"* ]]; then
echo "$description"
echo "Running: $command"
eval "$command"
else
time_command "$description" "$command"
fi
echo
done
# Run and time each file counting method
echo "Counting files..."
echo
for method in "${file_methods[@]}"; do
description="${method%%:*}"
command="${method#*: }"
time_command "$description" "$command"
echo
done
</code></pre>
<p>Below is a run of the above. As you can see, the number of directories and files found is different in every case(!), and some of the tests are clearly broken so it would be good to know how to fix those.</p>
<pre class="lang-none prettyprint-override"><code>Analyzing directories and files in: /home/boss
Counting directories...
Directory Method 1 (find)
Running: find '/home/boss' -type d | wc -l
Count: 598844
Time: 11.949245266 seconds
Directory Method 2 (tree)
Running: tree -d '/home/boss' | tail -n 1 | awk '{print $3}'
Count:
Time: 2.776698115 seconds
Directory Method 3 (du)
Running: echo 'deprecated: usually around double length of ''find'' command'
deprecated: usually around double length of find command
Directory Method 4 (ls)
Running: ls -lR '/home/boss' | grep '^d' | wc -l
Count: 64799
Time: 6.522804741 seconds
Directory Method 5 (bash loop)
Running: count=0; for d in $(find '/home/boss' -type d); do count=$((count + 1)); done; echo $count
Count: 604654
Time: 14.693009738 seconds
Directory Method 6 (perl)
Running: perl -MFile::Find -le 'find(sub { $File::Find::dir =~ /\/ and $n++ }, "/home/boss"); print $n'
String found where operator expected (Missing semicolon on previous line?) at -e line 1, at end of line
Unknown regexp modifier "/h" at -e line 1, at end of line
Unknown regexp modifier "/e" at -e line 1, at end of line
Can't find string terminator '"' anywhere before EOF at -e line 1.
Count:
Time: .019156779 seconds
Directory Method 7 (python)
Running: python3 -c 'import os; print(sum([len(dirs) for _, dirs, _ in os.walk("/home/boss")]))'
Count: 599971
Time: 15.013263266 seconds
Counting files...
File Method 1 (find)
Running: find '/home/boss' -type f | wc -l
Count: 5184830
Time: 13.066028457 seconds
File Method 2 (tree)
Running: tree -fi '/home/boss' | grep -E '^[├└─] ' | wc -l
Count: 0
Time: 8.431054237 seconds
File Method 3 (ls)
Running: ls -lR '/home/boss' | grep -v '^d' | wc -l
Count: 767236
Time: 6.593778380 seconds
File Method 4 (bash loop)
Running: count=0; for f in $(find '/home/boss' -type f); do count=$((count + 1)); done; echo $count
Count: 5196437
Time: 40.861512698 seconds
File Method 5 (perl)
Running: perl -MFile::Find -le 'find(sub { -f and $n++ }, "/home/boss"); print $n'
Count: 5186461
Time: 54.353541730 seconds
File Method 6 (python)
Running: python3 -c 'import os; print(sum([len(files) for _, _, files in os.walk("/home/boss")]))'
Count: 5187084
Time: 14.910791357 seconds
</code></pre>
|
<python><bash><recursion>
|
2025-01-04 15:59:31
| 2
| 4,228
|
YorSubs
|
79,329,131
| 14,896,203
|
Assist LSP with dynamically imported methods
|
<p>I have a directory structure with multiple subdirectories, each containing Python files (.py) that define classes. These classes need to be dynamically imported and attached as methods to a parent class. The parent class should have subclasses corresponding to each subdirectory, and the subclasses should contain the methods derived from the Python files within those subdirectories.</p>
<pre class="lang-py prettyprint-override"><code> def __generate_validation_attributes__(self) -> None:
_dir = Path(__file__).parent
# Get list of subdirectories in _dir
subdirectories = [d for d in _dir.iterdir() if d.is_dir()]
for subdir in subdirectories:
subclass_name = subdir.name
subclass = type(subclass_name, (), {})
subclass.__doc__ = f"subdirectory {subclass_name}"
# List of Python files in the subdirectory, excluding __init__.py
py_files = [f for f in subdir.glob("*.py") if f.name != "__init__.py"]
for py_file in py_files:
# Get module name including package
module_relative_path = py_file.relative_to(_dir)
module_name = ".".join(module_relative_path.with_suffix("").parts)
# Import the module from the file path
spec = importlib.util.spec_from_file_location(
module_name,
py_file,
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # execute the module so its classes are defined
# Find classes defined in the module
classes_in_module = inspect.getmembers(
module,
lambda member: inspect.isclass(member)
and member.__name__.lower() == py_file.stem.replace("_", "").lower(),
)
for _, class_obj in classes_in_module:
# Attach the method to the subclass
self.validation_classes[class_obj.__name__] = class_obj
setattr(
subclass,
class_obj.__name__,
self.__make_method__(class_obj),
)
# Attach the subclass to the parent class
setattr(self, subclass_name, subclass())
def __make_method__(self, class_obj: type) -> Callable[..., Validate]:
def validation_method(*args, **kwargs) -> Validate:
return self.__create_validation_class__(
class_obj,
*args,
**kwargs,
)
validation_method.__name__ = class_obj.__name__
validation_method.__doc__ = class_obj.__doc__
return validation_method
</code></pre>
<p>The provided code snippet achieves this functionality by iterating over the subdirectories, creating subclasses dynamically, and attaching the methods from the Python files to the respective subclasses. The code works as expected, allowing the subclasses and methods to be accessed using the desired syntax:</p>
<pre><code>cls = MyClass()
cls.subdirectory.ChildClass("bla")
</code></pre>
<p>The issue arises when relying on Language Server Protocol (LSP) autocompletion to provide suggestions for the subclasses (subdirectory) and methods (ChildClass). One possible solution is to create stub files (.pyi) to provide type hints and enable autocompletion. However, creating stub files manually can be a tedious task and would require copying and pasting docstrings.</p>
<p>Is there a better option to achieve the desired result?</p>
|
<python><pyright>
|
2025-01-04 15:52:47
| 0
| 772
|
Akmal Soliev
|
79,329,029
| 7,462,275
|
Ipywidgets : How to get the list of comm opened in a jupyter notebook?
|
<p>I created, in a Jupyter Notebook, a GUI where many ipywidgets are created, displayed or not, and closed depending on user choices, the file opened, and so on. But I am not sure that I have not left some <code>comms</code> (<a href="https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Low%20Level.html#comms" rel="nofollow noreferrer">https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Low%20Level.html#comms</a>) open.
For example, in this simple code:</p>
<pre><code>import ipywidgets as wg

my_list = [1,2,3]
my_HBox = wg.HBox()
my_HBox.children = ( [ wg.Checkbox(value=True, description=f'{i}') for i in my_list ] )
display(my_HBox)
</code></pre>
<p>If <code>my_HBox</code> is closed (<code>my_HBox.close()</code>), the <code>comms</code> for the three checkboxes still exist, even though they are not displayed because <code>my_HBox</code> has been closed. Indeed, they could still be displayed and used, for example with <code>display(my_HBox.children[1])</code>.</p>
<p>I read carefully <a href="https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Low%20Level.html" rel="nofollow noreferrer">https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Low%20Level.html</a>. So, I understand what happens in the example above.</p>
<p>But in the GUI I designed, I am not sure that I close all the children of children of children... Because of the GUI's complexity, I could perhaps have missed some of them.</p>
<p>So, I am looking for a function (a method or something else) to get the list of all <code>comms</code> opened in a Jupyter Notebook, that is, the list of <code>Widget</code> objects (in the Python kernel) and <code>WidgetModel</code> objects (in the front-end) associated via a <code>comm</code>.</p>
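<p>A minimal sketch of one way to audit this, assuming ipywidgets 7.x, where the class-level dict <code>Widget.widgets</code> maps model ids to every widget whose comm is still open (the attribute may be deprecated or behave differently in 8.x, so verify against your installed version):</p>
<pre><code>import ipywidgets as wg

def open_widgets():
    # Every widget with an open comm is registered here (model_id -> instance)
    return dict(wg.Widget.widgets)

before = set(open_widgets())
box = wg.HBox(children=[wg.Checkbox(description=str(i)) for i in [1, 2, 3]])
box.close()  # closes the HBox comm, but not the checkboxes' comms
leaked = {k: v for k, v in open_widgets().items() if k not in before}
print(leaked)  # the three Checkbox widgets should still appear here
</code></pre>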
|
<python><jupyter-notebook><ipython><ipywidgets>
|
2025-01-04 14:48:57
| 0
| 2,515
|
Stef1611
|
79,328,944
| 561,243
|
testing the output of seaborn figure level plots
|
<p>I'm writing a piece of code involving data displaying with seaborn. I would like to develop some testing units to test that the output of figure level plots are properly generated.</p>
<p>I can test that the figure file is created, but how can I check that the output is the correct one?
My idea would be to use a standard dataset and compare the output figure with a reference one. But how can I compare the two plots? I could calculate the checksum of the output figure and of the reference one, but is that test accurate?</p>
<p>Thanks for your help!</p>
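<p>A checksum is fragile because tiny rendering differences (matplotlib/freetype versions, DPI) change the bytes. A minimal sketch of a more tolerant comparison with <code>matplotlib.testing.compare.compare_images</code>, assuming a baseline image such as <code>tests/baseline/displot.png</code> is kept in the repository (the path and plot are placeholders); <code>pytest-mpl</code> automates the same idea:</p>
<pre><code>import seaborn as sns
from matplotlib.testing.compare import compare_images

def test_displot_matches_baseline(tmp_path):
    # Standard dataset -> deterministic figure, saved to a temporary file
    df = sns.load_dataset("penguins")
    g = sns.displot(df, x="flipper_length_mm")
    actual = tmp_path / "displot.png"
    g.savefig(actual)
    # compare_images returns None when the RMS difference is within `tol`,
    # otherwise a failure message describing the mismatch.
    assert compare_images("tests/baseline/displot.png", str(actual), tol=2) is None
</code></pre>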
|
<python><pytest><seaborn>
|
2025-01-04 13:49:31
| 1
| 367
|
toto
|
79,328,792
| 1,788,656
|
Wrap_lon of the regionmask does not work with data span from -180 to 180
|
<p>All,</p>
<p>I use the regionmask package (0.13.0) to mask climate NetCDF data. I found that if my data extends from -180 to 180, the mask function returns all NaN even after I set <code>wrap_lon=180</code>, and if I do not set <code>wrap_lon</code> at all I get the following error:</p>
<p><code>ValueError: lon has data that is larger than 180 and smaller than 0. Set `wrap_lon=False` to skip this check.</code></p>
<p>I found that <code>shp_file['geometry']</code> yields very large numbers, which may explain this error, yet I am not sure why the multipolygon coordinates are so large.</p>
<p><code>0 MULTIPOLYGON (((-1832380.592 2237164.258, -182 Name: geometry, dtype: geometry</code></p>
<p>update : I printed the <code>shp_file.crs</code> and I found that the CRS is EPSG:3857,</p>
<pre><code><Projected CRS: EPSG:3857>
Name: WGS 84 / Pseudo-Mercator
Axis Info [cartesian]:
- X[east]: Easting (metre)
- Y[north]: Northing (metre)
Area of Use:
- name: World between 85.06°S and 85.06°N.
- bounds: (-180.0, -85.06, 180.0, 85.06)
Coordinate Operation:
- name: Popular Visualisation Pseudo-Mercator
- method: Popular Visualisation Pseudo Mercator
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
</code></pre>
<p>yet when I tried to open the shape file using the CRS</p>
<pre><code>hp_file =gpd.read_file("datafiles/"+filename+'.shp',\
crs='EPSG:3857')
</code></pre>
<p>I got the following error.</p>
<pre><code>geo_env/lib/python3.12/site-packages/pyogrio/raw.py:198: RuntimeWarning: driver ESRI Shapefile does not support open option CRS
</code></pre>
<p>Here is the minimal example</p>
<pre><code>import xarray as xr
import geopandas as gpd
import regionmask
#%% opening the dataset
t2m_file = xr.open_dataset("datafiles/"+"temp.nc")
# adjusting longitude.
t2m_file.coords['longitude'] = (t2m_file.coords['longitude'] + 180) % 360 - 180
t2m_file = t2m_file.sortby(t2m_file.longitude)
t2m = t2m_file['t2m']
#%%
filename='North_Africa'
shp_file =gpd.read_file("datafiles/"+filename+'.shp')
shp_region=regionmask.Regions(shp_file.geometry)
shp_file.plot()
#%%
mask_region=shp_region.mask(t2m.longitude,t2m.latitude,wrap_lon=180)
# masked temperture of the raw data
tem_masked_region=t2m.where(mask_region == 0)
</code></pre>
<p>The shape files and the netcdf are very small and could be downloaded from the box
<a href="https://app.box.com/s/nyauxuuscbk0ws5firpmyjt3y51nrlr2" rel="nofollow noreferrer">https://app.box.com/s/nyauxuuscbk0ws5firpmyjt3y51nrlr2</a></p>
<p>Thanks</p>
|
<python><python-3.x><geopandas>
|
2025-01-04 12:02:32
| 1
| 725
|
Kernel
|
79,328,696
| 11,913,986
|
Azure Cognitive Vector search query and index creation
|
<p>I wanted to create an Azure Cognitive Search index to query a course catalogue using vectors. I have a pandas dataframe called <code>courses_pd</code> with two columns, 'content' and 'embeddings', where the embeddings were created using <code>model = SentenceTransformer('all-MiniLM-L6-v2')</code> and then <code>model.encode(x)</code>.</p>
<p>Below is the Python code snippet, run from an Azure Databricks notebook, which creates the index in ACS and uploads the documents.</p>
<pre><code>from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
SimpleField,
SearchFieldDataType,
SearchableField,
SearchField,
VectorSearch,
HnswAlgorithmConfiguration,
VectorSearchProfile,
SemanticConfiguration,
SemanticPrioritizedFields,
SemanticField,
SemanticSearch,
SearchIndex,
AzureOpenAIVectorizer,
AzureOpenAIVectorizerParameters
)
from azure.core.credentials import AzureKeyCredential

# Azure Cognitive Search setup
service_endpoint = "https://yourserviceendpoint.search.windows.net"
admin_key = "ABC"
index_name = "courses-index"
# Wrap admin_key in AzureKeyCredential
credential = AzureKeyCredential(admin_key)
# Create the index client with AzureKeyCredential
index_client = SearchIndexClient(endpoint=service_endpoint, credential=credential)
# Define the index schema
fields = [
SimpleField(name="id", type="Edm.String", key=True),
SimpleField(name="content", type="Edm.String"),
SearchField(
name="embedding",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=384,
vector_search_profile_name="myHnswProfile"
)
# SearchField(name="embedding", type='Collection(Edm.Single)', searchable=True)
]
# Configure the vector search configuration
vector_search = VectorSearch(
algorithms=[
HnswAlgorithmConfiguration(
name="myHnsw"
)
],
profiles=[
VectorSearchProfile(
name="myHnswProfile",
algorithm_configuration_name="myHnsw"
)
]
)
# Create the index
index = SearchIndex(
name=index_name,
fields=fields,
vector_search=vector_search
)
# Send the index creation request
index_client.create_index(index)
print(f"Index '{index_name}' created successfully.")
</code></pre>
<p>And then upload the document using the below code:</p>
<pre><code>from azure.search.documents import SearchClient
# Generate embeddings and upload data
search_client = SearchClient(endpoint=service_endpoint, index_name=index_name, credential=credential)
documents = []
for i, row in courses_pd.iterrows():
document = {
"id": str(i),
"content": row["content"],
"embedding": row["embeddings"] # Ensure embeddings are a list of floats
}
documents.append(document)
# Upload documents to the index
search_client.upload_documents(documents=documents)
print(f"Uploaded {len(documents)} documents to Azure Cognitive Search.")
</code></pre>
<p>Now, when I query the <code>search_client</code>, I get multiple errors. Whether I search using a raw string or after doing <code>model.encode(str)</code>, it returns an <code>azure.core.paging.ItemPaged</code> iterator object, but the search fails in the log.</p>
<pre><code>from azure.search.documents.models import VectorQuery
# Generate embedding for the query
query = "machine learning"
query_embedding = model.encode(query).tolist() # Convert to list of floats
# Create a VectorQuery
vector_query = VectorQuery(
vector=query_embedding,
k=3, # Number of nearest neighbors
fields="embedding" # Name of the field where embeddings are stored
)
# Perform the search
results = search_client.search(
vector_queries=[vector_query],
select=["id", "content"]
)
# Print the results
for result in results:
print(f"ID: {result['id']}, Content: {result['content']}")
</code></pre>
<p>The error then says:</p>
<pre><code>vector is not a known attribute of class <class 'azure.search.documents._generated.models._models_py3.VectorQuery'> and will be ignored
k is not a known attribute of class <class 'azure.search.documents._generated.models._models_py3.VectorQuery'> and will be ignored
HttpResponseError: (InvalidRequestParameter) The vector query's 'kind' parameter is not set.
</code></pre>
<p>I then tried providing <code>kind = 'vector'</code> as a parameter to VectorQuery, but it still says kind is not set!</p>
<p>Documents are getting uploaded and index is created as I can see in the portal.
<a href="https://i.sstatic.net/wizBjLoY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wizBjLoY.png" alt="enter image description here" /></a></p>
<p>I must be doing something wrong, either in the way I have set up the search index or in the way I am querying it. The documentation and GitHub codebase do not provide much around this, so I am seeking help from the community; I am new to this.</p>
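<p>One likely explanation (an assumption about the installed SDK version): in recent releases of <code>azure-search-documents</code> (11.4+), <code>VectorQuery</code> is an abstract base and the concrete class is <code>VectorizedQuery</code>, which sets <code>kind</code> for you and uses <code>k_nearest_neighbors</code> instead of <code>k</code>. A sketch reusing <code>model</code> and <code>search_client</code> from above:</p>
<pre><code>from azure.search.documents.models import VectorizedQuery

query_embedding = model.encode("machine learning").tolist()

vector_query = VectorizedQuery(
    vector=query_embedding,
    k_nearest_neighbors=3,   # note: not `k`
    fields="embedding",
)

results = search_client.search(
    search_text=None,              # pure vector search, no keyword part
    vector_queries=[vector_query],
    select=["id", "content"],
)
for result in results:
    print(result["id"], result["content"])
</code></pre>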
|
<python><azure><vector><azure-cognitive-search><azure-ai-search>
|
2025-01-04 10:43:14
| 3
| 739
|
Strayhorn
|
79,328,291
| 16,783,860
|
mecab-python3 AWS lambda "circular import" error
|
<p>GOAL - to create a custom AWS lambda layer for <a href="https://pypi.org/project/mecab-python3/" rel="nofollow noreferrer">mecab-python3</a>.</p>
<p>TRIED:</p>
<ul>
<li>local pip to zip and upload via S3 (using python3.11/3.12/3.13)</li>
<li>docker container approach outlined below</li>
</ul>
<pre><code>FROM amazonlinux:2023
RUN dnf install -y zip python3.11
RUN dnf install -y python3.11-pip
RUN curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
RUN python3 /tmp/get-pip.py
RUN pip3 install setuptools
RUN mkdir /home/layers
RUN mkdir /home/python
</code></pre>
<p>docker-compose.yaml</p>
<pre><code>version: '3'
services:
aws-lambda-layers:
build: .
volumes:
- './layers:/home/layers'
working_dir: '/home/'
command: sh -c "python3.11 -m pip install -r layers/requirements.txt -t python/ && zip -r layers/file.zip python/"
</code></pre>
<p>requirements.txt</p>
<pre><code>mecab-python3
ipadic
</code></pre>
<p>In both cases, I received the following error message on <code>import MeCab</code>.</p>
<p><code>Unable to import module 'lambda_function': cannot import name '_MeCab' from partially initialized module 'MeCab' (most likely due to a circular import) (/opt/python/MeCab/__init__.py)</code></p>
<p>So, as a final resort I tried updating <code>__init__.py</code>, but nothing changed.</p>
<p>Not too relevant, but I managed to make sudachipy & sudachidict-core work using methods similar to the ones mentioned above.</p>
<p>Has anyone here managed to make this work please?</p>
|
<python><aws-lambda><circular-dependency><mecab>
|
2025-01-04 05:16:38
| 0
| 383
|
kenta_desu
|
79,328,186
| 1,769,197
|
Python boto3: download files from s3 to local only if there are differences between s3 files and local ones
|
<p>I have the following code that downloads files from S3 to local. However, I cannot figure out how to download a file only if the S3 version is different from, and newer than, the local one. What is the best way to do this? Should it be based on modified time, ETags, MD5, or all of these?</p>
<pre><code>import boto3
import os
import pathlib
BUCKET_NAME = 'testing'
s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket = BUCKET_NAME, Prefix = KEY)
if 'Contents' in response:
for obj in response['Contents']:
file_key = obj['Key']
file_name = os.path.basename(file_key) # Get the file name from the key
local_file_path = os.path.join(f'test_dir', file_name)
#Download the file
s3_client.download_file(BUCKET_NAME, file_key, local_file_path)
print(f"Downloaded {file_name}")
</code></pre>
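<p>A minimal sketch of one common approach, reusing <code>response</code>, <code>s3_client</code> and <code>BUCKET_NAME</code> from above: compare the <code>Size</code> and <code>LastModified</code> values already returned by <code>list_objects_v2</code> against the local file, and only download on a mismatch. ETag equals the MD5 only for single-part uploads, so size plus modification time is the more general check:</p>
<pre><code>import os

def needs_download(obj, local_file_path):
    # Download if the local copy is missing, has a different size,
    # or is older than the S3 object's LastModified timestamp.
    if not os.path.exists(local_file_path):
        return True
    local_size = os.path.getsize(local_file_path)
    local_mtime = os.path.getmtime(local_file_path)   # epoch seconds
    s3_mtime = obj["LastModified"].timestamp()        # boto3 returns an aware datetime
    return local_size != obj["Size"] or local_mtime < s3_mtime

for obj in response.get("Contents", []):
    file_name = os.path.basename(obj["Key"])
    local_file_path = os.path.join("test_dir", file_name)
    if needs_download(obj, local_file_path):
        s3_client.download_file(BUCKET_NAME, obj["Key"], local_file_path)
        print(f"Downloaded {file_name}")
    else:
        print(f"Skipped {file_name} (local copy up to date)")
</code></pre>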
|
<python><amazon-web-services><amazon-s3><boto3>
|
2025-01-04 03:22:13
| 2
| 2,253
|
user1769197
|
79,327,929
| 1,760,791
|
Using OpenCV to achieve a top-down view of an image with ArUco Markers
|
<p>I have several images taken of a table setup with ArUco Markers. As end result, I would like to have a top-down coordinate system which defines the configuration of the markers on the tables.</p>
<p>I managed to detect all the markers using the code:</p>
<p><code>corners, ids = self.detect_aruco_markers(image)</code></p>
<p>Afterwards, I used the real-life marker sizes to define the coordinates for the markers as well as their relative positions, and cluster them correctly to their tables.</p>
<p>However, there is an issue which relates to the fact that the photos are not taken perfectly top-down. I have attached an example image below. The issue is that in the resulting coordinate system, markers are slightly rotated and moved from where they should be. For instance, markers that should be x- or y-aligned are not in the resulting coordinate system.</p>
<p>To fix this issue, I wanted to use a perspective transformation, using a homography matrix. However, I am having trouble deciding which points exactly to use as "real world" reference points and source points for the <code>cv2.getPerspectiveTransform</code> method.
I do not have access to the real world coordinates of the markers top-down, as this changes per input image. All that I know is that the markers are of fixed size (that is given) and that they all lie in the same plane, facing upwards.</p>
<p>I tried something using the convex hull so far, but to no avail:</p>
<pre><code> # Flatten the list of corners and convert to a NumPy array
src_points = np.concatenate(corners, axis=0).reshape(-1, 2)
# Compute the bounding box of the detected markers
x_min, y_min = np.min(src_points, axis=0)
x_max, y_max = np.max(src_points, axis=0)
# Define destination points for the perspective transform
dst_points = np.array([
[0, 0],
[x_max - x_min, 0],
[x_max - x_min, y_max - y_min],
[0, y_max - y_min]
], dtype=np.float32)
# Ensure src_points has exactly four points by selecting the outermost corners
if src_points.shape[0] > 4:
# Compute the convex hull to find the outermost corners
hull = cv2.convexHull(src_points)
if hull.shape[0] > 4:
# Approximate the hull to a quadrilateral
epsilon = 0.02 * cv2.arcLength(hull, True)
approx = cv2.approxPolyDP(hull, epsilon, True)
if approx.shape[0] == 4:
src_points = approx.reshape(4, 2)
else:
rect = cv2.minAreaRect(hull)
src_points = cv2.boxPoints(rect)
else:
src_points = hull.reshape(-1, 2)
elif src_points.shape[0] < 4:
raise ValueError("Not enough points to compute perspective transform.")
# Compute the perspective transform matrix
matrix = cv2.getPerspectiveTransform(src_points.astype(np.float32), dst_points)
# Warp the image using the transformation matrix
width = int(x_max - x_min)
height = int(y_max - y_min)
warped_image = cv2.warpPerspective(image, matrix, (width, height))
</code></pre>
<p>Am I on the right track with this approach, and how should I proceed? Thanks for any help!</p>
<p><a href="https://i.sstatic.net/VqzKnBth.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VqzKnBth.jpg" alt="The example image" /></a></p>
|
<python><opencv><computer-vision><augmented-reality><aruco>
|
2025-01-03 23:32:36
| 0
| 412
|
user1760791
|
79,327,903
| 12,590,154
|
Call function from macos framework (e.g. IntelPowerGadget)
|
<p>I'll say right away - this is my second day working with Python. So the question is probably simple. But for 2 days now I have not been able to find an answer to it.
I want to use functions from the MacOS framework (IntelPowerGadget).
At the moment I have written the following</p>
<pre><code>import sys

import objc

if sys.platform == 'darwin':
bundle = objc.loadBundle(
"IntelPowerGadget",
bundle_path="/Library/Frameworks/IntelPowerGadget.framework",
module_globals=globals(),
)
functions = [('PG_Initialize', objc._C_VOID),]
#('PG_ReadSample',...),
#('PGSample_GetPackageTemperature',...)]
objc.loadBundleFunctions(bundle, globals(), functions)
PG_Initialize()
</code></pre>
<p>I can't figure out how to correctly describe and call PG_ReadSample (bool PG_ReadSample(int iPackage, PGSampleID* sampleID)) and PGSample_GetPackageTemperature(bool PGSample_GetPackageTemperature(PGSampleID sampleID, double* temp)).
I can't find enough documentation either (I'd appreciate links - if I'm looking in the wrong place).</p>
<p><strong>upd</strong> with <a href="https://stackoverflow.com/users/2809423/ali-saberi">@ali-saberi</a> answer:</p>
<pre><code>import objc
import ctypes
import sys
if sys.platform == 'darwin':
# Load the Intel Power Gadget framework
bundle = objc.loadBundle(
"IntelPowerGadget",
bundle_path="/Library/Frameworks/IntelPowerGadget.framework",
module_globals=globals(),
)
# Define PGSampleID as ctypes type (assuming it's an integer or a pointer type)
PGSampleID = ctypes.c_void_p # Assuming PGSampleID is a pointer. Adjust if it's another type.
# Define the functions
functions = [
('PG_Initialize', objc._C_BOOL),
('PG_ReadSample', objc._C_BOOL + objc._C_INT + objc._C_PTR + objc._C_ULNG_LNG),
('PGSample_GetPackageTemperature', objc._C_BOOL + objc._C_ULNG_LNG + objc._C_PTR + objc._C_DBL),
]
# Load the functions
objc.loadBundleFunctions(bundle, globals(), functions)
# Initialize the library
if not PG_Initialize():
print("Failed to initialize Intel Power Gadget")
sys.exit(1)
# Use PG_ReadSample
sample_id = PGSampleID() # Placeholder for PGSampleID pointer
iPackage = 0 # Assume we're reading from package 0
if PG_ReadSample(iPackage, ctypes.byref(sample_id)):
print("Sample read successfully")
# Use PGSample_GetPackageTemperature
temperature = ctypes.c_double()
if PGSample_GetPackageTemperature(sample_id, ctypes.byref(temperature)):
print(f"Package Temperature: {temperature.value}°C")
else:
print("Failed to get package temperature")
else:
print("Failed to read sample")
</code></pre>
<p>there is an error on "if PG_ReadSample(iPackage, ctypes.byref(sample_id)):":<br />
"ValueError: depythonifying 'pointer', got 'CArgObject'"</p>
|
<python><macos>
|
2025-01-03 23:12:53
| 1
| 381
|
tremp
|
79,327,727
| 16,188,746
|
Conflicting dependencies while installing torch==1.10.0, torchaudio==0.10.0, and torchvision==0.11.0 in my Python environment
|
<p>I'm having trouble installing the following dependencies in my Python environment:</p>
<pre><code>torch==1.10.0+cpu
torchaudio==0.10.0
torchvision==0.11.0
pyannote-audio==0.0.1
lightning==2.3.3
numpy
scipy
pandas
soundfile
matplotlib
</code></pre>
<p>When running <code>pip install -r requirements.txt</code>, I encounter the following error:</p>
<pre><code>ERROR: Cannot install -r requirements.txt (line 6), -r requirements.txt (line 7) and torch==1.10.0 because these package versions have conflicting dependencies.
</code></pre>
<p>The conflict is caused by:</p>
<pre><code> The user requested torch==1.10.0
torchaudio 0.10.0 depends on torch==1.10.0
torchvision 0.11.0 depends on torch==1.10.0+cpu
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
</code></pre>
<p>I am using a CPU-only environment (no CUDA).
My current Python version is 3.8.</p>
|
<python><torch><torchvision><lightning><torchaudio>
|
2025-01-03 21:31:33
| 0
| 466
|
oran ben david
|
79,327,723
| 9,215,780
|
How Can I Use GPU to Accelerate Image Augmentation?
|
<p>When setting up image augmentation pipelines using <a href="https://keras.io/api/layers/preprocessing_layers/image_augmentation/" rel="nofollow noreferrer"><code>keras.layers.Random*</code></a> or other augmentation or <a href="https://keras.io/api/layers/preprocessing_layers/" rel="nofollow noreferrer">processing</a> methods, we often integrate these pipelines with a data loader, such as the <a href="https://www.tensorflow.org/guide/data" rel="nofollow noreferrer">tf.data</a> API, which operates mainly on the CPU. But heavy augmentation operations on the CPU can become a significant bottleneck, as these processes take longer to execute, leaving the GPU underutilized. This inefficiency can impact the overall training performance.</p>
<p>To address this, is it possible to offload augmentation processing to the GPU, enabling faster execution and better resource utilization? If so, how can this be implemented effectively?</p>
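<p>One widely used pattern, sketched below with Keras (layer names and shapes are placeholders): make the augmentation layers part of the model so they execute on the accelerator during the forward pass (and are inactive at inference), leaving <code>tf.data</code> responsible only for decoding, batching and prefetching on the CPU:</p>
<pre><code>from tensorflow import keras

# Augmentation as model layers: runs on the GPU/TPU with the rest of the graph,
# and only while training=True.
augment = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.1),
    keras.layers.RandomZoom(0.1),
], name="augmentation")

inputs = keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = keras.applications.ResNet50(include_top=False, weights=None)(x)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# The input pipeline then stays lightweight, e.g.:
# ds = ds.batch(64).prefetch(tf.data.AUTOTUNE)
</code></pre>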
|
<python><tensorflow><keras><pytorch><jax>
|
2025-01-03 21:28:17
| 1
| 17,272
|
Innat
|
79,327,540
| 9,632,639
|
How to reference an inner class or attribute before it is fully defined?
|
<p>I have a scenario where a class contains an inner class, and I want to reference that inner class (or its attributes) within the outer class. Here’s a concrete example using Django:</p>
<pre><code>from django.db import models
from django.utils.translation import gettext_lazy as _
class DummyModel(models.Model):
class StatusChoices(models.TextChoices):
ACTIVE = "active", _("Active")
INACTIVE = "inactive", _("Inactive")
status = models.CharField(
max_length=15,
choices=StatusChoices.choices,
verbose_name=_("Status"),
help_text=_("Current status of the model."),
default=StatusChoices.ACTIVE,
null=False,
blank=False,
)
class Meta:
verbose_name = _("Dummy Model")
verbose_name_plural = _("Dummy Models")
constraints = [
models.CheckConstraint(
name="%(app_label)s_%(class)s_status_valid",
check=models.Q(status__in=[choice.value for choice in DummyModel.StatusChoices]),
)
]
</code></pre>
<p>In this case, the constraints list in the Meta class tries to reference <code>DummyModel.StatusChoices</code>. However, at the time this reference is evaluated, <code>DummyModel</code> is not fully defined, leading to an error (and a bare <code>StatusChoices</code> is not accessible on that line either).</p>
<p>I would like to solve this without significantly altering the structure of the code—<code>StatusChoices</code> must remain defined inside DummyModel.</p>
<p>How can I resolve this issue while keeping the inner class and its attributes accessible as intended?</p>
|
<python><python-3.x><django>
|
2025-01-03 19:55:38
| 2
| 881
|
Julio
|
79,327,508
| 6,312,511
|
Polars is killing the kernel on import
|
<p>I am running the following code on JupyterLab, with no other notebooks open:</p>
<pre><code>!pip3 install polars --upgrade
import polars as pl
</code></pre>
<p>The first line upgrades me to polars 1.18.0 with no issues, but then the program hangs for 3-5 seconds before I get an error message saying that the kernel has died. What would be causing polars to use too much memory on import?</p>
|
<python><python-polars><polars>
|
2025-01-03 19:41:01
| 1
| 1,447
|
mmyoung77
|
79,327,463
| 1,451,632
|
pytorch failing to load
|
<p>Until recently (before Christmas certainly, but not sure precisely when I last imported), <code>torch</code> was working fine on my Mac (w M1 Max chip), but it is failing to import today. The error message is</p>
<pre><code>In [1]: import torch
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import torch
File ~/micromamba/envs/Jan03/lib/python3.13/site-packages/torch/__init__.py:367
365 if USE_GLOBAL_DEPS:
366 _load_global_deps()
--> 367 from torch._C import * # noqa: F403
370 class SymInt:
371 """
372 Like an int (including magic methods), but redirects all operations on the
373 wrapped node. This is used in particular to symbolically record operations
374 in the symbolic shape workflow.
375 """
ImportError: dlopen(/Users/sammcd/micromamba/envs/Jan03/lib/python3.13/site-packages/torch/_C.cpython-313-darwin.so, 0x0002): Symbol not found: __ZN4absl12lts_2024072212log_internal10LogMessagelsIiLi0EEERS2_RKT_
Referenced from: <7DD6F527-7A4A-3649-87B6-D68B25F8B594> /Users/sammcd/micromamba/envs/Jan03/lib/libprotobuf.28.2.0.dylib
Expected in: <D623F952-8116-35EC-859D-F7F8D5DD7699> /Users/sammcd/micromamba/envs/Jan03/lib/libabsl_log_internal_message.2407.0.0.dylib
</code></pre>
<p>This is in a fresh venv managed by micromamba with python 3.13 and pytorch 2.5.1</p>
<p>I have working venvs with python 3.9 and pytorch 2.4 and other failing venvs with python 3.12 and pytorch 2.5</p>
|
<python><pytorch>
|
2025-01-03 19:20:11
| 0
| 311
|
user1451632
|
79,327,275
| 1,137,713
|
Gekko using APOPT isn't optimizing a single linear equation represented as a PWL
|
<p>I've run into an issue where I can't get APOPT to optimize a single unconstrained piecewise-linear function, and it's really throwing me for a loop. I feel like there's something I'm not understanding about <code>model.pwl</code>, but it's hard (for me) to find documentation outside of the <a href="https://gekko.readthedocs.io/en/latest/model_methods.html#pwl" rel="nofollow noreferrer">GEKKO docs</a>. Here's my minimal example:</p>
<pre class="lang-py prettyprint-override"><code>model = GEKKO(remote=False)
model.options.SOLVER = 1
model.solver_options = ["minlp_as_nlp 0"]
x = model.sos1([0, 1, 2, 3, 4]) # This can also be model.Var(lb=0, ub=4), same result.
pwl = model.Var()
model.pwl(x, pwl, [0, 1, 2, 3, 4], [30, 30.1, 30.2, 30.3, 30.4], bound_x=True)
model.Minimize(pwl)
model.solve(display=True)
print(x.value)
print(pwl.value)
print(model.options.objfcnval)
</code></pre>
<p>The output that I get is:</p>
<pre><code> ----------------------------------------------------------------
APMonitor, Version 1.0.3
APMonitor Optimization Suite
----------------------------------------------------------------
--------- APM Model Size ------------
Each time step contains
Objects : 1
Constants : 0
Variables : 2
Intermediates: 0
Connections : 2
Equations : 1
Residuals : 1
Piece-wise linear model pwl1points: 5
Number of state variables: 12
Number of total equations: - 5
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 7
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter Objective Convergence
0 3.39503E+01 3.01000E+01
1 3.22900E+01 1.00000E-10
2 3.22000E+01 2.22045E-16
4 3.22000E+01 0.00000E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 3.819999999541324E-002 sec
Objective : 32.2000000000000
Successful solution
---------------------------------------------------
2.0
30.2
32.2
</code></pre>
<p>This is unexpected to me, as the obvious minimal value is 30 for the pwl.</p>
|
<python><nonlinear-optimization><gekko><mixed-integer-programming>
|
2025-01-03 17:53:28
| 1
| 2,465
|
iHowell
|
79,327,238
| 7,306,999
|
Replace table in HDF5 file with a modified table
|
<p>I have an existing HDF5 file with multiple tables. I want to modify this HDF5 file: in one of the tables I want to drop some rows entirely, and modify values in the remaining rows.</p>
<p>I tried the following code:</p>
<pre><code>import h5py
import numpy as np
with h5py.File("my_file.h5", "r+") as f:
# Get array
table = f["/NASTRAN/RESULT/ELEMENTAL/STRESS/QUAD4_COMP_CPLX"]
arr = np.array(table)
# Modify array
arr = arr[arr[:, 1] == 2]
arr[:, 1] = 1
# Write array back
table[...] = arr
</code></pre>
<p>This code however results in the following error when run:</p>
<pre><code>Traceback (most recent call last):
File "C:\_Work\test.py", line 10, in <module>
arr[arr[:, 1] == 2]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
</code></pre>
<p>So one of the problems seems to be that the numpy array <code>arr</code> that I've created is not a two-dimensional array. However I'm not sure exactly how to create a two-dimensional array out of the HDF5 table (or whether that is even the best approach here).</p>
<p>Would anyone here be able to help put me on the right path?</p>
<h2>Edit</h2>
<p>Output from <code>h5dump</code> on my dataset is as follows</p>
<pre><code>HDF5 "C:\_Work\my_file.h5" {
DATASET "/NASTRAN/RESULT/ELEMENTAL/STRESS/QUAD4_COMP_CPLX" {
DATATYPE H5T_COMPOUND {
H5T_STD_I64LE "EID";
H5T_STD_I64LE "PLY";
H5T_IEEE_F64LE "X1R";
H5T_IEEE_F64LE "Y1R";
H5T_IEEE_F64LE "T1R";
H5T_IEEE_F64LE "L1R";
H5T_IEEE_F64LE "L2R";
H5T_IEEE_F64LE "X1I";
H5T_IEEE_F64LE "Y1I";
H5T_IEEE_F64LE "T1I";
H5T_IEEE_F64LE "L1I";
H5T_IEEE_F64LE "L2I";
H5T_STD_I64LE "DOMAIN_ID";
}
DATASPACE SIMPLE { ( 990 ) / ( H5S_UNLIMITED ) }
ATTRIBUTE "version" {
DATATYPE H5T_STD_I64LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
}
}
}
</code></pre>
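<p>Given that layout, the dataset is a one-dimensional compound (structured) array, so rows are selected by field name rather than by a second axis, and because dropping rows changes the length, the dataset has to be resized (or deleted and recreated) instead of assigned in place. A sketch against the same file, using the <code>PLY</code> field that corresponds to column 1 in the original attempt:</p>
<pre><code>import h5py

with h5py.File("my_file.h5", "r+") as f:
    path = "/NASTRAN/RESULT/ELEMENTAL/STRESS/QUAD4_COMP_CPLX"
    table = f[path]
    arr = table[()]                  # read the compound dataset into memory

    # Field access instead of a second axis
    arr = arr[arr["PLY"] == 2]       # keep only the rows to retain
    arr["PLY"] = 1                   # modify the remaining rows

    # Shrink the dataset to the new row count, then write the data back.
    # This relies on the H5S_UNLIMITED maxshape shown in the h5dump output;
    # otherwise, delete the dataset and recreate it with the same dtype.
    table.resize((len(arr),))
    table[...] = arr
</code></pre>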
|
<python><numpy><hdf5><h5py><nastran>
|
2025-01-03 17:35:58
| 2
| 8,674
|
Xukrao
|
79,326,904
| 762,295
|
xxhash function gives different values between spark and python
|
<p>xxhash "a" provides different values</p>
<p>spark</p>
<pre><code>select xxhash64('b') -- -6391946315847899181
</code></pre>
<p>python</p>
<pre><code>import xxhash
xxhash.xxh64('b', seed=42).intdigest() # 12054797757861652435
</code></pre>
<p>The two results look completely different. Can anyone help make the numbers equivalent between Spark and Python? Thanks</p>
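<p>A sketch of the conversion, based on the observation that the two outputs are the same 64-bit pattern: Spark's <code>xxhash64</code> returns a signed 64-bit integer (with default seed 42), while <code>intdigest()</code> returns the unsigned value, so reinterpreting the Python result as two's complement makes them match:</p>
<pre><code>import xxhash

def xxhash64_signed(value, seed=42):
    """Reinterpret the unsigned 64-bit digest as a signed 64-bit int (Spark-style)."""
    u = xxhash.xxh64(value, seed=seed).intdigest()
    return u - (1 << 64) if u >= (1 << 63) else u

print(xxhash64_signed('b'))   # -6391946315847899181, matching xxhash64('b') in Spark
</code></pre>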
|
<python><apache-spark>
|
2025-01-03 15:17:54
| 0
| 1,463
|
Icarus
|
79,326,870
| 12,990,915
|
How to fix the size of only the “drawing” region (ignoring titles, labels) in Matplotlib?
|
<p>I’m creating plots for a research paper using Python’s Matplotlib and would like all plots to have the <em>exact same size</em> for the “inner” data region (where the plot or image is actually drawn). By default, <code>figsize</code> sets the overall figure dimensions, including margins for titles, axis labels, tick labels, colorbars, etc. Consequently, if I have one <code>imshow</code> with a title/labels/colorbar and another without them, the actual <em>drawable area</em> changes—i.e., the region where the data is rendered is <em>not</em> the same size between the two figures.</p>
<p>This becomes even more challenging with subplots. For example, if I use <code>plt.subplots(1, 1)</code> with a title, axis labels, and a colorbar, and compare it to <code>plt.subplots(4, 4)</code> (16 subplots in a grid), it’s quite difficult to get each of those subplot “drawing” areas to be <em>exactly</em> the same size just by tweaking <code>figsize</code>. Often, the space consumed by the colorbar, axis ticks/labels, and subplot spacing changes the available data area in ways that are tricky to standardize.</p>
<p>Is there a way to pin down the data area to a fixed dimension (say, 2 inches by 2 inches) regardless of titles, labels, colorbars, or subplot configuration? Ideally, I would like to ensure that if I remove titles or axis labels from one figure—or change the number of subplots—the data region of each axes remains the same size as in another figure that <em>does</em> have titles/labels or uses a different subplot layout.</p>
<p>Any suggestions or code examples would be greatly appreciated. Thank you!</p>
<p>Examples:</p>
<pre><code>## No title, axis labels, or colorbar
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 2))
ax.imshow(np.sin(np.linspace(0, 10, 100)).reshape(-1, 1),
aspect='auto',
cmap='Spectral')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/WAHXfCwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WAHXfCwX.png" alt="enter image description here" /></a></p>
<pre><code>## Same plot as above with title, axis labels, and colorbar
fig, ax = plt.subplots(figsize=(4, 2))
im = ax.imshow(np.sin(np.linspace(0, 10, 100)).reshape(-1, 1),
aspect='auto',
cmap='Spectral')
ax.set_title('Sine Wave', fontsize=16, fontfamily='monospace')
ax.set_xlabel('Time', fontsize=16, fontfamily='monospace')
ax.set_ylabel('Amplitude', fontsize=16, fontfamily='monospace')
plt.colorbar(im, ax=ax, aspect=3)
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Cu84QXrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cu84QXrk.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2025-01-03 15:00:49
| 2
| 383
|
user572780
|
79,326,812
| 3,557,405
|
FastAPI app: dynamically create an Enum and use it in SQLAlchemy models without circular dependencies?
|
<p>My app is structured as shown below.</p>
<pre><code>app/
├── backend/
│ └── main.py
├── models/
│ └── my_model.py
└── utils/
└── enums.py
</code></pre>
<p><code>main.py</code> instantiates a config dict, containing key <code>prefix</code>, from a yaml file which is needed in <code>enums.py</code> to generate schema names at runtime.</p>
<p>utils/enums.py:</p>
<pre><code>from enum import Enum
from typing import Type

def create_schema_names(prefix: str) -> Type[Enum]:
class SchemaNames(str, Enum):
RAW = prefix + "_RAW"
STAGING = prefix + "_STAGING"
TRANSFORMED = prefix + "_TRANSFORMED"
return SchemaNames
</code></pre>
<p>I want to use <code>SchemaNames</code> in my SQLAlchemy models in <code>my_model.py</code> to specify the schema for each model: <code>__table_args__ = {"schema": SchemaNames.RAW.value}</code>.</p>
<p><strong>Problem:</strong></p>
<p>Importing <code>SchemaNames</code> in <code>my_model.py</code> leads to circular dependencies because the config is defined in <code>main.py</code>.
I want to avoid using global variables by exporting <code>SchemaNames</code> globally.</p>
<p>How can I dynamically create the <code>SchemaNames</code> enum based on the runtime config and use it in SQLAlchemy models without causing circular dependencies or relying on global exports? What best practice approach is there for this?</p>
<p>Edit: <code>main.py</code> is structured as follows:</p>
<pre><code>import uvicorn
from fastapi import FastAPI
from app.api import register_api_1
# in register_api_2 function my_model.py is imported so that data in it from the DB can be read.
from app.api import register_api_2
def make_config() -> ConfigClass:
# parses config from yaml file
return my_config
def create_app(config: ConfigClass) -> FastAPI:
app = FastAPI()
register_api_1(app, config)
register_api_2(app)
return app
def main():
config = make_config()
app = create_app(config)
uvicorn.run(
app,
host=config.webserver.host,
port=config.webserver.port,
log_config=config.log_config,
)
if __name__ == "__main__":
main()
</code></pre>
|
<python><oop><sqlalchemy><enums><fastapi>
|
2025-01-03 14:44:54
| 1
| 636
|
user3557405
|
79,326,712
| 4,653,423
|
Testing of cdk eks KubernetesManifest in python
|
<p>I am trying to write unit tests for my python CDK code. From the AWS doc I understand that templates are generated easily for the level-2 & level-1 constructs so that the template can be tested. I am struggling to create tests for the code that use <code>aws_cdk.aws_eks.KubernetesManifest</code> because it uses <code>aws_cdk.aws_eks.ICluster</code> and it is not directly available.</p>
<p>In production code we create this object using another stack's outputs. Now if I were to mock this object, what would be the requirements of the mocked object? Or if I were to mock the output of previous stack, what would be the steps to do that? I am unable to find any documentation that will help do such tweaking on the CDK constructs.</p>
<p>Can anyone please redirect me to the relevant documentation or steps to solve this?</p>
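<p>A sketch of one way to get an <code>ICluster</code> in a unit test without the other stack's outputs, assuming aws-cdk-lib v2: import a stand-in cluster with <code>Cluster.from_cluster_attributes</code> and assert on the synthesized template. The role ARN and manifest are placeholders, and depending on the CDK version the imported cluster may also need a <code>kubectl_layer</code>:</p>
<pre><code>import aws_cdk as cdk
from aws_cdk import aws_eks as eks
from aws_cdk.assertions import Template

def test_manifest_synthesizes():
    app = cdk.App()
    stack = cdk.Stack(app, "TestStack")

    # Stand-in ICluster built from plain attribute values
    cluster = eks.Cluster.from_cluster_attributes(
        stack, "ImportedCluster",
        cluster_name="dummy-cluster",
        kubectl_role_arn="arn:aws:iam::111122223333:role/dummy-kubectl-role",
    )

    eks.KubernetesManifest(
        stack, "Manifest",
        cluster=cluster,
        manifest=[{"apiVersion": "v1", "kind": "Namespace",
                   "metadata": {"name": "my-namespace"}}],
    )

    template = Template.from_stack(stack)
    template.resource_count_is("Custom::AWSCDK-EKS-KubernetesResource", 1)
</code></pre>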
|
<python><aws-cdk><python-unittest>
|
2025-01-03 14:24:06
| 0
| 1,369
|
Mukund Jalan
|
79,326,576
| 1,406,168
|
Writing to application insights from FastAPI with managed identity
|
<p>I am trying to log from a FastAPI application to Azure Application Insights. It is working with a connection string, but I would like it to work with managed identity. The code below does not fail - no errors or anything - but it does not log anything. Any suggestions on how to solve the problem, or how to troubleshoot it given that I get no errors:</p>
<pre><code>from fastapi import FastAPI,Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi_azure_auth import SingleTenantAzureAuthorizationCodeBearer
import uvicorn
from fastapi import FastAPI, Security
import os
from typing import Dict
from azure.identity import DefaultAzureCredential
import logging
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace,metrics
from settings import Settings
from pydantic import AnyHttpUrl,BaseModel
from contextlib import asynccontextmanager
from typing import AsyncGenerator
from fastapi_azure_auth.user import User
settings = Settings()
@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
"""
Load OpenID config on startup.
"""
await azure_scheme.openid_config.load_config()
yield
app = FastAPI(
swagger_ui_oauth2_redirect_url='/oauth2-redirect',
swagger_ui_init_oauth={
'usePkceWithAuthorizationCodeGrant': True,
'clientId': settings.xx,
'scopes': settings.xx,
},
)
if settings.BACKEND_CORS_ORIGINS:
app.add_middleware(
CORSMiddleware,
allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
allow_credentials=True,
allow_methods=['*'],
allow_headers=['*'],
)
azure_scheme = SingleTenantAzureAuthorizationCodeBearer(
app_client_id=settings.xx,
tenant_id=settings.xx,
scopes=settings.xx,
)
class User(BaseModel):
name: str
roles: list[str] = []
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
credential = DefaultAzureCredential()
configure_azure_monitor(
credential=credential,
connection_string="InstrumentationKey=xx-xx-xx-xx-xx"
)
@app.get("/log", dependencies=[Security(azure_scheme)])
async def root():
print("Yo test")
logger.info("Segato5", extra={"custom_dimension": "Kam_value","test1": "val1"})
meter = metrics.get_meter_provider().get_meter(__name__)
counter = meter.create_counter("segato2")
counter.add(8)
return {"whoIsTheBest": "!!"}
if __name__ == '__main__':
uvicorn.run('main:app', reload=True)
</code></pre>
|
<python><azure><fastapi><azure-application-insights><azure-managed-identity>
|
2025-01-03 13:19:36
| 1
| 5,363
|
Thomas Segato
|
79,326,501
| 775,066
|
How do I initialize empty variable for boto3 client
|
<p>I'd like to do a simple check of whether a variable for a boto3 client is empty. I tried the following approach:</p>
<pre class="lang-py prettyprint-override"><code>"""Example of boto3 client lazy initialization"""
from typing import Optional
import boto3
from botocore.client import BaseClient
from mypy_boto3_ec2 import EC2Client
class ClassA:
"""Abstract class which lazily initializes the boto3 client"""
def __init__(self) -> None:
self._client: Optional[BaseClient] = None
self.client_type: str = ""
@property
def client(self):
"""Lazy boto3 client initialization"""
if not self._client:
self._client = boto3.client(self.client_type)
return self._client
class ClassB(ClassA):
"""One of many concrete child classes of the ClassA"""
def __init__(self) -> None:
super().__init__()
self._client: Optional[EC2Client] = None
self.client_type: str = "ec2"
def some_method(self) -> dict:
"""Just a method to try random EC2Client functionality"""
result = self.client.describe_instances()
return result
</code></pre>
<p>But this code has many typing problems (a summary of issues found by mypy and pyright):</p>
<ol>
<li>I cannot override parent type <code>BaseClient</code> by concrete class <code>EC2Client</code></li>
<li>When I leave out the <code>Optional</code> for the client type, <code>None</code> value is not compatible with types <code>BaseClient</code> and <code>EC2Client</code></li>
<li>When I leave client optional, then NoneType has no attribute <code>describe_instances</code>.</li>
<li>When I leave ClassA._client untyped and equal None, the child class client is again incompatible.</li>
</ol>
<p>So how do I type an empty variable for a boto3 client? What should be the value so that I recognize the variable is empty? (How do I make it optional without the NoneType attribute problem?)</p>
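<p>A sketch of one pattern that usually keeps mypy and pyright happy about the client attribute: make the base class generic in the client type and let each child construct its own, precisely typed client, so <code>None</code> remains the "empty" sentinel and the property narrows it away:</p>
<pre><code>from typing import Generic, Optional, TypeVar

import boto3
from mypy_boto3_ec2 import EC2Client

ClientT = TypeVar("ClientT")

class ClassA(Generic[ClientT]):
    """Base class, generic in the concrete boto3 client type."""

    def __init__(self) -> None:
        self._client: Optional[ClientT] = None

    def _create_client(self) -> ClientT:
        """Each child builds its own client with the exact type."""
        raise NotImplementedError

    @property
    def client(self) -> ClientT:
        """Lazy initialization; the None check narrows the Optional away."""
        if self._client is None:
            self._client = self._create_client()
        return self._client

class ClassB(ClassA[EC2Client]):
    def _create_client(self) -> EC2Client:
        # The literal "ec2" lets boto3-stubs pick the EC2Client overload
        return boto3.client("ec2")

    def some_method(self):
        return self.client.describe_instances()
</code></pre>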
<p>Questions I used as sources:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/72221091/im-trying-to-type-annotate-around-boto3-but-module-botocore-client-has-no-at">I'm trying to type annotate around boto3, but module 'botocore.client' has no attribute 'EC2'</a></li>
<li><a href="https://stackoverflow.com/questions/74677544/what-is-the-correct-type-annotation-for-boto3-clientservice">What is the correct type annotation for boto3.client(service)</a></li>
<li><a href="https://github.com/microsoft/pyright/issues/6564" rel="nofollow noreferrer">Variable subtype override is incompatible</a></li>
</ul>
|
<python><boto3><python-typing><mypy><pyright>
|
2025-01-03 12:43:51
| 1
| 2,103
|
sumid
|
79,326,433
| 19,155,645
|
MarkItDown: missing documentation - how to use conversion features
|
<p>Microsoft recently released <a href="https://github.com/microsoft/markitdown/tree/main" rel="nofollow noreferrer">MarkItDown</a>, but the documentation for the Python API is quite short (or I did not manage to find it).</p>
<p>Any help with how to figure out the different features it offers?
At the moment the only documentation (either on <a href="https://github.com/microsoft/markitdown/tree/main" rel="nofollow noreferrer">GitHub</a> or <a href="https://pypi.org/project/markitdown/#description" rel="nofollow noreferrer">PyPi</a>) is:</p>
<pre><code>from markitdown import MarkItDown
markitdown = MarkItDown()
result = markitdown.convert("<your-file>")
print(result.text_content)
</code></pre>
<p>This works, but some things are not converted well. For example: (1) if the PDF has several columns per page (e.g. a scientific paper), the paragraphs are not always converted correctly (there is not even a space in the conversion between the last character of one column and the first character of the next); and (2) specific handling of tables (I have to say, it works quite well - but it only puts commas between table rows/columns, and I was hoping to be able to treat the tables separately, ideally without manual post-hoc cleaning for each table).</p>
<p>I would like to know, for example, how I can take care of these (and other similar) issues. <br>
Typing <code>help(MarkItDown)</code> is also not very informative.</p>
|
<python><markdown><large-language-model>
|
2025-01-03 12:15:02
| 0
| 512
|
ArieAI
|
79,326,248
| 25,362,602
|
Cannot select venv created on WSL (when VS Code is NOT connected to WSL)
|
<p>I am working on Windows 11 with WSL2 (Ubuntu 22.04), with VS Code NOT connected to WSL (via command <code>Connect to WSL</code>).</p>
<p>I used to work with a venv created on WSL, which I could select from the python interpreter list (command : <code>Python: Select Interpreter</code>), and it worked well.</p>
<p>Now, that venv does not appear anymore from the python interpreter list (my venv is well activated and can run python files), and from the command <code>Python: Select Interpreter</code>, I cannot do "Enter interpreter path" > "Find", because it accepts only .exe files...</p>
<p>On my "settings.json", I have :</p>
<pre class="lang-json prettyprint-override"><code> "python.pythonPath": "/home/vc/my-venv/bin/python3.12", // now this key is greyed out ("Unknown configuration setting"), it think it was not weeks/months ago (?)
"python.venvPath": "/home/vc/my-venv",
"python.venvFolders": ["/home/vc/my-venv"],
</code></pre>
<p>The command <code>Python: Clear Cache and Reload Window</code> or restarting VS Code does not solve the problem.</p>
<p>I thought it was related to VS Code update (<code>1.96.2</code>), but I reverted back to <code>1.95.3</code>, <code>1.94.1</code> and <code>1.92.2</code>, and have the same behavior.</p>
<p>I also tried to revert the version of the Python extension back to old versions, I still have the issue...</p>
<p>(after those downgrades, <code>python.pythonPath</code> on settings.json is still greyed out...)</p>
<p>Another thing I tried is to revert my settings.json (3 months back), still not working...</p>
<p>How can I select my WSL venv, please ?</p>
<p>Notes :</p>
<ul>
<li>I would prefer not working with a venv created from Windows.</li>
<li>When VS Code is connected to WSL (via command <code>Connect to WSL</code>), I can select the venv, but I would prefer my VS Code not connected to WSL, like I had before.</li>
<li>Similar SO posts exist, but I did not see any posts with VS Code NOT connected to WSL (which I want to do)</li>
</ul>
|
<python><visual-studio-code><windows-subsystem-for-linux><python-venv>
|
2025-01-03 10:58:35
| 0
| 451
|
vimchun
|
79,326,157
| 16,405,935
|
How to reorder columns if the columns have the same part name
|
<p>I want to reorder the column names when the columns share the same part of their name. Sample below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'Branch': ['Hanoi'],
'20241201_Candy': [3], '20241202_Candy': [4], '20241203_Candy': [5],
'20241201_Candle': [3], '20241202_Candle': [4], '20241203_Candle': [5],
'20241201_Biscuit': [3], '20241202_Biscuit': [4], '20241203_Biscuit': [5]})
</code></pre>
<p>Below is my expected output:</p>
<pre><code>df2 = pd.DataFrame({
'Branch': ['Hanoi'],
'20241201_Biscuit': [3], '20241201_Candle': [3], '20241201_Candy': [3],
'20241202_Biscuit': [4], '20241202_Candle': [4], '20241202_Candy': [4],
'20241203_Biscuit': [5], '20241203_Candle': [5], '20241203_Candy': [5]})
</code></pre>
<p>So I want to automatically reorder the dataframe columns so that columns with the same date are grouped together.</p>
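<p>Because every column except <code>Branch</code> starts with a fixed-width <code>YYYYMMDD</code> prefix, plain alphabetical sorting already groups the columns by date and then by product name. A minimal sketch using the <code>df</code> from above:</p>
<pre><code># Keep 'Branch' first; the fixed-width date prefix makes lexicographic order
# equal to (date, product) order.
other_cols = sorted(c for c in df.columns if c != 'Branch')
df2 = df[['Branch'] + other_cols]
</code></pre>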
|
<python><pandas>
|
2025-01-03 10:26:07
| 2
| 1,793
|
hoa tran
|
79,326,135
| 10,000,669
|
Very similar functions have noticeably different running times
|
<p>I have these 2 functions to find the left null space of a matrix over GF(2) for the Quadratic Sieve factoring algorithm, given a list of integers (each of which represents a bit array):</p>
<pre><code>import random
import time
def print_mat(m, n):
for v in m:
print(f"{v:0{n}b}")
def solve_bits_msb(mat, n):
# GAUSSIAN ELIMINATION
m = len(mat)
marks = []
cur = -1
# m -> number of primes in factor base
# n -> number of smooth relations
mark_mask = 0
for row in mat:
if cur % 100 == 0:
print("", end=f"{cur, m}\r")
cur += 1
bl = row.bit_length()
msb = bl - 1
if msb == -1:
continue
marks.append(n - bl)
mark_mask |= 1 << msb
for i in range(m):
if mat[i] & (1 << msb) and i != cur:
mat[i] ^= row
marks.sort()
# NULL SPACE EXTRACTION
nulls = []
free_cols = [col for col in range(n) if col not in marks]
k = 0
for col in free_cols:
null = [0] * n
null[col] = 1
shift = n - col - 1
val = 1 << shift
fin = val
for v in mat:
if v & val:
fin |= v & mark_mask
nulls.append(fin)
k += 1
if k == 10:
break
return nulls
def solve_bits_lsb(matrix, n):
# GAUSSIAN ELIMINATION
m = len(matrix)
marks = []
cur = -1
# m -> number of primes in factor base
# n -> number of smooth relations
mark_mask = 0
for row in matrix:
if cur % 100 == 0:
print("", end=f"{cur, m}\r")
cur += 1
lsb = (row & -row).bit_length() - 1
if lsb == -1:
continue
marks.append(n - lsb - 1)
mark_mask |= 1 << lsb
for i in range(m):
if matrix[i] & (1 << lsb) and i != cur:
matrix[i] ^= row
marks.sort()
# NULL SPACE EXTRACTION
nulls = []
free_cols = [col for col in range(n) if col not in marks]
k = 0
for col in free_cols:
shift = n - col - 1
val = 1 << shift
fin = val
for v in matrix:
if v & val:
fin |= v & mark_mask
nulls.append(fin)
k += 1
if k == 10:
break
return nulls
n = 15000
m = n - 1
mat = [random.getrandbits(n) for _ in range(m)]
msb = False # can only run 1 at a time bcz other code gets optimized
if msb:
start1 = time.perf_counter()
nulls1 = solve_bits_msb(mat, n)
print(f"Time 1(MSB): {time.perf_counter() - start1}")
else:
start2 = time.perf_counter()
nulls2 = solve_bits_lsb(mat, n)
print(f"Time 2(LSB): {time.perf_counter() - start2}")
</code></pre>
<p>One thing I noticed however, is that the <code>solve_bits_lsb</code> version of the code starts out fast and slows down as I iterate through the rows while the <code>solve_bits_msb</code> version of the code starts out slower but speeds up greatly as I iterate through the rows. Doesn't matter for small <code>n</code> but as <code>n</code> reaches around <code>20,000+</code>, there is a very noticeable speed difference that only grows larger between the <code>solve_bits_msb</code> and <code>solve_bits_lsb</code> functions. The problem is that in the project I need this code for, the <code>solve_bits_lsb</code> version fits in much nicer in respect to how I organize my matrix, however the speed difference is a big caveat to that.</p>
<p>I was wondering why this is happening, and also whether there are any changes I can make to the <code>solve_bits_lsb</code> code that would bring its running time down to match what I get with <code>solve_bits_msb</code>.</p>
|
<python><performance><optimization><bit-manipulation><sparse-matrix>
|
2025-01-03 10:16:46
| 1
| 1,429
|
J. Doe
|
79,326,029
| 2,092,445
|
Optimize the process of handling larger than memory feather files using python (pandas)
|
<p>I am storing stock prices for different entities as separate feather files in S3 bucket. On high level, the content of any feather file looks like below.</p>
<pre><code>month | value | observation |
-----------------------------
2024-01 | 12 | High
2024-01 | 5 | Low
</code></pre>
<p>A lambda function written in python uses pandas to update this data - insert new rows, update existing rows, delete rows etc.</p>
<p>Each day, when new prices are received for a given entity, the existing code reads the feather file for this entity into memory (using pandas), concatenates the incoming new data, and then writes the updated feather file from memory back to S3. This is working fine for now, but as the size of these feather files grows, we are seeing "out of memory" exceptions in some cases when the lambda tries to load a large feather file into memory during merge operations. This happens even when I have assigned 10 GB (the maximum) of memory to the lambda.</p>
<p>All the supported operations - merge, update, delete are done in memory once the files are loaded fully into memory.</p>
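<p>For context, the current in-memory merge is essentially this pattern (a simplified sketch, not the real Lambda code; the file path and the dedup key are placeholders, and the S3 download/upload steps are omitted):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

def merge_daily_prices(feather_path: str, incoming: pd.DataFrame) -> None:
    # Load the entire Feather file into memory -- this is the step that OOMs.
    existing = pd.read_feather(feather_path)
    # Append the new rows and keep the latest row per (month, observation).
    merged = pd.concat([existing, incoming], ignore_index=True)
    merged = merged.drop_duplicates(subset=["month", "observation"], keep="last")
    # Feather needs a default index, so reset it before writing back.
    merged.reset_index(drop=True).to_feather(feather_path)
</code></pre>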
<p>Is there a better way, or another library, that would let me do these merges and other operations without loading everything into memory? I checked DuckDB, and it looks like it supports predicate pushdown to the storage level, but it doesn't support Feather files natively.</p>
<p>Looking for any other ideas to approach this problem.</p>
<p>Thanks</p>
<h2>Update</h2>
<p>We are partitioning the Feather files by year. That makes the merge operation slow, because I have to touch multiple partitions whenever the incoming data (a manual load in this case) contains data points from different years.</p>
<p>Also, a user may ask for data that spans multiple years. For example, a query might say: give me all data with the "High" observation. I then still need to visit multiple partitions, which may slow things down.</p>
|
<python><pandas><feather>
|
2025-01-03 09:35:14
| 1
| 2,264
|
Naxi
|
79,325,674
| 687,331
|
blackboxprotobuf showing positive values instead of negative values for protobuf response
|
<p>I have an issue where blackboxprotobuf takes a protobuf response and returns a dictionary in which a few values that should be negative come back as positive.</p>
<p>I am calling an API with lat (40.741895) and long (-73.989308). From this lat/long a key, '<strong>81859706</strong>', is generated and then used in the API.</p>
<p>For key generation we are using a paid framework.</p>
<pre><code>url = "https://gspe85-ssl.ls.apple.com/wifi_request_tile"
response =requests.get(url, headers={
'Accept': '*/*',
'Connection': 'keep-alive',
'X-tilekey': "81859706",
'User-Agent': 'geod/1 CFNetwork/1496.0.7 Darwin/23.5.0',
'Accept-Language': 'en-US,en-GB;q=0.9,en;q=0.8',
'X-os-version': '17.5.21F79'
})
</code></pre>
<p>This returns protobuf as the response, which I then convert to JSON using <em><strong>blackboxprotobuf</strong></em>:</p>
<pre><code>message, typedef = blackboxprotobuf.protobuf_to_json(response.content)
json1_data = json.loads(message)
</code></pre>
<p>Response:</p>
<pre><code> "2": [
{
"4": {
"2": {
"1": 1,
"2": 1
}
},
"5": 124103876854927,
"6": {
"1": 407295068,
"2": 3555038608 //This values should be negative
}
},
</code></pre>
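<p>As a sanity check on my side (assuming the field is really a 32-bit signed value that got decoded as an unsigned varint), reinterpreting the number as two's-complement gives something that looks like the longitude from the request:</p>
<pre class="lang-py prettyprint-override"><code>def as_signed32(value: int) -> int:
    # Reinterpret an unsigned 32-bit integer as a two's-complement signed one.
    return value - (1 << 32) if value >= (1 << 31) else value

print(as_signed32(407295068) / 1e7)   # 40.7295068  -> close to the request latitude
print(as_signed32(3555038608) / 1e7)  # -73.9928688 -> close to the request longitude
</code></pre>
<p>If that is right, the issue is only in how the field's type is being guessed during decoding, not in the data itself.</p>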
<p>Any help on how to debug this response and fix the issue would be appreciated.</p>
<p>Thank you</p>
|
<python><protocol-buffers>
|
2025-01-03 06:37:36
| 1
| 1,985
|
Anand
|
79,325,663
| 2,012,814
|
Error during summarization: '>=' not supported between instances of 'int' and 'str' using transformers
|
<p>When I run this generation call:</p>
<pre><code>outputs = model.generate(inputs, max_length=hf_max_length, num_return_sequences=1)
</code></pre>
<p>I get this error:</p>
<pre><code>Error during summarization: '>=' not supported between instances of 'int' and 'str'
</code></pre>
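<p>The same comparison error is easy to reproduce when a generation argument like <code>max_length</code> ends up as a string instead of an int, which is my guess about <code>hf_max_length</code> (for example, if it was read from a config file or an environment variable):</p>
<pre class="lang-py prettyprint-override"><code>hf_max_length = "200"  # hypothetical: value loaded as a string from config/env

try:
    10 >= hf_max_length  # same comparison shape as in the traceback
except TypeError as exc:
    print(exc)  # '>=' not supported between instances of 'int' and 'str'

hf_max_length = int(hf_max_length)  # casting to int first avoids the error
print(10 >= hf_max_length)          # False
</code></pre>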
|
<python><huggingface-transformers>
|
2025-01-03 06:31:35
| 1
| 363
|
kurtfoster
|
79,325,612
| 4,953,146
|
Should Python's module directory match the Python version?
|
<h3>Error message</h3>
<p>Context: macOS Ventura (Intel iMac)</p>
<p>Python code:</p>
<pre><code>import paho.mqtt.client as mqtt
</code></pre>
<p>Failure returns:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'paho'</p>
</blockquote>
<h3>Attempts to <a href="https://pypi.org/project/paho-mqtt/" rel="nofollow noreferrer">install paho-mqtt client</a></h3>
<pre class="lang-none prettyprint-override"><code>user@iMac ~ % pip install paho-mqtt
Requirement already satisfied: paho-mqtt in /usr/local/lib/python3.11/site-packages (2.1.0)
user@iMac ~ % pip3 install paho-mqtt
Requirement already satisfied: paho-mqtt in /usr/local/lib/python3.11/site-packages (2.1.0)
</code></pre>
<p>Via <a href="https://formulae.brew.sh/formula/libpaho-mqtt" rel="nofollow noreferrer">homebrew</a>: <code>brew install libpaho-mqtt</code></p>
<pre class="lang-none prettyprint-override"><code>user@iMac ~ % pip list | grep paho
paho-mqtt 2.1.0
</code></pre>
<h3>Python versions</h3>
<pre class="lang-none prettyprint-override"><code>user@iMac ~ % python3 -V
Python 3.10.8
</code></pre>
<h3>Question</h3>
<p>Should <code>python3 -V</code> return 3.11? I ask because pip reports paho-mqtt installed under <code>/usr/local/lib/python3.11/site-packages</code>, which belongs to Python 3.11.</p>
<p><code>/usr/bin/python3 -V</code> returns Python 3.9.6.</p>
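<p>A quick diagnostic from the interpreter that actually runs the failing script (a minimal sketch; the output will of course differ per machine):</p>
<pre class="lang-py prettyprint-override"><code>import sys

print(sys.version)     # the Python version that is really executing the script
print(sys.executable)  # full path of that interpreter
print(*sys.path, sep="\n")  # the directories it searches for modules such as paho
</code></pre>
<p>Installing with <code>python3 -m pip install paho-mqtt</code>, using the same <code>python3</code> that runs the script, guarantees the package lands in that interpreter's <code>site-packages</code>.</p>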
|
<python>
|
2025-01-03 05:59:37
| 0
| 1,577
|
gatorback
|
79,325,463
| 11,850,322
|
Pandas Groupby Rolling Apply Custom Function (Pass Dataframe not Series)
|
<p>I need to do a <code>groupby</code>, then <code>rolling</code>, and then <code>apply</code> a custom function.</p>
<p>This is my custom function:</p>
<pre><code>import numpy as np
import pandas as pd
import patsy
import statsmodels.api as sm


def reg_DOL(group):
g = group.copy()
if pd.isna(group['lnebit'].iloc[0]) or pd.isna(group['lnsale'].iloc[0]):
return np.NaN
else:
g['lnebit_t0'] = g['lnebit'].iloc[0]
g['lnsale_t0'] = g['lnsale'].iloc[0]
g['t'] = range(1, len(g) + 1)
y, X = patsy.dmatrices('lnebit ~ lnebit_t0 + gebit:t -1', g, return_type='dataframe')
model = sm.OLS(y, X, missing='drop')
result = model.fit()
g['resid_ebit'] = result.resid
y, X = patsy.dmatrices('lnsale ~ lnsale_t0 + gsale:t -1', g, return_type='dataframe')
model = sm.OLS(y, X, missing='drop')
result = model.fit()
g['resid_sale'] = result.resid
y, X = patsy.dmatrices('resid_ebit ~ resid_sale', g, return_type='dataframe')
model = sm.OLS(y, X, missing='drop')
result = model.fit()
dol = result.params['resid_sale']
return dol
</code></pre>
<p>And this is my groupby rolling</p>
<pre><code>comp.groupby('gvkey').rolling(window=20, min_periods=20).apply(lambda g: reg_DOL(g))
</code></pre>
<p>So far I have been unsuccessful with this rolling approach. As I understand it, the root cause might be that rolling passes only a Series, not a DataFrame, to the applied function, so my rolling apply cannot work here. If that is the issue, how should I solve it? If not, what is the actual problem?</p>
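<p>A quick way to check that suspicion (my own toy data, not the real panel) is to print what <code>rolling(...).apply(...)</code> actually passes to the function:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({"lnebit": range(5), "lnsale": range(5)})

def probe(window):
    print(type(window))  # prints a Series type, once per column
    return 0.0

# apply() is invoked separately for each column, with a Series
# (or a NumPy array if raw=True) -- never with the whole DataFrame window.
df.rolling(window=3, min_periods=3).apply(probe)
</code></pre>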
<p>Thank you</p>
|
<python><pandas><group-by>
|
2025-01-03 04:00:30
| 0
| 1,093
|
PTQuoc
|