| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
79,503,307
| 8,541,953
|
GCS function: missing 1 required positional argument: 'context'
|
<p>I have a large GCS-triggered Cloud Function that used to work. After updating it with a minor change, I am getting the following error:</p>
<blockquote>
<p>TypeError: gcs_trigger() missing 1 required positional argument: 'context'</p>
</blockquote>
<p>For context, my entry function does have context:</p>
<pre><code>def gcs_trigger(event, context):
"""Handles a GCS event, processes zip files for nominal and backup procedures"""
bucket_name = event["bucket"]
file_path = event["name"]
# Initialize the storage client
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
# NOMINAL PROCEDURE
if (
file_path.endswith(".zip")
and file_path.startswith(SPECIFIC_FOLDER)
and file_path.count("/") == NOMINAL_PROCEDURE_SLASH_COUNT
):
....
</code></pre>
<p>From what I have read, this error can occur when the function is triggered by an HTTP request, but that is not my case.</p>
<p>Running <code>gcloud functions describe</code> gives me:</p>
<pre><code>eventTrigger:
eventFilters:
- attribute: bucket
value: mybucket
eventType: google.cloud.storage.object.v1.finalized
</code></pre>
<p>What am I missing?</p>
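<p>For reference, a minimal sketch of what a 2nd gen (CloudEvent-style) handler looks like, in case the function was redeployed as a 2nd gen function; the <code>google.cloud.storage.object.v1.finalized</code> event type shown above is the Eventarc form used by 2nd gen deployments, which receive a single argument instead of <code>(event, context)</code>. This is an assumption about the deployment, not a confirmed diagnosis:</p>
<pre class="lang-py prettyprint-override"><code># Sketch: 2nd gen Cloud Function handler (assumption: the function was
# redeployed as a CloudEvent function rather than a 1st gen background function).
import functions_framework
from google.cloud import storage

@functions_framework.cloud_event
def gcs_trigger(cloud_event):
    data = cloud_event.data          # payload of the storage event
    bucket_name = data["bucket"]
    file_path = data["name"]
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    # ... rest of the processing unchanged ...
</code></pre>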
|
<python><google-cloud-platform><google-cloud-functions><google-cloud-storage>
|
2025-03-12 10:34:42
| 0
| 1,103
|
GCGM
|
79,503,227
| 266,375
|
In strawberry, how do I make a strawberry.input non-nullable but optional in an update mutation
|
<p>I have two fields in an input:</p>
<pre class="lang-py prettyprint-override"><code>@strawberry.input
class UserUpdateInput:
name: str | None = strawberry.UNSET
description: str | None = strawberry.UNSET
</code></pre>
<p><code>name</code> - in the backend database - is <strong>not nullable</strong>. <code>description</code> is nullable.</p>
<p>Above is almost what I need:</p>
<ul>
<li>The fields are optional in a mutation.</li>
<li>They are <code>strawberry.UNSET</code> when not specified.</li>
<li>Description can be set to None, or any string.</li>
</ul>
<p>But now <code>name</code> can also be set to <code>None</code>, which is wrong. Taking out the <code>None</code> type hint (i.e. <code>name: str = strawberry.UNSET</code>) makes <code>name</code> no longer optional.</p>
<p>In GraphQL I'd like (something like) this:</p>
<pre class="lang-none prettyprint-override"><code>mutation Update {
updateUser(id:1, userUpdateInput: {name:"tom",description:"too old for this s..."}) {
id
}
}
</code></pre>
<p>or this:</p>
<pre class="lang-none prettyprint-override"><code>mutation Update {
updateUser(id:1, userUpdateInput: {description:null}) {
id
}
}
</code></pre>
<p>but not:</p>
<pre class="lang-none prettyprint-override"><code>mutation Update {
updateUser(id:1, userUpdateInput: {name:null}) {
id
}
}
</code></pre>
<p>Is there a way to specify both optional and non-nullable in strawberry's schema?</p>
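<p>As one possible workaround (not necessarily the idiomatic strawberry way), a minimal sketch that keeps the field optional in the schema but rejects an explicit <code>null</code> in the resolver; <code>update_user</code> here is a hypothetical mutation:</p>
<pre class="lang-py prettyprint-override"><code>import strawberry

@strawberry.input
class UserUpdateInput:
    name: str | None = strawberry.UNSET
    description: str | None = strawberry.UNSET

@strawberry.type
class Mutation:
    @strawberry.mutation
    def update_user(self, id: int, user_update_input: UserUpdateInput) -> int:
        # UNSET means "field not provided"; None means an explicit null.
        if user_update_input.name is None:
            raise ValueError("name may be omitted, but it cannot be null")
        # ... apply the update ...
        return id
</code></pre>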
|
<python><graphql><strawberry-graphql>
|
2025-03-12 10:03:45
| 2
| 3,662
|
Matthew Wilcoxson
|
79,503,147
| 13,672,396
|
Auto signing in docusign
|
<p>I have been trying to place a signature image directly, without having the user sign it.
I have used sign-here tabs and added stamp tabs in them, but the received document still expects the user to sign manually.</p>
<p>I have researched this quite a bit, but I am not finding any resource that solves my problem.</p>
<p>Here is the code (written with some help from AI). It does not seem to work. Is it even possible to achieve this? If yes, how?</p>
<pre><code>
args = {
"account_id": jwt_values["api_account_id"],
"base_path": jwt_values["base_path"],
"access_token": jwt_values["access_token"],
}
api_client = create_api_client(
base_path=args["base_path"],
access_token=args["access_token"],
)
env = EnvelopeDefinition(
email_subject="Please sign this document set",
)
env.documents = []
document2 = Document( # create the DocuSign document object
document_base64=base64.b64encode(file_bytes).decode('utf-8'),
name='Sample Document',
file_extension='pdf',
document_id='1'
)
env.documents.append(document2)
signer1 = Signer(
email='srinivasst@arkaenergy.com',
name='vaasu',
recipient_id="1",
routing_order="1",
)
signer1.tabs = Tabs(
sign_here_tabs=[
SignHere(
anchor_string="Valid Upto September 2023",
anchor_units="pixels",
stamp_type='signature',
stamp=Stamp(
image_type='initials_image',
signature_name='srinivas',
# image_base64=signature_base64,
stamp_image_uri="https://m.media-amazon.com/images/I/31fLWHfjAOL._SY300_SX300_QL70_FMwebp_.jpg",
status='sent'
)
)
]
)
env.recipients = Recipients(signers=[signer1])
env.status = "sent"
envelopes_api = EnvelopesApi(api_client)
results = envelopes_api.create_envelope(
account_id=args["account_id"],
envelope_definition=env,
)
</code></pre>
|
<python><docusignapi><docusign-sdk>
|
2025-03-12 09:32:31
| 1
| 347
|
srinivast6
|
79,502,861
| 354,051
|
Creating a masked image with smooth borders using opencv and numpy
|
<p><a href="https://i.sstatic.net/8MyDdHTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MyDdHTK.png" alt="smooth mask" /></a></p>
<p>I'm trying to create a masked image (smooth rounded borders of 10x10 pixels) as shown here (created in Photoshop) using OpenCV and NumPy. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
# Create a 128x128 white image
mask = np.ones((128, 128), np.uint8) * 255
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 10))
# Erode the image to create a 10 pixel border
eroded_mask = cv2.erode(mask, kernel, cv2.BORDER_REFLECT, iterations=2)
# Apply Gaussian Blur to the eroded image
blurred_mask = cv2.GaussianBlur(eroded_mask, (21, 21), sigmaX=3, sigmaY=3, borderType=cv2.BORDER_DEFAULT)
# Display various images to see the steps
cv2.imshow('result', blurred_mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>This code is not working as expected. I'm getting a pure white image with no smooth borders. Any pointers?</p>
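<p>For comparison, a minimal sketch of one way to get a soft border (assumption: start from a black canvas with a white inner rectangle, so the blur has dark pixels to blend with; eroding an all-white image leaves nothing to erode against, which is why the result stays white):</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np

# Black canvas with a white rectangle inset by 10 px on every side
mask = np.zeros((128, 128), np.uint8)
mask[10:-10, 10:-10] = 255

# Blurring the hard edge produces a smooth falloff and softly rounded corners
blurred_mask = cv2.GaussianBlur(mask, (21, 21), sigmaX=3, sigmaY=3)

cv2.imshow('result', blurred_mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>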
|
<python><opencv><mask>
|
2025-03-12 07:14:49
| 1
| 947
|
Prashant
|
79,502,746
| 7,498,328
|
DeprecationWarning with groupby().apply(): How to properly exclude grouping column in pandas 2.2+
|
<p>I'm on Pandas <code>2.2.1</code> with Python <code>3.9.19</code> trying to sample 5 rows per group using <code>groupby().apply()</code>, but I keep getting this DeprecationWarning in pandas 2.2+:</p>
<pre><code>DeprecationWarning: DataFrameGroupBy.apply operated on the grouping columns.
This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation.
Either pass `include_groups=False` to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.
</code></pre>
<p>This is a reproducible example:</p>
<pre><code>import pandas as pd
# Sample DataFrame
df = pd.DataFrame({
'locale': ['en_US', 'en_US', 'fr_FR', 'fr_FR', 'fr_FR', 'zh_CN', 'zh_CN'],
'query': ['A', 'B', 'C', 'D', 'E', 'F', 'G'],
'score': [1, 2, 3, 4, 5, 6, 7]
})
# Function to sample 5 rows per group
def sample_per_locale(df):
sampled_df = df.groupby("locale", group_keys=False) \
.apply(lambda x: x.sample(min(5, len(x)), random_state=42)) \
.reset_index(drop=True)
return sampled_df # Keeping locale but still getting the warning
# Run function
sampled_df = sample_per_locale(df)
print(sampled_df)
</code></pre>
<p><strong>What I’ve Tried:</strong>
Using <code>include_groups=False</code> (but my pandas version doesn't support it):</p>
<pre><code>df.groupby("locale", group_keys=False, include_groups=False) # TypeError
</code></pre>
<p>Explicitly selecting locale after apply():</p>
<pre><code>sampled_df[['locale'] + [col for col in sampled_df.columns if col != 'locale']]
</code></pre>
<p>It still triggers the warning.</p>
<p><strong>Expected Output:</strong>
I just want <code>groupby().apply()</code> to sample 5 rows per locale without triggering this warning.</p>
<p>How do I properly exclude the grouping column in Pandas 2.2+?
What's the correct way to handle this change in future versions?
Any help would be greatly appreciated!</p>
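<p>For reference, a sketch of the two silencing options named in the warning itself; as I understand it, <code>include_groups</code> is a keyword of <code>DataFrameGroupBy.apply()</code> (added in pandas 2.2), not of <code>groupby()</code>, which is why passing it to <code>groupby()</code> raises a <code>TypeError</code>:</p>
<pre class="lang-py prettyprint-override"><code># Option 1: pass include_groups=False to apply(); the grouping column is then
# only available via the group key, so keep group_keys=True to restore it.
sampled = (
    df.groupby("locale", group_keys=True)
      .apply(lambda x: x.sample(min(5, len(x)), random_state=42), include_groups=False)
      .reset_index(level=0)          # bring "locale" back as a column
      .reset_index(drop=True)
)

# Option 2: explicitly select the columns (including "locale") after groupby(),
# as the warning message itself suggests.
sampled = (
    df.groupby("locale", group_keys=False)[["locale", "query", "score"]]
      .apply(lambda x: x.sample(min(5, len(x)), random_state=42))
      .reset_index(drop=True)
)
</code></pre>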
|
<python><pandas><group-by>
|
2025-03-12 05:59:57
| 1
| 2,618
|
user321627
|
79,502,550
| 1,708,779
|
Python-Selenium Remote Connection to Dockerized Selenium Server
|
<p>I'm trying to connect to a Dockerized version of Selenium on the same host within one Python script. This is running on a server, so this is necessarily headless.</p>
<p>I'd like to use the Firefox driver. The Docker container seems to be created and runs fine; however, I keep getting a "connection reset by peer" error.</p>
<p>My Python script is as follows:</p>
<pre><code>import docker
from selenium import webdriver
client = docker.from_env()
client.images.pull("selenium/standalone-firefox:135.0.1-geckodriver-0.36.0-20250303")
firefox = client.containers.run("selenium/standalone-firefox:135.0.1-geckodriver-0.36.0-20250303",
detach = True,
name = "firefox",
ports = {4444: 4444, 7900: 7900},
shm_size = "2G",
environment = ["SE_START_XVFB=false",
"SE_SCREEN_WIDTH=1200",
"SE_SCREEN_HEIGHT=900"])
print(client.containers.list()) # this shows me the container, also I see it in `docker ps`
try:
driver = webdriver.Remote(
command_executor="http://127.0.0.1:4444/wd/hub",
options=webdriver.FirefoxOptions()
)
driver.get("http://www.python.org")
except Exception as e:
print("An exception occurred: ", e)
</code></pre>
<p>This raises the exception:</p>
<blockquote>
<p>An exception occurred: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))</p>
</blockquote>
<p>How do I resolve this?</p>
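<p>One thing worth noting: a sketch that waits for the Grid to report readiness before creating the driver (the assumption here is that the container simply isn't finished starting when <code>webdriver.Remote</code> connects; the <code>/status</code> endpoint is part of Selenium Grid 4):</p>
<pre class="lang-py prettyprint-override"><code>import json
import time
import urllib.request

def wait_for_grid(url="http://127.0.0.1:4444/status", timeout=60):
    """Poll the Selenium Grid status endpoint until it reports ready."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if json.load(resp).get("value", {}).get("ready"):
                    return
        except OSError:
            pass                     # server not accepting connections yet
        time.sleep(1)
    raise TimeoutError("Selenium Grid did not become ready in time")

wait_for_grid()
driver = webdriver.Remote(
    command_executor="http://127.0.0.1:4444",   # /wd/hub is optional on Grid 4
    options=webdriver.FirefoxOptions(),
)
</code></pre>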
|
<python><docker><selenium-webdriver>
|
2025-03-12 03:01:55
| 1
| 1,690
|
GNUser
|
79,502,302
| 1,013,346
|
Serve NetCDF via WCS protocol with Python and MapServer
|
<p>I'd like to serve a NetCDF resource via WCS using the MapServer Python bindings.
I borrowed a mapfile from the MapServer test suite, <a href="https://github.com/MapServer/MapServer/blob/main/msautotest/wxs/wcs_netcdf_input_output.map" rel="nofollow noreferrer">wcs_netcdf_input_output.map</a>,
for which I adapted the <code>wcs_onlineresource</code> to point to my API endpoint "https://mydomain/get_wcs/test/wcs".</p>
<p>My API is based on FastAPI and the <code>get_wcs</code> method looks like:</p>
<pre><code>@router.get('/get_wcs/{data_id}/wcs', response_class=Response)
async def get_wcs(data_id: str, full_request: Request):
print('hoi!')
map = mapscript.mapObj('/app/data/nc.map')
print(map)
ows_req = mapscript.OWSRequest()
ows_req.setParameter('service', 'wcs')
ows_req.setParameter('version', '2.0.0')
ows_req.setParameter('request', 'getcoverage')
ows_req.setParameter('coverageid', 'precipitation')
ows_req.setParameter('format', 'netCDF')
ows_req.type = mapscript.MS_GET_REQUEST
print(ows_req)
format = map.getOutputFormatByName("netCDF")
print(format)
mapscript.msIO_installStdoutToBuffer()
dispatch_status = map.OWSDispatch(ows_req)
result = mapscript.msIO_getStdoutBufferBytes()
print(result)
content_type = mapscript.msIO_stripStdoutBufferContentType()
print(content_type)
#print('so far no errors')
#pass
ows_req.setParameter("REQUEST", "GetCapabilities")
mapscript.msIO_installStdoutToBuffer()
dispatch_status = map.OWSDispatch(ows_req)
if dispatch_status != mapscript.MS_SUCCESS:
if os.getenv('DEBUG') == '1':
logging.debug(f"DISPATCH status: {dispatch_status}")
mapscript.msIO_stripStdoutBufferContentHeaders()
_result = mapscript.msIO_getStdoutBufferBytes()
content_type = 'text/xml'
dom = xml.dom.minidom.parseString(_result)
result_xml = dom.toprettyxml(indent="", newl="")
return Response(result_xml, media_type=content_type)
</code></pre>
<p>The <code>result</code> object seems to be a binary stream (which I guess is correct), but at this point I got lost: I don't really know what to return, or how to handle the incoming request correctly.</p>
<p>I then tried to fall back to a default <code>GetCapabilities</code> request. The resulting XML looks OK (the info is read from the mapfile), but it is not loaded/understood by QGIS. It looks like the following:</p>
<pre><code> <?xml version="1.0" ?>
<wcs:Capabilities xmlns:wcs="http://www.opengis.net/wcs/2.0" xmlns:ows="http://www.opengis.net/ows/2.0" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:gml="http://www.opengis.net/gml/3.2" xmlns:gmlcov="http://www.opengis.net/gmlcov/1.0" xmlns:swe="http://www.opengis.net/swe/2.0" xmlns:crs="http://www.opengis.net/wcs/crs/1.0" xmlns:int="http://www.opengis.net/wcs/interpolation/1.0" xsi:schemaLocation="http://www.opengis.net/wcs/2.0 http://schemas.opengis.net/wcs/2.0/wcsAll.xsd " version="2.0.0" updateSequence="2007-10-30T14:23:38Z">
<ows:ServiceIdentification>
<ows:Title>First Test Service</ows:Title>
<ows:Abstract>Test Abstract</ows:Abstract>
<ows:Keywords>
<ows:Keyword>keyword</ows:Keyword>
<ows:Keyword>list</ows:Keyword>
</ows:Keywords>
<ows:ServiceType codeSpace="OGC">OGC WCS</ows:ServiceType>
<ows:ServiceTypeVersion>2.0.1</ows:ServiceTypeVersion>
<ows:ServiceTypeVersion>1.1.1</ows:ServiceTypeVersion>
<ows:ServiceTypeVersion>1.0.0</ows:ServiceTypeVersion>
<ows:Profile>http://www.opengis.net/spec/WCS/2.0/conf/core</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/WCS_protocol-binding_get-kvp/1.0/conf/get-kvp</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/WCS_protocol-binding_post-xml/1.0/conf/post-xml</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/GMLCOV/1.0/conf/gml-coverage</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/GMLCOV/1.0/conf/multipart</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/GMLCOV/1.0/conf/special-format</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/GMLCOV_geotiff-coverages/1.0/conf/geotiff-coverage</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/WCS_service-extension_crs/1.0/conf/crs</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/WCS_service-extension_scaling/1.0/conf/scaling</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/WCS_service-extension_range-subsetting/1.0/conf/record-subsetting</ows:Profile>
<ows:Profile>http://www.opengis.net/spec/WCS_service-extension_interpolation/1.0/conf/interpolation</ows:Profile>
<ows:Fees>NONE</ows:Fees>
<ows:AccessConstraints>NONE</ows:AccessConstraints>
</ows:ServiceIdentification>
<ows:ServiceProvider>
<ows:ProviderName>OSGeo</ows:ProviderName>
<ows:ProviderSite xlink:type="simple" xlink:href="https://wms.wps.met.no/get_wcs/test/wcs"/>
<ows:ServiceContact>
<ows:IndividualName>Name FamilyName</ows:IndividualName>
<ows:PositionName>Software Developer</ows:PositionName>
<ows:ContactInfo>
<ows:Phone>
<ows:Voice>(000) 000-0000</ows:Voice>
<ows:Facsimile>(000) 000-00000000</ows:Facsimile>
</ows:Phone>
<ows:Address>
<ows:DeliveryPoint>000 Xoxox Rd</ows:DeliveryPoint>
<ows:City>City</ows:City>
<ows:AdministrativeArea>Province</ows:AdministrativeArea>
<ows:PostalCode>xxx xxx</ows:PostalCode>
<ows:Country>World</ows:Country>
<ows:ElectronicMailAddress>epi@epi.com</ows:ElectronicMailAddress>
</ows:Address>
<ows:OnlineResource xlink:type="simple" xlink:href="https://wms.wps.met.no/get_wcs/test/wcs"/>
<ows:HoursOfService>0800h - 1600h EST</ows:HoursOfService>
<ows:ContactInstructions>during hours of service</ows:ContactInstructions>
</ows:ContactInfo>
<ows:Role>staff</ows:Role>
</ows:ServiceContact>
</ows:ServiceProvider>
<ows:OperationsMetadata>
<ows:Operation name="GetCapabilities">
<ows:DCP>
<ows:HTTP>
<ows:Get xlink:type="simple" xlink:href="https://endpoint/get_wcs/test/wcs?"/>
<ows:Post xlink:type="simple" xlink:href="https://endpoint/get_wcs/test/wcs?">
<ows:Constraint name="PostEncoding">
<ows:AllowedValues>
<ows:Value>XML</ows:Value>
</ows:AllowedValues>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
</ows:Operation>
<ows:Operation name="DescribeCoverage">
<ows:DCP>
<ows:HTTP>
<ows:Get xlink:type="simple" xlink:href="https://wms.wps.met.no/get_wcs/test/wcs?"/>
<ows:Post xlink:type="simple" xlink:href="https://wms.wps.met.no/get_wcs/test/wcs?">
<ows:Constraint name="PostEncoding">
<ows:AllowedValues>
<ows:Value>XML</ows:Value>
</ows:AllowedValues>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
</ows:Operation>
<ows:Operation name="GetCoverage">
<ows:DCP>
<ows:HTTP>
<ows:Get xlink:type="simple" xlink:href="https://wms.wps.met.no/get_wcs/test/wcs?"/>
<ows:Post xlink:type="simple" xlink:href="https://wms.wps.met.no/get_wcs/test/wcs?">
<ows:Constraint name="PostEncoding">
<ows:AllowedValues>
<ows:Value>XML</ows:Value>
</ows:AllowedValues>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
</ows:Operation>
</ows:OperationsMetadata>
<wcs:ServiceMetadata>
<wcs:formatSupported>application/x-netCDF</wcs:formatSupported>
<wcs:Extension>
<int:InterpolationMetadata>
<int:InterpolationSupported>NEAREST</int:InterpolationSupported>
<int:InterpolationSupported>AVERAGE</int:InterpolationSupported>
<int:InterpolationSupported>BILINEAR</int:InterpolationSupported>
</int:InterpolationMetadata>
<crs:CrsMetadata>
<crs:crsSupported>http://www.opengis.net/def/crs/EPSG/0/4326</crs:crsSupported>
</crs:CrsMetadata>
</wcs:Extension>
</wcs:ServiceMetadata>
<wcs:Contents>
<wcs:CoverageSummary>
<wcs:CoverageId>precipitation</wcs:CoverageId>
<wcs:CoverageSubtype>RectifiedGridCoverage</wcs:CoverageSubtype>
<ows:Metadata xlink:type="simple" xlink:href="http://www.gdal.org/metadata_test_link.html"/>
</wcs:CoverageSummary>
</wcs:Contents>
</wcs:Capabilities>
</code></pre>
<p>My target is to load such layer (precipitation) into QGIS and be able to visualize/query the underling data. The <a href="https://github.com/MapServer/MapServer/blob/main/msautotest/wxs/data/trmm-2x2.nc" rel="nofollow noreferrer">test dataset</a> was also borrowed from the MapServer test suite.</p>
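<p>For what it's worth, a sketch of forwarding the client's actual query parameters to MapServer instead of hard-coding a single request (assuming the intent is that QGIS drives the GetCapabilities / DescribeCoverage / GetCoverage sequence itself; only <code>setParameter</code>, as used above, and FastAPI's <code>request.query_params</code> are relied on, and the same <code>router</code> as above is assumed):</p>
<pre class="lang-py prettyprint-override"><code>import mapscript
from fastapi import Request, Response

@router.get('/get_wcs/{data_id}/wcs', response_class=Response)
async def get_wcs(data_id: str, full_request: Request):
    map_obj = mapscript.mapObj('/app/data/nc.map')
    ows_req = mapscript.OWSRequest()
    # Forward whatever the client sent (SERVICE, REQUEST, VERSION, COVERAGEID, ...)
    for key, value in full_request.query_params.items():
        ows_req.setParameter(key, value)
    ows_req.type = mapscript.MS_GET_REQUEST
    mapscript.msIO_installStdoutToBuffer()
    map_obj.OWSDispatch(ows_req)
    content_type = mapscript.msIO_stripStdoutBufferContentType()
    body = mapscript.msIO_getStdoutBufferBytes()
    return Response(content=body, media_type=content_type)
</code></pre>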
|
<python><netcdf><mapserver><wcs>
|
2025-03-11 23:02:51
| 0
| 1,367
|
epifanio
|
79,502,218
| 1,747,834
|
How to cast to long depending on Python version?
|
<p>I'm dealing with numbers, that may be too large for <code>int</code> as understood by Python-2.x, so my code's been casting them to <code>long</code>.</p>
<p>In Python-3.x there is no such thing, and one is simply supposed to use <code>int</code>. However, the code needs to work with both Python-2 and 3 (for as long RHEL8 is a thing), so I tried the following trick:</p>
<pre class="lang-py prettyprint-override"><code> if sys.version_info[0] >= 3:
long = int
...
foo = (long)(bar)
</code></pre>
<p>This works in Python-3.x, but in Python-2.x it breaks with: <code>local variable 'long' referenced before assignment</code>. Why wouldn't it just continue to use the built-in type long -- as it did before I inserted the above two lines into the function?</p>
<p>Update: the actual loop parses the output of the Linux stat utility:</p>
<pre class="lang-py prettyprint-override"><code> if sys.version_info[0] >= 3:
long_t = int
output = output.decode(locale.getpreferredencoding())
else:
long_t = long
...
for section in [output[x:x + 5] for x in range(0, len(output), 5)]:
stats = [f.split() for f in section]
path = stats[0][1].strip('"')
result[path] = {}
for (field, type, x, y) in [
('id', str, 1, 1),
('namelen', int, 1, 3),
('type', str, 1, 5),
('blocksize', int, 2, 2),
('fblocksize', int, 2, 6),
('blockstotal', long_t, 3, 2),
('blocksfree', long_t, 3, 4),
('blocksavailable', long_t, 3, 6),
('inodestotal', long_t, 4, 2),
('inodesfree', long_t, 4, 4)
]:
result[path][field] = (type)(stats[x][y])
</code></pre>
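<p>For reference on the scoping point: assigning to <code>long</code> anywhere inside a function makes it a local name for the whole function body in Python 2, so the earlier read fails even when the assignment branch is never executed. A sketch of the common module-level try/except idiom that sidesteps this:</p>
<pre class="lang-py prettyprint-override"><code># Module level, runs once on import; works on both Python 2 and 3.
try:
    long_t = long          # Python 2: the built-in long exists
except NameError:
    long_t = int           # Python 3: plain int is unbounded

# ... later, anywhere in the code ...
foo = long_t("123456789012345678901234567890")
</code></pre>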
|
<python><python-2.x>
|
2025-03-11 21:51:12
| 1
| 4,246
|
Mikhail T.
|
79,502,038
| 6,467,736
|
Error trying to install Python 3.13 alongside existing Python 3.9 installation - Windows 10
|
<p>Windows 10 system. I have Python 3.9 installed in a user (non-admin) account. I'm trying to install Python 3.13 alongside 3.9. During install, I select 'Add python.exe to PATH' and select 'Customize installation', where all options are enabled on first screen except for 'for all users (requires admin privileges)'. On second screen, 'associate files with Python', 'create shortcuts', and 'add Python to environment variables' are enabled.</p>
<p>Installation runs, then at the end, it throws error '0x80070659 - This installation is forbidden by system policy.'</p>
<p>The logfile shows several registry errors. When looking in regedit, none of these registry keys actually exists.</p>
<p>When I look inside the logfile, it appears this is the issue:</p>
<ul>
<li>Error 0x80070659: Failed to install MSI package.</li>
<li>Error 0x80070659: Failed to configure per-user MSI package.</li>
<li>Applied execute package: launcher_JustForMe, result: 0x8007065</li>
</ul>
<p>I am able to install it when logged in as admin, but that's not where I want it, and also, Python 3.9 was installed to user account, so I don't understand why I cannot install 3.13 to user account as well.</p>
<p>I tried the solution suggested here, but no change: <a href="https://answers.microsoft.com/en-us/windows/forum/all/0x80070659-this-installation-is-forbidden-by/56bddf92-1e53-4513-9adc-254be1f061ef" rel="nofollow noreferrer">https://answers.microsoft.com/en-us/windows/forum/all/0x80070659-this-installation-is-forbidden-by/56bddf92-1e53-4513-9adc-254be1f061ef</a></p>
<p>Update:
I was able to install Python 3.12. The difference seems to be that the 3.12 installer showed the Python launcher as already installed (as mentioned, I previously installed Python 3.9 in this user account), while the 3.13 installer did not.</p>
<p>I'd prefer to have 3.13, so the question is still relevant if anyone knows what the issue is, thanks.</p>
|
<python><python-3.x><registry>
|
2025-03-11 20:25:20
| 1
| 427
|
jub
|
79,502,011
| 940,490
|
Type coercion stops working after an invalid input in `pandera`
|
<p>I am new to <code>pandera</code> and really loving it. I encountered a peculiar behavior in one example <a href="https://pandera.readthedocs.io/en/stable/dataframe_models.html#basic-usage" rel="nofollow noreferrer">from the package's documentation</a>. I am using Python 3.9 and <code>pandera==0.22.1</code>. Copying the snippet for convenience:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pandera as pa
from pandera.typing import Index, DataFrame, Series
class InputSchema(pa.DataFrameModel):
year: Series[int] = pa.Field(gt=2000, coerce=True)
month: Series[int] = pa.Field(ge=1, le=12, coerce=True)
day: Series[int] = pa.Field(ge=0, le=365, coerce=True)
class OutputSchema(InputSchema):
revenue: Series[float]
@pa.check_types
def transform(df: DataFrame[InputSchema]) -> DataFrame[OutputSchema]:
return df.assign(revenue=100.0)
df = pd.DataFrame({
"year": ["2001", "2002", "2003"],
"month": ["3", "6", "12"],
"day": ["200", "156", "365"],
})
transform(df)
invalid_df = pd.DataFrame({
"year": ["2001", "2002", "1999"],
"month": ["3", "6", "12"],
"day": ["200", "156", "365"],
})
try:
transform(invalid_df)
except pa.errors.SchemaError as exc:
print(exc)
</code></pre>
<p>The snippet works as explained in the documentation. However, when I call <code>transform(df)</code> again, I get the following error:</p>
<pre><code>SchemaError: error in check_types decorator of function 'transform': expected series 'year' to have type int64, got object
</code></pre>
<p>This is surprising because this function call worked without issues several lines above, and all columns were cast to integers. It would be sufficient for me to understand whether this behavior is expected. As I am a new user of this package, maybe I am doing something wrong?</p>
<p>P.S. I am aware that it is possible to make this repeated call work by overriding <code>Config</code>, but I am just wondering if this behavior is intentional.</p>
|
<python><pandera>
|
2025-03-11 20:13:52
| 0
| 1,615
|
J.K.
|
79,501,940
| 8,188,120
|
PayFast signature (python): generated signature does not match submitted signature
|
<p>I am trying to submit a PayFast payment using the API tools, in Python rather than PHP (the docs' recommended approach).</p>
<p>I am generating a signature like so:</p>
<pre class="lang-py prettyprint-override"><code>import hashlib
import urllib.parse
def generate_signature(data: dict, passphrase: str) -> str:
"""
Generates an MD5 signature for PayFast API request.
- Assumes data is already in the correct order.
- URL encodes values with uppercase encoding.
- Uses '+' for spaces.
- Encodes passphrase before appending.
"""
encoded_params = []
for key, value in data.items():
if value: # Ignore empty values
encoded_value = urllib.parse.quote_plus(str(value)).upper() # URL encode & uppercase
encoded_params.append(f"{key.upper()}={encoded_value}")
# URL-encode the passphrase as well
encoded_passphrase = urllib.parse.quote_plus(passphrase).upper()
# Concatenate parameters
param_string = "&".join(encoded_params)
# Append encoded passphrase
param_string += f"&PASSPHRASE={encoded_passphrase}"
print(f"String before hashing: {param_string}")
# Generate MD5 hash
signature = hashlib.md5(param_string.encode("utf-8")).hexdigest()
print(f"Generated Signature: {signature}")
return signature
</code></pre>
<p>Following the <a href="https://developers.payfast.co.za/docs#step_2_signature" rel="nofollow noreferrer">docs for the signature generation</a>, it looks like the parameter keys and values are meant to be upper case, and in order as specified by the <a href="https://developers.payfast.co.za/docs#step_1_form_fields" rel="nofollow noreferrer">optional form fields</a>.</p>
<p>As far as I can tell, this is what I am doing, but I am still getting the error:</p>
<pre class="lang-none prettyprint-override"><code>400 signature: Generated signature does not match submitted signature.
</code></pre>
<p>Here is what my generated signature looks like:</p>
<p>String before hashing:</p>
<pre class="lang-none prettyprint-override"><code>MERCHANT_ID=11111111&MERCHANT_KEY=UAAAAAAAAAAHA&RETURN_URL=HTTPS%3A%2F%2AAAAAAAA.EXECUTE-API.AF-SOUTH-1.AMAZONAWS.COM%2FDEV%2FPAYFAST-REDIRECT&CANCEL_URL=HTTPS%3A%2F%2AAAAAAAAA6.EXECUTE-API.AF-SOUTH-1.AMAZONAWS.COM%2FDEV%2FPAYFAST-REDIRECT%3FSTATUS%3DFAILED&NOTIFY_URL=HTTPS%3A%2F%2AAAAAAAAA6.EXECUTE-API.AF-SOUTH-1.AMAZONAWS.COM%2FDEV%2FPAYMENTS%2FPAYFAST-NOTIFY&EMAIL_ADDRESS=JOHN.DOE%40GMAIL.COM&AMOUNT=300&ITEM_NAME=COOL&THING&ITEM_DESCRIPTION=BLAH&BLAH&BLAH&passphrase=PHRASEFROMSANDBOX
</code></pre>
<p>...which I've redacted of personal information.</p>
<p>I've tried different orders of the parameters, lowercase for the keys, and lowercase for both keys and values. I cannot think of any other reason why this format would be incorrect.</p>
<p>Any help would be really appreciated!</p>
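<p>One detail that may be worth double-checking (stated as my reading of the docs, not as a confirmed fix): the "upper case" requirement applies to the percent-encoding itself (e.g. <code>%3A</code> rather than <code>%3a</code>), not to the keys and values, and the commonly circulated Python sample keeps both as-is. A sketch of that variant:</p>
<pre class="lang-py prettyprint-override"><code>import hashlib
import urllib.parse

def generate_signature(data: dict, passphrase: str = "") -> str:
    """Sketch of the widely used PayFast signature recipe: keys and values are
    not uppercased; quote_plus already emits uppercase hex escapes."""
    payload = "&".join(
        f"{key}={urllib.parse.quote_plus(str(value))}"
        for key, value in data.items() if value
    )
    if passphrase:
        payload += f"&passphrase={urllib.parse.quote_plus(passphrase)}"
    return hashlib.md5(payload.encode("utf-8")).hexdigest()
</code></pre>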
|
<python><hash><payment-gateway><urlencode><payfast>
|
2025-03-11 19:39:53
| 1
| 925
|
user8188120
|
79,501,764
| 676,001
|
Why is Poetry complaining that `name` isn't set in pyproject.toml?
|
<p>I set up a new Python Poetry project with <code>poetry init</code>. I'm not creating a package, so I added this to my <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
package-mode = false
</code></pre>
<p>The <a href="https://python-poetry.org/docs/basic-usage/#operating-modes" rel="nofollow noreferrer">Operating Modes</a> section of the Poetry documentation says that when operating in non-package mode, the <code>name</code> and <code>version</code> fields in <code>pyproject.toml</code> are optional, so I removed them.</p>
<p>Now when I run a Poetry command, such as <code>poetry env list</code>, I get this error:</p>
<pre class="lang-bash prettyprint-override"><code>The Poetry configuration is invalid:
- project must contain ['name'] properties
</code></pre>
<p>Why is this happening?</p>
|
<python><python-poetry>
|
2025-03-11 18:22:39
| 1
| 11,028
|
bertday
|
79,501,731
| 20,591,261
|
Transforming polars Dataframe to Nested JSON Format
|
<p>I have a dataframe that contains a product name, question, and answers. I would like to process the dataframe and transform it into a JSON format. Each product should have nested sections for questions and answers.</p>
<p>My dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"Product": ["X", "X", "Y", "Y"],
"Question": ["Q1", "Q2", "Q3", "Q4"],
"Anwers": ["A1", "A2", "A3", "A4"],
})
</code></pre>
<p>Desired Output:</p>
<pre class="lang-json prettyprint-override"><code>{
"faqByCommunity": {
"id": 5,
"communityName": "name",
"faqList": [
{
"id": 1,
"product": "X",
"faqs": [
{
"id": 1,
"question": "Q1",
"answer": "A1"
},
{
"id": 2,
"question": "Q2",
"answer": "A2"
}
]
},
{
"id": 2,
"product": "Y",
"faqs": [
{
"id": 1,
"question": "Q3",
"answer": "A3"
},
{
"id": 2,
"question": "Q4",
"answer": "A4"
}
]
}
]
}
}
</code></pre>
<p>Since the first part is static, I think I could append it to the file before and after Polars writes to it (like in my other <a href="https://stackoverflow.com/questions/79486991/how-to-add-a-new-level-to-json-output-using-polars-in-python">question</a>). However, I'm not sure how I can build the nested part.</p>
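<p>For illustration, a sketch that builds the nested part in plain Python after a <code>partition_by</code>, then wraps it in the static outer structure (the outer <code>id</code>/<code>communityName</code> values are just the ones from the example above):</p>
<pre class="lang-py prettyprint-override"><code>import json
import polars as pl

df = pl.DataFrame({
    "Product": ["X", "X", "Y", "Y"],
    "Question": ["Q1", "Q2", "Q3", "Q4"],
    "Anwers": ["A1", "A2", "A3", "A4"],
})

faq_list = []
for i, group in enumerate(df.partition_by("Product", maintain_order=True), start=1):
    faqs = [
        {"id": j, "question": q, "answer": a}
        for j, (q, a) in enumerate(zip(group["Question"], group["Anwers"]), start=1)
    ]
    faq_list.append({"id": i, "product": group["Product"][0], "faqs": faqs})

payload = {"faqByCommunity": {"id": 5, "communityName": "name", "faqList": faq_list}}
print(json.dumps(payload, indent=2))
</code></pre>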
|
<python><python-polars>
|
2025-03-11 18:04:41
| 2
| 1,195
|
Simon
|
79,501,664
| 1,185,790
|
Palantir Foundry REST API endpoint for dataset queries?
|
<p>I would like to be able to query a dataset via Palantir Foundry's REST API, but keep getting a <code>404</code> response code. The <a href="https://www.palantir.com/docs/foundry/api/v2/general/overview/introduction/" rel="nofollow noreferrer">documentation</a> does not appear to have any information about how to do this, but it seems like a fairly basic need. I've also encountered a few endpoints in the past that aren't a part of their official documentation, so I'm hoping that it's available and I'm just not calling it correctly.</p>
<p>Any help would be greatly appreciated.</p>
<p>Code is as follows:</p>
<pre><code>import requests
foundry_token = "api_token"
foundry_base_url = "https://example.palantirfoundry.com"
dataset_rid = "ri.foundry.main.dataset.xxxx"
def query_foundry_dataset(base_url, token, dataset_rid):
headers = {
"Authorization": f'Bearer {token}',
"Content-Type": 'application/json'
}
query_payload = {
"sql": "SELECT Column1, Column2 FROM dataset LIMIT 1000"
}
url = f"{base_url}/api/v2/datasets/{dataset_rid}/transactions/query"
response = requests.post(url, headers=headers, json=query_payload)
if response.status_code == 200:
return response.json()
else:
print(f"Query Failed: {response.status_code}")
print(response.text)
return None
result = query_foundry_dataset(
base_url=foundry_base_url,
token=foundry_token,
dataset_rid=dataset_rid
)
print(result)
</code></pre>
|
<python><python-requests><palantir-foundry><palantir-foundry-api>
|
2025-03-11 17:41:38
| 1
| 723
|
baobobs
|
79,501,635
| 23,260,297
|
pyinstaller FileNotFoundError when compiling exe
|
<p>I am trying to compile an exe, but it keeps failing with a FileNotFoundError:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\Scripts\pyinstaller.exe\__main__.py", line 7, in <module>
sys.exit(_console_script_run())
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\__main__.py", line 228, in _console_script_run
run()
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\__main__.py", line 212, in run
run_build(pyi_config, spec_file, **vars(args))
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\__main__.py", line 69, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\building\build_main.py", line 1139, in main
build(specfile, distpath, workpath, clean_build)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\building\build_main.py", line 1079, in build
exec(code, spec_namespace)
File "C:\Users\source\repos\PowerProjects\Solutions\HQ Daily Hedge Report\DHR_SwapTradingExposure\DHR_SwapTradingExposure.spec", line 35, in <module>
coll = COLLECT(
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\building\api.py", line 1092, in __init__
self.__postinit__()
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\building\datastruct.py", line 184, in __postinit__
self.assemble()
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\PyInstaller\building\api.py", line 1164, in assemble
shutil.copyfile(src_name, dest_path)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\shutil.py", line 266, in copyfile
with open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\source\\repos\\PowerProjects\\Solutions\\HQ Daily Hedge Report\\DHR_SwapTradingExposure\\dist\\DHR_SwapTradingExposure\\_internal\\jedi\\third_party\\typeshed\\third_party\\2and3\\requests\\packages\\urllib3\\packages\\ssl_match_hostname\\_implementation.pyi'
</code></pre>
<p>In my directory, I have two folders</p>
<pre><code>'C:\\Users\\source\\repos\\PowerProjects\\Solutions\\HQ Daily Hedge Report\\DHR_SwapTradingExposure
</code></pre>
<p>And</p>
<pre><code>'C:\\Users\\source\\repos\\PowerProjects\\Solutions\\HQ Daily Hedge Report\\DHR_ConsolidateMarks
</code></pre>
<p>If I run my pyinstaller command with the ConsolidateMarks.py file it is successful. When I run the same command with my SwapTradingExposure.py I get the file not found error. I am completely lost as to why this is happening.</p>
<p>My pyinstaller command:</p>
<pre><code> pyinstaller --onedir --add-data "config.json;." DHR_SwapTradingExposure.py
</code></pre>
<p>and</p>
<pre><code> pyinstaller --onedir --add-data "config1.json;." DHR_ConsolidateMarks.py
</code></pre>
<p>Any idea how I could resolve this issue?</p>
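<p>One avenue worth trying (an assumption based on the failing path: the destination under <code>jedi\third_party\typeshed\...</code> is close to the classic 260-character Windows path limit, and <code>jedi</code> appears to be collected as a hidden dependency even though the script probably doesn't need it): exclude that module from the build, e.g.:</p>
<pre><code>pyinstaller --onedir --add-data "config.json;." --exclude-module jedi DHR_SwapTradingExposure.py
</code></pre>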
|
<python><pyinstaller>
|
2025-03-11 17:32:52
| 0
| 2,185
|
iBeMeltin
|
79,501,620
| 2,015,882
|
Unable to load module when wheel installed in jupyter
|
<p>I have developed a Python module with C++ and nanobind. I am using the stable ABI and compiling the wheel for Python 3.12.</p>
<p>I am building a wheel using cibuildwheel <code>cibuildwheel --output-dir wheelhouse</code></p>
<p>I have created a virtual environment and installed jupyter and ipykernel</p>
<pre><code>python -m venv .venv
source .venv/bin/activate
pip install jupyter ipykernel
</code></pre>
<p>Now I have tried to load my module "pymandos", and of course I get an error because I haven't installed the wheel yet:</p>
<pre><code>python
Python 3.13.2 (main, Feb 5 2025, 08:05:21) [GCC 14.2.1 20250128] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymandos
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
import pymandos
ModuleNotFoundError: No module named 'pymandos'
>>>
</code></pre>
<p>I install the module by doing:</p>
<pre><code>cd wheelhouse
pip install pymandos-0.0.1-cp312-abi3-manylinux_2_28_x86_64.whl
</code></pre>
<p>and now I can successfully import the module in Python:</p>
<pre><code>python
Python 3.13.2 (main, Feb 5 2025, 08:05:21) [GCC 14.2.1 20250128] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymandos
>>> pymandos.__file__
'/home/jjcasmar/projects/mandos/.venv/lib/python3.13/site-packages/pymandos/__init__.py'
>>>
</code></pre>
<p>However, if I start a jupyter notebook, I can't load the module, as it crashes the server</p>
<p><a href="https://i.sstatic.net/wiSnDjcY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiSnDjcY.png" alt="enter image description here" /></a></p>
<p>I have no idea what is happening or how I can fix this.</p>
|
<python><jupyter><ipython><nanobind>
|
2025-03-11 17:26:48
| 0
| 1,757
|
jjcasmar
|
79,501,381
| 1,132,423
|
No module named 'matplotlib' in visual studio 2022 python project
|
<p>I am using Windows. I executed the below commands on the command prompt:</p>
<pre><code>python -m pip install matplotlib.
pip3 install matplotlib
</code></pre>
<p>But VS 2022 is still not recognizing matplotlib.</p>
<p>Below is the code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
xpoints = np.array([0, 6])
ypoints = np.array([0, 250])
plt.plot(xpoints, ypoints)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Qiiaj3nZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qiiaj3nZ.png" alt="enter image description here" /></a></p>
<p>When I verify the installation, it says it's already installed:</p>
<pre><code>C:\Users\Dell>python --version
Python 3.13.1
</code></pre>
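<p>As a quick check (a sketch, not VS-specific advice): print which interpreter the Visual Studio project actually runs, and install matplotlib into that exact interpreter, since the <code>pip</code> on PATH may belong to a different Python than the one the project environment uses:</p>
<pre class="lang-py prettyprint-override"><code>import sys
print(sys.executable)   # the interpreter VS 2022 is using for this project

# Then, in a command prompt, install into that exact interpreter, e.g.:
#   "&lt;path printed above&gt;" -m pip install matplotlib
</code></pre>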
|
<python><matplotlib><visual-studio-2022>
|
2025-03-11 15:57:09
| 0
| 1,806
|
Aditya Bokade
|
79,501,302
| 10,658,339
|
How to plot a processing message in power Bi visual
|
<p>I'm developing a visual using Python within Power BI, but due to the large dataset size, processing takes some time.</p>
<p>How can I display a message or a loading symbol to inform users that the graph is temporarily blank because the Python script is still running? I attempted using the print statement within the figure environment, but it wasn't effective:</p>
<pre><code>import matplotlib.dates as mdates
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose
# Assign the provided dataset to a dataframe
df = dataset
# Convert the 'Date' column to datetime, using a general format to handle inconsistencies
df['feature1'] = pd.to_datetime(df['feature2'], errors='coerce')
# Check for any parsing errors
if df['feature1'].isna().any():
print("Warning: Some features could not be parsed and are set to NaT.")
# Filter out rows where 'feature1' is NaT, as they won't plot correctly
df = df.dropna(subset=['feature1'])
# Create the plot
fig, ax1 = plt.subplots(figsize=(10.0, 7.0))
print("Processing Data...")
ax1.scatter(df['feature1'], df['feature3'], linewidth=1.5, color='blue', label='Label 1')
# Set the x-axis limits and format
xmin = df['feature1'].min()
xmax = df['feature1'].max()
ax1.set_xlim(xmin, xmax)
ax1.set_ylim(275, 230)
ax1.invert_yaxis()
ax1.tick_params(axis='x', rotation=45)
# Extract a single value from each array
feature4 = df['feature5'].unique()[0]
feature6 = df['feature7'].unique()[0]
feature8 = df['feature9'].unique()[0]
name = df['feature10'].unique()[0]
# Plot the horizontal lines
ax1.plot([xmin, xmax], [feature4, feature4], color='black', linestyle='--', linewidth=1, alpha=1, label='Label 2')
ax1.plot([xmin, xmax], [feature6, feature6], color='darkgray', linestyle='--', linewidth=1, alpha=1, label='Label 3')
ax1.plot([xmin, xmax], [feature8, feature8], color='orange', linestyle='--', linewidth=1, alpha=1, label='Label 4')
# Set the x and y-axis labels and title
ax1.set_xlabel('Feature X')
ax1.set_ylabel('Feature Y')
ax1.set_title('Plot Title {}'.format(name))
plt.tight_layout()
ax1.legend()
plt.show()
</code></pre>
<p>Any guidance on how to implement a loading message until the data processing completes and the final graph is generated would be greatly appreciated. Thank you!</p>
|
<python><matplotlib><powerbi><powerbi-custom-visuals>
|
2025-03-11 15:35:27
| 0
| 527
|
JCV
|
79,501,263
| 1,194,864
|
Calculate the gradient with respect to attention but also the FFN layers for a pre-trained LLMs
|
<p>I would like to return the gradient with respect to specific attention layers and the FFN layers in the Transformer architecture of a pre-trained LLM from the Hugging Face model hub. Is that even possible?</p>
<p>I am working with the code of this <a href="https://github.com/kristosh/xAI/blob/main/attn_vizualizations.py" rel="nofollow noreferrer">repo</a> which is the following:</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-4k-instruct")
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-medium-4k-instruct", # note: check spelling if you get error
device_map="auto",
torch_dtype=torch.float16, # or torch.float32 if preferred
trust_remote_code=True
)
# Create a pipeline
generator = pipeline(
"text-generation",
model = model,
tokenizer = tokenizer,
return_full_text= False,
max_new_tokens = 100,
do_sample = False
)
# Prepare a prompt
prompt = "Whats is the co-capital of Greece according to the country's public opinion?"
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to("cuda:0") # send inputs to cuda
# Run the model with attention outputs enabled
# Make sure to pass output_attentions=True
outputs = model(input_ids=inputs.input_ids, output_attentions=True)
# outputs.attentions is a tuple with one element per layer
# Each element is a tensor of shape (batch_size, num_heads, seq_len, seq_len)
attentions = outputs.attentions
# Generate output
output = generator(prompt)
print(output[0]["generated_text"])
</code></pre>
<p>How can I return the gradient with respect to the input or to a specific attention layer (in a similar fashion to <code>grad-CAM</code> in <code>CNN</code>s)? Is it possible to do that in <code>transformers</code>?</p>
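<p>For what it's worth, a minimal sketch of one way to get the gradient of a scalar score with respect to an attention map, continuing from the code above (the layer index and the choice of score are arbitrary here; for FFN activations one would additionally need hooks on the model-specific MLP submodules, whose attribute path I'm not asserting):</p>
<pre class="lang-py prettyprint-override"><code># Sketch: gradient of a chosen next-token logit w.r.t. one attention map.
outputs = model(input_ids=inputs.input_ids, output_attentions=True)

attn = outputs.attentions[10]        # pick any layer; shape (batch, heads, seq, seq)
attn.retain_grad()                   # keep gradients for this non-leaf tensor

logits = outputs.logits[0, -1]       # next-token logits
score = logits[logits.argmax()]      # scalar to differentiate, grad-CAM style
score.backward()

print(attn.grad.shape)               # same shape as the attention map
</code></pre>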
|
<python><pytorch><nlp><large-language-model>
|
2025-03-11 15:21:57
| 0
| 5,452
|
Jose Ramon
|
79,501,206
| 3,873,799
|
Pyside6-designer: export Python code of custom (promoted) widget
|
<p>I've created the design for a widget in PySide6 GUI.<br />
This <code>MyCustomWidget</code> is defined in my main UI, which contains many other things.<br />
<em>Please consider that this custom widget can be quite complex and have many sub-widgets</em>, and I find it much more convenient to design it in the UI.</p>
<p>I would like to isolate and export the code of this widget alone, so I can import/programmatically generate copies of it from the main UI code.</p>
<p>However, normally the UI export only produces the entire code for my main UI (with the custom widget code embedded in it). I tried promoting the widget, expecting this would also isolate the code for the promoted widget in a class or separate module.
If I promote the widget and then export the code for my main UI, I get something like:</p>
<pre class="lang-py prettyprint-override"><code>################################################################################
## Form generated from reading UI file 'main_window_uiERYisl.ui'
##
## Created by: Qt User Interface Compiler version 6.8.0
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide6.QtCore import (QCoreApplication, QDate, QDateTime, QLocale,
QMetaObject, QObject, QPoint, QRect,
QSize, QTime, QUrl, Qt)
# More imports [...]
from myCustomWidget import MyCustomWidget
</code></pre>
<p>Where <code>myCustomWidget</code> is the module expected to contain the widget once it has been promoted. However, this module is not generated as part of the main UI code conversion process -- only the <code>main_window.ui</code> content is exported, in a single file.</p>
<p>Is there a way to isolate and export the code of a custom widget, promoted or not?</p>
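<p>For context, a sketch of the workflow that promotion assumes (the file and class names below are only illustrative, taken from the generated import): the promoted class is something you provide yourself, typically by designing the widget in its own <code>.ui</code> file, compiling it with <code>pyside6-uic</code>, and wrapping it:</p>
<pre class="lang-py prettyprint-override"><code># myCustomWidget.py -- hand-written wrapper; Ui_MyCustomWidget comes from
# running `pyside6-uic my_custom_widget.ui -o ui_my_custom_widget.py`
# (the .ui file name here is an assumption).
from PySide6.QtWidgets import QWidget
from ui_my_custom_widget import Ui_MyCustomWidget

class MyCustomWidget(QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.ui = Ui_MyCustomWidget()
        self.ui.setupUi(self)
</code></pre>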
|
<python><qt><widget><pyside6><pyside6-gui>
|
2025-03-11 14:59:36
| 2
| 3,237
|
alelom
|
79,501,178
| 1,194,864
|
Store images instead of showing in a server
|
<p>I am running the code found on this <a href="https://captum.ai/tutorials/Llama2_LLM_Attribution" rel="nofollow noreferrer">site</a> on my server, and I would like to store the images instead of showing them, since I am connected to the server remotely via <code>SSH</code>.</p>
<p>The code is for instance this one:</p>
<pre><code>skip_tokens = [1] # skip the special token for the start of the text <s>
inp = TextTokenInput(
eval_prompt,
tokenizer,
skip_tokens=skip_tokens,
)
target = "playing guitar, hiking, and spending time with his family."
attr_res = llm_attr.attribute(inp, target=target, skip_tokens=skip_tokens)
attr_res.plot_token_attr(show=True)
</code></pre>
<p>How can I save the plots to files locally instead of showing them?</p>
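<p>As a sketch (assuming, as in the tutorial, that the plot is drawn with matplotlib): suppress display and save the current figure to disk instead; the file name is arbitrary:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
matplotlib.use("Agg")                  # non-interactive backend for headless servers
import matplotlib.pyplot as plt

attr_res.plot_token_attr(show=False)   # draw without opening a window
plt.savefig("token_attr.png", dpi=200, bbox_inches="tight")
plt.close("all")
</code></pre>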
|
<python><nlp><large-language-model>
|
2025-03-11 14:50:31
| 1
| 5,452
|
Jose Ramon
|
79,501,083
| 8,296,096
|
What is the programming construct in Python of passing some arguments in front of a constructor call? How to interpret parameters passed to an object?
|
<p>In the code line below, if I am not mistaken, we are creating an object of the class <strong>layers.Dense</strong> from <strong>tensorflow.keras</strong> and then passing some arguments to that object in parentheses:
<code> x = layers.Dense(128 * 16 * 16)(inputs)</code>.
What is this programming construct in Python, and what exactly are we doing here?</p>
<p>Note: <strong>layers.Dense</strong> is a class here.</p>
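<p>This is ordinary object construction followed by a call of the resulting object: <code>layers.Dense(128 * 16 * 16)</code> builds a layer instance, and the trailing <code>(inputs)</code> then invokes it, which works because Keras layers define <code>__call__</code>. A stripped-down sketch of the same construct in plain Python (the class here is a toy, not the Keras implementation):</p>
<pre class="lang-py prettyprint-override"><code>class Dense:
    def __init__(self, units):
        self.units = units            # configuration stored at construction time

    def __call__(self, inputs):
        # runs when an *instance* is used like a function
        return f"applying a {self.units}-unit layer to {inputs!r}"

layer = Dense(128 * 16 * 16)          # step 1: create the object
x = layer("inputs")                   # step 2: call the object
x = Dense(128 * 16 * 16)("inputs")    # ... equivalent one-liner, as in Keras
</code></pre>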
|
<python><oop><keras><language-construct>
|
2025-03-11 14:22:29
| 1
| 434
|
Sushodhan V
|
79,500,975
| 633,439
|
Finding coprime numbers
|
<p>I have the following snippet, which finds numbers that are not coprime to 20. But I want to achieve the opposite, i.e., printing the numbers coprime to 20. I thought it could be achieved with something like <code>(((i%j)!==0) & ((z%j)!==0))</code>. I am new to Python and hence not sure why this is not working. Can someone edit the code so that it prints the numbers coprime to 20 in the range <code>3 to 31</code>?</p>
<pre><code>#Finding coprime of a number (e.g.20)
k=0
z=20
for i in range(3,31):
for j in range(2,30):
if(((i%j)==0) & ((z%j)==0)):
k=k+1
if(k==1):
print(i) #the printed number is not co prime to 20
k=0
</code></pre>
<p>Following is the output of the code snippet:</p>
<pre><code>4
5
6
8
10
12
14
15
16
18
20
22
24
25
26
28
30
</code></pre>
<p>My desired output is:</p>
<pre><code>3
7
9
11
13
17
19
21
23
27
29
</code></pre>
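<p>For comparison, a minimal sketch of the same task using <code>math.gcd</code> (two numbers are coprime exactly when their greatest common divisor is 1):</p>
<pre class="lang-py prettyprint-override"><code>import math

z = 20
for i in range(3, 31):
    if math.gcd(i, z) == 1:
        print(i)   # prints 3, 7, 9, 11, 13, 17, 19, 21, 23, 27, 29
</code></pre>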
|
<python>
|
2025-03-11 13:48:54
| 3
| 1,107
|
kzs
|
79,500,961
| 6,734,243
|
How to display a dash leaflet colorbar horizontally?
|
<p>I'm building an application in Dash that contains a map. The map is built using the <a href="https://github.com/emilhe/dash-leaflet" rel="nofollow noreferrer">dash-leaflet</a> lib. I would like to have the colorbar displayed horizontally instead of vertically. Is that possible, and if yes, what should I add to my code?</p>
<pre class="lang-py prettyprint-override"><code>import dash_leaflet as dl
from dash import Dash
colorscale = ['red', 'yellow', 'green', 'blue', 'purple'] # rainbow
app = Dash()
app.layout = dl.Map([
dl.TileLayer(),
dl.Colorbar(
colorscale=colorscale,
width=20,
height=200,
min=0,
max=50,
position="bottomleft"
)
], center=[56, 10], zoom=6, style={'height': '100vh'})
if __name__ == "__main__":
app.run_server()
</code></pre>
<p><a href="https://i.sstatic.net/4aOiSpbL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aOiSpbL.png" alt="enter image description here" /></a></p>
|
<python><plotly-dash><dash-leaflet>
|
2025-03-11 13:43:27
| 1
| 2,670
|
Pierrick Rambaud
|
79,500,932
| 1,826,066
|
Custom Python StrEnum auto() function that returns class name and variable
|
<p>I want to define column names for my data frame in a single file, using <code>dataclass</code> and <code>enum</code>.</p>
<p>I have the following code, written with help from ChatGPT, that does not achieve what I want but might be a starting point:</p>
<pre class="lang-py prettyprint-override"><code>
from enum import EnumMeta, StrEnum, auto, unique
from dataclasses import dataclass
class ColumnEnumMeta(EnumMeta):
def _generate_next_value_(cls, name, start, count, last_values):
# This method is called during class creation for auto() members.
return f"{cls.__name__}____{name.upper()}"
@unique
class ColumnEnum(StrEnum, metaclass=ColumnEnumMeta):
def __repr__(self) -> str:
return self.value
@unique
class UNIT(ColumnEnum):
METER = auto()
SECOND = auto()
@unique
class SOMETHING_ELSE(ColumnEnum):
FOO= auto()
BAR = auto()
@dataclass(frozen=True)
class COLUMN:
UNIT = UNIT
SOMETHING_ELSE = SOMETHING_ELSE
# Example Usage:
print(COLUMN.UNIT.SECOND)
print(COLUMN.SOMETHING_ELSE.FOO)
</code></pre>
<p>My hope is that this would print <code>UNIT____SECOND</code> and <code>SOMETHING_ELSE____FOO</code>; that is, I would like the column names to include the class name, without having to define <code>_generate_next_value_</code> in every enum. Is there a way to define a class (like my <code>ColumnEnum</code>) that takes care of modifying the <code>auto()</code> behavior for the derived classes?</p>
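<p>One workaround sketch that avoids the metaclass subtleties entirely (the decorator name <code>class_prefixed</code> is mine, not a standard-library helper): rebuild each enum through the functional API so every value carries the class name:</p>
<pre class="lang-py prettyprint-override"><code>from enum import StrEnum, auto, unique

def class_prefixed(enum_cls):
    """Rebuild a StrEnum so every member's value is '&lt;ClassName&gt;____&lt;MEMBER&gt;'."""
    return unique(StrEnum(
        enum_cls.__name__,
        {member.name: f"{enum_cls.__name__}____{member.name}" for member in enum_cls},
    ))

@class_prefixed
class UNIT(StrEnum):
    METER = auto()
    SECOND = auto()

print(UNIT.SECOND)    # UNIT____SECOND  (a StrEnum's str() is its value)
</code></pre>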
|
<python><enums>
|
2025-03-11 13:32:04
| 1
| 1,351
|
Thomas
|
79,500,909
| 16,383,578
|
What is the fastest way to generate all n-bit gray codes using NumPy?
|
<p>My goal is to create images using gray codes, an example would be this:</p>
<p><a href="https://i.sstatic.net/ObiBx418.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ObiBx418.png" alt="enter image description here" /></a></p>
<p>It is all modulo 64 groups in gray codes in polar form.</p>
<p>Now of course I know of the simple mapping <code>n ^ (n >> 1)</code> from binary, but I have found more efficient ways to generate gray codes directly than using said mapping. But since binary codes are related, I will post code that generates binary codes as well.</p>
<p>I want a function that generates all n-bit gray codes in the form <code>np.zeros((1 << n, n), dtype=bool)</code>. I want it as efficient as possible, and it has to be implemented in <code>numpy</code> and only implemented in <code>numpy</code>, no other libraries are allowed.</p>
<p>Why do I disallow other libraries? Because I have installed <code>scipy</code>, <code>PIL</code>, <code>cv2</code>, <code>matplotlib</code>, <code>numba</code>... all of them require different versions of <code>numpy</code>, and updating one breaks the dependency of another. Each of them also provides a large number of methods, so it is a huge learning curve to know how to use them well. I am currently trying to familiarize myself with <code>numpy</code>, so I invented this challenge to make myself learn.</p>
<p>I have implemented a bunch of different methods; they all work correctly (I have rigorously tested them), but none of them strikes me as efficient. So far, I have found that <code>np.unpackbits</code> is the most efficient method to get the binary bits of a number, but it only works with <code>np.uint8</code>. That is easy to solve by using <code>.view(np.uint8)</code>, but the output is in mixed endianness, which is somewhat trickier to handle.
But even if I use <code>np.unpackbits</code>, converting from binary to gray code is less efficient than generating gray codes directly.</p>
<p>And according to my tests, <code>np.concatenate(arrs)</code> is more efficient than <code>np.vstack(arrs)</code>,
<code>np.concatenate(arrs, axis=-1)</code> beats <code>np.hstack(arrs)</code>, and <code>np.concatenate(arrs).reshape((w, h)).T</code> beats <code>np.dstack(arrs)</code>. And somehow initializing an array and then broadcasting to individual columns using a loop can be more efficient than using <code>np.concatenate</code>.</p>
<p>And using <code>numpy</code> broadcasting to get <code>a & b</code> column-wise in which <code>a</code> is a 1d array and <code>b</code> is a 1d array to get binary decomposition of <code>a</code> can be much less efficient than just looping through the columns and apply <code>&</code> column by column. In particular, <code>(a & b[:, None]).T.astype(bool)</code> is much more efficient than <code>(a[:, None] & b).astype(bool)</code>.</p>
<hr />
<h2><strong>Code</strong></h2>
<pre><code>import numpy as np
lo = 1
hi = 8
UINT_BITS = {}
for dtype in (np.uint8, np.uint16, np.uint32, np.uint64):
for i in range(lo, hi + 1):
UINT_BITS[i] = dtype
lo = hi + 1
hi <<= 1
def get_dtype(n: int) -> np.uint8 | np.uint16 | np.uint32 | np.uint64:
if dtype := UINT_BITS.get(n):
return dtype
raise ValueError(f"Argument {n} is not a valid bit width")
def validate(n: int) -> None:
if not (n and isinstance(n, int)):
raise ValueError(f"Argument {n} is not a valid bit width")
def binary_codes_0(n: int) -> np.ndarray:
validate(n)
count = 1 << n
rect = np.zeros((count, n), dtype=bool)
r = 1
for i in range(n - 1, -1, -1):
count >>= 1
rect[:, i] = np.tile(
np.concatenate([np.zeros(r, dtype=bool), np.ones(r, dtype=bool)]), count
)
r <<= 1
return rect
def binary_codes_1(n: int) -> np.ndarray:
validate(n)
r = total = 1 << n
return (
np.concatenate(
[
np.tile(
np.concatenate(
[np.zeros((r := r >> 1), dtype=bool), np.ones(r, dtype=bool)]
),
1 << i,
)
for i in range(n)
]
)
.reshape((n, total))
.T
)
def binary_codes_2(n: int) -> np.ndarray:
validate(n)
chunks = np.array([(0,), (1,)], dtype=bool)
l = 2
for _ in range(n - 1):
chunks = np.concatenate(
[
np.concatenate([np.zeros((l, 1), dtype=bool), chunks], axis=-1),
np.concatenate([np.ones((l, 1), dtype=bool), chunks], axis=-1),
]
)
l <<= 1
return chunks
def binary_codes_3(n: int) -> np.ndarray:
validate(n)
rect = np.zeros([2] * n + [n], dtype=bool)
for i, a in enumerate(np.ix_(*[(0, 1)] * n)):
rect[..., i] = a
return rect.reshape(-1, n)
def binary_codes_4(n: int) -> np.ndarray:
numbers = np.arange(1 << n, dtype=get_dtype(n))
return (
np.concatenate([(numbers & 1 << i).astype(bool) for i in range(n - 1, -1, -1)])
.reshape(n, 1 << n)
.T
)
def binary_codes_5(n: int) -> np.ndarray:
numbers = np.arange((count := 1 << n), dtype=get_dtype(n))
result = np.zeros((count, n), dtype=bool)
mask = count
for i in range(n):
result[:, i] = numbers & (mask := mask >> 1)
return result
def binary_codes_6(n: int) -> np.ndarray:
return np.unpackbits(
np.arange(1 << n, dtype=get_dtype(n))[:, None].view(np.uint8),
axis=1,
bitorder="little",
count=n,
)[:, ::-1]
def binary_codes_7(n: int) -> np.ndarray:
validate(n)
return np.array(np.meshgrid(*[(0, 1)] * n, indexing="ij")).reshape((n, 1 << n)).T
def gray_codes_0(n: int) -> np.ndarray:
numbers = np.arange((count := 1 << n), dtype=get_dtype(n))
gray = numbers ^ (numbers >> 1)
return (
np.concatenate([(gray & 1 << i).astype(bool) for i in range(n - 1, -1, -1)])
.reshape((n, count))
.T
)
def gray_codes_1(n: int) -> np.ndarray:
numbers = np.arange((count := 1 << n), dtype=get_dtype(n))
gray = numbers ^ (numbers >> 1)
result = np.zeros((count, n), dtype=bool)
for i in range(n):
result[:, i] = gray & (count := count >> 1)
return result
def gray_codes_2(n: int) -> np.ndarray:
validate(n)
binary = binary_codes_6(n)
shifted = np.roll(binary, 1, axis=-1)
shifted[:, 0] = 0
return binary ^ shifted
def gray_codes_3(n: int) -> np.ndarray:
validate(n)
gray = np.array([(0,), (1,)], dtype=bool)
l = 2
for _ in range(n - 1):
gray = np.concatenate(
[
np.concatenate([np.zeros((l, 1), dtype=bool), gray], axis=-1),
np.concatenate([np.ones((l, 1), dtype=bool), gray[::-1]], axis=-1),
]
)
l <<= 1
return gray
</code></pre>
<hr />
<h2>Testing</h2>
<pre><code>import numpy as np
zeros = np.zeros(524288, dtype=bool)
ones = np.ones(524288, dtype=bool)
zeros1 = np.zeros((524288, 32), dtype=bool)
ones1 = np.ones((524288, 32), dtype=bool)
million = [list(range(i*4096, i*4096+4096)) for i in range(256)]
numbers = np.arange(1 << 16, dtype=np.uint64)
mask = np.array([1 << i for i in range(15, -1, -1)], dtype=np.uint64)
</code></pre>
<pre><code>In [3]: %timeit (numbers & mask[:, None]).T.astype(bool)
4.1 ms ± 97.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [4]: %timeit (numbers[:, None] & mask).astype(bool)
6.1 ms ± 423 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [5]: %timeit binary_codes_5(16)
2.02 ms ± 19.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [6]: %timeit binary_codes_4(16)
2.32 ms ± 27.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [7]: %timeit np.hstack([zeros, ones])
312 μs ± 12.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [8]: %timeit np.concatenate([zeros, ones])
307 μs ± 9.97 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [9]: %timeit np.vstack([zeros, ones])
315 μs ± 11.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [10]: %timeit np.hstack([zeros1, ones1])
19.8 ms ± 800 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit np.concatenate([zeros1, ones1], axis=-1)
18.1 ms ± 265 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [12]: %timeit np.concatenate([zeros1, ones1])
9.73 ms ± 413 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [13]: %timeit np.vstack([zeros1, ones1])
10.3 ms ± 229 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [14]: %timeit np.dstack(million)[0]
78.7 ms ± 973 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [15]: %timeit np.concatenate(million).reshape((256, 4096)).T
69.9 ms ± 251 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit binary_codes_0(16)
2.32 ms ± 18 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [17]: %timeit binary_codes_1(16)
6.37 ms ± 182 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [18]: %timeit binary_codes_2(16)
1.46 ms ± 28 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [19]: %timeit binary_codes_3(16)
1.64 ms ± 29.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [20]: %timeit binary_codes_6(16)
1.12 ms ± 9.71 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [21]: %timeit gray_codes_0(16)
2.12 ms ± 25.1 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit gray_codes_1(16)
2.17 ms ± 29 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [23]: %timeit gray_codes_2(16)
4.51 ms ± 151 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [24]: %timeit gray_codes_3(16)
1.46 ms ± 19.7 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>Is there a more efficient way to generate all n-bit gray codes?</p>
<hr />
<p>I have figured out how to use <code>np.meshgrid</code> to do Cartesian product, and it is much slower than expected. I have edited the code above to include it.</p>
<pre><code>In [82]: %timeit binary_codes_7(16)
6.96 ms ± 249 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [83]: %timeit binary_codes_5(16)
1.74 ms ± 36.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [84]: %timeit binary_codes_3(16)
1.65 ms ± 15.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [85]: %timeit np.meshgrid(*[(0, 1)] * 16, indexing="ij")
4.33 ms ± 49.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [86]: np.all(np.array(np.meshgrid(*[(0, 1)] * 5, indexing="ij")).reshape((5, 32)).T == binary_codes_3(5))
Out[86]: np.True_
</code></pre>
<p>Now I have implemented everything I can think of.</p>
<hr />
<p>At this point I have realized this problem is extremely simple, we don't even need any bit operations at all.</p>
<p>In both binary and gray codes, each cell can only take one of two values: zero or one.</p>
<p>Now if we have <code>np.zeros((1 << n, n), dtype=bool)</code> our job is half way done. Exactly half of the cells have the correct value: zero. We just have to flip the ones.</p>
<p>If we look at the sequences row-wise, there isn't much we can do; but if we look at the columns, it just repeats. There are groups of ones with equal length separated by groups of zeros with the same length.</p>
<p>We can just create a 1d array as a binary mask to flip everything on for each column except those gaps. Job done. The question is, how?</p>
<p>The rightmost column in binary is straightforward, just do <code>arr[:, -1][1::2] = 1</code>. But what about the second last column? It needs to be <code>(0, 0, 1, 1)</code> repeat, in other words every other pair of cells are ones, I know the indices of the start and end points, it needs to be on in <code>[range(2, 4), range(6, 8), range(10, 12)...]</code> but what is the simplest way to tell the computer to flip those cells? And the third last column, the bands of ones are <code>[range(4, 8), range(12, 16), range(20, 24)...]</code>, how do I flip those cells?</p>
<p>Surprisingly I haven't found a good answer (or perhaps unsurprisingly, given how little searching turned up), but I did find this: <a href="https://stackoverflow.com/questions/57222342/indexing-in-numpy-access-every-other-group-of-values">Indexing in NumPy: Access every other group of values</a>.</p>
<p>And no, this is not a duplicate, because doing <code>reshape</code> then <code>ravel</code> for each column would be terribly inefficient, and that doesn't create a boolean mask for indexing the array, it creates a smaller array...</p>
<p>Currently I can do this:</p>
<pre><code>arr = np.zeros((16, 4), dtype=bool)
l = 1
for i in (3, 2, 1, 0):
l2 = l * 2
for a, b in zip(range(l, 16, l2), range(l2,17,l2)):
arr[:, i][a:b] = 1
l = l2
</code></pre>
<p>But this is presumably slow (I haven't benchmarked it); however, if it were implemented with <code>numpy</code> operations, I think this would be the most efficient algorithm for this type of sequence. The question is, how to implement it?</p>
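<p>Here is a minimal sketch of how that band flipping could be vectorised per column (my own attempt, not benchmarked). Building the table transposed keeps each bit column contiguous, so reshaping a column into groups of <code>2 * width</code> cells lets every second half-group be switched on in a single assignment:</p>
<pre><code>def binary_codes_bands(n: int) -> np.ndarray:
    count = 1 << n
    out = np.zeros((n, count), dtype=bool)  # transposed: one contiguous row per bit column
    for i in range(n):  # i = 0 is the least significant (rightmost) column
        width = 1 << i
        # each column is `width` zeros followed by `width` ones, repeated;
        # reshape into (groups, 2, width) and switch on the second half of every group
        out[n - 1 - i].reshape(-1, 2, width)[:, 1, :] = True
    return out.T

assert np.array_equal(binary_codes_bands(5), binary_codes_6(5))
</code></pre>
<p>The gray-code columns are the same kind of bands (width <code>2**(i+1)</code>, period <code>2**(i+2)</code>, starting at offset <code>2**i</code>, if I have the offsets right), so the same idea should carry over with an offset slice.</p>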
|
<python><arrays><numpy><gray-code>
|
2025-03-11 13:23:02
| 4
| 3,930
|
Ξένη Γήινος
|
79,500,858
| 10,452,700
|
Best Practices for Preserving Hashed Identifiers in Filenames After Character Sanitization for Accurate Merging
|
<p>We are working with a hashed identifier column (<code>vmid</code>) in a Pandas DataFrame, which has been sanitized to remove special characters due to CSV storage constraints. The sanitization function we used replaces disallowed characters with an underscore (<code>_</code>):</p>
<pre class="lang-py prettyprint-override"><code>import re
def sanitize_filename(vmid):
    return re.sub(r'[<>:"/\\|?*]', '_', vmid)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code># Replace invalid characters in the filename
filename = f"{vmid.replace('/', '_')}.csv" #store using f-string
vmid_df = result_df[result_df['vmid'] == vmid]
vmid_df.to_csv(filename, index=False)
</code></pre>
<p><strong>Problem</strong>:</p>
<p>The original <code>vmid</code> values were anonymized before processing.
After storing the sanitized version (where special characters are replaced by <code>_</code>), we lost the ability to recover the original vmid values.
Now, we need to join this DataFrame with another (<a href="https://github.com/Azure/AzurePublicDataset/blob/master/analysis/Azure%202019%20Public%20Dataset%20V2%20-%20Trace%20Analysis.ipynb" rel="nofollow noreferrer">vmtable.csv</a>), which contains the <code>vmid</code> column and its corresponding <code>vmcategory</code>.
We attempted to use <em>Levenshtein</em> distance for approximate string matching, but it often returns incorrect categories due to multiple potential matches.</p>
<p><strong>Attempts & Limitations</strong>:</p>
<ul>
<li>Using <code>Levenshtein</code>: Finds the closest match but can result in incorrect assignments.</li>
<li>Using direct equality comparison: Fails since the sanitized <code>vmid</code> is different from the original.</li>
</ul>
<p>Checking existing solutions like this <a href="https://%20https://stackoverflow.com/questions/6665398/algorithm-string-similarity-score-hash" rel="nofollow noreferrer">post</a> and this <a href="https://stackoverflow.com/questions/17388213/find-the-similarity-metric-between-two-strings">post</a>, each focuses on a single <em>matching algorithm</em>, but our issue is recovering the lost mapping before the join.</p>
<p>One possible (if naive) solution is to store a <em>mapping</em> before sanitization, by building a <strong>dictionary</strong> before applying <code>sanitize_filename()</code>:</p>
<pre class="lang-py prettyprint-override"><code>vmid_map = {original_vmid: sanitize_filename(original_vmid) for original_vmid in df['vmid']}
</code></pre>
<p>Later, use this mapping to restore the original vmid before merging or <code>left</code> join with other tables.</p>
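<p>A minimal sketch of that idea (assuming the mapping is persisted next to the per-vmid CSV files; <code>mapping.json</code> and <code>some_sanitized_vmid</code> are placeholder names of mine):</p>
<pre class="lang-py prettyprint-override"><code>import json
import pandas as pd

# 1. Persist the mapping once, before sanitizing
vmid_map = {vmid: sanitize_filename(vmid) for vmid in result_df['vmid'].unique()}
with open('mapping.json', 'w') as f:
    json.dump(vmid_map, f)

# 2. Later: invert it and restore the original vmid before the join
with open('mapping.json') as f:
    sanitized_to_original = {v: k for k, v in json.load(f).items()}

per_file_df = pd.read_csv('some_sanitized_vmid.csv')
per_file_df['vmid'] = sanitized_to_original['some_sanitized_vmid']  # the sanitized id is the file stem

vmtable = pd.read_csv('vmtable.csv')
merged = per_file_df.merge(vmtable[['vmid', 'vmcategory']], on='vmid', how='left')
</code></pre>
<p>One caveat: the underscore replacement is not injective (two different vmids can collapse to the same filename), so a reversible encoding such as <code>base64.urlsafe_b64encode(vmid.encode()).decode()</code> for the filename avoids both the collision risk and the need for a mapping file.</p>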
<p><strong>Question</strong>:</p>
<p>What would be the best practice to handle this issue, ensuring we can store <strong>csv file names</strong> and later restore the original <code>vmid</code> for correct dataframe merging?</p>
<p>Would a mapping strategy or an alternative encoding approach be more reliable? Any better approaches are highly appreciated!</p>
|
<python><pandas><hash><cosine-similarity><html-escape-characters>
|
2025-03-11 13:05:50
| 0
| 2,056
|
Mario
|
79,500,760
| 3,133,018
|
Accessing a sibling inner class in initialiser
|
<p>Consider the following code, if you will:</p>
<pre class="lang-py prettyprint-override"><code>class ParentService:
BASE_DIR = '/some/path'
class Results(str, Enum):
RESULT1 = 'ResultOne.xlsx'
RESULT2 = 'ResultTwo.pdf'
def file_path(self) -> str:
return os.path.join(self.BASE_DIR, self.value)
class ParentException(Exception):
def __init__(self, result_type: Results): # Unresolved reference 'Results'
self.result_type = result_type
self.msg = f'There is a problem with {result_type}'
super().__init__(self.msg)
@classmethod
def file_exists(cls, file: Results):
if not os.path.exists(file.file_path()):
raise self.ParentException(result_type=file)
</code></pre>
<p>Is it possible – and if it is, what is the correct way – to access the <code>Results</code> inner class Enum in the <code>ParentException</code> initialiser for the purposes of limiting the <code>result_type</code> parameter options?</p>
<p>Thanks!</p>
|
<python><python-class>
|
2025-03-11 12:33:16
| 0
| 496
|
zkvvoob
|
79,500,718
| 6,699,447
|
How to include first matching pattern as a column
|
<p>I have a dataframe <code>df</code>.</p>
<pre><code>>>> import polars as pl
>>>
>>>
>>> df = pl.DataFrame({"col": ["row1", "row2", "row3"]})
>>> df
shape: (3, 1)
┌──────┐
│ col │
│ --- │
│ str │
╞══════╡
│ row1 │
│ row2 │
│ row3 │
└──────┘
</code></pre>
<p>Now I want to create a new column <code>new</code>. It should be the first matched pattern in the <code>col</code>.</p>
<p>For example, For the pattern <code>1|2</code> it should produce the following output.</p>
<pre><code>┌──────┬───────┐
│ col ┆ new │
│ --- ┆ --- │
│ str ┆ str │
╞══════╪═══════╡
│ row1 ┆ 1 │
│ row2 ┆ 2 │
│ row3 ┆ null │
└──────┴───────┘
</code></pre>
<p>I tried using with the expression API, but it's returning boolean values.</p>
<pre><code>>>> df.with_columns(new=pl.col('col').str.contains("1|2"))
shape: (3, 2)
┌──────┬───────┐
│ col ┆ new │
│ --- ┆ --- │
│ str ┆ bool │
╞══════╪═══════╡
│ row1 ┆ true │
│ row2 ┆ true │
│ row3 ┆ false │
└──────┴───────┘
</code></pre>
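<p>For reference, one direction worth trying (assuming your polars version supports it) is <code>str.extract</code>, which returns the first capture group instead of a boolean, and <code>null</code> when there is no match:</p>
<pre><code>df.with_columns(new=pl.col('col').str.extract(r"(1|2)"))
# 'new' holds the captured group as a string, or null when nothing matches
</code></pre>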
|
<python><python-3.x><python-polars>
|
2025-03-11 12:18:24
| 2
| 25,841
|
user459872
|
79,500,374
| 17,500,571
|
Inconsistent API Data Size When Splitting a 4-Year Dataset into Various Time Chunks (Thingsbaord)
|
<p>I'm working with a 4-year dataset that I split into smaller time intervals (chunks) to send API requests to Thingsboard. I experimented with different chunk sizes—3 hours, 6 hours, 12 hours, and 24 hours. Since the overall time period remains the same (4 years), I expected the total amount of data retrieved to be consistent across different chunk sizes. However, I'm seeing inconsistent total data sizes depending on the chunk size used.</p>
<p>I've thoroughly tested my script and I'm confident that the implementation is correct. It appears that the discrepancy comes from the Thingsboard API, which returns varying data amounts for each chunk.</p>
<p><a href="https://i.sstatic.net/53PnqVFH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53PnqVFH.png" alt="enter image description here" /></a></p>
<p>Script :</p>
<pre><code>import json
import httpx
from urllib.parse import urlencode
import polars as pl
import asyncio
import os
import re
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type
from urllib.parse import urljoin
from itertools import chain
from config import BASE_URL, DATAFRAME_AFTER_ANALYSE
import time
numberofkeys = 0
OUTPUT_FILE = "/root/main-kafka-main/ALLDATASETS/DATAFROMMETAMORPHOSIS/new_2.jsonl"
RAW_RESPONSE_DIR = "raw_responses" # Directory to save raw API responses
error_file = []
def create_polling(deviceid, timestamp, slaveadd, template, modbusmap):
    # the actual template is built and returned here; omitted because it is long
    ...
# Function to fetch or load cached telemetry keys
async def get_cached_keys(client, header, entityID):
cache_dir = "keys_cache"
os.makedirs(cache_dir, exist_ok=True)
cache_file = os.path.join(cache_dir, f"{entityID}.json")
if os.path.exists(cache_file):
with open(cache_file, "r") as f:
keys = json.load(f)
else:
keys = await get_keys(client, header, entityID)
with open(cache_file, "w") as f:
json.dump(keys, f)
return keys
@retry(
retry=retry_if_exception_type(httpx.HTTPStatusError),
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=1, max=10)
)
async def get_keys(client, header, entityID):
all_keys_url = urljoin(BASE_URL, f"/api/plugins/telemetry/DEVICE/{entityID}/keys/timeseries")
response = await client.get(all_keys_url, headers=header)
response.raise_for_status()
data = response.json()
pattern = re.compile(r"^[0-9A-Fa-f]{4}(_H|_I)?$")
filtered_keys = [key for key in data if key and len(key) <= 6 and pattern.match(key)]
return filtered_keys
@retry(
retry=retry_if_exception_type(httpx.HTTPStatusError),
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=1, max=10)
)
async def get_telemetry_data(client, header, entityID, keys, useStrictDataTypes, startTs, endTs):
telemetry_value_path = f"/api/plugins/telemetry/DEVICE/{entityID}/values/timeseries"
query_params = {
"keys": ",".join(keys),
"useStrictDataTypes": str(useStrictDataTypes).lower(),
"startTs": startTs,
"endTs": endTs,
"limit": 53*100000000000,
"agg":"NONE",
"interval":0,
"intervalType":"MILLISECONDS"
}
url = f"{urljoin(BASE_URL, telemetry_value_path)}?{urlencode(query_params)}"
response = await client.get(url, headers=header)
response.raise_for_status()
return response.json()
@retry(
retry=retry_if_exception_type(httpx.HTTPStatusError),
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=1, max=10))
def generate_time_chunks(start_ts, end_ts, chunk_size_ms):
current_start = start_ts
while current_start < end_ts:
current_end = min(current_start + chunk_size_ms, end_ts)
yield current_start, current_end
current_start = current_end
async def get_device_data(semaphore, client, header, entityID, useStrictDataTypes, startTs, endTs, entityName, entityType, chunk_size_ms=1*24*3600*1000):
async with semaphore:
try:
filtered_keys = await get_cached_keys(client, header, entityID)
if not filtered_keys:
return
for chunk_start, chunk_end in generate_time_chunks(startTs, endTs, chunk_size_ms):
try:
data = await get_telemetry_data(client, header, entityID, filtered_keys, useStrictDataTypes, chunk_start, chunk_end)
if not data:
continue
timestamps = {entry["ts"] for entry in chain.from_iterable(data.values())}
timestamps = sorted(timestamps)
parts = entityName.rsplit("-", 1)
deviceid = parts[0]
try:
slaveadd = int(parts[1] if len(parts) > 1 else "")
except ValueError:
slaveadd = parts[1] if len(parts) > 1 else ""
data_by_ts = {}
for key, entries in data.items():
for entry in entries:
ts = entry["ts"]
data_by_ts.setdefault(ts, {})[key] = entry["value"]
pattern_key = re.compile(r"^([0-9A-Fa-f]{4})(?:_H|_I)$")
for ts in timestamps:
modbusmap = []
ts_data = data_by_ts.get(ts, {})
for key in filtered_keys:
try:
if key not in ts_data:
continue
if len(key) == 6:
new_key = key[:4].lower()
if key.endswith("H"):
modbusmap.append({"addr": f"0x{new_key}", "type": 0, "val": ts_data[key]})
else:
modbusmap.append({"addr": f"0x{new_key}", "type": 1, "val": ts_data[key]})
else:
new_key = key.lower()
modbusmap.append({"addr": f"0x{new_key}", "type": 0, "val": ts_data[key]})
modbusmap.append({"addr": f"0x{new_key}", "type": 1, "val": ts_data[key]})
except Exception as e:
print(f"Error processing key '{key}' for timestamp {ts} in chunk {chunk_start}-{chunk_end} for device {entityID}: {e}")
error_file.append(e)
continue
polling = create_polling(deviceid, ts, slaveadd, entityType, modbusmap)
yield polling
except Exception as e:
print(f"Error processing chunk {chunk_start}-{chunk_end} for device {entityID}: {e}")
continue
except Exception as r:
print(f"Error processing device {entityID}: {r}")
async def process_device(queue, semaphore, client, header, entityID, useStrictDataTypes, startTs, endTs, entityName, entityType):
async for polling in get_device_data(semaphore, client, header, entityID, useStrictDataTypes, startTs, endTs, entityName, entityType):
await queue.put(polling)
async def writer(queue, file_path):
with open(file_path, "w") as f:
while True:
item = await queue.get()
if item is None:
break
json_str = json.dumps(item)
f.write(json_str + "\n")
queue.task_done()
async def get_all_devices_from_file(file_name, token, useStrictDataTypes, startTs, endTs):
start_time = time.time()
file_path = f"{DATAFRAME_AFTER_ANALYSE}{file_name}.csv"
df_devices = pl.read_csv(file_path, columns=["id", "name", "type"])
header = {"Authorization": f"Bearer {token}"}
async with httpx.AsyncClient(http2=True) as client:
semaphore = asyncio.Semaphore(50)
queue = asyncio.Queue()
writer_task = asyncio.create_task(writer(queue, OUTPUT_FILE))
tasks = []
for entityID, entityName, entityType in df_devices.select(["id", "name", "type"]).iter_rows():
task = asyncio.create_task(
process_device(queue, semaphore, client, header, entityID, useStrictDataTypes, startTs, endTs, entityName, entityType)
)
tasks.append(task)
await asyncio.gather(*tasks)
await queue.put(None)
await writer_task
end_time = time.time()
execution_time = end_time - start_time
return f"Execution time: {execution_time:.6f} seconds"
</code></pre>
<p>My questions are:</p>
<ul>
<li>Has anyone experienced similar behavior with Thingsboard or any other API when splitting data by time intervals?</li>
<li>What could be causing these inconsistencies in the API response?</li>
<li>Are there any best practices or known workarounds when dealing with such scenarios?</li>
</ul>
|
<python><fastapi><thingsboard>
|
2025-03-11 10:15:05
| 0
| 364
|
Foxbat
|
79,500,273
| 8,384,910
|
Python shelve downgrade to read-only mode
|
<p>If multiple executions are reading from the same shelve, I assume that they must open it in read-only mode so that there are no "locked file" errors.</p>
<p>I'm using <code>shelve</code> as a cache, which is intended to be populated on the first run. When the code runs, it checks the integrity of the cache, and regenerates it when needed. Because of my context, it is safe to assume that if the cache is being updated, the code is never running concurrently.</p>
<p>How do I open and upgrade/downgrade the shelve to and from create and read-only mode to avoid locked file errors while also allowing concurrent executions?</p>
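<p>A minimal sketch of the pattern I have in mind (assuming the underlying dbm backend allows multiple readers when the shelf is opened with <code>flag='r'</code>; <code>cache_is_valid</code> and <code>populate_cache</code> are placeholders for your own logic):</p>
<pre class="lang-py prettyprint-override"><code>import shelve

def open_cache(path):
    try:
        db = shelve.open(path, flag='r')   # read-only: no write lock, safe for concurrent runs
        if cache_is_valid(db):             # placeholder: your integrity check
            return db
        db.close()
    except Exception:                      # missing, unreadable or corrupt cache
        pass
    db = shelve.open(path, flag='n')       # 'n': always create a new, empty database
    populate_cache(db)                     # placeholder: regenerate the entries
    db.sync()
    return db
</code></pre>
<p>Since the regeneration path never runs concurrently, the only window that needs care is the first run; once the cache exists and is valid, every later execution only ever opens it with <code>flag='r'</code>.</p>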
|
<python><shelve>
|
2025-03-11 09:48:07
| 1
| 9,414
|
Richie Bendall
|
79,500,233
| 14,855,041
|
How to resolve type checking error in Django when accessing Serializer.validated_data
|
<p>I'm encountering a type checking error in VSCode with Pylance (pyright) when accessing <code>serializer.validated_data["code"]</code> in a Django project. The errors are:</p>
<ul>
<li><code>"__getitem__" method not defined on type "empty" Pylance</code></li>
<li><code>Object of type "None" is not subscriptable Pylance</code></li>
</ul>
<p>The property type is inferred as:<br />
<code>(property) validated_data: empty | Unknown | dict[Unknown, Unknown] | Any | None</code></p>
<p>VSCode settings:</p>
<pre class="lang-json prettyprint-override"><code>"python.languageServer": "Pylance",
"python.analysis.typeCheckingMode": "basic"
</code></pre>
<p>I've defined the serializer class like,</p>
<pre class="lang-py prettyprint-override"><code>class InputSerializer(BaseSerializer):
code = serializers.CharField(
required=True,
max_length=255,
validators=[voucher_code_validator],
)
</code></pre>
<p>How can I fix this?</p>
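<p>For reference, a narrow workaround I have seen for similar inference gaps (a sketch, not a DRF-stubs-specific fix) is to tell the checker what <code>validated_data</code> is once <code>is_valid()</code> has run; here <code>request</code> stands for the incoming DRF request in the view:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Dict, cast

serializer = InputSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
validated = cast(Dict[str, Any], serializer.validated_data)
code = validated["code"]
</code></pre>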
|
<python><django><python-typing><pylance><pyright>
|
2025-03-11 09:35:39
| 1
| 366
|
Farhad
|
79,500,016
| 2,604,247
|
What Are the Correct Versions of Tensorflow and Tensorflow Agents to Work with On Ubuntu 24.04?
|
<p>I remember having worked with tensorflow agents (for reinforcement learning) more than three years back, on an Ubuntu 18.04 machine. Of course, all the dependencies, including the python version, have been upgraded since then, seemingly with a lot of breaking changes.</p>
<p>So now that I need to build a custom <code>tf_environment</code> for a reinforcement learning project, I cannot even get the dependencies set up to play well with each other. Here is my <code>pyproject.toml</code> (I am using an old python version based on some google searches):</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "rl-assignment"
version = "0.1.0"
description = "A project on reinforcement learning"
readme = "README.md"
requires-python = "==3.9.21"
</code></pre>
<p>and here is the requirement file</p>
<pre><code>polars==1.24.0
numpy==1.26.4
tensorflow==2.18.0
tf-agents==0.5.0 # Later versions face issue with pygame
</code></pre>
<p>Based on this set up, I am trying to run a simple import (from Google's own tutorial)</p>
<pre class="lang-py prettyprint-override"><code>from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.environments import utils
from tf_agents.specs import array_spec
from tf_agents.environments import wrappers
from tf_agents.environments import suite_gym
from tf_agents.trajectories import time_step as ts
</code></pre>
<p>and I get</p>
<pre><code>ModuleNotFoundError: No module named 'tensorflow.python.training.tracking'
</code></pre>
<p>But this is just an example. I tried multiple upgrades and downgrades of the version numbers based on various readings and posts, but each time it throws a new error, including one based on the <code>protobuf</code> version.</p>
<h4>Question</h4>
<p>Just give me a clean and minimal <code>requirements.txt</code> file (and optionally, a python version, the later the better) where tensorflow and tf_agents can work with each other.</p>
|
<python><tensorflow><pip><dependency-management><tf-agent>
|
2025-03-11 08:05:46
| 1
| 1,720
|
Della
|
79,499,882
| 8,876,025
|
Claude Sonnet can call tool only once at a time
|
<p>I'm testing a function calling capability of Claude Sonnet 3.7, but it can only call the tool once before it responds to the client.</p>
<p>Here is my code:</p>
<pre><code>from pydantic import BaseModel, Field
from llama_index.core.program.function_program import FunctionCallingProgram
from llama_index.llms.anthropic import Anthropic
from llama_index.core import PromptTemplate


class ObjectInstance(BaseModel):
    type: str = Field(
        ...,
        description="The category or class of the object extracted from the text. Must exactly match one of the predefined object types provided.",
    )
    name: str = Field(
        ...,
        description="The exact name of the object as it appears in the text in upper case.",
    )
    description: str = Field(
        ...,
        description="Information about the object from the text it is extracted from",
    )
test_prompt = PromptTemplate(
"""
Task Description:
From a given text, extract one or more objects that match the object type provided.
For each object, call the provided tool to format the output.
OBject Types and Descriptions:
Entity Type 1: PERSON
Description: A name of a person
Entity Type 2: CITY NAME
Description: A name of a city
Text:
{content}
"""
)
llm = Anthropic(model = "claude-3-7-sonnet-20250219")
program = FunctionCallingProgram.from_defaults(
    output_cls=ObjectInstance,
    prompt=test_prompt,
    llm=llm,
    allow_parallel_tool_calls=True,
)
content = "John Doe is a person who lives in San Francisco."
output = program(content=content)
# It only extracts
# Name: JOHN DOE
# Type: PERSON
# It should extract this as well
# Name: San Francisco
# Type: CITY NAME
</code></pre>
<p>I also tried using <code>structured_predict</code> and <code>get_function_tool</code>, but both failed, resulting in extracting only one object.</p>
<p>My attempt No. 2</p>
<pre><code>response = llm.structured_predict(
output_cls, prompt, content=content, allow_parallel_tool_calls=True
)
</code></pre>
<p>My attempt No. 3</p>
<pre><code>from llama_index.core.program.function_program import get_function_tool
tool = get_function_tool(output_cls)
resp = llm.chat_with_tools(
[tool],
user_msg=prompt.format(content=content),
allow_parallel_tool_calls=True,
)
tool_calls = llm.get_tool_calls_from_response(
resp, error_on_no_tool_calls=False
)
</code></pre>
<p>For the same code and prompt, both gpt-4o and mistral-large were able to call the tool multiple times. Only Anthropic Claude fails to achieve this. I wonder if I'm missing something.</p>
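<p>One workaround I have seen for models that emit a single tool call (an assumption on my part, not verified against Claude 3.7) is to make that one call carry every object by wrapping the schema in a list:</p>
<pre><code>from typing import List
from pydantic import BaseModel, Field

class ObjectInstanceList(BaseModel):
    """Hypothetical wrapper so a single tool call can return every extracted object."""
    objects: List[ObjectInstance] = Field(
        ...,
        description="All objects extracted from the text, one entry per object.",
    )

program = FunctionCallingProgram.from_defaults(
    output_cls=ObjectInstanceList,  # wrapper instead of the single-object schema
    prompt=test_prompt,
    llm=llm,
)
output = program(content=content)
</code></pre>
<p>This sidesteps the parallel-tool-call question entirely, at the cost of a slightly different output shape.</p>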
|
<python><llama-index><claude>
|
2025-03-11 06:55:40
| 1
| 2,033
|
Makoto Miyazaki
|
79,499,687
| 6,011,193
|
In vscode, how to disable debug error breakpoints
|
<p>When debugging Python, VS Code always auto-focuses the error line and pops up the error. I would like to disable this feature, similar to how IntelliJ IDEA lets you disable "Breakpoints > Error breakpoint".</p>
<p><a href="https://i.sstatic.net/Wxs19jTw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wxs19jTw.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><debugging>
|
2025-03-11 04:52:36
| 1
| 4,195
|
chikadance
|
79,499,568
| 4,570,628
|
Cython can't link external C++ library
|
<p>Part of a Python library I'm working on is written in Cython. I need to use the <a href="https://github.com/linbox-team/linbox/tree/master/linbox" rel="nofollow noreferrer">LinBox</a> C++ library (which depends on the <a href="https://github.com/linbox-team/givaro" rel="nofollow noreferrer">Givaro</a> library). Just to test, I've written a small C++ function that eats Cython-accessible data, does something with Givaro, and returns Cython-accessible data. Everything compiles correctly, but when I run the test, I get the following error:</p>
<p><code>ImportError: dlopen(<path/to/cython>/matrices.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '__ZN6GivarolsERSoRKNS_7IntegerE'</code></p>
<p>Running <code>nm</code> on <code>matrices.cpython-311-darwin.so</code>, it looks like the symbol is undefined, because I get <code>U __ZN6GivarolsERSoRKNS_7IntegerE</code>. Here's my MWE:</p>
<pre class="lang-cpp prettyprint-override"><code>// fastcomputation.cpp
#include <vector>
#include <givaro/gfq.h>
using namespace std;
int kernel(vector<int> A, int rank, int nullity, double field) {
Givaro::GFqDom<int> F(field);
return 1;
}
</code></pre>
<pre class="lang-cpp prettyprint-override"><code>// fastcomputation.hpp
#include <vector>
int kernel(std::vector<int> A, int rank, int nullity, double field);
</code></pre>
<pre class="lang-py prettyprint-override"><code># matrices.pyx
# distutils: language = c++
from libcpp.vector cimport vector
cdef extern from "fastcomputation.hpp":
int kernel(vector[int] A, int rank, int nullity, double field)
cpdef pkernel(vector[int] A, int rank, int nullity, double field):
print(kernel(A, rank, nullity, field))
</code></pre>
<pre class="lang-py prettyprint-override"><code># setup.py
from setuptools import setup, Extension
from Cython.Build import cythonize
import os
import numpy
os.environ["CC"] = "gcc-14"
os.environ["CXX"] = "g++-14"
extensions = [
Extension(
"*",
["<path/to/cython>/*.pyx", "<path/to/cython>/fastcomputation.cpp"],
include_dirs=[
numpy.get_include(),
"<path/to/cython>",
"/opt/homebrew/Cellar/linbox/include",
"/opt/homebrew/Cellar/fflas-ffpack/2.5.0/include",
"/opt/homebrew/Cellar/givaro/4.2.0_1/include",
"/opt/homebrew/Cellar/gmp/6.3.0/include",
"/opt/homebrew/Cellar/libomp/20.1.0/include"
],
library_dirs=[
"/opt/homebrew/Cellar/linbox/lib",
"/opt/homebrew/Cellar/fflas-ffpack/2.5.0/lib",
"/opt/homebrew/Cellar/givaro/4.2.0_1/lib",
"/opt/homebrew/Cellar/gmp/6.3.0/lib",
"/opt/homebrew/Cellar/libomp/20.1.0/lib"
],
runtime_library_dirs=[
"/opt/homebrew/Cellar/linbox/lib",
"/opt/homebrew/Cellar/fflas-ffpack/2.5.0/lib",
"/opt/homebrew/Cellar/givaro/4.2.0_1/lib",
"/opt/homebrew/Cellar/gmp/6.3.0/lib",
"/opt/homebrew/Cellar/libomp/20.1.0/lib"
],
libraries=["linbox", "givaro"],
language="c++"
)
]
setup(
ext_modules=cythonize(extensions, annotate=True, language_level="3")
)
</code></pre>
<p>I am <em>not</em> an expert (or even that capable) at getting C++ to compile, so it's possible that I've misdiagnosed this and I'm leaving something out completely. The issue is that, <em>without trying to use Givaro, everything compiles and runs</em>, but I need to use Givaro.</p>
|
<python><c++><compilation><cython>
|
2025-03-11 02:55:06
| 1
| 477
|
apizzimenti
|
79,499,322
| 8,357,735
|
YOLOv11 Model Converted to TFLite Not Producing Correct Output in TensorFlow
|
<p>I'm training an ALPR detection model using the dataset from <a href="https://universe.roboflow.com/alpr-12yby/alpr-transformed/" rel="nofollow noreferrer">Roboflow ALPR</a> with YOLOv11, converted to TFLite using:</p>
<pre><code>import ultralytics as yolo
!yolo detect export model=/content/runs/detect/yolov11_anpr/weights/best.pt imgsz=640 batch=1 format=tflite
</code></pre>
<p><strong>My Current Python Inference Code (Ultralytics YOLO)</strong></p>
<p>Both .pt and .tflite models work correctly in Ultralytics' inference pipeline:</p>
<pre><code>from PIL import Image
from ultralytics import YOLO
image = Image.open("/content/Screenshot From 2025-03-08 16-37-15.png")
model = YOLO('/content/runs/detect/yolov11_anpr/weights/best_saved_model/best_float32.tflite')
results = model(image)
result = results[0]
result.show()
</code></pre>
<p>This successfully detects Persian numbers:</p>
<p><a href="https://i.sstatic.net/nfGkCxPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nfGkCxPN.png" alt="licence plate " /></a></p>
<p>Here's a visual representation of the successful detection using Ultralytics YOLO:</p>
<p><strong>Problem</strong></p>
<p>However, direct inference with TensorFlow (without Ultralytics) doesn't produce correct detections. The output data is incorrect or missing entirely.</p>
<p><strong>Questions:</strong></p>
<ul>
<li>Why does inference using Ultralytics YOLO work, but direct TensorFlow inference doesn't?</li>
<li>What preprocessing or post-processing steps am I missing for YOLOv11 TFLite inference with TensorFlow?</li>
</ul>
<p>Any insights or solutions to correctly use the TFLite model directly with TensorFlow would be greatly appreciated!</p>
<p>You can download and test my tflite model with the link below:
<a href="https://drive.google.com/file/d/1p4CaFl9g2gPjGUd68xlr_EQlxz-umTre/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1p4CaFl9g2gPjGUd68xlr_EQlxz-umTre/view?usp=sharing</a></p>
|
<python><tensorflow><artificial-intelligence><data-science><yolo>
|
2025-03-10 22:54:14
| 1
| 320
|
farid
|
79,499,230
| 1,306,784
|
Circular Dependencies with Response Model
|
<p>I'm trying to leverage <a href="https://sqlmodel.tiangolo.com/tutorial/relationship-attributes/read-relationships/" rel="nofollow noreferrer">SQLModel "Relationships"</a> in a FastAPI app. Specifically, I want to break my model classes out into separate files.</p>
<p>I found the official SQLModel documentation describing <a href="https://sqlmodel.tiangolo.com/tutorial/code-structure/#team-model-file" rel="nofollow noreferrer">how to deal with circular dependencies</a> which describes how to use <code>TYPE_CHECKING</code> to sort of bypass the typing issues; and by itself, that allows the API to build.</p>
<p>The problem arises when I try to declare a <code>response_model</code> on my API endpoint with one of the classes from one of those files. The API request raises:</p>
<blockquote>
<p>pydantic.errors.PydanticUserError: <code>TypeAdapter[typing.Annotated[team.TeamResponse, FieldInfo(annotation=TeamResponse, required=True)]]</code> is not fully defined; you should define <code>typing.Annotated[team.TeamResponse, FieldInfo(annotation=TeamResponse, required=True)]</code> and all referenced types, then call <code>.rebuild()</code> on the instance.</p>
</blockquote>
<p>And I think that makes sense that I'm getting an error (although the particular error is cryptic); after all, I used the <code>TYPE_CHECKING</code> voodoo on the model classes, but now I'm explicitly referencing <code>TeamResponse</code>, and expecting to get all the type information for the API request.</p>
<p>So assuming I do NOT want to store all my models in one big class, how can I deal with circular dependencies in model classes AND specify a <code>response_model</code> with related data being returned?</p>
<p><strong>hero.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING
from sqlmodel import SQLModel, Field, Relationship
if TYPE_CHECKING: from .team import Team
class Hero(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str
team_id: int | None = Field(default=None, foreign_key="team.id")
team: "Team" = Relationship(back_populates="heroes")
class HeroResponse(BaseModel):
name: str
team: "Team"
</code></pre>
<p><strong>team.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING
from pydantic import BaseModel
from sqlmodel import SQLModel, Field, Relationship
if TYPE_CHECKING: from .hero import Hero, HeroResponse
class Team(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str
heroes: list["Hero"] = Relationship(back_populates="team")
class TeamResponse(BaseModel):
"""What I want returned from the API"""
name: str
heroes: list["HeroResponse"]
</code></pre>
<p><strong>app.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Depends
from sqlmodel import Session, create_engine, select, SQLModel
from team import TeamResponse, Team
from hero import Hero
engine = create_engine("sqlite:///db.db")
app = FastAPI()
async def get_session():
session = Session(engine)
try:
yield session
finally:
session.close()
@app.get("/teams", response_model=list[TeamResponse])
def get_teams(session = Depends(get_session)):
stmt = select(Team)
heroes = session.exec(stmt).all()
return heroes
# If you want to stage some data
if __name__ == "__main__":
SQLModel.metadata.create_all(engine)
preservers = Team(name="Preservers")
hero1 = Hero(name="The Flash", team=preservers)
hero2 = Hero(name="Superman", team=preservers)
with Session(engine) as session:
session.add_all([preservers,hero1, hero2])
session.refresh(preservers)
session.refresh(hero1)
session.refresh(hero2)
session.commit()
</code></pre>
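<p>For reference, the direction the error itself points at (a sketch, assuming pydantic v2 under SQLModel): resolve the forward references once, after all model modules are importable. Injecting the names that were only imported under <code>TYPE_CHECKING</code> mirrors what those imports would have provided at runtime:</p>
<pre class="lang-py prettyprint-override"><code># e.g. near the top of app.py, after the model modules exist
import hero
import team

# make the TYPE_CHECKING-only names visible at runtime, then rebuild
hero.Team = team.Team
team.HeroResponse = hero.HeroResponse

hero.HeroResponse.model_rebuild()
team.TeamResponse.model_rebuild()
</code></pre>
<p>After the rebuild, <code>response_model=list[TeamResponse]</code> no longer trips over the unresolved <code>"HeroResponse"</code>/<code>"Team"</code> strings, and the model files can stay separate.</p>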
|
<python><fastapi><sqlmodel>
|
2025-03-10 21:46:00
| 1
| 4,801
|
Xedni
|
79,499,210
| 9,415,280
|
tensorflow dataset loop endless will model.predict
|
<p>I use this code to setup my dataset for trainning and predict:</p>
<pre><code>train_dataset = train_dataset.batch(train_batch_sz)
train_dataset = train_dataset.repeat().prefetch(5)
test_dataset_sim = test_dataset
test_dataset = test_dataset.batch(test_batch_sz)
test_dataset = test_dataset.repeat().prefetch(5)
</code></pre>
<p>Training works perfectly, but when it is time to</p>
<pre><code>model.predict(test_dataset)
</code></pre>
<p>the prediction loops endlessly. I imagined it was due to ".repeat()", so I tried:</p>
<pre><code>model.predict(test_dataset_sim )
</code></pre>
<p>and got this error:</p>
<pre><code>
ValueError: Exception encountered when calling Sequential.call().
Invalid input shape for input Tensor("data:0", shape=(24,), dtype=float32). Expected shape (None, 24), but input has incompatible shape (24,)
Arguments received by Sequential.call():
• inputs=tf.Tensor(shape=(24,), dtype=float32)
• training=False
• mask=None
File <command-8335341383601104>, line 35
---> 35 y_pred = model.predict(test_dataset_sim)
36 y_pred_classes = np.argmax(y_pred, axis=1)
37 y_pred_prob = np.max(y_pred, axis=1)
File /databricks/python/lib/python3.11/site-packages/keras/src/models/functional.py:285, in Functional._adjust_input_rank(self, flat_inputs)
283 adjusted.append(ops.expand_dims(x, axis=-1))
284 continue
--> 285 raise ValueError(
286 f"Invalid input shape for input {x}. Expected shape "
287 f"{ref_shape}, but input has incompatible shape {x.shape}"
288 )
289 # Add back metadata.
290 for i in range(len(flat_inputs)):
</code></pre>
<p>I changed my dataset to:</p>
<pre><code>test_dataset_sim = test_dataset.batch(test_batch_sz)
</code></pre>
<p>I got this warning and don't get all the prediction steps required:</p>
<pre><code>2025-03-10 21:05:32.085357: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
</code></pre>
<p>What is the right way to make it work properly?</p>
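<p>A sketch of the two usual options as I understand them (<code>num_test_examples</code> is a placeholder for the size of your test set): either keep the repeating dataset and tell <code>predict</code> how many batches one pass takes, or predict on a batched dataset that does not repeat.</p>
<pre><code>import math

# Option 1: repeating dataset -> bound the number of batches explicitly
steps = math.ceil(num_test_examples / test_batch_sz)
y_pred = model.predict(test_dataset, steps=steps)

# Option 2: batched, non-repeating dataset -> predict stops by itself
eval_dataset = test_dataset_sim.batch(test_batch_sz)   # no .repeat() here
y_pred = model.predict(eval_dataset)
</code></pre>
<p>With option 2, the <code>OUT_OF_RANGE: End of sequence</code> line is, as far as I can tell, only TensorFlow logging that the finite dataset ended; the returned predictions should still cover every example.</p>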
|
<python><tensorflow><dataset><tensorflow-datasets>
|
2025-03-10 21:38:06
| 1
| 451
|
Jonathan Roy
|
79,499,140
| 12,158,757
|
How to easily modify the number of neurons of a certain layer in a Tensorflow neural network?
|
<p>In <a href="/questions/tagged/tensorflow" class="s-tag post-tag" title="show questions tagged 'tensorflow'" aria-label="show questions tagged 'tensorflow'" rel="tag" aria-labelledby="tag-tensorflow-tooltip-container" data-tag-menu-origin="Unknown">tensorflow</a>, given a model <code>h = Model(input_layer, output_layer)</code> for example (its summary looks like below)</p>
<pre><code>Model: "functional_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input_layer (InputLayer) │ (None, 784) │ 0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense) │ (None, 64) │ 50,240 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense) │ (None, 32) │ 2,080 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense) │ (None, 64) │ 2,112 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_3 (Dense) │ (None, 784) │ 50,960 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 105,392 (411.69 KB)
Trainable params: 105,392 (411.69 KB)
Non-trainable params: 0 (0.00 B)
</code></pre>
<p>If I want to change the number of neurons of <code>dense_1</code> from <code>32</code> to <code>16</code>, is there any easy and handy way to update the new configuration?</p>
<p>I tried <code>h.layers[2].units = 16</code>, which unfortunately didn't change the model setup at all. I am wondering if there is a kind of "assignment" operation to do that.</p>
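<p>For reference, a sketch of the usual workaround as I understand it: a built layer cannot be resized in place, so the practical route is to rebuild the model from its config with the changed unit count (the weights of the touched layers are re-initialised, so retraining or selective weight copying is needed afterwards):</p>
<pre><code>from tensorflow import keras

def with_units(model, layer_name, units):
    config = model.get_config()
    for layer_cfg in config["layers"]:
        if layer_cfg["config"].get("name") == layer_name:
            layer_cfg["config"]["units"] = units   # edit the serialized layer config
    return keras.Model.from_config(config)

h2 = with_units(h, "dense_1", 16)
h2.summary()
</code></pre>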
|
<python><tensorflow><keras><deep-learning>
|
2025-03-10 20:56:26
| 0
| 105,741
|
ThomasIsCoding
|
79,499,064
| 4,992,910
|
loading a dataset with datasets.load_dataset is hanging
|
<p>I'm trying to load some data using <code>datasets.load_dataset</code>. It runs correctly on the head node. The issue happens on a slurm node. I'm using a conda env with datasets installed.</p>
<p>When I run on head node with the conda env active, this command works:</p>
<pre><code>python -c "from datasets import load_dataset; d=load_dataset(\"json\", data_files={\"train\": \"/scratch/train/shard1.jsonl\"}); print(d)"
</code></pre>
<p>The issue occurs when I submit the job to the cluster. This hangs:</p>
<pre><code>salloc --nodes 1 --qos interactive --time 00:15:00 --constraint gpu --account=my_account --mem=1G --gres=gpu:1
srun --nodes=1 --ntasks-per-node=1 --constraint=gpu --account=my_account --gres=gpu:1 \
bash -c '
source /global/homes/my_username/miniconda3/etc/profile.d/conda.sh &&
conda activate my_env &&
python -c "from datasets import load_dataset; load_dataset(\"json\", data_files={\"train\": \"/scratch/my_username/train/shard1.jsonl\"})"
'
</code></pre>
<p>I get similar behavior when I submit with sbatch. I'm using a tiny data file to test this:</p>
<pre><code>{"text": "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT"}
</code></pre>
|
<python><dataset><torch>
|
2025-03-10 20:01:44
| 1
| 454
|
ate50eggs
|
79,499,056
| 11,063,709
|
How can I apply member functions of a list of objects across slices of a JAX array using vmap?
|
<p>I have a list of a objects, each of which has a function to be applied on a slice of a <code>jax.numpy.array</code>. There are <code>n</code> objects and <code>n</code> corresponding slices. How can I vectorise this using <code>vmap</code>?</p>
<p>For example, for the following code snippet:</p>
<pre><code>import jax
import jax.numpy as jnp


class Obj:
    def __init__(self, i):
        self.i = i

    def f1(self, x): return (x - self.i)


x = jnp.arange(9).reshape(3, 3).astype(jnp.float32)
functions_obj = [Obj(1).f1, Obj(2).f1, Obj(3).f1]
</code></pre>
<p>how would I apply the functions in <code>functions_obj</code> to slices of <code>x</code>?</p>
<p>More details, probably not relevant:
My specific use-case is running the member functions of a lot of Reinforcement Learning <code>Gym</code> environment objects on slices of an <code>actions</code> array, but I believe my problem is more general and I formulated it as above. (P.S.: I know about <code>AsyncVectorEnv</code> by the way but that does not solve my problem as I am not trying to run the <code>step</code> function).</p>
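<p>In case it helps frame the problem: since the objects here differ only by their internal state (<code>self.i</code>), the usual JAX pattern (a sketch, assuming every object's method has the same structure and only different parameters) is to pull that state out into an array and <code>vmap</code> a pure function over both the state and the slice axis:</p>
<pre><code>import jax
import jax.numpy as jnp

def f1(i, row):        # pure version of Obj.f1
    return row - i

x = jnp.arange(9).reshape(3, 3).astype(jnp.float32)
params = jnp.array([1.0, 2.0, 3.0])   # the state held by Obj(1), Obj(2), Obj(3)

out = jax.vmap(f1)(params, x)         # maps over the leading axis of both arguments
print(out)                            # row k of x shifted by params[k]
</code></pre>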
|
<python><jax>
|
2025-03-10 19:58:21
| 1
| 1,442
|
Warm_Duscher
|
79,498,948
| 5,678,653
|
How can I derive names from numeric properties using numpy?
|
<p>My work is currently to do with functions and properties of the unit Octahedron. While it is not central to the question here, it may help with the context.</p>
<p>A unit octahedron has side lengths of <code>√2</code>, and its six vertices are at <code>(±1,0,0),(0,±1,0),(0,0,±1)</code></p>
<p>As I am using this to consider geographic projections, I am labelling these <code>NS</code> in the <code>z-axis</code>, <code>EW</code> in the y-axis, and <code>AP</code> in the <code>x-axis</code>.</p>
<pre class="lang-py prettyprint-override"><code>vertices = { # ... of the unit octahedron
'N': (0, 0, 1), 'S': (0, 0, -1),
'E': (0, 1, 0), 'W': (0, -1, 0),
'A': (1, 0, 0), 'P': (-1, 0, 0)
}
</code></pre>
<p>Each of the eight sides of the octahedron (just like their spherical counterparts) is called an <code>octant</code>, and each is identified by its composite vertices, so I am naming them accordingly. For example, <code>NEA</code> consists of the vertices <code>N, E, A</code>. For reasons to do with geographic naming conventions, each name has the format (regex) <code>[NS][WE][AP]</code> (e.g. <code>NEA</code>: "North East Atlantic"). Here I am calling them <code>sides</code>.</p>
<p>Every point <code>[u,v,w]</code> on the surface of the unit octahedron meets the constraint
<code>|u|+|v|+|w|=1</code>.</p>
<p>I am using something like the following for such validation.</p>
<pre class="lang-py prettyprint-override"><code>np.allclose(1, np.abs(uvw).sum(axis=-1), rtol=1e-8, atol=1e-8)
</code></pre>
<p>The side it belongs to can also be determined by the signs of its coordinates. For example, <code>[0.363 0.52 0.117]</code> is on <code>NEA</code> while <code>[-0.363 0.52 -0.117]</code> is on <code>SEP</code>.</p>
<p><code>np.sign()</code> is useful for this.</p>
<p>I am currently generating the signs as follows:</p>
<pre class="lang-py prettyprint-override"><code>s = [-1, 1]
signs = np.stack(np.meshgrid(s, s, s), axis=-1,).reshape((-1, 3))
</code></pre>
<p>But this doesn't automatically correlate with the sides NEA, NWA, etc.</p>
<p>Again, a function essential to my purpose is to rotate surface points on any given side of the octahedron onto the plane, for the purpose of being able to draw nets, etc.</p>
<p>The rotation matrices themselves again correlate very strongly to the nature of the octahedron (no surprise there, right?)</p>
<p>I am defining these matrices as follows:</p>
<pre class="lang-py prettyprint-override"><code>z_rotate = (
((-1, 0, 1), ( 1, -2, 1), ( 1, 1, 1)), # NEA
(( 0, -1, -1), (-2, 1, -1), ( 1, 1, -1)), # SEA
(( 0, -1, 1), ( 2, 1, 1), (-1, 1, 1)), # NEP
(( 1, 0, -1), (-1, -2, -1), (-1, 1, -1)), # SEP
(( 0, 1, 1), (-2, -1, 1), ( 1, -1, 1)), # NWA
((-1, 0, -1), ( 1, 2, -1), ( 1, -1, -1)), # SWA
(( 1, 0, 1), (-1, 2, 1), (-1, -1, 1)), # NWP
(( 0, 1, -1), ( 2, -1, -1), (-1, -1, -1)) # SWP
)/np.sqrt((2, 6, 3))[:, np.newaxis]
</code></pre>
<p><strong>The question</strong></p>
<p>I am looking for a solution to the conundrum of deriving the acronymic names (<code>NEA</code>, etc.) from their vertices (remembering that they are in 'reversed' order; e.g. N/S govern the Z plane, the third axis of a coordinate), and likewise being able to derive/identify the various properties of each side from the vertices that compose it, such as collecting the coordinate signs of each side into a <code>{sign_tuple:name}</code> dict.</p>
<p>The rotation matrices actually hold the coordinate signs (in the final row of each corresponding matrix), so being able to derive and/or correlate those with each vertex would allow us to skip the coordinate sign task by using something like the following (but with the correlating key).</p>
<pre class="lang-py prettyprint-override"><code>signs = z_rotate[:, -1]
</code></pre>
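<p>For the <code>{sign_tuple: name}</code> part, a small sketch of how it can be derived straight from the <code>vertices</code> dict (assuming the <code>[NS][EW][AP]</code> naming convention above): the sum of an octant's three vertex vectors has exactly the coordinate signs shared by every surface point of that octant.</p>
<pre class="lang-py prettyprint-override"><code>from itertools import product

octant_signs = {}
for name in map(''.join, product('NS', 'EW', 'AP')):
    centre = sum(np.array(vertices[v]) for v in name)            # e.g. NEA -> (1, 1, 1)
    octant_signs[tuple(np.sign(centre).astype(int))] = name

print(octant_signs[(1, 1, 1)])     # NEA
print(octant_signs[(-1, 1, -1)])   # SEP, matching the example point above
</code></pre>
<p>Reversing the dict gives the per-side sign rows directly, which can then be checked against <code>z_rotate[:, -1]</code>.</p>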
<p>There are interesting patterns in the first and second rows of the rotation matrices - something I have wondered about:</p>
<ul>
<li>the sign of Z coordinate always directly corresponds to N/S</li>
<li>various flips happen between the X,Y coordinates.</li>
</ul>
<p>It would be amazing to see the rotation arrays being derived from a single prototype following the intrinsic relations of the octahedron (rather than being derived analytically via the normals - which is of no interest to me), but that is beyond my expectations.</p>
|
<python><numpy><computational-geometry>
|
2025-03-10 19:10:12
| 1
| 2,248
|
Konchog
|
79,498,680
| 72,437
|
Performing Collection Queries Within a Transaction: Python vs. Swift
|
<p>In Python, we can perform queries on collections within a transaction using the following code snippet:</p>
<pre><code>def write_upload_success(filename: str, uid: str, doc_id: str) -> DocumentReference:
    db = firestore.client()
    transaction = db.transaction()

    @firestore.transactional
    def transaction_operation(transaction) -> str:
        # Query to check if the filename exists
        notes_ref = db.collection('users').document(uid).collection('notes')

        # Find the largest order number in the existing notes
        order_query = notes_ref.order_by('order', direction=firestore.Query.DESCENDING).limit(1).get(transaction=transaction)

        max_order_note = None
        for note in order_query:
            max_order_note = note
            break

        order = max_order_note.get('order') + 1 if max_order_note else 1

        note_ref = notes_ref.document(doc_id)

        #
        # ...
        #

        return note_ref

    return transaction_operation(transaction)
</code></pre>
<p>It appears that Python allows querying collections within a transaction using:</p>
<pre><code>notes_ref.order_by('order', direction=firestore.Query.DESCENDING).limit(1).get(transaction=transaction)
</code></pre>
<p>However, I haven't been able to find an equivalent approach in Swift. Is it not possible to perform collection queries within a transaction in Swift?</p>
<p>Thanks.</p>
<p>P.S. Almost all AIs will wrongly suggest using <code>transaction.getDocuments</code> in Swift, but that function doesn't exist.</p>
|
<python><swift><firebase><google-cloud-firestore>
|
2025-03-10 17:08:57
| 1
| 42,256
|
Cheok Yan Cheng
|
79,498,670
| 7,959,614
|
Vectorize ordinal regression using numpy and scipy special
|
<p>I have a function that calculates the probability of belonging to category k ~ {1, 2, ..., K} based on <code>eta</code> and the cutoff points, <code>c</code> between the categories.</p>
<pre><code>import numpy as np
import scipy.special as ss


def pmf(K: int, eta: np.ndarray, c: np.ndarray) -> np.array:
    """
    Example
    -------
    >>> K = 5
    >>> p = np.array([[0.1, 0.3, 0.2, 0.35, 0.05]])
    >>> cum_p = np.cumsum(p)
    >>> cum_logits = ss.logit(cum_p[:-1])
    >>> eta = np.zeros((1, 1))
    >>> p_K = pmf(K=K, eta=eta, c=cum_logits)
    >>> print(p_K)
    [[0.1 0.3 0.2 0.35 0.05]]
    """
    p = np.zeros((eta.shape[1], K))
    for k in range(K):
        if k == 0:
            p[:,k] = 1 - ss.expit(eta - c[0])
        elif k == K - 1:
            p[:,k] = ss.expit(eta - c[-1])
        else:
            p[:,k] = ss.expit(eta - c[k - 1]) - ss.expit(eta - c[k])
    return p
</code></pre>
<p>Is it possible to remove the for-loop? For the boundaries it's easily done as follows:</p>
<pre><code>p[:,0] = 1 - ss.expit(eta - c[:,0])
p[:,-1] = ss.expit(eta - c[:,-1])
</code></pre>
<p>But how to "deal" with the other values of <code>k</code>?</p>
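<p>For what it's worth, a sketch of one way to drop the loop entirely (same shapes as in the docstring): pad the cutoffs with <code>-inf</code> and <code>+inf</code>, so every category probability becomes a difference of two adjacent CDF values, using the fact that <code>expit(-inf) = 0</code> and <code>expit(inf) = 1</code>.</p>
<pre><code>def pmf_vec(K: int, eta: np.ndarray, c: np.ndarray) -> np.ndarray:
    c_pad = np.concatenate(([-np.inf], c, [np.inf]))  # shape (K+1,)
    cdf = ss.expit(c_pad - eta.T)                     # shape (n, K+1): P(category <= k)
    return np.diff(cdf, axis=1)                       # shape (n, K)
</code></pre>
<p>On the docstring example this reproduces <code>[[0.1 0.3 0.2 0.35 0.05]]</code>, since <code>expit(c_k - eta) = 1 - expit(eta - c_k)</code> makes the differences identical to the branchwise formulas above.</p>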
|
<python><numpy><scipy>
|
2025-03-10 17:02:32
| 1
| 406
|
HJA24
|
79,498,634
| 3,336,423
|
Loading standard C++ library makes ctypesgen's Python binding crash if library is not loaded first
|
<p>I'm experiencing a weird crash when trying to run a C++ library entry point from a Python script. I'm on Windows 10.</p>
<p>I isolated this into a very simple example.</p>
<p>Here is my C++ code:</p>
<p><strong>sdetests_lib_bind_python.h:</strong></p>
<pre><code>#pragma once
#ifdef SDETESTS_LIB_BIND_PYTHON_EXPORTS
/** Export symbols statement */
#define SDETESTS_LIB_BIND_PYTHON __declspec(dllexport)
#else
/** Import symbols statement */
#define SDETESTS_LIB_BIND_PYTHON __declspec(dllimport)
#endif
#ifdef __cplusplus
extern "C" {
#endif
SDETESTS_LIB_BIND_PYTHON void func(int lock);
#ifdef __cplusplus
}
#endif
</code></pre>
<p><strong>sdetests_lib_bind_python.cpp:</strong></p>
<pre><code>#include "sdetests_lib_bind_python.h"
#include <iostream>
#include <mutex>
SDETESTS_LIB_BIND_PYTHON void func( int lock )
{
if (lock!=0)
{
std::cout << "Creating mutex" << std::endl;
std::recursive_mutex mutex;
std::cout << "Created mutex" << std::endl;
std::cout << "Locking mutex" << std::endl;
std::lock_guard<std::recursive_mutex> lock_guard(mutex);
std::cout << "Locked mutex" << std::endl;
}
std::cout << "Hello world" << std::endl;
}
</code></pre>
<p>I compile this as a shared library using Visual Studio 2022 (MSVC 19.43.34808.0)</p>
<p>Later I run <code>ctypesgen</code> to generate a binding <code>sdetests_lib_bind_python.py</code> and I use it from a very simple Python script:</p>
<pre><code>import sdetests_lib_bind_python
print("Calling func without lock")
sdetests_lib_bind_python.func(0)
print("Calling func with lock")
sdetests_lib_bind_python.func(1)
</code></pre>
<p>This works fine and outputs:</p>
<pre><code>Hello from Python script without lock
Hello world
Hello from Python script with lock
Creating mutex
Created mutex
Locking mutex
Locked mutex
Hello world
</code></pre>
<p>However, if I load PyQt from my Python script, being now:</p>
<pre><code>from PyQt5.QtCore import QT_VERSION_STR, PYQT_VERSION_STR
print("Qt: v", QT_VERSION_STR, "\tPyQt: v", PYQT_VERSION_STR)
import sdetests_lib_bind_python
print("Calling func without lock")
sdetests_lib_bind_python.func(0)
print("Calling func with lock")
sdetests_lib_bind_python.func(1)
</code></pre>
<p>Now the program crashs with output:</p>
<pre><code>Qt: v 5.15.2 PyQt: v 5.15.9
Hello from Python script without lock
Hello world
Hello from Python script with lock
Creating mutex
Created mutex
Locking mutex
Traceback (most recent call last):
File "example_sdetests_lib_bind_python_def.py", line 10, in <module>
sdetests_lib_bind_python_def.func(1)
OSError: exception: access violation reading 0x0000000000000000
</code></pre>
<p>So the script crashes if <code>PyQt</code> is loaded BEFORE (see below) <code>sdetests_lib_bind_python</code> and if <code>sdetests_lib_bind_python</code> tries to use a <code>std::recursive_mutex</code>.</p>
<p>Note that the script also crashes with modules other than PyQt, like <code>rclpy</code> (all this comes from a ROS2 environment before I isolated it as an MCVE here).</p>
<p>Note that NO crash is observed:</p>
<ul>
<li>If the <code>import sdetests_lib_bind_python</code> statement is the very first line of the script, then I can load PyQt, rclpy, and everything works fine. It only crashes if those other modules are loaded BEFORE sdetests_lib_bind_python</li>
<li>If I compile with an older compiler (MSVC 19.38.33135.0)... this crash appeared recently in my environment after a compiler upgrade</li>
<li>If my library does not use <code>std::recursive_mutex</code>: merely including <code><mutex></code> has no impact; it is actually creating a mutex that makes the whole thing crash with <code>OSError</code>.</li>
<li>If I compile the library in "Debug" mode</li>
</ul>
<p>I'm packaging my library for external users, so asking them to load it first or, if this could be a solution, to upgrade their PyQt or rclpy modules (and whatever other module leads to a crash) is not an acceptable solution for me...</p>
<hr />
<p>As proposed by 'Ahmed AEK', I checked library loading.</p>
<p>With PyQt5 loaded after my library (no crash), I get:</p>
<pre><code>'python.exe' (Win32) : Chargé 'C:\Python38\python.exe'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ntdll.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\kernel32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\KernelBase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ucrtbase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\python38.dll'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\shlwapi.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\msvcrt.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ws2_32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\rpcrt4.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\advapi32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\sechost.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\bcrypt.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\vcruntime140.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\version.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
Le thread 27884 s'est arrêté avec le code 0 (0x0).
'python.exe' (Win32) : Chargé 'C:\Windows\System32\cryptsp.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\rsaenh.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\cryptbase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\bcryptprimitives.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\python3.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\_ctypes.pyd'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ole32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\libffi-7.dll'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\combase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\gdi32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\win32u.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\gdi32full.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\msvcp_win.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\user32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\oleaut32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\imm32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\_bz2.pyd'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\_lzma.pyd'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\dev\vobs_sde\build\sdetests_python_binding\win64\stg\Release\sdetests_lib_bind_python.dll'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Python38\vcruntime140_1.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\msvcp140.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\QtCore.pyd'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\Qt5\bin\Qt5Core.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\shell32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\mpr.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\netapi32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\userenv.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\Qt5\bin\msvcp140_1.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\winmm.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\netutils.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\srvcli.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\sip.cp38-win_amd64.pyd'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\kernel.appcore.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
Le thread 936 s'est arrêté avec le code 0 (0x0).
Le thread 32580 s'est arrêté avec le code 0 (0x0).
Le thread 21676 s'est arrêté avec le code 0 (0x0).
Le programme '[30116] python.exe' s'est arrêté avec le code 0 (0x0).
</code></pre>
<p>With PyQt5 loaded before my library (crash), I get:</p>
<pre><code>'python.exe' (Win32) : Chargé 'C:\Python38\python.exe'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ntdll.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\kernel32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\KernelBase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ucrtbase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\python38.dll'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\shlwapi.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\msvcrt.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ws2_32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\rpcrt4.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\advapi32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\sechost.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\bcrypt.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\vcruntime140.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\version.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
Le thread 17140 s'est arrêté avec le code 0 (0x0).
'python.exe' (Win32) : Chargé 'C:\Windows\System32\cryptsp.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\rsaenh.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\cryptbase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\bcryptprimitives.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\python3.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\QtCore.pyd'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\Qt5\bin\Qt5Core.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\ole32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\combase.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\mpr.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\userenv.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\gdi32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\win32u.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\gdi32full.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\msvcp_win.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\netapi32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\user32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\shell32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\Qt5\bin\msvcp140.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\winmm.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\Qt5\bin\msvcp140_1.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\vcruntime140_1.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\vcruntime140_1.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Déchargé 'C:\Python38\vcruntime140_1.dll'
'python.exe' (Win32) : Chargé 'C:\Windows\System32\netutils.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\srvcli.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\imm32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\Lib\site-packages\PyQt5\sip.cp38-win_amd64.pyd'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\_ctypes.pyd'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Windows\System32\oleaut32.dll'. Chargement des symboles désactivé par le paramètre Include/Exclude.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\libffi-7.dll'. Le module a été généré sans symboles.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\_bz2.pyd'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\Python38\DLLs\_lzma.pyd'. Les symboles ont été chargés.
'python.exe' (Win32) : Chargé 'C:\dev\vobs_sde\build\sdetests_python_binding\win64\stg\Release\sdetests_lib_bind_python.dll'. Le module a été généré sans symboles.
Exception levée à 0x00007FF9F2412EA0 (msvcp140.dll) dans python.exe : 0xC0000005 : Violation d'accès lors de la lecture de l'emplacement 0x0000000000000000.
</code></pre>
<p>The debugger reports the crash in msvcp140.dll.
A notable difference is that:</p>
<ul>
<li>When my library is loaded before PyQt5 (no crash), msvcp140.dll is loaded from 'C:\Windows\System32'</li>
<li>When my library is loaded after PyQt5 (crash), msvcp140.dll is loaded from 'C:\Python38\Lib\site-packages\PyQt5\Qt5\bin'</li>
</ul>
<p>The version of the file in System32 is 14.42.34480.0,
while the version in the PyQt5 folder is 14.26.28720.3.</p>
<p>If I remove msvcp140.dll from the PyQt5 folder, or replace it with a copy of the one found in System32, the crash disappears. So this is likely the root cause of the problem. But how do I fix it? It's not acceptable to have to tell every user of my DLL to hack PyQt5 (or any other module using msvcp140.dll) this way.</p>
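<p>As a hedged workaround (not a confirmed fix), one could try forcing the newer runtime to be mapped first, since Windows reuses an already-loaded DLL of the same name. A minimal sketch, assuming the System32 copy is compatible with both libraries and that the binding is imported by name:</p>
<pre><code>import ctypes
import os

# Assumption: loading the up-to-date runtime from System32 first means the
# older copy bundled with PyQt5 is never mapped into the process.
ctypes.WinDLL(os.path.join(os.environ["SystemRoot"], "System32", "msvcp140.dll"))

from PyQt5 import QtCore           # now resolves msvcp140.dll to the already-loaded copy
import sdetests_lib_bind_python    # hypothetical import name for the custom binding
</code></pre>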
|
<python><c++><ctypes>
|
2025-03-10 16:50:57
| 1
| 21,904
|
jpo38
|
79,498,617
| 2,039,866
|
Error from Pycharm: Expected type 'SupportsIndex | slice', got 'str' instead
|
<p>My input file, infile is as follows:</p>
<pre><code>NUMBER,SYMBOL
1,AAPL
2,MSFT
3,NVDA
</code></pre>
<p>Here is my code:</p>
<pre><code>import csv
infile = "stock-symbols-nasdaq-SO.csv"
with open(infile, encoding='utf-8') as csvfile:
reader = csv.DictReader(csvfile) # Read CSV as a dictionary
for row in reader:
print(row)
symbol = row['SYMBOL'] # Error from Pycharm: Expected type 'SupportsIndex | slice', got 'str' instead
print(symbol)
</code></pre>
<p>This code "appears" to run correctly, but why is Pycharm indicating an error?
Is their anything that I can do to remove it?</p>
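<p>A minimal sketch of one way to quiet the warning, assuming the issue is only the type PyCharm infers for <code>row</code> (this does not change runtime behaviour):</p>
<pre><code>import csv
from typing import cast

infile = "stock-symbols-nasdaq-SO.csv"
with open(infile, encoding='utf-8') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        # Tell the type checker explicitly that each row maps str -> str.
        row_dict = cast(dict[str, str], row)
        symbol = row_dict['SYMBOL']
        print(symbol)
</code></pre>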
|
<python><pycharm><python-typing>
|
2025-03-10 16:44:36
| 1
| 770
|
Charles Knell
|
79,498,527
| 1,670,583
|
Read text from file while keeping byte offset
|
<p>Using Python, I want to read text from a (utf-8 encoded) text file, but at the same time need to know the start and end position of each character in the file (in bytes). As there might be multi-byte characters, this isn't a 1:1 mapping.</p>
<p>I can see that I could read through the file character by character and keep an offset akin to <code>offset += len(c.encode('utf-8'))</code> (or whatever the encoding of the file would be), but this seems a bit ad hoc, especially once (ignored) decoding errors etc. come into play. Is there a standard way / library to do that? I would imagine a list of character-offset pairs, or a str plus a list of integers containing the offsets.</p>
<p>EDIT: The context in which this is applied is that the script would be reading in some (supposed) text file and return blocks of text which are "interesting" (e.g. a diff to a reference). However, the text file is untrusted (generated by student code), so it needs to be robust against any malformed input. And, yes, I could just fail as soon as there is non-utf8 input, but since this is used in a learning environment, I would like it to be best-effort.</p>
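<p>For reference, a minimal sketch of the ad-hoc approach described above, assuming strictly valid UTF-8 input (handling ignored decode errors would need extra care, since a replacement character does not encode to the same number of bytes as the bytes it replaces):</p>
<pre><code>def chars_with_offsets(path, encoding="utf-8"):
    """Yield (char, start_byte, end_byte) for each character in the file."""
    offset = 0
    with open(path, encoding=encoding) as f:
        for c in f.read():
            width = len(c.encode(encoding))
            yield c, offset, offset + width
            offset += width
</code></pre>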
|
<python>
|
2025-03-10 15:52:04
| 1
| 375
|
incaseoftrouble
|
79,498,129
| 452,102
|
Should we close connections during program shutdown?
|
<p>We have long-lived connections that recreate themselves when broken:</p>
<pre><code>class RMQ():
def __init__(self):
self.connection = ...
def recreate(self):
self.connection = ...
def publish(self):
"""All methods have retry logic to restore connection."""
try:
self.connection.channel.basic_publish(...)
except AMQPConnectionError:
self.recreate()
self.connection.channel.basic_publish(...)
def __del__(self):
if self.connection.is_open:
self.connection.close()
</code></pre>
<p>Should we explicitly close such connections during program shutdown or will Pika/Python/OS take care of such things?</p>
<p>If we have to do it, where should we do it: <code>atexit</code> or some other place? Context managers do not seem applicable during shutdown, as the connections live forever (the decision can't be taken during each call).</p>
<p>Putting this logic in <code>__del__</code> doesn't seem to work as the underlying socket is
already <strong>closed(!)</strong> (that is the exception thrown with SSL connections). Pytest hangs if we simulate a reconnection (deadlock). <code>__del__</code> behaviour is unpredictable as per <a href="https://docs.python.org/3/reference/datamodel.html#object.__del__" rel="nofollow noreferrer">docs</a>, <a href="https://github.com/PyMySQL/PyMySQL/issues/961" rel="nofollow noreferrer">PyMySQL issue 961</a> and <a href="https://stackoverflow.com/questions/24611529/pythonic-way-to-close-connection-like-objects-in-del">Pythonic way to close connections</a>.</p>
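<p>For reference, a minimal sketch of the <code>atexit</code> option mentioned above (an assumption, not a recommendation — the question is whether this is needed at all):</p>
<pre><code>import atexit

rmq = RMQ()  # the long-lived connection object from above

def _close_rmq():
    # Best-effort close at interpreter shutdown; swallow errors since
    # the underlying socket may already be gone at this point.
    try:
        if rmq.connection.is_open:
            rmq.connection.close()
    except Exception:
        pass

atexit.register(_close_rmq)
</code></pre>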
|
<python><rabbitmq><pika>
|
2025-03-10 13:23:54
| 1
| 22,154
|
Nishant
|
79,497,967
| 3,156,085
|
Is there a callable for generating ids with `pytest.fixture(param=...)` the same way it would be generated with `pytest.mark.parametrize()`?
|
<p>I'm using parametrized fixtures but I don't find the way ids are generated practical.</p>
<p>I'd like to fall back on the way it's generated when using <code>pytest.mark.parametrize</code>.</p>
<p>I've seen that it's possible to provide a callable as the <code>ids</code> keyword argument in <code>pytest.fixture</code> (and it works), but I was wondering if there was already an implemented callable that could serve this specific purpose. Is there some internal I could replace <code>get_id</code> with?</p>
<p>I include a MRE for illustrating my issue below.</p>
<hr />
<h2>MRE:</h2>
<ul>
<li><code>test_ids.py</code>:</li>
</ul>
<pre><code>import pytest
def add3(a, b, c):
return a + b + c
@pytest.mark.parametrize("a,b,c", [
(1, 2, 3),
(4, 5, 6),
])
def test_add_with_parametrize(a, b, c):
assert a + b + c == add3(a, b, c)
@pytest.fixture(params=[(1, 2, 3), (4, 5, 6)])
def parametrized_fixture(request):
return request.param
def test_add_with_parametrized_fixture(parametrized_fixture):
a, b, c = parametrized_fixture
assert a + b + c == add3(a, b, c)
def get_id(val):
return f"{val!r}"
@pytest.fixture(params=[(1, 2, 3), (4, 5, 6)], ids=get_id)
def parametrized_fixture_bis(request):
return request.param
def test_add_with_parametrized_fixture_bis(parametrized_fixture_bis):
a, b, c = parametrized_fixture_bis
assert a + b + c == add3(a, b, c)
</code></pre>
<ul>
<li>Output:</li>
</ul>
<pre><code>pytest -v
============================= test session starts =============================
platform linux -- Python 3.11.11, pytest-8.3.5, pluggy-1.5.0 -- /home/vmonteco/.pyenv/versions/3.11.11/envs/3.11_pytest/bin/python
cachedir: .pytest_cache
rootdir: /home/vmonteco/code/MREs/MRE_pytest_ids
collected 6 items
test_ids.py::test_add_with_parametrize[1-2-3] PASSED [ 16%]
test_ids.py::test_add_with_parametrize[4-5-6] PASSED [ 33%]
test_ids.py::test_add_with_parametrized_fixture[parametrized_fixture0] PASSED [ 50%]
test_ids.py::test_add_with_parametrized_fixture[parametrized_fixture1] PASSED [ 66%]
test_ids.py::test_add_with_parametrized_fixture_bis[(1, 2, 3)] PASSED [ 83%]
test_ids.py::test_add_with_parametrized_fixture_bis[(4, 5, 6)] PASSED [100%]
============================== 6 passed in 0.01s ==============================
</code></pre>
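<p>For comparison, a minimal hand-rolled <code>ids</code> callable that mimics the <code>parametrize</code>-style ids for tuple params (just a sketch, not an existing pytest internal):</p>
<pre><code>def parametrize_like_id(val):
    # "1-2-3" instead of "(1, 2, 3)" for tuple parameters.
    if isinstance(val, tuple):
        return "-".join(str(v) for v in val)
    return str(val)
</code></pre>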
|
<python><pytest>
|
2025-03-10 12:14:58
| 1
| 15,848
|
vmonteco
|
79,497,914
| 21,294,350
|
How do conditional expressions group from right to left?
|
<p>I checked <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="nofollow noreferrer">python operator precedence</a> (<a href="https://docs.python.org/3/reference/grammar.html" rel="nofollow noreferrer">this grammar</a> is more detailed and more appropriate for the actual Python implementation):</p>
<blockquote>
<p>Operators <em>in the same box</em> group left to right (except for exponentiation and conditional expressions, which group from right to left).</p>
<p>** Exponentiation [5]</p>
<p>if – else Conditional expression</p>
</blockquote>
<p>I can understand exponentiation that <code>2**3**2</code> is equal to <code>2**(3**2)</code>. But <a href="https://docs.python.org/3/reference/expressions.html#conditional-expressions" rel="nofollow noreferrer">Conditional expression</a></p>
<blockquote>
<p>conditional_expression ::= or_test ["if" or_test "else" expression]</p>
</blockquote>
<p>is not a <em>binary</em> operator. I can't come up with an example similar to the <code>**</code> one. Could you give an example of "grouping from right to left" for the <code>if</code> conditional expression?</p>
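<p>For illustration, a case where the grouping matters, assuming this is the kind of example being asked about:</p>
<pre><code># Right-to-left grouping: parsed as  "A" if True else ("B" if False else "C")
print("A" if True else "B" if False else "C")      # prints "A"

# A hypothetical left-to-right grouping would instead mean
# ("A" if True else "B") if False else "C", which evaluates to "C".
print(("A" if True else "B") if False else "C")    # prints "C"
</code></pre>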
|
<python><python-3.x><conditional-operator><operator-precedence>
|
2025-03-10 11:54:56
| 1
| 782
|
An5Drama
|
79,497,742
| 777,275
|
Python openpyxl issue when save xlsx file
|
<p>I'm using the Walmart template file to upload bulk listings to their site.
The template only allows 10,000 rows, so I have to split the listings into multiple files. The solution I'm using is to copy the original xlsx file under a new name.</p>
<p>Here's the code for copying file</p>
<pre><code>newWMFile = 'c:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python312\\'+'walmart-test-save.xlsx'
shutil.copyfile('c:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python312\\walmart.xlsx', newWMFile)
</code></pre>
<p>I open the new file using openpyxl and simply save it without any changes.</p>
<p>After running the code, I opened the xlsx file; everything looks the same when viewing it in Excel, but when I uploaded it to Walmart, it said something was wrong with the file. After checking, I found out that the headers were modified somehow; I only noticed this when I copied the header "SKU", which is a merged cell, and pasted it into a text editor.</p>
<p>The SKU header of the original file from Walmart, in sheet "Product Content And Site Exp" looks like this:</p>
<p>SKU</p>
<p>SKU</p>
<p>while the file after saving using openpyxl looks like this:</p>
<p>SKU</p>
<p>There's a missing SKU as you can see.</p>
<p>Here's link to the 2 files:</p>
<p>Original working file:</p>
<p><a href="https://docs.google.com/spreadsheets/d/1cyMPDW27idS5VxjacipxbnjhUq26s4X-/edit?usp=sharing&ouid=117184241470804778107&rtpof=true&sd=true" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1cyMPDW27idS5VxjacipxbnjhUq26s4X-/edit?usp=sharing&ouid=117184241470804778107&rtpof=true&sd=true</a></p>
<p>Corrupted file after saving using openpyxl:
<a href="https://docs.google.com/spreadsheets/d/1AmYhDS1_qu9GiNr4D9MyrEr7Yt5WMFt6/edit?usp=sharing&ouid=117184241470804778107&rtpof=true&sd=true" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1AmYhDS1_qu9GiNr4D9MyrEr7Yt5WMFt6/edit?usp=sharing&ouid=117184241470804778107&rtpof=true&sd=true</a></p>
<p>And here's the code that I'm using:</p>
<p><code>code</code></p>
<pre><code>workbook = load_workbook(newWMFile)
worksheet = workbook.active
workbook.save(newWMFile)
workbook.close()
</code></pre>
<p><code>code</code></p>
|
<python><openpyxl><xlsx>
|
2025-03-10 10:45:25
| 0
| 343
|
kenvu
|
79,497,737
| 16,611,809
|
Is a `ROUTEID` set by `Header add Set-Cookie`a locally stored cookie?
|
<p>I have an Apache2 server running as a reverse proxy, and behind this proxy there are multiple workers, each running a local Shiny for Python server. To always use the same worker for one session, I use these lines in my Apache2 config:</p>
<pre><code>(...)
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://myclusterhttp>
BalancerMember http://127.0.0.1:8080 route=1
BalancerMember http://127.0.0.1:8081 route=2
BalancerMember http://127.0.0.1:8082 route=3
ProxySet stickysession=ROUTEID
</Proxy>
<Proxy balancer://myclusterws>
BalancerMember ws://127.0.0.1:8080 route=1
BalancerMember ws://127.0.0.1:8081 route=2
BalancerMember ws://127.0.0.1:8082 route=3
ProxySet stickysession=ROUTEID
</Proxy>
(...)
</code></pre>
<p>I now want to make a pop-up that tells the user that cookies are used. This pop-up should only appear on the first start. If I understand this correctly, the cookie I am using right now is a session cookie that is not stored anywhere on the user's local machine. Is it even possible to check whether the user has already seen the pop-up without introducing a real cookie? And if so, does anyone know how to forward this info from Apache2 to my shiny process?</p>
|
<python><apache2><py-shiny>
|
2025-03-10 10:42:43
| 0
| 627
|
gernophil
|
79,497,724
| 8,771,082
|
Index Pandas with multiple boolean arrays
|
<p>Using numpy, one can subset an array with one boolean array per dimension like:</p>
<pre><code>In [10]: aa = np.array(range(9)).reshape(-1, 3)
In [11]: aa
Out[11]:
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
In [12]: conditions = (np.array([True, True, False]), np.array([True, False, True]))
In [13]: aa[np.ix_(*conditions)]
Out[13]:
array([[0, 2],
[3, 5]])
</code></pre>
<p>Is there a way to do this in Pandas? I've looked in their docs<br />
<a href="https://pandas.pydata.org/docs/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/indexing.html#boolean-indexing</a><br />
but didn't find it. (I would have posted 4 relevant links, but then the automatic question checks think I've posted code that is not properly formatted.)</p>
<p>This<br />
<a href="https://github.com/pandas-dev/pandas/issues/11290" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/11290</a><br />
github issue is close, but I want to pick entire rows and columns.</p>
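<p>A minimal sketch of what I believe is the closest pandas equivalent, using <code>.loc</code> with one boolean mask per axis (assuming default integer labels):</p>
<pre><code>import numpy as np
import pandas as pd

aa = np.arange(9).reshape(-1, 3)
df = pd.DataFrame(aa)

row_mask = np.array([True, True, False])
col_mask = np.array([True, False, True])

print(df.loc[row_mask, col_mask])
#    0  2
# 0  0  2
# 1  3  5
</code></pre>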
|
<python><pandas>
|
2025-03-10 10:37:34
| 1
| 449
|
Anton
|
79,497,618
| 624,734
|
Send signal (SIGINT) to a C sub-process
|
<p>I use some Python bindings to a C++ based library which starts some processes.</p>
<p>The problem is that while execution is within that code path, the process does not "feel" a SIGINT; as soon as that context is finished, the Python process handles the SIGINT and uses my defined handler. What is the proper procedure to "broadcast" a signal to all running sub-processes?</p>
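<p>A minimal POSIX-only sketch of one way such a broadcast could look, assuming the sub-processes share the parent's process group (an assumption about how the C++ library spawns them):</p>
<pre><code>import os
import signal

def handler(signum, frame):
    print("SIGINT received in parent")

signal.signal(signal.SIGINT, handler)

def broadcast_sigint():
    # Sends SIGINT to every process in the current process group,
    # including this one, so the handler above must be installed first.
    os.killpg(os.getpgrp(), signal.SIGINT)
</code></pre>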
|
<python><signals>
|
2025-03-10 09:54:32
| 2
| 311
|
Adrian Sevcenco
|
79,497,511
| 2,487,988
|
Can't eliminate floating point errors with Decimal
|
<p>I'm using Decimal everywhere to try and get rid of floating point errors, but it's persisting.</p>
<p>Here's my test code</p>
<pre><code>from decimal import Decimal
Demands = [9,12,8,11]
fM = Decimal(10)
alpha = Decimal(.01)
fM = Decimal((1 - Decimal(alpha)) * fM + Decimal(alpha) * Demands[(1) % 4])
print(fM)
</code></pre>
<p>But the output is still</p>
<pre><code>10.02000000000000000041633363
</code></pre>
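<p>For reference, the likely culprit is <code>Decimal(.01)</code>: constructing a Decimal from a float literal captures the binary float's error exactly. A small sketch of the difference, assuming exact decimal input is what is wanted:</p>
<pre><code>from decimal import Decimal

print(Decimal(.01))    # 0.01000000000000000020816681711721685... (binary float captured exactly)
print(Decimal('.01'))  # 0.01 exactly, built from the string
</code></pre>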
|
<python><floating-point><decimal>
|
2025-03-10 09:10:56
| 1
| 503
|
Jeff
|
79,497,191
| 242,042
|
When using mysql.connector.aio how do we enable connection pooling (assuming it is needed)?
|
<p>I am trying to port my old mysql connector code to use the <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-asyncio.html" rel="nofollow noreferrer">asyncio libraries provided by MySQL</a>. When I tried to run it, it said it didn't recognize the <code>pool_name</code> and <code>pool_size</code>.</p>
<p>It didn't explicitly state in the documentation that pooling is not supported. <a href="https://stackoverflow.com/questions/44481826/how-can-i-use-connection-pool-in-aiomysql">AIOMysql does support pooling</a>.</p>
<p>But I was also thinking, if I am running on a single thread, why would I need connection pooling? Maybe that's why it isn't explicitly supported by the MySQL AIO drivers?</p>
<p>There's a <a href="https://forums.mysql.com/read.php?50,724706,724706#msg-724706" rel="nofollow noreferrer">forum question</a> but doesn't really address whether connection pooling is needed or not.</p>
<p><a href="https://stackoverflow.com/a/66222924/242042">https://stackoverflow.com/a/66222924/242042</a> seems to indicate that connection pooling isn't worth it, but it could be specific to AIOMysql.</p>
|
<python><python-asyncio><mysql-connector>
|
2025-03-10 06:37:21
| 1
| 43,097
|
Archimedes Trajano
|
79,497,170
| 16,452,929
|
Generate 4 random numbers between -1.0 and 1.0 such their sum is 1 using python
|
<p>I am trying to generate 4 random numbers between -1.0 and 1.0 such that their sum is 1, using Python.
I initially looked at the dirichlet function in numpy, but that only works for positive numbers.
One other way I can think of is:</p>
<pre><code>import numpy as np

def generate_random_numbers():
numbers = np.random.uniform(-1.0, 1.0, 3)
last_number = 1 - np.sum(numbers)
if -1.0 <= last_number <= 1.0:
return np.append(numbers, last_number)
else:
return generate_random_numbers()
</code></pre>
<p>However, it's not that efficient.
Is there another way to do this?</p>
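<p>For comparison, a minimal sketch of the same rejection idea done in batches, which tends to cut the Python-level overhead (still rejection sampling, just vectorised):</p>
<pre><code>import numpy as np

def generate_random_numbers_batched(batch=1000):
    while True:
        triples = np.random.uniform(-1.0, 1.0, size=(batch, 3))
        last = 1.0 - triples.sum(axis=1)
        ok = (last >= -1.0) & (last <= 1.0)
        if ok.any():
            i = np.argmax(ok)  # index of the first accepted row
            return np.append(triples[i], last[i])
</code></pre>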
|
<python><numpy><random>
|
2025-03-10 06:24:36
| 4
| 517
|
CS1999
|
79,497,118
| 243,031
|
pydantic get field alias based on field value with python object
|
<p>I have a pydantic model as below.</p>
<pre><code>from typing import Annotated
from bson import ObjectId
from pydantic import Field
from pydantic import EmailStr
from pydantic import BaseModel
from pydantic import BeforeValidator
from pydantic import ConfigDict
from pydantic import AwareDatetime
from pydantic import field_validator
# Represents an ObjectId field in the database.
# It will be represented as a `str` on the model so that it can
# be serialized to JSON.
PyObjectId = Annotated[str, BeforeValidator(str)]
class DBTableBase(BaseModel):
# The primary key for the Table, stored as a `str` on the instance.
# This will be aliased to `_id` when sent to MongoDB,
# but provided as `id` in the API requests and responses.
id: PyObjectId | None = Field(alias="_id",
serialization_alias="id",
default=None)
model_config = ConfigDict(
json_encoders={ObjectId: str},
json_schema_extra={
"example": {
"id": "BSON_ID"
}
},
)
class ClientModel(DBTableBase):
first_name: str
last_name: str
</code></pre>
<p>I create an object and print the <code>model_dump</code>:</p>
<pre><code>In [16]: a = ClientModel(_id=ObjectId(), first_name="first_name", last_name="last_name")
In [17]: a
Out[17]: ClientModel(id='67ce7b6190a330f1f5018315', first_name='first_name', last_name='last_name')
In [18]: a.model_dump()
Out[18]:
{'id': '67ce7b6190a330f1f5018315',
'first_name': 'first_name',
'last_name': 'last_name'}
</code></pre>
<p>This converts the <code>_id</code> <code>ObjectId</code> to a <code>str</code> <code>id</code> in <code>model_dump</code>.</p>
<p>Is there any way to do the reverse, i.e. pass <code>id</code> as a <code>str</code> and have <code>_id</code> returned as an <code>ObjectId</code>?</p>
<pre><code>In [19]: b = ClientModel(id='67ce7b6190a330f1f5018315', first_name='first_name', last_name='last_name')
In [20]: b.model_dump()
Out[20]: {'id': None, 'first_name': 'first_name', 'last_name': 'last_name'}
</code></pre>
|
<python><serialization><deserialization><pydantic>
|
2025-03-10 05:53:10
| 0
| 21,411
|
NPatel
|
79,496,903
| 1,088,856
|
How to Call Python Code from Blazor WebApp
|
<p>I am trying to run a python script from a Blazor WebApp using pythonnet. The python script is located in the project but the PyModule.Import call to the script throws an error saying the module can't be found.</p>
<pre><code>Runtime.PythonDLL = @"C:\Users\homepc\AppData\Local\Programs\Python\Python312\python312.dll";
PythonEngine.Initialize();
using (Py.GIL())
{
PyObject pyScript = PyModule.Import(@"PythonCalls");
string result = pyScript.InvokeMethod("test");
}
</code></pre>
<p>How do I call a python script from a Blazor WebApp using pythonnet?</p>
|
<python><asp.net-core><blazor><python.net><blazor-webapp>
|
2025-03-10 02:44:56
| 1
| 963
|
physics90
|
79,496,846
| 7,238,426
|
How to use `@pytest.mark.parametrize` and include an item for the default mock behavior?
|
<p>I am creating a parameterized mock pytest test to test API behaviors. I am trying to simplify the test code by covering both instance-modified behavior, e.g. throwing an exception, and the default behavior, i.e. loading JSON from a file vs. calling the REST API.</p>
<p>How do I add an array entry to represent the "default" mock behavior?</p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize(
("get_nearby_sensors_mock", "get_nearby_sensors_errors"),
[
(AsyncMock(side_effect=Exception), {CONF_BASE: CONF_UNKNOWN}),
(AsyncMock(side_effect=PurpleAirError), {CONF_BASE: CONF_UNKNOWN}),
(AsyncMock(side_effect=InvalidApiKeyError), {CONF_BASE: CONF_INVALID_API_KEY}),
(AsyncMock(return_value=[]), {CONF_BASE: CONF_NO_SENSORS_FOUND}),
# What do I do here?
# (AsyncMock(api.sensors, "async_get_nearby_sensors")) does not work as api is not in scope?
# (AsyncMock(side_effect=None), {}) does not call the default fixture?
(AsyncMock(), {}),
],
)
async def test_validate_coordinates(
hass: HomeAssistant,
mock_aiopurpleair,
api,
get_nearby_sensors_mock,
get_nearby_sensors_errors,
) -> None:
"""Test validate_coordinates errors."""
with (
patch.object(api, "async_check_api_key"),
patch.object(api.sensors, "async_get_nearby_sensors", get_nearby_sensors_mock),
):
result: ConfigValidation = await ConfigValidation.async_validate_coordinates(
hass, TEST_API_KEY, TEST_LATITUDE, TEST_LONGITUDE, TEST_RADIUS
)
assert result.errors == get_nearby_sensors_errors
if result.errors == {}:
assert result.data is not None
else:
assert result.data is None
</code></pre>
<p>How do I add a parameter for the "default behavior" of <code>patch.object(api.sensors, "async_get_nearby_sensors")</code> that will use the fixture to load data from canned JSON file?</p>
<p>Why mock; <code>async_validate_coordinates()</code> calls <code>async_check_api_key()</code> that needs to be mocked to pass, and <code>async_get_nearby_sensors()</code> that is mocked with a fixture to return data from a JSON file.</p>
<p>For ref <a href="https://github.com/ptr727/home-assistant-core/blob/purpleair/tests/components/purpleair/conftest.py" rel="nofollow noreferrer">this</a> is the <code>conftest.py</code> file.</p>
|
<python><mocking><pytest>
|
2025-03-10 01:27:22
| 1
| 623
|
PieterV
|
79,496,831
| 1,054,424
|
How to efficiently plot real-time rolling data with PyQtGraph
|
<p>What is an efficient method for plotting data with <code>pyqtgraph</code> for a rolling plot when samples arrive one at at time from an upstream process?</p>
<p>Based on some experimentation, plotting each sample as it comes in is very slow. A workaround is to buffer the data prior to plotting.</p>
<p>Below is some sample code. The <code>BUF_SZ</code> parameter can be changed from between {1, 5, 10, 20} to show how buffering ( <code>BUF_SZ</code> > 1) compares to not buffering (<code>BUF_SZ</code> = 1).</p>
<p>Is there a method to plot the data that does not require buffering and still provides the performance of <code>pyqtgraph</code>?</p>
<pre><code>import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtWidgets
from collections import deque
import random
import signal
import numpy as np
# Configure global appearance: black background, white text
pg.setConfigOption('background', 'k')
pg.setConfigOption('foreground', 'w')
# Handle keyboard interrupt
signal.signal(signal.SIGINT, signal.SIG_DFL)
# Global variables for plot elements and data buffers
window_size = 500 # Plot history
data = None
win = None
data_plot = None
data_curves = None
buf_cnt = 0
app = None
def init_plot(update_interval=None):
"""Initialize the pyqtgraph window and plots"""
global data
global win, data_plot
global data_curves, app
global BUF_SZ
if update_interval:
BUF_SZ = update_interval
# Initialize data buffers
data = {
"x": deque([0]*window_size, maxlen=window_size),
"y": deque([0]*window_size, maxlen=window_size),
"z": deque([0]*window_size, maxlen=window_size)
}
# Create the Qt application if not already created
app = QtWidgets.QApplication.instance()
if app is None:
app = QtWidgets.QApplication([])
# Create a window using GraphicsLayoutWidget
win = pg.GraphicsLayoutWidget(title="Real-Time Data")
win.resize(900, 600)
# Plot
data_plot = win.addPlot(title="Data")
data_plot.showGrid(x=True, y=True, alpha=0.3)
data_curves = {
"x": data_plot.plot(pen=pg.mkPen(color=(31, 119, 180), width=2), name=' X'), # Blue
"y": data_plot.plot(pen=pg.mkPen(color=(255, 127, 14), width=2), name=' Y'), # Orange
"z": data_plot.plot(pen=pg.mkPen(color=(44, 160, 44), width=2), name=' Z') # Green
}
# Show the window
win.show()
return app
def update_plot(in_data):
global buf_cnt
# Append new data to buffers
data["x"].append(in_data["x"])
data["y"].append(in_data["y"])
data["z"].append(in_data["z"])
# Buffer data to reduce plotting overhead
buf_cnt += 1
if buf_cnt % BUF_SZ == 0:
# Update accelerometer and gyroscope curves
data_curves["x"].setData(list(data["x"]))
data_curves["y"].setData(list(data["y"]))
data_curves["z"].setData(list(data["z"]))
# Process pending Qt events so the window updates immediately
QtWidgets.QApplication.processEvents()
def sim_data():
x = random.uniform(-0.02, 0.02)
y = random.uniform(0.2, 0.3)
z = random.uniform(0.98, 1.02)
return {
"x": x,
"y": y,
"z": z,
}
if __name__ == "__main__":
import time
# !
# ! Setting BUF_SZ to 1 (update 1 sample at a time) slows down plot
# !
BUF_SZ = 5 # Buffer N samples before plotting
app = init_plot(update_interval=BUF_SZ)
rate = 10000.0 # Update rate in Hz
num_samples = 500 # Set a finite number of samples
sample_count = 0 # Track number of generated samples
timer = QtCore.QTimer()
t0 = time.time()
def generate_data():
"""Generate data """
global sample_count
if sample_count >= num_samples:
timer.stop() # Stop after reaching limit
print(f"Finished generating {num_samples} samples.")
elapsed = time.time() - t0
print(f"Elapsed: {elapsed:0.2f} sec")
exit(1)
update_plot(sim_data())
sample_count += 1
timer.timeout.connect(generate_data)
timer.start(int(1000 / rate)) # Interval in ms
app.exec_() # Start app
</code></pre>
|
<python><pyqt5><pyqtgraph>
|
2025-03-10 01:10:21
| 1
| 1,498
|
BigBrownBear00
|
79,496,750
| 3,696,153
|
Plugin - passing globals to a module
|
<p>I have a Python plugin/module system that I am trying to create.</p>
<p>I have followed a previous example and it seems to work well, except for one thing.</p>
<p>The basic 'manager' does the following:</p>
<pre><code> NAME_PY='myplugin.py'
def load_plugin( self, dirname ):
filename = os.path.join( dirname, NAME_PY )
module_name = filename[:-3]
file_path = os.path.join(self.plugin_dir, filename)
spec = importlib.util.spec_from_file_location(module_name, file_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
</code></pre>
<p>All of this works and works nicely. I am at the next step.
I am trying to model this Python construct.</p>
<pre><code>if __name__ == '__main__':
print("This is the main module")
else:
print("this is not the main python file, it is being imported")
</code></pre>
<p>In my solution I want to use that same idiom and do this:</p>
<pre><code>if __name__ == 'plugin':
print("Loaded from a plugin"
if __name__ == '__main__':
print("This is a plugin, it is not an standalone python file")
</code></pre>
<p>As I understand it, <code>__name__</code> is a member of the dictionary globals().
And I just need to set that before I call: spec.loader.exec_module(module)</p>
<p>But I cannot seem to find the 'global' thing I can set my <code>__name__</code> to.</p>
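<p>A minimal sketch of one way this could work, assuming the right place to change it is the module object itself before execution (since <code>exec_module</code> runs the code with <code>module.__dict__</code> as its globals):</p>
<pre><code>        spec = importlib.util.spec_from_file_location(module_name, file_path)
        module = importlib.util.module_from_spec(spec)
        module.__name__ = 'plugin'          # visible as __name__ inside the plugin code
        spec.loader.exec_module(module)
</code></pre>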
<p>Suggestions?</p>
|
<python><plugins><global>
|
2025-03-09 23:17:42
| 1
| 798
|
user3696153
|
79,496,711
| 6,440,589
|
OpenCV: understanding the filterByArea parameter used in SimpleBlobDetector
|
<p>I am trying to detect a large stain using OpenCV's <code>SimpleBlobDetector</code> following <a href="https://stackoverflow.com/a/54410195/6440589">this SO answer</a>.</p>
<p>Here is the input image:</p>
<p><a href="https://i.sstatic.net/nSs0aS1P.png?s=256" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSs0aS1P.png?s=256" alt="enter image description here" /></a></p>
<p>I first tried working with <strong>params.filterByArea = False</strong>, which detected the large black stain:</p>
<p><a href="https://i.sstatic.net/Olxdrcy1.png?s=256" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Olxdrcy1.png?s=256" alt="enter image description here" /></a></p>
<p>However, smaller spots also ended up being detected. I therefore toggled <code>params.filterByArea = True</code> hoping to enforce a criterion on object area.</p>
<p>However, when setting <strong>params.filterByArea = True</strong> with <code>params.minArea = 10</code> the largest stain is no longer identified:</p>
<p><a href="https://i.sstatic.net/8MutkRIT.png?s=256" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MutkRIT.png?s=256" alt="enter image description here" /></a></p>
<p>I tried using other <code>minArea</code> parameters to no avail, even trying a <code>minArea</code> of 0, which should be equivalent to no filtering at all.</p>
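<p>One thing worth checking (an assumption, since <code>maxArea</code> is not shown in the question): <code>SimpleBlobDetector</code> also applies an upper area bound when <code>filterByArea</code> is enabled, so a large stain can be dropped even though <code>minArea</code> is tiny. A minimal sketch:</p>
<pre><code>import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 10
params.maxArea = 500000   # raise the upper bound so very large blobs are kept

detector = cv2.SimpleBlobDetector_create(params)
</code></pre>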
<p>What am I missing here?</p>
|
<python><opencv><area>
|
2025-03-09 22:34:57
| 1
| 4,770
|
Sheldon
|
79,496,431
| 8,830,612
|
Azure ML - can't use the model path
|
<p>The ultimate goal is <strong>to publish my model to production</strong>. After a lot of iterations and no explicit errors, I decided to do a line-by-line check. I'm using <strong>Azure ML Notebooks</strong>.</p>
<p>Everything works until I try to deploy the model to an endpoint.
I concluded that the root cause lies in the <code>init()</code> function, at the <code>model_path</code> line, <strong>which can't access the model path and files</strong>.</p>
<p>Setting up the environment and correct access:</p>
<pre><code>subscription_id = "xxxxxxxxxxx"
resource_group = "xxxxxxxx"
workspace = "xxxxxxxxxx"
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
ws = ml_client.workspaces.get("xxxxxxxxxx")
print(ws.location, ":", ws.resource_group)
</code></pre>
<p>I'm getting confirmation that I'm in the right location and correct resource. Next, I'm registering the model.</p>
<pre><code>from azure.ai.ml.entities import Model
from azure.ai.ml.constants import ModelType
cloud_model = Model(
path="model.pt",
name="model_name",
type="custom",
description="Model created from cloud path."
)
ml_client.models.create_or_update(cloud_model)
latest_model_version = max([int(m.version) for m in ml_client.models.list(name="model_name")])
registered_model = ml_client.models.get(name="model_name", version=latest_model_version)
</code></pre>
<p><strong>The code above executes correctly</strong>: the model has been registered and I can access it. The next block of code is the one where the issue comes from.</p>
<pre><code>from ultralytics import YOLO
def init():
"""
This function is called when the container is initialized/started, typically after create/update of the deployment.
You can write the logic here to perform init operations like caching the model in memory
"""
global model
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
model_path = registered_model.path
# deserialize the model file back into a sklearn model
model = YOLO(model_path)
logging.info("Init complete")
init()
</code></pre>
<p>The error:</p>
<pre><code>---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[10], line 1
----> 1 init()
Cell In[9], line 39, in init()
36 model_path = registered_model_path
37 # Look for the model file
38 # deserialize the model file back into a sklearn model
---> 39 model = YOLO(model_path)
40 logging.info("Init complete")
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/ultralytics/models/yolo/model.py:23, in YOLO.__init__(self, model, task, verbose)
20 self.__dict__ = new_instance.__dict__
21 else:
22 # Continue with default YOLO initialization
---> 23 super().__init__(model=model, task=task, verbose=verbose)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/ultralytics/engine/model.py:148, in Model.__init__(self, model, task, verbose)
146 self._new(model, task=task, verbose=verbose)
147 else:
--> 148 self._load(model, task=task)
150 # Delete super().training for accessing self.model.training
151 del self.training
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/ultralytics/engine/model.py:291, in Model._load(self, weights, task)
288 weights = checks.check_model_file_from_stem(weights) # add suffix, i.e. yolo11n -> yolo11n.pt
290 if Path(weights).suffix == ".pt":
--> 291 self.model, self.ckpt = attempt_load_one_weight(weights)
292 self.task = self.model.args["task"]
293 self.overrides = self.model.args = self._reset_ckpt_args(self.model.args)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/ultralytics/nn/tasks.py:905, in attempt_load_one_weight(weight, device, inplace, fuse)
903 def attempt_load_one_weight(weight, device=None, inplace=True, fuse=False):
904 """Loads a single model weights."""
--> 905 ckpt, weight = torch_safe_load(weight) # load ckpt
906 args = {**DEFAULT_CFG_DICT, **(ckpt.get("train_args", {}))} # combine model and default args, preferring model args
907 model = (ckpt.get("ema") or ckpt["model"]).to(device).float() # FP32 model
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/ultralytics/nn/tasks.py:832, in torch_safe_load(weight, safe_only)
830 ckpt = torch.load(f, pickle_module=safe_pickle)
831 else:
--> 832 ckpt = torch.load(file, map_location="cpu")
834 except ModuleNotFoundError as e: # e.name is missing module name
835 if e.name == "models":
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/ultralytics/utils/patches.py:86, in torch_load(*args, **kwargs)
83 if TORCH_1_13 and "weights_only" not in kwargs:
84 kwargs["weights_only"] = False
---> 86 return _torch_load(*args, **kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/torch/serialization.py:1425, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
1422 if "encoding" not in pickle_load_args.keys():
1423 pickle_load_args["encoding"] = "utf-8"
-> 1425 with _open_file_like(f, "rb") as opened_file:
1426 if _is_zipfile(opened_file):
1427 # The zipfile reader is going to advance the current file position.
1428 # If we want to actually tail call to torch.jit.load, we need to
1429 # reset back to the original position.
1430 orig_position = opened_file.tell()
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/torch/serialization.py:751, in _open_file_like(name_or_buffer, mode)
749 def _open_file_like(name_or_buffer, mode):
750 if _is_path(name_or_buffer):
--> 751 return _open_file(name_or_buffer, mode)
752 else:
753 if "w" in mode:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/torch/serialization.py:732, in _open_file.__init__(self, name, mode)
731 def __init__(self, name, mode):
--> 732 super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'azureml:/subscriptions/xxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxxxxx/workspaces/xxxxxxxxxx/datastores/workspaceblobstore/paths/LocalUpload/xxxxxxxxxxxxxxxx/model.pt'
</code></pre>
<p>I really can't wrap my head around how, in the previous lines, I can access the model and its path, but I can't use it further down the road.</p>
<p><strong>This is a serious showstopper for publishing my model, as the <code>init()</code> function is part of the scoring script, which can't access the model.</strong></p>
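<p>For context, a minimal sketch of what the scoring script's <code>init()</code> is usually expected to do instead of using the registry URI: resolve the file under <code>AZUREML_MODEL_DIR</code>, as the template comment suggests (the exact sub-folder layout inside that directory is an assumption here):</p>
<pre><code>import os
from ultralytics import YOLO

def init():
    global model
    # AZUREML_MODEL_DIR points at the local folder the deployment mounted the model into.
    model_dir = os.getenv("AZUREML_MODEL_DIR", "")
    model_path = os.path.join(model_dir, "model.pt")  # assumed relative location of the registered file
    model = YOLO(model_path)
</code></pre>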
|
<python><azure><azure-machine-learning-service><azureml-python-sdk>
|
2025-03-09 18:57:36
| 1
| 518
|
default_settings
|
79,496,388
| 242,042
|
How do I find classes that are decorated by a given decorator in Python?
|
<p>I am trying to do something like the Spring Boot @Component scan, to search for classes that are decorated with Temporal's <code>@workflow.defn</code> and <code>@activity.defn</code> so I can automatically register them with the worker without explicit plumbing.</p>
<p>Do Python decorators provide a facility for a lookup like that?</p>
<pre><code>importlib.find_all_classes_decorated_by(workflow.defn)
</code></pre>
<p>Just to add, regarding the use of <code>inspect</code>: a custom decorator I wrote, as shown here:</p>
<pre class="lang-py prettyprint-override"><code>@kafka_listener(topic="foo")
def consume_foo(rec: ConsumerRecord[Any, Any]):
print(rec)
</code></pre>
<p>does not show anything, when I list its members, that indicates either the <code>topic="foo"</code> or the <code>kafka_listener</code> decorator (filtering out <code>__doc__</code>, <code>__globals__</code> and <code>__builtins__</code>):</p>
<pre class="lang-none prettyprint-override"><code>consume_foo <function consume_foo at 0x00000189D7EB58A0>
~ __annotations__ {'rec': aiokafka.structs.ConsumerRecord[typing.Any, typing.Any]}
~ __call__ <method-wrapper '__call__' of function object at 0x00000189D7EB58A0>
~ __class__ <class 'function'>
~ __closure__ None
~ __code__ <code object consume_foo at 0x00000189D7D5C300, file "C:\dh\template-python\iblum_template\template_servicer.py", line 14>
~ __defaults__ None
~ __delattr__ <method-wrapper '__delattr__' of function object at 0x00000189D7EB58A0>
~ __dict__ {}
~ __dir__ <built-in method __dir__ of function object at 0x00000189D7EB58A0>
~ __eq__ <method-wrapper '__eq__' of function object at 0x00000189D7EB58A0>
~ __format__ <built-in method __format__ of function object at 0x00000189D7EB58A0>
~ __ge__ <method-wrapper '__ge__' of function object at 0x00000189D7EB58A0>
~ __get__ <method-wrapper '__get__' of function object at 0x00000189D7EB58A0>
~ __getattribute__ <method-wrapper '__getattribute__' of function object at 0x00000189D7EB58A0>
~ __getstate__ <built-in method __getstate__ of function object at 0x00000189D7EB58A0>
~ __gt__ <method-wrapper '__gt__' of function object at 0x00000189D7EB58A0>
~ __hash__ <method-wrapper '__hash__' of function object at 0x00000189D7EB58A0>
~ __init__ <method-wrapper '__init__' of function object at 0x00000189D7EB58A0>
~ __init_subclass__ <built-in method __init_subclass__ of type object at 0x00007FFA7BC4CB80>
~ __kwdefaults__ None
~ __le__ <method-wrapper '__le__' of function object at 0x00000189D7EB58A0>
~ __lt__ <method-wrapper '__lt__' of function object at 0x00000189D7EB58A0>
~ __module__ iblum_template.template_servicer
~ __name__ consume_foo
~ __ne__ <method-wrapper '__ne__' of function object at 0x00000189D7EB58A0>
~ __new__ <built-in method __new__ of type object at 0x00007FFA7BC4CB80>
~ __qualname__ consume_foo
~ __reduce__ <built-in method __reduce__ of function object at 0x00000189D7EB58A0>
~ __reduce_ex__ <built-in method __reduce_ex__ of function object at 0x00000189D7EB58A0>
~ __repr__ <method-wrapper '__repr__' of function object at 0x00000189D7EB58A0>
~ __setattr__ <method-wrapper '__setattr__' of function object at 0x00000189D7EB58A0>
~ __sizeof__ <built-in method __sizeof__ of function object at 0x00000189D7EB58A0>
~ __str__ <method-wrapper '__str__' of function object at 0x00000189D7EB58A0>
~ __subclasshook__ <built-in method __subclasshook__ of type object at 0x00007FFA7BC4CB80>
~ __type_params__ ()
</code></pre>
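<p>For illustration, a minimal sketch of the registry-style pattern considered as a fallback, where the decorator itself tags the function so <code>inspect</code> can find it later (this is a hand-rolled decorator, not the Temporal ones):</p>
<pre><code>import inspect

def kafka_listener(topic):
    def wrap(func):
        func.__kafka_topic__ = topic   # marker attribute the scanner can look for
        return func
    return wrap

def find_listeners(module):
    """Return (name, function, topic) for every tagged function in a module."""
    return [
        (name, obj, obj.__kafka_topic__)
        for name, obj in inspect.getmembers(module, inspect.isfunction)
        if hasattr(obj, "__kafka_topic__")
    ]
</code></pre>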
|
<python><python-decorators>
|
2025-03-09 18:22:54
| 1
| 43,097
|
Archimedes Trajano
|
79,496,351
| 2,057,516
|
How to include code-block headers in sticky-scrolling for python code?
|
<p>Is there a way to tell VS Code to include every code block header in the sticky scrolling feature when scrolling through python code? Currently, it only does class and method definitions. I'd like it to include loops, conditionals, try, etc blocks.</p>
<p>For example, it would be nice to see everything between lines 124 and 236 (i.e. between <code>def set_default_columns...</code> and <code>if related_name...</code>) in this:</p>
<p><a href="https://i.sstatic.net/DaLh1iW4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DaLh1iW4.png" alt="enter image description here" /></a></p>
<p>Is that possible?</p>
<p>My max sticky scroll is set to 5:</p>
<p><a href="https://i.sstatic.net/grfjOAIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/grfjOAIz.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><scroll><settings><sticky>
|
2025-03-09 17:52:27
| 1
| 1,225
|
hepcat72
|
79,496,308
| 6,423,456
|
How can I handle initial settings with Pydantic Settings?
|
<p>I have an app that is largely configured by environment variables.
I use Pydantic Settings to define the settings available, and validate them.
I have an initial set of settings, and the regular app settings.</p>
<p>The initial settings are ones that should not fail validation, and contain essential settings for starting the app.</p>
<p>For example, when my app starts up, if the regular <code>Settings()</code> can't be initialized because something in them failed validation, I still want to be able to send the error to Sentry. For that, I need <code>SENTRY_DSN</code> to configure Sentry. <code>SENTRY_DSN</code> can't be part of the regular settings, because if something unrelated in <code>Settings</code> fails validation, I won't have access to <code>SENTRY_DSN</code> either.</p>
<p>Right now, my settings look like this:</p>
<pre class="lang-py prettyprint-override"><code>class InitialSettings(BaseSettings):
model_config = SettingsConfigDict(
env_file="settings.env",
env_file_encoding="utf-8",
extra="ignore",
env_ignore_empty=True,
env_nested_delimiter="__",
case_sensitive=True,
)
SENTRY_DSN: Annotated[
Optional[str],
Field(None),
]
class Settings(BaseSettings):
model_config = SettingsConfigDict(
env_file="settings.env",
env_file_encoding="utf-8",
extra="ignore",
env_ignore_empty=True,
env_nested_delimiter="__",
case_sensitive=True,
)
STORAGE: Annotated[
LocalStorageSettings | S3StorageSettings,
Field(..., discriminator="STORAGE_TYPE"),
]
DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)]
...
</code></pre>
<p>This works. When my app starts up, I first initialize <code>InitialSettings()</code>, and then try to initialize <code>Settings()</code>. If <code>Settings()</code> fails, I can still use the <code>SENTRY_DSN</code> setting to send the error to Sentry.</p>
<p>The issue comes when I try to have both settings use the same env file (<code>settings.env</code>), AND enable the <code>extra="forbid"</code> feature on <code>Settings()</code>.</p>
<p>I like the idea of having <code>extra="forbid"</code> enabled, but that also means that if I enable it on <code>Settings()</code>, it will always fail, because the env file will contain an entry for <code>SENTRY_DSN</code>, which <code>Settings</code> doesn't know about.</p>
<p>To fix this, I tried to add InitialSettings to Settings like this:</p>
<pre class="lang-py prettyprint-override"><code>class Settings(BaseSettings):
model_config = SettingsConfigDict(
env_file="settings.env",
env_file_encoding="utf-8",
extra="forbid",
env_ignore_empty=True,
env_nested_delimiter="__",
case_sensitive=True,
)
STORAGE: Annotated[
LocalStorageSettings | S3StorageSettings,
Field(..., discriminator="STORAGE_TYPE"),
]
DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)]
INIT: Annotated[InitialSettings, Field(default_factory=InitialSettings)]
...
</code></pre>
<p>Now <code>Settings</code> should know about all the settings defined in <code>InitialSettings</code>, and if there's any extra settings in the env file that aren't defined in either class, it should fail.</p>
<p>This almost works.</p>
<p>The problem is that when you call <code>InitialSettings()</code>, the <code>SENTRY_DSN</code> in the env file is expected to just be called <code>SENTRY_DSN</code>. But when you call <code>Settings()</code>, because <code>InitialSettings</code> is nested under <code>INIT</code>, it expects the Sentry variable to be called <code>INIT__SENTRY_DSN</code>.</p>
<p>How do I configure Pydantic Settings so that all settings under <code>InitialSettings</code> always look for <code>SENTRY_DSN</code>, no matter if they are initialized using <code>InitialSettings()</code>, or <code>Settings()</code>?</p>
<p>Note: I still want the other nested settings classes under <code>Settings</code>, like <code>STORAGE</code>, to work the same - be prefixed with <code>STORAGE__</code> in the env file.</p>
|
<python><pydantic-v2><pydantic-settings>
|
2025-03-09 17:29:29
| 2
| 2,774
|
John
|
79,496,246
| 22,213,065
|
How to Capture a Sequence of High-Quality PDF Frames from a Website (Without Screen Recording)?
|
<p>In Firefox, I can take very high-quality screenshots of a webpage by using Ctrl + P and saving the page as a PDF. This method preserves the text, images, and code in excellent resolution.</p>
<p>Now, I have created a movable bar chart race in Flourish Studio and want to convert it into a high-quality video. However, I do not want to use screen recording tools.</p>
<p>My Goal:<br />
I want to capture 30 high-resolution PDF frames from the website at different points in time (like a video sequence). Ideally, I need a tool or script that can automate the process of saving multiple PDFs from the website as it plays the animation.</p>
<p>What I Tried:<br />
I attempted to write a Python script that:</p>
<p>Opens the local HTML file of my Flourish chart in Firefox using Selenium.<br />
Waits for the page to load.<br />
Listens for the F1 key and triggers Ctrl + P to print the page as a PDF.<br />
However, the script does not save the PDF file in the output folder. I'm not sure why.</p>
<p>Here is my code:</p>
<pre><code>import time
import keyboard
from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options
# Define paths
html_file_path = r"E:\Desktop\New folder (4)\20250309101616805.html"
geckodriver_path = r"E:\Desktop\New folder (4)\geckodriver.exe"
save_path = r"E:\Desktop\New folder (4)\New folder\output.pdf" # Save PDF location
# Set up Firefox options
options = Options()
options.set_preference("print.always_print_silent", True) # Silent printing
options.set_preference("print.show_print_progress", False) # Hide progress
options.set_preference("print.print_to_file", True) # Print to file
options.set_preference("print.save_print_settings", True) # Save settings
options.set_preference("print.printer_PDF", "Save as PDF") # Set printer
options.set_preference("print.print_to_file", True) # Enable saving print output to file
options.set_preference("print.print_file_name", save_path) # Define the save location for PDF
# Start WebDriver
service = Service(executable_path=geckodriver_path)
driver = webdriver.Firefox(service=service, options=options)
# Open the HTML file
driver.get("file:///" + html_file_path)
# Wait for the page to load
time.sleep(2)
print("Press F1 to save as PDF.")
# Listen for F1 key press
while True:
if keyboard.is_pressed('F1'):
print("F1 pressed, saving as PDF...")
# Trigger print command (Ctrl + P)
body = driver.find_element(By.TAG_NAME, 'body')
body.send_keys(Keys.CONTROL + 'p')
# Wait for the print dialog to process
time.sleep(2)
print("PDF should be saved to:", save_path)
break
# Close browser
driver.quit()
</code></pre>
<p>My Questions:</p>
<p>Why is my script not saving the PDF in the specified output folder?</p>
<p>Is there a better way to automate capturing 30 sequential PDFs from the website at different animation frames?</p>
<p>Is there any tool or script that can generate a sequence of PDFs (like 30 frames per second) from a webpage?</p>
<p>Important:</p>
<p><strong>I do NOT want to use screen recording tools.</strong></p>
<p>I only need high-quality PDF frames that can later be converted into a video.</p>
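<p>For reference, a hedged sketch of an alternative I have not verified end-to-end: Selenium 4 exposes a print-to-PDF command (<code>driver.print_page</code>) that returns the page as a base64-encoded PDF, which avoids driving the Ctrl + P dialog entirely. The frame pacing via <code>time.sleep</code> below is an assumption:</p>
<pre><code>import base64
import time
from selenium import webdriver
from selenium.webdriver.common.print_page_options import PrintOptions

html_file_path = r"E:\Desktop\New folder (4)\20250309101616805.html"
driver = webdriver.Firefox()
driver.get("file:///" + html_file_path)

options = PrintOptions()
for i in range(30):
    pdf_b64 = driver.print_page(options)        # returns the rendered page as a base64 PDF string
    with open(f"frame_{i:03d}.pdf", "wb") as f:
        f.write(base64.b64decode(pdf_b64))
    time.sleep(1 / 30)                          # crude frame pacing while the animation keeps playing

driver.quit()
</code></pre>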
<p>Any help would be greatly appreciated!</p>
|
<python><firefox><playwright><screen-capture>
|
2025-03-09 15:52:02
| 1
| 781
|
Pubg Mobile
|
79,496,136
| 6,293,038
|
plotting vertical lines on pandas line plot with multiindex x axis
|
<p>I have a dataframe whose index is a MultiIndex where level 0 is the date and level 1 is the rank. Rank starts with 1 and ends at 100, but there can be a variable number of ranks in between, as below.
Here is the data:</p>
<pre><code>dx = pd.DataFrame({
"date": [
pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-24'),
pd.to_datetime('2025-02-25'), pd.to_datetime('2025-02-25'), pd.to_datetime('2025-02-25'),
pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26')
],
"rank": [0.0,1.0,2.0,100.0,0.0,1.0,100.0,0.0,1.0,2.0,3.0,100.0],
"value": [2.3, 2.5, 2.4, 2.36, 2.165, 2.54, 2.34, 2.12, 2.32, 2.43, 2.4, 2.3]
})
dx.set_index(["date", "rank"], inplace=True)
</code></pre>
<p>I want to plot this df, and <code>dx.plot()</code> works fine, creating a reasonable x-axis. However, I want to add a grid or vertical lines at all the <code>rank=1</code> positions, and at all the <code>rank=100</code> positions (in a different color).</p>
<p>I tried this :</p>
<pre><code>
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(30, 5))
dx.plot(ax=axes[0])
axes[0].tick_params('x', labelrotation=90)
xs = [x for x in dx.index if x[1]==0]
for xc in xs:
axes[0].axvline(x=xc, color='blue', linestyle='-')
</code></pre>
<p>but get this error:</p>
<pre><code>ConversionError: Failed to convert value(s) to axis units: (Timestamp('2025-02-24 00:00:00'), 0.0)
</code></pre>
<p>I also want to show x labels only for <code>rank=0</code>, and not all of them. Currently, if I set the label rotation to 90 it results in that, but I am not sure this is the best way to ensure it.</p>
<pre><code>axes[0].tick_params('x', labelrotation=90)
</code></pre>
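<p>In case it clarifies what I am after, here is the closest I have gotten by treating the MultiIndex positionally (a sketch; I am assuming pandas plots a MultiIndex at integer positions 0..len-1, which seems consistent with the ConversionError above):</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 5))
dx.plot(ax=ax)

# convert index tuples to their positional offsets, since axvline needs axis units
for pos, (date, rank) in enumerate(dx.index):
    if rank == 0:
        ax.axvline(x=pos, color='blue', linestyle='-')
    elif rank == 100:
        ax.axvline(x=pos, color='red', linestyle='-')

# show tick labels only at rank == 0
ticks = [pos for pos, (_, rank) in enumerate(dx.index) if rank == 0]
labels = [str(date.date()) for (date, rank) in dx.index if rank == 0]
ax.set_xticks(ticks)
ax.set_xticklabels(labels, rotation=90)
plt.show()
</code></pre>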
<p>So I am looking for 2 answers:</p>
<ol>
<li>How to set vertical lines at specific points with this type of multiindex</li>
<li>How to ensure only certain x labels show on the chart</li>
</ol>
|
<python><pandas><dataframe><matplotlib><multi-index>
|
2025-03-09 15:24:22
| 1
| 1,105
|
dayum
|
79,496,120
| 2,572,994
|
Importing umap hangs and exits silently
|
<p>I am running some application code that relies on <code>umap</code>. However, when trying to import the module, it simply hangs and suddenly dies without any error message:</p>
<pre class="lang-py prettyprint-override"><code>print("Importing umap")
import umap
print("Imported umap") # Never reached.
</code></pre>
<p>Unfortunately, I am unable to reproduce it with a minimal script that simply imports the module. I debugged a bit into it and it dies in different places. My question is: how can I debug this?</p>
<p>Running this on <code>macOS Sequoia 15.0.1</code> with <code>Python 3.11</code> (tried it with 3.9 as well though). It seems to be some combination of OS + specific setup, but my peers don't run into this when running the same code.</p>
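<p>One thing I plan to try (a sketch; <code>faulthandler</code> is in the standard library and dumps a traceback if the interpreter dies hard, e.g. inside a native extension) is:</p>
<pre class="lang-py prettyprint-override"><code>import faulthandler
faulthandler.enable()   # dump the Python stack if the process crashes in native code

print("Importing umap")
import umap
print("Imported umap")
</code></pre>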
|
<python><umap>
|
2025-03-09 15:12:30
| 0
| 4,041
|
Ivaylo Toskov
|
79,496,102
| 1,719,931
|
SQLAlchemy use "IN" to select pairwise correspondence
|
<p>Consider the following DB:</p>
<pre><code>from sqlalchemy import String, select, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session
class Base(DeclarativeBase):
pass
class User(Base):
    __tablename__ = "user_account"
    # a primary key is required for the model to map; assumed here so the example runs
    id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(30))
surname: Mapped[str] = mapped_column(String(30))
def __repr__(self):
return f"User(name={self.name!r}, surname={self.surname!r})"
engine = create_engine("sqlite+pysqlite:///test3.sqlite3", echo=True)
Base.metadata.create_all(engine)
with Session(engine) as session:
user1 = User(
name="Mario",
surname="Rossi"
)
user2 = User(
name="Mario",
surname="Bianchi",
)
user3 = User(
name="Giovanni",
surname="Bianchi",
)
session.add_all([user1, user2, user3])
session.commit()
</code></pre>
<p>Now suppose I have a list of users I want to find:</p>
<pre><code>users = [("Mario", "Rossi"), ("Giovanni", "Bianchi")]
</code></pre>
<p>Then I would run:</p>
<pre><code>names = [name for name, _ in users]
surnames = [surname for _, surname in users]
with Session(engine) as session:
stmt = select(User).where(User.name.in_(names)).where(User.surname.in_(surnames))
print(session.execute(stmt).scalars().all())
</code></pre>
<p>which returns:</p>
<pre><code>[User(name='Mario', surname='Rossi'), User(name='Mario', surname='Bianchi'), User(name='Giovanni', surname='Bianchi')]
</code></pre>
<p>but "Mario Bianchi" was not in the list of input users I had.</p>
<p>How can I concatenate IN statements in order to select only pairwise correspondence?</p>
<p>That is, if I have <code>varA IN (el1, el2) AND varB IN (el3, el4)</code>, I do not want to select entries with <code>varA==el1 AND varB==el4</code>.</p>
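<p>For reference, the closest I have gotten is to build one AND per pair and OR them together (a sketch; I have also seen <code>tuple_(User.name, User.surname).in_(users)</code> suggested, but I believe that relies on the backend supporting row values):</p>
<pre><code>from sqlalchemy import and_, or_, select

users = [("Mario", "Rossi"), ("Giovanni", "Bianchi")]

# one AND per (name, surname) pair, OR-ed together, so only exact pairs match
pairwise = or_(*[and_(User.name == n, User.surname == s) for n, s in users])

with Session(engine) as session:
    stmt = select(User).where(pairwise)
    print(session.execute(stmt).scalars().all())
</code></pre>
<p>Is there a more idiomatic way to express this, ideally keeping the IN-style syntax?</p>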
|
<python><sqlite><select><sqlalchemy><sql-in>
|
2025-03-09 14:59:55
| 1
| 5,202
|
robertspierre
|
79,496,092
| 5,688,082
|
Python's predicate composition
|
<p>I would like to implement something similar to this OCaml in Python:</p>
<pre class="lang-ml prettyprint-override"><code>let example = fun v opt_n ->
let fltr = fun i -> i mod 2 = 0 in
let fltr = match opt_n with
| None -> fltr
| Some n -> fun i -> (i mod n = 0 && fltr n) in
fltr v
</code></pre>
<p>This is easily composable/extendable: I can add as many predicates as I want at runtime.
This is of course a simplified example; in real life I have many optional inclusion/exclusion sets, and the predicates check for membership.</p>
<p>Doing this the naive way in Python fails:</p>
<pre><code>def example(v: int, opt_n=None):
"""
doesn't work!
"""
# doesn't need to be a lambda, an explicitely defined function fails too
fltr = lambda i: i % 2 == 0
if opt_n is not None:
# fails miserably -> maximum recursion depth exceeded
fltr = lambda i: fltr(i) and i % opt_n == 0
return fltr(v)
example(10, 5)
</code></pre>
<p>This is annoying because it seems that since <code>fltr</code> can only appear once on the left side of the assignment, I have to inline the initial <code>fltr</code> in every case afterward:</p>
<pre><code>def example(v: int, opt_n=None, opt_m=None):
"""annoying but works"""
fltr = None
# some inital filters
pred_0 = lambda _: True # do some real checks ...
pred_1 = lambda _: True # do some real checks ...
if opt_n is not None:
# fltr is inlined, only appears on left side, now it works
        fltr = lambda i: pred_0(i) and pred_1(i) and i % opt_n == 0
if opt_m is not None:
# much repetition
        fltr = lambda i: pred_0(i) and pred_1(i) and i % opt_m == 0
if fltr is None:
# inlined again
fltr = lambda i: pred_0(i) and pred_1(i)
return fltr(v)
</code></pre>
<p>Is there any way to fix my mess, maybe I am missing something, and/or what is the recommended way to compose predicates in Python?</p>
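<p>For context, the closest I have gotten without inlining everything is to collect the predicates in a list and combine them with <code>all()</code> (a sketch; the default-argument trick <code>fltr = lambda i, _prev=fltr: _prev(i) and i % opt_n == 0</code> also avoids the recursion, since it binds the previous function at definition time):</p>
<pre><code>from typing import Callable, List, Optional

def example_list(v: int, opt_n: Optional[int] = None, opt_m: Optional[int] = None) -> bool:
    # accumulate predicates instead of rebinding a single name
    preds: List[Callable[[int], bool]] = [lambda i: i % 2 == 0]
    if opt_n is not None:
        preds.append(lambda i: i % opt_n == 0)
    if opt_m is not None:
        preds.append(lambda i: i % opt_m == 0)
    return all(p(v) for p in preds)

print(example_list(10, 5))  # True: 10 is even and divisible by 5
</code></pre>
<p>Is one of these considered the idiomatic way, or is there something better?</p>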
|
<python><functional-programming><composition>
|
2025-03-09 14:55:27
| 1
| 1,668
|
Denis
|
79,495,685
| 605,156
|
gpg.import_keys() is not working in python virtual environment
|
<p>I'm running this piece of code to encrypt a file using PGP public key.</p>
<pre><code>import gnupg
def pgp_encrypt(pub_file, out_file):
gpg = gnupg.GPG()
with open(pub_file, 'rb') as pgp_pub_key:
public_key_data = pgp_pub_key.read()
# import_keys_file() is NOT used as the key
# eventually will come from user-input
import_result = gpg.import_keys(public_key_data)
if import_result.count == 0:
print("Error: No keys imported. Make sure the public key file is correct.")
exit()
pgp_key_id = import_result.results[0]['fingerprint']
plaintext_data = b'This is the TEST data to encrypt'
encrypted_data = gpg.encrypt(
plaintext_data,
recipients=[pgp_key_id],
always_trust=True
)
if encrypted_data.ok:
print("Data encrypted successfully.")
print(encrypted_data.data)
with open(out_file, 'wb') as encrypted_file:
encrypted_file.write(encrypted_data.data)
else:
print("Encryption failed:")
print(encrypted_data.status)
## Apply
pgp_encrypt('pgp_pubkey.asc', 'pgp_encrypted_file')
</code></pre>
<p>So, basically it's reading the public-key file and putting the data in <code>public_key_data</code>, then importing it using <code>gpg.import_keys(public_key_data)</code> before encrypting the file.</p>
<p><code>print(encrypted_data.data)</code> in my code is not printing the result properly (on the screen, which is another issue to fix) but it's working:</p>
<pre><code>santanu@mgtucrpi5:~/Scripts/gnupg $ python pgp_encrypt.py
Data encrypted successfully.
b'-----BEGIN PGP MESSAGE-----\n\nhQIMA4QM8WwBjfPfAQ/+Jel/JySvuydbuAHDuRT/KwOoFOStYUprQ3TQsj3S3ryJ\nC6bqYD77XviU3fjtcedKxCc0F9Gxw01fb838H0AeACI9Bi4GLuUgS/FJTvrEsX4K\nMniWu4HsConIX+63Ud+RHlVCRziGsa86Uub7GwsaOvYpYhovWzNxc/ObLmoMZaSP\nYmBUHkN+rGGOx4CGGiVS7480Mp2gmd3UyFFbQwV1xO+fz5I+gOcYJSXU0R6SzdXd\nS03sI+8AXLVLmgTARi5ed5V4gr4EIb/bhN18zyUo6gO8vo34GtllFQlRZWL04GRN\n/wg0uudJd26tRxJfCwdcYONKzbNFo8wtLv7dedY+cah+2bTHKFcTWYMGyrhCZZmG\nnZ/GWXnojAz9n9BUNLT/vwQvildfSsuG2qABmk5HUjv0bOH8Ducw6UrbO1pP6hzO\nQcMxGEg8/YQCfI7Zcz1RrIRHWBDlhmG2znDFin2ApyY0N1FmagOJYSZ/ijUkBnT3\nbtIRJ0ISGR7Hjee2G80vKvy0Ozkev2dAhl4Rm3BzoLQV340jEe6dmg8QUPbP0hGU\ni+mlGNMpg50TQVE90ILewhndaBGcBxltS2hVwe+AWj0vhYK3EUqE32Hj7mZxXAWc\nfLTAIXCbsSrZ0Mtc+m6V1IkkwotHaNOea6gqoLMixHbYiwq+F5beu2taYOsespHS\nUQE28ZFF/n6HQ0EUfDuKsd14xUE6UjZvWpfaOor1OedKCife/HkrOOR/VCua1p/T\npROcEBIU2jtazibCiYD1uIy+lwS4w0en8ysFPrLnJuWcFQ==\n=UR1e\n-----END PGP MESSAGE-----\n'
</code></pre>
<p>but the moment I run it from virtual environment, I get the following error:</p>
<pre><code>santanu@mgtucrpi5:~/Scripts/gnupg $ source pgpenv/bin/activate
(pgpenv) santanu@mgtucrpi5:~/Scripts/gnupg $ python pgp_encrypt.py
Traceback (most recent call last):
File "/home/santanu/Scripts/gnupg/pgp_encrypt.py", line 34, in <module>
pgp_encrypt('pgp_pubkey.asc', 'pgp_encrypted_file')
File "/home/santanu/Scripts/gnupg/pgp_encrypt.py", line 11, in pgp_encrypt
if import_result.count == 0:
^^^^^^^^^^^^^^^^^^^
AttributeError: 'ImportResult' object has no attribute 'count'. Did you mean: 'counts'?
</code></pre>
<p>If I change it to <code>counts</code> (just to try), I get a different error, which doesn't look right either:</p>
<pre><code> File "/home/santanu/Scripts/gnupg/pgpenv/lib/python3.11/site-packages/gnupg/gnupg.py", line 1064, in encrypt
result = self._encrypt(stream, recipients, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: GPGBase._encrypt() got multiple values for argument 'recipients'
</code></pre>
<p>How can I fix that?<br>
ref. <a href="https://gnupg.readthedocs.io/en/latest/#importing-and-receiving-keys" rel="nofollow noreferrer">https://gnupg.readthedocs.io/en/latest/#importing-and-receiving-keys</a></p>
<p>=======================================</p>
<p>To answer @fqlenos question:</p>
<p>This the way I did the virtual env:</p>
<pre><code>% cd ~/Scripts/gnupg
% python3 -m venv pgpenv
% source pgpenv/bin/activate
% pip install gnupg
</code></pre>
<p>I tried with <code>pip install python-gnupg</code> as well but got the same result. Is there anything I'm missing or doing incorrectly?</p>
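<p>To rule out a mix-up between the two similarly named PyPI packages (<code>gnupg</code> vs <code>python-gnupg</code>), this is how I plan to check which module the venv actually imports (a sketch):</p>
<pre><code>import gnupg
print(gnupg.__file__)                            # which installation actually gets imported
print(getattr(gnupg, "__version__", "unknown"))  # not every fork exposes __version__
</code></pre>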
|
<python><python-3.x><virtualenv><gnupg>
|
2025-03-09 09:23:34
| 1
| 2,261
|
MacUsers
|
79,495,237
| 15,412,256
|
Cumulative Elementwise Sum by Python Polars
|
<p>I have a weight vector:</p>
<pre class="lang-py prettyprint-override"><code>weight_vec = pl.Series("weights", [0.125, 0.0625, 0.03125])
</code></pre>
<p>And also a DataFrame containing up to <code>m</code> variables. For simplicity, we will only have two variables:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"row_index": [0, 1, 2, 3, 4],
"var1": [1, 2, 3, 4, 5],
"var2": [6, 7, 8, 9, 10],
}
)
</code></pre>
<p>The size (number of observations) for these variables can be very large (tens of millions of rows).</p>
<p>I would like to:</p>
<ul>
<li><p>For each variable, and each observation x_i, where i is the row index [0,...,4], I want to transform the value of x_i into the sum-product of the next <code>n</code> values starting at the current one (i.e. [x_i, ..., x_(i+n-1)]) and the weight vector. <code>n</code> is the length of the given weight vector, and <code>n</code> varies for different weight vector definitions.</p>
<p>Numerically, the value of <code>var1</code> at observation index 0 is the sum-product of the values [x_0, x_1, x_2] and all the values of the weight vector. When the row index approaches the end (i.e., max index - row index + 1 < <code>n</code>), all the values will be assigned None.</p>
</li>
<li><p>We can assume that the height of the DataFrame is always larger or equal to the length of the weight vector to result in at least one valid result.</p>
</li>
</ul>
<p>The resulting DataFrame should look like this:</p>
<pre class="lang-py prettyprint-override"><code>shape: (5, 3)
┌───────────┬─────────┬─────────┐
│ row_index ┆ var1 ┆ var2 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ f64 │
╞═══════════╪═════════╪═════════╡
│ 0 ┆ 0.34375 ┆ 1.4375 │
│ 1 ┆ 0.5625 ┆ 1.65625 │
│ 2 ┆ 0.78125 ┆ 1.875 │
│ 3 ┆ null ┆ null │
│ 4 ┆ null ┆ null │
└───────────┴─────────┴─────────┘
</code></pre>
<p><strong>Numeric Calculations</strong>:</p>
<ul>
<li>x_0_var1: (0.125 * 1 + 0.0625 * 2 + 0.03125 * 3 = 0.34375)</li>
<li>x_2_var2: (0.125 * 8 + 0.0625 * 9 + 0.03125 * 10 = 1.875)</li>
</ul>
<p>I am looking for a memory efficient, vectorized Polars operation to achieve such results.</p>
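<p>For what it is worth, the direction I have been exploring is a weighted rolling sum re-aligned with <code>shift</code> (a sketch; I am assuming <code>rolling_sum</code> applies its <code>weights</code> oldest-to-newest within the trailing window, which should be checked against the numbers above):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

n = weight_vec.len()

result = df.with_columns(
    pl.col(c)
    .cast(pl.Float64)
    .rolling_sum(window_size=n, weights=weight_vec.to_list())  # trailing window [i-n+1 .. i]
    .shift(-(n - 1))                                           # re-align so row i sees [i .. i+n-1]
    for c in ["var1", "var2"]
)
print(result)
</code></pre>
<p>Is this the right idiom, or is there a more memory-efficient way?</p>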
|
<python><python-polars>
|
2025-03-09 00:33:44
| 2
| 649
|
Kevin Li
|
79,494,929
| 722,036
|
Can't get my Flask app to run as a bot in Azure Web App Services
|
<p>I'm working on a chat bot and I want to deploy it to Azure Web App Services. The bot receives HTTP requests from the messaging platform, and it also runs in a while loop checking something it's designed to notify about.</p>
<p>The HTTP part is handled by Flask and it works fine. But I can't get the loop to work alongside Flask. My (limited) understanding is I have to get two separate event loops to co-exist. The main one is occupied by Flask and I need to have a separate one for my notification checker.</p>
<p>I tried all kinds of combinations of multiprocessing, asyncio and threading. Here's the latest version I ended up with:</p>
<pre class="lang-py prettyprint-override"><code>import threading
from flask import Flask, jsonify
import logging
import sys
import asyncio
import datetime
import os
import telegram
import time
logging.basicConfig(level=logging.INFO, handlers=[logging.StreamHandler(sys.stdout)])
class Bot:
def __init__(self):
self.bot_token = os.getenv('TOKEN')
self.chat_id = os.getenv('CHAT_ID')
self.telegram_bot = telegram.Bot(token=self.bot_token)
self.sleep = os.getenv('SLEEP') # set to 60
def run(self):
while True:
current_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
message = f"It's {current_time} o'clock!\n\nSent from " + __file__
self.send_notification(message)
time.sleep(int(self.sleep))
def send_notification(self, message):
async def send_message():
await self.telegram_bot.send_message(chat_id=self.chat_id, text=message)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(send_message())
bot = Bot()
app = Flask(__name__)
# Simple endpoint that responds to Azure ping
@app.route('/')
def home():
return jsonify(
hello='world',
now=datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
), 200
if __name__ == '__main__':
bot_thread = threading.Thread(target=bot.run)
bot_thread.daemon = True # ChatGPT says this ensures the bot thread will stop when the main thread exits
bot_thread.start()
app.run(host=os.getenv("HTTP_HOST", "0.0.0.0"), port=int(os.getenv("HTTP_PORT")))
</code></pre>
<p>I tried the above code in the following environments:</p>
<ul>
<li><strong>Local machine</strong> in PyCharm: works fine, it both 1. serves HTTP requests and 2. sends out a message every 60 seconds until I shut it down;</li>
<li><strong>Render.com</strong>, deployed as an experiment to a <em>Web Service</em> on their free tier: works fine, also both receives requests and sends out messages, but shuts down after 10-20 minutes because of the free tier limitation;</li>
<li><strong>Azure</strong>, deployed to a <em>Web App Service</em> on their free tier: runs and serves HTTP requests, but the bot does not work. No errors in logs. Checked all logs under https://.scm.azurewebsites.net/api/logs/docker.</li>
</ul>
<p>What I have on Azure is a Linux Web App on their free tier.</p>
<pre class="lang-json prettyprint-override"><code>{
"type": "Microsoft.Web/sites",
"kind": "app,linux",
"properties": {
"state": "Running",
"usageState": "Normal",
"siteScopedCertificatesEnabled": false,
"afdEnabled": false,
"siteProperties": {
"properties": [
{
"name": "LinuxFxVersion",
"value": "PYTHON|3.13"
},
{
"name": "WindowsFxVersion",
"value": null
}
],
"appSettings": null
},
"availabilityState": "Normal",
"csrs": [],
"cers": null,
"siteMode": null,
"computeMode": null,
"serverFarm": null,
"reserved": true,
"isXenon": false,
"hyperV": false,
"storageRecoveryDefaultState": "Running",
"contentAvailabilityState": "Normal",
"runtimeAvailabilityState": "Normal",
"dnsConfiguration": {},
"vnetRouteAllEnabled": false,
"containerAllocationSubnet": null,
"useContainerLocalhostBindings": null,
"vnetImagePullEnabled": false,
"vnetContentShareEnabled": false,
"outboundVnetRouting": null,
"siteConfig": {
"numberOfWorkers": 1,
"defaultDocuments": null,
"netFrameworkVersion": null,
"phpVersion": null,
"pythonVersion": null,
"nodeVersion": null,
"powerShellVersion": null,
"linuxFxVersion": "PYTHON|3.13",
"windowsFxVersion": null,
"windowsConfiguredStacks": null,
"requestTracingEnabled": null,
"remoteDebuggingEnabled": null,
"remoteDebuggingVersion": null,
"httpLoggingEnabled": null,
"azureMonitorLogCategories": null,
"acrUseManagedIdentityCreds": false,
"acrUserManagedIdentityID": null,
"logsDirectorySizeLimit": null,
"detailedErrorLoggingEnabled": null,
"publishingUsername": null,
"publishingPassword": null,
"appSettings": null,
"metadata": null,
"connectionStrings": null,
"machineKey": null,
"handlerMappings": null,
"documentRoot": null,
"scmType": null,
"use32BitWorkerProcess": null,
"webSocketsEnabled": null,
"alwaysOn": false,
"javaVersion": null,
"javaContainer": null,
"javaContainerVersion": null,
"appCommandLine": null,
"managedPipelineMode": null,
"virtualApplications": null,
"winAuthAdminState": null,
"winAuthTenantState": null,
"customAppPoolIdentityAdminState": null,
"customAppPoolIdentityTenantState": null,
"runtimeADUser": null,
"runtimeADUserPassword": null,
"loadBalancing": null,
"routingRules": null,
"experiments": null,
"limits": null,
"autoHealEnabled": null,
"autoHealRules": null,
"tracingOptions": null,
"vnetName": null,
"vnetRouteAllEnabled": null,
"vnetPrivatePortsCount": null,
"publicNetworkAccess": null,
"cors": null,
"push": null,
"apiDefinition": null,
"apiManagementConfig": null,
"autoSwapSlotName": null,
"localMySqlEnabled": null,
"managedServiceIdentityId": null,
"xManagedServiceIdentityId": null,
"keyVaultReferenceIdentity": null,
"ipSecurityRestrictions": null,
"ipSecurityRestrictionsDefaultAction": null,
"scmIpSecurityRestrictions": null,
"scmIpSecurityRestrictionsDefaultAction": null,
"scmIpSecurityRestrictionsUseMain": null,
"http20Enabled": true,
"minTlsVersion": null,
"minTlsCipherSuite": null,
"scmMinTlsCipherSuite": null,
"supportedTlsCipherSuites": null,
"scmSupportedTlsCipherSuites": null,
"scmMinTlsVersion": null,
"ftpsState": null,
"preWarmedInstanceCount": null,
"functionAppScaleLimit": 0,
"elasticWebAppScaleLimit": null,
"healthCheckPath": null,
"fileChangeAuditEnabled": null,
"functionsRuntimeScaleMonitoringEnabled": null,
"minimumElasticInstanceCount": 1,
"azureStorageAccounts": null,
"http20ProxyFlag": null,
"sitePort": null,
"antivirusScanEnabled": null,
"storageType": null,
"sitePrivateLinkHostEnabled": null,
"clusteringEnabled": false
},
"functionAppConfig": null,
"daprConfig": null,
"deploymentId": "<app>",
"slotName": null,
"trafficManagerHostNames": null,
"sku": "Free",
"scmSiteAlsoStopped": false,
"targetSwapSlot": null,
"hostingEnvironment": null,
"hostingEnvironmentProfile": null,
"clientAffinityEnabled": false,
"clientAffinityProxyEnabled": false,
"blockPathTraversal": false,
"clientCertEnabled": false,
"clientCertMode": "Required",
"clientCertExclusionPaths": null,
"hostNamesDisabled": false,
"ipMode": "IPv4",
"vnetBackupRestoreEnabled": false,
"domainVerificationIdentifiers": null,
"kind": "app,linux",
"managedEnvironmentId": null,
"workloadProfileName": null,
"resourceConfig": null,
"containerSize": 0,
"dailyMemoryTimeQuota": 0,
"suspendedTill": null,
"siteDisabledReason": 0,
"functionExecutionUnitsCache": null,
"maxNumberOfWorkers": null,
"cloningInfo": null,
"hostingEnvironmentId": null,
"slotSwapStatus": null,
"httpsOnly": false,
"endToEndEncryptionEnabled": false,
"functionsRuntimeAdminIsolationEnabled": false,
"redundancyMode": "None",
"inProgressOperationId": null,
"geoDistributions": null,
"privateEndpointConnections": [],
"publicNetworkAccess": "Enabled",
"buildVersion": null,
"targetBuildVersion": null,
"migrationState": null,
"eligibleLogCategories": "AppServiceAppLogs,AppServiceAuditLogs,AppServiceConsoleLogs,AppServiceHTTPLogs,AppServiceIPSecAuditLogs,AppServicePlatformLogs,ScanLogs,AppServiceAuthenticationLogs",
"inFlightFeatures": [
"SiteContainers"
],
"storageAccountRequired": false,
"virtualNetworkSubnetId": null,
"keyVaultReferenceIdentity": "SystemAssigned",
"autoGeneratedDomainNameLabelScope": null,
"defaultHostNameScope": "Global",
"privateLinkIdentifiers": null,
"sshEnabled": true
},
"apiVersion": "2022-03-01"
}
</code></pre>
<p>Is there a way to get it to work on Azure Web App Services within Free Tier? Is there a specific way I should utilise to get a separate thread/process to get it to work on Azure? Or should I just do it differently? I do not necessarily <em>have</em> to have it run a <code>While True</code> loop, I just need the app to check something every minute while listening to incoming requests.</p>
<p>One workaround I can think of is just have a separate endpoint that does the check and then just trigger it every minute via some external cron, but that seems way too cumbersome for the task, when the code itself should just be able to run a loop internally. (On the other hand, this can have a positive effect of keeping the Azure app alive, as their free tier will shut it down after 20 minutes similarly to Render.com. If I got the loop to work, my plan was to have the bot send a request to itself e.g. on every 10th check.)</p>
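<p>For completeness, one variant I have not tried yet (a sketch, based on my assumption that App Service starts the app through a WSGI server such as gunicorn, so the <code>__main__</code> block never executes and the bot thread is never started) would be to start the thread at import time:</p>
<pre class="lang-py prettyprint-override"><code>import os
import threading

# started at import time, so it also runs under gunicorn, not only under `python app.py`
bot_thread = threading.Thread(target=bot.run, daemon=True)
bot_thread.start()

if __name__ == '__main__':
    # local development only
    app.run(host=os.getenv("HTTP_HOST", "0.0.0.0"), port=int(os.getenv("HTTP_PORT", "8000")))
</code></pre>
<p>Would that be the right direction, or does the free tier make this approach unreliable anyway?</p>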
|
<python><python-3.x><azure><flask><azure-web-app-service>
|
2025-03-08 19:37:54
| 0
| 5,341
|
ᴍᴇʜᴏᴠ
|
79,494,797
| 4,703,291
|
How to deduplicate index of Dask dataframe?
|
<p>In the code provided below, I am trying to merge two Dask dataframes</p>
<pre><code>def merge_with_aggregated_4(trans_ddf, agg_ddf):
# First join condition: Adjust based on minutes
trans_ddf["base_hour"] = trans_ddf["column4"].dt.floor("h")
trans_ddf['adjusted_hour'] = trans_ddf['column4'].apply(adjust_hour_row, meta=('column4', 'datetime64[ns]'))
# First join
merged_ddf = trans_ddf.merge(agg_ddf, left_on="adjusted_hour", right_on="hour", how="left", suffixes=("", "_joined"))
# Identify rows that were not joined and remove columns belong to agg_ddf
unmatched_ddf = merged_ddf[merged_ddf["mean_column1"].isna()].drop(
columns=[col for col in agg_ddf.columns if col != "hour"]
)
# Secondary join based only on the original hour
unmatched_ddf = unmatched_ddf.merge(agg_ddf, left_on="base_hour", right_on="hour", how="left", suffixes=("", "_fallback"))
# Fill missing values using the second join
for col in agg_ddf.columns:
if col != "hour":
merged_ddf[col] = merged_ddf[col].fillna(unmatched_ddf[col])
return merged_ddf
</code></pre>
<p>And in test I print result index</p>
<pre><code>result_ddf = merge_with_aggregated_4(trans_ddf, agg_ddf)
print(f' result index: {result_ddf.index.compute()}')
</code></pre>
<p>And the resulting index is duplicated:</p>
<p><code>Index([2, 3, 7, 2, 0, 1, 4, 5, 6, 8, 9, 10, 0, 1], dtype='int64')</code></p>
<p>This leads to problems in further computations; I want to deduplicate the index. Can anyone help me with that?</p>
<p><code>reset_index()</code> and <code>index.drop_duplicates()</code> do not help:</p>
<pre><code>ValueError: cannot reindex on an axis with duplicate labels
self = Index([0, 1, 4, 5, 6, 8, 9, 10, 0, 1], dtype='int64')
target = Index([0, 1, 0, 1, 4, 5, 6, 8, 9, 10], dtype='int64'), method = None
</code></pre>
|
<python><dask><dask-dataframe>
|
2025-03-08 17:51:10
| 0
| 463
|
Oleg
|
79,494,676
| 16,765,223
|
Run EXE software using App in Windows IIS
|
<p>I hosted my Django app on Windows IIS in Windows Server 2022 Standard</p>
<p>But my application has a feature that opens software (.exe) and runs specific user tasks provided in the request to the site.</p>
<p>For example, a user provides some input on my site, and my app then processes it by opening software not owned by me, using Python code in <code>views.py</code>:</p>
<pre><code># raw string so backslash sequences like \r in the path are not interpreted as escapes
script_path = r"C:\inetpub\wwwroot\webapp\script\runthescript.py"
subprocess.run(["C:/Program Files/My Soft/Soft.exe", "-runScriptFile", script_path])
</code></pre>
<h2>MY PROBLEM</h2>
<p>When I tested my application locally using <code>python manage.py runserver</code> it was working, thanks to admin privileges and <code>session 1</code> access, but when I tried the same after hosting with IIS, everything works except that the software does not start.</p>
<h2>WHAT I TRIED:</h2>
<p>I tried giving my AppPool identity (IIS APPPOOL\webapp) administrator privileges.</p>
<p>I tried using Task Scheduler, but it works with background processes, not with the GUI app.</p>
<h2>ISSUE</h2>
<p>When I googled it, I found that it is due to <code>privileges</code> and <code>session 0</code> access. IIS runs with <code>session 0 isolation</code> only, so it is unable to access the GUI.</p>
<p>Your small help, idea or suggestion definitely be helpful for me. :)</p>
|
<python><django><iis><permissions><windows-server>
|
2025-03-08 16:26:27
| 1
| 823
|
MDEV
|
79,494,538
| 16,383,578
|
Fastest way to find all permutations of 0, 1 of width n without itertools in pure Python?
|
<p>What is an efficient way to get the same result as <code>list(product((0, 1), repeat=n))</code> without using itertools and any imports?</p>
<p>For example, given n=3, the output be: <code>[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]</code> in exactly the same order.</p>
<p>This is a programming challenge and an exercise; using the standard library is staying in the comfort zone and doing so doesn't result in learning something new. This problem is impractical, but in finding novel solutions I can gain new programming techniques and insights that I can apply elsewhere.</p>
<p>And no, using <code>f'{n:b}'</code> and <code>bin(n)</code> is stupid because you are converting a number to a string and then from string to int, doing all those unnecessary operations; I have proven it to be slow.</p>
<p>I tried to solve the problem myself, I wrote an efficient function to do this, then two slow functions to compare with the original function.</p>
<pre><code>from typing import Generator, Tuple
def powerset_indices(n: int) -> Generator[Tuple[int, ...], None, None]:
if not isinstance(n, int) or n < 1:
raise ValueError("The argument n must be a positive integer")
numeral = [0] * n
maxi = n - 1
for _ in range(1 << n):
yield tuple(numeral)
i = maxi
while True:
if not (d := numeral[i]):
numeral[i] = 1
break
else:
numeral[i] = 0
i -= 1
def powerset_indices1(n: int) -> Generator[Tuple[int, ...], None, None]:
if not isinstance(n, int) or n < 1:
raise ValueError("The argument n must be a positive integer")
for i in range(1 << n):
yield tuple(map(int, f"{i:0{n}b}"))
def powerset_indices2(n: int) -> Generator[Tuple[bool, ...], None, None]:
if not isinstance(n, int) or n < 1:
raise ValueError("The argument n must be a positive integer")
places = [1 << i for i in range(n - 1, -1, -1)]
for i in range(1<<n):
yield tuple((i & p) > 0 for p in places)
</code></pre>
<pre><code>In [133]: from itertools import product
In [134]: %timeit list(powerset_indices(16))
19.8 ms ± 92.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [135]: %timeit list(product((0, 1), repeat=16))
6.86 ms ± 32.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [136]: %timeit list(powerset_indices1(16))
184 ms ± 485 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [137]: %timeit list(powerset_indices2(16))
136 ms ± 277 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>My original function is only slower than <code>product</code> and it is faster than the two later functions by a wide margin. Can anyone offer a solution faster than <code>powerset_indices</code> while only using built-in Python?</p>
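<p>For comparison, the best pure-Python alternative I could think of builds the list by repeated extension instead of counting (the order matches <code>product</code>; I have not benchmarked it rigorously, so treat it as a sketch):</p>
<pre><code>from typing import List, Tuple

def powerset_indices3(n: int) -> List[Tuple[int, ...]]:
    if not isinstance(n, int) or n < 1:
        raise ValueError("The argument n must be a positive integer")
    result: List[Tuple[int, ...]] = [()]
    for _ in range(n):
        # append a 0 and a 1 to every tuple built so far
        result = [t + (b,) for t in result for b in (0, 1)]
    return result
</code></pre>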
|
<python><algorithm><binary><permutation>
|
2025-03-08 14:43:31
| 3
| 3,930
|
Ξένη Γήινος
|
79,494,450
| 10,714,490
|
WHY is there a TabError in Python?
|
<p>I'm not asking WHAT is TabError nor HOW to fix TabError, I'm asking <strong>WHY</strong>.</p>
<p>I completely understand a <code>TabError</code> means <code>inconsistent use of tabs and spaces in indentation</code> exactly as it prints. I mean, the Python Interpreter <em>should</em> be able to just simply convert each Tab to 4 spaces.</p>
<p>Is there any ambiguous situation that makes TabError make sense?</p>
<p>The only one I can see is that some may use 2 or some other number of spaces as an indentation unit. But since Python accepts that, it knows the width and should be able to convert each Tab to it.</p>
|
<python>
|
2025-03-08 13:24:28
| 2
| 668
|
KumaTea
|
79,494,345
| 16,383,578
|
How to count the first N natural numbers in binary?
|
<p>This may seem trivial but I haven't found a good solution to the problem. I have even found this: <a href="https://stackoverflow.com/questions/12325868/generate-all-n-bit-binary-numbers-in-a-fastest-way-possible">generate all n bit binary numbers in a fastest way possible</a>. but I haven't found an exact duplicate.</p>
<p>The problem is very simple, given a limit N that is a positive integer, generate the binary representation of every natural number up to N in order (excluding N, the first natural number is 0, so N - 1 is the maximum number to be represented), in <code>tuple</code> form, with every tuple padded with leading zeros so that all representations are of the same length.</p>
<p>For example, if <code>N</code> is 4, the output should be <code>[(0, 0), (0, 1), (1, 0), (1, 1)]</code>.</p>
<p>At this point this problem is indeed trivial, but there is a catch, no <code>bin(n)</code> and <code>f'{n:b}'</code> and the like are allowed, the algorithm should entirely operate in the binary domain, because as I understand everything (text, photos, music, videos...) in computers are binary numerals all the way down, so converting representations back and forth is adding unnecessary computations, these computations (base-conversion) are completely redundant and should be eliminated to produce the most efficient program (this is about keeping problems specific to as few domains as possible so that we only operate on those domains).</p>
<p>I wrote a simple program that does exactly what I describe:</p>
<pre><code>from typing import Generator, Tuple
def count_in_binary(n: int) -> Generator[Tuple[int, ...], None, None]:
if not isinstance(n, int) or n < 1:
raise ValueError("The argument n must be a positive integer")
l = (n - 1).bit_length() if n > 1 else 1
numeral = [0] * l
maxi = l - 1
for _ in range(n):
yield tuple(numeral)
i = maxi
while True:
if not (d := numeral[i]):
numeral[i] = 1
break
else:
numeral[i] = 0
i -= 1
</code></pre>
<p>But I am not sure if this is the most efficient way to do it in Python. I haven't used many bit operations and computers already represent numbers as binary, so there should be faster ways to do this.</p>
<p>The question is, what is a faster way?</p>
<hr />
<p>For comparison, here is one way that uses f'{n:b}' and therefore is more concise, but is actually much slower and stupider:</p>
<pre><code>def count_in_binary1(n: int) -> Generator[Tuple[int, ...], None, None]:
if not isinstance(n, int) or n < 1:
raise ValueError("The argument n must be a positive integer")
l = len(f'{n-1:b}')
for i in range(n):
yield tuple(map(int, f'{i:0{l}b}'))
</code></pre>
<pre><code>In [50]: %timeit list(count_in_binary(256))
59.9 μs ± 209 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [51]: %timeit list(count_in_binary1(256))
452 μs ± 3.68 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
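<p>For reference, a purely bit-shift based variant that also stays in the binary domain (a sketch; not benchmarked beyond a quick check) looks like this:</p>
<pre><code>from typing import Generator, Tuple

def count_in_binary2(n: int) -> Generator[Tuple[int, ...], None, None]:
    if not isinstance(n, int) or n < 1:
        raise ValueError("The argument n must be a positive integer")
    l = (n - 1).bit_length() if n > 1 else 1
    shifts = range(l - 1, -1, -1)
    for i in range(n):
        # read each bit of i directly, most significant first
        yield tuple((i >> s) & 1 for s in shifts)
</code></pre>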
<hr />
<h2>Edit</h2>
<p>I didn't do much testing with the original function; I just thought it would work. Now it's fixed.</p>
<p>And no, the scope of the question is limited to pure Python, so NumPy isn't allowed.</p>
<hr />
<h2>Edit 2</h2>
<p>Now I think there are no exceptions.</p>
<hr />
<h2>Edit 3</h2>
<p>The problem I was trying to solve remains unsolved. I have posted a new question without all relevant information: <a href="https://stackoverflow.com/questions/79494538/fastest-way-to-find-all-permutations-of-0-1-of-width-n-without-itertools-in-pur">Fastest way to find all permutations of 0, 1 of width n without itertools in pure Python?</a></p>
|
<python><algorithm><binary>
|
2025-03-08 12:03:28
| 2
| 3,930
|
Ξένη Γήινος
|
79,494,272
| 2,315,319
|
Pass value from one Django template to other
|
<p>I want to build a Django template hierarchy like so:</p>
<pre><code> root.html
|_ root-dashboard.html
|_ root-regular.html
</code></pre>
<p><code>root.html</code> shall have an <code>if</code> statement:</p>
<pre><code>{% if style == "dashboard" %}
{# render some elements in a certain way #}
{% else %}
{# render those elements in a different way #}
{% endif %}
</code></pre>
<p>And <code>root-dashboard.html</code> and <code>root-regular.html</code> should individually extend <code>root.html</code> by setting <code>style</code>:</p>
<pre><code># root-dashboard.html
{% extend 'root.html' with style='dashboard'%}
# root-regular.html
{% extend 'root.html' with style='regular'%}
</code></pre>
<p>(<code>with</code> above is not actually valid syntax; it's just something similar to what I want)</p>
<p>And a view can use either <code>root-dashboard.html</code> or <code>root-regular.html</code> to show the content in one style or the other.</p>
<p>How do I achieve this without the view having to set the <code>style</code> context?</p>
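<p>For illustration, the closest valid-syntax pattern I know of is plain block overrides, which works but moves the differing markup into each child instead of a single <code>if</code> (so it is not quite what I want):</p>
<pre><code># root.html
{% block elements %}
    {# render those elements the regular way #}
{% endblock %}

# root-dashboard.html
{% extends 'root.html' %}
{% block elements %}
    {# render those elements the dashboard way #}
{% endblock %}
</code></pre>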
|
<python><django><django-templates>
|
2025-03-08 11:02:39
| 1
| 313
|
fishfin
|
79,493,962
| 1,635,450
|
IPython.display does not show any image at all but <IPython.core.display.Image object>
|
<p>I am trying to show a <code>LangChain</code> graph in my Python module code, not a Jupyter notebook. However, the following code snippet:</p>
<pre><code>from IPython.display import Image, display
from langgraph.graph import StateGraph, MessagesState
graph_builder = StateGraph(MessagesState)
simple_graph = graph_builder.compile()
display(Image(simple_graph.get_graph().draw_mermaid_png()))
</code></pre>
<p>shows:
<code><IPython.core.display.Image object></code> instead of the graph. I am on Ubuntu.</p>
<p>Question: Can <code>IPython</code> be used in Python module code? According to <a href="https://ipython.org/" rel="nofollow noreferrer">https://ipython.org/</a>, it seems to be intended only for the Jupyter Notebook environment. If that's the case, how can I use <code>matplotlib</code> or <code>pillow</code> to plot the graph?</p>
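<p>For context, the workaround I am considering (a sketch, assuming <code>draw_mermaid_png()</code> returns raw PNG bytes, which the <code>Image</code> object above suggests) is to hand those bytes to Pillow or just write them to disk:</p>
<pre><code>import io
from PIL import Image as PILImage

png_bytes = simple_graph.get_graph().draw_mermaid_png()

PILImage.open(io.BytesIO(png_bytes)).show()   # opens the default image viewer

with open("graph.png", "wb") as f:            # or simply save the file
    f.write(png_bytes)
</code></pre>
<p>Is that the recommended approach outside a notebook, or is there a way to make <code>display()</code> itself work?</p>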
|
<python><image><ipython><draw><python-3.12>
|
2025-03-08 05:58:07
| 1
| 4,280
|
khteh
|
79,493,688
| 2,687,317
|
3D scatter plot and projected KDE plots
|
<p>Can someone tell me why the xz projection in this code does not appear on the x-z plane?</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from scipy.stats import kde
# Sample data
np.random.seed(42)
x = np.random.randn(100)
y = np.random.randn(100)
z = np.random.randn(100)
# Create figure and 3D axes
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
# Scatter plot
ax.scatter(x, y, z, c='b', marker='o')
# KDE for projections
def kde_projection(data1, data2):
values = np.vstack([data1, data2])
kernel = kde.gaussian_kde(values)
x_range = np.linspace(data1.min(), data1.max(), 100)
y_range = np.linspace(data2.min(), data2.max(), 100)
X, Y = np.meshgrid(x_range, y_range)
positions = np.vstack([X.ravel(), Y.ravel()])
Z = np.reshape(kernel(positions).T, X.shape)
return X, Y, Z
# Projections with KDE
# XY plane
X_xy, Y_xy, Z_xy = kde_projection(x, y)
ax.contour(X_xy, Y_xy, Z_xy, zdir='z', offset=z.min()-1, cmap='Blues')
# XZ plane
X_xz, Z_xz, Y_xz = kde_projection(x, z)
ax.contour(X_xz, Y_xz, Z_xz, zdir='y', offset=y.min()-1, cmap='Reds')
# YZ plane
Y_yz, Z_yz, X_yz = kde_projection(y, z)
ax.contour(X_yz, Y_yz, Z_yz, zdir='x', offset=x.min()-1, cmap='Greens')
# Set labels and limits
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_xlim(x.min() - 1, x.max() + 1)
ax.set_ylim(y.min() - 1, y.max() + 1)
ax.set_zlim(z.min() - 1, z.max() + 1)
# Show plot
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/KnOJrpFG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnOJrpFG.png" alt="enter image description here" /></a></p>
<p>It correctly plots the yz and xy projections onto their respective planes, but xz doesn't want to work... I'm not sure if I'm missing something, but I think the pattern should hold for all three planes.</p>
<p>Thanks.</p>
|
<python><matplotlib><seaborn><plot3d>
|
2025-03-08 00:31:56
| 2
| 533
|
earnric
|
79,493,638
| 4,463,825
|
assigning data in a pythonic way
|
<p>Looking for suggestions on how to compress this code into a couple of lines: one line for assigning the columns, and the other for the data.</p>
<pre><code>df_input = pd.DataFrame(columns=['supply_temp', 'liquid_mass_flow','air_inlet_temp'])
flow = 60
inputs = np.array([45,flow*988/60000,35])
df_input['supply_temp'] = inputs[0]
df_input['liquid_mass_flow'] = inputs[1]
df_input['air_inlet_temp'] = inputs[2]
</code></pre>
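<p>To be concrete, the kind of shape I am after (a sketch of the target, not necessarily the best way) is one line for the columns and one for the data:</p>
<pre><code>cols = ['supply_temp', 'liquid_mass_flow', 'air_inlet_temp']
df_input = pd.DataFrame([[45, flow * 988 / 60000, 35]], columns=cols)
</code></pre>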
|
<python><pandas><numpy>
|
2025-03-07 23:42:50
| 1
| 993
|
Jesh Kundem
|
79,493,514
| 10,083,382
|
Convert JSON object to Pandas DataFrame ensuring that Key is considered as column label
|
<p>I have a python script which needs to be executed by passing the input using command line. The command is as follows</p>
<pre class="lang-bash prettyprint-override"><code>python script.py --input [{\\"A\\":\\"322|985\\",\\"B\\":3}]
</code></pre>
<p>The idea is to convert the input to a pandas DataFrame. The code below does convert it to a pandas DataFrame, but it only creates a single column named <code>0</code>, and the value for that column is <code>[{\A\:\322|985\,\B\:3}]</code>.</p>
<pre><code>import json
import pandas as pd
import argparse
def validate_input(input_data):
if isinstance(input_data, pd.DataFrame):
return input_data # Already a DataFrame, return as is
json_conv = json.dumps(input_data)
json_data = json.loads(json_conv)
return pd.DataFrame([json_data]) # Convert JSON serializable to DataFrame
def process_data(input_data):
"""
Function that processes data, only called if dtype is valid.
"""
validated_data = validate_input(input_data)
print(validated_data)
print("Processing data:\n", validated_data)
def main():
parser = argparse.ArgumentParser(description="Validate and process JSON or Pandas DataFrame input.")
parser.add_argument("--input", type=str, help="Input data as a JSON string")
args = parser.parse_args()
try:
process_data(args.input) # Proceed with processing only after validation
except json.JSONDecodeError:
raise TypeError("Invalid JSON input. Please provide a valid JSON string.")
if __name__ == "__main__":
main()
</code></pre>
<p>Run the code below to get the expected output:</p>
<pre><code>pd.DataFrame([{"A":"322|985","B":3}])
</code></pre>
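<p>For reference, parsing the string first and handing the resulting list of dicts to <code>pd.DataFrame</code> gives that expected output (a sketch; it assumes the shell delivers the argument as plain JSON once the escaping is resolved):</p>
<pre><code>import json
import pandas as pd

raw = '[{"A":"322|985","B":3}]'   # what args.input should look like after shell unescaping
records = json.loads(raw)          # -> list of dicts
print(pd.DataFrame(records))       # columns A and B, one row per dict
</code></pre>
<p>What I cannot figure out is how to adapt <code>validate_input</code> so the command-line string ends up in that form.</p>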
|
<python><json><pandas><dataframe>
|
2025-03-07 21:49:02
| 1
| 394
|
Lopez
|
79,493,454
| 2,893,712
|
SQLite cannot start a transaction within a transaction
|
<p>I have a script that utilizes the APScheduler module to run certain scripts at set intervals. These scripts are used for all kinds of purposes that make my life easier. I also use a simple SQLite database to save the information, and this is how I see if the data has changed; if it has, it will send me a message using Telegram's API. Here is a basic rundown of how the script works:</p>
<pre><code>import sqlite3
conn = sqlite3.connect('db.db', check_same_thread=False)
from apscheduler.schedulers.background import BackgroundScheduler
sched = BackgroundScheduler()
def compareChanges(site, new_value, url, subname = ""):
cur = conn.cursor()
# Get old value (if exist)
old_value = cur.execute("SELECT Data FROM Data WHERE Name = ? AND Subname = ? LIMIT 1", (site, subname)).fetchone()
# Insert/initialize if not exist (data will be updated later in this function)
if not old_value:
old_value = "NOT SET"
cur.execute("INSERT INTO `Data` (Name, Subname, Data, Timestamp) VALUES (?, ?, ?, ?)", (site, subname, old_value, datetime.now().isoformat()))
#conn.commit()
new_value = str(new_value) # Convert to string because sometimes the data passed is a dict
if old_value != new_value:
# Log and notify of change
logger.info(f"[{site}] has new value of {new_value}")
pushMsg(f"{site} => {new_value}", url) # Send Telegram message of update
# Set change and save file
cur.execute("UPDATE `Data` SET `Data` = ?, `Timestamp` = ? WHERE Name = ?", (new_value, datetime.now().isoformat(), site))
conn.commit()
cur.close()
return True # Return True if changed
else:
#logger.info(f"{site} - No Change: {old_value}")
cur.close()
return False # Return false on no change
def Function1():
import requests
response = requests.get("https://example.com/api/endpoint").json()
compareChanges("Website Status Check", response['status'], "https://example.com")
def Bestbuy_Latest_Price():
import requests
response = requests.get("https://bestbuy.com/product/page")
# This is pseudocode but it loads products on page and compares price for each product find
for item in response:
compareChanges("Bestbuy_Latest_Price", {product: price}, "https://bestbuy.com/product", subname = sku)
sched.add_job(Function1, 'cron', hour='*')
sched.add_job(Bestbuy_Latest_Price, 'cron', hour='*')
sched.start()
</code></pre>
<p>The way this script works is that every hour, <code>Function1()</code> will be called, pull some data, compare it to the stored value in the database, and then notify if this value was changed.</p>
<p>I put the <code>cur = conn.cursor()</code> segment inside the <code>compareChanges()</code> function because otherwise I was getting errors about using a cursor inside a different thread. However, now I am getting <code>cannot start a transaction within a transaction</code>. Some of the functions run at the exact same time, so it is possible that the script will attempt to update multiple values at the same time (it will never update the same row at the same time though). How do I fix my code to not start a transaction in a transaction?</p>
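<p>For context, the pattern I am considering (a sketch only; <code>write_value</code> is illustrative, not my real <code>compareChanges()</code>) is a short-lived connection per call plus a lock, so concurrently scheduled jobs never share a half-finished transaction:</p>
<pre><code>import sqlite3
import threading

db_lock = threading.Lock()

def write_value(site, subname, value, timestamp):
    # serialize DB access across APScheduler threads and commit/close immediately
    with db_lock:
        conn = sqlite3.connect('db.db')
        try:
            conn.execute(
                "UPDATE Data SET Data = ?, Timestamp = ? WHERE Name = ? AND Subname = ?",
                (value, timestamp, site, subname),
            )
            conn.commit()
        finally:
            conn.close()
</code></pre>
<p>Is that the right fix here, or is there a cleaner way with a single shared connection?</p>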
|
<python><sqlite><transactions><apscheduler>
|
2025-03-07 21:07:15
| 1
| 8,806
|
Bijan
|
79,493,181
| 568,352
|
I am trying to run a uwsgi-emperor vassal and it can't find my python
|
<p>When I create a uwsgi vassal ini file, the server throws this error when I hit the site:</p>
<p>--- no python application found, check your startup logs for errors ---</p>
<p>The virtualenv is correct: if I do <code>python manage.py check</code> I do not get errors, and <code>python manage.py runserver</code> runs a dev version fine. I first source my venv, so I know Python is installed in the virtual environment at this path: /var/www/webapps/lhhs/env</p>
<p>This is my .ini file:</p>
<pre><code>[uwsgi]
uid = www-data
socket = /var/www/webapps/lhhs/lhhs.sock
chown-socket = %(uid):www-data
chmod-socket = 660
chdir = /var/www/webapps/lhhs/
virtualenv = /var/www/webapps/lhhs/env
binary-path = /var/www/webapps/lhhs/env/bin/uwsgi
modue = lhhs.wsgi:application
wsgi-file = lhhs/wsgi.py
env = DJANGO_SETTINGS_MODULE=lhhs.settings.dev
module = django.core.handlers.wsgi:WSGIHandler()
stats = 127.0.0.1:9191
vacuum = true
processes = 1
threads = 1
plugins = python3,logfile
logger = file:/var/www/webapps/lhhs/log/uwsgi.log
</code></pre>
|
<python><django><uwsgi>
|
2025-03-07 18:23:26
| 0
| 1,817
|
bradrice
|
79,493,063
| 5,795,116
|
Post Request Not working for scraping a website
|
<p>Step 1: Send a GET request to fetch the page and extract hidden form values
We'll first send a GET request to the page and extract necessary hidden form values like __VIEWSTATE, __VIEWSTATEGENERATOR, and __EVENTVALIDATION which are needed for subsequent POST requests.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import re
# Define the URL and headers for the initial GET request
url = "https://jamabandi.punjab.gov.in/default.aspx"
headers = {
'Accept': '*/*',
'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
'Cache-Control': 'no-cache',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'Origin': 'https://jamabandi.punjab.gov.in',
'Referer': 'https://jamabandi.punjab.gov.in/default.aspx',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36',
'X-Requested-With': 'XMLHttpRequest'
}
# Send GET request to fetch the page
response = requests.get(url, headers=headers)
# Extract hidden form fields (VIEWSTATE, VIEWSTATEGENERATOR, EVENTVALIDATION)
viewstate = re.search(r'__VIEWSTATE" value="([^"]+)', response.text).group(1)
viewstategenerator = re.search(r'__VIEWSTATEGENERATOR" value="([^"]+)', response.text).group(1)
eventvalidation = re.search(r'__EVENTVALIDATION" value="([^"]+)', response.text).group(1)
# Print the extracted values to verify
print("VIEWSTATE:", viewstate)
print("VIEWSTATEGENERATOR:", viewstategenerator)
print("EVENTVALIDATION:", eventvalidation)
</code></pre>
<p>Step 2: Post request to select the district
Now that we have the necessary hidden form values, we can post the district selection by sending a POST request.</p>
<pre><code># Define the payload for the POST request to select a district
payload = {
'ctl00$ScriptManager1': 'ctl00$ContentPlaceHolder1$updRegionSelector|ctl00$ContentPlaceHolder1$ddlDistrict',
'ctl00$SelectRegion$rdPeriod': '1',
'ctl00$SelectRegion$ddlDistrict': '10', # Choose district here (example: 10)
'ctl00$SelectRegion$ddlTehsil': '',
'ctl00$SelectRegion$ddlVillage': '',
'ctl00$SelectRegion$ddlYear': '',
'ctl00$ddlLang': 'en-US',
'ctl00$ContentPlaceHolder1$rdPeriod': '1',
'ctl00$ContentPlaceHolder1$ddlDistrict': '10', # Same district
'__EVENTTARGET': 'ctl00$ContentPlaceHolder1$ddlDistrict',
'__EVENTARGUMENT': '',
'__LASTFOCUS': '',
'__VIEWSTATE': viewstate,
'__VIEWSTATEGENERATOR': viewstategenerator,
'__EVENTVALIDATION': eventvalidation,
'__ASYNCPOST': 'true'
}
# Send the POST request to select the district
post_url = "https://jamabandi.punjab.gov.in/default.aspx"
response = requests.post(post_url, headers=headers, data=payload)
# Print response to check if the request was successful
print(response.text)
</code></pre>
<p>Step 3: Select Tehsil
Next, we move to select the Tehsil, following the same approach as the district selection. We need to extract the hidden form values again from the updated page (the one that follows the district selection) and post the Tehsil selection.</p>
<pre><code> # Extract hidden form fields after the district selection (VIEWSTATE, VIEWSTATEGENERATOR, EVENTVALIDATION)
viewstate = re.search(r'__VIEWSTATE" value="([^"]+)', response.text).group(1)
viewstategenerator = re.search(r'__VIEWSTATEGENERATOR" value="([^"]+)', response.text).group(1)
eventvalidation = re.search(r'__EVENTVALIDATION" value="([^"]+)', response.text).group(1)
# Define the payload for the POST request to select the Tehsil
payload = {
'ctl00$ScriptManager1': 'ctl00$ContentPlaceHolder1$updRegionSelector|ctl00$ContentPlaceHolder1$ddlTehsil',
'ctl00$SelectRegion$rdPeriod': '1',
'ctl00$SelectRegion$ddlDistrict': '10',
'ctl00$SelectRegion$ddlTehsil': '63', # Choose Tehsil here (example: 63)
'ctl00$SelectRegion$ddlVillage': '',
'ctl00$SelectRegion$ddlYear': '',
'ctl00$ddlLang': 'en-US',
'ctl00$ContentPlaceHolder1$rdPeriod': '1',
'ctl00$ContentPlaceHolder1$ddlDistrict': '10',
'ctl00$ContentPlaceHolder1$ddlTehsil': '63',
'__EVENTTARGET': 'ctl00$ContentPlaceHolder1$ddlTehsil',
'__EVENTARGUMENT': '',
'__LASTFOCUS': '',
'__VIEWSTATE': viewstate,
'__VIEWSTATEGENERATOR': viewstategenerator,
'__EVENTVALIDATION': eventvalidation,
'__ASYNCPOST': 'true'
}
# Send the POST request to select the Tehsil
response = requests.post(post_url, headers=headers, data=payload)
# Print response to check if the request was successful
print(response.text)
</code></pre>
<p>Step 4: Select Village
We follow the same approach as before, but now we need to select the village. The hidden form values should be extracted again after selecting the Tehsil.</p>
<pre><code># Extract hidden form fields after the Tehsil selection (VIEWSTATE, VIEWSTATEGENERATOR, EVENTVALIDATION)
viewstate = re.search(r'__VIEWSTATE" value="([^"]+)', response.text).group(1)
viewstategenerator = re.search(r'__VIEWSTATEGENERATOR" value="([^"]+)', response.text).group(1)
eventvalidation = re.search(r'__EVENTVALIDATION" value="([^"]+)', response.text).group(1)
# Define the payload for the POST request to select the Village
payload = {
'ctl00$ScriptManager1': 'ctl00$ContentPlaceHolder1$updRegionSelector|ctl00$ContentPlaceHolder1$ddlVillage',
'ctl00$SelectRegion$rdPeriod': '1',
'ctl00$SelectRegion$ddlDistrict': '10',
'ctl00$SelectRegion$ddlTehsil': '63',
'ctl00$SelectRegion$ddlVillage': '1E36A812-C218-DD11-8334-000E0CA49FC8', # Choose Village here (example village ID)
'ctl00$SelectRegion$ddlYear': '',
'ctl00$ddlLang': 'en-US',
'ctl00$ContentPlaceHolder1$rdPeriod': '1',
'ctl00$ContentPlaceHolder1$ddlDistrict': '10',
'ctl00$ContentPlaceHolder1$ddlTehsil': '63',
'ctl00$ContentPlaceHolder1$ddlVillage': '1E36A812-C218-DD11-8334-000E0CA49FC8',
'__EVENTTARGET': 'ctl00$ContentPlaceHolder1$ddlVillage',
'__EVENTARGUMENT': '',
'__LASTFOCUS': '',
'__VIEWSTATE': viewstate,
'__VIEWSTATEGENERATOR': viewstategenerator,
'__EVENTVALIDATION': eventvalidation,
'__ASYNCPOST': 'true'
}
# Send the POST request to select the Village
response = requests.post(post_url, headers=headers, data=payload)
# Print response to check if the request was successful
print(response.text)
</code></pre>
<p>Step 5: Select Year
Now, we move to select the year in the same manner.</p>
<pre><code># Extract hidden form fields after the Village selection (VIEWSTATE, VIEWSTATEGENERATOR, EVENTVALIDATION)
viewstate = re.search(r'__VIEWSTATE" value="([^"]+)', response.text).group(1)
viewstategenerator = re.search(r'__VIEWSTATEGENERATOR" value="([^"]+)', response.text).group(1)
eventvalidation = re.search(r'__EVENTVALIDATION" value="([^"]+)', response.text).group(1)
# Define the payload for the POST request to select the Year
payload = {
'ctl00$ScriptManager1': 'ctl00$ContentPlaceHolder1$updRegionSelector|ctl00$ContentPlaceHolder1$ddlYear',
'ctl00$SelectRegion$rdPeriod': '1',
'ctl00$SelectRegion$ddlDistrict': '10',
'ctl00$SelectRegion$ddlTehsil': '63',
'ctl00$SelectRegion$ddlVillage': '1E36A812-C218-DD11-8334-000E0CA49FC8',
'ctl00$SelectRegion$ddlYear': '3', # Choose Year here (example: 3)
'ctl00$ddlLang': 'en-US',
'ctl00$ContentPlaceHolder1$rdPeriod': '1',
'ctl00$ContentPlaceHolder1$ddlDistrict': '10',
'ctl00$ContentPlaceHolder1$ddlTehsil': '63',
'ctl00$ContentPlaceHolder1$ddlVillage': '1E36A812-C218-DD11-8334-000E0CA49FC8',
'ctl00$ContentPlaceHolder1$ddlYear': '3',
'__EVENTTARGET': 'ctl00$ContentPlaceHolder1$ddlYear',
'__EVENTARGUMENT': '',
'__LASTFOCUS': '',
'__VIEWSTATE': viewstate,
'__VIEWSTATEGENERATOR': viewstategenerator,
'__EVENTVALIDATION': eventvalidation,
'__ASYNCPOST': 'true'
}
# Send the POST request to select the Year
response = requests.post(post_url, headers=headers, data=payload)
# Print response to check if the request was successful
print(response.text)
</code></pre>
<p>Step 6: Send GET Request to the Mutation Page</p>
<p>The first part of the code sends a GET request to the mutation page to retrieve necessary information, such as cookies and hidden fields like __VIEWSTATE, __VIEWSTATEGENERATOR, and __EVENTVALIDATION.</p>
<pre><code>import requests
cookie1 = response.cookies.get_dict()
cookie2 = cookie1['dgrLAndrecordPLRS']
# Define the URL
url = "https://jamabandi.punjab.gov.in/Mutation.aspx?itemPID=4"
# Define the headers
headers = {
"authority": "jamabandi.punjab.gov.in",
"method": "GET",
"scheme": "https",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"accept-encoding": "gzip, deflate, br, zstd",
"accept-language": "en-GB,en;q=0.9",
"referer": "https://jamabandi.punjab.gov.in/default.aspx",
"sec-ch-ua": '"Not A(Brand";v="8", "Chromium";v="132", "Google Chrome";v="132"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": '"macOS"',
"sec-fetch-dest": "document",
"sec-fetch-mode": "navigate",
"sec-fetch-site": "same-origin",
"sec-fetch-user": "?1",
"upgrade-insecure-requests": "1",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
}
# Define the cookies
cookies = {
"dgrLAndrecordPLRS": cookie2
}
# Make the GET request
response = requests.get(url, headers=headers, cookies=cookies)
# Check the response status
if response.status_code == 200:
print("Request was successful!")
print(response.text) # Print the HTML content
else:
print(f"Failed to fetch the page. Status Code: {response.status_code}")
</code></pre>
<p>Step 2: Extract Hidden Fields from the HTML
After successfully sending the GET request, you use BeautifulSoup to parse the HTML response and extract the necessary hidden fields (__VIEWSTATE, __VIEWSTATEGENERATOR, and __EVENTVALIDATION) that will be required in the subsequent POST request. This is where it is not working.</p>
<pre><code>from bs4 import BeautifulSoup
# Initialize a session to persist cookies
session = requests.Session()
# Step 1: Send a GET request to the page to get necessary hidden fields
url = "https://jamabandi.punjab.gov.in/Mutation.aspx?itemPID=4"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
}
headers.update({
"Referer": url,
"X-Requested-With": "XMLHttpRequest"
})
# Send GET request to fetch the page content
response = session.get(url, headers=headers)
# Parse HTML to extract __VIEWSTATE, __VIEWSTATEGENERATOR, and __EVENTVALIDATION
soup = BeautifulSoup(response.text, "html.parser")
viewstate = soup.find("input", {"name": "__VIEWSTATE"})["value"]
viewstategenerator = soup.find("input", {"name": "__VIEWSTATEGENERATOR"})["value"]
eventvalidation = soup.find("input", {"name": "__EVENTVALIDATION"})["value"]
# Debug: Print extracted values
print("VIEWSTATE:", viewstate)
print("VIEWSTATEGENERATOR:", viewstategenerator)
print("EVENTVALIDATION:", eventvalidation)
</code></pre>
<p>Step 3: Prepare Payload for POST Request
Next, you define the payload for the POST request. This includes the hidden fields you extracted (__VIEWSTATE, __VIEWSTATEGENERATOR, and __EVENTVALIDATION), as well as any form fields required by the mutation form.</p>
<pre><code># Prepare the payload for the POST request
payload = {
"ctl00$ScriptManager1": "ctl00$ContentPlaceHolder1$updMutationPanel|ctl00$ContentPlaceHolder1$ddlMutationNumber",
"__EVENTTARGET": "ctl00$ContentPlaceHolder1$ddlMutationNumber",
"__EVENTARGUMENT": "",
"__LASTFOCUS": "",
"__VIEWSTATE": viewstate,
"__VIEWSTATEGENERATOR": viewstategenerator,
"__SCROLLPOSITIONX": "0",
"__SCROLLPOSITIONY": "0",
"__EVENTVALIDATION": eventvalidation,
"ctl00$SelectRegion$rdPeriod": "1",
"ctl00$SelectRegion$ddlDistrict": "",
"ctl00$SelectRegion$ddlTehsil": "",
"ctl00$SelectRegion$ddlVillage": "",
"ctl00$SelectRegion$ddlYear": "",
"ctl00$ddlLang": "en-US",
"ctl00$ContentPlaceHolder1$ddlMutationNumber": "5672",
"__ASYNCPOST": "true"
}
</code></pre>
<p>Step 4: Send POST Request to Select Mutation</p>
<p>After preparing the payload, you send the POST request to select the mutation number. This step may trigger CAPTCHA handling if a mutation is selected.</p>
<pre><code># Step 3: Send the POST request
post_url = "https://jamabandi.punjab.gov.in/Mutation.aspx?itemPID=4"
response = session.post(post_url, headers=headers, data=payload)
# Print response to verify
print(response.text)
</code></pre>
|
<python><post><get>
|
2025-03-07 17:30:01
| 1
| 327
|
jatin rajani
|
79,493,017
| 274,579
|
How to make script-level variables private to the script
|
<p>Is it possible to declare a global variable that is invisible to the importing script?</p>
<p>For example, in script <code>a.py</code> I have a variable <code>var_a</code> that is accessible to any function in <code>a.py</code>. However, in script <code>b.py</code> that imports <code>a.py</code>, I want <code>var_a</code> to be inaccessible.</p>
<p>(A somewhat similar concept to C's <code>static</code> module variables)</p>
<p>In <code>a.py</code>:</p>
<pre><code>var_a = "hello"
print("In script a.py:", var_a)
</code></pre>
<p>In <code>b.py</code>:</p>
<pre><code>from a import var_a
print("In script b.py:", var_a)
</code></pre>
<p>Testing:</p>
<pre><code>$ python3 b.py
In script a.py: hello
In script b.py: hello
</code></pre>
<p>I would like to get an error when referencing <code>var_a</code> from <code>b.py</code>.</p>
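<p>The closest I know of is the underscore convention (a sketch; as far as I know this is convention only, so <code>from a import _var_a</code> would still succeed):</p>
<pre><code># a.py
_var_a = "hello"   # leading underscore: "private" by convention
__all__ = []       # also keeps `from a import *` from re-exporting it

def _uses_it():
    return _var_a
</code></pre>
<p>Is there any mechanism that actually raises an error, like C's <code>static</code>?</p>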
|
<python><python-3.x><scope><python-import>
|
2025-03-07 17:13:59
| 1
| 8,231
|
ysap
|
79,492,880
| 243,031
|
How to make all fields optional with alias names?
|
<p>I am following <a href="https://stackoverflow.com/a/77851176/243031">https://stackoverflow.com/a/77851176/243031</a> to make all of my model's fields optional.</p>
<p>I created function to get base class annotations.</p>
<pre><code>def get_annotations(main_cls):
ret_val = main_cls.__annotations__
for base_cls in main_cls.__bases__:
if base_cls != BaseModel:
ret_val.update(get_annotations(base_cls))
return ret_val
</code></pre>
<p>and created optional model as</p>
<pre><code>OptionalClientModel = create_model(
"OptionalClientModel",
**{k: (Optional[v], None) for k, v in get_annotations(ClientModel).items()})
</code></pre>
<p>The original classes are as below</p>
<pre><code>from typing import Annotated
from bson import ObjectId
from pydantic import Field
from pydantic import EmailStr
from pydantic import BaseModel
from pydantic import BeforeValidator
from pydantic import ConfigDict
from pydantic import AwareDatetime
from pydantic import field_validator
# Represents an ObjectId field in the database.
# It will be represented as a `str` on the model so that it can
# be serialized to JSON.
PyObjectId = Annotated[str, BeforeValidator(str)]
class DBTableBase(BaseModel):
# The primary key for the Table, stored as a `str` on the instance.
# This will be aliased to `_id` when sent to MongoDB,
# but provided as `id` in the API requests and responses.
id: PyObjectId | None = Field(alias="_id",
serialization_alias="id",
default=None)
model_config = ConfigDict(
json_encoders={ObjectId: str},
json_schema_extra={
"example": {
"id": "BSON_ID"
}
},
)
class ClientModel(DBTableBase):
first_name: str
last_name: str
</code></pre>
<p>When I want a model where every value is optional, I can use <code>OptionalClientModel</code>.</p>
<p>The issue is that <code>id</code> in <code>OptionalClientModel</code> has no alias.</p>
<p>How can I create the optional model while keeping the alias?</p>
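<p>A sketch of one possible direction (an assumption on my side, not verified): build the optional model from <code>model_fields</code> instead of <code>__annotations__</code>, since each <code>FieldInfo</code> already carries the alias information and includes inherited fields:</p>
<pre><code>from typing import Optional
from pydantic import Field, create_model

# Sketch: copy alias metadata from each FieldInfo when building the optional model.
optional_fields = {
    name: (
        Optional[info.annotation],
        Field(default=None,
              alias=info.alias,
              serialization_alias=info.serialization_alias),
    )
    for name, info in ClientModel.model_fields.items()
}

OptionalClientModel = create_model(
    "OptionalClientModel",
    __base__=DBTableBase,   # optional: keeps model_config from the base class
    **optional_fields,
)
</code></pre>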
|
<python><mongodb><alias><pydantic>
|
2025-03-07 16:14:05
| 1
| 21,411
|
NPatel
|
79,492,823
| 1,999,585
|
AttributeError: The layer sequential has never been called and thus has no defined input.. Did you mean: 'inputs'?
|
<p>I am trying to write a Python class that applies the VGG-Face recognition algorithm. This class is written in the deepface.py file:</p>
<pre><code>import cv2
import numpy as np
from deepface import DeepFace
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from base import Base
class VGGClassification(Base):
def __init__(self):
super().__init__()
self.X_rgb = None
self.y_encoded = None
self.X_train = None
self.X_test = None
self.y_train = None
self.y_test = None
self.clf = None
def preprocess_images(self):
self.X_rgb = np.array([
cv2.resize(cv2.cvtColor(img.reshape(self.h, self.w), cv2.COLOR_GRAY2RGB), (224, 224))
for img in self.X
])
def encode_labels(self):
encoder = LabelEncoder()
self.y_encoded = encoder.fit_transform(self.y)
def split_data(self):
self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(self.X_rgb, self.y_encoded,
test_size=0.2, random_state=42,
stratify=self.y_encoded)
def extract_embeddings(self, images):
return np.array([
DeepFace.represent(img_path=img, model_name='VGG-Face', enforce_detection=False)[0]['embedding']
for img in images
])
def train_model(self):
X_train_embedded = self.extract_embeddings(self.X_train)
X_test_embedded = self.extract_embeddings(self.X_test)
self.clf = SVC(kernel='linear', probability=True)
self.clf.fit(X_train_embedded, self.y_train)
y_pred = self.clf.predict(X_test_embedded)
print(classification_report(self.y_test, y_pred, target_names=self.target_names))
def execute(self):
super().execute()
self.preprocess_images()
self.encode_labels()
self.split_data()
self.train_model()
</code></pre>
<p>This class inherits a base class, stored in the base.py file:</p>
<pre><code>import os
from sklearn.datasets import fetch_lfw_people
class Base:
def __init__(self):
"""Initializes the Base object with default values."""
self.n_samples = 0
self.h = 0
self.w = 0
self.X = None
self.n_features = 0
self.y = None
self.target_names = None
self.n_classes = 0
def load_data(self):
"""
Loads the LFW dataset and extracts its attributes.
Returns:
tuple: (X, y, n_samples, n_classes, n_features)
"""
lfw_people = fetch_lfw_people(data_home=os.getcwd(), min_faces_per_person=70, resize=0.4)
self.n_samples, self.h, self.w = lfw_people.images.shape
self.X = lfw_people.data
self.n_features = self.X.shape[1]
self.y = lfw_people.target
self.target_names = lfw_people.target_names
self.n_classes = self.target_names.shape[0]
def print_load_results(self):
"""
Prints the total dataset size, including the number of samples, features, and classes.
"""
print('Total dataset size: ')
print(f'n_samples: {self.n_samples}')
print(f'n_features: {self.n_features}')
print(f'n_classes: {self.n_classes}')
def execute(self):
self.load_data()
self.print_load_results()
</code></pre>
<p>I tried to run the execute() method of an instance of the VGGClassification class and I get the error message:</p>
<blockquote>
<p>AttributeError: The layer sequential has never been called and thus
has no defined input.. Did you mean: 'inputs'?</p>
</blockquote>
<p>The full traceback is:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\main.py", line 22, in <module>
vgg_class.execute()
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\vgg_deepface.py", line 59, in execute
self.train_model()
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\vgg_deepface.py", line 46, in train_model
X_train_embedded = self.extract_embeddings(self.X_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\vgg_deepface.py", line 41, in extract_embeddings
DeepFace.represent(img_path=np.array(img), model_name='VGG-Face', enforce_detection=False)[0]['embedding']
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\deepface\DeepFace.py", line 418, in represent
return representation.represent(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\deepface\modules\representation.py", line 68, in represent
model: FacialRecognition = modeling.build_model(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\deepface\modules\modeling.py", line 96, in build_model
cached_models[task][model_name] = model()
^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\deepface\models\facial_recognition\VGGFace.py", line 45, in __init__
self.model = load_model()
^^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\deepface\models\facial_recognition\VGGFace.py", line 158, in load_model
vgg_face_descriptor = Model(inputs=model.input, outputs=base_model_output)
^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\keras\src\ops\operation.py", line 268, in input
return self._get_node_attribute_at_index(0, "input_tensors", "input")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexz\OneDrive\Uni stuff\Articole\Articol Cristina 2\Python\.venv\Lib\site-packages\keras\src\ops\operation.py", line 299, in _get_node_attribute_at_index
raise AttributeError(
AttributeError: The layer sequential has never been called and thus has no defined input.. Did you mean: 'inputs'?
</code></pre>
<p>I cannot figure out what I am doing wrong. Can you help me?</p>
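<p>For completeness, a small diagnostic sketch (not a fix): the traceback points into DeepFace's own <code>VGGFace.load_model</code> rather than into <code>VGGClassification</code>, which as far as I can tell is commonly associated with running under Keras 3, so printing the installed versions is the first thing I would check:</p>
<pre><code>from importlib.metadata import version

import tensorflow as tf
import keras

# Sketch: the failing call is inside deepface, so check which Keras/TensorFlow
# combination deepface is actually running against.
print("tensorflow:", tf.__version__)
print("keras     :", keras.__version__)
print("deepface  :", version("deepface"))
</code></pre>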
|
<python><keras>
|
2025-03-07 15:52:19
| 0
| 2,424
|
Bogdan Doicin
|
79,492,778
| 8,037,521
|
Apply VTK color series to render a point cloud
|
<p>I have managed to write this code, containing a simple load button for las/laz files and the VTK rendering widget. It applies some automatic color scheme (which one?) to the point cloud, based on the colors that I produce by normalizing intensity values to the 0-1 range. Question: how do I apply a VTK color series to it? I have seen the piece of code below, but I do not really know where to put it. I tried it in several places, but it did not produce the desired effect (i.e. application of the chosen color scheme). Not sure; maybe it is related to my use of the vertex filter object.</p>
<pre><code>color_series = vtk.vtkColorSeries()
color_series.SetColorSchemeName("Blue")
lut = color_series.CreateLookupTable()
mapper.SetLookupTable(lut)
</code></pre>
<p>MRE (not containing the color series part above):</p>
<pre><code>import sys
import vtk
from PySide2.QtWidgets import (
QApplication,
QMainWindow,
QVBoxLayout,
QWidget,
QPushButton,
QFileDialog,
)
import laspy
import numpy as np
from vtk.qt.QVTKRenderWindowInteractor import QVTKRenderWindowInteractor
from tqdm import tqdm
class PointCloudViewer(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("Point Cloud Viewer")
self.setGeometry(100, 100, 800, 600)
layout = QVBoxLayout()
central_widget = QWidget(self)
central_widget.setLayout(layout)
self.setCentralWidget(central_widget)
load_button = QPushButton("Load Point Cloud", self)
load_button.clicked.connect(self.load_point_cloud)
layout.addWidget(load_button)
self.vtk_widget = QVTKRenderWindowInteractor(self)
layout.addWidget(self.vtk_widget)
self.renderer = vtk.vtkRenderer()
self.vtk_widget.GetRenderWindow().AddRenderer(self.renderer)
self.vtk_widget.GetRenderWindow().GetInteractor().Initialize()
self.vtk_widget.GetRenderWindow().GetInteractor().Start()
def load_point_cloud(self):
file_dialog = QFileDialog(self)
file_dialog.setFileMode(QFileDialog.ExistingFiles)
file_dialog.setNameFilter("Point Cloud Files (*.las *.laz)")
if file_dialog.exec_():
file_paths = file_dialog.selectedFiles()
if file_paths:
self.display_point_cloud(file_paths[0])
def display_point_cloud(self, file_path):
self.renderer.RemoveAllViewProps()
self.renderer.SetBackground(0.1, 0.1, 0.1)
points, colors = self.read_point_cloud(file_path)
poly_data = vtk.vtkPolyData()
poly_data.SetPoints(points)
poly_data.GetPointData().SetScalars(colors)
vertex_filter = vtk.vtkVertexGlyphFilter()
vertex_filter.SetInputData(poly_data)
vertex_filter.Update()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(vertex_filter.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetPointSize(1)
self.renderer.AddActor(actor)
self.renderer.ResetCamera()
self.vtk_widget.GetRenderWindow().Render()
def read_point_cloud(self, file_path):
las = laspy.read(file_path)
points = np.vstack((las.x, las.y, las.z)).transpose()
intensity_data = las.intensity
min_intensity = np.min(intensity_data)
max_intensity = np.max(intensity_data)
intensity_data = (intensity_data - min_intensity) / (
max_intensity - min_intensity
)
vtk_colors = vtk.vtkFloatArray()
vtk_colors.SetName("Intensity")
vtk_points = vtk.vtkPoints()
chunk_size = max(10000, len(las) // 100)
shift = np.array((points[0][0], points[0][1], points[0][2]))
for i in tqdm(range(0, len(las), chunk_size)):
chunk = points[i : i + chunk_size] - shift
chunk_intensity = intensity_data[i : i + chunk_size]
for j, point in enumerate(chunk):
vtk_points.InsertNextPoint(point)
vtk_colors.InsertNextValue(chunk_intensity[j])
return vtk_points, vtk_colors
if __name__ == "__main__":
app = QApplication(sys.argv)
viewer = PointCloudViewer()
viewer.show()
sys.exit(app.exec_())
</code></pre>
<p>I admit that I barely understand the code I managed to assemble in VTK, so any other improvements and explanations not related directly to the question are also welcome.</p>
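<p>For reference, a sketch of where I imagine the lookup table would have to be wired in (this is an assumption on my part, not verified): build a <code>vtkLookupTable</code> from the color series and tell the existing mapper to map the normalized 0-1 "Intensity" scalars through it.</p>
<pre><code>import vtk

# Sketch: attach a color-series lookup table to the mapper created in
# display_point_cloud(). Assumes the scalars are already normalized to 0-1.
color_series = vtk.vtkColorSeries()
color_series.SetColorScheme(vtk.vtkColorSeries.BREWER_SEQUENTIAL_BLUE_GREEN_9)

lut = vtk.vtkLookupTable()
color_series.BuildLookupTable(lut, vtk.vtkColorSeries.ORDINAL)
lut.SetRange(0.0, 1.0)

mapper.SetLookupTable(lut)         # mapper is the vtkPolyDataMapper from display_point_cloud
mapper.SetScalarRange(0.0, 1.0)    # map the normalized intensity through the LUT
mapper.SetColorModeToMapScalars()  # interpret scalars as values to map, not direct colors
</code></pre>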
|
<python><vtk><pyside2>
|
2025-03-07 15:33:25
| 0
| 1,277
|
Valeria
|
79,492,385
| 25,413,271
|
Asyncio: pass context or contextvar to add_done_callback
|
<p>I am learning asyncio callbacks. My task: I have a message dict where message codes are keys and message texts are values. In the coro <code>main</code> I have to create a number of asynchronous tasks (in my case 3), each wrapping a coro that prints one message. I also have to add a callback to each task. The callback must print the code associated with the message printed by the coroutine wrapped by the task. The question is: how do I pass the code to the callback? The straightforward solution is to give each task a name equal to the code, but I don't want to go that way. I decided to use <code>ContextVar</code> for this purpose. So I create a global context variable and <code>set()</code> its value to the code. Then I try to <code>get()</code> the context variable's value from the callback, but I receive an exception: <code>LookupError: <ContextVar name='msg_code' at 0x000001C596D94F40></code>. That's my code:</p>
<pre><code>import asyncio
from contextvars import ContextVar
msg_dict = {
'code1': 'msg1 by code1',
'code2': 'msg2 by code2',
'code3': 'msg3 by code3'
}
msg_code = ContextVar('msg_code')
async def print_msg(code):
await asyncio.sleep(0.5)
msg_code.set(code)
print(f'Message: {msg_dict[code]}')
def callback_code(*args):
code = msg_code.get()
print(f'Code: {code}')
async def main():
tasks = [asyncio.create_task(print_msg(code)) for code in msg_dict.keys()]
[task.add_done_callback(callback_code) for task in tasks]
await asyncio.gather(*tasks)
asyncio.run(main())
</code></pre>
<p>I found that <code>add_done_callback()</code> also has keyword argument <code>context=</code> but I can't find any examples of how to pass task's context to a callback.</p>
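<p>A minimal self-contained sketch of the simplest alternative I can see, side-stepping <code>ContextVar</code> entirely by pre-binding the code with <code>functools.partial</code> (treat the <code>context=</code> question as still open):</p>
<pre><code>import asyncio
import functools

msg_dict = {
    'code1': 'msg1 by code1',
    'code2': 'msg2 by code2',
    'code3': 'msg3 by code3',
}

async def print_msg(code):
    await asyncio.sleep(0.5)
    print(f'Message: {msg_dict[code]}')

def callback_code(code, task):
    # `task` is the finished Task that asyncio always passes to done-callbacks.
    print(f'Code: {code}')

async def main():
    tasks = []
    for code in msg_dict:
        task = asyncio.create_task(print_msg(code))
        # functools.partial pre-binds the code, so no ContextVar is needed.
        task.add_done_callback(functools.partial(callback_code, code))
        tasks.append(task)
    await asyncio.gather(*tasks)

asyncio.run(main())
</code></pre>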
|
<python><python-asyncio><asynccallback><python-contextvars>
|
2025-03-07 12:59:03
| 2
| 439
|
IzaeDA
|
79,492,367
| 15,560,990
|
Can Airflow task dependencies be re-used?
|
<p>I have a series of airflow DAGs which re-use some of the task dependencies.
For example</p>
<pre><code>DAG 1:
T1 >> T2
DAG 2:
T1 >> T2 >> T3
DAG 3:
T1 >> T2 >> T3 >> [T4, T5, T6] >> T7
</code></pre>
<p>I would like to store the dependencies from DAG 1 (which in this model, are being used by every other DAG) and re-use them when declaring the dependencies for the other DAGs, like so:</p>
<pre><code>def dag_1_dependencies():
T1 >> T2
DAG 2:
dag_1_dependencies() >> T3
DAG 3:
dag_1_dependencies() >> T3 >> [T4, T5, T6] >> T7
</code></pre>
<p>The problem is that dependencies themselves aren't values, so I can't return them from a method. Calling <code>dag_1_dependencies()</code> on its own does nothing. Is there a way to circumvent this?</p>
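<p>A sketch of the kind of workaround I have in mind (an assumption, not tested inside a real DAG): make the helper return the last task it wires up, so the chain can be extended by the caller.</p>
<pre><code># Sketch: the helper sets T1 >> T2 and hands back T2 so callers can keep chaining.
def dag_1_dependencies(t1, t2):
    t1 >> t2
    return t2   # in Airflow, `a >> b` also returns `b`, so `return t1 >> t2` is equivalent

# DAG 2
dag_1_dependencies(T1, T2) >> T3

# DAG 3
dag_1_dependencies(T1, T2) >> T3 >> [T4, T5, T6] >> T7
</code></pre>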
|
<python><airflow><directed-acyclic-graphs>
|
2025-03-07 12:51:54
| 1
| 460
|
Dasph
|
79,492,362
| 4,412,929
|
Applying a custom function to Xarray resample drops the dimension coordinates
|
<p>When using the <code>map</code> method to apply a custom function in Xarray resample, the dimension coordinates are lost, and the resulting Dataset has a sequence instead of the actual coordinate values.</p>
<p>The following MWE will properly demonstrate the issue:</p>
<pre><code>import numpy as np
import pandas as pd
import xarray as xr
def custom_fn(ds):
return ds
if __name__=='__main__':
times=pd.date_range(
'2000-01-01 00:00', '2000-01-05 23:59:59',
freq='5min',
)
data1=np.arange(len(times))
data2=data1+1
data=xr.Dataset(
data_vars={
'data1':(['time',], data1,),
'data2':(['time',], data2,),
},
coords={'time':times}
)
new=data.resample(time='30min').map(custom_fn)
print(data)
print(new)
</code></pre>
<p>And the output is:</p>
<pre><code><xarray.Dataset> Size: 35kB
Dimensions:  (time: 1440)
Coordinates:
  * time     (time) datetime64[ns] 12kB 2000-01-01 ... 2000-01-05T23:55:00
Data variables:
    data1    (time) int64 12kB 0 1 2 3 4 5 6 ... 1434 1435 1436 1437 1438 1439
    data2    (time) int64 12kB 1 2 3 4 5 6 7 ... 1435 1436 1437 1438 1439 1440

<xarray.Dataset> Size: 23kB
Dimensions:  (time: 1440)
Dimensions without coordinates: time
Data variables:
    data1    (time) int64 12kB 0 1 2 3 4 5 6 ... 1434 1435 1436 1437 1438 1439
    data2    (time) int64 12kB 1 2 3 4 5 6 7 ... 1435 1436 1437 1438 1439 1440
</code></pre>
<p>I have kept <code>custom_fn</code> very simple; notice that the result has no time coordinate values. I used <code>breakpoint</code> inside the custom function to check whether it is receiving the coordinates, and it is. So I assume the issue is with how the <code>map</code> function (or whatever runs afterwards) merges the results.</p>
<p>Does anyone know what is happening and how to fix it?</p>
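<p>For completeness, a sketch of a workaround I can think of (an assumption on my part): since the mapped result keeps the same length as the input in this MWE, the coordinate can simply be re-attached afterwards.</p>
<pre><code># Sketch: restore the time coordinate that map() dropped. Only valid when
# custom_fn does not change the length of the time dimension, as in the MWE.
new = data.resample(time='30min').map(custom_fn)
new = new.assign_coords(time=data['time'])
print(new)
</code></pre>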
|
<python><python-xarray>
|
2025-03-07 12:48:37
| 0
| 363
|
RogUE
|
79,492,317
| 1,826,066
|
Fill gaps in time series data in a Polars Lazy- / Dataframe
|
<p>I am in a situation where I have some time series data, potentially looking like this:</p>
<pre class="lang-py prettyprint-override"><code>{
"t": [1, 2, 5, 6, 7],
"y": [1, 1, 1, 1, 1],
}
</code></pre>
<p>As you can see, the time stamp jumps from <code>2</code> to <code>5</code>. For my analysis, I would like to fill in zeros for the time stamps <code>3</code> and <code>4</code>.</p>
<p>In reality, I might have multiple gaps with varying lengths. I'd like to fill this gap for all other columns.</p>
<p>I'd also really like to keep my data in a <code>LazyFrame</code> since this is only one step in my pipeline. I don't think that <code>.interpolate</code> is really addressing my issue, nor is <code>fill_null</code> helpful here.</p>
<p>I managed to achieve what I want, but it looks too complex:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
# Dummy, lazy data.
lf = pl.LazyFrame(
{
"t": [1, 2, 5, 6, 7],
"y": [1, 1, 1, 1, 1],
}
)
lf_filled = lf.join(
pl.Series(
name="t",
values=pl.int_range(
start=lf.select("t").first().collect().item(0, 0),
end=lf.select("t").last().collect().item(0, 0) + 1,
eager=True,
),
)
.to_frame()
.lazy(),
on="t",
how="right",
).fill_null(0)
</code></pre>
<p>The output is correct and I am never <code>collect</code>ing any more data than the two values needed for <code>start</code> and <code>end</code>.</p>
<p>This looks like there should be a better way to do this. Happy to hear other suggestions :)</p>
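<p>One simplification I would consider (sketched below as an assumption, not benchmarked): build the full range of <code>t</code> as an expression inside a lazy <code>select</code>, so the two endpoints never have to be collected at all.</p>
<pre><code>import polars as pl

lf = pl.LazyFrame(
    {
        "t": [1, 2, 5, 6, 7],
        "y": [1, 1, 1, 1, 1],
    }
)

# Sketch: pl.int_range accepts expressions as bounds, so the complete time axis
# can be derived lazily from t itself and right-joined back onto the data.
lf_filled = (
    lf.join(
        lf.select(pl.int_range(pl.col("t").min(), pl.col("t").max() + 1).alias("t")),
        on="t",
        how="right",
    )
    .fill_null(0)
)

print(lf_filled.collect())
</code></pre>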
|
<python><dataframe><time-series><python-polars>
|
2025-03-07 12:25:16
| 1
| 1,351
|
Thomas
|
79,492,249
| 7,636,248
|
Why can't I build gem5 with a custom memory controller?
|
<p>I'm trying to integrate a custom memory controller into gem5, but when I try to build it, I get this error:</p>
<pre><code>terminate called after throwing an instance of 'pybind11::error_already_set'
what(): TypeError: module() takes at most 2 arguments (3 given)
At:
src/learning_gem5/my_mem_ctrl/DpiMemCtrl.py(4): <module>
<string>(40): exec_module
<frozen importlib._bootstrap>(703): _load_unlocked
<frozen importlib._bootstrap>(1006): _find_and_load_unlocked
<frozen importlib._bootstrap>(1027): _find_and_load
<string>(1): <module>
src/python/m5/objects/__init__.py(29): <module>
<string>(40): exec_module
<frozen importlib._bootstrap>(703): _load_unlocked
<frozen importlib._bootstrap>(1006): _find_and_load_unlocked
<frozen importlib._bootstrap>(1027): _find_and_load
<frozen importlib._bootstrap>(1050): _gcd_import
<frozen importlib._bootstrap>(241): _call_with_frames_removed
<frozen importlib._bootstrap>(992): _find_and_load_unlocked
<frozen importlib._bootstrap>(1027): _find_and_load
<frozen importlib._bootstrap>(1050): _gcd_import
/usr/lib/python3.10/importlib/__init__.py(126): import_module
build_tools/sim_object_param_struct_hh.py(57): <module>
Aborted (core dumped)
scons: *** [build/X86/params/FUDesc.hh] Error 134
scons: building terminated because of errors.
*** Summary of Warnings ***
Warning: Header file <capstone/capstone.h> not found.
This host has no capstone library installed.
Warning: Header file <png.h> not found.
This host has no libpng library.
Disabling support for PNG framebuffers.
Warning: Couldn't find HDF5 C++ libraries. Disabling HDF5 support.
</code></pre>
<p><strong>The odd thing is that a friend of mine is able to compile it successfully while I can't. We share the same Ubuntu version (22.04), the same Python version (3.10.12), the same g++ version (11.4.0), and the same SCons version (4.5.2).</strong>
Can you help me? Thanks.</p>
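<p>For what it's worth, <code>TypeError: module() takes at most 2 arguments (3 given)</code> is the generic Python message for a class trying to inherit from a module object instead of a class, and the traceback points at line 4 of <code>DpiMemCtrl.py</code>. A sketch of the import pattern I would double-check there (names and paths are placeholders, since the file is not shown):</p>
<pre><code># Sketch with placeholder names -- only the import style matters here.
from m5.params import *

# Import the parent *class* explicitly; a bare package-level name can resolve
# to the sub-module of the same name during the build and trigger this TypeError.
from m5.objects.MemCtrl import MemCtrl

class DpiMemCtrl(MemCtrl):
    type = "DpiMemCtrl"
    cxx_header = "learning_gem5/my_mem_ctrl/dpi_mem_ctrl.hh"   # placeholder path
    cxx_class = "gem5::memory::DpiMemCtrl"                     # placeholder name
</code></pre>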
|
<python><build><gem5>
|
2025-03-07 11:57:50
| 0
| 361
|
Vincy
|
79,491,978
| 8,621,823
|
Why does inspect.getsource only work for functions and not classes in Jupyter?
|
<pre><code>import inspect
def method():
return "Hello"
source_code = inspect.getsource(method)
print(source_code)
</code></pre>
<p>This works in both a Jupyter notebook and a .py file.</p>
<pre><code>import inspect
class MyClass:
def method(self):
return "Hello"
source_code = inspect.getsource(MyClass)
print(source_code)
</code></pre>
<p>This works in a .py file, but in Jupyter it raises <code>OSError: source code not available</code>.</p>
<p>What's going on?</p>
<p>Am I right to conclude that .py scripts can inspect all code?
What else can or cannot be inspected in Jupyter?</p>
<p>Why does this answer (<a href="https://stackoverflow.com/a/75904401">https://stackoverflow.com/a/75904401</a>)
say it errors in an interactive shell, when both my function and that one work for me in both Jupyter and IPython? (My Python is 3.10.15; that answer's is 3.10.6.)</p>
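<p>A small diagnostic probe that frames the difference (just an observation aid, not an explanation): where each object claims its source lives.</p>
<pre><code>import inspect

def method():
    return "Hello"

class MyClass:
    def method(self):
        return "Hello"

# A function carries its defining "file" on its code object; in Jupyter this is a
# per-cell pseudo-filename that IPython registers, which is presumably why
# getsource still works for functions.
print(method.__code__.co_filename)

# A class has no __code__, so inspect has to locate it via its module's file,
# which may not resolve for __main__ inside a notebook.
try:
    print(inspect.getfile(MyClass))
except (TypeError, OSError) as exc:
    print("class source file not resolvable:", exc)
</code></pre>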
|
<python><jupyter-notebook>
|
2025-03-07 10:27:46
| 0
| 517
|
Han Qi
|
79,491,666
| 368,907
|
Read file and insert into map
|
<p>I have been searching around the internet for how to read from a file and insert its contents into a map (dict) in Python. The file contains username, password pairs.</p>
<pre><code>def readFile(self):
f = open("user.txt", "r")
for x in f:
# Create a map and insert each line into it
print(x)
</code></pre>
<p>Please help me. Thanks in advance.</p>
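<p>A minimal sketch, assuming each line holds <code>username,password</code> (the separator is an assumption, since the file format is not shown):</p>
<pre><code>def read_users(path="user.txt"):
    """Read 'username,password' lines into a dict keyed by username."""
    users = {}
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue   # skip blank lines
            username, password = line.split(",", 1)
            users[username.strip()] = password.strip()
    return users

print(read_users())
</code></pre>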
|
<python><python-3.x>
|
2025-03-07 08:41:43
| 1
| 2,802
|
Ivan
|