QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
79,263,643
| 9,381,966
|
Slug Field in Django Model Not Including Related Tags on Save
|
<p>I have the following <code>save</code> method in my Django model:</p>
<pre class="lang-py prettyprint-override"><code>slug = models.SlugField(unique=True, blank=True, null=True, max_length=255)
def save(self, *args, **kwargs):
if self.pk is None:
super().save(*args, **kwargs)
tags = Tag.objects.filter(office__id=self.id).values_list("name", flat=True)
print("Tags")
print(tags)
location_work_override_id = self.location_work_override.id if self.location_work_override else ''
location_work_id = self.contest.location_work_id if self.contest and self.contest.location_work_id else ''
if not self.slug and tags:
self.slug = slugify(
f"{self.display_name}-{'-'.join(tags)}-{location_work_override_id}-{location_work_id}-{self.contest.short_name}-{self.contest.contest_number}"
)
elif not self.slug:
self.slug = slugify(
f"{self.display_name}-{location_work_override_id}-{location_work_id}-{self.contest.short_name}-{self.contest.contest_number}"
)
super().save(*args, **kwargs)
</code></pre>
<p>The <code>slug</code> field is supposed to include related <code>Tag</code> names from a <code>Tag</code> model that has a ForeignKey to <code>Office</code>. However, when I create a new <code>Office</code> instance in the Django admin, the <code>tags</code> variable in the <code>save</code> method is always empty, even though I add the tags in the admin interface.</p>
<p>I suspect this is a timing issue because the <code>save</code> method of <code>Office</code> runs before the related <code>Tag</code> objects are saved.</p>
<h3>My Questions:</h3>
<ol>
<li>How can I ensure that the <code>Tag</code> objects are saved before the <code>save</code> method of the <code>Office</code> model accesses them?</li>
<li>Is there a way to correctly populate the <code>slug</code> field with the tags in this scenario, preferably without relying on manual order of operations?</li>
</ol>
|
<python><django>
|
2024-12-09 01:22:15
| 0
| 1,590
|
Lucas
|
79,263,520
| 4,330,537
|
selenium Filling in text in a series TDs kin the first TR of a specific table
|
<p>I have a first row in a table that has empty TDs filled with input TEXTBOXs or TEXTAREAs here is what the row looks like</p>
<pre><code> <tr>
<td class="gwf_variable_table_control_column">
<div class="gwf_tooltip"><input type="button" class="gwf-round-button" value="^" onclick="InsertRowBefore('MeteorologicalVariables', this, 'TextBox')" title="Add row before"></div>
<div class="gwf_tooltip"><input type="button" class="gwf-round-button" value="X" onclick="DeleteRow('MeteorologicalVariables', this)" title="Delete current row"></div>
<div class="gwf_tooltip"><input type="button" class="gwf-round-button" value="v" onclick="InsertRowAfter('MeteorologicalVariables', this, 'TextBox')" title="Add row after"></div>
</td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><textarea class="fmc-tb-appearance fmc-tb-font" style="height:60px;min-width:200px;width:200px;max-width:200px;"></textarea></td>
<td><textarea class="fmc-tb-appearance fmc-tb-font" style="height:60px;min-width:200px;width:200px;max-width:200px;"></textarea></td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><textarea class="fmc-tb-appearance fmc-tb-font" style="height:60px;min-width:275px;width:275px;max-width:275px;"></textarea></td>
</tr>
</code></pre>
<p>and here is the table itself I need to index to this table then hop into each text box or text area and fill them in:</p>
<pre><code> <table id="MeteorologicalVariables" class="gwf_variable_table">
<colgroup>
<col>
<col style="min-width:150px">
<col style="min-width:200px">
<col style="min-width:200px">
<col style="min-width:200px">
<col style="min-width:220px">
<col style="min-width:220px">
<col style="min-width:275px">
</colgroup>
<tbody>
<tr style="display: table-row;">
<td class="gwf_variable_table_control_column"></td>
<td style="max-width:150px" class="fmc-table-label-font fmc-tb-height">Variable</td>
<td style="max-width:200px" class="fmc-table-label-font fmc-tb-height">Station Name</td>
<td style="max-width:200px" class="fmc-table-label-font fmc-tb-height">Sensor(s)</td>
<td style="max-width:200px" class="fmc-table-label-font fmc-tb-height">Height / Depth (m)</td>
<td style="max-width:220px" class="fmc-table-label-font fmc-tb-height">Record Period</td>
<td style="max-width:220px" class="fmc-table-label-font fmc-tb-height">Measurement Frequency</td>
<td style="max-width:275px" class="fmc-table-label-font fmc-tb-height">Notes / Details</td>
</tr>
<tr>
<td class="gwf_variable_table_control_column">
<div class="gwf_tooltip"><input type="button" class="gwf-round-button" value="^" onclick="InsertRowBefore('MeteorologicalVariables', this, 'TextBox')" title="Add row before"></div>
<div class="gwf_tooltip"><input type="button" class="gwf-round-button" value="X" onclick="DeleteRow('MeteorologicalVariables', this)" title="Delete current row"></div>
<div class="gwf_tooltip"><input type="button" class="gwf-round-button" value="v" onclick="InsertRowAfter('MeteorologicalVariables', this, 'TextBox')" title="Add row after"></div>
</td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><textarea class="fmc-tb-appearance fmc-tb-font" style="height:60px;min-width:200px;width:200px;max-width:200px;"></textarea></td>
<td><textarea class="fmc-tb-appearance fmc-tb-font" style="height:60px;min-width:200px;width:200px;max-width:200px;"></textarea></td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><input type="text" class="fmc-tb-appearance fmc-tb-font fmc-tb-height" style="width:100%"></td>
<td><textarea class="fmc-tb-appearance fmc-tb-font" style="height:60px;min-width:275px;width:275px;max-width:275px;"></textarea></td>
</tr>
</code></pre>
<p>here is what I have tried so far:</p>
<pre><code>text1 = driver.find_element("xpath", '(//table[@id = "MeteorologicalVariables"]//input[@value = "^"])[1]/preceding-sibling::td[1]')
text1.send_keys("text1")
</code></pre>
<p>here is the error I get:</p>
<pre><code>Traceback (most recent call last):
File "d:\code\EnterDataObs\observations.py", line 77, in <module>
text1 = driver.find_element("xpath", '(//table[@id = "MeteorologicalVariables"]//input[@value = "^"])[1]/preceding-sibling::td[1]')
File "C:\Users\RobMe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\webdriver.py", line 741, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Users\RobMe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\webdriver.py", line 347, in execute
self.error_handler.check_response(response)
File "C:\Users\RobMe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//table[@id = "MeteorologicalVariables"]//input[@value = "^"])[1]/preceding-sibling::td[1]"}
</code></pre>
|
<python><selenium-webdriver>
|
2024-12-08 23:21:02
| 1
| 835
|
RobM
|
79,263,332
| 307,050
|
How to calculate the dBFS level using python sounddevice
|
<p>I'm trying to get a decibel reading from an audio stream using the <code>sounddevice</code> library.</p>
<p>More specifically, I want to display the current level in a <a href="https://en.wikipedia.org/wiki/Digital_audio_workstation" rel="nofollow noreferrer">DAW</a> like fashion.</p>
<p>According to <a href="https://www.masteryourtrack.com/post/understanding-db-and-meters" rel="nofollow noreferrer">this</a>, most DAWs display either <strong>dBFS</strong> or <strong>LUFS</strong> levels in their meters. I'm interested in <strong>dbFS</strong>, because that seems to be the one more commonly used.</p>
<p>According to <a href="https://dsp.stackexchange.com/questions/8785/how-to-compute-dbfs">this</a>, the dBFS level computes to</p>
<pre><code>value_dBFS = 20*log10(rms(signal) * sqrt(2))
</code></pre>
<p>which can be simplified to</p>
<pre><code>value_dBFS = 20*log10(rms(signal)) + 3.0103
</code></pre>
<p>The values coming from the <code>sounddevice</code> library with <code>dtype='float32'</code> are in a range between <code>[-1,1]</code></p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import sounddevice
import numpy as np
class SoundLevelReader(QObject):
new_level = Signal(float)
def sd_callback(self, indata, frames, time, status):
# see https://en.wikipedia.org/wiki/Sound_pressure#Sound_pressure_level
# see https://dsp.stackexchange.com/questions/8785/how-to-compute-dbfs
rms = np.sqrt(np.mean(np.absolute(indata)**2))
dBFS = 20 * np.log10(rms * np.sqrt(2))
self.new_level.emit(dBFS)
def main():
# ...
app = QApplication(sys.argv)
reader = SoundLevelReader()
meter = VolMeter(-60, 0)
reader.new_level.connect(meter.set_meter_value)
with sd.InputStream(callback=reader.sd_callback, dtype='float32', device=4, blocksize=5000):
sys.exit(app.exec())
</code></pre>
<p>At first glance, the values seem reasonable. However, compared to my DAW (Logic Pro), <strong>my values are around 6dB lower</strong> and I do not understand why. I'm comparing the values of my DAW with the code above with a Sennheiser Profile USB-C microphone. Inside the DAW I didn't add any gain or effects. I only selected the input and that's it.</p>
<p>If a add a smudge factor to the <code>log10()</code> function, I can sort of match the levels, but I'd like to understand what the actual value is and where it comes from.</p>
<pre><code>value_dBFS = 20*log10(dirty * rms(signal)) + 3.0103
</code></pre>
<p><strong>Question</strong>:</p>
<p>Can someone explain, how to correctly calculate dBFS based on readings coming from the <code>sounddevice</code> python lib?</p>
|
<python><audio><python-sounddevice><decibel>
|
2024-12-08 20:55:04
| 0
| 1,347
|
mefiX
|
79,263,329
| 12,466,687
|
How to change text color of facet category in plotly charts in Python?
|
<p>I have created few <code>Plotly</code> charts with <code>facets</code> on basis of category variable and would like to change the <code>color</code> of <code>facet text</code> in the chart. Have searched alot even on plotly website but couldn't figure out the property that can be used to change the color for facet text.</p>
<p>Using below image as an example I would like to change the color of - <code>No</code> & <code>Yes</code>:</p>
<pre><code>import plotly.express as px
fig = px.scatter(px.data.tips(), x="total_bill", y="tip", facet_col="smoker")
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/cWLjZ58g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWLjZ58g.png" alt="enter image description here" /></a></p>
<p>Would really Appreciate any help !!</p>
|
<python><plotly>
|
2024-12-08 20:53:06
| 1
| 2,357
|
ViSa
|
79,263,115
| 11,951,910
|
Using signal to clean up process
|
<p>I am learning how to use signals. I created a small program that connects to a database and prints the results. I am leaving the database connection open. I am sending a SIGTERM to end the program. That works. But when I try to add my database disconnect object it fails. I made db global. Any help or explanation of what I am doing incorrectly. My plan to to clean up socket connections and others to shut the program down gracefully.</p>
<pre><code>global db
def term_handler(signum,frame):
print('In term handler')
db.disconnect()
#more clean up to come.
sys.exit()
def main():
db = connection
results = db.select('query here')
print(results.fetchall())
#db.disconnect()
# The above disconnect works when in main. but fails when in term_handler
signal.signal(signal.SIGTERM,term_handler)
while True:
print('yes')
sleep(2)
return None
if __name__ == "__main__":
main()
</code></pre>
|
<python><signals>
|
2024-12-08 18:36:50
| 0
| 718
|
newdeveloper
|
79,262,905
| 10,746,224
|
Django-Extensions: how to add local variables to shell_plus
|
<p>I have multiple dictionaries, and I want each key/value pair to be defined in the local scope of a <strong><code>django-extensions</code></strong> <code>shell_plus</code> session.</p>
<p>My current <em><strong>management command</strong></em> looks something like this:</p>
<pre><code>import django
import code
devs = {
'hot_water': object()
'air': object()
}
procs = {
'clean_keg': lambda x: 'do_something'
}
# unnecessary code when using `shell_plus`
model_map = { model.__name__: model for model in django.apps.apps.get_models() }
local = {
'procs': procs,
'devs': devs,
**devs,
**procs,
**model_map
}
code.interact(local=local)
</code></pre>
<p>Now I find myself wanting to add <code>settings</code>, <code>models</code>, and several other Django objects that are already included with <code>shell_plus</code>, but I can't find a way to add local variables to the <code>shell_plus</code> session.</p>
<p><a href="https://stackoverflow.com/a/19270708/10746224">Brett Thomas's</a> answer shows <em>how to import modules</em> into a <code>shell_plus</code> session, but doesn't show how to add variables from a dict-like object.</p>
<p><strong>How do I add variables to a <code>shell_plus</code> session?</strong></p>
|
<python><django><django-extensions><django-commands>
|
2024-12-08 16:41:52
| 1
| 16,425
|
Lord Elrond
|
79,262,845
| 8,291,840
|
Django Allauth 65.2.0 headless mode - session token problems
|
<p>I have a setup with django, drf, django-allauth headless and nextjs acting somewhat as a proxy to my django api, completely decoupled and server from different servers (a regular django setup and separate node server for next)</p>
<p>Settings:</p>
<pre class="lang-py prettyprint-override"><code>AUTH_USER_MODEL = "user.User"
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = "mandatory"
AUTHENTICATION_BACKENDS = [
"django.contrib.auth.backends.ModelBackend",
"allauth.account.auth_backends.AuthenticationBackend",
]
HEADLESS_ONLY = True
HEADLESS_FRONTEND_URLS = {}
# HEADLESS_TOKEN_STRATEGY = "apps.core.backends.allauth_token_strategy.DRFTokenAndAnonSessionTokenStrategy"
SOCIALACCOUNT_PROVIDERS = {
"google": {
"APP": {
"client_id": config.OAUTH.GOOGLE.CLIENT_ID,
"secret": config.OAUTH.GOOGLE.CLIENT_SECRET,
},
"SCOPE": [
"profile",
"email",
],
"AUTH_PARAMS": {
"access_type": "offline",
},
"OAUTH_PKCE_ENABLED": True,
}
}
</code></pre>
<p>URLs: (the change is purely for aesthetics)</p>
<pre class="lang-py prettyprint-override"><code>from allauth.headless.constants import Client
from allauth.headless.urls import build_urlpatterns
from django.urls import path, include
from django.urls.resolvers import RoutePattern
def build_allauth_url_patterns():
path_object = build_urlpatterns(Client.APP)[0]
path_object.pattern = RoutePattern("")
return [path_object]
urlpatterns = [
path("user/", include((build_allauth_url_patterns(), "headless"), namespace="app")),
path("accounts/", include("allauth.urls")),
]
</code></pre>
<p>I want to use the headless mode since I don't need the CSRF features of django allauth browser implementation, however I want to use the handshake of django-allauth so I'm sending a post request to the api via a form from nextjs.</p>
<p>for this example consider my domain as <code>localhost</code></p>
<pre class="lang-html prettyprint-override"><code><form method="post" action="https://api.localhost/v1/user/auth/provider/redirect" className="w-full">
<Button variant="outline" className="gap-2 w-full" type="submit">
<Icons.LogIn />
<span>Sign Up With Google</span>
</Button>
<Input type="hidden" name="callback_url" value="https://auth.localhost/accounts/google/login/callback/" />
<Input type="hidden" name="process" value="login" />
<Input type="hidden" name="provider" value="google" />
</form>
</code></pre>
<p>With this, the form successfully redirects to google for authorizing my app, and I can authorize with the scope from my settings and continue to my app. But at this point, django allauth returns an error response - from why understand - because I don't have a session identifier/key.</p>
<p>In <code>allauth.socialaccout.providers.oauth2.views.OAuth2CallbackView.dispatch</code> the call to <code>allauth.socialaccout.providers.oauth2.views.OAuth2CallbackView._get_state</code> returns this error response because in <code>_get_sate</code> the state is always None. This is how far I was able to track it, I tried to figure out how to get a session_token to put it to X-Session-Token header but since handshake is a redirect from google to my app I can't change the header, since I'm using the APP version and not BROWSER version I don't have a cookie (In a non browser app you wouldn't have this anyway, and the provider_token endpoint <em>still</em> requires it as per the <a href="https://docs.allauth.org/en/dev/headless/openapi-specification/#tag/Authentication:-Providers/paths/%7E1_allauth%7E1%7Bclient%7D%7E1v1%7E1auth%7E1provider%7E1token/post" rel="nofollow noreferrer">docs</a>)</p>
<p>Now my question is, if I'm right and I need the session <em>somehow</em> how can I have the session identified by django so <code>state = statekit.unstash_state(request, state_id)</code> actually returns the correct state? Or if I'm wrong and there is something else that's wrong, what is that?</p>
<p>Update 1:
I dug deeper and confirmed the problem is about sessions, the request coming from google doesn't have the session information even though when we initiate the provider redirect the session is created. I'm not sure why but it <em>seems</em> like there is no sessionid cookie <em>even though</em> the session is created.</p>
<p>Update 2:</p>
<pre class="lang-py prettyprint-override"><code>def dispatch(self, request, *args, **kwargs):
response = self.handle(request, *args, **kwargs)
breakpoint()
return self.handle(request, *args, **kwargs)
</code></pre>
<p>If I put a breakpoint in <code>allauth.headless.internal.restkit.views.RestView.dispatch</code> like this, the <code>request.session.modified</code> is <code>True</code> and the session data added via <code>statekit.stash_state</code> is there. However when I put a breakpoint to my <em>LAST</em> middleware to inspect the request/response, somehow the session data is not there anymore. With that, it is missing from the latter middlewares, and most importantly it is missing in the session middleware which - since it is missing - doesn't set the sessionid cookie.</p>
<p>I've created a quick django view as a sanity check</p>
<pre class="lang-py prettyprint-override"><code>def hello(request):
request.session["foo"] = "bar"
from django.shortcuts import HttpResponse
return HttpResponse("Hello, World!")
</code></pre>
<p>Which actually successfully adds the sessionid since this modifies the session. So I <em>think</em> it is not some other configuration of mine but solely on allauth's configurations.</p>
<pre class="lang-py prettyprint-override"><code>MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware",
"apps.common.utils.media_whitenoise_middleware.MediaWhiteNoiseMiddleware",
"django_hosts.middleware.HostsRequestMiddleware",
"pghistory.middleware.HistoryMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"corsheaders.middleware.CorsMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"allauth.account.middleware.AccountMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django_hosts.middleware.HostsResponseMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
</code></pre>
<p>Here are my middleware, like I said, the above sanity check function works so I don't think the issue is with any middleware here and like I said, by the time it comes to <code>XFrameOptionsMiddleware.get_response</code> the <code>request.session</code> already somehow loses its data.</p>
<p>Update 3:
Seems like <code>allauth.headless.internal.authkit.authentication_context</code> is to blame, it replaces the session with a new one - I'm not sure why but it seems like by the time finally runs and restores the previous session, if I simply skip this function I see the session_id and rest of the authentication flow somewhat works as expected.</p>
<p>Update 4:
Well, I clearly haven't thought this through as I am just realizing that I have no way of retrieving the session without reading back the cookie in which case I can just use the cookies for the rest of the flow soooooo nevermind this whole quetstion. Leaving this here undeleted since the issue in allauth repo references this.</p>
|
<python><python-3.x><django><django-allauth>
|
2024-12-08 16:16:49
| 0
| 3,042
|
Işık Kaplan
|
79,262,705
| 18,582,529
|
SQLAlchemy Updating ORM model after inserting it
|
<pre class="lang-py prettyprint-override"><code>async def upsert_record(record: Record) -> Record:
async with async_session() as session:
stmt = insert(Record).values(
record.to_dict()
).on_conflict_do_update(
index_elements=['id'],
set_={
# Just updating the values, nothing important
...
}
)
result = await session.execute(stmt)
await session.commit()
record = await get_record(result.inserted_primary_key[0])
return record
</code></pre>
<p>I created the following function to upsert records into the sqlite database. Once the record is upserted and commited I use get_record function which uses selectinload to load the related User object</p>
<pre class="lang-py prettyprint-override"><code>async def get_record(id: int) -> Optional[Record]:
async with async_session() as session:
result = await session.execute(
select(Record)
.where(Record.id == id)
.options(
selectinload(Record.user),
)
)
return result.scalars().one_or_none()
</code></pre>
<p>As you can see I also declared special to_dict function inside the Base class, which converts model into a dictionary, to use inside the values() function. Its worth noting that to_dict function does not include keys which values are None (to not cause any problems with columns that have default value set).</p>
<p>Everything seems to work but I'd like to know is there a better approach to do perform all this actions? Initially I was using session.add, but unfortunately it does not handle conflicts.</p>
<p>Also is it fine to use ORM models outside of the sqlalchemy sessions? As far as I know it is considered a better approach to create special dataclass (using built-in dataclasses or pydantic) and convert them to orm model and vice versa. Is it a necessary step or can it be omitted?</p>
|
<python><python-3.x><database><sqlite><sqlalchemy>
|
2024-12-08 14:59:08
| 0
| 663
|
lisa.smith
|
79,262,610
| 7,456,317
|
Using Pydantic to define AST of boolean expressions
|
<p>I'd like to use Pydantic to define and validate AST of queries that will be applied on a Pandas dataframe. Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Literal, Optional, Union
from pydantic import BaseModel, Field
import pandas as pd
class ColumnCondition(BaseModel):
"""
A class that represents a condition that is applied to a single column.
"""
tag: Literal["ColumnCondition"] = "ColumnCondition"
column: str = Field(..., title="The name of the column to apply the condition to.")
operator: Literal["==", "!=", "<", ">", "<=", ">="] = Field(
..., title="The operator of the condition."
)
value: Optional[str] = Field(None, title="The value to compare the column to.")
class AndCondition(BaseModel):
"""
A class that represents an 'and' condition that is applied to two or more conditions.
"""
tag: Literal["AndCondition"] = "AndCondition"
conditions: List["Condition"]
Condition = Union[ColumnCondition, AndCondition]
class ConditionModel(BaseModel):
condition: Condition = Field(discriminator="tag")
def get_column_metadata(df: pd.DataFrame) -> dict:
return {col: str(dtype) for col, dtype in df.dtypes.items()}
if __name__ == "__main__":
"""
Example
"""
condition_json = {
"tag": "AndCondition",
"conditions": [
{
"tag": "ColumnCondition",
"column": "original_amount.currency",
"operator": ">=",
"value": "100",
},
{
"tag": "ColumnCondition",
"column": "original_amount.currency",
"operator": "<=",
"value": "1000",
},
],
}
cond = ConditionModel.model_validate({"condition": condition_json})
print(cond.model_dump_json(indent=2))
</code></pre>
<p>This works well, but I have a few questions:</p>
<ul>
<li>Is there a way to remove the <code>ConditionModel</code> wrapper class? I couldn't work around it.</li>
<li>What is the best way to handle the types of values? should I have another field in the <code>ColumnCondition</code> class of its type? or maybe hold a list of columns and types?</li>
<li>What is the best way to convert such a condition into a string to be used in the <code>Dataframe.query</code> method? should I implement a <code>__str__</code> in each class? or maybe write a method that traverse the AST and creates the string?</li>
</ul>
|
<python><pandas><abstract-syntax-tree><pydantic>
|
2024-12-08 14:05:45
| 0
| 913
|
Gino
|
79,262,591
| 5,350,089
|
How to check Serial Write is complete in Raspberry Pi Python
|
<p>Hi In my Project my raspberry pi act as a server and whenever the data came from the tcp client it is writing the same data in Serial Uart but when doing write operation in Serial Uart some character is missing If i insert some delay after the serial write it is working fine but i want to be working without giving any delay after the Serial Uart write operation so i need to find whether the write process for Serial Uart is complete or not please guide me to achieve the solution below is my code</p>
<pre><code>import socket
import serial
from threading import Thread
from datetime import datetime
def on_new_client(client_socket, addr):
try:
while True:
client_socket.settimeout(60)
data = client_socket.recv(1)
ser.write(data)
client_socket.settimeout(None)
if not data:
print(f'Client {addr} disconnected')
break
Ctime = datetime.now().strftime("%H:%M:%S")
print(f"Address: {addr},Time:{Ctime}, Data: {data}")
except socket.timeout:
print(f"Connection timed out for {addr}")
except Exception as e:
print(f"Error with client {addr}: {e}")
finally:
client_socket.close()
print(f"Connection closed for {addr}")
def main():
global ser
ser = serial.Serial(
port='/dev/ttyS0',
baudrate=9600,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS
)
host1 = '192.168.176.248'
port1 = 4002
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host1, port1))
s.listen(5)
print(f"Server started on {host1}:{port1}")
while True:
client_socket,addr = s.accept()
print(f"New connection from: {addr}")
thread = Thread(target=on_new_client, args=(client_socket, addr), daemon=True) # Mark thread as daemon
thread.start() # Start the thread
if __name__ == '__main__':
main()
</code></pre>
|
<python><raspberry-pi><serial-port><pyserial><uart>
|
2024-12-08 13:49:02
| 0
| 445
|
Sathish
|
79,262,467
| 19,356,117
|
How to convert negative values to others in polars DataFrame?
|
<p>I want to convert negative float values in polars DataFrame,and I use this code:</p>
<pre><code>x_origin.select(pl.col(pl.Float64), pl.when(pl.col(pl.Float64)<0).then(np.nan).otherwise(pl.col(pl.Float64)))
</code></pre>
<p>But it crashed with this:</p>
<pre><code>Traceback (most recent call last):
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.3/python/helpers-pro/pydevd_asyncio/pydevd_asyncio_utils.py", line 117, in _exec_async_code
result = func()
^^^^^^
File "<input>", line 1, in <module>
File "/home/wangyang1/.conda/envs/torchhydro1/lib/python3.11/site-packages/polars/dataframe/frame.py", line 9113, in select
return self.lazy().select(*exprs, **named_exprs).collect(_eager=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wangyang1/.conda/envs/torchhydro1/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 2029, in collect
return wrap_df(ldf.collect(callback))
^^^^^^^^^^^^^^^^^^^^^
polars.exceptions.DuplicateError: the name 'literal' is duplicate
It's possible that multiple expressions are returning the same default column name. If this is the case, try renaming the columns with `.alias("new_name")` to avoid duplicate column names.
</code></pre>
<p>But I don't need to rename these columns.</p>
<p>So how to solve my problem? I haven't find answer in documents of polars.</p>
|
<python><python-polars>
|
2024-12-08 12:44:29
| 1
| 1,115
|
forestbat
|
79,262,465
| 12,832,931
|
How to invert OHLC data?
|
<p>I have a OHLC (Open, High, Low, Close financial data).</p>
<p>An upward bar (bullish) is when the close price is higher than the open price.</p>
<p>A downward bar (bearish) is when the close price is lower than the open price.</p>
<p>I am trying to find a way to invert the dataset in order to have the following behavior:</p>
<p>Original data:</p>
<p><a href="https://i.sstatic.net/IxtRgobW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxtRgobW.jpg" alt="Original Data" /></a></p>
<p>Inverted data:</p>
<p><a href="https://i.sstatic.net/phTueVfg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phTueVfg.jpg" alt="Inverted Data" /></a></p>
<p>The first steps seams:</p>
<p>• For an upward bar, swap the open and close prices to make it a downward bar.</p>
<p>• For a downward bar, swap the open and close prices to make it an upward bar.</p>
<p>Second step:</p>
<p>Preserve the range of the candlestick - Maintain the difference of High and Low:</p>
<pre><code># Function to invert the OHLC bars
def invert_ohlc(row):
if row['Close'] > row['Open']:
# Bullish bar, invert to bearish
row['Open'], row['Close'] = row['Close'], row['Open']
elif row['Close'] < row['Open']:
# Bearish bar, invert to bullish
row['Open'], row['Close'] = row['Close'], row['Open']
return row
</code></pre>
<p>But I don't know how to continue:</p>
<p>Reproducible dataset (same as image):</p>
<pre><code>import pandas as pd
from io import StringIO
data = """
Date,Time,Open,High,Low,Close
7/16/2024,09:00,1302000000,1303600000,1302000000,1303550000
7/16/2024,10:00,1303550000,1305650000,1301300000,1301800000
7/16/2024,11:00,1301800000,1305650000,1301150000,1302650000
7/16/2024,12:00,1302650000,1303700000,1300550000,1303600000
7/16/2024,13:00,1303600000,1304150000,1298400000,1299550000
7/16/2024,14:00,1299550000,1300900000,1297300000,1300000000
7/16/2024,15:00,1300000000,1302650000,1298700000,1301300000
7/16/2024,16:00,1301300000,1303800000,1299850000,1300500000
7/16/2024,17:00,1300550000,1301950000,1300000000,1301800000
7/16/2024,18:00,1301800000,1302800000,1301400000,1302450000
7/16/2024,19:00,1302500000,1303450000,1302300000,1303350000
7/17/2024,09:00,1299800000,1300500000,1298800000,1299650000
7/17/2024,10:00,1299650000,1301300000,1297900000,1299900000
7/17/2024,11:00,1299900000,1303600000,1296700000,1302050000
7/17/2024,12:00,1302050000,1305250000,1299000000,1303400000
7/17/2024,13:00,1303400000,1305950000,1302400000,1303750000
7/17/2024,14:00,1303800000,1304450000,1301350000,1303950000
7/17/2024,15:00,1304000000,1305800000,1302950000,1303300000
7/17/2024,16:00,1303300000,1305750000,1302950000,1305050000
7/17/2024,17:00,1305050000,1305250000,1303200000,1303350000
7/17/2024,18:00,1303350000,1304800000,1302950000,1304250000
7/17/2024,19:00,1304300000,1304750000,1302650000,1303150000
7/18/2024,09:00,1302250000,1303850000,1302250000,1303650000
7/18/2024,10:00,1303650000,1304650000,1299100000,1299600000
7/18/2024,11:00,1299600000,1301100000,1294850000,1295650000
7/18/2024,12:00,1295650000,1296850000,1291450000,1292500000
7/18/2024,13:00,1292550000,1293100000,1290400000,1291400000
7/18/2024,14:00,1291450000,1292050000,1288650000,1289250000
7/18/2024,15:00,1289250000,1289650000,1287350000,1288300000
7/18/2024,16:00,1288300000,1288300000,1284850000,1286100000
7/18/2024,17:00,1286100000,1286200000,1283800000,1285450000
7/18/2024,18:00,1285400000,1290950000,1284400000,1290400000
7/18/2024,19:00,1290400000,1292500000,1289650000,1292500000
7/19/2024,09:00,1290400000,1292050000,1289750000,1291200000
7/19/2024,10:00,1291250000,1293550000,1285300000,1287250000
7/19/2024,11:00,1287250000,1292800000,1286100000,1289950000
7/19/2024,12:00,1289900000,1292250000,1286250000,1288400000
7/19/2024,13:00,1288400000,1288950000,1284750000,1287350000
7/19/2024,14:00,1287300000,1287800000,1286150000,1287300000
7/19/2024,15:00,1287300000,1288800000,1285750000,1286900000
7/19/2024,16:00,1286950000,1287050000,1282450000,1283350000
7/19/2024,17:00,1283350000,1284950000,1283000000,1284600000
7/19/2024,18:00,1284650000,1284700000,1283050000,1283400000
7/19/2024,19:00,1283350000,1283400000,1279000000,1279000000
"""
# Use StringIO to simulate reading from a file
df = pd.read_csv(StringIO(data), parse_dates=[['Date', 'Time']])
</code></pre>
|
<python><pandas><time-series>
|
2024-12-08 12:42:50
| 1
| 542
|
William
|
79,262,426
| 11,779,147
|
Vertex AI Text Embedding model forbidden
|
<p>I am trying to run the tutorial here about text embedding on GCP : <a href="https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings?hl=fr" rel="nofollow noreferrer">https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings?hl=fr</a></p>
<p>However, the code snippet below does not work (the error message indicates a disabled service while Vertex AI API is actually enabled on my GCP project):</p>
<pre class="lang-py prettyprint-override"><code>from vertexai.language_models import TextEmbeddingModel
import vertexai
vertexai.init(project="MY_PROJET_ID", location="europe-west1")
texts = ["banana muffins? ", "banana bread? banana muffins?"]
dimensionality = 256
task = "RETRIEVAL_DOCUMENT"
model = TextEmbeddingModel.from_pretrained("text-multilingual-embedding-002")
</code></pre>
<p>Version: <code>google-cloud-aiplatform = "1.74.0"</code></p>
<p>on Python 3.11
<a href="https://i.sstatic.net/2Tjgv1M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Tjgv1M6.png" alt="enter image description here" /></a></p>
<p>Error output (=> DISABLED_SERVICE):</p>
<pre><code>PermissionDenied: 403 Your application is authenticating by using local Application Default Credentials. The aiplatform.googleapis.com API requires a quota project, which is not set by default. To learn how to set your quota project, see https://cloud.google.com/docs/authentication/adc-troubleshooting/user-creds . [reason: "SERVICE_DISABLED"
domain: "googleapis.com"
metadata {
key: "service"
value: "aiplatform.googleapis.com"
}
metadata {
key: "consumer"
value: "projects/764086051850"
}
</code></pre>
<p>Still, I can use other VertexAI services such as GenerativeModels.</p>
<p>I suspect that a wrong project <code>764086051850</code> is used instead of mine. I don't see where I can change that. Does anyone kown how to solve this issue ?</p>
|
<python><google-cloud-platform><large-language-model><google-cloud-vertex-ai><rag>
|
2024-12-08 12:12:10
| 1
| 527
|
maggle
|
79,262,287
| 2,069,099
|
Pandas : make future.no_silent_downcasting working
|
<p>Python 3.11, Pandas 2.2.3</p>
<p>I currently track FutureWarnings to sanitize an existing code. To silent and opt-in future <code>Downcasting behavior in 'replace' is deprecated and will be removed in a future version</code>, I use <code>pd.set_option("future.no_silent_downcasting", True)</code>.</p>
<p>Then, I block on the following minimal example:</p>
<pre><code>import pandas as pd
print(f"{pd.__version__=}") # '2.2.3'
# discard "Downcasting behavior in `replace`..."
pd.set_option("future.no_silent_downcasting", True)
# Raise en Exception on any Warning:
from warnings import warn, simplefilter
simplefilter('error') # raise on Warning
df0 = pd.DataFrame({"a": [True, True, False, True, False]})
df1 = pd.DataFrame({"a": [False, True]}, index=[2,4])
# First attempt:
# raise `FutureWarning: Setting an item of incompatible dtype is deprecated and will raise
# in a future error of pandas.
# Value '[True True False True True]' has dtype incompatible with bool, please explicitly cast to a compatible dtype first.`
df0.update(df1) # Raise a FutureWarning
# Second attempt
df0.update(df1.astype(df0.dtypes)) # Also raise the same FutureWarning
</code></pre>
<p>I'm a bit confused... How to solve this Warning?</p>
|
<python><pandas>
|
2024-12-08 10:53:08
| 1
| 3,517
|
Nic
|
79,262,249
| 1,473,517
|
asizeof appears to be inaccurate
|
<p>Take this MWE:</p>
<pre><code>from pympler import asizeof
from random import randint, choice
from string import printable
from heapq import heappush
ascii = printable[:-5]
pq = []
for _ in range(10_000_000):
heappush(pq, (randint(0, 31), randint(0, 31), randint(0, 31), ''.join(choice(ascii) for _ in range(16))))
print(asizeof.asizeof(pq))
</code></pre>
<p>I can see from running 'top' that this takes about 2.7GB of RAM. But asizeof reports 1,449,096,184 bytes which is a long way off.</p>
<p>This is what 'top" shows:</p>
<p><a href="https://i.sstatic.net/2qGJccM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2qGJccM6.png" alt="enter image description here" /></a></p>
<p>/usr/bin/time -v gives:</p>
<pre><code>Maximum resident set size (kbytes): 2858616
</code></pre>
<p>Using another way of measuring RAM:</p>
<pre><code>from resource import getrusage, RUSAGE_SELF
print(getrusage(RUSAGE_SELF).ru_maxrss * 1024)
</code></pre>
<p>This returns</p>
<pre><code>2927054848
</code></pre>
|
<python><memory>
|
2024-12-08 10:27:53
| 2
| 21,513
|
Simd
|
79,262,117
| 8,831,742
|
Graph features with low computational complexity
|
<p>I'm trying to cluster some unlabeleded unweighted undirected graphs. I'd like to calculate some scalar features for each of them to build an embedding vector and then use a clustering algorithm to see if they can be distinguished in an unsupervised way. However, most of the features i was planning to use (closeness/betweenness centrality, clustering coefficient) are quite hard to compute and i don't have access to any significant hardware.</p>
<p>What are some representative features of the graphs that have a lower computational complexity to be extracted? Say, around <code>O(n)</code> or <code>O(m)</code></p>
<p>For this task i'm using the python library <code>networkx</code></p>
|
<python><time-complexity><graph-theory><feature-extraction>
|
2024-12-08 08:52:37
| 1
| 353
|
none none
|
79,261,982
| 6,312,979
|
Using a Polars DataFrame inside Django template html
|
<p>Lets start with their demo.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"foo": [1, 2, 3],
"bar": [6, 7, 8],
}
)
df.write_json()
# '[{"foo":1,"bar":6},{"foo":2,"bar":7},{"foo":3,"bar":8}]'
</code></pre>
<p>I pass in the df in the from the df.write_json()</p>
<pre class="lang-py prettyprint-override"><code>context = {
'df' : df,
}
</code></pre>
<p>But nothing I am trying in Django is working? The json was the only option I see to pass the data.</p>
<p>Techinally this is a list of dicts?</p>
<pre><code><ul>
{% for i in df %} # tried many option here
{{ i.foo }}
{% endfor %}
<ul>
</code></pre>
<p>I have tried all sorts of things.</p>
<p>For Pandas I would just use <code>df.to_records()</code></p>
<p>What is best way to get Polars data into a Django template to use in things like tables?</p>
<p>Loving polars so far but getting confused by a few things. Like this one.</p>
<p>Thanks!</p>
|
<python><html><django><dataframe><python-polars>
|
2024-12-08 06:58:25
| 1
| 2,181
|
diogenes
|
79,261,901
| 8,176,763
|
nested operators airflow, good practice?
|
<p>My goal is to pass a string of a file path from one @task into Emailoperators , so I can apply logic with the dataset that I will read from the file path to build up my operators that will send emails. My code looks like this:</p>
<pre><code>from airflow import DAG
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
from datetime import datetime
import polars as pl
@dag(
dag_id="data_to_taskgroup_dag",
start_date=days_ago(2),
schedule_interval=None,
)
def data_to_taskgroup_dag():
@task
def produce_file_path():
file_path = "path/to/your/dataframe.csv"
return file_path
@task
def process_dataframe(file_path):
df = pl.read_csv(file_path)
for _, row in df.iter_rows():
email_subject = f"Email for {row['column_name']}"
email_body = f"Email body: {row['another_column']}"
EmailOperator(
task_id=f"send_email_{row['column_name']}",
to='recipient@example.com',
subject=email_subject,
html_content=email_body
).execute(context=kwargs)
file_path = produce_file_path()
process_dataframe(file_path)
</code></pre>
<p>the problem of this approach is that I am nesting emailoperators into process_dataframe task, and so I cannot really see the emailops in the UI, the nesting is mainly done to be able to get the xcom from upstream task, if i dont decorate process_dataframe and keep as plain function this line will fail <code>df = pl.read_csv(file_path)</code> because now file_path is not a string but a plainxcomarg. What is the approach/practice for this kind of situation ?</p>
|
<python><airflow>
|
2024-12-08 05:30:57
| 0
| 2,459
|
moth
|
79,261,820
| 7,729,563
|
Inconsistent behavior between Python function and VS Code debugger for scope lookup
|
<p>For the following code (revised):</p>
<pre class="lang-py prettyprint-override"><code># infile.py
FILE1 = 'file1'
FILE2 = 'file2'
def getfile():
# Note: I am doing the equivalent of this with VS Code debugger:
# breakpoint()
file = FILE1 if 'FILE1' in locals() else FILE2
print(f'{file=}')
if __name__ == '__main__':
getfile()
</code></pre>
<p>If I run this on Windows (using Python 3.12.7 or Python 3.13.0), here's what I see:</p>
<pre class="lang-bash prettyprint-override"><code>PS> .\infile.py
file='file2'
</code></pre>
<p>If I run this in my VS Code debugger (v1.95.3, Python v2024.20.0, Pylance v2024.12.1, Python Debugger v2024.12.0), here's what I see:</p>
<pre class="lang-bash prettyprint-override"><code># Uncomment breakpoint at first line in above def getfile function
# Start VS Code in debugging mode (F5)
# VS Code stops at breakpoint
# From VS Code Debug Console, execute following:
file = FILE1 if 'FILE1' in locals() else FILE2
print(f'{file=})
# Output:
file='file1'
</code></pre>
<p>My expectation was the result shown in the VS Code debugger above. It sounds like I'm not quite understanding something with Python scoping. What am I missing?</p>
<p>I'm also curious why I see a difference between what Python does and what the VS Code Python debugger shows. Am I doing something wrong, or is this a bug?</p>
<p>Revised:
If I do this with pdb, I see this:</p>
<pre class="lang-bash prettyprint-override"><code>PS> .\infile
> ...\infile.py(9)getfile()
-> breakpoint()
(Pdb) l
4 FILE1 = 'file1'
5 FILE2 = 'file2'
6
7
8 def getfile():
9 -> breakpoint()
10 file = FILE1 if 'FILE1' in locals() else FILE2
11 # file = FILE1 if 'FILE1' in globals() else FILE2
12 print(f'{file=}')
13
14
(Pdb) 'FILE1' in locals()
False
(Pdb) 'FILE1' in globals()
True
(Pdb)
</code></pre>
<p>So, I think what I really want is to use globals() - is that right?</p>
|
<python><python-3.x><visual-studio-code><vscode-debugger>
|
2024-12-08 03:43:47
| 1
| 529
|
James S.
|
79,261,741
| 1,982,032
|
Text Recognition with pytesseract and cv2 or other libs
|
<p>Please download the png file and save it as 'sample.png'.<br />
<a href="https://i.sstatic.net/M6ywO7up.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6ywO7up.png" alt="enter image description here" /></a><br />
I want to extract english characters in the png file.</p>
<pre><code>import cv2
import pytesseract
img = cv2.imread("sample.png")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thr = cv2.adaptiveThreshold(gry, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY_INV, 23, 100)
bnt = cv2.bitwise_not(thr)
txt = pytesseract.image_to_string(bnt, config="--psm 6")
res = ''.join(i for i in txt if i.isalnum())
print(res)
</code></pre>
<p>The output is</p>
<pre><code>ee
</code></pre>
<p>Another try:</p>
<pre><code>import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'/bin/tesseract'
image = cv2.imread('sample.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
inverted_image = cv2.bitwise_not(gray_image)
binary_image = cv2.adaptiveThreshold(inverted_image, 255,
cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY, 11, 2)
denoised_image = cv2.medianBlur(binary_image, 3)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4, 4))
eroded_image = cv2.erode(denoised_image, kernel)
mask = (denoised_image == 255) & (eroded_image == 0)
denoised_image[mask] = 0
cv2.imwrite('preprocessed_image.png', denoised_image)
text = pytesseract.image_to_string(denoised_image, config='--psm 6')
print("result:", text.strip())
</code></pre>
<p>Get more accurate result than the first:</p>
<pre><code>result:CRSP
</code></pre>
<p>It is <code>5</code> instead of <code>S</code> in the <code>sample.png</code>. How can I improve the code then?</p>
<p><a href="https://i.sstatic.net/53tJITsH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53tJITsH.png" alt="enter image description here" /></a></p>
<p>Where is the number <code>5</code> then?</p>
|
<python><text-recognition>
|
2024-12-08 02:18:08
| 1
| 355
|
showkey
|
79,261,713
| 5,163,984
|
minio.error.S3Error: S3 operation failed; code: AccessDenied, message: Access Denied., resource: /musor-bucket
|
<p>Create the bucket</p>
<pre><code>mc mb local/musor-bucket --debug
</code></pre>
<p>Create the credentials pair</p>
<pre><code>mc admin user add local musor-user musor-top-secret-key --debug
</code></pre>
<p>Create the policy to grant access to the bucket</p>
<pre><code># cat musor-bucket-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowBucketSync",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::musor-bucket",
"arn:aws:s3:::musor-bucket/*"
]
}
]
}
</code></pre>
<p>Add policy to your minio instance</p>
<pre><code>mc admin policy create local musor-bucket-policy ./data/musor-bucket-policy.json --debug
</code></pre>
<p>Associate policy with your user</p>
<pre><code>mc admin policy attach local musor-bucket-policy --user=musor-user --debug
</code></pre>
<p>You can verify that everything is setup as you’d expect by running this</p>
<pre><code># mc admin user info local musor-user
AccessKey: musor-user
Status: enabled
PolicyName: musor-bucket-policy
MemberOf: []
</code></pre>
<p>Try to connect by python</p>
<pre><code>from minio import Minio
from minio.commonconfig import SnowballObject
from urllib3 import PoolManager
access_key="musor-user"
secret_key="musor-top-secret-key"
pool_manager = PoolManager(
ca_certs="minio/tls-ca-bundle.pem",
cert_reqs="CERT_REQUIRED",
)
client = Minio(
"minio.local:443",
access_key=access_key,
secret_key=secret_key,
secure=True,
http_client=pool_manager
)
print(client.bucket_exists("musor-bucket"));
</code></pre>
<p>return error</p>
<pre><code>minio.error.S3Error: S3 operation failed; code: AccessDenied, message: Access Denied., resource: /musor-bucket, request_id: 180F0EC003737710, host_id: 03f6d7ba09b0531a178059659f12e65ab6a75adddf2f548b1f37624d55d95fba, bucket_name: musor-bucket
</code></pre>
<p>How solve this problem?</p>
|
<python><amazon-web-services><amazon-s3><minio><minio-client>
|
2024-12-08 01:24:16
| 1
| 1,703
|
Nikolay Baranenko
|
79,261,638
| 1,827,587
|
Python script can't move file between mounted docker volumes while mv command could
|
<p>I think I did configure docker volumes correctly, but I can't understand why mv command success called from interactive bash shell.</p>
<pre class="lang-none prettyprint-override"><code>FROM ubuntu:rolling
RUN apt update
RUN apt install -y python3 python3-venv
USER ubuntu
</code></pre>
<pre class="lang-none prettyprint-override"><code>$ docker run --rm --interactive --tty --volume .:/home/ubuntu/data --volume $HOME/Downloads:/home/ubuntu/downloads --workdir /home/ubuntu/data vntk2:latest bash
</code></pre>
<pre class="lang-none prettyprint-override"><code>>>> import pathlib
>>> pathlib.Path('/home/ubuntu/downloads/SoftwareEngineer.pdf').exists()
True
>>> pathlib.Path('/home/ubuntu/downloads/SoftwareEngineer.pdf').rename(pathlib.Path('/home/ubuntu/data/'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.12/pathlib.py", line 1365, in rename
os.rename(self, target)
OSError: [Errno 18] Invalid cross-device link: '/home/ubuntu/downloads/SoftwareEngineer.pdf' -> '/home/ubuntu/data'
>>>
</code></pre>
<pre class="lang-none prettyprint-override"><code>ubuntu@d3d1c2e286c2:~/data$ mv /home/ubuntu/downloads/SoftwareEngineer.pdf /home/ubuntu/data/
ubuntu@d3d1c2e286c2:~/data$ ls -lah /home/ubuntu/data/
total 820K
drwxrwxr-x 8 ubuntu ubuntu 4.0K Dec 8 00:03 .
drwxr-x--- 1 ubuntu ubuntu 4.0K Dec 8 00:02 ..
-rw-rw-r-- 1 ubuntu ubuntu 765K Nov 15 12:14 SoftwareEngineer.pdf
ubuntu@d3d1c2e286c2:~/data$
</code></pre>
|
<python><docker><pathlib>
|
2024-12-08 00:06:06
| 2
| 386
|
proofit404
|
79,261,474
| 251,589
|
Python generic type on function getting lost somewhere
|
<p>Getting this typing error:</p>
<pre><code>error: Incompatible types in assignment (expression has type "object", variable has type "A | B") [assignment]
</code></pre>
<p>With this code:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import TypeVar, Mapping, reveal_type
@dataclass
class A:
foo: str = "a"
@dataclass
class B:
bar: str = "b"
lookup_table: Mapping[str, type[A] | type[B]] = {
"a": A,
"b": B
}
reveal_type(lookup_table) # note: Revealed type is "typing.Mapping[builtins.str, Union[type[simple.A], type[simple.B]]]"
T = TypeVar("T")
def load(lookup_table: Mapping[str, type[T]], lookup_key:str) -> T:
con: type[T] = lookup_table[lookup_key]
instance: T = con()
return instance
example_a: A | B = load(lookup_table, "a") # error: Incompatible types in assignment (expression has type "object", variable has type "A | B")
print(example_a)
</code></pre>
<hr />
<p>Edit: Logged a mypy bug here: <a href="https://github.com/python/mypy/issues/18265" rel="nofollow noreferrer">https://github.com/python/mypy/issues/18265</a></p>
|
<python><python-typing><mypy>
|
2024-12-07 21:53:59
| 2
| 27,385
|
sixtyfootersdude
|
79,261,380
| 4,333,809
|
Setting boost compilation options in conanfile.py
|
<p>I've got the following structure of conanfile.py for the purpose of installing boost in my macOS build machine. Apparently some boost components are not supported in this platform and should be omitted according to this <a href="https://github.com/conan-io/conan-center-index/issues/24313" rel="nofollow noreferrer">post</a>. Thus I need to apply the following settings. However, this is how they are configured in conanfile.txt, and I'm already working with conanfile.py, so I need to find the equivalent python code to this settings :</p>
<pre><code>[options]
boost/*:with_stacktrace_backtrace=False
boost/*:without_locale=True
</code></pre>
<p>and here's my attempt to apply this configuration in conanfile.py.</p>
<pre><code>from conan2 import ConanFile
class BoostExampleConan(ConanFile):
name = "BoostExample"
version = "1.0"
settings = "os", "compiler", "build_type", "arch"
requires = "boost/1.83.0"
generators = "CMakeDeps", "CMakeToolchain"
default_options = {
"boost:header_only": False
--> "boost:with_stacktrace_backtrace"=False
--> "boost:without_locale"=True
}
def layout(self):
self.folders.source = "."
self.folders.build = "build"
</code></pre>
<p>However, I still get the following error :</p>
<pre><code>boost/1.83.0: WARN: Boost component 'locale' is missing libraries. Try building boost with '-o boost:without_locale'. (Option is not guaranteed to exist)
boost/1.83.0: WARN: Boost component 'stacktrace_addr2line' is missing libraries. Try building boost with '-o boost:without_stacktrace_addr2line'. (Option is not guaranteed to exist)
boost/1.83.0: WARN: Boost component 'stacktrace_backtrace' is missing libraries. Try building boost with '-o boost:without_stacktrace_backtrace'. (Option is not guaranteed to exist)
ERROR: boost/1.83.0: Error in package_info() method, line 2011
raise ConanException(f"These libraries were expected to be built, but were not built: {non_built}")
ConanException: These libraries were expected to be built, but were not built: {'boost_locale', 'boost_stacktrace_backtrace', 'boost_stacktrace_addr2line'}
</code></pre>
|
<python><boost><conan><conan-2>
|
2024-12-07 20:46:58
| 0
| 5,214
|
Zohar81
|
79,261,313
| 932,593
|
How can I make a density plot with log-scaled axes in matplotlib?
|
<p>I want to plot a scalar density as a function of two variables <code>x</code> and <code>y</code>, which can potentially be scaled logarithmically. I essentially run simulations for each pair of <code>x</code> and <code>y</code> and want to report the data using a nice colormap. However, I run into the problem that I cannot make <code>imshow</code> scale the data correctly. While <code>pcolormesh</code> works reliably, it produces files that are orders of magnitudes larger and often cannot be displayed without artifacts (like thin white lines between data points).</p>
<p>Here's some code to reproduce the problem:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import NonUniformImage
# calculate axis positions
x = np.geomspace(1, 100, 5)
dx = np.sqrt(x[1] / x[0]) # half the difference between points in logspace
y = np.linspace(0, 1, 3)
dy = (y[1] - y[0]) / 2 # half the difference between points in linspace
extent = (x[0] / dx, x[-1] * dx, y[0] - dy, y[-1] + dy)
# get some random image data to plot
z = np.random.uniform(size=(len(x), len(y)))
# create figure axes
fig, ax = plt.subplots(ncols=3, figsize=(12, 3))
# use imshow to plot array
ax[0].imshow(z.T, origin="lower", aspect="auto", extent=extent)
ax[0].set_xscale("log")
ax[0].set_title("imshow")
# use NonUniformImage to plot array
im = NonUniformImage(ax[1], extent=extent)
im.set_data(x, y, z.T)
ax[1].add_image(im)
ax[1].set_xscale("log")
ax[1].set_title("NonUniformImage")
# use pcolormesh to plot array
x2 = np.geomspace(*extent[:2], 6)
y2 = np.linspace(*extent[2:], 4)
ax[2].pcolormesh(x2, y2, z.T)
ax[2].set_title("pcolormesh")
# set axis scales
for i in range(3):
ax[i].set_xlim(*extent[:2])
ax[i].set_ylim(*extent[2:])
ax[i].set_xscale("log")
plt.show()
</code></pre>
<p>Running this example results in the following picture</p>
<p><a href="https://i.sstatic.net/IYoJN2yW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYoJN2yW.png" alt="Result of the script shown above" /></a></p>
<p>Clearly, <code>imshow</code> is distorting the image, presumably because it assumes that the image contains data on a linearly scaled axis.
The second panel shows my attempt at using <code>NonUniformImage</code>, which gets things completely wrong for some reason.
The third panel shows what I want to see, albeit with using <code>pcolormesh</code>, which has the severe drawbacks I mentioned above.</p>
<p>Essentially, I just want to show a "normal" image with rectangular pixels of equal size on a log-scaled axis. I think this should be possible, but I was not able to achieve this. I also vaguely remember that the example shown in the first column used to work a few years back, but apparently this is not the case anymore. Any help with solving this would be much appreciated!</p>
<p>Note that <a href="https://stackoverflow.com/a/8493315/932593">this older answer</a> does not work properly since it simply adds an axes with logarithmic ticks, so the user cannot reliably interact with the result (e.g., to change the ticks afterwards).</p>
|
<python><matplotlib><density-plot>
|
2024-12-07 20:08:34
| 2
| 24,438
|
David Zwicker
|
79,261,312
| 1,684,103
|
matplotlib x axis not as expected
|
<p>I want the x-axis labels to be exactly as in the file, but it's converting them. Also, I don't want the thick black line above the labels. And, I'd like the plot to extend fully to both sides of the area without the empty spaces on the left and right.</p>
<p>plot:
<a href="https://i.sstatic.net/26z7fggM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26z7fggM.png" alt="enter image description here" /></a></p>
<p>python script:</p>
<pre><code>#!/usr/bin/python3
import matplotlib.pyplot as plt
import numpy as np
a,b = np.genfromtxt("test01.txt", usecols=(0,1), unpack=True, delimiter="\t", dtype='str')
y = list(map(float, b))
plt.figure(figsize=(9, 5))
plt.plot(a, y, linewidth=0.7)
xticks = plt.xticks()[0]
xtick_labels = ["" if i % 100 != 0 else x for i, x in enumerate(xticks)]
plt.xticks(xticks, xtick_labels, fontsize=8)
plt.xticks(rotation=90)
plt.yticks(np.arange(100, 185, 5))
plt.ylim(110, 185)
plt.xlabel("Time")
plt.ylabel("Temp in F")
plt.show()
</code></pre>
<p>sample data from the file:</p>
<pre><code>00:00:02 170.9
00:00:03 171.7
00:00:04 171.9
00:00:04 171.8
00:00:05 171.4
00:00:06 170.9
00:00:07 170.1
00:00:08 169.4
00:00:09 168.5
00:00:10 167.6
</code></pre>
|
<python><matplotlib>
|
2024-12-07 20:07:21
| 1
| 812
|
Mike
|
79,261,159
| 2,334,254
|
How do you install python-barcode on Ubuntu 24?
|
<p>Everywhere I see just use pip but it doesn't work and it isn't found with apt. I think I need to add a repo, which is it?</p>
<pre><code>User@System:~/Desktop$ sudo apt install python3-barcode
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python3-barcode
User@System:~/Desktop$ pip install barcode
error:externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
</code></pre>
|
<python><python-3.x><ubuntu><ubuntu-24.04>
|
2024-12-07 18:34:31
| 1
| 11,200
|
kpie
|
79,260,810
| 8,050,183
|
PyMuPDF - Prevent PDF pages from being auto cropped
|
<p>I'm using PyMuPDF to process a PDF and then re-save it, but the resulting file loses the original page orientations and crop boxes. Some pages in the original PDF are larger or differently oriented (e.g., rotated or with custom crop regions), but after calling pdf.save(), all pages become uniformly sized and oriented.</p>
<p><strong>Example:</strong></p>
<pre class="lang-py prettyprint-override"><code>import pymupdf
pdf = pymupdf.open(pdf_path, filetype="pdf")
pdf.save("pymupdf-exported.pdf")
</code></pre>
<p>Original File: <a href="https://static.vitra.com/media/asset/8664580/storage/master/download/Factbook%2520Electrification%25202024-EN.pdf" rel="nofollow noreferrer">https://static.vitra.com/media/asset/8664580/storage/master/download/Factbook%2520Electrification%25202024-EN.pdf</a></p>
<p>Exported PDF: <a href="https://drive.google.com/file/d/1mVzAoS8OWHRyM2X_BDABoCCaxAAnrL1x/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1mVzAoS8OWHRyM2X_BDABoCCaxAAnrL1x/view?usp=sharing</a></p>
<p>How can I preserve the original page orientations and crop boxes when using PyMuPDF, so that the re-saved PDF matches the original layout?</p>
<p>My endgoal:</p>
<pre class="lang-py prettyprint-override"><code>def convert_pdf_to_image_arrays(pdf_path: str, zoom: int, dpi: int) -> list[np.ndarray]:
"""
Convert a PDF to high-resolution image arrays, preserving color fidelity.
:param pdf_path: Path to the PDF file.
:param dpi: DPI (dots per inch) for rendering high-resolution images.
:return: List of NumPy arrays representing images of the PDF pages.
"""
pdf = pymupdf.open(pdf_path, filetype="pdf")
images: list[np.ndarray] = []
for page in pdf:
# Render the page to a pixmap with the desired DPI
pix = page.get_pixmap(dpi=dpi)
# Convert the raw pixel data to a PIL image (preserving color accuracy)
img_pil = Image.frombytes(
mode="RGB" if pix.n == 3 else "RGBA",
size=(pix.width, pix.height),
data=pix.samples,
)
# Convert the PIL image to a NumPy array
img_array = np.array(img_pil)
# Convert RGBA to BGR if the image has an alpha channel
if pix.n == 4:
img_array = cv2.cvtColor(img_array, cv2.COLOR_RGBA2BGR)
else:
img_array = cv2.cvtColor(img_array, cv2.COLOR_RGB2BGR)
images.append(img_array)
pdf.close()
return images
</code></pre>
<p>Edit: I tried printing the cropbox of all pages</p>
<pre class="lang-py prettyprint-override"><code>pdf = pymupdf.open(pdf_path, filetype="pdf")
images: list[np.ndarray] = []
for page in pdf:
print(f"Cropbox {page.number}: {page.cropbox}")
</code></pre>
<p>Output:</p>
<pre class="lang-bash prettyprint-override"><code>Cropbox 0: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 1: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 2: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 3: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 4: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 5: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 6: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 7: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 8: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 9: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 10: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 11: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 12: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 13: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 14: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 15: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 16: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 17: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 18: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 19: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 20: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 21: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 22: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 23: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 24: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 25: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 26: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 27: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 28: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 29: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 30: Rect(0.0, 0.0, 612.0, 792.0)
Cropbox 31: Rect(0.0, 0.0, 612.0, 792.0)
</code></pre>
|
<python><pdf><pymupdf>
|
2024-12-07 14:51:30
| 1
| 1,159
|
axelmukwena
|
79,260,798
| 8,024,622
|
KivyMD Google like suggestions while typing
|
<p>I'm new to kivy(MD).</p>
<p>On one screen I have a TextInput where I can type city names. When I click on the "next" button, the screen is switched and the city name variable is passed to that screen.</p>
<p>What I would like to have is that while typing, there are suggestions showing with city names starting with the typed text. Just like one is googling something.</p>
<p>How can I acchieve this?</p>
<p>Thanks in advance!</p>
|
<python><autocomplete><kivy><kivymd><autosuggest>
|
2024-12-07 14:43:38
| 0
| 624
|
jigga
|
79,260,787
| 11,283,324
|
Pandas dataframe reshape with columns name
|
<p>I have a dataframe like this:</p>
<pre><code>>>> df
TYPE A B C D
0 IN 550 350 600 360
1 OUT 340 270 420 190
</code></pre>
<p>I want reshape it to this shape:</p>
<pre><code> AIN AOUT BIN BOUT CIN COUT DIN DOUT
550 340 350 270 600 420 360 190
</code></pre>
<p>So I use these codes to do it:</p>
<pre><code>ds = df.melt().T.iloc[1:,2:]
ds.columns = ['AIN','AOUT','BIN','BOUT','CIN','COUT','DIN','DOUT']
>>> ds
AIN AOUT BIN BOUT CIN COUT DIN DOUT
value 550 340 350 270 600 420 360 190
</code></pre>
<p>It works, but it seems stupid, the columns name was manual inputed, I wonder if there's a better way more pythonic to do this. Any idea?</p>
<p>P.S. the "value" in output dataframe is insignificant.</p>
|
<python><pandas><dataframe><reshape>
|
2024-12-07 14:36:34
| 3
| 351
|
Sun Jar
|
79,260,781
| 11,748,924
|
What is the Keras 3 version of this numpy function of this IEEE754 to Unsigned Integer 8 normalization?
|
<p>Using the Keras 3 <code>ops</code> api, what is the Keras 3 version of this normalization function?</p>
<pre><code>def ieee754_to_uint8(x, axis=-1):
"""
Normalize a tensor using IEEE 754 logic and map it to uint8 values.
Args:
x (tensor): A Keras tensor of shape (batch_size, seq_len).
axis (int): Axis along which to normalize.
Returns:
A Keras tensor with dtype uint8, returning the same shape as input x.
"""
# Find the maximum absolute value in x
m = np.max(np.abs(x), axis=axis, keepdims=True)
m[m==0] = 1
y = (2**7 - 1 * (x > 0)) * x / m
return (y.astype(int) + 128).astype(np.uint8)
</code></pre>
|
<python><machine-learning><keras><neural-network>
|
2024-12-07 14:32:53
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
79,260,577
| 851,249
|
How can I host a Python Web API built with Flask on IIS?
|
<p>I have developed a Python-based Web API using frameworks such as Flask or FastAPI and need to deploy it on an IIS (Internet Information Services) server. The goal is to make the API accessible within my organization's network.</p>
<p>Here is what I have completed so far:</p>
<ol>
<li>Installed Python and added it to the system's PATH.</li>
<li>Created a virtual environment and installed all necessary dependencies (e.g., Flask).</li>
<li>Tested the API locally using python app.py or a similar command, and it is functioning as expected.</li>
</ol>
<p>What are the steps to host this Python Web API on an IIS server running on Windows OS?</p>
|
<python><iis-7.5><google-api-python-client><python-webbrowser>
|
2024-12-07 12:17:02
| 0
| 1,648
|
Praveen Kumar
|
79,260,553
| 10,451,126
|
SSL error with salesforce API using requests library in python
|
<p>I've recently run into a problem trying to connect to the salesforce API using python simple-salesforce and requests library to download some data for some PowerBI and analytics applications / forecasting / machine learning apps.</p>
<p>I'm on python 3.10.4 on Windows 10 and have the latest updated to latest published version of requests and simple-salesforce.</p>
<p>The error I'm getting is SSL failed self certify. The bit that's throwing me off is I have another pipeline in golang that is working fine on the same machine. I've been able to implement a workaround by setting the <code>verify=False</code> on the session and passing this through to requests. I'm also able to get a response from the server using the lower level urllib3 to make to same request which is working fine.</p>
<pre><code>---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:466, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
465 try:
--> 466 self._validate_conn(conn)
467 except (SocketTimeout, BaseSSLError) as e:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:1095, in HTTPSConnectionPool._validate_conn(self, conn)
1094 if conn.is_closed:
-> 1095 conn.connect()
1097 # TODO revise this, see https://github.com/urllib3/urllib3/issues/2791
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py:730, in HTTPSConnection.connect(self)
728 server_hostname_rm_dot = server_hostname.rstrip(".")
--> 730 sock_and_verified = _ssl_wrap_socket_and_match_hostname(
731 sock=sock,
732 cert_reqs=self.cert_reqs,
733 ssl_version=self.ssl_version,
734 ssl_minimum_version=self.ssl_minimum_version,
735 ssl_maximum_version=self.ssl_maximum_version,
736 ca_certs=self.ca_certs,
737 ca_cert_dir=self.ca_cert_dir,
738 ca_cert_data=self.ca_cert_data,
739 cert_file=self.cert_file,
740 key_file=self.key_file,
741 key_password=self.key_password,
742 server_hostname=server_hostname_rm_dot,
743 ssl_context=self.ssl_context,
744 tls_in_tls=tls_in_tls,
745 assert_hostname=self.assert_hostname,
746 assert_fingerprint=self.assert_fingerprint,
747 )
748 self.sock = sock_and_verified.socket
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py:909, in _ssl_wrap_socket_and_match_hostname(sock, cert_reqs, ssl_version, ssl_minimum_version, ssl_maximum_version, cert_file, key_file, key_password, ca_certs, ca_cert_dir, ca_cert_data, assert_hostname, assert_fingerprint, server_hostname, ssl_context, tls_in_tls)
907 server_hostname = normalized
--> 909 ssl_sock = ssl_wrap_socket(
910 sock=sock,
911 keyfile=key_file,
912 certfile=cert_file,
913 key_password=key_password,
914 ca_certs=ca_certs,
915 ca_cert_dir=ca_cert_dir,
916 ca_cert_data=ca_cert_data,
917 server_hostname=server_hostname,
918 ssl_context=context,
919 tls_in_tls=tls_in_tls,
920 )
922 try:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\ssl_.py:469, in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password, ca_cert_data, tls_in_tls)
467 context.set_alpn_protocols(ALPN_PROTOCOLS)
--> 469 ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
470 return ssl_sock
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\ssl_.py:513, in _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname)
511 return SSLTransport(sock, ssl_context, server_hostname)
--> 513 return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File ~\AppData\Local\Programs\Python\Python310\lib\ssl.py:512, in SSLContext.wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
506 def wrap_socket(self, sock, server_side=False,
507 do_handshake_on_connect=True,
508 suppress_ragged_eofs=True,
509 server_hostname=None, session=None):
510 # SSLSocket class handles server_hostname encoding before it calls
511 # ctx._wrap_socket()
--> 512 return self.sslsocket_class._create(
513 sock=sock,
514 server_side=server_side,
515 do_handshake_on_connect=do_handshake_on_connect,
516 suppress_ragged_eofs=suppress_ragged_eofs,
517 server_hostname=server_hostname,
518 context=self,
519 session=session
520 )
File ~\AppData\Local\Programs\Python\Python310\lib\ssl.py:1070, in SSLSocket._create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
1069 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1070 self.do_handshake()
1071 except (OSError, ValueError):
File ~\AppData\Local\Programs\Python\Python310\lib\ssl.py:1341, in SSLSocket.do_handshake(self, block)
1340 self.settimeout(None)
-> 1341 self._sslobj.do_handshake()
1342 finally:
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)
During handling of the above exception, another exception occurred:
SSLError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:789, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
788 # Make the request on the HTTPConnection object
--> 789 response = self._make_request(
790 conn,
791 method,
792 url,
793 timeout=timeout_obj,
794 body=body,
795 headers=headers,
796 chunked=chunked,
797 retries=retries,
798 response_conn=response_conn,
799 preload_content=preload_content,
800 decode_content=decode_content,
801 **response_kw,
802 )
804 # Everything went great!
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:490, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
489 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
--> 490 raise new_e
492 # conn.request() calls http.client.*.request, not the method in
493 # urllib3.request. It also calls makefile (recv) on the socket.
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)
The above exception was the direct cause of the following exception:
MaxRetryError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py:667, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
666 try:
--> 667 resp = conn.urlopen(
668 method=request.method,
669 url=url,
670 body=request.body,
671 headers=request.headers,
672 redirect=False,
673 assert_same_host=False,
674 preload_content=False,
675 decode_content=False,
676 retries=self.max_retries,
677 timeout=timeout,
678 chunked=chunked,
679 )
681 except (ProtocolError, OSError) as err:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:843, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
841 new_e = ProtocolError("Connection aborted.", new_e)
--> 843 retries = retries.increment(
844 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
845 )
846 retries.sleep()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py:519, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
518 reason = error or ResponseError(cause)
--> 519 raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
521 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='login.salesforce.com', port=443): Max retries exceeded with url: /services/Soap/u/59.0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
SSLError Traceback (most recent call last)
Input In [57], in <cell line: 3>()
1 s.verify = requests.certs.where()
2 s.headers = headers
----> 3 resp = s.post(url)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py:637, in Session.post(self, url, data, json, **kwargs)
626 def post(self, url, data=None, json=None, **kwargs):
627 r"""Sends a POST request. Returns :class:`Response` object.
628
629 :param url: URL for the new :class:`Request` object.
(...)
634 :rtype: requests.Response
635 """
--> 637 return self.request("POST", url, data=data, json=json, **kwargs)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py:589, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
584 send_kwargs = {
585 "timeout": timeout,
586 "allow_redirects": allow_redirects,
587 }
588 send_kwargs.update(settings)
--> 589 resp = self.send(prep, **send_kwargs)
591 return resp
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py:703, in Session.send(self, request, **kwargs)
700 start = preferred_clock()
702 # Send the request
--> 703 r = adapter.send(request, **kwargs)
705 # Total elapsed time of the request (approximately)
706 elapsed = preferred_clock() - start
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py:698, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
694 raise ProxyError(e, request=request)
696 if isinstance(e.reason, _SSLError):
697 # This branch is for urllib3 v1.22 and later.
--> 698 raise SSLError(e, request=request)
700 raise ConnectionError(e, request=request)
702 except ClosedPoolError as e:
SSLError: HTTPSConnectionPool(host='login.salesforce.com', port=443): Max retries exceeded with url: /services/Soap/u/59.0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')))
</code></pre>
<p>I've also SSH'd into a separate machine and have been able to connect successfully. I'm unsure what my next course of action is. While my longer term plan is to migrate most of the extracts to golang as the distribution is simpler, python is a lot easier to work with in the exploratory stage, especially when I'm looking to run things locally rather than on a remote server.</p>
|
<python><ssl><python-requests><simple-salesforce>
|
2024-12-07 12:00:51
| 0
| 1,071
|
el_oso
|
79,260,231
| 13,392,257
|
Selenium BlockingIOError: [Errno 11] Resource temporarily unavailable
|
<p>I am running several browsers in separate threads with help of <code>selenium 4.2.0 (FireFox)</code> and celery in docker container.</p>
<p>Program is working fine, but for significant amount of data I have an error (see trace below)</p>
<p>I read about the error <a href="https://www.pythonanywhere.com/forums/topic/1298/" rel="nofollow noreferrer">https://www.pythonanywhere.com/forums/topic/1298/</a>
Looks like it deals with breach of OS limit.</p>
<p><strong>How to understand reason of the error and ways to fix it?</strong></p>
<pre><code>Traceback (most recent call last):
celery-1 | File "/project/parser_app/celery_tasks.py", line 221, in check_urls
celery-1 | parser = YandexParser()
celery-1 | File "/project/parser_app/parser/common.py", line 195, in __init__
celery-1 | self.driver = get_firefox_driver(proxy_data=proxy_data)
celery-1 | File "/project/parser_app/parser/common.py", line 79, in get_firefox_driver
celery-1 | driver = webdriver.Firefox(options=options, seleniumwire_options=proxy_options, firefox_profile=firefox_profile)
celery-1 | File "/usr/local/lib/python3.9/site-packages/seleniumwire/webdriver.py", line 179, in __init__
celery-1 | super().__init__(*args, **kwargs)
celery-1 | File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/firefox/webdriver.py", line 172, in __init__
celery-1 | self.service.start()
celery-1 | File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 71, in start
celery-1 | self.process = subprocess.Popen(cmd, env=self.env,
celery-1 | File "/usr/local/lib/python3.9/subprocess.py", line 951, in __init__
celery-1 | self._execute_child(args, executable, preexec_fn, close_fds,
celery-1 | File "/usr/local/lib/python3.9/subprocess.py", line 1770, in _execute_child
celery-1 | self.pid = _posixsubprocess.fork_exec(
celery-1 | BlockingIOError: [Errno 11] Resource temporarily unavailable
</code></pre>
<p>I checked number of avaliable threads</p>
<pre><code>cat /proc/sys/kernel/pid_max
4194304
</code></pre>
<p>Also checked number of zombie processes</p>
<pre><code>ps aux | awk '$8 ~ /^[Zz]/' | wc -l
35990
</code></pre>
<p>And disk free space</p>
<pre><code>~/new_app$ df -h --total
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1004K 3.2G 1% /run
/dev/sda2 9.8G 3.3G 6.0G 36% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sdb 100G 5.1G 95G 6% /data
tmpfs 3.2G 0 3.2G 0% /run/user/1001
total 148G 8.4G 139G 6% -
</code></pre>
|
<python><selenium-webdriver>
|
2024-12-07 08:17:35
| 0
| 1,708
|
mascai
|
79,259,984
| 8,384,910
|
PyTorch type for dataset with length
|
<p>I am creating a meta-dataset that combines data from multiple input datasets.</p>
<pre class="lang-py prettyprint-override"><code>from torch.utils.data import Dataset, IterableDataset
class MetaDataset(Dataset):
def __init__(self, regular_dataset: Dataset, iterable_dataset: IterableDataset):
self.regular_dataset = regular_dataset
self.iterable_dataset = iterable_dataset
pass # Do other stuff...
</code></pre>
<p>I got a type warning when I tried to access <code>len(self.regular_dataset)</code> from within <code>MetaDataset</code></p>
<p>It turns out that the PyTorch <a href="https://github.com/pytorch/pytorch/blob/2d9b0810122375fc110f07b9d1780ef7b593a3eb/torch/utils/data/dataset.py#L43-L74" rel="nofollow noreferrer">type definition for <code>Dataset</code></a> <a href="https://github.com/pytorch/pytorch/blob/2d9b0810122375fc110f07b9d1780ef7b593a3eb/torch/utils/data/sampler.py#L85-L110" rel="nofollow noreferrer">intentionally doesn't include <code>__len__</code></a>. Therefore, I have to build my own type:</p>
<pre class="lang-py prettyprint-override"><code>from torch.utils.data import Dataset, IterableDataset
class DatasetWithLength(Dataset):
def __len__(self) -> int:
pass
class MetaDataset(Dataset):
def __init__(self, regular_dataset: DatasetWithLength, iterable_dataset: IterableDataset):
self.regular_dataset = regular_dataset
self.iterable_dataset = iterable_dataset
pass # Do other stuff...
</code></pre>
<p>But now, I get a <code>Expected type 'DatasetWithLength', got 'FirstDataset' instead</code> warning when I try to do this:</p>
<pre class="lang-py prettyprint-override"><code>foo = MetaDataset(
FirstDataset(),
FirstIterableDataset(),
)
</code></pre>
<p>How do I correctly define the type of a PyTorch Dataset that has the length property?</p>
|
<python><pytorch><python-typing>
|
2024-12-07 04:48:38
| 1
| 9,414
|
Richie Bendall
|
79,259,917
| 8,384,910
|
Python type for dict-like object
|
<p>I have some function that accepts a dict-like object.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict
def handle(x: Dict[str, str]):
pass # Do some processing here...
</code></pre>
<p>I still get a type warning if I try passing a <code>Shelf</code> to the function, even though the function supports it. How to I specify the type of a dict-like object?</p>
|
<python><python-typing>
|
2024-12-07 03:43:24
| 2
| 9,414
|
Richie Bendall
|
79,259,732
| 2,334,254
|
How do I install pygame on Ubuntu 24?
|
<p>Pip doesn't work. Python3 base install on Ubuntu wants package installs via <code>apt install python-xyz</code>. And I can't find pygame</p>
<pre><code>_@Sbox:~/Desktop$ python3 -m pip install pygame
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
_@Sbox:...
_@Sbox:~/Desktop$ sudo apt install python-pygame
[sudo] password for kevin:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python-pygame
</code></pre>
<p>I'm thinking I just need to add a repo to apt. Looking for what to use and how todo that for pygame. Thanks.</p>
|
<python><python-3.x><ubuntu><pygame>
|
2024-12-07 00:30:13
| 1
| 11,200
|
kpie
|
79,259,601
| 2,328,154
|
Using cdk Aspects to add tags to all resources in a stack?
|
<p>I am using cdk Aspects and I can see that the statement is being printed out from the visit method. However, when I try to add a Tag to each resource within my stack, nothing happens.</p>
<p>This is the line I have in my app.py.</p>
<pre><code>cdk.Aspects.of(my_stack).add(TagsAddingAspect())
</code></pre>
<p>This is my aspect.py class.</p>
<pre><code>import jsii
from aws_cdk import (
IAspect, Tag, Tags
)
@jsii.implements(IAspect)
class TagsAddingAspect:
def visit(self, node):
print(f'{node.node.path} - {node.__class__.__name__}') # This gets printed so I know am definitely hitting the visit function for each resource.
#Tags.of(node).add('aspect_test_tag', 'tag1')
Tag('aspect_test_tag', 'tag1').visit(node)
</code></pre>
<p>I tried both options above to create a tag, but neither resulted in the tags being created in the stacks resources. The documentation indicates that either should work.</p>
<p><a href="https://docs.aws.amazon.com/cdk/v2/guide/tagging.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/v2/guide/tagging.html</a></p>
<p>There may be a gap in my understanding... If someone could point me in the right direction, I'd appreciate it! Thanks.</p>
|
<python><aws-cdk>
|
2024-12-06 23:01:32
| 0
| 421
|
MountainBiker
|
79,259,596
| 1,445,660
|
Aurora - password authentication failed for user "postgres" - after password rotation
|
<p>I'm using python lambda functions. In the beginning of the file I call this function:</p>
<pre><code>def create_db_engines():
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=os.environ['region']
)
try:
get_secret_value_response = client.get_secret_value(
SecretId=os.environ['rds_credentials']
)
except ClientError as e:
print('ERROR getting secret ' + str(e))
raise e
# Decrypts secret using the associated KMS key.
secret = json.loads(get_secret_value_response['SecretString'])
username = secret['username']
password = secret['password']
print('password:' + password) # added temporarily for debugging
engine = create_engine(
f"postgresql://{username}:{password}@{os.environ['db_endpoint']}/postgres")
engine_async = create_async_engine(
f"postgresql+asyncpg://{username}:{password}@{os.environ['db_endpoint']}/postgres")
return engine, engine_async
</code></pre>
<p>I see in the log that the password is correct and I'm able to connect with it in pgadmin.
But I still get the error "password authentication failed for user "postgres"" and I don't know why.
If I rotate the password in rds it works ok, but it stops working in the next rotation.</p>
|
<python><amazon-web-services><aws-lambda><amazon-rds><amazon-aurora>
|
2024-12-06 22:59:00
| 1
| 1,396
|
Rony Tesler
|
79,259,576
| 4,072,043
|
Using Flask session variable for OAUTH CSRF prevention
|
<p>I am trying to write a simple (hah!) Python Flask app which will use Google's <code>people</code> API to retrieve a user's numeric Google ID (which otherwise seems to be almost impossible to determine). Doing this requires that the user authenticate with Google's OAUTH. I have this working, but Google's <a href="https://developers.google.com/identity/protocols/oauth2/web-server" rel="nofollow noreferrer">Using OAuth 2.0 for Web Server Applications</a> documentation says:</p>
<blockquote>
<p>The OAuth client must prevent CSRF as called out in the OAuth2
Specification . One way to achieve this is by using the state
parameter to maintain state between your authorization request and the
authorization server's response.</p>
</blockquote>
<p>So I'm following <a href="https://docs.replit.com/additional-resources/google-auth-in-flask" rel="nofollow noreferrer">this example</a>, and storing the state in a session variable.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/auth')
def auth():
auth_url, state = get_oauth_flow().authorization_url()
app.logger.debug(f'Setting session state: {state}')
flask.session['state'] = state
return flask.redirect(auth_url)
@app.route('/oauth2callback')
def oauth2callback():
session_state = flask.session.get('state')
request_state = flask.request.args.get('state')
app.logger.debug(f'Got session state: {session_state}')
app.logger.debug(f'Got request state: {request_state}')
if session_state is None or session_state != request_state:
return 'Danger, Will Robinson!', 400
del flask.session['state']
flow = get_oauth_flow()
flow.fetch_token(authorization_response=flask.request.url)
flask.session['oauth_token'] = flow.credentials.token
return flask.redirect('success')
</code></pre>
<p>This is failing whenever I go through the initial login process. (I receive a "Danger, Will Robinson!" message.) My logs show that the session variable has been cleared.</p>
<pre><code>DEBUG in ging: Setting session state: ijV2BAyuZG8uSO4rpN77nczw5UDEJf
DEBUG in ging: Got session state: None
DEBUG in ging: Got request state: ijV2BAyuZG8uSO4rpN77nczw5UDEJf
</code></pre>
<p>I'm not a web developer by trade, so I'm very much at sea here. Any suggestions on what I might be doing wrong and/or other approaches would be appreciated.</p>
|
<python><flask><google-oauth>
|
2024-12-06 22:50:46
| 1
| 1,628
|
Ian Pilcher
|
79,259,509
| 5,270,376
|
ffmpeg piped output producing incorrect metadata frame count
|
<p>The short version: Using piped output from ffmpeg produces a file with incorrect metadata.</p>
<p><code>ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi</code> to make an AVI file using the pipe output.</p>
<p><code>ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi</code></p>
<p>The output will show that the metadata does not match the actual frames contained in the video.</p>
<p>Details below.</p>
<hr />
<p>Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great, however, the video files themselves have incorrect frame counts which can cause issues when I read from those videos in other code.</p>
<p><em><strong>Edit for clarification:</strong> by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count.</em></p>
<p>Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducing example of saving an output from an ffmpeg pipe:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE
video_path = 'test_mp4.mp4'
ffmpeg_pipe = Popen(['ffmpeg',
'-y', # Overwrite files
'-i', f'{video_path}', # Input from file
'-f', 'avi', # Output format
'-c:v', 'libx264', # Codec
'-'], # Output to pipe
stdout=PIPE)
new_path = "piped_video.avi"
vid_file = open(new_path, "wb")
vid_file.write(ffmpeg_pipe.stdout.read())
vid_file.close()
</code></pre>
<p>I've tested several different videos. One small example video that I've tested can be found <a href="https://xorgon.me/media/research/porous_plates_high_angle.mp4" rel="nofollow noreferrer">here</a>.</p>
<p>I've tried a few different codecs with <code>avi</code> format and tried <code>libvpx</code> with <code>webm</code> format. For the <code>avi</code> outputs, the frame count usually reads as <code>1073741824</code> (2^30). Weirdly, for the <code>webm</code> format, the frame count read as <code>-276701161105643264</code>.</p>
<p><em><strong>Edit:</strong></em> This issue can also be reproduced with just ffmpeg in command prompt using the following command:
<code>ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi</code></p>
<p>This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
video_path = 'test_mp4.mp4'
new_path = "piped_video.webm"
cap = cv2.VideoCapture(video_path)
print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()
cap = cv2.VideoCapture(new_path)
print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()
</code></pre>
<p>The error can also be observed using <code>ffprobe</code> with the following command: <code>ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi</code>. Note that the frame rate and number of frames counted by ffprobe do not match with the duration from the metadata.</p>
<p>For completeness, here is the ffmpeg output:</p>
<pre class="lang-none prettyprint-override"><code>ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 13.100 / 58. 13.100
libavcodec 60. 17.100 / 60. 17.100
libavformat 60. 6.100 / 60. 6.100
libavdevice 60. 2.100 / 60. 2.100
libavfilter 9. 8.101 / 9. 8.101
libswscale 7. 3.100 / 7. 3.100
libswresample 4. 11.100 / 4. 11.100
libpostproc 57. 2.100 / 57. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2022-08-10T12:54:09.000000Z
Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s
Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default)
Metadata:
creation_time : 2022-08-10T12:54:09.000000Z
handler_name : Mainconcept MP4 Video Media Handler
vendor_id : [0][0][0][0]
encoder : AVC Coding
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000018c68c8b9c0] using SAR=1/1
[libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit
Output #0, avi, to 'pipe:':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
ISFT : Lavf60.6.100
Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default)
Metadata:
creation_time : 2022-08-10T12:54:09.000000Z
handler_name : Mainconcept MP4 Video Media Handler
vendor_id : [0][0][0][0]
encoder : Lavc60.17.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060%
frame= 200 fps=0.0 q=-1.0 Lsize= 85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x
[libx264 @ 0000018c68c8b9c0] frame I:1 Avg QP:16.12 size: 3659
[libx264 @ 0000018c68c8b9c0] frame P:80 Avg QP:21.31 size: 647
[libx264 @ 0000018c68c8b9c0] frame B:119 Avg QP:26.74 size: 243
[libx264 @ 0000018c68c8b9c0] consecutive B-frames: 3.0% 53.0% 0.0% 44.0%
[libx264 @ 0000018c68c8b9c0] mb I I16..4: 17.6% 70.6% 11.8%
[libx264 @ 0000018c68c8b9c0] mb P I16..4: 0.8% 1.7% 0.6% P16..4: 17.6% 4.6% 3.3% 0.0% 0.0% skip:71.4%
[libx264 @ 0000018c68c8b9c0] mb B I16..4: 0.1% 0.3% 0.2% B16..8: 11.7% 1.4% 0.4% direct: 0.6% skip:85.4% L0:32.0% L1:59.7% BI: 8.3%
[libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4%
[libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0%
[libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17%
[libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30% 3% 3% 4% 4% 4% 5%
[libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16% 6% 8% 8% 8% 5% 6%
[libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100% 0% 0% 0%
[libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000018c68c8b9c0] ref P L0: 76.2% 7.9% 11.2% 4.7%
[libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9% 1.5%
[libx264 @ 0000018c68c8b9c0] ref B L1: 97.7% 2.3%
[libx264 @ 0000018c68c8b9c0] kb/s:101.19
</code></pre>
<p>So the question is: why does this happen, and how can one avoid it?</p>
|
<python><ffmpeg>
|
2024-12-06 22:11:10
| 1
| 520
|
Xorgon
|
79,259,488
| 8,190,068
|
How do I get Accordion-like behavior in a RecycleView?
|
<p>I wanted to use the Kivy Accordion widget in my python app, but I couldn't get it to work right - the accordion items would expand or contract to exactly fill the space in the window. That's when I realized I had a bigger problem: the number of accordion items could increase indefinitely, but my accordion had no scroll bar. So, after some searching, I found the RecycleView stuff in Kivy. After looking at the <a href="https://kivy.org/doc/stable/api-kivy.uix.recycleview.html" rel="nofollow noreferrer">online documentation</a>, as well as the demo code in \Python311\share\kivy-examples\widgets\recycleview, I decided to grab basic_data.py and make some modifications.</p>
<p>Here's my code:</p>
<pre><code>from random import sample, randint
from string import ascii_lowercase
from datetime import date
from kivy.app import App
from kivy.lang import Builder
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
from kivy.uix.recycleview.views import RecycleKVIDsDataViewBehavior
kv = """
<Row>:
heading: ''
BoxLayout:
orientation: 'horizontal'
Button:
background_normal: ''
background_color: 0.3, 0.4, 0.3, 1
text: root.heading
on_press: root.expand()
<Test>:
canvas:
Color:
rgba: 0.3, 0.3, 0.3, 1
Rectangle:
size: self.size
pos: self.pos
rv: rv
orientation: 'vertical'
GridLayout:
cols: 3
rows: 2
size_hint_y: None
height: dp(108)
padding: dp(8)
spacing: dp(16)
Button:
text: 'Populate list'
on_press: root.populate()
Button:
text: 'Sort list'
on_press: root.sort()
Button:
text: 'Clear list'
on_press: root.clear()
BoxLayout:
spacing: dp(8)
Button:
text: 'Insert new item'
on_press: root.insert(new_item_input.text)
TextInput:
id: new_item_input
size_hint_x: 0.6
hint_text: 'heading'
padding: dp(10), dp(10), 0, 0
BoxLayout:
spacing: dp(8)
Button:
text: 'Update first item'
on_press: root.update(update_item_input.text)
TextInput:
id: update_item_input
size_hint_x: 0.6
hint_text: 'new heading'
padding: dp(10), dp(10), 0, 0
Button:
text: 'Remove first item'
on_press: root.remove()
RecycleView:
id: rv
scroll_type: ['bars', 'content']
scroll_wheel_distance: dp(114)
bar_width: dp(10)
viewclass: 'Row'
RecycleBoxLayout:
default_size: None, dp(56)
default_size_hint: 1, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
spacing: dp(2)
"""
Builder.load_string(kv)
class Row(RecycleKVIDsDataViewBehavior, BoxLayout):
is_expanded = ObjectProperty(None)
heading = StringProperty(None)
label = ObjectProperty(None)
orientation = 'vertical'
def expand(self):
print(f"Row with heading '{self.heading}' has been expanded!")
if self.is_expanded:
self.is_expanded = False
self.remove_widget(self.label)
self.height -= self.label.height
else:
self.is_expanded = True
self.label = Label(text='Expanded data ...', markup=True)
self.add_widget(self.label)
self.height += self.label.height
class Test(BoxLayout):
def populate(self):
self.rv.data = [
{'heading': date.today().__str__() + ' ' + str(randint(0, 2000))}
for x in range(50)]
def sort(self):
self.rv.data = sorted(self.rv.data, key=lambda x: x['heading'])
def clear(self):
self.rv.data = []
def insert(self, heading):
self.rv.data.insert(0, {
'name.text': heading or 'default heading', 'heading': 'unknown'})
def update(self, heading):
if self.rv.data:
self.rv.data[0]['name.text'] = heading or 'default new heading'
self.rv.refresh_from_data()
def remove(self):
if self.rv.data:
self.rv.data.pop(0)
class TestApp(App):
def build(self):
return Test()
# def expand(self, row):
# row.expand()
if __name__ == '__main__':
TestApp().run()
</code></pre>
<p>When I ran the code, I was able to populate the RecycleView. I had changed the definition of Row to include a button for expanding and contracting an individual Accordion item. When expanded, a Label widget is added to the Row. When contracted, the Label is removed. So far so good...</p>
<p>But when expanded, the Row item did not increase in height - at least at first. So the Button and Label simply became more compact:
<a href="https://i.sstatic.net/eIN5OHvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eIN5OHvI.png" alt="Expanded but compacted" /></a></p>
<p>I wanted the Row to expand, so the Label would appear below the Button, each at normal size.</p>
<p>As I continued to play with it, I realized that it was actually inconsistent in its behavior:
<a href="https://i.sstatic.net/8MqIvtnT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MqIvtnT.png" alt="Expanded inconsistently" /></a></p>
<p>One time the Label would not be shown at all, but the button would be twice as high. Another time it would appear as I had hoped! I have no idea how to guarantee consistent, hoped-for behavior.</p>
<p>I tried making changes to the code...</p>
<p>I made a small change at the App level to hang on to the main layout, so I could get access to the RecycleView later:</p>
<pre><code>class TestApp(App):
testLayout = ObjectProperty(None)
def build(self):
testLayout = Test()
return testLayout
</code></pre>
<p>Then I changed the code for expanding the Row:</p>
<pre><code>def expand(self):
if self.is_expanded:
self.is_expanded = False
self.remove_widget(self.label)
self.height -= self.label.height
print(f"Row with heading '{self.heading}' has been contracted!")
else:
self.is_expanded = True
self.label = Label(text='Expanded data ...', markup=True, size_hint_y=None, height=dp(50))
self.add_widget(self.label)
self.height += self.label.height
print(f"Row with heading '{self.heading}' has been expanded!")
print(f"Label height '{self.label.height}' Row height '{self.height}' ")
# Use Clock to schedule refresh after the layout update
Clock.schedule_once(self.refresh_view, 0)
def refresh_view(self, *args):
rv = self.get_recycleview_parent()
if rv:
rv.refresh_from_data()
def get_recycleview_parent(self):
parent = self.parent
while parent and not isinstance(parent, RecycleView):
parent = parent.parent
return parent
</code></pre>
<p>I tried adding the label height to the Row height when expanded, but subtracting the label height from the Row height when contracted. Then I got the RecycleView from the main layout and called refresh_from_data() to readjust everything to the new reality.</p>
<p>That didn't work. I couldn't get the RecycleView that way.</p>
<p>That's when I added the call to get_recycleview_parent() to get the RecycleView. Now, clicking on a row causes multiple, random rows to expand, inconsistently, and the program hangs.</p>
<p><strong>Can anyone offer suggestions on how to get the desired behavior (in contrast to the ineffective guesses from Copilot)?</strong></p>
|
<python><accordion><kivy-recycleview>
|
2024-12-06 21:59:58
| 1
| 424
|
Todd Hoatson
|
79,259,453
| 4,382,391
|
Is there a way to relatively import python files within a module so that the execution context may change without needing to update the import path?
|
<p>Take the following python project structure:</p>
<pre><code>project/
├── package/
│ ├── __init__.py
│ ├── module1.py
└── main.py
</code></pre>
<p>my main.py looks like:</p>
<pre class="lang-py prettyprint-override"><code>import package
</code></pre>
<p>my module1.py looks like:</p>
<pre class="lang-py prettyprint-override"><code>def some_fn():
print("hi")
</code></pre>
<p>inside <code>__init__.py</code>, I would like to import <code>module1.py</code>.</p>
<p><strong>init.py option 1</strong>- relative import:</p>
<pre class="lang-py prettyprint-override"><code>from . import module1
module1.some_fn()
</code></pre>
<p><strong>the problem</strong>: This works fine if I execute <code>python main.py</code>, but not if I execute <code>python package/__init__.py</code></p>
<p><strong>Option 2: explicit import</strong></p>
<pre class="lang-py prettyprint-override"><code>import package.module1
module1.some_fn()
</code></pre>
<p><strong>the problem</strong>: The same as previous. This works fine if I execute <code>python main.py</code>, but not if I execute <code>python package/__init__.py</code></p>
<p>Is there a way to do a relative import, so that I use two different execution contexts and still successfully import modules?</p>
|
<python><python-3.x><import>
|
2024-12-06 21:40:41
| 0
| 1,070
|
Null Salad
|
79,259,448
| 5,561,649
|
What should typing be for __add__ in a subclass of list?
|
<p>This is a simplification of my code:</p>
<pre class="lang-py prettyprint-override"><code># my_list.py
from __future__ import annotations
class MyList(list[int]):
def __add__(self, other: list[int]) -> MyList:
return MyList()
</code></pre>
<p>mypy says:</p>
<pre class="lang-none prettyprint-override"><code>> mypy .\my_list.py --strict
my_list.py:5: error: Signature of "__add__" incompatible with supertype "list" [override]
my_list.py:5: note: Superclass:
my_list.py:5: note: @overload
my_list.py:5: note: def __add__(self, list[int], /) -> list[int]
my_list.py:5: note: @overload
my_list.py:5: note: def [_S] __add__(self, list[_S], /) -> list[_S | int]
my_list.py:5: note: Subclass:
my_list.py:5: note: def __add__(self, list[int], /) -> MyList
</code></pre>
<p>If I do this, mypy no longer complains:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload, TypeVar
_T = TypeVar("_T")
class MyList(list[int]):
@overload
def __add__(self, other: list[int]) -> list[int]:
...
@overload
def __add__(self, other: list[_T]) -> list[_T | int]:
...
def __add__(self, other: list[_T] | list[int]) -> list[_T | int]:
return [0]
</code></pre>
<p>... but then it doesn't really make sense for addition of <code>MyList</code>s to return <code>list</code>s...</p>
<p>Also, the problem gets much worse if we try to add a <code>__iadd__</code> method... Then it either says that its signature isn't compatible with that of <code>__add__</code>, or is incompatible with some superclass's signature...</p>
<pre class="lang-none prettyprint-override"><code>Signature of "__iadd__" incompatible with "__add__" of supertype "list" [override]
Signatures of "__iadd__" and "__add__" are incompatible [misc]
</code></pre>
<p>To me things should be really simple here, but they're not. Also I am aware of the Liskov Substitution Principle, and I think I've taken it into account for all of my tests.</p>
|
<python><list><python-typing><mypy>
|
2024-12-06 21:35:12
| 1
| 550
|
LoneCodeRanger
|
79,259,235
| 9,873,381
|
How to extract specific metrics (mAP) from YOLOv7's train function?
|
<p>I am using the train function from the file: <a href="https://github.com/WongKinYiu/yolov7/blob/main/train_aux.py" rel="nofollow noreferrer">https://github.com/WongKinYiu/yolov7/blob/main/train_aux.py</a> to train a <code>YOLOv7</code> model on a custom dataset. I would like to optimize the hyperparameters (using <code>Optuna</code>) to get the optimal performance from the model.</p>
<p>I can generate different hyperparameters values using <code>optuna.trial.Trial</code>, import the correct train function, and call it with these hyperparameters. The point I get stuck at is extracting the <code>mAP@@0.5:0.95</code> from the return value of the train function so I can return it from the objective function.</p>
<p>I tried using <code>subprocess</code> to call the train_aux.py file with different hyperparameters and <code>regex</code> to extract the <code>mAP@@0.5:0.95</code> metric to optimize but it was the wrong way of doing this.</p>
|
<python><pytorch><yolo><hyperparameters><optuna>
|
2024-12-06 19:58:29
| 0
| 672
|
Skywalker
|
79,259,125
| 1,942,868
|
403 error with CSRF when accesing Django login by axios
|
<pre><code> await axios.get("http://localhost:8000/get_csrf_token/").then((res)=>{
console.log(res.data.csrf_token);
axios.defaults.xsrfHeaderName = "X-CSRFTOKEN";
axios.defaults.xsrfCookieName = "csrftoken";
axios.defaults.withCredentials = true;
axios.post('http://localhost:8000/login/',{
username:'admin',
password:'Kv79NExi'
}, {
headers: {
'Content-Type': 'application/json',
'X-CSRFToken': res.data.csrf_token,
}
})
.then((response) => {
log("response login",response.data);
});
});
</code></pre>
<p>I am trying to access django login from command line script(<code>jest</code>) not browser, however it shows the error like this,</p>
<pre><code><p>Reason given for failure:</p>\n' +
' <pre>\n' +
' CSRF cookie not set.\n' +
' </pre>\n' +
</code></pre>
<p>It returns 403 error, it looks like sending <code>csrf_token</code> correctly.</p>
<p>Where am I wrong?</p>
<pre><code>Ran all test suites matching /TopPage/i.
node:internal/process/promises:288
triggerUncaughtException(err, true /* fromPromise */);
^
AxiosError {
message: 'Request failed with status code 403',
name: 'AxiosError',
code: 'ERR_BAD_REQUEST',
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [ 'xhr', 'http', 'fetch' ],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'csrftoken',
xsrfHeaderName: 'X-CSRFTOKEN',
maxContentLength: -1,
maxBodyLength: -1,
env: {
FormData: [Function: FormData] {
LINE_BREAK: '\r\n',
DEFAULT_CONTENT_TYPE: 'application/octet-stream'
},
Blob: [class Blob]
},
validateStatus: [Function: validateStatus],
headers: Object [AxiosHeaders] {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'X-CSRFToken': '9aSiDra8P3LzU0hMlSFZ9yqX5fllDySQljBRyHM6jjSYfuZ1BWrCUF9blkElzl1v',
'User-Agent': 'axios/1.7.9',
'Content-Length': '42',
'Accept-Encoding': 'gzip, compress, deflate, br'
},
withCredentials: true,
method: 'post',
url: 'http://localhost:8000/login/',
data: '{"username":"admin","password":"Kv79NExi"}'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: '42',
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: <ref *2> Socket {
connecting: false,
_hadError: false,
_parent: null,
_host: 'localhost',
_closeAfterHandlingError: false,
_readableState: ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: [],
flowing: true,
ended: false,
endEmitted: false,
reading: true,
constructed: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
errorEmitted: false,
emitClose: false,
autoDestroy: true,
destroyed: false,
errored: null,
closed: false,
closeEmitted: false,
defaultEncoding: 'utf8',
awaitDrainWriters: null,
multiAwaitDrain: false,
readingMore: false,
dataEmitted: true,
decoder: null,
encoding: null,
[Symbol(kPaused)]: false
},
_events: [Object: null prototype] {
end: [Function: onReadableStreamEnd],
free: [Function: onFree],
close: [ [Function: onClose], [Function: socketCloseListener] ],
timeout: [Function: onTimeout],
agentRemove: [Function: onRemove],
error: [Function: socketErrorListener],
finish: [Function: bound onceWrapper] { listener: [Function: destroy] }
},
_eventsCount: 7,
_maxListeners: undefined,
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: true,
needDrain: false,
ending: true,
ended: true,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: false,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
afterWriteTickInfo: null,
buffered: [],
bufferedIndex: 0,
allBuffers: true,
allNoop: true,
pendingcb: 1,
constructed: true,
prefinished: false,
errorEmitted: false,
emitClose: false,
autoDestroy: true,
errored: null,
closed: false,
closeEmitted: false,
[Symbol(kOnFinished)]: []
},
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
parser: null,
_httpMessage: [Circular *1],
[Symbol(async_id_symbol)]: 969,
[Symbol(kHandle)]: TCP {
reading: true,
onconnection: null,
[Symbol(owner_symbol)]: [Circular *2]
},
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: true,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0
},
_header: 'POST /login/ HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'X-CSRFToken: 9aSiDra8P3LzU0hMlSFZ9yqX5fllDySQljBRyHM6jjSYfuZ1BWrCUF9blkElzl1v\r\n' +
'User-Agent: axios/1.7.9\r\n' +
'Content-Length: 42\r\n' +
'Accept-Encoding: gzip, compress, deflate, br\r\n' +
'Host: localhost:8000\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype] {
free: [Function (anonymous)],
newListener: [Function: maybeEnableKeylog]
},
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 80,
protocol: 'http:',
options: [Object: null prototype] { noDelay: true, path: null },
requests: [Object: null prototype] {},
sockets: [Object: null prototype] {
'localhost:8000:': [
<ref *2> Socket {
connecting: false,
_hadError: false,
_parent: null,
_host: 'localhost',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
parser: null,
_httpMessage: [Circular *1],
[Symbol(async_id_symbol)]: 969,
[Symbol(kHandle)]: [TCP],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: true,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0
}
]
},
freeSockets: [Object: null prototype] {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 1,
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
path: '/login/',
_ended: true,
res: IncomingMessage {
_readableState: ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: [],
flowing: true,
ended: true,
endEmitted: true,
reading: false,
constructed: true,
sync: true,
needReadable: false,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
errorEmitted: false,
emitClose: true,
autoDestroy: true,
destroyed: true,
errored: null,
closed: true,
closeEmitted: true,
defaultEncoding: 'utf8',
awaitDrainWriters: null,
multiAwaitDrain: false,
readingMore: true,
dataEmitted: true,
decoder: null,
encoding: null,
[Symbol(kPaused)]: false
},
_events: [Object: null prototype] {
end: [ [Function: responseOnEnd], [Function: handleStreamEnd] ],
error: [Function: handleStreamError],
data: [Function: handleStreamData],
aborted: [Function: handlerStreamAborted]
},
_eventsCount: 4,
_maxListeners: undefined,
socket: <ref *2> Socket {
connecting: false,
_hadError: false,
_parent: null,
_host: 'localhost',
_closeAfterHandlingError: false,
_readableState: ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: [],
flowing: true,
ended: false,
endEmitted: false,
reading: true,
constructed: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
errorEmitted: false,
emitClose: false,
autoDestroy: true,
destroyed: false,
errored: null,
closed: false,
closeEmitted: false,
defaultEncoding: 'utf8',
awaitDrainWriters: null,
multiAwaitDrain: false,
readingMore: false,
dataEmitted: true,
decoder: null,
encoding: null,
[Symbol(kPaused)]: false
},
_events: [Object: null prototype] {
end: [Function: onReadableStreamEnd],
free: [Function: onFree],
close: [ [Function: onClose], [Function: socketCloseListener] ],
timeout: [Function: onTimeout],
agentRemove: [Function: onRemove],
error: [Function: socketErrorListener],
finish: [Function: bound onceWrapper] {
listener: [Function: destroy]
}
},
_eventsCount: 7,
_maxListeners: undefined,
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: true,
needDrain: false,
ending: true,
ended: true,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: false,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
afterWriteTickInfo: null,
buffered: [],
bufferedIndex: 0,
allBuffers: true,
allNoop: true,
pendingcb: 1,
constructed: true,
prefinished: false,
errorEmitted: false,
emitClose: false,
autoDestroy: true,
errored: null,
closed: false,
closeEmitted: false,
[Symbol(kOnFinished)]: []
},
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
parser: null,
_httpMessage: [Circular *1],
[Symbol(async_id_symbol)]: 969,
[Symbol(kHandle)]: TCP {
reading: true,
onconnection: null,
[Symbol(owner_symbol)]: [Circular *2]
},
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: true,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0
},
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [
'date',
'Fri, 06 Dec 2024 19:03:51 GMT',
'server',
'uvicorn',
'content-type',
'text/html; charset=utf-8',
'x-frame-options',
'DENY',
'content-length',
'3092',
'vary',
'origin',
'x-content-type-options',
'nosniff',
'referrer-policy',
'same-origin',
'connection',
'close'
],
rawTrailers: [],
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 403,
statusMessage: 'Forbidden',
client: <ref *2> Socket {
connecting: false,
_hadError: false,
_parent: null,
_host: 'localhost',
_closeAfterHandlingError: false,
_readableState: ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: [],
flowing: true,
ended: false,
endEmitted: false,
reading: true,
constructed: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
errorEmitted: false,
emitClose: false,
autoDestroy: true,
destroyed: false,
errored: null,
closed: false,
closeEmitted: false,
defaultEncoding: 'utf8',
awaitDrainWriters: null,
multiAwaitDrain: false,
readingMore: false,
dataEmitted: true,
decoder: null,
encoding: null,
[Symbol(kPaused)]: false
},
_events: [Object: null prototype] {
end: [Function: onReadableStreamEnd],
free: [Function: onFree],
close: [ [Function: onClose], [Function: socketCloseListener] ],
timeout: [Function: onTimeout],
agentRemove: [Function: onRemove],
error: [Function: socketErrorListener],
finish: [Function: bound onceWrapper] {
listener: [Function: destroy]
}
},
_eventsCount: 7,
_maxListeners: undefined,
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: true,
needDrain: false,
ending: true,
ended: true,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: false,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
afterWriteTickInfo: null,
buffered: [],
bufferedIndex: 0,
allBuffers: true,
allNoop: true,
pendingcb: 1,
constructed: true,
prefinished: false,
errorEmitted: false,
emitClose: false,
autoDestroy: true,
errored: null,
closed: false,
closeEmitted: false,
[Symbol(kOnFinished)]: []
},
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
parser: null,
_httpMessage: [Circular *1],
[Symbol(async_id_symbol)]: 969,
[Symbol(kHandle)]: TCP {
reading: true,
onconnection: null,
[Symbol(owner_symbol)]: [Circular *2]
},
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: true,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0
},
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'http://localhost:8000/login/',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: {
date: 'Fri, 06 Dec 2024 19:03:51 GMT',
server: 'uvicorn',
'content-type': 'text/html; charset=utf-8',
'x-frame-options': 'DENY',
'content-length': '3092',
vary: 'origin',
'x-content-type-options': 'nosniff',
'referrer-policy': 'same-origin',
connection: 'close'
},
[Symbol(kHeadersCount)]: 18,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'localhost',
protocol: 'http:',
_redirectable: Writable {
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: true,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: true,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
afterWriteTickInfo: null,
buffered: [],
bufferedIndex: 0,
allBuffers: true,
allNoop: true,
pendingcb: 0,
constructed: true,
prefinished: false,
errorEmitted: false,
emitClose: true,
autoDestroy: true,
errored: null,
closed: false,
closeEmitted: false,
[Symbol(kOnFinished)]: []
},
_events: [Object: null prototype] {
response: [Function: handleResponse],
error: [Function: handleRequestError],
socket: [Function: handleRequestSocket]
},
_eventsCount: 3,
_maxListeners: undefined,
_options: {
maxRedirects: 21,
maxBodyLength: Infinity,
protocol: 'http:',
path: '/login/',
method: 'POST',
headers: [Object: null prototype] {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'X-CSRFToken': '9aSiDra8P3LzU0hMlSFZ9yqX5fllDySQljBRyHM6jjSYfuZ1BWrCUF9blkElzl1v',
'User-Agent': 'axios/1.7.9',
'Content-Length': '42',
'Accept-Encoding': 'gzip, compress, deflate, br'
},
agents: { http: undefined, https: undefined },
auth: undefined,
family: undefined,
beforeRedirect: [Function: dispatchBeforeRedirect],
beforeRedirects: { proxy: [Function: beforeRedirect] },
hostname: 'localhost',
port: '8000',
agent: undefined,
nativeProtocols: {
'http:': {
_connectionListener: [Function: connectionListener],
METHODS: [Array],
STATUS_CODES: [Object],
Agent: [Function],
ClientRequest: [Function: ClientRequest],
IncomingMessage: [Function: IncomingMessage],
OutgoingMessage: [Function: OutgoingMessage],
Server: [Function: Server],
ServerResponse: [Function: ServerResponse],
createServer: [Function: createServer],
validateHeaderName: [Function: __node_internal_],
validateHeaderValue: [Function: __node_internal_],
get: [Function: get],
request: [Function: request],
setMaxIdleHTTPParsers: [Function: setMaxIdleHTTPParsers],
maxHeaderSize: [Getter],
globalAgent: [Getter/Setter]
},
'https:': {
Agent: [Function: Agent],
globalAgent: [Agent],
Server: [Function: Server],
createServer: [Function: createServer],
get: [Function: get],
request: [Function: request]
}
},
pathname: '/login/'
},
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 42,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'http://localhost:8000/login/',
[Symbol(kCapture)]: false
},
.
.
.
data: '\n' +
'<!DOCTYPE html>\n' +
'<html lang="en">\n' +
'<head>\n' +
' <meta http-equiv="content-type" content="text/html; charset=utf-8">\n' +
' <meta name="robots" content="NONE,NOARCHIVE">\n' +
' <title>403 Forbidden</title>\n' +
' <style type="text/css">\n' +
' html * { padding:0; margin:0; }\n' +
' body * { padding:10px 20px; }\n' +
' body * * { padding:0; }\n' +
' body { font:small sans-serif; background:#eee; color:#000; }\n' +
' body>div { border-bottom:1px solid #ddd; }\n' +
' h1 { font-weight:normal; margin-bottom:.4em; }\n' +
' h1 span { font-size:60%; color:#666; font-weight:normal; }\n' +
' #info { background:#f6f6f6; }\n' +
' #info ul { margin: 0.5em 4em; }\n' +
' #info p, #summary p { padding-top:10px; }\n' +
' #summary { background: #ffc; }\n' +
' #explanation { background:#eee; border-bottom: 0px none; }\n' +
' </style>\n' +
'</head>\n' +
'<body>\n' +
'<div id="summary">\n' +
' <h1>アクセス禁止 <span>(403)</span></h1>\n' +
' <p>CSRF検証に失敗したため、リクエストは中断されました。</p>\n' +
'\n' +
'\n' +
' <p>このメッセージが表示されている理由は、このサイトはフォーム送信時にCSRFクッキーを必須としているためです。このクッキーはセキュリティ上の理由(使用中のブラウザが第三者によってハイジャックされていないことを確認するため)で必要です。</p>\n' +
' <p>もしブラウザのクッキーを無効に設定しているならば、same-originリクエストのために少なくともこのサイトでは再度有効にしてください。</p>\n' +
'\n' +
'</div>\n' +
'\n' +
'<div id="info">\n' +
' <h2>Help</h2>\n' +
' \n' +
' <p>Reason given for failure:</p>\n' +
' <pre>\n' +
' CSRF cookie not set.\n' +
' </pre>\n' +
' \n' +
'\n' +
' <p>In general, this can occur when there is a genuine Cross Site Request Forgery, or when\n' +
' <a\n' +
' href="https://docs.djangoproject.com/en/4.2/ref/csrf/">Django’s\n' +
' CSRF mechanism</a> has not been used correctly. For POST forms, you need to\n' +
' ensure:</p>\n' +
'\n' +
' <ul>\n' +
' <li>Your browser is accepting cookies.</li>\n' +
'\n' +
' <li>The view function passes a <code>request</code> to the template’s <a\n' +
' href="https://docs.djangoproject.com/en/dev/topics/templates/#django.template.backends.base.Template.render"><code>render</code></a>\n' +
' method.</li>\n' +
'\n' +
' <li>In the template, there is a <code>{% csrf_token\n' +
' %}</code> template tag inside each POST form that\n' +
' targets an internal URL.</li>\n' +
'\n' +
' <li>If you are not using <code>CsrfViewMiddleware</code>, then you must use\n' +
' <code>csrf_protect</code> on any views that use the <code>csrf_token</code>\n' +
' template tag, as well as those that accept the POST data.</li>\n' +
'\n' +
' <li>The form has a valid CSRF token. After logging in in another browser\n' +
' tab or hitting the back button after a login, you may need to reload the\n' +
' page with the form, because the token is rotated after a login.</li>\n' +
' </ul>\n' +
'\n' +
' <p>You’re seeing the help section of this page because you have <code>DEBUG =\n' +
' True</code> in your Django settings file. Change that to <code>False</code>,\n' +
' and only the initial error message will be displayed. </p>\n' +
'\n' +
' <p>You can customize this page using the CSRF_FAILURE_VIEW setting.</p>\n' +
'</div>\n' +
'\n' +
'</body>\n' +
'</html>\n'
},
status: 403
}
</code></pre>
|
<javascript><python><reactjs><django>
|
2024-12-06 19:08:59
| 0
| 12,599
|
whitebear
|
79,259,121
| 1,717,414
|
Why aren't sub-union types handled properly by type guards?
|
<p>In the following code, the <code>valid</code> function and the <code>invalid</code> function do exactly the same thing. Why is MyPy happy with <code>valid</code>, but throws an error on <code>invalid</code>?</p>
<p>Isn't the <code>TypeGuard</code> suppose to handle that?</p>
<p>If I add a function to <code>B</code> and <code>C</code> only, and call that function from within a block guarded by <code>ifBorC</code>, that works fine.</p>
<p>Does MyPy not look at type guards when dealing with completeness of union types?</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeGuard
class A: ...
class B: ...
class C: ...
ABC = A | B | C
BorC = B | C
def isBorC (x: ABC) -> TypeGuard[BorC]:
return isinstance(x, B) or isinstance(x, C)
def valid (x: ABC) -> str:
if isinstance(x, A):
return 'a'
if isinstance(x, B) or isinstance(x, C):
return 'b or c'
def invalid (x: ABC) -> str:
if isinstance(x, A):
return 'a'
if isBorC(x):
return 'b or c'
# Yields error: Missing return statement [return]
</code></pre>
|
<python><python-typing><mypy>
|
2024-12-06 19:07:54
| 1
| 533
|
Nathan Kronenfeld
|
79,259,088
| 1,357,613
|
Python Paramiko method paramiko.client.SSHClient.connect only receives files via key_filename, is there a way to pass a string variable?
|
<p>Python's package Paramiko has the moehtod paramiko.client.SSHClient.connect to connect. It receives the private key as a file with the parameter key_filename.</p>
<p><code>ssh.connect(key_filename=key_filename)</code></p>
<p>For security purpouses I'm not having a file with my private key in my repository, then I'm using it as a secret variable:</p>
<p><code>key_filename = os.getenv("go")</code></p>
<p>but when I retreive the secret it returns as string, is there a was to pass the pk as a string variable and not a path to paramiko?</p>
|
<python><paramiko>
|
2024-12-06 18:59:25
| 0
| 346
|
Moisés Briseño Estrello
|
79,259,078
| 9,371,999
|
Ruff ignore F401
|
<p>I am working with VScode and python. I installed ruff because I really like it, but thee one thing I don't like is the F401 error, which basically automatically removes the unused libraries when I try to include them in my main files.
For instance, if I try to go from this:</p>
<pre><code>from fastapi import FastAPI, Response, HTTPException
from pydantic import BaseModel
</code></pre>
<p>to this,</p>
<pre><code>from fastapi import FastAPI, Response, HTTPException, status
from pydantic import BaseModel
</code></pre>
<p>the status package gets automatically removed before I even scroll down to start using it on the code. This is counterintuitive and super cumbersome. The error I get is this:</p>
<pre><code>"status" is not accessedPylance
`fastapi.status` imported but unusedRuff (F401)
</code></pre>
<p>I included this line on my pyproject.toml file:</p>
<pre><code>[tool.ruff.lint]
ignore = ["E501", "F401"]
</code></pre>
<p>but it keeps happening all the same. It does not work.</p>
<p>Can somebody give me any clues on how to get rid of this functionality? I would appreciate it a lot!
Thanks in advance!</p>
<p>extent:</p>
<p>My folder structure is like this:</p>
<pre><code>MY-PROJ-FOLDER
│
├── .git
├── .github
├── .pytest_cache
├── .venv
├── .vscode
├── dist
├── garbish_code
├── src
├── tests
│ └── tests_main.py
├── .dockerignore
├── .gitignore
├── .python-version
├── compose.yml
├── Dockerfile
├── multistage.Dockerfile
├── pyproject.toml
├── README.md
├── ruff.toml
├── run_compose.sh
└── run.sh
</code></pre>
<p>and my settings are:</p>
<pre><code>{
// Python settings
"python.analysis.autoSearchPaths": true,
"python.analysis.diagnosticSeverityOverrides": {
"reportMissingImports": "none"
},
"python.analysis.extraPaths": [
"${workspaceFolder}/src"
],
"python.envFile": "${workspaceFolder}/.env",
"python.terminal.activateEnvironment": true,
"python.defaultInterpreterPath": "${workspaceFolder}/.venv/Scripts/python",
// Test settings
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,
"python.testing.cwd": "${workspaceFolder}/tests",
"python.testing.pytestPath": "${workspaceFolder}/.venv/Scripts/pytest",
"python.testing.autoTestDiscoverOnSaveEnabled": true,
}
</code></pre>
|
<python><visual-studio-code><pyproject.toml><ruff>
|
2024-12-06 18:55:42
| 0
| 529
|
GEBRU
|
79,258,912
| 10,452,700
|
Visualization of ranking models based on time executions
|
<p>I'm working on reproducing the following plot for my own case:
<a href="https://i.sstatic.net/6vK55kBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6vK55kBM.png" alt="img" /></a></p>
<p>so, let's say I already have my results ready and in the form of data frames as below:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Data
data1 = {
'Models': ['LinearRegression', 'Random Forest', 'XGBoost', 'MLPRegressor', 'SVR', 'CatBoostRegressor', 'LGBMRegressor'],
'MAE': [4.906, 2.739, 2.826, 5.234, 5.061, 2.454, 2.76]
'Runtime [ms]': [0:00:43.387145, 0:28:11.761681, 0:03:58.883474, 0:01:44.252276 , 0:04:52.754769, 0:19:36.925169, 0:04:51.223103]
}
data2 = {
'Models': ['LinearRegression', 'Random Forest', 'XGBoost', 'MLPRegressor', 'SVR', 'CatBoostRegressor', 'LGBMRegressor'],
'MAE': [4.575, 2.345, 2.129, 4.414, 4.353, 2.281, 2.511]
'Runtime [ms]': [0:00:45.055854, 0:10:55.468473, 0:01:01.575033 , 0:00:31.231719 , 0:02:12.258870, 0:08:16.526615 , 0:15:25.084937]
}
data3 = {
'Models': ['LinearRegression', 'Random Forest', 'XGBoost', 'MLPRegressor', 'SVR', 'CatBoostRegressor', 'LGBMRegressor'],
'MAE': [4.575, 2.345, 2.129, 4.414, 4.353, 2.281, 2.511]
'Runtime [ms]': [0:00:40.055854, 0:11:55.468473, 0:01:03.575033 , 0:00:29.231719 , 0:02:02.258870, 0:07:16.526615 , 0:13:25.084937]
}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df3 = pd.DataFrame(data3)
</code></pre>
<p>So now I can calculate the ranking by computing <a href="https://datagy.io/python-pandas-frequencies/" rel="nofollow noreferrer">absolute\relative frequencies</a> in the final ranked table in the expected output.</p>
<p>and get something like <a href="https://stackoverflow.com/q/78788751/10452700">this</a> to rank top models, but I will lose <code>Runtime [ms]</code>:</p>
<pre class="lang-py prettyprint-override"><code>Top models(sorted) count freq(%)
LinearRegression,MLPRegressor,SVR,LGBMRegresso... 2 66.666667
MLPRegressor,SVR,LinearRegression,XGBoost,LGBM... 1 33.333333
</code></pre>
<p>How can I get close to this plot?</p>
|
<python><pandas><matplotlib><seaborn><ranking>
|
2024-12-06 17:51:32
| 1
| 2,056
|
Mario
|
79,258,896
| 1,355,634
|
How to do an advanced grouping in pandas?
|
<p>The easiest way is to demonstrate my question with an example. Suppose I have the following long format data frame</p>
<pre><code>In [284]: import pandas as pd
In [285]: data = pd.DataFrame({"day": [0,0,0,0,0,0,1,1,1,1,1,1], "cat1": ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B", "B", "B"], "cat2":["1", "1", "2", "1", "2", "2", "1", "2", "1", "1", "2", "2"], "value": [10, 230, 32,12, 12, 65, 12, 34, 97, 0, 12,1]})
In [286]: data
Out[286]:
day cat1 cat2 value
0 0 A 1 10
1 0 A 1 230
2 0 A 2 32
3 0 B 1 12
4 0 B 2 12
5 0 B 2 65
6 1 A 1 12
7 1 A 2 34
8 1 B 1 97
9 1 B 1 0
10 1 B 2 12
11 1 B 2 1
</code></pre>
<p>Per day I have two categories. My goal is to aggregate the <code>cat2</code> category in a specific way. For each tuple (<code>date</code>, <code>cat1</code>, <code>cat2</code>) I would like to perform the following:</p>
<pre><code>In [287]: data_day = data[data["day"]==0]
In [288]: data_day_cat1 = data_day[data_day["cat1"]=="A"]
In [289]: data_day_cat1_cat2 = data_day_cat1[data_day_cat1["cat2"]=="1"]
In [290]: data_day_cat1_cat2["value"].pow(2).mean()
Out[290]: np.float64(26500.0)
In [291]: data_day_cat1_cat2 = data_day_cat1[data_day_cat1["cat2"]=="2"]
In [292]: data_day_cat1_cat2["value"].pow(2).mean()
Out[292]: np.float64(1024.0)
</code></pre>
<p>That is on the first day, for <code>cat1</code> being <code>A</code>, I want a single line for all occurrence of <code>cat2</code>, where the latter is like a "root mean square error". Currently I'm looping over all combination, but I was playing around with using <code>groupby</code>. However, something like:</p>
<pre><code>data.groupby(["day", "cat1", "cat2"])["value"].apply(lambda x: x**2).mean()
</code></pre>
<p>Does work. What I would like to get is a DataFrame like this:</p>
<pre><code> day cat1 cat2 value
0 0 A 1 26500
1 0 A 2 1024
</code></pre>
<p><strong>EDIT: Note, I want the complete DataFrame, was just too lazy to write down the whole data frame.</strong></p>
<p>Is this possible without looping over all <code>day</code>, <code>cat1</code> and <code>cat2</code>? Could groupby be used?</p>
|
<python><pandas>
|
2024-12-06 17:47:08
| 2
| 2,042
|
math
|
79,258,868
| 5,550,833
|
Tkinter Python - Update a label from an instance method goes to segmentation fault
|
<p>I'm trying to update a Label with a method that is called via a scheduler process, but when i try to configure the label, the app crashes with a segmentation fault</p>
<p>This is my script</p>
<pre><code>class Gui():
def __init__(self):
app = Tk()
self.initialize_user_interface()
def initialize_user_interface(self):
self.title("Title")
self.geometry(f"{1100}x{700}")
self.sidebar_frame = Frame(self)
self.sidebar_frame.grid()
self.my_label = Label(self.sidebar_frame)
self.my_label.grid()
thread = threading.Thread(target=self.start_schedule, daemon=True)
thread.start()
def start_schedule(self):
schedule.every(30).seconds.do(lambda: self.update_label())
def update_label(self):
self.my_label=configure(text="Custom Text")
if __name__ == "__main__":
app = Gui()
app.mainloop()
</code></pre>
<p>I tried to call the method inside the class, using the self but i keep getting the error of segmentation fault</p>
|
<python><tkinter>
|
2024-12-06 17:33:05
| 1
| 3,258
|
MarioC
|
79,258,814
| 11,748,924
|
Numpythonic way of float to signed integer normalization
|
<p>What is the faster numpythonic way of this normalization:</p>
<pre><code>def normalize_vector(x, b, axis):
"""
Normalize real vector x and outputs an integer vector y.
Parameters:
x (numpy.ndarray): Input real vector. (batch_size, seq_len)
b (int): Unsigned integer defining the scaling factor.
axis (int/None): if None, perform flatenned version, if axis=-1, perform relative normalization across batch.
Returns:
numpy.ndarray: Integer vector y.
"""
# Find the maximum absolute value in x
m = np.max(np.abs(x))
# Process each element in x
y = []
for xi in x:
if xi > 0:
y.append(int((2**b - 1) * xi / m))
elif xi < 0:
y.append(int(2**b * xi / m))
else:
y.append(0)
return np.array(y)
</code></pre>
<p>Can <code>np.digitize</code> make it faster?</p>
<p>I have similar <a href="https://stackoverflow.com/questions/37580501/normalized-integer-to-from-float-conversion">question</a>, but it's not about NumPy.</p>
<p>I'm also expecting it supports <code>axis</code> parameter for batch vector.</p>
|
<python><numpy>
|
2024-12-06 17:13:26
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
79,258,525
| 1,085,805
|
Plotting quiver plots in matplotlib
|
<p>I want to plot the slope field for: <code>0.5*sin(0.5*pi*x)*sqrt(y+7)</code></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Specify the grid of dots
x = np.arange(-3,3,0.3)
y = np.arange(-2,4,0.3)
X, Y = np.meshgrid(x,y)
# Create unit vectors at each dot with correct slope
dy = 0.5*(np.sin(x*np.pi*0.5))*np.sqrt(y+7)
dx = np.ones(dy.shape)
norm = np.sqrt(X**2 + Y**2)
dyu = dy/norm
dxu = dx/norm
# Plot everything
plt.quiver(X,Y,dxu,dyu,)
plt.show()
</code></pre>
<p>and I got the second image below. I am trying to replicate the first image. How can I make uniform slope lines like that? Why am I getting variable-length lines anyway?</p>
<p><a href="https://i.sstatic.net/MBKAlugp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBKAlugp.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/EK1Gw9ZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EK1Gw9ZP.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot>
|
2024-12-06 15:23:27
| 1
| 903
|
blackened
|
79,258,503
| 4,706,711
|
What is the correct AWS policy in order for my app allow users to login at AWS COGNITO?
|
<p>I am using Django Warrant for IAM:</p>
<pre><code>from rest_framework.response import Response
from rest_framework.exceptions import AuthenticationFailed
from rest_framework.decorators import api_view
from django.apps import apps
@api_view(['POST'])
def login(request):
username = request.data.get('email')
password = request.data.get('password')
if not username or not password:
raise AuthenticationFailed('Username and password are required.', code=401)
pool_id = getattr(settings, 'COGNITO_USER_POOL_ID', None)
app_id = getattr(settings, 'COGNITO_APP_ID', None)
region = getattr(settings, 'COGNITO_REGION', None)
key = getattr(settings,'COGNITO_AWS_KEY',None)
secret = getattr(settings,'COGNITO_AWS_SECRET',None)
cognito = Cognito(
user_pool_id=pool_id,
client_id=app_id,
user_pool_region=region,
access_key=key,
secret_key=secret,
username=username
)
try:
cognito.authenticate(password=password)
except Exception as e:
print(e)
raise AuthenticationFailed(str(e), code=401)
# Return tokens
return Response({
"access_token": cognito.access_token,
"refresh_token": cognito.refresh_token,
}, status=201)
</code></pre>
<p>But despite my authentication being sucessfull:</p>
<pre><code>aws cognito-idp admin-initiate-auth --user-pool-id eu-west-1_XXXXXX --client-id XXXXXXXX --auth-flow ADMIN_NO_SRP_AUTH --auth-parameters USERNAME=YTYY,PASSWORD=YYYY
</code></pre>
<p>Return The nessesary credentials:</p>
<pre><code>{
"ChallengeParameters": {},
"AuthenticationResult": {
"AccessToken": "XXXXXXXXXX...XXXXXX",
"ExpiresIn": 3600,
"TokenType": "Bearer",
"RefreshToken": "XXXDDDXXXX",
"IdToken": "XXXSSSXXX"
}
}
</code></pre>
<p>But the view above return:</p>
<pre><code>{
"detail": "An error occurred (NotAuthorizedException) when calling the InitiateAuth operation: Password attempts exceeded"
}
</code></pre>
<p>Responding with a 401, My IAM permissions are:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAdminUserPoolActions",
"Effect": "Allow",
"Action": [
"cognito-idp:AdminInitiateAuth",
"cognito-idp:AdminRespondToAuthChallenge",
"cognito-idp:AdminGetUser",
"cognito-idp:AdminSetUserPassword",
"cognito-idp:ListUserPools",
"cognito-idp:GetUser",
"cognito-idp:ListGroups",
"cognito-idp:AdminAddUserToGroup"
],
"Resource": "arn:aws:cognito-idp:eu-west-1:962331388720:userpool/eu-west-1_XXXXXX"
}
]
}
</code></pre>
<p>What I am missing I suspect my policy is wrong. But which one is the correct one? I define my credentials into <code>settings.py</code> as:</p>
<pre><code>COGNITO_USER_POOL_ID = os.getenv("COGNITO_USER_POOL_ID")
COGNITO_APP_ID = os.getenv("COGNITO_APP_ID")
COGNITO_REGION = AWS_REGION
COGNITO_AWS_KEY = os.getenv("COGNITO_AWS_KEY",None)
COGNITO_AWS_SECRET = os.getenv("COGNITO_AWS_SECRET",None)
if COGNITO_AWS_KEY is None:
raise RuntimeError("AWS Cognito Key is not Defined")
if COGNITO_AWS_SECRET is None:
raise RuntimeError("AWS Cognito Secret is not Defined")
</code></pre>
<p>How can I fix this?</p>
|
<python><django><amazon-web-services><amazon-cognito>
|
2024-12-06 15:15:37
| 0
| 10,444
|
Dimitrios Desyllas
|
79,258,483
| 1,987,477
|
How to efficiently look for a substring in a pandas df column
|
<p>I have a large pandas df.</p>
<p>I'd need to loop through a specific column and look for a substring in each row.</p>
<p>For example, I'd have to find the row with <code>source_id</code> containing the substring <code>'136704'</code>.
This is my code</p>
<pre><code>for row_num, row_data in my_df.iterrows():
source_id = str(row_data['source_id'])
if (source_id.contains('136704')):
print(row_num, source_id)
</code></pre>
<p>where <code>row_data['source_id']</code> are <code>int</code> numbers, for example</p>
<pre><code>row_data['source_id'] = [5254261818006768512, 4688972310002481024, 5864639514713019392, 5853820767014992128, 5423800601092727168].
</code></pre>
<p>I get the following error</p>
<pre><code>AttributeError: 'str' object has no attribute 'contains'
</code></pre>
<p>Thank you for any help you could provide</p>
|
<python><pandas>
|
2024-12-06 15:10:41
| 0
| 1,325
|
user123892
|
79,258,445
| 5,344,240
|
Get the list of actually persisted DataFrames in Spark
|
<p>I am using Databricks notebooks. I want to programmatically find the list of DataFrames that have actually been cached by an action call, that is where <code>.cache()</code> was followed by an action, like <code>.show()</code>.</p>
<p><a href="https://stackoverflow.com/questions/75168521/find-the-list-of-all-persisted-dataframes-in-spark">This</a> post gives a solution for the simpler case: finding the objects where either <code>.cache()</code> or <code>.persist()</code> have been called and hence the <code>is_cached</code> attribute is set to <code>True</code>, but there is no guarantee that caching was triggered.</p>
<p>I tried <code>sc._jsc.getPersistentRDDs()</code> but it also returns java objects that I don't want to see; they have nothing to do with my actual code, and it is hard to figure out which item (if any) is my cached dataframe, see toy example below.</p>
<p>How could I proceed?</p>
<p><a href="https://i.sstatic.net/oTZgVyBA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTZgVyBA.png" alt="databricks notebook" /></a></p>
|
<python><apache-spark><pyspark><azure-databricks>
|
2024-12-06 15:00:26
| 1
| 455
|
Andras Vanyolos
|
79,258,400
| 13,491,504
|
Use DPI_AWARENESS_CONTEXT with tkinter
|
<p>I am Using tkinter for a small application. Using</p>
<pre><code>from ctypes import windll
windll.shcore.SetProcessDpiAwareness(2)
</code></pre>
<p>I wanted to adjust the application on my dual monitor setup (with two different kind of montiors (Laptop and Display)). But it doesn't work 100%. The application opens on my Laptop screen very sharp and just the size I want it to be. Moving it to the other monitor it becomes very large (probably trying to keep the same scale on the large display) and looks unnatural.</p>
<p>Then I stumbeld over this webiste <a href="https://learn.microsoft.com/en-us/windows/win32/hidpi/dpi-awareness-context" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/windows/win32/hidpi/dpi-awareness-context</a></p>
<p>It describes a newer Verison of SetProcessDpiAwareness, <code>SetProcessDpiAwarenessContext</code>, which has a V2 of the Monitor resacling.</p>
<p>But it doesnt seem to work in tkinter, even though I have Windows 11 pro.</p>
|
<python><tkinter><dpi>
|
2024-12-06 14:42:52
| 0
| 637
|
Mo711
|
79,258,379
| 1,898,534
|
How to run an init script from correct volume when using databricks asset bundles
|
<p>I have the following resource definition. What i am trying to do is attaching an aws profile to a cluster so it can pull python packages from private codeArtifact via pip. However, in order to allow then pip to pull, I need a init script. But its impossible to figure out in the docs what the config for this is.</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
jobs:
staging-test:
name: staging-test
email_notifications:
on_failure:
- ${var.mail}
description: Staging job for ${var.package}
tags:
stage: "staging"
version: ${var.package}
tasks:
- task_key: test-from-wheel
python_wheel_task:
entry_point: run # Entry point of the wheel
package_name: # Same as in setup py of wheel
parameters: ["-a", "test_run", "-m", "model"]
depends_on:
- task_key: test-from-wheel
new_cluster:
init_scripts:
- volumes:
destination: dbfs://Workspace/Users/Myself/init.sh
instance_pool_id: ${var.instance-pool-id}
spark_version: 15.4.x-scala2.12
autoscale:
min_workers: 1
max_workers: 4
aws_attributes:
instance_profile_arn: ${var.aws_instance-profile-codeartifact-access}
libraries:
# Using the wheel file from code artifact
- pypi:
package: ${var.package}
repo: ${var.codeartifact-repo}
</code></pre>
<p>I already searched the internet on how to properly attach init script to clusters in asset bundles</p>
|
<python><aws-codeartifact><databricks-asset-bundle>
|
2024-12-06 14:38:18
| 1
| 6,499
|
PlagTag
|
79,258,364
| 3,919,804
|
dnspython does not resolve on osx inside corporate network
|
<p>I am trying to use dnspython to resolve some names on our corporate network on mac osx. There is the sample code I am using:</p>
<pre><code>import dns.resolver
dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
dns.resolver.default_resolver.nameservers = ['x.x.x.x']
dns.resolver.resolve('www.google.com')
</code></pre>
<p>x.x.x.x is replaced with the corporate dns server. When run, the following error message is generated:</p>
<pre><code>dns.resolver.LifetimeTimeout: The resolution lifetime expired after 5.402 seconds: Server Do53:135.37.9.16@53 answered The DNS operation timed out.
</code></pre>
<p>Using dig to make the query, I do get back valid results:</p>
<pre><code>jon@laptop play3 % dig www.google.com
; <<>> DiG 9.10.6 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17452
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.google.com. IN A
;; ANSWER SECTION:
www.google.com. 235 IN A 74.125.21.103
www.google.com. 235 IN A 74.125.21.106
www.google.com. 235 IN A 74.125.21.105
www.google.com. 235 IN A 74.125.21.147
www.google.com. 235 IN A 74.125.21.104
www.google.com. 235 IN A 74.125.21.99
;; Query time: 19 msec
;; SERVER: x.x.x.x#53(x.x.x.x)
;; WHEN: Fri Dec 06 09:25:51 EST 2024
;; MSG SIZE rcvd: 139
</code></pre>
<p>In the dig resopsone, I see the same name servers being used to do the lookup. I have tried to adjust the timeout in python, using 8.8.8.8 as the dns server, and many other options including edns, but all changes result with same timeout exception.</p>
|
<python><dns><dnspython>
|
2024-12-06 14:33:35
| 0
| 342
|
Jon
|
79,258,317
| 1,879,604
|
Micro Service project structure around code sharing
|
<p>I'm trying to move a monolith project structure to a micro services (self learning), and need directions on the the shared code placement.</p>
<pre><code>src
|-- service1
| |-- Dockerfile
| |-- Code
| |-- DB
|
|-- service2
| |-- Dockerfile
| |-- Code
| |-- DB
|
|-- utils (only code)
| |-- code
|
|--docker-compose.yaml
</code></pre>
<p>Since all these services are on <code>Python</code> there are few pieces of code which are reused across. Here lets say <code>utils</code> is shared by both <code>service1</code> and <code>service2</code>.</p>
<p>Few of the option I'm reading to manage the shared code is to create it as a separate package (pip or git) and add it as a dependency for each of the services, or using git-submodules.</p>
<p>But, my first target is to create a build system, with just one repo catering all the services with their dependencies. Trying to get to <a href="https://github.com/GoogleCloudPlatform/microservices-demo" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/microservices-demo</a> structure at first. And in this demo project the code is not shared, and where needed code is duplicated.</p>
<p>I'm playing around the build context in the <code>docker-compose.yaml</code>.</p>
<pre><code>services:
service1:
build:
context: ./service1/
dockerfile: ./Dockerfile
command: python manage.py runserver 0.0.0.0:8000
</code></pre>
<p>to</p>
<pre><code>services:
service1:
build:
context: .
dockerfile: ./service1/Dockerfile
command: python manage.py runserver 0.0.0.0:8000
</code></pre>
<p>and updating the service1 <code>Dockerfile</code> to access <code>utils</code>, but it looks wrong to me.</p>
<p>Is there a way to build services along with <code>utils</code> as a dependency in the above structure ? Lets say copy the <code>utils</code> as a sibling to <code>Code</code> directory.</p>
<p>Any other options available to achieve this with out package manager or git-submodules ?</p>
<p>Any sample repo exploring this structure ?</p>
|
<python><docker>
|
2024-12-06 14:23:47
| 0
| 742
|
NEB
|
79,258,125
| 1,898,534
|
How to switch off automatic wheel building in databricks asset bundle python wheel task
|
<p>I have the following databricks asset bundle job defintion:</p>
<pre class="lang-py prettyprint-override"><code> - task_key: test-model-from-wheel
python_wheel_task:
entry_point: run # Entry point of the wheel
package_name: myservice # must match the name of your package in toml or setup.py
parameters: ["-a", "test_run", "-m", "model_test"]
libraries:
# By default we just include the .whl file generated for the hdf package in the previous make builds
- whl: ../../${var.wheel-file-path}
new_cluster:
instance_pool_id: ${var.instance-pool-id}
spark_version: 15.4.x-scala2.12
autoscale:
min_workers: 1
max_workers: 4
</code></pre>
<p>however, when i run in the command line <code>databricks bundle deploy -t dev</code> it will ignore my existing already build wheel file specified in <code>- whl: mywheelfile</code> which i have build before in a separate make command. It will just autodetect that there is a buildable python command and build again.</p>
<p>How to switch this off?</p>
<p>i tried to put absolute paths, i was already playing around with command line flags. Sadly the asset bundles docs are quite sparse on these patterns atm.</p>
|
<python><databricks><python-wheel><databricks-asset-bundle>
|
2024-12-06 13:19:22
| 1
| 6,499
|
PlagTag
|
79,258,116
| 3,710,004
|
Remove empty dirctories in Python recursively
|
<p>I have the following script in Pyth to remove empty directories. This will remove an empty directory with the filepath "a/b/c" no matter how many folders down. The problem is, sometimes if folder "c" is removed because it is empty, folder "b" is in turn empty, but then I have to keep running this code until all the empty folders are truly gone. Is there a way to avoid this problem?</p>
<pre><code>directories = glob.glob(f"{DIR_DOCS}/**/", recursive=True)
for directory in directories:
if not os.listdir(directory):
os.rmdir(directory)
print(f"Removed directory: {directory}")
</code></pre>
<p>Example filepath: "data/docs/TX/Fort Worth/2024-11-09/Meeting1/Test" - If Test2 is empty and removed, Meeting1 should be removed, then 2024-11-09.</p>
<p>I saw the answers to a similar question <a href="https://stackoverflow.com/questions/34900289/python-remove-empty-folders-recursively">here</a> but none of the solutions work for me.</p>
|
<python>
|
2024-12-06 13:16:54
| 0
| 686
|
user3710004
|
79,258,103
| 561,243
|
Appending additional '_id' to a field
|
<p>I have the following model:</p>
<pre class="lang-py prettyprint-override"><code>class Sample(MAFwBaseModel):
sample_id = AutoField(primary_key=True, help_text='The sample id primary key')
sample_name = TextField(help_text='The sample name')
class Resolution(MAFwBaseModel):
resolution_id = AutoField(primary_key=True, help_text='The resolution id primary key')
resolution_value = FloatField(help_text='The resolution in µm')
class CalibrationMethod(MAFwBaseModel):
method_id = AutoField(primary_key=True, help_text='The primary key for the calculation method')
multiplier = FloatField(default=1., help_text='The multiplication factor of this method')
class Image(MAFwBaseModel):
image_id = AutoField(primary_key=True, help_text='The image id primary key')
sample_id = ForeignKeyField(Sample, on_delete='CASCADE', backref='+')
filename = TextField()
checksum = TextField()
resolution_id = ForeignKeyField(Resolution,on_delete='CASCADE', backref='+')
class ProcessedImage(MAFwBaseModel):
image_id = ForeignKeyField(Image, primary_key=True, backref='+', help_text='The image id, foreign key and primary', on_delete='CASCADE')
value = FloatField(default=0)
</code></pre>
<p>With this definition, I could build exactly the DB in the way I wished.</p>
<p>The relationships between the Image and the other three tables is perfect as shown in the ERD below.</p>
<p><a href="https://i.sstatic.net/YjzdPUjx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjzdPUjx.png" alt="desired ERD" /></a></p>
<p>Now, I want to make a cross join with with CalibrationMethod and to do so, I use the following query.</p>
<pre class="lang-py prettyprint-override"><code> cross_join = (ProcessedImage.select(ProcessedImage, CalibrationMethod)
.join(CalibrationMethod, JOIN.CROSS)
.execute()
)
</code></pre>
<p>The query, in itself, works ok, meaning that I get exactly what I wanted, but when I start looping over the rows, I have found something strange:</p>
<pre class="lang-py prettyprint-override"><code>for row in cross_join:
# the image_id of ProcessedImage is not what I was expecting:
row.image_id # is the row in Image
row.image_id_id # is the field in ProcessedImage
row.method_id # is the field in CalibrationMethod.
</code></pre>
<p>I don't understand why sometimes a second '_id' it is added to the attribute. It is only a problem with the attribute, because the columns in the database are all correctly named.</p>
<p>This makes me thinking it is not a bug, but rather a feature and that I am overlooking something. What am I missing?
There must be a page of the documentation that I have skipped.</p>
<h2>Update</h2>
<p>I have read the page on <code>lazy_load</code> in the documentation. If I specify lazy_load = False, then both the row.image_id and row.image_id_id will be the actual field value.</p>
<p>Moreover, if I leave lazy_load = True and I specify object_id_name = 'ref_image_id', then row.image_id will point to Image row and row.ref_image_id to the value of the field. I would have rather preferred the opposite, the possibility to change the name of the row and not the one the field.</p>
|
<python><orm><peewee>
|
2024-12-06 13:09:58
| 1
| 367
|
toto
|
79,257,801
| 2,523,501
|
VSCode's testing framework breaks when referencing a file in a tertiary folder
|
<p>I have a super simple test framework in VSCode as follows:</p>
<pre><code>/.vscode
launch.json
settings.json
/python
/resources
some_data.tsv
/src
myapp.py
/test
test_config.py
test_myapp.py
/venv
</code></pre>
<p><code>setting.json</code> has:</p>
<pre><code> "python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": [
"."
],
</code></pre>
<p><em>(The following doesn't even work when unittestEnabled: true)</em></p>
<p><code>launch.json</code> includes:</p>
<pre><code> "cwd": "${workspaceFolder}/python",
"env": {
"PYTHONPATH": "${cwd}"
}
</code></pre>
<p><code>myapp.py</code> is one simple class/function to test:</p>
<pre><code>import pandas as pd
import sys
sys.path.insert(0, '../src')
class MyApp():
def __init__(self):
self.constant = 42
def get_constant(self):
# pd.read_csv('../resources/some_data.tsv', sep='\t') ## TODO: please get this working!
return self.constant
</code></pre>
<p><code>test_myapp.py</code> uses standard unittest framework to test that class:</p>
<pre><code>import unittest
import sys
sys.path.insert(0, '../src')
from myapp import MyApp
class Test_MyApp(unittest.TestCase):
def test_main_program_loads(self):
app = MyApp()
print(app.get_constant)
self.assertEqual(app.get_constant(), 42)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>In both these files I've added <code>sys.path.insert(0, '../src')</code> so the code/test can be run individually using the 'play' button in VSCode title-tab bar.</p>
<p>Following this answer <a href="https://stackoverflow.com/a/75565473/2523501">https://stackoverflow.com/a/75565473/2523501</a> I added <code>test_config.py</code> which got the test passing visually in Testing sidebar (they break without it):</p>
<pre><code>from pathlib import Path
import os
import sys
main_folder = Path(__file__).parent.parent
sys.path.insert(0, str(main_folder))
sys.path.insert(0, str(main_folder / 'src'))
sys.path.insert(0, str(main_folder / 'test'))
sys.path.insert(0, str(main_folder / 'resources'))
os.chdir(main_folder)
</code></pre>
<p>No <code>__init__.py</code>'s required.</p>
<p><strong>However!</strong> As soon as I try and use <code>some_data.tsv</code> (i.e. uncomment line 10) the Testing UI framework shows failed tests despite the 'play' button on both myapp.py and test_myapp.py still working.</p>
<p>Error ends with:
<code>FileNotFoundError: [Errno 2] No such file or directory: '../resources/some_data.tsv'</code></p>
|
<python><visual-studio-code><pytest><python-unittest>
|
2024-12-06 11:22:37
| 1
| 750
|
yeliabsalohcin
|
79,257,762
| 4,902,934
|
How to run dependence tasks concurrently with non-dependence ones, and tasks inside for loop?
|
<p>I am learning asyncio and there is a problem of running a dependence task concurrently with non-dependence ones. So far I couldn't make it work. This is my code:</p>
<pre><code>import asyncio
import random
def first_execution(choice):
if choice==1:
print(f"First result {choice}")
return choice
else:
print(f"First result {0}")
return 0
async def check_first_execution(result_from_first):
# Computationally complex function which take a lot of time to compute
await asyncio.sleep(10)
print(f"First check of value {result_from_first} complete")
async def second_execution(result_from_first):
# Moderately complex computation
await asyncio.sleep(5)
print(f"Second result {result_from_first+1}")
return result_from_first+1
async def check_second_execution(result_from_second):
# Computationally complex function which take a lot of time to compute
await asyncio.sleep(10)
print(f"Second check of value {result_from_second} complete")
async def third_execution(result_from_first):
# Moderately complex computation
await asyncio.sleep(5)
print(f"Third result {result_from_first+2}")
return result_from_first+2
async def check_third_execution(result_from_third):
# Computationally complex function which take a lot of time to compute
await asyncio.sleep(10)
print(f"Third check of value {result_from_third} complete")
async def main():
choice = random.choice([0, 1])
result_from_first = first_execution(choice)
# First part
coroutine_1 = check_first_execution(result_from_first)
if result_from_first==1:
coroutine_2 = second_execution(result_from_first)
results = await asyncio.gather(coroutine_1, coroutine_2)
elif result_from_first==0:
coroutine_3 = third_execution(result_from_first)
results = await asyncio.gather(coroutine_1, coroutine_3)
# Second part
list_results_from_first = [result_from_first+i for i in range(5)]
for first_result in list_results_from_first:
second_result = await second_execution(first_result)
check_second = await check_second_execution(second_result)
asyncio.run(main())
</code></pre>
<p><strong>In the first part (# First part)</strong>, my code works but it only runs sequentially, meaning: <code>first_execution</code> -> <code>check_first_execution</code> -> <code>second_execution</code> (with <code>choice==1</code>) :</p>
<pre><code> >> First result 1
>> First check of 1 complete
>> Second result 2
</code></pre>
<p>whereas, either with <code>choice==1</code> (or <code>choice==0</code>), I want the <code>check_first_execution</code>and <code>second_execution</code> (or <code>check_first_execution</code>and <code>third_execution</code>) to happen in parallel, for example with <code>choice==1</code> :</p>
<pre><code> >> First result 1
>> Second result 2
>> First check of 1 complete
</code></pre>
<p>The function <code>check_first_execution</code> takes longer to execute so it should finish later than the <code>second_execution</code> function.</p>
<p><strong>In the second part (# Second part)</strong>, it also happened sequentially: <code>second_execution</code> -> <code>check_second_execution</code> -> <code>second_execution</code> -> <code>check_second_execution</code> ..., for example (with <code>choice==1</code>) :</p>
<pre><code> >> Second result 3
>> Second check of 3 complete
>> Second result 4
>> Second check of 4 complete
>> Second result 5
>> Second check of 5 complete
</code></pre>
<p>what I want is something like this: <code>second_execution</code> -> <code>second_execution</code> -> <code>check_second_execution</code> -> <code>second_execution</code>..., example:</p>
<pre><code> >> Second result 3
>> Second result 4
>> Second check 3 complete
>> Second result 5
>> Second check 4 complete
</code></pre>
<p>How do I achieve the two points above?
Any help is appreciated.</p>
|
<python><python-3.x><asynchronous><concurrency><python-asyncio>
|
2024-12-06 11:06:22
| 1
| 1,030
|
HienPham
|
79,257,740
| 4,108,376
|
Limit of Python call stack variables in Visual Studio Code
|
<p>I'm using Visual Studio Code to debug a Python program with a deep, recursive call stack.
The debugger shows the entire call stack, but it seems that the "Variables" view, showing the local variables at each place in the call stack, only gets updated when I select one of the few first or last of the stack frames.</p>
<p>If I select one in the middle, the "Variables" view does not get updated at all.</p>
<p>Is there any setting (in the IDE, the debugger, Python, ...) that would increase the range where variables are shown?</p>
|
<python><visual-studio-code><vscode-debugger>
|
2024-12-06 10:57:57
| 0
| 9,230
|
tmlen
|
79,257,607
| 8,831,742
|
Estimating average closeness centrality of a large graph in python in a reasonable time
|
<p>I have a large network file (about 7 million nodes and 7 million edges) that i've imported in a <code>networkx</code> graph. I need to estimate its <a href="https://en.wikipedia.org/wiki/Closeness_centrality" rel="nofollow noreferrer">closeness centrality</a> average across all nodes. I've tried implementing the Eppstein-Wang algorithm (which computes the Single-Source-Shortest-Path for a random node <code>k</code> times to produce this estimate). However, for a graph this size this computation needs to happen about 1000 times to produce a sufficiently precise result, which would take me about 10 hours on my machine.</p>
<p>E-W has a complexity of <code>theta(k * m)</code> (where <code>m</code> is the number of nodes).
Are there any other estimation algorithms that have less computational complexity and could get me an acceptable result within, say, around one hour?</p>
|
<python><graph><networkx><estimation>
|
2024-12-06 10:12:23
| 0
| 353
|
none none
|
79,257,526
| 3,575,623
|
Edit legend of seaborn stacked barplot
|
<p>I followed <a href="https://python-graph-gallery.com/stacked-and-percent-stacked-barplot/" rel="nofollow noreferrer">this tutorial</a> to create a stacked double barplot, which worked nicely for my data, but now I'm trying to edit the legend to include the information needed to read the figure. I tried following <a href="https://stackoverflow.com/a/55108651/3575623">this answer</a> but it just creates a new legend over the original one.</p>
<p><a href="https://i.sstatic.net/raMIXUkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/raMIXUkZ.png" alt="double stacked bar plot with incorrect legend" /></a></p>
<p>What I would like to see in my legend is</p>
<pre><code>blue -> raw
orange -> spike
hatched -> CATTCCA
blank -> CATTCCT
</code></pre>
<p>Any ideas of how to achieve this?</p>
<p>Here's my data:</p>
<pre><code>Conds method peaks motif_CATTCCA motif_CATTCCT
TEAD1_T1_pos_1 raw_only 22424 2376 1357
TEAD1_T1_pos_1 spike_only 4086 975 246
TEAD1_T1_pos_2 raw_only 59922 6907 3899
TEAD1_T1_pos_2 spike_only 1064 611 63
TEAD1_T7_neg_1 raw_only 5650 745 346
TEAD1_T7_neg_1 spike_only 16340 2458 984
TEAD1_T7_neg_2 raw_only 64300 7812 4407
TEAD1_T7_neg_2 spike_only 662 406 29
TEAD1_T7_pos_1 raw_only 49892 4894 3109
TEAD1_T7_pos_1 spike_only 1106 598 55
TEAD1_T7_pos_2 raw_only 22666 2325 1403
TEAD1_T7_pos_2 spike_only 4558 1002 259
</code></pre>
<p>and the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
workdir = "my/dir/"
data = pd.read_csv(workdir + "motifs_under_peaks.txt", sep=" ")
data_pct = data.copy()
data_pct["motif_CATTCCA"] = data_pct["motif_CATTCCA"] / data_pct["peaks"]
data_pct["motif_CATTCCT"] = data_pct["motif_CATTCCT"] / data_pct["peaks"]
data_pct = data_pct.drop("peaks", 1)
data_pct = data_pct.melt(id_vars=["Conds", "method"], var_name="motif")
total = data_pct.groupby(["Conds", "method"])["value"].sum().reset_index()
motif_A = data_pct[data_pct.motif =="motif_CATTCCA"]
fig1, ax1 = plt.subplots()
bar1 = sns.barplot(x="value", y="Conds", hue="method", data=total, ax=ax1)
bar2 = sns.barplot(x="value", y="Conds", hue="method", data=motif_A, hatch="xxxxx", ax=ax1) ##need to add hatching into legend
fig1.legend(title="motif enrichment", loc="upper right", labels=["raw CATTCCT", "spike CATTCCT", "raw CATTCCA", "spike CATTCCA"])
</code></pre>
|
<python><pandas><matplotlib><seaborn><legend>
|
2024-12-06 09:46:44
| 2
| 507
|
Whitehot
|
79,257,488
| 11,629,296
|
Create a new line for comma separated values in pandas column - I dont want to add new rows, I want to have same rows in output
|
<p>I have a dataframe like this,</p>
<pre><code>df
col1 col2
1 'abc,pqr'
2 'ghv'
3 'mrr, jig'
</code></pre>
<p>Now I want to create a new line for each comma separated values in col2, so the output would look like,</p>
<pre><code>col1 col2
1 'abc
pqr'
2 'ghv'
3 'mrr
jig'
</code></pre>
<p>looking for some easy solutions in pandas</p>
|
<python><pandas><dataframe><sklearn-pandas>
|
2024-12-06 09:31:15
| 0
| 2,189
|
Kallol
|
79,257,046
| 6,328,841
|
Cannot install llama-index-embeddings-huggingface==0.1.3 because these package versions have conflicting dependencies
|
<p>I am unable to install the huggingfaceEmbedding \</p>
<p>Getting the followng error:</p>
<pre><code>ERROR: Cannot install llama-index-embeddings-huggingface==0.1.3, llama-index-embeddings-huggingface==0.1.4 and llama-index-embeddings-huggingface==0.1.5 because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p><strong>Python version:</strong> 3.13</p>
<p><a href="https://i.sstatic.net/IJXlvJWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IJXlvJWk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/CUeR8bMr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CUeR8bMr.png" alt="enter image description here" /></a></p>
|
<python><huggingface-transformers><large-language-model><huggingface><llama>
|
2024-12-06 06:33:21
| 1
| 718
|
Saurabh Verma
|
79,256,899
| 8,384,910
|
Plotly KDE in histogram
|
<p>I can create a histogram in Plotly like this:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.figure_factory as ff
import plotly.express as px
data = px.data.tips()["tip"]
ff.create_distplot([data], group_labels=["Tip"])
</code></pre>
<p><a href="https://i.sstatic.net/MBFQ450p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBFQ450p.png" alt="Histogram with rug marginal and KDE" /></a></p>
<p>However, <code>ff.create_distplot</code> is deprecated. The docs recommend using <code>px.histogram</code>.</p>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code>px.histogram(
x=data,
marginal="rug",
)
</code></pre>
<p>However, it did not produce a graph with KDE (the curve):</p>
<p><a href="https://i.sstatic.net/iVxo0iej.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVxo0iej.png" alt="Histogram with rug marginal" /></a></p>
<p>How do I fix it?</p>
|
<python><plotly>
|
2024-12-06 05:12:24
| 1
| 9,414
|
Richie Bendall
|
79,256,824
| 174,615
|
Why are enums incompatible across python packages?
|
<p>An enum is declared in an imported package and identically in the importer. Same value, but Python treats the imported enum value as different for some reason. Package 1 is a parser that I wrote which outputs a dictionary containing some values from this enum declared in the parser package:</p>
<pre><code>class NodeFace(Enum):
TOP = 0
BOTTOM = 1
RIGHT = 2
LEFT = 3
</code></pre>
<p>So, it parses along and then maps some text to these values in a dictionary that will be accessible to the importer. In case it matters, the parser is built using Python 3.13.</p>
<p>Now the importer, built using Python 3.12, also declares the same enum in its own local files. Identical to above. Additionally, I use this dictionary to find opposite sides, declared in the importer:</p>
<pre><code>OppositeFace = {
NodeFace.TOP: NodeFace.BOTTOM,
NodeFace.BOTTOM: NodeFace.TOP,
NodeFace.LEFT: NodeFace.RIGHT,
NodeFace.RIGHT: NodeFace.LEFT
}
</code></pre>
<p>In the importer I assign enum values from the parser output dict to variables tface and pface. And, I test to make sure they look right by printing to console and sure enough I get values like: <NodeFace.BOTTOM: 1>. In fact, I'm looking at them in the debugger, no problem so far. Now the fun starts...</p>
<pre><code>if OppositeFace[tface] == pface:
</code></pre>
<p>This fails on a KeyError on the OppositeFace lookup. BUT, if I do the lookup in the debugger console with: OppositeFace[NodeFace.TOP], it works just fine.</p>
<p>I did all kinds of tests and all I could figure is that the problem was in defining the enum twice, once in the parser and again in the importer and, despite all the values matching in the debugger and console, some internal value is different and causing the dict lookup to fail.</p>
<p>My solution was to eliminate the Enum from my parser and just pass strings 'TOP', 'BOTTOM', etc. Then, on the importer side I do OppositeFace[NodeFace[tface]] where tface is now the string and it works fine.</p>
<p>Can anyone tell me why exactly this happens? Just curious at this point.</p>
|
<python><enums><keyerror>
|
2024-12-06 04:17:17
| 1
| 532
|
Leon Starr
|
79,256,512
| 11,091,148
|
FastAPI Azure Opentelemetry logger does not map LogLevels correctly
|
<p>Logs of my FastAPI app are currently written into <code>AppServiceConsoleLogs</code> table in Log Analytics, Azure. I initialize my loggers with (where <code>LOGGING_CONFIG</code> is just the logging config as Class):</p>
<pre><code>class ApiLoggerConfig(BaseModel):
"""Logging configuration to be set for all APIs."""
formatters: dict = {
"uvicorn": {
"()": "uvicorn.logging.DefaultFormatter",
"fmt": "%(levelprefix)s %(asctime)s %(name)s:%(lineno)d: %(message)s",
"datefmt": DATEFMT,
},
"access": {
"()": "uvicorn.logging.AccessFormatter",
"fmt": '%(levelprefix)s %(asctime)s %(name)s: %(client_addr)s - "%(request_line)s" %(status_code)s',
"datefmt": DATEFMT,
},
}
handlers: dict = {
"uvicorn": {
"formatter": "uvicorn",
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
},
"access": {
"formatter": "access",
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
},
}
loggers: dict = {
"uvicorn": {"handlers": ["uvicorn"], "level": LOG_LEVEL, "propagate": False},
"uvicorn.error": {"level": LOG_LEVEL},
"uvicorn.access": {
"handlers": ["access"],
"level": LOG_LEVEL,
"propagate": False,
},
}
def add_logger(
self,
name: str,
handler: str = "default",
propagate: bool = False,
):
self.loggers[name] = {
"handlers": [handler],
"level": self.LOG_LEVEL,
"propagate": propagate,
}
def _setup_logging(app: FastAPI):
LOGGING_CONFIG = ApiLoggerConfig()
# Load Azure logging config for cloud setup only
if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
# set metrics extractor to root
configure_azure_monitor()
FastAPIInstrumentor.instrument_app(app)
LOGGING_CONFIG.add_logger(__name__)
dictConfig(LOGGING_CONFIG)
return logging.getLogger(__name__)
app = FastAPI()
logger = _setup_logging(app)
@app.get("/", response_model=HealthResponse)
async def health() -> Dict[str, str]:
"""Simple heath check endpoint.
Returns:
The response follows the provided HealthResponse schema.
"""
logger.info("I am healthy")
return {"message": "API running."}
</code></pre>
<p>However, <code>INFO</code> logs get associated with the LogLevel <code>ERROR</code> ( especially for all logs of the <code>__name__</code> logger). The setup works locally without any issues.</p>
<p><a href="https://i.sstatic.net/tYomJLyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tYomJLyf.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong?</p>
|
<python><azure><fastapi><open-telemetry><azure-log-analytics>
|
2024-12-05 23:35:44
| 1
| 526
|
Bennimi
|
79,256,485
| 11,547,168
|
How do I convert a complex Postgres SQL query into a Django queryset?
|
<p>I have the following SQL query working:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
mrc.token_id,
ARRAY_AGG(mt.name) AS tracking
FROM
markets_rankscurrent mrc
LEFT JOIN (
SELECT
mtg.id,
mtg.index_id,
mtg.favorites_group_id,
mtg.name,
COALESCE(ufe.token_id, mte.token_id, mie.token_id) AS token_id
FROM
markets_trackinggroup mtg
LEFT JOIN users_favoritesentry ufe ON mtg.favorites_group_id = ufe.group_id
LEFT JOIN markets_trackingentry mte ON mtg.id = mte.group_id
LEFT JOIN markets_indexentry mie ON mtg.index_id = mie.index_id
) mt ON mrc.token_id = mt.token_id
GROUP BY
mrc.token_id;
</code></pre>
<p>Here are my models:</p>
<pre class="lang-py prettyprint-override"><code>class Token(models.Model):
class RanksCurrent(models.Model):
token = models.ForeignKey(Token, on_delete=models.CASCADE, db_index=False)
class TrackingGroup(models.Model):
name = models.CharField(max_length=60, verbose_name='Name')
favorites_group = models.ForeignKey(FavoritesGroup, on_delete=models.CASCADE, related_name='tracking_groups', blank=True, null=True)
index = models.ForeignKey(Token, on_delete=models.CASCADE, related_name='tracking_groups', blank=True, null=True)
class TrackingEntry(models.Model):
group = models.ForeignKey(TrackingGroup, on_delete=models.CASCADE, related_name='tokens')
token = models.ForeignKey(Token, on_delete=models.CASCADE, related_name='tracking_entries')
class IndexEntry(models.Model):
index = models.ForeignKey(Token, on_delete=models.CASCADE, related_name='index_tokens')
token = models.ForeignKey(Token, on_delete=models.CASCADE, related_name='indices')
class FavoritesGroup(models.Model):
pass
class FavoritesEntry(models.Model):
group = models.ForeignKey(FavoritesGroup, on_delete=models.CASCADE, related_name='favorites_entries')
token = models.ForeignKey('markets.Token', on_delete=models.CASCADE, related_name='favorites_entries')
</code></pre>
<p><a href="https://i.sstatic.net/prh9bFfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/prh9bFfg.png" alt="Django models" /></a></p>
<p>The <code>TrackingGroup.index</code> foreign key will only be set to a <code>Token</code> object that is also a foreign key in the <code>IndexEntry</code> table.</p>
<p>My end goal is to be able to query the <code>RanksCurrent</code> table and annotate a <code>tracking_groups</code> column that contains a list of <code>TrackingGroup</code> names where the <code>Token</code> is a member. My attempts have made use of <code>Subquery</code> and <code>ArrayAgg</code> to try and do this, but if my subquery returns a list (like I want it to), it fails.</p>
<p>These types of methods have worked for getting the list of TrackingGroup names:</p>
<pre class="lang-py prettyprint-override"><code>tracking_subquery = TrackingGroup.objects.filter(
Q(index__index_tokens__token=OuterRef('token_id')) |
Q(favorites_group__favorites_entries__token=OuterRef('token_id')) |
Q(tokens__token=OuterRef('token_id'))
).values('name')
tracking_subquery = TrackingGroup.objects.filter(
Q(favorites_group_id__in=FavoritesEntry.objects.filter(token_id=OuterRef(OuterRef('token_id'))).values('group_id')) |
Q(id__in=TrackingEntry.objects.filter(token_id=OuterRef(OuterRef('token_id'))).values('group_id')) |
Q(index_id__in=IndexEntry.objects.filter(token_id=OuterRef(OuterRef('token_id'))).values('index_id'))
).values('name')
</code></pre>
<p>However, when I attempt to annotate the main query, it fails:</p>
<pre class="lang-py prettyprint-override"><code>RanksCurrent.objects.annotate(
tracking=ArrayAgg(Subquery(tracking_subquery))
)
</code></pre>
<p>I thought I would need to move the <code>ArrayAgg</code> call inside the subquery, but that did not make a difference:</p>
<pre class="lang-py prettyprint-override"><code>RanksCurrent.objects.annotate(
tracking=Subquery(
TrackingGroup.objects.filter(
Q(index__index_tokens__token=OuterRef('token_id')) |
Q(favorites_group__favorites_entries__token=OuterRef('token_id')) |
Q(tokens__token=OuterRef('token_id'))
).values('name').annotate(group_names=ArrayAgg('name')).values('group_names')))
</code></pre>
<p>As far as I can tell, the reason the SQL works is because the inner <code>SELECT</code> statement returns a unique row for each <code>token_id</code>, rather than a row for each <code>TrackingGroup</code>.</p>
<p>My thought right now is to break this up into three separate queries, each targeting one of the *Entry tables, then either somehow combine the resulting lists into a single one and annotate the queryset with it, or just create three separate annotations. I would have already done this if not for getting the operation to work in SQL, so now I want to try and do the same in Django before giving up fully.</p>
|
<python><django><postgresql><django-models><django-views>
|
2024-12-05 23:16:23
| 1
| 620
|
DLAN
|
79,256,113
| 8,340,867
|
Streamlabs api 2.0 Uauthorized HTTP
|
<p>currently i have this problem with stremlabs, I am trying to integrate a 3rd party donation service which will hit the streamlabs donations in order to augment the value there.</p>
<pre><code>import requests
import datetime
API_ACCESS_TOKEN = "MY API ACCESS TOKEN"
# FROM https://streamlabs.com/dashboard#/settings/api-settings
url = "https://streamlabs.com/api/v2.0/donations"
headers = {
"accept": "application/json",
"Authorization": f"Bearer {API_ACCESS_TOKEN}",
"Content-Type": "application/json"
}
# Donation data (replace with actual values)
donation_data = {
"name": "John Doe",
"message": "Thanks for the stream!",
"identifier": "john.doe@example.com",
"amount": 10.00,
"currency": "USD",
"created_at": datetime.datetime.now().isoformat(),
}
if __name__ == '__main__':
response = requests.post(url, headers=headers, json=donation_data)
if response.status_code == 200:
print("Donation created successfully!")
print(response.text)
elif response.status_code == 400:
print("Error creating donation:")
print(response.text)
else:
print(f"Unexpected status code: {response.status_code}")
print(response.text)
</code></pre>
<p>Unfortunatelly 2 hrs of debugging could not figure out why I recieve 401 Unauthorized</p>
<pre><code>Unexpected status code: 401
Unauthorized.
</code></pre>
|
<python><python-requests><streamlabs>
|
2024-12-05 20:12:10
| 0
| 2,045
|
ThunderHorn
|
79,256,095
| 19,959,092
|
Problems plotting timestamps on the x-axis with Matplotlib
|
<p>I am working on a Python script that loads several CSV files containing timestamps and ping data and then displays them on a plot. The X-axis is supposed to display the timestamps in HH:MM format, with the timestamps coming from multiple CSV files that record different ping values for different addresses.</p>
<p>The challenge is that I only want to display a limited number of timestamps for the X axis, e.g. 10-12 timestamps, based on the number of data points in the CSV files. I also want to ensure that the X-axis is correctly labeled with the appropriate timestamps and associated ping values.</p>
<p>Problem:
The plot shows the data, but the timestamps on the X-axis are not correct and too few ticks appear. Only the first timestamp is displayed and only 8 ticks are generated on the X-axis.</p>
<p>In addition, the X-axis ticks do not seem to match the timestamps from the data correctly, which affects the readability of the plot.</p>
<p>Goal:
The X-axis should correctly display timestamps in the format HH:MM:SS for all addresses from the CSV files.</p>
<p>I would like to have a limited number of timestamps (approx. 10-12) on the X-axis based on the data points in the CSV files.</p>
<p>It is important to mention that the information for the plot is stored in x_labels and x_positions.
11 subdivisions are also correctly created and saved for 99 data records, but these are still displayed incorrectly.</p>
<p>Example:
x_positions: <code>[0.0, 2.55, 5.1, 7.65, 10.216666666666667, 12.766666666666667, 15.316666666666666, 17.866666666666667, 20.416666666666668, 22.983333333333334, 25.533333333333335]</code>
x_labels: <code>['17:24:43', '17:27:16', '17:29:49', '17:32:22', '17:34:56', '17:37:29', '17:40:02', '17:42:35', '17:45:08', '17:47:42', '17:50:15']</code></p>
<p>This is the picture I get, but it should have 11 dividing lines on the X axis and all of them should be labeled
<a href="https://i.sstatic.net/HR0iqzOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HR0iqzOy.png" alt="enter image description here" /></a></p>
<p>Here is some test Data, I store in the csv:</p>
<pre><code>Time,Ping (ms)
17:24:43,0.1
17:25:00,0.2
17:25:17,0.23
17:25:34,0.12
17:25:51,0.23
17:26:08,0.123
17:26:25,0.321
17:26:42,0.231
</code></pre>
<p>Here is My Code:</p>
<pre><code>import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime, timedelta
# Funktion zum Laden der Daten aus den CSV-Dateien
def load_data(folder):
data = {}
for root, dirs, files in os.walk(folder):
for file in files:
if file.endswith(".csv"):
address = file.replace('_', '.').replace('.csv', '')
file_path = os.path.join(root, file)
df = pd.read_csv(file_path)
df['Time'] = pd.to_datetime(df['Time'], format='%H:%M:%S')
df['Ping (ms)'] = df['Ping (ms)'].apply(lambda x: 0 if x == 0 else x)
data[address] = df
return data
# Funktion zum Erstellen des Plots
def plot_data(data):
plt.figure(figsize=(14, 8))
colors = generate_colors(len(data))
# Bestimme die Anzahl der Datenpunkte für eine einzelne Adresse
df = next(iter(data.values())) # Wähle den ersten DataFrame aus
total_data_points = len(df)
# Berechne den dif-Wert
dif = total_data_points // 10
if dif < 1:
dif = 1
# Sammle alle Zeitstempel für die X-Achse
x_labels = []
x_positions = []
for i in range(0, len(df), dif):
time = df['Time'].iloc[i]
x_labels.append(time.strftime('%H:%M:%S'))
x_positions.append((time - min(df['Time'])).total_seconds() / 60)
# Plotten der Ping-Daten für jede Adresse
for idx, (address, df) in enumerate(data.items()):
df['Time_diff'] = (df['Time'] - min(df['Time'])).dt.total_seconds() / 60
mask_timeout = df['Ping (ms)'] == 0
mask_normal = ~mask_timeout
plt.plot(df['Time_diff'][mask_normal], df['Ping (ms)'][mask_normal], label=address, color=colors[idx % len(colors)])
plt.plot(df['Time_diff'][mask_timeout], df['Ping (ms)'][mask_timeout], color='r', lw=2)
# Anpassen der X-Achse
plt.xticks(x_positions, x_labels, rotation=45, ha='right')
plt.xlabel('Time')
plt.ylabel('Ping (ms)')
plt.title('Ping Times for Different Addresses')
plt.legend()
plt.grid(True)
plt.tight_layout()
plt.show()
def generate_colors(n):
colors = []
for i in range(n):
hue = i / n
colors.append(plt.cm.hsv(hue))
return colors
# Main-Funktion
def main():
data_folder = input("Bitte geben Sie den Pfad zum Ordner mit den CSV-Dateien ein: ")
if not os.path.exists(data_folder):
print(f"Der Ordner {data_folder} existiert nicht.")
return
data = load_data(data_folder)
plot_data(data)
if __name__ == "__main__":
main()
</code></pre>
|
<python><csv><matplotlib><plot>
|
2024-12-05 20:03:15
| 1
| 428
|
Pantastix
|
79,256,084
| 678,061
|
Python typing of Callable with Concatenate and ParamSpec for child class functions in decorator?
|
<p>I feel I'm not understanding something and don't know the correct way to typehint this.</p>
<p>I've got a parent class and a child class. I'm trying to create a decorator with ParamSpec so I gain access to args / kwargs and the return inside a decorated child class function where the first argument is of course always "self", and be able to do something inherited from the parent.</p>
<p>Something that looks like this:</p>
<pre><code>from typing import ParamSpec, TypeVar, Concatenate
from collections.abc import Callable
P = ParamSpec("P")
T = TypeVar("T")
class Parent:
def parent_function(self, s:str):
print("hola " + s)
def decorate(
func: Callable[Concatenate[Parent, P], T]
) -> Callable[Concatenate[Parent, P], T]:
def wrapper(self: Parent, *args: P.args, **kwargs: P.kwargs) -> T:
self.parent_function("there")
return func(self, *args, **kwargs)
return wrapper
class Child(Parent):
@decorate
def child_function(self, s:str):
print("hello " + s)
child = Child()
child.child_function("world")
</code></pre>
<p>Now, this <em>works</em> just fine of course (returns "hola there\nhello world\n") but MyPy complains on the "@decorate" because:</p>
<blockquote>
<p>Argument 1 to "decorate" has incompatible type "Callable[[Child, str], Any]"; expected "Callable[[Parent, str], Any]"</p>
</blockquote>
<p>I get this is a minimal example, but I'm not sure why it's complaining that Child isn't of type Parent here. Or if there's another way around it.</p>
<p>MyPy is perfectly happy if I get replace "Concatenate[Parent,p]" with "P" and just assert isinstance(args[0],Parent) inside wrapper, but I would prefer to use type hinting instead of a code assertions.</p>
|
<python><python-typing><python-decorators>
|
2024-12-05 19:57:47
| 1
| 840
|
AndyMac
|
79,255,967
| 943,713
|
How to create a column of increasing value base on other column with same value
|
<p>all</p>
<p>I have a data frame as below</p>
<pre><code>df = pd.DataFrame.from_dict({'A':[1,1,2], 'B':[10,20,14], "C":[30,40,20]})
</code></pre>
<p><a href="https://i.sstatic.net/KnzVbFkG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnzVbFkG.png" alt="enter image description here" /></a></p>
<p>How to create a new column for example <code>newCol</code> which depends on column <code>A</code>. if value at <code>A</code> remains same, then <code>newCol</code> will fill with a increasing integer. if there is a diff value in <code>A</code>, here <code>2</code>, then the value of <code>newCol</code> will reset and start increasing again.</p>
<p>Thank you.</p>
<p><a href="https://i.sstatic.net/mdfawJlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdfawJlD.png" alt="enter image description here" /></a></p>
|
<python><pandas><numpy>
|
2024-12-05 19:15:59
| 1
| 1,883
|
Shawn Zhang
|
79,255,816
| 14,802,285
|
How to implement self paced multitask weighted loss (Kendall et al. 2018) in pytorch?
|
<p>In <a href="https://arxiv.org/abs/1705.07115" rel="nofollow noreferrer">this study</a> authors introduce an equation (equation 7) to weigh the individual losses for different tasks of neural networks.</p>
<p>I want to implement this as a function in <code>pytorch</code> so that I can use for my model. So far, what I have tried is:</p>
<pre><code>import torch
# function to apply uncertainty weighing on the losses
def apply_uncertainty_weights(sigma, loss):
"""
This function applies uncertainty weights on the given loss.
NOTE: This implementation is based on the study Kendall et al. 2018 (https://arxiv.org/abs/1705.07115)
Arguments:
sigma: A NN learned uncertainty value (initialised as torch.nn.Parameter(torch.zeros(1))
loss: The calculated losss between the prediction and target
Returns:
weighted_loss: Weighted loss
"""
# apply uncertainty weighthing
# This is the formula in the publication -> weighted_loss = (1 / (2 * sigma**2)) * loss + torch.log(sigma)
# but we can't use it as it won't be numerically stable/differentiable (e.g. when sigma is predicted to be 0)
# instead use the following
sigma = torch.nn.functional.softplus(sigma) + torch.tensor(1e-8) # this makes sure sigma is never exactly 0 or less otherwise the following functions wont work
log_sigma_squared = torch.log(sigma ** 2) # this is log(sigma^2)
precision = (1/2) * torch.exp(-log_sigma_squared) # this is 1/sigma^2
log_sigma = (1/2) * log_sigma_squared # this is log(sigma)
weighted_loss = precision * loss + log_sigma
# return the weighted loss
return weighted_loss
</code></pre>
<p>But strangely this implementation gives me negative loss values during training. What am I doing wrong?</p>
|
<python><pytorch>
|
2024-12-05 18:15:23
| 1
| 3,364
|
bird
|
79,255,610
| 12,276,279
|
Is there a function available to convert HIGHS solution file format to GLPK solution file format?
|
<p>I am currently using HIGHS to solve a mps file and get a solution file using the following code:</p>
<pre><code>import highspy
h = highspy.Highs()
filename = 'abc.mps'
h.readModel(filename)
h.setOptionValue("solver","ipm")
h.run()
h.writeSolution("abc_highs_solution.sol", 1)
</code></pre>
<p>The solution file format of HIGHS looks as shown below. It has two different sections for columns and rows:
<img src="https://github.com/user-attachments/assets/a387a22d-8ecf-47b9-b55d-0d82d6fc1b10" alt="image" /></p>
<p>I need to get the solution into the GLPK solution file format which is as shown. It also has two sections for rows and columns.
<img src="https://github.com/user-attachments/assets/befac0e0-1ee7-4807-ac78-959ce5689041" alt="image" /></p>
<p>Is there a functionality available to convert the HIGHS solution file to the GLPK solution file format?</p>
|
<python><optimization><glpk><highs>
|
2024-12-05 17:00:55
| 0
| 1,810
|
hbstha123
|
79,255,590
| 9,640,238
|
Add columns to dataframe from a dictionary
|
<p>There are many answers out there to this question, but I couldn't find one that applies to my case.</p>
<p>I have a dataframe that contains ID's:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"id": [0, 1, 2, 3, 4]})
</code></pre>
<p>Now, I query a REST API for each ID's to get additional attributes that are returned to me as a dictionary:</p>
<pre class="lang-json prettyprint-override"><code>{"id": 0, "values": {"first_name": "Bob", "last_name": "Smith"}}
</code></pre>
<p>What I want is to add the content of <code>values</code> as additional columns to the matching row of the dataframe.</p>
<p>An important point is that, at each iteration, I may get different attributes, so I don't know how many columns will be added in the end, or even their names. So, sometimes I need to add a column (which I would do with <code>pd.concat</code>), but sometimes I need to set the value to an existing one.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>first_name</th>
<th>last_name</th>
<th>something</th>
<th>something_else</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Bob</td>
<td>Smith</td>
<td></td>
<td></td>
</tr>
<tr>
<td>…</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>4</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table></div>
<p>Any thought?</p>
|
<python><pandas><dataframe>
|
2024-12-05 16:53:54
| 3
| 2,690
|
mrgou
|
79,255,556
| 3,858,619
|
Python JetBrains profiler fallthrough call a lot of time
|
<p>I test the profiler in the JetBrains software for my Python application. I did that to evaluate the slow part of my code. Problem is, I check the statistics and I see that <code>fallthrough</code> appear 125 times for 6466ms !</p>
<p><a href="https://i.sstatic.net/M64YHa8p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M64YHa8p.png" alt="enter image description here" /></a></p>
<p>I can't find which function call this, do you have any idea ?</p>
<p>Thanks</p>
|
<python><pycharm><jetbrains-ide>
|
2024-12-05 16:46:01
| 0
| 1,093
|
Nathan30
|
79,255,413
| 9,223,023
|
How does Python ThreadPoolExecutor switch between concurrent threads?
|
<p>How does Python ThreadPoolExecutor switch between concurrent threads?</p>
<p>In the case of the async/awaint event-loop, the switching between different pieces of the code happens at the await calls. Does the ThreadPoolExecutor run each submitted task for a random amount of time> Or until something somewhere calls Thread.sleep()? Or the OS temporarily switches to do something else and the thread is forced to release the GIL thus allowing some other thread to grab the GIL next time?</p>
|
<python><python-multithreading>
|
2024-12-05 16:02:41
| 1
| 1,203
|
Petras Purlys
|
79,255,406
| 4,811,873
|
Python Wagtail crashes 6.3.1 "Streamfield object has no attribute 'bind_to_model'
|
<p>While updating an old Wagtail website to the current version, I encounter this error, in <code>admin/panels/group.py</code> line 74:</p>
<p><code>AttributeError: 'StreamField' object has no attribute 'bind_to_model'</code></p>
<p>Since this is apparently in the Wagtail software <em>as distributed,</em> I am quite confused.</p>
<hr />
<p>The full traceback is as follows:</p>
<pre><code>Exception in thread django-main-thread:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/threading.py", line 1041, in _bootstrap_inner
self.run()
~~~~~~~~^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/threading.py", line 992, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/core/management/commands/runserver.py", line 134, in inner_run
self.check(display_num_errors=True)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/core/management/base.py", line 486, in check
all_issues = checks.run_checks(
app_configs=app_configs,
...<2 lines>...
databases=databases,
)
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/core/checks/registry.py", line 88, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/checks.py", line 69, in get_form_class_check
edit_handler = cls.get_edit_handler()
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/utils/decorators.py", line 54, in __call__
return self.value
^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/utils/functional.py", line 47, in __get__
res = instance.__dict__[self.name] = self.func(instance)
~~~~~~~~~^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/utils/decorators.py", line 50, in value
return self.fn(self.cls)
~~~~~~~^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/page_utils.py", line 73, in _get_page_edit_handler
return edit_handler.bind_to_model(cls)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/base.py", line 146, in bind_to_model
new.on_model_bound()
~~~~~~~~~~~~~~~~~~^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/group.py", line 74, in on_model_bound
self.children = [child.bind_to_model(self.model) for child in self.children]
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/base.py", line 146, in bind_to_model
new.on_model_bound()
~~~~~~~~~~~~~~~~~~^^
File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/group.py", line 74, in on_model_bound
self.children = [child.bind_to_model(self.model) for child in self.children]
^^^^^^^^^^^^^^^^^^^
AttributeError: 'StreamField' object has no attribute 'bind_to_model'
</code></pre>
|
<python><wagtail><wagtail-streamfield>
|
2024-12-05 16:00:59
| 2
| 9,043
|
Mike Robinson
|
79,255,192
| 849,278
|
Altair - Area chart fill bleeds out from chart
|
<p>I'm using the <code>altair</code> library to build a graph, however for some reason the area fill bleeds out of the chart.</p>
<p>Here is the code</p>
<pre><code>import altair as alt
# Determine the minimum and maximum values for the y-axis
y_min = price_data["close"].min()
y_max = price_data["close"].max()
# Create the chart with adjusted y-axis
alt.Chart(price_data).mark_area(
line={'color': 'darkgreen'},
color=alt.Gradient(
gradient='linear',
stops=[alt.GradientStop(color='white', offset=0),
alt.GradientStop(color='darkgreen', offset=1)],
x1=1,
x2=1,
y1=1,
y2=0
)
).encode(
alt.X('date:T', title="Date"),
alt.Y('close:Q', scale=alt.Scale(domain=[y_min, y_max]), title="Close Price")
).properties(
title=f"{symbol} Price Trend"
)
</code></pre>
<p>I suspect it's something to do with the <code>y_min</code> and <code>y_max</code>.</p>
<p>Here is the image yielded:
<a href="https://i.sstatic.net/jt3cqwxF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jt3cqwxF.png" alt="Image showing the graph's filling bleeding out" /></a></p>
<p>I tried to create an area chart with gradient fill from green to white.</p>
|
<python><plot><altair>
|
2024-12-05 14:57:51
| 1
| 381
|
Rami
|
79,255,096
| 1,099,854
|
Await inside ctypes callback with loop already running
|
<p>I need to perform a series of read/write tasks inside a ctypes callback function, but still inside an async task that is responsible for giving such read/write tasks:</p>
<pre><code>async def main(ble_address):
async with BleakClient(ble_address) as client:
def my_io_callback(...)
# await client.read_gatt_char(UUID) ???
# await client.write_gatt_char(UUID, ...) ???
my_c_function(to_c_function(my_io_callback))
asyncio.run(main(ble_address))
</code></pre>
<p>I can't move the whole async block inside the callback because it will be called several times and I need the device to be connected during the entire interaction.</p>
<p>What's the proper way of dealing with this situation? Answers I've seen so far haven't quite cover this particular case.</p>
|
<python><python-asyncio><ctypes><python-bleak>
|
2024-12-05 14:28:51
| 2
| 491
|
Caian
|
79,255,080
| 9,753,863
|
Can't load C++ unit test module with python 3.10 on windows (using boost 1.75.0)
|
<p>I have a challenging tricky issue managing the unit tests of my C++ project in Python.</p>
<p><strong>The project</strong></p>
<p>The project is a big one, strongly relying on boost, with several dlls/so files and the funcitonnalities are exported to python. So I can't copy the code and it's hard to extract a minimal sample from all of that. Moreover I'm pretty sure this is more a configuration problem than a code issue. I'll give a sample of code though, concerning the critical part.</p>
<p><strong>The problem</strong></p>
<p>I have an unit test routine in C++ that I can use in python. It works well for both python 3.8 and 3.9. It also works with python 3.10 on Linux, but I get the following message on Windows:</p>
<pre><code><class 'ImportError'> : DLL load failed while importing UnitTestTools_4_1_0_0_beta_x64: The specified module was not found.
</code></pre>
<p>I checked the dependancies and they all exist and their locations are all listed in sys.path. I use <code>sys.path.insert(0, dirPath)</code> to add the directory containing the dll/so and pyd files. I also checked with <a href="https://github.com/lucasg/Dependencies" rel="nofollow noreferrer">DependenciesGui.exe</a></p>
<p>Moreover, I can use all the functionalities of my project, i.e. I can import other files and call the functions they contain. Only the unit test on windows with Python 3.10 fails to import the related module.</p>
<p><strong>The configuration</strong></p>
<p><strong>C++ side</strong>, I'm using boost 1.75.0 and build the project with visual studio 2019 on Windows and gcc 9.4 on Linux.
The only libraries I need to interface C++ and python are boost_python and boost_numpy.</p>
<p>I built boost dlls from sources (with some patches to manage newest python requirements), with dedicated Anaconda virtual environments for each version of python I need to support (3.8, 3.9 and 3.10).</p>
<p><strong>Python side</strong>, when I use my libraries in python, I use the default Anaconda environment configured during the installation. I won't list all the modules installed (I can provide more if necessary) but here a comparison of python and numpy versions:</p>
<pre><code> | Windows | Linux
---------------+---------|-------
Python version | 3.9.13 | 3.9.7
Numpy version | 1.24.1 | 1.26.3
---------------+---------|-------
Python version | 3.10.9 | 3.10.9
Numpy version | 1.23.5 | 1.23.5
</code></pre>
<p>I also tried with several version of numpy for Windows/Python 3.10 (1.24.1 for instance).</p>
<p>I found that there was a change in the ABI between 3.9 and 3.10 but I don't think this is the reason of my problem since it works on linux.</p>
<p><strong>Solving attempts</strong></p>
<p>As a summary of all my attempts to solve the problem, here is what I did:</p>
<ul>
<li>check that my pyd's dependancies exist</li>
<li>check the content of sys.path</li>
<li>build boost 1.75.0 with the same numpy version for all python version</li>
<li>configure a python environment with several versions of numpy</li>
</ul>
<p><strong>Some sample of code</strong></p>
<p>Just in case it helps, I copy a part of the code concerning the unit test module export</p>
<pre><code>pythonMajorVersion = sys.version_info.major
pythonMinorVersion = sys.version_info.minor
pythonVersionSubDir = "Python" + str(pythonMajorVersion) + "_" + str(pythonMinorVersion)
# add of binary directory to path
binaryDir = getBinDir() # Directory where the dlls are stored
if not binaryDir in sys.path:
sys.path.insert(1, binaryDir)
if os.name == "nt":
os.environ["PATH"] = os.environ["PATH"] + ";" + binaryDir
# The python pyds are located in a specific subdirectory called "Python3_8", "Python3_9" or "Python3_10" depending on the python I'm using
pythonBinaryDir = os.path.join(binaryDir, pythonVersionSubDir)
if not pythonBinaryDir in sys.path:
sys.path.insert(1, pythonBinaryDir)
if os.name == "nt":
os.environ["PATH"] = os.environ["PATH"] + ";" + pythonBinaryDir
# Generate the full module name, for instance 'PyUnitTestTools_1_0_0_0_x64', 1.0.0.0 being the version number
unitTestModuleName = getLibName("PyUnitTestTools")
try:
self.unittest = import_module(unitTestModuleName)
except:
print("Failed to initialize unit test framework\n" + str(sys.exc_info()[0]) + " : " + str(sys.exc_info()[1]))
</code></pre>
<p>I'm not sure adding the libraries directories to both <code>sys.path</code> and <code>os.environ["PATH"]</code> is necessary but I tried just in case it could help.</p>
<p>Can't I load the unit test module in this specific configuration (Windows, Python 3.10)?</p>
|
<python><c++><python-3.x><unit-testing><boost>
|
2024-12-05 14:24:02
| 1
| 862
|
ractiv
|
79,255,009
| 6,266,810
|
Memory problem when serializing zipped files in PySpark on Databricks
|
<p>I want to unzip many files in 7z format in PySpark on Databricks.
The zip files contain several thousand tiny files.</p>
<p>I read the files using binary File and I use a UDF to unzip the files:</p>
<pre class="lang-py prettyprint-override"><code>schema = ArrayType(StringType())
@F.udf(returnType=schema)
def unzip_content_udf(content):
extracted_file_contents= []
with py7zr.SevenZipFile(io.BytesIO(content), mode='r') as z:
for name, bytes_stream in z.readall().items():
if name.startswith("v1") or name.startswith("v2"):
unzipped_content = bytes_stream.read().decode(ENCODING)
extracted_file_contents.append(unzipped_content)
return extracted_file_contents
df = spark.read.format("binaryFile").load("/mnt/file_pattern*")
df = df.withColumn("unzipped_files", unzip_content_udf(F.col("content")))
df.write.mode("overwrite").parquet("/mnt/test_dump_unzipped")
</code></pre>
<p>This works well for smaller files, but if I specify one of the larger files (150 MB zipped, 4.5GB unzipped) the process dies and I get:</p>
<pre><code>Py4JJavaError: An error occurred while calling o1665.parquet.
ValueError: can not serialize object larger than 2G
</code></pre>
<p>I guess, it makes sense since the serialization limit is smaller than the unzipped file size.</p>
<p>Do you have any ideas on how to either increase the limit or chunk the size of the unzip operation below the limit?</p>
|
<python><pyspark><databricks><7zip><py7zr>
|
2024-12-05 14:04:06
| 1
| 996
|
WilliamEllisWebb
|
79,254,930
| 752,092
|
How to properly and efficiently read and filter a .las file to memory in chunks using laspy
|
<p>I'm working on a number of large .las files (>300M points) from LiDAR scans where I have to perform some calculations on a subset of the points in those files. Reading the files all at once is problematic due to the large memory use when reading all the data into memory, making the processing extremely slow. I'm not looking for a solution that writes the files to disk (e.g. chunk writing), but rather something that returns a LasData object with the same dimensions/point format as the original .las file, but with a subset of the points.</p>
<p>Often I only need a smaller amount of the total points, and the selection can be based on filtering based on dimension values (such as a range of intensity), clipping to a polygon area or simply thinning the point cloud. The size of the point cloud after filtering is not known beforehand, so I cannot preallocate an array of a known, final, size.</p>
<p>I've come up with two solutions that appear to produce the correct result (see <code>check_expected_points</code> and the expected results from <code>read_las_baseline</code>, set the variable <code>TEST_LASDATA = True</code> to run test), and are both more memory efficient and faster compared to reading all at once as in <code>read_las_baseline</code>. It's exemplified with decimating the point cloud, but the goal is to use this is cases with additional filtering steps and without being able to know the final point count. The function <code>read_las_chunks_filter_preallocate</code> performs best, but being new to .las data and a python novice in general I wonder if there are still better/faster ways to do this and if this is a proper way to handle .las data.</p>
<p>The example can be ran using <code>simple.las</code> from the laspy repository to check for correctness, but for performance testing one needs larger files. Run <code>generate_las</code> to create a larger file on disk.</p>
<pre class="lang-py prettyprint-override"><code>import timeit
import gc
import laspy # version 2.5.4
import numpy as np # version 2.0.2
LAS_FILE = r"large_dummy_file.las" # generated using generate_las_100M(), or use "simple.las" from https://github.com/laspy/laspy/tree/master/tests/data
DECIMATE_FACTOR=5
TEST_LASDATA = False # whether to run the tests comparing the data with check_expected_points(), requires extra runs and might lead to less accurate timing results
def generate_las(output_path='large_dummy_file.las', n_points=100_000_000):
""" Creates a test .las file with 100M points (about 3.5GB disk space required) """
# Taken from https://laspy.readthedocs.io/en/latest/examples.html#creating-a-new-lasdata
SHAPE = int(n_points**0.5)
# 0. Creating some dummy data
my_data_xx, my_data_yy = np.meshgrid(np.linspace(-20, 20, SHAPE), np.linspace(-20, 20, SHAPE))
my_data_zz = my_data_xx ** 2 + 0.25 * my_data_yy ** 2
my_data = np.hstack((my_data_xx.reshape((-1, 1)), my_data_yy.reshape((-1, 1)), my_data_zz.reshape((-1, 1))))
# 1. Create a new header
header = laspy.LasHeader(point_format=3, version="1.2")
header.add_extra_dim(laspy.ExtraBytesParams(name="random", type=np.int32))
header.offsets = np.min(my_data, axis=0)
header.scales = np.array([0.1, 0.1, 0.1])
# 2. Create a Las
las = laspy.LasData(header)
las.x = my_data[:, 0]
las.y = my_data[:, 1]
las.z = my_data[:, 2]
las.random = np.random.randint(-1503, 6546, len(las.points), np.int32)
las.write(output_path)
def check_expected_points(true_las: laspy.LasData, las_to_test: laspy.LasData):
""" Compares two laspy.LasData objects. Based on https://github.com/laspy/laspy/blob/master/tests/test_chunk_read_write.py """
assert true_las.header.point_count == las_to_test.header.point_count
assert true_las.header.point_format == las_to_test.header.point_format
np.testing.assert_array_equal(true_las.header.offsets, las_to_test.header.offsets)
np.testing.assert_array_equal(true_las.header.scales, las_to_test.header.scales)
expected_points = true_las.points
to_test_points = las_to_test.points
for dim_name in to_test_points.array.dtype.names:
assert np.allclose(
expected_points[dim_name], to_test_points[dim_name]
), f"{dim_name} not equal"
def read_las_baseline(las_file, decimate):
""" Read and decimate without reading in chunks"""
las = laspy.read(las_file)
las.points = las.points[::decimate]
return las
def read_las_chunks_filter_preallocate(las_file, decimate=None):
""" This function uses pre-allocated PackedPointRecord that of the full size, then slices is to the reduced
size afterwards """
CHUNK_SIZE = 1_000_000
with laspy.open(las_file) as f:
point_record = laspy.PackedPointRecord.zeros(f.header.point_count, f.header.point_format)
new_header = f.header
current_insert_index = 0
for points in f.chunk_iterator(CHUNK_SIZE):
# can manipulate points here..
# e.g. filter on angle, intensity, in polygon etc
# the size of points after filtering is not known beforehand in final application
if decimate:
points = points[::decimate]
chunk_arr_len = points.array.shape[0]
point_record.array[current_insert_index:current_insert_index+chunk_arr_len] = points.array
current_insert_index += chunk_arr_len
# slice to the actual size of inserted data, and update the header
point_record = point_record[:current_insert_index]
new_header.point_count=len(point_record)
output_las = laspy.LasData(header=new_header, points=point_record)
return output_las
def read_las_chunks_filter_list_concat(las_file, decimate=None):
""" This function stores the filtered points.array in a list, then in the end concatenates the points
and uses these to create a new LasData object.
"""
CHUNK_SIZE = 1_000_000
with laspy.open(las_file) as f:
filtered_points = []
final_point_record = laspy.PackedPointRecord.empty(f.header.point_format)
for points in f.chunk_iterator(CHUNK_SIZE):
# can manipulate points here..
# e.g. filter on angle, intensity, in polygon etc
# the size of points after filtering is not known beforehand in final application
if decimate:
points = points[::decimate]
filtered_points.append(points.array)
concatenated_points = np.concatenate(filtered_points)
final_point_record.array = concatenated_points
output_las = laspy.LasData(header=f.header)
output_las.points = final_point_record # setting points here instead of LasData call will set correct point_count
return output_las
def main():
methods = [
('read_las_baseline', read_las_baseline),
('read_las_chunks_filter_preallocate', read_las_chunks_filter_preallocate),
('read_las_chunks_filter_list_concat', read_las_chunks_filter_list_concat),
]
if TEST_LASDATA:
expected = read_las_baseline(LAS_FILE, DECIMATE_FACTOR)
for name, method in methods:
if TEST_LASDATA:
result = method(LAS_FILE, decimate=DECIMATE_FACTOR)
check_expected_points(expected, result)
del result
gc.collect()
# timing a single run
t = timeit.Timer(lambda: method(LAS_FILE, DECIMATE_FACTOR))
print(f"{name}: {t.timeit(number=1):.1f} seconds")
if __name__ == '__main__':
main()
</code></pre>
<p>The timing difference is large when hitting memory limits, if there is enough memory there are no large differences in timing. However, the memory use is quite different (not shown here).</p>
<p>Case when maxing out available RAM, large difference in processing time:</p>
<pre><code>read_las_baseline: 38.3 seconds
read_las_chunks_filter_preallocate: 4.1 seconds
read_las_chunks_filter_list_concat: 5.9 seconds
</code></pre>
<p>Sufficient RAM available, less of a difference:</p>
<pre><code>read_las_baseline: 6.0 seconds
read_las_chunks_filter_preallocate: 2.9 seconds
read_las_chunks_filter_list_concat: 3.4 seconds
</code></pre>
<p>I've not formally benchmarked the memory use, but simply observed it as the code runs. The <code>read_las_chunks_filter_preallocate</code> function is far superior when it comes to memory use.</p>
<p><code>read_las_chunks_filter_preallocate</code>: The function which preallocates an array of the input size, and then slices to the size of the filtered points. This is the fastest method, probably mainly due to the very low memory use which seems to be proportional to the amount of points that are kept. <strong>I wonder though, if this is a proper way to do this with laspy, and if there is a more efficient way to preallocate the data container? It feels a bit hacky, and I wonder if there are pitfalls I'm not seeing when dealing with .las data.</strong></p>
<p><code>read_las_chunks_filter_list_concat</code>: Stores the filtered points.array in a list when doing chunk reading, and in the end of concatenates the arrays in the list. This uses much more memory compared to read_las_chunks_filter_preallocate, and is a little slower for large point clouds. <strong>Is there a way to make this more memory efficient, or a better way to concatenate the arrays?</strong></p>
|
<python><point-clouds><lidar><laspy>
|
2024-12-05 13:44:56
| 0
| 326
|
rhkarls
|
79,254,763
| 1,205,158
|
How can I access an XML node's attributes in Python that have a namespace?
|
<p>I've got a question about parsing a rather complicated XML document in Python with <code>xml.etree.ElementTree</code>. The XML is <code>scap-security-guide-0.1.75/ssg-ubuntu2204-ds.xml</code> from <code>https://github.com/ComplianceAsCode/content/releases/download/v0.1.75/scap-security-guide-0.1.75.zip</code> and the root tag and its attributes are:</p>
<pre><code><ds:data-stream-collection xmlns:cat="urn:oasis:names:tc:entity:xmlns:xml:catalog" xmlns:cpe-dict="http://cpe.mitre.org/dictionary/2.0" xmlns:cpe-lang="http://cpe.mitre.org/language/2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:ds="http://scap.nist.gov/schema/scap/source/1.2" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:ind="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" xmlns:linux="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux" xmlns:ocil="http://scap.nist.gov/schema/ocil/2.0" xmlns:oval="http://oval.mitre.org/XMLSchema/oval-common-5" xmlns:oval-def="http://oval.mitre.org/XMLSchema/oval-definitions-5" xmlns:unix="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix" xmlns:xccdf-1.2="http://checklists.nist.gov/xccdf/1.2" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" id="scap_org.open-scap_collection_from_xccdf_ssg-ubuntu2204-xccdf.xml" schematron-version="1.3">
</code></pre>
<p>When I load the document with <code>ET.parse(...).getroot()</code> and look at the root element, I can only see the attributes without a namespace:</p>
<pre><code>id='scap_org.open-scap_collection_from_xccdf_ssg-ubuntu2204-xccdf.xml'
schematron-version='1.3'
</code></pre>
<p>I don't really need the other attributes but I'm curious why I don't get them all. What if I needed one of the other attributes? How would I access them?</p>
|
<python><xml><namespaces><elementtree>
|
2024-12-05 12:53:35
| 1
| 339
|
John
|
79,254,587
| 12,466,687
|
How to get a line marking for each tick on x axis in plotnine chart in python?
|
<p>I am trying to replicate some bits from a reference chart and create marking for every tick on x axis in plotnine plot.</p>
<p>Reference Chart:
<a href="https://i.sstatic.net/25aoEmM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/25aoEmM6.png" alt="enter image description here" /></a></p>
<p>I am not able to figure out how to get the grey tick marking for each country (in ref. chart circled with red) with respect to x axis ticks. (Ref of image & code of this plot <a href="https://plotnine.org/reference/geom_segment.html" rel="nofollow noreferrer">plot link here</a>)</p>
<p>I am looking to create the grey xtick markings for 'child_category' in the Code that I have tried below:</p>
<pre><code>import pandas as pd
import plotnine as p9
from plotnine import *
# Create a sample dataset
new_data = {
'date': pd.date_range('2022-01-01', periods=8, freq="ME"),
'parent_category': ['Electronics', 'Electronics', 'Fashion', 'Fashion', 'Home Goods', 'Electronics', 'Fashion','Electronics'],
'child_category': ['Smartphones', 'Laptops', 'Shirts', 'Pants', 'Kitchenware','Laptops', 'Shirts', 'Smartphones']
}
# Create the DataFrame
new_data = pd.DataFrame(new_data)
</code></pre>
<pre><code>(ggplot(new_data
,aes(x="date", y="child_category")) +
# geom_line(size = 8, color = "pink") + # #edece3
geom_point(size = 6, color = "purple",fill = "red",stroke=1,alpha = 0.5) +
facet_wrap("parent_category", scales="free_y",ncol=1) + #
scale_x_datetime(date_breaks="3 month") +
theme_538() +
theme(axis_text_x=element_text(angle=45, hjust=1),
panel_grid_major = element_blank(),
figure_size=(8, 6)
)
)
</code></pre>
<p><a href="https://i.sstatic.net/e8O6zuqv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8O6zuqv.png" alt="enter image description here" /></a></p>
|
<python><ggplot2><plotnine>
|
2024-12-05 11:59:14
| 1
| 2,357
|
ViSa
|
79,254,467
| 1,307,905
|
how to insert from __future__ import annotations in a future proof way
|
<p>The <a href="https://docs.python.org/3.14/whatsnew/3.14.html#from-future-import-annotations" rel="nofollow noreferrer">Python 3.14 documentation</a> states that the</p>
<pre><code>from __future__ import annotations
</code></pre>
<p>directive will be considered deprecated with the release of 3.14 and expected to be removed at some future date.</p>
<p>It is going to be a while (not before 2029), but I would like to know if it is possible to write Python files that are future proof.</p>
<p>I cannot do something like:</p>
<pre><code>import sys
if (3, 7) <= sys.version_info() < (3, 14):
from __future__ import annotations
</code></pre>
<p>as <code>from __future__ import</code>s must occur at the beginning of a file.</p>
<p>As I have had the need to support some of my libraries, well beyond the end-of-life date of specific Python versions, I foresee that this is going to be a problem. I can probably strip these directives during the build process, but this would force me to create non-universal wheels, complicating the build process.</p>
<p>Is there a simple way to insert/use this directive, in Python files, in a way that is future proof?</p>
|
<python><python-typing>
|
2024-12-05 11:25:10
| 1
| 78,248
|
Anthon
|
79,254,344
| 13,339,621
|
Simple example of influxdb-client-python
|
<p>I am running the script <a href="https://github.com/influxdata/influxdb-client-python/blob/ab16384b7e9931da8b74f9a19af89939c0a3b673/examples/example.py" rel="nofollow noreferrer">example.py</a> from the <code>influxdb-client-python</code> code base. It runs and writes a data point to a local influxDB instance, for tests and learning purposes only.
From what I understand, this script is supposed to write a single point in 2 different ways: using <code>line_protocol</code> (a string) and the <code>Point</code> data structure.
I have no problem with the <code>Point</code> data structure; however, the <code>line_protocol</code> writes a point somewhere in February 1970. I am afraid this is a problem of <a href="https://github.com/influxdata/influxdb-client-python/blob/ab16384b7e9931da8b74f9a19af89939c0a3b673/examples/example.py#L11" rel="nofollow noreferrer">WritePrecision</a>. Maybe it should not be specified when using a <code>Datetime</code> object that is converted to a <code>line_protocol</code>?
Can someone please confirm this behaviour (if this is not an error on my side) and also that this is expected ? Many thanks.</p>
|
<python><influxdb><influxdb-python><influx-line-protocol>
|
2024-12-05 10:50:56
| 2
| 1,549
|
matleg
|
79,254,205
| 21,185,825
|
Python - Flask - cannot use flask run - The system cannot find the file specified
|
<p>After having installed my environment, I just try to run my flask app</p>
<p>It works fine using python</p>
<pre><code>.\venv\Scripts\python.exe .\src\main.py --debug
</code></pre>
<p>But as I try to run this</p>
<pre><code>.\venv\Scripts\activate.bat
flask.exe app .\src\main.py
</code></pre>
<p>I get</p>
<pre><code>Fatal error in launcher: Unable to create process using '"******\venv\scripts\python.exe" "******\venv\Scripts\flask.exe" app src\main.py': The system cannot find the file specified.
</code></pre>
<p>I tried</p>
<pre><code>.\venv\Scripts\flask.exe app .src\main.py
</code></pre>
<p>I get this</p>
<pre><code>Fatal error in launcher: Unable to create process using '"*****\venv\scripts\python.exe" "****\venv\Scripts\flask.exe" app .\src\main.py': The system cannot find the file specified.
</code></pre>
<ul>
<li>flask.exe is not in the path</li>
<li>I already checked other related answers on this subject, yet nothing works</li>
</ul>
<p>Is there something I am missing ?</p>
<p>Thanks for your help</p>
|
<python><flask>
|
2024-12-05 10:09:46
| 0
| 511
|
pf12345678910
|
79,253,942
| 12,466,687
|
How to do facet_wrap/facet_grid properly in plotnine showing relevant subcategpries only in python?
|
<p>I have data with two columns with categories and I am trying to create a split/facets based on parent category that will contain sub categories. Below is something that I have tried:</p>
<pre><code>import pandas as pd
import plotnine as p9
from plotnine import *
# Create a sample dataset
new_data = {
'date': pd.date_range('2022-01-01', periods=8, freq="ME"),
'parent_category': ['Electronics', 'Electronics', 'Fashion', 'Fashion', 'Home Goods', 'Electronics', 'Fashion','Electronics'],
'child_category': ['Smartphones', 'Laptops', 'Shirts', 'Pants', 'Kitchenware','Laptops', 'Shirts', 'Smartphones']
}
# Create the DataFrame
new_data = pd.DataFrame(new_data)
</code></pre>
<pre><code>(ggplot(new_data
,aes(x="date", y="child_category")) +
geom_line(size = 8, color = "pink") + # #edece3
geom_point(size = 6, color = "grey") +
facet_wrap("parent_category",ncol=1) + #
theme_538() +
theme(axis_text_x=element_text(angle=45, hjust=1),
panel_grid_major = element_blank(),
figure_size=(8, 6)
)
)
</code></pre>
<p><a href="https://i.sstatic.net/Aqcqsh8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Aqcqsh8J.png" alt="enter image description here" /></a></p>
<p><strong>Expected output:</strong></p>
<p>Electronics should show Smartphone and Laptop only and not the fashion items. In the above plot all the child categories have got repeated in all facets but I would like to have only relevant child_categories in parent_Category facets</p>
<p>If it is not possible using facets then how can it be done ... subplots or any other method ?</p>
<p>Appreciate any suggestions !!</p>
|
<python><ggplot2><plotnine>
|
2024-12-05 08:51:22
| 1
| 2,357
|
ViSa
|
79,253,846
| 11,748,924
|
Could not locate class 'SinePositionEncoding'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`
|
<p>I trained transformer with <code>SinePositionEncoding</code> and <code>TransformerEncoder</code>:</p>
<p>Instead of using from <code>keras-hub</code> from <code>pip</code> due to error in Windows due to numpy compatibility, I directly copy the source code like below:</p>
<pre><code>#@title Define SinePositionEncoding and TransformerEncoder
import keras
from keras.api import ops
from absl import logging
from keras.api import layers
from keras.api.models import Model
def clone_initializer(initializer):
# If we get a string or dict, just return as we cannot and should not clone.
if not isinstance(initializer, keras.initializers.Initializer):
return initializer
config = initializer.get_config()
return initializer.__class__.from_config(config)
def _check_masks_shapes(inputs, padding_mask, attention_mask):
mask = padding_mask
if hasattr(inputs, "_keras_mask") and mask is None:
mask = inputs._keras_mask
if mask is not None:
if len(mask.shape) != 2:
raise ValueError(
"`padding_mask` should have shape "
"(batch_size, target_length). "
f"Received shape `{mask.shape}`."
)
if attention_mask is not None:
if len(attention_mask.shape) != 3:
raise ValueError(
"`attention_mask` should have shape "
"(batch_size, target_length, source_length). "
f"Received shape `{mask.shape}`."
)
def merge_padding_and_attention_mask(
inputs,
padding_mask,
attention_mask,
):
_check_masks_shapes(inputs, padding_mask, attention_mask)
mask = padding_mask
if hasattr(inputs, "_keras_mask"):
if mask is None:
# If no padding mask is explicitly provided, we look for padding
# mask from the input data.
mask = inputs._keras_mask
else:
logging.warning(
"You are explicitly setting `padding_mask` while the `inputs` "
"have built-in mask, so the built-in mask is ignored."
)
if mask is not None:
# Add an axis for broadcasting, the attention mask should be 2D
# (not including the batch axis).
mask = ops.cast(ops.expand_dims(mask, axis=1), "int32")
if attention_mask is not None:
attention_mask = ops.cast(attention_mask, "int32")
if mask is None:
return attention_mask
else:
return ops.minimum(mask, attention_mask)
return mask
class SinePositionEncoding(keras.layers.Layer):
def __init__(
self,
max_wavelength=10000,
**kwargs,
):
super().__init__(**kwargs)
self.max_wavelength = max_wavelength
self.built = True
def call(self, inputs, start_index=0):
shape = ops.shape(inputs)
seq_length = shape[-2]
hidden_size = shape[-1]
positions = ops.arange(seq_length)
positions = ops.cast(positions + start_index, self.compute_dtype)
min_freq = ops.cast(1 / self.max_wavelength, dtype=self.compute_dtype)
timescales = ops.power(
min_freq,
ops.cast(2 * (ops.arange(hidden_size) // 2), self.compute_dtype)
/ ops.cast(hidden_size, self.compute_dtype),
)
angles = ops.expand_dims(positions, 1) * ops.expand_dims(timescales, 0)
# even indices are sine, odd are cosine
cos_mask = ops.cast(ops.arange(hidden_size) % 2, self.compute_dtype)
sin_mask = 1 - cos_mask
# embedding shape is [seq_length, hidden_size]
positional_encodings = (
ops.sin(angles) * sin_mask + ops.cos(angles) * cos_mask
)
return ops.broadcast_to(positional_encodings, shape)
def get_config(self):
config = super().get_config()
config.update(
{
"max_wavelength": self.max_wavelength,
}
)
return config
def compute_output_shape(self, input_shape):
return input_shape
class TransformerEncoder(keras.layers.Layer):
def __init__(
self,
intermediate_dim,
num_heads,
dropout=0,
activation="relu",
layer_norm_epsilon=1e-05,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
normalize_first=False,
**kwargs,
):
super().__init__(**kwargs)
self.intermediate_dim = intermediate_dim
self.num_heads = num_heads
self.dropout = dropout
self.activation = keras.activations.get(activation)
self.layer_norm_epsilon = layer_norm_epsilon
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.normalize_first = normalize_first
self.supports_masking = True
def build(self, inputs_shape):
# Infer the dimension of our hidden feature size from the build shape.
hidden_dim = inputs_shape[-1]
# Attention head size is `hidden_dim` over the number of heads.
key_dim = int(hidden_dim // self.num_heads)
if key_dim == 0:
raise ValueError(
"Attention `key_dim` computed cannot be zero. "
f"The `hidden_dim` value of {hidden_dim} has to be equal to "
f"or greater than `num_heads` value of {self.num_heads}."
)
# Self attention layers.
self._self_attention_layer = keras.layers.MultiHeadAttention(
num_heads=self.num_heads,
key_dim=key_dim,
dropout=self.dropout,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="self_attention_layer",
)
if hasattr(self._self_attention_layer, "_build_from_signature"):
self._self_attention_layer._build_from_signature(
query=inputs_shape,
value=inputs_shape,
)
else:
self._self_attention_layer.build(
query_shape=inputs_shape,
value_shape=inputs_shape,
)
self._self_attention_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="self_attention_layer_norm",
)
self._self_attention_layer_norm.build(inputs_shape)
self._self_attention_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="self_attention_dropout",
)
# Feedforward layers.
self._feedforward_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="feedforward_layer_norm",
)
self._feedforward_layer_norm.build(inputs_shape)
self._feedforward_intermediate_dense = keras.layers.Dense(
self.intermediate_dim,
activation=self.activation,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="feedforward_intermediate_dense",
)
self._feedforward_intermediate_dense.build(inputs_shape)
self._feedforward_output_dense = keras.layers.Dense(
hidden_dim,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="feedforward_output_dense",
)
intermediate_shape = list(inputs_shape)
intermediate_shape[-1] = self.intermediate_dim
self._feedforward_output_dense.build(tuple(intermediate_shape))
self._feedforward_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="feedforward_dropout",
)
self.built = True
def call(
self,
inputs,
padding_mask=None,
attention_mask=None,
training=None,
return_attention_scores=False,
):
x = inputs # Intermediate result.
# Compute self attention mask.
self_attention_mask = merge_padding_and_attention_mask(
inputs, padding_mask, attention_mask
)
# Self attention block.
residual = x
if self.normalize_first:
x = self._self_attention_layer_norm(x)
if return_attention_scores:
x, attention_scores = self._self_attention_layer(
query=x,
value=x,
attention_mask=self_attention_mask,
return_attention_scores=return_attention_scores,
training=training,
)
return x, attention_scores
else:
x = self._self_attention_layer(
query=x,
value=x,
attention_mask=self_attention_mask,
training=training,
)
x = self._self_attention_dropout(x, training=training)
x = x + residual
if not self.normalize_first:
x = self._self_attention_layer_norm(x)
# Feedforward block.
residual = x
if self.normalize_first:
x = self._feedforward_layer_norm(x)
x = self._feedforward_intermediate_dense(x)
x = self._feedforward_output_dense(x)
x = self._feedforward_dropout(x, training=training)
x = x + residual
if not self.normalize_first:
x = self._feedforward_layer_norm(x)
if return_attention_scores:
return x, attention_scores
return x
def get_config(self):
config = super().get_config()
config.update(
{
"intermediate_dim": self.intermediate_dim,
"num_heads": self.num_heads,
"dropout": self.dropout,
"activation": keras.activations.serialize(self.activation),
"layer_norm_epsilon": self.layer_norm_epsilon,
"kernel_initializer": keras.initializers.serialize(
self.kernel_initializer
),
"bias_initializer": keras.initializers.serialize(
self.bias_initializer
),
"normalize_first": self.normalize_first,
}
)
return config
def compute_output_shape(self, inputs_shape):
return inputs_shape
</code></pre>
<p>Then I trained them using this architecture:</p>
<pre><code>def get_model():
encoder_inputs = layers.Input(shape=(240,), name="encoder_inputs", dtype='uint8')
# embeddings
token_embeddings = layers.Embedding(input_dim=255, output_dim=128)(encoder_inputs) # Input: Token Size, Output: Embed Dim
position_encodings = SinePositionEncoding()(token_embeddings)
embeddings = token_embeddings + position_encodings
# transformer encoder
encoder_outputs = TransformerEncoder(intermediate_dim=128*2, num_heads=4, dropout=0.01)(inputs=embeddings)
# Output layer for vocabulary size of 4
output_predictions = layers.Dense(units=4, activation=None)(encoder_outputs)
# Final model
model = Model(encoder_inputs, output_predictions, name="transformer_encoder")
return model
</code></pre>
<p>I saved the model with <code>model.save('model_best.keras')</code>.</p>
<p>Trying to load model and corresponded cell from source code using <code>keras.saving.load_model('model_best.keras')</code>, returning error:</p>
<pre><code> 718 instance = cls.from_config(inner_config)
719 except TypeError as e:
--> 720 raise TypeError(
721 f"{cls} could not be deserialized properly. Please"
722 " ensure that components that are Python object"
723 " instances (layers, models, etc.) returned by"
724 " `get_config()` are explicitly deserialized in the"
725 " model's `from_config()` method."
726 f"\n\nconfig={config}.\n\nException encountered: {e}"
727 )
728 build_config = config.get("build_config", None)
729 if build_config and not instance.built:
TypeError: <class 'keras.src.models.functional.Functional'> could not be deserialized properly. Please ensure that components that are Python object instances (layers, models, etc.) returned by `get_config()` are explicitly deserialized in the model's `from_config()` method.
config={'module': 'keras.src.models.functional', 'class_name': 'Functional', 'config': {}, 'registered_name': 'Functional', 'build_config': {'input_shape': None}, 'compile_config': {'optimizer': {'module': 'keras.optimizers', 'class_name': 'Adam', 'config': {'name': 'adam', 'learning_rate': 0.0005000000237487257, 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'loss_scale_factor': None, 'gradient_accumulation_steps': None, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}, 'registered_name': None}, 'loss': {'module': 'keras.losses', 'class_name': 'SparseCategoricalCrossentropy', 'config': {'name': 'sparse_categorical_crossentropy', 'reduction': 'sum_over_batch_size', 'from_logits': True, 'ignore_class': None}, 'registered_name': None}, 'loss_weights': None, 'metrics': ['accuracy'], 'weighted_metrics': None, 'run_eagerly': False, 'steps_per_execution': 1, 'jit_compile': True}}.
Exception encountered: Could not locate class 'SinePositionEncoding'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`. Full object config: {'module': None, 'class_name': 'SinePositionEncoding', 'config': {'name': 'sine_position_encoding', 'trainable': True, 'dtype': {'module': 'keras', 'class_name': 'DTypePolicy', 'config': {'name': 'float32'}, 'registered_name': None, 'shared_object_id': 133241101393904}, 'max_wavelength': 10000}, 'registered_name': 'SinePositionEncoding', 'name': 'sine_position_encoding', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 240, 128], 'dtype': 'float32', 'keras_history': ['embedding', 0, 0]}}], 'kwargs': {}}]}
</code></pre>
<p>How do I access my model that has been trained for 1 hour long?</p>
|
<python><keras>
|
2024-12-05 08:16:27
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
79,253,803
| 11,815,097
|
Pagination and continuation token in Azure Table Storage Python API
|
<p>Similar questions asked <a href="https://stackoverflow.com/questions/34680011/python-how-to-grab-azure-continuation-token">here</a> but the azure table package has updated significantly since then and that solution doesn't work any more.</p>
<p>I'm using Azure Data Table Package V12.6.0: <a href="https://pypi.org/project/azure-data-tables/12.6.0/" rel="nofollow noreferrer">https://pypi.org/project/azure-data-tables/12.6.0/</a></p>
<p>For my application that is a chat platform, I'm saving each conversation in a record at the table storage but when a user loads the chat page, I just want to grab 10 result per load with it's continuation token so I can load 10 more on demand.</p>
<p>I've seen several c# examples, and even a Java example, but not a single one for Python.</p>
<p>Here is my code, I use this</p>
<blockquote>
<p>results_per_page</p>
</blockquote>
<p>But I doesn't seem to be working as I get all 200 records, plus I don't know where to find continuation token.</p>
<pre><code>def get_chat_history(notebook_id: str, user_id: str):
"""
Get the chat history for a notebook for a user.
"""
table_service_client = connect_table_storage()
table_name = os.getenv("AZURE_CHAT_HISTORY_TABLE_NAME")
table_client = table_service_client.get_table_client(table_name=table_name)
entities = table_client.query_entities(query_filter=f"NotebookId eq '{notebook_id}' and PartitionKey eq '{user_id}'",
select=["Role", "Content"],
results_per_page=10
)
chat_history = []
for entity in entities:
chat_history.append({
"Timestamp": entity._metadata["timestamp"],
"Role": entity["Role"],
"Content": entity["Content"]
})
return chat_history
</code></pre>
|
<python><database><azure><azure-table-storage>
|
2024-12-05 07:58:24
| 1
| 315
|
Yasin Amini
|
79,253,560
| 2,709,454
|
Difficulty converting python web socket code to java
|
<p>I'm trying to convert some python web socket code to Java (code that talks to a consumer device:)</p>
<pre><code> ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
ws_connection_local = connect(f"wss://{target_ip}:{WEBSOCKET_SSL_PORT}", ssl_context=ssl_context)
</code></pre>
<p>my Java .. (actually groovy) replacement is this...</p>
<pre><code>CountDownLatch latch = new CountDownLatch(1);
String url = "wss://$host:$port"
WebSocket ws = HttpClient
.newHttpClient()
.newWebSocketBuilder()
.buildAsync(URI.create(url), new WebSocketClient(latch))
</code></pre>
<p>whereupon I get the error:</p>
<pre><code>java.util.concurrent.CompletionException: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
</code></pre>
<p>On the theory that the problem is something to do with SSL certificates or host verification, I followed this advice prior to opening the connection: <a href="https://stackoverflow.com/questions/60110848/how-to-disable-ssl-verification">How to disable SSL verification?</a></p>
<p>and set trust all certificates with the recommended no-op HttpsURLConnection.setDefaultSSLSocketFactory etc and HttpsURLConnection.setDefaultHostnameVerifier etc. However that didn't help, I get the same error. I also set debugger breakpoints in the TrustManager and HostnameVerifier to see if they were invoked, and they were not.</p>
|
<python><java><ssl><websocket><java-websocket>
|
2024-12-05 06:06:25
| 0
| 1,696
|
xpusostomos
|
79,253,283
| 11,626,909
|
Counting the Frequency of Some Words within some other Key Words in Text
|
<p>I have two sets of word lists - first one I called <code>search words</code> and the second one I called <code>key words</code>. My goal is to calculate the frequency of <code>search words</code> within 10 words of <code>key words</code>. For example, assume that the word - <strong>acquire</strong> - is in <code>key words</code> list, then I will look for the words in <code>search words</code> list within 10 words of <strong>acquire</strong>. Within 10 words mean, 10 words forward from key words and 10 words backward from key words, meaning that both forward and backward movement.</p>
<p>Below is my <code>search word</code> and <code>key word</code> lists -</p>
<pre><code>search_words = ['access control', 'Acronis', 'Adaware', 'AhnLab', 'AI Max Dev Labs', 'Alibaba Security',
'anti-adware', 'anti-keylogger', 'anti-malware', 'anti-ransomware', 'anti-rootkit', 'anti-spyware',
'anti-subversion', 'anti-tamper', 'anti-virus', 'Antiy', 'Avast', 'AVG', 'Avira', 'Baidu', 'Barracuda',
'Bitdefender', 'BullGuard', 'Carbon Black', 'Check Point', 'Cheetah Mobile', 'Cisco', 'Clario',
'Comodo', 'computer security', 'CrowdStrike', 'cryptography', 'Cybereason', 'cybersecurity',
'Cylance', 'data security', 'diagnostic program', 'Elastic', 'Emsisoft', 'encryption', 'Endgame', 'end point security',
'Ensilo', 'eScan', 'ESET', 'FireEye', 'firewall', 'Fortinet', 'F-Secure', 'G Data',
'Immunet', 'information security', 'Intego', 'intrusion detection system', 'K7', 'Kaspersky', 'log management software', 'Lookout',
'MacKeeper', 'Malwarebytes', 'McAfee', 'Microsoft', 'network security',
'NOD32', 'Norton', 'Palo Alto Networks', 'Panda Security', 'PC Matic', 'PocketBits',
'Qihoo', 'Quick Heal', 'records management', 'SafeDNS', 'Saint Security', 'sandbox', 'Sangfor',
'Securion', 'security event management', 'security information and event management',
'security information management', 'SentinelOne', 'Seqrite', 'Sophos',
'SparkCognition', 'steganography', 'Symantec', 'Tencent', 'Total AV', 'Total Defense',
'Trend Micro', 'Trustport', 'Vipre', 'Webroot', 'ZoneAlarm']
key_words = ['acquire', 'adopt', 'advance', 'agree', 'boost', 'capital resource',
'capitalize', 'change', 'commitment', 'complete', 'configure', 'design', 'develop', 'enhance', 'expand',
'expenditure', 'expense', 'implement', 'improve', 'increase', 'initiate', 'install',
'integrate', 'invest', 'lease',
'modernize', 'modify', 'move', 'obtain', 'plan', 'project', 'purchase', 'replace', 'spend',
'upgrade', 'use']
</code></pre>
<p>A small Example -</p>
<pre><code>text_dict = {
'ITEM7':["Last year, from AVG we have acquired Alibaba Security. This year we are in the process \
of adopting Symantec. We believe these technologies will improve our access control. \
Moreover, we also integrated data security diagnostic program.",
"We are planning to install end-point security, which will upgrade intrusion detection system."]
}
df = pd.DataFrame(text_dict)
</code></pre>
<p>My expected outcome is -</p>
<pre><code> ITEM7 Frequency
Last year, from AVG we have acquired Alibaba S... 6
We are planning to install end-point security,... 2
</code></pre>
<p>For the first row in <code>df</code>, we see the word <code>AVG</code> and <code>Alibaba Security</code> are from <code>search_words</code> list and around the word <strong>acquired</strong>, the base form of which - <strong>acquire</strong> - is in the <code>key_words</code> list. Similarly, <code>Symantec</code>, <code>Access Control</code>, <code>data security</code>, <code>diagnostic program</code> are from <code>search_words</code> list and these words are within 10 words of <code>adopting</code>, <code>improve</code>, <code>integrated</code> from <code>key_words</code> list. So, total search words are 6 (AVG+Alibaba Security+Symantec+Access Control+Data Security+Diagnostic Program). Therefore, in the <code>Frequency</code> column of <code>df</code>, the value is 6.</p>
<p>Please note that the words in <code>key_words</code> are in basically base form, so their variation (like adopted, adopting) should be counted as key words also.</p>
|
<python><pandas><nlp>
|
2024-12-05 03:05:06
| 1
| 401
|
Sharif
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.