| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,514,489
| 4,451,315
|
normalise a list column in DuckDB SQL
|
<p>Say I have:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'a':[1,1,2], 'b': [4,5,6]}).with_columns(c=pl.concat_list('a', 'b'))
print(df)
</code></pre>
<pre><code>shape: (3, 3)
┌─────┬─────┬───────────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ list[i64] │
╞═════╪═════╪═══════════╡
│ 1 ┆ 4 ┆ [1, 4] │
│ 1 ┆ 5 ┆ [1, 5] │
│ 2 ┆ 6 ┆ [2, 6] │
└─────┴─────┴───────────┘
</code></pre>
<p>I can normalise column 'c' by doing:</p>
<pre class="lang-py prettyprint-override"><code>In [15]: df.with_columns(c_normalised = pl.col('c') / pl.col('c').list.sum())
Out[15]:
shape: (3, 4)
┌─────┬─────┬───────────┬──────────────────────┐
│ a ┆ b ┆ c ┆ c_normalised │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ list[i64] ┆ list[f64] │
╞═════╪═════╪═══════════╪══════════════════════╡
│ 1 ┆ 4 ┆ [1, 4] ┆ [0.2, 0.8] │
│ 1 ┆ 5 ┆ [1, 5] ┆ [0.166667, 0.833333] │
│ 2 ┆ 6 ┆ [2, 6] ┆ [0.25, 0.75] │
└─────┴─────┴───────────┴──────────────────────┘
</code></pre>
<p>How can I do this in DuckDB? I've tried</p>
<pre class="lang-py prettyprint-override"><code>In [17]: duckdb.sql("""
...: from df
...: select c / list_sum(c)
...: """)
---------------------------------------------------------------------------
BinderException Traceback (most recent call last)
Cell In[17], line 1
----> 1 duckdb.sql("""
2 from df
3 select c / list_sum(c)
4 """)
BinderException: Binder Error: No function matches the given name and argument types '/(BIGINT[], HUGEINT)'. You might need to add explicit type casts.
Candidate functions:
/(FLOAT, FLOAT) -> FLOAT
/(DOUBLE, DOUBLE) -> DOUBLE
/(INTERVAL, BIGINT) -> INTERVAL
</code></pre>
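<p>For what it's worth, one direction I would try (untested here) is to move the division inside the list with <code>list_transform</code>, casting the sum to <code>DOUBLE</code> so <code>/</code> binds on scalars instead of on the list itself; column names follow the example above:</p>
<pre class="lang-py prettyprint-override"><code># A minimal sketch (untested): divide each element via list_transform,
# casting the list sum to DOUBLE so '/' operates on scalar values.
duckdb.sql("""
from df
select *, list_transform(c, x -> x / list_sum(c)::DOUBLE) as c_normalised
""")
</code></pre>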
|
<python><duckdb>
|
2025-03-17 11:50:27
| 1
| 11,062
|
ignoring_gravity
|
79,514,484
| 4,373,898
|
Why does Spark raise an IOException while running an aggregation on a streaming dataframe in Dataproc 2.2?
|
<p>I am trying to migrate a job that runs on Dataproc 2.1 images (Spark 3.3, Python 3.10) to Dataproc 2.2 images (Spark 3.5, Python 3.11).
However, I encounter an error on one of my queries. After further investigation, I was able to reproduce the issue with this minimal example:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
from pyspark.sql import functions as F
spark = SparkSession.Builder().getOrCreate()
df = (spark.readStream.format("rate")
.option("rowsPerSecond", 4)
.load()
).withWatermark(
"timestamp", "1 seconds"
).withColumn(
"window",
F.window("timestamp", "10 seconds"),
).groupBy(F.col("window")).agg(F.count(F.expr("*")))
df.writeStream.format("console").queryName("sql_console").start()
spark.streams.awaitAnyTermination(60)
_ = [q.stop() for q in spark.streams.active]
</code></pre>
<p>This raises the following exception:</p>
<pre class="lang-none prettyprint-override"><code>25/03/17 11:37:04 WARN TaskSetManager: Lost task 7.0 in stage 1.0 (TID 7) (cluster-test-alexis-<redacted>.internal executor 2): java.io.IOException: mkdir of file:/tmp/temporary-42a491af-ba8c-412b-b06e-fb420879b92f/state/0/7 failed
at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1414)
at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:185)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:219)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:818)
[...]
</code></pre>
<p>Removing the line <code>groupby(...).agg(...)</code> causes the exception to disappear.
From what I understand, Spark tries to create a temporary file to do the aggregation, but fails to do so (I don't know why: permission issue, insufficient space, etc.?).</p>
<p>What can I do to investigate further or work around this?</p>
<p>Kind regards.</p>
<hr />
<h2>Edit</h2>
<p>While scrolling the logs, I also stumbled upon this line, at the start of the job:</p>
<pre class="lang-none prettyprint-override"><code>25/03/17 11:36:31 WARN ResolveWriteToStream: Temporary checkpoint location created which is deleted normally when the query didn't fail: /tmp/temporary-42a491af-ba8c-412b-b06e-fb420879b92f. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
</code></pre>
<p>This states that the temporary checkpoint location has been created, so I suppose that my error does not come from a permission issue (or the initial creation would fail also).</p>
<hr />
<h2>Edit 2</h2>
<p>A very easy workaround is to explicitly define the checkpoint location:</p>
<pre class="lang-py prettyprint-override"><code>spark = SparkSession.Builder().config("spark.sql.streaming.checkpointLocation", "/tmp").getOrCreate()
</code></pre>
<p>By the way, I don't see any <code>temporary-...</code> folder in the HDFS <code>/tmp</code> folder, but only folders with the ID of the query.</p>
|
<python><apache-spark><pyspark><google-cloud-dataproc>
|
2025-03-17 11:48:10
| 0
| 3,169
|
AlexisBRENON
|
79,514,242
| 19,386,576
|
Why isn’t my cookie being set on a POST redirect in FastAPI with sqladmin custom authentication?
|
<p>I’m building an admin panel with FastAPI using sqladmin, and I’ve implemented a custom authentication backend by extending sqladmin’s AuthenticationBackend. In my login method, when a POST request is made to /admin/login, I validate the credentials, create a JWT token with the claim is_admin=True, and then set this token as a cookie named "admin_token". The response is a RedirectResponse (HTTP 303) that redirects the user to the admin index page.</p>
<p>However, while testing in Chrome and Safari, I found that the "admin_token" cookie is not being stored by the browser when returned from this POST redirect response. Interestingly, when I set the same cookie in a GET request (using a test endpoint), it gets stored correctly.</p>
<p>Here’s the relevant code:</p>
<h1>admin_auth_service.py:</h1>
<pre><code>import logging
import jwt
from fastapi import Request, status
from sqladmin.authentication import AuthenticationBackend
from starlette.responses import RedirectResponse, Response
from app.core.config import settings
from app.db.base import async_db_cntxman
from app.schemas.admin_schema import AdminLoginRequest
from app.services.admin_service import AdminService
logger = logging.getLogger("app")
class AdminAuthService(AuthenticationBackend):
async def login(self, request: Request) -> bool | Response:
if request.method != "POST":
return False # GET on /admin/login shows the login form
response = RedirectResponse(
url=request.url_for("admin:index"), status_code=status.HTTP_303_SEE_OTHER
)
form_data = dict(await request.form())
try:
async with async_db_cntxman() as db:
form_admin_req = AdminLoginRequest.model_validate(form_data)
data = await AdminService.login(db, form_admin_req)
response.set_cookie(
key="admin_token",
value=data.access_token,
httponly=True, # Protects against XSS
secure=False, # False for local development (HTTP)
samesite="lax", # Helps mitigate CSRF issues
max_age=3600, # 1 hour session duration
path="/", # Valid for the entire admin interface
)
logger.info("Login successful.")
return response
except Exception as e:
logger.error(f"Login error: {e!s}")
return False
async def logout(self, request: Request) -> bool | Response:
response = RedirectResponse(
url=request.url_for("admin:login"), status_code=status.HTTP_302_FOUND
)
response.delete_cookie("admin_token")
return response
async def authenticate(self, request: Request) -> Response | None:
token = request.cookies.get("admin_token")
if not token:
return RedirectResponse(
request.url_for("admin:login"), status_code=status.HTTP_302_FOUND
)
try:
payload = jwt.decode(
token,
settings.SECRET_KEY,
algorithms=[settings.ALGORITHM],
)
if not payload.get("is_admin"):
return RedirectResponse(
request.url_for("admin:login"), status_code=status.HTTP_302_FOUND
)
except jwt.ExpiredSignatureError:
logger.warning("Admin token expired.")
return RedirectResponse(
request.url_for("admin:login"), status_code=status.HTTP_302_FOUND
)
except jwt.PyJWTError as e:
logger.warning(f"JWT error in admin token: {e}")
return RedirectResponse(
request.url_for("admin:login"), status_code=status.HTTP_302_FOUND
)
async with async_db_cntxman() as db:
admin = await AdminService.is_admin(db, payload.get("sub"))
if not admin:
return RedirectResponse(
request.url_for("admin:login"), status_code=status.HTTP_302_FOUND
)
return None
</code></pre>
<h1>temp_router.py (Test endpoint that successfully sets a cookie):</h1>
<pre><code>from fastapi import APIRouter, Request, Response
from starlette.responses import RedirectResponse
from app.utils.response import success_response, error_response  # error_response assumed to live alongside success_response
router = APIRouter()
@router.get("/setcookie")
async def set_cookie(response: Response):
response.set_cookie(key="test_cookie", value="cookie_value")
response.set_cookie(
key="admin_token",
value="data.access_token",
httponly=True,
secure=False,
samesite="lax",
max_age=3600,
path="/",
)
response.headers["X-Debug"] = "Cookie-Test"
return success_response("Cookie set successfully")
@router.post("/postcookie")
async def post_cookie(request: Request):
response = RedirectResponse(url=request.url_for("redirect"), status_code=303)
response.set_cookie(
key="post_cookie",
value="postcookie",
httponly=True,
secure=False,
samesite="lax",
max_age=3600,
path="/",
)
return response
@router.get("/redirect", name="redirect")
async def redirect(request: Request):
cookie = request.cookies.get("post_cookie")
if cookie:
return success_response("Redirected successfully", data={"cookie": cookie})
else:
return error_response("No postcookie found", code=404, detail="Cookie not found")
</code></pre>
<p>The test endpoint (/setcookie) sets the "admin_token" cookie successfully, but in the login flow, the cookie isn’t stored, and subsequent requests redirect back to the login page due to missing authentication.</p>
<p>My questions are:
1. Why isn’t the "admin_token" cookie being stored when it is set in the POST redirect response?
2. Is this issue caused by browser security policies related to setting cookies on redirect responses, or is there something in my code that I should adjust?
3. What are the best practices for setting authentication cookies in FastAPI to ensure they are reliably stored across redirects?</p>
<p>Any help or insights would be greatly appreciated. Please let me know if you need further details.</p>
<p>I tried several approaches to have the “admin_token” cookie persist after a POST login request that results in a redirect. Specifically, I:
• Returned a RedirectResponse with status code 303 (See Other) and then with 302 (Found).
• Set the cookie using different SameSite values: “None”, “Lax”, and “Strict”.
• Tested setting the domain attribute (e.g., domain=“localhost”) and omitting it entirely.</p>
<p>In every case—regardless of the combination of SameSite (None, Lax, Strict) and whether the domain was set or not—the cookie was not stored by the browser when the POST request triggered the redirect. I expected the browser to accept and persist the cookie so that it would be included in subsequent requests, but it never happened in either Safari or Chrome.</p>
<h2>EDIT</h2>
<p>Below I add the httpie output.</p>
<p>Here it is for the temp router:</p>
<pre><code>$ http --session=./zzzz/cookie_jar POST http://localhost:8000/api/v1/temp/postcookie --follow -v
POST /api/v1/temp/postcookie HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Content-Length: 0
Host: localhost:8000
User-Agent: HTTPie/3.2.4
HTTP/1.1 303 See Other
content-length: 0
date: Mon, 17 Mar 2025 15:06:57 GMT
location: http://localhost:8000/api/v1/temp/redirect
referrer-policy: strict-origin
server: uvicorn
set-cookie: post_cookie=postcookie; HttpOnly; Max-Age=3600; Path=/; SameSite=lax
x-content-type-options: nosniff
x-frame-options: DENY
GET /api/v1/temp/redirect HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Cookie: post_cookie=postcookie
Host: localhost:8000
User-Agent: HTTPie/3.2.4
HTTP/1.1 200 OK
content-length: 134
content-type: application/json
date: Mon, 17 Mar 2025 15:06:57 GMT
referrer-policy: strict-origin
server: uvicorn
x-content-type-options: nosniff
x-frame-options: DENY
{
"data": {
"cookie": "postcookie"
},
"message": "Redirected successfully",
"status": "success",
"timestamp": "2025-03-17T15:06:57.812173+00:00"
}
</code></pre>
<p>Here it is for the sqladmin login:</p>
<pre><code>$ http --session=./zzzz/cookie_jar -f POST http://localhost:8000/admin/login username=admin password=Admin00! --follow -v --print=HhBb
POST /admin/login HTTP/1.1
Accept: application/json, */*;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Content-Length: 34
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Host: localhost:8000
User-Agent: HTTPie/3.2.4
username=admin&password=Admin00%21
HTTP/1.1 302 Found
content-length: 0
date: Mon, 17 Mar 2025 15:30:33 GMT
location: http://localhost:8000/admin/
referrer-policy: strict-origin
server: uvicorn
x-content-type-options: nosniff
x-frame-options: DENY
GET /admin/ HTTP/1.1
Accept: application/json, */*;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: localhost:8000
User-Agent: HTTPie/3.2.4
HTTP/1.1 302 Found
content-length: 0
date: Mon, 17 Mar 2025 15:30:33 GMT
location: http://localhost:8000/admin/login
referrer-policy: strict-origin
server: uvicorn
x-content-type-options: nosniff
x-frame-options: DENY
GET /admin/login HTTP/1.1
Accept: application/json, */*;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: localhost:8000
User-Agent: HTTPie/3.2.4
HTTP/1.1 200 OK
content-length: 2457
content-type: text/html; charset=utf-8
date: Mon, 17 Mar 2025 15:30:33 GMT
referrer-policy: strict-origin
server: uvicorn
x-content-type-options: nosniff
x-frame-options: DENY
<!DOCTYPE html>
<html lang="en">
...
</html>
</code></pre>
|
<python><fastapi><setcookie><response.redirect><http-status-code-303>
|
2025-03-17 10:02:09
| 0
| 608
|
federikowsky
|
79,513,794
| 3,813,371
|
How to print colored or bold text in Python Visual Studio 2022 CLI app
|
<p>I have tried <code>colorama</code>, <code>termcolor</code> and <code>printy</code>.<br />
When I run the application in debug mode, the text isn't formatted.</p>
<p>This wasn't the case when I built a C# console application in VS.</p>
<p>Here is the code using <code>printy</code>.</p>
<pre><code>from printy import printy
printy("Hello world", "rBU")
</code></pre>
<p>Output:</p>
<blockquote>
<p>[38;5;196;1;4mHello world[0m</p>
</blockquote>
<p><a href="https://i.sstatic.net/26W6WLzM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26W6WLzM.png" alt="enter image description here" /></a></p>
<p>I have tried a few no-package solutions from here, but no luck.
<a href="https://stackoverflow.com/questions/287871/how-do-i-print-colored-text-to-the-terminal?page=1&tab=modifieddesc#tab-top">How do I print colored text to the terminal?</a></p>
<p>Is there any VS setting to configure?</p>
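<p>For comparison, the pattern I would expect to need on Windows consoles (a sketch, assuming the VS debug console honours ANSI sequences once virtual-terminal processing is enabled; I have not verified this for Visual Studio 2022) is:</p>
<pre><code># Sketch: explicitly enable ANSI handling before printing colour codes.
# just_fix_windows_console() is available in colorama >= 0.4.6;
# on older versions colorama.init() serves the same purpose.
import colorama
colorama.just_fix_windows_console()
print("\033[1;31mHello world\033[0m")  # bold red, raw ANSI escape
</code></pre>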
|
<python><windows><visual-studio-2022><colorama><termcolor>
|
2025-03-17 06:42:23
| 2
| 2,345
|
sukesh
|
79,513,685
| 3,241,701
|
Is It Reasonable to Simplify Product Variant Design Using Notes Instead of Complex Relations?
|
<p>I'm building an application where product variants are intended to be handled as <strong>physical products already prepared and listed manually</strong>. Instead of using a conventional approach with complex relations between Product, Option, OptionValue, and SKUValue tables, I'm trying to simplify the design.</p>
<h3>💡 ERD Design:</h3>
<pre class="lang-none prettyprint-override"><code>+--------------+ +-----------------+
| Product | | ProductVariant |
+--------------+ +-----------------+
| id (PK) |<------>| id (PK) |
| name | | product_id (FK) |
| owner_id (FK)| | note |
| created_at | | stock |
| updated_at | | price |
+--------------+ +-----------------+
</code></pre>
<p>In the ProductVariant table, the note field is a simple text field where users can manually enter <strong>descriptions like "Size: XL, Color: Red"</strong>.</p>
<h3>🔍 Django Models Based on This Design:</h3>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from django.contrib.auth import get_user_model
User = get_user_model()
class Product(models.Model):
name = models.CharField(max_length=255)
owner = models.ForeignKey(User, on_delete=models.CASCADE, related_name='products')
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.name
class ProductVariant(models.Model):
product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='variants')
note = models.TextField() # Example: "Size: XL, Color: Red"
stock = models.PositiveIntegerField()
price = models.DecimalField(max_digits=10, decimal_places=2)
def __str__(self):
return f"{self.product.name} - {self.note}"
</code></pre>
<h3>🎯 Why I'm Doing This:</h3>
<ol>
<li>The application is designed to handle product variants that are often predefined and don't change dynamically.</li>
<li>Users will manually input variant descriptions based on the <strong>actual physical products they have</strong>.</li>
<li>The goal is to avoid overengineering by eliminating unnecessary tables and relationships.</li>
</ol>
<hr />
<h3>🤔 What I'm Concerned About:</h3>
<ul>
<li>I know that most applications use well-structured relational models for managing product variants.</li>
<li>However, implementing such complex structures seems overkill for my use case, and I'm worried it might create unnecessary complications.</li>
<li>I'm concerned this simplified approach could be considered <strong>"bad design"</strong> even if it suits my use case better.</li>
</ul>
<hr />
<h3>❓ Question:</h3>
<p>Is this simplified design using manual notes for product variants acceptable in scenarios where variants are predefined and manually recorded? What are the potential pitfalls I should be aware of when using this design compared to a more "standard" relational model?</p>
|
<python><django><database><backend><product-variations>
|
2025-03-17 05:22:47
| 2
| 810
|
Solanoize
|
79,513,473
| 20,167,855
|
I’m trying to export data to a CSV file with semicolon separator ";"
|
<p>While using Scrapy, I’m trying to export data to a CSV file, and it’s working. However, the separator used is a comma (","), but I need it to be a semicolon (";") instead.</p>
<p>I’m currently running the command:</p>
<p><code>scrapy runspider Intro_Scrapy.py -o video.csv</code></p>
<p>How can I change the separator to ";" in the exported CSV file?</p>
<p>Here is my whole code (only for learning):</p>
<pre><code>from scrapy.item import Field
from scrapy.item import Item
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.loader import ItemLoader
class Pregunta(Item):
id = Field()
pregunta = Field()
descripcion = Field()
class StackOverFlowSpider(Spider):
name = "MiPrimerSpider"
custom_settings = {
"USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
}
start_urls = ["https://stackoverflow.com/questions"]
def parse(self, response):
sel = Selector(response)
preguntas = sel.xpath("//div[@id ='questions']//div[@class='s-post-summary--content']")
print(preguntas)
for pregunta in preguntas:
item = ItemLoader(item = Pregunta(), selector = pregunta)
item.add_xpath('pregunta','.//h3/a/text()')
item.add_xpath('descripcion',".//div[contains(@class,'excerpt')]/text()")
item.add_value('id', 1)
yield item.load_item()
</code></pre>
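<p>One route I have been considering (a sketch, not tested against this spider) is a custom exporter: <code>CsvItemExporter</code> forwards extra keyword arguments to <code>csv.writer</code>, so a subclass can set the delimiter and be registered via the <code>FEED_EXPORTERS</code> setting. The module path <code>Intro_Scrapy</code> below is only a placeholder for wherever the class actually lives:</p>
<pre><code>from scrapy.exporters import CsvItemExporter

class SemicolonCsvItemExporter(CsvItemExporter):
    def __init__(self, *args, **kwargs):
        kwargs["delimiter"] = ";"  # forwarded to csv.writer
        super().__init__(*args, **kwargs)

# in the spider:
#     custom_settings = {
#         ...,
#         "FEED_EXPORTERS": {"csv": "Intro_Scrapy.SemicolonCsvItemExporter"},
#     }
</code></pre>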
|
<python><csv><scrapy>
|
2025-03-17 01:11:19
| 0
| 351
|
Francisco Augusto Varela Aguir
|
79,513,387
| 2,250,791
|
How to specify generic python signature to return type, as well as use type in function
|
<p>I'm trying to write a function which takes a class as an argument, returns an instance of that class wrapped in another type, and uses the type of the class for casting internally. It works with a simple type like str, but not with an actual class.</p>
<p>I've reduced this code to the point that it's a bit silly to be using generics but it demonstrates my problem at least.</p>
<pre class="lang-py prettyprint-override"><code>from typing import cast, Type, TypeVar, Generic, Optional, Any
from pydantic import BaseModel, UUID4
from typing_extensions import Annotated, TypeAliasType
from pydantic.functional_validators import BeforeValidator
from uuid import UUID
def api_fn(output_cls: Optional[Type[BaseModel]] = None):
return None # simplified for example code, signature cannot be changed, not my code
def amount_f(v: Any) -> float:
return 0.0 # simplified for example code
MyNumber = Annotated[float, BeforeValidator(amount_f)]
DataT = TypeVar('DataT')
class TypeWithId(BaseModel, Generic[DataT]):
value: DataT
id: UUID4
T = TypeVar('T')
OptionalWithId = TypeAliasType(
'OptionalWithId', Optional[TypeWithId[T]], type_params=(T,)
)
OutT = TypeVar('OutT')
def query(output_cls:Type[OutT]) -> OptionalWithId[OutT]:
ret = api_fn(output_cls=output_cls)
return TypeWithId(value=cast(OutT,ret), id=UUID('00000000-0000-0000-0000-000000000000'))
str_result = query(str)
print(str_result)
number_result = query(MyNumber)
print(number_result)
</code></pre>
<p>The <code>str_result</code> works but the <code>number_result</code> gives me:</p>
<pre><code> 27 26 error e-f-b Pyright [reportArgumentType]: Argument of type "type[OutT@query]" cannot be assigned to parameter "output_cls" of type "type[BaseModel] | None" in function "api_fn"
32 0 error e-f-b Pyright [reportUnknownVariableType]: Type of "number_result" is partially unknown
32 22 error e-f-b Pyright [reportArgumentType]: Argument of type "MyNumber" cannot be assigned to parameter "output_cls" of type "type[OutT@query]" in function "query"
33 6 error e-f-b Pyright [reportUnknownArgumentType]: Argument type is partially unknown
</code></pre>
|
<python><generics><python-typing>
|
2025-03-16 23:39:58
| 1
| 2,075
|
Camden Narzt
|
79,513,337
| 13,392,257
|
pyyaml resolver: 'type' object is not iterable
|
<p>I am trying to resolve environment variables in yaml config</p>
<p>For example, I want to convert</p>
<pre><code>car:
name: ${CAR_NAME}
is_new: true
</code></pre>
<p>to</p>
<pre><code>car:
name: BMW
is_new: true
</code></pre>
<p>My code</p>
<pre><code># base_model.py
import yaml
import os
import re
from pydantic import BaseModel
from typing import Union
from pathlib import Path
env_pattern = re.compile(r".*?\${(.*?)}.*?")
def env_constructor(loader, node):
value = loader.construct_scalar(node)
for group in env_pattern.findall(value):
value = value.replace(f"${{{group}}}", os.environ.get(group))
return value
yaml.add_implicit_resolver("!pathex", env_pattern, yaml.SafeLoader)
yaml.add_constructor("!pathex", env_constructor, yaml.SafeLoader)
class YamlModel(BaseModel):
@classmethod
def from_yaml(cls, yaml_file_path: Union[str, Path]) -> "YamlModel":
yaml_file = Path(yaml_file_path)
with open(yaml_file, "r") as f:
file_content = yaml.safe_load(f)
return cls.parse_obj(file_content)
</code></pre>
<p>My test</p>
<pre><code>import pytest
import pydantic
from src.example_model import Config
def test_config_positive():
config = Config.from_yaml("tests/config_positive.yaml")
assert config.owner.name == "John Doe"
assert config.owner.age == 30
assert config.car.name == "BMW"
</code></pre>
<p>I have an error:</p>
<pre><code>tests/test_example_model.py:4: in <module>
from src.example_model import Config
src/example_model.py:1: in <module>
from src.base_model import YamlModel
src/base_model.py:17: in <module>
yaml.add_implicit_resolver("!pathex", env_pattern, yaml.SafeLoader)
venv/lib/python3.11/site-packages/yaml/__init__.py:280: in add_implicit_resolver
loader.Loader.add_implicit_resolver(tag, regexp, first)
venv/lib/python3.11/site-packages/yaml/resolver.py:34: in add_implicit_resolver
for ch in first:
E TypeError: 'type' object is not iterable
</code></pre>
<p>Full code <a href="https://github.com/mascaiTemplates/yaml_config_validator_template" rel="nofollow noreferrer">https://github.com/mascaiTemplates/yaml_config_validator_template</a></p>
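<p>For reference, my current reading of the traceback (an assumption on my part) is that the third positional argument of <code>add_implicit_resolver</code> is <code>first</code>, not the loader, so <code>yaml.SafeLoader</code> ends up being iterated. A sketch of the call with the loader passed by keyword instead:</p>
<pre><code># Sketch: pass the loader by keyword so it is not interpreted as 'first'.
yaml.add_implicit_resolver("!pathex", env_pattern, Loader=yaml.SafeLoader)
yaml.add_constructor("!pathex", env_constructor, Loader=yaml.SafeLoader)
</code></pre>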
|
<python><pyyaml>
|
2025-03-16 22:42:48
| 1
| 1,708
|
mascai
|
79,513,292
| 15,412,256
|
Jupyter Notebook DataFrame Render Steadily Increasing memory usage
|
<p>Generate any considerably large dataset:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import polars as pl
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(
n_samples=3_000_000,
n_features=100,
n_informative=20,
n_redundant=10,
n_repeated=0,
n_classes=2,
random_state=42,
shuffle=False,
)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_polars = pl.DataFrame(X, schema=feature_names)
y_polars = pl.Series(values=y, name="target")
X_pandas = X_polars.clone().to_pandas()
</code></pre>
<p>If I execute a code block with Jupyter rendering for both <code>text/HTML</code> and <code>text/plain</code> representations:</p>
<pre class="lang-py prettyprint-override"><code>df = X_polars+1
df
</code></pre>
<p>The <code>df</code> table will be rendered and output under the cell. The problem begins when I re-run the same cell multiple times: each time, the memory used by <code>df</code> accumulates:</p>
<pre class="lang-py prettyprint-override"><code># First time running
import psutil
print(psutil.virtual_memory().available / (1024 ** 3))
df = X_polars+1
df
out:
26.315258026123047
`df` table
</code></pre>
<pre class="lang-py prettyprint-override"><code># Fourth time running
out:
19.47336196899414
...
</code></pre>
<p>The same behaviour persists even when</p>
<ol>
<li>Use <code>X_pandas</code>.</li>
<li>Not defining the <code>df</code> variable (e.g., use <code>X_polars+1</code> straight)</li>
<li>On Linux based systems (I am on Windows 10)</li>
<li>Called <code>gc.collect()</code></li>
<li>Switched IDE to a. Vscode b. Jupyter Notebook c. Jupyter Lab</li>
</ol>
<p>The problem does not occur, however, when I use</p>
<ol>
<li><code>print(X_polars+1)</code></li>
<li><code>print(df)</code></li>
</ol>
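<p>For what it's worth, my working assumption (not a confirmed diagnosis) is IPython's output cache: every displayed result is kept in the <code>Out</code> dictionary (and in <code>_</code>, <code>__</code>, <code>___</code>), so each re-run pins another copy of the frame even after <code>gc.collect()</code>. A quick check along those lines:</p>
<pre class="lang-py prettyprint-override"><code># Sketch: inspect and clear IPython's output cache between runs.
len(Out)        # number of cached cell outputs kept alive by the kernel
%reset -f out   # drop the Out cache (and _, __, ___) to release the frames
import gc; gc.collect()
</code></pre>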
|
<python><pandas><jupyter-notebook><jupyter><python-polars>
|
2025-03-16 21:58:43
| 0
| 649
|
Kevin Li
|
79,513,211
| 12,415,855
|
Selenium selecting element by value not possible
|
<p>I try to select an element using Selenium with the following code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
print(f"Checking Browser driver...")
options = Options()
# options.add_argument('--headless=new')
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
startLlink = "https://www.firmenabc.at/bauen-wohnen-einrichten"
driver.get (startLlink)
select = Select(driver.find_element(By.XPATH,'(//select[@class="py-4"])[1]'))
select.select_by_value('1')
</code></pre>
<p>But I always get this error:</p>
<pre><code>(selenium) C:\DEVNEU\Fiverr2025\PROGRAMS\FirmenABC>python test.py
Checking Browser driver...
Traceback (most recent call last):
File "C:\DEVNEU\Fiverr2025\PROGRAMS\FirmenABC\test.py", line 25, in <module>
select.select_by_value('1')
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\support\select.py", line 84, in select_by_value
raise NoSuchElementException(f"Cannot locate option with value: {value}")
selenium.common.exceptions.NoSuchElementException: Message: Cannot locate option with value: 1; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
</code></pre>
<p>As you can see, this should be the first element with value = "1"</p>
<p><a href="https://i.sstatic.net/e818wj6v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e818wj6v.png" alt="enter image description here" /></a></p>
<p>How can I select the first option using Selenium?</p>
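<p>For reference, the kind of guard I would add first (a sketch, assuming the options are populated by JavaScript after page load, which I have not verified for this site) is an explicit wait for the option before selecting it:</p>
<pre><code>from selenium.webdriver.support import expected_conditions as EC

# Sketch: wait until the option with value="1" actually exists inside the select
# before handing the element to Select(); dynamically populated dropdowns raise
# NoSuchElementException if the selection happens too early.
option_xpath = '(//select[@class="py-4"])[1]/option[@value="1"]'
waitWD.until(EC.presence_of_element_located((By.XPATH, option_xpath)))
select = Select(driver.find_element(By.XPATH, '(//select[@class="py-4"])[1]'))
select.select_by_value('1')
</code></pre>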
|
<python><selenium-webdriver>
|
2025-03-16 20:37:50
| 2
| 1,515
|
Rapid1898
|
79,513,209
| 1,506,763
|
python base64 string - how to decode first 8 bytes
|
<p>I'm having some problems decoding what should be simple data.</p>
<p>I have a base64 string that represents a <code>np.int64</code> followed by an array of <code>np.float64</code>. The size of the array is defined by the first <code>np.int64</code>. This pattern is then repeated for multiple arrays. So in order to decode all of the arrays, I need to be able to read the size in bytes to find the starting point of the next pair.</p>
<p>Here is a very simple example showing the first pair. The second pair starts straight after this - after 64 bytes or 88 base64 characters. Then rinse and repeat for the remaining arrays.</p>
<pre class="lang-py prettyprint-override"><code>>>> test_data = 'OAAAAAAAAAAAAAAAAAAAAFVVVVVVVcU/VVVVVVVV1T8AAAAAAADgP1VVVVVVVeU/qqqqqqqq6j8AAAAAAADwPw=='
>>> struct.unpack('Qddddddd', base64.b64decode(test_data)) # 'Q7d' also works
(56,
0.0,
0.16666666666666666,
0.3333333333333333,
0.5,
0.6666666666666666,
0.8333333333333333,
1.0)
</code></pre>
<p>My problem is that I need to extract the <code>Int64</code> first to know the proper size array to be unpacked and the start of the next array which starts immediately after this.</p>
<p>I thought I could simply cut off the first 8 bytes from the <code>base64</code> string using the 4/3 size relation and round to the nearest 4 to account for padding like so:</p>
<pre class="lang-py prettyprint-override"><code>struct.unpack('Q', base64.b64decode(test_data[:12]))
</code></pre>
<p>But that always throws an error regardless of how big my slice is (I've tried 8 to 16 just to try and figure out what is going on):</p>
<pre class="lang-none prettyprint-override"><code>struct.error: unpack requires a buffer of 8 bytes
</code></pre>
<p>There must be a simple way to extract just that first integer without knowing the length of the array it is describing?</p>
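<p>For context, the alternative I have been considering (a sketch based on the example above, assuming the leading <code>int64</code> is the byte length of the array that follows) is to decode the whole string once and walk the raw bytes with <code>struct.unpack_from</code>, which avoids slicing the base64 text at all:</p>
<pre class="lang-py prettyprint-override"><code>import base64
import struct

buf = base64.b64decode(test_data)   # decode the whole payload once
offset = 0
while offset < len(buf):
    (nbytes,) = struct.unpack_from('q', buf, offset)   # leading int64, e.g. 56
    count = nbytes // 8                                # number of float64 values
    values = struct.unpack_from(f'{count}d', buf, offset + 8)
    offset += 8 + nbytes                               # jump to the next pair
    print(nbytes, values)
</code></pre>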
|
<python><numpy><binary><base64><decode>
|
2025-03-16 20:35:57
| 1
| 676
|
jpmorr
|
79,513,093
| 2,057,516
|
How do I get VSCode Intellisense to work with my python class factory method?
|
<p>I have a python class factory method that I use to create a commonly implemented derived class of <code>django.tests.TestCase</code> and <code>django.tests.TransactionTestCase</code>:</p>
<pre><code>from typing import Type
from django.test import TestCase, TransactionTestCase
def test_case_class_factory(base_class) -> Type[TestCase]:
class MyDerivedTestCase(base_class):
...
return MyDerivedTestCase
TracebaseTestCase: TestCase = test_case_class_factory(TestCase)
TracebaseTransactionTestCase: TransactionTestCase = test_case_class_factory(TransactionTestCase)
</code></pre>
<p>These classes work well. I've used them for years now, but when I inherit from either of those classes, VSCode's Intellisense and syntax highlighting have never worked, e.g.:</p>
<pre><code>from DataRepo.tests.tracebase_test_case import TracebaseTestCase
class MyTests(TracebaseTestCase):
def test_multiply(self):
self.assertEqual(6.2, multiply(2.0, 3.1))
</code></pre>
<p>E.g. <code>assertEqual</code> is white:</p>
<p><a href="https://i.sstatic.net/BOGvKq6z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOGvKq6z.png" alt="enter image description here" /></a></p>
<p>I've tried a few things today in terms of type hinting, but I can't seem to figure out how to make it work. How can I make inheriting from these factory-created classes be compatible with VSCode's intellisense and syntax highlighting?</p>
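<p>For reference, the direction I have been experimenting with (a sketch, not a confirmed fix for the highlighting issue) is to make the factory generic over the base class, so the type checker sees the returned class as a subtype of whatever was passed in:</p>
<pre><code>from typing import Type, TypeVar

from django.test import TestCase, TransactionTestCase

# TestCase subclasses TransactionTestCase, so bounding on the latter covers both.
T = TypeVar("T", bound=TransactionTestCase)

def test_case_class_factory(base_class: Type[T]) -> Type[T]:
    class MyDerivedTestCase(base_class):  # type: ignore  # dynamic base class
        ...
    return MyDerivedTestCase

TracebaseTestCase = test_case_class_factory(TestCase)
TracebaseTransactionTestCase = test_case_class_factory(TransactionTestCase)
</code></pre>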
|
<python><visual-studio-code><intellisense><syntax-highlighting>
|
2025-03-16 19:13:43
| 1
| 1,225
|
hepcat72
|
79,512,994
| 11,850,322
|
Python - XYZ plot coloring cluster region
|
<p>Currently this is my code for the XY axes:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def plot_with_tricontourf_customized(dfs, figures_organizer, figsize_setting, list_points_mean, list_points_type, x_col, y_col, color_col, titles, marker_type='o', apply_limits=False, x_limits=None, y_limits=None):
rows, cols = figures_organizer
fig, axes = plt.subplots(rows, cols, figsize=(figsize_setting[0], figsize_setting[1] * rows), constrained_layout=True)
axes = axes.flatten() # Flatten the axes array for easier indexing
for i, (ax, df, title, points_mean, points_type) in enumerate(zip(axes, dfs, titles, list_points_mean, list_points_type)):
x = df[x_col].to_numpy()
y = df[y_col].to_numpy()
group = df[color_col].to_numpy()
# Use a color map directly
cmap = plt.colormaps['bwr']
scatter = ax.scatter(x, y, c=group, cmap=cmap, edgecolor='black', marker=marker_type, s=40, alpha=0.8)
contour = ax.tricontourf(x, y, group, levels=np.arange(-0.5, df[color_col].max() + 1.5), zorder=0, cmap=cmap, alpha=0.3)
# Plot fixed points with specific markers
for point, type in zip(points_mean, points_type):
ax.scatter([point[0]], [point[1]], color='black', marker=type, s=100)
# Add dashed lines passing through fixed points
for point in points_mean:
ax.axvline(x=point[0], color='black', linestyle='--')
ax.axhline(y=point[1], color='black', linestyle='--')
ax.set_xlabel(x_col)
ax.set_ylabel(y_col)
ax.set_title(title)
ax.grid(True)
if apply_limits:
if x_limits:
ax.set_xlim(x_limits)
if y_limits:
ax.set_ylim(y_limits)
legend1 = ax.legend(*scatter.legend_elements(), loc="upper right", title=color_col)
ax.add_artist(legend1)
# Add a text box for fixed points explanation
textstr = "★ : y=1 mean\n♦ : y=0 mean"
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
ax.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=12,
verticalalignment='top', bbox=props)
# Hide any unused axes if there are more subplots than dataframes
for ax in axes[len(dfs):]:
ax.axis('off')
plt.colorbar(scatter, ax=axes[:len(dfs)], orientation='horizontal', fraction=0.02, pad=0.04, ticks=np.arange(df[color_col].min(), df[color_col].max() + 1))
plt.show()
# Example usage
# Suppose you have 15 dataframes, their means, types, and titles ready in lists
# plot_with_tricontourf_customized(dfs, (4, 4), list_points_mean, list_points_type, 'x', 'y', 'group', titles)
</code></pre>
<p>A sample output:</p>
<p><a href="https://i.sstatic.net/M6G5qfPp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6G5qfPp.png" alt="enter image description here" /></a></p>
<p>Now I want to plot in 3D with XYZ axes and color by volume, similar to the output shown above, but now in 3D.</p>
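<p>Roughly, the minimal shape of what I am after looks like this (a sketch with hypothetical column names <code>x</code>, <code>y</code>, <code>z</code>, <code>group</code>; not yet adapted to the full function above):</p>
<pre><code>import matplotlib.pyplot as plt

# Sketch: a single 3D panel; 'x', 'y', 'z', 'group' are placeholder column names.
x, y, z = df['x'].to_numpy(), df['y'].to_numpy(), df['z'].to_numpy()
group = df['group'].to_numpy()

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(projection='3d')        # 3D axes instead of plt.subplots(...)
scatter = ax.scatter(x, y, z, c=group, cmap='bwr', edgecolor='black', s=40, alpha=0.8)
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
ax.view_init(elev=20, azim=45)               # adjust the 3D viewing angle
fig.colorbar(scatter, ax=ax, fraction=0.02, pad=0.04)
plt.show()
</code></pre>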
<p>I am looking for:</p>
<ul>
<li>How I should adjust my code to draw this in 3D</li>
<li>An option to adjust the 3D view</li>
</ul>
<p>Thank you</p>
|
<python><pandas><matplotlib><plotly><matplotlib-3d>
|
2025-03-16 17:56:00
| 0
| 1,093
|
PTQuoc
|
79,512,868
| 3,972,291
|
How to read Excel merged cell property values using Python?
|
<p>I need to read data from an Excel file. The first cell contains the property name, and the second cell contains the property value. However, some of the property names in the first column are merged across two or more columns, and the corresponding values are in the next cell. For example, the property name "Ref" is in columns A and B, and its value is in column C.</p>
<p>I want to retrieve the value of the "Ref" property from column C in my Excel file.</p>
<p>Here is my excel image:</p>
<p><a href="https://i.sstatic.net/cwtXNjag.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwtXNjag.png" alt="enter image description here" /></a></p>
<p>I am using Python. Here is the output:</p>
<pre><code>Approval Memo of : SHILPI AKTER
Name of the Applicant : SHILPI AKTER
Name of Territory : Comilla
Total Family Expenses : 30000
Ref : N/A
Amount : N/A
Total Amount : 3000
</code></pre>
<p><strong>Ref and Amount</strong> property values are not found.
Here is my code:</p>
<pre><code> import os
import openpyxl
from openpyxl.utils import column_index_from_string
file_path = r"D:\file\input\example.xlsx"
if os.path.exists(file_path):
print("File exists!")
else:
print("File not found! Check the path.")
exit()
target_sheet = "Output Approval Templete"
# Define the properties to extract
properties = [
"Approval Memo of",
"Name of the Applicant",
"Name of Territory",
"Total Family Expenses",
"Ref",
"Amount",
"Total Amount"
]
# Function to get the actual value from a merged cell
def get_merged_cell_value(sheet, row, col):
for merged_range in sheet.merged_cells.ranges:
min_row, min_col, max_row, max_col = merged_range.bounds # Extract merged cell bounds
if min_row <= row <= max_row and min_col <= col <= max_col:
return sheet.cell(min_row, min_col).value # Return the first cell's value of the merged range
return sheet.cell(row, col).value
# Function to format numeric values properly
def format_value(value):
if isinstance(value, float) and value > 1e10: # Large numbers like NID
return str(int(value)) # Convert to integer and string to avoid scientific notation
elif isinstance(value, (int, float)): # General number formatting
return str(value)
elif value is None:
return "N/A" # Handle missing values
return str(value).strip()
try:
# Load the workbook
wb = openpyxl.load_workbook(file_path, data_only=True)
if target_sheet not in wb.sheetnames:
print(f"Sheet '{target_sheet}' not found in the file.")
else:
ws = wb[target_sheet]
extracted_data = {}
# Iterate over rows to extract data
for row in ws.iter_rows():
for cell in row:
# Check if the cell value is a property we are looking for
if cell.value and isinstance(cell.value, str) and cell.value.strip() in properties:
prop_name = cell.value.strip()
col_idx = cell.column # Get column index (1-based)
next_col_idx = col_idx + 1 # Next column index
# Ensure next column exists within sheet bounds
if next_col_idx <= ws.max_column:
# Check if the cell is merged, and get its value
next_value = get_merged_cell_value(ws, cell.row, next_col_idx)
# Store the formatted value for the property
extracted_data[prop_name] = format_value(next_value) # Store extracted value
# Print extracted values
for key, value in extracted_data.items():
print(f"{key} : {value}")
except Exception as e:
print(f"Error loading workbook: {e}")
</code></pre>
<p>Please help me find the merged cell property values.</p>
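<p>For what it's worth, the adjustment I have been considering (a sketch, untested against this workbook) is to compute the value column from the merged range's right edge instead of always using <code>col + 1</code>, since labels like "Ref" span two columns:</p>
<pre><code># Sketch: if the label cell is merged, the value sits just after the merged range.
def get_value_column(sheet, row, col):
    for merged_range in sheet.merged_cells.ranges:
        min_row, min_col, max_row, max_col = merged_range.bounds
        if min_row <= row <= max_row and min_col <= col <= max_col:
            return max_col + 1   # first column after the merged label
    return col + 1               # label is a single, unmerged cell

# inside the loop, instead of next_col_idx = col_idx + 1:
#     next_col_idx = get_value_column(ws, cell.row, col_idx)
</code></pre>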
|
<python><excel><pandas><openpyxl>
|
2025-03-16 16:29:17
| 1
| 5,095
|
Enamul Haque
|
79,512,785
| 20,167,855
|
Using Scrapy I get LookupError: unknown encoding: 'b'utf8''
|
<p>I'm learning how to use Scrapy, but when I try to export to CSV I get a
LookupError: unknown encoding: 'b'utf8''</p>
<p>I made an example with Stack Overflow (only for learning) and tried to scrape the first page of questions and then export to CSV. But I get an empty CSV, and the error on my terminal is:</p>
<pre><code>2025-03-16 12:07:52 [scrapy.core.scraper] ERROR: Spider error processing <GET https://stackoverflow.com/questions> (referer: None)
Traceback (most recent call last):
File "C:\Users\fvarelaa\AppData\Local\anaconda3\Lib\site-packages\scrapy\utils\defer.py", line 327, in iter_errback
yield next(it)
^^^^^^^^
LookupError: unknown encoding: 'b'utf8''
2025-03-16 12:07:52 [scrapy.core.engine] INFO: Closing spider (finished)
2025-03-16 12:07:52 [scrapy.extensions.feedexport] INFO: Stored csv feed (0 items) in: video.csv
2025-03-16 12:07:52 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
</code></pre>
<pre><code>from scrapy.item import Field
from scrapy.item import Item
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.loader import ItemLoader
class Pregunta(Item):
id = Field()
pregunta = Field()
descripcion = Field()
class StackOverFlowSpider(Spider):
name = "MiPrimerSpider"
custom_settings = {
"USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
}
start_urls = ["https://stackoverflow.com/questions"]
def parse(self, response):
sel = Selector(response)
preguntas = sel.xpath("//div[contains(@id,'questions')]//div[@class='s-post-summary js-post-summary']")
for pregunta in preguntas:
item = ItemLoader(item = Pregunta(), selector = pregunta)
item.add_xpath('pregunta','.//h3/a/text()')
item.add_xpath('descripcion',".//div[contains(@class,'excerpt')]/text()")
item.add_value('id', 1)
yield item.load_item()
# scrapy runspider Intro_Scrapy.py -o video.csv
</code></pre>
|
<python><web-scraping><scrapy>
|
2025-03-16 15:27:39
| 1
| 351
|
Francisco Augusto Varela Aguir
|
79,512,639
| 16,383,578
|
How to get specified number of decimal places of any fraction?
|
<p>So I can generate many <code>tuple</code>s like this:</p>
<pre><code>(601550405185810455248373798733610900689885946410558295383908863020551447581889414152035914344864580636662293641050147614154610394724089543305418716041082523503171641011728703744273399267895810412812627682686305964507416778143771218949050158028407021152173879713433156038667304976240165476457605035649956901133077856035193743615197184,
191479441008508487760634222418439911957601682008868450843945373670464368694409556412664937174591858285324642229867265839916055393493144203677491629737464170928066273172154431360491037381070068776978301192672069310596051608957593418323738306558817176090035871566224143565145070495468977426925354101694409791889484040439128875732421875)
</code></pre>
<p>They are all <code>tuple</code>s of two <code>int</code>s; they represent fractions with (nearly) infinite precision (bounded only by computer memory). The first number is the numerator, the second the denominator.</p>
<p>If we divide them, we get the first 101 decimal places of π:</p>
<pre><code>'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798'
</code></pre>
<p>I did everything using integer math and without using any libraries. I didn't use any floating point because I know they are all of finite precision, in contrast to <code>int</code>'s infinite precision in Python.</p>
<p>Python's <code>float</code> uses <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">this</a>, and it only has log<sub>10</sub>(2) * 52 = 15.653559774527022 decimal places, far less than what I wanted.</p>
<p>I wrote two correct functions to get n specified decimal places of any fraction:</p>
<pre><code>from typing import Dict, List, Tuple
def decimalize(dividend: int, divisor: int, places: int) -> str:
div, mod = divmod(dividend, divisor)
result = [f"{div}."]
while mod and places:
div, mod = divmod(mod * 10, divisor)
result.append(str(div))
places -= 1
integral, first, *others = result
return integral + first + "".join(others).rstrip("0")
def pad_cycles(mod: int, places: int, pairs: Dict[int, str], result: List[str]) -> None:
if mod and places:
i = list(pairs).index(mod)
cycle = "".join(list(pairs.values())[i:])
div, mod = divmod(places, len(cycle))
result.append(cycle * div + cycle[:mod])
def decimalize1(dividend: int, divisor: int, places: int) -> str:
div, mod = divmod(dividend, divisor)
result = [f"{div}."]
pairs = {}
while mod and places and mod not in pairs:
div, mod1 = divmod(mod * 10, divisor)
pairs[mod] = (div := str(div))
result.append(div)
mod = mod1
places -= 1
pad_cycles(mod, places, pairs, result)
integral, first, *others = result
return integral + first + "".join(others).rstrip("0")
</code></pre>
<p>They work but both are inefficient, as they are literally doing long division.</p>
<p>They both adhere to the following rules:</p>
<p>They should return the fraction expanded to the desired width, unless the fraction terminates before reaching the width (we had an exact finite decimal representation before reaching the width);</p>
<p>If the last few digits of the truncated decimal expansion are 0, the result should not contain those trailing zeros, unless the very first decimal digit is 0. For example, it should be <code>'0.0'</code> instead of <strike><code>'0.'</code></strike></p>
<p>Additionally, the second function breaks out of the loop once it has all the digits of one cycle and constructs the remaining digits by repeating the cycle. However, each iteration does more work, so it is faster if the cycle is short and slower if the cycle is longer. The worst case is when the cycle is longer than <code>places</code>, so it does all the extra work without terminating early:</p>
<pre><code>In [364]: %timeit decimalize(1, 3, 100)
25.8 μs ± 371 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [365]: %timeit decimalize1(1, 3, 100)
2.07 μs ± 20.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [366]: %timeit decimalize(1, 137, 100)
26.8 μs ± 209 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [367]: %timeit decimalize1(1, 137, 100)
4.94 μs ± 38.1 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [368]: %timeit decimalize1(1, 1337, 100)
43.4 μs ± 280 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [369]: %timeit decimalize(1, 1337, 100)
28.6 μs ± 389 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [370]: %timeit decimalize1(1, 123456789, 100)
42.4 μs ± 309 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [371]: %timeit decimalize(1, 123456789, 100)
29.7 μs ± 494 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [372]: a = 60155040518581045524837379873361090068988594641055829538390886302055144758188941415203591434486458063666229364105014761415461039472408954330541871604108252350317164101172870374427339926789581041
...: 2812627682686305964507416778143771218949050158028407021152173879713433156038667304976240165476457605035649956901133077856035193743615197184
In [373]: b = 19147944100850848776063422241843991195760168200886845084394537367046436869440955641266493717459185828532464222986726583991605539349314420367749162973746417092806627317215443136049103738107006877
...: 6978301192672069310596051608957593418323738306558817176090035871566224143565145070495468977426925354101694409791889484040439128875732421875
In [374]: decimalize(a, b, 101)
Out[374]: '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798'
In [375]: decimalize1(a, b, 101) == decimalize(a, b, 101)
Out[375]: True
In [376]: %timeit decimalize1(a, b, 101)
96.3 μs ± 295 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [377]: %timeit decimalize(a, b, 101)
64.4 μs ± 402 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>How can we do better than long division, so that the execution time is reduced drastically, while achieving the same result (using integer math to get infinite precision representation of fractions)? Preferably this should be done without using any libraries as I would like to know the algorithm.</p>
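<p>The closest I have come to avoiding the digit-by-digit loop is a single big-integer division, scaling the remainder by <code>10**places</code> (a sketch that keeps the same truncation and trailing-zero rules, but gives up the early termination for short cycles and terminating fractions):</p>
<pre><code>def decimalize_fast(dividend: int, divisor: int, places: int) -> str:
    # One big-int division instead of a Python-level long-division loop.
    div, mod = divmod(dividend, divisor)
    digits = str(mod * 10**places // divisor).rjust(places, "0")
    # Keep the first decimal digit, strip trailing zeros from the rest,
    # matching the behaviour of decimalize/decimalize1 above.
    return f"{div}.{digits[0]}{digits[1:].rstrip('0')}"
</code></pre>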
<hr />
<p>Just to prove a point:</p>
<pre><code>In [398]: decimalize1(1, 5, 99999999999999999999999999999999999999999)
Out[398]: '0.2'
In [399]: decimalize1(1, 5, 10**1000-1)
Out[399]: '0.2'
</code></pre>
<p>In my solution, no matter how many digits are asked for, if there is an exact finite decimal representation shorter than that, it will always be returned; the number of digits asked for doesn't affect it.</p>
<hr />
<p>I chose long division because it is the only thing I know of that works, I want infinite precision, and that method spits out the first N correct decimal places for any N.</p>
<p>I had read this Wikipedia <a href="https://en.wikipedia.org/wiki/Division_algorithm" rel="nofollow noreferrer">article</a>, but as you can see none of them can be used for my purposes. I want to know what algorithm can be used for infinite precision decimal.</p>
<hr />
<p>To address some comments, here is code taken directly from <code>decimalize1</code>:</p>
<pre><code>In [409]: places = 4096
...: div, mod = divmod(1, 1337)
...: result = [f"{div}."]
...: pairs = {}
...: while mod and places and mod not in pairs:
...: div, mod1 = divmod(mod * 10, 1337)
...: pairs[mod] = (div := str(div))
...: result.append(div)
...: mod = mod1
...: places -= 1
In [410]: len(result)
Out[410]: 571
In [411]: len(pairs)
Out[411]: 570
</code></pre>
|
<python><algorithm><math><fractions><integer-division>
|
2025-03-16 13:47:27
| 2
| 3,930
|
Ξένη Γήινος
|
79,512,493
| 22,213,065
|
How to Freeze a Flourish Studio Bar Chart Race Every 0.01s for High-Quality PDF Capture?
|
<p>I created the following Flourish Studio bar chart race as an example for this topic:<br />
<a href="https://flo.uri.sh/visualisation/19321054/embed" rel="nofollow noreferrer">https://flo.uri.sh/visualisation/19321054/embed</a></p>
<p>Now, I want to capture a sequence of PDF prints from this bar chart race, recording all movements and frames using the browser’s <strong>Print to PDF</strong> feature.</p>
<p><strong>Why PDF Print?</strong></p>
<p>I plan to later convert these PDF prints into a <strong>high-quality 4K video</strong>. The reason I prefer <strong>PDF prints</strong> over screen recording is that:</p>
<ul>
<li><p>When zooming into the PDF, <strong>there is no blurriness</strong>, and the
quality remains sharp.</p>
</li>
<li><p>Traditional screen recording (even with upscaling) results in
<strong>blurry and low-quality frames</strong> when zoomed in.</p>
</li>
</ul>
<p><strong>Please do not suggest screen recording tools like OBS!</strong> They do not produce the high-quality frames I need.</p>
<p><strong>Methods I Have Tried (But Failed)</strong><br />
I have attempted the following approaches, but they all have limitations in capturing and saving PDF print sequences efficiently:</p>
<ul>
<li><strong>Puppeteer (Node.js & Python)</strong></li>
<li><strong>Puppeteer with multiple browser contexts</strong></li>
<li><strong>Playwright (Python)</strong></li>
<li><strong>Built-in console commands (Chrome & Firefox)</strong></li>
<li><strong>Selenium WebDriver</strong></li>
</ul>
<p>All of these methods have a limitation: they can only capture <strong>one print every ~5 seconds</strong>, which is too slow.<br />
I also tested <strong>MSS and DxCam</strong>, but they only record the screen, not the direct PDF print output from the browser. <strong>Screen recording is not as sharp as a PDF print.</strong></p>
<p><strong>My Current Idea</strong><br />
Since all other methods failed, I am looking for a way to <strong>freeze</strong> the bar chart race every <strong>0.01 seconds</strong> and take a <strong>PDF print</strong> from each frozen frame.</p>
<p><strong>My question:</strong><br />
Is there any tool, extension, script, or plugin that can <strong>freeze the Flourish Studio bar chart race every 0.01 seconds</strong> so that I can print each frame to a PDF?</p>
<p><strong>Important Notes:</strong></p>
<ul>
<li><p>The built-in <strong>pause button</strong> of Flourish <strong>does not help</strong>, because
when I pause, the bars automatically <strong>rearrange or shift</strong>, making
it impossible to capture all frames accurately.</p>
</li>
<li><p>My PC is <strong>low-end</strong> and does <strong>not</strong> have a <strong>good GPU or capture
card</strong>, so I cannot use high-performance screen recording tools.</p>
</li>
</ul>
<p><strong>My last hope is finding a tool that can:</strong></p>
<ul>
<li><p><strong>Freeze the Flourish Studio animation every 0.01s</strong></p>
</li>
<li><p><strong>Allow me to print each frozen frame as a PDF</strong></p>
</li>
</ul>
<p>Does anyone know of a solution? Thanks in advance!</p>
|
<javascript><python><pdf><browser-automation><screen-recording>
|
2025-03-16 11:26:14
| 1
| 781
|
Pubg Mobile
|
79,511,967
| 15,587,184
|
Azure AI Search - Tag Scoring profile on azureopenai extra_body
|
<p>I created an index on Azure AI Search and connected it to Azure OpenAI using the extra_body. It works perfectly. However, I created a default scoring profile for my index, which boosts documents containing the string "zinc" in the VITAMINS field by a factor of 10.</p>
<p>Since doing this, I can no longer run the query that worked previously without issues. Now, the query is asking for a scoringParameter, and when I attempt to pass it, I receive an error. Here is the code that works fine when I remove the scoring function.</p>
<pre><code>client.chat.completions.create(
model=os.getenv('DEPLOYMENT'),
messages=messages,
temperature=0.5,
extra_body={
"data_sources": [{
"type": "azure_search",
"parameters": {
"endpoint": os.getenv('ENDPOINT'),
"index_name": os.getenv('INDEX'),
"semantic_configuration": os.getenv('RANK'),
"query_type": "hybrid",
"in_scope": True,
"role_information": None,
"strictness": 1,
"top_n_documents": 3,
"authentication": {
"type": "api_key",
"key": os.getenv('KEY')
},
"embedding_dependency": {
"type": "deployment_name",
"deployment_name": os.getenv('ADA_VIT')
}
}
}]
}
)
</code></pre>
<p>However, if I activate the default scoring profile, I get the following error:</p>
<blockquote>
<p>An error occurred: Error code: 400 - {'error': 'message': 'An error occurred when calling Azure Cognitive Search: Azure Search Error: 400, message='Server responded with status 400. Error message: {"error":{"code":"MissingRequiredParameter","message":"Expected 1 parameter(s) but 0 were supplied.\\r\\nParameter name: scoringParameter","details":[{"code":"MissingScoringParameter","message":"Expected 1 parameter(s) but 0 were supplied."}]}}', api-version=2024-03-01-preview'\nCall to Azure Search instance failed.\nAPI Users: Please ensure you are using the right instance, index_name, and provide admin_key as the api_key.\n'}</p>
</blockquote>
<p><strong>If I try to pass the scoringParameter anywhere in the extra_body</strong>, I receive this error:</p>
<blockquote>
<p>An error occurred: Error code: 400 - {'error': {'requestid': '', 'code': 400, 'message': 'Validation error at #/data_sources/0/azure_search/parameters/scoringParameter: Extra inputs are not permitted'}}</p>
</blockquote>
<p>This error is even more confusing. I’ve been looking through various resources, but none of them seem to provide a clear example of how to properly pass the scoring profile or scoring parameters in the extra_body.</p>
<p>Here’s how I define my scoring profile using tags:</p>
<pre><code>scoring_profiles = [
ScoringProfile(
name="my-scoring-profile",
functions=[
TagScoringFunction(
field_name="VITAMINS",
boost=10.0,
parameters=TagScoringParameters(
tags_parameter="tags",
),
)
]
)
]
</code></pre>
<p>How do I pass the scoring parameters correctly in the <code>extra_body</code> of <code>client.chat.completions.create</code>?</p>
<p>PS: The only way I can get my code to work is if I delete the scoring profile or do not make it the default scoring profile, but I do want to use it.</p>
|
<python><azure><azure-openai><azure-ai-search>
|
2025-03-16 00:36:05
| 1
| 809
|
R_Student
|
79,511,907
| 16,383,578
|
Why doesn't my function that converts a string of digits to a list of bytes work?
|
<p>I have written a simple function that converts a string consisting of only digits (<code>'0123456789'</code>) to a list of bytes. More specifically, it finds the longest possible sequences of digits that, when converted to an integer, are less than 256.</p>
<p>For example, given the decimal digits in order, the output should be <code>(0, 123, 45, 67, 89)</code>.</p>
<p>Now I wrote this function very quickly:</p>
<pre><code>from typing import List
DIGITS = {str(i): i for i in range(10)}
def get_bytes(s: str) -> List[int]:
byte_list = []
chunk = 0
for c in f'{s}0':
if (d := DIGITS.get(c)) is None:
raise ValueError(f"{c} is not a digit")
if (new := chunk * 10 + d) < 256:
chunk = new
else:
byte_list.append(chunk)
chunk = d
return byte_list[not byte_list[0]:]
</code></pre>
<pre><code>In [143]: get_bytes('0000')
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[143], line 1
----> 1 get_bytes('0000')
Cell In[142], line 14, in get_bytes(s)
11 byte_list.append(chunk)
12 chunk = d
---> 14 return byte_list[not byte_list[0]:]
IndexError: list index out of range
In [144]: get_bytes('100100100')
Out[144]: [100, 100, 100]
In [145]: get_bytes('12301230123')
Out[145]: [123, 123, 123]
</code></pre>
<p>I tried to fix the code, but I couldn't. Now there is only one problem remaining, and it is exactly the problem I am trying to address with this question.</p>
<p>In the first example, the expected output is <code>[0, 0, 0, 0]</code>, and in the third example the correct output should be <code>[123, 0, 123, 0, 123]</code>. How can I fix this?</p>
<p>As for 0, I want every isolated 0 to be one number in the output. In short, I want to make sure <code>''.join(str(i) for i in output) == original</code>.</p>
<hr />
<p>According to my tests, the code from the accepted answer performs better than the regex based approach, so I accepted it:</p>
<pre><code>In [384]: get_bytes1(string) == get_bytes2(string)
Out[384]: True
In [385]: %timeit get_bytes1(string)
179 ms ± 588 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [386]: %timeit get_bytes2(string)
161 ms ± 748 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [387]: len(string)
Out[387]: 1000001
</code></pre>
<p><code>get_bytes2</code> is from the accepted answer, <code>get_bytes1</code> is the regex based approach. And <code>string</code> is the first 1000000 decimal places of π plus a <code>'3'</code> in the front.</p>
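<p>For completeness, a regex-based version along the lines of my <code>get_bytes1</code> could be sketched like this (reconstructed, not the exact code I benchmarked); the alternation lists the three-digit patterns first, so each match is the longest chunk at that position that still fits in a byte:</p>
<pre><code>import re
from typing import List

BYTE_RE = re.compile(r'25[0-5]|2[0-4]\d|1\d\d|[1-9]\d|\d')

def get_bytes_regex(s: str) -> List[int]:
    # Each match is the longest prefix at that position whose value is < 256,
    # so '0123456789' -> [0, 123, 45, 67, 89] and '0000' -> [0, 0, 0, 0].
    # Note: unlike the loop version, non-digit characters are silently skipped
    # instead of raising ValueError.
    return [int(m) for m in BYTE_RE.findall(s)]
</code></pre>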
|
<python>
|
2025-03-15 22:52:47
| 3
| 3,930
|
Ξένη Γήινος
|
79,511,457
| 23,512,643
|
Generate a random matrix in Python which satisfies a condition
|
<p>I manually define the following 16 matrices in Python :</p>
<pre><code>matrices = {
"Simulation 1": [
[1, 1, 1, 1, 1, 2],
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 3, 2],
[1, 1, 1, 1, 3, 3],
[1, 1, 1, 1, 3, 3]
],
"Simulation 2": [
[1, 1, 2, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 2, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 3],
[1, 1, 1, 3, 3, 3]
],
"Simulation 3": [
[1, 1, 2, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 3, 3],
[1, 1, 2, 2, 3, 3],
[1, 1, 1, 3, 3, 3],
[1, 1, 3, 3, 3, 3]
],
"Simulation 4": [
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 2, 2, 2],
[3, 1, 3, 3, 3, 2],
[3, 3, 3, 3, 3, 2],
[3, 3, 3, 3, 3, 2],
[3, 3, 3, 3, 3, 3]
],
"Simulation 5": [
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 1, 3, 2],
[1, 3, 3, 3, 3, 3],
[3, 3, 3, 3, 3, 3],
[3, 3, 3, 3, 3, 3]
],
"Simulation 6": [
[1, 1, 1, 1, 1, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 2, 3],
[1, 3, 3, 3, 3, 3],
[1, 3, 3, 3, 3, 3]
],
"Simulation 7": [
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 1, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 3, 2, 2, 3],
[1, 1, 3, 3, 3, 3],
[3, 3, 3, 3, 3, 3]
],
"Simulation 8": [
[1, 1, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 3, 3],
[2, 2, 3, 3, 3, 3],
[2, 2, 2, 3, 3, 3]
],
"Simulation 9": [
[1, 1, 2, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 3, 2],
[1, 1, 1, 1, 3, 3],
[1, 1, 3, 3, 3, 3]
],
"Simulation 10": [
[1, 1, 1, 2, 2, 2],
[1, 1, 2, 2, 2, 2],
[1, 1, 2, 2, 2, 2],
[1, 1, 2, 2, 2, 3],
[1, 1, 1, 1, 3, 3],
[1, 1, 1, 3, 3, 3]
],
"Simulation 11": [
[1, 1, 1, 2, 2, 2],
[1, 1, 2, 2, 2, 2],
[1, 1, 2, 2, 2, 3],
[1, 1, 1, 2, 3, 3],
[1, 1, 1, 3, 3, 3],
[1, 1, 1, 3, 3, 3]
],
"Simulation 12": [
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 2, 2],
[3, 1, 1, 3, 3, 3],
[3, 3, 3, 3, 3, 3],
[3, 3, 3, 3, 3, 3]
],
"Simulation 13": [
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 1, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 3, 3, 3],
[1, 3, 3, 3, 3, 3],
[3, 3, 3, 3, 3, 3]
],
"Simulation 14": [
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 1, 1, 2],
[1, 1, 1, 1, 1, 2],
[1, 1, 1, 3, 3, 2],
[1, 3, 3, 3, 3, 3],
[1, 3, 3, 3, 3, 3]
],
"Simulation 15": [
[1, 1, 1, 2, 2, 2],
[1, 2, 2, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 1, 3, 3],
[1, 1, 1, 3, 3, 3],
[1, 1, 1, 3, 3, 3]
],
"Simulation 16": [
[1, 1, 3, 2, 2, 2],
[1, 1, 3, 2, 3, 3],
[1, 1, 3, 3, 3, 3],
[1, 1, 3, 3, 3, 3],
[1, 1, 3, 3, 3, 3],
[1, 1, 3, 3, 3, 3]
]
}
</code></pre>
<p>When visualized, these look like this:</p>
<p><a href="https://i.sstatic.net/M7oss8pB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M7oss8pB.png" alt="enter image description here" /></a></p>
<p>Positions in the matrix are understood like this:</p>
<pre><code>positions = [
[1, 2, 3, 4, 5, 6],
[7, 8, 9, 10, 11, 12],
[13, 14, 15, 16, 17, 18],
[19, 20, 21, 22, 23, 24],
[25, 26, 27, 28, 29, 30],
[31, 32, 33, 34, 35, 36]
]
</code></pre>
<p>These matrices have the following properties:</p>
<ul>
<li>1 = Red, 2 = Blue, 3 = Green</li>
<li>Position 1 is always Red, Position 6 is always Blue and Position 36 is always green</li>
<li>All circles of the same color can reach all other circles of the same color without touching any other color</li>
</ul>
<p>Here is an example of an invalid matrix (i.e. node 1 (red) can not reach the other red nodes without stepping on blue):</p>
<p><a href="https://i.sstatic.net/bZJiQ7yU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZJiQ7yU.png" alt="enter image description here" /></a></p>
<p><strong>I have the following question:</strong> Is there some algorithm which can be used to rapidly generate (random) valid matrices for this problem? Can something like a tree/graph be used which can efficiently generate 100 hundred such solutions?</p>
<hr />
<p><strong>Updated version of trincot's answer to include visualization:</strong></p>
<pre><code>import random
import matplotlib.pyplot as plt
import numpy as np
def make_matrix(n):
mat = [[0] * n for _ in range(n)]
frontier = set()
def place(row, col, color):
mat[row][col] = color
frontier.discard((row, col, 1))
frontier.discard((row, col, 2))
frontier.discard((row, col, 3))
for next_row, next_col in (row-1, col), (row+1, col), (row, col-1), (row, col+1):
if 0 <= next_row < n and 0 <= next_col < n and mat[next_row][next_col] == 0:
frontier.add((next_row, next_col, color))
place(0, 0, 1)
place(0, n-1, 2)
place(n-1, n-1, 3)
while frontier:
place(*random.choice(list(frontier)))
return mat
def visualize_matrix(mat):
n = len(mat)
colors = np.zeros((n, n, 3))
for i in range(n):
for j in range(n):
if mat[i][j] == 1:
colors[i, j] = [1, 0, 0] # Red for 1
elif mat[i][j] == 2:
colors[i, j] = [0, 0, 1] # Blue for 2
elif mat[i][j] == 3:
colors[i, j] = [0, 1, 0] # Green for 3
plt.figure(figsize=(5, 5))
plt.imshow(colors)
plt.grid(True, color='black', linewidth=0.5)
plt.xticks(np.arange(-0.5, n, 1), [])
plt.yticks(np.arange(-0.5, n, 1), [])
plt.tick_params(length=0)
for i in range(n):
for j in range(n):
plt.text(j, i, str(mat[i][j]), ha="center", va="center", color="white")
plt.tight_layout()
plt.show()
plt.figure(figsize=(15, 10))
for i in range(4):
plt.subplot(2, 2, i+1)
mat = make_matrix(6)
n = len(mat)
colors = np.zeros((n, n, 3))
for r in range(n):
for c in range(n):
if mat[r][c] == 1:
colors[r, c] = [1, 0, 0] # Red for 1
elif mat[r][c] == 2:
colors[r, c] = [0, 0, 1] # Blue for 2
elif mat[r][c] == 3:
colors[r, c] = [0, 1, 0] # Green for 3
plt.imshow(colors)
plt.grid(True, color='black', linewidth=0.5)
plt.title(f"Matrix #{i+1}")
plt.xticks([])
plt.yticks([])
for r in range(n):
for c in range(n):
plt.text(c, r, str(mat[r][c]), ha="center", va="center", color="white")
plt.tight_layout()
plt.show()
for i in range(5):
print(f"Matrix #{i+1}:")
mat = make_matrix(6)
for row in mat:
print(row)
print()
visualize_matrix(mat)
</code></pre>
|
<python><matrix><graph><tree>
|
2025-03-15 16:49:24
| 1
| 6,799
|
stats_noob
|
79,511,416
| 1,234,721
|
Multiple different attachments using Gmail API in Django
|
<p>How do I account for sending multiple (or no) attachments (via <code>request.FILES</code>) in Django using the Gmail API, so I can store the legacy message ID (i.e. "FBf…MiD") for future retrieval to reply in the same thread/thread_ID? I am switching away from SMTP (to be fully deprecated by Google) used by Django's email.send(), which appears to handle file types significantly differently from the Gmail API upload.</p>
<p>This already works with Django's email. How can I convert it to use Gmail's API for multiple-type attachments?</p>
<p>I am specifically struggling to find out how to attach multiple files of different types to the Gmail API. (unsure if I am supposed to serialize the EmailMultiAlternatives, or how to do so)</p>
<p>This is where the magic/headaches happen, and where I have issues. I am not sure why I feel like I'm reinventing the wheel, and I can't find any packages for the server side that make it easier. I can get the 'attachments' to 'attach' to the email and show up in the UI, but the files are not clickable because they are 0 bytes, and they sometimes have empty MIME types despite their actual type.
<a href="https://i.sstatic.net/gwGPz0xI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwGPz0xI.png" alt="enter image description here" /></a></p>
<pre><code>def create_body_message_with_attachments(
sender,
to,
subject,
msgHtml,
msgPlain,
attachmentFiles,
cc=None,
bcc=None,
):
"""Create a message for an email.
Args:
sender: Email address of the sender.
to: Email address of the receiver.
subject: The subject of the email message.
msgHtml: Html message to be sent
msgPlain: Alternative plain text message for older email clients
attachmentFile: The path to the file to be attached.
Returns:
An object containing a base64url encoded email object.
"""
# allow either one recipient as string, or multiple as list
if isinstance(to, list):
to = ", ".join(to)
if cc:
if isinstance(cc, list):
cc = ", ".join(cc)
if bcc:
if isinstance(bcc, list):
bcc = ", ".join(bcc)
message = MIMEMultipart("mixed")
message["to"] = to
message["from"] = sender
message["subject"] = subject
…
# allow either one attachment as string, or multiple as list
if not isinstance(attachmentFiles, list):
attachmentFiles = [attachmentFiles]
# attachmentFiles
# [
# <InMemoryUploadedFile: Screenshot.jpg (image/jpg)>,
# <InMemoryUploadedFile: Screenshot2.png (image/png)>,
# <_io.BufferedReader name='/path/to/quote.pdf'>
# ]
messageA = MIMEMultipart("alternative")
messageR = MIMEMultipart("related")
messageR.attach(MIMEText(msgHtml, "html"))
messageA.attach(MIMEText(msgPlain, "plain"))
messageA.attach(messageR)
message.attach(messageA)
for attachment in attachmentFiles:
# Trying to separate the filename from the file content for different types
# File Name
if hasattr(attachment, "temporary_file_path"):
filename = os.path.basename(attachment.temporary_file_path())
elif hasattr(attachment, "name"):
filename = os.path.basename(attachment.name)
else:
filename = os.path.basename(attachment)
# File Contents
# Content Data
if isinstance(attachment, str) and os.path.exists(attachment):
content_type, _ = mimetypes.guess_type(attachment) or (content_type, None)
with open(attachment, "rb") as f:
file_data = f.read()
# Handle BufferedReader (BytesIO)
elif isinstance(attachment, io.BytesIO):
file_data = attachment.getvalue() # Ensure correct byte data is read
# Handle Django InMemoryUploadedFile
elif isinstance(attachment, InMemoryUploadedFile):
content_type = attachment.content_type or content_type
file_data = attachment.read()
# Type
# I have tried different ways to get the MIME type,
# but am unsure of the most pythonic way to do so, many opions out there
mim = magic.Magic(mime=True)
try:
c_t = mim.from_file(filename)
except OSError as e:
# Magic needs to differentiate?! between files and streams
c_t = mim.from_buffer(attachment.read(2048))
# Magic often returns 'application/x-empty'
print(f"file: {attachment} with {content_type=} and {c_t=}")
main_type, sub_type = content_type.split("/", 1)
if main_type == "text":
# Unsure if I have to open and close for each type
# fp = open(attachment, "rb")
msg_attachment = MIMEText(file_data, _subtype=sub_type)
# fp.close()
elif main_type == "image":
# Unsure if I have to open and close for each type
# fp = open(attachment, "rb")
msg_attachment = MIMEImage(file_data, _subtype=sub_type)
# fp.close()
elif main_type == "audio":
msg_attachment = MIMEAudio(file_data, _subtype=sub_type)
elif main_type == "application" and sub_type == "pdf":
msg_attachment = MIMEApplication(attachment.read(), _subtype=sub_type)
else:
msg_attachment = MIMEBase(main_type, sub_type)
msg_attachment.set_payload(attachment.read())
encoders.encode_base64(msg_attachment)
msg_attachment.add_header(
'Content-Disposition', 'attachment', filename=f'{filename}'
)
message.attach(msg_attachment)
raw = base64.urlsafe_b64encode(message.as_bytes())
raw = raw.decode()
body = {"raw": raw}
return body
</code></pre>
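<p>As a sanity check I have also been experimenting with a much smaller variant built on the stdlib <code>email.message.EmailMessage</code>. It is only a sketch (it skips cc/bcc and the related/inline parts), but it bypasses the manual MIME classes and reads each upload's bytes exactly once, which I suspect is where my 0-byte attachments come from:</p>
<pre><code>import base64
import mimetypes
import os
from email.message import EmailMessage


def create_body_with_attachments_sketch(sender, to, subject, msg_html, msg_plain, attachment_files):
    msg = EmailMessage()
    msg["To"] = ", ".join(to) if isinstance(to, list) else to
    msg["From"] = sender
    msg["Subject"] = subject
    msg.set_content(msg_plain)
    msg.add_alternative(msg_html, subtype="html")

    for attachment in attachment_files:
        # Works for InMemoryUploadedFile and BufferedReader alike: both expose .name and .read()
        filename = os.path.basename(getattr(attachment, "name", "attachment.bin"))
        content_type, _ = mimetypes.guess_type(filename)
        main_type, _, sub_type = (content_type or "application/octet-stream").partition("/")
        data = attachment.read()  # read the bytes exactly once
        msg.add_attachment(data, maintype=main_type, subtype=sub_type, filename=filename)

    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    return {"raw": raw}
</code></pre>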
<p>Trying to pass the EmailMultiAlternatives directly to the Gmail API I get <code>RFC822 payload message string or uploading message via /upload/* URL required", 'domain': 'global', 'reason': 'invalidArgument'</code>, and trying to encode the message to JSON I get errors similar to <code>'utf-8' codec can't decode byte 0x93 in position 10: invalid start byte</code>. I am not sure how to pass correctly encoded attachments and a message to the API.</p>
<p>Notes(that help differentiate other SO questions)</p>
<ul>
<li>Not using SMTP, as that has already begun sundowning/deprecating, and I am trying to avoid using the Google account setting of "less secure apps"</li>
<li>Using <code>django.core.mail</code>'s EmailMultiAlternatives to compose and send, but <code>mail.send(fail_silently=True)</code> does not return the message or give an ID on the Gmail server, so finding the exact email afterwards is not deterministic given the similarities between messages</li>
<li>In Django, when the upload files come through <code>request.FILES</code>, they are of type <code>InMemoryUploadedFile</code> from <code>django.core.files.uploadedfile</code> (potentially with different MIME types), but the internal attachment of a generated PDF is of type <code>_io.BufferedReader</code>. This is presumably causing my headaches</li>
<li>using Python 3.11.7 in dev and Python 3.12.3 in prod, and aware that in 3.13 that <code>mimetypes.guess_type()</code> will be deprecated per <a href="https://docs.python.org/3/library/mimetypes.html" rel="nofollow noreferrer">this</a></li>
<li>Sending the EmailMultiAlternatives directly to the Gmail API <code>send().execute()</code> I get an error of <code>Object of type EmailMultiAlternatives is not JSON serializable</code></li>
<li>The MIME information from the received email is:</li>
</ul>
<p>.</p>
<pre><code>--===============2268240127970561189==
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="Screenshot 2025-03-12 at 1.54.01PM.png"
--===============2268240127970561189==
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="Screenshot 2025-03-12 at 4.44.11PM.png"
--===============2268240127970561189==
Content-Type: application/octet-stream
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="Q-DFD-12345-031525-0XEH-1153.pdf"
--===============2268240127970561189==--
</code></pre>
<ul>
<li>I have searched many SO questions, walkthroughs, API guides, etc…. I believe there not to be any current examples that satisfy all the following: Django InMemoryUploadedFile, Gmail API, multiple attachments and of different types, non SMTP, traceable email/threadIDs, and mostly updated python. Feel free to share if you find one.</li>
</ul>
<p>Other files that may help understand flow</p>
<p>view.py</p>
<pre><code>class MakeQuoteWithItems(…, CreateView):
def post(self, request, *args, **kwargs):
# init a Django Model
quote = QuoteClass(request)
# with a generated PDF, and send the associated email
quote.make_and_email_quote()
</code></pre>
<p>Current working way (SMTP)</p>
<pre><code>from django.core.mail import EmailMultiAlternatives
def emailTheQuote(quote, attachments=None):
from_email = …
subject = …
email_context = …
html_content = render_to_string("email/tempalte.html", email_context, quote.request)
to_emails = … (string, [] or whatever format needed)
…
#
# Current working way sending through SMTP
#
# Django settings somehow gets the email out
# EMAIL_USE_TLS = False
# EMAIL_HOST = os.getenv("EMAIL_PROVIDER_SMTP_NOTIFIER_HOST")
# EMAIL_HOST_USER = os.getenv("EMAIL_PROVIDER_SMTP_NOTIFIER_EMAIL")
# EMAIL_HOST_PASSWORD = os.getenv("EMAIL_PROVIDER_SMTP_NOTIFIER_PASSWORD")
# EMAIL_PORT = 587
#
mail = EmailMultiAlternatives(
subject,
strip_tags(html_content),
to=to_emails,
bcc=bcc_emails,
from_email=from_email,
)
mail.attach_alternative(html_content, "text/html")
mail.attach(
quote.pdf_canvas._filename, open(quote.canvas._filename, "rb").read(), "application/pdf"
)
if quote.attachments:
for attachment in quote.attachments:
mail.attach(attachment._name, attachment.read(), attachment.content_type)
mail.send(fail_silently=False)
# But can't get a hold of the mail going out to get it's ID to reply to later
</code></pre>
<p>Proposed way (Gmail API)</p>
<pre><code>def emailTheQuote(quote, attachments=None):
from_email = …
subject = …
# Instead -
gmail_email = create_and_send_gmail_message(
from_email,
to_emails,
subject,
html_content,
text_content,
quote.request,
attachmentFiles=attachments, # user uploaded and the generated PDF
)
# Some Django Model to query to get the email.message_id and email.thread_id
GmailEmailLog.objects.create(
gmail_email=gmail_email,
message_id=gmail_email.message_id,
# Thread ids are the same for the first message in a thread, but remain
# the same for all subsequent messages sent within the thread
thread_id=gmail_email.thread_id,
quote_id=quote.id
…
)
return gmail_email
</code></pre>
<p>helper_functions.py</p>
<pre><code>def create_and_send_gmail_message(
sender,
to,
subject,
msgHtml,
msgPlain,
request,
attachmentFiles=None,
cc=None,
bcc=None,
):
if attachmentFiles:
message_body = create_body_message_with_attachments(sender,to,subject,msgHtml,msgPlain,attachmentFiles,cc,bcc,)
else:
# This is not an issue
message_body = create_messag_body_html(sender,to,subject,msgHtml,msgPlain,cc,bcc,)
result = SendMessageInternal(message_body, request)
return result
def SendMessageInternal(incoming_message_body, request):
credentials = get_credentials(request)
service = discovery.build("gmail", "v1", credentials=credentials)
user_id = settings.EMAIL_GMAIL_USERID
try:
msg = (
service.users()
.messages()
.send(
userId=user_id,
body=incoming_message_body,
)
.execute()
)
print(f"Message Id: {msg['id']}")
return msg
except errors.HttpError as error:
print(f"An error occurred: {error}")
return f"{error=}"
return "OK"
</code></pre>
<p>EMAIL Raw files</p>
<p>THIS IS THE PREVIOUS WORKING EMAIL, pseudo-anonymised</p>
<pre><code>Delivered-To: gmailAccount@gmail.com
Received: by 20:…:ca with SMTP id sp132931;
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
X-Received: by 20:…:5c with SMTP id 20….….…06;
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
ARC-Seal: i=3; a=rsa-sha256; t=1676522553; cv=pass;
d=google.com; s=arc-20160816;
b=FBh… …zfk==
ARC-Message-Signature: i=3; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
h=message-id:date:to:from:subject:mime-version:dkim-signature
:delivered-to;
bh=Mn…tY=;
b=zA…fO+wb…9Z+X+GZ…l8+QC…w3+rs…RW+ch…DQ==
ARC-Authentication-Results: i=3; mx.google.com;
dkim=pass header.i=@domain.com header.s=emailProvider header.b=er…YA;
arc=pass (i=2 dkim=pass dkdomain=production_domain.com);
spf=neutral (google.com: 209.….41 is neither permitted nor denied by best guess record for domain of my_=gmailAccount=gmail.com@personal_test_domain.org) smtp.mailfrom="my_=gmailAccount=gmail.com@personal_test_domain.org"
Return-Path: <my_=gmailAccount=gmail.com@personal_test_domain.org>
Received: from mail-sor-f41.google.com (mail-sor-f41.google.com. [209.….41])
by mx.google.com with SMTPS id a5…23.ip.ad.dr.e.ss
for <gmailAccount@gmail.com>
(Google Transport Security);
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
Received-SPF: neutral (google.com: 209.….41 is neither permitted nor denied by best guess record for domain of my_=gmailAccount=gmail.com@personal_test_domain.org) client-ip=209.….41;
Authentication-Results: mx.google.com;
dkim=pass header.i=@production_domain.com header.s=emailProvider header.b=er…YA;
arc=pass (i=2 dkim=pass dkdomain=production_domain.com);
spf=neutral (google.com: 209.….41 is neither permitted nor denied by best guess record for domain of my_=gmailAccount=gmail.com@personal_test_domain.org) smtp.mailfrom="my_=gmailAccount=gmail.com@personal_test_domain.org"
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20210112;
h=message-id:date:to:from:subject:mime-version:dkim-signature
:delivered-to:x-gm-message-state:from:to:cc:subject:date:message-id
:reply-to;
bh=f3lQdqf4Na3Uj8i45MnNgmpkfOMWo5t8aOYO8suzazE=;
b=6m1jghQ3ciL+qfXsAIFeM5EZ54BIjxX5aCebYBX/neCEaKXoVycDZAC0bAl4FpeiNv
UwkST9cVeWQweICf6HKwQ1J2rQSELlhRLjTqNvM5pBbPMQZXc+g/wrATZd+2botCqZO/
Y6zog9xQWHs/IXeZYV2T+H1AoBZIow9DiYhvl9nD8/zjwAsC5BfvANVQVhpmERKPksYN
9T0L9SX83HokibmO3bZzb5DTK1eJGQDeysgznNHDERZIHTF7W6rq+7lVoqG3wj7auX3F
jsVllk6E7yXxtuBeJ3PQO9ldtaNU/TxaLy3u7Cq2sqlaR5ttqS003cIO/M1IZo/Kr3oT
NtqQ==
X-Gm-Message-State: AO0yUKUng8IxDTR5Pa4seNHrOauqQx1ULgwNakQLuDabR5Df/CR+pbfh 52r6R/0O8UXEuIp4MustAWlEXSMAeWz8hcEWmUwGn5aF1s8PEz6f+UvEcEg=
X-Received: by 2002:a50:bb48:0:b0:4ac:b8e1:7410 with SMTP id y66-20020a50bb48000000b004acb8e17410mr2364158ede.6.1676522552160;
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
X-Forwarded-To: gmailAccount@gmail.com
X-Forwarded-For: chris@personal_test_domain.org gmailAccount@gmail.com
Delivered-To: jerome@personal_test_domain.org
Received: by 2002:a54:2789:0:b0:1f9:34b:de17 with SMTP id n9csp225579ecp;
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
X-Google-Smtp-Source: AK7set9WzXuuJYVtYFjdtn1LrQnmLKtM2tv4acM4vnzclcNgqHEAS0FQQMXr004S9ccLFIJOWep/
X-Received: by 2002:a17:902:e5c3:b0:19a:a80e:a6d5 with SMTP id u3-20020a170902e5c300b0019aa80ea6d5mr5626264plf.23.1676522545554;
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
ARC-Seal: i=2; a=rsa-sha256; t=1676522545; cv=pass;
d=google.com; s=arc-20160816;
b=fWktl8SgvQeHJZHh5vkX/H+BDnrVsEHD2gJQ7z5kAAcO+0G3MzKJiksm5Q3Tma46/s
vPBk1I9HeFFlmOVDNfZzrpSqNtKzrbRh6KDSFgumiAl/IYWzyul/Y9izd3uWs0IoQhBT
+SutRjEE5ZqgR5bLbNbBBaAkpVIWIbj3PEHxHR3fIrykqReaC0S9x/IlcTBRXdji0I/Y
HbVFL9oiyLU3JoV5HUuU//oQbT648XPTZeawUxP41Hz8PJDYj3riyo32XmlxRNLXRvTZ
+Zb2x6EPOQezwDXb7XR8CgAIQ4KJvxIl7IuArmOXQTRf45gCZywhEMHUxKtW0o/IkZNw
qzjA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
h=message-id:date:to:from:subject:mime-version:dkim-signature;
bh=f3lQdqf4Na3Uj8i45MnNgmpkfOMWo5t8aOYO8suzazE=;
b=ED7g4/sG1svFF6LH/QjWHutvwM/kYqlW3n6IUmuCdvqUHsRR9JFqwwE4Sj/1Xjf8qA
gUUUWgGWSxsVC6Oqoqt48PjmRGuVq8y5LYIIGNHgfe/FOScOYl2w1mJup16MwTrXlq51
QF9jJe6fGH9P/uBLUC0QwpwFhAmHVjbwMXsw1zoobjmkKNHRERJWUTzLjNWiiVYmeVog
CvwzW49kRjiapIlQnGCnIje7c4ywLtsU9z6g6VIxwyHoJHEWMO4HdHbGsiwx6LL3VT5O
rv0bJ5lHZnpZnnhWZES+Q8ewr/BcKB/0bSFclfDMPBtbKWM4AVF1dfNcIjRTh8cRdPV2
/LNQ==
ARC-Authentication-Results: i=2; mx.google.com;
dkim=pass header.i=@production_domain.com header.s=emailProvider header.b=er…YA;
arc=pass (i=1);
spf=neutral (google.com: 23.83.212.48 is neither permitted nor denied by best guess record for domain of development-quote@production_domain.com) smtp.mailfrom=DEVELOPMENT-quote@production_domain.com
Return-Path: <DEVELOPMENT-quote@production_domain.com>
Received: from dog.elm.relay.mailchannels.net (dog.elm.relay.mailchannels.net. [23.83.212.48])
by mx.google.com with ESMTPS id e10-20020a170902784a00b00194a1f665b5si293781pln.570.2023.02.15.20.42.23
(version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
Wed, 15 Feb 2023 19:45:01 -0500 (ET)
Received-SPF: neutral (google.com: 23.83.212.48 is neither permitted nor denied by best guess record for domain of development-quote@production_domain.com) client-ip=23.83.212.48;
X-Sender-Id: emailProvider|x-authsender|donotreply@notifications.production_domain.com
Received: from relay.mailchannels.net (localhost [127.0.0.1]) by relay.mailchannels.net (Postfix) with ESMTP id 3E82F8810AA; Thu, 16 Feb 2023 04:42:23 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.emailProvider.com (unknown [127.0.0.6]) (Authenticated sender: emailProvider) by relay.mailchannels.net (Postfix) with ESMTPA id 8A…14; Thu, 16 Feb 2023 04:42:22 +0000 (UTC)
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=76…54; a=rsa-sha256; cv=none; b=qk…nC/bA…de/ Dc…mf/s8…3u+XQ…Wx+8c/D…g/SR…up+zl…zf…==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=mailchannels.net; s=arc-2022; t=16…25; h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
to:to:cc:mime-version:mime-version:content-type:content-type:
dkim-signature; bh=lQ…sY=; b=eK…+/Qu…+m2CZ9JA7 lp…+tY…+lvjD 7T…+b…JK…+dh…+D/yytOf Yt…+ht/Kf0rAlyIeuNn7tT9Wu1au4/dR…+ch…w1/gz…+7Z==
ARC-Authentication-Results: i=1; rspamd-b9c55767f-ngqs4; auth=pass smtp.auth=emailProvider smtp.mailfrom=DEVELOPMENT-quote@production_domain.com
X-Sender-Id: emailProvider|x-authsender|donotreply@notifications.production_domain.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: emailProvider|x-authsender|donotreply@notifications.production_domain.com
X-MailChannels-Auth-Id: emailProvider
X-Relation-Continue: 21…07
X-MC-Loop-Signature: 22…30:69…94
X-MC-Ingress-Time: 65…54
Received: from pdx1-sub0-mail-a306.emailProvider.com (pop.emailProvider.com [ip.ad.dr.es.s1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384) by ip.ad.dr.es.s9 (trex/6.7.1); Thu, 16 Feb 2023 04:42:23 +0000
Received: from 1.0.0…0.0.0.0.0.0.ip6.arpa (cpe-71-68-87-31.carolina.res.rr.com [ip.ad.dr.es.s]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: donotreply@notifications.production_domain.com) by pdx1-sub0-mail-a306.emailProvider.com (Postfix) with ESMTPSA id Mk…Qz; Wed, 15 Feb 2023 19:45:01 -0500 (ET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=production_domain.com; s=emailProvider; t=16…25; bh=lQ…az=; h=Content-Type:Subject:From:To:Date; b=er…YA…gV/wq…9G
Ky…z3/iA…QG/eb…we+aY/4p…cD/y8…7d
WtajPHs+5Z…YB+xX…5D
Z6…UP+j2…KJ
I2…ue/6H…l6/Rx…83+RZ…8D/zD…mP
jB…iC/w==
Content-Type: multipart/mixed; boundary="===============867…5309=="
MIME-Version: 1.0
Subject: Quote - QUOTE-FSL-021523-2342
From: DEVELOPMENT-quote@production_domain.com
To: other@personal_test_domain.org, second@personal_test_domain.org, third@personal_test_domain.org, actual@production.com
Date: Thu, 16 Feb 2023 04:42:19 -0000
Message-ID: <867…5309@1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa>
--===============45…72==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
High Level Notes:
quote with attachments
Purchase Order: na
Regular thank you text.
Regular body Text.
--===============45…72==
Content-Type: application/pdf
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="/path/to/quote.pdf"
--===============45…72==
Content-Type: image/jpeg
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="_bosch_cp1.jpeg"
--===============45…72==
Content-Type: image/jpeg
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="_cr_valve_bosch.jpeg"
--===============45…72==
Content-Type: application/pdf
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="prod_SERVER_quote.pdf"
--===============45…72==--
</code></pre>
<p>THIS IS THE GMAIL API MESSAGE, received with three "empty" attachments, pseudo-anonymised</p>
<pre><code>Received: from 21…17 named unknown by gmailapi.google.com with
HTTPREST; Mon, 17 Mar 2025 09:34:56 -0500
Received: from 21…17 named unknown by gmailapi.google.com with HTTPREST; Mon, 17 Mar 2025 09:34:56 -0500
Content-Type: multipart/mixed; boundary="===============50…13=="
MIME-Version: 1.0
From: gmailAccount@gmail.com
To: q@dev_domain.com
Cc: cc@dev_domain.com
Subject: Quote - Q-DFD-12345-031725-L0KQ-1034
Date: Mon, 17 Mar 2025 09:34:56 -0500
Message-Id: <oB…7k@mail.gmail.com>
--===============50…13==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
DEFAULT MESSAGE No message body text was provided - Have a nice day
--===============50…13==
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="Screenshot 2025-03-12 at 4.44.11PM.png"
--===============50…13==
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="Screenshot 2025-03-12 at 1.54.01PM.png"
--===============50…13==
Content-Type: application/pdf
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="Q-DFD-12345-031725-UNIQ-1034.pdf"
--===============50…13==--
</code></pre>
<p>Thanks in advance :)</p>
|
<python><python-3.x><django><gmail-api><mime-types>
|
2025-03-15 16:15:28
| 4
| 19,295
|
chris Frisina
|
79,511,352
| 16,236,118
|
How can I change the success url of allauth AddEmailForm?
|
<p>I have a UserDetail page that includes several forms.</p>
<pre><code>from allauth.account.forms import ChangePasswordForm, AddEmailForm
User = get_user_model()
class UserDetailView(LoginRequiredMixin, DetailView):
model = User
slug_field = "username"
slug_url_kwarg = "username"
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
# context["form_info_username"] =
context["form_add_mail"] = AddEmailForm(user=self.request.user)
context["form_change_password"] = ChangePasswordForm(user=self.request.user)
return context
</code></pre>
<p>When I add a new email:</p>
<pre><code><form method="post" action="{% url 'account_email' %}" class="add_email mt-2">
{% csrf_token %}
{{ form_add_mail|crispy }}
<div class="flex justify-end mt-2">
<button type="submit"
name="action_add"
class="bg-green-600 hover:bg-green-800 dark:bg-green-600 dark:hover:bg-green-800 font-semibold text-white rounded-sm p-2 text-xs sm:text-sm sm:px-8 ">
{% translate "Add E-mail" %}
</button>
</div>
</form>
</code></pre>
<p>The form/view(<code>{% url 'account_email' %}</code>) redirects to http://localhost:8000/de/accounts/email/, but I would prefer it to redirect to the same page where I submitted instead.</p>
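<p>The direction I am currently exploring (a minimal sketch, not yet verified): subclass allauth's <code>EmailView</code>, override <code>get_success_url()</code>, and register that subclass on the <code>account_email</code> URL before allauth's own include. The <code>users:detail</code> URL name below is a placeholder for my own user detail route:</p>
<pre><code>from allauth.account.views import EmailView
from django.urls import reverse


class EmailViewRedirectBack(EmailView):
    def get_success_url(self):
        # Send the user back to their detail page instead of /accounts/email/
        return reverse("users:detail", kwargs={"username": self.request.user.username})
</code></pre>
<p>and then in <code>urls.py</code> something like <code>path("accounts/email/", EmailViewRedirectBack.as_view(), name="account_email")</code>, placed before <code>include("allauth.urls")</code>.</p>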
|
<python><django><django-forms><django-allauth>
|
2025-03-15 15:34:12
| 1
| 1,636
|
JAdel
|
79,511,254
| 7,886,968
|
Can I specify the replacement character in str([value], encoding='utf-8', errors='replace')
|
<p>Is it possible to specify the replacement character used by <code>str(xxx,encoding='utf-8', errors='replace')</code> to be something other than the diamond-question-mark character (�)?</p>
<p>I am attempting to fix up a GPS data filtering routine in Python for my GoPiGo robot.</p>
<p>The GPS module I am using returns NMEA "GPxxx" sentences as pure 8-bit ASCII values beginning with "$GPxxx" where "xxx" is a three character code for the type of data in the sentence.</p>
<p>For example GPS NMEA serial data sentences might look something like this:</p>
<pre class="lang-none prettyprint-override"><code>$GPRMC,100905.00,A,5533.07171,N,03734.91789,E,2.657,,150325,,,A*76
$GPVTG,,T,,M,2.657,N,4.920,K,A*2A
$GPGGA,100905.00,5533.07171,N,03734.91789,E,1,04,2.81,183.7,M,13.4,M,,*58
$GPGSA,A,3,12,29,20,06,,,,,,,,,8.80,2.81,8.34*0A
$GPGSV,3,1,10,04,04,007,,05,08,138,,06,24,046,14,11,49,070,*77
$GPGSV,3,2,10,12,48,134,18,20,28,107,08,25,78,210,,28,38,290,11*7E
$GPGSV,3,3,10,29,49,251,14,31,24,315,*7E
$GPGLL,5533.07171,N,03734.91789,E,100905.00,A,A*69
</code></pre>
<p>Right now I am using str() to read the serial data, convert it to UTF-8 from raw ASCII, and print it. Sometimes the initial read generates some garbage characters which throw an exception, so I use <code>errors='ignore'</code> to prevent this:</p>
<pre><code>self.raw_line = str(self.ser.readline().strip(),encoding='utf-8', errors='ignore')
</code></pre>
<p>The result is that when I start a serial communication session, some of the first characters read into the input stream are "garbage" characters. This appears to be a characteristic of the way Python reads the stream, since PuTTY on Windows and miniterm on Linux don't show these characters.</p>
<pre class="lang-none prettyprint-override"><code>Reading GPS sensor for location . . .
If you are not seeing coordinates appear, your sensor needs to be
outside to detect GPS satellites.
Reading GPS sensor for location . . .
JSSH
ubbbbrb95)$GPRMC,131435.00,V,,,,,,,150325,,,N*7C
$GPVTG,,,,,,,,,N*30
$GPGGA,131435.00,,,,,0,00,99.99,,,,,,*67
$GPGSA,A,1,,,,,,,,,,,,,99.99,99.99,99.99*30
$GPGSV,3,1,12,05,36,054,,07,03,359,,13,10,076,,15,13,113,*75
</code></pre>
<p>...where <code>JSSHubbbbrb95)</code> represents a string of nonsensical characters that appear before the start of valid data.</p>
<p>What I want to do is replace invalid characters with "" instead of the diamond-question-mark.</p>
<p>I understand that I can filter them out. However it would be much more convenient if I could replace them with "nothing" at the time they're read.</p>
<p>Is it possible to specify a different replacement character when using <code>errors='replace'</code>?</p>
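<p>I did come across <code>codecs.register_error()</code>, which appears to let me register my own handler and then refer to it by name instead of <code>'replace'</code>. A sketch of what I have in mind (assuming I read the handler API correctly: it receives the <code>UnicodeDecodeError</code> and returns a replacement string plus the position to resume decoding at):</p>
<pre><code>import codecs

def _drop_bad_bytes(exc):
    # Replace the offending bytes with nothing and continue after them.
    return ("", exc.end)

codecs.register_error("drop", _drop_bad_bytes)

raw = b"\xff\xfeJSSH$GPRMC,131435.00,V,,,,,,,150325,,,N*7C"
line = str(raw, encoding="utf-8", errors="drop")
print(line)  # invalid bytes are silently removed, valid ASCII is kept
</code></pre>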
|
<python><string><utf-8>
|
2025-03-15 14:11:10
| 1
| 643
|
Jim JR Harris
|
79,511,208
| 6,035,977
|
Why does SymPy incorrectly evaluate inequalities for infinite symbols?
|
<p>I'm using SymPy to handle symbolic mathematics involving possibly infinite values. I need to define a symbol with <code>y >= 0</code> and <code>y <= sympy.oo</code>, and tried the following, but the behaviour is not what I would expect:</p>
<pre class="lang-py prettyprint-override"><code>import sympy
y = sympy.symbols("y", finite=False, extended_nonnegative=True)
print(y >= 0) # True (as expected)
print(y < sympy.oo) # False (should be uncertain)
print(y > 19) # True (should be uncertain)
</code></pre>
<p>How can I ensure the last two comparisons aren't evaluated?</p>
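<p>One workaround I am weighing (just a sketch, and I am not certain it covers my use case): drop <code>finite=False</code> so that finiteness stays undetermined. SymPy then seems to leave the last two comparisons unevaluated, at the cost of no longer asserting that <code>y</code> may actually be infinite:</p>
<pre class="lang-py prettyprint-override"><code>import sympy

y = sympy.symbols("y", extended_nonnegative=True)  # finiteness left undetermined

print(y >= 0)        # True
print(y < sympy.oo)  # stays unevaluated: y < oo
print(y > 19)        # stays unevaluated: y > 19
</code></pre>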
|
<python><sympy><symbols>
|
2025-03-15 13:31:36
| 1
| 333
|
Corram
|
79,511,189
| 12,323,013
|
matplotlib waterfall plot with surfaces shows black artifacts on border of plot
|
<p>I have a script to write a heatmap (or contour map) inside an arbitrary closed shape. A bounding box of grid points is created and a mask is used to clip any points outside of the shape. However, if I want to create a stacked plot of these maps (to show changes along a fourth dimension), the grid points are painted black.</p>
<p>Here is the script for a single plot:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from scipy.interpolate import griddata
arbitrary_shape_points = [
(0.0, 0.5), (0.1, 0.6), (0.3, 0.55), (0.5, 0.4), (0.6, 0.2), # Top curve
(0.65, 0.0), (0.6, -0.2), (0.5, -0.4), (0.3, -0.55), (0.1, -0.6), # Bottom curve right
(0.0, -0.5), (-0.1, -0.6), (-0.3, -0.55), (-0.5, -0.4), (-0.6, -0.2), # Bottom curve left
(-0.65, 0.0), (-0.6, 0.2), (-0.5, 0.4), (-0.3, 0.55), (-0.1, 0.6), # Top curve left
(0.0, 0.5) # Closing point
]
shape_path = Path(arbitrary_shape_points)
np.random.seed(42)
num_points = 100
x_data = np.random.uniform(-0.7, 0.7, num_points)
y_data = np.random.uniform(-0.7, 0.7, num_points)
z_data = np.pi*np.sin(np.pi * x_data) + np.exp(-np.pi*y_data)
# Bounding box
shape_xmin = min([p[0] for p in arbitrary_shape_points[:-1]])
shape_ymin = min([p[1] for p in arbitrary_shape_points[:-1]])
shape_xmax = max([p[0] for p in arbitrary_shape_points[:-1]])
shape_ymax = max([p[1] for p in arbitrary_shape_points[:-1]])
# Grid
grid_resolution = 500
x_grid = np.linspace(shape_xmin, shape_xmax, grid_resolution)
y_grid = np.linspace(shape_ymin, shape_ymax, grid_resolution)
xx, yy = np.meshgrid(x_grid, y_grid)
# Interpolate data
interpolation_method = 'cubic'
zz_interpolated = griddata((x_data, y_data), z_data, (xx, yy), method=interpolation_method)
# Mask the grid outside the shape
grid_points = np.column_stack((xx.flatten(), yy.flatten()))
mask = shape_path.contains_points(grid_points).reshape(xx.shape)
zz_masked = np.where(mask, zz_interpolated, np.nan)
# Plot Heatmap using pcolormesh
plt.figure(figsize=(8, 6))
heatmap = plt.pcolormesh(xx, yy, zz_masked, cmap='viridis', shading='auto')
plt.colorbar(heatmap, label='Z Value')
plt.legend()
plt.title('Heatmap within Arbitrary Shape')
plt.xlabel('X')
plt.ylabel('Y')
x_shape_original, y_shape_original = zip(*arbitrary_shape_points)
plt.plot(x_shape_original, y_shape_original, 'k-', linewidth=1)
# create whitespace around the bounding box
whitespace_factor = 1.2
plt.xlim(shape_xmin * whitespace_factor, shape_xmax * whitespace_factor)
plt.ylim(shape_ymin * whitespace_factor, shape_ymax * whitespace_factor)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
</code></pre>
<p>The result is this:
<a href="https://i.sstatic.net/EFNX3rZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EFNX3rZP.png" alt="enter image description here" /></a></p>
<p>This is the stacked plot script:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from scipy.interpolate import griddata
arbitrary_shape_points = [
(0.0, 0.5), (0.1, 0.6), (0.3, 0.55), (0.5, 0.4), (0.6, 0.2), # Top curve
(0.65, 0.0), (0.6, -0.2), (0.5, -0.4), (0.3, -0.55), (0.1, -0.6), # Bottom curve right
(0.0, -0.5), (-0.1, -0.6), (-0.3, -0.55), (-0.5, -0.4), (-0.6, -0.2), # Bottom curve left
(-0.65, 0.0), (-0.6, 0.2), (-0.5, 0.4), (-0.3, 0.55), (-0.1, 0.6), # Top curve left
(0.0, 0.5) # Closing point
]
shape_path = Path(arbitrary_shape_points)
np.random.seed(42)
num_points = 100
x_data = np.random.uniform(-0.7, 0.7, num_points)
y_data = np.random.uniform(-0.7, 0.7, num_points)
fourth_dimension_values = np.linspace(0, 1, 5)
shape_xmin = min([p[0] for p in arbitrary_shape_points[:-1]])
shape_ymin = min([p[1] for p in arbitrary_shape_points[:-1]])
shape_xmax = max([p[0] for p in arbitrary_shape_points[:-1]])
shape_ymax = max([p[1] for p in arbitrary_shape_points[:-1]])
grid_resolution = 100
x_grid = np.linspace(shape_xmin, shape_xmax, grid_resolution)
y_grid = np.linspace(shape_ymin, shape_ymax, grid_resolution)
xx, yy = np.meshgrid(x_grid, y_grid)
grid_points = np.column_stack((xx.flatten(), yy.flatten()))
mask = shape_path.contains_points(grid_points).reshape(xx.shape)
interpolation_method = 'cubic'
fig = plt.figure(figsize=(10, 8), facecolor=(0, 0, 0, 0))
ax = fig.add_subplot(111, projection='3d', facecolor=(0, 0, 0, 0))
z_offset = 0
z_step = 1
for i, fd_value in enumerate(fourth_dimension_values):
z_data = np.pi*np.sin(np.pi * x_data) + np.exp(-np.pi*y_data) + fd_value
zz_interpolated = griddata((x_data, y_data), z_data, (xx, yy), method=interpolation_method)
# Mask the grid outside the shape
zz_masked = np.where(mask, zz_interpolated, np.nan)
# Prepare Z values for 3D plot - constant Z for each slice, offset along z-axis
z_surface = np.full_like(xx, z_offset + i * z_step)
z_min_slice = np.nanmin(zz_masked)
z_max_slice = np.nanmax(zz_masked)
if z_max_slice > z_min_slice:
zz_normalized = (zz_masked - z_min_slice) / (z_max_slice - z_min_slice)
else:
zz_normalized = np.zeros_like(zz_masked)
# Create facecolors from normalized data
facecolors_heatmap = plt.cm.viridis(zz_normalized)
# Make masked areas fully transparent
facecolors_heatmap[np.isnan(zz_masked)] = [0, 0, 0, 0]
# Plot each heatmap slice as a surface
surf = ax.plot_surface(xx, yy, z_surface, facecolors=facecolors_heatmap, linewidth=0, antialiased=False, shade=False, alpha=0.8, rstride=1, cstride=1)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Fourth Dimension Index')
ax.set_title('Stacked Heatmaps along Fourth Dimension')
ax.view_init(elev=30, azim=-45)
ax.set_box_aspect([np.diff(ax.get_xlim())[0], np.diff(ax.get_ylim())[0], np.diff(ax.get_zlim())[0]*0.5])
plt.show()
</code></pre>
<p>The result is this:
<a href="https://i.sstatic.net/DahVv7m4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DahVv7m4.png" alt="enter image description here" /></a></p>
<p>I am not sure how to stop the black border from appearing. I tried setting those points to be transparent but that does not do anything.</p>
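<p>For what it's worth, my current suspicion (not confirmed) is that the explicit <code>alpha=0.8</code> argument on <code>plot_surface</code> overrides the per-face alpha channel, so the "transparent" masked faces (<code>[0, 0, 0, 0]</code>) end up rendered as 80%-opaque black. A sketch of the change I am testing, with the opacity baked into the facecolors instead:</p>
<pre><code># Bake the desired opacity into the facecolors instead of passing alpha=...
facecolors_heatmap = plt.cm.viridis(zz_normalized)
facecolors_heatmap[..., 3] = 0.8                         # per-face alpha for valid cells
facecolors_heatmap[np.isnan(zz_masked)] = [0, 0, 0, 0]   # fully transparent outside the shape

surf = ax.plot_surface(
    xx, yy, z_surface,
    facecolors=facecolors_heatmap,
    linewidth=0, antialiased=False, shade=False,
    rstride=1, cstride=1,   # note: no alpha= keyword here
)
</code></pre>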
|
<python><numpy><matplotlib>
|
2025-03-15 13:20:17
| 1
| 349
|
noobquestionsonly
|
79,511,188
| 3,815,773
|
Gauss Fitting data with scipy and getting strange answers on Fit Quality
|
<p>I have gamma spectra and I am doing Gaussian fits to the peaks using Python with scipy. This works well, but trying to get a number for fit quality (with the intent of some automation) returns very odd numbers.</p>
<p>The scipy command is:</p>
<pre><code>response = scipy.optimize.curve_fit(gauss, e, c, param0, full_output=True, absolute_sigma=True, method=FitMethod, bounds=bounds)
</code></pre>
<p>I get the fit quality from:</p>
<pre><code>fitQuality = np.linalg.cond(response[1]) # PCOV is response[1]
</code></pre>
<p>FitQuality is a value from zero (excellent fit) up to infinity (beyond lousy). As an example, here is a fit (shaded green) to the K-40 Gamma line in a CsI detector.
<a href="https://i.sstatic.net/7oDbE1fe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oDbE1fe.png" alt="Gauus Fit to K-40 Gamma line from a CsI detector" /></a></p>
<p>Given the jittery nature of the data, I am pleased with the result. And so is scipy, giving it a FitQuality rating of 18.</p>
<p>Next picture shows a fit (shaded red) to a Na-22 line in a High-Resolution Ge detector.</p>
<p><a href="https://i.sstatic.net/tC6LPgJy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tC6LPgJy.png" alt="enter image description here" /></a></p>
<p>Again, I am very pleased with the result. But scipy is giving it a FitQuality rating of 56,982,136; this means very, very poor.</p>
<p>This does not make sense. The fit is nearly perfect!</p>
<p>Is my FitQuality formula inappropriate? Does it need additions? Am I completely off the mark? Please, help.</p>
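<p>In case it matters for an answer: the alternative I have been considering is to score the fit from the residuals themselves (an R² or a reduced chi-square) rather than from the covariance matrix. A sketch of what I mean, reusing the same <code>gauss</code>, <code>e</code>, <code>c</code> and <code>param0</code> as above (the Poisson error assumption is mine):</p>
<pre><code>import numpy as np
import scipy.optimize

popt, pcov = scipy.optimize.curve_fit(gauss, e, c, p0=param0)

residuals = c - gauss(e, *popt)
ss_res = np.sum(residuals**2)
ss_tot = np.sum((c - np.mean(c))**2)
r_squared = 1 - ss_res / ss_tot          # 1.0 would be a perfect fit

# Reduced chi-square, assuming Poisson counting errors (sigma ~ sqrt(counts))
sigma = np.sqrt(np.clip(c, 1, None))
chi2_red = np.sum((residuals / sigma)**2) / (len(c) - len(popt))
print(r_squared, chi2_red)
</code></pre>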
|
<python><scipy><statistics><curve-fitting>
|
2025-03-15 13:20:10
| 1
| 505
|
ullix
|
79,511,155
| 7,456,793
|
How to generate requirements (conda / pip) restricted to a given project, from a larger environment?
|
<p>So, I am not a pro developer and my practices are quite sub-optimal. I have developed a project (mainly Python) which requires a bunch of dependencies. My way is generally to install the dependencies using conda, then pip if they are not in conda.</p>
<p>The project is nearly ready to be shared. But my conda environment is now a bit swollen with needless stuff, and I would like to dump only the requirements needed for the project. And I would like to avoid the trouble of re-installing a clean environment.</p>
<p>My question is: is there a "smart" application that could analyze this project and produce proper installation instructions? With the buzz over AI-assisted development, it feels like something like this might exist already (and maybe it doesn't even need AI).</p>
|
<python><pip><conda><requirements>
|
2025-03-15 13:03:13
| 0
| 313
|
Megamini
|
79,510,955
| 3,368,111
|
Failing to initialise a new USGS API instance
|
<p>I have an account on the USGS and when I try to create an API instance using this code</p>
<pre><code>from landsatxplore.api import API
import os
# Your USGS credentials
username = '<username>'
password = '<password>'
# Initialize a new API instance
api = API(username, password)
# Perform a request
response = api.request(endpoint="dataset-catalogs")
print(response)
</code></pre>
<p>I get this error message:</p>
<p><strong>landsatxplore.errors.USGSError: NOT_FOUND: Invalid Endpoint.</strong></p>
<p>Python version: 3.10</p>
<p>landsatxplore version: 0.15.0</p>
|
<python><geospatial>
|
2025-03-15 10:23:00
| 1
| 443
|
Federico
|
79,510,868
| 595,305
|
How can I test that a method of a class was passed as a parameter to a mocked method
|
<p>Say I have classes like this:</p>
<pre><code>class MyTask(QtCore.QObject):
def do(self):
...
class Manager(QtCore.QObject):
def __init__(self):
super().__init__()
self.worker_executor = concurrent.futures.ThreadPoolExecutor()
def run_task(self):
self.worker_executor.submit(MyTask().do)
</code></pre>
<p>I want to test, using pytest, that method <code>do</code> of an (i.e. any) instance of <code>MyTask</code> is passed as the parameter of <code>ThreadPoolExecutor.submit</code>.</p>
<pre><code>manager.worker_executor.submit = mock.Mock()
manager.run_task()
mock_call = manager.worker_executor.submit.call_args_list[0]
assert mock_call.args[0] == MyTask.do # False
</code></pre>
<p>I'm not surprised this fails, with a message like:</p>
<pre><code>AssertionError: assert <bound method MyTask.do of <MyTask object at 0x000001B07E6537F0>> == <function MyTask.do at 0x000001B07D2E3BE0>
</code></pre>
<p>as a bound method is being compared with a function.</p>
<p>Equivalent to <a href="https://stackoverflow.com/questions/40506848/how-to-verify-that-specific-class-method-was-passed-as-parameter">How to verify that specific class method was passed as parameter?</a>, but that relates to Java and a Java mocking framework.</p>
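<p>The closest I have come is comparing the underlying function of the bound method via <code>__func__</code>. A small sketch of the assertion I am trying (the <code>getattr</code> is there because plain functions have no <code>__func__</code>):</p>
<pre><code>passed = mock_call.args[0]
# A bound method wraps the class function in __func__, so compare on that.
assert getattr(passed, "__func__", passed) is MyTask.do
# Optionally also check that it was bound to a MyTask instance:
assert isinstance(passed.__self__, MyTask)
</code></pre>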
|
<python><pytest>
|
2025-03-15 09:08:18
| 1
| 16,076
|
mike rodent
|
79,510,817
| 1,232,660
|
Algorithm to select multiple non-overlapping subsequences of given sequence
|
<p>You have been given the input data (a sequence of items), for example:</p>
<pre><code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
</code></pre>
<p>and your task is to randomly select M non-overlapping samples (subsequences) of the same size N.</p>
<p>For example, if the task was to select 3 samples of size 3, one of the solutions would be:</p>
<pre><code>[3, 4, 5]
[8, 9, 10]
[11, 12, 13]
</code></pre>
<p>The samples are unordered so <code>[8, 9, 10]</code>, <code>[3, 4, 5]</code>, <code>[11, 12, 13]</code> is the same solution. All solutions are expected to have an equal probability of being selected.</p>
<hr />
<p>My algorithm:</p>
<ol>
<li>Select randomly first sample: <code>[11, 12, 13]</code></li>
<li>Remaining input data are <code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]</code> and <code>[14, 15, 16]</code>.</li>
<li>Select randomly second sample from the remaining input data: <code>[3, 4, 5]</code>.</li>
<li>Remaining input data (big enough) are <code>[6, 7, 8, 9, 10]</code> and <code>[14, 15, 16]</code>.</li>
<li>Select randomly third sample from the remaining input data: <code>[8, 9, 10]</code>.</li>
</ol>
<p>Sadly, this algorithm does not work when the samples are too big. If the task was to select 3 samples of size 5, there exists a solution, but if you use my algorithm and select randomly the first sample as <code>[3, 4, 5, 6, 7]</code>, the algorithm will fail.</p>
<hr />
<p>Of course there is also an obvious brute-force algorithm: find all possible solutions and select randomly one of them. But I was hoping for something more "clever" (time and space efficient).</p>
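<p>One direction I have been toying with (a sketch; I have not rigorously verified the uniformity claim): there is a classic bijection between placements of M non-overlapping length-N windows in a sequence of length L and M-element subsets of <code>range(L - M*N + M)</code>, so drawing one subset uniformly at random should give a uniformly random solution without any retries:</p>
<pre><code>import random

def sample_windows(data, m, n):
    L = len(data)
    slots = L - m * n + m
    if slots < m:
        raise ValueError("no valid placement exists")
    # Choose m distinct "compressed" positions, then re-expand the gaps
    # so consecutive starts are at least n apart.
    t = sorted(random.sample(range(slots), m))
    starts = [t_i + i * (n - 1) for i, t_i in enumerate(t)]
    return [data[s:s + n] for s in starts]

print(sample_windows(list(range(1, 17)), 3, 5))
</code></pre>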
|
<python><algorithm><subsequence>
|
2025-03-15 08:27:56
| 2
| 3,558
|
Jeyekomon
|
79,510,643
| 18,333,498
|
How to Detect Thick Lines as Single Lines Using Hough Transform in OpenCV
|
<p>I'm using OpenCV's <code>HoughLinesP</code> function to detect straight lines in an image. When the image contains thin lines, the detection works perfectly. However, when the image contains thick lines, the algorithm detects them as two parallel lines instead of a single line.</p>
<p>Here's my current code:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
image_path = "thickLines.png"
image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Thresholding to create a binary image
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
# Edge Detection
edges = cv2.Canny(binary, 50, 150, apertureSize=3)
# Hough Line Transform to Detect Walls
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=50, maxLineGap=5)
# Draw Detected Walls
if lines is not None:
for line in lines:
x1, y1, x2, y2 = line[0]
cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2) # Draw thick lines in green
# Show Final Processed Image
cv2.imshow("Detected Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>I have tried adjusting the Canny edge thresholds and modifying <code>minLineLength</code> and <code>maxLineGap</code>, but the issue persists.</p>
<p>My goal is to detect thick lines as a single line instead of two parallel lines.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>How can I modify my approach to merge or simplify detected thick lines into a single line?</li>
<li>Are there any parameters in HoughLinesP that can be tuned to achieve this?</li>
</ol>
<p><strong>Screenshots</strong><br />
Here are screenshots of the issue:<br />
<em>The green lines represents the detected lines from the image</em>
<img src="https://i.sstatic.net/7odPZzje.png" alt="thick lines" />
<img src="https://i.sstatic.net/H3FQPdZO.png" alt="thin lines" /></p>
<p><strong>Here are sample images of thick & thin lines</strong>
<img src="https://i.sstatic.net/9w5bwrKN.png" alt="Sample thick lines" />
<img src="https://i.sstatic.net/ZzflaYmS.png" alt="Sample thin lines" /></p>
|
<python><opencv><contour><hough-transform><canny-operator>
|
2025-03-15 04:54:16
| 1
| 572
|
Kuldeep J
|
79,510,222
| 89,931
|
Get Python string value as printed by "print()"
|
<p>Suppose I have a string as follows (after parsing some stdout):</p>
<pre><code>>>> mystr
'--More-- \r \rSerial-Number: 12345'
</code></pre>
<p>I notice that the print() function automatically processes the <code>\r</code> characters, essentially getting rid of the entire "--More--" part, like this:</p>
<pre><code>>>> print(mystr)
Serial-Number: 12345
</code></pre>
<p>How can I achieve the same result in a variable?
In other words, I'd like to get a string variable <code>newStr</code>, so that:</p>
<pre><code>>>> newStr
Serial-Number: 12345
</code></pre>
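<p>My understanding is that it is really the terminal, not <code>print()</code> itself, that interprets <code>\r</code> as "go back to the start of the line and overwrite", so what I am effectively after is a function that simulates that. A sketch of what I have so far (the exact spacing in the sample string is made up):</p>
<pre><code>def render_carriage_returns(s: str) -> str:
    line = ""
    for segment in s.split("\r"):
        # Each \r moves the cursor to column 0; new text overwrites old text.
        line = segment + line[len(segment):]
    return line

mystr = '--More--           \r                    \rSerial-Number: 12345'
newStr = render_carriage_returns(mystr).rstrip()
print(repr(newStr))   # 'Serial-Number: 12345'
</code></pre>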
|
<python>
|
2025-03-14 21:05:37
| 4
| 1,133
|
Dmitry Perets
|
79,510,104
| 21,896,093
|
How to get a discrete colour bar for plotly Scatter plot?
|
<p>I have defined a list of 5 colours, which I use to colour the markers in the <code>go.Scatter</code> plot below.</p>
<p>I want the colour bar (legend) to show these <em>discrete</em> values, but the resulting colour bar stays white. See the white/empty bar titled "outputs" in the image.</p>
<p><a href="https://i.sstatic.net/Tp1wZXLJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tp1wZXLJ.png" alt="enter image description here" /></a></p>
<h3>Reproducible example</h3>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import plotly.graph_objects as go
#Data for testing
outputs = np.random.choice([0, 1, 2, 3, 4], size=100)
#Figure
marker_palette=[
'rgb(39, 100, 25)', # ++
'rgb(156, 207, 100)', # +
'rgb(247, 247, 247)', # 0
'rgb(232, 150, 196)', # -
'rgb(142, 1, 82)', # --
]
scatter = go.Scatter(
x=np.random.randn(100),
y=np.random.randn(100),
mode='markers',
marker_color=[marker_palette[o] for o in outputs],
marker=dict(
# color=outputs, colorscale=marker_palette, #creates a continuous scale - not what I want
colorbar={
'title': 'outputs',
'tickmode': 'array',
'tickvals': np.unique(outputs),
'ticktext': ['++', '+', '0', '-', '--']
},
#/colorbar
),
#/marker
)
go.Figure(scatter)
</code></pre>
<p>I can get a continuous colour bar without any problems, but I want a discrete one.</p>
<p>I would prefer solutions using <code>graph_objects</code> as opposed to <code>plotly.express</code>, as I want to see how the internals get configured in your answer (though it's not a strict requirement).</p>
<p>How do I get a colour bar that shows the discrete colours?</p>
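<p>The approach I keep circling back to (a sketch; I have not confirmed the tick placement is exactly right) is to turn the palette into a <em>stepped</em> colorscale by repeating each colour at both ends of its bin, then drive the markers with <code>color=outputs</code> plus explicit <code>cmin</code>/<code>cmax</code>:</p>
<pre class="lang-py prettyprint-override"><code>n = len(marker_palette)
# Repeat each colour at both ends of its bin -> piecewise-constant colorscale
discrete_scale = []
for i, colour in enumerate(marker_palette):
    discrete_scale.append((i / n, colour))
    discrete_scale.append(((i + 1) / n, colour))

scatter = go.Scatter(
    x=np.random.randn(100),
    y=np.random.randn(100),
    mode='markers',
    marker=dict(
        color=outputs,
        colorscale=discrete_scale,
        cmin=-0.5, cmax=n - 0.5,        # centre each integer value in its own bin
        colorbar={
            'title': 'outputs',
            'tickmode': 'array',
            'tickvals': list(range(n)),
            'ticktext': ['++', '+', '0', '-', '--'],
        },
    ),
)
go.Figure(scatter)
</code></pre>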
|
<python><plotly><legend><colorbar>
|
2025-03-14 19:52:49
| 1
| 5,252
|
MuhammedYunus
|
79,510,075
| 4,515,940
|
python script causing error - NameError: name 'Python' is not defined
|
<p>I'm running a simple Streamlit + Ollama + Python setup for an AI chatbot.</p>
<p>After I ask the chat a question, it produces this error:</p>
<pre><code>[ERROR] Failed with error: Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/pandasai/pipelines/chat/code_execution.py", line 87, in execute
result = self.execute_code(input, code_context)
File "/usr/local/lib/python3.10/site-packages/pandasai/pipelines/chat/code_execution.py", line 172, in execute_code
exec(code, environment)
File "<string>", line 1, in <module>
NameError: name 'Python' is not defined
</code></pre>
<p>I'm not sure what is happening here.
So here are my scripts.</p>
<p><strong>docker-compose.yml</strong></p>
<pre><code>version: '3.8'
services:
app:
build: .
container_name: streamlit_app
ports:
- "8501:8501"
volumes:
- .:/app
depends_on:
- ollama
ollama:
image: ollama/ollama
container_name: ollama
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
entrypoint: ["/bin/sh", "-c", "ollama serve"]
volumes:
ollama_data:
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:3.10
WORKDIR /app
COPY requirements.txt .
COPY app.py .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>from pandasai.llm.local_llm import LocalLLM  ## Importing LocalLLM for local Meta Llama 3 model
import streamlit as st
import pandas as pd # Pandas for data manipulation
from pandasai import SmartDataframe # SmartDataframe for interacting with data using LLM
# Function to chat with CSV data
def chat_with_csv(df,query):
llm = LocalLLM(
api_base="http://ollama:11434/v1",
model="llama3")
pandas_ai = SmartDataframe(df, config={"llm": llm})
result = pandas_ai.chat(query)
return result
st.set_page_config(layout='wide')
st.title("Multiple-CSV ChatApp powered by LLM")
# Upload multiple CSV files
input_csvs = st.sidebar.file_uploader("Upload your CSV files", type=['csv'], accept_multiple_files=True)
# Check if CSV files are uploaded
if input_csvs:
# Select a CSV file from the uploaded files using a dropdown menu
selected_file = st.selectbox("Select a CSV file", [file.name for file in input_csvs])
selected_index = [file.name for file in input_csvs].index(selected_file)
#load and display the selected csv file
st.info("CSV uploaded successfully")
data = pd.read_csv(input_csvs[selected_index])
st.dataframe(data.head(3),use_container_width=True)
#Enter the query for analysis
st.info("Chat Below")
input_text = st.text_area("Enter the query")
#Perform analysis
if input_text:
if st.button("Chat with csv"):
st.info("Your Query: "+ input_text)
result = chat_with_csv(data,input_text)
st.success(result)
</code></pre>
<p>Anyone can help?</p>
|
<python><docker><ollama>
|
2025-03-14 19:31:40
| 0
| 1,835
|
anderlaini
|
79,510,017
| 18,385,480
|
How can I deploy and run a Flask web application using heavy NLP libraries (pandas, numpy, sklearn) on a SiteGround shared hosting plan?
|
<p>I have a Flask-based web application that performs NLP tasks using libraries like pandas, numpy, sklearn, and nltk. I've tried deploying it to my current hosting (SiteGround shared hosting plan), but encountered multiple issues, such as:</p>
<ul>
<li>Installation issues (pyahocorasick and other dependency errors).</li>
<li>Resource limitations (KeyboardInterrupt when importing heavy libraries).</li>
<li>Difficulty running continuously in the background.</li>
</ul>
<p>My current setup:</p>
<ul>
<li>Hosting provider: SiteGround Shared Hosting</li>
<li>Python version: 3.13.2</li>
<li>Flask app with dependencies: pandas, numpy, sklearn, nltk, contractions, etc.</li>
<li>SSH access available, but no root access.</li>
<li>Tried using a virtual environment (venv), encountering build issues.</li>
</ul>
<p>My questions are:</p>
<ol>
<li>Is it possible to run resource-intensive NLP applications like this on SiteGround’s shared hosting plan at all?</li>
<li>If yes, how? What steps or configurations are required to overcome these errors?</li>
<li>If no, what are the simplest and most cost-effective alternatives to deploy such a Flask NLP application smoothly (PythonAnywhere, Render.com, Heroku, AWS, DigitalOcean, or others)?</li>
</ol>
<p>Thanks in advance for any guidance or advice!</p>
|
<python><flask><nlp>
|
2025-03-14 19:03:16
| 0
| 723
|
bsraskr
|
79,509,775
| 4,819,195
|
How to get route of request from connexion.AsyncApp in python?
|
<p>I have an <code>connexion.AsyncApp</code> in python. A snippet of the code looks like this:</p>
<pre><code>app = AsyncApp(__name__)
app.add_api("openapi.yaml", arguments={"title": "Async APP API"})
if __name__ == "__main__":
uvicorn.run(app, host="127.0.0.1", port=8000)
</code></pre>
<p>I've been trying to determine how to access the route of an incoming request. I attempted to add middleware before <code>SwaggerUIMiddleware</code> using <code>self.app.add_middleware()</code>, assuming the route information would be available at that point. However, after inspecting the object received in the middleware, I couldn't find any member variables containing this information.</p>
<p>I sent a request to an endpoint similar to <code>/greetings/{user}</code> but couldn't see the route with <code>{user}</code> as a placeholder. Instead, I only saw the actual username.</p>
<p>Is there no easy way to find the route information with connexion async app?</p>
|
<python><connexion>
|
2025-03-14 17:06:55
| 0
| 399
|
A Beginner
|
79,509,728
| 6,695,793
|
Polars group_by + describe: return all columns as single dataframe
|
<p>I'm slowly migrating to polars from pandas and I have found that in some cases the polars syntax is tricky.</p>
<p>I'm seeking help to do a <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.group_by.html" rel="nofollow noreferrer"><code>group_by</code></a> followed by a <a href="https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.describe.html" rel="nofollow noreferrer"><code>describe</code></a> using less (or more readable) code.</p>
<p>See this example:</p>
<pre><code>from io import BytesIO
import pandas as pd
import polars as pl
S = b'''group,value\n3,245\n3,28\n3,48\n1,113\n1,288\n1,165\n2,90\n2,21\n2,109'''
pl_df = pl.read_csv(BytesIO(S))
pd_df = pd.read_csv(BytesIO(S))
# Polars' way
pl_df.group_by('group').map_groups(
lambda df: (
df['value']
.describe()
.with_columns(
group=pl.lit(df['group'][0])
)
)
).pivot(index='group', on='statistic')
</code></pre>
<p>Something similar in pandas would be:</p>
<pre><code># pandas'
pd_df.groupby('group').value.describe()
</code></pre>
|
<python><pandas><dataframe><python-polars>
|
2025-03-14 16:44:35
| 1
| 4,073
|
jcaliz
|
79,509,661
| 8,830,612
|
Azure AI ML - can't build local Docker container
|
<p>I want to build a <strong>local Docker container</strong> with the Azure CLI & Azure SDK v2, which I afterwards want to <strong>replicate in Azure ML</strong>. My objective is to have a container which can run YOLO models.</p>
<p>With the following script you can replicate the error:</p>
<pre class="lang-none prettyprint-override"><code>endpoint_name = "web-svc-local-test"
# create an online endpoint
endpoint = ManagedOnlineEndpoint(
name = endpoint_name,
description="Test a local endpoint",
auth_mode="key"
)
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
model = Model(path="model.pt")
env=Environment(
# conda_file="environment/conda.yaml",
conda_file="environment/conda.yaml",
image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest"
)
test_deployment = ManagedOnlineDeployment(
name="websvclocalvideo",
endpoint_name=endpoint_name,
model=model,
environment=env,
code_configuration=CodeConfiguration(
#code=".", scoring_script="failure_2_score_remote_model.py"
#code=".", scoring_script="failure_3_score_invalid_script.py"
#code=".", scoring_script="failure_4_score_script_error.py"
code=".", scoring_script="YOLO11 Finetuned Model/score-copy.py"
),
instance_type="Standard_DS3_v2",
instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(test_deployment, local=True)
</code></pre>
<p>The conda.yaml file which I'm using to build upon the Docker image:</p>
<pre><code>name: model-env
channels:
- conda-forge
dependencies:
- python=3.9
- numpy=1.23.5
- pip=23.0.1
- scikit-learn=1.2.2
- scipy=1.10.1
- pip:
- numpy
- pandas==1.1.5
- azureml-defaults==1.53.0
- inference-schema[numpy-support]==1.5.1
- joblib==1.2.0
- torch==2.6.0
- requests==2.32.3
- pillow==10.4.0
- supervision==0.25.1
- opencv-python-headless==4.7.0.68
- ultralytics==8.3.85
</code></pre>
<p>This is the error log from the container:</p>
<pre class="lang-none prettyprint-override"><code>---------------
Liveness Probe: GET 127.0.0.1:31311/
Score: POST 127.0.0.1:31311/score
2025-03-14 15:58:55,845 I [21] gunicorn.error - Starting gunicorn 20.1.0
2025-03-14 15:58:55,845 I [21] gunicorn.error - Listening at: http://0.0.0.0:31311 (21)
2025-03-14 15:58:55,846 I [21] gunicorn.error - Using worker: sync
2025-03-14 15:58:55,847 I [104] gunicorn.error - Booting worker with pid: 104
2025-03-14 15:58:56,365 W [104] azmlinfsrv - AML_FLASK_ONE_COMPATIBILITY is set. However, compatibility patch for Flask 1 has failed. This is only a problem if you use @rawhttp and relies on deprecated methods such as has_key().
Traceback (most recent call last):
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/azureml_inference_server_http/server/create_app.py", line 58, in <module>
patch_flask()
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/azureml_inference_server_http/server/create_app.py", line 33, in patch_flask
patch_werkzeug = LooseVersion(werkzeug.__version__) >= LooseVersion("2.1")
AttributeError: module 'werkzeug' has no attribute '__version__'
Initializing logger
2025-03-14 15:58:56,367 I [104] azmlinfsrv - Starting up app insights client
2025-03-14 15:58:59,093 E [104] azmlinfsrv - Traceback (most recent call last):
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/azureml_inference_server_http/server/user_script.py", line 77, in load_script
main_module_spec.loader.exec_module(user_module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/var/azureml-app/YOLO11/YOLO11 Finetuned Model/score-copy.py", line 1, in <module>
from ultralytics import YOLO
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/ultralytics/__init__.py", line 11, in <module>
from ultralytics.models import NAS, RTDETR, SAM, YOLO, FastSAM, YOLOWorld
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/ultralytics/models/__init__.py", line 3, in <module>
from .fastsam import FastSAM
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/ultralytics/models/fastsam/__init__.py", line 3, in <module>
from .model import FastSAM
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/ultralytics/models/fastsam/model.py", line 5, in <module>
from ultralytics.engine.model import Model
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/ultralytics/engine/model.py", line 11, in <module>
from ultralytics.cfg import TASK2DATA, get_cfg, get_save_dir
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/ultralytics/cfg/__init__.py", line 10, in <module>
import cv2
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/cv2/__init__.py", line 181, in <module>
bootstrap()
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/cv2/__init__.py", line 153, in bootstrap
native_module = importlib.import_module("cv2")
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/azureml_inference_server_http/server/aml_blueprint.py", line 91, in setup
self.user_script.load_script(config.app_root)
File "/opt/miniconda/envs/inf-conda-env/lib/python3.9/site-packages/azureml_inference_server_http/server/user_script.py", line 79, in load_script
raise UserScriptImportException(ex) from ex
azureml_inference_server_http.server.user_script.UserScriptImportException: Failed to import user script because it raised an unhandled exception
2025-03-14 15:58:59,093 I [104] gunicorn.error - Worker exiting (pid: 104)
2025-03-14 15:58:59,625 I [21] gunicorn.error - Shutting down: Master
2025-03-14 15:58:59,626 I [21] gunicorn.error - Reason: Worker failed to boot.
2025-03-14T15:58:59,658862454+00:00 - gunicorn/finish 3 0
2025-03-14T15:58:59,659978615+00:00 - Exit code 3 is not normal. Killing image.
ERROR conda.cli.main_run:execute(125): `conda run runsvdir /var/runit` failed. (See above for error)
2025-03-14T15:58:59,668900102+00:00 - rsyslog/finish 0 0
2025-03-14T15:58:59,668900902+00:00 - nginx/finish 0 0
2025-03-14T15:58:59,670339080+00:00 - Exit code 0 is not normal. Restarting rsyslog.
2025-03-14T15:58:59,670354181+00:00 - Exit code 0 is not normal. Killing image.
runsvdir: no process found
</code></pre>
|
<python><azure><docker><azure-machine-learning-service>
|
2025-03-14 16:19:57
| 0
| 518
|
default_settings
|
79,509,632
| 19,802,442
|
Shortest way to broadcast 1d array to specific dimension in NumPy
|
<p>I often find myself broadcasting 1d arrays into a specific dimension <code>dim_a</code> given a total number of dimensions <code>dim_total</code>. What I mean is the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.arange(10)
dim_a = 2
dim_total = 4
shape = tuple([-1 if idx == dim_a else 1 for idx in range(dim_total)])
print(a.reshape(shape))
axis = list(range(dim_total))
del axis[dim_a]
print(np.expand_dims(a, axis=axis))
</code></pre>
<p>Both work as expected, however the question is whether there is an even shorter way to achieve this for a single array?</p>
|
<python><arrays><numpy><array-broadcasting>
|
2025-03-14 16:09:34
| 2
| 1,941
|
Axel Donath
|
79,509,624
| 4,451,315
|
Python overload doesn't match `Any` case
|
<p>I've written this code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload, TYPE_CHECKING, Protocol, Any
import pyarrow as pa # type: ignore[import-not-found]
class PyArrowArray(Protocol):
@property
def buffers(self) -> Any: ...
@overload
def func(a: PyArrowArray) -> int: ...
@overload
def func(a: str) -> str: ...
@overload
def func(a: Any) -> str | int: ...
def func(a) -> str | int:
if isinstance(a, pa.Array):
return 0
return '0'
reveal_type(func(pa.array([1,2,3])))
</code></pre>
<p>PyArrow is a Python library which does not have type hints. However, there is a package <code>pyarrow-stubs</code> which provides types for it.</p>
<p>I have a function can accept either a <code>pyarrow.Array</code> or a <code>str</code>:</p>
<ul>
<li>if it receives a <code>pyarrow.Array</code>, it returns an <code>int</code></li>
<li>if it receives a <code>str</code>, it returns a <code>str</code></li>
</ul>
<p>I would like to annotate it such that:</p>
<ul>
<li>if a user has <code>pyarrow-stubs</code> installed, then <code>func(pa.array([1,2,3]))</code> is revealed to be <code>int</code></li>
<li>if a user doesn't have <code>pyarrow-stubs</code> installed, then <code>func(pa.array([1,2,3]))</code> should be revealed to be <code>int | str</code>, because <code>pa</code> is not known statically</li>
</ul>
<p>I was hoping that the code above would accomplish that, but it doesn't. If pyarrow-stubs is not installed, I get</p>
<pre><code>Revealed type is "Any"
</code></pre>
<p>I was expecting that the</p>
<pre class="lang-py prettyprint-override"><code>@overload
def func(a: Any) -> str | int: ...
</code></pre>
<p>overload would be matched and that I'd get <code>Revealed type is int | str</code></p>
|
<python><python-typing><mypy>
|
2025-03-14 16:06:37
| 1
| 11,062
|
ignoring_gravity
|
79,509,621
| 764,365
|
How do I pass in arguments when initializing an event handler with win32com.client.WithEvents in python
|
<p>I am trying to use win32com.client.WithEvents and I'd like to be able to pass values to the initialization method. However, the input for the event handler is the class, not an instance of the class:</p>
<pre><code>import win32com.client
#For my application this is what defines "document"
labchart = win32com.client.Dispatch("ADIChart.Application")
document = labchart.ActiveDocument
#This is the definition
ev = win32com.client.WithEvents(document, LabChartEventHandler)
#This is not valid, but is what I want to get to
ev = win32com.client.WithEvents(document, LabChartEventHandler(param1,param2))
</code></pre>
<p>It is possible to do this afterwards, but is there a way to pass in arguments to the constructor, other than using previously defined variables that are in scope?</p>
<pre><code>#Workaround, not sure what happens if time elapses between initialization and these calls
ev.param1 = param1
ev.param2 = param2
#Current setup
param1 = 1
param2 = 2
class LabChartEventHandler:
def __init__(self):
#Grabs already defined param1, not ideal (in my opinion)
self.param1 = param1
self.param2 = param2
</code></pre>
|
<python><pywin32>
|
2025-03-14 16:04:38
| 0
| 3,304
|
Jimbo
|
79,509,594
| 990,549
|
Can a Python value be annotated with its type without assigning it to a variable?
|
<p>I'm working with a Pandas DataFrame where the index is guaranteed to be the first day of a year. I want to change the index to be the year itself. The code is straightforward enough, and runs fine:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data={
'col1': [1, 2, 3],
'col2': ['a', 'b', 'c'],
}, index=pd.date_range(start='2023-01-01', periods=3, freq='YS'))
df.index = df.index.year.set_names('Year')
</code></pre>
<p>However, my code editor cannot infer the type of the <code>.year</code> property or the <code>.set_names()</code> method, because it can only infer <code>df.index</code> as an <code>Index</code>, and <code>.year</code> is only a property of its subtype <code>DatetimeIndex</code>. This irks me.</p>
<p>A type hint can clarify things for the editor, or anyone reading:</p>
<pre><code>idx: pd.DatetimeIndex = df.index
df.index = idx.year.set_names('Year')
</code></pre>
<p>But it seems poor form to include an extra variable as an intermediate step solely for the type hint, when the original line is already quite short.</p>
<p>In TypeScript, this could be done inline with a type assertion, like so:</p>
<pre><code>df.index = (df.index as pd.DatetimeIndex).year.set_names('Year')
</code></pre>
<p>Does Python have any similar way of supplementing automatic type inference?</p>
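<p>The closest inline form I can think of is <code>typing.cast</code>, though I don't know whether it is considered idiomatic here, so this is more an illustration of the shape I'm after than a settled answer:</p>
<pre><code>from typing import cast

# cast() is a no-op at runtime; it only informs the type checker
# (df and pd are the same objects as in the example above)
df.index = cast(pd.DatetimeIndex, df.index).year.set_names('Year')
</code></pre>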
|
<python><pandas><python-typing>
|
2025-03-14 15:55:22
| 0
| 1,050
|
Shay Guy
|
79,509,565
| 9,951,273
|
Creating a dummy Pydantic model generator
|
<p>I want to create a method on a base Pydantic model to instantiate child models with dummy data.</p>
<pre><code>from __future__ import annotations
from pydantic import BaseModel
class BaseModelWrapper(BaseModel):
@classmethod
def make_dummy(cls) -> BaseModelWrapper:
for name, field in cls.model_fields.items():
if not field.is_required():
continue
# How can I create values based on the field type?
print(field.annotation)
return cls()
class XXX(BaseModelWrapper):
a: int | None
b: str
c: int
d: int | None = None
e: list[str]
# These should be equivalent
XXX.make_dummy()
XXX(a=None, b="", c=0, e=[])
</code></pre>
<p>The part I'm struggling with is how to programmatically map type annotations to values.</p>
<p>Let's say <code>field.annotation</code> is <code>int | None</code>. I could just create a dictionary to map that to <code>None</code>, but there are tons of possible combinations of types, so this doesn't scale. There must be a cleaner way to create a value for each field.</p>
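<p>To make the scaling problem concrete, the naive mapping I'm describing would look something like this (a sketch only, using Python 3.10+ syntax; the keys shown are just the annotations used by <code>XXX</code> above):</p>
<pre><code># Hard-code one dummy value per annotation; every new annotation needs a new entry.
DUMMY_BY_ANNOTATION = {
    int: 0,
    str: "",
    int | None: None,
    list[str]: [],
}
</code></pre>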
|
<python><pydantic>
|
2025-03-14 15:47:31
| 2
| 1,777
|
Matt
|
79,509,548
| 5,403,466
|
Logistic curve produced by curve_fit is a straight line
|
<p>I'm trying to produce a Sigmoid/Logistic curve from some input data. I borrowed code from <a href="https://stackoverflow.com/questions/55725139/fit-sigmoid-function-s-shape-curve-to-data-using-python">this post</a> and <a href="https://stackoverflow.com/questions/48506782/scipy-curve-fit-how-to-plot-the-fitted-curve-beyond-the-data-points">this</a> for plotting.</p>
<p>The result is the following:</p>
<pre><code>from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
def sigmoid(x, L ,x0, k, b):
y = L / (1 + np.exp(-k*(x-x0))) + b
return (y)
data = np.loadtxt("data.csv", delimiter=",")
xdata = data[0]
ydata = data[1]
p0 = [max(ydata), np.median(xdata),1,min(ydata)] # this is an mandatory initial guess
fitting_parameters, covariance = curve_fit(sigmoid, xdata, ydata,p0, method='dogbox', maxfev=10000)
plt.plot(xdata, ydata, 'o', label='Data')
plt.plot(xdata, sigmoid(xdata, *fitting_parameters), '-', label='Fit')
plt.legend()
plt.show()
</code></pre>
<p>This produces a straight line instead of a logistic fit:</p>
<p><a href="https://i.sstatic.net/g5z7eoIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5z7eoIz.png" alt="enter image description here" /></a></p>
<p>What am I missing? I know the data is a bit coarse, but is that the cause?</p>
<p>EDIT: Here is the raw data, if useful:</p>
<pre><code>1.15102,1.17397,1.18334,1.18484,1.2073,1.25081,1.26446,1.26535,1.26654,1.29653,1.30118,1.30991,1.32608,1.39597,1.39721,1.41225,1.415,1.41989,1.47602,1.19992,1.23148,1.2895,1.31759,1.33068,1.34391,1.35604,1.35879,1.37359,1.38695,1.40233,1.41753,1.42323,1.43474,1.44706,1.48247,1.50033,1.52272,1.59789,1.09956,1.10712,1.13576,1.16265,1.16993,1.18129,1.19587,1.1973,1.20428,1.23916,1.24522,1.2505,1.26135,1.26542,1.27122,1.2736,1.27456,1.30306,1.34639,1.16272,1.18929,1.28076,1.28145,1.28513,1.28708,1.30215,1.30236,1.30887,1.31634,1.37677,1.37745,1.38119,1.38846,1.43016,1.43046,1.43234,1.48051,1.54508
0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95
</code></pre>
|
<python><matplotlib><scipy>
|
2025-03-14 15:39:57
| 3
| 1,456
|
Tyler Shellberg
|
79,509,402
| 59,587
|
Python environment error, "TypeError: Choicelist and default value do not have a common dtype"
|
<p>I have two python environments - one online through a class, and the other on my own computer.</p>
<p>The following code works on the online environment, but gives an error on my local environment. Does anyone know what this error means, and have any suggestions for fixing my environment, or fixing my code? The online environment has a habit of losing my work, so I'd like to get this working on my own machine.</p>
<p>This is the code</p>
<pre><code> custom_categories = ['cat_a', 'cat_b', 'cat_c', 'Other']
custom_categories_filter = [
(df['column_name'].str.contains('(A)', regex = False)),
(df['column_name'].str.contains('(B)', regex = False)),
(df['column_name'].str.contains('(C)', regex = False)),
(df['column_name'].str.contains('(A)', regex = False) == False)
& (df['column_name'].str.contains('(B)', regex = False) == False)
& (df['column_name'].str.contains('(C)', regex = False) == False)
]
df["custom_category"] = numpy.select(custom_categories_filter, custom_categories)
</code></pre>
<p>It's intended to look through a column of a pandas data frame, search for certain terms in brackets, then put a value based on that term into a new column.</p>
<p>This is the error:</p>
<pre><code>TypeError: Choicelist and default value do not have a common dtype:
The DType <class 'numpy.dtypes._PyLongDType'> could not be promoted by <class 'numpy.dtypes.StrDType'>.
This means that no common DType exists for the given inputs.
For example they cannot be stored in a single array unless the dtype is `object`.
The full list of DTypes is: (<class 'numpy.dtypes.StrDType'>, <class 'numpy.dtypes.StrDType'>, <class 'numpy.dtypes.StrDType'>, <class 'numpy.dtypes.StrDType'>, <class 'numpy.dtypes.StrDType'>, <class 'numpy.dtypes._PyLongDType'>)
</code></pre>
<p>Online environment is Python 3.11.11, local is 3.10.8 - could it be the python version I'm using?</p>
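<p>For what it's worth, my current guess (only an assumption on my part) is that the <code>_PyLongDType</code> in the error is the implicit integer default of <code>numpy.select</code> (<code>default=0</code>), which cannot be promoted together with the string choices. A string default would presumably avoid the mixed promotion, e.g.:</p>
<pre><code># assumption: an explicit string default keeps every dtype in the promotion a string
df["custom_category"] = numpy.select(custom_categories_filter, custom_categories, default="Other")
</code></pre>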
|
<python><numpy>
|
2025-03-14 14:46:28
| 1
| 5,875
|
Sophia
|
79,509,349
| 2,573,061
|
Correctly invoke TorchEval AUC calculation for two tensors
|
<p>I am new to torcheval and trying to measure the AUC of my binary classifier (<a href="https://pytorch.org/torcheval/main/generated/torcheval.metrics.AUC.html" rel="nofollow noreferrer">doc</a>).</p>
<p>I notice that while classifier accuracy is decent, the AUC metric evaluates to below 0.5, which is incorrect (given that accuracy is better than 50/50 and my classes are balanced). The AUC also differs from sklearn.metrics.roc_auc_score. For a simple example:</p>
<pre><code>import torch
from torcheval.metrics.aggregation.auc import AUC
from torcheval.metrics import BinaryAccuracy
from sklearn.metrics import roc_auc_score, accuracy_score
p_pred = torch.tensor([0.2, 0.3, 0.4, 0.6, 0.7, 0.8]) # model est likelihood of target class
y_true = torch.tensor([0.0, 0.0, 1.0, 0.0, 1.0, 1.0]) # ground truth, 1 = target class
# TorchEval Metrics
auc_metric = AUC()
accuracy_metric = BinaryAccuracy(threshold=0.5)
auc_metric.reset()
accuracy_metric.reset()
auc_metric.update(p_pred,y_true)
accuracy_metric.update(input=p_pred,target=y_true)
print(f"TorchEval Accuracy = {accuracy_metric.compute().item():.3}")
print(f"Sklearn Accuracy = {accuracy_score(y_true=y_true,y_pred=p_pred.round()):.3}")
print(f"TorchEval AUC = {auc_metric.compute().item():.3}")
print(f"Sklearn AUC = {roc_auc_score(y_true=y_true,y_score=p_pred):.3}")
</code></pre>
<p>Return an unexpected value of TorchEval AUC:</p>
<pre><code>TorchEval Accuracy = 0.667
Sklearn Accuracy = 0.667
TorchEval AUC = 0.3
Sklearn AUC = 0.889
</code></pre>
<p>How can I correctly invoke TorchEval AUC to get the expected value of ~0.9?</p>
|
<python><pytorch><auc>
|
2025-03-14 14:26:43
| 1
| 19,153
|
C8H10N4O2
|
79,509,277
| 6,329,284
|
Is an instance of my Pydantic model also in my Enum?
|
<p>In my situation I get a response from an API which follows the structure of MyGenericModel.
After I parse the response using MyGenericModel, I want to check if it belongs to the set defined in MyEnum. If it does, I want to continue with it; if not, then we proceed further.</p>
<ul>
<li><p>When I call <code>MyGenericModel(value_one="a", value_two="b") in MyEnum</code> I get a TypeError as response.</p>
</li>
<li><p>When I call <code>MySpecificModel() in MyEnum</code> I get a TypeError as response.</p>
</li>
<li><p>When I call <code>MySpecificModel() == MyGenericModel(value_one="a", value_two="b")</code> I get 'True' as response.</p>
</li>
</ul>
<p>here is a small simplified example of my code:</p>
<pre><code>from pydantic import BaseModel
from enum import Enum
class MyGenericModel(BaseModel):
value_one: str
value_two: str
def __eq__(self, other):
return self.model_dump() == other.model_dump()
class MySpecificModel(MyGenericModel):
value_one: str = "a"
value_two: str = "b"
class MyEnum(Enum):
my_model = MySpecificModel()
# This is true
MyGenericModel(value_one="a", value_two="b") == MySpecificModel()
# This fails - and I want to be true
MyGenericModel(value_one="a", value_two="b") in MyEnum
</code></pre>
<p>How to solve this issue?</p>
|
<python><enums><pydantic>
|
2025-03-14 13:55:51
| 1
| 1,340
|
zwep
|
79,509,243
| 5,404,647
|
How to Use Template in Langchain to Insert Results from Chain for Further Reasoning?
|
<p>I'm working with Langchain and OpenAI to develop a conversational AI. I've integrated multiple tools into the chain and am using a template to structure the conversation. However, I'm stuck on how to use the results from the chain (<code>chain.invoke(...)</code>) in the template to allow the agent to continue reasoning based on these results. Here's the relevant part of my code:</p>
<pre><code>from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
if api_key is not None:
os.environ["OPENAI_API_KEY"] = api_key
else:
raise ValueError("OPENAI_API_KEY environment variable is not set.")
llm = ChatOpenAI(
model="gpt-4o",
temperature=0,
)
template = ChatPromptTemplate([
("system", "You are a helpful AI bot. Your name is Bob."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
("placeholder", "{conversation}")
])
@tool
def weather(city: str) -> str:
"""Gives the weather in a given city"""
return f"The weather in {city} is sunny"
@tool
def sum_numbers(numbers: str) -> str:
"""Sums two numbers"""
return str(sum(map(int, numbers.split())))
llm_with_tools = llm.bind_tools([weather, sum_numbers])
chain = template | llm_with_tools
res = chain.invoke({"user_input": "What is the weather in Tokyo? also what is 3 + 1? Give me the answer as if you are a cat"})
</code></pre>
<p>How can I modify the template or the invocation so that Bob can use the results from <code>chain.invoke(...)</code> for further reasoning in a continued conversation? For instance, after obtaining the weather and the sum, I want the AI to use these results in its next interactions.</p>
<p>I'm using</p>
<pre><code>langchain==0.3.20
langchain-community==0.3.19
langchain-openai==0.3.8
openai==1.66.3
python-dotenv==1.0.1
</code></pre>
|
<python><artificial-intelligence><chatbot><langchain><large-language-model>
|
2025-03-14 13:42:12
| 1
| 622
|
Norhther
|
79,509,031
| 1,518,100
|
python multiprocess sharing value with Value not working as documented
|
<p>I'm learning about sharing variables in multiprocessing.
The official <a href="https://docs.python.org/3/library/multiprocessing.html#synchronization-between-processes" rel="nofollow noreferrer">doc</a> says</p>
<blockquote>
<p>Data can be stored in a shared memory map using Value...</p>
</blockquote>
<p>and the example works fine.</p>
<p>But got error when I try to use it with <code>Pool.map</code>:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Value, Pool, Manager
def f(args):
n, = args
n.value = 1
if __name__ == '__main__':
n = Value('d', 0.0)
# n = Manager().Value('d', 0.0) # can workaround the error
with Pool(1) as pool:
pool.map(f, [(n,)])
# RuntimeError: Synchronized objects should only be shared between processes through inheritance
print(n.value)
</code></pre>
<p>trace back</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "D:\0ly\ly\processvaldoc.py", line 13, in <module>
pool.map(f, [(n,)])
File "C:\a3\envs\skl\Lib\multiprocessing\pool.py", line 367, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\a3\envs\skl\Lib\multiprocessing\pool.py", line 774, in get
raise self._value
File "C:\a3\envs\skl\Lib\multiprocessing\pool.py", line 540, in _handle_tasks
put(task)
File "C:\a3\envs\skl\Lib\multiprocessing\connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\a3\envs\skl\Lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\a3\envs\skl\Lib\multiprocessing\sharedctypes.py", line 199, in __reduce__
assert_spawning(self)
File "C:\a3\envs\skl\Lib\multiprocessing\context.py", line 374, in assert_spawning
raise RuntimeError(
RuntimeError: Synchronized objects should only be shared between processes through inheritance
</code></pre>
<p>my python version is 3.12.9 64bit on Win11.</p>
<p>Since there are so many ways to start multiprocessing, I have never read all the documents or tried them all. I just wonder: what is the essential difference between the documented <code>Process.start()</code> approach and <code>Pool.map</code> that leads to the latter's failure?</p>
<p>I searched and learned that <code>Manager().Value</code> can solve this. But what is its magic, given that I'm not even using the <code>with Manager() as ...</code> style? If <code>Manager().Value</code> works in both (maybe all) scenarios, why design a separate <code>multiprocessing.Value</code> that only partly works?</p>
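<p>For reference, the inheritance-style example from the docs does work for me; adapted to a single <code>Value</code> it looks roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Process, Value

def f(n):
    n.value = 1

if __name__ == '__main__':
    n = Value('d', 0.0)
    p = Process(target=f, args=(n,))  # the child inherits the shared Value
    p.start()
    p.join()
    print(n.value)  # 1.0
</code></pre>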
|
<python><multiprocessing>
|
2025-03-14 12:07:19
| 1
| 4,435
|
Lei Yang
|
79,508,526
| 3,420,345
|
How to specify a nightly Pytorch version in environment.yml
|
<p>Consider the following environment.yml file:</p>
<pre><code>channels:
- nvidia
- defaults
- conda-forge
dependencies:
- bottleneck>=1.3.6
- catboost>=0.24.4
...
- pip:
- xgboost==2.1.4
...
</code></pre>
<p>How do I add the following pip command to the yml file without disrupting the current behavior of the other pip install commands above?</p>
<pre><code>pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
</code></pre>
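<p>One pattern I have seen suggested (untested by me, so treat the exact placement of the flags as an assumption) is to add the pip options as their own entries at the top of the <code>pip:</code> section, something like:</p>
<pre><code>  - pip:
    - --pre
    - --index-url https://download.pytorch.org/whl/nightly/cu128
    - torch
    - torchvision
    - torchaudio
    - xgboost==2.1.4   # existing pip entries stay as they are
</code></pre>
<p>One caveat (again an assumption): <code>--index-url</code> would apply to every entry in the pip section, so <code>--extra-index-url</code> may be the safer choice if the other pip packages are not available on the PyTorch index.</p>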
|
<python><pytorch><pip><conda>
|
2025-03-14 08:46:31
| 1
| 343
|
Martin
|
79,508,495
| 2,780,906
|
How do I create a multi-index dataframe from tuples/lists
|
<p>I want to create (for example) a small multi-index dataframe with 3 level index and one column of values. There will only be a handful of rows.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Category</strong></td>
<td><strong>Sub Category</strong></td>
<td><strong>Sub Sub Category</strong></td>
<td></td>
</tr>
<tr>
<td>Machine</td>
<td>Car</td>
<td>Sedan</td>
<td>10</td>
</tr>
<tr>
<td>Machine</td>
<td>Bike</td>
<td>2 wheel</td>
<td>5</td>
</tr>
<tr>
<td>Machine</td>
<td>Bike</td>
<td>3 wheel</td>
<td>4</td>
</tr>
<tr>
<td>Animal</td>
<td>Donkey</td>
<td>Big</td>
<td>2</td>
</tr>
</tbody>
</table></div>
<p>A requirement is that the data has to be easy to enter, a row at a time, from left to right (not top to bottom), so my first step is to make a list of lists.</p>
<p>Then I can use DataFrame method and then the set-index method like this:</p>
<pre class="lang-py prettyprint-override"><code>data=[["Machine","Car","Sedan",10], ["Machine","Bike","2 Wheel",5], ["Machine","Bike","3 Wheel",4], ["Animal","Donkey","Big",2]]
column_names=["Category","Sub Category","Sub Sub Category","Value"]
df=pd.DataFrame(data,columns=column_names)
df.set_index(["Category","Sub Category","Sub Sub Category"])
</code></pre>
<p>But is there a way to make the multi-index DataFrame directly from a list, or something like it?</p>
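<p>For completeness, one other variant I can think of keeps the row-at-a-time entry but builds the index separately with <code>MultiIndex.from_tuples</code>, so it still isn't really "directly from the list":</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

data = [["Machine", "Car", "Sedan", 10],
        ["Machine", "Bike", "2 Wheel", 5],
        ["Machine", "Bike", "3 Wheel", 4],
        ["Animal", "Donkey", "Big", 2]]
# first three entries of each row become the index, the last one the value
index = pd.MultiIndex.from_tuples(
    [tuple(row[:3]) for row in data],
    names=["Category", "Sub Category", "Sub Sub Category"],
)
df = pd.DataFrame({"Value": [row[3] for row in data]}, index=index)
</code></pre>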
|
<python><pandas><dataframe><multi-index>
|
2025-03-14 08:35:14
| 1
| 397
|
Tim
|
79,508,467
| 6,702,598
|
Prevent pytest instantiating parent test class
|
<p>I'm using the Template Method Pattern for my test cases, i.e. they have the following structure:</p>
<pre class="lang-none prettyprint-override"><code>Parent(unittest.TestCase)
|
|---- Child1
|---- Child2
|---- Child3
</code></pre>
<p>Test cases are defined in the parent. Each child implements a certain different setup.</p>
<p>When I run my tests with pytest (<code>python -m pytest</code>), I want the inherited tests in Child1 - Child3 to be run, but the ones defined in Parent should not run. They should also not yield a failing test.</p>
<p>How can I do that?</p>
<p>I'm using <code>python_classes=</code> in <code>pytest.ini</code>, so tests are not selected based on names. Changing that is not an option.</p>
<p>I tried:</p>
<ul>
<li>Making Parent abstract. Result: Parent fails to instantiate, tests are marked as failed.</li>
<li>Setting <code>__test__ = False</code> in Parent. Result: Child tests are also not executed.</li>
</ul>
|
<python><pytest>
|
2025-03-14 08:20:34
| 1
| 3,673
|
DarkTrick
|
79,508,420
| 4,436,517
|
Expanding polars dataframe with cartesian product of two columns
|
<p>The code below shows a solution I have found in order to expand a dataframe to include the cartesian product of columns <code>A</code> and <code>B</code>, filling in the other columns with <code>null</code> values. I'm wondering if there is a better and more efficient way of solving this?</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pl.DataFrame({'A': [0, 1, 1],
... 'B': [1, 1, 2],
... 'C': [6, 7, 8]})
>>> df
shape: (3, 3)
┌─────┬─────┬─────┐
│ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 0 ┆ 1 ┆ 6 │
│ 1 ┆ 1 ┆ 7 │
│ 1 ┆ 2 ┆ 8 │
└─────┴─────┴─────┘
>>> df.join(df.select('A').unique().join(df.select('B').unique(), how='cross'), on=['A','B'], how='right')
shape: (4, 3)
┌──────┬─────┬─────┐
│ C ┆ A ┆ B │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞══════╪═════╪═════╡
│ 6 ┆ 0 ┆ 1 │
│ null ┆ 0 ┆ 2 │
│ 7 ┆ 1 ┆ 1 │
│ 8 ┆ 1 ┆ 2 │
└──────┴─────┴─────┘
</code></pre>
|
<python><python-polars>
|
2025-03-14 07:53:07
| 2
| 1,159
|
rindis
|
79,508,413
| 7,766,024
|
How to bypass "Too many open files" error when using Playwright?
|
<p>I'm using Playwright to crawl websites. I have a scraping function that uses Playwright, and have implemented a Python object that uses this function to crawl websites in a breadth-first search manner.</p>
<p>Below is the scraping function:</p>
<pre class="lang-py prettyprint-override"><code>import logging
from collections import deque
from playwright.async_api import Browser, BrowserContext, async_playwright
async def fetch_page_content(
url: str,
browser: Browser = None,
context: BrowserContext = None,
open_pages: deque = None,
max_open_pages: int = 100,
timeout: int = 60000,
headless: bool = True,
logger: logging.Logger = None,
) -> str | None:
should_close_browser = browser is None
should_close_context = context is None
if should_close_browser:
p = await async_playwright().start()
browser = await p.chromium.launch(headless=headless)
if should_close_context:
context = await browser.new_context(
user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
)
if open_pages is not None:
if len(open_pages) >= max_open_pages:
old_page = open_pages.popleft()
await old_page.close()
page = await context.new_page()
if open_pages is not None:
open_pages.append(page)
try:
response = await page.goto(url, timeout=timeout, wait_until="load")
if not response or response.status >= 400:
if logger:
logger.error(f"Failed to fetch {url}")
return None
html = await page.content()
return html
except Exception as e:
if logger:
logger.warning(f"Error fetching {url}: {e}")
return None
finally:
await page.close()
if open_pages is not None and page in open_pages:
open_pages.remove(page)
if should_close_context:
await context.close()
if should_close_browser:
await browser.close()
</code></pre>
<p>Inside of my crawler, the part that uses this function is as follows:</p>
<pre class="lang-py prettyprint-override"><code>async with async_playwright() as p:
browser = await p.chromium.launch(headless=True)
context = await browser.new_context()
total_pages = 0
while self.queue:
batch = []
for _ in range(self.parallel_requests):
if not self.queue:
break
url, depth = self.queue.popleft()
if url in self.visited_urls or depth > self.max_depth:
continue
should_skip_url_, reason = should_skip_url(url=url)
if should_skip_url_:
self.logger.info(f"Skipping {url}: {reason}")
continue
total_pages += 1
self.logger.info(f"[{total_pages}] Crawling: {url} (Depth: {depth})")
self.visited_urls.add(url)
batch.append((url, depth))
self.logger.info(f"open_pages size before fetching batch: {len(self.open_pages)}")
tasks = [
fetch_page_content(
url=url,
context=context,
open_pages=self.open_pages,
max_open_pages=self.max_open_pages,
logger=self.logger,
)
for url, depth in batch
]
html_results = await asyncio.gather(*tasks, return_exceptions=True)
self.logger.info(f"open_pages size after fetching batch: {len(self.open_pages)}")
for (url, depth), html_result in zip(batch, html_results):
processing_successful = await self.process_and_save_fetched_html(url=url, html=html_result)
if not processing_successful:
continue
links = await self.extract_links(html=html_result, base_url=url)
await self.validate_and_enqueue_links(url=url, links=links, depth=depth)
if total_pages % self.restart_interval == 0 and total_pages != 0:
self.logger.info("Restarting browser and context...")
await context.close()
await browser.close()
browser = await p.chromium.launch(headless=True)
context = await browser.new_context()
</code></pre>
<p>What I've tried are as follows:</p>
<ol>
<li>Limit concurrency by using lower values of <code>parallel_requests</code>.</li>
<li>Manually close pages using the <code>open_pages</code> deque.</li>
<li>Restarting the browser when we hit an interval.</li>
</ol>
<p>Playwright is throwing an Errno 24 saying that there are too many files open and that it failed to fetch the webpage.</p>
<p>I'm sort of at a dead end now and am not sure what else I can do. I would rather not manually raise the system's limit on open files, and instead take care of this at the code level, if possible.</p>
|
<python><concurrency><playwright><playwright-python>
|
2025-03-14 07:48:28
| 1
| 3,460
|
Sean
|
79,508,044
| 7,282,437
|
Asynchronously running a function in the background while sending results in Python
|
<p>I have a Python script which I would like to a) Continuously listen for requests from an API, and b) Run an expensive algorithm continuously in the background. The results from the algorithm are saved locally to a file (<code>results.json</code>), and if the API requests the result, I would like to load current results from disk and send it to the endpoint.</p>
<p>I've provided a simplified sketch of the code logic below. However I'm not sure how to structure it so that <code>algo.run</code> runs in the background constantly, while <code>Main.send</code> periodically does the API calls. Since both are blocking operations, should I be using <code>async def</code> in the methods in <code>Algorithm</code> as well? In addition, how would I handle the edge case where <code>Main.send</code> tries to read the file from disk, while <code>algo.run</code> is trying to save the file at the same time?</p>
<p>Any suggestions would be greatly appreciated!</p>
<pre><code>import json
import random
import time

import requests
ENDPOINT = 'http://some_api.com/endpoint' # Some external API
class Main:
def __init__(self):
algo = Algorithm()
# self.send() ## These would block
# algo.run()
async def send(self):
# Continuously listen on an endpoint
while True:
response = requests.get(ENDPOINT).json()
if response['should_send']:
# Load the result from disk
with open('results.json', 'r') as file:
outputs = json.load(file)
# Send results to API
requests.post(ENDPOINT, outputs)
time.sleep(60)
class Algorithm:
def __init__(self):
pass
def run(self):
# Some expensive computations, running repeatedly in background
while True:
outputs = {'result' : random.random()}
time.sleep(60)
# Save result to disk
with open('results.json', 'w') as file:
json.dump(outputs, file)
</code></pre>
|
<python><asynchronous><concurrency><python-asyncio>
|
2025-03-14 02:44:55
| 1
| 389
|
Adam
|
79,507,953
| 5,049,813
|
What type hint should I use for an object that can be iterated through multiple times?
|
<p>Sometimes I have a function that takes a parameter, and I need to be able to loop through that parameter multiple times. For example:</p>
<pre class="lang-py prettyprint-override"><code>def loop_twice(param: MultiIterable):
for thing in param:
print(thing)
for thing in param:
print(f"{thing} again")
</code></pre>
<p>Importantly, I don't care about ordering (if I did, I would use <code>Sequence</code>). I just want to be able to iterate through the data twice. It could be a <code>Set</code>, a <code>Tuple</code>, a <code>List</code>, or any custom object that supports <code>iter</code> being called on it twice.</p>
<p><strong>What's the best type-hint for such a parameter?</strong></p>
<p>Using <code>Iterable</code> doesn't seem right because generators are iterable, but it's also allowed for them to only run once.</p>
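<p>For example, a generator type-checks as <code>Iterable</code> but only survives one pass (the annotation is dropped here so the snippet runs standalone):</p>
<pre class="lang-py prettyprint-override"><code>def loop_twice(param):
    for thing in param:
        print(thing)
    for thing in param:
        print(f"{thing} again")

gen = (x for x in range(3))
loop_twice(gen)  # prints 0, 1, 2 -- the "again" loop prints nothing
</code></pre>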
<p><code>Sequence</code> seems closer, but requires <code>__getitem__</code> which I don't need. <code>Collection</code> currently seems like the best option, but I also don't need <code>__contains__</code>. How should I type-hint this?</p>
|
<python><python-typing>
|
2025-03-14 00:55:56
| 3
| 5,220
|
Pro Q
|
79,507,943
| 10,862,918
|
Pythonic Doxygen function signature rendering
|
<p>Given the Python function:</p>
<pre class="lang-py prettyprint-override"><code> @staticmethod
def check_for_conflicting_method(
this_scope: Scope, target_scope: Scope, new_function: Asts.FunctionPrototypeAst,
conflict_type: FunctionConflictCheckType)\
-> Optional[Asts.FunctionPrototypeAst]:
"""!
Check for conflicting methods between the new function, and functions in the type scope. This function has two
uses:
* Check if an overload conflicts with an existing overload in the same type.
* Check if a method is overriding a base class method (ie there is a signature conflict).
@param this_scope The scope of the new function being checked.
@param target_scope The scope to check for conflicts in (for example a class).
@param new_function The new function AST (its scope is @p this_scope).
@param conflict_type The type of conflict to check for: overload or override.
@return A conflicting function if one is found, or None if no conflict is found.
"""
</code></pre>
<p>the Doxygen render is:</p>
<p><a href="https://i.sstatic.net/nuqRRhhP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuqRRhhP.png" alt="Doxygen Render 1" /></a></p>
<p>I'd like, at the very least, for the render to look like:</p>
<pre class="lang-py prettyprint-override"><code>Optional[Asts.FunctionPrototypeAst]
check_for_conflicting_method(
Scope this_scope,
Scope target_scope,
Asts.FunctionPrototypeAst new_function,
FunctionConflictCheckType conflict_type)
</code></pre>
<p>or even better:</p>
<pre class="lang-py prettyprint-override"><code>check_for_conflicting_method(
this_scope: Scope,
target_scope: Scope,
new_function: Asts.FunctionPrototypeAst,
conflict_type: FunctionConflictCheckType)
-> Optional[Asts.FunctionPrototypeAst]
</code></pre>
<p>But I'm not too bothered about the trailing types. Really, I'd just like a newline between the return type and the function name, and every parameter on its own line, indented.</p>
<p>I've tried a variety of CSS options with <code>table.memname td.funcname/memproto/params/sig-parm/sig-return/tr</code> etc., but haven't found suitable CSS for moving all the parameters onto new lines with a visible indentation.</p>
<pre class="lang-css prettyprint-override"><code>table.memname td.funcname {
display: block;
font-weight: bold;
padding-bottom: 5px;
}
table.memname tr {
display: block;
margin-bottom: 5px;
}
</code></pre>
<p>This brings the parameters, except for the first one, onto new lines, but without any indentation, and I'm not sure where to go from here.</p>
|
<python><doxygen>
|
2025-03-14 00:47:39
| 0
| 491
|
SamG101
|
79,507,897
| 235,218
|
Use a python regex expression to replace spaces between words in string1 with underscore, and add string2 at the end of string1
|
<p>I want to use a Python <em>regex</em> replacement operation to change prospective file names such as</p>
<p><code>Using 3 Buttons in PyQt5 DialogButtonBox</code> into</p>
<p><code>Using_3_Buttons_in_PyQt5_DialogButtonBox_aaa.pdf</code></p>
<p>If I only wanted to replace the spaces with "_", the replacement operation would be trivial and could be solved with the following lines:</p>
<pre><code>import re

string1 = "Using 3 Buttons in PyQt5 DialogButtonBox"
string1_revised = re.sub('[ ]', '_', string1)
</code></pre>
<p>If I wanted to only add '_aaa.pdf' to the end of string1, the operation again would trivial:</p>
<p><code>result2 = re.sub('(?P<wholeThing>.+)', '\g<wholeThing>_aaa.pdf', string1)</code></p>
<p>But I want the replacement operation to replace the spaces with underscores, and add <code>"_aaa.pdf"</code> to the end of <code>string1_revised</code>.</p>
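<p>Chaining my two working substitutions does produce the desired output (shown here for reference), but that is two separate operations rather than the single replacement I'm after:</p>
<pre><code>import re

string1 = "Using 3 Buttons in PyQt5 DialogButtonBox"
step1 = re.sub('[ ]', '_', string1)
result = re.sub('(?P<wholeThing>.+)', r'\g<wholeThing>_aaa.pdf', step1)
# result == 'Using_3_Buttons_in_PyQt5_DialogButtonBox_aaa.pdf'
</code></pre>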
<p>I think the following expression matches <code>string1</code>, and assigns the group name <code>"wholeThing"</code> to <code>string1</code>: <code>(?P<wholeThing>(([A-Za-z0-9]+)([ ]))(x)?)</code></p>
<p>But I can't figure out how to code the replacement.
Any insights would be much appreciated.
Thank you.
Marc</p>
|
<python><regex>
|
2025-03-14 00:00:36
| 1
| 767
|
Marc B. Hankin
|
79,507,891
| 147,530
|
Understanding state management in Python ProcessPoolExecutor
|
<p>What I want to do: I have CPU intensive computation that I want to offload to a worker process so as not to block my main server.</p>
<p>Without ProcessPoolExecutor my code looks like this:</p>
<pre><code>my_model = model_init()
def handler(args):
return my_model.process(args)
</code></pre>
<p>How do I utilize ProcessPoolExecutor?</p>
<p>consider the function that creates a ProcessPoolExecutor:</p>
<pre><code>concurrent.futures.ProcessPoolExecutor(max_workers=1, initializer=init_function, initargs=(model_name,))  # initargs must be a tuple
</code></pre>
<p>say I have:</p>
<pre><code>def init_function():
model = model_init()
</code></pre>
<p>The core of the problem is how to "persist" the <code>model</code> created in the <code>init_function</code>? Later on I want to <code>submit</code> tasks to the ProcessPoolExecutor and be able to use the <code>model</code> created in the <code>init_function</code> to process the task.</p>
<p>It's difficult to explain: the issue is that ProcessPoolExecutor only knows about two functions, the initializer and the function I will provide in <code>submit</code>. I don't know how to pass state between these functions. How can the function that is the argument to <code>submit</code> access a variable that was initialized when <code>initializer</code> was called?</p>
<p>The only example I found is <a href="https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor" rel="nofollow noreferrer">here</a> and that does not go over this issue.</p>
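<p>To make the question concrete, the wiring I have been experimenting with looks roughly like this; the module-level global is a guess on my part, and I don't know whether it is the intended way to persist the model:</p>
<pre><code>import concurrent.futures

class _FakeModel:                      # stand-in for my real model
    def process(self, args):
        return f"processed {args}"

def model_init(name):
    return _FakeModel()

_model = None                          # one copy per worker process

def init_function(model_name):
    global _model
    _model = model_init(model_name)    # runs once in each worker at startup

def handler(args):
    return _model.process(args)        # reads the worker-local global

if __name__ == "__main__":
    ex = concurrent.futures.ProcessPoolExecutor(
        max_workers=1, initializer=init_function, initargs=("my-model",)
    )
    print(ex.submit(handler, "task-1").result())
    ex.shutdown()
</code></pre>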
|
<python>
|
2025-03-13 23:55:55
| 1
| 20,700
|
morpheus
|
79,507,886
| 10,083,382
|
MacOS - Pty Host Disconnected VS Code
|
<p>I am using VS Code to execute Python code inline. Execution seems to work fine if I run line by line or a couple of lines of code. Once I execute multiple code blocks, the terminal becomes unresponsive and then disconnects with the error <code>pty host disconnected</code>.</p>
<p>There is an <a href="https://stackoverflow.com/questions/75095228/connection-to-the-terminals-pty-host-process-is-unresponsive">existing question</a> but the solution doesn't seem viable because even after I reconnect I have to start all over again, as all the variables are gone. The same code works fine on a Windows machine.</p>
<p>What can be done to resolve this issue?</p>
<p>Other Details:</p>
<pre><code>VS Code version: 1.98.0
MacOS: Sequoia 15.3
Apple Silicon M4
</code></pre>
<p>Adding sample code below for reproducibility. The disconnection always occurs on the <code>if-else</code> statement if I select the entire code below and run it inline.</p>
<pre><code>import pandas as pd
import numpy as np
import scipy.stats as stats
# Set random seed for reproducibility
np.random.seed(42)
# Create a DataFrame with random data
data = {
'Age': np.random.randint(20, 60, 100),
'Height': np.random.normal(170, 10, 100),
'Weight': np.random.normal(70, 15, 100),
'Income': np.random.normal(50000, 15000, 100)
}
df = pd.DataFrame(data)
# Display basic statistics
print("Basic Statistics:")
print(df.describe())
# Hypothesis Testing: Is the mean height significantly different from 170?
height_sample = df['Height']
mean_height = 170
# Perform a one-sample t-test
t_stat, p_value = stats.ttest_1samp(height_sample, mean_height)
print("\nHypothesis Test: One-Sample t-test for Mean Height")
print(f"t-statistic: {t_stat:.4f}, p-value: {p_value:.4f}")
# Interpretation
alpha = 0.05
if p_value < alpha:
print("Reject the null hypothesis: The mean height is significantly different from 170.")
else:
print("Fail to reject the null hypothesis: No significant difference from 170.")
# Confidence Interval for Mean Weight
weight_sample = df['Weight']
mean_weight = np.mean(weight_sample)
std_error = stats.sem(weight_sample)
confidence_level = 0.95
conf_interval = stats.t.interval(confidence_level, len(weight_sample)-1, loc=mean_weight, scale=std_error)
print("\nConfidence Interval for Mean Weight:")
print(f"{confidence_level*100:.0f}% Confidence Interval: {conf_interval}")
# Correlation Matrix
print("\nCorrelation Matrix:")
print(df.corr())
</code></pre>
<p><a href="https://i.sstatic.net/JfcVSNL2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfcVSNL2.png" alt="enter image description here" /></a></p>
|
<python><macos><visual-studio-code><execution>
|
2025-03-13 23:46:07
| 1
| 394
|
Lopez
|
79,507,737
| 785,494
|
How to exit a Python Win32 GUI program on Ctrl-C from invoking console
|
<p>I have a simple Python Win32 program that works well when run under <code>pythonw</code> from a command prompt. When I run it under <code>python</code>, it keeps running in the foreground of the command prompt and can e.g. print to STDOUT, which is very useful for debugging. However, it ignores Ctrl-C. I would like to have it exit when Ctrl-C is sent to the console it is running from.</p>
<p>Relevant code is:</p>
<pre><code>def main():
    # ... window creation and other setup omitted ...
    update_gui_thread = threading.Thread(target=gui_clock)
    update_gui_thread.start()
    win32gui.PumpMessages()

def signal_handler(sig, frame):
print('Ctrl-C pressed, exiting.')
win32gui.PostQuitMessage(0)
sys.exit(0)
if __name__ == '__main__':
signal.signal(signal.SIGINT, signal_handler)
main()
</code></pre>
<p>When I hit Ctrl-C, it prints to Console <code>Ctrl-C pressed, exiting. Python WNDPROC handler failed</code>. The GUI freezes but stays on the screen. Only manually terminating the process via Task Manager make the GUI go away.</p>
<p>I think that most likely the <code>update_gui_thread</code> is exiting but not the main window.</p>
<p>How can I make it exit completely on Ctrl-C?</p>
|
<python><user-interface><winapi><python-multithreading><pywin32>
|
2025-03-13 22:06:12
| 0
| 9,357
|
SRobertJames
|
79,507,720
| 16,383,578
|
How to improve the approximation of π using Machin-like formulas?
|
<p>I want to compute π using <a href="https://en.wikipedia.org/wiki/Machin-like_formula" rel="nofollow noreferrer">Machin-like formulae</a>, I use <a href="https://en.wikipedia.org/wiki/Arctangent_series#Accelerated_series" rel="nofollow noreferrer">Newton's accelerated series</a> to make the approximation converge faster:</p>
<pre><code>def Newton_arctan(x: int | float, lim: int) -> float:
if not (lim and isinstance(lim, int)):
raise ValueError(f"Argument lim must be a positive integer, received {lim}")
square = x**2
y = y_0 = 1 + square
even_p = even = 2
odd_p = odd = 3
s = x / y
for _ in range(lim - 1):
s += even_p / odd_p * (x := x * square) / (y := y * y_0)
even += 2
odd += 2
even_p *= even
odd_p *= odd
return s
def Machin_Pi_worker(terms: list, lim: int) -> float:
return 4 * sum(coef * Newton_arctan(1 / denom, lim) for coef, denom in terms)
def Machin_Pi1(lim: int) -> float:
return Machin_Pi_worker(((4, 5), (-1, 239)), lim)
</code></pre>
<pre><code>In [178]: old = Machin_Pi1(i := 1)
...: while True:
...: if (new := Machin_Pi1(i := i + 1)) == old:
...: break
...:
...: old = new
In [179]: i -= 1; print(i, Machin_Pi1(i))
11 3.141592653589793
</code></pre>
<p>This time it also took 11 iterations to reach maximum precision, and all digits are correct, though the result has only 15 decimal places; interestingly, this value is the same as <code>math.pi</code>.</p>
<p>I tried a bunch of other formulae from <a href="https://en.wikipedia.org/wiki/Approximations_of_%CF%80#Machin-like_formula" rel="nofollow noreferrer">here</a>:</p>
<pre><code>def Machin_Pi2(lim: int) -> float:
return Machin_Pi_worker(((6, 8), (2, 57), (1, 239)), lim)
def Machin_Pi3(lim: int) -> float:
return Machin_Pi_worker(((12, 18), (8, 57), (-5, 239)), lim)
def Machin_Pi4(lim: int) -> float:
return Machin_Pi_worker(((12, 49), (32, 57), (-5, 239), (12, 110443)), lim)
def Machin_Pi5(lim: int) -> float:
return Machin_Pi_worker(((44, 57), (7, 239), (-12, 682), (24, 12943)), lim)
test_result = {}
for i in range(1, 6):
func = globals()[fname := f"Machin_Pi{i}"]
old = func(j := 1)
while True:
if (new := func(j := j + 1)) == old:
break
old = new
test_result[fname] = (new, j - 1)
</code></pre>
<pre><code>{'Machin_Pi1': (3.141592653589793, 11),
'Machin_Pi2': (3.1415926535897936, 9),
'Machin_Pi3': (3.1415926535897927, 7),
'Machin_Pi4': (3.1415926535897927, 5),
'Machin_Pi5': (3.1415926535897922, 5)}
</code></pre>
<p>The later series converged faster, but they reached float underflow before they reached the maximum possible precision in double precision floating point format.</p>
<p>Now I think to minimize the impact of float underflow I want to make it so the numerator and denominator are computed separately as integers, so that we don't lose precision before final division.</p>
<p>It has been many years since I last used pen and paper and my math is extremely rusty, but I did the following computation:</p>
<p><a href="https://i.sstatic.net/fSDC846t.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fSDC846t.jpg" alt="enter image description here" /></a></p>
<p>And I re-implemented the whole thing:</p>
<pre><code>from typing import List, Tuple
Fraction = Tuple[int, int]
def Newton_arctan_xr(i: int | float, lim: int) -> Fraction:
if not (lim and isinstance(lim, int)):
raise ValueError(f"Argument lim must be a positive integer, received {lim}")
cur_hi = dividend = i_sqr = i * i
i_sqr_p = i_sqr + 1
divisor = i * i_sqr_p
even = 2
odd = 3
for _ in range(lim - 1):
cur_hi *= even * i_sqr
divisor *= (prod := odd * i_sqr * i_sqr_p)
dividend = dividend * prod + cur_hi
even += 2
odd += 2
return dividend, divisor
def add_fractions(frac1: Fraction, frac2: Fraction) -> Fraction:
a, b = frac1
c, d = frac2
return (a * d + b * c, b * d)
def sum_fractions(fractions: List[Fraction]) -> Fraction:
result = fractions[0]
for frac in fractions[1:]:
result = add_fractions(result, frac)
return result
def gcd(x: int, y: int) -> int:
while y != 0:
(x, y) = (y, x % y)
return x
def Machin_Pi_worker1(terms: List[Tuple[int, int]], lim: int) -> Fraction:
fractions = []
for coef, inv in terms:
dividend, divisor = Newton_arctan_xr(inv, lim)
fractions.append((coef * dividend, divisor))
dividend, divisor = sum_fractions(fractions)
dividend *= 4
extra = gcd(dividend, divisor)
return dividend // extra, divisor // extra
def Machin_Pi_1(lim: int) -> Fraction:
return Machin_Pi_worker1(((4, 5), (-1, 239)), lim)
def Machin_Pi_2(lim: int) -> Fraction:
return Machin_Pi_worker1(((6, 8), (2, 57), (1, 239)), lim)
def Machin_Pi_3(lim: int) -> Fraction:
return Machin_Pi_worker1(((12, 18), (8, 57), (-5, 239)), lim)
def Machin_Pi_4(lim: int) -> Fraction:
return Machin_Pi_worker1(((12, 49), (32, 57), (-5, 239), (12, 110443)), lim)
def Machin_Pi_5(lim: int) -> Fraction:
return Machin_Pi_worker1(((44, 57), (7, 239), (-12, 682), (24, 12943)), lim)
</code></pre>
<pre><code>In [230]: Machin_Pi_5(5)
Out[230]:
(1279457632672435538478197124236187110232840682131383545616,
407264013432945209516129385309101616710788249969482421875)
In [231]: 1279457632672435538478197124236187110232840682131383545616/407264013432945209516129385309101616710788249969482421875
Out[231]: 3.141592653589793
</code></pre>
<p>It works, but I don't know if I have reduced the repeated calculations to the minimum, and I don't know what library I could use to speed up the execution. I am not asking for software recommendations, though, so you can just implement it in pure Python like me, but the answer is required to run faster than my code.</p>
<p>I am also really interested in getting more digits: I use <code>gmpy2.mpfr</code>, and I want to know how to accurately estimate the number of correct digits a particular fraction has, so that the precision argument can be chosen accordingly.</p>
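<p>One rough idea for the digit count (a sketch, not a rigorous bound) is to compare two successive approximations with exact <code>Decimal</code> division and count how many leading characters agree:</p>
<pre><code>from decimal import Decimal, getcontext

getcontext().prec = 80  # plenty for this comparison

def as_decimal(frac):
    p, q = frac
    return Decimal(p) / Decimal(q)

a = as_decimal(Machin_Pi_5(5))
b = as_decimal(Machin_Pi_5(6))
agree = 0
for x, y in zip(str(a), str(b)):
    if x != y:
        break
    agree += 1
print(a)
print(agree, "leading characters agree between lim=5 and lim=6")
</code></pre>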
|
<python><math><pi><taylor-series>
|
2025-03-13 21:57:48
| 1
| 3,930
|
Ξένη Γήινος
|
79,507,694
| 14,351,026
|
Unpredictable behaviour when reading from stdin in python daemon thread
|
<p>The <a href="https://docs.python.org/3/library/threading.html" rel="nofollow noreferrer">python documentation</a> says of daemon threads:</p>
<blockquote>
<p>A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left.</p>
</blockquote>
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import threading
threading.Thread(target=input, daemon=True).start()
</code></pre>
<p>Given the above definition, it may be reasonable to assume that executing the above script would cause the process to terminate immediately, as the main thread ends and leaves only the daemon thread in a call to <code>input()</code>.</p>
<p>However, this example hangs indefinitely waiting for input (tested on CPython 3.10.12 and 3.13.1, intel x86 and Arch Linux).
Attempting to redirect stdout (i.e. <code>python3 test.py > /dev/null</code>) does not continue to wait for input, but exits after 1-2 seconds with <code>IOT instruction (core dumped)</code> and the following message to stderr:</p>
<pre class="lang-bash prettyprint-override"><code>Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedReader name='<stdin>'> at interpreter shutdown, possibly due to daemon threads
Python runtime state: finalizing (tstate=0x000060b51e205c10)
Current thread 0x00007ad136221080 (most recent call first):
<no Python frame>
</code></pre>
<p>In an attempt to locate the cause of this, I tried with the (sufficiently) functionally equivalent <code>sys.stdin.readline()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import threading
import sys
threading.Thread(target=sys.stdin.readline, daemon=True).start()
</code></pre>
<p>This works as one would expect, and exits immediately, and with no error when redirecting stdout.</p>
<p>I then tried on an ARM machine running Ubuntu and CPython 3.10.12. Both <code>input</code> and <code>sys.stdin.readline</code> exit as expected, but the crash when redirecting stdout occurs only for <code>input</code>.</p>
<p>With CPython 2.7.18, <code>raw_input</code> and <code>sys.stdin.readline</code> both cause the indefinite hang without the crash when redirecting stdout on the original machine, and both behave exactly as expected on the ARM machine.</p>
<hr />
<p>What's going on here? I think there are 4 main questions here:</p>
<ol>
<li>Why am I experiencing such different behaviour between <code>input()</code> and <code>sys.stdin.readline()</code>?</li>
<li>Why am I experiencing such different behaviour between python versions?</li>
<li>Why am I experiencing such different behaviour between machines?</li>
<li>Why does redirecting stdout trigger a crash?</li>
</ol>
|
<python><linux><multithreading><python-multithreading><cpython>
|
2025-03-13 21:43:46
| 0
| 406
|
pjones123
|
79,507,557
| 4,474,230
|
How to send Slack api request to websocket without Bolt or other SDK?
|
<p>I have the code below. It connects correctly to Slack, and authenticate correctly.</p>
<p>I want to send a conversations.list request to list the channels. How can I do that without using Bolt or some other SDK? The system where this runs is locked down, so I can't install anything beyond websockets.</p>
<p>I think the only missing part is figuring what to send on the websocket to request a channel list. Currently it outputs:</p>
<pre><code>Opened connection
{"type":"hello","num_connections":1,"debug_info":{"host":"applink-11","build_number":105,"approximate_connection_time":18060},"connection_info":{"app_id":"Redacted"}}
</code></pre>
<p>The API I'm after is <a href="https://api.slack.com/methods/conversations.list" rel="nofollow noreferrer">https://api.slack.com/methods/conversations.list</a></p>
<p>Code is</p>
<pre><code>#!/usr/bin/env python2.6
import httplib
import requests
import websocket
import argparse
def on_message(ws, message):
print(message)
ws.send("list")
def on_error(ws, error):
print(error)
def on_close(ws, close_status_code, close_msg):
print("### closed ###")
def on_open(ws):
print("Opened connection")
def run_with(url):
# websocket.enableTrace(True)
ws = websocket.WebSocketApp(url,
on_open=on_open,
on_message=on_message,
on_error=on_error,
on_close=on_close)
ws.run_forever()
def main():
parser = argparse.ArgumentParser()
parser.add_argument("token")
args = parser.parse_args()
url = 'https://slack.com/api/apps.connections.open'
headers = {
'content-type': 'application/x-www-form-urlencoded',
'Authorization': 'Bearer ' + args.token}
r = requests.post(url, headers=headers)
if r.json()['ok']:
url = r.json()['url']
run_with(url)
else:
print(r.content)
if __name__ == '__main__':
main()
</code></pre>
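<p>For what it's worth, the closest I have gotten is calling the Web API method directly over HTTPS with <code>requests</code> (sketch below). I am not sure this counts as going through the websocket, and the required token scope is my assumption:</p>
<pre><code>import requests

def list_channels(token):
    # Plain Web API call; assumes the token has the channels:read scope
    url = 'https://slack.com/api/conversations.list'
    headers = {'Authorization': 'Bearer ' + token}
    r = requests.get(url, headers=headers)
    data = r.json()
    if data.get('ok'):
        for channel in data.get('channels', []):
            print('%s %s' % (channel['id'], channel['name']))
    else:
        print(data)
</code></pre>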
|
<python><websocket><slack-api>
|
2025-03-13 20:27:58
| 2
| 11,112
|
Jeffrey
|
79,507,355
| 8,006,721
|
How to compare two list of objects using the object's equal method in Python
|
<p>I have a class that comes with a <code>.equals()</code> method. I'd like to compare two unordered lists of these objects to see if they are <em><strong>not</strong></em> equal.</p>
<p>My current approach is to loop through list 1, then loop through list 2.</p>
<pre><code>def objectListsNotEqual(list_1, list_2):
if(len(list_1) != len(list_2)):
return True
# same length lists, so check it
for obj_1 in list_1:
if(not(any(obj_1.equals(obj_2) for obj_2 in list_2))):
return True
for obj_2 in list_2:
if(not(any(obj_2.equals(obj_1) for obj_1 in list_1))):
return True
# if execution reaches here, then both lists match
return False
</code></pre>
<p>However, this doesn't account for when list 1 and list 2 have different counts when the <code>.equals()</code> method returns <code>True</code>. e.g. if I had:</p>
<pre><code>list_1 = [obj_1(foo), obj_2(foo), obj_3(bar)]
list_2 = [obj_4(foo), obj_5(bar), obj_6(bar)]
objectListsNotEqual(list_1, list_2) # --> returns False due to the use of any() function
</code></pre>
<p>I would want the function to return <code>True</code> in the above example, since the lists have different counts of <code>foo</code> and <code>bar</code> instances and are therefore not equal.</p>
<p><strong>Edit</strong>: class is a <a href="https://pro.arcgis.com/en/pro-app/latest/arcpy/classes/polyline.htm" rel="nofollow noreferrer">Polyline</a> which is built from the <a href="https://pro.arcgis.com/en/pro-app/latest/arcpy/classes/geometry.htm" rel="nofollow noreferrer">Geometry object</a>. Don't really know how it works internally.</p>
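<p>To illustrate the counting behaviour I am after, this is the quadratic approach I am currently considering (a sketch only; it assumes <code>.equals()</code> is symmetric, and I am asking whether there is something better):</p>
<pre><code>def objectListsNotEqual(list_1, list_2):
    # True if the lists are NOT equal as unordered multisets (by .equals()).
    if len(list_1) != len(list_2):
        return True
    remaining = list(list_2)
    for obj_1 in list_1:
        for i, obj_2 in enumerate(remaining):
            if obj_1.equals(obj_2):
                del remaining[i]  # consume the match so duplicates are counted
                break
        else:
            return True  # no unmatched element left for obj_1
    return False
</code></pre>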
|
<python><python-3.x><list><compare>
|
2025-03-13 18:48:12
| 2
| 332
|
sushi
|
79,507,238
| 250,003
|
ModuleNotFoundError: No module named 'LibraryExplorer'
|
<p>I am a newbie in Python and cannot figure out how to solve this error.</p>
<p>This is the structure of my project:
<a href="https://i.sstatic.net/Alne7J8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Alne7J8J.png" alt="my project struct" /></a></p>
<p>and this is part of my code inside explorer.py</p>
<pre><code>class Explorer:
def __init__(self, library_path: str):
self.library_path = library_path
db_path = os.path.join(library_path, "library.json")
# TinyDB will automatically create the file if it doesn't exist
# or open it if it does exist
self.db = TinyDB(db_path)
def get_all_books(self) -> List[Book]:
books = []
results = self.db.all()
for book_data in results:
book = Book(book_data['author'], book_data['title'])
book.set_filename(os.path.basename(book_data['path']))
book.tags = book_data['tags']
books.append(book)
return books
</code></pre>
<p>In main.py I try to use this code, but I get this error:</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\salva\OneDrive\Documents\Developing\ScriptLibraryManager\scripts\main.py",
line 6, in
from LibraryExplorer.explorer import Explorer ModuleNotFoundError: No module named 'LibraryExplorer'</p>
</blockquote>
<p>just on</p>
<pre><code>from LibraryExplorer.explorer import Explorer
from LibraryExplorer.book import Book
</code></pre>
<p>With <code>print(f"Looking on {sys.path}")</code> in main.py I am getting this result:</p>
<pre><code>Looking on ['C:\\Users\\salva\\OneDrive\\Documents\\Developing\\ScriptLibraryManager', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.13_3.13.752.0_x64__qbz5n2kfra8p0\\python313.zip', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundatio
n.Python.3.13_3.13.752.0_x64__qbz5n2kfra8p0\\DLLs', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.13_3.13.752.0_x64__qbz5n2kfra8p0\\Lib', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.13_3.13.752.0_x64__qbz5n2kfra8p0', 'C:\\Users\\salva\\OneDrive\\Documents\\Developing\\ScriptLibraryManager\\.venv', 'C:\\Users\\salva\\OneDrive\\Documents\\Developing\\ScriptLibraryManager\\.venv\\Lib\\site-packages']
</code></pre>
<p>and if I try to execute like this</p>
<blockquote>
<p>python scripts/main.py</p>
</blockquote>
<p>I get this</p>
<blockquote>
<p>python scripts/main.py Traceback (most recent call last): File
"C:\Users\salva\OneDrive\Documents\Developing\ScriptLibraryManager\scripts\main.py",
line 9, in
from LibraryExplorer.explorer import Explorer ModuleNotFoundError: No module named 'LibraryExplorer'</p>
</blockquote>
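<p>For completeness, the workaround I am currently considering is prepending the directory that contains the <code>LibraryExplorer</code> package to <code>sys.path</code> at the top of main.py (only a sketch of the idea; the relative location is my assumption and I would prefer a proper fix):</p>
<pre><code>import sys
from pathlib import Path

# Assumption: LibraryExplorer is a folder with an __init__.py, and the
# directory that contains it is one level above scripts/main.py.
PACKAGE_PARENT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(PACKAGE_PARENT))

from LibraryExplorer.explorer import Explorer
from LibraryExplorer.book import Book
</code></pre>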
|
<python>
|
2025-03-13 17:48:36
| 1
| 715
|
Salvatore Di Fazio
|
79,507,207
| 13,132,728
|
Create an N length list by uniformly (in frequency) selecting items from a separate list in python
|
<h2>SETUP</h2>
<p>I have a list <code>days</code> and a value <code>N</code></p>
<pre><code>days = ['Monday','Tuesday','Wednesday','Thursday','Friday']
N = 52
</code></pre>
<h2>WHAT I AM TRYING TO DO</h2>
<p>I am trying to create a list <code>selections</code> with length <code>N</code> where I <em>uniformly in frequency</em> sample values from <code>days</code> (remainders are fine). I would like the order of this list to then be shuffled.</p>
<h2>EXAMPLE OUTPUT</h2>
<p><strong>NOTE HOW THE ORDER IS SHUFFLED, BUT THE DISTRIBUTION OF VALUES IS UNIFORM</strong></p>
<pre><code>selections
['Wednesday','Friday','Monday',...'Tuesday','Thursday','Monday']
import collections
counter = collections.Counter(selections)
counter
Counter({'Monday': 11, 'Tuesday': 10, 'Wednesday': 11, 'Thursday': 10, 'Friday': 10})
</code></pre>
<h2>WHAT I HAVE TRIED</h2>
<p>I have code to randomly select <code>N</code> values from <code>days</code></p>
<pre><code>from random import choice, seed
seed(1)
days = ['Monday','Tuesday','Wednesday','Thursday','Friday']
N = 52
selections = [choice(days) for x in range(N)]
</code></pre>
<p>But they aren't selected uniformly</p>
<pre><code>import collections
counter = collections.Counter(selections)
counter
Counter({'Tuesday': 9,
'Friday': 8,
'Monday': 14,
'Wednesday': 7,
'Thursday': 14})
</code></pre>
<p>How can I adjust this code or what different method will create a list of length <code>N</code> with a uniform distribution of values from <code>days</code> in a random order?</p>
<hr />
<p>EDIT: I obviously seemed to have phrased this question poorly. I am looking for list with length <code>N</code> with a uniform distribution of values from <code>days</code> but in a <strong>shuffled</strong> order (what I meant by random.) So I suppose what I am looking for is how to uniformly sample values from <code>days</code> <code>N</code> times, then just shuffle that list. Again, I want an equal amount of each value from <code>days</code> making up a list with length <code>N</code>. I need a uniform distribution for a list of <em>exactly</em> length 52, just as the example output shows.</p>
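<p>To be concrete, the kind of construction I have in mind is "repeat the values as evenly as possible, then shuffle" (sketch below); I am asking whether this is the right approach or whether <code>random</code> has something built in for it:</p>
<pre><code>from itertools import cycle, islice
from random import seed, shuffle
import collections

seed(1)
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
N = 52

# Take the first N items of the endlessly repeated list, then shuffle in place
selections = list(islice(cycle(days), N))
shuffle(selections)

print(collections.Counter(selections))
# counts: two days get 11, the rest get 10 (52 = 5*10 + 2)
</code></pre>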
|
<python><list><random>
|
2025-03-13 17:32:28
| 5
| 1,645
|
bismo
|
79,506,916
| 620,679
|
Sum of products of columns in polars
|
<p>I have a dataset, part of which looks like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>customer</th>
<th>product</th>
<th style="text-align: right;">price</th>
<th style="text-align: right;">quantity</th>
<th>sale_time</th>
</tr>
</thead>
<tbody>
<tr>
<td>C060235</td>
<td>P0204</td>
<td style="text-align: right;">6.99</td>
<td style="text-align: right;">2</td>
<td>2024-03-11 08:24:11</td>
</tr>
<tr>
<td>C045298</td>
<td>P0167</td>
<td style="text-align: right;">14.99</td>
<td style="text-align: right;">1</td>
<td>2024-03-11 08:35:06</td>
</tr>
<tr>
<td>...</td>
<td></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td></td>
</tr>
<tr>
<td>C039877</td>
<td>P0024</td>
<td style="text-align: right;">126.95</td>
<td style="text-align: right;">1</td>
<td>2024-09-30 21:18:45</td>
</tr>
</tbody>
</table></div>
<p>What I want is a list of unique customer, product pairs with the total sales, so something like:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>customer</th>
<th>product</th>
<th style="text-align: right;">total</th>
</tr>
</thead>
<tbody>
<tr>
<td>C0000105</td>
<td>P0168</td>
<td style="text-align: right;">643.78</td>
</tr>
<tr>
<td>C0000105</td>
<td>P0204</td>
<td style="text-align: right;">76.88</td>
</tr>
<tr>
<td>...</td>
<td></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td>C1029871</td>
<td>P1680</td>
<td style="text-align: right;">435.44</td>
</tr>
</tbody>
</table></div>
<p>Here's my attempt at constructing this. This gives me the grand total of all sales, which isn't what I want. What's a correct approach?</p>
<pre><code>import polars as pl
db.select(
(
pl.col('customer'),
pl.col('product'),
pl.col('quantity').mul(pl.col('price')).alias('total')
)
).group_by(('customer', 'product'))
</code></pre>
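<p>For reference, the pandas version of what I am after would look roughly like the following (just to illustrate the intent; <code>db_pd</code> is my assumed name for the same data as a pandas DataFrame, and I want the idiomatic polars equivalent):</p>
<pre><code>import pandas as pd

# db_pd: same columns as above (customer, product, price, quantity, sale_time)
totals = (
    db_pd.assign(total=db_pd['price'] * db_pd['quantity'])
         .groupby(['customer', 'product'], as_index=False)['total']
         .sum()
)
print(totals)
</code></pre>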
|
<python><dataframe><python-polars>
|
2025-03-13 15:28:55
| 3
| 4,041
|
Scott Deerwester
|
79,506,903
| 1,708,779
|
Cannot submit login form with Selenium (Python) using Headless Remote Driver
|
<p>I am trying to login to a website using Selenium. The relevant form HTML is as follows:</p>
<pre class="lang-html prettyprint-override"><code><form class="ActionEmittingForm form-wrapper decorate-required" method="post" novalidate="" data-rf-form-name="LoginPageForm_SignInForm" data-rf-test-name="ActionEmittingForm">
<span data-rf-test-name="Text" class="field text Text required email-field emailInput">
<label class="label" data-rf-test-name="label" aria-label="Email Address. Required field.">Email Address</label>
<span class="input">
<div role="presentation" class="value text">
<input type="text" name="emailInput" value="" placeholder="&nbsp;" inputmode="" class="text" data-rf-input-event-type="onInput" data-rf-test-name="input" aria-label="Email Address" aria-required="true" tabindex="0">
</div>
</span>
</span>
<span data-rf-test-name="Text" class="field text Text required password-field passwordInput">
<label class="label" data-rf-test-name="label" aria-label="Password. Required field.">Password</label>
<span class="input">
<div role="presentation" class="value text">
<input type="password" name="passwordInput" value="" placeholder="&nbsp;" inputmode="" class="password" data-rf-input-event-type="onInput" data-rf-test-name="input" aria-label="Password" aria-required="true" tabindex="0">
</div>
</span>
</span>
<button type="submit" class="button Button primary submitButton" tabindex="0" role="button" data-rf-test-name="submitButton">
<span>
<span>Sign In</span>
</span>
</button>
</form>
</code></pre>
<p>I am using the Selenium Python package. The host running Selenium/python is a Debian 12 server (without a GUI) in my network. So the driver must be run in headless mode.</p>
<p>I am using a remote driver, running in Docker on the same host with a standalone Firefox browser. I am using Docker because I don't want to manage browser versions using packages on my host. Deploying a pinned standalone browser is very easy with Docker.</p>
<p>I've chosen to try Firefox, because I prefer that over Chrome. But I could be convinced to switch if that is part of the problem.</p>
<p>Selenium version is 4.29.0. Firefox version is 135.0.1. Geckodriver is 0.36.0. Python version 3.11.</p>
<p>Relevant code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import time
import docker
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
def setupDockerFirefoxSeleniumContainer():
client = docker.from_env()
client.images.pull("selenium/standalone-firefox:135.0.1-geckodriver-0.36.0-20250303")
firefox = client.containers.run("selenium/standalone-firefox:135.0.1-geckodriver-0.36.0-20250303",
detach = True,
name = "firefox",
ports = {4444: 4444, 7900: 7900},
shm_size = "2G",
environment = ["SE_START_XVFB=false",
"SE_SCREEN_WIDTH=1200",
"SE_SCREEN_HEIGHT=900"])
return firefox
def setupSeleniumFirefoxDriver():
try:
options=webdriver.FirefoxOptions()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:36.0) Gecko/20100101 Firefox/135.0")
driver = webdriver.Remote(
command_executor="http://127.0.0.1:4444/wd/hub",
options=options
)
driver.get("https://httpbin.org/ip")
print("Successfully started Firefox driver.")
return driver
except Exception as e:
print("Caught an exception: ", repr(e))
def cleanupDriver(d):
print("Cleaning up driver")
d.quit()
def cleanupContainer(c):
print("Cleaning up container")
c.stop()
c.remove()
def siteLogin(driver, username, password):
driver.get("https://www.MYWEBSITE.com/login")
driver.implicitly_wait(5)
driver.get_screenshot_as_file("/home/fresh_login_screen.png")
username_box = driver.find_element(By.CSS_SELECTOR, "input[name='emailInput']")
password_box = driver.find_element(By.CSS_SELECTOR, "input[name='passwordInput']")
submit_button = driver.find_element(By.CSS_SELECTOR, "button.submitButton")
time.sleep(2)
username_box.send_keys(username)
time.sleep(2)
password_box.send_keys(password)
time.sleep(2)
driver.get_screenshot_as_file("/home/login_screen_keys_sent.png")
time.sleep(2)
#submit_button.click() # doesn't work
#submit_button.submit() # doesn't work
#password_box.submit() # doesn't work
#password_box.send_keys(Keys.ENTER) # doesn't work
submit_button.send_keys(Keys.ENTER) # doesn't work
print("Waiting for login to process.")
time.sleep(5)
driver.get_screenshot_as_file("/home/submitted_login.png")
time.sleep(10)
driver.get_screenshot_as_file("/home/another_screenshot.png")
print(driver.page_source)
def main():
firefoxContainer = setupDockerFirefoxSeleniumContainer()
print("Waiting 5 seconds for Selenium server to start")
time.sleep(5)
firefoxDriver = setupSeleniumFirefoxDriver()
try:
siteLogin(firefoxDriver, "myEmail@example.com", "crazyHardPassword")
except Exception as e:
print("Caught an exception when trying siteLogin: ", repr(e))
finally:
cleanupDriver(firefoxDriver)
cleanupContainer(firefoxContainer)
if __name__=="__main__":
main()
</code></pre>
<p>The login submission is not working. The page just seems to hang. In my screenshots, I can see my username and password have been sent to the correct form boxes. However, I've tried several ways to submit the form (<code>send_keys(Keys.ENTER)</code>, <code>click()</code>, and <code>submit()</code>), but all seem to result in nothing happening. The screenshot looks the same before and after (with my username and password sitting in the form, but it looks like no form submission was attempted). Any tips for how to submit this form?</p>
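<p>One thing I have not tried yet but am considering is an explicit wait until the button is actually clickable before sending the click (sketch below, unverified; I do not know whether the missing wait is the real problem):</p>
<pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def click_when_clickable(driver, css_selector, timeout=15):
    # Wait for the element to be present, visible and enabled, then click it
    button = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, css_selector))
    )
    button.click()

# e.g. inside siteLogin(), instead of submit_button.click():
# click_when_clickable(driver, "button.submitButton")
</code></pre>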
|
<python><html><selenium-webdriver>
|
2025-03-13 15:25:13
| 3
| 1,690
|
GNUser
|
79,506,811
| 14,132
|
What could cause strange delays while sending delays from a python aiohttp server?
|
<p>We have a service with this architecture:</p>
<ul>
<li>HTTPS requests come into an A10 load balancer that does L4 load balancing</li>
<li>Behind it are 2 backend servers with Apache running that terminate the TLS connection</li>
<li>In Apache there is a <code>ProxyPass</code> rule that talks to a http service on localhost</li>
<li>this local service uses gunicorn and is implemented in Python using aiohttp (Python 3.11.2 running on Debian Linux ("bookworm") running as a VMWare VM)</li>
</ul>
<p>Now we have some cases where users reported timeouts. Both the python application and Apache write logs, and so we've traced some of the timeouts to a weird delay. The python application logs a message right before it passes it response to gunicorn (and thus to Apache), and sometimes the time between this log and the Apache access log is really long, like 300s.</p>
<p>This is a somewhat rare occurrence, in a day where we process about 880k requests there are just 17 requests where the delay is 30s or more, which makes this kinda hard to debug.</p>
<p>Capturing all the network traffic in a huge <code>.pcap</code> file and then sifting through it is kinda hopeless, far too much data, and the responses from Apache are encrypted, so it makes it really hard to trace request IDs in the pcap files.</p>
<p>Most of the requests with delays have a response size of a few kilobytes to a few megabytes, though very occasionally we also see slow requests with a response body under 1 kB, which should fit into a single TCP packet.</p>
<p>Does anybody have a good hypothesis where this delay could come from, and how I could debug it?</p>
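<p>One idea I have for narrowing it down is stamping each response with the server-side elapsed time, so it can be compared against what Apache logs (a sketch only; the header name is made up, and whether a plain aiohttp middleware plays nicely with our gunicorn worker setup is an assumption on my part):</p>
<pre class="lang-py prettyprint-override"><code>import time
from aiohttp import web

@web.middleware
async def timing_middleware(request, handler):
    start = time.monotonic()
    response = await handler(request)
    # Hypothetical header carrying the app-side handler duration in seconds
    response.headers['X-App-Elapsed'] = '%.3f' % (time.monotonic() - start)
    return response

# app = web.Application(middlewares=[timing_middleware])
</code></pre>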
|
<python><apache><debugging><gunicorn><latency>
|
2025-03-13 14:52:03
| 1
| 12,742
|
moritz
|
79,506,603
| 940,490
|
Check units of pandas DateTime index with pandera
|
<p>I am new to <code>pandera</code> and am still learning how it works. What is the easiest way to check that the <code>datetime</code> units of an index are in nanoseconds and not milliseconds?</p>
<p>In a perfect world, I am looking for compact declarations of this check inside of the class-based API definitions. If the solution attempt 2 is the best way of doing this, I will be happy with this answer as I am looking for a more experienced perspective.</p>
<h3>Solution Attempt 1</h3>
<p>First I tried the way that looked most intuitive after studying the docs, but this did not produce the desired result: the millisecond index does not raise a schema error.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pandera as pa
from pandera import DataFrameModel, Field
from pandera.typing import Index
from pandera.engines import pandas_engine
class DateIndexSchema(DataFrameModel):
date: Index[pandas_engine.DateTime] = Field(nullable=False, dtype_kwargs={'unit': 'ns'})
df_wrong_index_type = pd.DataFrame(
{'value': [100, 200]},
index=pd.to_datetime(['2023-01-01', '2023-01-02']).astype('datetime64[ms]'),
)
DateIndexSchema.validate(df_wrong_index_type)
</code></pre>
<h3>Solution Attempt 2</h3>
<p>This solution works as expected, but it feels a bit verbose and makes me feel that I am missing something obvious.</p>
<pre class="lang-py prettyprint-override"><code>class DateIndexSchemaThrow(DataFrameModel):
date: Index[pandas_engine.DateTime] = Field(nullable=False)
@pa.dataframe_check
def index_should_be_in_ns(cls, dataframe: pd.DataFrame) -> bool:
if dataframe.index.dtype != "datetime64[ns]":
return False
return True
</code></pre>
|
<python><pandera>
|
2025-03-13 13:30:49
| 1
| 1,615
|
J.K.
|
79,506,581
| 114,904
|
How can I use ruff rules to enforce a particular import style
|
<p>I changed my code base to always import the python <code>datetime</code> module like this:</p>
<pre><code>import datetime as dt
</code></pre>
<p>instead of using</p>
<pre><code>import datetime
</code></pre>
<p>or</p>
<pre><code>from datetime import datetime
</code></pre>
<p>And we had both those in the codebase! It's confusing because you can't know at this point what datetime can do, if it is the module or the class. See also this blog post by Adam Johnson: <a href="https://adamj.eu/tech/2019/09/12/how-i-import-pythons-datetime-module/" rel="nofollow noreferrer">https://adamj.eu/tech/2019/09/12/how-i-import-pythons-datetime-module/</a></p>
<p>What I am trying to do is create a rule for the ruff linter that enforces this import style. There is tidy-imports, but I can't get it to work.</p>
|
<python><ruff>
|
2025-03-13 13:22:39
| 1
| 4,348
|
MichielB
|
79,506,558
| 2,801,669
|
Values differ on multiple reads from parquet files using polars read_parquet but not with pandas read_parquet by workstation
|
<p>I read data from the same parquet files multiple times using polars (polars rust engine and pyarrow) and using pandas pyarrow backend (not fastparquet as it was very slow), see below code.</p>
<p>All the parquetfiles contain a column called "backscat" where every element is a list of 1.5 Million floating points (it is a time series).</p>
<p>If the hash of the dataframe differs between reads, my script compares the values of the "backscat" column and, if there are differences outside "rtol = 1e-7, atol = 1e-10", saves them as "violations".</p>
<p>I ran that script on two workstations with identical CPUs, the identical virtual environment (poetry), the same Python version (3.12.8), polars version 1.20.0 and Ubuntu 22.04; they also have exactly the same memory configuration.</p>
<p>On workstation A my analysis show that there are significant differences between file reads on the same file. For example, sorting for the largest absolute differences, for the polars rust backend, the 5 biggest values are:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>name_test</th>
<th>data_1</th>
<th>data_2</th>
<th>abs_diff</th>
<th>abs_relative_diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>polars_rust</td>
<td>-3.041666</td>
<td>-38.666656</td>
<td>35.62499</td>
<td>11.712328</td>
</tr>
<tr>
<td>polars_rust</td>
<td>-38.666656</td>
<td>-3.041666</td>
<td>35.62499</td>
<td>0.921336</td>
</tr>
<tr>
<td>polars_rust</td>
<td>-2.927914</td>
<td>-27.423315</td>
<td>24.495401</td>
<td>8.36616</td>
</tr>
<tr>
<td>polars_rust</td>
<td>-27.423315</td>
<td>-2.927914</td>
<td>24.495401</td>
<td>0.893233</td>
</tr>
<tr>
<td>polars_rust</td>
<td>-2.927876</td>
<td>-27.42301</td>
<td>24.495134</td>
<td>8.366178</td>
</tr>
</tbody>
</table></div>
<p>As you can see, the differences are absolutely significant: one time it reads -3.041666, the other time it reads -38.666656! On the same workstation A I do not get any violations using pandas.</p>
<p>Now running the same script on workstation B, with the same cpu, I do not get a single violation, not for polars, not for pandas.</p>
<p>Is this a known issue? Is my RAM on workstation A corrupted? But then why does this problem only occur with polars?</p>
<p>I also ran it in a Docker container on workstation A, with polars version 1.24.0, and the results were the same.</p>
<p>I observed that if a colleague runs simulations at the same time, using all processors, I do not get violations! So when polars cannot use multiple cores in parallel, the error does not appear?</p>
<p>I opened an issue in polars, see <a href="https://github.com/pola-rs/polars/issues/21757#issuecomment-2732584540" rel="nofollow noreferrer">here</a>.
Currently I think it is a hardware issue.</p>
<pre class="lang-py prettyprint-override"><code># CODE TO READ PARQUET FILES AND DETECT DIFFERENCES
import hashlib
from pathlib import Path
from typing import Any, Callable
import numpy as np
import pandas as pd
import polars as pl
from tqdm import tqdm
def hash_dataframe_polars(df):
return hashlib.sha256(df.select("backscat").explode("backscat").write_csv(None).encode()).hexdigest()
def hash_dataframe_pandas(df):
return hashlib.sha256(df["backscat"].to_csv().encode()).hexdigest()
def find_precision_violations(
paths_parquet_files: list[Path],
name_test: str,
rtol: float,
atol: float,
hash_fun: Callable[[Any], str],
read_parquet_file: Callable[[Path], Any],
num_reps_per_file: int,
output_dir: Path = Path("./results_check_ram_or_cpu_catch"), # Default output directory
) -> tuple[list[dict[str, Any]], list[dict[str, Any]]]:
precision_violated = []
errors_reading_files = []
# Ensure output directory exists
output_dir.mkdir(parents=True, exist_ok=True)
# Use tqdm to monitor file processing
for file_index, path_file in enumerate(tqdm(paths_parquet_files, desc=f"Processing {name_test} Files")):
previous_hash = None
previous_df = None
# Use tqdm to monitor repetitions per file
for rep in tqdm(range(num_reps_per_file), desc=f"Reps for File {file_index}", leave=False):
try:
df = read_parquet_file(path_file)
except Exception as e:
errors_reading_files.append(
{"name_test": name_test, "file_index": file_index, "rep": rep, "error": str(e)}
)
continue
hash = hash_fun(df)
if hash != previous_hash and previous_hash is not None:
tqdm.write(f"{name_test}, hashes differ for file {path_file}")
for i in range(df.shape[0]):
data_1 = np.asarray(df["backscat"][i])
data_2 = np.asarray(previous_df["backscat"][i]) # type: ignore
mask = ~np.isclose(data_1, data_2, rtol=rtol, atol=atol)
if np.any(mask):
print(f"Precision violation found for {sum(mask)} elements")
precision_violated.append(
{
"name_test": name_test,
"data_1": data_1[mask].tolist(),
"data_2": data_2[mask].tolist(),
"i": i,
"rep": rep,
"file_index": file_index,
}
)
previous_hash = hash
previous_df = df
# Convert results to Polars DataFrame and save as Parquet
if precision_violated:
df_precision_violated = pl.DataFrame(precision_violated)
df_precision_violated.write_parquet(output_dir / f"precision_violations_{name_test}.parquet")
if errors_reading_files:
df_errors = pl.DataFrame(errors_reading_files)
df_errors.write_parquet(output_dir / f"errors_reading_files_{name_test}.parquet")
return precision_violated, errors_reading_files
list_set_up = [
{
"name_test": "polars_rust",
"read_parquet_file": lambda path_file: pl.read_parquet(
path_file, columns=["date", "backscat"], use_pyarrow=False
),
"hash_fun": hash_dataframe_polars,
},
{
"name_test": "pandas_pyarrow",
"read_parquet_file": lambda path_file: pd.read_parquet(
path_file, columns=["date", "backscat"], engine="pyarrow"
),
"hash_fun": hash_dataframe_pandas,
},
# {
# "name_test": "pandas_fastparquet",
# "read_parquet_file": lambda path_file: pd.read_parquet(
# path_file, columns=["date", "backscat"], engine="fastparquet"
# ),
# "hash_fun": hash_dataframe_pandas,
# },
{
"name_test": "polars_pyarrow",
"read_parquet_file": lambda path_file: pl.read_parquet(
path_file, columns=["date", "backscat"], use_pyarrow=True
),
"hash_fun": hash_dataframe_polars,
},
]
dir_parquet_files = Path("cache/212_artificial_data_set/parquet_files")
paths_parquet_files = list(dir_parquet_files.glob("*.parquet"))
rtol = 1e-7
atol = 1e-10
num_reps_per_file = 10
all_violations = []
all_errors_reading_files = []
for set_up in list_set_up:
violations, errors_reading_files = find_precision_violations(
paths_parquet_files=paths_parquet_files,
name_test=set_up["name_test"],
rtol=rtol,
atol=atol,
hash_fun=set_up["hash_fun"],
read_parquet_file=set_up["read_parquet_file"],
num_reps_per_file=num_reps_per_file,
)
all_violations += violations
all_errors_reading_files += errors_reading_files
# %% save violations to csv save errors reading files to csv
violations_df = pd.DataFrame(all_violations)
print(violations_df)
violations_df.to_csv("all_violations.csv")
errors_reading_files_df = pd.DataFrame(all_errors_reading_files)
print(errors_reading_files_df)
errors_reading_files_df.to_csv("all_errors_reading_files.csv")
</code></pre>
<pre class="lang-py prettyprint-override"><code># ANALYSIS SCRIPT
import ast
import matplotlib.pyplot as plt
import polars as pl
violations_df = pl.read_csv("all_violations.csv")
errors_reading_files_df = pl.read_csv("all_errors_reading_files.csv")
violations_df = pl.read_csv("all_violations.csv")
violations_df = violations_df.with_columns(
pl.col("data_1").map_elements(lambda x: ast.literal_eval(x), return_dtype=pl.List(pl.Float64)),
pl.col("data_2").map_elements(lambda x: ast.literal_eval(x), return_dtype=pl.List(pl.Float64)),
)
violations_df = violations_df.with_columns(
pl.col("data_1").len().alias("len_data_1"),
pl.col("data_2").len().alias("len_data_2"),
)
if violations_df.filter(pl.col("len_data_1") != pl.col("len_data_2")).height > 0:
print("Lengths differ")
violations_df = violations_df.drop("len_data_1").rename({"len_data_2": "num_violations"})
tot_violations = (
violations_df.group_by("name_test")
.agg(pl.col("num_violations").sum().alias("total_violations"))
.sort("total_violations")
)
print(tot_violations)
# %%
for name_test in violations_df["name_test"].unique():
df = violations_df.filter(pl.col("name_test") == name_test)
df = df.explode(["data_1", "data_2"])
df = df.with_columns((pl.col("data_1") - pl.col("data_2")).alias("diff"))
df = df.with_columns(pl.col("diff").abs().alias("abs_diff"))
df = df.with_columns((pl.col("abs_diff") / pl.col("data_1").abs()).alias("abs_relative_diff"))
cols = ["name_test", "data_1", "data_2", "abs_diff", "abs_relative_diff"]
df = df.sort(by="abs_diff", descending=True)
print("sorted by abs_diff")
print("head")
print(df.select(cols).head(10))
print("tail")
print(df.select(cols).tail(10))
df = df.sort(by="abs_relative_diff", descending=True)
print("sorted by abs_relative_diff")
print("head")
print(df.select(cols).head(10))
print("tail")
print(df.select(cols).tail(10))
</code></pre>
|
<python><parquet><python-polars><polars>
|
2025-03-13 13:12:18
| 0
| 1,080
|
newandlost
|
79,506,519
| 616,460
|
How do I prevent shelve module from appending ".db" to filename?
|
<p>Python: 3.12.2, OS: MacOS 13.6.6</p>
<p>When I specify a filename to <code>shelve.open</code>, it appends <code>.db</code> to the filename:</p>
<pre class="lang-none prettyprint-override"><code>% ls
% python
Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> with shelve.open("myfile"):
... pass
...
>>> quit()
% ls
myfile.db
</code></pre>
<p>Furthermore, if I attempt to open an existing file as "myfile.db" (with the extension), I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>% python
Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> with shelve.open("myfile.db"):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/shelve.py", line 243, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/shelve.py", line 227, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/dbm/__init__.py", line 89, in open
raise error[0]("db type could not be determined")
dbm.error: db type could not be determined
</code></pre>
<p>Opening the existing file as simply "myfile" with no extension works fine, however.</p>
<ol>
<li><strong>How do I prevent <code>shelve.open</code> from appending ".db" to the filename?</strong></li>
<li>Why can't I open existing databases if I specify their ".db" extension?</li>
</ol>
<p>Neither of these issues happen on Python 3.10.12 on Ubuntu 22, so I'm not sure if it's a Python version thing, or a platform thing.</p>
|
<python><python-3.x><shelve>
|
2025-03-13 12:54:13
| 1
| 40,602
|
Jason C
|
79,506,513
| 4,720,025
|
Efficient (and Safe) Way of Accessing Larger-than-Memory Datasets in Parallel
|
<p>I am trying to use <code>polars~=1.24.0</code> on Python 3.13 to process larger-than-memory sized datasets. Specifically, I am loading many (i.e., 35 of them) <code>parquet</code> files via the <code>polars.scan_parquet('base-name-*.parquet')</code> API to create the <code>LazyFrame</code>.</p>
<p>In my use case, I need to batch process this dataset and ingest it to a remote database backend. I am using <code>concurrent.futures.ThreadPoolExecutor</code> to spawn multiple threads and do the batch processing.</p>
<p>What is the best way to cover the whole dataset while making sure that the workers would not result in OOM issues on my local machine?</p>
<p>I have thought of a kernel (to be passed to the <code>ThreadPoolExecutor</code>) of the form:</p>
<pre class="lang-py prettyprint-override"><code>def parse(df: pl.LazyFrame, rank: int, size: int):
split, remain = divmod(size, rank)
start = rank * split + min(rank, remain)
end = start + split + (rank < remain)
for row in df.slice(start, end-start).collect(new_streaming=True).iter_rows():
# do something with the row
pass
</code></pre>
<p>but I have the following questions:</p>
<ol>
<li>Is it safe to pass <code>df: pl.LazyFrame</code> to other threads like this,</li>
<li>Is it safe to (read-only) access the dataframe/lazyframe via <code>.slice</code> and cover all the rows,</li>
<li><code>.collect</code> still loads (or seems to load, in <code>htop</code>) the whole chunk to the memory -- is there a more efficient/correct way to stream those while being able to use the "iterator" interface? Or should I further slice / mini-batch the range <code>[start, end)</code>?</li>
</ol>
<p>I am new to <code>polars</code> and I would love to get some suggestions/help along the above use case.</p>
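<p>Regarding point 3 above, this is roughly what I mean by further mini-batching the <code>[start, end)</code> range so that only one chunk is materialised at a time (a sketch; <code>batch_rows</code> is a knob I made up):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

def parse_minibatched(df: pl.LazyFrame, start: int, end: int, batch_rows: int = 50_000):
    for offset in range(start, end, batch_rows):
        length = min(batch_rows, end - offset)
        chunk = df.slice(offset, length).collect()  # materialise one chunk only
        for row in chunk.iter_rows():
            # do something with the row
            pass
</code></pre>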
|
<python><python-polars><polars>
|
2025-03-13 12:48:48
| 1
| 1,303
|
Arda Aytekin
|
79,506,510
| 4,960,855
|
How to offset by 1 year the values of a date column using ORM?
|
<p>I have an <a href="https://docs.sqlalchemy.org/en/20/orm/declarative_mixins.html" rel="nofollow noreferrer">sqlalchemy ORM</a> for a Postgres table:</p>
<pre class="lang-py prettyprint-override"><code>class Item(Base, BoilerPlateMixinBase):
id = Column(Integer, primary_key=True)
date = Column(Date)
value = Column(Float)
</code></pre>
<p>My project performs several SELECT queries on it.<br />
What's the quickest way to experiment with having all the <code>dates</code> offset by 1 year?</p>
<p>I could do that by modifying either:</p>
<ul>
<li>the records in the table</li>
<li>the lines of code that read the records</li>
<li>the ORM <code>date</code> column</li>
</ul>
<p>The latter is preferable as it'd require changing just one place in the code.</p>
<p><strong>But how can this be achieved?</strong></p>
<p>(tried DATEADD as suggested <a href="https://stackoverflow.com/q/15572292/4960855">here</a>, but didn't work for me. sqlalchemy 2.0)</p>
|
<python><postgresql><sqlalchemy><dateadd>
|
2025-03-13 12:48:02
| 2
| 7,228
|
EliadL
|
79,506,286
| 2,604,247
|
How can I run a uv pytest tool with proper dependencies and module paths?
|
<h4>Directory Structure</h4>
<pre class="lang-none prettyprint-override"><code>project-root/
├── src/
│ ├── file1.py
│ ├── file2.py
└── tests/
│ ├── test_file1.py
│ ├── test_file2.py
├── requirements.txt
├── pyproject.toml
</code></pre>
<p>So basically, under the project root, I have the source code as <code>src/*.py</code>, the test scripts as <code>tests/*.py</code>, and the <code>requirements.txt</code> that lists the Python packages, such as <code>tensorflow==2.18.0</code>.</p>
<h4>Goal</h4>
<p>At the project root directory, run</p>
<pre class="lang-none prettyprint-override"><code>uv tool run pytest
</code></pre>
<p>and all the tests from <code>tests/*.py</code> will be run to log the output to the console.</p>
<h4>Difficulties</h4>
<p>I am facing two difficulties here, related to the import path, and dependencies.</p>
<h5>Import Path</h5>
<p>Basically, as follows.</p>
<pre class="lang-python prettyprint-override"><code># test_file1.py
from file1 import Class1 # Fails, but this would be my first preference to import
from src.file1 import Class1 # Still fails
</code></pre>
<p>How can I import classes and functions defined in <code>src/*.py</code> to the files in the <code>tests</code>? I tried</p>
<pre class="lang-none prettyprint-override"><code>uv pip install --editable ./
</code></pre>
<p>The command ran successfully, but it did not help: the classes in the source files are still not visible from the test directory.</p>
<h5>Dependency Resolution</h5>
<p>I am installing the pip dependencies with</p>
<pre class="lang-none prettyprint-override"><code>time uv pip install --requirement requirements.txt
</code></pre>
<p>which is working for the source code. But it seems the pytest tool cannot <em>see</em> those dependencies, so it is complaining like</p>
<p><code>ModuleNotFoundError: No module named 'tensorflow'</code></p>
<h5>Note</h5>
<p>As I said, <code>uv</code> is my preferred tool for packaging, so it is best if the answer sticks to this framework instead of introducing <a href="https://python-poetry.org/" rel="nofollow noreferrer">Poetry</a> or something.</p>
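<p>For the import-path difficulty, one workaround I am considering is a <code>tests/conftest.py</code> that puts <code>src</code> on <code>sys.path</code> (sketch below; I do not know whether this is idiomatic when running pytest through <code>uv</code>, which is part of what I am asking):</p>
<pre class="lang-python prettyprint-override"><code># tests/conftest.py
import sys
from pathlib import Path

# Assumption: this file lives in <project-root>/tests, so src/ is a sibling directory
SRC_DIR = Path(__file__).resolve().parent.parent / "src"
sys.path.insert(0, str(SRC_DIR))
</code></pre>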
|
<python><testing><pytest><tdd><uv>
|
2025-03-13 11:18:17
| 2
| 1,720
|
Della
|
79,506,238
| 379,859
|
Polars upsampling with grouping does not behave as expected
|
<p>Here is the data</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from datetime import datetime
df = pl.DataFrame(
{
"time": [
datetime(2021, 2, 1),
datetime(2021, 4, 2),
datetime(2021, 5, 4),
datetime(2021, 6, 6),
datetime(2021, 6, 8),
datetime(2021, 7, 10),
datetime(2021, 8, 18),
datetime(2021, 9, 20),
],
"groups": ["A", "B", "A", "B","A","B","A","B"],
"values": [0, 1, 2, 3,4,5,6,7],
}
)
</code></pre>
<p>The upsampling and the testing:</p>
<pre><code>(
df
.upsample(
time_column="time",
every="1d",
group_by="groups",
maintain_order=True
)
.group_by('groups')
.agg(pl.col('time').diff().max())
)
</code></pre>
<pre><code>shape: (3, 2)
┌────────┬──────────────┐
│ groups ┆ time │
│ --- ┆ --- │
│ str ┆ duration[μs] │
╞════════╪══════════════╡
│ A ┆ 92d │
│ null ┆ 2d │
│ B ┆ 72d │
└────────┴──────────────┘
</code></pre>
<p>The diff is not 1 day as I would expect.
Is this a bug, or am I doing something wrong?</p>
|
<python><python-polars>
|
2025-03-13 10:58:19
| 1
| 1,257
|
JohnRos
|
79,506,182
| 5,576,083
|
python gdal.warp doesn't properly crop raster over shapefile polygons
|
<p><a href="https://i.sstatic.net/TM0yw0AJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TM0yw0AJ.png" alt="enter image description here" /></a></p>
<p>I used this script to crop a raster using a polygon shapefile with python gdal.Warp, but it seems it leaves many pixels with value = 1 outside the cropline</p>
<pre><code>gdal.Warp("rasterOut.jp2",
          "rasterIn.jp2",
          cutlineDSName="mask.shp",
          cropToCutline=True,
          dstNodata=nodata)
</code></pre>
<p>I can select any nodata, like 0 or np.nan, but the result doesn't change.</p>
|
<python><gdal>
|
2025-03-13 10:37:15
| 1
| 301
|
ilFonta
|
79,505,837
| 2,420,452
|
How to make an easily instantiable derivative attribute-only protocol class?
|
<p>I have a <code>Protocol</code> subclass that defines objects with attributes from an external library:</p>
<pre><code>class P(Protocol):
val: int
</code></pre>
<p>For testing purposes, I want to turn this protocol class into something I can instantiate easily. However, when I try to turn it into a dataclass, an error pops up:</p>
<pre><code>import dataclasses
from typing import Protocol
class P(Protocol):
val: int
PInst = dataclasses.dataclass(P)
PInst(val=4) # TypeError: Protocols cannot be instantiated
</code></pre>
<p>Is there an easy solution to use <code>P</code> to create a class that satisfies its protocol and is instantiable, without redeclaring its attributes?</p>
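<p>The direction I am currently exploring is building the dataclass from the protocol's type hints instead of from the class itself (a sketch only; I have not checked how it behaves with inherited protocols, default values, or whether type checkers accept it):</p>
<pre><code>import dataclasses
from typing import Protocol, get_type_hints

class P(Protocol):
    val: int

# Build a concrete dataclass whose fields mirror the protocol's annotations
PInst = dataclasses.make_dataclass("PInst", list(get_type_hints(P).items()))

print(PInst(val=4))  # PInst(val=4)
</code></pre>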
|
<python><python-typing><python-dataclasses>
|
2025-03-13 08:39:21
| 1
| 498
|
Jan Šimbera
|
79,505,800
| 3,999,951
|
Pandas - Unable to set Index to one of the columns
|
<p>A data frame is created and its index set to be one of the columns:</p>
<pre><code>import pandas as pd
ducks_of_destiny = {
"duck_ids": [2381, 2914, 3022, 3090],
"titles": ["Viscount Wigeon", "Sir Pintail the 4th", "Lady Tufted",
"The Right Honourable Field Marshall Lord Professor Shoveller"],
"DuckForce_kN": [33432, 34312, 44331, 99999999]
}
ducks = pd.DataFrame.from_dict(ducks_of_destiny)
ducks.set_index("duck_ids", drop=True, inplace=True) # Set the duck_ids column to be the index.
</code></pre>
<p>It is checked that the indices have been set to the formerly <em>duck_ids</em> column as intended:</p>
<pre><code># Show that the indices have been set to the duck ids:
duck_indices = ducks.index.to_list()
print(f"Here are the indices: {duck_indices}")
</code></pre>
<p>returns</p>
<pre><code>Here are the indices: [2381, 2914, 3022, 3090]
</code></pre>
<p>However, when it is then attempted to select a subset of the rows using iloc:</p>
<pre><code>test_set_duck_ids = [3022, 3090]
test_ducks = ducks.iloc[test_set_duck_ids]
</code></pre>
<p>This returns an error, indicating that the index has not been reset:</p>
<pre><code>IndexError: positional indexers are out-of-bounds
</code></pre>
<p>Further, it is seen that iloc can be used to select by the original indices:</p>
<pre><code># Show that using original indices (0, 1, 2 etc) still works:
test_ducks_two = ducks.iloc[[0]]
print(f"Here is the returned duck:\n\n{test_ducks_two}")
</code></pre>
<p>returns</p>
<pre><code>Here is the returned duck:
titles DuckForce_kN
duck_ids
2381 Viscount Wigeon 33432
</code></pre>
<p>Showing that the index has not been reset as desired. The desired behaviour is to set index to be the <em>duck_ids</em> column then select by these new indices (3022, 3090 etc).</p>
<p>What is the error in my approach?</p>
|
<python><pandas><dataframe>
|
2025-03-13 08:24:00
| 0
| 467
|
acolls_badger
|
79,505,516
| 22,213,065
|
Improve This APNG Webpage Recorder to Capture Smooth 30+ FPS Video
|
<p>I have written a Python script using Playwright to record a high-quality APNG video of a webpage. The script captures frames at 100 FPS and saves them as an animated PNG. I use it to record bar chart race animations from Flourish Studio, such as this one:<br />
<a href="https://public.flourish.studio/visualisation/21660262/" rel="nofollow noreferrer">https://public.flourish.studio/visualisation/21660262/</a></p>
<p>Here is my script:</p>
<pre><code>from playwright.sync_api import sync_playwright
import keyboard
import time
import os
import threading
from PIL import Image
import io
# Output folder
output_folder = r"E:\Desktop\New folder (4)\New folder (3)"
os.makedirs(output_folder, exist_ok=True) # Ensure directory exists
capturing = False # Flag to control capturing state
saving = False # Prevent capturing while saving
screenshot_buffer = [] # Store screenshots in memory (RAM)
lock = threading.Lock() # Prevent race conditions
# Function to listen for keyboard inputs
def listen_keys():
""" Listens for keyboard input to start/stop capturing or exit script. """
global capturing, saving
while True:
if keyboard.is_pressed("F3"): # Start capturing
if not capturing and not saving:
print("📸 Starting ultra-smooth 100 FPS screenshot recording...")
capturing = True
if keyboard.is_pressed("F4"): # Stop capturing & save as APNG
if capturing:
print("🛑 Stopping screenshot recording & saving as APNG...")
capturing = False
saving = True # Set flag to prevent new capturing
save_screenshots()
saving = False # Reset flag after saving
if keyboard.is_pressed("F5"): # Close browser
print("❌ Closing browser...")
os._exit(0) # Force close script
time.sleep(0.01) # Reduce CPU usage while keeping responsiveness
# Function to save screenshots as an APNG file
def save_screenshots():
""" Saves all stored screenshots as a single APNG file. """
global screenshot_buffer
if not screenshot_buffer:
print("⚠️ No screenshots to save.")
return
apng_path = os.path.join(output_folder, "recording.apng")
frames = []
try:
for _, image_data in screenshot_buffer:
image = Image.open(io.BytesIO(image_data))
frames.append(image.convert("RGBA")) # Convert to RGBA for APNG support
# Save as animated PNG (APNG) with ultra-smooth 100 FPS (10ms per frame)
frames[0].save(
apng_path, save_all=True, append_images=frames[1:], duration=10, loop=0
)
print(f"✅ Saved APNG: {apng_path}")
except Exception as e:
print(f"⚠️ Error saving APNG: {e}")
# Clear buffer after saving
screenshot_buffer.clear()
with sync_playwright() as p:
browser = p.firefox.launch(headless=False) # Launch Firefox (visible)
context = browser.new_context(device_scale_factor=4) # High-quality scaling
page = context.new_page()
print("Loading page...")
page.goto("https://public.flourish.studio/visualisation/21660262/", timeout=60000)
page.set_viewport_size({"width": 1920, "height": 1080}) # Maximize viewport
print("Press F3 to start ultra-smooth 100 FPS recording...")
print("Press F4 to stop recording and save as APNG...")
print("Press F5 to close browser...")
# Start key listener in a separate thread
key_listener = threading.Thread(target=listen_keys, daemon=True)
key_listener.start()
while True:
if capturing:
start_time = time.time()
while capturing: # Keep capturing while capturing is True
with lock:
try:
buffer = page.screenshot(full_page=True) # Capture frame
screenshot_buffer.append((time.time(), buffer)) # Store in RAM
except Exception as e:
print(f"⚠️ Screenshot failed: {e}")
# Maintain 100 FPS (10ms per frame)
elapsed_time = time.time() - start_time
sleep_time = (1 / 100) - elapsed_time % (1 / 100)
if sleep_time > 0:
time.sleep(sleep_time)
else:
time.sleep(0.1) # Reduce CPU usage when idle
</code></pre>
<p><strong>Problem:</strong><br />
My script records only 60 frames total, even when capturing for 5-7 seconds. I expect it to function like a real screen recorder, capturing at 30+ FPS continuously and saving a smooth frame-by-frame PNG sequence of all movements in the bar chart race.</p>
<p>However, it stops recording frames too soon, rather than maintaining the expected frame rate over time.</p>
<p><strong>My Goal:</strong><br />
I want the script to behave like a real video recorder:</p>
<ol>
<li>Capture every frame smoothly at 30 FPS or higher.</li>
<li>Record all movements and changes in the bar chart race.</li>
<li>Maintain ultra-high PNG quality (unlike standard screen recorders).</li>
<li>Save the full recording as an APNG video at the end.</li>
</ol>
<p><strong>Key Notes:</strong></p>
<ol>
<li><p>I do not want to use external screen recording tools (e.g., OBS,
Camtasia). They record the entire screen and result in lower-quality videos
compared to my method, which directly converts the webpage into a
high-quality APNG.</p>
</li>
<li><p>If there is a better approach to achieving this using Python and
Playwright, please suggest it.</p>
</li>
<li><p>How can I fix the issue where my script stops at 60 frames instead
of continuously capturing at 30+ FPS?</p>
</li>
</ol>
|
<python><python-3.x><playwright><screen-recording><apng>
|
2025-03-13 05:58:58
| 0
| 781
|
Pubg Mobile
|
79,505,390
| 2,130,515
|
How to grant a lambda function access to call routes in an API gateway
|
<p>Scenario:
I create 4 different Lambda functions using Docker images. Each function is created from a separate image. Each function contains different FastAPI routes for a different service.</p>
<p>I create an API Gateway and invoke the 4 different lambda functions with proxy integration. For each, I create a resource and method ANY.
All lambda functions are created in the same account, same vpc and same security group.
I create another Lambda function from a Docker image. This function makes calls to different routes in the API Gateway.</p>
<p>I create a resource policy for the API gateway:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::account-id:role/Lambda-5-Role"
},
"Action": "execute-api:Invoke",
"Resource": "arn:aws:execute-api:us-east-1:account-id:api-id/*"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": "execute-api:Invoke",
"Resource": "arn:aws:execute-api:us-east-1:account-id:api-id/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"x.x.x.x",
"y.y.y.y"
]
}
}
}
]
}
</code></pre>
<p><strong>The Lambda-5-Role contains:</strong><br />
<em>AWSLambdaBasicExecutionRole, AWSLambdaVPCAccessExecutionRole, CreateNetworkInterface-policy permissions</em> and the following permission:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "execute-api:Invoke",
"Resource": "arn:aws:execute-api:us-east-1:account-id:api-id/*"
}
]
}
</code></pre>
<p>When I test it, I got :</p>
<pre><code>{
"Message": "User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-1:account-id:api-id/stage/method/route"
}
</code></pre>
<p>It is working well without the resource policy.
What mistake am I making?</p>
|
<python><aws-lambda><aws-api-gateway><amazon-iam>
|
2025-03-13 04:06:04
| 0
| 1,790
|
LearnToGrow
|
79,505,317
| 368,907
|
Write CSV file with data in new row in python
|
<p>I am using Python Locust to do a load test. My question is: I try to write a CSV file with an email address on each new row, but it separates the email into individual characters with ',' like below:</p>
<pre><code>z,b,r,X,S,F,i,E,p,H,1,@,g,m,a,i,l,.,c,o,m
I,q,O,2,i,t,a,Q,w,m,2,@,g,m,a,i,l,.,c,o,m
5,w,J,b,1,v,Y,X,f,H,3,@,g,m,a,i,l,.,c,o,m
G,J,S,y,I,Y,S,S,9,e,4,@,g,m,a,i,l,.,c,o,m
L,M,8,R,B,a,p,g,I,0,5,@,g,m,a,i,l,.,c,o,m
</code></pre>
<p>How to make it like LM8RBapgI05@gmail.com?</p>
<p>My code as below:</p>
<pre><code> @task
def register(self):
email: str
telcoCode: int
phone: int
self.client.request_name = "API Register User"
with open("email.csv", "w", newline="") as csvfile:
for x in range(5):
email = ''.join(random.choices(string.ascii_letters + string.digits, k=10)) + str(
x + 1) + "@gmail.com"
telcocode = random.randint(10, 19)
phone = random.randint(10000000, 99999999)
phonestr = "+60" + str(telcocode) + str(phone)
csvwriter = csv.writer(csvfile)
csvwriter.writerow(email)
self.client.headers = {
"x-api-key": "KZThJSCH1vWj5snaeQQ+TQ5rTbucayxXtDVHFmfRSJag5gLkUKOMgrpitL1jHIVr6bYAN5N9NrKWg6RhdgNThQ=="}
response = self.client.post("https://",
{"email": email, "phone": phonestr,
"password": "abcd1234", "password_confirmation": "abcd1234"}
)
assert response.status_code == 200, "Failed to register, status code: " + str(response.status_code)
logging.info(response.text)
</code></pre>
<p>Please help. Thanks in advance.</p>
|
<python><python-3.x><locust>
|
2025-03-13 03:00:29
| 1
| 2,802
|
Ivan
|
79,505,315
| 20,706,987
|
Terminal stops launching after downgrading Python version
|
<p>I recently upgraded my Ubuntu through its upgrade suggestion from <code>22.04</code> to <code>24.04</code>. When I try <code>python --version</code> I see <code>3.12.x</code>. But this broke my <code>cqlsh</code> command, because it needs a lower Python version.</p>
<p>So I decided to install <code>Python 3.9</code>, add it as an alternative, and change the configuration via <code>sudo update-alternatives --config python3</code> to use Python 3.9 as the default version. But as soon as I did that, the terminal stopped launching.</p>
<p>I am not a fan of Python's virtual environments and am looking for a way to downgrade Python 3.12 system-wide to something with the most compatibility and reliability.</p>
<p>How can I do that?</p>
|
<python><ubuntu><downgrade>
|
2025-03-13 02:59:33
| 1
| 756
|
best_of_man
|
79,505,258
| 243,031
|
PyMongo - Group by year based on subdocument date
|
<p>I have a MongoDB document like:</p>
<pre><code>[
{
_id: ObjectId('67cfd69ba3e561d35ee57f51'),
created_at: ISODate('2025-03-11T06:22:19.044Z'),
conversation: [
{
id: '67cfd6c1a3e561d35ee57f53',
feedback: {
liked: false,
disliked: true,
copied: true,
created_at: ISODate('2025-03-11T06:27:48.634Z')
}
},
{
id: '67cfd77fa3e561d35ee57f54',
feedback: {
liked: true,
disliked: false,
copied: false,
created_at: ISODate('2025-03-11T06:28:25.099Z')
}
},
{ id: '67d009f1a3e561d35ee57f5a', feedback: null },
{ id: '67d009f8a3e561d35ee57f5b', feedback: null }
]
},
{
_id: ObjectId('67d00aeaa3e561d35ee57f5d'),
created_at: ISODate('2025-03-11T10:05:30.848Z'),
conversation: [
{ id: '67d00af7a3e561d35ee57f5f', feedback: null },
{ id: '67d00afaa3e561d35ee57f60', feedback: null }
]
}
]
</code></pre>
<p>Where the main document has a <code>conversation</code> subdocument, I want to know how many <code>likes</code>, <code>dislikes</code> and <code>copied</code> data in each year.</p>
<p>I tried to get <code>year</code> from the <code>conversation.feedback.created_at</code> using <code>$dateToString</code> operator.</p>
<pre><code>pipeline = [
{
'$match': { 'conversation.feedback.copied': True }
},
{
'$group': {
'_id': {
'$dateToString': {
'format': '%Y',
'date': '$conversation.feedback.created_at'
}
},
'total_copied': { '$sum': 1 }
}
}
]
</code></pre>
<p>But it gives an error:</p>
<blockquote>
<p>OperationFailure: PlanExecutor error during aggregation :: caused by :: can't convert from BSON type array to Date, full error: {'ok': 0.0, 'errmsg': "PlanExecutor error during aggregation :: caused by :: can't convert from BSON type array to Date", 'code': 16006, 'codeName': 'Location16006'}</p>
</blockquote>
<p>What I am expecting out as:</p>
<pre><code>{
"2025": {
"total_liked": 1,
"total_disliked": 1,
"total_copied": 1
}
}
</code></pre>
<p>How to convert the DateTime object to year and combine the total counts for 3 parameters?</p>
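<p>For clarity, the direction I am imagining involves <code>$unwind</code> on the <code>conversation</code> array before grouping, roughly like the sketch below, but I have not managed to get it producing the combined per-year counts, so I am not sure it is right:</p>
<pre><code># rough sketch of the shape of pipeline I am imagining
pipeline = [
    {'$unwind': '$conversation'},
    {'$match': {'conversation.feedback': {'$ne': None}}},
    {'$group': {
        '_id': {'$dateToString': {'format': '%Y', 'date': '$conversation.feedback.created_at'}},
        'total_liked': {'$sum': {'$cond': ['$conversation.feedback.liked', 1, 0]}},
        'total_disliked': {'$sum': {'$cond': ['$conversation.feedback.disliked', 1, 0]}},
        'total_copied': {'$sum': {'$cond': ['$conversation.feedback.copied', 1, 0]}},
    }},
]
</code></pre>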
|
<python><mongodb><aggregation-framework><pymongo><pipeline>
|
2025-03-13 02:15:22
| 1
| 21,411
|
NPatel
|
79,505,129
| 16,723,655
|
TypeError: Could not convert OscarOscarMichaelJimDwightCreed to numeric
|
<p>I am following the code (chapter_08_reshaping_and_pivoting) from the following website.</p>
<p><a href="https://github.com/paskhaver/pandas-in-action" rel="nofollow noreferrer">https://github.com/paskhaver/pandas-in-action</a></p>
<pre><code>import pandas as pd
pd.read_csv('sales_by_employee.csv').head()
</code></pre>
<p><a href="https://i.sstatic.net/9A43ThKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9A43ThKN.png" alt="enter image description here" /></a></p>
<pre><code>sales = pd.read_csv('sales_by_employee.csv', parse_dates=['Date'])
sales.tail()
</code></pre>
<p><a href="https://i.sstatic.net/YjGw6vYx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjGw6vYx.png" alt="enter image description here" /></a></p>
<pre><code>sales.pivot_table(index='Date')
</code></pre>
<p>The above code produces the error below.</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1942, in GroupBy._agg_py_fallback(self, how, values, ndim, alt)
1941 try:
-> 1942 res_values = self._grouper.agg_series(ser, alt, preserve_dtype=True)
1943 except Exception as err:
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:864, in BaseGrouper.agg_series(self, obj, func, preserve_dtype)
862 preserve_dtype = True
--> 864 result = self._aggregate_series_pure_python(obj, func)
866 npvalues = lib.maybe_convert_objects(result, try_float=False)
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:885, in BaseGrouper._aggregate_series_pure_python(self, obj, func)
884 for i, group in enumerate(splitter):
--> 885 res = func(group)
886 res = extract_result(res)
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:2454, in GroupBy.mean.<locals>.<lambda>(x)
2451 else:
2452 result = self._cython_agg_general(
2453 "mean",
-> 2454 alt=lambda x: Series(x, copy=False).mean(numeric_only=numeric_only),
2455 numeric_only=numeric_only,
2456 )
2457 return result.__finalize__(self.obj, method="groupby")
File ~\anaconda3\Lib\site-packages\pandas\core\series.py:6549, in Series.mean(self, axis, skipna, numeric_only, **kwargs)
6541 @doc(make_doc("mean", ndim=1))
6542 def mean(
6543 self,
(...)
6547 **kwargs,
6548 ):
-> 6549 return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
File ~\anaconda3\Lib\site-packages\pandas\core\generic.py:12420, in NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
12413 def mean(
12414 self,
12415 axis: Axis | None = 0,
(...)
12418 **kwargs,
12419 ) -> Series | float:
> 12420 return self._stat_function(
12421 "mean", nanops.nanmean, axis, skipna, numeric_only, **kwargs
12422 )
File ~\anaconda3\Lib\site-packages\pandas\core\generic.py:12377, in NDFrame._stat_function(self, name, func, axis, skipna, numeric_only, **kwargs)
12375 validate_bool_kwarg(skipna, "skipna", none_allowed=False)
> 12377 return self._reduce(
12378 func, name=name, axis=axis, skipna=skipna, numeric_only=numeric_only
12379 )
File ~\anaconda3\Lib\site-packages\pandas\core\series.py:6457, in Series._reduce(self, op, name, axis, skipna, numeric_only, filter_type, **kwds)
6453 raise TypeError(
6454 f"Series.{name} does not allow {kwd_name}={numeric_only} "
6455 "with non-numeric dtypes."
6456 )
-> 6457 return op(delegate, skipna=skipna, **kwds)
File ~\anaconda3\Lib\site-packages\pandas\core\nanops.py:147, in bottleneck_switch.__call__.<locals>.f(values, axis, skipna, **kwds)
146 else:
--> 147 result = alt(values, axis=axis, skipna=skipna, **kwds)
149 return result
File ~\anaconda3\Lib\site-packages\pandas\core\nanops.py:404, in _datetimelike_compat.<locals>.new_func(values, axis, skipna, mask, **kwargs)
402 mask = isna(values)
--> 404 result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
406 if datetimelike:
File ~\anaconda3\Lib\site-packages\pandas\core\nanops.py:720, in nanmean(values, axis, skipna, mask)
719 the_sum = values.sum(axis, dtype=dtype_sum)
--> 720 the_sum = _ensure_numeric(the_sum)
722 if axis is not None and getattr(the_sum, "ndim", False):
File ~\anaconda3\Lib\site-packages\pandas\core\nanops.py:1701, in _ensure_numeric(x)
1699 if isinstance(x, str):
1700 # GH#44008, GH#36703 avoid casting e.g. strings to numeric
-> 1701 raise TypeError(f"Could not convert string '{x}' to numeric")
1702 try:
TypeError: Could not convert string 'OscarOscarMichaelJimDwightCreed' to numeric
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 sales.pivot_table(index='Date')
File ~\anaconda3\Lib\site-packages\pandas\core\frame.py:9509, in DataFrame.pivot_table(self, values, index, columns, aggfunc, fill_value, margins, dropna, margins_name, observed, sort)
9492 @Substitution("")
9493 @Appender(_shared_docs["pivot_table"])
9494 def pivot_table(
(...)
9505 sort: bool = True,
9506 ) -> DataFrame:
9507 from pandas.core.reshape.pivot import pivot_table
-> 9509 return pivot_table(
9510 self,
9511 values=values,
9512 index=index,
9513 columns=columns,
9514 aggfunc=aggfunc,
9515 fill_value=fill_value,
9516 margins=margins,
9517 dropna=dropna,
9518 margins_name=margins_name,
9519 observed=observed,
9520 sort=sort,
9521 )
File ~\anaconda3\Lib\site-packages\pandas\core\reshape\pivot.py:102, in pivot_table(data, values, index, columns, aggfunc, fill_value, margins, dropna, margins_name, observed, sort)
99 table = concat(pieces, keys=keys, axis=1)
100 return table.__finalize__(data, method="pivot_table")
--> 102 table = __internal_pivot_table(
103 data,
104 values,
105 index,
106 columns,
107 aggfunc,
108 fill_value,
109 margins,
110 dropna,
111 margins_name,
112 observed,
113 sort,
114 )
115 return table.__finalize__(data, method="pivot_table")
File ~\anaconda3\Lib\site-packages\pandas\core\reshape\pivot.py:183, in __internal_pivot_table(data, values, index, columns, aggfunc, fill_value, margins, dropna, margins_name, observed, sort)
173 if observed is lib.no_default and any(
174 ping._passed_categorical for ping in grouped._grouper.groupings
175 ):
176 warnings.warn(
177 "The default value of observed=False is deprecated and will change "
178 "to observed=True in a future version of pandas. Specify "
(...)
181 stacklevel=find_stack_level(),
182 )
--> 183 agged = grouped.agg(aggfunc)
185 if dropna and isinstance(agged, ABCDataFrame) and len(agged.columns):
186 agged = agged.dropna(how="all")
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\generic.py:1432, in DataFrameGroupBy.aggregate(self, func, engine, engine_kwargs, *args, **kwargs)
1429 kwargs["engine_kwargs"] = engine_kwargs
1431 op = GroupByApply(self, func, args=args, kwargs=kwargs)
-> 1432 result = op.agg()
1433 if not is_dict_like(func) and result is not None:
1434 # GH #52849
1435 if not self.as_index and is_list_like(func):
File ~\anaconda3\Lib\site-packages\pandas\core\apply.py:187, in Apply.agg(self)
184 kwargs = self.kwargs
186 if isinstance(func, str):
--> 187 return self.apply_str()
189 if is_dict_like(func):
190 return self.agg_dict_like()
File ~\anaconda3\Lib\site-packages\pandas\core\apply.py:603, in Apply.apply_str(self)
601 else:
602 self.kwargs["axis"] = self.axis
--> 603 return self._apply_str(obj, func, *self.args, **self.kwargs)
File ~\anaconda3\Lib\site-packages\pandas\core\apply.py:693, in Apply._apply_str(self, obj, func, *args, **kwargs)
691 f = getattr(obj, func)
692 if callable(f):
--> 693 return f(*args, **kwargs)
695 # people may aggregate on a non-callable attribute
696 # but don't let them think they can pass args to it
697 assert len(args) == 0
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:2452, in GroupBy.mean(self, numeric_only, engine, engine_kwargs)
2445 return self._numba_agg_general(
2446 grouped_mean,
2447 executor.float_dtype_mapping,
2448 engine_kwargs,
2449 min_periods=0,
2450 )
2451 else:
-> 2452 result = self._cython_agg_general(
2453 "mean",
2454 alt=lambda x: Series(x, copy=False).mean(numeric_only=numeric_only),
2455 numeric_only=numeric_only,
2456 )
2457 return result.__finalize__(self.obj, method="groupby")
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1998, in GroupBy._cython_agg_general(self, how, alt, numeric_only, min_count, **kwargs)
1995 result = self._agg_py_fallback(how, values, ndim=data.ndim, alt=alt)
1996 return result
-> 1998 new_mgr = data.grouped_reduce(array_func)
1999 res = self._wrap_agged_manager(new_mgr)
2000 if how in ["idxmin", "idxmax"]:
File ~\anaconda3\Lib\site-packages\pandas\core\internals\managers.py:1469, in BlockManager.grouped_reduce(self, func)
1465 if blk.is_object:
1466 # split on object-dtype blocks bc some columns may raise
1467 # while others do not.
1468 for sb in blk._split():
-> 1469 applied = sb.apply(func)
1470 result_blocks = extend_blocks(applied, result_blocks)
1471 else:
File ~\anaconda3\Lib\site-packages\pandas\core\internals\blocks.py:393, in Block.apply(self, func, **kwargs)
387 @final
388 def apply(self, func, **kwargs) -> list[Block]:
389 """
390 apply the function to my values; return a block if we are not
391 one
392 """
--> 393 result = func(self.values, **kwargs)
395 result = maybe_coerce_values(result)
396 return self._split_op_result(result)
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1995, in GroupBy._cython_agg_general.<locals>.array_func(values)
1992 return result
1994 assert alt is not None
-> 1995 result = self._agg_py_fallback(how, values, ndim=data.ndim, alt=alt)
1996 return result
File ~\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1946, in GroupBy._agg_py_fallback(self, how, values, ndim, alt)
1944 msg = f"agg function failed [how->{how},dtype->{ser.dtype}]"
1945 # preserve the kind of exception that raised
-> 1946 raise type(err)(msg) from err
1948 if ser.dtype == object:
1949 res_values = res_values.astype(object, copy=False)
TypeError: agg function failed [how->mean,dtype->object]
</code></pre>
<p>My pandas version is 2.2.3.</p>
<p>However, when I write the code as below, it works fine.</p>
<pre><code>sales[['Date','Revenue','Expenses']].pivot_table(index='Date')
</code></pre>
<p><a href="https://i.sstatic.net/YYytgNx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YYytgNx7.png" alt="enter image description here" /></a></p>
<p>I don't know how the GitHub code manages to filter out the object columns and calculate only the int64 columns ('Revenue', 'Expenses').</p>
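<p>My guess (unverified) is that older pandas versions silently dropped the non-numeric columns here, while the current version tries to apply the default <code>aggfunc='mean'</code> to the text columns as well. Passing the numeric columns explicitly also works for me:</p>
<pre><code>import pandas as pd

sales = pd.read_csv('sales_by_employee.csv', parse_dates=['Date'])

# Restrict the aggregation to the numeric columns so the default
# mean aggregation never touches the text columns.
print(sales.pivot_table(index='Date', values=['Revenue', 'Expenses'], aggfunc='mean').head())
</code></pre>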
|
<python><pandas><pivot-table>
|
2025-03-13 00:08:12
| 0
| 403
|
MCPMH
|
79,505,126
| 9,133,950
|
Convert the values in a SQL array by SELECTing from another table?
|
<p>I have a table "Data" containing arrays of FOOs, and a separate table "Lookup" where I can find the BAR for each FOO. I want to write a SQL query which returns the Data table, but with arrays of the BARs that correspond to the FOOs in the original.</p>
<p>I appreciate it may be an awkward table structure from a SQL perspective, and I won't go into why it is this way. I just want to know if what I want is technically possible.</p>
<p>As an example, the "Data" table might contain arrays of cities, and I want arrays of countries.</p>
<h3>Data</h3>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Cities</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>'Mark'</code></td>
<td><code>['Manchester', 'Paris']</code></td>
</tr>
<tr>
<td><code>'Aisha'</code></td>
<td><code>['London', 'Munich']</code></td>
</tr>
</tbody>
</table></div>
<h3>Lookup</h3>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Country</th>
<th>Capital_city</th>
<th>Other_cities</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>'Britain'</code></td>
<td><code>'London'</code></td>
<td><code>['Manchester', 'Bristol']</code></td>
</tr>
<tr>
<td><code>'France'</code></td>
<td><code>'Paris'</code></td>
<td><code>['Bordeaux', 'Marseilles']</code></td>
</tr>
<tr>
<td><code>'Germany'</code></td>
<td><code>'Berlin'</code></td>
<td><code>['Munich', 'Cologne']</code></td>
</tr>
</tbody>
</table></div>
<h3>Desired output</h3>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Countries</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>'Mark'</code></td>
<td><code>['Britain', 'France']</code></td>
</tr>
<tr>
<td><code>'Aisha'</code></td>
<td><code>['Britain', 'Germany']</code></td>
</tr>
</tbody>
</table></div>
<h3>What I have tried so far</h3>
<p>I have a working query which can find the country for a given city <code>@City</code>:</p>
<pre><code>SELECT Country
FROM Lookup
WHERE (
Capital_City = @City
OR @City IN (SELECT Cities FROM Lookup)
)
</code></pre>
<p>What I can't figure out is: how to do this for every single item in every array in the Cities column of my Data table?</p>
<p>Note: I'm using the <a href="https://docs.pola.rs/api/python/stable/reference/sql/index.html" rel="nofollow noreferrer">SQL API from the python Polars module</a>. It seems to generally support core SQL features.</p>
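<p>In case it helps, here is a rough sketch of what I think the equivalent looks like in the Polars expression API rather than SQL (assuming the list columns are plain string lists); I would still prefer a pure SQL solution:</p>
<pre><code>import polars as pl

data = pl.DataFrame({
    "Name": ["Mark", "Aisha"],
    "Cities": [["Manchester", "Paris"], ["London", "Munich"]],
})
lookup = pl.DataFrame({
    "Country": ["Britain", "France", "Germany"],
    "Capital_city": ["London", "Paris", "Berlin"],
    "Other_cities": [["Manchester", "Bristol"], ["Bordeaux", "Marseilles"], ["Munich", "Cologne"]],
})

# Build a flat city -> country mapping from both lookup columns.
city_to_country = pl.concat([
    lookup.select(pl.col("Capital_city").alias("city"), "Country"),
    lookup.select(pl.col("Other_cities").alias("city"), "Country").explode("city"),
])

# Explode the city lists, map each city to its country, and re-aggregate per Name.
result = (
    data.explode("Cities")
    .join(city_to_country, left_on="Cities", right_on="city", how="left")
    .group_by("Name", maintain_order=True)
    .agg(pl.col("Country").alias("Cities"))
)
print(result)
</code></pre>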
|
<python><sql><python-polars>
|
2025-03-13 00:07:38
| 1
| 375
|
DarthVlader
|
79,504,418
| 7,256,443
|
How to check if `pyright: ignore` comments are still required?
|
<p>I have recently switched from using <code>mypy</code> to <code>pyright</code> for static type checking. <code>mypy</code> has a useful feature whereby it can be configured to warn you if it detects comments instructing it to ignore certain errors which would not actually be raised (described here: <a href="https://stackoverflow.com/a/65581907/7256443">https://stackoverflow.com/a/65581907/7256443</a>).</p>
<p>Does anyone know if <code>pyright</code> has a similar feature? I can't see any reference to it in their documentation. Ideally, it would apply to the pyright-specific ignore comments (<code># pyright: ignore [errorCode]</code>) as well as the generic <code># type: ignore</code>.</p>
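<p>For reference, this is the kind of situation I mean; with <code>warn_unused_ignores</code> enabled, mypy reports the comment below as unused because the line would not raise a type error anyway:</p>
<pre><code>def answer() -> int:
    # mypy (warn_unused_ignores = True) flags this as an unused ignore,
    # since the return statement is already correctly typed.
    return 42  # type: ignore
</code></pre>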
|
<python><python-typing><pyright>
|
2025-03-12 17:27:13
| 1
| 1,033
|
Ben Jeffrey
|
79,504,328
| 1,584,736
|
Confused about setting environment variables from Python script
|
<p>The problem is that I'm not able to fetch the variable I set in any way.</p>
<p>I used</p>
<pre><code> for var in vars:
    os.putenv(str(f"${var}"), local_config[var])
    print(f"{var} has been set")
</code></pre>
<p>I've also tried to use <code>os.environ[var]</code> but the result is always empty. I've also tried to remove the <code>$</code> character.</p>
<p>When I echo the variable in the shell I get nothing, but from inside the script I have both the variable and the value, which proves that the data is ok.</p>
<p>How can I have something like <code>os.environ["LOL"] = "TEST"</code> and have "TEST" back when I <code>echo $LOL</code>? Just to be clear, it is the same shell I run the script in.</p>
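<p>To illustrate what I expected (with a made-up variable name): setting <code>os.environ</code> works for child processes started from the script, but the shell that launched the script still sees nothing.</p>
<pre><code>import os
import subprocess

# The change is visible to this process and to its children...
os.environ["LOL"] = "TEST"
print(subprocess.run(["sh", "-c", "echo $LOL"],
                     capture_output=True, text=True).stdout)  # prints TEST

# ...but a child process cannot modify the environment of the parent shell,
# so running `echo $LOL` in that shell afterwards still prints nothing.
</code></pre>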
<p>Thanks for the help.</p>
|
<python><python-3.x>
|
2025-03-12 16:52:26
| 1
| 1,280
|
ngw
|
79,504,309
| 11,062,613
|
Create a uniform dataset in Polars with cross joins
|
<p>I am working with Polars and need to ensure that my dataset contains all possible combinations of unique values in certain index columns. If a combination is missing in the original data, it should be filled with null.</p>
<p>Currently, I use the following approach with sequential cross joins:</p>
<pre><code>from typing import Sequence

import polars as pl

def ensure_uniform(df: pl.DataFrame, index_cols: Sequence[str]) -> pl.DataFrame:
    # Quick exit
    if len(index_cols) == 1:
        return df
    # Get unique values of the first index column
    uniform_df = df.select(index_cols[0]).unique(maintain_order=True)
    # Cross join with other unique index columns
    for i in range(1, len(index_cols)):
        unique_index_values = df.select(index_cols[i]).unique(maintain_order=True)
        uniform_df = uniform_df.join(unique_index_values, how="cross")
    # Left join with the original DataFrame to preserve existing values
    return uniform_df.join(df, on=index_cols, how="left")
</code></pre>
<p>Here is an example:</p>
<pre><code>df = pl.from_repr('''
┌─────┬─────┬─────┬───────┐
│ g1 ┆ g2 ┆ g3 ┆ value │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═══════╡
│ A ┆ 1 ┆ 1 ┆ 10 │
│ A ┆ 1 ┆ 2 ┆ 20 │
│ B ┆ 2 ┆ 1 ┆ 30 │
│ B ┆ 2 ┆ 2 ┆ 40 │
└─────┴─────┴─────┴───────┘
''')
uniform_df = ensure_uniform(df, index_cols=["g1", "g2", "g3"])
print(uniform_df)
# ┌─────┬─────┬─────┬───────┐
# │ g1 ┆ g2 ┆ g3 ┆ value │
# │ --- ┆ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ i64 ┆ i64 │
# ╞═════╪═════╪═════╪═══════╡
# │ A ┆ 1 ┆ 1 ┆ 10 │
# │ A ┆ 1 ┆ 2 ┆ 20 │
# │ A ┆ 2 ┆ 1 ┆ null │
# │ A ┆ 2 ┆ 2 ┆ null │
# │ B ┆ 1 ┆ 1 ┆ null │
# │ B ┆ 1 ┆ 2 ┆ null │
# │ B ┆ 2 ┆ 1 ┆ 30 │
# │ B ┆ 2 ┆ 2 ┆ 40 │
# └─────┴─────┴─────┴───────┘
</code></pre>
<p>Any suggestions for making this more graceful and efficient?</p>
<p>Edit: @Dean MacGregor & @orlp, thank you for your answers.
All approaches show comparable performance (+/-10%), with @Dean MacGregor's proposal consistently being slightly faster than the others. After testing on multiple setups, I found that the main bottleneck seems to be the final join process, combining the uniform multiindex with the original dataset, rather than building the Cartesian product beforehand. This suggests that speed and peak memory consumption remain similar regardless of how the Cartesian product is computed, especially as dataset sizes grow.</p>
<p>Both proposals enable lazy operations, which could be useful depending on the use case.
Since there is no dedicated method for computing the Cartesian product, all options seem to be valid.</p>
|
<python><dataframe><python-polars><polars>
|
2025-03-12 16:46:33
| 2
| 423
|
Olibarer
|
79,504,248
| 4,183,498
|
Heroku installs extra Python package
|
<p>I have a Python/Django app which uses <code>Pipfile</code> and <code>Pipfile.lock</code> running on Heroku-22 stack with <a href="https://devcenter.heroku.com/articles/buildpacks#classic-buildpacks" rel="nofollow noreferrer">classic buildpacks</a>.</p>
<p><code>requirements.txt</code> is not part of the code base, thus e.g. <a href="https://stackoverflow.com/questions/55272714/heroku-modulenotfounderror-no-module-named-requests">Heroku: ModuleNotFoundError :No module named 'requests'</a> and similar issues are not relevant.</p>
<p><code>Pipfile</code> and <code>Pipfile.lock</code> contain <code>psycopg2</code> (not <code>psycopg2-binary</code>). The testing env has the expected packages installed; however, the production env has an extra package, <code>psycopg</code>, installed.</p>
<pre><code>$ heroku run bash -a my-app-production
~ $ pip freeze | grep -i psyco
psycopg==3.2.4
psycopg2==2.9.10
</code></pre>
<p>vs</p>
<pre><code>$ heroku run bash -a my-app-testing
~ $ pip freeze | grep -i psyco
psycopg2==2.9.10
</code></pre>
<p>Testing and production envs are the same with one exception - <a href="https://devcenter.heroku.com/articles/postgres-connection-pooling" rel="nofollow noreferrer">server side Pgbouncer</a> runs on production. Pgbouncer (<a href="https://github.com/pgbouncer/pgbouncer" rel="nofollow noreferrer">https://github.com/pgbouncer/pgbouncer</a>) lists <code>psycopg</code> as one of its dependencies. Though according to Heroku's docs, Pgbouncer's connection pool exists on the database server, not on the app's dyno.</p>
<p>Also, installing the app's dependencies locally in a Docker container with <code>pipenv install --deploy</code> (Heroku runs the same command) works as expected.</p>
<p>Any idea where this extra <code>psycopg</code> package is coming from? Is it installed because of Pgbouncer (unlikely, in my opinion)?</p>
|
<python><django><heroku><psycopg2><pgbouncer>
|
2025-03-12 16:25:52
| 0
| 10,009
|
Dušan Maďar
|
79,504,247
| 1,412,564
|
How to find server_version in psycopg3?
|
<p>I have this test in Django <a href="https://stackoverflow.com/questions/47232413/how-to-find-postgres-version-from-django">[1]</a> which is using psycopg2:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import connection
def test_postgresql_version(self):
    postgresql_version = connection.cursor().connection.server_version
    if (postgresql_version >= 140000):
        pass
    else:
        raise NotImplementedError("postgresql version must be at least 14.0.")
</code></pre>
<p>This test fails with psycopg3:</p>
<pre class="lang-py prettyprint-override"><code>AttributeError: 'Connection' object has no attribute 'server_version'
</code></pre>
<p>How do I check the server_version in psycopg3?</p>
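<p>For what it's worth, the closest thing I have found so far is the connection's <code>info</code> object in psycopg 3, but I am not sure this is the intended route from Django (sketch):</p>
<pre class="lang-py prettyprint-override"><code>from django.db import connection

def test_postgresql_version(self):
    # Assumption: Django still exposes the raw psycopg connection here, and
    # psycopg 3 keeps the server version on Connection.info rather than on
    # the connection object itself.
    postgresql_version = connection.cursor().connection.info.server_version
    if postgresql_version < 140000:
        raise NotImplementedError("postgresql version must be at least 14.0.")
</code></pre>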
|
<python><django><postgresql><psycopg2><psycopg3>
|
2025-03-12 16:25:10
| 3
| 3,361
|
Uri
|
79,504,009
| 33,886
|
Detect coordinate precision in polars floats?
|
<p>I have some coordinate data; some of it high precision, some of it low precision thanks to multiple data sources and other operational realities. I want to have a column that indicates the relative precision of the coordinates. So far, what I want is to essentially count digits after the decimal; in my case more digits indicates higher precision data. In my case I usually get data like the data in the example; its either coming with five to six digits precision or just one digit. Both are useful; but we can do more analysis on higher precision data as you may imagine.</p>
<p>This code does what I want, but it seems .... wordy, inelegant; as if I'm being paid by the line of code. Is there a simpler way to do this?</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"lat": [ 43.6425047, 43.6, 40.688966, 40.6],
"lng": [-79.3861057, -79.3, -74.044438, -74.0],
}
)
df = (df.with_columns(
pl.col("lat").cast(pl.String)
.str.split_exact(".", 1)
.struct.rename_fields(["lat_major", "lat_minor"])
.alias("lat_temp"))
.unnest("lat_temp")
.with_columns(
pl.col("lat_minor")
.str.len_chars()
.alias("lat_precision"))
.drop("lat_major", "lat_minor")
.with_columns(
pl.col("lng").cast(pl.String)
.str.split_exact(".", 1)
.struct.rename_fields(["lng_major", "lng_minor"])
.alias("lng_temp"))
.unnest("lng_temp")
.with_columns(
pl.col("lng_minor")
.str.len_chars()
.alias("lng_precision"))
.drop("lng_major", "lng_minor")
.with_columns(
pl.col("lat_precision")
.add(pl.col("lng_precision"))
.alias("precision"))
.drop("lat_precision", "lng_precision")
)
df.head()
</code></pre>
<p>results in</p>
<pre><code>shape: (4, 3)
┌───────────┬────────────┬───────────┐
│ lat ┆ lng ┆ precision │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ u32 │
╞═══════════╪════════════╪═══════════╡
│ 43.642505 ┆ -79.386106 ┆ 14 │
│ 43.6 ┆ -79.3 ┆ 2 │
│ 40.688966 ┆ -74.044438 ┆ 12 │
│ 40.6 ┆ -74.0 ┆ 2 │
└───────────┴────────────┴───────────┘
</code></pre>
<p>Later I might pull out records with precision over 5, for instance, as my source data tends to have either one decimal place or four+ decimal places of precision per coordinate.</p>
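<p>One shorter direction I have been toying with (not sure it is more idiomatic) is to wrap the string round-trip in a small helper expression and add the two counts directly:</p>
<pre><code>import polars as pl

df = pl.DataFrame(
    {
        "lat": [43.6425047, 43.6, 40.688966, 40.6],
        "lng": [-79.3861057, -79.3, -74.044438, -74.0],
    }
)

def decimal_digits(name: str) -> pl.Expr:
    # number of characters after the decimal point, via a string round-trip
    return pl.col(name).cast(pl.String).str.extract(r"\.(\d+)$", 1).str.len_chars()

df = df.with_columns((decimal_digits("lat") + decimal_digits("lng")).alias("precision"))
print(df.head())
</code></pre>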
|
<python><python-3.x><coordinates><python-polars><polars>
|
2025-03-12 15:09:30
| 1
| 1,012
|
Kyle
|
79,503,984
| 13,812,982
|
Determining which row causes IntegrityError exception when using cursor.executemany() to insert data
|
<p>I am inserting a few thousand rows into a table with a UNIQUE constraint, using <code>cursor.executemany()</code>. On some occasions the data may have a duplicate which violates the constraint, and <code>sqlite</code> throws an <code>IntegrityError</code> exception.</p>
<p>Is there any way to determine which row caused the violation?</p>
<p>Simplified code:</p>
<pre><code>import sqlite3
conn = sqlite3.connect(':memory:')
#Create table
cursor = conn.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS test(name TEXT,UNIQUE(name))')
data = [('Larry',),('Curly',),('Moe',),('Larry',)]
try:
    cursor.executemany('INSERT INTO test VALUES(?)',data)
    print('{0} rows inserted'.format(cursor.rowcount))
except sqlite3.IntegrityError as ie:
    print('Exception:',ie)
</code></pre>
<p>Which results in:</p>
<pre><code>Exception: UNIQUE constraint failed: test.name
</code></pre>
<p>as expected. But how can I identify that it was the duplicate 'Larry' row that violated the UNIQUE constraint? Or do I just have to insert the rows one at a time?</p>
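<p>The fallback I have in mind, if there is no better option, is a row-at-a-time loop (reusing the setup above) so the offending row can be reported:</p>
<pre><code># Fallback sketch: insert rows one at a time so the offending row is known.
for row in data:
    try:
        cursor.execute('INSERT INTO test VALUES(?)', row)
    except sqlite3.IntegrityError as ie:
        print('Exception:', ie, 'caused by row:', row)
</code></pre>
<p>But with a few thousand rows I would rather keep <code>executemany()</code> if the offending row can be identified some other way.</p>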
|
<python><sqlite>
|
2025-03-12 15:02:51
| 0
| 4,331
|
DS_London
|
79,503,943
| 9,669,142
|
PyCharm Cannot start the IDE
|
<p>I always used Spyder for my Python scripts, but I have now switched to PyCharm and installed it on my PC (version 2024.3.4). I made some Python scripts earlier in Spyder and committed those to a repo on GitLab. Now I want to clone this repo in PyCharm, but I get the following error:</p>
<p><a href="https://i.sstatic.net/x7NPuyiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x7NPuyiI.png" alt="enter image description here" /></a></p>
<p>Does anyone know why this error occurs and how to fix it? I have no idea what it means, and the suggested URL is not helping.</p>
|
<python><pycharm>
|
2025-03-12 14:44:05
| 1
| 567
|
Fish1996
|
79,503,865
| 9,967,780
|
numpy timedelta highest unit without loss of information
|
<p>I have several numpy timedelta values.
I want to convert them to a format that is easier for humans to read, without losing information.<br />
Let's say I have <code>td = np.timedelta64(10800000000001, 'ns')</code>.
Then I can only keep it in <code>ns</code>, because converting it to <code>ms</code> or higher would lose information.
If I have <code>td = np.timedelta64(10800000000000, 'ns')</code> I can convert it to <code>3 hours</code> without losing information.
What is a good way to do this automatically?<br />
I tried it by taking the number of trailing zeros into account:</p>
<pre><code>import numpy as np
if __name__ == "__main__":
    td = np.timedelta64(10800000000001, 'ns')
    number_of_zeros = len(str(td.item())) - len(str(td.item()).rstrip('0'))
    if number_of_zeros==0:
        print("[ns]")
    elif number_of_zeros<7:
        print("ms")
    elif number_of_zeros<10:
        print("s")
    elif number_of_zeros<12:
        print("min")
    else:
        print("h")
</code></pre>
<p>This is probably not a very elegant way to do it (not to mention that it will be wrong when we get to minutes and higher).
Any recommendations?</p>
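<p>The only alternative I have come up with so far (lightly tested sketch) is to check divisibility against each unit directly instead of counting zeros:</p>
<pre><code>import numpy as np

def best_unit(td: np.timedelta64) -> np.timedelta64:
    # Walk from the coarsest to the finest unit and return the first
    # representation that divides td exactly, i.e. loses no information.
    td_ns = td.astype("timedelta64[ns]")
    for unit in ("D", "h", "m", "s", "ms", "us", "ns"):
        if td_ns % np.timedelta64(1, unit) == np.timedelta64(0, "ns"):
            return td.astype(f"timedelta64[{unit}]")
    return td

print(best_unit(np.timedelta64(10800000000001, "ns")))  # stays in nanoseconds
print(best_unit(np.timedelta64(10800000000000, "ns")))  # 3 hours
</code></pre>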
|
<python><numpy><timedelta>
|
2025-03-12 14:11:02
| 2
| 374
|
AkariYukari
|
79,503,813
| 13,258,554
|
MSIX Packaging Tool - Failing Installation
|
<p>I am trying to create a Windows app from the following simple Python code.</p>
<pre><code>from qtpy.QtWidgets import QApplication, QLabel
from qtpy.QtCore import Qt
if __name__ == "__main__":
    # Create the application instance
    app = QApplication([])
    # Create a fullscreen label
    label = QLabel("Hello World!")
    label.setStyleSheet("font-size: 40px; text-align: center;")  # Increase text size
    label.setAlignment(Qt.AlignCenter)  # Center the text
    label.showFullScreen()  # Make it fullscreen
    # Run the application event loop
    app.exec_()
</code></pre>
<p><code>PyInstaller</code> is used to create a single executable and <code>Inno Setup Compiler</code> to convert the executable to an installer.</p>
<p>Lastly, <code>MSIX Packaging Tool</code> converts the installer to an MSIX app installer.</p>
<p>The installation of the msix file fails with the following error:</p>
<blockquote>
<p>App installation failed with error message: Common::Deployment::MsixvcStagingSession::GetManifestReader in MsixvcStagingSession failed with error 0x80070570. (0x80070570)</p>
</blockquote>
<p>What could be the reason for this error, and are there any other, less cumbersome, methods to create a Windows app from a Python project?</p>
<p>The end-goal would be to create a Python UI in QtPy that runs on a Windows machine in single app kiosk mode.</p>
|
<python><uwp><packaging><msix>
|
2025-03-12 13:54:49
| 1
| 393
|
Mark wijkhuizen
|
79,503,514
| 7,134,737
|
How can I reuse the same virtualenv in a different project?
|
<p>I have created a project folder with a <code>pyproject.toml</code>, but I want to reuse the virtualenv created in ProjectA in ProjectB. How can I do that?</p>
<p>Whenever I use <code>poetry env use /full/path/python.exe</code>, it basically creates another venv for me, which I want to avoid.</p>
|
<python><virtualenv><python-poetry>
|
2025-03-12 11:58:17
| 1
| 3,312
|
ng.newbie
|
79,503,401
| 7,462,275
|
How to generate efficiently a full panda data frame from another data frame with missing values?
|
<p>Consider the following data frame: <code>df1 = pd.DataFrame({'col1':[1,4,5,7],'col2':[10,20,30,2]})</code>. Some values are missing in <code>col1</code>, so it is not a full series from 1 to 7.</p>
<p>So a second data frame <code>df2</code> is created with the full series in <code>col1</code>: <code>df2 = pd.DataFrame({'col1':np.arange(1,8,1),'col2':0})</code>, with a default value of 0 in <code>col2</code>.</p>
<p>Afterwards, <code>col2</code> of <code>df2</code> is filled with the <code>col2</code> data from <code>df1</code>:</p>
<pre><code>for i in df2.col1 :
    aaa = df1.loc[df1['col1'] == i ]['col2']
    if (len(aaa) != 0):
        df2.loc[df2['col1'] == i, 'col2'] = int(aaa.iloc[0])
</code></pre>
<p>The result is as expected, but I am sure this code is not very efficient.
Is there a function in pandas to do this directly?</p>
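<p>For comparison, the direction I was considering instead of the loop (if a built-in way exists) is something merge-based, roughly:</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'col1': [1, 4, 5, 7], 'col2': [10, 20, 30, 2]})

# Left-merge the full range of col1 values against df1, then replace the
# missing col2 entries with the default value 0.
full = pd.DataFrame({'col1': np.arange(1, 8, 1)})
df2 = full.merge(df1, on='col1', how='left').fillna({'col2': 0}).astype({'col2': int})
print(df2)
</code></pre>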
|
<python><pandas><dataframe>
|
2025-03-12 11:10:00
| 0
| 2,515
|
Stef1611
|