QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string)
|---|---|---|---|---|---|---|---|---|
79,422,077
| 10,537,579
|
Final file content rolled back to initial file content in streamlit
|
<p>I have created a simple <code>streamlit</code> application to browse files and folders and written the Python code as:</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
import tkinter as tk
from tkinter import filedialog
import os
import pandas as pd
from pathlib import Path
uploaded_files = st.file_uploader("Upload your Python/SQL/C file", type=["py","sql","c"], accept_multiple_files=True)
st.session_state.file_contents = ""
root = tk.Tk()
root.withdraw()
root.wm_attributes('-topmost', 1)
st.write('Please select a folder:')
clicked = st.button('Browse Folder')
if clicked:
    dirname = str(filedialog.askdirectory(master=root))
    files = [file for file in os.listdir(dirname)]
    output = pd.DataFrame({"File Name": files})
    st.table(output)
    for file in files:
        st.session_state.file_contents += Path(os.path.join(dirname, file)).read_text()

for uploaded_file in uploaded_files:
    st.session_state.file_contents += uploaded_file.read().decode('utf-8') + "\n"

print("File content initially:", st.session_state.file_contents)
</code></pre>
<p>Now, when I select a file, say <code>file1</code>, after clicking on "Browse files", and then select a folder containing <code>file2</code> by clicking on "Browse Folder", <code>st.session_state.file_contents</code> holds the content of both <code>file1</code> and <code>file2</code>.</p>
<p>Until now the code is working fine. But when I write the code as:</p>
<pre class="lang-py prettyprint-override"><code>if (uploaded_files and st.session_state.file_contents and st.session_state.messages) is not None:
    for message in st.session_state.messages:
        if message["role"] == "user":
            st.chat_message("human").markdown(message["content"])
        if message["role"] == "ai":
            st.chat_message("ai").markdown(message["content"])
    if prompt := st.chat_input("Generate test cases for the attached file(s)"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)
        print("File content before calling a function:", st.session_state.file_contents)
</code></pre>
<p>Now, <code>st.session_state.file_contents</code> contains only the content of <code>file1</code>, not <code>file2</code>.</p>
<p>So, can anyone please explain why I am not getting the content of <code>file2</code>? Why does it disappear after calling <code>st.chat_input()</code>?</p>
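<p>For reference, Streamlit re-executes the whole script on every widget interaction, including the rerun triggered by <code>st.chat_input()</code>. A minimal sketch of rerun-safe initialization (an assumption about the cause, not a verified fix):</p>
<pre class="lang-py prettyprint-override"><code># guard the reset so an existing value survives reruns (sketch, untested)
if "file_contents" not in st.session_state:
    st.session_state.file_contents = ""
</code></pre>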
<p>Any help would be appreciated.</p>
|
<python><python-3.x><file-upload><directory><streamlit>
|
2025-02-07 20:14:25
| 0
| 357
|
ankit
|
79,422,061
| 2,969,880
|
Problem with FastAPI, Pydantic, and kebab-case header fields
|
<p>In my FastAPI project, if I create a common header definition with Pydantic, I find that kebab-case header fields aren't behaving as expected. The "magic" conversion from kebab-case header fields in the request to their snake_case counterparts is not working, in addition to inconsistencies in the generated Swagger docs.</p>
<p>What is the right way to specify this Pydantic header class so that the Swagger docs and behavior match?</p>
<p>Here's a minimal reproduction of the problem:</p>
<pre class="lang-py prettyprint-override"><code>### main.py
from typing import Annotated
from fastapi import FastAPI, Header
from pydantic import BaseModel, Field
app = FastAPI()
class CommonHeaders(BaseModel):
    simpleheader: str
    a_kebab_header: str | None = Field(
        default=None,
        title="a-kebab-header",
        alias="a-kebab-header",
        description="This is a header that should be specified as `a-kebab-header`",
    )

@app.get("/")
def root_endpoint(
    headers: Annotated[CommonHeaders, Header()],
):
    result = {"headers received": headers}
    return result
</code></pre>
<p>If I run this and look at the Swagger docs at http://localhost:8000/docs I see this, which looks correct:</p>
<p><a href="https://i.sstatic.net/oTz6jFuA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTz6jFuA.png" alt="Swagger docs showing correctly specified kebab-case header." /></a></p>
<p>And if I "try it out" it will generate what I would expect as the correct request:</p>
<pre class="lang-bash prettyprint-override"><code>curl -X 'GET' \
'http://localhost:8000/' \
-H 'accept: application/json' \
-H 'simpleheader: foo' \
-H 'a-kebab-header: bar'
</code></pre>
<p>But in the response, it becomes clear it did not correctly receive the kebab-case header:</p>
<pre class="lang-json prettyprint-override"><code>{
  "headers received": {
    "simpleheader": "foo",
    "a-kebab-header": null
  }
}
</code></pre>
<p>Changing the header name to snake_case "a_kebab_header" in the request does not work, either.</p>
<p>Updating the header definition to look like this doesn't work as expected, either. The Swagger docs and actual behavior are inconsistent.</p>
<pre class="lang-py prettyprint-override"><code>class CommonHeaders(BaseModel):
    simpleheader: str
    a_kebab_header: str | None = Field(
        default=None,
        description="This is a header that should be specified as `a-kebab-header`",
    )
</code></pre>
<p>Notice this now results in the Swagger docs specifying it in snake_case:</p>
<p><a href="https://i.sstatic.net/bZyY08bU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZyY08bU.png" alt="Swagger docs showing incorrectly specified header in snake_case" /></a></p>
<p>And using "try it out" results in the snake_case variant:</p>
<pre class="lang-bash prettyprint-override"><code>curl -X 'GET' \
'http://localhost:8000/' \
-H 'accept: application/json' \
-H 'simpleheader: foo' \
-H 'a_kebab_header: bar'
</code></pre>
<p>But SURPRISINGLY this doesn't work! The response:</p>
<pre class="lang-json prettyprint-override"><code>{
  "headers received": {
    "simpleheader": "foo",
    "a_kebab_header": null
  }
}
</code></pre>
<p>But in a SURPRISE ENDING, if I manually re-write the request in kebab-case:</p>
<pre class="lang-bash prettyprint-override"><code>curl -X 'GET' \
'http://localhost:8000/' \
-H 'accept: application/json' \
-H 'simpleheader: foo' \
-H 'a-kebab-header: bar'
</code></pre>
<p>it finally picks up that header value via the magic translation and I get the desired results back:</p>
<pre class="lang-json prettyprint-override"><code>{"headers received":{"simpleheader":"foo","a_kebab_header":"bar"}}
</code></pre>
<p><strong>What is the right way to specify this Pydantic header class so that the Swagger docs and behavior match?</strong> If the docs are inconsistent with behavior I'm going to get hassled.</p>
<hr />
<p>As a final thought: the following way works correctly in both the OpenAPI documentation and in the application (displaying and working as kebab-case), BUT it doesn't use Pydantic and so I lose the ability to define and use a common header structure easily across my project, and instead need to declare them individually for each endpoint:</p>
<pre class="lang-py prettyprint-override"><code>"""Alternative version without Pydantic."""
from typing import Annotated
from fastapi import FastAPI, Header
app = FastAPI()
@app.get("/")
def root_endpoint(
    simpleheader: Annotated[str, Header()],
    a_kebab_header: Annotated[
        str | None,
        Header(
            title="a-kebab-header",
            description="This is a header that should be specified as `a-kebab-header`",
        ),
    ] = None,
):
    result = {
        "headers received": {
            "simpleheader": simpleheader,
            "a_kebab_header": a_kebab_header,
        }
    }
    return result
</code></pre>
|
<python><fastapi><openapi><swagger-ui><pydantic>
|
2025-02-07 20:05:40
| 1
| 1,461
|
sql_knievel
|
79,422,054
| 20,771,478
|
Check if string only contains characters from a certain ISO specification
|
<p>Short question:
What is the most efficient way to check whether a <code>.TXT</code> file contains only characters defined in a selected ISO specification?</p>
<p>Question with full context:
In the German energy market EDIFACT is used to automatically exchange information. Each file exchanged has a header segment which contains information about the contents of the file.</p>
<p>Please find an example of this segment below.</p>
<pre><code>UNB+UNOC:3+9903323000007:500+9900080000007:500+250102:0900+Y48A42R58CRR43++++++
</code></pre>
<p>As you can see after the <code>UNB+</code> we find the content <code>UNOC</code>. This tells us which character set is used in the file. In this case it is <a href="https://en.wikipedia.org/wiki/ISO/IEC_8859-1" rel="nofollow noreferrer">ISO/IEC 8859-1</a>.</p>
<p>I would like a python method which checks whether the EDIFACT file contains only characters specified in <a href="https://en.wikipedia.org/wiki/ISO/IEC_8859-1" rel="nofollow noreferrer">ISO/IEC 8859-1</a>.</p>
<p>The simplest solution I can think of is something like this (pseudocode).</p>
<pre><code>ISO_string = "..."      # all characters contained in ISO/IEC 8859-1
EDIFACT_string = "..."  # contents of the EDIFACT file

for EDIFACT_char in EDIFACT_string:
    is_iso_char = False
    for ISO_char in ISO_string:
        if EDIFACT_char == ISO_char:
            is_iso_char = True
            break
    if not is_iso_char:
        # file contains a char not in ISO/IEC 8859-1 and needs to be rejected
        do_error_handling()  # placeholder for the real error handling
</code></pre>
<p>I studied business informatics and lack the theoretical background in algorithm theory. This feels like a very inefficient method, and since EDIFACT needs to be processed quickly I don't want this functionality to be a bottleneck.</p>
<p>Is there an inbuilt Python way to achieve what I want more efficiently?</p>
<p>Update #1:</p>
<p>I wrote this code as suggested by Barmar. To check it, I added the Chinese characters for "World" to the file (世界). I expected <code>.decode</code> to throw an error. However, it just decodes the byte string and adds some strange characters at the beginning.</p>
<p>File Contents: <code>世界UNB+UNOC:3+9903323000007:500+9900080000007:500+250102:0900+Y48A42R58CRR43++++++</code></p>
<pre><code>with open(Filename, "rb") as edifact_file:
    edifact_bytes = edifact_file.read()

try:
    verified_edifact_string = edifact_bytes.decode(encoding='latin_1', errors='strict')
except:
    print("String does not conform to ISO specification")

print(verified_edifact_string)
</code></pre>
<p>Prints:
<a href="https://i.sstatic.net/EDgdlkoZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDgdlkoZ.png" alt="enter image description here" /></a>
If I just copy the output here, Stack Overflow cuts away some of the characters.</p>
<p>Edit #2:
According to the Python documentation, the ISO/IEC 8859-1 specification is called <code>latin_1</code> when using Python's <code>.decode()</code> and <code>.encode()</code> methods.</p>
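<p>For what it's worth, a minimal sketch of a membership check (assuming the file has already been decoded to a <code>str</code>, e.g. from UTF-8): <code>str.encode</code> raises for any character without an ISO/IEC 8859-1 code point, so no explicit loop is needed:</p>
<pre><code>def is_latin1(text: str) -> bool:
    # encode() raises UnicodeEncodeError for any character
    # that has no Latin-1 (ISO/IEC 8859-1) code point
    try:
        text.encode("latin_1")
        return True
    except UnicodeEncodeError:
        return False
</code></pre>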
|
<python><algorithm><iso><edifact>
|
2025-02-07 20:01:52
| 1
| 458
|
Merlin Nestler
|
79,422,048
| 99,089
|
Organizing Python source files for two related projects with some shared code in one git repository
|
<p>I'm writing a system in Python that has two components: a compiler and a runtime. I could organize the files like:</p>
<pre class="lang-none prettyprint-override"><code>project/
compiler/
runtime/
</code></pre>
<p>When a user uses the runtime, the compiler is not needed at all and doesn't even need to be installed.</p>
<p>However, I'd like to share some code between the two components, e.g., a <code>util</code> directory with various <code>.py</code> files for utility code that's common. Where can/should I put <code>util</code>?</p>
<p>If it matters, the runtime code is not something users will use really directly. I intend to eventually have scripts that package up the runtime and user code and builds a container that's deployed.</p>
<p><strong>Note</strong>: I did look at some of the similar questions, but it seems that they're all 10+ years old, and I assume thinking on packaging Python has changed somewhat in the last decade.</p>
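<p>For concreteness, one shape I'm considering (a sketch only; the package names are hypothetical) is a single repository with three installable packages, where the shared code is its own package that the other two depend on:</p>
<pre class="lang-none prettyprint-override"><code>project/
    compiler/
        pyproject.toml      # depends on mysys-util
        src/mysys_compiler/
    runtime/
        pyproject.toml      # depends on mysys-util
        src/mysys_runtime/
    util/
        pyproject.toml      # no dependency on the other two
        src/mysys_util/
</code></pre>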
|
<python><python-3.x><code-organization>
|
2025-02-07 20:00:36
| 0
| 7,171
|
Paul J. Lucas
|
79,422,025
| 1,951,507
|
How to nest generic type definitions?
|
<p>Say I have:</p>
<ul>
<li>class <code>A</code></li>
<li>class <code>B</code> that is parameterized by a type with upper bound <code>A</code></li>
<li>class <code>C</code> that is parameterized by a type with upper bound <code>B</code>.</li>
</ul>
<p>How should I define these classes in Python 3.12 such that static type checkers are most successful/correct?</p>
<pre><code>class A:
    ...

class B[T: A]:
    ...

class C[T: B]:
    ...
</code></pre>
<p>Or should I use <code>C[T1: A, T2: B]</code>?
I understood that <code>C[T: B[T: A]]</code> is not allowed.
Would a separate <code>TypeVar</code> bound to <code>B[A]</code> work?</p>
<p>To make it a little less abstract and show a use case for this:</p>
<ul>
<li><code>A</code> could be a vector type which I could subtype as a 2D or 3D vector.</li>
<li><code>B</code> could be a geometrical shape, with subtypes like line and arc, in 2D, 3D, etc. flavors.</li>
<li><code>C</code> could be a concatenation of shapes, thus built from these 2D/3D lines/arcs.</li>
</ul>
<p>Here is a concrete executable example:</p>
<pre><code>from typing import Self, Type, cast, reveal_type

# Basic structure:
class Vector:
    identity: 'Vector'

class Primitive[V: Vector]:
    def __init__(self, b: V, e: V):
        self.b, self.e = b, e

class Line[V: Vector](Primitive[V]): pass

class Arc[V: Vector](Primitive[V]):
    def __init__(self, b: V, e: V, r: float):
        super().__init__(b, e)
        self.r = r

class PathSegment[V: Vector](Primitive[V]):
    def __init__(self, e: V, previous: Self | None = None):
        vector_type = cast(Type[V], e.__class__)
        b = previous.e if previous else vector_type.identity
        super().__init__(b=b, e=e)

class LineSegment[V: Vector](Line[V], PathSegment[V]): pass
class ArcSegment[V: Vector](Arc[V], PathSegment[V]): pass

class Path[S: PathSegment, V: Vector]:
    def __init__(self, segments: list[S]):
        self.segments = segments

    @property
    def b(self) -> V:
        return self.segments[0].b

# 2D and 3D variants:
class Vector2D(Vector):
    def __init__(self, x: float, y: float):
        self.v = (x, y)

Vector2D.identity = Vector2D(0, 0)

class Vector3D(Vector):
    def __init__(self, x: float, y: float, z: float):
        self.v = (x, y, z)

Vector3D.identity = Vector3D(0, 0, 0)

class Line2D(Line[Vector2D]): pass
class Line3D(Line[Vector3D]): pass
class Arc2D(Arc[Vector2D]): pass
class Arc3D(Arc[Vector3D]): pass
class PathSegment2D(PathSegment[Vector2D]): pass
class PathSegment3D(PathSegment[Vector3D]): pass
class LineSegment2D(LineSegment[Vector2D]): pass
class LineSegment3D(LineSegment[Vector3D]): pass
class ArcSegment2D(ArcSegment[Vector2D]): pass
class ArcSegment3D(ArcSegment[Vector3D]): pass

class Path2D(Path[PathSegment2D, Vector2D], Vector2D): pass
class Path3D(Path[PathSegment3D, Vector3D], Vector3D): pass

p1 = Path2D(segments=[LineSegment2D(e=Vector2D(2, 3))])
# Complaints about this line:
# pycharm: Expected type 'list[PathSegment2D]' (matched generic type 'list[S ≤: PathSegment]'), got 'list[LineSegment2D[Vector2D]]' instead
# pyright: Argument of type "list[LineSegment2D]" cannot be assigned to parameter "segments" of type "list[PathSegment2D]"
#          in function "__init__". "LineSegment2D" is not assignable to "PathSegment2D" (reportArgumentType)

# Mix-up of types not detected.
class WrongPathClass(Path[PathSegment2D, Vector3D], Vector2D): pass
</code></pre>
|
<python><generics><python-typing>
|
2025-02-07 19:53:09
| 0
| 1,052
|
pfp.meijers
|
79,421,833
| 9,848,968
|
Google Drive API: Starred folder update does not reflect in Google Drive
|
<p>I am trying to update the starred value of a Google Drive folder using the Google Drive API but it is not reflecting in Google Drive.</p>
<p>The name is getting updated but the starred value is not getting updated. There is an <a href="https://stackoverflow.com/questions/21761853/starred-file-using-drive-api-doesnt-star-in-google-drive">old case here on Stackoverflow</a> claiming that it takes a very long time, but I have been waiting for more than 15 minutes and nothing changes.</p>
<p>Can someone help me with this?</p>
<p>First I am using the following code to create a folder:</p>
<pre class="lang-py prettyprint-override"><code>from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
from my_creds import GOOGLE_DRIVE_API_CREDS, PARENT_FOLDER_ID
SCOPES = ["https://www.googleapis.com/auth/drive"]
credential = ServiceAccountCredentials.from_json_keyfile_dict(
    GOOGLE_DRIVE_API_CREDS, SCOPES
)
service = build("drive", "v3", credentials=credential, cache_discovery=False)

file_metadata = {
    "name": "Folder name",
    "mimeType": "application/vnd.google-apps.folder",
    "parents": [PARENT_FOLDER_ID],
}
directory = service.files().create(body=file_metadata, fields="id").execute()
</code></pre>
<p>Then the folder looks like this in Google Drive:</p>
<p><a href="https://i.sstatic.net/eA5t8Arv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eA5t8Arv.png" alt="enter image description here" /></a></p>
<p>After that, I am trying to update the folder name and starred value using the following code:</p>
<pre class="lang-py prettyprint-override"><code>update_metadata = {"name": "New folder name", "starred": True}
updated_folder = (
    service.files()
    .update(fileId=directory["id"], body=update_metadata, fields="name, starred")
    .execute()
)
</code></pre>
<p>If I run the following code:</p>
<pre class="lang-py prettyprint-override"><code>updated_file = (
    service.files().get(fileId=directory["id"], fields="name,starred").execute()
)
print(updated_file)
</code></pre>
<p>I get the following output:</p>
<pre><code>{'name': 'New folder name', 'starred': True}
</code></pre>
<p>But the starred value is not getting updated in Google Drive:
<a href="https://i.sstatic.net/QvkJKwnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QvkJKwnZ.png" alt="enter image description here" /></a></p>
|
<python><google-drive-api>
|
2025-02-07 18:25:20
| 0
| 385
|
muw
|
79,421,811
| 4,027,688
|
Writing map types with pyiceberg
|
<p>I'm not sure if this is a bug or I'm just not structuring the data correctly; I couldn't find any examples of writing maps.</p>
<p>Given a table with a simple schema with a map field</p>
<pre class="lang-py prettyprint-override"><code>from pyiceberg.schema import Schema
from pyiceberg.types import StringType, MapType, NestedField
map_type = MapType(key_id=1001, key_type=StringType(), value_id=1002, value_type=StringType())
schema = Schema(NestedField(field_id=1, name='my_map', field_type=map_type))
table = catalog.create_table(..., schema=schema)
</code></pre>
<blockquote>
<pre class="lang-none prettyprint-override"><code>table
map.test(
1: my_map: optional map<string, string>
),
partition by: [],
sort order: [],
snapshot: null
</code></pre>
</blockquote>
<p>I first construct an arrow table with the converted schema</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa

data = {'my_map': [{'symbol': 'BTC'}]}
pa_table = pa.Table.from_pydict(data, schema=schema.as_arrow())
</code></pre>
<blockquote>
<pre class="lang-none prettyprint-override"><code>pyarrow.Table
my_map: map<large_string, large_string>
child 0, entries: struct<key: large_string not null, value: large_string not null> not >null
child 0, key: large_string not null
child 1, value: large_string not null
----
my_map: [[keys:["symbol"]values:["BTC"]]]
</code></pre>
</blockquote>
<p>When writing though, the schema validation complains that I haven't provided <code>key</code> and <code>value</code> fields</p>
<pre class="lang-py prettyprint-override"><code>>>> table.append(pa_table)
ββββββ³ββββββββββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββββββββββββββββββββ
β β Table field β Dataframe field β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β β
β 1: my_map: optional map<string, string> β 1: my_map: optional map<string, string> β
β β β 2: key: required string β Missing β
β β β 3: value: required string β Missing β
ββββββ΄ββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββ
</code></pre>
|
<python><apache-iceberg><pyiceberg>
|
2025-02-07 18:16:50
| 0
| 3,175
|
bphi
|
79,421,790
| 19,467,973
|
How to change the configuration elementFormDefault and attributeFormDefault fields in Python spyne SOAP wsdl service
|
<p>Thank you in advance for your help and for reading my question.</p>
<p>I have a WSDL file for which I want to disable the explicit binding of the transmitted data to the namespace.</p>
<p>The service is written in spyne, which does not have such an explicit feature. It automatically generates XML in which only this one attribute is set:</p>
<pre><code><xs:schema targetNamespace="http://simpleTest/" elementFormDefault="qualified">
</code></pre>
<p>And the result I want to achieve is:</p>
<pre><code><xs:schema attributeFormDefault="unqualified" elementFormDefault="unqualified" targetNamespace="http://simpleTest/">
</code></pre>
<p>Here are the versions I'm working with</p>
<p><code>spyne = 2.14.0</code>
<code>python = 3.11.3</code></p>
<p>This is how I create my service.</p>
<pre class="lang-py prettyprint-override"><code>from logging import getLogger

from spyne import Application
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication

from configs import APP_CONFIG
from routers import SameTestRouter

debug_loggers = getLogger('debug_logger')

soap_app = Application(
    [SameTestRouter],
    name=APP_CONFIG.APP_NAME,
    tns=APP_CONFIG.URL_PREFIX,
    in_protocol=Soap11(validator='lxml'),
    out_protocol=Soap11()
)
wsgi_app = WsgiApplication(soap_app)
</code></pre>
<p>Then there's the test method with its models, which I'm sending the message to. It works if I explicitly specify the namespace, but I need it to work without specifying the namespace explicitly.</p>
<pre class="lang-py prettyprint-override"><code>class TestBlockTwo(ComplexModel):
    __namespace__ = APP_CONFIG.URL_PREFIX
    test_test = Unicode
    test_test = Decimal

class message(ComplexModel):
    __namespace__ = APP_CONFIG.URL_PREFIX
    test_block_two = TestBlockTwo

class SameTestRouter(ServiceBase):
    @rpc(TestBlockOne, _returns=Unicode)
    def process(self, message: TestBlockOne) -> str:
        passage = vars(message.test_block_two)
        debug_loggers.info(f"Received data: {passage}")
</code></pre>
<p>If you have encountered this, please tell me how it can be solved.</p>
|
<python><xml><soap><wsdl><spyne>
|
2025-02-07 18:08:06
| 0
| 301
|
Genry
|
79,421,702
| 3,153,928
|
Working with Python unit tests, trying to patch a class but it is not working
|
<p>I have code with the below functionality.</p>
<p>src/client.py</p>
<pre><code>class Client1(object):
    def foo(self, t):
        return f"foo-{t}"

    def get_foo(self, type):
        return self.foo(type)
</code></pre>
<p>src/get_foos.py</p>
<pre><code>from .client import Client1

client = Client1()

def get_foo():
    foo = client.get_foo("foo")
    return foo

def get_bar():
    bar = client.get_foo("bar")
    return bar
</code></pre>
<p>I am trying to create a unit test for get_foo() and get_bar().</p>
<p>I do following</p>
<p>src/test/test_get_foos.py</p>
<pre><code>from unittest.mock import patch

from src.get_foos import get_foo

@patch("src.get_foos.Client1")
def test_get_foo(mock_client):
    mock_client.get_foo.return_value = 'myfoo'
    result = get_foo()
    assert result == 'myfoo'
</code></pre>
<p>But I observe that the patch does not work and the test fails. I checked all the resources online, and everywhere I see the same pattern for mocking a class that is instantiated outside a function.</p>
<p>I have more than 6 methods in my actual code and I don't want to make a new instance of Client1 everywhere.</p>
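<p>For reference, the closest I've come is a sketch that patches the module-level <em>instance</em> rather than the class (hedged: the class is instantiated at import time, so patching the class afterwards would not affect the existing <code>client</code> object):</p>
<pre><code>from unittest.mock import patch

from src.get_foos import get_foo

@patch("src.get_foos.client")  # patch the already-created instance
def test_get_foo(mock_client):
    mock_client.get_foo.return_value = "myfoo"
    assert get_foo() == "myfoo"
</code></pre>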
|
<python><mocking><python-unittest.mock>
|
2025-02-07 17:28:45
| 1
| 1,423
|
uday8486
|
79,421,661
| 29,295,031
|
How to customize the textposition of a bubblechart label
|
<p>I managed to set a fixed label for each bubble in my chart. Here's the code:</p>
<pre><code>import plotly.graph_objects as go
import plotly.express as px
import pandas as pd
margin_factor = 1.6

data = {'x': [1.5, 1.6, -1.2],
        'y': [21, -16, 46],
        'circle-size': [10, 5, 6],
        'circle-color': ["red", "red", "green"],
        'tttt': ["the last xt for MO", "the last xt for MO pom", "the last xt for MO %"]
        }

# Create DataFrame
df = pd.DataFrame(data)

fig = px.scatter(
    df,
    x="x",
    y="y",
    color="circle-color",
    size='circle-size'
)

fig.update_layout(
    {
        'xaxis': {
            "range": [-100, 100],
            'zerolinewidth': 3,
            "zerolinecolor": "blue",
            "tick0": -100,
            "dtick": 25,
            'scaleanchor': 'y'
        },
        'yaxis': {
            "range": [-100, 100],
            'zerolinewidth': 3,
            "zerolinecolor": "green",
            "tick0": -100,
            "dtick": 25
        },
        "width": 500,
        "height": 500
    }
)

x_pad = (max(df.x) - min(df.x)) / 8
y_pad = (max(df.y) - min(df.y)) / 30

for x0, y0 in zip(data['x'], data['y']):
    fig.add_shape(type="rect",
                  x0=x0 + x_pad / 5,
                  y0=y0 - y_pad,
                  x1=x0 + x_pad,
                  y1=y0 + y_pad,
                  xref='x', yref='y',
                  line=dict(
                      color="black",
                      width=2,
                  ),
                  fillcolor="#1CBE4F",
                  layer="below"
                  )

fig.add_trace(
    go.Scatter(
        x=df["x"].values + x_pad / 2,
        y=df["y"],
        text=df["tttt"],
        mode="text",
        showlegend=False
    )
)

fig.show()
</code></pre>
<p>The result:
<a href="https://i.sstatic.net/3K3wZHNl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3K3wZHNl.png" alt="enter image description here" /></a></p>
<p>Now what I'm trying to do is apply some CSS-like styling to these labels, something like this:</p>
<p><a href="https://i.sstatic.net/InHsANWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/InHsANWk.png" alt="enter image description here" /></a></p>
<p>I know there is something called an annotation; I tried it, but I did not manage to create annotations for each element.</p>
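<p>For completeness, this is a sketch of the per-row annotation approach I attempted (untested against the exact layout above; the offset reuses <code>x_pad</code> from the code):</p>
<pre><code>for _, row in df.iterrows():
    fig.add_annotation(
        x=row["x"] + x_pad / 2, y=row["y"],
        text=row["tttt"],
        showarrow=False,
        bordercolor="black", borderwidth=2,
        bgcolor="#1CBE4F",
    )
</code></pre>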
|
<python><pandas><plotly>
|
2025-02-07 17:07:47
| 1
| 401
|
user29295031
|
79,421,610
| 8,329,213
|
How to find the most common frequency in a time series
|
<p>I have a time series shown below. We can clearly see that there is a <code>cyclical behavior</code> in the data, where a rise and fall happens roughly every 12 months, irrespective of whether the trend is rising or falling. I am struggling to find out how to extract this <code>frequency</code> of 12 from the data using the <a href="https://docs.scipy.org/doc/scipy/tutorial/fft.html" rel="nofollow noreferrer"><code>scipy.fft</code></a> Python library. Mostly the repetition happens around every 12 months, but it can be around 11 or 13 months as well; 12 is just the most common period. I am using the Fourier transform, but can't figure out how to extract this most common frequency (12) using some Python library.</p>
<p>Thanks</p>
<p><a href="https://i.sstatic.net/8M13WdvT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M13WdvT.png" alt="enter image description here" /></a></p>
<p>Here is the data:</p>
<pre><code>11130.578853063385,
6509.723592808,
5693.928982796129,
3415.099921325464,
-9299.291673374173,
-3388.284173885658,
-5577.9316298032745,
-3509.583232678111,
2285.99065857961,
3844.3061166856014,
-7383.526882168155,
-4622.792125020905,
2813.586128745183,
-1501.9405075290495,
8911.954971658348,
7800.444458471794,
-1190.7952377053866,
4768.791467877524,
2173.593871988719,
-2420.04786197912,
-2304.842777539399,
-3562.1788525161837,
-8766.208378658988,
-7655.936603573945,
-5890.9298543619125,
-9628.764012284291,
12124.740839506767,
12391.257220312522,
7512.253051850619,
12921.032383220418,
10063.270097922614,
-1350.2599176773563,
-6887.434936788105,
-11116.26528794868,
-10196.871058658888,
-10874.172006461778,
-15014.190086779208,
-17837.744786550902,
15235.434771455053,
17183.25815161994,
16835.95193044929,
21063.986176551374,
17987.99577807288,
-270.6290142721815,
-11239.957979217992,
-18724.854251002133,
-11752.820614075652,
-14332.597031388648,
-24609.22398631297,
-26155.98046224267,
18192.356438131075,
22165.14150786262,
26758.419290443217,
29284.65841543005,
25762.928479462924,
865.3393849464444,
-15121.264730579132,
-26306.45361387036,
-13494.286360139175,
-18089.58324839494,
-34738.184049794625,
-34718.87495981627,
21145.112760626133,
27322.030709198487,
37252.78168890166,
37846.98231395838,
33206.62103950547,
2092.870600583023,
-18537.521405900694,
-33955.48182565038,
-15445.551953312479,
-22284.152196532646,
-45880.94206326016,
-44229.92788481257,
24988.038646046363,
32958.71017047145,
49117.93320642349,
47304.760779654374,
40776.01828187993,
3403.573579093782,
-22402.79273128273,
-42361.96378730598,
-17190.060741565456,
-27378.2527904574,
-59155.49212555031,
-60122.10588005664,
26272.133100405994,
44887.192435244986,
69002.74742137044,
59037.928523261784,
42122.51604012313,
6075.663868325184,
-20631.710295791454,
-48088.66531781877,
-23396.29341809641,
-40847.479839729145,
-68317.87342502769,
-73679.4424942532,
28302.69374713241,
57321.16868946109,
83820.10748232565,
68399.66173487401,
44989.374076533895,
8830.088516704302,
-18149.500187183363,
-52028.5021898363,
-31013.963236266634,
-53956.5249205745,
-77250.59604604884,
-86642.45203443282,
30541.62328593645,
69812.47143595785,
98233.7834300242,
77385.915451272,
48189.69475295938,
11504.22579592029,
-15251.799652343976,
-55879.292898282,
-38956.992207762654,
-67210.9936142441,
-86636.69916492153,
-99845.12467446178,
32751.253099701484,
82656.01928819218,
113259.2399845611,
86532.20966362985,
51019.20889397171,
14289.09297146163,
-11777.371574335935,
-59627.30976102835,
-47170.18721199697,
-81027.36407627042,
-96178.09587995053,
-113526.93736260894,
34817.23859755824,
95927.57143777516,
128782.84687524068,
95920.65382048927,
53226.62965224956,
17272.000877533148,
-7716.869736424804,
-63110.06727848651,
-55696.68126167806,
-95538.60898488457,
-105325.08525283691,
-127600.17956244369,
36734.97589442811,
109601.51109750797,
144205.71977383518,
105517.48123365057,
54793.814706888734,
20380.77940730315,
-3119.1108027357986,
-66153.73274186133,
-64702.85998743505,
-110650.72884973585
</code></pre>
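<p>For reference, a minimal sketch of peak-picking on the FFT magnitude (assuming the numbers above are loaded into a Python list named <code>values</code>, sampled monthly):</p>
<pre><code>import numpy as np
from scipy.fft import rfft, rfftfreq

y = np.array(values) - np.mean(values)  # remove the mean so bin 0 does not dominate
spectrum = np.abs(rfft(y))
freqs = rfftfreq(len(y), d=1)           # d=1 -> frequency in cycles per month
peak = np.argmax(spectrum[1:]) + 1      # skip the zero-frequency bin
print(f"dominant period: {1 / freqs[peak]:.1f} months")
</code></pre>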
|
<python><fft><continuous-fourier>
|
2025-02-07 16:44:58
| 1
| 7,707
|
cph_sto
|
79,421,531
| 6,029,488
|
Python Pandas: Groupby multiple columns and linearly interpolate values of column Y based on another X column
|
<p>Consider the following pandas dataframe</p>
<pre><code> reference sicovam label id date TTM price
0 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 18 52.69
1 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 30 NaN
2 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 49 53.11
3 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 60 NaN
4 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 77 53.69
5 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 90 NaN
6 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 109 54.42
7 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 137 55.15
8 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 171 55.80
9 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 180 NaN
10 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 15 50.04
11 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 30 NaN
12 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 46 50.52
13 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 60 NaN
14 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 74 51.17
15 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 90 NaN
16 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 106 51.95
17 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 134 52.73
18 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 168 53.46
19 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-05 180 NaN
</code></pre>
<p>After grouping by the <code>reference</code>, <code>sicovam</code>, <code>label</code>, <code>id</code> and <code>date</code> columns, I would like to fill the <code>NaN</code> values of the <code>price</code> column via linear interpolation over the <code>TTM</code> value i.e., in the context of the linear interpolation formula, <code>price</code> is the <code>y</code> and <code>TTM</code> is the <code>x</code> variable.</p>
<p>So far, I built the following lines.</p>
<pre><code>def interpolate_group(group):
    group["price"] = group["price"].interpolate(method='linear', limit_direction='both', axis=0)
    return group

new_df = df.groupby(["reference","sicovam","label","id","date"])[["TTM","price"]].apply(interpolate_group)
</code></pre>
<p>Nevertheless, the result that I get is a linear interpolation over the index numbers per group. For example, for the following part of the dataset, I get <code>54.06</code> instead of <code>53.99</code>. What do I still need in order to interpolate over the TTM variable?</p>
<p>PS: I want to avoid masking via a loop (instead of grouping) and setting <code>TTM</code> as the index, because the dataframe is quite big and such a scenario takes a considerable amount of time.</p>
<pre><code>4 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 77 53.69
5 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 90 NaN
6 SCOM_WTI 68801903 WTI Nymex BBG:CL 2015-01-02 109 54.42
</code></pre>
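<p>For context, a sketch (unverified) that keeps the groupby-apply shape but interpolates over <code>TTM</code> by making it the index only inside the group function:</p>
<pre><code>def interpolate_group(group):
    # method="index" interpolates on the index values, here the TTM column
    s = group.set_index("TTM")["price"].interpolate(method="index", limit_direction="both")
    group["price"] = s.to_numpy()
    return group
</code></pre>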
|
<python><pandas><group-by><interpolation>
|
2025-02-07 16:15:56
| 1
| 479
|
Whitebeard13
|
79,421,434
| 4,317,594
|
Trying to download YouTube video using yt_dlp I get error: Error: '__files_to_merge'
|
<p>I want to download a YouTube video. The download works, but the audio is not merged with the video (the audio is not audible). How can I fix that using yt-dlp?</p>
<p><code>Error: '__files_to_merge'</code></p>
<pre><code>import os
import yt_dlp

def download_video(video_url, save_path):
    if not save_path.endswith('.mp4'):
        save_path += '.mp4'
    try:
        ydl_opts = {
            'outtmpl': save_path,
            'format': 'bestvideo+bestaudio/best',
            'merge_output_format': 'mp4',
            'postprocessors': [
                {'key': 'FFmpegMerger'},
            ],
        }
        with yt_dlp.YoutubeDL(ydl_opts) as ydl:
            ydl.download([video_url])
        print(f"Downloaded: {save_path}")
    except Exception as e:
        print(f"Failed to download: {video_url}\nError: {e}")

def main():
    # List of video URLs
    video_urls = [
        'https://youtu.be/u_CQggTmSO8',  # Example URL
        'https://youtu.be/vCzMfZtrdjk',
        # Add more video URLs here
    ]
    # Directory to save videos
    save_dir = 'Udacity_GenAI_Videos'
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    for i, video_url in enumerate(video_urls):
        save_path = os.path.join(save_dir, f'video_{i+1}.mp4')
        download_video(video_url, save_path)

if __name__ == "__main__":
    main()
</code></pre>
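<p>For reference, a sketch of the options without the explicit postprocessor (an assumption about the cause: <code>merge_output_format</code> alone already triggers the ffmpeg merge, so listing <code>FFmpegMerger</code> manually may be what produces the error):</p>
<pre><code>ydl_opts = {
    'outtmpl': save_path,
    'format': 'bestvideo+bestaudio/best',
    'merge_output_format': 'mp4',  # requires ffmpeg on PATH
}
</code></pre>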
|
<python><yt-dlp>
|
2025-02-07 15:41:47
| 1
| 3,019
|
Kaleab Woldemariam
|
79,421,422
| 11,613,489
|
Failed to Build Installable Wheels for OpenCV-Python on macOS
|
<p>I'm trying to install the <code>opencv-python</code> library on macOS using <code>pip</code>:</p>
<pre><code> pip install opencv-python
</code></pre>
<p>I keep encountering this error:</p>
<blockquote>
<p>ERROR: Failed to build installable wheels for some pyproject.toml
based projects (opencv-python)</p>
</blockquote>
<p>I have tried to solve it. First, I upgraded the packaging tools:</p>
<pre><code>pip install --upgrade pip setuptools wheel
</code></pre>
<p>Then I installed dependencies via Homebrew:</p>
<pre><code>brew install cmake pkg-config libjpeg libpng libtiff openexr eigen tbb ffmpeg
</code></pre>
<p>Attempted to manually build OpenCV from source using CMake (installed via Homebrew).</p>
<p>My Setup:</p>
<pre><code>macOS Version: [macOS Big Sur 11.4]
Python Version: [Python 3.10.9]
pip Version: [pip 25.0 from /Users/demo/anaconda3/lib/python3.10/site-packages/pip (python 3.10)]
</code></pre>
<p>Question:
What am I missing? How can I resolve this issue and successfully install OpenCV in Python on macOS?</p>
|
<python><macos><opencv>
|
2025-02-07 15:35:57
| 1
| 642
|
Lorenzo Castagno
|
79,421,406
| 16,891,669
|
Are there any built-in frozen modules in Python?
|
<p>I was going through the Python <a href="https://docs.python.org/3/reference/import.html" rel="nofollow noreferrer">import process</a> and found out about frozen modules. The only thing I understood after searching is that frozen modules are files that can be directly executed without Python installed on the system.</p>
<p>I wanted to know one thing. Like <code>sys.modules</code> and <code>sys.builtin_module_names</code>, are any frozen modules also present in Python that are loaded automatically when Python runs? If yes, can we also access a list of them somehow? My main goal is to know any names that I should avoid giving to my files.</p>
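<p>For illustration, this is a sketch of how such lists can be inspected (hedged: on Python 3.10+, <code>sys.stdlib_module_names</code> covers the standard library as a whole, which may serve as a "names to avoid" list):</p>
<pre><code>import sys

print(sys.builtin_module_names)  # modules compiled into the interpreter
print(sys.stdlib_module_names)   # all stdlib module names, Python 3.10+
</code></pre>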
|
<python>
|
2025-02-07 15:30:42
| 1
| 597
|
Dhruv
|
79,421,373
| 16,891,669
|
Understanding import in Python (initialization of sys.modules)
|
<p>I recently came to know that modules like <code>os</code> are <a href="https://stackoverflow.com/questions/79420610/undertanding-python-import-process-importing-custom-os-module">imported way before</a> a user's Python program starts running, and therefore we cannot import a custom file named <code>os</code> even though it's not in <code>sys.builtin_module_names</code>. So, I looked into <code>sys.modules</code> right when the Python program starts and found <code>os</code> and other modules there.</p>
<pre class="lang-py prettyprint-override"><code>Python 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> 'os' in sys.modules.keys()
True
>>> 're' in sys.modules.keys()
True
</code></pre>
<p>So, I had a few more doubts:</p>
<ol>
<li>Can it be said that because <code>os</code> module is present in the <code>sys.modules</code> at the <strong>start</strong> of the python program, a file with the same name cannot be imported?</li>
<li>I removed the entry for <code>os</code> in sys.modules using <code>del sys.modules['os']</code> and then tried to import a file named <code>os.py</code> in my python code and still wasn't able to do so. Why?
<pre class="lang-py prettyprint-override"><code>#os.py
def fun():
    print("Custom os called!")
</code></pre>
<pre class="lang-py prettyprint-override"><code>#test.py
import sys
del sys.modules['os']
import os
print("source file of the imported os -", os.__file__)
print(os.fun())
</code></pre>
Output
<pre class="lang-py prettyprint-override"><code>dhruv@MacBook-Air-2 test % python3 test.py
source file of the imported os - /opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/os.py
Traceback (most recent call last):
File "/Users/dhruv/Documents/test/test.py", line 12, in <module>
print(os.fun())
^^^^^^
AttributeError: module 'os' has no attribute 'fun'
</code></pre>
</li>
</ol>
<p>Could someone clearly explain the import process? I've already gone through <a href="https://docs.python.org/3/tutorial/modules.html#the-module-search-path" rel="nofollow noreferrer">this</a> and <a href="https://docs.python.org/3/reference/import.html" rel="nofollow noreferrer">this</a> and have built the following steps for the Python import process.</p>
<ol>
<li>Look through <code>sys.modules</code></li>
<li>Look through <code>sys.builtin_module_names</code></li>
<li>Find if there is any frozen module with the name</li>
<li>Go through <code>sys.path</code></li>
</ol>
<p>Is there anything else that is missing?</p>
|
<python><python-import>
|
2025-02-07 15:21:53
| 1
| 597
|
Dhruv
|
79,421,358
| 2,532,408
|
chromedriver in headless mode incorrectly sets opacity to 0 for MUI component using fade
|
<p>I'm seeing an issue in Selenium using chromedriver in headless mode with a MUI component that uses fade-in, where the transition doesn't seem to trigger.</p>
<p>If you run the following script against the <a href="https://react-w7zntifk-wnhs8qae.stackblitz.io/" rel="nofollow noreferrer">sample</a> url, the script will timeout waiting for the component to become visible.</p>
<p>Run without headless and the component becomes visible (which makes this tricky to troubleshoot).</p>
<p>Even when running in headless mode, if you call <code>driver.save_screenshot</code> or <code>driver.get_screenshot_as_png</code> prior to waiting for the visibility, it becomes visible!</p>
<p>Run against the Edge browser (which is Chromium-based) and the element becomes visible (i.e. it works).</p>
<ol>
<li>is anyone else seeing this behavior?</li>
<li>Anyone have insight into what might be happening here? Specifically why a screenshot would allow the fade in trigger to work?</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from time import sleep
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
opts = (
    "--disable-extensions",
    "--disable-single-click-autofill",
    "--disable-autofill-keyboard-accessory-view[8]",
    "--disable-full-form-autofill-ios",
    "--disable-infobars",
    # chromedriver crashes without these two in linux
    "--no-sandbox",
    "--disable-dev-shm-usage",
)
exp_prefs = {"autofill.profile_enabled": False}
options = webdriver.ChromeOptions()
for opt in opts:
    options.add_argument(opt)
options.add_experimental_option("prefs", exp_prefs)
#comment this out to run without headless to witness the script finish properly.
options.add_argument("--headless")
driver = webdriver.Chrome(service=ChromeService(), options=options)
driver.set_window_position(0, 0)
driver.set_window_size(1600, 1080)
URL = "https://react-w7zntifk-wnhs8qae.stackblitz.io/"
TEST_ID = (By.ID, "test-id")
RUN_BUTTON = (By.XPATH, "//button[contains(string(), 'Run this project')]")
driver.get(URL)
wait = WebDriverWait(driver, 5)
# get past the stackblitz initialization
button = wait.until(EC.element_to_be_clickable(RUN_BUTTON))
button.click()
# once we find the element in the DOM (which has a 2 second fade in timer) wait a moment
# before printing the opacity value, which SHOULD be a float value not zero
# but isn't in this bug.
elem = wait.until(EC.presence_of_element_located(TEST_ID), "")
sleep(0.5)
print(f"{TEST_ID} is present")
print(f'opacity: {elem.value_of_css_property("opacity")}')
print(f"is_displayed: {elem.is_displayed()}")
elem2 = wait.until(
    EC.visibility_of_element_located(TEST_ID), f"{TEST_ID} was not visible)"
)
</code></pre>
<p>react playground:
<a href="https://stackblitz.com/edit/react-w7zntifk-wnhs8qae?file=Demo.tsx" rel="nofollow noreferrer">https://stackblitz.com/edit/react-w7zntifk-wnhs8qae?file=Demo.tsx</a></p>
<p>example component:
<a href="https://react-w7zntifk-wnhs8qae.stackblitz.io/" rel="nofollow noreferrer">https://react-w7zntifk-wnhs8qae.stackblitz.io/</a></p>
<hr />
<p>edit:</p>
<p>Shortly after creating this post, Edge updated and began matching the behavior in Chrome. So this clearly is a Chromium issue in conjunction with MUI. (I still don't know who's at fault here.)</p>
<p>I figured out that <code>is_displayed</code> remains false for the element long after these specific MUI components should have rendered while in headless mode.</p>
<p>I was able to find a workaround; <code>--disable-renderer-backgrounding</code> seems to allow MUI to work properly while in headless mode.</p>
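<p>For reference, the workaround is just one extra argument on the options object from the script above:</p>
<pre class="lang-py prettyprint-override"><code>options.add_argument("--disable-renderer-backgrounding")
</code></pre>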
<p>I created a bug with the Chromium group which outlines what the basic issue is, in the hope they can help identify <em>why</em> it's happening:
<a href="https://issues.chromium.org/issues/394907241" rel="nofollow noreferrer">https://issues.chromium.org/issues/394907241</a></p>
|
<python><material-ui><selenium-chromedriver>
|
2025-02-07 15:17:25
| 1
| 4,628
|
Marcel Wilson
|
79,421,290
| 5,265,038
|
Reassigning Values of Multiple Columns to Values of Multiple Other Columns
|
<p>For the following <code>df</code> I wish to change the values in columns <code>A</code>,<code>B</code> and <code>C</code> to those in columns <code>X</code>,<code>Y</code> and <code>Z</code>, taking into account a boolean selection on column <code>B</code>.</p>
<pre><code>import pandas as pd

columns = {"A": [1, 2, 3],
           "B": [4, pd.NA, 6],
           "C": [7, 8, 9],
           "X": [10, 20, 30],
           "Y": [40, 50, 60],
           "Z": [70, 80, 90]}
df = pd.DataFrame(columns)
df
A B C X Y Z
0 1 4 7 10 40 70
1 2 <NA> 8 20 50 80
2 3 6 9 30 60 90
</code></pre>
<p>However when I try to do the value reassignment I end up with NULLS.</p>
<pre><code>df.loc[~(df["B"].isna()), ["A","B","C"]] = df.loc[~(df["B"].isna()), ["X","Y","Z"]]
df
A B C X Y Z
0 NaN NaN NaN 10 40 70
1 2.0 <NA> 8.0 20 50 80
2 NaN NaN NaN 30 60 90
</code></pre>
<p>My desired result is:</p>
<pre><code> A B C X Y Z
0 10 40 70 10 40 70
1 2 <NA> 8 20 50 80
2 30 60 90 30 60 90
</code></pre>
<p>If I do the reassignment on a single column then I get my expected result:</p>
<pre><code>df.loc[~(df["B"].isna()), "A"] = df.loc[~(df["B"].isna()), "X"]
df
A B C X Y Z
0 10 4 7 10 40 70
1 2 <NA> 8 20 50 80
2 30 6 9 30 60 90
</code></pre>
<p>However, I expected that I should be able to do this on multiple columns at once. Any ideas what I am missing here?</p>
<p>Thanks</p>
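<p>For context, a sketch of one variant I have seen suggested (hedged: it bypasses pandas' column-label alignment by assigning the raw values):</p>
<pre><code>mask = df["B"].notna()
df.loc[mask, ["A", "B", "C"]] = df.loc[mask, ["X", "Y", "Z"]].to_numpy()
</code></pre>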
|
<python><pandas><dataframe>
|
2025-02-07 14:54:04
| 2
| 605
|
mmTmmR
|
79,421,254
| 13,674,431
|
Python Error while executing "from selenium import webdriver"
|
<p>I used to execute my code normally without any problems.
But today when I executed</p>
<pre><code>import selenium
from selenium.webdriver.common.by import By
</code></pre>
<p>it's okay, but when I run this line, it's suddenly showing an error</p>
<pre><code>from selenium import webdriver
</code></pre>
<p><strong>_TYPE_REDUCE_RESULT = tuple[typing.Callable[..., object], tuple[object, ...]]</strong></p>
<p><strong>TypeError: 'type' object is not subscriptable</strong></p>
<p>I tried to uninstall and reinstall <code>selenium</code>, but I got the same error.</p>
<p>Any help, please?</p>
|
<python><selenium-webdriver><selenium-chromedriver><spyder>
|
2025-02-07 14:43:28
| 0
| 315
|
Ruser-lab9
|
79,421,201
| 4,767,670
|
Select the largest available data type in Numpy
|
<p>I'm working on numerical algorithms and I have a Numpy backend for fast, fixed-precision calculations.</p>
<p>During the initialization of the interface I let the user choose among the complex data types <code>np.complex128</code>, <code>np.complex192</code>, <code>np.complex256</code>, <code>np.complex512</code>.</p>
<p>I know that Numpy defines these types as dummy types if the underlying platform doesn't support them. I would like to check at runtime whether these types are supported, and take the largest available.</p>
<p>How do I check for 'dumminess'? My idea was to try to instantiate a NumPy array with the dtype and see whether it raises an error, but that's not very clean to do... I was wondering if there is a better way to do it.</p>
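<p>For illustration, a sketch of the probing idea expressed with <code>getattr</code> (an assumption to be verified: on unsupported platforms the attribute may simply be absent, so this falls back to the widest name that exists):</p>
<pre><code>import numpy as np

for name in ("complex512", "complex256", "complex192", "complex128"):
    ctype = getattr(np, name, None)
    if ctype is not None:
        break
print(ctype)  # widest complex type this NumPy build exposes
</code></pre>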
|
<python><numpy>
|
2025-02-07 14:25:38
| 0
| 1,807
|
NYG
|
79,420,818
| 1,358,829
|
Clearing tf.data.Dataset from GPU memory
|
<p>I'm running into an issue when implementing a training loop that uses a <code>tf.data.Dataset</code> as input to a Keras model. My dataset has an element spec of the following format:</p>
<pre class="lang-py prettyprint-override"><code>({'data': TensorSpec(shape=(15000, 1), dtype=tf.float32), 'index': TensorSpec(shape=(2,), dtype=tf.int64)}, TensorSpec(shape=(1,), dtype=tf.int32))
</code></pre>
<p>So, basically, each sample is structured as tuple <code>(x, y)</code>, in which <code>x</code> has the structure of a dict containing two tensors, one of data with shape <code>(15000, 1)</code>, and the other an index of shape <code>(2,)</code> (the index is not used during training), and <code>y</code> is a single label.</p>
<p>The <code>tf.data.Dataset</code> is created using <code>dataset = tf.data.Dataset.from_tensor_slices((X, y))</code>, where <code>X</code> is a dict of two keys:</p>
<ul>
<li><code>data</code>: an np array of shape <code>(200k, 15000, 1)</code></li>
<li><code>index</code>: an np array of shape <code>(200k, 2)</code></li>
</ul>
<p>and <code>y</code> is a single array of shape <code>(200k, 1)</code></p>
<p>My dataset has about 200k training samples (after running undersampling) and 200k validation samples.</p>
<p>Right after calling <code>tf.data.Dataset.from_tensor_slices</code> I noticed a spike in GPU memory usage, with about 16GB being occupied after creating the training <code>tf.Dataset</code>, and 16GB more after creating the validation <code>tf.Dataset</code>.</p>
<p>After creating of the <code>tf.Dataset</code>, I run a few operations (e.g. shuffle, batching, and prefetching), and call <code>model.fit</code>. My model has about 500k trainable parameters.</p>
<p>The issue I'm running into is <em>after</em> fitting the model. I need to run inference on some additional data, so I create a new <code>tf.Dataset</code> with this data, again using <code>tf.Dataset.from_tensor_slices</code>. However, I noticed the training and validation <code>tf.Dataset</code> still reside in GPU memory, which causes my script to break with an out of memory problem for the new <code>tf.Dataset</code> I want to run inference on.</p>
<p>I tried calling <code>del</code> on the two <code>tf.Dataset</code>, and subsequently calling <code>gc.collect()</code>, but I believe that will only clear RAM, not GPU memory. Also, I tried disabling some operations I apply, such as <code>prefetch</code>, and also playing with the batch size, but none of that worked. I also tried calling <code>keras.backend.clear_session()</code>, but it also did not work to clear GPU memory. I also tried importing <code>cuda</code> from <code>numba</code>, but due to my install I cannot use it to clear memory. Is there any way for me to clear the <code>tf.data.Dataset</code> from GPU memory?</p>
<p>Minimum reproducible example below</p>
<p>Setup</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from itertools import product
# Setting tensorflow memory growth for GPU
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
</code></pre>
<p>Create dummy data with similar size as my actual data (types are the same as the actual data):</p>
<pre><code>train_index = np.array(list(product(np.arange(1000), np.arange(200)))).astype(np.int32)
train_data = np.random.rand(200000, 15000).astype(np.float32)
train_y = np.random.randint(0, 2, size=(200000, 1)).astype(np.int32)
val_index = np.array(list(product(np.arange(1000), np.arange(200)))).astype(np.int32)
val_data = np.random.rand(200000, 15000).astype(np.float32)
val_y = np.random.randint(0, 2, size=(200000, 1)).astype(np.int32)
</code></pre>
<p>This is the nvidia-smi output at this point:
<a href="https://i.sstatic.net/pVKioJfg.png" rel="noreferrer"><img src="https://i.sstatic.net/pVKioJfg.png" alt="nvidia-smi before calling the first tf.data.Dataset" /></a></p>
<p>Creating the training <code>tf.data.Dataset</code>, with as batch size of 256</p>
<pre class="lang-py prettyprint-override"><code>train_X = {'data': train_data, 'index':train_index}
train_dataset = tf.data.Dataset.from_tensor_slices((train_X, train_y))
train_dataset = train_dataset.batch(256)
</code></pre>
<p>This is the nvidia-smi output after the <code>tf.data.Dataset</code> creation:
<a href="https://i.sstatic.net/IYzxOcFW.png" rel="noreferrer"><img src="https://i.sstatic.net/IYzxOcFW.png" alt="nvidia-smi after calling the first tf.data.Dataset" /></a></p>
<p>Creating the validation <code>tf.data.Dataset</code>, with as batch size of 256</p>
<pre class="lang-py prettyprint-override"><code>val_X = {'data': val_data, 'index':val_index}
val_dataset = tf.data.Dataset.from_tensor_slices((val_X, val_y))
val_dataset = val_dataset.batch(256)
</code></pre>
<p>This is the nvidia-smi output after the second <code>tf.data.Dataset</code> creation:
<a href="https://i.sstatic.net/19oNOJx3.png" rel="noreferrer"><img src="https://i.sstatic.net/19oNOJx3.png" alt="nvidia-smi after calling the second tf.data.Dataset" /></a></p>
<p>So GPU usage grows when creating each <code>tf.data.Dataset</code>. Since after running <code>model.fit</code> I need to create a new <code>tf.data.Dataset</code> of similar size, I end up running out of memory. Is there any way to clear this data from GPU memory?</p>
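<p>For reference, a sketch of one mitigation I have seen suggested (an untested assumption: pinning the dataset's source tensors to host memory so they are not materialized on the GPU in the first place):</p>
<pre class="lang-py prettyprint-override"><code>with tf.device("/CPU:0"):
    train_dataset = tf.data.Dataset.from_tensor_slices((train_X, train_y)).batch(256)
</code></pre>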
|
<python><tensorflow><machine-learning><keras>
|
2025-02-07 12:02:12
| 1
| 1,232
|
Alb
|
79,420,684
| 6,467,567
|
Inconsistent conversion from Euler to quaternions and back
|
<p>I am trying to debug my code but I'm confused why converting from Euler angles to quaternions and back is not giving me consistent results. How can I resolve this?</p>
<pre><code>import sys

import numpy as np
from scipy.spatial.transform import Rotation as R
# Define test rotation in degrees
testrotation = np.zeros([2, 2, 3])
testrotation[0, 0] = [45, 30, 60]
testrotation[0, 1] = [10, 20, 30]
testrotation[1, 0] = [90, 0, 45]
testrotation[1, 1] = [90, 45, 30]
#################### Euler -> Quaternions using SciPy
scipy_quaternions_temp = R.from_euler("ZYX", np.radians(testrotation.reshape(-1, 3)))
scipy_quaternions = np.roll(scipy_quaternions_temp.as_quat(), shift=1, axis=-1).reshape(testrotation.shape[0], testrotation.shape[1], 4)
print("Quaternions (wxyz format):\n", scipy_quaternions, "\n")
#################### Quaternions -> Euler using SciPy
scipy_euler_angles = np.rad2deg(scipy_quaternions_temp.as_euler('zyx', degrees=False)).reshape(testrotation.shape[0], testrotation.shape[1], 3)
print("Euler angles after conversion and normalization:\n", scipy_euler_angles, "\n")
#################### Euler -> Quaternions (back conversion) using SciPy
scipy_quat_reconverted = R.from_euler('zyx', np.radians(scipy_euler_angles.reshape(-1, 3))).as_quat().reshape(testrotation.shape[0], testrotation.shape[1], 4)
scipy_quat_reconverted = np.roll(scipy_quat_reconverted, shift=1, axis=-1)
print("Reconverted quaternions (wxyz format):\n", scipy_quat_reconverted, "\n")
sys.exit()
</code></pre>
<p>You can see that the Euler angles I get back are not the same.</p>
<pre><code>Quaternions (wxyz format):
[[[ 0.82236317 0.36042341 0.39190384 0.20056212]
[ 0.95154852 0.23929834 0.18930786 0.03813458]]
[[ 0.65328148 0.27059805 0.27059805 0.65328148]
[ 0.70105738 -0.09229596 0.43045933 0.56098553]]]
Euler angles after conversion and normalization:
[[[ 4.42303689e+00 5.21060674e+01 4.51703838e+01]
[-1.11605468e+00 2.22421809e+01 2.84517753e+01]]
[[ 9.00000000e+01 4.50000000e+01 1.27222187e-14]
[ 9.00000000e+01 3.00000000e+01 -4.50000000e+01]]]
Reconverted quaternions (wxyz format):
[[[ 0.82236317 0.36042341 0.39190384 0.20056212]
[ 0.95154852 0.23929834 0.18930786 0.03813458]]
[[ 0.65328148 0.27059805 0.27059805 0.65328148]
[ 0.70105738 -0.09229596 0.43045933 0.56098553]]]
</code></pre>
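<p>For comparison, a sketch that keeps a single convention in both directions (note that SciPy treats uppercase axis sequences as intrinsic and lowercase as extrinsic rotations, so <code>"ZYX"</code> and <code>"zyx"</code> are different conventions):</p>
<pre><code>angles = R.from_euler("ZYX", np.radians([[45, 30, 60]])).as_euler("ZYX", degrees=True)
print(angles)  # ~[[45. 30. 60.]]
</code></pre>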
|
<python><scipy>
|
2025-02-07 11:17:34
| 1
| 2,438
|
Kong
|
79,420,610
| 16,891,669
|
Understanding the Python import process (importing a custom os module)
|
<p>I was reading through the <a href="https://docs.python.org/3/tutorial/modules.html#the-module-search-path" rel="nofollow noreferrer">Python docs</a> for how imports are resolved and found this:</p>
<blockquote>
<p>... the interpreter first searches
for a built-in module with that name. These module names are listed in
sys.builtin_module_names. If not found, it then searches for a file
named spam.py in a list of directories given by the variable sys.path.
sys.path is initialized from these locations:</p>
<ul>
<li><p>The directory containing the input script (or the current directory
when no file is specified).</p>
</li>
<li><p>PYTHONPATH (a list of directory names, with the same syntax as the
shell variable PATH).
...</p>
</li>
</ul>
</blockquote>
<p>So Python first looks into <code>sys.builtin_module_names</code> and then into <code>sys.path</code>. I checked <code>sys.builtin_module_names</code> on my OS (Mac).</p>
<pre class="lang-py prettyprint-override"><code>>>> sys.builtin_module_names
('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_suggestions', '_symtable', '_sysconfig', '_thread', '_tokenize', '_tracemalloc', '_typing', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time')
>>> 'os' in sys.builtin_module_names
False
</code></pre>
<p>Since <code>os</code> is not in <code>sys.builtin_module_names</code>, a file named <code>os.py</code> in the same directory as my Python file should take precedence over the <code>os</code> Python module.</p>
<p>I created a file named <code>os.py</code> in a <code>test</code> directory with the following simple code:</p>
<pre class="lang-py prettyprint-override"><code>#os.py
def fun():
    print("Custom os called!")
</code></pre>
<p>And created another file named <code>test.py</code> which imports <code>os</code></p>
<pre class="lang-py prettyprint-override"><code>#test.py
import sys
print("os in sys.builtin_module_names -", 'os' in sys.builtin_module_names)
print("First directory on sys -", sys.path[0])
import os
print("source file of the imported os -", os.__file__)
print(os.fun())
</code></pre>
<p>This is the output of <code>test.py</code></p>
<pre class="lang-py prettyprint-override"><code>> python3 test.py
os in sys.builtin_module_names - False
First directory on sys - /Users/dhruv/Documents/test
source file of the imported os - /opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/os.py
Traceback (most recent call last):
File "/Users/dhruv/Documents/test/test.py", line 9, in <module>
print(os.fun())
^^^^^^
AttributeError: module 'os' has no attribute 'fun'
</code></pre>
<p>Why is the standard library <code>os</code> module imported instead of my local <code>os.py</code>?</p>
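<p>A sketch of the check behind my current hypothesis (an assumption, not something the quoted docs state): if the interpreter itself imports <code>os</code> during startup, the cached entry in <code>sys.modules</code> wins before any path search happens:</p>
<pre class="lang-py prettyprint-override"><code># check_preimport.py - run this as the very first statements of a script
import sys

# If this prints True, 'os' was already imported during interpreter
# startup (e.g. by site.py), so a later "import os" returns the cached
# module from sys.modules and never consults sys.path at all.
print('os' in sys.modules)
</code></pre>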
|
<python><python-3.x><python-import>
|
2025-02-07 10:53:57
| 1
| 597
|
Dhruv
|
79,420,563
| 4,539,582
|
pymupdf4llm get 1 table format
|
<p>I want to extract all text, including tables, from a PDF file using <code>pymupdf4llm</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pymupdf4llm
import pathlib
md_text = pymupdf4llm.to_markdown("my_file.pdf")
pathlib.Path("result.md").write_bytes(md_text.encode())
</code></pre>
<p>But in the result, the same table shows up twice in the markdown output: once as plain text and once in table format.</p>
<p>Here is my PDF:</p>
<p><a href="https://i.sstatic.net/fzUFRLa6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzUFRLa6.png" alt="enter image description here" /></a></p>
<p>One version with no table formatting:
<a href="https://i.sstatic.net/VBvsrwth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VBvsrwth.png" alt="enter image description here" /></a></p>
<p>Another version in table format:</p>
<p><a href="https://i.sstatic.net/LRnooq7d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRnooq7d.png" alt="enter image description here" /></a></p>
<p>How can I get the extracted text with the table appearing only once, in table format?</p>
|
<python><pymupdf>
|
2025-02-07 10:38:00
| 0
| 979
|
newbie
|
79,420,346
| 8,135,029
|
Syntax error at or near ':'(line 1, pos 2) - PARSE_SYNTAX_ERROR - == SQL ==
|
<p>I'm trying to read two JSON files at a time from an AWS S3 bucket and am getting a PARSE_SYNTAX_ERROR.</p>
<p>When I read a single file, it works fine.</p>
<p>I am using AWS Glue for execution.</p>
<pre><code>file_paths = [
"s3://test_bt/data/batch_date=20241217/20241217/data_0_0_0.json.gz",
"s3://test_bt/data/batch_date=20241217/20241217/data_0_1_0.json.gz"
]
df_single = spark.read.json(*file_paths)
df_single.show()
</code></pre>
<p>ERROR:</p>
<pre><code>Spark Error Class: PARSE_SYNTAX_ERROR; Traceback (most recent call last):
File "/tmp/TEST.py", line 10, in <module>
df_single = spark.read.json(*s3_paths)
File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 254, in json
self._set_opts(
File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 50, in _set_opts
self.schema(schema) # type: ignore[attr-defined]
File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 118, in schema
self._jreader = self._jreader.schema(schema)
File "/opt/amazon/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 196, in deco
raise converted from None
pyspark.sql.utils.ParseException:
Syntax error at or near ':'(line 1, pos 2)
== SQL ==
s3://test_bt/data/batch_date=20241217/20241217/data_0_1_0.json.gz
--^^^
</code></pre>
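<p>For reference, the traceback shows the second path landing in <code>self.schema(schema)</code>, so my suspicion (an assumption on my part) is the <code>*</code> unpacking: <code>json()</code> takes a list of paths as one argument, and unpacking makes the second path fill the <code>schema</code> parameter. A sketch of the variant without unpacking:</p>
<pre><code># json() accepts a list of paths as a single argument; unpacking with *
# makes the second path land in the schema parameter
df_single = spark.read.json(file_paths)
df_single.show()
</code></pre>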
|
<python><amazon-web-services><apache-spark><amazon-s3><pyspark>
|
2025-02-07 09:24:54
| 1
| 922
|
abhimanyu
|
79,420,135
| 7,002,525
|
pySMT using deprecated setup.py
|
<p>I have installed <a href="https://github.com/pysmt/pysmt" rel="nofollow noreferrer">pySMT</a> for Python 3.12 using PyCharm package manager with a conda environment in Ubuntu 22.04.5 LTS:</p>
<pre><code>$ pip show pysmt
Name: PySMT
Version: 0.9.6
Summary: A solver-agnostic library for SMT Formulae manipulation and solving
Home-page: http://www.pysmt.org
Author: PySMT Team
Author-email: info@pysmt.org
License: APACHE
Location: .../lib/python3.12/site-packages
Requires:
Required-by:
</code></pre>
<p>However, if I try to install a solver, or, for example, <em>"show you which solvers have been found in your <code>PYTHONPATH</code>"</em>, I get a SetuptoolsDeprecationWarning:</p>
<pre><code>$ pysmt-install --check
.../lib/python3.12/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************
!!
self.initialize_options()
Traceback (most recent call last):
...
FileExistsError: [Errno 17] File exists: '.../miniforge3/envs/.../bin/python3.12'
</code></pre>
<p>What am I missing? Why would <code>pysmt-install</code> use the deprecated setup.py?</p>
<p>Perplexity suggested <code>pip install --upgrade setuptools</code>, but I'm not convinced.</p>
<p>Edit: There is one similar open issue in the pySMT Github repo but it is related to MacOS with M1:</p>
<p><a href="https://github.com/pysmt/pysmt/issues/709" rel="nofollow noreferrer">Cannot install msat solver on MacOs with M1</a></p>
|
<python><ubuntu><conda><setup.py><pysmt>
|
2025-02-07 08:04:28
| 1
| 706
|
teppo
|
79,420,118
| 4,105,601
|
Incorrect version of pip when using pkg_resources
|
<p>I'm trying to gather information about several machines, each one of them running different Python versions and installed modules.</p>
<p>I tried the <code>pkg_resources</code> way, but after updating pip on one of the machines I'm seeing that the reported pip version is not correct:</p>
<pre class="lang-none prettyprint-override"><code>Type "help", "copyright", "credits" or "license" for more information.
>>> import pkg_resources
>>> dists = [str(d).replace(" ","==") for d in pkg_resources.working_set]
>>> dists
['pip==1.5.6', 'pycryptodome==3.9.9', 'requests==2.10.0', 'setuptools==2.1', 'urllib3==1.24.3']
>>> import pip
>>> pip.__version__
'19.1.1'
</code></pre>
<pre><code>python -m pip freeze
DEPRECATION: Python 3.4 support has been deprecated. pip 19.1 will be the last one supporting it. Please upgrade your Python as
Python 3.4 won't be maintained after March 2019 (cf PEP 429).
pycryptodome==3.9.9
requests==2.10.0
urllib3==1.24.3
</code></pre>
<p>The other libraries are fine:</p>
<pre><code>>>> import requests
>>> requests.__version__
'2.10.0'
>>> import urllib3
>>> urllib3.__version__
'1.24.3'
</code></pre>
<p>Why is this happening?
Is there a reliable way to get a module version without importing it?</p>
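<p>For reference, a minimal sketch of an alternative I am considering for reading versions without importing the modules, using <code>importlib.metadata</code> (an assumption on my side: it needs Python 3.8+, which would not cover the Python 3.4 machine above):</p>
<pre class="lang-py prettyprint-override"><code>from importlib.metadata import version, distributions

# Version of one distribution, read from its installed metadata
print(version("pip"))

# All installed distributions and their versions
for dist in distributions():
    print(dist.metadata["Name"], dist.version)
</code></pre>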
|
<python><pip><pkg-resources>
|
2025-02-07 07:55:29
| 1
| 507
|
Héctor C.
|
79,419,787
| 11,850,322
|
Call another jupyter notebook, passing variables and receiving variables
|
<p>Let say I have two jupyter notebook files called: <code>main</code> and <code>sub</code></p>
<p>Here are my needs:</p>
<ol>
<li>Call and run <code>sub</code> from <code>main</code></li>
<li>Each notebook has its own variable space. For example, <code>x=1</code> in <code>main</code> can coexist with <code>x=2</code> in <code>sub</code>, unless they explicitly exchange information</li>
<li>I can pass certain variables to <code>sub</code> and retrieve certain variables from <code>sub</code> (not all). This is very important, as I do not want variables to overlap and mess up the code</li>
</ol>
<p>I have been trying all day with GPT and Copilot, looking for an effective solution, but:</p>
<ol>
<li>Using <code>%run -i</code> or <code>IPython.get_ipython</code> just shares all variables between the two notebooks</li>
<li>Using <code>nbclient</code>: both GPT and Copilot keep giving wrong answers (i.e. I still cannot pass and receive variables between <code>sub</code> and <code>main</code>)</li>
</ol>
<p>I know another solution is to define functions in a .py file; however, my current work is too complex to convert from .ipynb to .py at the moment.</p>
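<p>For illustration, a minimal sketch of the closest pattern I have found so far, using <code>papermill</code> plus <code>scrapbook</code> (both are assumptions on my part, and <code>sub.ipynb</code> would need a cell tagged <code>parameters</code> for the injection to work):</p>
<pre class="lang-py prettyprint-override"><code># main notebook
# pip install papermill scrapbook   (assumed third-party packages)
import papermill as pm
import scrapbook as sb

# Run sub.ipynb in its own kernel (separate variable space),
# injecting only the chosen variables
pm.execute_notebook("sub.ipynb", "sub_output.ipynb", parameters={"x": 2})

# Retrieve only the values sub.ipynb explicitly exposed
nb = sb.read_notebook("sub_output.ipynb")
result = nb.scraps["result"].data

# In sub.ipynb, a value is exposed with:
#   import scrapbook as sb
#   sb.glue("result", some_value)
</code></pre>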
<p>If anyone has any suggestions, I would really appreciate them (sample code would be doubly appreciated)</p>
<p>Thank you!</p>
|
<python><python-3.x><jupyter-notebook><jupyter><jupyter-lab>
|
2025-02-07 01:37:11
| 0
| 1,093
|
PTQuoc
|
79,419,739
| 13,395,230
|
Optimized binary matrix inverse
|
<p>I am attempting to invert (mod 2) a non-sparse binary matrix. I am trying to do it with very large matrices, like 100_000 by 100_000. I have tried libraries such as sympy, numpy, and numba. Most of these work with floats and don't apply mod 2 at each step. In general, numpy reports the determinant of a random binary invertible array of size 1000 by 1000 as INF.</p>
<p>Here is my attempted code, but this lags at 10_000 by 10_000.</p>
<p>This function is also nice since, even if the matrix is not invertible, I can still see the rank.</p>
<pre><code>import numpy as np

def binary_inv(A):
    n = len(A)
    A = np.hstack((A.astype(np.bool_), np.eye(n, dtype=np.bool_)))
    for i in range(n):
        j = np.argmax(A[i:, i]) + i
        if i != j:
            A[[i, j], i:] = A[[j, i], i:]
        A[i, i] = False
        A[A[:, i], i+1:] ^= A[i:i+1, i+1:]
    return A[:, n:]
</code></pre>
<p>Any thoughts on making this as fast as possible?</p>
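<p>One direction I have not verified at this scale: the third-party <code>galois</code> package implements numpy-style linear algebra over GF(2), so all arithmetic stays mod 2 (a sketch; whether it is fast enough at 100_000 by 100_000 is an open question):</p>
<pre><code>import numpy as np
import galois  # pip install galois (assumed third-party package)

GF2 = galois.GF(2)
A = GF2.Random((1000, 1000))
try:
    A_inv = np.linalg.inv(A)  # dispatched to GF(2) arithmetic
except np.linalg.LinAlgError:
    print("matrix is singular over GF(2)")
</code></pre>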
|
<python><matrix><optimization>
|
2025-02-07 00:48:36
| 1
| 3,328
|
Bobby Ocean
|
79,419,588
| 6,703,783
|
How to import `wikipedia` module in Python
|
<p>I am running a Python script which downloads articles from Wikipedia. I have tried to install <code>wikipedia</code> and <code>wikipedia-api</code>, but neither seems to work.</p>
<pre><code>#!/usr/bin/env python
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
"""
This script downloads a few sample wikipedia articles that can be used for demo or quickstart purposes in conjunction with the solution accelerator.
"""

import argparse
import os

import wikipedia

us_states = [
    "Alaska",
    "California",
    "Washington (state)",
    "Washington_D.C.",
    "New York (state)",
]


def main():
    parser = argparse.ArgumentParser(description="Wikipedia Download Script")
    parser.add_argument(
        "directory",
        help="Directory to download sample wikipedia articles to.",
        default="testdata",
    )
    parser.add_argument(
        "--short-summary",
        help="Retrieve short summary article content.",
        action="store_true",
    )
    parser.add_argument(
        "--num-articles",
        help="Number of wikipedia articles to download. Default=5",
        default=5,
        choices=range(1, 6),
        type=int,
    )
    args = parser.parse_args()

    os.makedirs(args.directory, exist_ok=True)
    for state in us_states[0 : args.num_articles]:
        try:
            title = wikipedia.page(state).title.lower().replace(" ", "_")
            content = (
                wikipedia.page(state).summary
                if args.short_summary
                else wikipedia.page(state).content
            )
            content = content.strip()
            filename = os.path.join(args.directory, f"{title}_wiki_article.txt")
            with open(filename, "w", encoding="utf-8") as f:
                f.write(content)
            print(f"Saving wiki article '{title}' to {filename}")
        except Exception:
            # use state here: title may be unbound if the lookup failed
            print(f"Error fetching wiki article {state}")


if __name__ == "__main__":
    main()
</code></pre>
<p>Output of <code>pip list</code></p>
<pre><code>wheel 0.45.1
wikipedia 1.4.0
Wikipedia-API 0.8.1
</code></pre>
<p>When I run the code, I get error</p>
<pre><code>vscode β /graphrag-accelerator/notebooks (main) $ python get-wiki-articles.py
Traceback (most recent call last):
File "/graphrag-accelerator/notebooks/get-wiki-articles.py", line 12, in <module>
import wikipedia
ModuleNotFoundError: No module named 'wikipedia'
</code></pre>
<p>Output of <code>pip show wikipedia</code></p>
<pre><code>vscode β /graphrag-accelerator/notebooks (main) $ pip show wikipedia
Name: wikipedia
Version: 1.4.0
Summary: Wikipedia API for Python
Home-page: https://github.com/goldsmith/Wikipedia
Author: Jonathan Goldsmith
Author-email: jhghank@gmail.com
License: MIT
Location: /home/vscode/.local/lib/python3.10/site-packages
Requires: beautifulsoup4, requests
Required-by:
</code></pre>
<p>Python env:</p>
<p><a href="https://i.sstatic.net/fzee27v6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzee27v6.png" alt="enter image description here" /></a></p>
<p>After receiving advice, I recreated the environment for the project</p>
<p>pyvenv.cfg</p>
<pre><code>home = /usr/local/bin
include-system-site-packages = false
version = 3.10.16
</code></pre>
<p><code>which python</code> output in terminal window</p>
<pre><code>$ which python
/graphrag-accelerator/.venv/bin/python
</code></pre>
<p>version</p>
<pre><code> python --version
Python 3.10.16
</code></pre>
<p><code>print(sys.path)</code> output</p>
<pre><code>['/graphrag-accelerator/notebooks', '/graphrag-accelerator/backend', '/graphrag-accelerator/notebooks/$PATH', '/usr/local/lib/python310.zip', '/usr/local/lib/python3.10', '/usr/local/lib/python3.10/lib-dynload', '/graphrag-accelerator/.venv/lib/python3.10/site-packages']
</code></pre>
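<p>For reference, the check I still want to run (my assumption: a mismatch between the interpreter that installed the package and the one running the script, since the <code>pip show</code> Location above is not on the venv's <code>sys.path</code>):</p>
<pre><code>import sys
print(sys.executable)  # interpreter actually running the script

# Compare with the Location reported by "pip show wikipedia";
# installing via "python -m pip install wikipedia" with this same
# interpreter guarantees they match.
</code></pre>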
|
<python><wikipedia><wikipedia-api>
|
2025-02-06 23:14:39
| 1
| 16,891
|
Manu Chadha
|
79,419,480
| 1,122,185
|
Using Python to replace triple double quotes with single double quote in CSV
|
<p>I used the pandas library to manipulate the original data down to a simple 2-column CSV file and renamed it to a text file. The file has triple double quotes that I need replaced with single double quotes. Every line of the file is formatted as:</p>
<pre><code>"""QA12345""","""Some Other Text"""
</code></pre>
<p>What I need is:</p>
<pre><code>"QA12345","Some Other Text"
</code></pre>
<p>This Python snippet wipes out the file after it finishes.</p>
<pre><code>import fileinput

with fileinput.FileInput(input_file, inplace=True) as f:
    next(f)
    for line in f:
        line.replace("""""", '"')
</code></pre>
<p>It doesn't work with</p>
<pre><code>line.replace('"""', '"')
</code></pre>
<p>either.</p>
<p>I've also tried adjusting the input values to be <code>'"Some Other Text"'</code> and variations (<code>"""</code> and <code>'\"'</code>), but nothing seems to work. I believe the triple quote is causing the issue, but I don't know what to do about it.</p>
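<p>For reference, my current understanding of <code>fileinput</code>'s inplace mode (untested on my real data): <code>replace()</code> returns a new string rather than modifying the line, and whatever is printed to stdout becomes the new file contents, so a loop that never prints produces an empty file. A sketch of the pattern as I understand it:</p>
<pre><code>import fileinput

input_file = "data.txt"  # placeholder path

with fileinput.FileInput(input_file, inplace=True) as f:
    for line in f:
        # print() output replaces the file's contents line by line
        print(line.replace('"""', '"'), end="")
</code></pre>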
|
<python><csv>
|
2025-02-06 22:12:50
| 4
| 577
|
ResourceReaper
|
79,419,312
| 3,045,351
|
Service in a virtualenv calling nodes in their own virtualenv's
|
<p>I have a situation where I have a large code base that I did not author, only have a surface-level understanding of, and cannot substantially change. There is a parent service running in its own virtualenv. It imports various third-party nodes, each with their own code base and dependencies. The nodes are imported using importlib.</p>
<p>As is normal, different nodes require different versions of the same packages and their dependencies. This is causing an issue, as the parent service imports one node after another. If diffusers v0.24 has already been imported and the next node needs diffusers v0.32, the package by default will not be re-imported.</p>
<p>Steps taken so far:</p>
<ol>
<li>Parent service and nodes all in their own virtualenv's with their own package requirements installed.</li>
<li>sys.path dynamically redefined on each iteration to always have the virtualenv's site-packages dir first in the path and the location of the node second.</li>
<li>A shebang line added to the <code>__init__.py</code> file for each node to be imported.</li>
<li>Changed the <code>__init__.py</code> file's permissions to +x and the node directory's permissions to 755.</li>
<li>Using the example of diffusers again, added the line print(importlib.metadata.version('diffusers')) in the node code to check that v0.32 is available where it should be and v0.24 likewise.</li>
<li>Iterated over the key, value pairs of sys.module in the virtualenv hosting the parent service to see where diffusers was imported from. If v0.24 has already been loaded in the parent virtualenv, v0.32 will not be loaded and I will get an error saying the code in the the node that needs v0.32 cant find what it needs from the diffusers library.</li>
</ol>
<p>I've had partial success using the below code:</p>
<pre><code>import sys

del_modules = ['diffusers']
del_list = []

print(sys.path)
print(sys.executable)

if (node_var != 'cogvideox' and diffusers_vers != '0.24.0') or (node_var == 'cogvideox' and diffusers_vers != '0.32.2'):
    for module in del_modules:
        for key, value in sys.modules.items():
            #print(key, value)
            key2 = key.split('.')[0]
            if module in key2:
                del_list.append(key)

    for d in del_list:
        print(f"Unloading {d}")
        #reload(d)
        del sys.modules[d]
        if d in globals():
            print(f"Unloading {d} globals")
            del globals()[d]
</code></pre>
<p>...this feels a bit hacky, to say the least, but it does at least achieve the effect of reloading diffusers for me. However, it seems to have had a knock-on effect on some other installed modules, which are not behaving as expected.</p>
<p>I am also aware of the reload module from importlib and have tried something of the order of:</p>
<pre><code>from importlib import reload

for key, value in sys.modules.items():
    if '/usr' in str(value) and 'built-in' not in str(value) and 'dist-packages' in str(value) and 'importlib' not in str(value) and 'google' not in str(value) and 'ipy' not in str(value):
        reload(sys.modules[key])
</code></pre>
<p>...this works, but every time I have tried it in a sandboxed way it has ended up crashing my Google Colab session.</p>
<p>I understand all the arguments about not having multiple versions of the same package in the same code base etc., but I am not in a position to rewrite voluminous and complex AI code.</p>
<p>Have I missed something really basic, or is this just an inbuilt limitation of Python? To recap, the parent service imports all the nodes and their dependencies in a loop as it boots up, and there is nothing I can do to change that.</p>
<p>I am basically looking for a more straightforward, reliable way of importing two nodes that use two different versions of the same package into my parent service...</p>
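<p>For completeness, the only clean alternative I can see is isolating each node in its own interpreter process, so versions never collide inside one Python process. A rough sketch (the paths and the JSON protocol here are hypothetical placeholders, not part of the existing code base):</p>
<pre><code>import json
import subprocess

def run_node(venv_python, node_script, payload):
    # Each node runs under its own venv's interpreter; data crosses
    # the process boundary as JSON on stdin/stdout
    result = subprocess.run(
        [venv_python, node_script],
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)

out = run_node(
    "/venvs/cogvideox/bin/python",  # hypothetical per-node venv
    "nodes/cogvideox/run.py",       # hypothetical entry point
    {"prompt": "example"},
)
</code></pre>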
|
<python><python-3.x>
|
2025-02-06 20:46:56
| 0
| 4,190
|
gdogg371
|
79,419,279
| 13,971,251
|
How to use django-allauth for Google API?
|
<p>How is <a href="https://pypi.org/project/django-allauth/" rel="nofollow noreferrer"><code>django-allauth</code></a> implemented to obtain authorization using Oauth2 for a <a href="https://developers.google.com/identity/protocols/oauth2" rel="nofollow noreferrer">Google API</a> (in my case the <a href="https://developers.google.com/gmail/api/guides" rel="nofollow noreferrer">Gmail API</a>)? Additionally, I am looking to implement this <em>separately</em> from using <code>django-allauth</code> to have users log in with Google, so I would need to store it separately, and also call it in a view.</p>
<p>Thanks!</p>
|
<python><django><google-oauth><django-allauth><django-oauth>
|
2025-02-06 20:34:26
| 1
| 1,181
|
Kovy Jacob
|
79,419,072
| 13,971,251
|
Manage django-allauth social applications from admin portal
|
<p>In all the tutorials I've seen, the <a href="https://pypi.org/project/django-allauth/" rel="nofollow noreferrer"><code>django-allauth</code></a> settings are all in the <code>settings.py</code> file. However, this ends up being kind of messy:</p>
<pre class="lang-none prettyprint-override"><code>SOCIALACCOUNT_PROVIDERS = {
"google": {
"SCOPE": [
"profile",
"email",
],
"AUTH_PARAMS": {
"access_type": "online",
"redirect_uri": "https://www.********.com/accounts/google/login/callback/",
},
"OAUTH_PKCE_ENABLED": True,
}
}
SITE_ID = 1
SOCIALACCOUNT_ONLY = True
ACCOUNT_EMAIL_VERIFICATION = 'none'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
LOGIN_REDIRECT_URL = 'https://www.********.com/success/'
ROOT_URLCONF = 'gvautoreply.urls'
</code></pre>
<p>So my question is, how can I fully manage Social Applications and their settings from the admin portal? I see that there is a settings field that takes JSON input, but I can't find any documentation on how to use it.</p>
<p><a href="https://i.sstatic.net/V0fgPjAt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0fgPjAt.png" alt="" /></a></p>
|
<python><django><google-oauth><django-allauth><django-oauth>
|
2025-02-06 19:03:31
| 2
| 1,181
|
Kovy Jacob
|
79,418,903
| 271,789
|
Using subscribe() method in an asyncio stack
|
<p>I want to use Pub/Sub in an application that uses asyncio as its basic way of achieving I/O concurrency. The default Google SDK, however, doesn't offer async methods (I have tried gcloud-aio-pubsub, but it only supports pull methods, so the latency is very bad).</p>
<p>With publishing and topic/subscription management I could just use wrap_future, as the publish() method and most others return an object supporting the <code>Future</code> protocol.</p>
<p>My question is about the <code>subscribe</code> method. According to the documentation, it runs the StreamingPull in a separate thread. Also, if I understand the code correctly, the callback is called in a thread that is taken from a pool.</p>
<p>My idea for using it with asyncio is to replace the default Scheduler with one that schedules the coroutines in the main thread's event loop. Is that a good approach? Are there any considerations I need to take into account?</p>
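<p>For concreteness, the alternative I am weighing against a custom Scheduler: keep the default scheduler and hop from the callback thread into the main event loop (a sketch; <code>subscription_path</code> is a placeholder):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from google.cloud import pubsub_v1

loop = asyncio.get_event_loop()

async def handle(message):
    # async processing here
    message.ack()

def callback(message):
    # Called on a thread from the subscriber's pool; schedule the
    # coroutine on the main thread's loop instead of running here
    asyncio.run_coroutine_threadsafe(handle(message), loop)

subscriber = pubsub_v1.SubscriberClient()
streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
</code></pre>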
|
<python><multithreading><python-asyncio><google-cloud-pubsub>
|
2025-02-06 17:53:17
| 1
| 2,066
|
zefciu
|
79,418,852
| 1,110,442
|
How to build a url with path components in Python securely
|
<p>In Python, how do I build urls where some of the path components might be user input or otherwise untrusted?</p>
<p>Is there a way to use <a href="https://docs.python.org/3/tutorial/inputoutput.html#formatted-string-literals" rel="nofollow noreferrer">f-strings</a> that automatically provides escaping, similar to Javascript's <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_templates" rel="nofollow noreferrer">tagged template literals</a>? Or maybe <code>string.format</code>?</p>
<p>I am looking for a pattern I can use multiple times on a larger project. I am looking for the "parameterized queries" of url building, as opposed to the "string concatenate and escape" approach.</p>
<p>Ideally, it would be some kind of url builder that also supports query parameters, or whatever else I might need down the road.</p>
<p>For example, maybe I have a url template like this:</p>
<pre class="lang-py prettyprint-override"><code>url_template = "https://example.com/api/v1/user/{user_id}"
</code></pre>
<p>And I want to be able to take that url template and fill in a <code>user_id</code> value, but with any special characters escaped.</p>
<p>For example, if I had:</p>
<pre class="lang-py prettyprint-override"><code>url_template = "https://example.com/api/v1/user/{user_id}"
user_id = "123/some-other-url?virus=veryyes"
final_url = build_it(url_template, { "user_id": user_id })
# final_url:
# https://example.com/api/v1/user/123%2Fsome-other-url%3Fvirus%3Dveryyes
</code></pre>
<p>I am aware of the function <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.quote" rel="nofollow noreferrer"><code>urllib.parse.quote</code></a> but that seems too low level to use in practice.</p>
<p>I also see a lot of suggestions to use <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urljoin" rel="nofollow noreferrer"><code>urllib.parse.urljoin</code></a> but that seems like a bad idea, as it allows query parameters and multiple path segments.</p>
<p>The solution doesn't need to use a template like my example. I'd also be happy with a <code>.append_single_path_segment()</code> style api.</p>
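<p>For concreteness, a minimal sketch of the helper I have in mind (<code>build_it</code> is my own hypothetical name, not an existing API), built on <code>urllib.parse.quote</code> with <code>safe=""</code> so path separators get escaped too:</p>
<pre class="lang-py prettyprint-override"><code>from urllib.parse import quote

def build_it(template, params):
    # safe="" escapes "/" and "?" as well, so a value cannot inject
    # extra path segments or a query string
    escaped = {k: quote(str(v), safe="") for k, v in params.items()}
    return template.format(**escaped)

final_url = build_it(
    "https://example.com/api/v1/user/{user_id}",
    {"user_id": "123/some-other-url?virus=veryyes"},
)
# https://example.com/api/v1/user/123%2Fsome-other-url%3Fvirus%3Dveryyes
</code></pre>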
|
<python><security><url>
|
2025-02-06 17:33:35
| 1
| 5,439
|
Tim
|
79,418,812
| 861,164
|
select random representative from a list in python (axiom of choice)
|
<p>I have a very large (1M or more elements) list of elements that occur repeatedly. I want, for each unique element, an index pointing to a random representative of that element in the list.
For example, the list</p>
<pre><code>lst = ['b', 't', 'm', 'a', 'c', 'k', 'm', 't', 'm', 'l']
</code></pre>
<p>contains the elements</p>
<pre><code>elem = {'a', 'b', 'c', 'k', 'l', 'm', 't'}
</code></pre>
<p>with 'a' being at index 3, 't' being at index 1 and 7 and 'm' being at 2, 6 and 8 and so on.</p>
<p>I'm looking for a <em>random</em> list <code>selection</code> such that:</p>
<pre><code>set(lst) == set(numpy.array(lst)[selection])
</code></pre>
<p>and</p>
<pre><code>len(selection) == len(elem)
</code></pre>
<p>What is an efficient algorithm in python to achieve that?</p>
<p>A possible solution is:</p>
<pre><code>lst = numpy.array(lst)
[numpy.random.choice(numpy.where(lst==e)[0]) for e in set(lst)]
</code></pre>
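<p>For scale reference, the comprehension above rescans the whole array once per unique element. A single-pass vectorised sketch that I believe is equivalent (shuffle first, then take first occurrences):</p>
<pre><code>import numpy as np

arr = np.array(lst)
perm = np.random.permutation(len(arr))
# Within the shuffled order, the first occurrence of each value is a
# uniformly random representative of that value
_, first = np.unique(arr[perm], return_index=True)
selection = perm[first]
</code></pre>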
|
<python>
|
2025-02-06 17:21:07
| 1
| 1,498
|
Michael
|
79,418,754
| 4,986,615
|
A recursive property defined as a property_column in SQLAlchemy
|
<p>I am looking for a proper way to manage a recursive object using SQLAlchemy.</p>
<p>The <code>column_property</code> syntax looks very promising, but I could not make it work to load the children.</p>
<p>Below is an MRE:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import (
Column,
ForeignKey,
Integer,
create_engine,
select,
)
from sqlalchemy.orm import (
backref,
column_property,
declarative_base,
relationship,
remote,
Session,
)
from sqlalchemy.sql.expression import null
from testcontainers.postgres import PostgresContainer
Base = declarative_base()

class Category(Base):
    __tablename__ = "categories"

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("categories.id"), index=True, nullable=True)

    children = relationship(
        "Category",
        primaryjoin=(id == remote(parent_id)),
        lazy="select",
        backref=backref(
            "parent",
            primaryjoin=(remote(id) == parent_id),
            lazy="select",
        ),
    )

    deep_children = column_property(null())


def build_deep_children_expr():
    cte_query = select(Category.id).cte(recursive=True, name="descendants_cte")
    parent_query = select(Category).where(Category.parent_id == cte_query.c.id)
    cte_query = cte_query.union_all(parent_query)
    return select(cte_query).scalar_subquery()


Category.deep_children = column_property(build_deep_children_expr())
if __name__ == "__main__":
# Use a temporary PostgreSQL container
with PostgresContainer("postgres:16.1") as postgres:
database_url = postgres.get_connection_url(driver="psycopg")
engine = create_engine(database_url, echo=True)
Base.metadata.create_all(engine)
session = Session(engine)
# Build a hierarchy of categories:
# root (0)
# ββ (1)
# ββ (2)
# ββ (3)
# ββ (4)
# ββ (5)
# ββ (6)
# ββ (7)
root = Category(
id=0,
children=[
Category(id=1),
Category(
id=2,
children=[
Category(id=3),
Category(id=4),
Category(id=5, children=[Category(id=6), Category(id=7)]),
],
),
],
)
session.add(root)
session.flush()
session.expire_all()
cat = session.query(Category).filter(Category.id == 0).one()
print(f"Category: {cat!r}")
print(f" deep_children_count: {cat.deep_children_count}")
print(f" children: {[c.id for c in cat.children]}")
# Check each child's deep_children too
for c in cat.children:
print(f" Child {c.id} => deep_children = {c.deep_children}")
session.close()
</code></pre>
<p>Obtained error:</p>
<pre class="lang-bash prettyprint-override"><code>sqlalchemy.exc.CompileError: All selectables passed to CompoundSelect must have identical numbers of columns; select #1 has 1 columns, select #2 has 3
</code></pre>
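<p>For the immediate <code>CompileError</code>: both arms of a recursive union must select the same columns, and the recursive arm above selects the whole <code>Category</code> entity (three columns). A sketch limited to that one fix (it deliberately does not yet correlate the CTE to each outer row, which seems to be the remaining design problem):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import func, select

def build_deep_children_expr():
    cte_query = select(Category.id).cte(recursive=True, name="descendants_cte")
    # Select only Category.id so both sides of the union have one column
    parent_query = select(Category.id).where(Category.parent_id == cte_query.c.id)
    cte_query = cte_query.union_all(parent_query)
    return select(func.count()).select_from(cte_query).scalar_subquery()
</code></pre>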
|
<python><sqlalchemy>
|
2025-02-06 16:59:02
| 0
| 2,916
|
guhur
|
79,418,598
| 9,863,397
|
How to do install my custom package in editable mode, with uv
|
<p>My colleagues told me to try 'uv'. It is true that it is handy, but I quickly faced a huge problem: I can't install my custom sources.</p>
<p>Usually, my projects' structures look like this:</p>
<pre class="lang-none prettyprint-override"><code>my_project/
βββ src/
β βββ backend/
β β βββ __init__.py
β β βββ some_module.py
βββ scripts/
β βββ my_script.py
βββ pyproject.toml (or setup.py)
βββ other_project_files...
</code></pre>
<p>With venv, I do <code>pip install -e .</code>, so that I install my 'src' package, and I can use it in my scripts.</p>
<p>Now I have been trying <code>pip install -e .</code> and <code>uv run pip install -e .</code>, but they just do nothing. When I try to execute my script, which uses <code>src</code>, it does not find <code>src</code>.</p>
<p>What am I doing wrong?</p>
<p><a href="https://i.sstatic.net/kdMP6zb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kdMP6zb8.png" alt="Enter image description here" /></a></p>
<p>I run</p>
<pre class="lang-none prettyprint-override"><code>uv init
</code></pre>
<p>Then I run</p>
<pre class="lang-none prettyprint-override"><code>uv run pip install -e .
</code></pre>
<p>Then</p>
<pre class="lang-none prettyprint-override"><code>uv run scripts/importmylist.py
</code></pre>
<p>It gives:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "...\test_repo\scripts\importmylist.py", line 1, in <module>
from src.utils import my_list
ModuleNotFoundError: No module named 'src'
</code></pre>
<p>Content of pyproject:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "test-repo"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = []
</code></pre>
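<p>For reference, my current suspicion (an assumption) is that this pyproject declares no build backend, so there is nothing for the editable install to build. The sketch below is what I am about to try; hatchling is just one possible backend, and the package path is my guess from the layout above:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/backend"]
</code></pre>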
|
<python><pip><uv>
|
2025-02-06 16:05:57
| 4
| 457
|
ava_punksmash
|
79,418,418
| 467,366
|
How to use ebooklib to create a nested chapter hierarchy?
|
<p>I am trying to create an EPUB that has a nested chapter structure. There are top-level sections, with sub-chapters, and then subheadings below those sub-chapters. I would like this to be reflected in the TOC. Here is an example of what it looks like:</p>
<blockquote>
<h1>Section 1</h1>
<p>Here is some preamble about what is going to happen in Chapter 1</p>
<h2>Chapter 1</h2>
<p>This is the contents of Section 1, Chapter 1.</p>
<h2>Chapter 2</h2>
<p>This is the contents of Section 1, Chapter 2.</p>
<h1>Section 2</h1>
<p>Preamble for section 2.</p>
<h2>Chapter 1</h2>
<p>Section 2, Chapter 1.</p>
</blockquote>
<p>I have created this MWE that I think should represent the structure I want:</p>
<pre class="lang-py prettyprint-override"><code># /// script
# dependencies = [
# "ebooklib"
# ]
# ///
from ebooklib import epub

def create_epub():
    book = epub.EpubBook()
    book.set_title("Minimal EPUB Example")
    book.set_language("en")
    book.add_author("Author Name")

    # Section 1
    section1_preamble = epub.EpubHtml(
        title="Section 1 Preamble",
        file_name="section1_preamble.xhtml",
        content="<h1>Section 1</h1><p>Preamble text...</p>",
    )
    chapter1 = epub.EpubHtml(
        title="Chapter 1",
        file_name="chapter1.xhtml",
        content="<h2>Chapter 1</h2><p>Content of Chapter 1</p>",
    )
    chapter2 = epub.EpubHtml(
        title="Chapter 2",
        file_name="chapter2.xhtml",
        content="<h2>Chapter 2</h2><p>Content of Chapter 2</p>",
    )

    # Section 2
    chapter3 = epub.EpubHtml(
        title="Chapter 1 (Section 2)",
        file_name="chapter3.xhtml",
        content="<h2>Chapter 1 (Section 2)</h2><p>Content of Chapter 1 in Section 2</p>",
    )

    # Add chapters to book
    for item in [section1_preamble, chapter1, chapter2, chapter3]:
        book.add_item(item)

    # Define table of contents with nesting
    book.toc = (
        (epub.Section("Section 1"), [section1_preamble, chapter1, chapter2]),
        (epub.Section("Section 2"), [chapter3]),
    )

    # Define book spine
    book.spine = ["nav", section1_preamble, chapter1, chapter2, chapter3]

    # Add navigation files
    book.add_item(epub.EpubNcx())
    book.add_item(epub.EpubNav())

    # Write to file
    epub.write_epub("minimal_epub.epub", book, {})
    print("EPUB created: minimal_epub.epub")


if __name__ == "__main__":
    create_epub()
</code></pre>
<p>However, the EPUB this creates doesn't work quite right:</p>
<p><a href="https://i.sstatic.net/AJABT0Z8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJABT0Z8.png" alt="Screenshot of calibre's ebook viewer with a "Destination does not exist" error" /></a></p>
<p>There are three problems I'm having with this:</p>
<ol>
<li>The "Section" links don't work (I get a "Destination does not exist" error)</li>
<li>I would prefer it if the "preamble" text were not its own chapter. I want the "Section" link to be a chapter with sub-chapters.</li>
<li>I would like it if there were not an enforced page break after each sub-chapter, and certainly not after the preamble. The way I've got it structured now, "Section 1 Preamble" points to this:</li>
</ol>
<p><a href="https://i.sstatic.net/gshabsIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gshabsIz.png" alt="A page where the only text is "Section 1. Preamble text"" /></a></p>
<p>But I would like it if "Chapter 1" were a subheading here, directly under "Preamble text". There can still be enforced page breaks at the end of a section.</p>
|
<python><epub><ebooklib>
|
2025-02-06 15:07:08
| 1
| 10,943
|
Paul
|
79,418,256
| 29,295,031
|
Why I do not see the x axis when I use plotly with streamlit
|
<p>When I use only Plotly, I get this:</p>
<p>Code:</p>
<pre><code>import plotly.graph_objects as go
import plotly.express as px
import pandas as pd
data = {'x': [1.5, 1.6, -1.2],
'y': [21, -16, 46],
'circle-size': [10, 5, 6],
'circle-color': ["red","blue","green"]
}
# Create DataFrame
df = pd.DataFrame(data)
fig = px.scatter(
df,
x="x",
y="y",
color="circle-color",
size='circle-size'
)
fig.update_layout(
{
'xaxis': {
"range": [-100, 100],
'zerolinewidth': 3,
"zerolinecolor": "blue",
"tick0": -100,
"dtick": 25,
'scaleanchor': 'y'
},
'yaxis': {
"range": [-100, 100],
'zerolinewidth': 3,
"zerolinecolor": "green",
"tick0": -100,
"dtick": 25
},
"width": 500,
"height": 500
}
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/OlCU5Sd1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlCU5Sd1.png" alt="enter image description here" /></a></p>
<p>But when I use it with Streamlit:</p>
<pre><code>import streamlit as st
import plotly.graph_objects as go
import plotly.express as px
import pandas as pd
data = {'x': [1.5, 1.6, -1.2],
'y': [21, -16, 46],
'circle-size': [10, 5, 6],
'circle-color': ["red","blue","green"]
}
# Create DataFrame
df = pd.DataFrame(data)
fig = px.scatter(
df,
x="x",
y="y",
color="circle-color",
size='circle-size'
)
fig.update_layout(
{
'xaxis': {
"range": [-100, 100],
'zerolinewidth': 3,
"zerolinecolor": "green",
"tick0": -100,
"dtick": 25,
"scaleanchor": 'y'
},
'yaxis': {
"range": [-100, 100],
'zerolinewidth': 3,
"zerolinecolor": "red",
"tick0": -100,
"dtick": 25
},
"width": 500,
"height": 500
}
)
event = st.plotly_chart(fig, key="iris", on_select="rerun")
event.selection
</code></pre>
<p>I get this:</p>
<p><a href="https://i.sstatic.net/82WXFc4T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82WXFc4T.png" alt="enter image description here" /></a></p>
<p>Why is the x-axis removed when I use Streamlit?</p>
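<p>One thing I still want to rule out (an assumption on my part): Streamlit may stretch the figure to the container width, which together with <code>scaleanchor</code> could push the x range out of view. The call I plan to test:</p>
<pre><code># Keep the figure's own 500x500 size instead of stretching it
event = st.plotly_chart(fig, key="iris", on_select="rerun",
                        use_container_width=False)
</code></pre>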
|
<python><pandas><plotly><streamlit>
|
2025-02-06 14:12:25
| 1
| 401
|
user29295031
|
79,418,236
| 829,412
|
Can't "buf generate" generate folders with dots?
|
<p>We use protobuf and the buf CLI to create and generate the shared files between a Go and a Python project (to ensure structured data is transmitted between the projects in a common way).</p>
<p>We want to keep the generator commands as simple as possible, as the generation is done using GitHub workflows, taskfiles, etc., and we therefore do not want (to the greatest extent possible) to maintain specific file paths every time a new .proto file is added.</p>
<p>In addition we also use the buf linting to ensure the files are valid...</p>
<p>We have tried different things, but don't think we can quite get what we want (without having to do post-processing by renaming and moving files and folders)...</p>
<p><strong>buf.yaml</strong></p>
<pre><code>version: v2
modules:
- path: proto
lint:
use:
- STANDARD
breaking:
use:
- FILE
</code></pre>
<p>As mentioned, we use the .proto files as the base for both a Go and a Python project. The file structure (output) of the Go project is not as relevant or important as it is in our other component, which uses Python.</p>
<p><strong>buf.gen.yaml</strong> from the <em>Python</em> project (a separate repo, fetching files from the other repo using the github input; this has been removed from the listing below, as it has nothing to do with this issue)</p>
<pre><code>version: v2
managed:
enabled: true
plugins:
- remote: buf.build/protocolbuffers/python
out: components
- remote: buf.build/protocolbuffers/pyi
out: components
</code></pre>
<p>The remote <code>python</code> plugin does not support the options (opt) argument, and the <code>python</code> protoc_builtin plugin seems to require each file to be listed (no wildcards etc. are supported).</p>
<p>We have tried different structures for the .proto files, but if we use a folder name with dots (like com.company.app1), the linter is not satisfied.</p>
<ul>
<li>proto/com/company/app1/entity1/v1/entity1.proto</li>
<li>proto/com/company/app1/entity1/v2/entity1.proto</li>
<li>proto/com/company/app2/entity2/v1/entity2.proto</li>
</ul>
<p>This requires the packages to be something like this:</p>
<pre><code>package com.company.app1.entity1.v1;
message Entity1 {
string name = 2;
}
</code></pre>
<p>(If we add dots to a folder name, the linter says that the package name used is not correct, and you cannot use underscores or hyphens to replace the dots in the folder name.)</p>
<p><strong>What we want</strong> to achieve (output) is the generated folders in the python projects to be something like this:</p>
<ul>
<li>components/com.company.app1/entity1/v1/entity1.proto</li>
<li>components/com.company.app1/entity1/v2/entity1.proto</li>
<li>components/com.company.app2/entity2/v1/entity2.proto</li>
</ul>
<p><em>The Go code has not the same requirements regarding the folder structure and can be handled by the individual imports</em></p>
|
<python><protocol-buffers><buf>
|
2025-02-06 14:04:11
| 1
| 461
|
Kim Rasmussen
|
79,418,235
| 9,318,323
|
Msgraph-sdk-python get sharepoint site id from name
|
<p>I want to work with SharePoint sites using Python. For now, I just want to do the equivalent of <code>ls</code> on the site. I immediately ran into an issue: I need the site id to do anything with it.</p>
<p>How do I get the site id from the site name?
I did retrieve it using PowerShell:</p>
<pre class="lang-bash prettyprint-override"><code>$uri = "https://graph.microsoft.com/v1.0/sites/"+$SharePointTenant+":/sites/"+$SiteName+"?$select=id"
$RestData1 = Invoke-MgGraphRequest -Method GET -Uri $uri -ContentType "application/json"
# Display Site ID
$RestData1.displayName
$RestData1.id
</code></pre>
<p>But now I want to perform the same operation as above but using msgraph library. The problem is I am not sure how to do it. So far I have:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from azure.identity.aio import ClientSecretCredential
from msgraph import GraphServiceClient
tenant_id="..."
client_id="..."
client_secret="..."
credential = ClientSecretCredential(tenant_id, client_id, client_secret)
scopes = ['https://graph.microsoft.com/.default']
client = GraphServiceClient(credentials=credential, scopes=scopes)
sharepoint_tenant = "foobar365.sharepoint.com"
site_name = "someTestSPsite"
site_id = "f482d14d-6bd1-1234-gega-3121183eb87a" # how do I get this using GraphServiceClient from the site_name ?
# so that I can do this
# this works with the id retrieved from PS script
drives = (asyncio.run(client.sites.by_site_id(site_id).drives.get()))
print(drives)
</code></pre>
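<p>One pattern I have seen suggested for the SDK (unverified by me) is passing the <code>hostname:/sites/name</code> form directly as the site id, mirroring the REST call in the PowerShell snippet; my assumption is that the SDK substitutes it into the <code>/sites/{site-id}</code> URL unchanged:</p>
<pre class="lang-py prettyprint-override"><code># Composite identifier instead of a GUID (assumed to work like the REST form)
site = asyncio.run(
    client.sites.by_site_id(f"{sharepoint_tenant}:/sites/{site_name}").get()
)
print(site.id)
</code></pre>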
|
<python><azure><sharepoint><microsoft-graph-api>
|
2025-02-06 14:03:35
| 2
| 354
|
Vitamin C
|
79,418,188
| 25,413,271
|
Get indices of edge points computed with extract_feature_edges()
|
<p>I have a non-closed surface in an STL file. I read it as <code>PolyData</code> with PyVista, and I want to get the indices of the edge points. I go about it this way:</p>
<pre class="lang-py prettyprint-override"><code>bot_surf_file = r'bottom.stl'
bot_surf_mesh = pv.PolyData(bot_surf_file)
points = bot_surf_mesh.points
triangles = bot_surf_mesh.faces.reshape((bot_surf_mesh.n_cells, 4))[:, 1:]
edges = bot_surf_mesh.extract_feature_edges(boundary_edges=True)
edge_points_indices = np.where(np.all(np.isin(bot_surf_mesh.points, edges.points), axis=1))
</code></pre>
<p>But I don't really want to rely on <code>numpy.where</code>. I believe there must be a way to get the indices directly from the PyVista <code>PolyData</code> object.</p>
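<p>One workaround I am considering (it relies on point-data arrays surviving the filter, which I believe is the case for <code>extract_feature_edges</code>):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Tag every point with its own index before filtering
bot_surf_mesh["orig_ids"] = np.arange(bot_surf_mesh.n_points)
edges = bot_surf_mesh.extract_feature_edges(boundary_edges=True)

# Indices of the edge points back into the full mesh
edge_points_indices = edges["orig_ids"]
</code></pre>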
|
<python><pyvista>
|
2025-02-06 13:46:54
| 1
| 439
|
IzaeDA
|
79,418,004
| 5,044,921
|
How to force all terms in 1-D Gaussian mixture model to have the same mean?
|
<p>I have a one-dimensional set of data points, of which I want to parameterise the probability density. I have reasons to believe that a Gaussian mixture model would be a good way to accomplish this, so I'm trying to use scikit-learn's <a href="https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html" rel="nofollow noreferrer">GaussianMixture</a> class to fit the parameters and weights of two Gaussian distributions.</p>
<p>Toy example:</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture
stdev_1 = 5
stdev_2 = 30
gaussian_data_1 = stdev_1 * np.random.randn(1000)
gaussian_data_2 = stdev_2 * np.random.randn(1000)
data = np.concatenate([gaussian_data_1, gaussian_data_2])
model = GaussianMixture(2)
data_2d = data.reshape((len(data), 1))
model.fit(data_2d)
print("Estimated means:", model.means_[:, 0])
print("Estimated stdevs:", model.covariances_[:, 0, 0] ** 0.5)
print("Estimated weights:", model.weights_)
</code></pre>
<p>The resulting model has reasonable estimates of two Gaussians. I put in means of zero, and standard deviations of 5 and 30, both with weights of 0.5 (both have 1000 data points), and it finds means of [-0.0715483 and -0.06263915], standard deviations of [ 5.46757321 and 30.77977466], and weights of [0.53427173 and 0.46572827].</p>
<p>So far so good.</p>
<p>However, in my application, I <em>know</em> that the underlying distribution is unimodal, and I really only want to find out which combination of (standard deviations and weights) fits best. Hence, I'd like to <em>force</em> this model to use the same means, for instance by simply passing it the mean myself, and having it only optimise the weights and standard deviations (variances).</p>
<p>Is this possible with scikit-learn? The GaussianMixture class seems to be designed for <em>classification</em>, while I'm actually using it for the sake of <em>parameterising a distribution</em>, so it might be that there's a better solution that I'm not aware of.</p>
<p>If this is not feasible with scikit-learn, any suggestions on how to do this (preferably with >2 Gaussians)?</p>
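<p>For reference, since only the standard deviations and weights are free once the mean is fixed, a direct maximum-likelihood sketch with scipy (my own formulation, not a scikit-learn feature) generalises to k components:</p>
<pre><code>from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, data, k, mu=0.0):
    sigmas = np.exp(params[:k])            # positive stdevs
    weights = np.exp(params[k:])
    weights = weights / weights.sum()      # softmax onto the simplex
    pdf = sum(w * norm.pdf(data, mu, s) for w, s in zip(weights, sigmas))
    return -np.sum(np.log(pdf + 1e-300))

k = 2
x0 = np.concatenate([np.log([5.0, 30.0]), np.zeros(k)])
res = minimize(neg_log_likelihood, x0, args=(data, k))
print("stdevs:", np.exp(res.x[:k]))
print("weights:", np.exp(res.x[k:]) / np.exp(res.x[k:]).sum())
</code></pre>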
|
<python><scikit-learn>
|
2025-02-06 13:04:05
| 1
| 4,786
|
acdr
|
79,417,996
| 24,696,572
|
efficient matrix inversion/multiplication with multiple batch dimensions in pytorch
|
<p>In my framework, I have an outer loop (here mocked by the variable <code>n</code>), and inside the loop body I have to perform matrix inversions/multiplications over multiple batch dimensions. I observed that manually looping over the batch dimensions and computing the inverse is measurably faster than passing all batches to the <code>torch.linalg.inv</code> function.
Similar statements can be made for computing matrix multiplications using the <code>torch.einsum</code> function.</p>
<p>My expectation was that passing all batches at once performs better than for-loops. Any ideas/explanations/recommendations here?</p>
<p>Profiling output of function <code>inverse_batch</code>:</p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
100 2.112 0.021 2.112 0.021 {built-in method torch._C._linalg.linalg_inv}
1 0.000 0.000 2.112 2.112 mwe.py:5(inverse_batch)
1 0.000 0.000 2.112 2.112 {built-in method builtins.exec}
1 0.000 0.000 2.112 2.112 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
</code></pre>
<p>Profiling output of function <code>inverse_loop</code>:</p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
8000 0.207 0.000 0.207 0.000 {built-in method torch._C._linalg.linalg_inv}
1 0.022 0.022 0.229 0.229 mwe.py:9(inverse_loop)
1 0.000 0.000 0.000 0.000 {method 'view' of 'torch._C.TensorBase' objects}
1 0.000 0.000 0.229 0.229 {built-in method builtins.exec}
1 0.000 0.000 0.229 0.229 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
</code></pre>
<p>Code:</p>
<pre><code>import torch
import cProfile
import pstats

def inverse_batch(tensors, n):
    for i in range(n):
        torch.linalg.inv(tensors)

def inverse_loop(tensors, n):
    tensors = tensors.view(-1, 3, 3)
    for i in range(n):
        for j in range(10 * 8):
            torch.linalg.inv(tensors[j])
# Create a batch of tensors
tensors = torch.randn(10, 8, 3, 3, dtype = torch.double) # Shape: (10, 8, 3, 3)
# Profile code
n = 100 # Dummy outer loop variable
cProfile.run('inverse_batch(tensors, n)', 'profile_output')
stats = pstats.Stats('profile_output')
stats.strip_dirs().sort_stats('tottime').print_stats()
</code></pre>
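<p>For measurement sanity (an assumption on my part that cProfile overhead could distort things at this scale), <code>torch.utils.benchmark</code> gives warmed-up timings:</p>
<pre><code>import torch.utils.benchmark as benchmark

t = benchmark.Timer(
    stmt="torch.linalg.inv(tensors)",
    globals={"torch": torch, "tensors": tensors},
)
print(t.timeit(100))
</code></pre>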
|
<python><pytorch><batch-processing>
|
2025-02-06 13:02:06
| 2
| 332
|
Mathieu
|
79,417,824
| 607,407
|
How to tell pylance to correctly read identifiers loaded by their full module name?
|
<p>I have a file that depends on a tool from another directory. The structure is:</p>
<ul>
<li><code>my_subproject</code>
<ul>
<li><code>util</code>
<ul>
<li><code>__init__.py</code></li>
<li><code>my_tool.py</code></li>
</ul>
</li>
<li><code>__init__.py</code></li>
<li><code>my_main_file.py</code></li>
</ul>
</li>
</ul>
<p><code>my_tool.py</code> may contain something like:</p>
<pre><code>class MyTool:
    pass
</code></pre>
<p>In the main file, I import it by:</p>
<pre><code>import my_subproject.util.my_tool
my_tool_instance = my_subproject.util.my_tool.MyTool()
</code></pre>
<p>This works fine when run in Python, but I suspect there are path configurations I am not aware of. I am trying to get it to work in the IDE as well.</p>
<p>Right now, it says:</p>
<blockquote>
<p>"<code>my_tool</code>" is not a known attribute of module "<code>my_subproject.my_util</code>"</p>
</blockquote>
<p>In the VS Code settings for pylance, I have this:</p>
<pre><code> "python.autoComplete.extraPaths": [
"my_subproject"
],
"python.analysis.include": ["my_subproject/**"],
</code></pre>
|
<python><pylance>
|
2025-02-06 12:02:00
| 0
| 53,877
|
Tomáš Zato
|
79,417,790
| 8,315,819
|
Python script for process injection
|
<p>I need to do process injection using Python and tried the script below. My objective is to see whether the script works and, if it does, whether it can be used to evade EDR detection.</p>
<pre><code>import pymem
import os
import subprocess
notepad = subprocess.Popen(['notepad.exe'])
pm = pymem.Pymem('notepad.exe')
pm.inject_python_interpreter()
shellcode = """
f = open("C:\TestRule\process_injection.txt", "w+")
f.write("pymem_injection")
f.close()
os.system('rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump 988 C:\dump.dmp full')
"""
pm.inject_python_shellcode(shellcode)
notepad.kill()
</code></pre>
<p>The command within the shellcode, <code>os.system('rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump 988 C:\dump.dmp full')</code>, gets blocked by our EDR tool when run separately, which indicates that it works. But when used in the script, nothing happens except Notepad opening. I don't even see a notification of it being blocked by the EDR. It seems like the code is simply ignored.</p>
<p>I can say this because the other code within the shellcode</p>
<pre><code>f = open("C:\TestRule\process_injection.txt", "w+")
f.write("pymem_injection")
f.close()
</code></pre>
<p>just works fine.</p>
<p>Can anyone please tell me what I am doing wrong and suggest any modifications to the script?</p>
|
<python><security><process-injection>
|
2025-02-06 11:50:13
| 1
| 445
|
Biswa
|
79,417,729
| 6,618,051
|
PyInstaller add additional files
|
<p>How to add additional files into the <code>dist</code>-folder during a compilation (<code>python -m PyInstaller tlum.spec</code>)?</p>
<p>I've added them to <code>datas</code>, but it's not helping</p>
<pre><code>a = Analysis(
['..\\src\\main.py'],
pathex=[],
binaries=[],
datas=[
("..\\assets\\data\\dictionary.txt","assets\\data"),
("..\\assets\\images\\error.png","assets\\images"),
("..\\assets\\images\\success.png","assets\\images"),
("..\\src\\component\\harmonica_widget.py","component"),
("..\\src\\template\\game_harmonica.kv","template"),
("..\\src\\template\\game_parrot.kv","template"),
("..\\src\\template\\game_phonetics.kv","template"),
("..\\src\\template\\main.kv","template"),
],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
noarchive=False,
optimize=0,
)
</code></pre>
<p>It fails with an error:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 71, in <module>
File "kivy\app.py", line 955, in run
File "kivy\app.py", line 925, in _run_prepare
File "main.py", line 32, in build
File "kivy\lang\builder.py", line 308, in load_file
FileNotFoundError: [Errno 2] No such file or directory: 'template/main.kv'
</code></pre>
<p>Everything works only when I manually add the needed folders/files into the <code>dist</code> folder.</p>
<p><a href="https://i.sstatic.net/6HOStd5B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HOStd5B.png" alt="enter image description here" /></a></p>
<p>Sources: <a href="https://github.com/lyskouski/app-language/blob/main/windows/tlum.spec" rel="nofollow noreferrer">https://github.com/lyskouski/app-language/blob/main/windows/tlum.spec</a></p>
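<p>For context, my understanding (an assumption worth checking) is that <code>datas</code> entries are unpacked at runtime under <code>sys._MEIPASS</code> for one-file builds (or next to the executable for one-dir builds), so the loading code has to resolve paths relative to that base rather than the working directory. A sketch of the usual helper:</p>
<pre><code>import os
import sys

def resource_path(rel_path):
    # PyInstaller sets sys._MEIPASS when running from a bundle;
    # fall back to the current directory during development
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, rel_path)

# e.g. with kivy's Builder:
# Builder.load_file(resource_path("template/main.kv"))
</code></pre>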
|
<python><kivy><pyinstaller>
|
2025-02-06 11:30:00
| 1
| 1,939
|
FieryCat
|
79,417,642
| 3,365,532
|
How to implement Softmax of a Polars LazyFrame using expressions?
|
<p>I'm relatively new to using polars and it seems to be very verbose compared to pandas for what I would consider even relatively basic manipulations.</p>
<p>Case in point: the shortest way I could figure out to do a softmax over a lazy dataframe is the following:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.LazyFrame({'a': [1,2,3,4,5,6,7,8,9,10], 'b':[5,5,5,5,5,5,5,5,5,5], 'c': [10,9,8,7,6,5,4,3,2,1]})
cols = ['a','b','c']
df = df.with_columns([ pl.col(c).exp().alias(c) for c in cols]) # Exp all columns
df = df.with_columns(pl.sum_horizontal(cols).alias('sum')) # Get row sum of exps
df = df.with_columns([ (pl.col(c)/pl.col('sum')).alias(c) for c in cols ]).drop('sum')
df.collect()
</code></pre>
<pre><code>shape: (10, 3)
┌──────────┬──────────┬──────────┐
│ a        ┆ b        ┆ c        │
│ ---      ┆ ---      ┆ ---      │
│ f64      ┆ f64      ┆ f64      │
╞══════════╪══════════╪══════════╡
│ 0.000123 ┆ 0.006692 ┆ 0.993185 │
│ 0.000895 ┆ 0.01797  ┆ 0.981135 │
│ 0.006377 ┆ 0.047123 ┆ 0.946499 │
│ 0.04201  ┆ 0.114195 ┆ 0.843795 │
│ 0.211942 ┆ 0.211942 ┆ 0.576117 │
│ 0.576117 ┆ 0.211942 ┆ 0.211942 │
│ 0.843795 ┆ 0.114195 ┆ 0.04201  │
│ 0.946499 ┆ 0.047123 ┆ 0.006377 │
│ 0.981135 ┆ 0.01797  ┆ 0.000895 │
│ 0.993185 ┆ 0.006692 ┆ 0.000123 │
└──────────┴──────────┴──────────┘
</code></pre>
<p>Am I missing something and is there a shorter, more readable way of achieving this?</p>
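<p>For reference, the most compact sketch I can think of uses polars expression expansion (untested beyond this toy example): <code>pl.col</code> accepts a list of names, and the division broadcasts the horizontal sum across each expanded column while keeping the original names:</p>
<pre class="lang-py prettyprint-override"><code># starting from the original LazyFrame
out = (
    df.with_columns(pl.col(cols).exp())
      .with_columns(pl.col(cols) / pl.sum_horizontal(cols))
      .collect()
)
</code></pre>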
|
<python><dataframe><python-polars>
|
2025-02-06 11:03:06
| 1
| 443
|
velochy
|
79,417,386
| 9,128,863
|
Pytorch Convolution Network: problem with floating point type
|
<p>I'm trying to train a CV model on the standard MNIST data:</p>
<pre><code> import torch
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
img_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1305,), (0.3081,))
])
train_dataset = MNIST(root='../mnist_data/',
train=True,
download=True,
transform=img_transforms)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=10,
shuffle=True)
</code></pre>
<p>Model is declared as:</p>
<pre><code>import torch.nn as nn

class MNIST_ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = ConvLayer(1, 14, 5, activation=nn.Tanh(),
                               dropout=0.8)
        self.conv2 = ConvLayer(14, 7, 5, activation=nn.Tanh(), flatten=True,
                               dropout=0.8)
        self.dense1 = DenseLayer(28 * 28 * 7, 32, activation=nn.Tanh(),
                                 dropout=0.8)
        self.dense2 = DenseLayer(32, 10)

    def forward(self, x: Tensor) -> Tensor:
        assert_dim(x, 4)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.dense1(x)
        x = self.dense2(x)
        return x
</code></pre>
<p>Then I invoke forward and estimate the loss for this model, following the usual PyTorch approach:</p>
<pre><code>import torch.optim as optim

model = MNIST_ConvNet()

for X_batch, y_batch in train_loader:
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    optimizer.zero_grad()
    output = model(X_batch)[0]
    loss = nn.CrossEntropyLoss()
    loss = loss(output, y_batch)
</code></pre>
<p>X_batch has the following content:</p>
<pre><code>tensor([[[[-0.4236, -0.4236, -0.4236, ..., -0.4236, -0.4236, -0.4236],
[-0.4236, -0.4236, -0.4236, ..., -0.4236, -0.4236, -0.4236],
[-0.4236, -0.4236, -0.4236, ..., -0.4236, -0.4236, -0.4236],
...,
</code></pre>
<p>And for the line <code>loss(output, y_batch)</code>, I receive the following error:</p>
<blockquote>
<p>RuntimeError: Expected floating point type for target with class probabilities, got Long</p>
</blockquote>
<p>To solve the problem, I tried updating the data type:</p>
<pre><code> self.model(X_batch.type(torch.FloatTensor))[0]
</code></pre>
<p>But this does not work.</p>
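<p>For what it's worth, the shapes suggest a hypothesis: <code>model(X_batch)[0]</code> returns only the first sample's logits, with shape <code>(10,)</code>, which is the same shape as <code>y_batch</code>; when input and target shapes match, <code>CrossEntropyLoss</code> treats the target as class <em>probabilities</em> (which must be float) instead of class indices. A sketch without the indexing:</p>
<pre><code>output = model(X_batch)            # shape (batch, 10) logits, no [0]
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(output, y_batch)    # y_batch stays int64 class indices
</code></pre>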
|
<python><pytorch><floating-point>
|
2025-02-06 09:34:44
| 1
| 1,424
|
Jelly
|
79,417,370
| 17,580,381
|
How can iterating over a BufferedReader of binary data give line number results?
|
<p>Numerous propositions for counting the number of lines in a file can be found <a href="https://stackoverflow.com/questions/845058/how-to-get-the-line-count-of-a-large-file-cheaply-in-python">here</a></p>
<p>One of the suggestions is (effectively):</p>
<pre><code>with open("foo.txt", "rb") as handle:
line_count = sum(1 for _ in handle)
</code></pre>
<p>When I looked at that I thought "That can't be right" but it does indeed produce the correct result.</p>
<p>Here's what I don't understand... The file is opened in binary mode. Therefore, I would expect iterating over <em>handle</em> (which is an _io.BufferedReader) to reveal one byte at a time.</p>
<p>It seems odd to me that a file opened in binary mode could be considered as line-oriented.</p>
<p>I must be missing something fundamental here.</p>
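<p>A quick check of what iteration actually yields:</p>
<pre><code>with open("foo.txt", "rb") as handle:
    for item in handle:
        print(type(item), item)
        break
# <class 'bytes'> b'first line\n'  -- a whole line, not a single byte
</code></pre>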
|
<python>
|
2025-02-06 09:29:31
| 2
| 28,997
|
Ramrab
|
79,417,362
| 10,680,954
|
VSCode Remote SSH: Can't Install 'ms-python.python' (Incompatibility Error & Infinite Loop)
|
<p>I'm working with VSCode on a remote server via SSH and having issues installing the Python and Jupyter extensions. When I try to install ms-python.python, I sometimes get the following error message:</p>
<pre><code>Can't install 'ms-python.python' extension because it is not compatible.
</code></pre>
<p>Other times, the installation starts but gets stuck in an <strong>infinite install loop</strong>.</p>
<p>The same issue occurs with other extensions like Jupyter, but some extensions (e.g., File Browser) install successfully and appear under .vscode-server/extensions. However, after restarting VSCode, it still prompts me to install them on the remote server, suggesting something is wrong.</p>
<p><strong>What I've tried:</strong></p>
<ul>
<li>Reinstalling the VSCode server (rm -rf ~/.vscode-server)</li>
<li>Reinstalling VSCode (brew uninstall visual-studio-code)</li>
</ul>
<p><strong>My VSCode version:</strong></p>
<pre><code>Commit: cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba
Date: 2025-01-16T00:16:19.038Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 23.5.0
</code></pre>
<p><strong>Question:</strong></p>
<p>Does anyone have an idea what could be causing this issue?</p>
|
<python><visual-studio-code><vscode-extensions><vscode-remote>
|
2025-02-06 09:26:45
| 1
| 1,611
|
zwithouta
|
79,417,209
| 6,013,016
|
Clean up unused pip packages that my project is not using
|
<p>I'm using <code>pip</code> to install the necessary packages for my project, and the project works fine on my PC. The next step is deploying it on a client's device. But I see that some packages are installed that are not actually being used.</p>
<p><strong>So, what is the best (optimal) way to clean up those unused pip packages before deployment?</strong> I have seen solutions (such as pipreqs), but they didn't work well.</p>
|
<python><pip>
|
2025-02-06 08:30:56
| 0
| 5,926
|
Scott
|
79,417,096
| 6,734,243
|
how to force the reinsall of my local package with uv?
|
<p>I'm starting to use <code>uv</code> for my CI, as it shows outstanding performance compared to a normal pip installation. For each CI run I (in fact <code>nox</code> does it on my behalf) create a virtual environment that is used to run the tests. In this environment I run the following:</p>
<pre><code>uv pip install .[test]
</code></pre>
<p>My folder is a simple Python package, like this:</p>
<pre><code>my_package/
├── __init__.py
└── logic.py
docs/
├── index.rst
└── conf.py
test/
└── test_something.py
pyproject.toml
noxfile.py
</code></pre>
<p>As <code>uv</code> caches everything, my virtual environment is never updated, and I cannot check new functionality without rebuilding the venv from scratch. How can I make sure that "." gets reinstalled from source every time I run my tests?</p>
<p>I see 3 potential options:</p>
<ul>
<li>always install in editable mode: <code>uv pip install -e .[test]</code> which is not ideal for testing purposes as I don't check if the wheel build includes all the necessary files.</li>
<li>force the reinstall in the CI call: <code>uv pip install --reinstall .[test]</code>; I guess I will lose the caching for all the libs and not only my package</li>
<li>force the reinstall from the pyproject.toml: <code>reinstall-package = ["."]</code> but I don't know if it's going to mess with normal installation from users that are not running the tests</li>
</ul>
<p>Am I missing an alternative, and which one is the best to avoid unwanted side effects in my tests?</p>
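<p>For reference, a sketch of a fourth option I believe exists: the <code>--reinstall-package</code> flag, which as I understand it forces only the named package to be rebuilt while keeping the cache for everything else (the package name below is a stand-in for the real <code>name</code> in <code>pyproject.toml</code>):</p>
<pre><code># hypothetical package name "my-package"; substitute the real [project] name
uv pip install --reinstall-package my-package ".[test]"
</code></pre>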
|
<python><packaging><uv>
|
2025-02-06 07:37:55
| 1
| 2,670
|
Pierrick Rambaud
|
79,417,079
| 7,156,008
|
Django ORM sporadically dies when run in FastAPI endpoints using gunicorn
|
<p>Other related Questions mention using either
<code>django.db</code>'s <code>close_old_connections</code> method or looping through the django connections and calling <code>close_if_unusable_or_obsolete</code> on each.</p>
<p>I've tried doing this as a FastAPI middleware and the issue persists just the same:</p>
<pre><code>class OptOutOfRandomConnectionCrashFastapiMiddleware(BaseHTTPMiddleware):
def __init__(self, app: ASGIApp, *args, **kwargs) -> None:
super().__init__(app)
async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
# Pre-processing
response = await call_next(request)
app = request.app
if not hasattr(app.state, "last_pruned"):
app.state.last_pruned = 0
if app.state.last_pruned + 5 < time.time():
start = time.time()
app.state.last_pruned = start
for conn in connections.all():
conn.close_if_unusable_or_obsolete()
print(f"Pruned all connections. Took {round((time.time() - start) * 1000, 1)} miliseconds")
# Post-processing
return response
</code></pre>
<hr />
<p>I'm using the django ORM outside of a django app, in async fastapi endpoints that I'm running with gunicorn.</p>
<p>All works fine except that once in a blue moon I get these odd errors where a worker seemingly "goes defunct" and is unable to process any requests due to database connections dropping.</p>
<p>I've attached a stack trace below, but it really makes no sense whatsoever to me.</p>
<p>I'm using Postgres without any short timeouts (database side), and the way I set up the connection enables health checks at every request, which I'd assume would get rid of any "stale connection" style issues:</p>
<pre><code>DATABASES = {
"default": dj_database_url.config(
default="postgres://postgres:pass@localhost:5432/db_name",
conn_max_age=300,
conn_health_checks=True,
)
}
</code></pre>
<p>I'm curious if anyone has any idea as to how I'd go about debugging this? It's really making the Django ORM unusable due to apps going down spontaneously.</p>
<p>The stack trace below is an example of the kind of error I get:</p>
<pre><code>Traceback (most recent call last):
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in
run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in
__call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __c
all__
raise exc
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __c
all__
await self.app(scope, receive, _send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call
__
await self.app(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/honeybadger/contrib/asgi.py", line 109, in _run_a
sgi3 File "/home/ubuntu/py_venv/lib/python3.12/site-packages/honeybadger/contrib/asgi.py", line 118, in _run_a
pp
raise exc from None
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/honeybadger/contrib/asgi.py", line 115, in _run_a
pp
return await callback()
^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in
__call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wra
pped_app
raise exc
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wra
pped_app
await app(scope, receive, sender)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wra
pped_app
raise exc
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wra
pped_app
await app(scope, receive, sender)
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_fu
nction
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/stateshift/main.py", line 181, in email_exists
user = await User.objects.filter(email=email).afirst() ^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/stateshift/main.py", line 181, in email_exists
user = await User.objects.filter(email=email).afirst()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/models/query.py", line 1101, in afirst
return await sync_to_async(self.first)()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/asgiref/sync.py", line 522, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/models/query.py", line 1097, in first
for obj in queryset[:1]:
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/models/query.py", line 400, in __iter__
self._fetch_all()
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/models/query.py", line 1928, in _fetch_all
self._result_cache = list(self._iterable_class(self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/models/sql/compiler.py", line 1572, in execute_sql
cursor = self.connection.cursor()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/backends/base/base.py", line 320, in cursor
return self._cursor()
^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/backends/base/base.py", line 297, in _cursor
with self.wrap_database_errors:
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/backends/base/base.py", line 298, in _cursor
return self._prepare_cursor(self.create_cursor(name))
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/py_venv/lib/python3.12/site-packages/django/db/backends/postgresql/base.py", line 429, in create_cursor
cursor = self.connection.cursor()
</code></pre>
|
<python><django><fastapi><django-orm><starlette>
|
2025-02-06 07:31:39
| 0
| 4,144
|
George
|
79,416,950
| 3,825,996
|
Python: Initializing extra members in a subclass without overriding/obfuscating the base class' __init__
|
<p>Say I am for example subclassing Popen which has roughly a million <code>__init__</code> parameters and I want to initialize an extra class member. The standard way (I think) would be this:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen
class MyPopen(Popen):
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self.extra_member = "extra"
</code></pre>
<p>However, now mypy won't check the arguments. I can just call <code>MyPopen(invalid=7)</code> without this getting reported. Also, my IDE will no longer list the parameters at a call site. I don't want to explicitly repeat the base constructor (and all its overloads), as that's a lot of overhead and might also change over time. I am considering initializing extra parameters inside <code>__new__</code>, but that doesn't feel like the right way.</p>
<p>Is there an elegant way to wrap the base class' <code>__init__</code> without repeating the signature and also without obfuscating the signature? Bonus points if the solution also lets me add extra parameters to the construction.</p>
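<p>For context, one pattern I've seen suggested (a sketch only, needing Python 3.10+ or <code>typing_extensions</code> for <code>ParamSpec</code>; I haven't verified it against every checker, and typeshed's overloads on <code>Popen.__init__</code> may complicate it) is a decorator that copies the base signature for the type checker while being a runtime no-op:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen
from typing import Any, Callable, Concatenate, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def copy_signature(
    source: Callable[Concatenate[Any, P], R]
) -> Callable[[Callable[..., R]], Callable[Concatenate[Any, P], R]]:
    # runtime no-op: only the declared type of the decorated function changes
    def decorator(target: Callable[..., R]) -> Callable[Concatenate[Any, P], R]:
        return target
    return decorator

class MyPopen(Popen):
    @copy_signature(Popen.__init__)
    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super().__init__(*args, **kwargs)
        self.extra_member = "extra"
</code></pre>
<p>Note this still gives no typed way to add extra constructor parameters, hence the bonus part of the question.</p>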
|
<python><python-typing><mypy>
|
2025-02-06 06:29:11
| 1
| 766
|
mqnc
|
79,416,850
| 219,153
|
How to reduce verbosity of self-documenting expressions in Python f-strings?
|
<p>This script:</p>
<pre><code>import numpy as np
a = np.array([2, 3, 1, 9], dtype='i4')
print(a)
print(f'{a=}')
</code></pre>
<p>produces:</p>
<pre><code>[2 3 1 9]
a=array([2, 3, 1, 9], dtype=int32)
</code></pre>
<p>Is there a way to get just <code>a=[2 3 1 9]</code> from <code>{a=}</code> f-string as in the standard formatting in the first line?</p>
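<p>For reference, a sketch of the closest thing I'm aware of: the <code>!s</code> conversion can follow <code>=</code>, which switches the echoed value from <code>repr()</code> to <code>str()</code>:</p>
<pre><code>print(f'{a=!s}')   # a=[2 3 1 9]
</code></pre>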
|
<python><numpy-ndarray><f-string>
|
2025-02-06 05:22:39
| 2
| 8,585
|
Paul Jurczak
|
79,416,816
| 11,402,025
|
pydantic version update : Field defined on a base class was overridden by a non-annotated attribute
|
<p>I updated the pydantic version and code is breaking with the error</p>
<p>Field 'allowed_type' defined on a base class was overridden by a non-annotated attribute. All field definitions, including overrides, require a type annotation.</p>
<pre><code>class AllowedType(str, Enum):
YES = "YES"
NO = "NO"
</code></pre>
<pre><code>class APIResponse(BaseModel):
allowed_type: AllowedType
def to_json(self):
return json.dumps(
{
"allowed_type": self.allowed_type,
}
)
</code></pre>
<p>None of the solutions mentioned here work for me : <a href="https://docs.pydantic.dev/2.6/errors/usage_errors/#model-field-overridden" rel="nofollow noreferrer">https://docs.pydantic.dev/2.6/errors/usage_errors/#model-field-overridden</a></p>
|
<python><pydantic><pydantic-v2>
|
2025-02-06 05:00:52
| 0
| 1,712
|
Tanu
|
79,416,737
| 616,728
|
OpenSearch Python - Return hit count by index
|
<p>I have two indices in OpenSearch that I am searching, and I want to get window-function-like totals for each index in addition to the results. They are called <code>users</code> and <code>posts</code>, and I need the total number of hits from <code>posts</code> and the total from <code>users</code>, as well as search results ordered by <code>_score</code> or possibly another sorting mechanism.</p>
<p>I can search both indices by hitting <code>users,posts/_search</code> but the hits object I get back doesn't break out the totals, it just looks like this:</p>
<pre><code>{'total': {'value': 28, 'relation': 'eq'}, 'max_score': 8.996178, '_hits': [ ... ]}
</code></pre>
<p>I want something that will give me totals per index. If it's relevant, I use the OpenSearch-py client and the call I'm currently making looks like this:</p>
<pre><code>client.search(body=body, index='users,posts', pretty=True)
</code></pre>
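<p>For reference, a sketch of one approach I would expect to work (assuming a <code>terms</code> aggregation on the built-in <code>_index</code> field; the query below is a placeholder for the real one):</p>
<pre><code>body = {
    "query": {"query_string": {"query": "something"}},    # your existing query
    "aggs": {"per_index": {"terms": {"field": "_index"}}},
}
response = client.search(body=body, index='users,posts')
for bucket in response["aggregations"]["per_index"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])   # e.g. posts 17 / users 11
</code></pre>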
|
<python><opensearch><amazon-opensearch>
|
2025-02-06 03:33:58
| 0
| 2,748
|
Frank Conry
|
79,416,716
| 6,260,879
|
httpx Library vs curl program
|
<p>I am trying to use the httpx library to POST a file through a website-provided API. The API requires some additional headers. Here is my Python script:</p>
<pre><code>#!/usr/bin/python3
import httpx
files = {'file': ('Test.pdf', open('Test.pdf', 'rb'), 'application/pdf')}
url='https://some.api.domain/Api/1/documents/mandant'
headers = {
    'accept': 'application/json',
    'APPLICATION-ID': '42fxxxxxxx',
    'USER-TOKEN': '167axxxxx',
    'Content-Type': 'multipart/form-data',
}
data = {
    'year': 2024,
    'month': 1,
    'folder': 3,
    'notify': False,
}
response = httpx.post(url, headers=headers, data=data, files=files)
print(response.status_code)
</code></pre>
<p>And here's the curl command line call to do the same:</p>
<pre><code>curl -i -v -X 'POST' 'http://www.some.domain/' \
-H 'accept: application/json' \
-H 'APPLICATION-ID: 42xxxx' \
-H 'USER-TOKEN: 167axxxxx'\
-H 'Content-Type: multipart/form-data'\
-F 'year=2024'\
-F 'month=1'\
-F 'file=@Test.pdf;type=application/pdf'\
-F 'folder=3'\
-F 'notify=false'
</code></pre>
<p>Running the Python script, I get a 412 response, which means some parameters are missing, incomplete or invalid.
Running the <code>curl</code> command works fine: I get a 200 and the file is transferred.</p>
<p>So how do I "translate" the correct curl call into a correct httpx call?</p>
<p>Thanks!</p>
|
<python><curl><post><httpx>
|
2025-02-06 03:22:35
| 1
| 387
|
Christian
|
79,416,657
| 1,481,689
|
__set_name__ outside of classes?
|
<p>Does anyone know how to get <code>__set_name__</code> to work outside of classes?</p>
<pre class="lang-py prettyprint-override"><code>>>> class N:
... def __set_name__(self, owner, name):
... print(f'{name = }')
...
>>> class T:
... n = N()
...
name = 'n'
</code></pre>
<p>Works, but:</p>
<pre class="lang-py prettyprint-override"><code>>>> n = N()
</code></pre>
<p>Fails.</p>
<p>How can I get it to work outside of a class?</p>
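<p>As far as I understand, <code>__set_name__</code> is only invoked automatically by <code>type.__new__</code> while a class body is being created, so at module level nothing ever calls it. A sketch of invoking it by hand (with <code>owner=None</code>, since there is no owning class):</p>
<pre class="lang-py prettyprint-override"><code>>>> n = N()
>>> n.__set_name__(None, 'n')
name = 'n'
</code></pre>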
|
<python>
|
2025-02-06 02:35:09
| 0
| 1,040
|
Howard Lovatt
|
79,416,603
| 1,982,032
|
How can I import the numpy package for a LibreOffice macro that calls Python?
|
<p>"numpy" is installed in for <code>libreoffice24.8</code>:</p>
<pre><code>ls /opt/libreoffice24.8/program | grep numpy
numpy
numpy-2.2.2.dist-info
numpy.libs
</code></pre>
<p>Edit a test python script for macro in <code>libreoffice24.8</code> to call:</p>
<pre><code>vim .config/libreoffice/4/user/Scripts/python/test.py
def output_screen():
import numpy as np
print("call python in macro")
</code></pre>
<p>Run it in macro:</p>
<p><a href="https://i.sstatic.net/e81JAgiv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e81JAgiv.png" alt="enter image description here" /></a></p>
<p>How can I import the numpy package for a LibreOffice macro that calls Python?<br />
I install numpy for libreoffice this way:</p>
<pre><code>sudo python3 -m pip install numpy --target=/opt/libreoffice24.8/program
</code></pre>
<p>Remove all of them and try again:</p>
<pre><code>sudo rm -rf /opt/libreoffice24.8/program/numpy*
</code></pre>
<p><a href="https://i.sstatic.net/mLtYrrLD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLtYrrLD.png" alt="enter image description here" /></a></p>
<p>Show numpy location:</p>
<pre><code>pip show numpy
Name: numpy
Version: 1.24.2
Summary: Fundamental package for array computing in Python
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
Author-email:
License: BSD-3-Clause
Location: /usr/lib/python3/dist-packages
Requires:
Required-by: pandas, rank-bm25
</code></pre>
<p>How can I make LibreOffice refer to that path when calling a Python script in a macro?</p>
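<p>For reference, a sketch of pointing the macro at the system site-packages reported by <code>pip show numpy</code> (the path is taken from the output above; whether binary wheels built for the system Python load cleanly in LibreOffice's bundled Python is an open question):</p>
<pre><code>import sys

SYSTEM_SITE = '/usr/lib/python3/dist-packages'   # from `pip show numpy` above

def output_screen():
    if SYSTEM_SITE not in sys.path:
        sys.path.append(SYSTEM_SITE)
    import numpy as np
    print("numpy", np.__version__, "imported in macro")
</code></pre>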
|
<python><numpy><macros><libreoffice><libreoffice-calc>
|
2025-02-06 01:44:46
| 3
| 355
|
showkey
|
79,416,595
| 7,955,271
|
Why can't Pylance fully infer the return type of this function?
|
<p>I am using Python 3.11.4, and Pydantic 2.10.0. The following is a toy example of a real-world problem.</p>
<p>I have defined two Pydantic Basemodels, and created a list containing instances of both, as follows.</p>
<pre class="lang-py prettyprint-override"><code>import pydantic
class Foo(pydantic.BaseModel):
a: int
b: int
class Bar(pydantic.BaseModel):
c: int
d: int
non_homogeneous_container = [
Foo(a=1, b=5),
Foo(a=7, b=8),
Bar(c=5, d=3),
Bar(c=15, d=12),
]
</code></pre>
<p>I want to write a function that takes in such a list, as well as a <code>target_type</code>, and returns a list containing only those objects from the original list that conform to the specified <code>target_type</code>. I have written such a function, as below, and provided (what I believe to be) the appropriate type annotation for my type checker (Pylance, in VSCode).</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Type, TypeVar
T = TypeVar("T", bound=pydantic.BaseModel)
def extract_objects_of_specified_type(
mixed_up_list: list[Any],
target_type: Type[T],
) -> list[T]:
extracted_list: list[T] = []
for each_object in mixed_up_list:
if isinstance(each_object, target_type):
extracted_list.append(each_object)
return extracted_list
</code></pre>
<p>Invoking my function with my mixed up list of objects, I obtain the expected list containing only objects of the <code>target_type</code>.</p>
<pre class="lang-py prettyprint-override"><code>list_of_only_foos = extract_objects_of_specified_type(
mixed_up_list=non_homogeneous_container, target_type=Foo
)
print (list_of_only_foos)
# Result: [Foo(a=1, b=5), Foo(a=7, b=8)]
</code></pre>
<p>However, I have a yellow underline on the line where I invoke my function; Pylance says the <code>argument type is partially unknown</code>. The bit that's underlined and the associated Pylance report are as below:</p>
<pre class="lang-py prettyprint-override"><code># list_of_only_foos: list[Foo] = extract_objects_of_specified_type(
# mixed_up_list=non_homogeneous_container, target_type=Foo
# ) ^^^^^^^^^^^^^^^^^^^^^^^^^
# Linter report:
# Argument type is partially unknown
# Argument corresponds to parameter "mixed_up_list" in function "extract_objects_of_specified_type"
# Argument type is "list[Unknown]"PylancereportUnknownArgumentType
</code></pre>
<p>The linter report goes away when I annotate my input list of objects as below:</p>
<pre class="lang-py prettyprint-override"><code>non_homogeneous_container: list[Foo | Bar] = [
Foo(a=1, b=5),
Foo(a=7, b=8),
Bar(c=5, d=3),
Bar(c=15, d=12),
]
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>non_homogeneous_container: list[pydantic.BaseModel] = [
Foo(a=1, b=5),
Foo(a=7, b=8),
Bar(c=5, d=3),
Bar(c=15, d=12),
]
</code></pre>
<p>I find this unsatisfactory because it obliges me to know in advance what objects are expected to be in the list I feed to my function.</p>
<p>I have tried to get around this issue by wrapping my function in a class that employs a <code>Generic</code> (below), but this made no difference.</p>
<pre class="lang-py prettyprint-override"><code>class ExtractSpecificType(Generic[T]):
@staticmethod
def extract_objects_of_specified_type(
mixed_up_list: list[Any],
target_type: Type[T],
) -> list[T]:
extracted_list: list[T] = []
for each_object in mixed_up_list:
if isinstance(each_object, target_type):
extracted_list.append(each_object)
return extracted_list
</code></pre>
<p>My function as originally constructed does what I want it to do - my question is about WHY the type checker is unhappy. Is my type annotation imprecise in some way? If so, what should I do differently?</p>
<p>Thank you.</p>
<p><strong>EDIT</strong></p>
<p><a href="https://stackoverflow.com/questions/79416595/why-cant-pylance-fully-infer-the-return-type-of-this-function?cb=1#comment140056967_79416595">user2357112's comment</a> and <a href="https://stackoverflow.com/a/79416666/7955271">InSync's answer</a> indicate that that this behaviour has something to do with the prevailing type-checker settings about when and where to warn about missing/ambiguous type annotations. For context, I have reproduced my VSCode type-checking settings below.</p>
<pre class="lang-json prettyprint-override"><code> "python.analysis.typeCheckingMode": "standard",
"python.analysis.diagnosticSeverityOverrides": {
"reportGeneralTypeIssues": true,
"reportUnknownArgumentType": "warning",
"reportUnknownParameterType": "warning",
"reportMissingTypeArgument ": "warning",
"reportMissingParameterType": "warning",
"reportReturnType": "error",
"reportUnusedImport": "warning",
"reportUnnecessaryTypeIgnoreComment": "error"
},
</code></pre>
|
<python><python-typing><pyright>
|
2025-02-06 01:41:17
| 1
| 1,037
|
Vin
|
79,416,554
| 8,266,189
|
How to wrap a torch.jit model inside a torch Module?
|
<p>I'm trying to call a TorchScript model inside a <code>torch.nn.Module</code> but got an error related to pickle.</p>
<p>Here's the code to reproduce:</p>
<pre><code>import torch
import torch.nn as nn
# A simple base model to create a ScriptModel
class ExampleModel(nn.Module):
def __init__(self, factor: int):
super(ExampleModel, self).__init__()
self.factor = factor
def forward(self, x):
return x * self.factor
# Define a wrapper model with a ModuleDict
class WrapperModel(nn.Module):
def __init__(self, path):
super(WrapperModel, self).__init__()
self.model = torch.jit.load(path)
def forward(self, name: str, x):
return self.model(x)
scripted_model = torch.jit.script(ExampleModel(2))
scripted_model.save("model.jit")
# Initialize the WrapperModel
wrapper = WrapperModel("model.jit")
</code></pre>
<p>And when I try to pickle <code>wrapper</code> with:</p>
<pre><code>import pickle
pickle.dumps(wrapper)
</code></pre>
<p>I got error:</p>
<pre><code>RuntimeError: Tried to serialize object __torch__.___torch_mangle_3.ExampleModel which does not have a __getstate__ method defined!
</code></pre>
<p>Is there a way to call TorchScript model so that it doesn't raise such error?</p>
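<p>For reference, a sketch of one workaround I would try (assuming <code>torch.jit.save</code>/<code>torch.jit.load</code> accept file-like objects, and that <code>nn.Module</code> keeps submodules in <code>self._modules</code>): implement <code>__getstate__</code>/<code>__setstate__</code> on the wrapper so the ScriptModule travels as serialized bytes:</p>
<pre><code>import io
import torch
import torch.nn as nn

class WrapperModel(nn.Module):
    def __init__(self, path):
        super().__init__()
        self.model = torch.jit.load(path)

    def forward(self, name: str, x):
        return self.model(x)

    def __getstate__(self):
        # swap the ScriptModule in _modules for its serialized bytes
        state = self.__dict__.copy()
        modules = state["_modules"].copy()
        buffer = io.BytesIO()
        torch.jit.save(modules.pop("model"), buffer)
        state["_modules"] = modules
        state["_jit_bytes"] = buffer.getvalue()
        return state

    def __setstate__(self, state):
        jit_bytes = state.pop("_jit_bytes")
        self.__dict__.update(state)
        self._modules["model"] = torch.jit.load(io.BytesIO(jit_bytes))
</code></pre>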
|
<python><pytorch><jit><pytorch-lightning><torchscript>
|
2025-02-06 00:56:56
| 0
| 357
|
Ha An Tran
|
79,416,484
| 210,867
|
I can't figure out why `isinstance()` returns `True` for these subclasses
|
<p>I'm using the Python package <a href="https://pypi.org/project/ariadne/" rel="nofollow noreferrer">ariadne</a>, <a href="https://github.com/mirumee/ariadne/tree/0.23.0/ariadne" rel="nofollow noreferrer">v0.23.0</a>.</p>
<p>I wrote a utility to scan my code for instances of <code>ariadne.types.SchemaBindable</code>, but it's also unintentionally picking up the <code>SchemaBindable</code> subclasses that I've imported:</p>
<pre class="lang-py prettyprint-override"><code>ariadne.input.InputType( SchemaBindable )
ariadne.objects.ObjectType( SchemaBindable )
ariadne.scalars.ScalarType( SchemaBindable )
ariadne.unions.UnionType( SchemaBindable )
</code></pre>
<p>I ran a test in a Python shell, and sure enough, <code>isinstance()</code> is returning <code>True</code> when comparing those classes to <code>SchemaBindable</code>:</p>
<pre class="lang-py prettyprint-override"><code>isinstance( ObjectType, SchemaBindable ) -> True
...etc...
</code></pre>
<p><code>SchemaBindable</code> even appears to be an instance of itself:</p>
<pre class="lang-py prettyprint-override"><code>isinstance( SchemaBindable, SchemaBindable ) -> True
</code></pre>
<p>Meanwhile, <code>issubclass()</code> continues to also return <code>True</code>:</p>
<pre class="lang-py prettyprint-override"><code>issubclass( ObjectType, SchemaBindable ) -> True
</code></pre>
<p>Make it make sense.</p>
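<p>For what it's worth, a minimal sketch of a general mechanism that can produce exactly this (assuming <code>SchemaBindable</code> is a <code>@runtime_checkable</code> <code>typing.Protocol</code>, which I have not verified for ariadne 0.23.0): <code>isinstance()</code> against a runtime-checkable protocol only checks attribute presence, and a class object itself carries its methods as attributes:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol, runtime_checkable

@runtime_checkable
class Bindable(Protocol):
    def bind_to_schema(self, schema) -> None: ...

class Impl:
    def bind_to_schema(self, schema) -> None: ...

# The *class* Impl (not an instance) already has a 'bind_to_schema'
# attribute, so the structural check passes:
print(isinstance(Impl, Bindable))       # True
print(isinstance(Bindable, Bindable))   # True, for the same reason
</code></pre>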
|
<python><ariadne-graphql>
|
2025-02-05 23:57:19
| 3
| 8,548
|
odigity
|
79,416,398
| 1,540,660
|
Logger does not inherit config from parent process
|
<p>Consider the following minimal setup:</p>
<pre><code>/mymodule
├── __init__.py
├── main.py
└── worker.py
</code></pre>
<p><code>__init__.py</code> is empty</p>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import logging
import multiprocessing
from mymodule.worker import do_stuff
logging.basicConfig(
format='[%(name)s] [%(levelname)s]: %(message)s',
level=logging.DEBUG,
)
logger = logging.getLogger(__name__)
def main():
logger.debug('I am main. I manage workers')
logger.info('I am main. I manage workers')
logger.warning('I am main. I manage workers')
p = multiprocessing.Process(target=do_stuff)
p.start()
if __name__ == '__main__':
sys.exit( main() )
</code></pre>
<p><code>worker.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import logging
logger = logging.getLogger(__name__)
def do_stuff():
logger.debug(f'I am a worker. I do stuff')
logger.info(f'I am a worker. I do stuff')
logger.error(f'I am a worker. I do stuff')
logger.error(f'Here is my logger: {logger}')
</code></pre>
<p>If I run <code>python -m mymodule.main</code> I get (as expected):</p>
<pre><code>[__main__] [DEBUG]: I am main. I manage workers
[__main__] [INFO]: I am main. I manage workers
[__main__] [WARNING]: I am main. I manage workers
[mymodule.worker] [DEBUG]: I am a worker. I do stuff
[mymodule.worker] [INFO]: I am a worker. I do stuff
[mymodule.worker] [ERROR]: I am a worker. I do stuff
[mymodule.worker] [ERROR]: Here is my logger: <Logger mymodule.worker (DEBUG)>
</code></pre>
<p>But if I just rename <code>/mymodule/main.py</code> to <code>mymodule/__main__.py</code> and then run one of these commands: <code>python -m mymodule.__main__</code> or <code>python -m mymodule</code>, I get this:</p>
<pre><code>[__main__] [DEBUG]: I am main. I manage workers
[__main__] [INFO]: I am main. I manage workers
[__main__] [WARNING]: I am main. I manage workers
I am a worker. I do stuff
Here is my logger: <Logger mymodule.worker (WARNING)>
</code></pre>
<p>It is pretty clear to me that, in the second case, the logger of <code>mymodule.worker</code> did not inherit the configuration done by <code>logging.basicConfig</code>. But why? There is no change of code, just a file renamed from <code>main.py</code> to <code>__main__.py</code>.</p>
<p>I would like to have a <code>__main__.py</code> so I can easily run the module by its name, and also to keep a <code>logging.basicConfig</code> that is inherited by imported submodules and propagates correctly to subprocesses.</p>
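<p>For completeness, the defensive pattern I know of is to configure logging inside the child's entry point itself, so it no longer depends on what the start method re-imports (a sketch; <code>configure_logging</code> and <code>worker_entry</code> are names I made up):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import multiprocessing

def configure_logging():
    logging.basicConfig(
        format='[%(name)s] [%(levelname)s]: %(message)s',
        level=logging.DEBUG,
    )

def worker_entry():
    configure_logging()          # runs in the child process, whatever the start method
    from mymodule.worker import do_stuff
    do_stuff()

p = multiprocessing.Process(target=worker_entry)
p.start()
</code></pre>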
|
<python><logging><multiprocessing><python-multiprocessing><python-logging>
|
2025-02-05 23:02:21
| 2
| 336
|
Art Gertner
|
79,416,288
| 11,202,233
|
Django "compilemessages" error: Can't find msgfmt (GNU gettext) on Ubuntu VPS
|
<p>I am trying to compile translation messages in my Django project by running the following command:</p>
<pre><code>python manage.py compilemessages
</code></pre>
<p>However, I get this error:</p>
<pre><code>CommandError: Can't find msgfmt. Make sure you have GNU gettext tools 0.15 or newer installed.
</code></pre>
<p>So then I tried installing gettext.</p>
<pre><code>sudo apt update && sudo apt install gettext -y
</code></pre>
<p>After installation, I still get the same error when running python manage.py compilemessages.</p>
<p>I checked whether msgfmt is installed with <code>msgfmt --version</code>, but it says: <code>command not found</code></p>
<p>OS is Ubuntu and Django version is 4.2.5.</p>
<p>How can I resolve this issue and make Django recognize msgfmt? Any help would be appreciated!</p>
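<p>For reference, a few checks worth running (a sketch; the paths are the usual Debian/Ubuntu defaults):</p>
<pre><code>dpkg -L gettext | grep msgfmt   # where the package put the binary (usually /usr/bin/msgfmt)
ls -l /usr/bin/msgfmt           # confirm it exists
echo $PATH                      # confirm that directory is on PATH for the shell running manage.py
hash -r                         # clear the shell's command cache and retry
</code></pre>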
|
<python><django>
|
2025-02-05 21:55:46
| 1
| 487
|
Genesis Solutions
|
79,416,254
| 1,406,168
|
Applications insights in an azure function app - metrics and custom dimensions missing
|
<p>I am having some trouble adding logs from an Azure function app to Application Insights. I am using this code:</p>
<pre><code>import azure.functions as func
import datetime
import logging
import requests
from settings import settings
from services.exchangerate_service import ExchangeRateService
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics
credential = DefaultAzureCredential()
configure_azure_monitor(
connection_string="InstrumentationKey=xx-xx-xx-xx-xx;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/;LiveEndpoint=https://westeurope.livediagnostics.monitor.azure.com/;ApplicationId=xx-xx-xx-xx-xx",
credential=credential
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
app = func.FunctionApp()
@app.timer_trigger(schedule=settings.IBAN_CRON, arg_name="myTimer", run_on_startup=True,
use_monitor=False)
def timer_trigger(myTimer: func.TimerRequest) -> None:
if myTimer.past_due:
logging.info('The timer is past due!')
logger.info("Segato5", extra={"custom_dimension": "Kam_value","test1": "val1"})
meter = metrics.get_meter_provider().get_meter(__name__)
counter = meter.create_counter("segato2")
counter.add(8)
</code></pre>
<p>I get traces but no custom dimensions and no metrics:
<a href="https://i.sstatic.net/jmlhZxFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jmlhZxFd.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/oOBxH1A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oOBxH1A4.png" alt="enter image description here" /></a></p>
|
<python><azure-functions><azure-application-insights>
|
2025-02-05 21:35:08
| 1
| 5,363
|
Thomas Segato
|
79,416,003
| 2,758,524
|
Lowpass filter is slower on GPU than CPU in PyTorch
|
<p>I have been trying out some of the Torchaudio functionalities and I can't seem to figure out why <code>lowpass_biquad</code> runs slower on the GPU than on the CPU. This is also true for other effects like phaser, flanger, and overdrive, which are even slower. Here I am pasting the example for the lowpass filter, but I apply the other effects the same way to obtain the measurements. The example code is taken from <a href="https://github.com/pytorch/audio/issues/1408" rel="nofollow noreferrer">this issue</a>, which seems to have been resolved, but it's still slower on the GPU.</p>
<pre><code>import time
import torch
from torchaudio.functional import lowpass_biquad
gpu_device = torch.device('cuda:0')
cpu_device = torch.device('cpu')
seconds = 1000
sample_rate = 44100
cutoff_freq = 1000.
Q = .7
# Run in cpu
x = torch.rand(sample_rate * seconds, device=cpu_device)
begin = time.time()
y = lowpass_biquad(x, sample_rate, cutoff_freq, Q)
print(f'Run in cpu: {time.time() - begin}')
# Run in gpu
x = torch.rand(sample_rate * seconds, device=gpu_device)
begin = time.time()
y = lowpass_biquad(x, sample_rate, cutoff_freq, Q)
torch.cuda.synchronize()
print(f'Run in gpu: {time.time() - begin}')
</code></pre>
<hr />
<pre><code>Run in cpu: 1.6084413528442383
Run in gpu: 6.183292865753174
</code></pre>
<p>For example, for the overdrive effect the GPU is more than 1000x slower. It would be understandable if Torchaudio didn't have GPU implementations of these effects, but the documentation seems to suggest it does. Am I doing something wrong?</p>
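<p>For reference, the timing pattern I understand to be standard for CUDA benchmarks (a sketch: warm-up runs exclude one-off launch and initialisation costs; separately, a biquad is a sequential IIR filter, which limits GPU parallelism regardless):</p>
<pre><code>import time
import torch
from torchaudio.functional import lowpass_biquad

x = torch.rand(44100 * 1000, device='cuda:0')

for _ in range(3):                       # warm up before measuring
    lowpass_biquad(x, 44100, 1000., .7)
torch.cuda.synchronize()

begin = time.time()
y = lowpass_biquad(x, 44100, 1000., .7)
torch.cuda.synchronize()
print(f'Run in gpu (after warm-up): {time.time() - begin}')
</code></pre>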
|
<python><audio><pytorch><torchaudio>
|
2025-02-05 19:35:02
| 1
| 543
|
orglce
|
79,415,896
| 1,788,656
|
Pandas dataframe info() vs info
|
<p>Any ideas what the difference is between <code>pandas.DataFrame.info</code> and <code>pandas.DataFrame.info()</code>? Each one seems to work, yet they yield different output.</p>
<p>Using the example available on the <code>pandas.DataFrame.info</code> <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.info.html" rel="nofollow noreferrer">page</a>,</p>
<pre><code>import pandas as pd
int_values = [1, 2, 3, 4, 5]
text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
df = pd.DataFrame({"int_col": int_values,
"text_col": text_values,
"float_col": float_values})
</code></pre>
<p>Calling <code>df.info</code> yields</p>
<pre><code><bound method DataFrame.info of int_col text_col float_col
0 1 alpha 0.00
1 2 beta 0.25
2 3 gamma 0.50
3 4 delta 0.75
4 5 epsilon 1.00>
</code></pre>
<p>while calling <code>df.info()</code> yields</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 int_col 5 non-null int64
1 text_col 5 non-null object
2 float_col 5 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 252.0+ bytes
</code></pre>
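<p>For reference, a short sketch of what each expression is (plain Python semantics, nothing pandas-specific):</p>
<pre><code>print(type(df.info))   # <class 'method'> -- df.info is only a reference to the method
df.info                # echoes the bound method's repr, which embeds the frame
df.info()              # the parentheses actually call the method, printing the summary
</code></pre>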
<p>Thanks</p>
|
<python><python-3.x><pandas><database>
|
2025-02-05 18:57:17
| 0
| 725
|
Kernel
|
79,415,833
| 375,666
|
Aligning a point cloud of 2D Mask that was generated from depth camera in ZED
|
<p>I'm trying an MVP sample that aligns a 2D mask, which is generated by converting it to a point cloud using the depth map from a ZED stereo camera.</p>
<p>I'm trying to align a 3D object file to that point cloud using C# and Unity, but for now I'm testing it using Python.
The 3D model and the PCD file are here: <a href="https://drive.google.com/file/d/1rUA1j22z1WOd3tEcPZwN7n58lB4nNyES/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1rUA1j22z1WOd3tEcPZwN7n58lB4nNyES/view?usp=sharing</a>
(they are about 2 MB in size).
I want to align this
<a href="https://i.sstatic.net/TzMqMhJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TzMqMhJj.png" alt="enter image description here" /></a></p>
<p>with
<a href="https://i.sstatic.net/9Q6srxSK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Q6srxSK.png" alt="enter image description here" /></a></p>
<p>This is the result, and here is my attempt</p>
<p>The mask is generated from that scene
<a href="https://i.sstatic.net/BHbwjZKz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHbwjZKz.png" alt="enter image description here" /></a></p>
<p>Here is my attempt:</p>
<pre><code>import open3d as o3d
import numpy as np
def load_point_clouds(obj_file, pcd_file, num_points=10000):
source = o3d.io.read_triangle_mesh(obj_file)
source_pcd = source.sample_points_uniformly(number_of_points=num_points)
target = o3d.io.read_point_cloud(pcd_file)
return source_pcd, target
def filter_points_above_level(pcd, z_level):
points = np.asarray(pcd.points)
filtered_points = points[points[:, 1] <= z_level] # Filter based on z-axis
filtered_pcd = o3d.geometry.PointCloud()
filtered_pcd.points = o3d.utility.Vector3dVector(filtered_points)
return filtered_pcd
def find_dense_region(pcd, voxel_size=0.05):
# Get points array
points = np.asarray(pcd.points)
# Find bounding box
min_point = np.min(points, axis=0)
max_point = np.max(points, axis=0)
# Create voxel grid dimensions
dims = ((max_point - min_point) / voxel_size).astype(int) + 1
voxel_grid = np.zeros(dims)
# Assign points to voxels
indices = ((points - min_point) / voxel_size).astype(int)
for idx in indices:
voxel_grid[tuple(idx)] += 1
# Find densest region
dense_mask = voxel_grid > np.mean(voxel_grid[voxel_grid > 0])
if not np.any(dense_mask):
return None, None
dense_indices = np.argwhere(dense_mask)
dense_min = dense_indices.min(axis=0) * voxel_size + min_point
dense_max = dense_indices.max(axis=0) * voxel_size + min_point
return dense_min, dense_max
def extract_and_align(source, target, min_bound, max_bound):
# Extract region
target_points = np.asarray(target.points)
mask = np.all((target_points >= min_bound) & (target_points <= max_bound), axis=1)
roi = o3d.geometry.PointCloud()
roi.points = o3d.utility.Vector3dVector(target_points[mask])
# Scale source
source_scale = np.linalg.norm(np.asarray(source.points), axis=1).mean()
target_scale = np.linalg.norm(np.asarray(roi.points), axis=1).mean()
scale = target_scale / source_scale * 0.5 # Reduced scale factor
source_points = np.asarray(source.points) * scale
source.points = o3d.utility.Vector3dVector(source_points)
# Align centers
source.translate(roi.get_center() - source.get_center())
# ICP
reg = o3d.pipelines.registration.registration_icp(
source, roi, 0.05, np.eye(4),
o3d.pipelines.registration.TransformationEstimationPointToPoint(),
o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=200)
)
source.transform(reg.transformation)
return source
def main():
source, target = load_point_clouds("untitled.obj", "ahmed.pcd")
source.paint_uniform_color([1, 0, 0])
target.paint_uniform_color([0, 1, 0])
target = filter_points_above_level(target, 10)
o3d.visualization.draw_geometries([target])
min_bound, max_bound = find_dense_region(target)
if min_bound is not None:
aligned_source = extract_and_align(source, target, min_bound, max_bound)
o3d.visualization.draw_geometries([aligned_source, target])
else:
print("Could not find suitable region")
if __name__ == "__main__":
main()
</code></pre>
|
<python><c#><3d><computer-vision><open3d>
|
2025-02-05 18:34:50
| 0
| 1,919
|
Andre Ahmed
|
79,415,797
| 1,406,168
|
ImportError: cannot import name 'AccessTokenInfo' from 'azure.core.credentials'
|
<p>I have a FastAPI app deployed to an Azure Function. Locally everything works as expected; however, when deployed, I get the following error. The error already occurs at import time:</p>
<pre><code># if I remove this line it works
from azure.identity import DefaultAzureCredential
</code></pre>
<p>Error:</p>
<blockquote>
<p>2025-02-05T18:13:34.960344159Z return util.import_app(self.app_uri)<br />
2025-02-05T18:13:34.960350559Z File "/opt/python/3.12.2/lib/python3.12/site-packages/gunicorn/util.py", line 371, in import_app<br />
2025-02-05T18:13:34.960353959Z mod = importlib.import_module(module)<br />
2025-02-05T18:13:34.960360459Z File "/opt/python/3.12.2/lib/python3.12/importlib/__init__.py", line 90, in import_module<br />
2025-02-05T18:13:34.960363959Z return _bootstrap._gcd_import(name[level:], package, level)<br />
2025-02-05T18:13:34.960370759Z File "<frozen importlib._bootstrap>", line 1387, in _gcd_import<br />
2025-02-05T18:13:34.960374859Z File "<frozen importlib._bootstrap>", line 1360, in _find_and_load<br />
2025-02-05T18:13:34.960378160Z File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked<br />
2025-02-05T18:13:34.960381660Z File "<frozen importlib._bootstrap>", line 935, in _load_unlocked<br />
2025-02-05T18:13:34.960384960Z File "<frozen importlib._bootstrap_external>", line 995, in exec_module<br />
2025-02-05T18:13:34.960388360Z File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed<br />
2025-02-05T18:13:34.960392960Z File "/tmp/8dd460fb826ed55/main.py", line 12, in <module><br />
2025-02-05T18:13:34.960400760Z from azure.identity import DefaultAzureCredential<br />
2025-02-05T18:13:34.960404260Z File "/tmp/8dd460fb826ed55/antenv/lib/python3.12/site-packages/azure/identity/__init__.py", line 10, in <module><br />
2025-02-05T18:13:34.960407860Z from ._credentials import (<br />
2025-02-05T18:13:34.960411060Z File "/tmp/8dd460fb826ed55/antenv/lib/python3.12/site-packages/azure/identity/_credentials/__init__.py", line 5, in <module><br />
2025-02-05T18:13:34.960414560Z from .authorization_code import AuthorizationCodeCredential<br />
2025-02-05T18:13:34.960417860Z File "/tmp/8dd460fb826ed55/antenv/lib/python3.12/site-packages/azure/identity/_credentials/authorization_code.py", line 7, in <module><br />
2025-02-05T18:13:34.960421260Z from azure.core.credentials import AccessToken, AccessTokenInfo, TokenRequestOptions<br />
2025-02-05T18:13:34.960424860Z ImportError: cannot import name 'AccessTokenInfo' from 'azure.core.credentials' (/agents/python/azure/core/credentials.py)</p>
</blockquote>
<p><code>Main.py</code>:</p>
<pre><code>import os
from typing import Union
from fastapi import FastAPI, Security
import uvicorn
from utils.azure.fast_api_auth import FastAPIAuth,FastAPIAuthOptions
from routers.router1 import router1_router
from enums import fast_api_metadata
from settings import settings
import logging
from azure.identity import DefaultAzureCredential
print("TEST1")
fast_api_options = FastAPIAuthOptions(
title="API",
tenant_id=settings.xx,
openapi_tags=xx,
client_id=settings.fastapi_settings.xx,
)
fast_api_auth = FastAPIAuth(fast_api_options)
app = fast_api_auth.get_fast_api_app()
app.include_router(
router1_router,
prefix="/api/router1",
dependencies=[Security(fast_api_auth.azure_scheme)],
tags=["vessels"]
)
@app.get("/log", dependencies=[Security(fast_api_auth.azure_scheme)])
async def root():
return {"whoIsTheBest": "!!"}
@app.get("/", dependencies=[Security(fast_api_auth.azure_scheme)])
def read_root():
return {"Hello": "World"}
@app.get("/settings")
def read_settings():
return settings
@app.get("/items/{item_id}")
def read_item(item_id: int, q: Union[str, None] = None):
return {"item_id": item_id, "q": q}
if __name__ == '__main__':
uvicorn.run(
'main:app',
host="0.0.0.0",
port=8000,
reload=True,
log_level="info"
)
</code></pre>
<p><code>Requirements.txt</code>:</p>
<pre><code>annotated-types==0.7.0
anyio==4.7.0
certifi==2024.8.30
cffi==1.17.1
click==8.1.7
colorama==0.4.6
cryptography==44.0.0
fastapi==0.115.6
fastapi-azure-auth==5.0.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
pycparser==2.22
pydantic==2.10.3
pydantic-settings==2.6.1
pydantic_core==2.27.1
PyJWT==2.10.1
python-dotenv==1.0.1
sniffio==1.3.1
starlette==0.41.3
typing_extensions==4.12.2
uvicorn==0.32.1
azure-monitor-opentelemetry
databricks
databricks.sdk
databricks-sql-connector
pandas
sqlalchemy
numpy
azure-identity
opencensus-ext-azure
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-fastapi
</code></pre>
|
<python><azure-active-directory><azure-functions><azure-managed-identity>
|
2025-02-05 18:22:41
| 1
| 5,363
|
Thomas Segato
|
79,415,691
| 405,017
|
Minimal way to set PYTHONPATH for developing python in VS Code and Jupyter notebook server
|
<p><em>Related questions that do not apply:</em></p>
<ul>
<li><a href="https://stackoverflow.com/q/58735256/405017">Set PYTHONPATH for local Jupyter Notebook in VS Code</a> - does not apply to running Jupyter server.</li>
<li><a href="https://stackoverflow.com/q/48730312/405017">How to set pythonpath at startup when running jupyter notebook server</a> - no answer</li>
<li><a href="https://stackoverflow.com/q/57118938/405017">Python: Module Not Found in Jupyter Notebook</a> - solution involves absolute path</li>
<li><a href="https://stackoverflow.com/q/51852864/405017">Python module not found <project> only on VPS. works fine on local machine</a> - solution involves absolute path and modifying every notebook</li>
</ul>
<hr />
<p>I'm helping to organize some python code with a team at work. For "reasons" the project has multiple subdirectories and so needs <code>PYTHONPATH</code> or equivalent to add those subdirectories to work. For example, with this project setupβ¦</p>
<pre><code>project/
.venv/
foo/
bar.py
jim/
jam.py
</code></pre>
<p>β¦we want Jupyter notebooks to work for the code:</p>
<pre class="lang-py prettyprint-override"><code>import jim
import jam
</code></pre>
<p>We want these notebooks to work both (a) within VS Code, and (b) when starting a Jupyter notebook server and connecting to it from Google Colab.</p>
<p>Desires:</p>
<ol>
<li>Little-to-no setup work needed per user. (Ideally, they just clone the project and run a launch command or manual script from within the .venv to get Jupyter server running.)</li>
<li>No absolute paths hard-coded anywhere. (Multiple copies of the project sometimes exist on disk, are added and removed; a user needs to be able to launch Jupyter notebook servers for various locations.)</li>
<li>Specify the <code>foo</code> and <code>bar</code> paths in <strong>as few places as possible</strong>; ideally one. (There are many, many places where extra paths can be specified within the ecosystem of Python, Jupyter, VS Code, pytest, pylint, etc.)</li>
<li>No need to pollute every single notebook file by adding <code>sys.path</code> modifications at the top of each.</li>
<li>Live development is possible. The developers are not installing this project and then using the notebooks, they are iteratively using the notebooks and modifying core library code that affects the next notebook evaluation.</li>
</ol>
<hr />
<p>Within VS Code I've gotten the notebooks working by:</p>
<ol>
<li><p>Adding <code>.env</code> file at the project root with the contents:</p>
<pre><code>PYTHONPATH=foo:bar
</code></pre>
<ul>
<li><em>The Python extension has a default setting:</em><br />
<code>"python.envFile" : "${workspaceFolder}/.env"</code></li>
</ul>
</li>
<li><p>Ensuring all notebooks are run (in VS Code) from the root directory by changing the setting <code>jupyter.notebookFileRoot</code> from the default <code>${fileDirname}</code> to <code>${workspaceFolder}</code>.</p>
</li>
</ol>
<hr />
<p>I've also had to add the locations in the <code>pyproject.toml</code> file for testing:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.pytest.ini_options]
pythonpath = [
"foo",
"bar",
]
</code></pre>
<hr />
<p>Nowβ¦how can I make it so that all the developers can launch a Jupyter notebook server for the project, inside the project's venv, that gets <code>PYTHONPATH</code> set correctly? It can be a single <code>launch_jupyter.sh</code> script, but I'd prefer not to have to maintain the same list of <code>foo:bar</code> in that script file. (Because it's not DRY, and the actual directory names are longer and more than just a couple.)</p>
|
<python><jupyter-notebook><jupyter><pythonpath>
|
2025-02-05 17:44:00
| 1
| 304,256
|
Phrogz
|
79,415,456
| 12,281,892
|
How to swap background colour of stripes in arviz.plot_forest
|
<p>This might be a simple question but I can't figure it out. In arviz's <a href="https://python.arviz.org/en/latest/api/generated/arviz.plot_forest.html" rel="nofollow noreferrer">arviz.plot_forest</a>, how can I swap the order of the shaded backgrounds? For instance, in this example figure from their docs, how can I start with grey and have the odd rows white? Or how do I define and customise this formatting?</p>
<p><a href="https://i.sstatic.net/8nj4pYTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8nj4pYTK.png" alt="enter image description here" /></a></p>
<p>Thanks.</p>
<hr />
<p>Edit after @Onuralp Arslan's answer (6.2.).</p>
<p>The issue I'm facing is that there remains some colour from the original plot, so repainting it to other colours (like red and yellow) works but painting them white for some reason doesn't:</p>
<pre class="lang-py prettyprint-override"><code>non_centered_data = az.load_arviz_data('non_centered_eight')
centered_data = az.load_arviz_data('centered_eight')
axes = az.plot_forest([non_centered_data, centered_data],
model_names = ["non centered eight", "centered eight"],
kind='forestplot',
var_names=["^the"],
filter_vars="regex",
combined=True,
figsize=(9, 7))
axes[0].set_title('Estimated theta for 8 schools models')
# flip the grey and white
ax=axes[0]
y_ticks = ax.get_yticks()
# to calculate where to color / size of plot
y_min, y_max = ax.get_ylim()
total_height = y_max - y_min
num_rows = len(y_ticks)
row_height = total_height / num_rows
# even odd alternate
for i, y in enumerate(y_ticks):
bottom = y - row_height / 2
top = y + row_height / 2
if i % 2 == 0:
ax.axhspan(bottom, top, color='white', alpha=1, zorder=0)
else:
ax.axhspan(bottom, top, color='lightgrey', alpha=0.2, zorder=-1)
ax.set_axisbelow(True)
</code></pre>
<p>produces:
<a href="https://i.sstatic.net/The8vLJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/The8vLJj.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><arviz>
|
2025-02-05 16:27:06
| 2
| 2,550
|
My Work
|
79,415,185
| 1,422,096
|
How to automate "Retry" on error when copying a file with the Windows Explorer COM model?
|
<p>I'm automating specific copy operations (that can't be done via normal filesystem copy) with Windows Explorer COM:</p>
<pre><code>import pythoncom
from win32comext.shell import shell
fo = pythoncom.CoCreateInstance(shell.CLSID_FileOperation, None, pythoncom.CLSCTX_ALL, shell.IID_IFileOperation)
fo.CopyItem(src, dest)
fo.PerformOperations()
</code></pre>
<p>Once every thousands of files, I get:</p>
<blockquote>
<p><a href="https://learn.microsoft.com/fr-fr/windows/win32/uianimation/uianimation-error-codes" rel="nofollow noreferrer">Error 0x802a0006</a></p>
</blockquote>
<blockquote>
<p>UI_E_VALUE_NOT_DETERMINED
0x802A0006
The requested value cannot be determined.</p>
</blockquote>
<p>Doing "Retry" and checking the checkbox "Do this for future files" on the dialog box does not help for future files, so it's a problem when automating.</p>
<p>Question:</p>
<p><strong>1. Does using <code>shell.FOF_NOCONFIRMATION</code> make the dialog choose automatically "Retry", or "Ignore" or "Cancel"? Which one is correct?</strong></p>
<p><strong>2. Is there a way to automate retrying without dialog? Without doing manually with a <code>for _ in range(MAX_RETRIES): try: ... except: ...</code></strong></p>
<p><a href="https://i.sstatic.net/88hC7zTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/88hC7zTK.png" alt="enter image description here" /></a></p>
|
<python><windows><com><pywin32><file-copying>
|
2025-02-05 14:57:12
| 1
| 47,388
|
Basj
|
79,415,141
| 14,463,396
|
Pandas convert string column to bool but convert typos to false
|
<p>I have an application that takes an input spreadsheet filled in by the user. While fixing a bug, I've just noticed that in one True/False column, they've written <code>FLASE</code> instead of <code>FALSE</code>. I'm trying to build in workarounds for as much user error as I can, as the users of this app aren't very technical, so I was wondering if there's a way to convert this column to type bool but set any typos to False? I appreciate this would also set something like TURE to False too.</p>
<p>For example:</p>
<pre><code>df = pd.DataFrame({'bool_col':['True', 'Flase', 'False', 'True'], 'foo':[1,2,3,4]})
</code></pre>
<p>Running <code>df['bool_col'] = df['bool_col'].astype(bool)</code> returns True for everything (as they're all non-empty strings); however, I would like it to return True, False, False, True.</p>
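<p>For reference, a minimal sketch of the string-comparison approach (anything that doesn't spell "true" after normalisation, typos included, becomes False):</p>
<pre><code>df['bool_col'] = df['bool_col'].str.strip().str.lower().eq('true')
print(df['bool_col'].tolist())   # [True, False, False, True]
</code></pre>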
|
<python><pandas><boolean>
|
2025-02-05 14:45:50
| 2
| 3,395
|
Emi OB
|
79,414,997
| 9,947,412
|
Dagster: preserve multiple sklearn model in asset
|
<p>I have a linear regression asset in Dagster that uses previously computed data and sklearn's LinearRegression (Python 3.10 here).</p>
<p>For each of my input columns (each representing a country), I want to fit a linear regression model.</p>
<p>Everything works fine. My question is about <em>outputting</em> these models (or should I use Dagster metadata?).</p>
<p>I basically want a train asset and a forecast asset, and for this I want to return the models trained in the <em>train asset</em> and load them in the <em>forecast asset</em>. A solution could be to save them locally, but I want to use Dagster exclusively.</p>
<p>Also, I would like to save plenty of metadata (score, RMSE) for each model into the train asset's metadata.</p>
<p>Here is my code:</p>
<pre><code>@asset(deps=[])
def train_linear_regression(duckdb: DuckDBResource):
"""Use pivot table with time serie data to forecast.
Used Linear Regression.
"""
# Setting up query.
query = "SELECT * FROM pivot_table_model"
# Execute the query.
with duckdb.get_connection() as conn:
df = conn.execute(query).df()
output = {}
for country_name in df.drop(columns=["year"]).columns:
# Setting Y.
Y = df.loc[:, country_name] # Retrieving population - pd.Series.
# Preparing linear model.
linear_regression = LinearRegression()
# Fitting the model.
linear_regression.fit(X, Y)
# Scoring.
score = linear_regression.score(X, Y)
output[country_name] = {
"model": linear_regression,
"score": float(score),
"plot": generate_plot(df),
}
</code></pre>
<p>What should I return?</p>
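<p>For reference, a sketch of one direction (assuming the default IO manager, which pickles return values -- fitted <code>LinearRegression</code> objects are picklable -- and assuming <code>dagster.Output</code> for attaching metadata; the metadata keys and country key are made up):</p>
<pre><code>from dagster import asset, Output

@asset
def train_linear_regression(duckdb: DuckDBResource):
    ...  # fit models as above, building `output`
    models = {name: info["model"] for name, info in output.items()}
    return Output(
        models,
        metadata={f"score_{name}": info["score"] for name, info in output.items()},
    )

@asset
def forecast(train_linear_regression):
    # the upstream asset's return value is loaded back by the IO manager
    model = train_linear_regression["France"]   # hypothetical country key
    ...
</code></pre>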
|
<python><scikit-learn><dagster>
|
2025-02-05 13:53:24
| 0
| 907
|
PicxyB
|
79,414,980
| 2,471,910
|
Can I re-use browser authentication with python REST?
|
<p>I am trying to call a REST API with Python. I can invoke the API successfully from the browser, but I get authentication errors when doing so from Python or from the command line.</p>
<p>I've already authenticated myself with the brower in order to be able to access the API. It is a complicated multi-factor auth which I don't want to have to do via python, if I even could.</p>
<p>So my question: Is there a way to extract the token from the browser session and re-use that in my python application?</p>
<p>Or maybe I am already authenticated and I just need to identify myself?</p>
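<p>For reference, a sketch of the copy-the-token approach (the header name and token are placeholders; they would come from the browser's DevTools Network tab for a request that works):</p>
<pre><code>import requests

# assumption: the API uses a bearer token visible in the request headers
headers = {"Authorization": "Bearer eyJ..."}   # copied from the browser session
resp = requests.get("https://example.com/api/resource", headers=headers)
print(resp.status_code)

# if the site relies on session cookies instead, the Cookie header can be
# copied the same way while the browser session remains valid
</code></pre>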
<p>Thanks!</p>
|
<python><rest><authentication><multi-factor-authentication>
|
2025-02-05 13:47:27
| 1
| 2,077
|
Trenin
|
79,414,937
| 1,635,523
|
How to make python print bytes strings as hex code all the way?
|
<p>So I want to turn a positive integer into a little endian byte string of length 2.</p>
<p>Easy! For example, <code>200</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> b = int.to_bytes(200, 2, 'little')
b'\xc8\x00'
</code></pre>
<p>However, as soon as we take one that may be interpreted as some ASCII symbol, like, e.g., <code>100</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> b = int.to_bytes(100, 2, 'little')
b'd\x00'
>>> print(b) #<< Does not help
b'd\x00'
</code></pre>
<p>Ugly! Barely readable, in my opinion! Is there a straightforward way to tell
Python to make it hex-style all the way? Like so:</p>
<pre class="lang-py prettyprint-override"><code>>>> b = int.to_bytes(100, 2, 'little')
b'\x64\x00'
</code></pre>
<p><em>P.S.: For clarification of the question: It is an easy task to write a function that will do just what I want, printing the <code>bytes</code> as desired, <code>b'\x64\x00'</code>-style-all-the-way-hex.</em></p>
<p><em>The question here is: is there an option to get the Python interpreter to do this for me, like <code>logging.basicConfig(..)</code>, only <code>interpreter.basicNumericFormat(bytes, ..)</code>? And if so: how would one achieve that?</em></p>
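<p>For reference, a minimal sketch of the closest mechanism I know of: the interactive interpreter routes every echoed result through <code>sys.displayhook</code>, which can be replaced (e.g. from a <code>PYTHONSTARTUP</code> file); this only affects the REPL echo, not <code>print()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import builtins
import sys

def hex_displayhook(value):
    if isinstance(value, bytes):
        # force \xNN escapes for every byte, printable or not
        print("b'" + "".join(f"\\x{byte:02x}" for byte in value) + "'")
        builtins._ = value
    else:
        sys.__displayhook__(value)

sys.displayhook = hex_displayhook
# afterwards: int.to_bytes(100, 2, 'little') echoes as b'\x64\x00'
</code></pre>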
|
<python>
|
2025-02-05 13:32:06
| 1
| 1,061
|
Markus-Hermann
|
79,414,916
| 29,295,031
|
how to implement custom plotly bubble chart
|
<p>I'm new to the plotly library, and I want to visualize a dataframe as a plotly bubble chart.</p>
<p>here's the code :</p>
<pre><code>import plotly.graph_objects as go
import plotly.express as px  # px.scatter lives in plotly.express, not plotly.graph_objects
import streamlit as st
import pandas as pd
data = {'x': [1.5, 1.6, -1.2],
'y': [21, 16, 46],
'circle-size': [10, 5, 6],
'circle-color': ["red","blue","green"]
}
# Create DataFrame
df = pd.DataFrame(data)
st.dataframe(df)
fig = px.scatter(df, x="x", y="y", color="circle-color",
size='circle-size')
fig.show()
st.plotly_chart(fig)
</code></pre>
<p>I have two problems. The first is how to hook the dataframe (df) up to plotly to see the data, and the second is that I'm looking to implement a custom bubble chart, something with colours for negative values like this:
<a href="https://i.sstatic.net/cHBj6PgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cHBj6PgY.png" alt="enter image description here" /></a></p>
<p>Can anyone please help solve these problems?
Thanks.</p>
|
<python><pandas><plotly><streamlit>
|
2025-02-05 13:25:29
| 1
| 401
|
user29295031
|
79,414,902
| 9,359,102
|
MPEG-DASH(.mpd) implementation similar to my existing methods written currently for .m3u8
|
<p>I have the following 2 methods written in python which have been implemented for .m3u8 playlist files</p>
<ul>
<li>Make a master manifest.</li>
<li>Make a Feed manifest.</li>
</ul>
<p>My purpose is to do the same for .mpd manifest files as well. The eventual desired outcome is to have 4 methods: the 2 listed above for .m3u8 and 2 for .mpd manifest files.</p>
<p>I'm currently using the following package for working with .m3u8 (HLS) playlist files</p>
<p><a href="https://github.com/globocom/m3u8" rel="nofollow noreferrer">https://github.com/globocom/m3u8</a></p>
<p>I've searched for a similar implementation for .mpd (MPEG-DASH) manifest files, but all I found were parsers like the following:</p>
<p><a href="https://github.com/sangwonl/python-mpegdash" rel="nofollow noreferrer">https://github.com/sangwonl/python-mpegdash</a></p>
<p><a href="https://github.com/avishaycohen/mpd-parser" rel="nofollow noreferrer">https://github.com/avishaycohen/mpd-parser</a></p>
<p>Any help on implementing equivalents of these 2 methods that accept .mpd files and do exactly what the 2 methods above do would be appreciated.</p>
<pre><code>
import logging
from datetime import timedelta
from os.path import basename

from furl import furl
from m3u8 import M3U8, Media, Playlist, Segment
from m3u8 import load as load_m3u8


def make_master_manifest(request, stream):
    if stream.info:
        bandwidth = int(stream.info["bw_out"])
        width = stream.info["meta"]["video"]["width"]
        height = stream.info["meta"]["video"]["height"]
        stream_info = {
            "bandwidth": bandwidth,
            "resolution": f"{width}x{height}",
            "codecs": "avc1.640028,mp4a.40.2",
        }
    else:
        stream_info = {"bandwidth": 1000}

    p = Playlist(basename(stream.index_manifest_url), stream_info, None, None)
    m = M3U8()
    m.add_playlist(p)

    for feed in stream.feeds.all():
        media = Media(
            type="SUBTITLES",
            group_id="feeds",
            name=f"feed-{feed.uuid}",
            language="en",
            default="YES",
            autoselect="YES",
            uri=furl(feed.manifest_url).set({"stream": stream.uuid}).url,
        )
        p.media.append(media)
        m.add_media(media)

    return m.dumps()


def make_feed_manifest(request, stream, feed):
    url = request.build_absolute_uri(stream.index_manifest_url)
    p = load_m3u8(url)

    m = M3U8()
    m.version = p.version
    m.target_duration = p.target_duration
    m.media_sequence = p.media_sequence

    for s in p.segments:
        if not m.program_date_time:
            m.program_date_time = s.current_program_date_time

        vtt_url = furl(basename(feed.webvtt_url)).set({"stream": stream.uuid})
        if s.current_program_date_time:
            vtt_url.args.update(
                {
                    "start": s.current_program_date_time.isoformat(),
                    "end": (
                        s.current_program_date_time + timedelta(seconds=s.duration)
                    ).isoformat(),
                    "epoch": stream.started_at.isoformat(),
                }
            )

        v = Segment(
            base_uri=vtt_url.url,
            uri=vtt_url.url,
            duration=s.duration,
            discontinuity=s.discontinuity,
            program_date_time=s.current_program_date_time,
        )
        m.add_segment(v)

    return m.dumps()
</code></pre>
<p>Help appreciated.</p>
<p>Note: the above methods were created for .m3u8 playlist files. I attempted to apply similar logic to an .mpd file but failed to do so using the existing packages, since they provide parsers only and no builder methods like the ones in the m3u8 package listed above. I'm hopeful Stack users could help me implement equivalents of the 2 methods that take an .mpd manifest file as input.</p>
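<p>To show the direction I tried on my own: since the packages above only parse, I started building the MPD XML directly with the standard library. A minimal sketch (element names follow the DASH spec; attribute values are placeholders):</p>
<pre><code>import xml.etree.ElementTree as ET

def make_master_mpd(bandwidth, width, height):
    # Minimal static MPD skeleton; a real manifest needs many more attributes.
    mpd = ET.Element("MPD", {
        "xmlns": "urn:mpeg:dash:schema:mpd:2011",
        "type": "static",
        "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
    })
    period = ET.SubElement(mpd, "Period")
    adaptation_set = ET.SubElement(period, "AdaptationSet", {"mimeType": "video/mp4"})
    ET.SubElement(adaptation_set, "Representation", {
        "id": "video-1",
        "bandwidth": str(bandwidth),
        "width": str(width),
        "height": str(height),
    })
    return ET.tostring(mpd, encoding="unicode")
</code></pre>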
|
<python><python-3.x><django-rest-framework><m3u8><mpeg-dash>
|
2025-02-05 13:22:43
| 0
| 489
|
Earthling
|
79,414,891
| 7,454,765
|
How to make my dataclass compatible with ctypes and not lose the "dunder" methods?
|
<p>Consider a simple data class:</p>
<pre><code>from ctypes import c_int32, c_int16
from dataclasses import dataclass

@dataclass
class MyClass:
    field1: c_int32
    field2: c_int16
</code></pre>
<p>According to the <a href="https://docs.python.org/3/library/ctypes.html#ctypes.Structure" rel="nofollow noreferrer">docs</a>, if we want to make this dataclass compatible with <code>ctypes</code>, we have to define it like this:</p>
<pre><code>import ctypes
from ctypes import Structure, c_int32, c_int16, sizeof
from dataclasses import dataclass, fields

@dataclass
class MyClass(ctypes.Structure):
    _pack_ = 1
    _fields_ = [("field1", c_int32), ("field2", c_int16)]

print(ctypes.sizeof(MyClass))
</code></pre>
<p>But unfortunately, this definition deprives us of the <a href="https://docs.python.org/3/library/dataclasses.html#module-contents" rel="nofollow noreferrer">convenient features</a> of the dataclass, the so-called "dunder" methods. <a href="https://pythononline.net/#YxSHuj" rel="nofollow noreferrer">For example</a>, the constructor (<code>__init__()</code>) and string representation (<code>__repr__()</code>) become unavailable:</p>
<p><code>inst = MyClass(c_int32(42), c_int16(43)) # will give error</code></p>
<p><strong>Q</strong>: What is the most elegant and idiomatic way to make a dataclass compatible with ctypes without losing "dunder" methods?</p>
<p>If you ask me, this code seems to work at first glance:</p>
<pre><code>@dataclass
class MyClass(ctypes.Structure):
    field1: c_int32
    field2: c_int16
    _pack_ = 1

MyClass._fields_ = [(field.name, field.type) for field in fields(MyClass)]  # _pack_ is skipped
</code></pre>
<p>Since I'm a beginner, I'm not sure whether this code leads to some other, non-obvious problems.</p>
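<p>For completeness, I also tried packaging the same workaround as a decorator (a sketch, assuming a <code>_pack_</code> assigned before <code>_fields_</code> is honored; it seemed to be in my quick test):</p>
<pre><code>import ctypes
from ctypes import c_int32, c_int16
from dataclasses import dataclass, fields

def c_struct(pack=1):
    def wrap(cls):
        cls = dataclass(cls)   # generates __init__, __repr__, __eq__, ...
        cls._pack_ = pack      # assumption: read when _fields_ is assigned
        cls._fields_ = [(f.name, f.type) for f in fields(cls)]
        return cls
    return wrap

@c_struct(pack=1)
class MyClass(ctypes.Structure):
    field1: c_int32
    field2: c_int16

print(ctypes.sizeof(MyClass))  # 6 with _pack_ = 1
print(MyClass(42, 43))         # __repr__ from the dataclass
</code></pre>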
|
<python><ctypes><python-dataclasses>
|
2025-02-05 13:18:11
| 2
| 713
|
PavelDev
|
79,414,809
| 2,164,975
|
Access entry from Vaultwarden with Python
|
<p>I want to use my Vaultwarden Docker instance on the server to handle multiple passwords securely. Within my Python project I want to access one of the stored entries.</p>
<p>To access the vaultwarden I want to use <a href="https://github.com/numberly/python-vaultwarden" rel="nofollow noreferrer">https://github.com/numberly/python-vaultwarden</a></p>
<p>In my current code I can see all the ciphers in the collection. But I struggle to access the password for later use.</p>
<pre class="lang-py prettyprint-override"><code>from uuid import UUID
from vaultwarden.clients.bitwarden import BitwardenAPIClient
from vaultwarden.models.bitwarden import OrganizationCollection, get_organization
from vaultwarden.utils.crypto import decode_cipher_string
import os
from dotenv import load_dotenv
load_dotenv()
bitwarden_client = BitwardenAPIClient(os.getenv("VW_URL"),
os.getenv("VW_USER_MAIL"),
os.getenv("VW_USER_PW"),
os.getenv("VW_CLIENT_ID"),
os.getenv("VW_CLIENT_SECRET"),
os.getenv("VW_DEVICE_ID"))
org_uuid = UUID('...')
collection_uuid = UUID('...')
orga = get_organization(bitwarden_client, org_uuid)
collection_elements = orga.ciphers(collection_uuid)
for element in collection_elements:
if element.Name == "Grafana":
password_encrypted = element.model_extra["data"]["password"]
password_decrypted = decode_cipher_string(password_encrypted)
print(password_decrypted)
</code></pre>
<p>Any suggestions for accessing the password for later use inside the program?</p>
|
<python><security><password-protection><password-manager>
|
2025-02-05 12:50:10
| 1
| 602
|
betaros
|
79,414,767
| 5,615,873
|
abjad.show() issues "FileNotFoundError: [WinError 2] The system cannot find the file specified" in Python
|
<p>Here's a basic, simple 'abjad' example that one can find in any 'abjad' documentation:</p>
<pre><code>import abjad
n = abjad.Note("c'4")
abjad.show(n)
</code></pre>
<p>And here's the full traceback produced by the above code:</p>
<pre><code>Traceback (most recent call last):
File "R:\W\y.py", line 122, in <module>
abjad.show(n)
File "R:\W\venv\Lib\site-packages\abjad\io.py", line 672, in show
result = illustrator()
^^^^^^^^^^^^^
File "R:\W\venv\Lib\site-packages\abjad\io.py", line 75, in __call__
string = self.string or self.get_string()
^^^^^^^^^^^^^^^^^
File "R:\W\venv\Lib\site-packages\abjad\io.py", line 152, in get_string
return lilypond_file._get_lilypond_format()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "R:\W\venv\Lib\site-packages\abjad\lilypondfile.py", line 474, in _get_lilypond_format
string = configuration.get_lilypond_version_string()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "R:\W\venv\Lib\site-packages\abjad\configuration.py", line 388, in get_lilypond_version_string
proc = subprocess.run(command, stdout=subprocess.PIPE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\Python3.12\Lib\subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\Python3.12\Lib\subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "G:\Python3.12\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>Note: I debugged 'subprocess.py' and saw that 'executable' was None, but I couldn't find what it was expected to be.</p>
<p>I have installed <strong>abjad</strong> in a virtual environment, using both Python 3.10 and Python 3.12. And I have tried with both <strong>abjad</strong> versions, 3.19 & 3.21. Same story.</p>
<p>I have installed hundreds of Python packages ... I can't remember this ever having occurred.</p>
<p>Any idea why this is happening?</p>
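<p>One thing I did check, on the hunch from the <code>subprocess</code> frames that abjad shells out to LilyPond:</p>
<pre><code>import shutil
print(shutil.which("lilypond"))  # None would mean LilyPond is not on PATH
</code></pre>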
|
<python><venv>
|
2025-02-05 12:34:02
| 1
| 3,537
|
Apostolos
|
79,414,739
| 126,833
|
Copying / Syncing files from a pip package that requires additional install to a Prod build of a Dockerfile
|
<p>I am on this article (<a href="https://www.docker.com/blog/how-to-dockerize-django-app/" rel="nofollow noreferrer">https://www.docker.com/blog/how-to-dockerize-django-app/</a>) to Dockerize a Django App</p>
<p>I have <code>playwright==1.50.0</code> in my requirements.txt</p>
<p>After <code>pip install</code>, one has to run <code>playwright install</code>, which can follow after:</p>
<pre><code># Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
</code></pre>
<p>But how do I know this alone is enough to have the browser(s) copied to the Stage 2 (production) build?</p>
<p>Code under "Make improvements to the Dockerfile"</p>
<pre><code># Copy the Python dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.13/site-packages/ /usr/local/lib/python3.13/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/
</code></pre>
<p>Where do the browsers get installed to on performing <code>playwright install</code> ?</p>
<p>EDIT (included the Dockerfile code) :</p>
<pre class="lang-none prettyprint-override"><code># Stage 1: Base build stage
FROM python:3.13 AS builder
# Create the app directory
RUN mkdir /app
# Set the working directory
WORKDIR /app
# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Upgrade pip and install dependencies
RUN pip install --upgrade pip
# Copy the requirements file first (better caching)
COPY requirements.txt /app/
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
RUN playwright install
# Stage 2: Production stage
FROM python:3.13
RUN useradd -m -r appuser && \
mkdir /app && \
chown -R appuser /app
# Copy the Python dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.13/site-packages/ /usr/local/lib/python3.13/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/
# Set the working directory
WORKDIR /app
# Copy application code
COPY --chown=appuser:appuser . .
# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Switch to non-root user
USER appuser
# Expose the application port
EXPOSE 8000
# Start the application using Gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "nerul.wsgi:application"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<p>I have <code>RUN playwright install</code> right after <code>RUN pip install</code>.</p>
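<p>My current guess (untested) is that pinning the browser location with <code>PLAYWRIGHT_BROWSERS_PATH</code> and copying that directory across would at least make the copy explicit, though the system libraries that <code>playwright install --with-deps</code> pulls in would still only exist in the builder stage:</p>
<pre class="lang-none prettyprint-override"><code># Stage 1 (builder): pin where browsers land so the path is known
ENV PLAYWRIGHT_BROWSERS_PATH=/ms-playwright
RUN pip install --no-cache-dir -r requirements.txt
RUN playwright install chromium

# Stage 2 (production): copy the pinned browser directory across
ENV PLAYWRIGHT_BROWSERS_PATH=/ms-playwright
COPY --from=builder /ms-playwright /ms-playwright
</code></pre>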
|
<python><docker><pip><playwright>
|
2025-02-05 12:25:28
| 0
| 4,291
|
anjanesh
|
79,414,713
| 5,137,645
|
filter amazon jumpstart models
|
<p>I found this block of code that allows me to list all of the available JumpStart models in AWS. I wanted to find a list of the keywords and values I could filter by. Does this exist somewhere? If not, how would I filter the models for ones that are fine-tunable?</p>
<pre><code>import IPython
import ipywidgets as widgets
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models
from sagemaker.jumpstart.filters import And, Or

# Retrieves all TensorFlow Object Detection models.
filter_value = Or(
    And("task == od1", "framework == tensorflow"), And("task == od", "framework == tensorflow")
)
model_id, model_version = "tensorflow-od1-ssd-resnet50-v1-fpn-640x640-coco17-tpu-8", "*"

tensorflow_od_models = list_jumpstart_models(filter=filter_value)

# display the model-ids in a dropdown, for user to select a model.
dropdown = widgets.Dropdown(
    options=tensorflow_od_models,
    value=model_id,
    description="SageMaker Built-In TensorFlow Object Detection Models:",
    style={"description_width": "initial"},
    layout={"width": "max-content"},
)
display(IPython.display.Markdown("## Select a SageMaker pre-trained model from the dropdown below"))
display(dropdown)
</code></pre>
<p>So specifically, how do I adjust this line to find fine-tunable/trainable models:</p>
<pre><code>filter_value = Or(And("task == od1", "framework == tensorflow"), And("task == od", "framework == tensorflow"))
</code></pre>
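<p>For context, the best guess I've come up with so far is a key name I am not sure exists (<code>training_supported</code>):</p>
<pre><code>from sagemaker.jumpstart.notebook_utils import list_jumpstart_models
from sagemaker.jumpstart.filters import And

# Guessed filter key "training_supported" for fine-tunable/trainable models
filter_value = And("training_supported == true", "framework == tensorflow")
trainable_models = list_jumpstart_models(filter=filter_value)
print(len(trainable_models))
</code></pre>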
|
<python><amazon-web-services><amazon-sagemaker><amazon-sagemaker-jumpstart>
|
2025-02-05 12:14:42
| 0
| 606
|
Nikita Belooussov
|
79,414,537
| 8,622,976
|
mongomock BulkOperationBuilder.add_update() unexpected keyword argument 'sort'
|
<p>I'm testing a function that performs a bulk upsert using UpdateOne with bulk_write. In production (using the real MongoDB client) everything works fine, but when running tests with <code>mongomock</code> I get this error:</p>
<pre class="lang-bash prettyprint-override"><code>app.mongodb.exceptions.CatalogException: failure in mongo repository function `batch_upsert_catalog_by_sku`: BulkOperationBuilder.add_update() got an unexpected keyword argument 'sort'
</code></pre>
<p>I don't pass any sort parameter in my code. Here's the relevant function:</p>
<pre class="lang-py prettyprint-override"><code>def batch_upsert_catalog_by_sku(self, items: List[CatalogBySkuWrite]) -> None:
operations = []
current_time = datetime.datetime.now(datetime.timezone.utc)
for item in items:
update_fields = item.model_dump()
update_fields["updated_at"] = current_time
operations.append(
UpdateOne(
{"sku": item.sku, "chain_id": item.chain_id},
{
"$set": update_fields,
"$setOnInsert": {"created_at": current_time},
},
upsert=True,
)
)
if operations:
result = self.collection.bulk_write(operations)
logger.info("Batch upsert completed",
matched=result.matched_count,
upserted=result.upserted_count,
modified=result.modified_count)
</code></pre>
<p>Has anyone seen this error with mongomock? Is it a version issue or a bug, and what would be a good workaround? I'm using mongomock version <code>4.3.0</code> and pymongo version <code>4.11</code>.</p>
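<p>The workaround I'm experimenting with in my test setup (a sketch; I'm assuming <code>BulkOperationBuilder</code> lives in <code>mongomock.collection</code>, as the traceback suggests) is to monkeypatch the method so it swallows the <code>sort</code> kwarg that newer pymongo passes through:</p>
<pre class="lang-py prettyprint-override"><code>import mongomock

_orig_add_update = mongomock.collection.BulkOperationBuilder.add_update

def _patched_add_update(self, *args, sort=None, **kwargs):
    # Drop the 'sort' kwarg (introduced by pymongo 4.11) before delegating.
    return _orig_add_update(self, *args, **kwargs)

mongomock.collection.BulkOperationBuilder.add_update = _patched_add_update
</code></pre>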
<p>Thanks!</p>
|
<python><mongodb><pymongo><mongomock>
|
2025-02-05 11:09:05
| 1
| 2,103
|
Alon Barad
|
79,414,508
| 4,614,641
|
Matplotlib: font size of the tick labels unit factor
|
<p>Below is an example plot, where the units are on the scale of 1e-5 and 1e-6, which causes matplotlib to factor this number and display it next to the axis.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(100) * 1e-8
y = np.random.normal(1e-5, 1e-6, size=100)
plt.plot(x, y, '-')
plt.tick_params(labelsize='xx-small')
fig = plt.gcf()
fig.set_size_inches(4, 3)
fig.savefig('matplotlib_tick_unit_factor.png', bbox_inches='tight')
</code></pre>
<p>Matplotlib plot with tick unit factors 1e-5 for Y axis and 1e-6 for the X axis:</p>
<p><a href="https://i.sstatic.net/cwZH8ROg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwZH8ROg.png" alt="enter image description here" /></a></p>
<p>I would like to control the font size of this factor, but could not find how: it is not in <code>tick_params</code>, and I could not figure out whether it is a <code>Locator</code> element.</p>
<p>I could not find the name of this element in the <a href="https://matplotlib.org/stable/gallery/showcase/anatomy.html" rel="nofollow noreferrer">anatomy of a figure</a>.</p>
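<p>For what it's worth, my best lead so far is that this might be the axis "offset text"; something along these lines is what I mean by controlling it (an unverified guess):</p>
<pre class="lang-py prettyprint-override"><code>ax = plt.gca()
ax.xaxis.get_offset_text().set_fontsize('xx-small')
ax.yaxis.get_offset_text().set_fontsize('xx-small')
</code></pre>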
|
<python><matplotlib>
|
2025-02-05 10:58:11
| 0
| 2,314
|
PlasmaBinturong
|
79,414,500
| 7,804,144
|
Verify Keycloak Token from React Frontend in my Flask Python Server using a JWTBearerTokenValidator
|
<p>I want to verify a Token that is sent to my server using authlib and Keycloak. This is my current setup, although it is not working. Somehow the token in the validate_token() method is always None.</p>
<pre><code>class ClientCredsTokenValidator(JWTBearerTokenValidator):
    def __init__(self, issuer):
        certs = keycloak_openid.certs()
        public_key = JsonWebKey.import_key_set(certs)
        super(ClientCredsTokenValidator, self).__init__(public_key)
        self.claims_options = {
            "exp": {"essential": True},
            "iss": {"essential": True, "value": issuer},
        }

    def validate_token(self, token, scopes, request):
        super(ClientCredsTokenValidator, self).validate_token(token, scopes, request)


require_auth = ResourceProtector()
validator = ClientCredsTokenValidator(KEYCLOAK_ISSUER)
require_auth.register_token_validator(validator)
</code></pre>
<p>My app route I try to fetch looks like this:</p>
<pre><code>@app.route('/loans/all', methods=['GET'])
@require_auth(None)
def get_all_loans():
    ...
</code></pre>
<p><code>keycloak_openid.certs()</code> is a package call that fetches the certificates from my Keycloak instance, which is hosted in a Docker container on my local machine.</p>
<p>What am I missing?</p>
|
<python><flask><jwt><keycloak>
|
2025-02-05 10:54:14
| 0
| 373
|
Karl Wolf
|
79,414,360
| 11,863,823
|
Why should I use the `type` statement in Python 3.12 and upwards?
|
<p>In <a href="https://docs.python.org/3/library/typing.html#type-aliases" rel="nofollow noreferrer">the Python documentation</a>, it is stated that I can declare type aliases using the following syntax (Python >=3.12):</p>
<pre class="lang-py prettyprint-override"><code>type Vector = list[float]
</code></pre>
<p>but that for backwards compatibility I can also simply write (using the syntax that existed until now):</p>
<pre class="lang-py prettyprint-override"><code>Vector = list[float]
</code></pre>
<p>They do not state any benefit of using the new syntax vs. the former one, and the cons are that it is (very slightly) longer to write, breaks backwards compatibility, and is a new syntax to learn. But I guess they introduced this new syntax for a good reason; what is the typical use case? Should programmers use it or stick to the former way of declaring type aliases?</p>
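<p>For concreteness, the one difference I have noticed myself is that the new form appears to be evaluated lazily, so a recursive alias works without string quoting (my observation; possibly incomplete):</p>
<pre class="lang-py prettyprint-override"><code>type Vector = list[float]  # creates a lazily evaluated TypeAliasType
type Tree = list[Tree]     # recursive alias works, no quoting needed

Vector2 = list[float]      # plain assignment: evaluated eagerly
# Tree2 = list[Tree2]      # NameError: Tree2 is not defined yet
</code></pre>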
|
<python><python-typing>
|
2025-02-05 10:04:23
| 0
| 628
|
globglogabgalab
|
79,414,224
| 3,406,193
|
How to annotate a function that preserves type but applies a transformation to certain types in Python?
|
<p>Let's say I have a function that takes a single argument and is guaranteed to return a value of the same type as the argument. Inside, depending on the specific type of the argument, it will apply some transformations or others.</p>
<p>A minimal example could be something like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
T = TypeVar("T")
S = TypeVar('S', bound=str)
def str_transform(s: S) -> S:
return s
def take_any_transform_only_str(obj: T) -> T:
if isinstance(obj, str):
return str_transform(obj)
return obj
</code></pre>
<p>It seems straightforward that <code>take_any_transform_only_str</code> will always return a value of the same type as the argument it was given. And yet, <code>mypy</code> (<code>v1.15.0</code>) complains with:</p>
<pre><code>test.py:11: error: Incompatible return value type (got "str", expected "T") [return-value]
</code></pre>
<p>What would be the correct way to annotate this function?</p>
<p>(Probably I could use <code>typing.overload</code>, but I'd much prefer a solution based on type variables rather than having to manually annotate the different cases.)</p>
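<p>The closest I have found is a <code>typing.cast</code>-based sketch, which silences the error at the cost of an unchecked assertion (so I'd still prefer something mypy can verify):</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, cast

T = TypeVar("T")

def take_any_transform_only_str(obj: T) -> T:
    if isinstance(obj, str):
        # cast() tells mypy to trust that str_transform (as above) preserves T
        return cast(T, str_transform(obj))
    return obj
</code></pre>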
|
<python><python-typing><mypy>
|
2025-02-05 09:20:57
| 1
| 4,044
|
mgab
|
79,414,193
| 10,971,285
|
Telethon RPCError 406: UPDATE_APP_TO_LOGIN when trying to run Python script on Raspberry Pi 1 B+
|
<pre><code>import asyncio
from telethon import TelegramClient, events
from telethon.errors import SessionPasswordNeededError
import logging
import json
import requests
import os
from urllib.parse import unquote
from pathlib import Path

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


class TelegramDownloader:
    def __init__(self, api_id, api_hash, phone_number):
        self.api_id = api_id
        self.api_hash = api_hash
        self.phone_number = phone_number
        self.client = TelegramClient(f'session_{phone_number}', api_id, api_hash)
        self.download_queue = asyncio.Queue()
        self.download_path = "/media/pi/MRX/Download"
        self.is_downloading = False

    async def connect(self):
        await self.client.start()
        if not await self.client.is_user_authorized():
            try:
                await self.client.send_code_request(self.phone_number)
                code = input('Enter the code: ')
                await self.client.sign_in(self.phone_number, code)
            except SessionPasswordNeededError:
                password = input('Two-step verification is enabled. Please enter your password: ')
                await self.client.sign_in(password=password)

    async def download_file(self, url, event):
        try:
            filename = unquote(url.split('/')[-1].split('?')[0])
            filepath = os.path.join(self.download_path, filename)
            await event.respond(f"⬇️ Downloading: {filename}")
            response = requests.get(url, stream=True)
            total_size = int(response.headers.get('content-length', 0))
            with open(filepath, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:
                        f.write(chunk)
            await event.respond(f"✅ Download completed: {filename}")
            logger.info(f"Downloaded {filename}")
        except Exception as e:
            await event.respond(f"❌ Download failed: {filename}\nError: {str(e)}")
            logger.error(f"Error downloading {filename}: {str(e)}")

    async def process_download_queue(self):
        while True:
            url, event = await self.download_queue.get()
            self.is_downloading = True
            await self.download_file(url, event)
            self.is_downloading = False
            self.download_queue.task_done()

    async def monitor_channel(self, channel_id):
        await self.connect()

        # Start download queue processor
        asyncio.create_task(self.process_download_queue())

        @self.client.on(events.NewMessage(chats=[channel_id]))
        async def handler(event):
            try:
                message = event.message.text
                if message and "example.com" in message:
                    # Add download to queue
                    await self.download_queue.put((message, event))
                    if not self.is_downloading:
                        logger.info("Download added to queue")
                    else:
                        await event.respond("⏳ Download queued - will start after current download completes")
            except Exception as e:
                logger.error(f"Error processing message: {str(e)}")

        logger.info(f"Started monitoring channel {channel_id} for download links")
        await self.client.run_until_disconnected()


def read_credentials():
    try:
        with open("credentials.json", "r") as file:
            creds = json.load(file)
            return creds["api_id"], creds["api_hash"], creds["phone_number"]
    except FileNotFoundError:
        logger.error("Credentials file not found.")
        return None, None, None


def write_credentials(api_id, api_hash, phone_number):
    creds = {
        "api_id": api_id,
        "api_hash": api_hash,
        "phone_number": phone_number
    }
    with open("credentials.json", "w") as file:
        json.dump(creds, file, indent=4)


async def main():
    api_id, api_hash, phone_number = read_credentials()
    if api_id is None or api_hash is None or phone_number is None:
        api_id = input("Enter your API ID: ")
        api_hash = input("Enter your API Hash: ")
        phone_number = input("Enter your phone number: ")
        write_credentials(api_id, api_hash, phone_number)

    downloader = TelegramDownloader(api_id, api_hash, phone_number)

    # Create download directory if it doesn't exist
    os.makedirs(downloader.download_path, exist_ok=True)

    # Monitor the specific group
    group_id = "https://t.me/+IgOxAHTrv8UzMGU9"
    await downloader.monitor_channel(group_id)


if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
<p>When I run the above code on my Raspberry Pi 1 B+ (Bookworm OS), I get the following error. My Telethon is the latest version. I am using a slightly changed version of this script on my Oracle server, and it has been working fine for months. What should I do to fix this?</p>
<pre><code>pi@raspberrypi:~/telegram_forwarder $ sudo python3 linkdownloader.py
2025-02-04 23:02:03,276 - telethon.network.mtprotosender - INFO - Connecting to 149.154.167.51:443/TcpFull...
2025-02-04 23:02:08,398 - telethon.network.mtprotosender - INFO - Connection to 149.154.167.51:443/TcpFull complete!
Please enter your phone (or bot token): **********
2025-02-04 23:03:21,912 - telethon.client.users - INFO - Phone migrated to 5
2025-02-04 23:03:22,181 - telethon.client.telegrambaseclient - INFO - Reconnecting to new data center 5
2025-02-04 23:03:22,494 - telethon.network.mtprotosender - INFO - Disconnecting from 149.154.167.51:443/TcpFull...
2025-02-04 23:03:22,508 - telethon.network.mtprotosender - INFO - Disconnection from 149.154.167.51:443/TcpFull complete!
2025-02-04 23:03:22,518 - telethon.network.mtprotosender - INFO - Connecting to 91.108.56.128:443/TcpFull...
2025-02-04 23:03:26,847 - telethon.network.mtprotosender - INFO - Connection to 91.108.56.128:443/TcpFull complete!
Traceback (most recent call last):
  File "/home/pi/telegram_forwarder/linkdownloader.py", line 125, in <module>
    asyncio.run(main())
  File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
  File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "/usr/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
  File "/home/pi/telegram_forwarder/linkdownloader.py", line 122, in main
    await downloader.monitor_channel(group_id)
  File "/home/pi/telegram_forwarder/linkdownloader.py", line 66, in monitor_channel
    await self.connect()
  File "/home/pi/telegram_forwarder/linkdownloader.py", line 25, in connect
    await self.client.start()
  File "/usr/lib/python3/dist-packages/telethon/client/auth.py", line 190, in _start
    await self.send_code_request(phone, force_sms=force_sms)
  File "/usr/lib/python3/dist-packages/telethon/client/auth.py", line 519, in send_code_request
    result = await self(functions.auth.SendCodeRequest(
  File "/usr/lib/python3/dist-packages/telethon/client/users.py", line 30, in __call__
    return await self._call(self._sender, request, ordered=ordered)
  File "/usr/lib/python3/dist-packages/telethon/client/users.py", line 84, in _call
    result = await future
telethon.errors.rpcbaseerrors.AuthKeyError: RPCError 406: UPDATE_APP_TO_LOGIN (caused by SendCodeRequest)
</code></pre>
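<p>One thing I notice in the traceback is that Telethon is loaded from <code>/usr/lib/python3/dist-packages</code> (the Debian system package), not from pip, so I printed the version actually being imported as a sanity check:</p>
<pre><code>import telethon
print(telethon.__version__)
</code></pre>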
|
<python><authentication><raspberry-pi><telegram><telethon>
|
2025-02-05 09:09:43
| 1
| 1,336
|
Savad
|
79,414,172
| 2,869,971
|
Azure OpenAI Chat Stuck using .Net SDK
|
<p>I have a program in .NET Core to describe an image using the Azure OpenAI GPT-4o model.</p>
<pre><code>using Azure;
using Azure.AI.OpenAI;
using OpenAI;
using OpenAI.Chat;
using System;
using System.ClientModel;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

// Azure OpenAI client library for .NET - version 2.1.0
public class AzureOpenAiService : IAzureOpenAiService
{
    private static string endpoint = "https://xyz.openai.azure.com/";
    private static string deployment = "gpt-4o";
    private static string apiKey = "LFK";

    public async Task FindPrimarySubjectAsync(string imagePath)
    {
        try
        {
            string base64Image = EncodeImage(imagePath);
            var credential = new AzureKeyCredential(apiKey);
            OpenAIClientOptions openAIClientOptions = new OpenAIClientOptions
            {
                Endpoint = new Uri(endpoint)
            };
            var client = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(apiKey));

            var chatMessages = new List<ChatMessage>
            {
                new SystemChatMessage("Analyze the uploaded image and return a single-word description of the main subject. The response should be only one word, representing the most general yet accurate category."),
                new UserChatMessage($"What is in this image? Image: data:image/png;base64,{base64Image}")
            };

            var chatRequest = new ChatCompletionOptions();
            var chatClient = client.GetChatClient(deployment);
            var response = await chatClient.CompleteChatAsync(chatMessages, chatRequest); // Stuck here.
            var content = response.Value.Content;
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    private static string EncodeImage(string imagePath)
    {
        byte[] imageBytes = File.ReadAllBytes(imagePath);
        return Convert.ToBase64String(imageBytes);
    }
}
</code></pre>
<p>This code is getting stuck when CompleteChatAsync() is invoked. I waited for more than 5 minutes and there was no response.</p>
<p>When I tried the same code using Python, it returned the response within 5-6 seconds.</p>
<pre><code>from openai import AzureOpenAI
import base64

endpoint = 'https://xyz.openai.azure.com/'
deployment = 'gpt-4o'
api_key = "LFK"
api_version = "2024-05-01-preview"

def encode_image(image_path):
    """Encodes an image to base64 format."""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

def analyze_image(image_path):
    base64_image = encode_image(image_path)

    client = AzureOpenAI(
        azure_endpoint=endpoint,
        api_key=api_key,
        api_version=api_version
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Analyze the uploaded image and return a single-word description of the main subject. The response should be only one word, representing the most general yet accurate category."},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": f"What ?"},
                    {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{base64_image}"}}
                ],
            }
        ]
    )
    return response.choices[0].message.content

image_path = "image3.png"
result = analyze_image(image_path)
print("AI Response:", result)
</code></pre>
<p>Why is the .Net code not responding?</p>
|
<python><.net><azure><openai-api><gpt-4o>
|
2025-02-05 08:59:28
| 1
| 1,645
|
S7H
|
79,414,097
| 8,554,833
|
Logger Flask Error Not Emailing (works with text error, works stand along)
|
<p>This is my code, and for some reason I can't get it to work.
I was able to get it to run outside of Flask with a test message <code>logger.error('This is a test error message.')</code>. I also have code that pushes errors to a text file, which works when my application hits an error, but this handler doesn't seem to send an email.</p>
<p>Anything you can see that's causing me a problem?</p>
<p><a href="https://github.com/drudd75077/microblog-2025/blob/main/app/%5C_%5C_init__.py" rel="nofollow noreferrer">https://github.com/drudd75077/microblog-2025/blob/main/app/\_\_init__.py</a></p>
<pre><code>import logging
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
from flask import Flask
from config import Config

app = Flask(__name__)
app.config.from_object(Config)

if not app.debug:
    class SendGridEmailHandler(logging.Handler):
        def __init__(self, api_key, to_email):
            super().__init__()
            self.api_key = api_key
            self.to_email = to_email

        def emit(self, record):
            message = self.format(record)
            self.send_email(message)

        def send_email(self, message):
            sg = SendGridAPIClient(self.api_key)
            email = Mail(
                from_email=app.config['MAIL_DEFAULT_SENDER'],
                to_emails=self.to_email,
                subject='Error Log Notification',
                plain_text_content=message
            )
            try:
                sg.send(email)
            except Exception as e:
                print(f"Failed to send email: {e}")

    # Configure logging
    logger = logging.getLogger('EmailLogger')
    logger.setLevel(logging.ERROR)

    # Set up SendGrid email handler
    api_key = app.config['SENDGRID_API_KEY']
    email_handler = SendGridEmailHandler(api_key, app.config['SENDGRID_RECIPIENTS'])
    print('email handler', vars(email_handler))
    formatter = logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]')
    email_handler.setFormatter(formatter)
    logger.addHandler(email_handler)
</code></pre>
|
<python><flask><sendgrid>
|
2025-02-05 08:31:45
| 0
| 728
|
David 54321
|
79,414,070
| 1,634,905
|
SeleniumBase CDP mode execute_script and evaluate with javascript gives error "SyntaxError: Illegal return statement"
|
<p>I am using <a href="https://seleniumbase.io/examples/cdp_mode/ReadMe/" rel="nofollow noreferrer">SeleniumBase</a> in CDP Mode.</p>
<p>I am having a hard time figuring out whether this is a Python issue or a SeleniumBase issue.</p>
<p>The below simple example shows my problem:</p>
<pre><code>from seleniumbase import SB

with SB(uc=True, locale_code="en", headless=True) as sb:
    link = "https://news.ycombinator.com"
    print(f"\nOpening {link}")
    sb.wait_for_ready_state_complete(timeout=120)
    sb.activate_cdp_mode(link)

    script = f"""
    function getSomeValue() {{
        return '42';
    }}
    return getSomeValue();
    """

    # data = sb.execute_script(script)
    data = sb.cdp.evaluate(script)
    print(data)
    print("Finished!")
</code></pre>
<p>This throws error:</p>
<pre><code>seleniumbase.undetected.cdp_driver.connection.ProtocolException:
exceptionId: 1
text: Uncaught
lineNumber: 5
columnNumber: 4
scriptId: 6
exception:
type: object
subtype: error
className: SyntaxError
description: SyntaxError: Illegal return statement
objectId: 3089353218542582072.1.2
</code></pre>
<p>Notice above that I have tried both <code>sb.execute_script(script)</code> and <code>sb.cdp.evaluate(script)</code> and both give the same issue.</p>
<p>How can I execute such scripts?</p>
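<p>The only workaround I have found so far is wrapping the script in an IIFE so there is no top-level <code>return</code> (this runs for me, but I'd like to know if it is the intended approach):</p>
<pre><code>script = """
(() => {
    function getSomeValue() {
        return '42';
    }
    return getSomeValue();
})()
"""
data = sb.cdp.evaluate(script)
print(data)  # 42
</code></pre>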
|
<python><selenium-webdriver><undetected-chromedriver><chrome-devtools-protocol><seleniumbase>
|
2025-02-05 08:22:10
| 1
| 9,021
|
sudoExclamationExclamation
|
79,414,030
| 9,560,986
|
How can I efficiently read a large CSV file in Python without running out of memory?
|
<p>I'm working with a large CSV file (over 10 million rows) that I need to process in Python. However, when I try to load the entire file into a pandas DataFrame, I run into memory issues.</p>
<p>What are some efficient ways to read and process large CSV files in Python without running out of memory?</p>
<p>I've considered the following approaches:</p>
<ul>
<li>Using <code>pandas.read_csv()</code> with <code>chunksize</code>: This allows me to read the file in smaller chunks, but I'm unsure how to effectively process each chunk.</li>
<li>Using <code>dask.dataframe</code>: I've heard that Dask can handle larger-than-memory datasets. Is it a good alternative?</li>
</ul>
<p>Using <code>csv.reader</code>: Would this be a more memory-efficient way to read the file line by line?</p>
<p>Could someone provide examples or best practices for handling large CSV files in Python? Any tips on optimizing performance would also be appreciated!</p>
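<p>For example, with the <code>chunksize</code> approach I imagine something like this (a sketch with a made-up file name and <code>amount</code> column):</p>
<pre><code>import pandas as pd

total = 0.0
for chunk in pd.read_csv("big.csv", chunksize=100_000):
    # Only ~100k rows are in memory at a time; aggregate per chunk.
    total += chunk["amount"].sum()
print(total)
</code></pre>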
|
<python><pandas><csv>
|
2025-02-05 08:06:35
| 2
| 1,616
|
R. Marolahy
|
79,414,021
| 196,489
|
"LookupError: No installed app with label 'admin'." when using muppy in django
|
<p>I have a django + drf application that has no admin site, which works very well for us. However, when using pympler and muppy like this:</p>
<pre><code>class DashboardViewSet(
    SpecialEndpoint,
):
    def list(self, request, *args, **kwargs):
        from pympler import tracker
        tr = tracker.SummaryTracker()
        [...]
        tr.print_diff()
        return Response(...)
</code></pre>
<p>I get this error:</p>
<pre><code> File "src/api/views/case_manager/dashboard/dashboard_viewset.py", line 33, in list
tr = tracker.SummaryTracker()
File "lib/python3.13/site-packages/pympler/tracker.py", line 45, in __init__
self.s0 = summary.summarize(muppy.get_objects())
~~~~~~~~~~~~~~~~~^^
File "lib/python3.13/site-packages/pympler/muppy.py", line 42, in get_objects
tmp = [o for o in tmp if not ignore_object(o)]
~~~~~~~~~~~~~^^^
File "lib/python3.13/site-packages/pympler/muppy.py", line 17, in ignore_object
return isframe(obj)
File "lib/python3.13/inspect.py", line 507, in isframe
return isinstance(object, types.FrameType)
File "lib/python3.13/site-packages/django/utils/functional.py", line 280, in __getattribute__
value = super().__getattribute__(name)
File "lib/python3.13/site-packages/django/utils/functional.py", line 251, in inner
self._setup()
~~~~~~~~~~~^^
File "lib/python3.13/site-packages/django/contrib/admin/sites.py", line 605, in _setup
AdminSiteClass = import_string(apps.get_app_config("admin").default_site)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "lib/python3.13/site-packages/django/apps/registry.py", line 165, in get_app_config
raise LookupError(message)
LookupError: No installed app with label 'admin'.
</code></pre>
<p>This seems to happen as a DefaultAdminSite is always created, lazily, in a global variable, which muppy accesses to get the size.</p>
<p>Any idea how to work around this?</p>
|
<python><django>
|
2025-02-05 08:02:09
| 1
| 12,904
|
Thorben CroisΓ©
|
79,414,014
| 3,296,786
|
Pytest method mocking
|
<p>This is the method for which I need to write a test case:</p>
<pre><code>def create_interface(iface, address, netmask):
    validate_interface(iface, address, netmask)
    if ping_test(address):
        raise Exception("Address %s is already in use. interface %s with ip %s." % (address, iface, address))
    proc = Popen(["sudo ifconfig %s %s netmask %s up" %
                  (iface, address, netmask)], shell=True, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    ret = proc.returncode
    if ret:
        raise Exception("Failed to create virtual interface '%s': %s" %
                        (iface, out + err))
    else:
        logging.info("Created %s with ip %s and netmask %s successfully" %
                     (iface, address, netmask))
</code></pre>
<p>My test case:</p>
<pre><code>def test_create_interface(self):
    iface = "eth0:1"
    address = "1.2.3.4"
    netmask = "255.255.255.0"

    import subprocess
    process = self.mox.CreateMock(subprocess.Popen)
    process.returncode = 1

    self.mox.StubOutWithMock(vi, "validate_interface")
    vi.validate_interface(iface, address, netmask).AndReturn(None)

    self.mox.StubOutWithMock(vi, "ping_test")
    vi.ping_test(address).AndReturn(False)

    self.mox.StubOutWithMock(subprocess, "Popen")
    subprocess.Popen(IgnoreArg(), stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE).AndReturn(process)

    self.mox.StubOutWithMock(process, "communicate")
    process.communicate().AndReturn(("", 0))

    self.mox.ReplayAll()
    vi.create_interface(iface, address, netmask)
    self.mox.VerifyAll()
</code></pre>
<p>But it fails with</p>
<pre><code>Unexpected method call `function.__call__('eth0:1', '1.2.3.4', '255.255.255.0') -> None.`
</code></pre>
<p>What am I missing?</p>
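<p>One guess I had (untested): since the module under test does <code>from subprocess import Popen</code>, perhaps the name to stub is <code>vi.Popen</code> rather than <code>subprocess.Popen</code>, along these lines:</p>
<pre><code>self.mox.StubOutWithMock(vi, "Popen")
vi.Popen(IgnoreArg(), shell=True, stdout=subprocess.PIPE,
         stderr=subprocess.PIPE).AndReturn(process)
</code></pre>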
|
<python><pytest>
|
2025-02-05 07:59:20
| 1
| 1,156
|
aΨVaN
|
79,413,677
| 13,571,242
|
Streamlit error `StreamlitDuplicateElementId` with rendering buttons after calling `empty()`
|
<p>The following streamlit code will set <code>box</code> to <code>empty()</code>, render a button, set <code>box</code> to <code>empty()</code> again, render another button, and repeat.</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
from time import sleep
box = st.empty()
while True:
box.empty()
button_A = box.button('Btn A')
sleep(1)
box.empty()
button_B = box.button('Btn B')
</code></pre>
<p>This code is giving me the error <code>StreamlitDuplicateElementId</code> on the button rendering. There should not be duplicate IDs, as <code>empty()</code> clears <code>box</code>. Therefore, I was wondering if you could please assist in solving this error? Thank you.</p>
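<p>For reference, the variant I was about to try is giving each rendered button an explicit, per-iteration <code>key</code> (not sure this is the intended fix):</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
from time import sleep

box = st.empty()
i = 0
while True:
    box.empty()
    button_A = box.button('Btn A', key=f'btn_a_{i}')
    sleep(1)
    box.empty()
    button_B = box.button('Btn B', key=f'btn_b_{i}')
    i += 1
</code></pre>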
|
<python><streamlit>
|
2025-02-05 04:55:12
| 0
| 407
|
James Chong
|