| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
79,325,394
| 5,004,159
|
Trouble identifying the "Connect" artdeco button element on LinkedIn for scraping via Selenium
|
<pre><code>def send_linkedin_requests(speakers):
"""Send LinkedIn connection requests to scraped speakers."""
driver = None
try:
print("\nStarting LinkedIn connection process...")
driver = create_chrome_driver()
driver.get("https://www.linkedin.com/login")
wait = WebDriverWait(driver, 20)
wait.until(EC.presence_of_element_located((By.ID, "username"))).send_keys(LINKEDIN_EMAIL)
driver.find_element(By.ID, "password").send_keys(LINKEDIN_PASSWORD)
driver.find_element(By.XPATH, "//button[@type='submit']").click()
time.sleep(5)
for speaker in speakers:
try:
speaker_name = normalize_name(speaker["name"])
print(f"\nSearching for {speaker['name']}...")
names = speaker_name.split()
if len(names) < 2:
print(f"Skipping {speaker['name']} - insufficient name information")
continue
first_name, last_name = names[:2]
search_query = f"https://www.linkedin.com/search/results/people/?keywords={first_name}%20{last_name}"
driver.get(search_query)
time.sleep(5)
# Wait for and print number of results if shown
try:
results_count = driver.find_element(By.CSS_SELECTOR, ".search-results-container h2").text
print(f"LinkedIn shows: {results_count}")
except:
pass
try:
# Try multiple selectors to find search results
selectors = [
"div.search-results-container ul.reusable-search__entity-result-list",
"div.search-results-container div.mb3",
".search-results-container li.reusable-search__result-container",
".entity-result__item"
]
search_results = []
for selector in selectors:
try:
results = driver.find_elements(By.CSS_SELECTOR, selector)
if results:
search_results = results
print(f"Found {len(results)} results using selector: {selector}")
break
except:
continue
if not search_results:
print("No search results found using any selector")
continue
print(f"Processing {len(search_results)} results...")
matches_found = 0
for result in search_results[:5]:
try:
# Try multiple selectors for the name
name_selectors = [
".entity-result__title-text span[aria-hidden='true']",
".entity-result__title-text",
"span.actor-name",
".app-aware-link span"
]
profile_name = None
for selector in name_selectors:
try:
name_element = result.find_element(By.CSS_SELECTOR, selector)
profile_name = normalize_name(name_element.text.strip())
if profile_name:
break
except:
continue
if not profile_name:
print("Could not find name in result, skipping...")
continue
print(f"\nFound profile: {profile_name}")
print(f"Looking for: {speaker_name}")
# Check for name match
if first_name in profile_name and last_name in profile_name:
print("Name match found!")
# Look for connect button
connect_button = None
button_selectors = [
"button.artdeco-button--secondary",
"button.artdeco-button[aria-label*='Connect']",
"button.artdeco-button[aria-label*='Invite']"
]
for selector in button_selectors:
try:
buttons = result.find_elements(By.CSS_SELECTOR, selector)
for button in buttons:
if 'connect' in button.text.lower():
connect_button = button
break
except:
continue
if connect_button:
print("Found Connect button")
if input(f"Send connection request? (yes/no): ").strip().lower() == "yes":
driver.execute_script("arguments[0].click();", connect_button)
time.sleep(2)
note = (
f"{first_name.title()}, hope our paths cross soon! At Kintsugi, we're developing novel voice biomarker AI to screen "
"clinical depression and anxiety from 20 seconds of free-form speech. We were recently featured in Forbes AI 50 and Fierce 15.\n\nWarmly,\nGrace"
)
print(f"\nDraft note:\n{note}")
if input("Confirm sending note? (yes/no): ").strip().lower() == "yes":
try:
add_note_button = wait.until(EC.element_to_be_clickable((
By.XPATH, "//button[contains(text(), 'Add a note')]"
)))
driver.execute_script("arguments[0].click();", add_note_button)
time.sleep(1)
textarea = wait.until(EC.presence_of_element_located((
By.XPATH, "//textarea"
)))
textarea.send_keys(note)
send_button = wait.until(EC.element_to_be_clickable((
By.XPATH, "//button[contains(text(), 'Send')]"
)))
driver.execute_script("arguments[0].click();", send_button)
print(f"Connection request sent!")
time.sleep(3)
except Exception as e:
print(f"Error sending connection request: {e}")
else:
print("No Connect button found - may already be connected or have pending request")
matches_found += 1
if matches_found >= 3:
break
else:
print("Name does not match, skipping...")
except Exception as e:
print(f"Error processing result: {e}")
continue
except Exception as e:
print(f"Error processing search results: {e}")
except Exception as e:
print(f"Error processing {speaker['name']}: {e}")
continue
except Exception as e:
print(f"Error in LinkedIn connection process: {e}")
finally:
if driver:
driver.quit()
</code></pre>
<p>The above function is used to connect to a list of speakers on LinkedIn using the People search and the "Connect" button selector. However, even though the individuals show up in the search results, they are not identified and the script returns "No initial results."</p>
<p>I've added multiple selectors for finding names and the "Connect" button, better debugging, and better error handling; however, I'm still not able to match the speakers against my list for follow-up.</p>
<p>Any thoughts on how to improve the capture of the match and Connect sequence? Thanks!</p>
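<p>For reference, below is a minimal sketch of a more defensive button lookup I have been considering; the selectors and the <code>result</code> card element are assumptions and are not verified against the live LinkedIn DOM:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def find_connect_button(driver, result, timeout=10):
    """Return the first visible Connect button inside a search-result card, or None."""
    try:
        # Wait until at least one artdeco button exists on the page at all,
        # then search scoped to the given result card.
        WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "button.artdeco-button"))
        )
        candidates = result.find_elements(
            By.XPATH, ".//button[contains(@aria-label, 'Connect') or contains(., 'Connect')]"
        )
        return next((b for b in candidates if b.is_displayed()), None)
    except Exception:
        return None
</code></pre>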
|
<python><selenium-webdriver><web-scraping>
|
2025-01-03 02:49:41
| 2
| 319
|
grehce
|
79,325,378
| 943,222
|
how to combine alive-progress with paramiko in multi-threaded setup
|
<p>I have a multi-threaded SFTP Python script that uses paramiko and works; now I want to add alive-progress to it so it can print the progress it is making for each file it transfers.</p>
<p>Currently it creates one SFTP upload thread per file, and I want to be able to print every thread's progress, somewhat like FileZilla:
<a href="https://i.sstatic.net/8M3IX43T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M3IX43T.png" alt="file zilla goal" /></a></p>
<p>But what I have got right now looks more like this in pycharm:
<a href="https://i.sstatic.net/51VftwZH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51VftwZH.png" alt="current state in pycharm" /></a></p>
<p>In this sample run I have 4 files, so it opened up 4 threads, and tried to print 4 progress bars all at the same time on the same line.</p>
<p>I am not sure how to make them all separate.</p>
<pre><code>def uploadFile(host, port, username, password, sourceFile, parentDir, showname):
#... logic for handling file names not relevant...
transport_UploadFile = paramiko.Transport((host, port))
transport_UploadFile.connect(username=username, password=password)
sftp = paramiko.SFTPClient.from_transport(transport_UploadFile)
sftp.put(sourceFile, targetPath,callback=printTotals)
sftp.close()
transport_UploadFile.close()
</code></pre>
<p>The <code>sftp.put</code> call can take a callback that gives you the total bytes transferred so far and the total bytes to be transferred.</p>
<pre><code>def printTotals(transferred, toBeTransferred):
with alive_bar(toBeTransferred) as bar:
print(transferred)
bar()
</code></pre>
<p>So I thought that inside each SFTP thread I would feed the callback from <code>put</code> into alive-progress to print it.</p>
<p>But now that I have read a bit about alive-progress, I think I might have to start the bar as a thread and do my SFTP put from there?</p>
<p>The entire file is here: <a href="https://github.com/jzoudavy/bulkSFTP/blob/main/main.py" rel="nofollow noreferrer">https://github.com/jzoudavy/bulkSFTP/blob/main/main.py</a></p>
<p>The alive-progress page: <a href="https://pypi.org/project/alive-progress/#forcing-animations-on-pycharm-jupyter-etc" rel="nofollow noreferrer">https://pypi.org/project/alive-progress/#forcing-animations-on-pycharm-jupyter-etc</a></p>
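<p>For what it's worth, here is a minimal sketch of the callback pattern I am experimenting with, assuming alive-progress's manual mode: one bar is created per transfer and the paramiko callback only advances it (several bars from different threads will still interleave on a single terminal):</p>
<pre><code>import os
from alive_progress import alive_bar

def upload_with_progress(sftp, source_file, target_path):
    """Upload one file while driving a single alive-progress bar from sftp.put's callback."""
    with alive_bar(manual=True, title=os.path.basename(source_file)) as bar:
        def on_progress(transferred, to_be_transferred):
            # manual mode expects a fraction between 0 and 1
            bar(transferred / to_be_transferred if to_be_transferred else 1.0)
        sftp.put(source_file, target_path, callback=on_progress)
</code></pre>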
|
<python><multithreading><paramiko>
|
2025-01-03 02:35:15
| 0
| 816
|
D.Zou
|
79,325,277
| 22,146,392
|
Custom Jinja filter using Python pandas is only sometimes returning correct information?
|
<p>I'm building a site with the Material theme for MkDocs. I've added the following custom filter:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
def csv_to_html(csv_path):
return pandas.read_csv(csv_path).to_html()
</code></pre>
<p>I'm trying to find a CSV file based on the metadata of my MD document. This is what my document (<code>doc_12345.md</code>) looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>---
title: myDocument
doc_number: doc_12345
---
</code></pre>
<p>I want to parse the file <code>doc_12345.csv</code>. The Jinja filters seem to be acting very erratically. I'm having a few different issues:</p>
<h3>First problem</h3>
<p>The filter only seems to be valid if I use the pipe (<code>|</code>). This fails:</p>
<pre class="lang-py prettyprint-override"><code>{{ csv_to_html("data/doc_12345.csv") }}
</code></pre>
<p>Error:</p>
<blockquote>
<p>jinja2.exceptions.UndefinedError: 'csv_to_html' is undefined</p>
</blockquote>
<p>This works:</p>
<pre class="lang-py prettyprint-override"><code>{{ "data/doc_12345.csv" | csv_to_html }}
</code></pre>
<h3>Second problem</h3>
<p>The page metadata is only found in some situations and not others. Referencing <a href="https://squidfunk.github.io/mkdocs-material/reference/#on-a-single-page" rel="nofollow noreferrer">this documentation</a>, I'm accessing the <code>doc_number</code> attribute with <code>page.meta.doc_number</code>. Using it on its own seems to work, see examples below:</p>
<pre class="lang-py prettyprint-override"><code>{{ page.meta }}
# { 'title': 'myDocument', 'doc_number': 'doc_12345' }
{{ page.meta.doc_number }}
# doc_12345
</code></pre>
<p>But concatenating strings doesn't:</p>
<pre class="lang-py prettyprint-override"><code>{{ "data/" + page.meta.doc_number + ".csv" }}
</code></pre>
<blockquote>
<p>jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'doc_number'</p>
</blockquote>
<p>Strangely enough, if I use <code>default()</code> it's able to find the <code>doc_number</code> attribute:</p>
<pre class="lang-py prettyprint-override"><code>{{ "data/" + (page.meta.doc_number | default("random text")) + ".csv" }}
# data/doc_12345.csv
</code></pre>
<p>Why would it work when sent to a filter but not by itself?</p>
<h3>Third problem</h3>
<p>Once I pass that value to my <code>csv_to_html</code> filter, the <code>doc_number</code> disappears:</p>
<pre class="lang-py prettyprint-override"><code>{{ "data/" + (page.meta.doc_number | default("random text")) + ".csv" | csv_to_html }}
</code></pre>
<blockquote>
<p>FileNotFoundError: [Errno 2] No such file or directory: 'data/.csv'</p>
</blockquote>
<p>I also tried setting it as a variable:</p>
<pre class="lang-none prettyprint-override"><code>{% set csv_path = "data/" + (page.meta.doc_number | default("random text")) + ".csv" %}
{{ csv_path }}
# data/doc_12345.csv
{{ csv_path | csv_to_html }}
# FileNotFoundError: [Errno 2] No such file or directory: 'data/.csv'
</code></pre>
<p>What's even weirder is if I modify <code>csv_to_html</code> to simply return the input, the path is fine:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
def csv_to_html(csv_path):
return csv_path
</code></pre>
<pre class="lang-py prettyprint-override"><code>{{ csv_path | csv_to_html }}
# data/doc_12345.csv
</code></pre>
<p>So the <code>csv_path</code> string works fine until it gets to <code>pandas.read_csv()</code>, but it <em>only</em> has a problem if it was assembled using <code>page.meta.doc_number</code>. When I pass the string literal <code>"data/doc_12345.csv"</code> to <code>csv_to_html</code>, it converts the CSV to HTML just fine.</p>
<p>This makes no sense to me whatsoever. What's going on here?</p>
<p>Edit to add minimum reproducible example:</p>
<p><code>mkdocs.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>theme:
name: material
custom_dir: templates
plugins:
- mkdocs-simple-hooks:
hooks:
on_env: "modules.hooks:on_env"
</code></pre>
<p><code>modules/hooks.py</code></p>
<pre class="lang-py prettyprint-override"><code>import pandas
def csv_to_html(csv_path):
return pandas.read_csv(csv_path).to_html(index=False)
def on_env(env, config, **kwargs):
# Add the filters
env.filters['csv_to_html'] = csv_to_html
</code></pre>
<p><code>data/doc_12345.csv</code></p>
<pre class="lang-none prettyprint-override"><code>ColumnA,ColumnB,ColumnC
ValueA,ValueB,ValueC
</code></pre>
<p><code>docs/doc_12345.md</code></p>
<pre class="lang-markdown prettyprint-override"><code>---
title: myDocument
doc_number: doc_12345
---
# Hello world
</code></pre>
<p><code>templates/main.html</code></p>
<pre class="lang-py prettyprint-override"><code>{% extends "base.html" %}
{% block content %}
{{ ("data/" + page.meta.doc_number + ".csv") | csv_to_html }}
{{ page.content }}
{% endblock %}
</code></pre>
|
<python><pandas><mkdocs><mkdocs-material>
|
2025-01-03 01:06:40
| 0
| 1,116
|
jeremywat
|
79,325,079
| 2,648,504
|
Pandas displaying the first row but not indexing it
|
<p>I have a large text file, with a header of 18 lines.</p>
<p>If I try to display the entire dataframe:</p>
<pre><code>df = pd.read_csv('my_log')
print(df)
</code></pre>
<p>I get:
<code>pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 19, saw 3</code></p>
<p>If I try to exclude the header:</p>
<pre><code>df = pd.read_csv('my_log', header=18)
</code></pre>
<p>I get the first row (line 19), then the second row (indexed at 0).
No matter which index number I use in <code>print(df.loc[[0]])</code>, I always get that first row displayed (with no index number) before the row that I want.</p>
<p>I've checked out the text file, and every row ends in a CR/LF. I've also completely removed line 19; but, the same behavior occurs.</p>
<p>Also, if I completely remove the header and print the entire dataframe, I still get the same behavior. The first row prints (without an index number) and the row count is 1 less than the true row count.</p>
<p>Any suggestions greatly appreciated!</p>
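<p>For context, here is a small sketch of what I understand the two read modes to do, assuming the unindexed "first row" is actually the header row that pandas builds from line 19 (column names print above the data without an index):</p>
<pre><code>import pandas as pd

# header=18: line 19 becomes the column names, data starts at line 20
df_named = pd.read_csv('my_log', header=18)

# skiprows=18, header=None: line 19 is kept as ordinary data, columns are 0..n-1
df_raw = pd.read_csv('my_log', skiprows=18, header=None)

print(df_named.columns)   # values taken from line 19
print(df_raw.iloc[0])     # first data row, i.e. line 19 of the file
</code></pre>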
|
<python><pandas>
|
2025-01-02 22:15:43
| 1
| 881
|
yodish
|
79,324,850
| 5,602,104
|
Python: is there a straightforward way to determine if it is safe to move a line of code outside of a try-except?
|
<p>Suppose you have some code like this:</p>
<pre><code>def some_func():
try:
func_a()
func_b()
func_c()
except SomeException as e:
print("An exception occurred.")
</code></pre>
<p>Is there a straightforward way to determine whether it is safe to move any of the <code>func_*</code> calls to outside the <code>try</code> clause? In other words, is there a straightforward way to determine if a function <strong>could</strong> raise an exception <strong>without inspecting its code</strong>?</p>
|
<python><exception><error-handling><try-except>
|
2025-01-02 20:07:06
| 0
| 729
|
jcgrowley
|
79,324,706
| 9,112,151
|
What happens with ContextVar if don't reset it?
|
<p>With the code below, what will happen to the SQLAlchemy session that is set in the ContextVar if I don't reset it?</p>
<pre><code>from contextvars import ContextVar
from fastapi import FastAPI, BackgroundTasks
from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
engine = create_async_engine("postgresql+asyncpg://postgres:postgres@localhost:5433/postgres", isolation_level="AUTOCOMMIT")
session_context = ContextVar("session")
app = FastAPI()
class Repository:
async def get_one(self):
return await self.execute(text("select 1"))
async def get_two(self):
return await self.execute(text("select 2"))
async def execute(self, statement):
try:
session = session_context.get()
except LookupError:
session = AsyncSession(engine)
session_context.set(session)
print(session)
result = (await session.execute(statement)).scalar()
await session.close() # for some reason I need to close session every time
return result
async def check_connections_statuses():
print(engine.pool.status())
print(session_context.get())
@app.get("/")
async def main(background_tasks: BackgroundTasks):
repo = Repository()
print(await repo.get_one())
print(await repo.get_two())
background_tasks.add_task(check_connections_statuses)
</code></pre>
<p>The code reproduces output:</p>
<pre><code><sqlalchemy.ext.asyncio.session.AsyncSession object at 0x121bb31a0>
1
<sqlalchemy.ext.asyncio.session.AsyncSession object at 0x121bb31a0>
2
INFO: 127.0.0.1:59836 - "GET / HTTP/1.1" 200 OK
Pool size: 5 Connections in pool: 1 Current Overflow: -4 Current Checked out connections: 0
<sqlalchemy.ext.asyncio.session.AsyncSession object at 0x121bb31a0>
</code></pre>
<p>It seems that connections do not leak. But what about the session object in the ContextVar? As you can see, it is still there. Is it safe not to reset it? Or does Python somehow reset it?</p>
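<p>For comparison, here is a minimal sketch of resetting the variable explicitly with the token returned by <code>set()</code> (this is the standard contextvars API; whether it is needed here is exactly what I am asking):</p>
<pre><code>from contextvars import ContextVar
from sqlalchemy.ext.asyncio import AsyncSession

session_context: ContextVar = ContextVar("session")

async def use_fresh_session(engine):
    session = AsyncSession(engine)           # same engine as above
    token = session_context.set(session)     # remember the previous state
    try:
        ...                                  # work that reads session_context.get()
    finally:
        await session.close()
        session_context.reset(token)         # restore whatever was set before
</code></pre>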
|
<python><sqlalchemy><fastapi><python-contextvars>
|
2025-01-02 18:58:29
| 1
| 1,019
|
Альберт Александров
|
79,324,668
| 10,886,283
|
How to get only the first occurrence of each increasing value in numpy array?
|
<p>While working on first-passage probabilities, I encountered this problem. I want to find a NumPythonic way (without explicit loops) to leave only the first occurrence of strictly increasing values in each row of a <code>numpy</code> array, while replacing repeated or non-increasing values with zeros. For instance, if</p>
<pre><code>arr = np.array([
[1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5],
[1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 4, 4],
[3, 2, 1, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2]])
</code></pre>
<p>I would like to get as output:</p>
<pre><code>out = np.array([
[1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 5, 0],
[1, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 4, 0],
[3, 0, 0, 0, 0, 0, 0, 0, 4, 5, 0, 0, 0]])
</code></pre>
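<p>For reference, here is one vectorised sketch that reproduces the desired output on this example, using a shifted running maximum (a value is kept only if it is strictly greater than everything before it in its row):</p>
<pre><code>import numpy as np

arr = np.array([
    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5],
    [1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 4, 4],
    [3, 2, 1, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2]])

# running maximum of everything strictly to the left of each position
prev_max = np.maximum.accumulate(arr, axis=1)[:, :-1]
prev_max = np.concatenate(
    [np.full((arr.shape[0], 1), -np.inf), prev_max], axis=1)

out = np.where(arr > prev_max, arr, 0)
print(out)
</code></pre>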
|
<python><arrays><numpy>
|
2025-01-02 18:37:05
| 2
| 509
|
alpelito7
|
79,324,608
| 8,990,329
|
IPv6 Hop-by-Hop Scapy: ValueError: Missing 'dst' attribute
|
<p>I'm experimenting with <code>IPv6</code> Scapy and trying to set <code>Router Alert</code> Hop By Hop option. Here is the code sample:</p>
<pre><code>hdr = IPv6ExtHdrHopByHop(options=[("Router Alert", b'\x01\x00')])
ip6 = IPv6(src="xxxx::xxxx", dst="yyyy::yyyy")
send(ip6 / hdr)
</code></pre>
<p>Execution of this code produces the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/sn/scapy/scapy/supersocket.py", line 391, in send
sx = raw(x)
^^^^^^
File "/home/sn/scapy/scapy/compat.py", line 123, in raw
return bytes(x)
^^^^^^^^
File "/home/sn/scapy/scapy/packet.py", line 609, in __bytes__
return self.build()
^^^^^^^^^^^^
File "/home/sn/scapy/scapy/packet.py", line 768, in build
p = self.do_build()
^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/packet.py", line 751, in do_build
pay = self.do_build_payload()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/packet.py", line 737, in do_build_payload
return self.payload.do_build()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/packet.py", line 748, in do_build
pkt = self.self_build()
^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/packet.py", line 727, in self_build
raise ex
File "/home/sn/scapy/scapy/packet.py", line 718, in self_build
p = f.addfield(self, p, val)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/fields.py", line 240, in addfield
return s + self.struct.pack(self.i2m(pkt, val))
^^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/fields.py", line 2201, in i2m
f = fld.i2len(pkt, fval)
^^^^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/layers/inet6.py", line 890, in i2len
return len(self.i2m(pkt, i))
^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/layers/inet6.py", line 905, in i2m
d = p.alignment_delta(curpos)
^^^^^^^^^^^^^^^^^
AttributeError: While dissecting field 'len': 'tuple' object has no attribute 'alignment_delta'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/sn/scapy/scapy/sendrecv.py", line 486, in send
return _send(
^^^^^^
File "/home/sn/scapy/scapy/sendrecv.py", line 447, in _send
results = __gen_send(socket, x, inter=inter, loop=loop,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sn/scapy/scapy/sendrecv.py", line 409, in __gen_send
s.send(p)
File "/home/sn/scapy/scapy/supersocket.py", line 399, in send
raise ValueError(
ValueError: Missing 'dst' attribute in the first layer to be sent using a native L3 socket ! (make sure you passed the IP layer)
</code></pre>
<p>If I remove the <code>options=[("Router Alert", b'\x01\x00')]</code> and send essentially empty <code>HopByHop</code> extension header as</p>
<pre><code>send (ip6 / IPv6ExtHdrHopByHop())
</code></pre>
<p>it works fine, but does not make a lot of sense. What's wrong with actually specifying the options?</p>
|
<python><ipv6><scapy>
|
2025-01-02 18:09:36
| 1
| 9,740
|
Some Name
|
79,324,524
| 13,971,251
|
AttributeError with instance of model with Generic Foreign Field in, created by post_save signal
|
<p>I have 3 models that I am dealing with here: <code>SurveyQuestion</code>, <code>Update</code>, and <code>Notification</code>. I use a <code>post_save</code> signal to create an instance of the <code>Notification</code> model whenever an instance of <code>SurveyQuestion</code> or <code>Update</code> was created.</p>
<p>The <code>Notification</code> model has a <code>GenericForeignKey</code> which goes to whichever model created it. Inside the Notification model I try to use the <code>ForeignKey</code> to set <code>__str__</code> as the <code>title</code> field of the instance of the model that created it. Like so:</p>
<pre><code>class Notification(models.Model):
source_object = models.ForeignKey(ContentType, on_delete=models.CASCADE)
object_id = models.PositiveIntegerField()
source = GenericForeignKey("source_object", "object_id")
#more stuff
def __str__(self):
return f'{self.source.title} notification'
</code></pre>
<p>I am able to create instances of SurveyQuestion and Update from the admin panel, which is then (supposed to be) creating an instance of Notification. However, when I query instances of <code>Notification</code> in the shell:</p>
<pre><code>from hotline.models import Notification
notifications = Notification.objects.all()
for notification in notifications:
print (f"Notification object: {notification}")
NoneType
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
1 for notification in notifications:
----> 2 print (notification)
File ~/ygzsey/hotline/models.py:27, in Notification.__str__(self)
26 def __str__(self):
---> 27 return f'{self.source.title} notification'
AttributeError: 'NoneType' object has no attribute 'title'
</code></pre>
<p>When I query instances of <code>SurveyQuestion</code>:</p>
<pre><code>from hotline.models import SurveyQuestion
surveys = SurveyQuestion.objects.all()
for survey in surveys:
print (f"Model: {survey.__class__.__name__}")
Model: SurveyQuestion
</code></pre>
<p>When I query instances of <code>Notification</code> and try to print the class name of their <code>ForeignKey</code> field (I labeled it <code>source</code>), I get this:</p>
<pre><code>for notification in notifications:
print (f"Notification for {notification.source.__class__.__name__}")
Notification for NoneType
Notification for NoneType
Notification for NoneType
</code></pre>
<p>So it seems that the <code>SurveyQuestion</code>, <code>Update</code>, and <code>Notification</code> instances are saving properly, but there is some problem with the <code>GenericForeignKey</code>.</p>
<p>I had the <code>post_save</code> create an instance of <code>Notification</code> using <code>Notification(source_object=instance, start_date=instance.start_date, end_date=instance.end_date)</code>, but that would give me an error when trying to save an instance of <code>SurveyQuestion</code> or <code>Update</code> in the admin panel:</p>
<pre><code>ValueError at /admin/hotline/update/add/
Cannot assign "<Update: Update - ad>": "Notification.source_object" must be a "ContentType" instance.
</code></pre>
<p>So I changed it to <code>Notification(source_object=ContentType.objects.get_for_model(instance), start_date=instance.start_date, end_date=instance.end_date)</code>.</p>
<p>My full <code>models.py</code>:</p>
<pre><code>from django.db import models
from datetime import timedelta
from django.utils import timezone
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_save
from django.dispatch import receiver
def tmrw():
return timezone.now() + timedelta(days=1)
class Notification(models.Model):
source_object = models.ForeignKey(ContentType, on_delete=models.CASCADE)
object_id = models.PositiveIntegerField()
source = GenericForeignKey("source_object", "object_id")
start_date = models.DateTimeField(default=timezone.now)
end_date = models.DateTimeField(default=tmrw)
class Meta:
verbose_name = 'Notification'
verbose_name_plural = f'{verbose_name}s'
def __str__(self):
return f'{self.source.title} notification'
class Update(models.Model):
title = models.CharField(max_length=25)
update = models.TextField()
start_date = models.DateTimeField(default=timezone.now)
end_date = models.DateTimeField(default=tmrw)
#notification = GenericRelation(Notification, related_query_name='notification')
class Meta:
verbose_name = 'Update'
verbose_name_plural = f'{verbose_name}s'
def __str__(self):
return f'{self.__class__.__name__} - {self.title}'
class SurveyQuestion(models.Model):
title = models.CharField(max_length=25)
question = models.TextField()
start_date = models.DateTimeField(default=timezone.now)
end_date = models.DateTimeField(default=tmrw)
#notification = GenericRelation(Notification, related_query_name='notification')
class Meta:
verbose_name = 'Survey'
verbose_name_plural = f'{verbose_name}s'
def __str__(self):
return f'{self.__class__.__name__} - {self.title}'
class SurveyOption(models.Model):
survey = models.ForeignKey(SurveyQuestion, on_delete=models.CASCADE, related_name='options')
option = models.TextField()
id = models.AutoField(primary_key=True)
class Meta:
verbose_name = 'Survey option'
verbose_name_plural = f'{verbose_name}s'
def __str__(self):
return f'{self.survey.title} option #{self.id}'
@receiver(post_save)
def create_notification(instance, **kwargs):
#"""
print (f"instance: {instance}")
print (f"instance.__class__: {instance.__class__}")
print (f"instance.__class__.__name__: {instance.__class__.__name__}")
#"""
senders = ['SurveyQuestion', 'Update']
if instance.__class__.__name__ in senders:
notification = Notification(source_object=ContentType.objects.get_for_model(instance), start_date=instance.start_date, end_date=instance.end_date)
notification.save()
post_save.connect(create_notification)
</code></pre>
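<p>For reference, a minimal sketch of the signal handler that also fills in <code>object_id</code>; my understanding is that a <code>GenericForeignKey</code> resolves to <code>None</code> unless both the <code>ContentType</code> and the object id are set (field names match the models above):</p>
<pre><code>from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save)
def create_notification(sender, instance, created, **kwargs):
    if created and instance.__class__.__name__ in ('SurveyQuestion', 'Update'):
        Notification.objects.create(
            source_object=ContentType.objects.get_for_model(instance),
            object_id=instance.pk,   # without this, notification.source is None
            start_date=instance.start_date,
            end_date=instance.end_date,
        )
</code></pre>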
|
<python><django><django-models><generic-foreign-key><django-model-field>
|
2025-01-02 17:33:43
| 1
| 1,181
|
Kovy Jacob
|
79,324,475
| 10,140,821
|
capture file name as variable based on substring in python
|
<p>I have the below scenario in <code>python</code>: I want to check the current working directory and the files present in that directory, and create a variable from one of the file names.</p>
<p>I have done it like below:</p>
<pre><code>import os
# current working directory
print(os.getcwd())
# files present in that directory
dir_contents = os.listdir('.')
# below is the output of the dir_contents
print(dir_contents)
['.test.json.crc', '.wf_list_dir_param.py.crc', 'test.json', 'wf_list_dir_param.py']
</code></pre>
<p>Now from the <code>dir_contents</code> list I want to extract the <code>wf_list_dir</code> as a variable.</p>
<p>I need to do below</p>
<ol>
<li>find out the elements that start with <code>wf</code> and end with <code>param.py</code></li>
<li>extract everything before <code>_param.py</code> as a variable</li>
</ol>
<p>How do I do that?</p>
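<p>For context, a small sketch of what I have in mind under those two rules (starts with <code>wf</code>, ends with <code>param.py</code>, keep everything before <code>_param.py</code>):</p>
<pre><code>import os

dir_contents = os.listdir('.')

matches = [
    f.removesuffix('_param.py')   # Python 3.9+; otherwise f[:-len('_param.py')]
    for f in dir_contents
    if f.startswith('wf') and f.endswith('param.py')
]

wf_name = matches[0] if matches else None   # e.g. 'wf_list_dir'
print(wf_name)
</code></pre>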
|
<python>
|
2025-01-02 17:12:03
| 2
| 763
|
nmr
|
79,324,398
| 3,837,788
|
Custom implementation of Python http.server doesn't work properly when HTTPS is enabled
|
<p>I am aware that Python <code>http.server</code> module has been conceived mainly for testing purposes, since there are some well known security issues in its implementation.</p>
<p>That said, for a very humble personal project, I implemented my own class, derived from <code>http.server.BaseHTTPRequestHandler</code>, where I override the <code>do_GET</code> method. Everything was working properly until I updated Python version to 3.12: it seems this version produced some changes to the <code>ssl</code> module too and so I had to refactor a bit the code.</p>
<p>With this regard, this is the condensed content of the <code>main.py</code> file:</p>
<pre><code># ...here are import statements and other functions...
class RequestHandler(BaseHTTPRequestHandler):
# ...here the other methods are defined...
def do_GET(self):
resobj = self._manage_get_request()
self.send_response(resobj.status_code)
self._send_headers(resobj.headers)
self._send_body(resobj.body)
def main():
print('Running http server...')
address = (_HTTP_SERV_ADDR, _HTTP_SERV_PORT)
httpd = HTTPServer(address, RequestHandler)
if not _DEBUG_ENABLED:
print('Enabling TLS protocol...')
context = _ssl.SSLContext(_ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=_CERTIFICATE_FILE_PATH,
keyfile=_KEY_FILE_PATH)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
print(f'Server is listening ({httpd.server_address[0]}:{httpd.server_port})...')
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
print('Shutting down http server...')
httpd.server_close()
</code></pre>
<p>If the debug is enabled, and so the code portion related to HTTPS is skipped, the server works properly. Instead, when the debug is disabled, the server seems to work fine as well but after some hours it becomes unresponsive: all the connection requests time out.</p>
<p>Would any of you kindly provide me some hints on what to check to better understand what is going on and why?</p>
<p><strong>Further information on the implementation</strong></p>
<p>While <code>send_response</code> is already defined in <code>BaseHTTPRequestHandler</code>, the other methods called in <code>do_GET</code> are defined in my implementation. In particular:</p>
<ul>
<li><code>_manage_get_request</code> is a kind of <code>if</code> statement to properly manage the request, taking into account the parameters provided via the query; it returns a custom Response object with code, headers and body</li>
<li><code>_send_headers</code> is basically a wrapper of the <code>send_header</code> method of parent class</li>
<li><code>_send_body</code> simply writes the response body to the socket buffer</li>
</ul>
<hr />
<p><strong>Jan 6th 2025 Update</strong></p>
<p>Since I submitted this question, I tried several changes to code but actually none of them was resolving the issue.</p>
<p>So, I decided to implement a <a href="https://gist.github.com/rodolfocangiotti/b3e4da3035f1b81aa98b074e8f831d4b" rel="nofollow noreferrer">humble decorator</a> to trace the function call stack. With this little tool, I noticed that the code was hanging inside the <code>get_request</code> method of the <code>TCPServer</code> class, where a new TCP connection is accepted:</p>
<pre><code>[...]
----- Calling BaseServer.service_actions on 2025-01-05 15:59:15.092364 -----
BaseServer.service_actions execution ended after 0:00:00.000015
----- Calling BaseServer._handle_request_noblock on 2025-01-05 15:59:15.282698 -----
----- Calling TCPServer.get_request on 2025-01-05 15:59:15.282869 -----
</code></pre>
<p>Honestly, I don't know the reason for this behaviour (I only see that it happens when the server receives more than one request at almost the same time, just a matter of tenths of a second), but in the meantime I overrode the method in question by adding a simple timeout mechanism:</p>
<pre><code>class TimeoutException(BaseException):
pass
def alarm_handler(signum, frame):
print('Timeout reached!')
raise TimeoutException
def get_request(self):
# Overwrite method of TCPServer class...
_signal.signal(_signal.SIGALRM, alarm_handler)
_signal.alarm(30)
try:
result = self.socket.accept()
_signal.alarm(0)
return result
except TimeoutException as e:
_signal.alarm(0)
raise OSError from e # OSError is already managed by caller...
</code></pre>
<p>I know it is not a real solution to the issue, but it lets the server keep working somehow, so it might be considered a temporary workaround.</p>
<p>That said, could any of you explain what are the possible causes of this issue and how to solve it? The application in question is running inside a Docker image derived from <code>ubuntu:latest</code>.</p>
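<p>In case it helps, here is a minimal sketch of the two mitigations I am currently considering (both are guesses, not a confirmed fix): serving with <code>ThreadingHTTPServer</code> so one stalled TLS handshake cannot block the accept loop, and putting a timeout on the listening socket so a hung accept/handshake eventually raises instead of blocking forever.</p>
<pre><code>import ssl
from http.server import ThreadingHTTPServer   # one thread per connection, Python 3.7+

def main():
    address = (_HTTP_SERV_ADDR, _HTTP_SERV_PORT)   # same constants as above
    httpd = ThreadingHTTPServer(address, RequestHandler)
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=_CERTIFICATE_FILE_PATH, keyfile=_KEY_FILE_PATH)
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
    httpd.socket.settimeout(30)   # accept()/handshake give up after 30 s (raises OSError)
    try:
        httpd.serve_forever()
    finally:
        httpd.server_close()
</code></pre>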
|
<python><http><ssl><timeout>
|
2025-01-02 16:38:28
| 0
| 566
|
rudicangiotti
|
79,324,137
| 357,024
|
Why is Python's IsolatedAsyncioTestCase so slow?
|
<p>I'm working on writing test cases for some Python code that uses asyncio. I'm noticing a significant performance degradation when my test classes inherit from <code>unittest.IsolatedAsyncioTestCase</code> vs. using the non-async <code>TestCase</code> and manually calling a coroutine using <code>asyncio.run</code>.</p>
<p>I've tested this on Python 3.13.1 and 3.10.10 on my personal desktop (Mac Studio M1 Max) and on a Linode Nanode VPS. I'm noticing around 10x slowdowns across testing platforms.</p>
<p>Below is a simplified example, my actual use case is far more complicated.</p>
<pre><code># test.py
import asyncio
from unittest import TestCase
from unittest import IsolatedAsyncioTestCase
async def f():
for i in range(100000):
await asyncio.sleep(0)
class TestManualAsync(TestCase):
def test_async(self):
asyncio.run(f())
class TestIsolatedAsync(IsolatedAsyncioTestCase):
async def test_async(self):
await f()
$ python3 -m unittest test.py -k TestManualAsync
.
----------------------------------------------------------------------
Ran 1 test in 0.614s
$ python3 -m unittest test.py -k TestIsolatedAsync
.
----------------------------------------------------------------------
Ran 1 test in 5.115s
</code></pre>
<p>You can see from the above it's much faster to just trigger a coroutine in a traditional test class. I don't see why this should be the case.</p>
|
<python><python-3.x><python-asyncio><python-unittest>
|
2025-01-02 14:56:42
| 1
| 61,290
|
Mike
|
79,323,979
| 999,162
|
Saving a website screenshot with pyqt6 / QtWebEngineView is always empty
|
<p>I'm trying to save a screenshot of a website with QtWebEngineView, but the resulting image always ends up being empty.</p>
<pre><code>
import sys
import time
from PyQt6.QtCore import *
from PyQt6.QtGui import *
from PyQt6.QtWidgets import QApplication, QWidget
from PyQt6.QtWebEngineWidgets import QWebEngineView
class Screenshot(QWebEngineView):
def __init__(self):
self.app = QApplication(sys.argv)
QWebEngineView.__init__(self)
self._loaded = False
self.loadFinished.connect(self._loadFinished)
def capture(self, url, output_file):
self.resize(QSize(1024, 768))
self.load(QUrl(url))
self.wait_load()
image = QImage(self.size(), QImage.Format.Format_ARGB32)
painter = QPainter(image)
self.render(painter)
painter.end()
image.save(output_file)
def wait_load(self, delay=0):
# process app events until page loaded
while not self._loaded:
self.app.processEvents()
time.sleep(delay)
self._loaded = False
def _loadFinished(self, result):
self._loaded = True
s = Screenshot()
s.capture("https://www.google.com", "screenshot.png")
</code></pre>
<p>I've tried variations of the code where the QWebEngineView is rendered in an application window, and that part works. The <code>wait_load</code> also ensures the website is loaded before proceeding to paint. Yet somehow the rendered image is always a fully white PNG instead of the website.</p>
<p>Edit: Voting to reopen because the solution in the "duplicate" does not seem to resolve this case.</p>
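<p>For reference, one variation I have been trying (a sketch, not verified against QtWebEngine internals): show the view, wait for <code>loadFinished</code>, give the separate render/GPU process a moment on a timer, and then grab the widget instead of rendering it with <code>QPainter</code>:</p>
<pre><code>import sys
from PyQt6.QtCore import QSize, QTimer, QUrl
from PyQt6.QtWidgets import QApplication
from PyQt6.QtWebEngineWidgets import QWebEngineView

app = QApplication(sys.argv)
view = QWebEngineView()
view.resize(QSize(1024, 768))
view.show()   # the view has to be shown for the page to be composited

def on_load_finished(ok):
    # give the render process a moment to paint before grabbing the widget
    QTimer.singleShot(1000, lambda: (view.grab().save("screenshot.png"), app.quit()))

view.loadFinished.connect(on_load_finished)
view.load(QUrl("https://www.google.com"))
app.exec()
</code></pre>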
|
<python><web-scraping><pyqt><pyqt6>
|
2025-01-02 14:13:41
| 2
| 5,274
|
kontur
|
79,323,929
| 2,123,706
|
how do I find all polars dataframes in python
|
<p>I have a long script in python, predominantly pandas, but shifting to polars.</p>
<p>I am reviewing memory of items.</p>
<p>To find the 10 largest objects currently in use, using <code>locals().items()</code> and <code>sys.getsizeof()</code>, I run:</p>
<pre><code>import sys
def sizeof_fmt(num, suffix='B'):
''' by Fred Cirera, https://stackoverflow.com/a/1094933/1870254, modified'''
for unit in ['','Ki','Mi','Gi','Ti','Pi','Ei','Zi']:
if abs(num) < 1024.0:
return "%3.1f %s%s" % (num, unit, suffix)
num /= 1024.0
return "%.1f %s%s" % (num, 'Yi', suffix)
for name, size in sorted(((name, sys.getsizeof(value)) for name, value in list(
locals().items())), key= lambda x: -x[1])[:10]:
print("{:>30}: {:>8}".format(name, sizeof_fmt(size)))
</code></pre>
<p>But I know I have some polars objects; when I run:</p>
<pre><code>pl.DataFrame.estimated_size(data)
</code></pre>
<p>I get a value of 400MB, which would make it the largest in my current script.</p>
<p>The polars dataframes are not returned in my <code>locals()</code> call.</p>
<p>Is there a way to determine all polars objects currently in use?</p>
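<p>For context, here is a sketch of one way to enumerate them regardless of which namespace they live in, assuming the <code>gc</code> module tracks polars DataFrame objects (<code>locals()</code> only sees the current scope, and <code>sys.getsizeof</code> does not reflect a DataFrame's real memory anyway):</p>
<pre><code>import gc
import polars as pl

def polars_frames_by_size():
    frames = [o for o in gc.get_objects() if isinstance(o, pl.DataFrame)]
    return sorted(frames, key=lambda df: df.estimated_size(), reverse=True)

for df in polars_frames_by_size()[:10]:
    print(df.shape, f"{df.estimated_size('mb'):.1f} MB")
</code></pre>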
|
<python><dataframe><python-polars>
|
2025-01-02 13:57:39
| 0
| 3,810
|
frank
|
79,323,746
| 11,850,322
|
Why do I keep getting a NumPy RuntimeWarning?
|
<p>Here is some sample data; even though there are no negative values or <code>np.nan</code>, it still shows the warning message:</p>
<p>Data:</p>
<pre><code> gvkey sale ebit
4 1000 44.8 16.8
5 1000 53.2 11.5
6 1000 42.9 6.2
7 1000 42.4 0.9
8 1000 44.2 5.3
9 1000 51.9 9.7
</code></pre>
<p>Function:</p>
<pre><code>def calculate_ln_values(df):
conditions_ebit = [
df['ebit'] >= 0.0,
df['ebit'] < 0.0
]
choices_ebit = [
np.log(1 + df['ebit']),
np.log(1 - df['ebit']) * -1
]
df['lnebit'] = np.select(conditions_ebit, choices_ebit, default=np.nan)
conditions_sale = [
df['sale'] >= 0.0,
df['sale'] < 0.0
]
choices_sale = [
np.log(1 + df['sale']),
np.log(1 - df['sale']) * -1
]
df['lnsale'] = np.select(conditions_sale, choices_sale, default=np.nan)
return df
</code></pre>
<p>Run</p>
<pre><code>calculate_ln_values(data)
</code></pre>
<p>Error Warning:</p>
<pre><code>C:\Users\quoc\anaconda3\envs\uhart\Lib\site-packages\pandas\core\arraylike.py:399: RuntimeWarning: invalid value encountered in log
result = getattr(ufunc, method)(*inputs, **kwargs)
C:\Users\quoc\anaconda3\envs\uhart\Lib\site-packages\pandas\core\arraylike.py:399: RuntimeWarning: invalid value encountered in log
result = getattr(ufunc, method)(*inputs, **kwargs)
</code></pre>
<p>I would very much appreciate it if someone could help me with this issue.</p>
<p>---- Edit: reply to Answer of @Emi OB and @Quang Hoang: ---------------</p>
<p>The formula as in the paper is:</p>
<p><a href="https://i.sstatic.net/UDHZnHaE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDHZnHaE.png" alt="enter image description here" /></a></p>
<p>ln(1+EBIT) if EBIT ≥ 0</p>
<p>-ln(1-EBIT) if EBIT < 0</p>
<p>so my code:</p>
<pre><code>np.log(1 + df['ebit']),
np.log(1 - df['ebit']) * -1
</code></pre>
<p>follows the paper.</p>
<p>The argument of <code>np.log(1 - df['ebit'])</code> cannot be negative, since that branch only applies under the condition <code>ebit < 0</code>.</p>
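<p>For context, a small sketch of my current understanding of why the warning appears even with valid data, together with an equivalent formulation that avoids evaluating <code>log</code> on invalid values (the sample values here are illustrative):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"sale": [44.8, 53.2, 42.9], "ebit": [16.8, 11.5, -6.2]})

# np.select evaluates BOTH choices on ALL rows before selecting, so
# np.log(1 - x) is also computed where x >= 0, which triggers the warning.

# Equivalent, warning-free version of  ln(1+x) if x >= 0 else -ln(1-x):
df["lnebit"] = np.sign(df["ebit"]) * np.log1p(np.abs(df["ebit"]))
df["lnsale"] = np.sign(df["sale"]) * np.log1p(np.abs(df["sale"]))
print(df)
</code></pre>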
|
<python><pandas><numpy>
|
2025-01-02 12:43:34
| 2
| 1,093
|
PTQuoc
|
79,323,706
| 2,526,586
|
Jinja linking resources from separate projects
|
<p>With Jinja2, I can normally link to a CSS stylesheet from HTML template with something like:</p>
<pre><code> <link rel="stylesheet" href="{{ url_for('static', filename='styles.css') }}" />
</code></pre>
<p>given that "static" is the relative directory containing the styles.css.</p>
<p>My problem is, while I am working on my web application, I am concurrently working on a SCSS/SASS project that resides in another project folder. I want my web application to link to the generated CSS file in the SCSS/SASS project folder.
But for every update/build in the SCSS/SASS project, I don't want to manually copy the generated CSS file over to the script folder of the web application project.</p>
<p>What would be the best setup and practice to achieve this? Are there any additional concerns when building/packaging my web application for production?</p>
<p>For the context, I am using VSCode with the "Live Sass Compiler" extension for auto-building my CSS file. For production, I am planning to containerise the web application with gunicorn and Docker.</p>
<p>Some of my concerns:</p>
<ul>
<li>I understand that I can do the file copying in the dockerfile, but I am hoping to have a solution that works for both development and production.</li>
<li>The SCSS/SASS project is hosted with Git, but the generated CSS files may not be pushed to the remote repo. When future colleagues join either project (by collaborating via Git), the ideal scenario would be that
<ul>
<li>people working on SCSS/SASS don't need to know anything about the web application project</li>
<li>people working on the web application do not need to worry too much about the fiddling of the SCSS/SASS project. They may have cloned the SCSS/SASS repo to their local as this project is a dependency of the web application project, but they may or may not modify anything in the SCSS project.</li>
</ul>
</li>
<li>Although the question here is about Jinja linking to a auto-generated CSS file, I wish the resolution could be project-agnostic where possible, meaning that if I apply the same solution to linking things such as a separate JavaScript library project, a font project, or other web resource projects, I would hope they work the same way.</li>
</ul>
<p>Thank you all in advance.</p>
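<p>For what it's worth, one option I am weighing (assuming the web app is Flask, as the tags suggest) is to register the sibling SCSS project's output folder as an extra static location via a Blueprint, so nothing has to be copied; the path below is a placeholder:</p>
<pre><code>from flask import Flask, Blueprint

app = Flask(__name__)

# Serve the sibling project's compiled CSS directly; adjust the path to your layout.
scss_assets = Blueprint(
    "scss_assets", __name__,
    static_folder="../my-scss-project/dist",   # hypothetical location of the generated CSS
    static_url_path="/scss-static",
)
app.register_blueprint(scss_assets)

# In a template:
#   <link rel="stylesheet"
#         href="{{ url_for('scss_assets.static', filename='styles.css') }}" />
</code></pre>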
|
<python><visual-studio-code><flask><sass><jinja2>
|
2025-01-02 12:30:37
| 0
| 1,342
|
user2526586
|
79,323,535
| 14,754,870
|
aioboto3 parallel page fetching from s3
|
<p>I have an S3 bucket. It contains many thousands of objects. I want to use a paginator to extract all the lists of object keys in parallel using asyncio and aioboto3.</p>
<p>Is this possible?
It seems to not work. If it's not possible, then why?</p>
<p>This is what I tried that raises an exception:</p>
<pre><code>async def parallel_pagination(bucket_name, bucket_path):
async with aioboto3_session.client("s3") as aios3_client:
paginator = aios3_client.get_paginator("list_objects_v2")
page_iterator = paginator.paginate(Bucket=bucket_name, Prefix=bucket_path)
coroutines = [page.get("Contents",[]) async for page in page_iterator]
pages = await asyncio.gather(*coroutines)
return pages
</code></pre>
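<p>For context, a sketch of what I understand to be the constraint (not tested here): pages have to be consumed sequentially because each page carries the continuation token for the next, so the <code>async for</code> loop itself cannot be fanned out; parallelism would have to come from listing several prefixes at once:</p>
<pre><code>import asyncio
import aioboto3

session = aioboto3.Session()

async def list_keys(bucket_name, prefix):
    keys = []
    async with session.client("s3") as s3:
        paginator = s3.get_paginator("list_objects_v2")
        # pagination is inherently sequential: page N+1 needs page N's token
        async for page in paginator.paginate(Bucket=bucket_name, Prefix=prefix):
            keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

async def parallel_listing(bucket_name, prefixes):
    # parallelism across independent prefixes, not across pages of one listing
    results = await asyncio.gather(*(list_keys(bucket_name, p) for p in prefixes))
    return [key for keys in results for key in keys]
</code></pre>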
|
<python><amazon-s3><python-asyncio><boto3>
|
2025-01-02 11:14:25
| 0
| 348
|
PurpleHacker
|
79,323,285
| 1,305,993
|
Executorch installation issue
|
<p>I am trying to set up environmenty like in tutorial: <a href="https://pytorch.org/executorch/stable/getting-started-setup" rel="nofollow noreferrer">https://pytorch.org/executorch/stable/getting-started-setup</a></p>
<p>When running <code>./install_requirements.sh</code> I get an error:</p>
<pre><code>CMake Error at CMakeLists.txt:45 (project):
  Generator

    NMake Makefiles

  does not support toolset specification, but toolset

    Clang

  was specified.
</code></pre>
<p>I tried to change CMAKE_ARGS += " -T ClangCL" to different values but nothing helps</p>
|
<python><pytorch>
|
2025-01-02 09:22:35
| 1
| 1,309
|
RCH
|
79,323,215
| 12,466,687
|
How to combine columns with extra strings into a concatenated string column in Polars?
|
<p>I am trying to add another column that contains a combination of two columns (Total and percentage) in a result column (labels_value) that looks like: <code>(Total) percentage%</code>.</p>
<p>Basically, I want to wrap the <code>Total</code> column in brackets and append a <code>%</code> string to the end of the combination of these two columns.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.Config(tbl_rows=21) # increase repr defaults
so_df = pl.from_repr("""
┌──────────────┬─────────────────────┬─────┬───────┬────────────┬─────────┐
│ Flag         ┆ Category            ┆ len ┆ Total ┆ percentage ┆ value   │
│ ---          ┆ ---                 ┆ --- ┆ ---   ┆ ---        ┆ ---     │
│ str          ┆ str                 ┆ i64 ┆ i64   ┆ f64        ┆ f64     │
╞══════════════╪═════════════════════╪═════╪═══════╪════════════╪═════════╡
│ Outof Range  ┆ Thyroid             ┆ 7   ┆ 21    ┆ 33.33      ┆ 33.33   │
│ Outof Range  ┆ Inflammatory Marker ┆ 2   ┆ 8     ┆ 25.0       ┆ 25.0    │
│ Outof Range  ┆ Lipid               ┆ 12  ┆ 63    ┆ 19.05      ┆ 19.05   │
│ Outof Range  ┆ LFT                 ┆ 14  ┆ 87    ┆ 16.09      ┆ 16.09   │
│ Outof Range  ┆ DLC                 ┆ 11  ┆ 126   ┆ 8.73       ┆ 8.73    │
│ Outof Range  ┆ Vitamin             ┆ 1   ┆ 14    ┆ 7.14       ┆ 7.14    │
│ Outof Range  ┆ CBC                 ┆ 2   ┆ 45    ┆ 4.44       ┆ 4.44    │
│ Outof Range  ┆ KFT                 ┆ 2   ┆ 56    ┆ 3.57       ┆ 3.57    │
│ Outof Range  ┆ Urine Examination   ┆ 1   ┆ 28    ┆ 3.57       ┆ 3.57    │
│ Within Range ┆ Thyroid             ┆ 14  ┆ 21    ┆ 66.67      ┆ -66.67  │
│ Within Range ┆ Inflammatory Marker ┆ 6   ┆ 8     ┆ 75.0       ┆ -75.0   │
│ Within Range ┆ Lipid               ┆ 51  ┆ 63    ┆ 80.95      ┆ -80.95  │
│ Within Range ┆ LFT                 ┆ 73  ┆ 87    ┆ 83.91      ┆ -83.91  │
│ Within Range ┆ DLC                 ┆ 115 ┆ 126   ┆ 91.27      ┆ -91.27  │
│ Within Range ┆ Vitamin             ┆ 13  ┆ 14    ┆ 92.86      ┆ -92.86  │
│ Within Range ┆ CBC                 ┆ 43  ┆ 45    ┆ 95.56      ┆ -95.56  │
│ Within Range ┆ KFT                 ┆ 54  ┆ 56    ┆ 96.43      ┆ -96.43  │
│ Within Range ┆ Urine Examination   ┆ 27  ┆ 28    ┆ 96.43      ┆ -96.43  │
│ Within Range ┆ Anemia              ┆ 38  ┆ 38    ┆ 100.0      ┆ -100.0  │
│ Within Range ┆ Diabetes            ┆ 22  ┆ 22    ┆ 100.0      ┆ -100.0  │
│ Within Range ┆ Electrolyte         ┆ 46  ┆ 46    ┆ 100.0      ┆ -100.0  │
└──────────────┴─────────────────────┴─────┴───────┴────────────┴─────────┘
""")
</code></pre>
<p>I have tried the three ways below and none of them worked:</p>
<pre class="lang-py prettyprint-override"><code>(so_df
# .with_columns(labels_value = "("+str(pl.col("Total"))+") "+str(pl.col("percentage"))+"%")
# .with_columns(labels_value = "".join(["(",str(pl.col("Total")),") ",str(pl.col("percentage")),"%"]))
# .with_columns(labels_value =pl.concat_str([pl.col("Total"),pl.col("percentage")])))
</code></pre>
<p>The desired result would be to add a new column like:</p>
<pre><code>┌──────────────┐
│ labels_value │
│ ---          │
│ str          │
╞══════════════╡
│ (21) 33.33%  │
│ (8) 25%      │
│ (63) 19.05%  │
│ (87) 16.09%  │
│ (126) 8.73%  │
│ …            │
</code></pre>
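<p>For completeness, a small sketch using <code>pl.format</code>, which builds the string column in one expression (float formatting may differ slightly from the desired output, e.g. <code>25.0%</code> instead of <code>25%</code>):</p>
<pre class="lang-py prettyprint-override"><code>result = so_df.with_columns(
    labels_value=pl.format("({}) {}%", pl.col("Total"), pl.col("percentage"))
)
print(result.select("labels_value").head())
</code></pre>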
|
<python><dataframe><python-polars>
|
2025-01-02 08:56:51
| 1
| 2,357
|
ViSa
|
79,323,147
| 4,745,607
|
Integrating Chaquopy 16.0 in a Kotlin Multiplatform shared module: cannot access Python in the shared module
|
<p>I am developing a Kotlin Multiplatform app. In its shared module I want to use the numpy package to perform operations on some .pkl files.
I am following the setup described here: <a href="https://chaquo.com/chaquopy/doc/current/android.html" rel="nofollow noreferrer">Official Setup Link-Kotlin</a>.
So far, this is what I have done.
Project settings.gradle.kts</p>
<pre><code>rootProject.name = "Assignment"
enableFeaturePreview("TYPESAFE_PROJECT_ACCESSORS")
pluginManagement {
repositories {
google {
mavenContent {
includeGroupAndSubgroups("androidx")
includeGroupAndSubgroups("com.android")
includeGroupAndSubgroups("com.google")
}
}
mavenCentral()
gradlePluginPortal()
}
}
dependencyResolutionManagement {
repositories {
google {
mavenContent {
includeGroupAndSubgroups("androidx")
includeGroupAndSubgroups("com.android")
includeGroupAndSubgroups("com.google")
}
}
mavenCentral()
}
}
include(":composeApp")
include(":shared")
</code></pre>
<p>Project-> build.gradle.kts</p>
<pre><code> plugins {
// this is necessary to avoid the plugins to be loaded multiple times
// in each subproject's classloader
alias(libs.plugins.androidApplication) apply false
alias(libs.plugins.androidLibrary) apply false
alias(libs.plugins.composeMultiplatform) apply false
alias(libs.plugins.composeCompiler) apply false
alias(libs.plugins.kotlinMultiplatform) apply false
id("com.chaquo.python") version "16.0.0" apply false
}
</code></pre>
<p>And the
Shared-> build.gradle.kts</p>
<pre><code> import org.jetbrains.kotlin.gradle.ExperimentalKotlinGradlePluginApi
import org.jetbrains.kotlin.gradle.dsl.JvmTarget
plugins {
alias(libs.plugins.kotlinMultiplatform)
alias(libs.plugins.androidLibrary)
id("com.chaquo.python")
}
kotlin {
androidTarget {
@OptIn(ExperimentalKotlinGradlePluginApi::class)
compilerOptions {
jvmTarget.set(JvmTarget.JVM_11)
}
}
listOf(
iosX64(),
iosArm64(),
iosSimulatorArm64()
).forEach { iosTarget ->
iosTarget.binaries.framework {
baseName = "Shared"
isStatic = true
}
}
sourceSets {
androidMain.dependencies {
}
iosMain.dependencies {
}
commonMain.dependencies {
implementation(libs.multik.core)
implementation(libs.multik.kotlin)
}
}
}
android {
namespace = "com.assignment.shared"
compileSdk = libs.versions.android.compileSdk.get().toInt()
compileOptions {
sourceCompatibility = JavaVersion.VERSION_11
targetCompatibility = JavaVersion.VERSION_11
}
defaultConfig {
minSdk = libs.versions.android.minSdk.get().toInt()
ndk {
// On Apple silicon, you can omit x86_64.
abiFilters += listOf("arm64-v8a")
}
}
}
chaquopy {
defaultConfig {
buildPython ("/usr/local/bin/python3")
}
productFlavors { }
sourceSets { }
}
</code></pre>
<p>These are the config files.
The first warning I get is:</p>
<pre><code>Warning: Failed to compile to .pyc format: [/usr/local/bin/python3] does not appear to be a valid Python command. See https://chaquo.com/chaquopy/doc/current/android.html#android-bytecode
</code></pre>
<p>I tried accessing python3 via the terminal and it works fine.</p>
<p>When I try importing Python in the shared module's class, it does not get imported.
How can I fix this so I can access the numpy package in my KMM application?</p>
<p>EDIT: I would also welcome suggestions for any other library or workaround to operate on a .pkl file in a KMM project (iOS & Android), or help converting the Python script below into Kotlin using the multik library.</p>
<pre><code> # -*- coding: utf-8 -*-
import numpy as np
import pickle
def seg_stride(sig, dz, f_start, est_t):
"""! Estimate stride start and end for walking segment
@param sig Moving standard deviation of heel
@param dz Depth difference between heels
@param f_start Start frame of walking segment
@param est_t Estimated duration of one stride
@return fp_lr Estimated frames for stride [start, end]
"""
cur_len = len(sig)
m_thres = np.mean(sig[sig < np.median(sig)]) # get the mean for values less than median
[idx] = np.where(np.diff((sig < m_thres) * 1) > 0) # the search start point for each stride, where the heel z goes below threshold
# Filter idx such that there is only one in each stride
# upward zero-crossings to nearest time step
[upcross] = np.where((dz[:-1] <= 0) & (dz[1:] > 0))
# downward zero-crossings
[downcross] = np.where((dz[:-1] >= 0) & (dz[1:] < 0))
if len(upcross) >= len(downcross):
cross = upcross
else:
cross = downcross
cross = np.concatenate([[0], cross, [cur_len]])
filtered_idx = []
# Iterate through the intervals in downcross
for i in range(len(cross) - 1):
start = cross[i]
end = cross[i + 1]
# Find elements in idx that are in the interval (start, end)
elements_in_interval = idx[(idx >= start) & (idx < end)]
# If there is exactly one element, keep it; otherwise, keep the smallest element
if len(elements_in_interval) >= 1:
filtered_idx.append(elements_in_interval[0])
# Convert filtered_idx back to a numpy array
filtered_idx = np.array(filtered_idx)
num_fp = len(filtered_idx) # no. of strides
if num_fp > 1:
fp = np.zeros((num_fp))
# refine and record the corresponding frame
for k in range(num_fp):
cur_t = filtered_idx[k]
cur_sig = sig[cur_t : min(cur_len, cur_t + round(0.25 * est_t))] # search space for one stride, start point to 1/4 of estimated stride or end of vid
# locate the local minimum
s_idx = np.argmin(cur_sig)
t2 = min(cur_len, cur_t + s_idx)
t1 = max(0, cur_t - round(0.2 * est_t)) # what is this for?
t_idx = np.argmax(dz[t1 : t2 + 1]) # checking step to see if there is an earlier peak?
fp[k] = f_start + t1 + t_idx
# set fp_left: each row is [start frame, end frame] of a valid stride
fp_lr = np.zeros((num_fp - 1, 2), dtype=int)
for k in range(num_fp - 1):
fp_lr[k, 0] = int(fp[k])
fp_lr[k, 1] = int(fp[k + 1])
else:
fp_lr = np.empty((0, 2), dtype=int)
return fp_lr
def dist2plane_new(points, ground_plane):
"""! Project 3D points to plane and compute shortest distance
@param points Input 3D points
@param ground_plane ground plane parameters (a,b,c,d)
@return projected_points Projected points on ground plane
@return distance Shortest distance between points and plane
"""
a, b, c, d = ground_plane
projected_points = []
distance = []
for point in points:
s, u, v = point
t = (d - a * s - b * u - c * v) / (a * a + b * b + c * c)
# point closest to plane
x = s + a * t
y = u + b * t
z = v + c * t
dist = abs(a * s + b * u + c * z + d) / (np.sqrt(a * a + b * b + c * c))
projected_points.append([x, y, z])
distance.append(dist)
return np.array(projected_points), np.array(distance)
def refine_stride_boundary(angle, foot_on, cur_t, c_margin, nb_frames):
# set local segment
t1 = max(0, cur_t - c_margin)
t2 = min(nb_frames - 1, cur_t + c_margin)
# refine the start of stride by finding the largest decrease in heel angle
idx = local_argmin_diff(angle, t1, t2)
t = t1 + idx
# further refine by ensuring foot is on ground at t
if t not in foot_on:
# if foot not on ground, adjust to next frame when foot is on ground
next_foot_on = foot_on[(foot_on - t)>0]
if len(next_foot_on)>0:
t = next_foot_on[0]
return t
def find_boundaries(fp, angle, angle_other, foot_on, est_t, nb_frames):
c_margin = round(est_t / 10)
t = []
# refine stride boundaries (foot on)
for k in range(len(fp)):
# refine start and end of stride
t_start = refine_stride_boundary(angle, foot_on, fp[k, 0], c_margin, nb_frames)
t_end = refine_stride_boundary(angle, foot_on, fp[k, 1], c_margin, nb_frames)
# find foot offs
if t_end >= t_start+4: # if next foot on is at least 4 frames ahead
# find foot off - min angle between start and end of stride
t_fo = t_start + local_argmin(angle, t_start, t_end)
# if current foot off is at least 2 frames ahead of start
if t_fo >= t_start+2:
# find other foot off
to_fo = t_start + local_argmin(angle_other, t_start, t_fo)
t.append([t_start, to_fo, t_fo, t_end])
return t
def combine_left_right(fp1, fp2):
# foot ons
t_strides = [t[0] for t in fp2] + [t[-1] for t in fp2]
t_strides = sorted(set(t_strides))
t_strides = np.array(t_strides)
refined_fp = []
for fp in fp1:
other_foot_on = t_strides[(t_strides>fp[1]) & (t_strides<fp[2])]
if len(other_foot_on) == 1:
refined_fp.append([fp[0], fp[1], other_foot_on[0], fp[2], fp[3]])
return np.array(refined_fp)
def params_est_lidar(pose_data, fp_walk_list, ground_plane):
"""! Estimate gait parameters based on 3D pose data by identifying corresponding timestamps
@param pose_data 3D pose data (28 key points) for
('L_HEAD' 'R_HEAD' 'SGL' 'CV7' 'TV10'
'L_SAE' 'R_SAE' 'MID_L_HE' 'MID_R_HE' 'MID_L_SP'
'MID_R_SP' 'L_IAS' 'R_IAS' 'MID_IPS' 'L_FLE'
'R_FLE' 'L_FME' 'R_FME' 'L_FAL' 'R_FAL'
'L_TAM' 'R_TAM' 'L_FCC' 'R_FCC' 'L_FM2' 'R_FM2' 'L_FM5' 'R_FM5')
@param fps Video frame rate
@param fp_walk_list List of walking segments [[start1, end1, start2, end2], [s1,e1,s2,e2],...]
@param ground_plane ground plane parameters (a,b,c,d)
@return t_left Corresponding consecutive timestamps for left cycle
@return t_right Corresponding consecutive timestamps for right cycle
@return estimated stride lengths: stride_left, stride_right
@return gait parameters output
"""
# set 3D points
left_heel = pose_data[:, 22, :] # L_FCC
right_heel = pose_data[:, 23, :] # R_FCC
left_toe = pose_data[:, 24, :] # L_FM2 (idx: 25)
right_toe = pose_data[:, 25, :] # R_FM2 (idx: 26)
# get number of frames
nb_frames = pose_data.shape[0]
# finding boundaries of each stride
# depth difference
dz = right_heel[:, 2] - left_heel[:, 2]
# upward zero-crossings to nearest time step
[upcross] = np.where((dz[:-1] <= 0) & (dz[1:] > 0))
# downward zero-crossings
[downcross] = np.where((dz[:-1] >= 0) & (dz[1:] < 0))
# estimate the duration of one stride
diff_up = np.diff(upcross).tolist()
diff_down = np.diff(downcross).tolist()
est_t = np.median(diff_up + diff_down)
print('The estimated duration of one stride is %d frames' % est_t)
# use the 2D-based phase segmentation results to set walking segments (change on 02/07/2023)
# take input list and run in loop for walk-turn-walk (22/01/2024)
fp_left = []
fp_right = []
print("number of complete round of walk-forward, walk-backward: ", len(fp_walk_list))
for fp_walk in fp_walk_list:
f_start1 = fp_walk[0]
f_end1 = fp_walk[1]
f_start2 = fp_walk[2]
f_end2 = fp_walk[3]
# for 1st walking segment (walk towards camera)
# for left foot
sig1 = left_heel[f_start1 : f_end1 + 1, 2] # relative-depth (05/Oct/2023)
dz1 = dz[f_start1 : f_end1 + 1]
fp1_left = seg_stride(sig1, dz1, f_start1, est_t)
# for right foot
sig2 = right_heel[f_start1 : f_end1 + 1, 2] # relative-depth (05/Oct/2023)
dz2 = -dz1
fp1_right = seg_stride(sig2, dz2, f_start1, est_t)
# for 2nd walking segment (walk away from camera)
# for left foot
sig1 = -left_heel[f_start2 : f_end2 + 1, 2] # add negative
dz1 = -dz[f_start2 : f_end2 + 1]
fp2_left = seg_stride(sig1, dz1, f_start2, est_t)
# for right foot
sig2 = -right_heel[f_start2 : f_end2 + 1, 2] # relative-depth (05/Oct/2023)
dz2 = -dz1
fp2_right = seg_stride(sig2, dz2, f_start2, est_t)
fp_left.append(fp1_left)
fp_left.append(fp2_left)
fp_right.append(fp1_right)
fp_right.append(fp2_right)
#print('Finished for loop')
fp_left = np.vstack(fp_left)
fp_right = np.vstack(fp_right)
# Further processing to identify single support, double support, and refine stride boundaries
#print('Starting Step3')
# Step 3: Calculate distance to plane and angles between foot and floor
# angle between left foot and floor plane
[pp1, dz1] = dist2plane_new(left_heel, ground_plane)
[pp2, dz2] = dist2plane_new(left_toe, ground_plane)
perpendicular = dz2 - dz1 # height difference between left heel and toe relative to ground plane
base = np.sqrt(np.sum(np.power(pp2 - pp1, 2), 1)) # distance between left heel and left toe
left_angle = np.arctan2(perpendicular, base)
left_angle = left_angle * 180 / np.pi # convert to degree
# angle between right foot and floor plane
[pp1, dz1] = dist2plane_new(right_heel, ground_plane)
[pp2, dz2] = dist2plane_new(right_toe, ground_plane)
perpendicular = dz2 - dz1
base = np.sqrt(np.sum(np.power(pp2 - pp1, 2), 1))
right_angle = np.arctan2(perpendicular, base)
right_angle = right_angle * 180 / np.pi # convert to degree
# distance between ankles and floor
[ppl, dzl] = dist2plane_new(left_heel, ground_plane)
[ppr, dzr] = dist2plane_new(right_heel, ground_plane)
# compute when each foot is on ground
left_foot_on = np.where(dzl < dzr)[0]
right_foot_on = np.where(dzr < dzl)[0]
#print('Starting Step4')
# Step 4: refine boundaries of each stride and further divide into
# double support (DS1), single support, and double support (DS2)
# refine stride boundaries (foot on) and find foot offs
t_left = find_boundaries(fp_left, left_angle, right_angle, left_foot_on, est_t, nb_frames)
t_right = find_boundaries(fp_right, right_angle, left_angle, right_foot_on, est_t, nb_frames)
# combine left and right times
t_left = combine_left_right(t_left, t_right)
t_right = combine_left_right(t_right, t_left)
return t_left, t_right # , stride_left, stride_right, output
def local_argmin(sig, start, end):
"""! Find local minimum index
@param sig Signal
@return Local minimum index
"""
if end <= start:
idx = 0
else:
idx = np.argmin(sig[start : end + 1])
return idx
def local_argmin_diff(sig, start, end):
"""! Find local minimum index of difference
@param sig Signal
@return Local minimum index
"""
if end <= start:
idx = 0
else:
idx = np.argmin(np.diff(sig[start : end + 1]))
return idx
with open('input_output/in3.pkl','rb') as file:
data = pickle.load(file)
print(f'{data}')
pose_data_3d_norm = data['pose_data_3d_norm']
fp_walk = data['fp_walk']
ground_plane = data['ground_plane']
[t_left, t_right] = params_est_lidar(pose_data_3d_norm, fp_walk, ground_plane)
</code></pre>
|
<python><python-3.x><numpy><kotlin-multiplatform><chaquopy>
|
2025-01-02 08:23:49
| 1
| 2,524
|
Devendra Singh
|
79,323,140
| 11,850,322
|
Suggestion of sub files type for calculation
|
<p>I use JupyterLab for my project, mostly with pandas, numpy and statsmodels. The code has grown very long and is becoming hard to manage. I want to split it up by creating sub-files for the calculations, i.e.:</p>
<ul>
<li>Call a sub-file</li>
<li>Pass input to the sub-file</li>
<li>Get output from the sub-file</li>
</ul>
<p>For the sub-files, should I use Python files (.py) or Jupyter notebooks (.ipynb)?</p>
<p>I am looking for suggestions.</p>
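<p>To make the question concrete, this is roughly the structure I have in mind if I go the plain-Python route (a minimal sketch; <code>calculations.py</code> and <code>run_regression</code> are made-up names):</p>
<pre><code># calculations.py -- a plain Python module next to the notebook
import pandas as pd
import statsmodels.api as sm

def run_regression(df: pd.DataFrame, y_col: str, x_cols: list[str]):
    """Fit an OLS model and return the fitted results object."""
    X = sm.add_constant(df[x_cols])
    return sm.OLS(df[y_col], X).fit()

# in a notebook cell:
# from calculations import run_regression
# results = run_regression(df, "price", ["size", "rooms"])
# results.summary()
</code></pre>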
|
<python><jupyter-notebook><jupyter><jupyter-lab>
|
2025-01-02 08:21:15
| 2
| 1,093
|
PTQuoc
|
79,322,845
| 958,580
|
Multiple Aliases for a Terminal or Rule
|
<p>Can one assign multiple aliases to a terminal or rule in Lark?</p>
<p>Consider the following grammar</p>
<pre><code>coordinate : "(" X "," Y ")"
%import common.SIGNED_NUMBER -> X
%import common.SIGNED_NUMBER -> Y
</code></pre>
<p>which returns the error</p>
<pre><code>lark.exceptions.GrammarError: Rule 'X' used but not defined (in rule coordinate)
</code></pre>
<p>My full script, for reference, is as follows:</p>
<pre><code>from lark import Lark
parser = Lark("""\
coordinate : "(" X "," Y ")"
%import common.SIGNED_NUMBER -> X
%import common.SIGNED_NUMBER -> Y
""")
parser.parse("(12,43)")
</code></pre>
|
<python><lark-parser>
|
2025-01-02 05:31:08
| 0
| 3,417
|
Carel
|
79,322,796
| 5,404,337
|
activate javascript to render Next page on website using selenium
|
<p>I am trying to understand how to submit request to generate the next page in a website.</p>
<p>The site displays the first 10 rows of a data table, and then provides a list of page numbers and Previous and Next highlighted text (e.g., Previous 1 2 3 4 Next) at the bottom of the data table. I assume a JavaScript function is executed to render the next page. There is no <code>//button[text()="Next"]</code>-style button on the page. Clicking on the highlighted text Next renders the next page.</p>
<p>I have successfully scraped the data from the first displayed table, but do not understand how to trigger the site to render the next page.
I assume the Python Selenium command to trigger the next page is something like:</p>
<p><code>driver.execute_script('INSERT JS NEXT PAGE COMMAND HERE')</code></p>
<p>but I do not know how to find the name of the JS function to call. I am not familiar with JavaScript.</p>
<p>This is a portion of the HTML that is displayed. Does it include the name of a JS function to use as an argument in a <code>driver.execute_script()</code> call?</p>
<pre><code> <div class="paginator js-paginator hide">
<a class="previous js-previous icon-with-text">
<span class="text">Previous</span> <svg class="icon" height="16" width="16" focusable="false" aria-hidden="true"><use xlink:href="/static/img/components/icons/16px/ic_caretleft.1e8c2f677d23.svg#icon" /></svg>
</a>
<div class="pages js-pages hide-for-mobile"></div>
<a class="next js-next icon-with-text">
<span class="text">Next</span> <svg class="icon" height="16" width="16" focusable="false" aria-hidden="true"><use xlink:href="/static/img/components/icons/16px/ic_caretright.b5b45a745d42.svg#icon" /></svg>
</a>
</div>
</code></pre>
<p>There is also a sequence of scripts included at the bottom of the HTML page:</p>
<pre><code> <script src="/static/js/lib/jquery.min.a09e13ee94d5.js" nonce=""></script>
<script src="/static/js/lib/jquery.deserialize.min.3db41d3f53d5.js" nonce=""></script>
<script src="/static/js/lib/lodash.min.68e9b65c3984.js" nonce=""></script>
<script src="/static/js/lib/d3.min.9fe250ef5e1d.js" nonce=""></script>
<script src="/static/js/lib/select2.min.aebd21499cab.js" nonce=""></script>
<script src="/static/js/lib/tipped.9093e3c14aaf.js" nonce=""></script>
<script src="/static/js/lib/vex.min.c9f77cb11959.js" nonce=""></script>
<script src="/static/js/lib/jquery.waypoints.min.07c8bc20d684.js" nonce=""></script>
<script src="/static/js/lib/sticky.min.d3f1ddda5800.js" nonce=""></script>
<script nonce="">
window.TTAM.ready(function(exports) {
exports.utility.GTMImpressionsSingleton();
});
</script>
<script src="/static/js/main.40c23f145e01.js" nonce=""></script>
</code></pre>
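<p>For what it is worth, the approach I was considering instead of calling a named script is to locate the Next anchor itself (using the <code>a.next.js-next</code> classes from the HTML above) and click it from Selenium — a minimal sketch, untested against the real site:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# the pagination block and the Next anchor, taken from the HTML snippet above
next_link = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.js-paginator a.next.js-next")))
# a JS click avoids "element not interactable" issues when the anchor has no href
driver.execute_script("arguments[0].click();", next_link)
</code></pre>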
|
<javascript><python><selenium-webdriver>
|
2025-01-02 04:51:44
| 1
| 328
|
bici.sancta
|
79,322,639
| 562,697
|
Matching a QListView's background color to the window color
|
<p>To be clear, this has nothing to do with the items in a list view, but with the default background color of the widget itself. I'd like the widget itself to effectively not be noticeable.</p>
<p>According to the <a href="https://doc.qt.io/qt-6/qwidget.html#setBackgroundRole" rel="nofollow noreferrer">docs</a> setting the background role to <code>NoRole</code> will achieve my desired effect, but does not. I assume I am misunderstanding the documentation.</p>
<p>I realize I can change this with <code>setStyleSheet()</code>; however, I'd prefer to avoid style sheets if possible.</p>
<p>The following is a small example app. The desired effect is that you cannot see the list view itself. When I start adding items, I want to style them specifically.</p>
<pre><code>from PyQt6.QtGui import QPalette
from PyQt6.QtWidgets import QFrame, QListView, QWidget
class ListView(QListView):
''' Custom list view widget. '''
def __init__(self, parent: QWidget | None = None):
'''
Construct a list view.
@param parent - parent widget
'''
super().__init__(parent)
self.setFrameShape(QFrame.Shape.NoFrame)
self.setBackgroundRole(QPalette.ColorRole.NoRole)
# end constructor
# end ListView
def tester():
''' Function to test the list view. '''
from PyQt6.QtCore import Qt
from PyQt6.QtGui import QColor
from PyQt6.QtWidgets import QApplication, QDialog, QHBoxLayout, QRadioButton, QVBoxLayout
def set_theme(dark: bool):
palette = QPalette()
if dark:
app.setStyle('Fusion')
palette.setColor(QPalette.ColorRole.Window, QColor(53, 53, 53))
palette.setColor(QPalette.ColorRole.WindowText, Qt.GlobalColor.white)
palette.setColor(QPalette.ColorRole.Base, QColor(25, 25, 25))
palette.setColor(QPalette.ColorRole.AlternateBase, QColor(53, 53, 53))
palette.setColor(QPalette.ColorRole.ToolTipBase, Qt.GlobalColor.black)
palette.setColor(QPalette.ColorRole.ToolTipText, Qt.GlobalColor.white)
palette.setColor(QPalette.ColorRole.Text, Qt.GlobalColor.white)
palette.setColor(QPalette.ColorRole.Button, QColor(53, 53, 53))
palette.setColor(QPalette.ColorRole.ButtonText, Qt.GlobalColor.white)
palette.setColor(QPalette.ColorRole.BrightText, Qt.GlobalColor.red)
palette.setColor(QPalette.ColorRole.Link, QColor(42, 130, 218))
palette.setColor(QPalette.ColorRole.Highlight, QColor(42, 130, 218))
palette.setColor(QPalette.ColorRole.HighlightedText, Qt.GlobalColor.black)
#
else:
app.setStyle('windowsvista')
#
app.setPalette(palette)
# end set_theme
app = QApplication([])
win = QDialog()
win.setWindowTitle('List View example')
win.setLayout(QVBoxLayout())
lv = ListView()
win.layout().addWidget(lv)
row = QHBoxLayout()
win.layout().addLayout(row)
radio = QRadioButton('Light theme')
radio.setChecked(True)
radio.clicked.connect(lambda: set_theme(False))
row.addWidget(radio)
radio = QRadioButton('Dark theme')
radio.clicked.connect(lambda: set_theme(True))
row.addWidget(radio)
row.addStretch(1)
win.layout().addStretch(1)
win.show()
app.exec()
# end tester
if __name__ == '__main__': tester()
</code></pre>
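<p>One thing I have experimented with (not sure it is the intended approach) is making the list view's viewport transparent instead of changing the background role, so the parent window's color shows through — a minimal sketch based on the constructor above:</p>
<pre><code>        # inside ListView.__init__, after setFrameShape(...)
        # the visible background of an item view is painted by its viewport,
        # so turning off auto-fill lets the parent window's palette show through
        self.viewport().setAutoFillBackground(False)
</code></pre>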
|
<python><pyqt>
|
2025-01-02 02:10:21
| 0
| 11,961
|
steveo225
|
79,322,580
| 1,492,229
|
Calling huggingface APIs fails
|
<p>This is my first time to use huggingface.</p>
<p>I want to call its API to answer a prompt.</p>
<p>Here is what I have so far:</p>
<pre><code>import requests
# Define API details
API_URL = "https://api-inference.huggingface.co/models/Llama-3.3-70B-Instruct"
API_TOKEN = "hf_********************************"
# Define headers with authentication token
headers = {
"Authorization": f"Bearer {API_TOKEN}"
}
# Define the input payload
payload = {
"inputs": "What is the biggest animal?", # Replace with the input text or data
}
def query_huggingface_model(api_url, headers, payload):
try:
response = requests.post(api_url, headers=headers, json=payload)
response.raise_for_status() # Raise an error for HTTP codes 4xx/5xx
return response.json()
except requests.exceptions.RequestException as e:
print(f"An error occurred: {e}")
return None
# Query the model
response = query_huggingface_model(API_URL, headers, payload)
# Process and print the response
if response:
print("Model Response:", response)
else:
print("Failed to get a response from the model.")
</code></pre>
<p>but when I try to run it I get this error:</p>
<blockquote>
<p>An error occurred: 400 Client Error: Bad Request for url:<br />
<a href="https://api-inference.huggingface.co/models/Llama-3.3-70B-Instruct" rel="nofollow noreferrer">https://api-inference.huggingface.co/models/Llama-3.3-70B-Instruct</a><br />
Failed to get a response from the model.</p>
</blockquote>
<p>I tried to change the API_url to <a href="https://api-inference.huggingface.co/models/meta-llama/Llama-3.3-70B-Instruct" rel="nofollow noreferrer">https://api-inference.huggingface.co/models/meta-llama/Llama-3.3-70B-Instruct</a> but still getting the same error.</p>
<p>How can I fix that? Are there other errors in my code?</p>
|
<python><huggingface>
|
2025-01-02 01:09:29
| 0
| 8,150
|
asmgx
|
79,322,543
| 7,693,707
|
Find the closest converging point of a group of vectors
|
<p>I am trying to find the point that is closest to a group of vectors.</p>
<p>For context, the vectors are inverted rays emitted from center of aperture stop after exiting a lens, this convergence is meant to locate the entrance pupil.</p>
<p>The backward projection of the exiting rays, while not converging at one single point due to spherical aberration, is quite close to converging toward a point, as illustrated in the figure below.</p>
<p><a href="https://i.sstatic.net/8eH1ewTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8eH1ewTK.png" alt="enter image description here" /></a></p>
<p>(For easier simulation the positive z is pointing down)</p>
<p>I believe that to find the closest point, I need to find the point with the shortest total (squared) distance to all these lines, and I wrote the method as follows:</p>
<pre><code>def FindConvergingPoint(position, direction):
A = np.eye(3) * len(direction) - np.dot(direction.T, direction)
b = np.sum(position - np.dot(direction, np.dot(direction.T, position)), axis=0)
return np.linalg.pinv(A).dot(b)
</code></pre>
<p>For the figure above and judging visually, I would have expected the point to be something around <code>[0, 0, 20]</code></p>
<p>However, this is not the case. The method yielded a result of <code>[ 0., 188.60107764, 241.13690715]</code>, which is far from the converging point I was expecting.</p>
<p>Is my algorithm faulty or have I missed something about python/numpy?</p>
<hr />
<p>Attached are the data for the vectors:</p>
<pre><code>position = np.array([
[0, 0, 0],
[0, -1.62, 0.0314],
[0, -3.24, 0.1262],
[0, -4.88, 0.2859],
[0, -6.53, 0.5136],
[0, -8.21, 0.8135],
[0, -9.91, 1.1913],
[0, -11.64, 1.6551],
[0, -13.43, 2.2166],
[0, -15.28, 2.8944],
[0, -17.26, 3.7289]
])
direction = np.array([
[0, 0, 1],
[0, 0.0754, 0.9972],
[0, 0.1507, 0.9886],
[0, 0.2258, 0.9742],
[0, 0.3006, 0.9537],
[0, 0.3752, 0.9269],
[0, 0.4494, 0.8933],
[0, 0.5233, 0.8521],
[0, 0.5969, 0.8023],
[0, 0.6707, 0.7417],
[0, 0.7459, 0.6661]
])
</code></pre>
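<p>For reference, this is the per-ray least-squares formulation I am comparing against (a minimal sketch that loops over the rays explicitly, so each projection uses its own direction; it minimises the sum of squared perpendicular distances to the lines):</p>
<pre><code>def find_converging_point_looped(position, direction):
    # Solve (sum_i P_i) p = sum_i P_i a_i, where P_i = I - d_i d_i^T projects
    # onto the plane perpendicular to ray i (a_i = origin, d_i = unit direction).
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(position, direction):
        d = d / np.linalg.norm(d)          # normalise, in case directions are not unit length
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ a
    return np.linalg.lstsq(A, b, rcond=None)[0]

print(find_converging_point_looped(position, direction))
</code></pre>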
|
<python><numpy><vector>
|
2025-01-02 00:11:21
| 1
| 1,090
|
Amarth GΓ»l
|
79,322,530
| 1,319,070
|
Unable to run a simple python application in AWS App Runner
|
<p>Has anyone deployed a Python application on AWS App Runner recently? I have tried a lot of different ways and they all fail with the same error:</p>
<pre><code>01-01-2025 04:44:22 PM [AppRunner] Health check failed on protocol `HTTP`[Path: '/'], [Port: '8080']. Check your configured port number. For more information, see the application logs.
01-01-2025 04:44:35 PM [AppRunner] Deployment with ID : 9719e0a9f3814a5d88c90569994bb10c failed. Failure reason : Health check failed.
</code></pre>
<p>I started with my own Python application and then moved to simple Python hello-world samples like these:
<a href="https://github.com/Sathyvs/apprunner" rel="nofollow noreferrer">https://github.com/Sathyvs/apprunner</a>
<a href="https://github.com/adamjkeller/simple-apprunner-demo" rel="nofollow noreferrer">https://github.com/adamjkeller/simple-apprunner-demo</a> (this is a code the author put on a youtube video that shows everything just working fine, but when I fork this repository and run it, I get the same above issue.</p>
<p>I have tried</p>
<ul>
<li>using source code as the input as well as a built images from ECR</li>
<li>using build settings on aws console, and using a configuration file (apprunner.yaml)</li>
<li>changing the health check configuration to various timeouts, to tcp, http, and http at root, at different routes like /health</li>
<li>I started with fast api at 8000, and have tried different servers with all different ports like 8080, 80, 8000 etc.</li>
<li>I have tried python3 runtime, python 3.11</li>
</ul>
<p>But every attempt has failed at exactly the same step. If you look at this file <a href="https://github.com/Sathyvs/apprunner/blob/main/server.py" rel="nofollow noreferrer">https://github.com/Sathyvs/apprunner/blob/main/server.py</a>,
I added some logs, and while I see them on start-up on my local machine, I <strong>do not see any application logs on App Runner</strong>; it only gives the same limited logs, and there is no additional information about what is going on with the deployment.</p>
<p>Code at <a href="https://github.com/Sathyvs/apprunner/tree/main" rel="nofollow noreferrer">https://github.com/Sathyvs/apprunner/tree/main</a>
server.py</p>
<pre><code>from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response
import os, sys, logging
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
logger = logging.getLogger(__name__)
def hello_world(request):
name = os.environ.get('NAME')
if name == None or len(name) == 0:
name = "world"
message = "Hello, " + name + "!\n"
logger.info("api called")
return Response(message)
if __name__ == '__main__':
port = int(os.environ.get("PORT"))
logger.info(f"running main application and listening on port {os.environ.get("PORT")}")
logger.info(f"value of my port {os.environ.get("MY_PORT")}")
with Configurator() as config:
logger.info("configuring apis")
config.add_route('hello', '/')
config.add_view(hello_world, route_name='hello')
app = config.make_wsgi_app()
server = make_server('0.0.0.0', 8080, app)
logger.info("starting server")
server.serve_forever()
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM public.ecr.aws/amazonlinux/amazonlinux:latest
RUN yum install python3.7 -y && curl -O https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && yum update -y
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
CMD python3 server.py
EXPOSE 8080
</code></pre>
<p>Has anyone faced these challenges and actually deployed an application on App Runner?
Is there any sample out there that actually runs a Python application on App Runner? Does App Runner actually work at all?</p>
<p>I appreciate any help or clue to even find out some more logs on what is going on. Thank you.</p>
|
<python><amazon-web-services><aws-app-runner><docker-healthcheck>
|
2025-01-02 00:03:01
| 1
| 419
|
Sathiesh
|
79,322,486
| 2,936,329
|
Pyre: `Invalid class instantiation` on a factory pattern with ABC type hint
|
<p>I have implemented a factory pattern but Pyre-check trips over the type-hinting, thinking I want to instantiate the mapping dict type hint: <code>type[MyClassABC]</code></p>
<p>Here is a minimum example, also a <a href="https://pyre-check.org/play?input=from%20abc%20import%20abstractmethod%0A%0A%0Aclass%20MyClassABC%3A%0A%20%20%20%20%40abstractmethod%0A%20%20%20%20def%20__init__(self%2C%20name%3A%20str)%20-%3E%20None%3A%0A%20%20%20%20%20%20%20%20pass%0A%0A%0Aclass%20MyClass2(MyClassABC)%3A%0A%20%20%20%20def%20__init__(self%2C%20name%3A%20str)%20-%3E%20None%3A%0A%20%20%20%20%20%20%20%20self.name%20%3D%20name%0A%0A%0Aclass%20MyClass(MyClassABC)%3A%0A%20%20%20%20def%20__init__(self%2C%20name%3A%20str)%20-%3E%20None%3A%0A%20%20%20%20%20%20%20%20self.name%20%3D%20name%0A%0A%0Aclass%20ClassFactory%3A%0A%20%20%20%20MAPPING_STACKS%3A%20dict%5Bstr%2C%20type%5BMyClassABC%5D%5D%20%3D%20%7B%0A%20%20%20%20%20%20%20%20%22myclass%22%3A%20MyClass%2C%0A%20%20%20%20%20%20%20%20%22myclass2%22%3A%20MyClass2%2C%0A%20%20%20%20%7D%0A%0A%20%20%20%20%40staticmethod%0A%20%20%20%20def%20get_class()%20-%3E%20type%5BMyClassABC%5D%3A%0A%20%20%20%20%20%20%20%20return%20ClassFactory.MAPPING_STACKS%5B%22myclass%22%5D%0A%0A%0Aif%20__name__%20%3D%3D%20%22__main__%22%3A%0A%20%20%20%20factory_cls%20%3D%20ClassFactory.get_class()%0A%20%20%20%20instance%20%3D%20factory_cls(name%3D%22test%22)%0A" rel="nofollow noreferrer">pyre-check playground</a> that runs the checks online.</p>
<p>The goal is a factory pattern with proper type hinting. To achieve this I created an ABC for the class, with an abstract method for the <code>__init__</code>, this way in my IDE I can see which parameters to supply.</p>
<p>However, I get the error:</p>
<pre><code> Invalid class instantiation [45]: Cannot instantiate abstract class `MyClassABC` with abstract method `__init__`.
</code></pre>
<p>Because Pyre-check thinks I am going to instantiate the type-hint, <code>type[MyClassABC]</code>. How can I fix my type hinting so that it's still proper type hinting but Pyre-check doesn't think I'm instantiating an abstract class?
I want to keep the <code>abstractmethod</code> <code>__init__</code> for the parameter type hinting.</p>
<pre class="lang-py prettyprint-override"><code>from abc import abstractmethod
class MyClassABC:
@abstractmethod
def __init__(self, name: str) -> None:
pass
class MyClass2(MyClassABC):
def __init__(self, name: str) -> None:
self.name = name
class MyClass(MyClassABC):
def __init__(self, name: str) -> None:
self.name = name
class ClassFactory:
MAPPING: dict[str, type[MyClassABC]] = {
"myclass": MyClass,
"myclass2": MyClass2,
}
@staticmethod
def get_class() -> type[MyClassABC]:
return ClassFactory.MAPPING["myclass"]
if __name__ == "__main__":
factory_cls = ClassFactory.get_class()
instance = factory_cls(name="test")
</code></pre>
|
<python><abstract-class><python-typing><pyre-check>
|
2025-01-01 23:14:35
| 1
| 468
|
Rien
|
79,322,284
| 142,976
|
How to override inherited Odoo method?
|
<p>I'm trying to override a method that is inherited, as in this example:</p>
<pre><code>module school
###################################################################
class BaseTest(models.AbstractModel)
_name = 'base.test'
score = fields.Float()
def get_score(self)
return score
class ExamRecord(models.Model):
_name = 'exam.record'
_inherit = ['base.test']
student = fields.Char()
module custom_school
###################################################################
class CustomBaseTest(models.AbstractModel)
_name = ['custom.base.test']
def get_score(self)
return score * 10
class CustomExamRecord(models.Model):
_name = 'exam.record'
_inherit = ['exam.record', 'custom.base.test']
student = fields.Char()
</code></pre>
<p>But the <code>get_score</code> of <code>CustomBaseTest</code> is never used, and <code>exam.record</code> still uses the <code>get_score</code> of <code>BaseTest</code>.</p>
<p>Am I using the wrong way to override? What should be the correct way to override <code>get_score</code> method?</p>
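<p>For reference, the only alternative I am aware of is to extend <code>base.test</code> directly via <code>_inherit</code> (rough sketch below), but I would like to understand why the mixin approach above does not take effect:</p>
<pre><code>class CustomBaseTest(models.AbstractModel):
    _inherit = 'base.test'   # extend the existing abstract model in place

    def get_score(self):
        return super().get_score() * 10
</code></pre>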
|
<python><odoo>
|
2025-01-01 20:27:12
| 0
| 4,224
|
strike_noir
|
79,322,153
| 10,889,650
|
How "unpythonic" is it for an exception to be the expected outcome?
|
<p>In Django I am validating a request which submits something that a user is only supposed to submit once, and in the "correct behaviour sequence" an exception is raised:</p>
<pre><code>try:
my_row = models.MyModel.objects.get(id=instance_id, user=request.logged_in_user)
return HttpResponseBadRequest("Already submitted")
except models.MyModel.DoesNotExist:
pass
# continue
</code></pre>
<p>Scale of 1-10, how much of a crime is this?</p>
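<p>For comparison, the exception-free version I am weighing this against would look roughly like the following (assuming a <code>filter().exists()</code> check is acceptable here):</p>
<pre><code>already_submitted = models.MyModel.objects.filter(
    id=instance_id, user=request.logged_in_user
).exists()
if already_submitted:
    return HttpResponseBadRequest("Already submitted")
# continue
</code></pre>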
|
<python><django>
|
2025-01-01 18:44:57
| 1
| 1,176
|
Omroth
|
79,322,044
| 6,140,251
|
How to use SQL Stored Procedures for dynamic tables in sqlalchemy with psycopg2
|
<p>I need to execute a SQL query against different, dynamically chosen tables.</p>
<p>I try to do it as follows:</p>
<pre><code>import psycopg2
import sqlalchemy.pool as pool
def get_conn_pool():
conn = psycopg2.connect(user=settings.POSTGRES_USER, password=settings.POSTGRES_PASSWORD,
database=settings.POSTGRES_DB, host=settings.POSTGRES_HOST, port=settings.POSTGRES_PORT)
return conn
db_pool = pool.QueuePool(get_conn_pool, max_overflow=10, pool_size=5)
conn = db_pool.connect()
cursor = conn.cursor()
tables = ['current_block', 'tasks']
for table in tables:
cursor.execute(f'CREATE PROCEDURE GetTableData @TableName nvarchar(30) AS SELECT * FROM @TableName GO; EXEC GetTableData @TableName = {table};')
result = cursor.fetchall()
print(result)
</code></pre>
<p>but I get an error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 2, in <module>
psycopg2.errors.SyntaxError: syntax error at or near "@"
LINE 1: CREATE PROCEDURE GetTableData @TableName nvarchar(30) AS SEL...
</code></pre>
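<p>For context, what I am ultimately trying to achieve is something like the snippet below, where the table name is interpolated safely per iteration — I am not even sure a stored procedure is needed at all (this is just a sketch using <code>psycopg2.sql</code>):</p>
<pre><code>from psycopg2 import sql

for table in tables:
    # sql.Identifier quotes the table name safely; plain f-strings risk SQL injection
    cursor.execute(sql.SQL("SELECT * FROM {}").format(sql.Identifier(table)))
    print(cursor.fetchall())
</code></pre>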
|
<python><stored-procedures><sqlalchemy><psycopg2>
|
2025-01-01 17:34:53
| 0
| 451
|
Vadim
|
79,322,039
| 9,128,863
|
Pytorch: RuntimeError: Numpy is not available
|
<p>I'm trying to handle train dataset from MNIST with Pytorch.</p>
<pre><code> import torch
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
img_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1305,), (0.3081,))
])
train_dataset = MNIST(root='../mnist_data/',
train=True,
download=True,
transform=img_transforms)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=60,
shuffle=True)
</code></pre>
<p>When I first time iterate over batches in train_loader</p>
<p>for example:</p>
<pre><code> for X_batch, y_batch in train_loader:
# realise logic //
</code></pre>
<p>I receive the following error:</p>
<pre><code> File ".../torchvision/datasets/mnist.py", line 142, in __getitem__
img = Image.fromarray(img.numpy(), mode="L")
^^^^^^^^^^^
RuntimeError: Numpy is not available
</code></pre>
<p><a href="https://stackoverflow.com/questions/72964906/runtimeerror-numpy-is-not-available">This Answer</a> was not useful for me, because my Numpy version is 2.2.1.</p>
<p>Python version is 3.12 (Also I tried 3.11)</p>
<p>torch 2.2.2</p>
<p>Is there some inconsistency in the library versions?</p>
|
<python><numpy><pytorch>
|
2025-01-01 17:29:54
| 0
| 1,424
|
Jelly
|
79,322,010
| 8,458,083
|
How to make mypy correctly type-check a function using functools.partial?
|
<p>I'm trying to create a function that returns a partially applied callable, but I'm encountering issues with mypy type checking.</p>
<p>Here is my first implementation:</p>
<pre><code>from collections.abc import Callable
from functools import partial
def f(i: int, j: float, k: int) -> int:
return i + int(j) + k
def g(a: float) -> Callable[[int, int], int]:
return partial(f, j=a)
fun: Callable[[int, int], int] = g(3.0)
r: int = fun(4, 5)
print(r)
</code></pre>
<p>It is successfully checked by mypy, but it cannot run:</p>
<blockquote>
<p>r: int = fun(4, 5) TypeError: f() got multiple values for argument
'j'</p>
</blockquote>
<p>To solve this problem, I call the function with named arguments:</p>
<pre><code>from collections.abc import Callable
from functools import partial
def f(i: int, j: float, k: int) -> int:
return i + int(j) + k
def g(a: float) -> Callable[[int, int], int]:
return partial(f, j=a)
fun: Callable[[int, int], int] = g(3.0)
# line 12 in my code (where the error message comes from)
r: int = fun(i=4, k=5)
print(r)
</code></pre>
<p>It works fine now, but mypy checking fails:</p>
<blockquote>
<p>main.py:12: error: Unexpected keyword argument "i" [call-arg]
main.py:12: error: Unexpected keyword argument "k" [call-arg] Found 2
errors in 1 file (checked 1 source file)</p>
</blockquote>
<p>Is there a way to annotate this code so that it both runs correctly and passes mypy's type checking? I've tried various combinations of type hints, but I haven't found a solution that satisfies both the runtime behavior and static type checking.</p>
<p>I know there is this solution, without using Partial</p>
<pre><code>from collections.abc import Callable
def f(i: int, j: float, k: int) -> int:
    return i + int(j) + k

def g(a: float) -> Callable[[int, int], int]:
    def ret(i: int, k: int) -> int:
        return f(i, a, k)
    return ret

fun: Callable[[int, int], int] = g(3.0)
r: int = fun(4, 5)
print(r)
</code></pre>
<p>But I really want to use <code>partial</code>, because I am working with functions that have a lot of parameters, and it is much simpler to just say which parameters are replaced.</p>
|
<python><python-typing><mypy>
|
2025-01-01 17:06:46
| 2
| 2,017
|
Pierre-olivier Gendraud
|
79,321,800
| 22,407,544
|
Which method is best for designing custom error pages in Django?
|
<p>For example, I've seen methods which design custom views with changes to URLconf, I've seen other methods that use <code>handler404 = "mysite.views.my_custom_page_not_found_view"</code> in URLconf with no changes to views. I've seen both of these methods explained in the docs. The easiest method I've seen is to simply create a <code>templates/404.html</code> without any other changes to the app. So which method is best for a production app?</p>
|
<python><django>
|
2025-01-01 14:55:29
| 1
| 359
|
tthheemmaannii
|
79,321,729
| 9,667,949
|
Does Re-initializing my list in while loop, create multiple instances of it in memory or is it simply resetting the previous list to an empty one
|
<p>I have this piece of code for a LeetCode problem. Since I normally use C++, my Python skills have degraded a little bit, and as the title states, I just want to know: when I re-initialize my lists in the while loop over and over again, does it create multiple instances of the list in memory, or does it simply overwrite the previously stored list with an empty one? Thanks.</p>
<pre><code>while (len(node_queue) >= 1):
check_node = node_queue.pop()
answer_nodes = []
for_queue = []
for i in range(len(check_node)):
answer_nodes.append(check_node[i].val)
for y in range(len(check_node[i].children)):
for_queue.append(check_node[i].children[y])
final_answer.append(answer_nodes)
if (len(for_queue) >= 1): node_queue.append(for_queue)
</code></pre>
|
<python><algorithm>
|
2025-01-01 14:08:03
| 0
| 1,700
|
Mahir Islam
|
79,321,673
| 5,460,515
|
How to compare expected and actual database state?
|
<p><em><strong>Before starting:</strong>
this question is rather long, so you can skim the current approaches and go directly to the question itself, marked in bold near the end.</em></p>
<p>I am writing tests for some handlers in a project. One of the main features is creating/changing/deleting objects in the database.</p>
<p>For various reasons we do not want to assert database mutations with hand-written logic (e.g. after the function finishes, one object should have been created, it should contain certain field values, let's take its created id and check that its dependants were created too, and also check that no other objects were deleted, etc.).</p>
<p>To assert proper database mutations, for now we use the following approaches:</p>
<ol>
<li><strong>Direct comparison of contents</strong></li>
</ol>
<p>We have a function that loads the entire current database content:</p>
<pre class="lang-py prettyprint-override"><code>def get_db_content():
result = []
for table in TABLE_NAME_LIST:
rows = db.fetch(f"SELECT * FROM {table}")
for row in rows:
result.append({"tablename": table, "fields": row})
return result
</code></pre>
<p>That creates a list in which every entry contains the table name and every field/value pair of a particular object.</p>
<p>Also, we have a fixture file with JSON-encoded data for the expected state:</p>
<pre><code>[
{
"tablename": "users",
"fields": {
"id": 1,
"name": "Valt25"
}
}
]
</code></pre>
<p>When writing tests, we prepare such a fixture with the expected data and do a basic comparison:</p>
<pre><code>assert get_db_content() == load_fixture("expected_db_state.json")
</code></pre>
<p>This works fine; if we had problems with ordering, we could add explicit ordering in SQL.</p>
<p>As you can see in the expected database state, the user id is an integer value, most likely a serial, and we cannot know this value beforehand. Fortunately we use UUIDs generated during handler runtime, and we can mock the UUID generator function so that we can expect new objects with certain ids in the database.</p>
<p>However, I want to move away from generating at runtime things that could be generated at the database level (such as UUIDs and created/updated timestamps), so there would be no way to expect specific values when comparing.</p>
<p>To overcome this issue we have the following approach (which is actually a modification of the first one):</p>
<ol start="2">
<li><strong>Partial direct comparison of contents</strong></li>
</ol>
<p>We have a helper function that compares content partially, so that if some field is present in the database but not in the expected data, that is OK.</p>
<p>So now the <code>get_db_content()</code> call would return the following result:</p>
<pre><code>[
    {
        "tablename": "positions",
        "fields": {
            "id": 127,
            "name": "position name",
            "date_created": "2025-01-01T17:52:21",
            "date_updated": "2025-01-01T17:52:21"
        }
    },
    {
        "tablename": "employees",
        "fields": {
            "id": 256,
            "position_id": 127,
            "name": "user name",
            "date_created": "2024-12-31T10:45:21",
            "date_updated": "2025-01-01T17:52:21"
        }
    }
]
</code></pre>
<p>As you can see, the ids and timestamps are present here. But in the fixture data we have something like this:</p>
<pre><code>[
{
"tablename": "positions",
"fields": {
"name": "position name"
}
},
{
"tablename": "employees",
"fields": {
"name": "user name"
}
}
]
</code></pre>
<p>As you can see, there are no auto-generated fields here, and the next line would successfully assert that:</p>
<pre><code>assert partial_compare(get_db_content(), load_fixture("expected_db_state.json"))
</code></pre>
<p><strong>Question begins here</strong></p>
<p>As you can see, the previous approach copes with asserting data even when auto-generated fields cannot be predicted, but it cannot assert relations between database objects, which is a rather important property of stored objects. To overcome this, we write rather complex code, unique for every task.</p>
<p>However, I want to somehow relate objects in the fixture file. Something like this:</p>
<pre><code>[
    {
        "tablename": "positions",
        "$tag": "created_position",
        "fields": {
            "name": "position name"
        }
    },
    {
        "tablename": "employees",
        "ref_fields": ["position_id"],
        "fields": {
            "position_id": "$created_position.id",
            "name": "user name"
        }
    }
]
</code></pre>
<p>As you can see, first I tag the position with some string. Then, in the employee object, I mark that the field "position_id" should not be asserted by value; instead, its value should be taken from the real database object associated with the specified tag, namely that the "position_id" field should equal the "id" field of the object tagged "created_position".</p>
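<p>A minimal sketch of the resolution step I have in mind is below (the helper name is hypothetical and matching is simplified to "fixture fields are a subset of the row's fields"); it first binds each tag to a concrete database row, then substitutes <code>$tag.field</code> references before comparing:</p>
<pre><code>def resolve_and_compare(db_content, fixture):
    # Pass 1: bind every "$tag" entry to the first DB row that matches its plain fields
    bound = {}
    for expected in fixture:
        tag = expected.get("$tag")
        if tag is None:
            continue
        plain = {k: v for k, v in expected["fields"].items() if not str(v).startswith("$")}
        for row in db_content:
            if row["tablename"] == expected["tablename"] and plain.items() <= row["fields"].items():
                bound[tag] = row["fields"]
                break
        else:
            return False  # tagged object not found at all

    # Pass 2: substitute "$tag.field" references and do the partial comparison
    for expected in fixture:
        resolved = {}
        for key, value in expected["fields"].items():
            if isinstance(value, str) and value.startswith("$"):
                tag, _, field = value[1:].partition(".")
                value = bound[tag][field]
            resolved[key] = value
        if not any(row["tablename"] == expected["tablename"]
                   and resolved.items() <= row["fields"].items()
                   for row in db_content):
            return False
    return True
</code></pre>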
<p><strong>Question itself</strong></p>
<p>I am sure I can implement something like this myself; however, I expect a number of pitfalls here. That is why I am asking this question: maybe there are similar approaches solving the same problems, and if you do not know of any, maybe you can see the pitfalls?</p>
<p>PS. The code is written in Python, but it could be any programming language.</p>
<p>PS2. None of the code here was tested; it is written to show the approach, not to implement a particular feature.</p>
|
<python><database><testing><automated-tests>
|
2025-01-01 13:30:10
| 1
| 398
|
ΠΠ°Π»Π΅ΡΠΈΠΉ ΠΠ΅ΡΠ°ΡΠΈΠΌΠΎΠ²
|
79,321,289
| 9,052,139
|
How to show dedicated progress bar in each tab in a Gradio app?
|
<p>I am developing an image generation Gradio app that uses multiple models like SD3.5, Flux, and others to generate images from a given prompt.</p>
<p>The app has 7 tabs, each corresponding to a specific model. Each tab displays an image generated by its respective model.</p>
<p>My problem is that I am unable to show a progress bar for each tab individually. Currently, the progress bar is displayed across all tabs simultaneously. However, I need a 'tab-specific progress bar.'</p>
<p>Below is my code, and a screenshot of the app (which mocks the image generation process) is attached. How can I implement this feature?</p>
<pre><code>import random
from time import sleep
import gradio as gr
import threading
import requests
from PIL import Image
from io import BytesIO
# Constants
MAX_IMAGE_SIZE = 1024
# Model configurations
MODEL_CONFIGS = {
"Stable Diffusion 3.5": {
"repo_id": "stabilityai/stable-diffusion-3.5-large",
"pipeline_class": "StableDiffusion3Pipeline"
},
"FLUX": {
"repo_id": "black-forest-labs/FLUX.1-dev",
"pipeline_class": "FluxPipeline"
},
"PixArt": {
"repo_id": "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"pipeline_class": "PixArtSigmaPipeline"
},
"AuraFlow": {
"repo_id": "fal/AuraFlow",
"pipeline_class": "AuraFlowPipeline"
},
"Kandinsky": {
"repo_id": "kandinsky-community/kandinsky-3",
"pipeline_class": "Kandinsky3Pipeline"
},
"Hunyuan": {
"repo_id": "Tencent-Hunyuan/HunyuanDiT-Diffusers",
"pipeline_class": "HunyuanDiTPipeline"
},
"Lumina": {
"repo_id": "Alpha-VLLM/Lumina-Next-SFT-diffusers",
"pipeline_class": "LuminaText2ImgPipeline"
}
}
# Dictionary to store model pipelines
pipes = {}
model_locks = {model_name: threading.Lock() for model_name in MODEL_CONFIGS.keys()}
def fetch_image_from_url(url):
try:
response = requests.get(url)
response.raise_for_status()
return Image.open(BytesIO(response.content))
except Exception as e:
print(f"Error fetching image from URL {url}: {e}")
return None
def generate_all(prompt, negative_prompt, seed, randomize_seed, width, height,
guidance_scale, num_inference_steps):
# Initialize a list to store all outputs
all_outputs = [None] * (len(MODEL_CONFIGS) * 2) # Pre-fill with None for each model's image and seed
for idx, model_name in enumerate(MODEL_CONFIGS.keys()):
try:
progress_dict[model_name](0, desc=f"Starting generation for {model_name}...")
print(f"IMAGE GENERATING {model_name}")
generated_seed = seed if not randomize_seed else random.randint(0, 100000)
# Fetch an image from a URL
url = f"https://placehold.co/600x400/000000/FFFFFF.png?text=Hello+{model_name}+ +{generated_seed}" # Replace with actual URL as needed
image = fetch_image_from_url(url)
progress_dict[model_name](0.9, desc=f"downloaded {model_name}...")
# Update the outputs array with the result and seed, leaving remaining slots as None
all_outputs[idx * 2] = image # Image slot
all_outputs[idx * 2 + 1] = generated_seed # Seed slot
            # Yield the partially filled outputs so the UI can update as each model finishes
yield all_outputs + [None]
progress_dict[model_name](1, desc=f"generated {model_name}...")
sleep(1) # Simulate processing time
except Exception as e:
print(f"Error generating with {model_name}: {str(e)}")
# Leave the slots for this model as None
all_outputs[idx * 2] = None
all_outputs[idx * 2 + 1] = None
# Return the final completed array
return all_outputs
# Gradio Interface
css = """
#col-container {
margin: 0 auto;
max-width: 1024px;
}
"""
with gr.Blocks(css=css) as demo:
with gr.Column(elem_id="col-container"):
gr.Markdown("# Multi-Model Image Generation")
with gr.Row():
prompt = gr.Text(
label="Prompt",
show_label=False,
max_lines=1,
placeholder="Enter your prompt",
container=False,
)
run_button = gr.Button("Generate", scale=0, variant="primary")
with gr.Accordion("Advanced Settings", open=False):
seed = gr.Slider(
label="Seed",
minimum=0,
maximum=100,
step=1,
value=0,
)
randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
memory_indicator = gr.Markdown("Current memory usage: 0 GB")
with gr.Row():
with gr.Column(scale=2):
with gr.Tabs() as tabs:
results = {}
seeds = {}
progress_dict: dict[str, gr.Progress] = {}
for model_name in MODEL_CONFIGS.keys():
with gr.Tab(model_name):
results[model_name] = gr.Image(label=f"{model_name} Result")
seeds[model_name] = gr.Number(label="Seed used", visible=True)
progress_dict[model_name] = gr.Progress()
# Prepare the input and output components
input_components = [
prompt, seed, randomize_seed,
]
output_components = []
for model_name in MODEL_CONFIGS.keys():
output_components.extend([results[model_name], seeds[model_name]])
run_button.click(
fn=generate_all,
inputs=input_components,
outputs=output_components,
)
if __name__ == "__main__":
demo.launch()
</code></pre>
<p><a href="https://i.sstatic.net/9MEAl1KN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9MEAl1KN.png" alt="all tabs showing same progress bar like Flux one in Pixart tab" /></a></p>
|
<python><artificial-intelligence><stable-diffusion><gradio><image-generation>
|
2025-01-01 07:20:05
| 1
| 1,004
|
RagAnt
|
79,320,921
| 22,146,392
|
Adding custom filters with mkdocs-simple-hooks
|
<p>I'm using MkDocs and I want to add a custom filter. Based on <a href="https://stackoverflow.com/a/67278596/22146392">this answer</a>, I'm using <a href="https://github.com/aklajnert/mkdocs-simple-hooks" rel="nofollow noreferrer">mkdocs-simple-hooks</a>. The answer suggests using the <a href="https://www.mkdocs.org/dev-guide/plugins/#on_env" rel="nofollow noreferrer"><code>on_env</code></a> event to add custom filters to the <a href="https://jinja.palletsprojects.com/en/latest/api/#jinja2.Environment.filters" rel="nofollow noreferrer">Jinja2 environment</a>. Here's what my structure looks like:</p>
<pre><code>/
|__ docs/
| |__ data/my_data.csv
| |__ modules/hooks.py
| |__ my_doc.md
|
|__ templates/my_template.html
|
|__ mkdocs.yml
</code></pre>
<p>My <code>mkdocs.yml</code> config file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>plugins:
- search
- mkdocs-simple-hooks:
hooks:
on_env: "docs.modules.hooks:on_env"
theme:
name: material
custom_dir: templates
</code></pre>
<p>The <code>docs/modules/hooks.py</code> looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
def csv_to_html(csv_path):
return pandas.read_csv(csv_path).to_html()
def on_env(env, config, **kwargs):
env.filters['csv_to_html'] = csv_to_html
return env
</code></pre>
<p>The <code>templates/my_template.html</code> looks like this:</p>
<pre class="lang-html prettyprint-override"><code><div>
{{ csv_to_html("data/my_data.csv") }}
</div>
</code></pre>
<p>I'm expecting the <code>csv_to_html</code> filter to be made available during the <code>on_env</code> event so that the Jinja2 environment in <code>my_template.html</code> can use it, but when I run <code>mkdocs build</code> I receive this error:</p>
<pre><code>jinja2.exceptions.UndefinedError: 'csv_to_html' is undefined
</code></pre>
<p>What am I doing wrong? How do I correctly add custom filters in MkDocs?</p>
|
<python><mkdocs><mkdocs-material>
|
2024-12-31 23:13:05
| 0
| 1,116
|
jeremywat
|
79,320,841
| 11,946,505
|
NASDAQ Python SDK Client - How to resolve Market Open/Close Data Delay?
|
<p>I am using the NASDAQ Python SDK client to consume market data for the <code>NLSUTP</code> topic. The data is fetched using the <code>NCDSClient</code> and processed in real-time. I send the data received from the consumer via WebSocket. During normal market hours, I get multiple responses from WebSocket within 1 second, which works as expected. However, during market opening and closing timestamps, I experience significant delays in receiving messages. These delays are sometimes more than 15 seconds, and occasionally up to a few minutes, which is problematic for my real-time application.</p>
<h4>Code Snippet</h4>
<p>Below is the relevant portion of my code:</p>
<pre class="lang-py prettyprint-override"><code>def init_nasdaq_kafka_connection(topic):
security_cfg = {
"oauth.token.endpoint.uri": os.getenv("NASDAQ_KAFKA_ENDPOINT"),
"oauth.client.id": os.getenv("NASDAQ_KAFKA_CLIENT_ID"),
"oauth.client.secret": os.getenv("NASDAQ_KAFKA_CLIENT_SECRET"),
}
kafka_cfg = {
"bootstrap.servers": os.getenv("NASDAQ_KAFKA_BOOTSTRAP_URL"),
"auto.offset.reset": "latest",
"socket.keepalive.enable": True,
}
ncds_client = NCDSClient(security_cfg, kafka_cfg)
consumer = ncds_client.ncds_kafka_consumer(topic)
logger.info(f"Success to connect NASDAQ Kafka server for topic {topic}.")
return consumer
# Usage
consumer = init_nasdaq_kafka_connection("NLSUTP")
while True:
messages = consumer.consume(num_messages=1000000, timeout=0.25)
if messages:
response = makeRespFromKafkaMessages(messages)
# Sending response via WebSocket
</code></pre>
<p>I wrote a script which connects to the websocket and saves the data to files named <code>{current_datetime}_{number_of_messages}.json</code>.
During normal hours I received data like this, where you can see I get multiple records within 1 second:</p>
<pre><code>Data saved to ./websocket_data/2024-12-31_15-52-00-256_13.json
Data saved to ./websocket_data/2024-12-31_15-52-00-754_12.json
Data saved to ./websocket_data/2024-12-31_15-52-01-458_45.json
Data saved to ./websocket_data/2024-12-31_15-52-01-956_26.json
Data saved to ./websocket_data/2024-12-31_15-52-02-556_48.json
Data saved to ./websocket_data/2024-12-31_15-52-03-310_45.json
</code></pre>
<p>But during market close I get this:</p>
<pre><code>Data saved to ./websocket_data/2024-12-31_15-54-48-756_405.json
Data saved to ./websocket_data/2024-12-31_15-54-56-198_500.json
Data saved to ./websocket_data/2024-12-31_15-55-05-019_1033.json
Data saved to ./websocket_data/2024-12-31_15-55-21-788_2057.json
Data saved to ./websocket_data/2024-12-31_15-55-42-214_1318.json
Data saved to ./websocket_data/2024-12-31_15-56-00-091_1324.json
Data saved to ./websocket_data/2024-12-31_15-56-15-703_1200.json
Data saved to ./websocket_data/2024-12-31_15-56-32-602_1120.json
Data saved to ./websocket_data/2024-12-31_15-56-46-802_1149.json
Data saved to ./websocket_data/2024-12-31_15-57-00-099_940.json
Data saved to ./websocket_data/2024-12-31_15-57-13-380_875.json
Data saved to ./websocket_data/2024-12-31_15-57-24-969_936.json
Data saved to ./websocket_data/2024-12-31_15-57-36-150_789.json
Data saved to ./websocket_data/2024-12-31_15-57-49-312_1202.json
Data saved to ./websocket_data/2024-12-31_15-58-03-939_1068.json
Data saved to ./websocket_data/2024-12-31_15-58-22-238_1290.json
Data saved to ./websocket_data/2024-12-31_15-58-43-967_1553.json
Data saved to ./websocket_data/2024-12-31_15-59-12-348_1734.json
Data saved to ./websocket_data/2024-12-31_15-59-42-903_2091.json
Data saved to ./websocket_data/2024-12-31_16-00-11-254_2853.json
Data saved to ./websocket_data/2024-12-31_16-01-17-148_5911.json
Data saved to ./websocket_data/2024-12-31_16-01-49-192_2566.json
Data saved to ./websocket_data/2024-12-31_16-02-05-338_5035.json
Data saved to ./websocket_data/2024-12-31_16-02-27-056_2343.json
Data saved to ./websocket_data/2024-12-31_16-02-39-615_36.json
</code></pre>
<p>In the file <a href="https://github.com/user-attachments/files/18283716/2024-12-31_16-02-27-056_2343.json" rel="nofollow noreferrer">2024-12-31_16-02-27-056_2343.json</a> the timestamp of the last
message is <code>2024-12-31 16:00:01.072991</code>, which shows we received the data about 2 minutes 26 seconds late.</p>
<h4>Observations</h4>
<ol>
<li>During normal market hours, the WebSocket receives multiple responses within 1 second, which works as expected.</li>
<li>During market opening and closing timestamps:
<ul>
<li>The data flow is delayed significantly.</li>
<li>Delays range from 15 seconds to a few minutes.</li>
<li>The WebSocket responses also experience delays during this period.</li>
</ul>
</li>
</ol>
<h4>Expected Behavior</h4>
<p>The data should have minimal delay, even during the high-activity periods of market opening and closing. The WebSocket should receive multiple responses within 1 second, similar to normal market hours.</p>
<h4>Questions</h4>
<ol>
<li>Is this delay expected behavior for the NASDAQ Python SDK client during market open/close times?</li>
<li>Are there any configuration options or optimizations to reduce these delays?</li>
<li>Could this issue be related to server-side throttling, Kafka settings, or network latency?</li>
</ol>
<h4>Environment Details</h4>
<ul>
<li>NASDAQ Python SDK version: Latest</li>
<li>Python version: 3.10.6</li>
<li>OS: Windows 10</li>
</ul>
|
<python><apache-kafka><websocket><consumer>
|
2024-12-31 21:31:22
| 0
| 718
|
abdulsaboor
|
79,320,792
| 13,971,251
|
When referencing imported Django model, I get 'local variable referenced before assignment' error
|
<p>I am trying to import a model into my Django view and then query all objects, sort them, and iterate over them. I am not getting any error when importing the model, however, when trying to query the model with <code>songs = song.objects.all()#.order_by('-release_date')</code>, I am getting an error:</p>
<pre><code>UnboundLocalError at /hotline/dbm
local variable 'song' referenced before assignment
/home/path/to/site/views.py, line 82, in dbm
songs = song.objects.all()#.order_by('-release_date')
</code></pre>
<p>I do not understand what the problem is, as the variable <code>song</code> is clearly imported from my <code>models.py</code> file, and I am not getting any errors importing it - so why is Python not recognizing <code>song</code> as what I imported from my <code>models.py</code> file?</p>
<p>My <code>models.py</code> file:</p>
<pre><code>class song(models.Model):
name = models.TextField()
file = models.FileField()
release_date = models.DateTimeField(default=timezone.now)
class Meta:
verbose_name = 'Song'
verbose_name_plural = f'{verbose_name}s'
</code></pre>
<p>my <code>views.py</code> file:</p>
<pre><code>#list of modules removed to keep code clean
from .models import *
@csrf_exempt
def dbm(request: HttpRequest) -> HttpResponse:
songs = song.objects.all()#.order_by('-release_date')
response = request.POST.get('Digits')
if response == None:
vr = VoiceResponse()
vr.say("Please choose a song, and then press pound")
vr.pause(length=1)
with vr.gather(finish_on_key='#', timeout=6, numDigits="1") as gather:
for song, num in songs:
gather.pause(length=1)
gather.say(f"For {song.name}, please press {num}")
vr.redirect(reverse('dbm'))
return HttpResponse(str(vr), content_type='text/xml')
elif response != None:
vr = VoiceResponse()
vr.say("hi")
return HttpResponse(str(vr), content_type='text/xml')
</code></pre>
<p>Thanks!</p>
|
<python><django><django-models><django-views>
|
2024-12-31 20:50:10
| 2
| 1,181
|
Kovy Jacob
|
79,320,565
| 13,971,251
|
Unable to use one line statement to get a value from a dictionary in a list, and then save it as a variable Python
|
<p>I am pretty confused.</p>
<p>I have a list of dictionaries, like so:</p>
<pre><code>options = [
{'name':'Notifications','id':'1', 'url':'notifications'},
{'name':'the directory','id':'2', 'url':'directory'},
{'name':'the DBM Studios catalogue','id':'842', 'url':'dbm'},
]
</code></pre>
<p>I also have an id variable:</p>
<pre><code>id = "842"
</code></pre>
<p>If I execute the following:</p>
<pre><code>for option in options:
if id == option['id']:
print (f"url is {option['url']}")
</code></pre>
<p>I get the value of the url key as a string:</p>
<pre><code>>>> url is dbm
</code></pre>
<p>If I execute it in 1 line, like so:</p>
<pre><code>print ([option['url'] for option in options if id == option['id']])
</code></pre>
<p>I get it as a list:</p>
<pre><code>>>> ['dbm']
</code></pre>
<p>Well, it seems the square brackets make it into a list. What if I replace them with round brackets?</p>
<pre><code>print ((option['url'] for option in options if id == option['id']))
</code></pre>
<p>But that gives me an object, not a value:</p>
<pre><code>>>> <generator object <genexpr> at 0x7af8b523e7a0>
</code></pre>
<p>Turns out I can print the value like this:</p>
<pre><code>print (*(option['url'] for option in options if id == option['id']))
</code></pre>
<p>Which returns:</p>
<pre><code>>>> dbm
</code></pre>
<p>But if I try to save it as a variable and then print the variable:</p>
<pre><code>url = *(option['url'] for option in options if id == option['id'])
print (url)
</code></pre>
<p>I get an error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/bin/pythonanywhere_runner.py", line 26, in _pa_run
code = compile(f.read(), filename.encode("utf8"), "exec")
File "/file.py", line 9
url = *(option['url'] for option in options if id == option['id'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: can't use starred expression here
</code></pre>
<p>I am looking more to understand what is going on than for a way to save it as a variable, as I can technically do this across multiple lines - but a solution as to how to do it would be great, too.</p>
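<p>(For reference, the one-line form I suspect I am looking for is something like the following, using <code>next()</code> on the generator, but I would still like to understand the starred-expression behaviour above.)</p>
<pre><code>url = next((option['url'] for option in options if id == option['id']), None)
print(url)  # -> dbm (None if no option matches)
</code></pre>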
|
<python><list><dictionary>
|
2024-12-31 18:07:13
| 0
| 1,181
|
Kovy Jacob
|
79,320,439
| 4,752,874
|
How to update Pandas DataFrame column using string concatenation in function
|
<p>I have a dataframe where I would like to add a full address column, which would be the combination of 4 other columns (street, city, county, postalcode) from that dataframe. Example output of the address column would be:</p>
<pre><code>5 Test Street, Worthing, West Sussex, RH5 3BX
</code></pre>
<p>Or if the city was empty as an example:</p>
<pre><code>5 Test Street, West Sussex, RH5 3BX
</code></pre>
<p>This is my code; after testing, I see I might need to use something like <code>apply</code>, but I can't work out how to do it.</p>
<pre><code>def create_address(street: str, city: str, county: str, postalcode: str) -> str:
list_address = []
if street:
list_address.append(street)
if city:
list_address.append(city)
if county:
list_address.append(county)
if postalcode:
list_address.append(postalcode)
address = ", ".join(list_address).rstrip(", ")
return address
df["address"] = create_address(df["Street"], df["City"], df["County"], df["PostalCode"])
</code></pre>
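<p>For reference, the direction I think I need is a row-wise <code>apply</code>, roughly like the sketch below (untested, and it assumes empty cells are empty strings rather than NaN, since <code>if street:</code> would treat NaN as truthy):</p>
<pre><code>df["address"] = df.apply(
    lambda row: create_address(row["Street"], row["City"], row["County"], row["PostalCode"]),
    axis=1,
)
</code></pre>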
|
<python><pandas><string><dataframe>
|
2024-12-31 16:50:21
| 2
| 349
|
CGarden
|
79,320,253
| 12,827,843
|
Extract the Procurement URL from TED using SPARQL query?
|
<p>I am trying to use a SPARQL query to get the procurement URL or eTenders resource ID from TED.</p>
<pre><code>import sparqldataframe
import pandas as pd
# Define the SPARQL query
sparql_query = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX epo: <http://data.europa.eu/a4g/ontology#>
PREFIX cccev: <http://data.europa.eu/m8g/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT DISTINCT ?publicationNumber ?legalName ?publicationDate ?title ?description
?accessURL ?submissionAddress ?procurementDocumentURL ?announcementURL WHERE {
GRAPH ?g {
?notice a epo:Notice ;
epo:hasPublicationDate ?publicationDate ;
epo:hasNoticePublicationNumber ?publicationNumber ;
epo:announcesRole [
a epo:Buyer ;
epo:playedBy [
epo:hasLegalName ?legalName ;
cccev:registeredAddress [
epo:hasCountryCode ?countryUri
]
]
] ;
epo:refersToProcedure [
dcterms:title ?title ;
dcterms:description ?description
] .
OPTIONAL { ?notice dcterms:accessRights ?accessURL . } # Access Rights might not be the correct predicate
OPTIONAL { ?notice dcterms:relation ?submissionAddress . }
OPTIONAL { ?notice epo:hasDocument ?procurementDocumentURL . } # Check if this is the correct predicate
OPTIONAL { ?notice dcterms:isReferencedBy ?announcementURL . } # Use a more relevant predicate for announcement URLs
}
?countryUri a skos:Concept ;
skos:prefLabel "Ireland"@en .
FILTER(CONTAINS(LCASE(STR(?legalName)), "dublin city council"))
}
ORDER BY ?publicationDate
"""
# Execute the SPARQL query
endpoint_url = "https://publications.europa.eu/webapi/rdf/sparql"
df = sparqldataframe.query(endpoint_url, sparql_query)
# Display the results
if not df.empty:
print("Tender Details with URL Fields for Dublin City Council:")
print(df[['publicationNumber', 'legalName', 'publicationDate', 'title', 'description',
'accessURL', 'submissionAddress', 'procurementDocumentURL', 'announcementURL']])
else:
print("No tenders found with URL fields for Dublin City Council.")
</code></pre>
<p>This is the result that I have managed to get so far:</p>
<p><a href="https://i.sstatic.net/GUoZbvQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GUoZbvQE.png" alt="Results of code" /></a></p>
<p>I have tried to discover what the URL field name is by looking at Excel exports and schema documents available from the TED website.</p>
<p><a href="https://docs.ted.europa.eu/ODS/latest/reuse/_attachments/TED-XML_general_description_v2.0_20160219.pdf" rel="nofollow noreferrer">https://docs.ted.europa.eu/ODS/latest/reuse/_attachments/TED-XML_general_description_v2.0_20160219.pdf</a></p>
<p><a href="https://i.sstatic.net/H348QXdO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H348QXdO.png" alt="Excel exported from https://ted.europa.eu/en/" /></a></p>
<p>I am not able to find a code example that returns the URL in the GitHub supporting documents.</p>
<p><a href="https://github.com/OP-TED/ted-rdf-docs/blob/main/notebooks/import-into-dataframe.ipynb" rel="nofollow noreferrer">https://github.com/OP-TED/ted-rdf-docs/blob/main/notebooks/import-into-dataframe.ipynb</a></p>
|
<python><sparql>
|
2024-12-31 15:08:53
| 1
| 647
|
Christopher
|
79,320,067
| 7,468,566
|
Comparing .loc/.iloc to tuples and chained indexing
|
<pre><code>import pandas as pd
# Creating a DataFrame with some sample data
data = {
    'Name': ['Jason', 'Emma', 'Alex', 'Sarah'],
'Age': [28, 24, 32, 27],
'City': ['New York', 'London', 'Paris', 'Tokyo'],
'Salary': [75000, 65000, 85000, 70000]
}
df = pd.DataFrame(data)
# Display the DataFrame
print(df)
# I want to update Jason's age, and I do so with:
df['Age'][df['Name'] == 'Jason'] = 29
</code></pre>
<p>For code such as that shown above, the df may or may not update Jason's age to 29, due to the chained indexing that is being used.</p>
<p>The documentation <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-view-versus-copy</a> mentions how .iloc/.loc is a better option. For example, something such as the following.</p>
<pre><code>df.loc[df['Name'] == 'Jason', 'Age'] = 29
</code></pre>
<p>However, the documentation is not clear about best practices regarding tuples, such as the following.</p>
<pre><code>df[('Age', df['Name'] == 'Jason')] = 29
</code></pre>
<p>I am trying to understand how the use of tuples would compare to the use of .iloc/.loc and the use of chained indexing in the context of best practices in pandas. Considerations can include time complexity, space complexity, code readability, etc.</p>
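<p>(For reference, a minimal sketch of the single-call <code>.loc</code> pattern the documentation recommends, using hypothetical data:)</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Name': ['Jason', 'Emma'], 'Age': [28, 24]})

# One .loc call selects the rows and the column together, so the assignment
# always acts on the original DataFrame rather than a possible copy.
df.loc[df['Name'] == 'Jason', 'Age'] = 29
print(df)
</code></pre>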
|
<python><pandas><dataframe>
|
2024-12-31 13:25:02
| 0
| 2,583
|
itstoocold
|
79,320,041
| 21,185,825
|
Python - Flask blueprint parameter?
|
<p>I need to pass a parameter (some_url) from the main app to the blueprint using Flask</p>
<p>This is my (oversimplified) app</p>
<pre><code>app = Flask(__name__)
app.register_blueprint(my_bp, url_prefix='/mybp', some_url ="http....")
</code></pre>
<p>This is my (oversimplified) blueprint</p>
<pre><code>my_bp = Blueprint('mybp', __name__, url_prefix='/mybp')
@my_bp.route('/entrypoint', methods=['GET', 'POST'])
def entrypoint():
some_url = ????
</code></pre>
<p>Not sure this is the way to go, but I have searched countless threads and just cannot find any info about this.</p>
<p>Thanks for your help</p>
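<p>(For reference, a minimal sketch of one common approach, assuming the value can live in the app config and be read through <code>current_app</code> rather than being passed to <code>register_blueprint</code> directly:)</p>
<pre><code>from flask import Flask, Blueprint, current_app

my_bp = Blueprint('mybp', __name__)

@my_bp.route('/entrypoint', methods=['GET', 'POST'])
def entrypoint():
    # read the value stored by the main app at startup
    some_url = current_app.config['SOME_URL']
    return some_url

app = Flask(__name__)
app.config['SOME_URL'] = "http...."
app.register_blueprint(my_bp, url_prefix='/mybp')
</code></pre>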
|
<python><flask><parameters><blueprint>
|
2024-12-31 13:09:51
| 2
| 511
|
pf12345678910
|
79,319,985
| 16,136,190
|
Lambda Function in Widget Binding Uses Last Iteration's Values
|
<p>I have a loop to generate widgets dynamically, and the commands for the buttons are set using lambdas, but the event bindings for <code>Listbox</code>es don't seem to work well:</p>
<pre class="lang-py prettyprint-override"><code>def fn1():
def cbf(e, param1, param2):
val = param2.get(param2.curselection())
param1.delete(0, tk.END)
param1.insert(0, val)
def fn2():
for x in range(n):
entry = Entry(root, textvariable=sometextvar, bg="somecolour")
lb = Listbox(root, height=someheight)
lb.insert(0, *["Some", "values"])
entry.bind("<FocusIn>", lambda e, entry=entry: lb.grid(row=entry.grid_info()["row"] - 1, column=2, pady=0, sticky=""))
entry.bind("<FocusOut>", lambda e: lb.grid_remove())
lb.bind("<<ListboxSelect>>", lambda e, param1=entry, param2=lb: cbf(e, param1=param1, param2=param2))
fn1()
</code></pre>
<p>I'm not sure how, but the <code>lb.grid_remove()</code> for <code><FocusOut></code> and the <code>grid(...)</code> for <code><FocusIn></code> work correctly. The one for <code><<ListboxSelect>></code> doesn't work correctly.</p>
<p>It works only for the last widget because, somehow, only the last widget is retained after the loop, though I'm capturing the variables when the lambda is defined (<code>lambda e, param1=entry, param2=lb:</code>). It also works only for the last widget for <code><KeyRelease></code>.</p>
<p>I've also tried <code>functools.partial</code> like:</p>
<pre class="lang-py prettyprint-override"><code>lb.bind("<<ListboxSelect>>", partial(cbf, None, param1=entry, param2=lb))
</code></pre>
<p>both as keyword and positional arguments. I got <code>TypeError: mainfile.<locals>.cbf() got multiple values for argument 'param1'</code> for this one.</p>
<p>I've also tried around 25 other permutations and combinations of function definitions, wrapping them and nested callbacks, etc. I also don't want to use classes, and I want something like what I'm currently doing. Library functions like <code>functools.partial</code> are also not preferred.</p>
<p>Why does it work for <code>command</code>s of buttons and some bindings, but not for others?</p>
<p>Here's a MRE:</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Entry, Listbox, StringVar
import tkinter as tk
def fn1():
root = Tk()
def cbf(e, param1, param2):
val = param2.get(param2.curselection())
param1.delete(0, tk.END)
param1.insert(0, val)
def fn2(n):
for x in range(n):
sometextvar = StringVar()
someheight = 5
entry = Entry(root, textvariable=sometextvar, bg="red")
lb = Listbox(root, height=someheight)
lb.insert(0, *["Some", "values"])
entry.bind("<FocusIn>",
lambda e, entry=entry: lb.grid(row=entry.grid_info()["row"] - 1, column=2, pady=0, sticky=""))
entry.bind("<FocusOut>", lambda e: lb.grid_remove())
lb.bind("<<ListboxSelect>>", lambda e, param1=entry, param2=lb: cbf(e, param1=param1, param2=param2))
entry.grid(row=x + 2, column=4)
fn2(5)
for r in range(root.grid_size()[1]):
root.rowconfigure(r, weight=1)
root.mainloop()
fn1()
</code></pre>
<p>Why is the value of the last entry set when I'm selecting the options ('Some' or 'values') from the <code>Listbox</code> meant for another entry? And why does the focus mechanism work?</p>
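<p>(For reference, a minimal standalone sketch of the general late-binding behaviour of loop variables in lambdas, independent of tkinter: a closure looks a name up when the lambda is called, while a default argument captures the value at definition time:)</p>
<pre class="lang-py prettyprint-override"><code># Each lambda below looks up i when called, so all of them see the final value
late = [lambda: i for i in range(3)]
# Each lambda here binds its own copy of i at definition time
bound = [lambda i=i: i for i in range(3)]

print([f() for f in late])   # [2, 2, 2]
print([f() for f in bound])  # [0, 1, 2]
</code></pre>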
|
<python><loops><tkinter><lambda>
|
2024-12-31 12:37:09
| 1
| 859
|
The Amateur Coder
|
79,319,878
| 7,358,909
|
Initializing Hugging Face Transformer restarts program loop
|
<p>Initializing a Hugging Face transformer causes my loop to restart. I have created a simple loop which reads text and replies, but the loop restarts in a new thread when initializing the chatbot pipeline. A minimal reproduction example is given below.</p>
<pre><code>import time
from transformers import pipeline
from transformers.pipelines import Text2TextGenerationPipeline
chatbot_model = pipeline(task="text-generation",model="facebook/blenderbot-400M-distill")
i=0
while(True):
try:
if (i%2)==0:
print(f"{i} - Even")
i = i+1
time.sleep(0.5)
else:
i = i+1
continue
except Exception as e:
pass
</code></pre>
<p>The output shows the loop restarting after the chatbot initialization: after printing 40, another loop seems to start from 0. The same happens when I read files in a loop. Is it due to multiple threads? How is this happening?
The output is shown below.</p>
<pre><code>Device set to use cuda:0
0 - Even
2 - Even
4 - Even
6 - Even
8 - Even
10 - Even
12 - Even
14 - Even
16 - Even
18 - Even
20 - Even
22 - Even
24 - Even
26 - Even
28 - Even
30 - Even
32 - Even
34 - Even
36 - Even
38 - Even
40 - Even
The model 'TFBlenderbotForConditionalGeneration' is not supported for text-generation. Supported models are ['TFBertLMHeadModel', 'TFCamembertForCausalLM', 'TFCTRLLMHeadModel', 'TFGPT2LMHeadModel', 'TFGPT2LMHeadModel', 'TFGPTJForCausalLM', 'TFMistralForCausalLM',].
0 - Even
42 - Even
2 - Even
44 - Even
4 - Even
46 - Even
</code></pre>
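<p>(For reference: a second counter starting from 0 alongside the first is what typically appears when a child process re-imports the main module, for example with the <code>spawn</code> start method on Windows; a minimal guard sketch under that assumption:)</p>
<pre><code>import time
from transformers import pipeline

def main():
    chatbot_model = pipeline(task="text-generation",
                             model="facebook/blenderbot-400M-distill")
    i = 0
    while True:
        if i % 2 == 0:
            print(f"{i} - Even")
            time.sleep(0.5)
        i += 1

# Module-level code runs again on every re-import; the guard keeps the loop
# from starting a second time inside a spawned child process.
if __name__ == "__main__":
    main()
</code></pre>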
|
<python><python-3.x><chatbot><python-multithreading><huggingface-transformers>
|
2024-12-31 11:45:15
| 1
| 1,868
|
Shahir Ansari
|
79,319,663
| 629,960
|
FastAPI + Apache. 409 response from FastAPI is converted to 502. What can be the reason?
|
<p>I have a FastAPI application, which, in general, works fine. My setup is Apache as a proxy and FastAPI server behind it. This is the apache config:</p>
<pre><code>ProxyPass /fs http://127.0.0.1:8000/fs retry=1 acquire=3000 timeout=600 Keepalive=On disablereuse=ON
ProxyPassReverse /fs http://127.0.0.1:8000/fs
</code></pre>
<p>I have one endpoint that can return a <code>409</code> HTTP response if an object already exists. FastAPI works correctly. I can see in logs:</p>
<pre><code>INFO: 172.**.0.25:0 - "PUT /fs/Automation/123.txt HTTP/1.1" 409 Conflict
</code></pre>
<p>But the final response to the client is "502 Bad Gateway".</p>
<p>Apache error log has a record for this:</p>
<pre><code>[Tue Dec 31 04:45:54.545972 2024] [proxy:error] [pid 3019178:tid 140121168807680] (32)Broken pipe: [client 172.31.0.25:63759] AH01084: pass request body failed to 127.0.0.1:8000 (127.0.0.1), referer: https://10.100.21.13/fs/view/Automation
[Tue Dec 31 04:45:54.545996 2024] [proxy_http:error] [pid 3019178:tid 140121168807680] [client 172.31.0.25:63759] AH01097: pass request body failed to 127.0.0.1:8000 (127.0.0.1) from 172.31.0.25 (), referer: https://10.100.21.13/fs/view/Automation
</code></pre>
<p>What can be the reason?</p>
<p>Another interesting thing is that it doesn't happen for every <code>PUT</code> request.
How can I debug this? Maybe FastAPI has to return something else, some header? Or does it return too much, some extra data? How can I catch this?</p>
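<p>(For reference, one workaround often suggested for "pass request body failed" on early error responses is to consume the request body before returning the 409, so the proxy can finish sending it; a minimal sketch, where the route and the existence check are hypothetical:)</p>
<pre><code>from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

def object_exists(path: str) -> bool:
    # hypothetical placeholder for the real existence check
    return True

@app.put("/fs/{path:path}")
async def put_file(path: str, request: Request):
    if object_exists(path):
        # drain the body so the proxy can finish writing before the 409 goes out
        await request.body()
        raise HTTPException(status_code=409, detail="Object already exists")
    data = await request.body()
    return {"stored": path, "size": len(data)}
</code></pre>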
|
<python><apache><fastapi><reverse-proxy><http-status-code-502>
|
2024-12-31 09:51:23
| 1
| 2,113
|
Roman Gelembjuk
|
79,319,632
| 9,954,014
|
Return multiple values from HTTP response
|
<p>I am trying to create a property map in "Authentik" which fetches values from other services. In this case I need to use an API key to return 3 values from a request to an Emby server. At first I tried with <code>curl</code> to see if I was able to get a response:</p>
<pre class="lang-bash prettyprint-override"><code>curl 'http://myserver:80/backend?api_key=xyz' -H 'Content-Type: application/json; charset=UTF-8' --data-raw '{[my request data]}' --compressed
</code></pre>
<p>The returned data contains the values I require: "AccessToken", "ServerId" and "Id". Now I am trying to do the same with Python. The goal is to end up with something like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"ak_proxy": {
"user_attributes": {
"additionalHeaders": {
"X-Emby-Token": "[value from response data]",
"X-Emby-ServerId": "[value from response data]",
"X-Emby-Id": "[value from response data]"
}
}
}
}
</code></pre>
<p>with this in mind, I wrote the following:</p>
<pre class="lang-py prettyprint-override"><code>import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen
base_url = "http://myserver:80"
end_point = "/backend?api_key=xyz"
json_data = {[my request data]}
postdata = json.dumps(json_data).encode()
headers = {"Content-Type": "application/json; charset=UTF-8"}
try:
httprequest = Request(base_url + end_point, data=postdata, method="POST", headers=headers)
with urlopen(httprequest) as response:
responddata = json.loads(response.read().decode())
return {"ak_proxy": {"user_attributes": {"additionalHeaders": {"X-server-Token": responddata['AccessToken'], "X-server-ServerId": responddata['ServerId'], "X-server-Id": responddata['Id']}}}}
except: return "null"
</code></pre>
<p>I get:</p>
<pre><code>SyntaxError: 'return' outside function
</code></pre>
<p>when testing the Python code.</p>
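<p>(For reference: in plain Python a <code>return</code> must live inside a function, so a minimal sketch for testing the snippet standalone is to wrap it and call it; the names <code>base_url</code>, <code>end_point</code>, <code>postdata</code> and <code>headers</code> are reused from the code above, and the header keys follow the desired output:)</p>
<pre class="lang-py prettyprint-override"><code>def build_headers():
    try:
        httprequest = Request(base_url + end_point, data=postdata,
                              method="POST", headers=headers)
        with urlopen(httprequest) as response:
            responddata = json.loads(response.read().decode())
        return {"ak_proxy": {"user_attributes": {"additionalHeaders": {
            "X-Emby-Token": responddata["AccessToken"],
            "X-Emby-ServerId": responddata["ServerId"],
            "X-Emby-Id": responddata["Id"],
        }}}}
    except Exception:
        return None

print(build_headers())
</code></pre>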
|
<python><httprequest>
|
2024-12-31 09:34:30
| 1
| 367
|
Asem Khen
|
79,319,631
| 6,449,621
|
how to decode Proto2 response python
|
<p>I am calling an API using requests, and the response content is in proto2 format.</p>
<pre><code>r = requests.post('https://gs-loc.apple.com/clls/wloc',headers=HEADERS,data=DATA,verify=False)
print(r.content)
</code></pre>
<p>Response:</p>
<pre><code>b"\x125\n\x1164:fb:92:9c:91:8d\x12\x1a\x08\x88\x90\xbf\xf0\x04\x10\xe8\xf7\x84\xf2\x1c\x18\x1d \x00(\x94\x070\nX>`\xf6\x01\xa8\x01\x01\xb0\x01\x00\x124\n\x103c:1e:4:e3:b6:9b\x12\x1a\x08\x82\x81\xbe\xf0\x04\x10\xc1\xbc\x86\xf2\x1c\x184 \x00(\x94\x070\nX?`\xcc\x03\xa8\x01\r\xb0\x01\x00\x125\n\x114c:ae:1c:20:de:4e\x12\x1a\x08\xda\x8a\xbe\xf0\x04\x10\x9c\xd1\x85\xf2\x1c\x18' \x03(\x94\x070\nX?`\xd0\x03\xa8\x01\x04\xb0\x01\x00\x125\n\x1164:fb:92:9c:b6:f5\x12\x1a\x08\xff\xcf\xbe\xf0\x04\x10\xf0\x9b\x87\xf2\x1c\x185 \x00(\x94\x070\nX?`\xf7\x01\xa8\x01\x01\xb0\x01\x00\x124\n\x100:31:92:57:1b:6d\x12\x1a\x08\x98\xd8\xbe\xf0\x04\x10\xc3\x8c\x84\xf2\x1c\x18\x14 \x03(\x93\x070\nX>`\xec\x01\xa8\x01\x06\xb0\x01\x00\x124\n\x1160:32:b1:dc:65:81\x12\x19\x08\xd8\xc8\xbf\xf0\x04\x10\xd9\x89\x85\xf2\x1c\x18\x17 \x03(\x94\x070\nX>`V\xa8\x01\x0b\xb0\x01\x00\x125\n\x1164:fb:92:9c:8e:68\x12\x1a\x08\x9e\xd9\xc0\xf0\x04\x10\xcb\xb0\x86\xf2\x1c\x18\x1f \x00(\x94\x070\nX>`\xee\x02\xa8\x01\x05\xb0\x01\x00\x125\n\x1164:fb:92:9c:8e:69\x12\x1a\x08\x9b\xdc\xc0\xf0\x04\x10\xcb\xb0\x86\xf2\x1c\x18' \x00(\x94\x070\nX>`\xc8\x03\xa8\x01\x05\xb0\x01\x00\x125\n\x1164:fb:92:9c:91:8c\x12\x1a\x08\x8b\x8d\xbf\xf0\x04\x10\xe3\xfd\x84\xf2\x1c\x18\x1b \x00(\x94\x070\nX>`\xf7\x01\xa8\x01\x01\xb0\x01\x00\x125\n\x1164:fb:92:9c:b6:f4\x12\x1a\x08\xff\xcf\xbe\xf0\x04\x10\xf5\x95\x87\xf2\x1c\x184 \x00(\x94\x070\nX>`\x84\x02\xa8\x01\x01\xb0\x01\x00\x125\n\x1164:fb:92:9d:b8:10\x12\x1a\x08\xe2\x88\xc0\xf0\x04\x10\xfa\x8f\x87\xf2\x1c\x18@ \x00(\x93\x070\nX>`\xc9\x04\xa8\x01\x01\xb0\x01\x00\x125\n\x1164:fb:92:9d:b8:11\x12\x1a\x08\x85\x85\xc0\xf0\x04\x10\xfa\x8f\x87\xf2\x1c\x18C \x00(\x93\x070\nX>`\xf0\x04\xa8\x01\x01\xb0\x01\x00\x126\n\x1164:fb:92:ab:f8:9f\x12\x1b\x08\x8c\xc8\xc0\xf0\x04\x10\xb0\xb9\x85\xf2\x1c\x18\x9e\x01 \x03(\x93\x070\nX>`\xe7\x08\xa8\x01\x0b\xb0\x01\x00\x126\n\x1164:fb:92:ab:f8:a0\x12\x1b\x08\xd5\xbd\xc0\xf0\x04\x10\xa1\xcb\x85\xf2\x1c\x18\x9d\x01 \x03(\x93\x070\nX>`\xc7\x07\xa8\x01\x0b\xb0\x01\x00\x124\n\x1174:da:88:ca:16:c6\x12\x19\x08\xd5\x90\xbe\xf0\x04\x10\xf9\xfa\x85\xf2\x1c\x18\x19 \x03(\x94\x070\nX>`U\xa8\x01\x02\xb0\x01\x00\x124\n\x117e:61:66:81:c7:49\x12\x19\x08\xf2\xde\xbe\xf0\x04\x10\xd2\xfa\x83\xf2\x1c\x18\x14 \x03(\x93\x070\nX>`G\xa8\x01\x01\xb0\x01\x00\x125\n\x119c:53:22:66:c8:52\x12\x1a\x08\x9e\xe7\xbf\xf0\x04\x10\xe1\xe8\x83\xf2\x1c\x18B \x03(\x93\x070\nX>`\xcf\x05\xa8\x01\x04\xb0\x01\x00\x125\n\x11b4:f9:49:3b:ad:e5\x12\x1a\x08\xe7\xd5\xbd\xf0\x04\x10\xe7\xe2\x83\xf2\x1c\x18H \x03(\x93\x070\nX>`\xd5\x04\xa8\x01\x06\xb0\x01\x00\x125\n\x11b4:f9:49:3b:ad:e8\x12\x1a\x08\xcb\xd0\xbd\xf0\x04\x10\xec\xdc\x83\xf2\x1c\x18K \x03(\x94\x070\nX>`\xd4\x04\xa8\x01\x06\xb0\x01\x00\x125\n\x1184:d8:1b:b1:69:6e\x12\x1a\x08\x9c\xcb\xbc\xf0\x04\x10\xaa\xaa\x84\xf2\x1c\x18k \x03(\x95\x070\nX=`\xf0\x05\xa8\x01\n\xb0\x01\x00\x125\n\x119c:53:22:66:ce:5a\x12\x1a\x08\xa6\xb8\xbf\xf0\x04\x10\xf6\xd0\x83\xf2\x1c\x18\x18 \x03(\x93\x070\nX=`\xa1\x01\xa8\x01\x02\xb0\x01\x00\x125\n\x11a2:53:22:66:c8:52\x12\x1a\x08\x97\xf0\xbf\xf0\x04\x10\xec\xdc\x83\xf2\x1c\x18C \x03(\x93\x070\nX=`\xce\x05\xa8\x01\x04\xb0\x01\x00\x125\n\x11a2:53:22:66:ce:5a\x12\x1a\x08\xe7\xb6\xbf\xf0\x04\x10\xf6\xd0\x83\xf2\x1c\x18\x19 \x03(\x93\x070\x0bX=`\xb1\x01\xa8\x01\x02\xb0\x01\x00\x125\n\x11a6:53:22:66:c8:52\x12\x1a\x08\xbc\xe9\xbf\xf0\x04\x10\xe7\xe2\x83\xf2\x1c\x18? 
\x03(\x93\x070\nX=`\xa5\x04\xa8\x01\x04\xb0\x01\x00\x125\n\x11a6:53:22:66:ce:5a\x12\x1a\x08\xa6\xb8\xbf\xf0\x04\x10\xf1\xd6\x83\xf2\x1c\x18\x1c \x03(\x93\x070\x0bX=`\xfd\x01\xa8\x01\x02\xb0\x01\x00\x122\n\x0f42:49:f:2:36:f8\x12\x19\x08\xe5\xb9\xbf\xf0\x04\x10\xb2\x89\x83\xf2\x1c\x18\x14 \x03(\x93\x070\nX/`K\xa8\x01\x0b\xb0\x01\x00\x125\n\x11dc:62:79:a6:94:22\x12\x1a\x08\xf7\xa4\xbf\xf0\x04\x10\xa8\x95\x83\xf2\x1c\x18\x1f \x00(\x93\x070\nX,`\x9b\x02\xa8\x01\t\xb0\x01\x00\x125\n\x11dc:62:79:a6:94:1e\x12\x1a\x08\xf6\xca\xbf\xf0\x04\x10\xad\x8f\x83\xf2\x1c\x18( \x00(\x94\x070\nX+`\x97\x02\xa8\x01\t\xb0\x01\x00\x125\n\x11e2:62:79:a6:94:22\x12\x1a\x08\xac\x8c\xbf\xf0\x04\x10\xad\x8f\x83\xf2\x1c\x18% \x00(\x94\x070\nX*`\x9a\x02\xa8\x01\t\xb0\x01\x00\x125\n\x1164:fb:92:ad:b2:23\x12\x1a\x08\xbe\x85\xbe\xf0\x04\x10\xf9\x99\x8e\xf2\x1c\x18t \x00(\x96\x070\nX?`\x80\x07\xa8\x01\x05\xb0\x01\x00\x125\n\x11b4:b0:24:4a:2a:84\x12\x1a\x08\xe3\xf0\xbe\xf0\x04\x10\xa6\x8a\x89\xf2\x1c\x18B \x03(\x94\x070\nX?`\xd1\x05\xa8\x01\x0b\xb0\x01\x00\x125\n\x11f0:a7:31:f8:3a:aa\x12\x1a\x08\xd7\x8d\xbe\xf0\x04\x10\xf2\xb0\x88\xf2\x1c\x18* \x00(\x94\x070\nX?`\xbd\x02\xa8\x01\x0b\xb0\x01\x00\x126\n\x113c:52:a1:6e:3f:76\x12\x1b\x08\x8c\xc8\xc0\xf0\x04\x10\x86\x99\x88\xf2\x1c\x18\xcf\x01 \x00(\x94\x070\nX>`\x96\x08\xa8\x01\x0b\xb0\x01\x00\x126\n\x1154:af:97:4f:10:ca\x12\x1b\x08\xc4\xc8\xb9\xf0\x04\x10\x8e\xbd\x8a\xf2\x1c\x18\x86\x01 \x00(\x98\x070\nX>`\xb7\x08\xa8\x01\x01\xb0\x01\x00\x125\n\x1164:fb:92:ad:b2:24\x12\x1a\x08\xf5\xe9\xbd\xf0\x04\x10\x83\x8e\x8e\xf2\x1c\x18g \x00(\x96\x070\nX>`\xa0\x07\xa8\x01\x05\xb0\x01\x00\x124\n\x1064:fb:92:ae:5:7b\x12\x1a\x08\xa8\x81\xbb\xf0\x04\x10\xcb\xcf\x8e\xf2\x1c\x18\x14 \x00(\x98\x070\nX>`\x8b\x01\xa8\x01\x05\xb0\x01\x00\x124\n\x1064:fb:92:ae:5:7c\x12\x1a\x08\xa8\x81\xbb\xf0\x04\x10\xcb\xcf\x8e\xf2\x1c\x18\x14 \x00(\x98\x070\nX>`\x8e\x01\xa8\x01\x05\xb0\x01\x00\x124\n\x10e4:fa:c4:98:5:9f\x12\x1a\x08\xe5\xed\xbe\xf0\x04\x10\x80\xe4\x8b\xf2\x1c\x18| \x00(\x95\x070\nX>`\x97\x08\xa8\x01\x06\xb0\x01\x00\x125\n\x11f0:a7:31:f7:a6:8a\x12\x1a\x08\x90\x8e\xbc\xf0\x04\x10\xfb\xe9\x8b\xf2\x1c\x18. \x00(\x96\x070\nX>`\xc8\x01\xa8\x01\x01\xb0\x01\x00\x125\n\x11bc:62:d2:91:80:f8\x12\x1a\x08\xa7\xcd\xbb\xf0\x04\x10\xc2\x96\x8b\xf2\x1c\x189 \x00(\x97\x070\nX6`\xf3\x02\xa8\x01\x01\xb0\x01\x00\x125\n\x11be:62:d2:91:80:f8\x12\x1a\x08\xcf\xc3\xbb\xf0\x04\x10\xc7\x90\x8b\xf2\x1c\x18> \x00(\x97\x070\nX6`\xc9\x03\xa8\x01\x01\xb0\x01\x00\x123\n\x1042:49:f:67:1c:b1\x12\x19\x08\xd0\x9d\xbb\xf0\x04\x10\xc6\xd5\x8e\xf2\x1c\x18\x14 \x00(\x98\x070\nX5`\\\xa8\x01\x0b\xb0\x01\x00"
</code></pre>
<p>How can I convert the above response to a string or dictionary?</p>
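<p>(For reference, a minimal sketch of one way to inspect a protobuf payload without the <code>.proto</code> schema, assuming the <code>protoc</code> compiler is installed; <code>r</code> is the requests response from the snippet above:)</p>
<pre><code>import subprocess

# protoc --decode_raw reads the binary message from stdin and prints the
# field numbers and wire values it can recover without a schema.
raw = r.content
decoded = subprocess.run(["protoc", "--decode_raw"],
                         input=raw, capture_output=True, check=True)
print(decoded.stdout.decode())
</code></pre>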
|
<python><protocol-buffers>
|
2024-12-31 09:33:49
| 1
| 465
|
anandyn02
|
79,319,503
| 2,897,115
|
Pass a list of parameter to parametrize from a function
|
<p>I have session scope fixture for initiating db connection</p>
<pre><code>import pytest
@pytest.fixture(scope="session")
def dconn():
# Your database connection logic
return "database_connection"
@pytest.fixture
def some_function(dconn):
# Use dconn to generate data
return [
{"value1": "a", ....,"expected": "result1"},
{"value1": "b", ....,"expected": "result2"},
{"value1": "c", ....,"expected": "result3"}
]
@pytest.mark.parametrize("param", some_function)
def test_with_params(param):
value1 = param["value1"]
....
    expected = param["expected"]
assert process_data(value1) == expected
</code></pre>
<p>It says <code>some_function</code> is not iterable. I changed it to:</p>
<pre><code>@pytest.mark.parametrize("param", some_function())
def test_with_params(param):
value1 = param["value1"]
....
    expected = param["expected"]
assert process_data(value1) == expected
</code></pre>
<p>This says fixtures cannot be called directly.</p>
<p>How to pass the list to parametrize?</p>
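<p>(For reference, a sketch of one common pattern: parametrization happens at collection time, before fixtures exist, so the cases are built in <code>pytest_generate_tests</code> instead of a fixture; this assumes the case data can be produced without the session-scoped connection fixture, and <code>process_data</code> is the function from the snippet above:)</p>
<pre><code>def make_cases():
    # plain function, not a fixture, so it can be called at collection time
    return [
        {"value1": "a", "expected": "result1"},
        {"value1": "b", "expected": "result2"},
    ]

def pytest_generate_tests(metafunc):
    if "param" in metafunc.fixturenames:
        metafunc.parametrize("param", make_cases())

def test_with_params(param):
    assert process_data(param["value1"]) == param["expected"]
</code></pre>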
|
<python><pytest>
|
2024-12-31 08:26:47
| 1
| 12,066
|
Santhosh
|
79,319,434
| 329,829
|
DuplicateError with name 'null' when trying to pivot a Polars DataFrame
|
<p>I have this example dataframe in polars:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df_example = pl.DataFrame(
{
"DATE": ["2024-11-11", "2024-11-11", "2024-11-12", "2024-11-12", "2024-11-13"],
"A": [None, None, "option1", "option2", None],
"B": [None, None, "YES", "YES", "NO"],
}
)
</code></pre>
<p>Which looks like this:</p>
<pre><code>shape: (5, 3)
ββββββββββββββ¬ββββββββββ¬βββββββ
β DATE β A β B β
β --- β --- β --- β
β str β str β str β
ββββββββββββββͺββββββββββͺβββββββ‘
β 2024-11-11 β null β null β
β 2024-11-11 β null β null β
β 2024-11-12 β option1 β YES β
β 2024-11-12 β option2 β YES β
β 2024-11-13 β null β NO β
ββββββββββββββ΄ββββββββββ΄βββββββ
</code></pre>
<p>As you can see this is a long format dataframe. I want to have it in a wide format, meaning that I want the DATE to be unique per row and for each other column several columns have to be created. What I want to achieve is:</p>
<pre><code>shape: (3, 5)
ββββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββ¬βββββββ
β DATE β A_option1 β A_option2 β B_YES β B_NO β
β --- β --- β --- β --- β --- β
β str β bool β bool β bool β bool β
ββββββββββββββͺββββββββββββͺββββββββββββͺββββββββͺβββββββ‘
β 2024-11-11 β null β null β null β null β
β 2024-11-12 β true β true β true β null β
β 2024-11-13 β null β null β null β true β
ββββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββ΄βββββββ
</code></pre>
<p>I have tried doing the following:</p>
<pre class="lang-py prettyprint-override"><code>df_example.pivot(
index="DATE", on=["A", "B"], values=["A", "B"], aggregate_function="first"
)
</code></pre>
<p>However, I get this error:</p>
<pre><code># DuplicateError: column with name 'null' has more than one occurrence
</code></pre>
<p>Which is logical, as it tries to create a column for the Null values in column A, and a column for the Null values in column B.</p>
<p>I am looking for a clean solution to this problem. I know I can impute the nulls per column with something unique and then do the pivot. Or by pivoting per column and then dropping the Null columns. However, this will create unnecessary columns. I want something more elegant.</p>
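<p>(For reference, a sketch of one possible approach; it assumes a recent polars where <code>unpivot</code> is available (older versions call it <code>melt</code>): reshape to long form, drop the nulls, pivot on a combined column name, then join back to the unique dates so all-null dates are kept:)</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

dates = df_example.select("DATE").unique(maintain_order=True)

wide = (
    df_example.unpivot(index="DATE")          # long form: DATE, variable, value
    .drop_nulls("value")
    .with_columns(
        pl.concat_str([pl.col("variable"), pl.col("value")], separator="_").alias("name"),
        pl.lit(True).alias("flag"),
    )
    .pivot(on="name", index="DATE", values="flag", aggregate_function="first")
)

result = dates.join(wide, on="DATE", how="left")
print(result)
</code></pre>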
|
<python><dataframe><pivot><python-polars><polars>
|
2024-12-31 07:44:03
| 2
| 5,232
|
Olivier_s_j
|
79,319,294
| 188,331
|
BLEURT evaluation metric consumed too much RAM
|
<p>The BLEURT code used up almost all of the 24GB RAM of an NVIDIA GeForce RTX 4090 to evaluate just one set of sentences.</p>
<pre><code>ref = 'reference sentence here'
hypo = 'hypothesis sentence here'
scores = {}
import evaluate
metric_bleurt = evaluate.load('bleurt')
bleurt = metric_bleurt.compute(predictions=[hypo], references=[ref])
scores["bleurt"] = round(bleurt["scores"][0], 3)
print(scores)
</code></pre>
<p>The total memory used is 22290MB. How is that possible? How do I reduce memory usage when evaluating with BLEURT? Thanks.</p>
<hr />
<p><strong>UPDATE</strong> With the <code>evaluate</code> library, the default checkpoint loaded is <code>BLEURT-BASE-128</code>. To set a smaller checkpoint, I use <code>metric_bleurt = evaluate.load('bleurt', config_name="bleurt-tiny-128")</code> (which is undocumented). Still, 22GB RAM is occupied when using BLEURT Tiny checkpoint.</p>
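<p>(For reference, a sketch of one thing to try, under the assumption that the 22 GB is TensorFlow's default up-front GPU allocation rather than the model itself; BLEURT runs on TensorFlow, which normally grabs almost all GPU memory unless growth is enabled:)</p>
<pre><code>import os
# must be set before TensorFlow is imported (evaluate.load triggers the import)
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

import evaluate
metric_bleurt = evaluate.load("bleurt", config_name="bleurt-tiny-128")
print(metric_bleurt.compute(predictions=["hypothesis sentence here"],
                            references=["reference sentence here"]))
</code></pre>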
|
<python><out-of-memory><huggingface>
|
2024-12-31 06:06:52
| 0
| 54,395
|
Raptor
|
79,319,263
| 1,788,656
|
Why does geopandas dissolve function keep working forever?
|
<p>All,
I am trying to use the Geopandas dissolve function to aggregate a few countries; the function countries.dissolve keeps running forever. Here is a minimal script.</p>
<pre><code>import geopandas as gpd
shape='/Volumes/TwoGb/shape/fwdshapfileoftheworld/'
countries=gpd.read_file(shape+'TM_WORLD_BORDERS-0.3.shp')
# Add columns
countries['wmosubregion'] = ''
countries['dummy'] = ''
country_count = len(countries)
# If the country list is empty then use all countries.
country_list=['SO','SD','DJ','KM']
default = 'Null'
for i in range(country_count):
countries.at[i, 'wmosubregion'] = default
if countries.ISO2[i] in country_list:
countries.at[i, 'wmosubregion'] = "EAST_AFRICA"
print(countries.ISO2[i])
region_shapes = countries.dissolve(by='wmosubregion')
</code></pre>
<p>I am using the TM_WORLD_BORDERS-0.3 shape files, which is freely accessible. You can get the shape files (TM_WORLD_BORDERS-0.3.shp, TM_WORLD_BORDERS-0.3.dbf, TM_WORLD_BORDERS-0.3.shx, TM_WORLD_BORDERS-0.3.shp ) from the following GitHub <a href="https://github.com/rmichnovicz/Sick-Slopes/tree/master" rel="nofollow noreferrer">https://github.com/rmichnovicz/Sick-Slopes/tree/master</a></p>
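<p>(For reference, a sketch of a workaround under the assumption that the slow part is dissolving all the countries tagged <code>Null</code> into one huge multipolygon: dissolve only the countries of interest, optionally repairing invalid geometries first; <code>shape</code> is the directory variable from the script above:)</p>
<pre><code>import geopandas as gpd

countries = gpd.read_file(shape + 'TM_WORLD_BORDERS-0.3.shp')
east_africa = countries[countries['ISO2'].isin(['SO', 'SD', 'DJ', 'KM'])].copy()
east_africa['wmosubregion'] = 'EAST_AFRICA'
# buffer(0) is a common trick to repair self-intersecting polygons before a union
east_africa['geometry'] = east_africa.geometry.buffer(0)
region_shapes = east_africa.dissolve(by='wmosubregion')
</code></pre>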
<p>Thanks</p>
|
<python><python-3.x><geopandas>
|
2024-12-31 05:46:50
| 1
| 725
|
Kernel
|
79,319,156
| 162,349
|
How to add Python type annotations to a class that inherits from itself?
|
<p>I'm trying to add type annotations to an <code>ElementList</code> object that inherits from <code>list</code> and can contain either <code>Element</code> objects or other <code>ElementList</code> objects.</p>
<p>When I run the following code through mypy:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Self
class Element:
pass
class ElementList(list[Element | Self]):
pass
elements = ElementList(
[
Element(),
Element(),
ElementList(
[
Element(),
Element(),
]
),
]
)
</code></pre>
<p>I get the following error:</p>
<pre><code>element.py:8: error: Self type is only allowed in annotations within class definition [misc]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>What's the recommended way to add typing annotations to this so that mypy doesn't throw an error?</p>
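<p>(For reference, a sketch of one pattern that avoids <code>Self</code> in the base class entirely: a quoted forward reference to the class's own name, which mypy generally accepts for recursive container types:)</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union

class Element:
    pass

class ElementList(list[Union[Element, "ElementList"]]):
    pass

elements = ElementList([Element(), ElementList([Element()])])
</code></pre>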
|
<python><python-typing><mypy>
|
2024-12-31 04:14:35
| 1
| 4,472
|
cdwilson
|
79,318,848
| 8,229,029
|
How to properly run Python multiprocessing pool inside larger loop and shut it down before next loop starts
|
<p>I have a large script where I am processing terabytes of weather/climate data that comes in gridded format. I have a script that uses an outer loop (over years - 1979 to 2024), and for each year, loops over each month (1 - 12), as data for each month comes in monthly files, and an additional loop that loops over each hour or station, as appropriate, within each monthly loop. These loops fill arrays that look like <code>[hr, stn, pressure level]</code>, or <code>[hr, stn]</code>, depending on the variable. This part works fine.</p>
<p>Once I have the data in the arrays, I am using multiprocessing <code>pool.starmap</code> function to run, in parallel, metpy package calculations that only operate on 1D arrays. This is where something weird happens. The program now seems to go back to the outer loop for years, and begins to run "# begin by processing some files that come in yearly format" again.</p>
<p>Here is my code, which is generalized, as I have had a lot of trouble trying to reproduce this error with a much smaller example.</p>
<pre><code># loop over years
for yr in np.arange(46) + 1979:
    # begin by processing some files that come in yearly format
    # loop over months
    for mo in range(1, 13):
        # open and put data from specific lat/lon points into arrays by variable
        # loop over lat/lon locations ("stations")
        for station in range(50):
            for hr in range(2920):
                # fill arrays
                ...
    # go back to working within outer loop (years) for starting multiprocessing work on filled arrays
    with Pool(processes = 16) as pool1:
        tw_sfc_pooled = pool1.starmap(mpcalc.wet_bulb_temperature, tw_sfc_argument_list)
        bulk_shear_1km_pooled = pool1.starmap(mpcalc.bulk_shear, bulk_shear_1km_argument_list)
        many_more_pooled = pool1.starmap(mpcalc.func, many_arg_lists)  # 20+ similar starmap calls
        pool1.close()  # am I closing this wrong?
        pool1.join()   # do I need this statement here?
    # put pooled lists into final variable arrays for use in work
</code></pre>
<p>The code works fine until the multiprocessing ends. From there, it goes back to the outer loop, and begins to read in the next year's data in parallel, with many xarray and other errors (I'm using xarray to read netcdf files). So, the first year of data processing goes fine, but everything fails for the 2nd year.</p>
<p>My guess is that either multiprocessing isn't being started or stopped correctly in my code, or that it doesn't like being inside a loop, or maybe the problems arise from having 20+ <code>pool.starmap</code> processes. I don't know.</p>
<p>Any thoughts on what the problem is?</p>
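<p>(For reference, a sketch of the usual structure when a script seems to restart from the top once the pool starts: with the <code>spawn</code> or <code>forkserver</code> start methods every worker re-imports the main module, so module-level code, including the year loop, runs again unless it is guarded:)</p>
<pre><code>import numpy as np
from multiprocessing import Pool

def process_year(yr):
    # monthly/hourly loops, array filling and the pool.starmap calls go here
    ...

if __name__ == "__main__":
    for yr in np.arange(46) + 1979:
        process_year(yr)
</code></pre>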
|
<python><parallel-processing><multiprocessing><metpy>
|
2024-12-30 23:26:51
| 2
| 1,214
|
user8229029
|
79,318,845
| 1,141,798
|
Serving MMCV/MMDet on Databricks - GLIBC_2.32 not found
|
<p>I'm trying to host an MMDetection model on <a href="https://learn.microsoft.com/en-us/azure/databricks/machine-learning/model-serving/" rel="nofollow noreferrer">Databricks Serving (on Azure)</a>. The model is trained on 15.4 LTS ML. However, during the serving endpoint update, it complains about <code>GLIBC_2.32</code>:</p>
<pre><code>An error occurred while loading the model: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmcv/_ext.cpython-311-x86_64-linux-gnu.so)
</code></pre>
<p>Normally I'd just run <code>apt-get install glibc</code> or something on a startup script. However, <a href="https://learn.microsoft.com/en-us/azure/databricks/machine-learning/model-serving/model-serving-limits" rel="nofollow noreferrer">Databricks documentation</a> says <code>Init scripts are not supported.</code></p>
<h3>Question</h3>
<p>So, how can I get past this error and deploy an MMDet model on Databricks?
Previously I deployed many PyTorch Lightning models (that is, no OpenMMLab) and <code>glibc</code> was not an issue for Databricks Serving.</p>
<h3>Solution sketches</h3>
<p>I see two possible paths towards potential solution.</p>
<ul>
<li>Maybe there's a magic trick allowing installing missing libraries on the serving endpoint?</li>
<li>Alternatively, maybe we can get rid of the root cause of the problem, which is <code>mmcv</code>, and this error sounds a bit like <code>opencv</code> issue to me? The model I'm trying to use is CoDETR <a href="https://github.com/open-mmlab/mmdetection/blob/main/projects/CO-DETR/configs/codino/co_dino_5scale_r50_lsj_8xb2_1x_coco.py" rel="nofollow noreferrer">with a config based on this one</a>.</li>
</ul>
<p>Both sound equally unlikely to me. Maybe there's another way?</p>
<h3>Excerpt of key packages used:</h3>
<pre><code>pip install --upgrade pip
pip install uv
uv pip install torch==2.1.0 torchvision==0.16.0 numpy==1.26.4 openmim "mmengine==0.10.5"
uv pip install "mmcv==2.1.0" -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1.0/index.html
uv pip install albumentations==1.4.18 pycocotools==2.0.7 mlflow python-snappy==0.7.3
uv pip install lightning==2.2.2 mmdet==3.3.0
</code></pre>
<h3>Full stack trace:</h3>
<pre><code>[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflowserving/scoring_server/__init__.py", line 130, in _load_model_closure
[mlrpj] [2024-12-30 09:50:06 +0000] model = load_model_fn(path)
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/tracing/provider.py", line 309, in wrapper
[mlrpj] [2024-12-30 09:50:06 +0000] is_func_called, result = True, f(*args, **kwargs)
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/pyfunc/__init__.py", line 1067, in load_model
[mlrpj] [2024-12-30 09:50:06 +0000] model_impl = importlib.import_module(conf[MAIN])._load_pyfunc(data_path)
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/pyfunc/model.py", line 561, in _load_pyfunc
[mlrpj] [2024-12-30 09:50:06 +0000] context, python_model, signature = _load_context_model_and_signature(model_path, model_config)
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/pyfunc/model.py", line 555, in _load_context_model_and_signature
[mlrpj] [2024-12-30 09:50:06 +0000] python_model.load_context(context=context)
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/computer_vision/engine/mmdet/utils.py", line 44, in load_context
[mlrpj] [2024-12-30 09:50:06 +0000] from mmdet.apis import DetInferencer
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/apis/__init__.py", line 2, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .det_inferencer import DetInferencer
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/apis/det_inferencer.py", line 22, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from mmdet.evaluation import INSTANCE_OFFSET
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/evaluation/__init__.py", line 4, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .metrics import * # noqa: F401,F403
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/evaluation/metrics/__init__.py", line 5, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .coco_metric import CocoMetric
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 16, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from mmdet.datasets.api_wrappers import COCO, COCOeval, COCOevalMP
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/datasets/__init__.py", line 31, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .utils import get_loading_pipeline
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/datasets/utils.py", line 5, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from mmdet.datasets.transforms import LoadAnnotations, LoadPanopticAnnotations
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/datasets/transforms/__init__.py", line 6, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .formatting import (ImageToTensor, PackDetInputs, PackReIDInputs,
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/datasets/transforms/formatting.py", line 11, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from mmdet.structures.bbox import BaseBoxes
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/structures/bbox/__init__.py", line 2, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .base_boxes import BaseBoxes
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/structures/bbox/base_boxes.py", line 9, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from mmdet.structures.mask.structures import BitmapMasks, PolygonMasks
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/structures/mask/__init__.py", line 3, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .structures import (BaseInstanceMasks, BitmapMasks, PolygonMasks,
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmdet/structures/mask/structures.py", line 12, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from mmcv.ops.roi_align import roi_align
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmcv/ops/__init__.py", line 3, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] from .active_rotated_filter import active_rotated_filter
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmcv/ops/active_rotated_filter.py", line 10, in <module>
[mlrpj] [2024-12-30 09:50:06 +0000] ext_module = ext_loader.load_ext(
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmcv/utils/ext_loader.py", line 13, in load_ext
[mlrpj] [2024-12-30 09:50:06 +0000] ext = importlib.import_module('mmcv.' + name)
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/importlib/__init__.py", line 126, in import_module
[mlrpj] [2024-12-30 09:50:06 +0000] return _bootstrap._gcd_import(name[level:], package, level)
[mlrpj] [2024-12-30 09:50:06 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:06 +0000] ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmcv/_ext.cpython-311-x86_64-linux-gnu.so)
[mlrpj] [2024-12-30 09:50:06 +0000] [595] [INFO] Worker exiting (pid: 595)
[mlrpj] [2024-12-30 09:50:07 +0000] An error occurred while loading the model: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mmcv/_ext.cpython-311-x86_64-linux-gnu.so)
[mlrpj] [2024-12-30 09:50:07 +0000] Traceback (most recent call last):
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflowserving/scoring_server/__init__.py", line 212, in get_model_option_or_exit
[mlrpj] [2024-12-30 09:50:07 +0000] self.model = self.model_future.result()
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/concurrent/futures/_base.py", line 449, in result
[mlrpj] [2024-12-30 09:50:07 +0000] return self.__get_result()
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
[mlrpj] [2024-12-30 09:50:07 +0000] raise self._exception
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/concurrent/futures/thread.py", line 58, in run
[mlrpj] [2024-12-30 09:50:07 +0000] result = self.fn(*self.args, **self.kwargs)
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflowserving/scoring_server/__init__.py", line 130, in _load_model_closure
[mlrpj] [2024-12-30 09:50:07 +0000] model = load_model_fn(path)
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/tracing/provider.py", line 309, in wrapper
[mlrpj] [2024-12-30 09:50:07 +0000] is_func_called, result = True, f(*args, **kwargs)
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/pyfunc/__init__.py", line 1067, in load_model
[mlrpj] [2024-12-30 09:50:07 +0000] model_impl = importlib.import_module(conf[MAIN])._load_pyfunc(data_path)
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/pyfunc/model.py", line 561, in _load_pyfunc
[mlrpj] [2024-12-30 09:50:07 +0000] context, python_model, signature = _load_context_model_and_signature(model_path, model_config)
[mlrpj] [2024-12-30 09:50:07 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflow/pyfunc/model.py", line 555, in _load_context_model_and_signature
[mlrpj] [2024-12-30 09:50:07 +0000] python_model.load_context(context=context)
[mlrpj] [2024-12-30 09:50:07 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/computer_vision/engine/mmdet/utils.p
</code></pre>
<h1>Update 7 Jan</h1>
<p>Here's an update of my research so far.</p>
<ul>
<li>A possible but rocky path to a solution would be converting the model to ONNX with <code>mmdeploy</code>, but CoDETR is not yet supported by it as of now.</li>
<li>While Databricks on Azure does not permit using custom images with serving, it is interestingly possible to customise them on AWS.</li>
</ul>
|
<python><databricks><azure-databricks><openmmlab>
|
2024-12-30 23:23:35
| 1
| 1,302
|
Dominik Filipiak
|
79,318,707
| 5,109,125
|
SQLDatabaseChain result shows "Answer" value incorrectly?
|
<p>I am looking for help regarding an issue with my <code>SQLDatabasechain invoke()</code> results.</p>
<p>Here is my langchain code - I created a <code>db_chain</code> with my <code>llm</code> (model) and <code>db</code> (MySQL database) using the <code>SQLDatabaseChain.from_llm</code> method - then I executed <code>db_chain.invoke()</code> passing my database query as a parameter. The query is written in human-readable format instead of an SQL statement, because the llm can take care of it. I passed the <code>verbose=True</code> parameter when creating the chain so it will produce the intermediate log results that I can validate.</p>
<pre><code>from langchain_experimental.sql import SQLDatabaseChain
# pass the verbose=True to check the internals
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) # , return_direct = True)
qns1 = db_chain.invoke("How many t-shirts do we have left for nike in extra small size and red color?")
</code></pre>
<p>Below is the <code>verbose</code> result of the <code>db_chain.invoke()</code>:
<a href="https://i.sstatic.net/YFrtVJLx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFrtVJLx.png" alt="enter image description here" /></a></p>
<p>Everything looks OK - the SQLQuery shows 59, which is the expected count from the query. However, I am expecting the "Answer" statement to show <code>59</code>, but instead it shows the <code>Question:</code> label followed by the generated SQL query.</p>
<p>What am I missing???</p>
|
<python><langchain>
|
2024-12-30 21:52:55
| 0
| 597
|
punsoca
|
79,318,679
| 3,965,828
|
Get all tuple permutations between two lists
|
<p>Given two lists of length n, I've been trying to find a pythonic way to return a list of lists of n tuples, where each list of tuples is a distinct permutation of the possible pairings between the two lists. So given:</p>
<pre><code>a = [1, 2]
b = [3, 4]
</code></pre>
<p>I'd expect an output of:</p>
<pre><code>[ [(1, 3), (2, 4)], [(1, 4), (2, 3)] ]
</code></pre>
<p>I looked at questions like <a href="https://stackoverflow.com/questions/1953194/permutations-of-two-lists-in-python">permutations of two lists in python</a>, but that's not quite what I'm after. I looked at the <code>itertools</code> library and nothing immediately popped out at me.</p>
<p>What's a good pythonic way to solve this?</p>
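<p>(For reference, a minimal sketch: fixing the order of <code>a</code> and pairing it with every permutation of <code>b</code> yields exactly the distinct matchings shown above:)</p>
<pre><code>from itertools import permutations

a = [1, 2]
b = [3, 4]

result = [list(zip(a, p)) for p in permutations(b)]
print(result)  # [[(1, 3), (2, 4)], [(1, 4), (2, 3)]]
</code></pre>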
|
<python><list><permutation>
|
2024-12-30 21:40:36
| 1
| 2,631
|
Jeffrey Van Laethem
|
79,318,595
| 10,140,821
|
extract file name and sub subdirectory names as variables from absolute path of file in python
|
<p>I have a file in <code>python</code>. The path to the file is <code>/home/user/pythonfiles/test_files/test1.py</code>.</p>
<p>I want to get the file name and the last subdirectory of the file's path as variables.</p>
<p>I have done like below</p>
<pre><code># find file name
sess_name = os.path.basename(__file__).split('.')[0]
print("file name is " + sess_name)
</code></pre>
<p>In the same way, I want to capture the last subdirectory name. How do I do that?</p>
<p>Expected output is below</p>
<pre><code> test_files # this is the last subdirectory I want to capture.
</code></pre>
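<p>(For reference, a minimal sketch: the directory that directly contains the file is the path's parent, and its basename is the last subdirectory name:)</p>
<pre><code>import os
from pathlib import Path

path = "/home/user/pythonfiles/test_files/test1.py"

print(os.path.basename(os.path.dirname(path)))  # test_files
print(Path(path).parent.name)                   # test_files
</code></pre>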
|
<python>
|
2024-12-30 20:51:21
| 0
| 763
|
nmr
|
79,318,540
| 13,971,251
|
Django model foreign key to whichever model calls it
|
<p>I am getting back into Django after a few years, and am running into the following problem. I am making a system where there are 2 models; a survey, and an update. I want to make a notification model that would automatically have an object added when I add a survey object or update object, and the notification object would have a foreign key to the model object which caused it to be added.</p>
<p>However, I am running into a brick wall figuring out how to do this: a model with a foreign key that can point to one of two models and that would be automatically set to the model object which created it. Any help with this would be appreciated.</p>
<p>I am trying to make a model that looks something like this (pseudocode):</p>
<pre><code>class notification(models.model):
source = models.ForeignKey(to model that created it) #this is what I need help with
start_date = models.DateTimeField(inherited from model that created it)
end_date = models.DateTimeField(inherited from model that created it)
</code></pre>
<p>Also, just to add some context to the question and in case I am looking at this from the wrong angle, I am wanting to do this because both surveys and updates will be displayed on the same page, so my plan is to query the notification model, and then have the view do something like this:</p>
<pre><code>from .models import notification
notifications = notification.objects.filter(start_date__lte=now, end_date__gte=now).order_by('-start_date')
for notification in notifications:
if notification.__class__.__name__ == "survey_question":
survey = notification.survey_question.all()
question = survey.question()
elif notification.__class__.__name__ == "update":
update = notification.update.all()
update = update.update()
</code></pre>
<p>I am also doing this instead of combining the 2 queries and then sorting them by date as I want to have notifications for each specific user anyways, so my plan is (down the road) to have a notification created for each user.</p>
<p>Here are my models (that I reference in the question):</p>
<pre><code>from django.db import models
from datetime import timedelta
from django.utils import timezone
def tmrw():
return timezone.now() + timedelta(days=1)
class update(models.Model):
update = models.TextField()
start_date = models.DateTimeField(default=timezone.now, null=True, blank=True)
end_date = models.DateTimeField(default=tmrw, null=True, blank=True)
class Meta:
verbose_name = 'Update'
verbose_name_plural = f'{verbose_name}s'
class survey_question(models.Model):
question = models.TextField()
start_date = models.DateTimeField(default=timezone.now, null=True, blank=True)
end_date = models.DateTimeField(default=tmrw, null=True, blank=True)
class Meta:
verbose_name = 'Survey'
verbose_name_plural = f'{verbose_name}s'
</code></pre>
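<p>(For reference, a sketch of the usual Django answer to "a foreign key that can point at either of two models": a <code>GenericForeignKey</code> from the contenttypes framework; field names here are illustrative:)</p>
<pre><code>from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class notification(models.Model):
    # stores which model (update or survey_question) plus which row created it
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    source = GenericForeignKey('content_type', 'object_id')
    start_date = models.DateTimeField()
    end_date = models.DateTimeField()
</code></pre>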
|
<python><django><django-models>
|
2024-12-30 20:19:52
| 1
| 1,181
|
Kovy Jacob
|
79,318,211
| 3,840,530
|
How to extract the (major/minor) ticks from a seaborn plot?
|
<p>I am trying to extract ticks from my Python plot drawn with seaborn.
I have two sets of code below which I thought would produce the same results. However, one extracts the ticks correctly and the other just returns zeros.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.gca()
t = np.arange(0.0, 100.0, 0.1)
s = np.sin(0.1 * np.pi * t) * np.exp(-t * 0.01)
ax.plot(t, s)
plt.show()
print([p.label.get_position()[0] for p in ax.xaxis.get_major_ticks()])
print([p.label.get_position()[0] for p in ax.xaxis.get_minor_ticks()])
print([p.label.get_position()[1] for p in ax.yaxis.get_major_ticks()])
print([p.label.get_position()[1] for p in ax.yaxis.get_minor_ticks()])
</code></pre>
<p>Output:</p>
<pre><code>[-20.0, 0.0, 20.0, 40.0, 60.0, 80.0, 100.0, 120.0]
[]
[-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0, 1.25]
[]
</code></pre>
<p>The other piece of code is with seaborn:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
fig = plt.figure()
ax = fig.gca()
t = np.arange(0.0, 100.0, 0.1)
s = np.sin(0.1 * np.pi * t) * np.exp(-t * 0.01)
data = pd.DataFrame()
data['x'] = t
data['y'] = s
display(data)
sns_plot = sns.scatterplot(x='x', y='y', data=data, ax=ax)
sns_plot.set_title("test plot")
print([p.label.get_position()[0] for p in ax.xaxis.get_major_ticks()])
print([p.label.get_position()[0] for p in ax.xaxis.get_minor_ticks()])
print([p.label.get_position()[1] for p in ax.yaxis.get_major_ticks()])
print([p.label.get_position()[1] for p in ax.yaxis.get_minor_ticks()])
</code></pre>
<p>Output:</p>
<pre><code>[0, 0, 0, 0, 0, 0, 0, 0]
[]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[]
</code></pre>
<p>Can someone point out how I can extract the tick labels from my seaborn plot?
Or point out what is wrong in the second block of code?</p>
<p>Thanks!</p>
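<p>(For reference, a sketch of a likely explanation: tick label positions are only filled in when the figure is actually drawn, which <code>plt.show()</code> did in the first snippet, so forcing a draw first, or reading the tick locations directly, should work; this continues from the second snippet above:)</p>
<pre><code>fig.canvas.draw()  # populate tick label positions without showing the figure
print([p.label.get_position()[0] for p in ax.xaxis.get_major_ticks()])

# or skip the tick artists entirely and read the locations
print(ax.get_xticks())
print(ax.get_yticks())
</code></pre>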
|
<python><matplotlib><label><seaborn>
|
2024-12-30 17:28:54
| 0
| 302
|
user3840530
|
79,317,933
| 5,507,055
|
How to use multipart/form-data with requests session?
|
<p>I want to send data with requests as form-data. I use a requests session. However, it still sends the data in application/x-www-form-urlencoded form. I started a test server with <code>nc -kdl 8000</code>.</p>
<pre><code>import requests
import http.client as http_client
http_client.HTTPConnection.debuglevel = 1
session = requests.Session()
headers = {
"Content-Type": "multipart/form-data",
}
data = {
"field1": "value1",
"field2": "value2",
}
response = session.post(
"http://localhost:8000",
data=data,
headers=headers,
)
</code></pre>
<p>The output</p>
<pre><code>Host: localhost:8000
User-Agent: python-requests/2.32.3
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
Content-Type: multipart/form-data
Content-Length: 27
field1=value1&field2=value2
</code></pre>
<p>However, I want the following output</p>
<pre><code>POST / HTTP/1.1
Host: example.com
Content-Type: multipart/form-data;boundary="delimiter12345"
--delimiter12345
Content-Disposition: form-data; name="field1"
value1
--delimiter12345
Content-Disposition: form-data; name="field2"; filename="example.txt"
value2
--delimiter12345--
</code></pre>
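<p>(For reference, a minimal sketch: <code>requests</code> only builds a multipart body, including the boundary in the Content-Type header, when the payload is passed via <code>files=</code>; setting the header by hand prevents the boundary from being added:)</p>
<pre><code>import requests

session = requests.Session()
files = {
    "field1": (None, "value1"),            # plain form field
    "field2": ("example.txt", "value2"),   # field sent as a file part
}
# no manual Content-Type header; requests generates it with the boundary
response = session.post("http://localhost:8000", files=files)
</code></pre>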
|
<python><session><python-requests><form-data>
|
2024-12-30 15:08:00
| 0
| 2,845
|
ikreb
|
79,317,927
| 521,070
|
How to find overlapping multi-dimensional ranges?
|
<p>Suppose I am writing function <code>find_overlapping_ranges</code> to find overlapping ranges in a list of multi-dimensional ranges. Two multi-dimensional ranges overlap if they overlap in <strong>all</strong> dimensions.</p>
<pre><code>from typing import List, Tuple
from dataclasses import dataclass
Interval = Tuple[float, float] # Single interval with (min, max) bounds
Range = List[Interval] # Multi-dimensional range
@dataclass
class Overlap:
range1_index: int # Index of the first overlapping range in the list
range2_index: int # Index of the second overlapping range
def find_overlapping_ranges(ranges: List[Range]) -> List[Overlap]: ...
</code></pre>
<p>The number of ranges is ~100K or more. Maybe it is worth noting that in most cases no overlap is expected.</p>
<p>The simplest approach is to check all pairs of ranges in the list, which results in <code>O(n^2)</code> complexity. Is there a more efficient algorithm to find overlapping multi-dimensional ranges?</p>
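<p>(For reference, a sketch of one common improvement, using the type aliases and <code>Overlap</code> dataclass defined above: sweep over the first dimension so each range is only compared against ranges whose first-dimension intervals can still overlap; the worst case is still <code>O(n^2)</code>, but when overlaps are rare the active set stays small:)</p>
<pre><code>def find_overlapping_ranges(ranges: List[Range]) -> List[Overlap]:
    order = sorted(range(len(ranges)), key=lambda i: ranges[i][0][0])
    overlaps = []
    active = []  # indices whose first-dimension interval may still overlap
    for i in order:
        lo_i = ranges[i][0][0]
        # drop ranges that end (in dimension 0) before this one starts
        active = [j for j in active if ranges[j][0][1] >= lo_i]
        for j in active:
            if all(a[0] <= b[1] and b[0] <= a[1]
                   for a, b in zip(ranges[i], ranges[j])):
                overlaps.append(Overlap(min(i, j), max(i, j)))
        active.append(i)
    return overlaps
</code></pre>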
|
<python><algorithm><range>
|
2024-12-30 15:04:49
| 1
| 42,246
|
Michael
|
79,317,861
| 2,936,329
|
Pyre-check error for @property: Undefined attribute: `<class>` has no attribute `<attribute>`
|
<p>I use <a href="https://pyre-check.org/" rel="nofollow noreferrer">pyre-check</a> for type checking my python code, with strict setting on. It works fine except for OOP properties.</p>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
def __init__(self, full_name: str):
self.full_name = full_name
@property
def full_name(self) -> str:
return self._full_name
@full_name.setter
def full_name(self, value: str) -> None:
self._full_name = value
# Usage
obj = MyClass("John Doe")
print(obj.full_name)
</code></pre>
<pre><code>Undefined attribute: `<class>` has no attribute `_full_name`.
</code></pre>
<p>I believe it's because <code>_full_name</code> is not defined in <code>__init__</code>. But if I use the following code, then the setter is not used.</p>
<pre><code> self._full_name = full_name
</code></pre>
<p>I want to expose <code>full_name</code> only via the getter and setter. I believe Pyre incorrectly says this is wrong, or otherwise how should I write my code?</p>
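<p>(For reference, a sketch of one pattern that usually satisfies strict checkers: declare the backing attribute on the class so the checker knows <code>_full_name</code> exists and is a <code>str</code>, while still routing assignment through the setter; whether Pyre accepts this exact form is an assumption:)</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
    _full_name: str  # declared for the type checker; set via the property below

    def __init__(self, full_name: str) -> None:
        self.full_name = full_name

    @property
    def full_name(self) -> str:
        return self._full_name

    @full_name.setter
    def full_name(self, value: str) -> None:
        self._full_name = value
</code></pre>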
|
<python><python-typing><pyre-check>
|
2024-12-30 14:27:07
| 0
| 468
|
Rien
|
79,317,756
| 13,848,874
|
I want %matplotlib notebook not %matplotllib widget or %matplotlib ipympl : Javascript Error: IPython is not defined
|
<p>I start with the usual imports and go to start an interactive plotting session:</p>
<pre><code># for viewing figures interactively
%matplotlib notebook
# To start plotting in matplotlib
import IPython
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
# Other data libraries
import numpy as np
import pandas as pd
# 3 dimensional plotting
from mpl_toolkits import mplot3d
# enable three-dimensional axes
fig = plt.figure()
ax = plt.axes(projection='3d')
</code></pre>
<p>this results in the error:</p>
<blockquote>
<p>Javascript Error: IPython is not defined</p>
</blockquote>
<p>Then I read around to test some other things. I installed IPython Widgets:</p>
<pre><code>pip install ipywidgets
jupyter nbextension enable --py widgetsnbextension
</code></pre>
<p>Now putting <code>%matplotlib widgets</code> instead of <code>%matplotlib notebook</code> at the beginning of the codes displays the interactive figure.</p>
<p>In another approach, I can go and install ipympl and enable that extension:</p>
<pre><code>pip install ipympl
jupyter nbextension enable --py --sys-prefix ipympl
</code></pre>
<p>After that, using <code>%matplotlib ipympl</code> instead of <code>%matplotlib notebook</code> on the first line shows the figure interactively.
<strong>But,</strong> the error for the notebook magic command still persists. The thing is, this gets around the error and makes the code work, but it does not solve the problem. I still have no clue why JavaScript cannot recognize IPython.</p>
<p>Any ideas for really solving the issue?</p>
|
<javascript><python><matplotlib><jupyter-notebook>
|
2024-12-30 13:39:46
| 1
| 473
|
Malihe Mahdavi sefat
|
79,317,564
| 10,364,071
|
Will python multiprocessing deep copy parameters using spawn method?
|
<p>Assume I am using <code>multiprocessing.set_start_method('spawn')</code>. When creating a process using <code>multiprocessing.Process(target=fun, args=(a,))</code>, I am wondering whether <code>a</code> will be deep copied. From previous questions <a href="https://stackoverflow.com/questions/5983159/python-multiprocessing-arguments-deep-copy">python multiprocessing arguments: deep copy?</a> , it seems that using <code>fork</code> start method will not copy parameters. But does it hold for other start methods such as <code>spawn</code> or <code>forkserver</code>?</p>
<p>My second question is: if a deep copy is performed, is there any approach to avoid such a copy without using the <code>fork</code> start method (since <code>fork</code> can only be used on Linux and is not thread safe)? This is particularly relevant when memory cannot accommodate copying large objects such as high-dimensional tensors.</p>
<p>I have Googled extensively but could not find an answer yet.</p>
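<p>(For reference on the second part, a sketch of passing a large array through <code>multiprocessing.shared_memory</code> (Python 3.8+), so that only the block's name and metadata are pickled to the spawned child rather than the data itself:)</p>
<pre><code>import numpy as np
from multiprocessing import Process, shared_memory

def worker(name, shape, dtype):
    shm = shared_memory.SharedMemory(name=name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)  # view, no copy
    print(arr.sum())
    shm.close()

if __name__ == "__main__":
    data = np.ones((1000, 1000))
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)[:] = data
    p = Process(target=worker, args=(shm.name, data.shape, data.dtype))
    p.start()
    p.join()
    shm.close()
    shm.unlink()
</code></pre>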
|
<python><multiprocessing>
|
2024-12-30 12:19:44
| 0
| 463
|
zbh2047
|
79,317,464
| 10,722,752
|
Getting AttributeError: partially initialized module 'numpy.core.arrayprint' has no attribute 'array2string' (most likely due to circular import) eror
|
<p>I tried installing pandarallel but couldn't due to some errors. Now when I try to simply import pandas or numpy, I get this error:</p>
<pre><code>import pandas as pd
import numpy as np
AttributeError: partially initialized module 'numpy.core.arrayprint' has no attribute 'array2string' (most likely due to circular import)
</code></pre>
<p>I am getting it if I try to import either pandas or numpy. I looked up other answers to similar <code>partially initialized module</code> errors and tried renaming my <code>numpy</code> file, but it's still not fixing the issue.
I tried upgrading <code>numpy</code> using <code>pip</code>; that didn't help either.</p>
<p>Can someone please help me with this error?</p>
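<p>(For reference, a quick check worth running: this error is often caused by a local file or folder named <code>numpy</code>/<code>pandas</code> shadowing the real package, or a half-broken install; printing where Python resolves the package from makes that visible:)</p>
<pre><code>import importlib.util

# shows the path Python would import numpy from, without fully importing it
print(importlib.util.find_spec("numpy").origin)
</code></pre>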
|
<python><pandas><numpy>
|
2024-12-30 11:25:55
| 1
| 11,560
|
Karthik S
|
79,317,324
| 6,597,296
|
Handle an option only if another option is used
|
<p>I need to process command-line options from my Python script that correspond to the syntax</p>
<pre class="lang-bash prettyprint-override"><code>tesy.py [-h] [-a [-b]]
</code></pre>
<p>(This is just a simplistic example illustrating the problem; in reality my script has lots of other options.)</p>
<p>That is, the following syntaxes are correct</p>
<pre class="lang-bash prettyprint-override"><code>test.py
test.py -h
test.py -a
test.py -a -b
</code></pre>
<p>but the following one is <em>not</em></p>
<pre class="lang-bash prettyprint-override"><code>test.py -b
</code></pre>
<p>because the option <code>-a</code> is not specified.</p>
<p>Can this be done with <code>argparse</code> in an elegant way?</p>
<p>The obvious way doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>from argparse import ArgumentParser
parser = ArgumentParser(description='Test script')
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
args = parser.parse_args()
</code></pre>
<p>because it would accept the invalid syntax <code>test.py -b</code>. Now, I could check manually if the option <code>-a</code> is present when the option <code>-b</code> is given and issue an error otherwise:</p>
<pre class="lang-py prettyprint-override"><code>from argparse import ArgumentParser
parser = ArgumentParser(description='Test script')
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
args = parser.parse_args()
if args.b and not args.a:
parser.error('Option -b requires option -a.')
</code></pre>
<p>but that's somewhat inelegant. Is there a way for <code>argparse</code> to handle the syntax itself?</p>
<p>Or I could use some trickery (thanks, ChatGPT) like this:</p>
<pre class="lang-py prettyprint-override"><code>from argparse import ArgumentParser
parser = ArgumentParser(description='Test script')
parser.add_argument('-a', action='store_true')
args, unknown = parser.parse_known_args()
if args.a:
parser.add_argument('-b', action='store_true')
args = parser.parse_args()
else:
args = parser.parse_args(unknown)
</code></pre>
<p>and this works - but the help text (<code>test.py -h</code>) does not mention the option <code>-b</code> at all. In theory, I could override the help text by using the <code>usage=</code> argument of <code>ArgumentParser</code> but this is error-prone - my script is much more complex than this simplistic example, has lots of options, is still under development, and if I have to maintain my own usage text, I might forget to update it when I add, remove, or modify an option.</p>
<p>I can't use a sub-parser because those work only with positional arguments - not with options.</p>
<p>Maybe what I want can be done with <code>docopt</code> but my script's command-line processing is already built around <code>argparse</code> and I'd rather not have to rewrite that.</p>
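<p>(For illustration only: my understanding is that with <code>docopt</code> the dependency would live directly in the usage string, roughly like the sketch below, but I'd prefer to stay with <code>argparse</code>.)</p>
<pre class="lang-py prettyprint-override"><code>"""Usage: test.py [-h] [-a [-b]]"""
from docopt import docopt

# docopt builds the parser from the usage string above,
# so "test.py -b" on its own is rejected as invalid usage.
args = docopt(__doc__)
print(args)
</code></pre>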
<p>So, is there an elegant way of doing what I want with <code>argparse</code>? Or should I just swallow my pride and use one of the clunky ones?</p>
|
<python><argparse>
|
2024-12-30 10:25:20
| 1
| 578
|
bontchev
|
79,317,167
| 4,505,998
|
DataFrame with all NaT should be timedelta and not datetime
|
<p>I have a DataFrame with a column <code>min_latency</code>, which represents the minimum latency achieved by a predictor. If the predictor failed, there's no value, and therefore it returns <code>min_latency=pd.NaT</code>.</p>
<p>The dataframe is created from a list of dicts, and if (and only if) all the rows have a <code>pd.NaT</code> value, the resulting column gets a <code>datetime64[ns]</code> dtype. It then seems impossible to convert it to <code>timedelta</code>.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([{'id': i, 'min_latency': pd.NaT} for i in range(10)])
print(df['min_latency'].dtype) # datetime64[ns]
df['min_latency'].astype('timedelta64[ns]') # TypeError: Cannot cast DatetimeArray to dtype timedelta64[ns]
</code></pre>
<p>This problem doesn't happen if there's some timedelta in there:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([{'id': i, 'min_latency': pd.NaT} for i in range(10)] + [{'id': -1, 'min_latency': dt.timedelta(seconds=3)}])
print(df['min_latency'].dtype) # timedelta64[ns]
</code></pre>
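<p>For clarity, the behaviour I'm after is what I would get by forcing the dtype explicitly, e.g. this sketch (assuming an explicit dtype at construction time is acceptable):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({'id': range(10)})
# an all-NaT column that is timedelta64[ns] rather than datetime64[ns]
df['min_latency'] = pd.Series(pd.NaT, index=df.index, dtype='timedelta64[ns]')
print(df['min_latency'].dtype)  # timedelta64[ns]
</code></pre>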
|
<python><pandas><dataframe>
|
2024-12-30 09:09:38
| 1
| 813
|
David Davó
|
79,317,125
| 4,444,757
|
How to connect python to coinex api v2?
|
<p>I saw the links below; however, all of them use the <code>v1</code> API and I'd like to use the <code>v2</code> API.<br />
<a href="https://stackoverflow.com/questions/69767586/connecting-to-coinex-api-issue">Connecting to Coinex API issue</a><br />
<a href="https://stackoverflow.com/questions/68060457/how-to-connect-coinex-exchange-with-api">how to connect Coinex exchange with API?</a><br />
<a href="https://stackoverflow.com/questions/67967972/how-to-trade-with-coinex-api">How To Trade With CoinEx API?</a></p>
<p>So I followed the <a href="https://docs.coinex.com/api/v2/authorization" rel="nofollow noreferrer">v2 API documentation</a> with the code below in Python 3:</p>
<pre><code>import time
import requests
import hmac
import hashlib
baseUrl = 'https://api.coinex.com'
requestPath = '/v2/account/info'
timestamp = int(time.time()*1000)
preparedString = "GET"+requestPath+str(timestamp)
accessId = '???'
secretKey = '???'
signed_str = hmac.new(bytes(secretKey, 'latin-1'), bytes(preparedString, 'latin-1'), hashlib.sha256).hexdigest().lower()
params = {'X-COINEX-KEY':accessId,
'X-COINEX-SIGN':signed_str,
'X-COINEX-TIMESTAMP':timestamp}
response = requests.get(baseUrl+requestPath,
params=params
)
response.json()
</code></pre>
<p>However, I get this error:</p>
<blockquote>
<p>{'code': 11003, 'data': {}, 'message': 'Access ID does not exist'}</p>
</blockquote>
<p>The error code <code>11003</code> doesn't exist in <a href="https://docs.coinex.com/api/v2/error" rel="nofollow noreferrer">coinex error handling</a>.</p>
<p>Why does it say <code>Access ID does not exist</code>?</p>
<p>I am sure the <code>access_id</code> and <code>secret_key</code> are correct and I can connect to Coinex and get the public data, such as the market info (e.g. <code>/futures/ticker?market=BTCUSDT</code>) without any problems.</p>
|
<python><python-requests>
|
2024-12-30 08:40:55
| 1
| 1,290
|
Sadabadi
|
79,317,101
| 15,154,700
|
How to uv init without hello.py
|
<p>After using <code>uv init</code> in a project directory, <code>uv</code> creates these files:</p>
<pre><code>.git
README.md
pyproject.toml
hello.py
.python-version
.gitignore
</code></pre>
<p>I don't want it to generate <code>hello.py</code>. I want the other files, but <code>hello.py</code> is useless to me, and I delete it every time I initialize a new environment.<br />
I read the uv help output and searched, but couldn't find an answer.</p>
<p>How can I prevent this command from generating <code>hello.py</code>?</p>
|
<python><package-managers><uv>
|
2024-12-30 08:24:11
| 1
| 545
|
Sadegh Pouriyan Zadeh
|
79,317,040
| 12,870,651
|
SQLAlchemy Teradata - Unable to use the df.to_sql functionality
|
<p>I am attempting to use SQLAlchemy to load data from a pandas dataframe into a Teradata database table using the <code>pandas.to_sql</code> method.</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy as sa
username = "my_username"
password = "my_password"
hostname = "hostname"
database = "database"
table_name = "database.table"
teradata_engine = sa.create_engine(f"teradatasql://{hostname}/?user={username}&password={password}")
df.to_sql(name=table_name, con=teradata_engine, if_exists='append', index=False)
</code></pre>
<p>When attempting to use the <code>pandas.to_sql</code> method, I get the error below.</p>
<pre class="lang-py prettyprint-override"><code>sqlalchemy.exc.OperationalError: (teradatasql.OperationalError) [Version 20.0.0.20] [Session 23332535] [Teradata Database] [Error 3524] The user does not have CREATE TABLE access to database my_username.
</code></pre>
<p>The only suggestion I could find related to this was from <a href="https://stackoverflow.com/a/71844148">this post</a>, suggesting:</p>
<blockquote>
<p>You can work on this by either using another user or defining the "--TargetWorkingDatabase" parameter.</p>
</blockquote>
<p>Since I am using Python, I understand this to be the <a href="https://pypi.org/project/teradatasqlalchemy/#ConnectionParameters" rel="nofollow noreferrer">database parameter</a>, which I added:</p>
<pre class="lang-py prettyprint-override"><code>teradata_engine = sa.create_engine(f"teradatasql://{hostname}/?user={username}&password={password}?database={database}")
</code></pre>
<p>But after doing this I am getting a username and password error, which goes away once I remove the database parameter.</p>
<pre class="lang-py prettyprint-override"><code>sqlalchemy.exc.OperationalError: (teradatasql.OperationalError) [Version 20.0.0.20] [Session 23330616] [Teradata Database] [Error 8017] The UserId, Password or Account is invalid.
</code></pre>
<p>I am wondering what else to try. Any suggestions would be really appreciated.</p>
|
<python><pandas><sqlalchemy><teradata><teradatasql>
|
2024-12-30 07:38:07
| 0
| 439
|
excelman
|
79,316,861
| 14,250,641
|
Huggingface trainer is not showing any progress for finetuning
|
<p>I have a dataset I want to fine-tune a huggingface LLM with.
This dataset is quite simple. It has two columns: one column has DNA sequences (each in the form of a string 5000 letters long). Another column has a binary label. My dataset is only 240 rows long.</p>
<p>For some reason, the <code>trainer.train()</code> step is not making any progress.
I have access to 2 A100 GPUs, so I don't think it's a computational resource issue.</p>
<p>Here I'm loading in the data and tokenizing the sequences.</p>
<pre><code>from datasets import Dataset
from transformers import AutoTokenizer

dataset = Dataset.from_pandas(df[['sequence', 'label']])
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", trust_remote_code=True)
def tokenize_function(examples):
outputs = tokenizer.batch_encode_plus(examples["sequence"], return_tensors="pt", truncation="longest_first", padding='max_length',
max_length=836)
return outputs
# Creating tokenized dataset
tokenized_dataset = dataset.map(
tokenize_function,
batched=True, batch_size=2048)
</code></pre>
<p>Here I'm reducing the parameters of the model:</p>
<pre><code>from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model
import torch.nn as nn

model = AutoModelForSequenceClassification.from_pretrained("InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", num_labels=2, trust_remote_code=True)
peft_config = LoraConfig(
task_type=TaskType.SEQ_CLS, inference_mode=False, r=1, lora_alpha= 32, lora_dropout=0.1, target_modules= ["query", "value"])
lora_classifier = get_peft_model(model, peft_config) # transform our classifier into a peft model
lora_classifier.print_trainable_parameters()
lora_classifier = nn.DataParallel(lora_classifier, device_ids=[0, 1])
lora_classifier = lora_classifier.to("cuda:0")
</code></pre>
<p>Here I'm prepping for training:</p>
<pre><code>import numpy as np
from transformers import TrainingArguments

args_ = TrainingArguments(
"finetuned_NT",
remove_unused_columns=False,
evaluation_strategy="steps",
save_strategy="steps",
learning_rate=5e-4,
per_device_train_batch_size=32,
gradient_accumulation_steps= 1,
per_device_eval_batch_size= 32,
eval_steps=10,
logging_steps= 10,
load_best_model_at_end=True, # Keep the best model according to the evaluation
metric_for_best_model="ROC-AUC", # The mcc_score on the evaluation dataset used to select the best model
label_names=["label"],
dataloader_drop_last=True,
max_steps= 10
)
def compute_metrics(eval_pred):
# get predictions
predictions, labels = eval_pred
# apply softmax to get probabilities
probabilities = np.exp(predictions) / np.exp(predictions).sum(-1,
keepdims=True)
# use probabilities of the positive class for ROC AUC
positive_class_probs = probabilities[:, 1]
# compute auc
auc = np.round(auc_score.compute(prediction_scores=positive_class_probs,
references=labels)['roc_auc'],3)
# predict most probable class
predicted_classes = np.argmax(predictions, axis=1)
# compute accuracy
acc = np.round(accuracy.compute(predictions=predicted_classes,
references=labels)['accuracy'],3)
return {"Accuracy": acc, "ROC-AUC": auc}
</code></pre>
<p>Splitting data:</p>
<pre><code>from datasets import DatasetDict
from transformers import Trainer

train_test_split = tokenized_dataset.select_columns(['label', 'input_ids', 'attention_mask']).train_test_split(test_size=0.2, seed=42)
# Now split the test set into test (50%) and validation (50%)
test_val_split = train_test_split['test'].train_test_split(test_size=0.5, seed=42)
# Combine the splits into a DatasetDict
final_split = DatasetDict({
'train': train_test_split['train'],
'test': test_val_split['train'],
'validation': test_val_split['test']
})
trainer = Trainer(
model=lora_classifier, # Assuming `lora_classifier` is your model
args=args_, # Make sure `args_` contains the correct training arguments
train_dataset=final_split['train'], # Train dataset from the split
eval_dataset=final_split['validation'], # Validation dataset from the split
tokenizer=tokenizer, # Tokenizer used for preprocessing
compute_metrics=compute_metrics # Metric computation function, assuming `compute_metrics_mcc` is defined
)
</code></pre>
<p>When I run <code>trainer.train()</code>, no progress bar comes up and no logs appear (I tried waiting for hours). It just stays like this with no change:</p>
<pre><code>/sc/arion/work/test-env/envs/test/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
***** Running training *****
Num examples = 192
Num Epochs = 4
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 64
Gradient Accumulation steps = 1
Total optimization steps = 10
Number of trainable parameters = 1170434
</code></pre>
|
<python><huggingface-transformers><huggingface-tokenizers><fine-tuning><huggingface-trainer>
|
2024-12-30 05:03:33
| 1
| 514
|
youtube
|
79,316,741
| 12,104,604
|
When using pygrabber and bleak together, the error "Thread is configured for Windows GUI but callbacks are not working." occurs
|
<p>In the following Python code, whether or not the line <code>from pygrabber.dshow_graph import FilterGraph</code> is included determines whether an error occurs. If I do not include it, no error appears, but if I include it, the error <code>"device search error: Thread is configured for Windows GUI but callbacks are not working."</code> occurs. It seems that pygrabber and bleak are conflicting and causing the error. I need both libraries. What should I do?</p>
<pre><code># -*- coding: utf-8 -*-
import multiprocessing
import asyncio
from bleak import BleakScanner
#This is the cause of the error.
from pygrabber.dshow_graph import FilterGraph
async def bluetooth_receiver_start():
print("bluetooth_receiver_start")
try:
devices = await BleakScanner.discover()
print("scnanned devices:")
for device in devices:
print(f" - {device.name} ({device.address})")
except Exception as e:
print(f"device search error: {e}")
return
def bluetooth_process():
print("bluetooth_process started")
asyncio.run(bluetooth_receiver_start())
print("bluetooth_process finished")
if __name__ == '__main__':
process_bluetooth = multiprocessing.Process(target=bluetooth_process)
process_bluetooth.start()
try:
devices = FilterGraph().get_input_devices()
for device_index, device_name in enumerate(devices):
print(device_name)
except:
pass
</code></pre>
|
<python><python-bleak>
|
2024-12-30 03:15:37
| 1
| 683
|
taichi
|
79,316,657
| 7,295,169
|
How to generate a matrix function in sympy?
|
<p>I defined <code>f(At) = e^(At)</code> where <code>A = [[0, 1], [-1, 0]]</code></p>
<pre class="lang-py prettyprint-override"><code>from sympy import *
from sympy.abc import x,y
t = symbols("t")
A = Matrix([
[0, 1],
[-1, 0]
])*t
A1 = A.exp()
</code></pre>
<p><code>A1</code> outputs <code>Matrix([[exp(I*t)/2 + exp(-I*t)/2, -I*exp(I*t)/2 + I*exp(-I*t)/2], [I*exp(I*t)/2 - I*exp(-I*t)/2, exp(I*t)/2 + exp(-I*t)/2]])</code> when I want the output to be <code>[ [cos(t), sin(t)], [-sin(t), cos(t)] ]</code>. How can I get that result?</p>
|
<python><sympy>
|
2024-12-30 02:01:48
| 1
| 1,193
|
jett chen
|
79,316,633
| 10,832,189
|
How to write the JavaScript Discover Readers in Stripe?
|
<p>I'm developing a POS system and I would like to connect a physical Stripe reader, not a simulated one.
Here is my JavaScript code for initializing the terminal, discovering readers, and connecting to a reader.</p>
<pre><code>// Initialize Stripe Terminal
const terminal = StripeTerminal.create({
onFetchConnectionToken: async () => {
try {
const response = await fetch("http://127.0.0.1:8000/create_connection_token/", { method: 'POST' });
if (!response.ok) {
throw new Error("Failed to fetch connection token");
}
const { secret } = await response.json();
return secret;
} catch (error) {
console.error("Error fetching connection token:", error);
throw error;
}
},
onUnexpectedReaderDisconnect: () => {
console.error("Reader unexpectedly disconnected.");
alert("Le terminal s'est déconnecté de manière inattendue. Veuillez vérifier la connexion et réessayer.");
},
});
console.log("Stripe Terminal initialized.");
// Discover readers
async function discoverReaders() {
try {
console.log("Discovering readers...");
const config = {
simulated: false,
location: "LOCATION_ID"
};
const discoverResult = await terminal.discoverReaders(config);
console.log("Discover Result:", discoverResult);
if (discoverResult.error) {
console.error('Error discovering readers:', discoverResult.error.message);
            alert('Erreur lors de la découverte des lecteurs. Vérifiez votre configuration réseau.');
return null;
}
if (discoverResult.discoveredReaders.length === 0) {
console.warn("No available readers. Ensure the terminal is powered on and connected.");
alert("Aucun terminal trouvΓ©. VΓ©rifiez la connectivitΓ© et la configuration rΓ©seau.");
return null;
}
console.log("Discovered readers:", discoverResult.discoveredReaders);
alert("Lecteurs découverts avec succès.");
return discoverResult.discoveredReaders[0];
} catch (error) {
console.error("Error during reader discovery:", error);
alert("Une erreur inattendue s'est produite lors de la dΓ©couverte des lecteurs.");
return null;
}
}
// Connect to a reader
async function connectReader(reader) {
try {
console.log("Attempting to connect to reader:", reader.label);
// Connect to the selected reader
const connectResult = await terminal.connectReader(reader);
// Handle connection errors
if (connectResult.error) {
console.error("Failed to connect:", connectResult.error.message);
            alert(`Connexion échouée : ${connectResult.error.message}`);
return false;
}
console.log("Connected to reader:", connectResult.reader.label);
        alert(`Connecté au lecteur : ${connectResult.reader.label}`);
return true;
} catch (error) {
console.error("Error during reader connection:", error);
alert("Une erreur inattendue s'est produite lors de la connexion au terminal. Consultez la console pour plus de dΓ©tails.");
return false;
}
}
// Example usage: Discover and connect to a reader
async function handleReaderSetup() {
const reader = await discoverReaders();
if (reader) {
await connectReader(reader);
}
}
</code></pre>
<p>I was able to connect the reader using the Stripe API, and from my Stripe dashboard I can see whether the reader is online.
However, when I run the application and try to send the amount to the reader for payment, it shows <em><strong>Aucun terminal trouvé. Vérifiez la connectivité et la configuration réseau.</strong></em>, which in English means <em><strong>No terminal found. Check connectivity and network configuration.</strong></em> The confusing thing is that when I run this:</p>
<pre><code>import stripe
stripe.api_key = "sk_live_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
try:
# List all readers
readers = stripe.terminal.Reader.list()
print("Readers:", readers)
except stripe.error.StripeError as e:
print("Stripe error:", e)
except Exception as e:
print("An unexpected error occurred:", e)
</code></pre>
<p>I'm getting this output:</p>
<pre><code>Readers: {
"data": [ { "action": null, "device_sw_version": "2.27.7.0", "device_type": "bbpos_wisepos_e",
"id": "tmr_XXXXXXXXXXXXXX",
"ip_address": "x.0.0.xxx",
"label": "Testing_Reader",
"last_seen_at": 1735518518163,
"livemode": true,
"location": "tml_ZZZZZZZZZZZZ",
"metadata": {},
"object": "terminal.reader",
"serial_number": "YYYYYYYYYYYYY",
"status": "online"
}
],
"has_more": false,
"object": "list",
"url": "/v1/terminal/readers"
}
</code></pre>
<p>In addition, this command: <code>stripe terminal readers list</code>
shows this result:</p>
<pre><code>{
"object": "list",
"data": [
{
"id": "tmr_XXXXXXXXXXXXXX",
"object": "terminal.reader",
"action": null,
"device_sw_version": "2.27.7.0",
"device_type": "bbpos_wisepos_e",
"ip_address": "x.0.0.xxx",
"label": "Testing_Reader",
"last_seen_at": 1735517252951,
"livemode": true,
"location": "tml_ZZZZZZZZZZZZ",
"metadata": {},
"serial_number": "YYYYYYYYYYYYY",
"status": "online"
}
],
"has_more": false,
"url": "/v1/terminal/readers"
</code></pre>
<p>I really don't understand why clicking on this button gives me the error I mentioned above:</p>
<pre><code><button type="button" id="send-to-terminal" class="btn btn-primary" data-order-id="{{ order.id }}"> Envoyer au terminal</button>
</code></pre>
<p>For more details, I have this service as well:</p>
<pre><code>import stripe
import logging
from decimal import Decimal
from django.conf import settings
class PaymentService:
def __init__(self):
"""Initialize the PaymentService with the Stripe API key."""
stripe.api_key = settings.STRIPE_SECRET_KEY
self.logger = logging.getLogger(__name__)
def get_online_reader(self):
"""
Fetch the first online terminal reader from Stripe.
:return: Stripe Terminal Reader object.
:raises: ValueError if no online reader is found.
"""
try:
readers = stripe.terminal.Reader.list(status="online").data
if not readers:
self.logger.error("Aucun lecteur de terminal en ligne trouvΓ©.")
raise ValueError("Aucun lecteur de terminal en ligne trouvΓ©.")
return readers[0] # Return the first online reader
except stripe.error.StripeError as e:
self.logger.error(f"Erreur Stripe lors de la rΓ©cupΓ©ration des lecteurs: {str(e)}")
raise Exception(f"Erreur Stripe: {str(e)}")
def create_payment_intent(self, amount, currency="CAD", payment_method_types=None, capture_method="automatic"):
"""
Create a payment intent for a terminal transaction.
:param amount: Decimal, total amount to charge.
:param currency: str, currency code (default: "CAD").
:param payment_method_types: list, payment methods (default: ["card_present"]).
:param capture_method: str, capture method for the payment intent.
:return: Stripe PaymentIntent object.
"""
try:
if payment_method_types is None:
payment_method_types = ["card_present"]
payment_intent = stripe.PaymentIntent.create(
amount=int(round(amount, 2) * 100), # Convert to cents
currency=currency.lower(),
payment_method_types=payment_method_types,
capture_method=capture_method # Explicitly include this argument
)
self.logger.info(f"PaymentIntent created: {payment_intent['id']}")
return payment_intent
except stripe.error.StripeError as e:
self.logger.error(f"Stripe error while creating PaymentIntent: {str(e)}")
raise Exception(f"Stripe error: {str(e)}")
except Exception as e:
self.logger.error(f"Unexpected error while creating PaymentIntent: {str(e)}")
raise Exception(f"Unexpected error: {str(e)}")
def send_to_terminal(self, payment_intent_id):
"""
Send a payment intent to the online terminal reader for processing.
:param payment_intent_id: str, ID of the PaymentIntent.
:return: Stripe response from the terminal reader.
"""
try:
# Retrieve the Reader ID from settings
reader_id = settings.STRIPE_READER_ID # Ensure this is correctly set in your configuration
# Send the payment intent to the terminal
response = stripe.terminal.Reader.process_payment_intent(
reader_id, {"payment_intent": payment_intent_id}
)
self.logger.info(f"PaymentIntent {payment_intent_id} sent to reader {reader_id}.")
return response
except stripe.error.StripeError as e:
self.logger.error(f"Erreur Stripe lors de l'envoi au terminal: {str(e)}")
raise Exception(f"Erreur Stripe: {str(e)}")
except Exception as e:
self.logger.error(f"Unexpected error while sending to terminal: {str(e)}")
raise Exception(f"Unexpected error: {str(e)}")
</code></pre>
<p>and these views:</p>
<pre><code>@login_required
def send_to_terminal(request, order_id):
"""
Send the payment amount to the terminal.
"""
if request.method == "POST":
try:
# Validate amount
amount = Decimal(request.POST.get('amount', 0))
if amount <= 0:
return JsonResponse({'success': False, 'error': 'Montant non valide.'}, status=400)
# Create PaymentIntent
payment_intent = stripe.PaymentIntent.create(
amount=int(amount * 100),
currency="CAD",
payment_method_types=["card_present"]
)
# List online readers dynamically
readers = stripe.terminal.Reader.list(status="online").data
if not readers:
                return JsonResponse({'success': False, 'error': 'Aucun lecteur en ligne trouvé.'}, status=404)
# Use the first available reader
reader = readers[0]
# Send PaymentIntent to the terminal
response = stripe.terminal.Reader.process_payment_intent(
reader["id"], {"payment_intent": payment_intent["id"]}
)
# Handle the response
if response.get("status") == "succeeded":
return JsonResponse({
'success': True,
'payment_intent_id': payment_intent["id"],
'message': 'Paiement envoyé avec succès au terminal.'
})
else:
return JsonResponse({
'success': False,
'error': response.get("error", "Erreur inconnue du terminal.")
}, status=400)
except stripe.error.StripeError as e:
return JsonResponse({'success': False, 'error': f"Erreur Stripe : {str(e)}"}, status=500)
except Exception as e:
return JsonResponse({'success': False, 'error': f"Une erreur inattendue s'est produite: {str(e)}"}, status=500)
    return JsonResponse({'success': False, 'error': 'Méthode non autorisée.'}, status=405)
@csrf_exempt # Allow requests from the frontend if CSRF tokens are not included
def create_connection_token(request):
try:
# Create a connection token
connection_token = stripe.terminal.ConnectionToken.create()
return JsonResponse({"secret": connection_token.secret})
except stripe.error.StripeError as e:
# Handle Stripe API errors
return JsonResponse({"error": str(e)}, status=500)
except Exception as e:
# Handle other unexpected errors
return JsonResponse({"error": f"Unexpected error: {str(e)}"}, status=500)
</code></pre>
|
<javascript><python><stripe-payments>
|
2024-12-30 01:35:01
| 1
| 395
|
Mohamed Abdillah
|
79,316,554
| 11,505,680
|
matplotlib: share axes like in plt.subplots but using the mpl.Figure API
|
<p>I know how to make subplots with shared axes using the <code>pyplot</code> API:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
(fig, axes) = plt.subplots(3, 1, sharex=True)
</code></pre>
<p><a href="https://i.sstatic.net/lGAz3vA9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGAz3vA9.png" alt="Figure with 3 panes and one set of x-axis labels" /></a></p>
<p>But I can't replicate this effect using the <code>matplotlib.figure.Figure</code> API. I'm doing approximately the following. (Warning: I can't isolate the code because it's embedded in a whole Qt GUI, and if I take it out, I can't get the figure to display at all.)</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib.figure import Figure
n_axes = 3
fig = Figure()
axes = [fig.add_subplot(n_axes, 1, n+1)
for n in range(n_axes)]
for ax in axes[:-1]:
ax.sharex(axes[-1])
</code></pre>
<p>The <code>ax.sharex</code> command seems to have no effect.</p>
<p>For what it's worth, I switched to the <code>plt.subplots</code> method and everything seems to be working fine, but this smells like a bug or deficiency in <code>matplotlib</code>.</p>
|
<python><matplotlib>
|
2024-12-30 00:00:17
| 1
| 645
|
Ilya
|
79,316,346
| 8,229,029
|
How to include exception handling within a Python pool.starmap multiprocess
|
<p>I'm using the metpy library to do weather calculations. I'm using the multiprocessing library to run them in parallel, but I get rare exceptions, which completely stop the program. I am not able to provide a minimal, reproducible example because I can't replicate the problems with the metpy library functions and because there is a huge amount of code that runs before the problem occurs that I can't put here.</p>
<p>I want to know how to write multiprocessing code to tell the pool.starmap function to PASS if it encounters an error. The first step in my code produces an argument list, which then gets passed to the pool.starmap function, along with the metpy function (metpy.ccl, in this case). The argument list for metpy.ccl includes a list of pressure levels, air temperatures, and dew point values.</p>
<pre><code>ccl_pooled = pool.starmap(mpcalc.ccl, ccl_argument_list)
</code></pre>
<p>I tried to write a generalized function that would take the metpy function I pass to it and tell it to pass when it encounters an error.</p>
<pre><code> def run_ccl(p,t,td):
try:
result = mpcalc.ccl(p,t,td)
except IndexError:
pass
</code></pre>
<p>Is there a way for me to write the "run_ccl" function so I can check for errors in my original code line - something like this:</p>
<pre><code>ccl_pooled = pool.starmap(run_ccl, ccl_argument_list)
</code></pre>
<p>If not, what would be the best way to do this?</p>
<p><strong>EDIT:</strong> To clarify, these argument lists are thousands of data points long. I want to pass on the data point that causes the problem (and enter a nan in the result, "ccl_pooled", for that data point), and keep going.</p>
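<p>(To illustrate the "enter a nan" idea, a rough sketch of the wrapper behaviour I'm after, assuming returning a placeholder value from the worker is the right mechanism:)</p>
<pre><code>import numpy as np
import metpy.calc as mpcalc

def run_ccl(p, t, td):
    try:
        return mpcalc.ccl(p, t, td)
    except IndexError:
        # mark this data point as missing and keep going
        return np.nan
</code></pre>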
|
<python><error-handling><multiprocessing><metpy>
|
2024-12-29 21:06:23
| 1
| 1,214
|
user8229029
|
79,316,278
| 1,142,881
|
Is there a more elegant rewrite for this Python Enum value_of implementation?
|
<p>I would like to get a <code>value_of</code> implementation for the <code>StrEnum</code> (Python 3.9.x). For example:</p>
<pre><code>from enum import Enum
class StrEnum(str, Enum):
"""Enum with str values"""
pass
class BaseStrEnum(StrEnum):
"""Base Enum"""
@classmethod
def value_of(cls, value):
try:
return cls[value]
except KeyError:
try:
return cls(value)
except ValueError:
return None
</code></pre>
<p>and then can use it like this:</p>
<pre><code>class Fruits(BaseStrEnum):
BANANA = "Banana"
PEA = "Pea"
APPLE = "Apple"
print(Fruits.value_of('BANANA'))
print(Fruits.value_of('Banana'))
</code></pre>
<p>It's just that the nested try-except doesn't look great. Is there a better, more idiomatic rewrite?</p>
|
<python><python-3.9>
|
2024-12-29 20:18:14
| 2
| 14,469
|
SkyWalker
|
79,316,088
| 11,277,108
|
Unable to create schema for Postgres database using sqlalchemy
|
<p>I'm able to create a schema via the command line using <code>CREATE SCHEMA test_schema;</code>.</p>
<p>However, running the following code doesn't create a schema:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.schema import CreateSchema
def main():
conn_str = "postgresql+psycopg2://<myusername>:<mypassword>@localhost/belgarath_test"
engine = create_engine(conn_str, echo=True)
connection = engine.connect()
connection.execute(CreateSchema("test_schema"))
if __name__ == "__main__":
main()
</code></pre>
<p>Weirdly, SQLAlchemy does emit the correct SQL. Here's the full output:</p>
<pre><code>2024-12-29 17:54:22,128 INFO sqlalchemy.engine.Engine select pg_catalog.version()
2024-12-29 17:54:22,129 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-12-29 17:54:22,129 INFO sqlalchemy.engine.Engine select current_schema()
2024-12-29 17:54:22,129 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-12-29 17:54:22,130 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2024-12-29 17:54:22,130 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-12-29 17:54:22,130 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2024-12-29 17:54:22,130 INFO sqlalchemy.engine.Engine CREATE SCHEMA test_schema # <------------------
2024-12-29 17:54:22,130 INFO sqlalchemy.engine.Engine [no key 0.00008s] {}
</code></pre>
<p>Any ideas on why the schema isn't created?</p>
|
<python><postgresql><sqlalchemy><postgresql-14>
|
2024-12-29 18:06:59
| 1
| 1,121
|
Jossy
|
79,315,980
| 19,959,092
|
Problem Setting up a FAISS vector memory in Python with embeddings
|
<p>I'm trying to run an LLM locally and feed it the contents of a very large PDF. I have decided to try this via RAG. For this I wanted to create a vectorstore containing the content of the PDF. However, I have a problem when creating it that I cannot solve, because I am still quite new to this area.</p>
<p>The problem is that I use FAISS and don't know how to pass my values to <code>.from_embeddings</code>, and as a result I have already received several errors.</p>
<p>My code looks like this:</p>
<pre><code>import re
import PyPDF2
from nltk.tokenize import sent_tokenize # After downloading resources
from sentence_transformers import SentenceTransformer
from langchain_community.vectorstores import FAISS # Updated import
def extract_text_from_pdf(pdf_path):
"""Extracts text from a PDF file.
Args:
pdf_path (str): Path to the PDF file.
Returns:
str: Extracted text from the PDF.
"""
with open(pdf_path, 'rb') as pdf_file:
reader = PyPDF2.PdfReader(pdf_file)
text = ""
for page_num in range(len(reader.pages)):
page = reader.pages[page_num]
text += page.extract_text()
return text
if __name__ == "__main__":
pdf_path = "" # Replace with your actual path
text = extract_text_from_pdf(pdf_path)
print("Text extracted from PDF file successfully.")
# Preprocess text to remove special characters
text = re.sub(r'[^\x00-\x7F]+', '', text) # Remove non-ASCII characters
sentences = sent_tokenize(text)
print(sentences) # Print the extracted sentences
# Filter out empty sentences (optional)
sentences = [sentence for sentence in sentences if sentence.strip()]
model_name = 'all-MiniLM-L6-v2'
model = SentenceTransformer(model_name)
# Ensure model.encode(sentences) returns a list of NumPy arrays
embeddings = model.encode(sentences)
vectorstore = FAISS.from_embeddings(embeddings, sentences_list=sentences)#problem here
print("Vector store created successfully.")
# Example search query (replace with your actual question)
query = "Was sind die wichtigsten Worte?"
search_results = vectorstore.search(query)
print("Search results:")
for result in search_results:
print(result)
</code></pre>
<p>If I execute the code as it is there, then the following error occurs:</p>
<pre><code>Traceback (most recent call last):
  File "/Users/user/PycharmProjects/PythonProject/extract_pdf_text.py", line 53, in <module>
vectorstore = FAISS.from_embeddings(embeddings, sentences_list=sentences)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: FAISS.from_embeddings() missing 1 required positional argument: 'embedding'
</code></pre>
<p>However, if I now write <code>vectorstore = FAISS.from_embeddings(embedding=embeddings, sentences_list=sentences)</code>, then the <code>text_embeddings</code> parameter is missing.</p>
<p>How do I have to fill the parameters so that I can use this, or is there a better way to implement this?</p>
|
<python><langchain><faiss><vectorstore>
|
2024-12-29 16:56:09
| 0
| 428
|
Pantastix
|
79,315,942
| 2,446,071
|
Performant Arrays of Objects in Python
|
<p>I am trying to create a sparse data structure in Python but am having a difficult time putting together the right set of primitives to get the job done.</p>
<p>The data structure is a page table, not unlike the ones you find in modern computer architectures. The idea is that the data structure is sparse until it gets filled out. To implement this, I need a <em>performant</em> way to manage an array of arrays.</p>
<p>In my case, I want to map out which IP addresses are encountered during a given operation. Adding the ip address to the data structure looks like this:</p>
<pre><code> def __setitem__(self, key, value):
p = IpTable._parts(key)
map = self._map
for idx in p[0:3]:
if type(np.ndarray) != type(map[idx]):
map[idx] = np.zeros(256)
map = map[idx]
map[p[3]] = value
</code></pre>
<p>where <code>_parts()</code> takes e.g. <code>192.168.0.1</code> and produces <code>[192, 168, 0, 1]</code>. I thought maybe I could use NumPy here (or even SciPy's sparse arrays) but I think my conclusion is that those are primarily targeting matrix math operations; they did not like it when I set values to anything other than integers. In my case I need to set element values to be subsequent arrays.</p>
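<p>(For reference, <code>_parts</code> can be thought of as a small helper along these lines; this is an illustrative sketch rather than the exact implementation:)</p>
<pre><code>def _parts(key):
    # "192.168.0.1" -> [192, 168, 0, 1]
    return [int(octet) for octet in key.split('.')]

print(_parts("192.168.0.1"))  # [192, 168, 0, 1]
</code></pre>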
<p>One entry in the data structure might look like so:</p>
<pre><code>A[192] = B
B[168] = C
C[0] = D
D[1] = value
</code></pre>
<p>Am I missing something with NumPy/SciPy or are these not the right tools for the job? I was hoping to make this reasonably performant, otherwise I know I could just use objects directly like:</p>
<pre><code>A['192.168.0.1'] = value
</code></pre>
<p>But considering that likely uses hash tables internally, maybe it's not a terrible choice either.</p>
|
<python><numpy>
|
2024-12-29 16:29:38
| 1
| 4,918
|
sherrellbc
|
79,315,901
| 7,885,426
|
Clustering lines in bands
|
<p><strong>Little intro</strong></p>
<p>I have data (link at the bottom) with the score on the y-axis and the position on the x-axis, for different labels. Now I want to know if there is one label that is "significantly" different from the others and from the "background". I have been playing with this for the last few weeks but can't seem to figure it out (I used watershed, DBSCAN, LOF, and a couple more algorithms). I'm pretty sure there is a smart way to do this :).</p>
<p><em>Note</em>: this is just one of many kinds of plots and we can't always assume a <code>k</code>, as some have outliers and others don't.</p>
<p>Let's take a look at the plot to get an idea:
<a href="https://i.sstatic.net/9WkUwlKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9WkUwlKN.png" alt="enter image description here" /></a></p>
<p>Here we can see that this olive color deviates (top score point circled in red):</p>
<ul>
<li>Deviates from the <strong>background</strong> (most scores, except the first peak, are between 0.1-0.3)</li>
<li>Still uniquely rises above the others that stop at ~0.6, basically being a local outlier in the global outlier region.
Now we can set all kinds of cut-offs, like minimum score to be a global outlier, and minimum difference between the top score and second score for a different label. But this is all so subjective.</li>
</ul>
<p><strong>Using DBSCAN</strong></p>
<p>I thought of using DBSCAN, which does quite well:
<a href="https://i.sstatic.net/ip9mGKj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ip9mGKj8.png" alt="enter image description here" /></a>
but the data seems to have some clear horizontal clustering into "bands" of lines; however, I can't find a way to cluster such a pattern.</p>
<p><strong>Description of band clusters</strong></p>
<p>I thought it would be possible to cluster to something like the image below. I should note that since there are so many points the plots only show the top 200 points per label. So it's possible that x-y coordinates are not present at all positions. So perhaps we can't call them "lines" anymore. For the outlier I can then probably just check:</p>
<ul>
<li>is the "top" band not the same as the bottom band</li>
<li>does the top band only contain a single label
<a href="https://i.sstatic.net/M9k45ipB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M9k45ipB.png" alt="enter image description here" /></a></li>
</ul>
<p><strong>Data</strong></p>
<p>I put the data for the plot shown above on <a href="https://pastebin.com/wyrnPW7y" rel="nofollow noreferrer">pastebin</a>; part of it is here:</p>
<pre><code>28 1 0.16
17 1 0.14705882352941177
12 1 0.16
54 1 0.16666666666666666
2 1 0.18
8 1 0.11
42 1 0.14705882352941177
16 1 0.14705882352941177
44 1 0.19607843137254902
1 1 0.4
36 1 0.16
55 1 0.12745098039215685
50 1 0.12745098039215685
22 1 0.16666666666666666
46 1 0.1568627450980392
5 1 0.13
...
</code></pre>
<p>where the first column is the label (color), the second column the position (x-axis), and the last column the score (y-axis).</p>
<p>Thanks a lot, I'm really curious whether there are some cool ideas for this. I've been breaking my head over this for the last couple of weeks :)</p>
|
<python><cluster-analysis><outliers>
|
2024-12-29 15:55:39
| 1
| 1,840
|
CodeNoob
|
79,315,725
| 10,764,260
|
Context managers as attributes
|
<p>I am having a hard time writing clean code with context managers in Python without ending up in context-manager hell. Imagine something like:</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self):
self.b = B() # where B is a context manager
</code></pre>
<p>This leaks now, because I did not use a <code>with</code> statement. In the wild I have seen two solutions to this:</p>
<ol>
<li><p>Inject <code>B</code> in the class and have some outer <code>with</code> statement</p>
<pre><code>with B() as b:
a = A(b)
</code></pre>
</li>
<li><p>Make <code>A</code> a context manager as well that implements closing <code>B</code>:</p>
<pre><code>with A() as a:
...
# A will close B now in its __exit__ method
</code></pre>
</li>
</ol>
<p>I dislike both methods. In my example the usage of <code>B</code> inside of <code>A</code> is an implementation detail. It is not something I want to expose to the outer world. Thus, Option 1 is obviously not suitable, but even Option 2 exposes some details, mainly that <code>A</code> does some resource management. Additionally, having to manually implement the <code>__exit__</code> method seems to somewhat defeat the purpose of <code>B</code> being a context manager, if I cannot use the context manager syntax.</p>
<p>So I think I am sort of missing C++-style RAII here, which does not translate directly to Python. How can I avoid making basically all my classes context managers when there is a dependency chain?</p>
<p>To give a very concrete use case:</p>
<p>An ML agent (like ChatGPT) wants to execute code, so I might have a class <code>Agent</code> that can execute code. This is done with a Jupyter notebook, so my program needs a Jupyter notebook server running. Ideally I would like to encapsulate this within the <code>Agent</code> class: the <code>Agent</code> starts the server and stops the server. The server here is implemented as a context manager. The solution I would come up with is making the <code>Agent</code> a context manager too, which I dislike, because then every class using the <code>Agent</code> needs to become a context manager as well... so in the end my whole program is a context manager.</p>
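<p>A minimal sketch of the structure I'm describing (the <code>NotebookServer</code> class here is a hypothetical stand-in for the real server wrapper):</p>
<pre class="lang-py prettyprint-override"><code>class NotebookServer:
    """Hypothetical stand-in for the Jupyter server context manager."""
    def __enter__(self):
        print("start server")
        return self
    def __exit__(self, *exc):
        print("stop server")

class Agent:
    def __init__(self):
        self._server = NotebookServer()  # implementation detail that leaks

    # Option 2: Agent becomes a context manager only to forward to the server
    def __enter__(self):
        self._server.__enter__()
        return self
    def __exit__(self, *exc):
        return self._server.__exit__(*exc)

with Agent() as agent:  # ...and every user of Agent now needs a with-block too
    pass
</code></pre>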
|
<python><contextmanager>
|
2024-12-29 14:00:05
| 1
| 308
|
Leon0402
|
79,315,687
| 473,347
|
How to Web Scrape Web Page Where Server is Replying to Get Request With Sort-of Redirected Post Request
|
<p>I am attempting to write a Python script to perform some simple web scraping, but I am stumped by how to process (and understand) what the web server is passing me. When viewing the network traffic while submitting the GET request in a browser, I see the server provides a 200 response, and then the browser automatically sends a subsequent POST request to the same server, whose response contains the data I am hoping to scrape.</p>
<p>Here is the URL which is entered into a browser results in a final web page with the data to scrape: <a href="https://www.defensetravel.dod.mil/neorates/report/index.php?report=oha&locode=AS001&locode2=&rank=13&depend=YES&year=2025&month=01&day=01" rel="nofollow noreferrer">https://www.defensetravel.dod.mil/neorates/report/index.php?report=oha&locode=AS001&locode2=&rank=13&depend=YES&year=2025&month=01&day=01</a></p>
<p>Below is the traffic as shown in a browser plug-in.</p>
<p><a href="https://i.sstatic.net/59Ah2KHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/59Ah2KHO.png" alt="https://www.defensetravel.dod.mil/neorates/report/index.php?report=oha&locode=AS001&locode2=&rank=13&depend=YES&year=2025&month=01&day=01" /></a></p>
<p>Below are the details of the final post request the browser is sending:</p>
<p><a href="https://i.sstatic.net/J1Q8YX2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J1Q8YX2C.png" alt="post request info-1" /></a></p>
<p><a href="https://i.sstatic.net/MBxRC46p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBxRC46p.png" alt="post request info-2" /></a></p>
<p>When I submit a GET request, I see the returned content from the server says "You are being redirected ...":</p>
<pre><code>import requests
from requests import Request, Session
from bs4 import BeautifulSoup
session = requests.Session()
urlString = 'https://www.defensetravel.dod.mil/neorates/report/index.php?report=oha&locode=AS001&locode2=&rank=13&depend=YES&year=2025&month=01&day=01'
# Making a GET request
r1 = session.get(urlString, allow_redirects=True)
print(r1.history);
soup_r1 = BeautifulSoup(r1.content, 'html.parser')
print(soup_r1.prettify())
</code></pre>
<p>I have also tried to fabricate the POST request based on what I saw in the browser plugin, but the server complains with "The input you provided is not valid".</p>
<pre><code>r3_url = "https://www.defensetravel.dod.mil/neorates/report/index.php";
r3_data = {"day": "01",
"depend": "YES",
"locode": "AS001",
"locode2": "",
"month": "01",
"rank": "13",
"report": "oha",
"year": "2025"};
r3_header= {"initiator": "https://www.defensetravel.dod.mil",
"Origin": "https://www.defensetravel.dod.mil",
"Referer": "https://www.defensetravel.dod.mil/neorates/report/index.php?report=oha&locode=AS001&locode2=&rank=13&depend=YES&year=2025&month=01&day=01",
"Content-Type": "multipart/form-data; boundary=----WebKitFormBoundary34wfzRWc2GjpY1IW",
};
r3 = session.post(r3_url, data=r3_data, headers=r3_header)
soup4 = BeautifulSoup(r3.content, 'html.parser')
print(soup4.prettify())
</code></pre>
<p>(Not providing the countless other troubleshooting versions of my code as I attempted to understand what was going on).</p>
<p>I guess at the core I do not quite understand how the GET request (when submitted via a browser) results in both the GET and a POST request. And more importantly, how does one handle interacting with a web page/server that is doing this, in Python, to obtain the final POST request's returned content?</p>
|
<python><web-scraping>
|
2024-12-29 13:34:40
| 1
| 791
|
Mike
|
79,315,600
| 2,672,788
|
How to call an OpenAI function from Python
|
<p>I generated a <code>function</code> using OpenAI as below:
<a href="https://i.sstatic.net/Z433ep6m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z433ep6m.png" alt="enter image description here" /></a></p>
<p>So the function definition is (json schema):</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "generate_jokes",
"description": "Generate 3 jokes based on input text",
"strict": true,
"parameters": {
"type": "object",
"required": [
"input_text",
"number_of_jokes"
],
"properties": {
"input_text": {
"type": "string",
"description": "The text or context based on which jokes will be generated"
},
"number_of_jokes": {
"type": "number",
"description": "The number of jokes to generate; default is 3"
}
},
"additionalProperties": false
}
}
</code></pre>
<p>How can I call it in Python 3.x?</p>
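<p>(To make the question concrete: based on my reading of the OpenAI Python SDK, I imagine the call looks roughly like the sketch below, with the schema passed as a tool. The model name and message are placeholders and I haven't verified this, which is why I'm asking.)</p>
<pre class="lang-py prettyprint-override"><code>import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# the JSON schema from above, as a Python dict
generate_jokes_schema = {
    "name": "generate_jokes",
    "description": "Generate 3 jokes based on input text",
    "strict": True,
    "parameters": {
        "type": "object",
        "required": ["input_text", "number_of_jokes"],
        "properties": {
            "input_text": {"type": "string"},
            "number_of_jokes": {"type": "number"},
        },
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Something to joke about"}],
    tools=[{"type": "function", "function": generate_jokes_schema}],
)

# the model may or may not decide to call the tool
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, json.loads(tool_call.function.arguments))
</code></pre>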
|
<python><function><openai-api>
|
2024-12-29 12:41:27
| 2
| 2,958
|
Babak.Abad
|
79,315,440
| 10,764,260
|
Combine Async with Parallelism
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>async def run_task(...):
...
semaphore = asyncio.Semaphore(cfg.concurrency_limit)
async def run_single_sample(task_sample: TaskSample):
async with semaphore:
await run_agent(cfg, task_sample, cfg.output_dir / task.value)
samples = [run_single_sample(task_sample) for task_sample in sliced_samples]
await tqdm.gather(*samples, desc=f"Task: {task.value}")
</code></pre>
<p>It is part of an ML application. So imagine <code>run_agent</code> being some async chat system where LLMs solve a problem. In this process multiple API calls are made to OpenAI and to some local models, and there is also a bit of CPU work for executing local code.</p>
<p>While the async/blocking approach makes sense for the API calls, I am not sure it makes sense for the CPU work, because it is my understanding that everything is still sequential: e.g. while sample 1 might wait for a CPU call, sample 2 begins, but then sample 1 continues again.</p>
<p>Looking at my htop output, it seems one CPU is used at 100%, so I figured that, due to the local code execution within the agent run, CPU might be the bottleneck here.</p>
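<p>A tiny sketch of what I mean by "still sequential" (the blocking sleep below is just a stand-in for the local code execution):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import time

def local_code_execution():
    time.sleep(1)  # stand-in for the CPU-bound part

async def run_single(i):
    local_code_execution()   # blocks the whole event loop
    await asyncio.sleep(0)   # the API-call parts would interleave fine

async def main():
    start = time.perf_counter()
    await asyncio.gather(*(run_single(i) for i in range(4)))
    print(time.perf_counter() - start)  # roughly 4 s, not 1 s

asyncio.run(main())
</code></pre>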
<p>Thus I tried combining it with something like ThreadPoolExecutor, asyncio.to_thread, asyncio.run_in_executor, ...
but all of them seem to be for running sync functions.</p>
<p>How could I do it correctly? Or is my use case somehow flawed?</p>
|
<python><python-asyncio>
|
2024-12-29 10:54:13
| 1
| 308
|
Leon0402
|
79,315,256
| 2,177,047
|
Profile selected functions in Python
|
<p>I can profile and visualize my entire application with:</p>
<pre><code>python -m cProfile -o program.prof my_program.py
snakeviz program.prof
</code></pre>
<p>Is it possible to write a decorator that will profile only some functions?</p>
<p>In the end this would look like this:</p>
<pre><code># This is the profiling decorator function.
def profile_function(func):
def wrapper(*args, **kwargs):
# TODO write profiling information to program.prof
return func(*args, **kwargs)
return wrapper
@profile_function
def function1():
for _ in range(0, 1000):
_**2
@profile_function
def function2():
for _ in range(0, 100000):
_**4
# Do not profile function3
def function3():
return
function1()
function2()
function3()
</code></pre>
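<p>To be a bit more concrete, this sketch is roughly the shape I have in mind, assuming <code>cProfile.Profile</code> can simply be enabled and disabled around each call and dumped to the same file:</p>
<pre><code>import cProfile
import functools

_profiler = cProfile.Profile()

def profile_function(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        _profiler.enable()
        try:
            return func(*args, **kwargs)
        finally:
            _profiler.disable()
            _profiler.dump_stats("program.prof")  # then: snakeviz program.prof
    return wrapper
</code></pre>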
|
<python><optimization><runtime><python-decorators><profiler>
|
2024-12-29 08:53:38
| 0
| 2,136
|
Ohumeronen
|
79,314,756
| 6,017,833
|
Limited to 5 RPM on Vertex AI
|
<p>I'm looking for some help on this nebulous issue please.</p>
<p>I have this very simple script that invokes model generation on Vertex AI:</p>
<pre class="lang-py prettyprint-override"><code>import vertexai
from vertexai.preview.generative_models import GenerativeModel
import asyncio
PROJECT_ID = "MY_PROJECT"
vertexai.init(project=PROJECT_ID, location="us-central1")
async def _query_async(model: GenerativeModel, i: int) -> str:
print(f"Sending request {i}")
response = await model.generate_content_async("message")
return response.text
async def run_pipeline_async() -> str:
model = GenerativeModel("gemini-1.5-pro-002")
query_jobs = asyncio.gather(*[_query_async(model, i) for i in range(5)])
query_responses = await query_jobs
return query_responses
result = asyncio.run(run_pipeline_async())
print(result)
</code></pre>
<p>When I execute this, I get the following output:</p>
<pre><code>Sending request 0
Sending request 1
Sending request 2
Sending request 3
Sending request 4
</code></pre>
<p>up until the exception:</p>
<pre><code>Exception has occurred: ResourceExhausted
429 Online prediction request quota exceeded for gemini-1.5-pro. Please try again later with backoff.
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
status = StatusCode.RESOURCE_EXHAUSTED
details = "Online prediction request quota exceeded for gemini-1.5-pro. Please try again later with backoff."
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.70.170:443 {created_time:"2024-12-28T23:43:43.8287643+00:00", grpc_status:8, grpc_message:"Online prediction request quota exceeded for gemini-1.5-pro. Please try again later with backoff."}"
>
The above exception was the direct cause of the following exception:
File "D:\Users\Harry\Code\PropScan\document\rate_limit_test.py", line 18, in _query_async
response = await model.generate_content_async("message")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Users\Harry\Code\PropScan\document\rate_limit_test.py", line 26, in run_pipeline_async
query_responses = await query_jobs
^^^^^^^^^^^^^^^^
File "D:\Users\Harry\Code\PropScan\document\rate_limit_test.py", line 30, in <module>
result = asyncio.run(run_pipeline_async())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
google.api_core.exceptions.ResourceExhausted: 429 Online prediction request quota exceeded for gemini-1.5-pro. Please try again later with backoff.
</code></pre>
<p>This is consistent behaviour. The <a href="https://cloud.google.com/vertex-ai/generative-ai/docs/quotas#view-the-quotas-by-region-and-by-model" rel="nofollow noreferrer">docs</a> indicate that the default quota is 60 RPM, so I do not know why I am getting throttled to 5 RPM and cannot find any documentation explaining this. Some posts online suggested it might be because I am using a free trial account, but I have confirmed in the console that I am using a paid account (albeit there are remaining free credits.)</p>
<p><a href="https://i.sstatic.net/JfwkTgI2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfwkTgI2.png" alt="enter image description here" /></a></p>
<p>I am at a bit of a loss for where to go from here... The Quota and System Limits dashboard doesn't really help me at all (I think I am looking at the right thing here?)</p>
<p><a href="https://i.sstatic.net/nSRiqejP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSRiqejP.png" alt="enter image description here" /></a></p>
<p>I would appreciate any help, thanks!</p>
|
<python><google-cloud-platform><python-asyncio><google-cloud-vertex-ai>
|
2024-12-28 23:59:04
| 0
| 1,945
|
Harry Stuart
|
79,314,688
| 7,693,707
|
Cupy creating array using other variables
|
<p>I am trying to make a numpy/cupy interchange script, similar to <a href="https://github.com/rafael-fuente/diffractsim/blob/main/diffractsim/util/backend_functions.py" rel="nofollow noreferrer">this backend</a> implementation, such that by using something like <code>from Util.Backend import backend as bd</code>, I can write <code>bd.array()</code> and switch between numpy and cupy. But I had a lot of trouble doing so, especially with creating arrays.</p>
<p>After some experiments, it seems that an assignment in the form of:</p>
<pre><code>import cupy as cp
A = cp.array([1, 2, 3])
B = cp.array([A[0], A[1], 3])
</code></pre>
<p>Will result in error:</p>
<pre><code>TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
</code></pre>
<p>However, if the same array creation is written as:</p>
<pre><code>B = cp.array([A[0], A[1], A[2]])
</code></pre>
<p>Then it becomes totally fine <em>(which is also weird, since I did not import numpy at all in the example above, it's almost like <code>[]</code> is being created as a numpy array first)</em>.</p>
<p>Similarly,</p>
<pre><code>s = 2
c = 3
one = 1.0
B = cp.array([s, c, one])
</code></pre>
<p>Is fine. But if some of the entries are not created in the same way, such as:</p>
<pre><code>s = cp.sin(2)
c = cp.cos(1)
one = 1.0
B = cp.array([s, c, one])
</code></pre>
<p>Then the <code>TypeError</code> comes in again. Note that if numpy is used instead of cupy, none of the above array creations would have raised an error.</p>
<p><a href="https://github.com/cupy/cupy/issues/8389" rel="nofollow noreferrer">This post</a> seems to indicate that cupy does not support mixed-type data. But with the suggested method I would have to write <code>CParray.get()</code> to make the conversion; that is a cupy method and not a numpy one, and will therefore create new errors when the backend module is running numpy.</p>
<p>Would it be possible to find a way to write array creations so that <code>cp</code> and <code>np</code> are interchangeable?</p>
|
<python><numpy><cupy>
|
2024-12-28 22:54:47
| 0
| 1,090
|
Amarth Gûl
|
79,314,406
| 4,451,315
|
"n_unique" aggregation using DuckDB relational API
|
<p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>import duckdb
rel = duckdb.sql('select * from values (1, 4), (1, 5), (2, 6) df(a, b)')
rel
</code></pre>
<pre class="lang-py prettyprint-override"><code>Out[3]:
┌───────┬───────┐
│   a   │   b   │
│ int32 │ int32 │
├───────┼───────┤
│     1 │     4 │
│     1 │     5 │
│     2 │     6 │
└───────┴───────┘
</code></pre>
<p>I can group by a and find the mean of 'b' by doing:</p>
<pre class="lang-py prettyprint-override"><code>rel.aggregate(
[duckdb.FunctionExpression('mean', duckdb.ColumnExpression('b'))],
group_expr='a',
)
</code></pre>
<pre><code>βββββββββββ
β mean(b) β
β double β
βββββββββββ€
β 4.5 β
β 6.0 β
βββββββββββ
</code></pre>
<p>which works wonderfully</p>
<p>Is there a similar way to create a "n_unique" aggregation? I'm looking for something like</p>
<pre class="lang-py prettyprint-override"><code>rel.aggregate(
[duckdb.FunctionExpression('count_distinct', duckdb.ColumnExpression('b'))],
group_expr='a',
)
</code></pre>
<p>but that doesn't exist. Is there something that does?</p>
|
<python><duckdb>
|
2024-12-28 19:09:13
| 1
| 11,062
|
ignoring_gravity
|
79,314,321
| 5,116,559
|
Create a new Polars column from a multiple choice of expressions by mapping values to a dictionary
|
<p>I want to use an expression dictionary to perform calculations for a new column.
I have this Polars dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"col1": ["a", "b", "a"],
"x": [1,2,3],
"y": [2,2,5]
})
</code></pre>
<p>And I have an expression dictionary:</p>
<pre class="lang-py prettyprint-override"><code>expr_dict = {
"a": pl.col("x") * pl.col("y"),
"b": pl.col("x"),
}
</code></pre>
<p>I want to create a column where each value is calculated based on a key in another column, but I do not know how. I want to have a result like this:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(r=pl.col("col1").apply(lambda x: expr_dict[x]))
</code></pre>
<pre><code>shape: (3, 4)
┌──────┬─────┬─────┬─────┐
│ col1 ┆ x   ┆ y   ┆ r   │
│ ---  ┆ --- ┆ --- ┆ --- │
│ str  ┆ i64 ┆ i64 ┆ i64 │
╞══════╪═════╪═════╪═════╡
│ a    ┆ 1   ┆ 2   ┆ 2   │
│ b    ┆ 2   ┆ 2   ┆ 2   │
│ a    ┆ 3   ┆ 5   ┆ 15  │
└──────┴─────┴─────┴─────┘
</code></pre>
<p>Is this possible?</p>
|
<python><dataframe><python-polars><coalesce>
|
2024-12-28 18:10:48
| 1
| 1,068
|
Babak Fi Foo
|
79,314,231
| 2,630,406
|
Restrict possible transformations for estimateAffinePartial2D
|
<p>I am using <code>estimateAffinePartial2D</code> to match and stitch two images together. Sometimes I get a completely wrong result (scaled way too much, rotated way too much, ...). Since my cameras are handheld, certain movements are impossible (like upside down, ...). Is there a way to restrict the possible transformations which <code>estimateAffinePartial2D</code> can produce? Is there any other method which might be better suited?</p>
<p>I know I could just limit (min,max) the output transformation but this will still get me something wrong. I want to reduce the possible solutions to help <code>estimateAffinePartial2D</code> find the best solution.</p>
|
<python><opencv><computer-vision>
|
2024-12-28 17:14:33
| 0
| 933
|
perotom
|
79,313,973
| 14,667,788
|
Unable to run chromium in Python
|
<p>I would like to run Chromium in my Python script.
Here is my test code:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
service = Service("/usr/bin/chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("https://www.google.com")
print(driver.title)
driver.quit()
</code></pre>
<p>But I am getting <code>selenium.common.exceptions.WebDriverException: Message: Service /usr/bin/chromedriver unexpectedly exited. Status code was: 1</code></p>
<p>If I check the chromedriver location in bash with <code>which chromedriver</code>, I get the correct location <code>/usr/bin/chromedriver</code>. Where is the problem, please?</p>
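<p>One thing I have not ruled out is that Selenium cannot find the Chromium binary itself (a sketch; the binary path below is an assumption and may be e.g. <code>/usr/bin/chromium-browser</code> on other distributions):</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service

options = webdriver.ChromeOptions()
# point Selenium at the Chromium binary explicitly (path is a guess)
options.binary_location = "/usr/bin/chromium"

service = Service("/usr/bin/chromedriver")
driver = webdriver.Chrome(service=service, options=options)
driver.get("https://www.google.com")
print(driver.title)
driver.quit()
</code></pre>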
|
<python><selenium-webdriver>
|
2024-12-28 14:43:48
| 1
| 1,265
|
vojtam
|
79,313,854
| 6,449,621
|
Identify how many times the same vehicle was captured in the provided video
|
<p>I am working on a video analysis assignment where I need to count how many times the same vehicle appears in a given video.</p>
<p>So far, using YOLO11, I was able to identify vehicles such as cars, bikes, buses & trucks, and I draw rectangles around the vehicles as they appear in the video frame.</p>
<p>I could not figure out how to mark a vehicle with some identification code, so that when the same vehicle appears again in a video frame I can increase the count for that vehicle.</p>
<p>Here is the code I tried:</p>
<pre><code>from ultralytics import YOLO
import cv2
from enum import Enum
class DetectionType(Enum):
CAR = 2
MOTORCYCLE = 3
BUS = 5
    TRUCK = 7  # COCO class id for truck is 7 (6 is train)
coco_model = YOLO('yolo11n.pt')
cap = cv2.VideoCapture('testVideo.mp4')
vehicles = [
DetectionType.CAR.value,
DetectionType.MOTORCYCLE.value,
DetectionType.BUS.value,
DetectionType.TRUCK.value
]
ret = True
while ret:
ret, frame = cap.read()
if ret:
#detect vehicle
detections_model = coco_model(frame)[0]
for detection in detections_model.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = detection
if int(class_id) in vehicles:
x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)
# Display frames in a window
cv2.imshow('video', frame)
if cv2.waitKey(33) == 27:
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>Any suggestion or snippet that helps me complete this assignment would be appreciated.</p>
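<p>A direction I have been looking at (a sketch, assuming YOLO11 supports the same <code>track()</code> mode as earlier Ultralytics models; the class ids and the counting logic are my assumptions) is letting the built-in tracker assign persistent ids:</p>
<pre class="lang-py prettyprint-override"><code>from ultralytics import YOLO
import cv2

model = YOLO('yolo11n.pt')
cap = cv2.VideoCapture('testVideo.mp4')
seen_frames = {}  # track id -> number of frames the vehicle was seen in

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # persist=True keeps tracker state between frames so ids stay stable;
    # classes restricts detection to car, motorcycle, bus and truck
    result = model.track(frame, persist=True, classes=[2, 3, 5, 7])[0]
    if result.boxes.id is not None:
        for track_id in result.boxes.id.int().tolist():
            seen_frames[track_id] = seen_frames.get(track_id, 0) + 1

cap.release()
print(seen_frames)
</code></pre>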
|
<python><machine-learning><computer-vision><object-detection><yolo>
|
2024-12-28 13:29:54
| 1
| 465
|
anandyn02
|
79,313,607
| 21,864,938
|
Why does this web scraper using Selenium not return the entire website?
|
<p>I tried to program a web scraper with Selenium for educational purposes that displays stock market data from "The Wall Street Journal". I want to know the number of advancing and declining issues from this link: <a href="https://www.wsj.com/market-data/stocks/us" rel="nofollow noreferrer">https://www.wsj.com/market-data/stocks/us</a></p>
<p>My webscraper looks like this:</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
options = webdriver.ChromeOptions()
options.add_argument("--headless")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
url = "https://www.wsj.com/market-data/stocks/us"
driver.get(url)
time.sleep(10)
try:
element = driver.find_element(By.XPATH, "/html/body/div/div/div/div/div[1]/div[2]/div/div[2]/table/tbody[1]/tr/td[2]")
data = element.text
print(f"Found data: {data}")
except Exception as e:
print(f"Error: {e}")
driver.quit()
</code></pre>
<p>For example, if I try to look for the number of advancing issues on the NYSE like this:</p>
<p><code>element = driver.find_element(By.XPATH, "/html/body/div/div/div/div/div[1]/div[2]/div/div[2]/table/tbody[1]/tr/td[2]")</code></p>
<p>I get the following error message:</p>
<pre><code>Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div/div/div/div/div[1]/div[2]/div/div[2]/table/tbody[1]/tr/td[2]"}
(Session info: chrome=131.0.6778.205); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
</code></pre>
<p>Therefore, I tried to look for the entire body to see if I have access to the data:</p>
<pre><code>element = driver.find_element(By.XPATH, "//body")
</code></pre>
<p>However, I only received the full text from the header, the "Stock Indexes" section and the "Stocks News" section. I did not get any content from the "Markets Diary" section or other parts further down the page.
Increasing the waiting time <code>time.sleep(10)</code> did not change anything.
Why am I not being shown the entire body?</p>
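<p>For what it is worth, this is the kind of check I plan to try next (a sketch; the scroll trigger, the anchor text and the assumption that the section is lazily loaded rather than served from an iframe are all guesses):</p>
<pre class="lang-py prettyprint-override"><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# continuing from the script above, after driver.get(url):
# scroll down so lazily loaded sections such as "Markets Diary" get requested
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

wait = WebDriverWait(driver, 20)
diary = wait.until(EC.presence_of_element_located(
    (By.XPATH, "//*[contains(text(), 'Issues Advancing')]")))  # assumed label text
print(diary.text)
</code></pre>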
|
<python><selenium-webdriver><web-scraping>
|
2024-12-28 10:33:22
| 2
| 478
|
Lukinator
|
79,313,474
| 6,834,925
|
pybind11: how to add documentation to the exported module?
|
<p>I have some C++ code, and I successfully exported it to a Python module using pybind11. The code is:</p>
<pre><code>#include <pybind11/pybind11.h>

namespace py = pybind11;

class A
{
public:
A()
{
}
/**
* @brief explanation from c++
*/
int GetWidth()
{
return width;
}
private:
int width = 0;
};
PYBIND11_MODULE(module, m) {
m.doc() = "My Library Python Bindings";
py::class_<A>(m, "A")
.def(py::init<>())
.def("GetWidth", &A::GetWidth, "explanation from pybind");
}
</code></pre>
<p>My question is: how can I read the documentation for a function in PyCharm?</p>
<p>For example, when I use <code>opencv</code>, I can see the input parameters for a function:</p>
<p><a href="https://i.sstatic.net/zOYYqaQ5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOYYqaQ5.png" alt="enter image description here" /></a></p>
<p>But when I import my own module, I don't know what the input parameters are.</p>
<p>For <code>pybind11</code>, how can I add documentation for the exported module?
In addition, I have added lots of explanation for the C++ functions in the <code>doxygen</code> format. Is it possible to convert the C++ explanation into the Python functions' documentation?</p>
<p>Any suggestion is appreciated~~~</p>
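<p>For completeness, the string passed to <code>.def()</code> is already readable from Python, even if PyCharm does not show it in the popup (a sketch; as far as I understand, the editor popup additionally needs a <code>.pyi</code> stub, for example one generated with <code>pybind11-stubgen</code>):</p>
<pre class="lang-py prettyprint-override"><code># assuming the compiled extension imports as `module`
import module

print(module.A.GetWidth.__doc__)  # "explanation from pybind"
help(module.A)                    # prints the generated signatures and docstrings
</code></pre>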
|
<python><c++><pybind11>
|
2024-12-28 08:47:07
| 1
| 962
|
Qiang Zhang
|
79,313,370
| 5,332,349
|
Discord.py MyClient(commands.Bot) and commands.Greedy
|
<p><a href="https://discordpy.readthedocs.io/en/stable/ext/commands/commands.html#greedy" rel="nofollow noreferrer">https://discordpy.readthedocs.io/en/stable/ext/commands/commands.html#greedy</a></p>
<p>I am following a tutorial that creates a subclass of commands.Bot in order to run a background task.</p>
<p>I am attempting to implement the discord.ext.commands.Greedy on a slash command in order to pass a list of discord.Member objects to be operated on.</p>
<p>I am receiving the following error on startup: <code>TypeError: unsupported type annotation Greedy[Member]</code></p>
<p>Example code to replicate my error:</p>
<pre><code>import discord
from discord.ext import commands
from discord import app_commands
class MyClient(commands.Bot):
def __init__(self, *, intents: discord.Intents):
super().__init__(command_prefix="/", intents=intents)
intents = discord.Intents.default()
intents.message_content = True
intents.members = True
client = MyClient(intents=intents)
@client.tree.command(name="slap")
async def slap(interaction: discord.Interaction, members: discord.ext.commands.Greedy[discord.Member], *, reason='no reason'):
slapped = ", ".join(x.name for x in members)
await interaction.response.send_message(f'{slapped} just got slapped for {reason}')
</code></pre>
<p>I am on discord.py version 2.4</p>
<p>How would I fix the sample code so that a slash command can accept more than one <code>discord.Member</code> and output the given text?</p>
<p>A working command should look like:</p>
<p><code>/slap members:@user1, @user2, @user3 reason="I am greedy"</code></p>
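<p>For comparison, the form that <code>Greedy</code> is documented for is a prefix command (a sketch adapted from the linked docs example to my client; it is not a slash command, which is exactly what I am trying to achieve instead):</p>
<pre class="lang-py prettyprint-override"><code>@client.command(name="slap")
async def slap_prefix(ctx: commands.Context,
                      members: commands.Greedy[discord.Member], *,
                      reason: str = 'no reason'):
    slapped = ", ".join(m.name for m in members)
    await ctx.send(f'{slapped} just got slapped for {reason}')
</code></pre>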
|
<python><discord><discord.py>
|
2024-12-28 07:05:43
| 1
| 309
|
Reed
|
79,313,343
| 2,641,825
|
How to fix setuptools_scm._file_finders.git listing git files failed?
|
<p>I am using <code>pyproject.toml</code> to build a package. I use setuptools_scm to automatically determine the version number. I use python version <code>3.11.2</code>, setuptools <code>66.1.1</code> and setuptools-scm <code>8.1.0</code>.</p>
<p>Here are the relevant parts of <code>pyproject.toml</code></p>
<pre><code># For a discussion on single-sourcing the version, see
# https://packaging.python.org/guides/single-sourcing-package-version/
dynamic = ["version"]
[tool.setuptools_scm]
# can be empty if no extra settings are needed, presence enables setuptools-scm
</code></pre>
<p>I build the project with</p>
<pre><code>python3 -m build
</code></pre>
<p>When I run the build command, I see</p>
<pre><code>ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any
</code></pre>
<p>What I've Tried:</p>
<ul>
<li>There is a <code>.git</code> directory at the root of my project. It's readable by all users.</li>
<li>Git is installed and accessible from my PATH.</li>
<li>I've committed changes to ensure there's Git history available.</li>
</ul>
<p>How can I fix this error? Are there additional configurations or checks I should perform to ensure setuptools_scm can correctly interact with Git for version determination?</p>
<h1>Reproducible example</h1>
<pre><code>cd /tmp/
mkdir setuptools_scm_example
cd setuptools_scm_example
git init
touch .gitignore
git add .
git commit -m "Initial commit"
</code></pre>
<p>Add the following to pyproject.toml</p>
<pre><code>[build-system]
requires = ["setuptools>=61.0", "setuptools_scm>=7.0"]
build-backend = "setuptools.build_meta"
[project]
name = "example_package"
dynamic = ["version"]
[tool.setuptools_scm]
# No additional configuration needed, but can add if needed
</code></pre>
<p>Create and build a python package</p>
<pre><code>mkdir -p example_package
touch example_package/__init__.py
echo "print('Hello from example package')" > example_package/__init__.py
python3 -m build
</code></pre>
<p>I see the error</p>
<pre><code>ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any
</code></pre>
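<p>Two checks I can run from the project root to narrow this down (a sketch; <code>get_version</code> is setuptools_scm's programmatic entry point, and I am assuming it uses the current directory as the project root by default):</p>
<pre class="lang-py prettyprint-override"><code>import subprocess

# 1) confirm git itself can list files from this directory
result = subprocess.run(["git", "ls-files"], capture_output=True, text=True)
print(result.returncode, result.stderr)

# 2) confirm setuptools_scm can compute a version outside of the build frontend
from setuptools_scm import get_version
print(get_version())  # expect something like 0.1.dev0+g<hash>
</code></pre>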
|
<python><setuptools><setuptools-scm>
|
2024-12-28 06:46:12
| 1
| 11,539
|
Paul Rougieux
|