| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (stringlengths 15 to 150) | QuestionBody (stringlengths 40 to 40.3k) | Tags (stringlengths 8 to 101) | CreationDate (stringdate 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (stringlengths 3 to 30, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
79,380,546
| 1,422,096
|
Zero pad a numpy n-dimensional array
|
<p>Not a duplicate of <a href="https://stackoverflow.com/questions/38191855/zero-pad-numpy-array">Zero pad numpy array</a> (that I posted 9 years ago, ouch!) because here it's about n-dimensional arrays.</p>
<p><strong>How to zero pad a numpy n-dimensional array, if possible in one line?</strong></p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>a = np.array([1, 2, 3])
zeropad(a, 8) # [1, 2, 3, 0, 0, 0, 0, 0]
b = np.array([[1, 2], [3, 4], [5, 6]])
zeropad(b, (5, 2)) # [[1, 2], [3, 4], [5, 6], [0, 0], [0, 0]]
</code></pre>
<p>Using <code>b.resize((5, 2))</code> works here, but in some real cases it gives:</p>
<pre><code>ValueError: cannot resize this array: it does not own its data
</code></pre>
<p><strong>How to zero pad numpy nd arrays no matter if it owns its data or not?</strong></p>
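<p>A possible sketch (an assumption, not the poster's own solution): <code>np.pad</code> copies into a new array, so it works regardless of whether the input owns its data. The <code>zeropad</code> helper name below is hypothetical.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def zeropad(a, shape):
    # pad each axis with zeros at the end, up to the requested target shape
    pad_width = [(0, target - current) for current, target in zip(a.shape, np.atleast_1d(shape))]
    return np.pad(a, pad_width, mode="constant", constant_values=0)

a = np.array([1, 2, 3])
print(zeropad(a, 8))        # [1 2 3 0 0 0 0 0]

b = np.array([[1, 2], [3, 4], [5, 6]])
print(zeropad(b, (5, 2)))   # two extra rows of zeros appended
</code></pre>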
|
<python><arrays><numpy>
|
2025-01-23 09:52:32
| 3
| 47,388
|
Basj
|
79,380,487
| 6,751,456
|
Celery: How to add a delay to a message sending to SQS
|
<p>I'm using Celery to consume messages from SQS queue.</p>
<p>The queue is Standard type.</p>
<p>There are cases (when exceptions are caught) where I explicitly re-enqueue tasks back to the queue.</p>
<pre><code>    def run(self):
        try:
            # some exceptions occurred
            ...
        except Exception as e:
            log.error(str(e), exc_info=True)
            self.enqueue_message()
            return

    def enqueue_message(self, task='llm_extraction_task', queue='llm-extraction-queue'):
        # TODO retries mechanism
        llm_app.send_task(name=task, kwargs=self.message, queue=queue)
</code></pre>
<p>Messages are consumed:</p>
<pre><code>@shared_task(name="llm_extraction_task")
def check_nlp_job_status(**kwargs):
    log.info("Message received in llm_extraction_task")
    consume_llm_data_obj = ConsumeLLMData(payload=kwargs)
    consume_llm_data_obj.run()
</code></pre>
<p>This will immediately push the message back to the queue.</p>
<p>The problem is that with few messages (say, a single message) the same re-enqueued message is immediately consumed again.</p>
<p>I want to add a timer or delay to such messages so that failed messages are picked up later rather than immediately.</p>
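<p>A hedged sketch of one option (assuming the SQS transport in this setup honours Celery's <code>countdown</code>/ETA handling): pass a delay when re-enqueuing so the retried task is not eligible for execution immediately. <code>delay_seconds</code> is a hypothetical parameter name.</p>
<pre class="lang-py prettyprint-override"><code>    def enqueue_message(self, task='llm_extraction_task', queue='llm-extraction-queue',
                        delay_seconds=60):
        # countdown asks Celery not to run the task for delay_seconds; with SQS this
        # depends on the broker/worker ETA handling and visibility timeout settings
        llm_app.send_task(name=task, kwargs=self.message, queue=queue,
                          countdown=delay_seconds)
</code></pre>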
|
<python><celery><amazon-sqs><django-celery><celery-task>
|
2025-01-23 09:31:47
| 0
| 4,161
|
Azima
|
79,380,222
| 17,721,722
|
How to immediately release memory after removing columns in PySpark?
|
<p>I want to remove specific columns from a DataFrame in PySpark and immediately release the memory occupied by these columns to optimize resource usage due to limited RAM. What is the best way to achieve this?</p>
<p>Here are a few approaches I am considering:</p>
<p>Note: df is already in-memory.</p>
<p><strong>Approach 1</strong>:</p>
<pre class="lang-py prettyprint-override"><code>new_df = df.select(*column_list_that_needed)
df.unpersist()
new_df.persist()
df = new_df
</code></pre>
<p><strong>Approach 2</strong>:</p>
<pre class="lang-py prettyprint-override"><code>new_df = df.drop(*drop_column_list)
df.unpersist()
new_df.persist()
df = new_df
</code></pre>
<p><strong>Approach 3</strong>:</p>
<pre class="lang-py prettyprint-override"><code>df = df.drop(*drop_column_list)
df.persist()
</code></pre>
<p>Which of these approaches is more efficient for memory management? Or is there a better way to achieve the goal of removing columns and freeing up memory simultaneously?</p>
<p>Any advice or insights are greatly appreciated.</p>
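<p>For what it's worth, a minimal sketch of the ordering that is usually suggested (an assumption, not a benchmark): materialise the smaller DataFrame first, then drop the old cache synchronously.</p>
<pre class="lang-py prettyprint-override"><code># sketch: assumes df is already cached and drop_column_list is defined
new_df = df.drop(*drop_column_list)
new_df.persist()
new_df.count()                 # action that actually materialises the new cache
df.unpersist(blocking=True)    # synchronously frees the blocks of the old cache
df = new_df
</code></pre>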
|
<python><apache-spark><hadoop><pyspark><apache-spark-sql>
|
2025-01-23 07:48:22
| 0
| 501
|
Purushottam Nawale
|
79,379,899
| 5,273,594
|
Calling VerifyAddFixedPriceItem python Ebay sdk
|
<p>I am trying to call the VerifyAddItem endpoint with the following XML:</p>
<pre><code><?xml version='1.0' encoding='utf-8'?>
<VerifyAddFixedPriceItemRequest xmlns="urn:eb*y:apis:eBLBaseComponents"><RequesterCredentials><eBayAuthToken>v^1.1#i^1#r^1#I^3#p^3#f^0#t^Ul4xMF84Ojg0NUNBQzJCAAAAAABBBBBBBBBBBCCCCCFAKE=</eBayAuthToken>
</RequesterCredentials>
<Item>
<ConditionID>1500</ConditionID>
<Country>US</Country>
<Currency>USD</Currency>
<Description>Genuine leather jacket in excellent condition</Description><DispatchTimeMax>3</DispatchTimeMax>
<ItemSpecifics>
<NameValueList><Name>Brand</Name><Value>Custom</Value></NameValueList>
<NameValueList><Name>Size (Men's)</Name><Value>L</Value></NameValueList>
<NameValueList><Name>Color</Name><Value>Yellow</Value></NameValueList>
<NameValueList><Name>Style</Name><Value>Basic Coat</Value></NameValueList><NameValueList><Name>Type</Name><Value>Leather Jacket</Value></NameValueList><NameValueList><Name>Material</Name><Value>Leather</Value></NameValueList><NameValueList><Name>Pattern</Name><Value>Solid</Value></NameValueList>
<NameValueList><Name>Department</Name><Value>Men</Value></NameValueList></ItemSpecifics>
<ListingDuration>Days_7</ListingDuration>
<ListingType>FixedPriceItem</ListingType>
<Location>San Jose, CA</Location>
<PictureDetails>
<PictureURL>http://www.mensworld.com.bd/wp-content/uploads/2023/09/FSF-1166.jpg</PictureURL>
</PictureDetails>
<PostalCode>95125</PostalCode>
<PrimaryCategory><CategoryID>57988</CategoryID></PrimaryCategory>
<Quantity>1</Quantity>
<ReturnPolicy><RefundOption>MoneyBack</RefundOption><ReturnsAcceptedOption>ReturnsAccepted</ReturnsAcceptedOption><ReturnsWithinOption>Days_30</ReturnsWithinOption><ShippingCostPaidByOption>Buyer</ShippingCostPaidByOption></ReturnPolicy><ShippingDetails><ShippingServiceOptions><ShippingService>USPSPriority</ShippingService><ShippingServiceCost><currencyID>USD</currencyID>
<value>0.00</value>
</ShippingServiceCost><ShippingServicePriority>1</ShippingServicePriority></ShippingServiceOptions><ShippingType>Flat</ShippingType></ShippingDetails><Site>US</Site><StartPrice><currencyID>USD</currencyID><value>99.00</value></StartPrice><Title>Yellow Leather Jacket</Title>
</Item>
</VerifyAddFixedPriceItemRequest>
</code></pre>
<p>I get the following error:</p>
<pre><code>VerifyAddItem: Class: RequestError, Severity: Error, Code: 20170, Schema XML request error. Schema XML request error: SimpleDeserializer encountered a child element, which is NOT expected, in something it was trying to deserialize..
</code></pre>
<p>When I change the PictureURL (removing the <code>http://</code> prefix) to:</p>
<pre><code><PictureURL>www.mensworld.com.bd/wp-content/uploads/2023/09/FSF-1166.jpg</PictureURL>
</code></pre>
<p>I get the following error:</p>
<pre><code>VerifyAddFixedPriceItem: Class: RequestError, Severity: Error, Code: 37, Input data is invalid. Input data for tag <Item.PictureDetails.PictureURL> is invalid or missing. Please check API documentation.
</code></pre>
<p>Is there any way to figure out how to correct the XML?</p>
|
<python><xml><ebay-sdk>
|
2025-01-23 05:07:43
| 0
| 2,073
|
Fredy
|
79,379,838
| 4,647,107
|
Binding Python process to specific cores (Linux) using mpirun
|
<p>I have a Python file bla.py:</p>
<pre><code>import os
from mpi4py import MPI
import psutil
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
PID = os.getpid()
cpu_affinity = psutil.Process().cpu_num()
print(f'rank: {rank} has PID: {PID} with affinity {cpu_affinity}')
</code></pre>
<p>But when I execute it using <code>mpirun --cpu-set 15-20 --bind-to core -n 6 python3 bla.py</code>, I get:</p>
<pre><code>rank: 5 has PID: 2451954 with affinity 16
rank: 2 has PID: 2451923 with affinity 15
rank: 0 has PID: 2451911 with affinity 20
rank: 4 has PID: 2451944 with affinity 18
rank: 3 has PID: 2451935 with affinity 16
rank: 1 has PID: 2451919 with affinity 17
</code></pre>
<p>Note how there are 2 processes with affinity 16 and no process with affinity 19. This is non-deterministic and sometimes the processes are actually mapped 1:1 to specific cores, and sometimes not.</p>
<p>With <code>--display-map</code>:</p>
<pre><code>Data for JOB [62540,1] offset 0 Total slots allocated 24
======================== JOB MAP ========================
Data for node: triton Num slots: 24 Max slots: 0 Num procs: 6
Process OMPI jobid: [62540,1] App: 0 Process rank: 0 Bound: UNBOUND
Process OMPI jobid: [62540,1] App: 0 Process rank: 1 Bound: UNBOUND
Process OMPI jobid: [62540,1] App: 0 Process rank: 2 Bound: UNBOUND
Process OMPI jobid: [62540,1] App: 0 Process rank: 3 Bound: UNBOUND
Process OMPI jobid: [62540,1] App: 0 Process rank: 4 Bound: UNBOUND
Process OMPI jobid: [62540,1] App: 0 Process rank: 5 Bound: UNBOUND
=============================================================
</code></pre>
<p>With <code>--report-bindings</code>:</p>
<pre><code>[triton.ecn.purdue.edu:2486163] MCW rank 0 is not bound (or bound to all available processors)
[triton.ecn.purdue.edu:2486163] MCW rank 1 is not bound (or bound to all available processors)
[triton.ecn.purdue.edu:2486163] MCW rank 2 is not bound (or bound to all available processors)
[triton.ecn.purdue.edu:2486163] MCW rank 3 is not bound (or bound to all available processors)
[triton.ecn.purdue.edu:2486163] MCW rank 4 is not bound (or bound to all available processors)
[triton.ecn.purdue.edu:2486163] MCW rank 5 is not bound (or bound to all available processors)
</code></pre>
<p>How do I pin my 6 launched processes to 6 different cores?</p>
<p>I have tried <code>--map-by core</code> without passing <code>--cpu-set</code>, and that actually assigns each process to a particular core, but it does not let me select which CPUs I want to use (<code>--map-by core</code> and <code>--cpu-set</code> cannot be passed together).</p>
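<p>One possible workaround, sketched below (an assumption, not a tested Open MPI recipe): pin each rank from inside the script with <code>psutil.Process().cpu_affinity()</code>, using an explicit core list, instead of relying on <code>--cpu-set</code>.</p>
<pre class="lang-py prettyprint-override"><code># sketch: each rank pins itself to one core from an explicit list (Linux only)
import os
import psutil
from mpi4py import MPI

ALLOWED_CORES = [15, 16, 17, 18, 19, 20]   # assumed core list

rank = MPI.COMM_WORLD.Get_rank()
core = ALLOWED_CORES[rank % len(ALLOWED_CORES)]
psutil.Process().cpu_affinity([core])      # restrict this process to a single core

print(f"rank {rank} (PID {os.getpid()}) pinned to {psutil.Process().cpu_affinity()}")
</code></pre>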
|
<python><mpi><affinity>
|
2025-01-23 04:28:02
| 1
| 534
|
Pratyush Das
|
79,379,785
| 1,232,087
|
Pandas ValueError when using on_bad_lines
|
<p><strong>Question</strong>: What may I be doing wrong, and how can I fix the following error?</p>
<pre><code>import pandas as pd
def handle_bad_line(bad_line: list[str]) -> list[str] | None:
    # Do something with the bad line, e.g., print it or modify it
    print("Bad line:", bad_line)
    # Return a modified line (if needed) or None to skip it
    return None  # Skip the bad line

df = pd.read_csv("DataFile.csv", engine="python", on_bad_lines=handle_bad_line)
</code></pre>
<p><strong>Error</strong>:</p>
<blockquote>
<p>ValueError: Argument <function handle_bad_line at 0x000001D9A6683E20> is invalid for on_bad_lines</p>
</blockquote>
<p><strong>Remark</strong>: The callable option for <code>on_bad_lines</code> was added in Pandas 1.4.0. The following is from the <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<blockquote>
<p>Callable, function with signature (bad_line: list[str]) -> list[str] | None that will process a single bad line. bad_line is a list of strings split by the sep. If the function returns None, the bad line will be ignored. If the function returns a new list of strings with more elements than expected, a ParserWarning will be emitted while dropping extra elements. Only supported when engine='python'</p>
</blockquote>
</blockquote>
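<p>A small hedged check worth doing here (an assumption about the cause): per the quoted documentation, the callable form is only accepted from pandas 1.4 with <code>engine="python"</code>, so it may help to confirm which pandas version the script actually imports.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# the callable form of on_bad_lines requires pandas >= 1.4 and engine="python"
print(pd.__version__)
</code></pre>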
|
<python><python-3.x><pandas>
|
2025-01-23 03:49:22
| 1
| 24,239
|
nam
|
79,379,761
| 11,663,956
|
paramiko.transport hang after ssh.connect from jump server to target server
|
<p>I have to use a jump server to log in to a restricted server. Below is the relevant setup on my jump server:</p>
<pre><code>PasswordAuthentication yes
ChallengeResponseAuthentication yes
UsePAM yes
</code></pre>
<p>Below is my code</p>
<pre><code>import paramiko, traceback
from getpass import getpass
paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
def connect_to_jump(hostname, port, username, password):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        transport = paramiko.Transport((hostname, port))
        transport.start_client()

        def challenge_handler(title, instructions, prompt_list):
            responses = []
            for prompt in prompt_list:
                if "password" in prompt[0].lower():
                    responses.append(password)
                elif "rsa" in prompt[0].lower():
                    token = getpass(f"{prompt[0].strip()}: ")
                    responses.append(token)
                else:
                    responses.append(getpass(f"{prompt[0].strip()}: "))
            return responses

        transport.auth_interactive(username, handler=challenge_handler)
        ssh._transport = transport
        return ssh
    except Exception as e:
        print(f"Error: {e}")
        traceback.print_exc()
        if 'ssh' in locals():
            ssh.close()

def connect_to_target(jump_ssh, target_host, target_port, username, password):
    dest_addr = (target_host, target_port)
    local_addr = ("127.0.0.1", 0)
    try:
        jump_transport = jump_ssh.get_transport()
        channel = jump_transport.open_channel("session", dest_addr, local_addr)
        target_ssh = paramiko.SSHClient()
        target_ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        target_ssh.connect(target_host, port=target_port, username=username, password=password, sock=channel)
        stdin, stdout, stderr = target_ssh.exec_command("hostname")
        print(stdout.read().decode())
    finally:
        target_ssh.close()
        jump_ssh.close()
</code></pre>
<p>And below is my code to call the above functions:</p>
<pre><code>jump_hostname = 'jump'
jump_port = 22
target_host = 'target'
target_port = 22
username = get_user_name()
password = keyring.get_password('unix',username)
jump_ssh = connect_to_jump(jump_hostname, jump_port, username, password)
stdin, stdout, stderr = jump_ssh.exec_command('uname', timeout=10)
o = stdout.read().decode()
if "Linux" in o:
print(rf"connect to {jump_hostname} successfully")
connect_to_target(jump_ssh, target_host, target_port, username, password)
</code></pre>
<p>I do see the line <code>connect to jump successfully</code>, which means I have connected to the jump server successfully. However, I couldn't connect to the target; the process hangs forever.</p>
<p>Can you please share your thoughts on what could be wrong in my code? I can use PuTTY to SSH to my jump server and then to the target.</p>
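<p>For reference, a hedged sketch of the channel type usually used for jump-host tunnelling (an assumption about the cause of the hang): the forwarded channel is normally opened as <code>"direct-tcpip"</code> rather than <code>"session"</code>.</p>
<pre class="lang-py prettyprint-override"><code>jump_transport = jump_ssh.get_transport()
channel = jump_transport.open_channel(
    "direct-tcpip",
    dest_addr=(target_host, target_port),   # where the jump host should connect to
    src_addr=("127.0.0.1", 0),              # nominal originating address
)

target_ssh = paramiko.SSHClient()
target_ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target_ssh.connect(target_host, port=target_port, username=username,
                   password=password, sock=channel)
</code></pre>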
<p>Below are my logs:</p>
<pre class="lang-none prettyprint-override"><code>DEBUG:paramiko.transport:starting thread (client mode): 0xbbc4af20
DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_3.5.0
DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-OpenSSH_7.4
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_7.4)
DEBUG:paramiko.transport:=== Key exchange possibilities ===
DEBUG:paramiko.transport:kex algos: curve25519-sha256, curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group16-sha512, diffie-hellman-group18-sha512, diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha256, diffie-hellman-group14-sha1, diffie-hellman-group1-sha1
DEBUG:paramiko.transport:server key: ssh-rsa, rsa-sha2-512, rsa-sha2-256, ecdsa-sha2-nistp256, ssh-ed25519
DEBUG:paramiko.transport:client encrypt: chacha20-poly1305@openssh.com, aes128-ctr, aes192-ctr, aes256-ctr,
aes128-gcm@openssh.com, aes256-gcm@openssh.com, aes128-cbc, aes192-cbc, aes256-cbc, blowfish-cbc, cast128-cbc, 3des-cbc
DEBUG:paramiko.transport:server encrypt: chacha20-poly1305@openssh.com, aes128-ctr, aes192-ctr, aes256-ctr,
aes128-gcm@openssh.com, aes256-gcm@openssh.com, aes128-cbc, aes192-cbc, aes256-cbc, blowfish-cbc, cast128-cbc, 3des-cbc
DEBUG:paramiko.transport:client mac: umac-64-etm@openssh.com, umac-128-etm@openssh.com, hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, umac-128@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1
DEBUG:paramiko.transport:server mac:
umac-64-etm@openssh.com, umac-128-etm@openssh.com, hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, umac-128@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1
DEBUG:paramiko.transport:client compress: none, zlib@openssh.com
DEBUG:paramiko.transport:server compress: none, zlib@openssh.com
DEBUG:paramiko.transport:client lang: <none>
DEBUG:paramiko.transport:server lang: <none>
DEBUG:paramiko.transport:kex follows: False
DEBUG:paramiko.transport:=== Key exchange agreements ===
DEBUG:paramiko.transport:Kex: curve25519-sha256@libssh.org
DEBUG:paramiko.transport:HostKey: ssh-ed25519
DEBUG:paramiko.transport:Cipher: aes128-ctr
DEBUG:paramiko.transport:MAC: hmac-sha2-256
DEBUG:paramiko.transport:Compression: none
DEBUG:paramiko.transport:=== End of kex handshake ===
DEBUG:paramiko.transport:kex engine KexCurve25519 specified hash_algo
<built-in function openssl_sha256>
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Got EXT_INFO: {'server-sig-algs': b'rsa-sha2-256,rsa-sha2-512'}
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (keyboard-interactive) successful!
DEBUG:paramiko.transport:[chan 0] Max packet in: 32768 bytes
DEBUG:paramiko.transport:Received global request "hostkeys-00@openssh.com"
DEBUG:paramiko.transport:Rejecting "hostkeys-00@openssh.com" global request from server.
DEBUG:paramiko.transport:[chan 0] Max packet out: 32768 bytes
DEBUG:paramiko.transport:Secsh channel 0 opened.
DEBUG:paramiko.transport:[chan 0] Sesch channel 0 request ok
DEBUG:paramiko.transport:[chan 0] EOF received (0)
DEBUG:paramiko.transport:[chan 1] Max packet in: 32768 bytes
DEBUG:paramiko.transport:[chan 0] EOF sent (0)
**connect to jump successfully**
DEBUG:paramiko.transport:[chan 1] Max packet out: 32768 bytes
DEBUG:paramiko.transport:Secsh channel 1 opened.
DEBUG:paramiko.transport:starting thread (client mode): 0xaa1dfd00
</code></pre>
|
<python><python-3.x><ssh><paramiko><jumphost>
|
2025-01-23 03:30:51
| 1
| 347
|
Ginger_Chacha
|
79,379,667
| 3,179,698
|
switch between different notebooks in the same environment(session) in colab
|
<p>I am currently working with colab.</p>
<p>I think I need to open different notebooks in the same session.</p>
<p>I did some work in ipynb1, which downloaded a lot of data and installed a lot of packages in the current session. Now I want to continue my work in another notebook, ipynb2, trying out methods there on the same session (so that I still have all the data and libraries I need).</p>
<p>Currently I can upload and open ipynb2, but it seems to start a brand new session, and all my original preparation is gone. Is there a way to open a new notebook and have all the former work/state carried over?</p>
|
<python><session><google-colaboratory>
|
2025-01-23 02:26:06
| 1
| 1,504
|
cloudscomputes
|
79,379,608
| 4,839,713
|
No Module Named error when trying to run pybind11-stubgen
|
<p>I've been scratching my head (googling, chatgpt-ing) trying to figure out why I can't generate a pyi file for a pybind module with pybind11-stubgen. I did this about a year ago successfully but can't remember the specifics.</p>
<p>I'm getting a "No Module Named {my-pybind-module}" error when I try running pybind11-stubgen in a terminal window.</p>
<p>Then it occurred to me (and it rings a memory bell) that maybe the pybind module needs to be "installed" in my environment in order for this to work. These are ad hoc pybind11 modules I'm building - I can successfully import and use them from python, but I wouldn't normally install them in my environment.</p>
<p>Do I need to do this in order to run pybind11-stubgen successfully? And if so, what is the simplest and easiest way to do it? These are just private modules, rebuilt very frequently during development; putting together some kind of package that can be used with pip seems like overkill.</p>
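<p>A minimal sketch of one approach (an assumption; the path and module name below are hypothetical): <code>pybind11-stubgen</code> imports the module, so the built extension just has to be importable, for example by putting its build directory on <code>PYTHONPATH</code> rather than installing a package.</p>
<pre class="lang-py prettyprint-override"><code>import os
import subprocess

env = dict(os.environ, PYTHONPATH="/path/to/pybind/build/dir")  # hypothetical path
# -o sets the output directory (check pybind11-stubgen --help for your version)
subprocess.run(["pybind11-stubgen", "my_pybind_module", "-o", "stubs"],
               env=env, check=True)
</code></pre>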
|
<python><pybind11><stub>
|
2025-01-23 01:36:19
| 1
| 599
|
Andrew Voelkel
|
79,379,515
| 8,032,508
|
Create an (XML) file in working memory, rather than writing locally (Python)
|
<p><strong>tl;dr -</strong> In python, is there a way to write a file in memory and store it in a variable, rather than writing it locally to disk?</p>
<p><strong>Use Case</strong></p>
<p>I'm trying to generate an XML file using the <code>xml.etree.ElementTree</code> package. I've been able to create a local file with <code>ElementTree.write(etc.)</code>, which is helpful for checking output/debugging. However, the next step is for me to create the full XML file in memory so that I can zip it (also in memory) and send it as part of an API POST without polluting the script's directory with single-use files.</p>
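<p>A minimal sketch of the in-memory approach (standard library only; the element names are made up):</p>
<pre class="lang-py prettyprint-override"><code>import io
import zipfile
import xml.etree.ElementTree as ET

root = ET.Element("root")
ET.SubElement(root, "child").text = "hello"
tree = ET.ElementTree(root)

# write the XML document into an in-memory buffer instead of a file on disk
xml_buffer = io.BytesIO()
tree.write(xml_buffer, encoding="utf-8", xml_declaration=True)

# zip it, also in memory, ready to attach to an HTTP POST
zip_buffer = io.BytesIO()
with zipfile.ZipFile(zip_buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("payload.xml", xml_buffer.getvalue())

payload_bytes = zip_buffer.getvalue()   # bytes to send in the request body
</code></pre>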
|
<python><xml>
|
2025-01-23 00:06:48
| 0
| 752
|
Jwok
|
79,379,421
| 1,406,168
|
Azure function python app - getting nested environments variables
|
<p>I have an azure function app written in python that does not return environment variables from other sections than values.</p>
<p>local.settings.json:</p>
<pre><code>{
  "IsEncrypted": false,
  "IBAN": {
    "API_KEY": "xx",
    "CURRENCY": "USD"
  },
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "xx": "InstrumentationKey=xx-xx-xx-xx-xx"
  }
}
</code></pre>
<p>I have tried:</p>
<pre><code>os.getenv('IBAN:API_KEY', '')
and
os.getenv('IBAN_API_KEY', '')
</code></pre>
<p>Both fail to get the value. Is it not possible in Python to use nested sections, like it is in .NET?</p>
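<p>For context, a hedged sketch of a common workaround (an assumption: only the <code>Values</code> section is exposed as environment variables to the Python worker): flatten the section into keys such as <code>IBAN_API_KEY</code> under <code>Values</code> and collect them by prefix.</p>
<pre class="lang-py prettyprint-override"><code>import os

def read_section(prefix: str) -> dict:
    # collect variables named like PREFIX_SOMETHING into {"SOMETHING": value}
    cut = len(prefix) + 1
    return {k[cut:]: v for k, v in os.environ.items() if k.startswith(prefix + "_")}

iban = read_section("IBAN")   # e.g. {"API_KEY": "xx", "CURRENCY": "USD"} once flattened
</code></pre>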
|
<python><azure><azure-functions>
|
2025-01-22 23:01:22
| 1
| 5,363
|
Thomas Segato
|
79,379,418
| 11,441,069
|
Django Allauth's Google Login Redirect and Page Design
|
<p>Currently, on the login page, I have a button:</p>
<pre><code><div class="d-grid gap-2">
<a href="{% provider_login_url 'google' %}" class="btn btn-danger">
<i class="fab fa-google"></i> Sign in with Google
</a>
</div>
</code></pre>
<p>This redirects to accounts/google/login/, and that page then redirects to Google authentication.</p>
<p>I have two problems:</p>
<ol>
<li>I don't know if these two steps are necessary, and I don't see the value of having the extra step accounts/google/login/ (a possible setting is sketched after this list).</li>
<li>I don't know how to replace the standard layout of the accounts/google/login/ page (in case it is really needed).</li>
</ol>
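<p>Regarding point 1, a sketch of a setting that is commonly suggested (an assumption about current django-allauth behaviour; check the version in use): allauth shows the intermediate accounts/google/login/ confirmation page for GET requests, and this skips it.</p>
<pre class="lang-py prettyprint-override"><code># settings.py (sketch)
SOCIALACCOUNT_LOGIN_ON_GET = True   # skip the intermediate confirmation page on GET
</code></pre>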
|
<python><django><google-oauth><django-allauth>
|
2025-01-22 22:58:53
| 2
| 509
|
Krzysztof Krysztofczyk
|
79,379,407
| 1,938,096
|
Flask doesn't claim any port
|
<p>I've been writing some code and used Flask in debug mode to test HTML etc., which worked fine on <a href="http://127.0.0.1:5000" rel="nofollow noreferrer">http://127.0.0.1:5000</a>. But I ran into an issue in the Python code that I couldn't resolve with just print statements showing the values of some variables. I really needed to be able to debug the Python code line by line.</p>
<p>I tried some configuration changes in VS Code but didn't get debugging to run; worse, the "normal" Flask debug mode isn't claiming a port anymore.</p>
<p>Usually in VScode after starting the app.py, I see flask starting and saying it is now available at <a href="http://127.0.0.1:5000" rel="nofollow noreferrer">http://127.0.0.1:5000</a> and in the "Ports" window, I see port 5000 in use. But it doesn't do that anymore.</p>
<p>I changed something in the Python Flask debugger 'launch.json', but I'm not sure what the original config was. I used to check the webpage with Chrome, but that now gives me an error: <code>Access to 127.0.0.1 was denied You don't have authorisation to view this page. HTTP ERROR 403</code>, even when trying incognito mode. Firefox just shows an empty page.</p>
<p>app.py:</p>
<pre class="lang-py prettyprint-override"><code><snip some imports>
@app.route("/")
def index():
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)
</code></pre>
<p>Also tried</p>
<pre class="lang-py prettyprint-override"><code>app.run(host="127.0.0.1",port=5000,debug=True)
app.run(host="0.0.0.0",port=5000,debug=True)
</code></pre>
<p>launch.json:</p>
<pre class="lang-json prettyprint-override"><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: Flask",
            "type": "debugpy",
            "request": "launch",
            "module": "flask",
            "env": {
                "FLASK_APP": "app.py",
                "FLASK_DEBUG": "1"
            },
            "args": [
                "run",
                "--no-debugger",
                "--no-reload"
            ],
            "jinja": true,
            "autoStartBrowser": false
        },
        {
            "name": "Python Debugger: Python File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}"
        }
    ]
}
</code></pre>
<p>When the app is started, the terminal window shows:</p>
<ul>
<li>Tip: There are .env files present. Install python-dotenv to use them.</li>
<li>Serving Flask app 'app'</li>
<li>Debug mode: on</li>
</ul>
|
<python><flask>
|
2025-01-22 22:52:42
| 1
| 579
|
Gabrie
|
79,379,336
| 2,600,531
|
boto3 timestream queries very slow
|
<pre><code>import boto3
device_list = ['ARMS-GFY-D5', 'ARMS-GFY-D3']
client = boto3.client('timestream-query')
for device_id in device_list:
    query = f"""
        SELECT time, charge, current, voltage, temperature
        FROM "telemetryDatabase"."telemetryTable"
        WHERE device_id = '{device_id}'
        AND time > ago(24h)
        ORDER BY time
    """
    response = client.query(QueryString=query)
</code></pre>
<ul>
<li>The query above entered into the AWS timestream console query editor (with device ID substituted) consistently completes in under 1 second.</li>
<li>The first device_id query executed via this python script from my dev laptop consistently takes >15s, often as much as 30-60s.</li>
<li>The second device_id query executed via this python script consistently returns in <1s</li>
</ul>
<p>In all cases the output is small, only about 12kB (288 rows). Reducing the time range of the query has no impact on query time.</p>
<p>My laptop has a gigabit internet connection and no obvious network throughput issues to AWS.</p>
<p>The table schema:
<a href="https://i.sstatic.net/7AsfEaFe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AsfEaFe.png" alt="enter image description here" /></a></p>
<p>Why is boto3 so much slower than AWS console, seemingly only until the client is "warm"? How can I avoid this behavior?</p>
|
<python><boto3><amazon-timestream>
|
2025-01-22 22:06:06
| 0
| 944
|
davegravy
|
79,379,215
| 374,198
|
Response from AWS SQS Java SDK does not match Python SDK or CLI
|
<p>We are using Java to send/receive messages from SNS/SQS. When we publish the message, we are doing this:</p>
<pre class="lang-java prettyprint-override"><code>final var req = PublishRequest.builder()
        .topicArn(topicArn)
        .message(payload)
        .subject(subject)
        .build();
final var response = snsClient.publish(req);
</code></pre>
<p>Note that we are setting <code>subject</code> here -- this is important as we use it to drive logic on the receiving end. When we do receive messages, we parse the JSON to an object and correctly end up with a <code>subject</code>. However, we are now using the Python SDK to receive messages as well, and in that case there is no "subject" attribute sent back from the API call:</p>
<pre class="lang-py prettyprint-override"><code>import json

import boto3

queue_name = "..."
sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName=queue_name)
messages = queue.receive_messages(
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,
    AttributeNames=["All"],
    MessageAttributeNames=["All"],
)
for message in messages:
    print("Got message:")
    print(f"Queue URL: {message.queue_url}")
    if message.attributes != None:
        print(f"Attributes:\n{json.dumps(message.attributes, indent=2)}")
    if message.message_attributes != None:
        print(f"Message Attributes:\n{json.dumps(message.message_attributes, indent=2)}")
    print(f"Body:\n{json.dumps(json.loads(message.body), indent=2)}")
</code></pre>
<p>Results in:</p>
<pre><code>Got message:
Queue URL: https://...
Attributes:
{
"SenderId": "...",
"ApproximateFirstReceiveTimestamp": "1737570929119",
"ApproximateReceiveCount": "5",
"SentTimestamp": "1737535740136"
}
Body:
{
...
}
</code></pre>
<p>Note that there is no top-level "subject" attribute, and the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs/queue/receive_messages.html" rel="nofollow noreferrer">receive_messages documentation</a> does not mention "subject" at all.</p>
<hr />
<p>Am I missing something or is the output of the two SDKs simply not the same? Note that I've also tried using the SQS Client (rather than the "resource") in Python. How can I get the Python SDK to include the "subject"?</p>
<p>I've also configured an email subscription and the subject shows up in the email subject as well in the JSON body of the email, as a standalone property.</p>
<p>Java AWS SDK: 2.29.29</p>
<p>Python: 3.10.2</p>
<p>Python AWS SDK: 1.33.13</p>
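<p>For what it's worth, a hedged sketch of where the subject normally ends up (an assumption: the subscription does not use raw message delivery, so the SNS envelope, including <code>Subject</code>, is the SQS message body rather than an SQS attribute):</p>
<pre class="lang-py prettyprint-override"><code>import json

for message in messages:
    envelope = json.loads(message.body)      # SNS envelope forwarded to SQS
    subject = envelope.get("Subject")        # the Subject set in PublishRequest, if any
    inner_payload = envelope.get("Message")  # the original message string
    print(subject, inner_payload)
</code></pre>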
|
<python><amazon-web-services><amazon-sqs>
|
2025-01-22 21:06:27
| 1
| 28,017
|
Josh M.
|
79,378,915
| 595,305
|
Filter out unneeded lines in pytest stack traces?
|
<p>Pytest was installed by using <code>pip install pytest</code>.</p>
<p>Pytest is being run <strong>at a command prompt</strong> (W10) not from inside an IDE. Typically:</p>
<pre><code>> pytest --random-order -k test_my_test_module
</code></pre>
<p>(NB <code>--random-order</code> comes from package <a href="https://pypi.org/project/pytest-random-order/" rel="nofollow noreferrer">pytest-random-order</a>. I use this switch every time I run pytest, without exception, to such an extent that I actually run pytest using a python script which does that)</p>
<p>For the sake of clarification, however, running simply like this produces the same sort of output:</p>
<pre><code>> pytest -k test_my_test_module
</code></pre>
<p><strong>I'm not aware whether other people get vast output like this, clearly including the very long code path through the internals of pytest, which is of absolutely no interest to the user, or whether this is unusual. Should there be fewer details of the internals?</strong></p>
<p>I found <a href="https://docs.pytest.org/en/stable/how-to/output.html" rel="nofollow noreferrer">this page</a> and tried options like <code>--tb=short</code> and <code>-q</code>. They don't seem to have the effect on the stack trace that I'm looking for. Here's an example:</p>
<pre><code>Stack (most recent call last):
File "C:\Users\Mike\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Mike\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "D:\apps\Python\virtual_envs\dev_organiser\Scripts\pytest.exe\__main__.py", line 7, in <module>
sys.exit(console_main())
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\_pytest\config\__init__.py", line 206, in console_main
code = main()
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\_pytest\config\__init__.py", line 178, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_manager.py", line 120, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_callers.py", line 103, in _multicall
res = hook_impl.function(*args)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\_pytest\main.py", line 332, in pytest_cmdline_main
return wrap_session(config, _main)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\_pytest\main.py", line 285, in wrap_session
session.exitstatus = doit(config, session) or 0
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\_pytest\main.py", line 339, in _main
config.hook.pytest_runtestloop(session=session)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_manager.py", line 120, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_callers.py", line 103, in _multicall
res = hook_impl.function(*args)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\_pytest\main.py", line 364, in pytest_runtestloop
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
File "D:\apps\Python\virtual_envs\dev_organiser\lib\site-packages\pluggy\_manager.py", line 120, in _hookexec
...
</code></pre>
<p>(and on and on and on...)</p>
<p>It may be that this sort of phenomenon is specific to my setup for some reason. But I really don't need all these details of very long paths through the pytest internals, including nonsense about "pluggy", etc., each time I'm trying to follow what may have occurred.</p>
<p>It would be helpful to know whether people typically get something <strong>different</strong> from this during normal runs of pytest at a command prompt. Is this atypical output? Currently I have no way of knowing.</p>
<p>In Python <code>logging</code> module there is a method in the most basic <code>Formatter</code> class (superclass of all others) which you can implement (basic version does nothing): <a href="https://github.com/python/cpython/blob/main/Lib/logging/__init__.py" rel="nofollow noreferrer"><code>formatStack(stack_info)</code></a>. This gives you one parameter, a multiline string, which you can edit to cut out unnecessary lines. If Pytest doesn't have an "official" way of configuring its logger formatters to cut out uninteresting stack lines I'll have to try to see whether this is one way to try and do things.</p>
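<p>To make that last idea concrete, a minimal sketch of such a formatter (an assumption, not an official pytest mechanism; the filter terms are made up):</p>
<pre class="lang-py prettyprint-override"><code>import logging

class TrimmedStackFormatter(logging.Formatter):
    """Drop frames that live in site-packages from a record's stack info."""

    def formatStack(self, stack_info: str) -> str:
        kept, skip_source_line = [], False
        for line in stack_info.splitlines():
            if skip_source_line:              # source line of a dropped frame
                skip_source_line = False
                continue
            if line.lstrip().startswith('File "') and "site-packages" in line:
                skip_source_line = True       # drop this frame header and its source line
                continue
            kept.append(line)
        return "\n".join(kept)
</code></pre>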
|
<python><pytest><stack-trace>
|
2025-01-22 18:50:24
| 1
| 16,076
|
mike rodent
|
79,378,886
| 5,989,199
|
Syntax highlighting for Jinja and AlpineJS
|
<pre><code>{# jinja-html #}
{% extends "base.html" %}
{% block content %}
<div class="min-h-screen bg-gray-100" x-data="{
photos: [],
selectedPhoto: null,
previewUrl: null,
uploadProgress: 0,
question: '', // Added question field
init() {
// Store auth token from server
if ('{{ auth_token }}') {
Alpine.store('auth').setToken('{{ auth_token }}');
}
},
</code></pre>
<p>I could not get the Alpine.js syntax highlighting to work. I have the Alpine.js IntelliSense and Better Jinja extensions, and I wonder if any of you have experience using Python with Alpine.js. Any suggestions would be appreciated.</p>
|
<python><jinja2><alpine.js>
|
2025-01-22 18:33:40
| 0
| 2,481
|
Daryl Wong
|
79,378,822
| 5,402,618
|
MongoEngine library doesn't allow to use its BaseFields for persisting data
|
<p>I started to use the <a href="https://docs.mongoengine.org/tutorial.html" rel="nofollow noreferrer">mongoengine</a> library in order to read and write data from MongoDB.</p>
<p>I found some strange behavior. The following class is defined in my codebase:</p>
<pre><code>class CustomerRequest(BaseModel):
    type = fields.EnumField(RequestType)
    status = fields.EnumField(Status, default=Status.NEW)
    request_date = fields.DateTimeField(db_field='requestDate')
</code></pre>
<p>I created an object of type CustomerRequest:</p>
<pre><code>customer_request = CustomerRequest(type=Type.complaints, status=Status.Opened, request_date=DateTimeField("2023-03-01 00:00:00"))
</code></pre>
<p>and then I persisted customer_request using mongoengine library:</p>
<pre><code>customer_request.save()
</code></pre>
<p>After executing this line, I got the error:</p>
<blockquote>
<p>Validation Failed: cannot parse date "<mongoengine.fields.datetimefield...>"</p>
</blockquote>
<p>However, I found that if I create a CustomerRequest object with a "regular" Python datetime object:</p>
<pre><code>customer_request = CustomerRequest(type=Type.complaints, status=Status.Opened, request_date=datetime.fromisoformat("2023-03-01 00:00:00"))
</code></pre>
<p>Then persistence works without any error.</p>
<p>I don't understand why CustomerRequest defines <code>request_date</code> as a field of type DateTimeField, but expects this field to be of type datetime in order to persist it. Should I define two different classes for CustomerRequest: one that describes the data in the database, and a second that describes the object in Python code?</p>
|
<python><mongodb><mongoengine>
|
2025-01-22 18:07:44
| 1
| 15,182
|
CrazySynthax
|
79,378,667
| 8,605,685
|
How to redirect OpenCV backend error messages to my own logger
|
<p>When using OpenCV to enumerate cameras with:</p>
<pre><code>import logging

import cv2

logger = logging.getLogger("capture_images")

def list_available_cameras(max_cameras=10):
    """List all available camera indices."""
    available_cameras = []
    for index in range(max_cameras):
        logger.info(f"Checking camera {index}...")
        cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)
        if cap.isOpened():
            available_cameras.append(index)
            cap.release()
    return available_cameras
</code></pre>
<p>I get the output:</p>
<pre><code>2025-01-22 10:12:32,267 - capture_images - INFO - Checking camera 0...
2025-01-22 10:12:34,294 - capture_images - INFO - Checking camera 1...
2025-01-22 10:12:34,991 - capture_images - INFO - Checking camera 2...
[ WARN:0@2.788] global cap.cpp:480 cv::VideoCapture::open VIDEOIO(DSHOW): backend is generally available but can't be used to capture by index
</code></pre>
<p>How can I capture the error message and pass it to my own logger?</p>
<p>Things I have tried:</p>
<ul>
<li><code>cv2.utils.logging.setLogLevel(cv2.utils.logging.LOG_LEVEL_SILENT)</code></li>
<li><code>os.environ["OPENCV_LOG_LEVEL"]="SILENT"</code></li>
<li><code>os.environ["OPENCV_VIDEOIO_DEBUG"] = "0"</code></li>
<li>Redirecting <code>stderr</code> to an <code>io.StringIO()</code> buffer</li>
</ul>
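<p>A hedged sketch of an OS-level capture (based on the assumption that the warning is printed by OpenCV's native code directly to file descriptor 2, which would explain why redirecting <code>sys.stderr</code> or an <code>io.StringIO()</code> buffer has no effect):</p>
<pre class="lang-py prettyprint-override"><code>import os
import tempfile

def capture_native_stderr(func, *args, **kwargs):
    """Run func while file descriptor 2 is redirected to a temp file; return (result, text)."""
    saved_fd = os.dup(2)
    with tempfile.TemporaryFile(mode="w+b") as tmp:
        os.dup2(tmp.fileno(), 2)            # point fd 2 at the temp file
        try:
            result = func(*args, **kwargs)
        finally:
            os.dup2(saved_fd, 2)            # restore the real stderr
            os.close(saved_fd)
        tmp.seek(0)
        captured = tmp.read().decode(errors="replace")
    return result, captured

# usage sketch:
#   cap, native_output = capture_native_stderr(cv2.VideoCapture, index, cv2.CAP_DSHOW)
#   if native_output:
#       logger.warning(native_output.strip())
</code></pre>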
|
<python><opencv><logging>
|
2025-01-22 17:16:59
| 0
| 12,587
|
Salvatore
|
79,378,624
| 2,836,049
|
Error cannot import name 'Connectable' from 'sqlalchemy.engine.base' with pangres 4.1.2 and sqlalchemy 2.0.36
|
<p>When I try to run my Python 3.9 script, I get the following error:</p>
<pre><code> File "/usr/local/lib/python3.9/site-packages/pangres/__init__.py", line 1, in <module>
from pangres.core import aupsert, upsert
File "/usr/local/lib/python3.9/site-packages/pangres/core.py", line 9, in <module>
from sqlalchemy.engine.base import Connectable
ImportError: cannot import name 'Connectable' from 'sqlalchemy.engine.base' (/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py)
</code></pre>
<p>I am using pangres 4.1.2 and sqlalchemy 2.0.36.</p>
|
<python><sqlalchemy>
|
2025-01-22 17:03:35
| 1
| 1,461
|
John Langford
|
79,378,514
| 122,792
|
Force Altair chart to display years
|
<p>Using a data frame of dates and values starting from 1 Jan 2022:</p>
<pre><code>import datetime as dt
import altair as alt
import polars as pl
import numpy as np
alt.renderers.enable("browser")
dates = pl.date_range(dt.date(2022, 1, 1), dt.date(2025, 1, 22), "1d", eager = True)
values = np.random.uniform(size = len(dates))
df = pl.DataFrame({"dates": dates, "values": values})
alt.Chart(df).mark_point().encode(alt.X("dates:T"), alt.Y("values:Q")).show()
</code></pre>
<p><a href="https://i.sstatic.net/8MHcwoZT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MHcwoZT.png" alt="correct scatter plot with year dates" /></a></p>
<p>But if I start the data frame from 2020 and filter it for dates > 1 Jan 2022:</p>
<pre><code>dates_b = pl.date_range(dt.date(2020, 1, 1), dt.date(2025, 1, 22), "1d", eager = True)
values_b = np.random.uniform(size = len(dates_b))
df_b = pl.DataFrame({"dates": dates_b, "values": values_b})
alt.Chart(df_b.filter(pl.col("dates") > dt.date(2022, 1, 1))).mark_point().encode(alt.X("dates:T"), alt.Y("values:Q")).show()
</code></pre>
<p><a href="https://i.sstatic.net/wVoCuZY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wVoCuZY8.png" alt="incorrect scatter plot with missing years" /></a></p>
<p>How can I specify that years <em>must</em> be shown?</p>
<p>Note that I do get the right result if I filter using <code>>=</code> to include 1 Jan 2022, but that's beside the point. I always need years.</p>
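<p>A small sketch of one way to force this (an assumption: the default time axis simply omits the year at this range, and an explicit d3 time-format string overrides it):</p>
<pre class="lang-py prettyprint-override"><code>alt.Chart(df_b.filter(pl.col("dates") > dt.date(2022, 1, 1))).mark_point().encode(
    alt.X("dates:T", axis=alt.Axis(format="%b %Y")),  # always include the year in labels
    alt.Y("values:Q"),
).show()
</code></pre>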
|
<python><python-polars><altair>
|
2025-01-22 16:26:20
| 1
| 25,088
|
Thomas Browne
|
79,378,481
| 7,376,511
|
ipdb set_trace: retry block
|
<pre><code>import ipdb
my_var = 1
try:
    assert my_var == 2
except AssertionError:
    ipdb.set_trace()
    raise
</code></pre>
<p>Is there an easy way to retry the failing block if the value of <code>my_var</code> is changed in the resulting ipdb console?</p>
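<p>A minimal sketch of the usual pattern (not a built-in ipdb feature as far as I know): wrap the block in a loop, change <code>my_var</code> at the prompt, then <code>continue</code> to re-run it.</p>
<pre class="lang-py prettyprint-override"><code>import ipdb

my_var = 1
while True:
    try:
        assert my_var == 2
        break                     # block succeeded, stop retrying
    except AssertionError:
        ipdb.set_trace()          # change my_var here, then `c` to retry the block
</code></pre>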
|
<python><ipdb>
|
2025-01-22 16:13:17
| 0
| 797
|
Some Guy
|
79,378,440
| 16,389,095
|
How to change theme mode of a container and its children in Python Flet
|
<p>I'm trying to develop an app with multiple views. Each view should have its own theme and theme mode. To solve this problem, I decided to deal with an easier one first. I would like to set a page theme and theme mode, and add a container with a different theme mode or a different theme too. I found something similar in the <a href="https://flet.dev/docs/cookbook/theming/#nested-themes" rel="nofollow noreferrer">Flet references</a>. So, I set the page theme mode to <em>'LIGHT'</em>, the page theme to <em>'BLUE'</em> and the page dark theme to <em>'GREEN'</em>. I added a button and a textfield on top of the page to evaluate the rendering. Finally, I added a container with a button and a textfield, setting its theme mode to <em>'DARK'</em>. Optionally, I would like to set a different theme for the container as well. Here is the code:</p>
<pre><code>import flet as ft
def main(page: ft.Page):
    page.controls.append(ft.ElevatedButton("Click me!"))
    page.controls.append(ft.TextField(hint_text="Enter text"))
    page.controls.append(
        ft.Container(
            content=ft.Column(
                [
                    ft.ElevatedButton("Click me!"),
                    ft.TextField(hint_text="Enter text"),
                ]
            ),
            theme_mode=ft.ThemeMode.DARK,
            #theme=ft.Theme(color_scheme_seed=ft.colors.RED),
            padding=40,
            border_radius=10,
            border=ft.border.all(3, "red"),
        )
    )

    page.theme_mode = ft.ThemeMode.LIGHT
    page.theme = ft.Theme(color_scheme_seed=ft.colors.BLUE)
    page.dark_theme = ft.Theme(color_scheme_seed=ft.colors.GREEN)
    page.window_height = 700
    page.window_width = 500
    page.padding = 0
    page.scroll = "auto"
    page.window_left = 700
    page.horizontal_alignment = ft.CrossAxisAlignment.CENTER
    page.vertical_alignment = ft.MainAxisAlignment.CENTER
    page.update()
ft.app(target=main)
</code></pre>
<p>And this is the output:</p>
<p><a href="https://i.sstatic.net/IxvMuZcW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxvMuZcW.png" alt="enter image description here" /></a></p>
<p>As you can see, nothing changes inside the container. How can I set a different theme mode and a different theme for both the container and its children?</p>
|
<python><flutter><flet>
|
2025-01-22 15:58:57
| 1
| 421
|
eljamba
|
79,378,313
| 3,554,721
|
Contravariance in Python protocols
|
<p>I have defined a simple protocol:</p>
<pre><code>from typing import Protocol, TypeVar

K = TypeVar('K', contravariant=True)
V = TypeVar('V', contravariant=True)

class Settable(Protocol[K, V]):
    def __setitem__(self, key: K, value: V) -> None:
        ...
</code></pre>
<p>and a function that accepts an object that implements it</p>
<pre><code>def hw(obj: Settable[int,str]) -> None:
    obj[0] = 'hello'
    obj[1] = 'world'
</code></pre>
<p><code>mypy</code> and <code>pyright</code> take issue with the following code:</p>
<pre><code>l = [None, None]
hw(l)
</code></pre>
<p>namely</p>
<pre><code>Argument of type "list[None]" cannot be assigned to parameter "obj" of type "Settable[int, str]" in function "hw"
"list[None]" is incompatible with protocol "Settable[int, str]"
"__setitem__" is an incompatible type
No overloaded function matches type "(key: K@Settable, value: V@Settable) -> None"
</code></pre>
<p>but why? <code>list</code> has a <code>__setitem__(idx: int, obj: Any)</code> method and contravariance means that it is an instance of <code>__setitem__(idx: int, obj: str)</code>. Are <code>mypy</code> and <code>pyright</code> in line with the spec here?</p>
|
<python><python-typing><mypy><pyright>
|
2025-01-22 15:20:20
| 0
| 3,224
|
0x60
|
79,378,168
| 13,498,838
|
How to access and modify the original next_run_time of a paused Job in APScheduler?
|
<p>I am using Python 3.12 and APScheduler version 3.11.0.</p>
<p>I have scheduled a job to run at a specific time. If the job needs to be paused before execution, I would like to adjust its next_run_time before resuming it. However, I am unsure how to access the <strong>originally set trigger time</strong> (the time specified when scheduling the job).</p>
<p>Here's an example to illustrate the issue:</p>
<pre><code>from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler
from time import sleep
scheduler = BackgroundScheduler()
scheduler.start()
def do_stuff():
    pass

job = scheduler.add_job(
    func=do_stuff,
    trigger="date",
    next_run_time=datetime.now() + timedelta(seconds=10),
    misfire_grace_time=None,
    max_instances=10,
)
print("[SET] Next run time:", job.next_run_time)
print("[SET] Trigger:", job.trigger)
# Pause the job before execution
sleep(5)
job.pause()
print("[PAUSE] Next run time:", job.next_run_time)
# Wait for the original run time to pass
sleep(30)
# Resume the job
job.resume()
print("[RESUMED] Next run time:", job.next_run_time)
scheduler.shutdown()
</code></pre>
<p>Running the above outputs:</p>
<pre><code>[SET] Next run time: 2025-01-22 14:25:44.252052+00:00
[SET] Trigger: date[2025-01-22 14:25:34 GMT]
[PAUSE] Next run time: None
[RESUMED] Next run time: 2025-01-22 14:25:34.252282+00:00
</code></pre>
<p>From this output, it seems that when a paused job is resumed, APScheduler recomputes <code>next_run_time</code> from the job's trigger. However, I want to access the <strong>originally set</strong> <code>next_run_time</code> so that I can modify it before resuming the job.</p>
<p>Is it possible to retrieve the job's originally scheduled <code>next_run_time</code> after the job has been paused?</p>
<p>If not, is there a recommended way to store and modify this information externally before resuming the job?</p>
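<p>For reference, a hedged sketch of the external-storage route (assumptions: in APScheduler 3.x pausing works by clearing <code>next_run_time</code>, and <code>Job.modify()</code> accepts a new <code>next_run_time</code>):</p>
<pre class="lang-py prettyprint-override"><code>original_run_time = job.next_run_time   # capture it before pausing, while it is still set
job.pause()

# ... later, instead of job.resume(): ...
job.modify(next_run_time=original_run_time + timedelta(seconds=30))
</code></pre>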
|
<python><apscheduler>
|
2025-01-22 14:36:19
| 0
| 1,454
|
jda5
|
79,377,876
| 8,112,349
|
How can I convert this pandas.series result to integer?
|
<p>I've got a column <code>['Duration']</code> which is an <code>int</code> datatype. I'm now trying to find out the most frequently occurring <code>['Duration']</code> in a pandas dataframe.</p>
<pre><code> duration = (inter['duration'].mode())
print(duration)
</code></pre>
<p>Result:</p>
<pre><code> 0 94
Name: duration, dtype: int64
</code></pre>
<p>The answer is right, but the datatype is wrong; it should be an integer. When I run the <code>type</code> function on the <code>duration</code> variable, it shows this:</p>
<pre><code> type(duration)
</code></pre>
<p>Result:</p>
<pre><code> pandas.core.series.Series
</code></pre>
<p>The variable <code>duration</code> should be an integer and not a pandas Series. How can I convert it?</p>
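<p>A small sketch of the usual conversion (<code>mode()</code> returns a Series because there can be ties, so one value has to be picked explicitly):</p>
<pre class="lang-py prettyprint-override"><code>duration = int(inter['duration'].mode().iloc[0])
print(duration, type(duration))   # 94, int
</code></pre>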
|
<python><pandas><dataframe><type-conversion>
|
2025-01-22 13:25:20
| 1
| 825
|
user234568
|
79,377,877
| 8,112,349
|
How to convert this pandas.series result to integer?
|
<p>I'm working through some questions and I'm stuck on this one. Basically, I've got a column <code>['Duration']</code> which is an <code>int</code> datatype. I'm now trying to find out the most frequently occurring <code>['Duration']</code> in a pandas dataframe.</p>
<pre><code> duration = (inter['duration'].mode())
print(duration)
</code></pre>
<p>Result:</p>
<pre><code> 0 94
Name: duration, dtype: int64
</code></pre>
<p>The answer is right, but the datatype is wrong; it should be an integer. When I run the <code>type</code> function on the <code>duration</code> variable, it shows this:</p>
<pre><code> type(duration)
</code></pre>
<p>Result:</p>
<pre><code> pandas.core.series.Series
</code></pre>
<p>The variable <code>duration</code> should be an integer and not a pandas Series. How can I convert it?</p>
|
<python><pandas><dataframe><type-conversion>
|
2025-01-22 13:25:20
| 0
| 825
|
user234568
|
79,377,791
| 7,980,206
|
What are variables generated as "_1", "_2", "_3"... "_1i"...."_" when printing dir()?
|
<p>I am running Python with IPython. Out of nowhere, the following variables are being created when I print <code>dir()</code>.</p>
<p>Here are some of the variables, produced with the following code:</p>
<pre class="lang-py prettyprint-override"><code>for i in dir():
    if "_" in i:
        print(i)
# Output:
_
_12
_13
_14
_15
_19
__
___
_i
_i1
_i10
_i11
_i2
_i3
_i4
_i5
_i6
_i7
_i8
_i9
_ih
_ii
_iii
_oh
</code></pre>
|
<python><ipython>
|
2025-01-22 13:02:00
| 1
| 717
|
ggupta
|
79,377,681
| 876,832
|
Is it a bug to use subprocess.run() in a multithreaded script?
|
<p>I have a long build script which is a mix of "real" python code and lengthy <code>subprocess.run()</code> calls to things like <code>debootstrap</code>, <code>wget</code>, <code>apt-get</code> or <code>build.sh</code>.</p>
<p>Everything is parallelized. One thread does <code>debootstrap</code> followed by <code>apt-get</code>, while another does <code>build.sh</code>, then both are joined and we start more threads for combining the results and so on. Each thread logs to a dedicated file.</p>
<p>While looking for a generic way to log the output of <code>subprocess.run()</code> started by one of these threads, I came upon the following answer: <a href="https://stackoverflow.com/a/31868783/876832">https://stackoverflow.com/a/31868783/876832</a></p>
<p>Is it a bug to call <code>subprocess.run()</code> in a python script with more than one thread?</p>
|
<python><multithreading>
|
2025-01-22 12:19:43
| 1
| 598
|
Fadeway
|
79,377,386
| 12,871,587
|
How to handle #DIV/0! errors with pl.read_excel() when using the calamine/fastexcel engine in Polars?
|
<p>I'm working with a messy Excel file and trying to read it using the <code>pl.read_excel()</code> method in Polars with the fastexcel or calamine engine. My goal is to load only 3 specific columns: "<code>apple_column</code>", "<code>banana_column</code>", and "<code>kiwi_column</code>".</p>
<p>Here’s what I’ve tried:</p>
<pre class="lang-py prettyprint-override"><code>pl.read_excel(
    source=xlsx_file_path,
    sheet_name="name_of_the_sheet",
    columns=["apple_column", "banana_column", "kiwi_column"],
)
</code></pre>
<p>and also:</p>
<pre class="lang-py prettyprint-override"><code>pl.read_excel(
    source=xlsx_file_path,
    sheet_name="name_of_the_sheet",
    read_options={
        "use_columns": ["apple_column", "banana_column", "kiwi_column"],
    },
)
</code></pre>
<p>Unfortunately, both approaches result in the same error:</p>
<pre><code>CalamineCellError: calamine cell error: #DIV/0!
Context:
0: could not determine dtype for column __UNNAMED__25
</code></pre>
<p>It seems that even though the columns I need (<code>["apple_column", "banana_column", "kiwi_column"]</code>) are not related to the problematic column (<code>__UNNAMED__25</code>), the engine tries to read the entire sheet, encountering a <code>#DIV/0!</code> error in one of the unused columns.</p>
<p>Does this mean that the calamine/fastexcel engine always reads the entire sheet, even if specific columns are specified? Also, what would be the recommended workaround?</p>
|
<python><python-polars>
|
2025-01-22 10:35:50
| 2
| 713
|
miroslaavi
|
79,377,336
| 2,915,050
|
Python unittest failing on function importing local file
|
<p>I have a Python package with a directory that looks like this:</p>
<pre><code>-root
|
|-src
| |
| -app
| |
| |-__init__.py
| |-__main__.py
| |-file1.py
| |-file2.py
|
|-tests
|
|-__init__.py
|-test_function.py
</code></pre>
<p><code>file1.py</code> looks like this</p>
<pre><code>from app import file2
def function():
    return file2.another_function()
</code></pre>
<p><code>test_function.py</code> looks like this</p>
<pre><code>import unittest
from src.app.file1 import function

class TestFunction(unittest.TestCase):
    def test_case(self):
        self.assertTrue(function())
</code></pre>
<p>However, whilst as a package this works fine, when running the test I get a <code>ModuleNotFoundError: No module named 'app'</code> on <code>from app import file2</code>.</p>
<p>It works when I put <code>src.</code> in front of <code>app</code> in <code>file1.py</code>, but then the package breaks. Looking around, there isn't really a clear way to fix this, so I'm unsure what my path should look like.</p>
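<p>One commonly suggested sketch (an assumption, not the only fix): keep <code>from app import file2</code> as it is and make the src layout importable during tests, for example from <code>tests/__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys

# put root/src on sys.path so "app" is importable when tests run from the repo root
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
</code></pre>
<p>The test would then import <code>from app.file1 import function</code> rather than <code>from src.app.file1 import function</code>; installing the package in editable mode is the tidier alternative.</p>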
|
<python><unit-testing><python-import>
|
2025-01-22 10:22:21
| 0
| 1,583
|
RoyalSwish
|
79,377,216
| 16,906,505
|
How to use sqlAlchemy events correctly with FastAPI
|
<p>I have some strange behavior that I don't really understand. I have a function that returns a DB session that knows the user email (or raises 401), and a function that returns a DB session without additional logic.</p>
<p>I noticed that when a user registers after someone else has logged in and used the DB session with events, the <code>created_by</code> field is populated with the email of the last logged-in user.</p>
<p>I debugged the code and noticed that when the event is triggered by a user that is not logged in, <code>current_user_email</code> is somehow cached with the email of the previously logged-in user, and the dependency <code>get_current_user_email</code>, which should raise 401, is not even called.</p>
<pre><code>def get_db(current_user_email: str = Depends(get_current_user_email)):
    session = SessionLocal()
    try:
        @event.listens_for(Base, "before_insert", propagate=True)
        def set_created_by(mapper, connection, target):
            target.created_by = current_user_email
            target.created_date = func.now()

        @event.listens_for(Base, "before_update", propagate=True)
        def set_updated_audit_fields(mapper, connection, target):
            target.updated_by = current_user_email
            target.updated_date = func.now()

        yield session
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()


def get_db_without_current_user():
    session = SessionLocal()
    try:
        yield session
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
</code></pre>
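<p>A hedged sketch of an alternative wiring (assumption: listeners registered inside <code>get_db</code> are global and accumulate with every request, each closing over whatever email was current when it was registered; registering one module-level listener and passing the email through <code>Session.info</code> avoids that):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import event, func
from sqlalchemy.orm import object_session

@event.listens_for(Base, "before_insert", propagate=True)
def set_created_by(mapper, connection, target):
    session = object_session(target)
    target.created_by = session.info.get("current_user_email") if session else None
    target.created_date = func.now()

def get_db(current_user_email: str = Depends(get_current_user_email)):
    session = SessionLocal()
    session.info["current_user_email"] = current_user_email   # per-session, not global
    try:
        yield session
    finally:
        session.close()
</code></pre>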
|
<python><sqlalchemy><fastapi>
|
2025-01-22 09:48:54
| 0
| 500
|
RheinmetallSkorpion
|
79,377,019
| 11,663,956
|
"EOF in transport thread" when implementing challenge-response authentication with Python Paramiko
|
<p>I need to login to a Linux server and it has below configuration in <code>sshd_config</code>:</p>
<pre><code>PasswordAuthentication yes
ChallengeResponseAuthentication yes
UsePAM yes
</code></pre>
<p>When I log in through PuTTY, it first asks me to input the password, followed by the RSA token from an authentication app. I need to do the same via Python, for some automation tasks. Here's my code:</p>
<pre><code>import paramiko, traceback
from getpass import getpass
paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
hostname = '192.169.10.10'
port = 22
username = get_user_name()
password = keyring.get_password('unix',username) # This is my first password
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    transport = paramiko.Transport((hostname, port))
    try:
        transport.connect(username=username, password=password)
    except Exception as e:
        print(e)

    def challenge_handler(title, instructions, prompt_list):
        responses = []
        for prompt in prompt_list:
            if "password" in prompt[0].lower():
                responses.append(password)
            elif "rsa" in prompt[0].lower():
                token = getpass(f"Enter {prompt[0].strip()}: ")
                responses.append(token)
            else:
                responses.append(getpass(f"Enter {prompt[0].strip()}: "))
        return responses

    transport.auth_interactive(username, handler=challenge_handler)  # problem starts
    print("Authentication successful.")

    session = transport.open_session(timeout=10)  # Failed with EOF problem
    if session.active:
        print("Session opened successfully.")
        session.exec_command('uname')
        output = session.recv(1024).decode()
        print("Command output:")
        print(output)
        error = session.recv_stderr(1024).decode()
        if error:
            print("Command errors:")
            print(error)
        session.close()
    else:
        print("Failed to open session.")
except Exception as e:
    print(f"Error: {e}")
    traceback.print_exc()
finally:
    if 'ssh' in locals():
        ssh.close()
        print("Connection closed.")
</code></pre>
<p>I couldn't figure out what's wrong. I'd appreciate it if you could shed some light on this. If you feel I missed some information, please let me know.</p>
<p>And I got the logs below from paramiko around that authentication-successful message:</p>
<blockquote>
<p>INFO:paramiko.transport:Authentication (keyboard-interactive) successful!<br />
DEBUG:paramiko.transport:[chan 0] Max packet in: 32768 bytes<br />
DEBUG:paramiko.transport:<strong>EOF in transport thread</strong></p>
</blockquote>
<p>I think this is where the problem starts.</p>
<p>Thank you in advance</p>
|
<python><ssh><paramiko><challenge-response>
|
2025-01-22 08:43:06
| 1
| 347
|
Ginger_Chacha
|
79,376,992
| 719,001
|
vscode extension uses wrong python
|
<p>I am trying to use the <code>sqlfluff</code> linter, which is a SQL formatter and linter. In settings.json I have, among other things, this:</p>
<pre><code> "[sql]": {
"editor.defaultFormatter": "dorzey.vscode-sqlfluff" // Use SQLFluff formatter
},
"editor.formatOnSave": true,
"workbench.settings.applyToAllProfiles": [],
"python.defaultInterpreterPath": "/Library/Frameworks/Python.framework/Versions/3.12/bin/python3.12" // Enable auto-formatting on save
</code></pre>
<p>I am using MacBook and when I press save on a SQL file I get:</p>
<pre><code>Traceback (most recent call last):
File "/opt/homebrew/bin/sqlfluff", line 8, in <module>
sys.exit(cli())
^ ^^^^
File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^ ^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^ ^^^^^^^^^^ ^^^^^^^^^^ ^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^ ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^...
</code></pre>
<p><code>/opt/homebrew/lib/python3.11</code> is not the python library I want it to use, which is <code>/Library/Frameworks/Python.framework/Versions/3.12/bin/python3.12</code>.</p>
<p>I tried:</p>
<ol>
<li>Reinstall the extension</li>
<li>Set it in <code>Python: Select Interpreter</code> (real path and virtual env)</li>
</ol>
<p>From the terminal, I get the correct version of <code>python</code>:</p>
<pre><code>nir@mac % python -V
Python 3.12.8
nir@mac % which python
python: aliased to python3
nir@mac % which python3
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3
</code></pre>
<p>For <code>sqlfluff</code>, however, I do not get the right version in the terminal:</p>
<pre><code>nir@mac % which sqlfluff
/opt/homebrew/bin/sqlfluff
nir@mac % sqlfluff version
2.3.5
</code></pre>
<p>In my pyproject.toml I have:</p>
<pre><code>dependencies = [
"sqlfluff==3.3.0",
...
]
</code></pre>
|
<python><visual-studio-code><vscode-python><sqlfluff>
|
2025-01-22 08:35:42
| 1
| 2,677
|
Nir
|
79,376,870
| 6,141,238
|
How can I remove the dotted border around the tab in focus in a multi-tabbed PyQt5 window?
|
<p>I have built a multi-tab window using PyQt5. Each tab has a thin dotted border when in focus:</p>
<p><a href="https://i.sstatic.net/C4lBpkrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C4lBpkrk.png" alt="enter image description here" /></a></p>
<p>How do I remove this dotted border? (My operating system is Windows 10 if that matters.)</p>
<p>Here is the Python code that generates the above window:</p>
<pre><code>from PyQt5.QtWidgets import QTabWidget, QWidget, QVBoxLayout, QApplication
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg
from numpy.random import rand
from sys import argv, exit
class mainWindow(QTabWidget):
def __init__(self, parent = None):
super(mainWindow, self).__init__(parent)
# Create tab1
self.tab1 = QWidget()
self.addTab(self.tab1,"Tab 1")
self.figure = plt.figure()
self.canvas = FigureCanvasQTAgg(self.figure)
layout = QVBoxLayout()
layout.addWidget(self.canvas)
self.tab1.setLayout(layout)
self.plot()
# Create tab2
self.tab2 = QWidget()
self.addTab(self.tab2,"Tab 2")
self.figure = plt.figure()
self.canvas = FigureCanvasQTAgg(self.figure)
layout = QVBoxLayout()
layout.addWidget(self.canvas)
self.tab2.setLayout(layout)
self.plot()
def plot(self):
data = rand(10)
ax = self.figure.add_subplot(111)
ax.plot(data, '*-')
self.canvas.draw()
app = QApplication(argv)
main = mainWindow()
main.show()
exit(app.exec_())
</code></pre>
<p>Several Stack Overflow questions like <a href="https://stackoverflow.com/questions/9795791">this</a>, <a href="https://stackoverflow.com/questions/63886039">this</a>, and <a href="https://stackoverflow.com/questions/9044570">this</a> one seem to suggest that the border might be removed by adding a line like</p>
<pre><code>self.tab1.setFocusPolicy(QtCore.Qt.NoFocus)
</code></pre>
<p>or</p>
<pre><code>self.tab1.setStyleSheet("QTableView:{outline: 0;}")
</code></pre>
<p>somewhere in the above code. However, I have not yet found a way to do this that succeeds in removing the dotted border.</p>
|
<python><pyqt5><focus><border>
|
2025-01-22 07:47:17
| 1
| 427
|
SapereAude
|
79,376,281
| 3,004,257
|
Mod operator in Free Pascal gives a different result than expected
|
<p>The <code>mod</code> operator in Free Pascal does not produce the results I would expect.</p>
<p>This can be demonstrated by the program below whose output does not agree with the result of the same calculation in Python (or Google).</p>
<pre class="lang-pascal prettyprint-override"><code>program test(output);
var
a, b, c: longint;
begin
a := -1282397916;
b := 2147483647;
c := a mod b;
writeln (c:16);
end.
</code></pre>
<pre class="lang-pascal prettyprint-override"><code> -1282397916
</code></pre>
<p>Compare this to the output of the Python script below which gives the result I expected.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python
a = -1282397916
b = 2147483647
c = a % b
print (c)
</code></pre>
<pre class="lang-py prettyprint-override"><code>865085731
</code></pre>
<p>This is the same as the result obtained by pasting the following text into Google (and when using the <code>mod</code> operator in VAX‑Pascal).</p>
<pre class="lang-py prettyprint-override"><code>(-1282397916 % 2147483647)
</code></pre>
<p>The question is: Why does Free Pascal behave differently? And how do I get the same result as that obtained when using the <code>mod</code> operator in Python?</p>
|
<python><freepascal>
|
2025-01-22 00:42:59
| 1
| 417
|
Mike T.
|
79,376,131
| 3,045,351
|
Using Python subprocess to resolve relative import error
|
<p>I have a situation where, in a script, I activate a virtualenv and then use subprocess to run a set of scripts in a package that lives outside of the virtualenv, like so:</p>
<pre><code>#!/usr/local/lib/python3.10/virtual-environments/cogvideox/bin/python3.10
activate_this_file = "/usr/local/lib/python3.10/virtual-environments/cogvideox/bin/activate_this.py"
with open(activate_this_file) as f:
code = compile(f.read(), activate_this_file, 'exec')
exec(code, dict(__file__=activate_this_file))
import subprocess
fname = '/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/startup.py'
fdir = '/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/'
cmd = subprocess.Popen(['python', fname], stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd='../', shell=True)
results, err = cmd.communicate()
</code></pre>
<p>...the idea of using the 'cwd' parameter in subprocess was to set the working directory of the subprocess to the root folder where the code to be run lives. I am still getting the <code>no known parent package</code> error though.</p>
<p>Any ideas?</p>
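<p>A sketch of one approach that usually avoids this error, under two assumptions: (1) <code>startup.py</code> uses relative imports, so it has to be run as a module of its package (<code>python -m package.startup</code>) from the package's parent directory rather than as a plain script, and (2) the package folder has an importable name (the real folder name contains hyphens, so the package alias <code>cogvideox_wrapper</code> below is hypothetical). Note also that with <code>shell=True</code> and a list argument, everything after the first list element is passed to the shell itself, not to <code>python</code>.</p>
<pre><code>import subprocess

venv_python = '/usr/local/lib/python3.10/virtual-environments/cogvideox/bin/python3.10'
pkg_parent = '/content/drive/MyDrive/ComfyUI/custom_nodes'   # parent of the package

cmd = subprocess.Popen(
    [venv_python, '-m', 'cogvideox_wrapper.startup'],  # hypothetical importable package name
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    cwd=pkg_parent,      # so the package is found on sys.path
    # no shell=True: keep the argument list going straight to the interpreter
)
results, err = cmd.communicate()
</code></pre>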
|
<python><python-3.x><subprocess><virtualenv>
|
2025-01-21 22:49:45
| 0
| 4,190
|
gdogg371
|
79,376,031
| 5,013,066
|
'FileHandler' object has no attribute 'level'
|
<p>I am working on using the Python <code>logging</code> package. I am using Python 3.12.8 from <code>uv</code>.</p>
<p>The <a href="https://docs.python.org/3/howto/logging.html" rel="nofollow noreferrer">documentation</a> gives the following code snippet as an example:</p>
<pre class="lang-py prettyprint-override"><code>import logging
logger = logging.getLogger(__name__)
logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)
logger.debug('This message should go to the log file')
logger.info('So should this')
logger.warning('And this, too')
logger.error('And non-ASCII stuff, too, like Øresund and Malmö')
</code></pre>
<p>However, when I use the <code>force</code> argument to force basicConfig to run, I get an exception.</p>
<p>This is my minimum reproducible example:</p>
<p><em>logging_example/main.py</em></p>
<pre class="lang-py prettyprint-override"><code>import logging
def main(log_file_path: str, message: str) -> None:
logger: logging.Logger = logging.getLogger(__name__)
logging.basicConfig(
format='%(asctime)s.%(msecs)06d|%(levelname)s|saasapp|%(message)s',
datefmt='%Y-%m-%dT%H:%M:%S', encoding='utf-8', level=logging.INFO, filename=log_file_path,
force=True
)
logger.info(message)
return
</code></pre>
<p><em>tests/test_main.py</em></p>
<pre class="lang-py prettyprint-override"><code>import logging_example.main
def test_main() -> None:
# arrange
log_file_path: str = "~/minimum-reproducible-example.log"
message: str = "Hello, world."
# act
logging_example.main.main(log_file_path, message)
# assert
with open(log_file_path, 'r') as log_file:
assert message in log_file.read()
return
</code></pre>
<p>When I run this with <code>poetry run pytest</code>, I get the following exception output:</p>
<pre class="lang-bash prettyprint-override"><code>poetry run pytest
========================================================================= test session starts =========================================================================
platform win32 -- Python 3.12.8, pytest-8.3.4, pluggy-1.5.0
rootdir: c:\Users\USERNAME\workbench\playground\logging-example
configfile: pytest.ini
testpaths: tests
plugins: html-4.1.1, metadata-3.1.1
collected 1 item
tests\test_main.py F
============================================================================== FAILURES ===============================================================================
______________________________________________________________________________ test_main ______________________________________________________________________________
def test_main() -> None:
# arrange
log_file_path: str = "~/minimum-reproducible-example.log"
message: str = "Hello, world."
# act
> logging_example.main.main(log_file_path, message)
tests\test_main.py:9:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
logging_example\main.py:6: in main
logging.basicConfig(
..\..\..\AppData\Roaming\uv\python\cpython-3.12.8-windows-x86_64-none\Lib\logging\__init__.py:2118: in basicConfig
h = FileHandler(filename, mode,
..\..\..\AppData\Roaming\uv\python\cpython-3.12.8-windows-x86_64-none\Lib\logging\__init__.py:1231: in __init__
StreamHandler.__init__(self, self._open())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'FileHandler' object has no attribute 'level'") raised in repr()] FileHandler object at 0x2250a667200>
def _open(self):
"""
Open the current base file with the (original) mode and encoding.
Return the resulting stream.
"""
open_func = self._builtin_open
> return open_func(self.baseFilename, self.mode,
encoding=self.encoding, errors=self.errors)
E FileNotFoundError: [Errno 2] No such file or directory: 'c:\\Users\\USERNAME\\workbench\\playground\\logging-example\\~\\minimum-reproducible-example.log'
..\..\..\AppData\Roaming\uv\python\cpython-3.12.8-windows-x86_64-none\Lib\logging\__init__.py:1263: FileNotFoundError
======================================================================= short test summary info =======================================================================
FAILED tests/test_main.py::test_main - FileNotFoundError: [Errno 2] No such file or directory: 'c:\\Users\\USERNAME\\workbench\\playground\\logging-example\\~\\minimu...
========================================================================== 1 failed in 0.15s ==========================================================================
</code></pre>
<p>Am I using the logging package wrong, or is this a bug? This usage seems so simple and reproducible that it's surprising to me that I can hardly find anyone else on the web who has run into this.</p>
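<p>For what it's worth, the <code>'FileHandler' object has no attribute 'level'</code> text only comes from the <code>repr()</code> of a half-constructed handler; the real failure in the traceback is the <code>FileNotFoundError</code>, because <code>open()</code> does not expand <code>~</code> (it is treated as a literal <code>~</code> directory relative to the project). A minimal sketch of the same function with the path expanded (the test would need the same expansion when reading the file back):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import os

def main(log_file_path: str, message: str) -> None:
    logger: logging.Logger = logging.getLogger(__name__)
    logging.basicConfig(
        format='%(asctime)s.%(msecs)06d|%(levelname)s|saasapp|%(message)s',
        datefmt='%Y-%m-%dT%H:%M:%S',
        encoding='utf-8',
        level=logging.INFO,
        filename=os.path.expanduser(log_file_path),  # '~' is not expanded by open()
        force=True,
    )
    logger.info(message)
</code></pre>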
|
<python><python-logging>
|
2025-01-21 21:50:22
| 0
| 839
|
Eleanor Holley
|
79,375,917
| 7,376,511
|
Python: convert back and forth between utf-8 and unicode_escape, preserving character
|
<pre><code>s = "Hello隼"
s = s.encode("utf-8").decode("unicode_escape").encode("unicode_escape").decode("utf-8")
print(s)
</code></pre>
<p>This returns <code>Hello\\xe9\\x9a\\xbc</code>. But why?!</p>
<p>Yes, I know those escaped Unicode characters are equivalent to the kanji. But they're not equivalent in the terminal, nor when they're inside a URL or in other cases that actually need the symbol.</p>
<p>How do I get the original string back? I thought these operations were supposed to be reversible. Just the utf-8 steps or the unicode_escape steps by themselves work, but when mixed together they break it.</p>
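<p>For context, a sketch of a round trip that is reversible: <code>decode("unicode_escape")</code> on UTF-8 bytes interprets each byte as a separate Latin-1 character (plus escapes), which is where the <code>\xe9\x9a\xbc</code> mojibake comes from; escaping from the string side and un-escaping back avoids that step entirely.</p>
<pre><code>s = "Hello隼"

escaped = s.encode("unicode_escape").decode("ascii")      # 'Hello\\u96bc'
restored = escaped.encode("ascii").decode("unicode_escape")

print(escaped)         # Hello\u96bc
print(restored)        # Hello隼
print(restored == s)   # True
</code></pre>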
|
<python><unicode>
|
2025-01-21 20:55:01
| 1
| 797
|
Some Guy
|
79,375,876
| 5,856,587
|
Airflow task with parameter for task-generated mappings after a certain previous task
|
<p>I work with GCP Cloud Composer (Airflow), where I would like to list .csv files in a GCS storage bucket having a specific prefix and start an import into a GCP Cloud SQL instance for each of them. I would like to use <a href="https://airflow.apache.org/docs/apache-airflow/2.5.0/concepts/dynamic-task-mapping.html#" rel="nofollow noreferrer">Airflow Dynamic Task Mapping</a>, with <a href="https://airflow.apache.org/docs/apache-airflow/2.5.0/concepts/dynamic-task-mapping.html#task-generated-mapping" rel="nofollow noreferrer">Task-generated Mappings</a>: that is, getting one task to generate the list over which the import task is then <code>expand</code>ed.</p>
<p>The problem is:</p>
<ul>
<li>The generator task must take an argument that references the URI prefix, and this must be the evaluated Jinja template (it references a <code>task_instance.xcom_pull</code>) result</li>
<li>The generator task must be called after <code>generate_run_id</code>, so that it is aware of the correct location.</li>
</ul>
<p>I have something similar, which does not work; it errors out at the expand call
with <code>ValueError: expand() got an unexpected type 'str' for keyword argument 'prefix'</code>. Could someone please suggest a solution to this? For example, I do not understand how a parameterized task used to generate values for task-generated mappings is supposed to work.</p>
<p>If there is a simpler solution to import multiple files into a GCP Cloud SQL instance in one DAG, please let me know.</p>
<pre><code>#!/usr/bin/env python
import time
import logging
from airflow.operators.python import get_current_context
from airflow.decorators import task
from airflow.decorators import dag
from airflow import models
from airflow.operators import empty
from airflow.utils.state import State
from airflow.operators.python import task, get_current_context
from airflow.operators.python_operator import PythonOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.cloud_sql import CloudSQLExecuteQueryOperator
from airflow.providers.google.cloud.operators.cloud_sql import CloudSQLImportInstanceOperator
from airflow.utils.dates import days_ago
from airflow.models.param import Param
from airflow.models import Variable
from google.cloud import storage
import datetime
ENVIRONMENT_TYPE = # ...
GCP_REGION = # ...
foobar_PROJECT_ID = # ...
POSTGRESQL_PROJECT_ID = # ...
EXPORT_BUCKET = # ...
LOD_GCS_STAGING = # ...
EXPORT_PATH="foobar/{{ dag_run.conf.get('load_date', params.load_date) }}/{{ task_instance.xcom_pull(key='simple_run_id', task_ids='generate_run_id') }}"
EXPORT_URI=str("gs://" + EXPORT_BUCKET + "/" + EXPORT_PATH )
CLOUD_SQL_POSTGRESQL_CONNECTION_ID = "..."
def _generate_run_id(**context):
run_id = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
output_variable_name = 'simple_run_id'
context['ti'].xcom_push(key=output_variable_name, value=run_id)
logging.info(f"Generated '{output_variable_name}' with value: '{run_id}'")
default_args = {
"region": GCP_REGION,
'retries': 0
}
with models.DAG(
"FooBar", # id displayed in the DAG airflow page
default_args=default_args,
schedule_interval=None,
params={
"load_date": Param("2024-11-27", type="string", format="date", examples=["2024-11-27", "2024-06-01", "2023-12-29"])
},
max_active_tasks=1 ) as dag:
start = empty.EmptyOperator(task_id='start', trigger_rule='all_success')
end = PythonOperator(
task_id='end',
provide_context=True,
python_callable=final_status,
trigger_rule='all_done', # Ensures this task runs even if upstream fails
dag=dag,
)
generate_run_id = PythonOperator(
task_id="generate_run_id",
python_callable=_generate_run_id,
dag=dag
)
## ... lines omitted
@task
def build_request_bodies(prefix):
delimiter = '/'
logging.info(f"Listing bucket '{EXPORT_BUCKET}' with prefix: '{prefix}'")
storage_client = storage.Client()
blobs = storage_client.list_blobs(bucket_or_name=EXPORT_BUCKET, prefix=prefix, delimiter=delimiter)
csv_files = []
for blob in blobs:
if blob.name.endswith('.csv'):
csv_files.append(f"gs://{blob.bucket.name}/{blob.name}")
logging.info(f"Found {len(csv_files)} CSV files: '{csv_files}'")
bodies = list([{
"importContext": {
"fileType": "csv",
"uri": exported_file,
"database": "foobar",
"csvImportOptions": {
"table": "foobar_export.foo_landing",
}
}
} for exported_file in csv_files ])
logging.info(f"Built import request bodies: {bodies}'")
return bodies
# Import data exported from BigQuery
import_to_cloud_sql = CloudSQLImportInstanceOperator.partial(
project_id = POSTGRESQL_PROJECT_ID,
task_id = "import_to_cloud_sql",
instance = CLOUD_SQL_POSTGRESQL_INSTANCE,
map_index_template="""{{ task.body.importContext.uri }}"""
).expand(body = build_request_bodies.expand(prefix=f"{EXPORT_PATH}/"))
## ... lines omitted
start >>\
generate_run_id >>\
## ... lines omitted
build_request_bodies(prefix=f"{EXPORT_PATH}/") >>\
import_to_cloud_sql >>\
## ... lines omitted
end
</code></pre>
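<p>One detail that may explain the <code>ValueError</code> (hedged, since parts of the DAG are omitted): <code>expand()</code> only accepts something that can be mapped over (a list or an <code>XComArg</code>), so it cannot be used to pass the single templated <code>prefix</code> string; the generator task should just be called normally, and only the import operator should expand over its result. A sketch of that part, reusing the names from the DAG above:</p>
<pre><code># build_request_bodies is the @task-decorated generator from the DAG above;
# calling it returns an XComArg that expand() can map over.
request_bodies = build_request_bodies(prefix=f"{EXPORT_PATH}/")

import_to_cloud_sql = CloudSQLImportInstanceOperator.partial(
    project_id=POSTGRESQL_PROJECT_ID,
    task_id="import_to_cloud_sql",
    instance=CLOUD_SQL_POSTGRESQL_INSTANCE,
).expand(body=request_bodies)

# the dependency on generate_run_id still has to be declared so the xcom_pull
# inside EXPORT_PATH resolves before the bucket listing runs
generate_run_id >> request_bodies >> import_to_cloud_sql
</code></pre>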
|
<python><postgresql><google-cloud-storage><airflow><google-cloud-composer-2>
|
2025-01-21 20:35:31
| 0
| 555
|
Peter G. Horvath
|
79,375,816
| 6,005,699
|
Timestamps reset every 30 seconds when using distil-whisper with return_timestamps=True
|
<h2>Problem</h2>
<p><a href="https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form" rel="nofollow noreferrer">distil-large-v3#sequential-long-form</a></p>
<p>I'm using <code>distil-whisper</code> through the 🤗 Transformers pipeline for speech recognition. When setting <code>return_timestamps=True</code>, the timestamps reset to 0 every 30 seconds instead of continuing to increment throughout the entire audio file.</p>
<p>Here's my current code:</p>
<pre class="lang-py prettyprint-override"><code>pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
torch_dtype=torch_dtype,
device=device,
return_timestamps=True,
)
result = pipe("audio.mp4")
</code></pre>
<h2>Output</h2>
<p>The timestamps in the output look like this:</p>
<pre class="lang-py prettyprint-override"><code>{'chunks': [
{'text': 'First segment', 'timestamp': (0.0, 5.2)},
{'text': 'Second segment', 'timestamp': (5.2, 12.8)},
{'text': 'Later segment', 'timestamp': (28.4, 30.0)},
{'text': 'Should be ~35s but shows', 'timestamp': (0.0, 4.6)}, # Resets here!
...
]}
</code></pre>
<h2>Expected Behavior</h2>
<p>I expect the timestamps to continue incrementing past 30 seconds, like this:</p>
<pre class="lang-py prettyprint-override"><code>{'chunks': [
{'text': 'First segment', 'timestamp': (0.0, 5.2)},
{'text': 'Second segment', 'timestamp': (5.2, 12.8)},
{'text': 'Later segment', 'timestamp': (28.4, 30.0)},
{'text': 'Continues properly', 'timestamp': (30.0, 34.6)}, # Should continue
...
]}
</code></pre>
<h2>Environment</h2>
<ul>
<li>Python 3.10</li>
<li>transformers 4.36.2</li>
<li>torch 2.1.2</li>
<li>Model: distil-whisper-large-v3</li>
</ul>
<p>How can I fix this timestamp reset issue? Is there a way to make the timestamps continue incrementing throughout the entire audio file?</p>
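<p>Until the underlying behaviour is fixed, a post-processing workaround is possible (a heuristic sketch, assuming timestamps restart from ~0 at every 30-second window of the sequential long-form algorithm): rebuild absolute times by accumulating an offset whenever a chunk's start time jumps backwards. Switching the pipeline to the chunked algorithm (the <code>chunk_length_s</code> argument) may also be worth trying.</p>
<pre class="lang-py prettyprint-override"><code>def make_timestamps_absolute(chunks, window=30.0):
    """Heuristic: shift timestamps by +window each time they jump backwards."""
    absolute, offset, prev_end = [], 0.0, 0.0
    for chunk in chunks:
        start, end = chunk["timestamp"]
        if start is not None and start + offset < prev_end:
            offset += window                      # a new 30 s window started
        new_start = None if start is None else start + offset
        new_end = None if end is None else end + offset
        if new_end is not None:
            prev_end = new_end
        absolute.append({**chunk, "timestamp": (new_start, new_end)})
    return absolute

result["chunks"] = make_timestamps_absolute(result["chunks"])
</code></pre>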
|
<python><huggingface-transformers><transformer-model><openai-whisper>
|
2025-01-21 20:10:32
| 0
| 441
|
Martin Zhu
|
79,375,793
| 9,962,007
|
S3UploadFailedError due to MissingContentLength when calling PutObject in MLflow using MinIO
|
<p>When trying to save / upload a file using <code>mlflow.log_artifact()</code> to MinIO, our MLflow users are suddenly getting this error in previously working code, raised in <code>boto3</code> package (used by the <code>mlflow</code> package, and with S3-compatible local MinIO server acting as a data lake for our local installation of MLflow):</p>
<p><code>S3UploadFailedError: Failed to upload ./dict-20231204.2.json to mlflow/24/<run_id>/artifacts/dict-20231204.2.json: An error occurred (MissingContentLength) when calling the PutObject operation: You must provide the Content-Length HTTP header.</code></p>
<p>It is raised here:
<code>[..]/python3.11/site-packages/boto3/s3/transfer.py:378, in S3Transfer.upload_file()</code></p>
<p>Any solutions or at least workarounds to restore file uploads to MinIO buckets?</p>
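<p>If this broke without any code change, one thing worth checking (an assumption, not confirmed for this exact setup) is the <code>boto3</code>/<code>botocore</code> version: the 1.36.x releases changed the default S3 integrity-checksum behaviour, and some S3-compatible servers such as older MinIO builds reject those requests with <code>MissingContentLength</code>. Typical workarounds are pinning <code>boto3</code>/<code>botocore</code> below 1.36, upgrading the MinIO server, or relaxing the new defaults via environment variables before the S3 client is created, e.g.:</p>
<pre><code>import os

# must be set before boto3 (via mlflow) builds its S3 client
os.environ["AWS_REQUEST_CHECKSUM_CALCULATION"] = "when_required"
os.environ["AWS_RESPONSE_CHECKSUM_VALIDATION"] = "when_required"

import mlflow
mlflow.log_artifact("./dict-20231204.2.json")
</code></pre>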
|
<python><amazon-s3><boto><minio><mlflow>
|
2025-01-21 20:02:05
| 2
| 7,211
|
mirekphd
|
79,375,777
| 19,672,778
|
Fourier Series Implementation cannot approximate batman shape
|
<p>I tried to implement the formula from which the coefficients of a Fourier series can be calculated (I used 3B1B's video about it: <a href="https://www.youtube.com/watch?v=r6sGWTCMz2k&t=1284s" rel="nofollow noreferrer">Video</a>) and wrote code for that. My first test subject was a single contour of the Batman logo: I take a binary picture of the logo, use the marching squares algorithm to find its contour, then rescale the values and get these results:
<a href="https://i.sstatic.net/QSm82Ucn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSm82Ucn.png" alt="POINTS FROM THE MARCHING SQUARES ALGORITHM" /></a></p>
<p>And Here is Code for creating this points: (Contour_Classifier.py)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from skimage import measure, draw
def read_binary_image(file_path):
# Open the file and read line by line
with open(file_path, 'r') as file:
lines = file.readlines()
height, width = len(lines), len(lines[0])
print(height, width)
# Process lines into a 2D numpy array
image_data = []
for i in range(height + 2):
arr = []
for j in range(width + 2):
arr.append(0)
image_data.append(arr)
for i in range(2, height + 1):
for j in range(2, width + 1):
if(lines[i - 2][j - 2] != '1'):
image_data[i][j] = 0
else:
image_data[i][j] = 1
# Convert list to numpy array for easier manipulation
image_array = np.array(image_data)
return image_array
def display_image(image_array):
# Display the binary image using matplotlib
plt.imshow(image_array, cmap="gray")
plt.axis('off') # Hide axes
plt.show()
# Example usage
file_path = 'KOREKT\images\sbetmeni.txt' # Replace with the path to your file
image_array = read_binary_image(file_path)
#display_image(image_array)
#----------------------------------------------------------------------------------------------------------
#-------------------------------------------Finding Contours-----------------------------------------------
#----------------------------------------------------------------------------------------------------------
contours = measure.find_contours(image_array, level=0.5, positive_orientation='high')
fixed_contours = []
for contour in contours:
fixed_contour = np.column_stack((contour[:, 1], contour[:, 0])) # Swap (row, column) to (column, row)
fixed_contour[:, 1] = image_array.shape[0] - fixed_contour[:, 1] # Invert the y-axis
# Normalize coordinates between [0, 1]
fixed_contour[:, 0] /= image_array.shape[1] # Normalize x (width)
fixed_contour[:, 1] /= image_array.shape[0] # Normalize y (height)
fixed_contour[:, 0] *= 250 # Normalize x (width)
fixed_contour[:, 1] *= 250 # Normalize y (height)
fixed_contours.append(fixed_contour)
contours = fixed_contours
print(fixed_contours[0])
def visualize_colored_contours(contours, title="Colored Contours"):
# Create a plot
plt.figure(figsize=(8, 8))
for i, contour in enumerate(contours):
# Extract X and Y coordinates
x, y = zip(*contour)
# Plot the points with a unique color
plt.plot(x, y, marker='o', label=f'Contour {i+1}')
plt.title(title)
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
plt.grid(True)
plt.axis("equal")
plt.show()
# Visualize the normalized contours
visualize_colored_contours(contours)
</code></pre>
<p>Now we get to the main part, where we implement the Fourier series algorithm. I divide the time interval (t) into the number of points provided and assume that all of the points are equally spaced in t. I approximate the integral as a sum over the points, as in the formula.<a href="https://i.sstatic.net/2f88pYEM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2f88pYEM.png" alt="FORMULA APPROXIMATION" /></a></p>
<p>And Here is code implementing it (Fourier_Coefficients.py):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def calculate_Fourier(points, num_coefficients):
complex_points = []
for point in points:
complex_points.append(point[0] + 1j * point[1])
t = np.linspace(0, 1, len(complex_points), endpoint=False)
c_k = np.zeros(num_coefficients, dtype=np.complex128)
for i in range(num_coefficients):
c_k[i] = np.sum(complex_points * np.exp(-2j * np.pi * i * t) * t[1])
return c_k
</code></pre>
<p>(NOTE: for this code, t[1] is basically deltaT, because it equals 1/len(complex_points).)
Now I animate the whole process, for which I also wrote an additional code snippet that creates a GIF. If my implementation were correct, it shouldn't have any difficulty recreating the Batman shape, but we can observe some really weird phenomena throughout the GIF.</p>
<p>this is code snippet for this part</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import imageio
from Fourier_Coefficients import calculate_Fourier
from Countour_Classifier import contours
# List to store file names for GIF creation
png_files = []
# Generate plots iteratively
for i in range(len(contours[0])):
contour_coefficients = []
for contour in contours:
contour_coefficients.append(calculate_Fourier(contour, i))
# Fourier coefficients (complex numbers) and frequencies
coefficients = contour_coefficients[0] # First contour
frequencies = np.arange(len(coefficients))
# Time parameters
t = np.linspace(0, 1, len(coefficients)) # One period
curve = np.zeros(len(t), dtype=complex)
# Use the first (i + 1) coefficients
for j in range(len(coefficients)):
c, f = coefficients[j], frequencies[j]
curve += c * np.exp(1j * 2 * np.pi * f * t)
# Plotting
plt.figure(figsize=(8, 8))
plt.plot(curve.real, curve.imag, label="Trajectory", color="blue")
plt.scatter(0, 0, color="black", label="Origin")
plt.axis("equal")
plt.title(f"Fourier Series with {i + 1} Coefficients")
plt.xlabel("Real Part (X)")
plt.ylabel("Imaginary Part (Y)")
plt.legend()
plt.text(-0.5, -0.5, f"Using {i + 1} coefficients", fontsize=12, color="red")
# Save the figure as a PNG file
filename = f"fourier_{i + 1}_coefficients.png"
plt.savefig(filename)
plt.close()
# Append the file name to the list
png_files.append(filename)
# Create a GIF from the PNG files
gif_filename = "fourier_series.gif"
with imageio.get_writer(gif_filename, mode='I', duration=0.5) as writer:
for filename in png_files:
image = imageio.imread(filename)
writer.append_data(image)
print("Plots saved as PNG files and GIF created as 'fourier_series.gif'.")
</code></pre>
<p>Now this is the result
<a href="https://gifyu.com/image/SelZC" rel="nofollow noreferrer">GIF</a></p>
<p><strong>Observation #1</strong>
when the number of coefficients is 0, 1, 2 or 3, it doesn't draw anything.</p>
<p><strong>Observation #2</strong></p>
<p>As the number of coefficients rises, we get a wobbly circular shape, where the lower part of the image is slightly closer to the original, but the wings get messed up.</p>
<p><strong>Observation #3</strong></p>
<p>As we get closer to len(complex_numbers), the situation changes and we get these weird, non-circular shapes.</p>
<p><strong>Observation #4</strong></p>
<p>When we surpass len(complex_number), it draws random gibberish.</p>
<p><strong>Observation #5</strong></p>
<p>When the number of divisions of the t value in the animation.py code is altered, we get completely different images.</p>
<p><em>EDIT 1</em></p>
<p>here is actual .txt data provided for further testing.</p>
<p><a href="https://pastebin.com/Q51pT09E" rel="nofollow noreferrer">https://pastebin.com/Q51pT09E</a></p>
<p>Given all of this information, can you help me figure out what's wrong with my code?</p>
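<p>One concrete issue that stands out (an observation about the math, not a guaranteed full fix): <code>calculate_Fourier</code> only computes coefficients for non-negative frequencies k = 0..N-1, and the reconstruction only sums those, so the approximation is built from counter-clockwise rotating terms only; an arbitrary closed curve needs both negative and positive frequencies, k = -N..N. Reconstructing on only <code>len(coefficients)</code> time samples also ties the drawing resolution to the number of coefficients. A sketch with symmetric frequencies and a fixed time grid:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def calculate_fourier_sym(points, N):
    z = np.array([p[0] + 1j * p[1] for p in points])
    t = np.linspace(0, 1, len(z), endpoint=False)
    dt = 1.0 / len(z)
    ks = np.arange(-N, N + 1)
    coeffs = np.array([np.sum(z * np.exp(-2j * np.pi * k * t) * dt) for k in ks])
    return ks, coeffs

def reconstruct(ks, coeffs, num_samples=2000):
    t = np.linspace(0, 1, num_samples)
    return sum(c * np.exp(2j * np.pi * k * t) for k, c in zip(ks, coeffs))

# usage sketch:
# ks, coeffs = calculate_fourier_sym(contours[0], N=100)
# curve = reconstruct(ks, coeffs)
# plt.plot(curve.real, curve.imag)
</code></pre>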
|
<python><math>
|
2025-01-21 19:53:08
| 1
| 319
|
NikoMolecule
|
79,375,697
| 6,662,425
|
How to nest representer_mappings in ruamel.yaml?
|
<p>EDIT: made the question more concrete</p>
<p>I want to have the concept of a learning task for optimizer benchmarking</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class LearningTask:
model: torch.nn.Module
loss: torch.nn.Module
data: lightning.LightningDataModule
</code></pre>
<p>Of course it might be nice to serialize this into yaml to have benchmarks be configured by yamls or to store the yaml configuration with the performance benchmark.</p>
<p>But the <code>LearningTask</code> above is ill suited for this purpose for two reasons:</p>
<ol>
<li>I might want to repeat the task with different <code>seeds</code>, but the random initialization of the model's parameter weights happens when the object is constructed</li>
<li>The random parameters of the model should not end up in the yaml configuration. But separating the generated and default parameters from the parameters passed to the constructor is very annoying after the object is constructed.</li>
</ol>
<p>So instead, the LearningTask is not defined as the fully assembled object, but instead as a Factory:</p>
<pre class="lang-py prettyprint-override"><code>from attr import dataclass
from typing import Any
@dataclass
class Params:
model: dict[str,Any]
loss: dict[str,Any]
data: dict[str,Any]
@dataclass
class BetterLearningTask:
model_class: type[torch.nn.Module]
loss_class: type[torch.nn.Module]
data_class: type[lightning.LightningDataModule]
params: Params
@property
def model(self):
return self.model_class(**self.params.model)
# similar for loss and data
factory = BetterLearningTask(
model_class = MyModelInheritingFromPytorchModule,
loss_class = torch.nn.Crossentropy,
data_class = MyLightningDataModule,
params = Params(model={"conv_kernel_size": 3}, loss={}, data={})
)
</code></pre>
<p>This formulation of the LearningTask is a Factory - it holds all the components to create the model, loss and data but not their initialized version. To store it, we only need to store the class names and the initialization parameters.</p>
<p>I want the <code>yaml</code> configuration of the factory to pretend like it is the simpler LearningTask, i.e.</p>
<pre class="lang-yaml prettyprint-override"><code>!BetterLearningTask
model: !MyModelInheritingFromPytorchModule
conv_kernel_size: 3
loss: !torch.nn.Crossentropy {}
data: !MyLightningDataModule {}
</code></pre>
<p>What I have tried is the following</p>
<pre class="lang-py prettyprint-override"><code>@yaml.register_class
@dataclass
class BetterLearningTask:
# ...
@classmethod
def to_yaml(cls, representer, node):
return representer.representer_mapping(
"!" + cls.__name__,
{
"model": representer.representer_mapping(
"!" + node.model_class.__name__,
node.params.model,
),
"loss": representer.representer_mapping(
"!" + node.loss_class.__name__,
node.params.loss,
),
}
)
</code></pre>
<p>but this results in</p>
<pre><code>ruamel.yaml.representer.RepresenterError:
cannot represent an object: MappingNode(tag='!MyModelInheritingFromPytorchModule', value=[(conv_kernel_size, 3)])
</code></pre>
<p>I actually do not know what the different representers are supposed to do, which is why I am struggling to figure out what I should do differently. The documentation does not cover the representers, but the maintainers seem to be quite active answering questions here, so I thought I would try.</p>
<p>In this answer: <a href="https://stackoverflow.com/a/76432843/6662425">np.array in yaml</a> nesting of representer at least seem to work.</p>
|
<python><yaml><ruamel.yaml>
|
2025-01-21 19:13:19
| 1
| 1,373
|
Felix Benning
|
79,375,572
| 1,169,091
|
Why won't this Python module import into my code
|
<p>I have this entry point:</p>
<pre><code># main.py
from 022 import *
print("Hello from main.py")
</code></pre>
<p>I have <code>022.py</code> also. It is:</p>
<pre><code>print("Hello World")
</code></pre>
<p>When I run <code>main.py</code> I get this error:</p>
<pre><code> File "C:\Users\nicholdw\source\repos\InClass20250121-4010-001\InClass20250121-4010-001\mainPackage\main.py", line 3
from 022 import *
^
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
</code></pre>
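<p>For context: an <code>import</code> statement requires the module name to be a valid identifier, and <code>022</code> starts with digits, so the parser reads it as a malformed number literal before it ever gets to importing. The simplest fix is renaming the file (e.g. <code>m022.py</code>). If the file name has to stay <code>022.py</code>, a sketch using <code>importlib</code>:</p>
<pre><code># main.py
import importlib

mod = importlib.import_module("022")   # runs 022.py, which prints "Hello World"
print("Hello from main.py")
</code></pre>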
|
<python><python-import>
|
2025-01-21 18:31:45
| 1
| 4,741
|
nicomp
|
79,375,386
| 9,452,512
|
How to create a mesh from vertices and faces?
|
<p>I would like to use gmsh to create a mesh given the vertices and the faces (as they are usually stored in the .obj file format).</p>
<p>How do I fill the following code?</p>
<pre><code>import gmsh
import numpy as np
# Initialize Gmsh
gmsh.initialize()
vertices = np.array([
[-0.5, -0.5, -0.5],
[0.5, -0.5, -0.5],
[-0.5, 0.5, -0.5],
[0.5, 0.5, -0.5],
[-0.5, -0.5, 0.5],
[0.5, -0.5, 0.5],
[-0.5, 0.5, 0.5],
[0.5, 0.5, 0.5]])
faces = np.array([
[2, 1, 0],
[1, 2, 3],
[4, 2, 0],
[2, 4, 6],
[1, 4, 0],
[4, 1, 5],
[6, 5, 7],
[5, 6, 4],
[3, 6, 7],
[6, 3, 2],
[5, 3, 7],
[3, 5, 1]]
)
# Add vertices to Gmsh model
for vertex in vertices:
gmsh.model.occ.addPoint(*vertex)
"""
How to add the faces here?
"""
gmsh.write("custom_mesh.msh")
# Finalize Gmsh
gmsh.finalize()
</code></pre>
|
<python><gmsh>
|
2025-01-21 17:28:17
| 1
| 1,473
|
Uwe.Schneider
|
79,375,287
| 997,633
|
GPU utilization almost always 0 during training Hugging Face Transformer
|
<p>I am fine-tuning a <a href="https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2" rel="nofollow noreferrer">Donut Cord-v2</a> model with my invoice data which is around 360 GB in size when preprocessed and saved on disk as a dataset. I am following <a href="https://github.com/philschmid/document-ai-transformers/blob/main/training/donut_sroie.ipynb" rel="nofollow noreferrer">this</a> notebook almost exactly, except I have 6 training epochs instead of 3.</p>
<p>I am training on single Nvidia H100 SXM GPU / Intel Xeon® Gold 6448Y / 128 GB RAM.</p>
<p>Whenever I start training and inspect CPU and GPU utilization using <code>htop</code> and <code>nvidia-smi</code>, I see that the CPU is at 10-12% utilization (used by Python) and GPU memory is almost 90% full constantly, but GPU utilization is almost always 0. If I keep refreshing the output of <code>nvidia-smi</code>, once every 10-12 seconds the utilization will jump to 100% and then go back to 0 immediately. I can't help but feel there is a bottleneck between my CPU and GPU, where the CPU constantly processes data and sends it to the GPU, and the GPU processes it very fast and then just idles, waiting for the next batch from the CPU. I load the already preprocessed dataset from disk like so:</p>
<pre><code>from datasets import load_from_disk
processed_dataset = load_from_disk(r"/dataset/dataset_final")
</code></pre>
<p>My processor config is as follows:</p>
<pre><code>from transformers import DonutProcessor
new_special_tokens = [] # new tokens which will be added to the tokenizer
task_start_token = "<s>" # start of task token
eos_token = "</s>" # eos token of tokenizer
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
# add new special tokens to tokenizer
processor.tokenizer.add_special_tokens({"additional_special_tokens": new_special_tokens + [task_start_token] + [eos_token]})
# we update some settings which differ from pretraining; namely the size of the images + no rotation required
processor.feature_extractor.size = [1200,1553] # should be (width, height)
processor.feature_extractor.do_align_long_axis = False
</code></pre>
<p>My model config is:</p>
<pre><code>import torch
from transformers import VisionEncoderDecoderModel, VisionEncoderDecoderConfig
#print(torch.cuda.is_available())
# Load model from huggingface.co
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
# Resize embedding layer to match vocabulary size
new_emb = model.decoder.resize_token_embeddings(len(processor.tokenizer))
print(f"New embedding size: {new_emb}")
# Adjust our image size and output sequence lengths
model.config.encoder.image_size = processor.feature_extractor.size[::-1] # (height, width)
model.config.decoder.max_length = len(max(processed_dataset["train"]["labels"], key=len))
# Add task token for decoder to start
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.decoder_start_token_id = processor.tokenizer.convert_tokens_to_ids(['<s>'])[0]
</code></pre>
<p>And my training code is:</p>
<pre><code>import gc
gc.collect()
torch.cuda.empty_cache()
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
import logging
logging.basicConfig(level=logging.INFO)
# Arguments for training
training_args = Seq2SeqTrainingArguments(
output_dir=r"/trained", # Specify a local directory to save the model
num_train_epochs=6,
learning_rate=2e-5,
per_device_train_batch_size=8,
weight_decay=0.01,
fp16=True,
logging_steps=50,
save_total_limit=2,
evaluation_strategy="no",
save_strategy="epoch",
predict_with_generate=True,
report_to="none",
# Disable push to hub
push_to_hub=False
)
# Create Trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=processed_dataset["train"],
)
# Start training
trainer.train()
</code></pre>
<p>The estimated time to complete the training with 6 epochs on the 360 GB dataset is 54 hours. When I run the exact same code on my PC, which has an Intel i9 11900KF / RTX 3050, I see GPU utilization constantly at 100%. Is there a bottleneck in my code? Why does the CPU keep processing so much on an already preprocessed dataset? CUDA 12.6.</p>
<p>Edit:</p>
<p>Does it make sense to change the <a href="https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.TrainingArguments.dataloader_num_workers" rel="nofollow noreferrer">dataloader_num_workers</a> parameter of <code>Seq2SeqTrainingArguments</code> to a value greater than 0, since my RAM and CPU core count allow it (and since CPU utilization is at 10-12% max)?</p>
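<p>A sketch of the arguments block with the two dataloader knobs added (the worker count of 8 is a guess to tune against the observed 10-12% CPU usage, not a recommendation from the original notebook):</p>
<pre><code>training_args = Seq2SeqTrainingArguments(
    output_dir=r"/trained",
    num_train_epochs=6,
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    weight_decay=0.01,
    fp16=True,
    logging_steps=50,
    save_total_limit=2,
    evaluation_strategy="no",
    save_strategy="epoch",
    predict_with_generate=True,
    report_to="none",
    push_to_hub=False,
    dataloader_num_workers=8,     # several CPU workers feeding the GPU in parallel
    dataloader_pin_memory=True,   # faster host-to-device copies
)
</code></pre>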
|
<python><machine-learning><huggingface-transformers>
|
2025-01-21 17:09:03
| 1
| 2,511
|
astralmaster
|
79,375,135
| 3,336,423
|
Why does using `logging.Formatter` prevent widget from being deleted?
|
<p>Here is an MCVE, a very simple piece of code creating a <code>QMainWindow</code> with a central empty <code>QWidget</code>:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QWidget
class MyWidget(QWidget):
"""QWidget based class to configure a Simulator object."""
def __init__(self,parent : QWidget):
"""
Construct a new object.
param: parent : QWidget
Parent class
"""
super().__init__(parent)
print("Created GUI")
def __del__(self):
print("Deleting GUI")
if __name__ == '__main__':
from PyQt5.QtWidgets import QMainWindow, QApplication
import logging
class CustomFormatter(logging.Formatter):
def format(self, record):
formatter = logging.Formatter("%(message)s")
return formatter.format(record)
logger = logging.getLogger("my_logger")
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
#ch.setFormatter(CustomFormatter())
logger.addHandler(ch)
app = QApplication(sys.argv)
wnd = QMainWindow()
widget = MyWidget(wnd)
wnd.setCentralWidget( widget )
wnd.show()
sys.exit(app.exec())
</code></pre>
<p>Execute it, a GUI opens, click the close button, GUI closes and output is then:</p>
<pre><code>Created GUI
Deleting GUI
</code></pre>
<p>Now, uncomment <code>ch.setFormatter(CustomFormatter())</code>, repeat.</p>
<p>Now the output does not show <code>Deleting GUI</code>. I tried to add an <code>assert</code> here; the code does not assert. Why does <code>MyWidget.__del__</code> not get called when using a custom formatter?</p>
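<p>Independent of why the formatter changes things, <code>__del__</code> only tells you when the Python wrapper is garbage-collected, which is not guaranteed at interpreter shutdown; a small diagnostic sketch that observes teardown through Qt's own <code>destroyed</code> signal and a <code>weakref</code> finalizer instead (standard APIs, added here purely for debugging):</p>
<pre><code>import weakref

widget = MyWidget(wnd)
widget.destroyed.connect(lambda: print("C++ widget destroyed"))
weakref.finalize(widget, print, "Python wrapper finalized")
wnd.setCentralWidget(widget)
</code></pre>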
|
<python><pyqt5><python-logging>
|
2025-01-21 16:13:14
| 1
| 21,904
|
jpo38
|
79,374,797
| 12,871,587
|
How to calculate horizontal median
|
<p>How to calculate horizontal median for numerical columns?</p>
<pre><code>df = pl.DataFrame({"ABC":["foo", "bar", "foo"], "A":[1,2,3], "B":[2,1,None], "C":[1,2,3]})
print(df)
shape: (3, 4)
┌─────┬─────┬──────┬─────┐
│ ABC ┆ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪══════╪═════╡
│ foo ┆ 1 ┆ 2 ┆ 1 │
│ bar ┆ 2 ┆ 1 ┆ 2 │
│ foo ┆ 3 ┆ null ┆ 3 │
└─────┴─────┴──────┴─────┘
</code></pre>
<p>I want to achieve the same as with <code>pl.mean_horizontal</code> below, but get the median instead of the mean. I did not find an existing expression for this.</p>
<pre><code>print(df.with_columns(pl.mean_horizontal(pl.col(pl.Int64)).alias("Horizontal Mean")))
shape: (3, 5)
┌─────┬─────┬──────┬─────┬─────────────────┐
│ ABC ┆ A ┆ B ┆ C ┆ Horizontal Mean │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ f64 │
╞═════╪═════╪══════╪═════╪═════════════════╡
│ foo ┆ 1 ┆ 2 ┆ 1 ┆ 1.333333 │
│ bar ┆ 2 ┆ 1 ┆ 2 ┆ 1.666667 │
│ foo ┆ 3 ┆ null ┆ 3 ┆ 3.0 │
└─────┴─────┴──────┴─────┴─────────────────┘
</code></pre>
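<p>A sketch of one way to do it (assuming a Polars version where the list namespace has a <code>median</code> aggregation, which skips nulls like the other list aggregations): pack the numeric columns into a list column and take its median.</p>
<pre><code>df.with_columns(
    pl.concat_list(pl.col(pl.Int64)).list.median().alias("Horizontal Median")
)
</code></pre>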
|
<python><python-polars>
|
2025-01-21 14:29:57
| 4
| 713
|
miroslaavi
|
79,374,674
| 5,618,856
|
pandas dataframe update with filter_func
|
<p>I have two dataframes with identical shape and want to update df1 with df2 if some conditions are met</p>
<pre><code>import pandas as pd
from typing import Any
df1 = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
print(df1, "\n")
df2 = pd.DataFrame({"A": [7, 8, 9], "B": [10, 3, 12]})
print(df2, "\n")
# Define a condition function
def condition(x: Any) -> bool:
"""condition function to update only cells matching the conditions"""
return True if x in [2, 7, 9] else False
df1.update(df2)
print(df1)
</code></pre>
<p>but if I use filter_func <code>df1.update(df2,filter_func=condition)</code> it fails with <code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code><br />
Unfortunately the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.update.html" rel="nofollow noreferrer">documentation</a> is not very verbose.</p>
<p>How to update a dataframe with conditions correctly?</p>
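<p>For reference, <code>filter_func</code> is called with whole NumPy arrays (one per column), not with scalars, which is why the element-wise <code>x in [...]</code> raises the ambiguous-truth-value error; returning a boolean array makes it work. A sketch with a vectorised condition (True marks the df1 cells that may be overwritten):</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
df2 = pd.DataFrame({"A": [7, 8, 9], "B": [10, 3, 12]})

def condition(x: np.ndarray) -> np.ndarray:
    """Element-wise: True where the existing df1 value should be updated."""
    return np.isin(x, [2, 7, 9])

df1.update(df2, filter_func=condition)
print(df1)   # only the df1 cells currently equal to 2, 7 or 9 are replaced
</code></pre>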
|
<python><pandas><dataframe>
|
2025-01-21 13:53:13
| 1
| 603
|
Fred
|
79,374,485
| 5,846,366
|
Override existing custom Django App template tags
|
<p>I have an application that uses Weblate to manage translations. I use <code>weblate/weblate</code> Docker image, with my own customizations built as a separate Python package extending this image and built on top of it. The problem is that in the Weblate HTML templates there is an <code>icon</code> template tag that is supposed to load SVG icons from a <code>STATIC_ROOT</code> or a <code>CACHE_DIR</code> location - but my application runs in a serverless setup and as such offloads all of the static resources to a S3 bucket. For most of the resources it works fine, but due to that template tag logic the icons are not loaded and I get these error messages -</p>
<pre><code>weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,913: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/weblate.svg'
weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,918: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/wrench.svg'
weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,919: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/plus.svg'
weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,923: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/dots.svg'
</code></pre>
<p>I wrote my custom template tag, which I placed in my custom module <code>weblate_customization/templatetags/icon.py</code>, but it does not override weblate default icon loading logic, and I cannot access the default templates in my code, unless I monkey-patch. The code for the default icon template tag exists within <code>weblate.utils</code> app, which is included within the base image and loads all of the utility functionality required by the application to function, so I can't just throw it out. Is there a way to make Django load my custom version of this template tag, instead of the one provided by <code>weblate</code>?</p>
<p><strong>EDIT</strong>
The logic of the default template tag goes like this:</p>
<p><code>weblate/weblate/utils/templatetags/icon.py</code></p>
<pre><code>@register.simple_tag()
def icon(name):
"""
Inlines SVG icon.
Inlining is necessary to be able to apply CSS styles on the path.
"""
if not name:
msg = "Empty icon name"
raise ValueError(msg)
if name not in CACHE:
if name.startswith("state/"):
icon_file = os.path.join(settings.STATIC_ROOT, name)
else:
icon_file = os.path.join(settings.STATIC_ROOT, "icons", name)
try:
with open(icon_file) as handle:
CACHE[name] = mark_safe(handle.read()) # noqa: S308
except OSError:
report_error("Could not load icon")
return ""
return CACHE[name]
</code></pre>
<p>Whereas the logic I am trying to implement is the following:</p>
<p><code>weblate_customization/templatetags/icon.py</code></p>
<pre><code>@register.simple_tag()
def icon(name: str) -> str:
"""
Inlines SVG icon.
Inlining is necessary to be able to apply CSS styles on the path.
"""
if not name:
msg = "Empty icon name"
raise ValueError(msg)
if name not in CACHE:
if name.startswith("state/"):
icon_url = os.path.join(settings.STATIC_URL, name)
else:
icon_url = os.path.join(settings.STATIC_URL, "icons", name)
try:
icon_file = request.urlopen(icon_url)
except OSError:
report_error("Could not load icon")
return ""
else:
CACHE[name] = ""
for line in icon_file.readlines():
CACHE[name] += line
return mark_safe(CACHE[name])
</code></pre>
<p>How do I make Django use the simple tag I specify instead of the default one?
I cannot rename it to something like "new_icon" and overwrite the templates since the templates come from the base docker image, as well as the <code>weblate.utils</code> module itself, where the template tags are defined and registered.</p>
|
<python><django><weblate>
|
2025-01-21 12:56:46
| 0
| 1,209
|
AlexNikolaev94
|
79,374,125
| 29,295,031
|
How to add custom buttons to update data in a Plotly graph
|
<p>I'm using Plotly in my Python project, and I came across a challenge that I don't know if I can solve with Plotly.</p>
<p>Imagine this simple example:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
df = px.data.gapminder().query("continent == 'Oceania'")
fig = px.line(df, x='year', y='lifeExp', color='country', markers=True)
fig.show()
</code></pre>
<p>It gives this result:
<img src="https://i.sstatic.net/QNuhNTnZ.png" alt="graph result" /></p>
<p>Now what I'm trying to do is: if I slide the line chart forward, I want to get the next seven years, and if I slide it backwards, I want to go back seven years. If this is not possible, I want to add buttons, like the ones I drew in the image, to do that.</p>
<p>Could you please tell me how I can implement one of these solutions?</p>
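<p>A figure on its own cannot react to "shift by 7 years" buttons with relative logic, but since the question is tagged with Dash, here is a minimal Dash sketch with two buttons that move the x-axis window by 7 years per click (the button ids, layout and the idea of shifting the full range are my own choices, not from the original chart):</p>
<pre class="lang-py prettyprint-override"><code>import dash
from dash import dcc, html, Input, Output
import plotly.express as px

df = px.data.gapminder().query("continent == 'Oceania'")
fig = px.line(df, x='year', y='lifeExp', color='country', markers=True)

app = dash.Dash(__name__)
app.layout = html.Div([
    html.Button("< previous 7 years", id="prev", n_clicks=0),
    html.Button("next 7 years >", id="next", n_clicks=0),
    dcc.Graph(id="chart", figure=fig),
])

@app.callback(Output("chart", "figure"),
              Input("prev", "n_clicks"), Input("next", "n_clicks"))
def shift_window(prev_clicks, next_clicks):
    shift = 7 * (next_clicks - prev_clicks)          # net shift in years
    fig.update_xaxes(range=[1952 + shift, 2007 + shift])
    return fig

if __name__ == "__main__":
    app.run(debug=True)
</code></pre>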
|
<python><plotly><plotly-dash><streamlit>
|
2025-01-21 11:00:24
| 0
| 401
|
user29295031
|
79,374,069
| 8,943,214
|
Error loading ASGI app. Could not import module "playground" - phidata
|
<p>I am following the guide from <a href="https://docs.phidata.com/agent-ui" rel="nofollow noreferrer">https://docs.phidata.com/agent-ui</a> to set up a UI for interacting with my agents. However, I am encountering the following error when trying to start a playground session with the phidata framework for an Agent UI session:</p>
<pre><code>DEBUG Debug logs enabled
ERROR: Error loading ASGI app. Could not import module "playground".
</code></pre>
<p>Any guidance would be greatly appreciated!</p>
|
<python><phidata>
|
2025-01-21 10:40:13
| 1
| 3,817
|
laitifranz
|
79,374,057
| 9,749,124
|
Jupyter Notebook Kernel dies after trying to load libraries
|
<p>I have a MacBook with an M2 chip. I have more than 700 GB free on my laptop.
I want to load some Hugging Face models in my script.</p>
<p>Whenever I do this (this is the only line of code):</p>
<pre><code>from transformers import pipeline
</code></pre>
<p>The kernel dies:</p>
<p><a href="https://i.sstatic.net/9VPgJxKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9VPgJxKN.png" alt="enter image description here" /></a></p>
<p>What should I do?
I have a lot of free space. I have installed all libraries.</p>
|
<python><jupyter-notebook><huggingface>
|
2025-01-21 10:34:22
| 0
| 3,923
|
taga
|
79,374,015
| 995,071
|
debug chalice Lambda Event Sources in vscode
|
<p>Chalice does not support debug mode with Lambda Event Sources. Functions based on event sources, such as on_sqs_message, are not even triggered locally when the Chalice app is launched. This limitation raises a critical question: how can developers effectively use debug tools, breakpoints, and other essential debugging features in such a constrained environment?</p>
<p>While it is possible to adopt a degraded process (lacking a proper debugger, variable watcher, etc.), this approach leads to significant inefficiencies. Developers must deploy the Chalice app—a process that takes approximately three minutes—every time a code change is made. Furthermore, the added step of monitoring logs in CloudWatch increases the overhead. Such a workflow is far from acceptable in the software industry, where efficiency and rapid iteration are essential.</p>
<p>It is worth noting that route endpoints function correctly with the debugger at the local level.</p>
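<p>One partial workaround (a sketch, assuming a handler registered with <code>@app.on_sqs_message</code> and named <code>handle_queue_message</code>): event-source handlers can be invoked directly through Chalice's test client with a generated SQS event, which runs entirely in-process, so VS Code breakpoints and the variable watcher work as usual. This does not make <code>chalice local</code> poll the queue, but it restores a fast edit-debug loop.</p>
<pre><code>from chalice.test import Client
from app import app   # the Chalice app object

def debug_sqs_handler():
    with Client(app) as client:
        event = client.events.generate_sqs_event(
            message_bodies=['{"order_id": 123}'],   # hypothetical payload
            queue_name='my-queue',                  # hypothetical queue name
        )
        response = client.lambda_.invoke('handle_queue_message', event)
        print(response.payload)

if __name__ == '__main__':
    debug_sqs_handler()   # run this file under the VS Code debugger
</code></pre>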
|
<python><amazon-web-services><debugging><aws-lambda><chalice>
|
2025-01-21 10:20:52
| 1
| 701
|
negstek
|
79,373,980
| 5,462,743
|
Azure Machine Learning python SDK V2 download logs of steps of a job
|
<p>I wish to download all logs of a job. In the web interface, it is quite easy, but I want to create an automation around it and retrieve all the logs of a job that has multiple steps.</p>
<pre class="lang-py prettyprint-override"><code>job = ml_client.jobs.get(name="my_job_name-08584641874851693546433866596CU116")
</code></pre>
<p>Also, the name of the job is painful to get, because it does not correspond to the display_name of the job. So I have to list all jobs, which takes a lot of time since it's an iterator that goes through ALL jobs in the workspace.</p>
<p>On the documentation of the <a href="https://learn.microsoft.com/fr-fr/python/api/azure-ai-ml/azure.ai.ml.operations.joboperations?view=azure-python#azure-ai-ml-operations-joboperations-get" rel="nofollow noreferrer">job</a>, I can <a href="https://learn.microsoft.com/fr-fr/python/api/azure-ai-ml/azure.ai.ml.operations.joboperations?view=azure-python#azure-ai-ml-operations-joboperations-download" rel="nofollow noreferrer">download</a> its logs, but it's the Job Logs, which are only system logs and not applicative logs generated by the steps.</p>
<p>I found out while inspecting the requests by the browser, that logs files are stored into a blob storage.</p>
<p><a href="https://i.sstatic.net/TeY9bRJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TeY9bRJj.png" alt="enter image description here" /></a></p>
<blockquote>
<p><a href="https://my-storage-aml.blob.core.windows.net/azureml/ExperimentRun/dcid.614b7e0b-4f56-417c-89df-5fb4ac09ce1a/user_logs/std_log.txt?sv=SOME-TOKEN" rel="nofollow noreferrer">https://my-storage-aml.blob.core.windows.net/azureml/ExperimentRun/dcid.614b7e0b-4f56-417c-89df-5fb4ac09ce1a/user_logs/std_log.txt?sv=SOME-TOKEN</a></p>
</blockquote>
<p>So I have some python code to download a file on a blob storage, but I don't know how can I list all the correct folder that is from a unique Job. <code>dcid.614b7e0b-4f56-417c-89df-5fb4ac09ce1a</code> is from one step in the job.</p>
<p>Is there an easy and FAST method to get those logs in python? I say fast, because getting a job with <code>MLClient</code> is very painful, since it's an iterator when we list all jobs. There's no prefilter to get jobs from an experiment or regex filtering.</p>
|
<python><azure><azure-machine-learning-service>
|
2025-01-21 10:14:09
| 2
| 1,033
|
BeGreen
|
79,373,960
| 12,466,687
|
How to use resolved output string from codechunk into markdown quarto?
|
<p>This can also be framed as: "How to view dynamic (names stored in a variable) <code>html</code> files in <code>markdown</code> documents like <code>quarto</code>?"</p>
<p>I have some <code>html plot</code> files with unique IDs that I want to include in the notebook; the names of those plots are stored in variables/parameters, and I have to view them in a <code>quarto notebook</code> (similar to a Jupyter markdown notebook).</p>
<p>For example, the variable <code>corr_net_graph</code> contains a plot name like: <code>plot_userid.html</code></p>
<p>So, to view the plot when it has a <code>static</code> name, the code below works:</p>
<pre><code>```{=html}
<iframe width="900" height="1200" src="plot_user123.html"></iframe>
```
</code></pre>
<p>But how do I make this work for <code>dynamic names</code> stored in variables?</p>
<p>I have tried the method below, but it doesn't work:</p>
<pre><code>```{python}
iframe_str = '<iframe width="900" height="1200" src='+str(corr_net_graph)+'></iframe>'
```
```{=html}
iframe_str
```
</code></pre>
<p>So I am not able to evaluate this into the iframe string, either in an HTML cell or in markdown.</p>
<p>Appreciate any help or suggestions.</p>
<p><strong>Update:</strong>
I have also tried using <code>pyscript</code></p>
<pre><code>```{python}
import pyscript
```
```{=html}
<iframe width="900" height="1200" src='<pyscript>corr_net_graph</pyscript>'></iframe>
```
```{=html}
<iframe width="900" height="1200" src=<py-script>{corr_net_graph}</py-script>></iframe>
```
```{=html}
<iframe width="900" height="1200" src=<py-script>f"{corr_net_graph}"</py-script>></iframe>
```
checked inspect elements on this and got:
</code></pre>
<p><a href="https://i.sstatic.net/XzJ5W4cg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XzJ5W4cg.png" alt="enter image description here" /></a></p>
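<p>A possible way around this (a sketch for the Jupyter engine): build and emit the HTML from the Python cell itself instead of from a raw <code>{=html}</code> block, for example by returning an <code>IPython.display.HTML</code> object, so the variable is interpolated before rendering.</p>
<pre><code>```{python}
#| echo: false
from IPython.display import HTML

iframe_str = f'<iframe width="900" height="1200" src="{corr_net_graph}"></iframe>'
HTML(iframe_str)
```
</code></pre>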
|
<python><html><iframe><jupyter-notebook><quarto>
|
2025-01-21 10:08:23
| 0
| 2,357
|
ViSa
|
79,373,843
| 2,881,414
|
How to generate a table of contents with meta data for a set of pages in mkdocs?
|
<p>I'm using mkdocs and mkdocs-material. I have a set of pages for which I want to generate a table of contents. Additionally, I want the table of contents to display a <strong>subset of the meta data</strong> provided by the pages indexed by the TOC. E.g.:</p>
<pre><code>myproject/
|-- docs/
| |-- index.md <- to be generated
| |-- page1.md
| |-- page2.md
</code></pre>
<p>While the pages (e.g. <code>page1.md</code>) contain meta data such as</p>
<pre class="lang-markdown prettyprint-override"><code>---
Title: Some Title
Author: A. U. Thor
Created: 2025-01-21
Something Else: boo
---
</code></pre>
<p>The resulting <code>index.md</code> should contain something like:</p>
<pre class="lang-markdown prettyprint-override"><code>| Title | Author | Created |
|-----------------------|------------|------------|
|[Some Title][page1.md] | A. U. Thor | 2025-01-21 |
</code></pre>
<p>Is there an existing plugin that can do this? If not, what would be the simplest approach to solve this?</p>
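<p>In case no ready-made plugin turns up: a relatively simple approach, given the <code>mkdocs-macros</code> tag, is a macro that reads the front matter of the pages and renders the table, which <code>index.md</code> then calls with <code>{{ toc_table() }}</code>. A sketch (the file glob, metadata keys and macro name come from the example above or are my own choices):</p>
<pre class="lang-py prettyprint-override"><code># main.py next to mkdocs.yml, picked up by the mkdocs-macros plugin
from pathlib import Path
import yaml

def define_env(env):
    @env.macro
    def toc_table(columns=("Title", "Author", "Created")):
        rows = []
        for page in sorted(Path("docs").glob("page*.md")):
            text = page.read_text(encoding="utf-8")
            if not text.startswith("---"):
                continue
            meta = yaml.safe_load(text.split("---", 2)[1]) or {}
            link = f"[{meta.get('Title', page.stem)}]({page.name})"
            cells = [link] + [str(meta.get(c, "")) for c in columns[1:]]
            rows.append("| " + " | ".join(cells) + " |")
        header = "| " + " | ".join(columns) + " |"
        separator = "|" + "|".join(["---"] * len(columns)) + "|"
        return "\n".join([header, separator, *rows])
</code></pre>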
|
<python><mkdocs><mkdocs-material><mkdocs-macros>
|
2025-01-21 09:27:09
| 0
| 17,530
|
Bastian Venthur
|
79,373,468
| 2,243,490
|
flake8: ignore F841 unused variable for variable name _ex
|
<p>I have the following code and expect flake8 not to raise any error, since the variable name _ex starts with an underscore. But flake8 still gives me the <code>F841 local variable '_ex' is assigned to but never used</code> error. How can I get rid of the error?</p>
<pre><code>try:
1/0
except ZeroDivisionError as _ex:
print("zero division error")
</code></pre>
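<p>As far as I know, F841 comes from pyflakes and flake8 has no option to whitelist underscore-prefixed names (that convention is a Pylint/Ruff feature), so the usual ways around it are either not binding the exception at all or silencing the single line:</p>
<pre><code># Option 1: don't bind the exception
try:
    1 / 0
except ZeroDivisionError:
    print("zero division error")

# Option 2: keep the name, silence this one occurrence
try:
    1 / 0
except ZeroDivisionError as _ex:  # noqa: F841
    print("zero division error")
</code></pre>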
|
<python><python-3.x><flake8>
|
2025-01-21 07:09:10
| 2
| 1,886
|
Dinesh
|
79,373,307
| 4,382,305
|
TypeError: 'Seq' object does not support item assignment
|
<p>I have a <code>Seq</code> object in Biopython (1.85), and I want to change its third element to <code>A</code>. When I run this code:</p>
<pre><code>from Bio.Seq import Seq
seq = Seq('CCGGGTTAACGTA')
seq[2]= 'A'
</code></pre>
<p>I get this error:</p>
<pre><code>TypeError: 'Seq' object does not support item assignment
</code></pre>
<p>How can I properly reassign the element?</p>
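<p><code>Seq</code> is immutable by design; two common ways around it (a sketch, converting via <code>str</code> to stay version-agnostic):</p>
<pre><code>from Bio.Seq import Seq, MutableSeq

seq = Seq('CCGGGTTAACGTA')

# Option 1: a MutableSeq supports item assignment
mseq = MutableSeq(str(seq))
mseq[2] = 'A'
seq1 = Seq(str(mseq))          # back to an immutable Seq if needed

# Option 2: rebuild the Seq by slicing
seq2 = seq[:2] + 'A' + seq[3:]
</code></pre>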
|
<python><bioinformatics><biopython>
|
2025-01-21 05:40:48
| 3
| 2,091
|
Darwin
|
79,373,167
| 1,647,792
|
Python Postgres pull as byte object vs str
|
<p>I'm doing a basic Python Flask application with a Postgres database using psycopg2. I'm having trouble setting up the user registration password hash, as it appears I'm working with a string vs. bytes type. Fundamentally, I get this error when re-hashing the password on login (after registration and the initial hash):</p>
<pre><code>hash_value = hashlib.pbkdf2_hmac(
# TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>Here is the relevant table setup:</p>
<pre><code>CREATE TABLE IF NOT EXISTS myschema.mytable
(
--Unrelated fields...
password_hash character varying(500) COLLATE pg_catalog."default" NOT NULL,
salt character varying(100) COLLATE pg_catalog."default" NOT NULL,
--More unrelated fields...
)
</code></pre>
<p>This is how I'm inserting the data:</p>
<pre><code># Code above that is setting up a DB utility, configs, etc...
# Hash the password
salt = os.urandom(16)
iterations = 100000
hash_value = hashlib.pbkdf2_hmac(
'sha256',
password.encode('utf-8') + app_config['PEPPER'].encode('utf-8'),
salt,
iterations
)
password_hash = salt + hash_value
# Redacted extra fields
query = "INSERT INTO mytable (password_hash, salt) VALUES (%s, %s);"
params = (password_hash, salt)
# This kicks off the standard cursor execute, etc...
db.query(query, params)
</code></pre>
<p>And for the retrieval:</p>
<pre><code># Code above that is setting up a DB utility, configs, etc...
query = "SELECT password_hash, salt FROM mytable WHERE email = %s;"
params = (email,)
users = db.query(query, params)
db.close()
# No user found
if not users:
return False
db_password_hash, db_salt = users[0]
iterations = 100000
hash_value = hashlib.pbkdf2_hmac( # This will be the spot that throws the exception as it expects bytes for
'sha256',
password.encode('utf-8') + app_config['PEPPER'].encode('utf-8'),
db_salt,
iterations
)
# Then commence validation logic, etc...
</code></pre>
<p>I have tried using <code>bytes(db_salt, 'utf-8')</code>, as it's really the salt field that throws the error. However, this does not yield a successful rehash. I'll take recommendations: this is new development, so if I need to use a Postgres binary type I can do that too; whatever option makes sense here.</p>
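<p>One direction I am considering (just a sketch, not a settled design) is to stop storing raw bytes in the varchar columns and hex-encode the salt and hash before insertion, so the round trip is always str:</p>
<pre><code># On registration: store hex strings in the existing varchar columns
params = (password_hash.hex(), salt.hex())

# On login: convert the retrieved strings back to bytes before re-hashing
db_salt_bytes = bytes.fromhex(db_salt)
db_hash_bytes = bytes.fromhex(db_password_hash)
</code></pre>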
<p>Thanks!</p>
|
<python><psycopg2><password-hash><python-3.11><postgresql-17>
|
2025-01-21 04:00:20
| 0
| 399
|
jazzmasterkc
|
79,373,155
| 8,098,068
|
Alembic keeps deleting and recreating Foreign Keys during autogeneration
|
<p>After the autogenerated initial migration, when running another autogeneration all foreign keys are being dropped and recreated.</p>
<p>Example Code from the migration script:</p>
<pre><code>op.drop_constraint('fk_foo_bar_id_bar', 'foo', type_='foreignkey')
op.create_foreign_key(op.f('fk_foo_bar_id_bar'), 'foo', 'bar', ['bar_id'], ['id'], source_schema='public', referent_schema='public')
</code></pre>
<p>I tried changing <code>include_object</code> in env.py as well as the way migrations were run. I also tried changing both the foo and bar models in any way that seemed reasonable.
Naming the FKs also didn't work.</p>
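<p>For completeness, this is the kind of <code>include_object</code> filter I experimented with in env.py (a sketch; I may well be filtering on the wrong object type name, and it obviously hides real FK changes too):</p>
<pre><code># env.py sketch: skip foreign key constraints during autogenerate comparison
def include_object(object, name, type_, reflected, compare_to):
    if type_ == "foreign_key_constraint":
        return False
    return True

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object,
)
</code></pre>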
|
<python><database><postgresql><alembic><timescaledb>
|
2025-01-21 03:49:55
| 1
| 364
|
tnfru
|
79,373,070
| 2,998,077
|
Requests and BeautifulSoup to get video length from YouTube
|
<p>When getting the video length from a YouTube URL, Inspect in the web browser shows there's a line:</p>
<p><a href="https://i.sstatic.net/oZy2pGA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oZy2pGA4.png" alt="enter image description here" /></a></p>
<p>Then I use requests and BeautifulSoup to get it:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://www.youtube.com/watch?v=ANYyoutubeLINK"
response = requests.get(url)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
duration_span = soup.find_all('span', class_='ytp-time-duration')
print (duration_span)
</code></pre>
<p>Neither "soup.find_all" nor "soup.find" works. What went wrong?</p>
|
<python><web-scraping><beautifulsoup><request><youtube>
|
2025-01-21 02:47:34
| 4
| 9,496
|
Mark K
|
79,372,999
| 2,357,712
|
Please explain an unusual imported QIF library
|
<p>In this library on github, <code>https://github.com/giacomos/qifparse/blob/master/qifparse/parser.py</code> there is an import statement that I've never seen before.</p>
<p><code> from qifparse.qif import</code></p>
<p>I have tried all sorts of searches to find what this file source might be, but I'm stumped. Can you explain it to me please?</p>
<pre><code># -*- coding: utf-8 -*-
import six
from datetime import datetime
from decimal import Decimal
from qifparse.qif import (
Transaction,
MemorizedTransaction,
AmountSplit,
Account,
Investment,
Category,
Class,
Qif,
)
</code></pre>
<p>EDIT
Apologies, let me try to rephrase the question.</p>
<p>What is <code>from qifparse.qif import ...</code> importing from?
There is no such source file that I can see.</p>
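<p>To illustrate what I was expecting, my understanding is that such an import normally resolves to a module file inside the package, roughly like this (layout as I understand it, not verified against the repository):</p>
<pre><code># qifparse/           <- the package directory (contains __init__.py)
#     __init__.py
#     qif.py           <- the module referred to by "qifparse.qif"
#     parser.py        <- the file doing "from qifparse.qif import ..."
#
# i.e. "from qifparse.qif import Transaction" should pull Transaction
# out of qifparse/qif.py
</code></pre>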
|
<python>
|
2025-01-21 01:28:01
| 0
| 1,617
|
Maxcot
|
79,372,940
| 1,046,184
|
How to get a list of models available in Ollama using Langchain
|
<p>I am trying to run a Python script that gets and prints a list of the models that are available to a running instance of Ollama. My code, based on code provided at <a href="https://medium.com/@garysvenson09/how-to-list-all-models-in-ollama-for-langchain-tutorial-786cb14298a8" rel="nofollow noreferrer">https://medium.com/@garysvenson09/how-to-list-all-models-in-ollama-for-langchain-tutorial-786cb14298a8</a>, is provided below:</p>
<pre><code>from langchain import Ollama
ollama_client = Ollama()
model_list = ollama_client.list_models()
for model in model_list: print(f"Model Name: {model.name}, Version: {model.version}, Description: {model.description}")
</code></pre>
<p>The problem is that when I run the script, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Research\Project 39\langtest1\Test1\firsttest.py", line 2, in <module>
from langchain import Ollama
ImportError: cannot import name 'Ollama' from 'langchain' (C:\Research\Project 39\langtest1\Test1\venv\Lib\site-packages\langchain\__init__.py)
Process finished with exit code 1
</code></pre>
<p>Obviously, the code I am using is flawed.</p>
<p>How to get a list of available models from Ollama?</p>
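<p>In case it helps frame the question: the fallback I am experimenting with is querying the Ollama server directly over HTTP instead of through LangChain (a sketch assuming the default <code>localhost:11434</code> endpoint and that <code>/api/tags</code> is the listing route):</p>
<pre><code>import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])
</code></pre>
<p>But I would still prefer a LangChain-native way, if one exists.</p>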
|
<python><artificial-intelligence><langchain>
|
2025-01-21 00:22:48
| 2
| 2,316
|
Factor Three
|
79,372,723
| 8,800,836
|
Sort columns of numpy unitary matrix such that highest element of each column is on the diagonal
|
<p>Take a unitary matrix <code>U</code>. I want to swap the columns such that the largest element of each column (in absolute value) is on the diagonal (modulo ties). What is the best way to do this in numpy?</p>
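<p>To make the goal concrete, here is the naive approach I have so far; it assumes the per-column argmax rows happen to form a permutation, which I suspect is not guaranteed in general:</p>
<pre><code>import numpy as np

def move_max_to_diagonal(U):
    # row index of the largest |entry| in each column
    rows = np.argmax(np.abs(U), axis=0)
    # place column j at position rows[j]; argsort gives the inverse permutation
    return U[:, np.argsort(rows)]

U = np.linalg.qr(np.random.randn(4, 4))[0]  # random orthogonal test matrix
V = move_max_to_diagonal(U)
</code></pre>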
|
<python><numpy><sorting><numpy-ndarray>
|
2025-01-20 21:42:58
| 1
| 539
|
Ben
|
79,372,602
| 1,533,306
|
Pass parameters to custom transformer in sklearn
|
<p>I am trying to pass a parameter <code>DummyTransformer__feature_index_sec</code> to my sklearn custom transformer via a pipeline. It seems like I need to implement metadata routing in order to do this. However, I cannot successfully create a working dummy example:</p>
<pre><code>from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.utils.metadata_routing import MetadataRouter, MethodMapping
from scipy.sparse import csr_matrix
import pandas as pd
import numpy as np
from sklearn import set_config
# Enable metadata routing globally
set_config(enable_metadata_routing=True)
class DummyTransformer(BaseEstimator, TransformerMixin):
def transform(self, X, feature_index_sec=None):
if feature_index_sec is None:
raise ValueError("Missing required argument 'feature_index_sec'.")
print(f"Received feature_index_sec with shape: {feature_index_sec.shape}")
return X
def fit(self, X, y=None, feature_index_sec=None, **fit_params):
return self
def fit_transform(self, X, y=None, feature_index_sec=None):
# self.fit(X, y) - fit is stateless in this transformer!
return self.transform(X, feature_index_sec)
def get_metadata_routing(self):
print("Configuring metadata routing for DummyTransformer")
router = (
MetadataRouter(owner=self.__class__.__name__)
.add_self_request(self) # Declare this transformer as a consumer
)
return router
# Declare explicitly what metadata is required for each method
def set_fit_request(self, **metadata):
self._fit_request = metadata
return self
def set_transform_request(self, **metadata):
self._transform_request = metadata
return self
def set_fit_transform_request(self, **metadata):
self._fit_transform_request = metadata
return self
# Dummy data
feature_matrix = csr_matrix(np.random.rand(10, 5))
train_idx = pd.DataFrame({'FileDate_ClosingPrice': np.random.rand(10)})
# Configure metadata requests for DummyTransformer
transformer = DummyTransformer().set_fit_transform_request(feature_index_sec=True)
# Minimal pipeline
pipe = Pipeline(steps=[('DummyTransformer', transformer)])
# Test fit_transform
pipe.fit_transform(feature_matrix, DummyTransformer__feature_index_sec=train_idx)
</code></pre>
<p>The example above results in an error: <code>Pipeline.fit_transform got unexpected argument(s) {'DummyTransformer__feature_index_sec'}, which are not routed to any object. </code></p>
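<p>For completeness, the variant I intended to try next, based on my (possibly wrong) reading that routed metadata is passed under its own name rather than with a step prefix:</p>
<pre><code># sketch of the un-prefixed variant
transformer = DummyTransformer().set_transform_request(feature_index_sec=True)
pipe = Pipeline(steps=[('DummyTransformer', transformer)])
pipe.fit_transform(feature_matrix, feature_index_sec=train_idx)
</code></pre>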
|
<python><scikit-learn><scikit-learn-pipeline>
|
2025-01-20 20:38:11
| 1
| 2,350
|
Jake Drew
|
79,372,577
| 673,600
|
Making long link clickable in Polars output
|
<p>I'm displaying a data frame in Colab, but I find that while the URL is shown and formatted, the underline and hyperlink don't extend across the entire URL (which wraps over a couple of lines), and when I click it, it goes to the truncated link.</p>
<pre class="lang-py prettyprint-override"><code>result = {
"score": 1.0,
"url": "https://www.reallyreallyreallylongurlxyx.com",
"url2": "https://www.reallyreallyreallylongurlxyxreallyreallyreallylongurlxyxreallyreallyreallylongurlxyx.com",
}
df = pl.DataFrame([result], strict=False)
</code></pre>
<p><a href="https://i.sstatic.net/oJJexOoA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJJexOoA.png" alt="enter image description here" /></a></p>
|
<python><python-polars><polars>
|
2025-01-20 20:29:40
| 1
| 6,026
|
disruptive
|
79,372,227
| 14,454,397
|
Adjust the bars in the chart based on the subset condition
|
<p>I am facing an issue as shown in the image below. I am trying to generate a bar chart with the treatments as groups, driven by a drop-down: if I unselect a treatment, its bar should disappear and only the bars of the remaining treatments should be shown. However, when I unselect a treatment, gaps are created. How can I avoid them? Please help.</p>
<p><a href="https://i.sstatic.net/AJ9sj9k8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJ9sj9k8.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>def plot_(data,selected_trt):
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import streamlit as st
# Create the bar chart
plt.figure(figsize=(10, 7))
# plt.barh(data['USUBJID_numeric'], data['max_ady'], color='skyblue', edgecolor='black', alpha=0.8)
data = data[data['TRT01P'].isin(selected_trt)]
# Example dataset (replace with your data's column of treatment groups)
unique_treatments = data['TRT01P'].unique()
# Define a set of colors (expand or change as needed)
color_palette = plt.cm.Pastel1.colors # Use a colormap (e.g., Pastel1, Set2, etc.)
num_colors = len(color_palette)
# Generate the treatment_colors dictionary dynamically
treatment_colors = {
treatment: color_palette[i % num_colors] # Cycle through the color palette if treatments exceed colors
for i, treatment in enumerate(unique_treatments)
}
# Assign colors based on the treatment group
data['color'] = data['TRT01P'].map(treatment_colors)
xupper = data['max_ady'].max()
# Create the bar chart
plt.figure(figsize=(10, 7))
# for _, row in data.iterrows():
for _, row in data.iterrows():
plt.barh(
row['USUBJID_numeric'], # Subject on the y-axis
row['max_ady'], # Days of treatment on the x-axis
color=row['color'], # Color based on treatment group
edgecolor='black',
alpha=0.8
)
legend_elements = [
Line2D([0], [0], color=color, lw=4, label=treatment)
for treatment, color in treatment_colors.items()
]
plt.legend(handles=legend_elements, title='Treatments', loc='best')
# Update the y-axis ticks to show original USUBJID values
plt.yticks(data['USUBJID_numeric'], data['USUBJID'])
# Add labels and title
plt.xlabel('Days of Treatment')
plt.ylabel('Subjects')
# plt.title('Swimmer Plot for Treatment Exposure')
# Adjust axis limits to start at (0, 0)
plt.margins(x=0, y=0.01)
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.xlim(0, xupper+100)
# Display the plot in Streamlit
st.pyplot(plt.gcf())
</code></pre>
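<p>My current guess is that the gaps come from reusing the original <code>USUBJID_numeric</code> values as y positions after filtering, so I am thinking of deriving contiguous positions from the filtered data instead, along these lines (untested sketch):</p>
<pre class="lang-py prettyprint-override"><code># after filtering, plot at contiguous positions and relabel the ticks
data = data.reset_index(drop=True)
positions = range(len(data))

for pos, (_, row) in zip(positions, data.iterrows()):
    plt.barh(pos, row['max_ady'], color=row['color'],
             edgecolor='black', alpha=0.8)

plt.yticks(list(positions), data['USUBJID'])
</code></pre>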
|
<python><python-3.x><streamlit>
|
2025-01-20 17:44:33
| 1
| 3,465
|
jkatam
|
79,372,210
| 20,302,906
|
Path.touch() patch assertion failed to find calls
|
<p>I'm creating files with <code>pathlib.Path.touch</code> in the following method, which I'd like to assert in a unit test using the <code>unittest.mock.patch</code> decorator:</p>
<p><em>sessions.py</em></p>
<pre><code>def generate_folders(paths: list):
for p in paths:
p.mkdir()
Path(p / "output").mkdir()
Path(p / "output" / "output_file").touch()
Path(p / "session_file").touch()
</code></pre>
<p><em>test</em></p>
<pre><code> @patch("os.mkdir")
@patch("pathlib.Path.touch")
def test_folder_creation(self, touch_mock, mkdir_mock):
tags = ["concepts", "models", "animations", "vfx", "sfx", "vo", "music"]
paths = sessions.generate_paths(category="character", name="player", tags=tags)
sessions.generate_folders(paths)
for p in paths:
mkdir_mock.assert_any_call(p, 511)
mkdir_mock.assert_any_call(p / "output", 511)
touch_mock.assert_any_call(Path(p / "output" / "output_file"), 511)
touch_mock.assert_any_call(Path(p / "session_file"), 511)
</code></pre>
<p>I can assert folder creation by <code>os.mkdir</code> without any problem and I can also assert that <code>pathlib.Path.touch</code> has been called. However, asserting specific calls with <code>assert_any_call()</code> isn't possible. My test throws this output:</p>
<pre><code>ssF...
======================================================================
FAIL: test_folder_creation (sessions.tests.sessions_tests.TestSessionsCreation.test_folder_creation)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\unittest\mock.py", line 1424, in patched
return func(*newargs, **newkeywargs)
File "C:\Users\juank\dev\projects\python\gamedev_eco\sessions\tests\sessions_tests.py", line 51, in test_folder_creation
touch_mock.assert_any_call(Path(p / "output" / "output_file"), 511)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\unittest\mock.py", line 1048, in assert_any_call
raise AssertionError(
'%s call not found' % expected_string
) from cause
AssertionError: touch(WindowsPath('C:/Users/juank/dev/projects/python/gamedev_eco/sessions/character/player/concepts/output/output_file'), 511) call not found
----------------------------------------------------------------------
Ran 6 tests in 0.004s
FAILED (failures=1, skipped=2)
</code></pre>
<p>What confuses me about this test result is that, even though I checked with the <code>pdb</code> debugger that the method input values are the same, the test still fails. I also double-checked that the decorators were patching <code>os.mkdir</code> and <code>pathlib.Path.touch</code> respectively.</p>
<p>On the other hand, I tried patching <code>touch</code> with a <code>with</code> statement, but the test doesn't pass either. I can't tell whether this issue is related to my code or to my test, so I'd like to ask the community for hints please. What am I doing wrong? What am I missing?</p>
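<p>One thing I have not fully dug into yet is how the calls are actually being recorded on the mock; adding something like this to the test dumps the raw call list (debugging sketch):</p>
<pre><code>print(touch_mock.mock_calls)       # every recorded call, with args/kwargs
print(touch_mock.call_args_list)   # just the argument tuples
</code></pre>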
|
<python><mocking><python-unittest>
|
2025-01-20 17:38:03
| 1
| 367
|
wavesinaroom
|
79,372,122
| 22,538,132
|
How to check if a cuboid is inside camera frustum
|
<p>I want to check if an object (defined by four corners in 3D space) is inside the Field of View of a camera pose.</p>
<p>I saw this <a href="https://math.stackexchange.com/questions/4144827/determine-if-a-point-is-in-a-cameras-field-of-view-3d">solution</a> and tried to implement it, but I missed something. Can you please tell me how to fix it?</p>
<blockquote>
<p>the provided 4 points are 2 inside, 2 outside camera frustum.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from typing import Tuple
class CameraFrustum:
def __init__(
self, d_dist: float = 0.3, fov: Tuple[float, float] = (50, 40)
):
self.d_dist = d_dist
self.fov = fov
self.frustum_vectors = None
self.n_sight = None
self.u_hvec = None
self.v_vvec = None
def compute_frustum_vectors(self, cam_pose: np.ndarray):
fov_horizontal, fov_vertical = np.radians(self.fov[0] / 2), np.radians(
self.fov[1] / 2
)
self.cam_position = cam_pose[:3, 3]
cam_orientation = cam_pose[:3, :3]
base_vectors = np.array(
[
[np.tan(fov_horizontal), np.tan(fov_vertical), 1],
[-np.tan(fov_horizontal), np.tan(fov_vertical), 1],
[-np.tan(fov_horizontal), -np.tan(fov_vertical), 1],
[np.tan(fov_horizontal), -np.tan(fov_vertical), 1],
]
)
base_vectors /= np.linalg.norm(base_vectors, axis=1, keepdims=True)
self.frustum_vectors = np.dot(base_vectors, cam_orientation.T)
self.n_sight = np.mean(self.frustum_vectors, axis=0)
self.u_hvec = np.cross(
np.mean(self.frustum_vectors[:2], axis=0), self.n_sight
)
self.v_vvec = np.cross(
np.mean(self.frustum_vectors[1:3], axis=0), self.n_sight
)
def project_point(
self, p_point: np.ndarray, cam_orientation: np.ndarray
) -> bool:
if self.frustum_vectors is None:
self.compute_frustum_vectors(cam_orientation)
#
p_point_vec = p_point - self.cam_position
p_point_vec /= np.linalg.norm(p_point_vec, axis=-1, keepdims=True)
#
d_prime = np.dot(p_point_vec, self.n_sight)
if abs(d_prime) < 1e-6:
print("point is not in front of the camera")
return False
elif d_prime < self.d_dist:
print("point is too close to camera")
return False
#
p_prime_vec = self.d_dist *(
p_point_vec / d_prime
) - self.d_dist * self.n_sight
u_prime = np.dot(p_prime_vec, self.u_hvec)
v_prime = np.dot(p_prime_vec, self.v_vvec)
#
width = 2 * self.d_dist * np.tan(np.radians(self.fov[0]) / 2)
height = 2 * self.d_dist * np.tan(np.radians(self.fov[1]) / 2)
u_min, u_max = -width / 2, width / 2
v_min, v_max = -height / 2, height / 2
if not (u_min < u_prime < u_max):
return False
if not (v_min < v_prime < v_max):
return False
return True
cam_frustum = CameraFrustum()
pts = np.array(
[
[1.54320189, -0.35068437, -0.48266792],
[1.52144436, 0.44898697, -0.48990338],
[0.32197813, 0.41622155, -0.50429738],
[0.34373566, -0.38344979, -0.49706192],
]
)
cam_pose = np.array(
[
[-0.02719692, 0.9447125, -0.3271947, 1.25978471],
[0.99958918, 0.02274412, 0.0, 0.03276859],
[-0.00904433, -0.32711006, -0.94495695, 0.4514743],
[0.0, 0.0, 0.0, 1.0],
]
)
for pt in pts:
res = cam_frustum.project_point(pt, cam_pose)
print(res)
</code></pre>
<p><a href="https://i.sstatic.net/7AlGWlYe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AlGWlYe.png" alt="enter image description here" /></a>
Can you please tell me how I can fix this? Thanks.</p>
<p>(The code above is my attempt at implementing the linked approach.)</p>
|
<python><math><graphics><3d><camera>
|
2025-01-20 17:04:31
| 2
| 304
|
bhomaidan90
|
79,372,057
| 2,256,085
|
aggregate 3D array using zone and time index arrays
|
<p>Using the small example below, I'm seeking to aggregate (sum) the values in the 3D <code>dat_arr</code> array using two other arrays to guide the grouping. The first index of <code>dat_arr</code> is related to time. The second and third indices are related to spatial (X, Y) locations. How can I sum values in <code>dat_arr</code> such that the temporal binning is guided by the contents of <code>tim_idx</code> (same length as the first dimension of <code>dat_arr</code>) and the spatial binning uses <code>zon_arr</code> (has same dimensions as the last two indices of <code>dat_arr</code>)?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
zon_arr = np.zeros((3,5))
tim_idx = np.array([0,0,1,1,2,2,3,3])
# set up arbitrary zones
zon_arr[1, :3] = 1
zon_arr[1, 3:] = 2
# plt.imshow(zon_arr)
# plt.show()
# generate arbitrary array with data
# first index = time; last 2 indices represent X-Y pts in space
# last two indices must have same dims as zon_arr
np.random.seed(100)
dat_arr = np.random.rand(8, 3, 5)
</code></pre>
<p>So the output I'm expecting would give me the sum of the values contained in <code>dat_arr</code> for each unique value in <code>tim_idx</code> and <code>zon_arr</code>. In other words, I would expect output with 3 values (corresponding to the 3 zones) for each of the 4 unique time values in <code>tim_idx</code>.</p>
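<p>For clarity, this is the plain-loop reference implementation of the output I am after (shape: 4 time bins × 3 zones); I am looking for a faster or more idiomatic equivalent:</p>
<pre><code>n_times = len(np.unique(tim_idx))
n_zones = len(np.unique(zon_arr))
out = np.zeros((n_times, n_zones))

for t in np.unique(tim_idx):
    # sum over all time steps that fall into bin t
    slab = dat_arr[tim_idx == t].sum(axis=0)
    for z in np.unique(zon_arr):
        out[int(t), int(z)] = slab[zon_arr == z].sum()

print(out.shape)  # (4, 3)
</code></pre>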
|
<python><numpy>
|
2025-01-20 16:43:19
| 1
| 469
|
user2256085
|
79,371,972
| 5,489,241
|
How to backfill jobs with different requirements to maximise CPU usage, with python joblib?
|
<p><strong>Here is the context to what I am trying to do:</strong></p>
<ul>
<li>I have several data blocks, that each consist of either 6 items or 24
items, and each item is analyzed separately. The analysis code is not
mine. For reasons beyond my control, each item needs to be processed
single-threaded.</li>
<li>But I have made a script/function that processes one such
data block at a time, by spawning 6 or 24 processes as needed, one
process per item, using <code>python.joblib.Parallel()</code>. It works great.</li>
<li>I can know in advance if a block has 6 or 24 items.</li>
<li>The computation of a single item lasts approximately 4hrs. A block of items running in parallel lasts pretty much the same. So an ideal case for parallelization.</li>
<li>I have a single workstation with a total of 40 threads available to
use. So I cannot run more than one 24-item block, but it leaves enough room for two 6-item blocks to run as well. And if the big blocks finish early, there would be room for 6 of the small blocks to run at once.</li>
<li>The number of 24-item and 6-item data blocks in the pool is not
necessarily equal or fixed.</li>
</ul>
<p><strong>Here is what I'd like to do:</strong></p>
<p>I would like to run a whole pool of mixed-size data blocks from a wrapper script, while minimizing the overall time it takes, by not having too many idle cores.</p>
<ul>
<li><p>My initial half-baked idea was to have the data blocks in two pools, by size, and have two calls to <code>Parallel</code> to run my block-processing function, one submitting single jobs from the pool of large blocks and the other submitting two jobs from the pool of small blocks. But then I realized that <code>Parallel</code> will wait for tasks to complete, so the second pool would only run after the first pool is done.</p>
</li>
<li><p>I know cluster computing schedulers handle this kind of stuff, but I am on a single workstation and don't have a scheduler. The data files are too big for our network bandwidth, so buying some cloud computing and scheduling the jobs there is not practical at all.</p>
</li>
<li><p>Dissolving the data blocks and creating one massive pool of single items from across all the data blocks would probably be possible and it would be the easiest and most effective to then parallelize, but it would be a non-negligible amount of effort on my part rethinking and refactoring my processing code to accommodate that. I may do it in the long term, but in the shorter term I'd like another way.</p>
</li>
<li><p>The last option I can think of, is... to have two wrapper script instances, one instance for the large blocks sending single tasks, and one instance for the small blocks, sending pairs of tasks, and rely on bash syntax to have them run at the same time. But that feels... unsatisfactory.</p>
</li>
</ul>
<p>Is there a tidy way to do this, without over-complicating my setup?</p>
<p>PS: Actually I don't even know if I can nest calls to <code>Parallel</code> with the innermost one spawning more processes than the outermost one's <code>n_jobs</code>; I haven't tried it yet, as I realized my initial plan wasn't going to work and I haven't come up with a better one yet. (And I am aware it is probably bad programming design.)</p>
<p><strong>System</strong></p>
<p>python 3.12.8, on an old HP workstation with Ubuntu 22.04 LTS. I'm using the default <code>Parallel</code> backend, I am not IT-savvy enough to make an informed choice there.</p>
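<p>For concreteness, the rough "poor man's scheduler" I am picturing is sketched below: one launcher thread per block, gated by a shared core budget, with my existing block-processing function (called <code>process_block</code> here, a placeholder name) left untouched. It is untested, and I do not know whether nesting <code>Parallel</code> inside threads like this behaves well, which is partly what I am asking.</p>
<pre><code>import threading

TOTAL_CORES = 40
lock = threading.Condition()
free_cores = TOTAL_CORES

def run_block(block, needed):
    global free_cores
    try:
        process_block(block)            # existing function; spawns 6 or 24 processes via Parallel
    finally:
        with lock:
            free_cores += needed        # give the cores back
            lock.notify_all()

threads = []
# submit big blocks first so they are not starved by the small ones
for block in sorted(blocks, key=lambda b: -b.n_items):
    needed = block.n_items              # 6 or 24 (blocks / n_items are placeholders)
    with lock:
        while free_cores < needed:      # wait until enough cores are free
            lock.wait()
        free_cores -= needed
    t = threading.Thread(target=run_block, args=(block, needed))
    t.start()
    threads.append(t)

for t in threads:
    t.join()
</code></pre>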
|
<python><joblib>
|
2025-01-20 16:13:15
| 0
| 413
|
cymon
|
79,371,835
| 4,873,946
|
pymatgen module not found in vscode
|
<p>I have installed pymatgen using the instructions on their website: <a href="https://pymatgen.org/installation.html" rel="nofollow noreferrer">https://pymatgen.org/installation.html</a></p>
<p>Then I go to VS Code and in the terminal I activate the pymatgen conda environment with <code>source activate pymatgen</code>, which makes my prompt start with <code>(pymatgen)</code>.</p>
<p>I open a <code>ipynb</code> which I understand it is a jupyter notebook and when I try to run the first line:</p>
<p><code>import pymatgen.core as mg</code></p>
<p>I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import pymatgen.core as mg
ModuleNotFoundError: No module named 'pymatgen'
</code></pre>
<p>What was the point of running this command from the install instructions if not to install the pymatgen module?</p>
<p><code>conda install --channel conda-forge pymatgen</code></p>
<p>To make things weirder, I tried activating the pymatgen environment in my Linux terminal, running <code>python3</code>, and importing pymatgen at the prompt:</p>
<pre><code>import pymatgen.core as mg
</code></pre>
<p>This worked without errors.</p>
<p>If I simply use <code>python</code>, I get the module-not-found error.</p>
<p>Why is there a difference, and how can I force VS Code to use <code>python3</code> to run the commands? I just want to run and learn how to use pymatgen.</p>
<p>Here is the error:
<a href="https://i.sstatic.net/JpcDrAg2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpcDrAg2.png" alt="enter image description here" /></a></p>
|
<python><pymatgen>
|
2025-01-20 15:32:35
| 0
| 454
|
lucian
|
79,371,747
| 16,527,170
|
Install Ta-Lib Library in Google Colab Current Latest: 3.11.11 in Jan-2025
|
<p>When the Google Colab runtime moved to Python 3.11.11 in January 2025, the TA-Lib library stopped working.</p>
<p>Earlier I used to install TA-Lib in Colab as below:</p>
<pre><code>url = 'https://anaconda.org/conda-forge/libta-lib/0.4.0/download/linux-64/libta-lib-0.4.0-h166bdaf_1.tar.bz2'
!curl -L $url | tar xj -C /usr/lib/x86_64-linux-gnu/ lib --strip-components=1
url = 'https://anaconda.org/conda-forge/ta-lib/0.4.19/download/linux-64/ta-lib-0.4.19-py310hde88566_4.tar.bz2'
!curl -L $url | tar xj -C /usr/local/lib/python3.10/dist-packages/ lib/python3.10/site-packages/talib --strip-components=3
import talib
</code></pre>
<p>Current Error:</p>
<pre><code> % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4087 0 4087 0 0 7283 0 --:--:-- --:--:-- --:--:-- 7298
100 517k 100 517k 0 0 367k 0 0:00:01 0:00:01 --:--:-- 1038k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4159 0 4159 0 0 10406 0 --:--:-- --:--:-- --:--:-- 10397
100 392k 100 392k 0 0 330k 0 0:00:01 0:00:01 --:--:-- 3510k
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-5-0c7c4394e52f> in <cell line: 0>()
3 url = 'https://anaconda.org/conda-forge/ta-lib/0.4.19/download/linux-64/ta-lib-0.4.19-py310hde88566_4.tar.bz2'
4 get_ipython().system('curl -L $url | tar xj -C /usr/local/lib/python3.10/dist-packages/ lib/python3.10/site-packages/talib --strip-components=3')
----> 5 import talib
ModuleNotFoundError: No module named 'talib'
</code></pre>
<p>How can I install the TA-Lib library on the Python 3.11.11 Colab runtime?</p>
<pre><code>!python3 --version
Python 3.11.11
</code></pre>
<p>Earlier Post referred: <a href="https://stackoverflow.com/questions/49648391/how-to-install-ta-lib-in-google-colab">How to install TA-Lib in Google Colab?</a></p>
|
<python><google-colaboratory><ta-lib>
|
2025-01-20 15:09:36
| 2
| 1,077
|
Divyank
|
79,371,673
| 14,498,998
|
RUN pip install --no-cache-dir -r requirements.txt installing but no working with Docker
|
<p>I've been trying to use Docker for a couple of projects; one is a Django project and the other is a Python Telegram bot. In both cases the problem is that, no matter how I copy or install requirements.txt into the container, the libraries apparently get installed, but then all of a sudden I get errors like this in the main Python container:</p>
<blockquote>
<p>telegram-bot-container | File "/app/run.py", line 15, in <module>
telegram-bot-container | import logging, mysql_handler, cmc_handler, constants
telegram-bot-container | File "/app/mysql_handler.py", line 2, in <module>
telegram-bot-container | from decouple import config
telegram-bot-container | ModuleNotFoundError: No module named 'decouple'</p>
</blockquote>
<p>And I have to install all the missing libraries like this, as if requirements.txt were redundant:</p>
<p><code>pip install python-telegram-bot mysql-connector-python python-coinmarketcap python-decouple</code></p>
<p>Please help me identify the problem.</p>
<p>My whole Dockerfile:</p>
<pre><code>FROM python:3.10-slim
WORKDIR /app
COPY ./requirements.txt /app/
RUN python -m pip install --upgrade pip && \
pip install --no-cache-dir -r requirements.txt || echo "Skipping problematic package." && \
pip install python-telegram-bot mysql-connector-python python-coinmarketcap
COPY . /app
EXPOSE 8081
CMD ["python", "run.py" ]
</code></pre>
<p>I tried rebuilding with and without caching.
I can see in the logs that the packages are being installed.</p>
|
<python><python-3.x><django><docker><pip>
|
2025-01-20 14:42:13
| 1
| 313
|
Alin
|
79,371,578
| 4,724,057
|
Sending multiple file uploads as input to Azure function app http trigger
|
<p>I am trying to send multiple file uploads to a function app (http trigger). The http trigger code is below,</p>
<pre><code> app = func.FunctionApp(http_auth_level=func.AuthLevel.ADMIN)
@app.route(route="test_http_trigger", auth_level=func.AuthLevel.ADMIN)
def test_http_trigger(req: func.HttpRequest) -> func.HttpResponse:
try:
if list(req.files.values())[0].filename.split('.')[-1] == 'zip':
logging.info(req.files.values())
return func.HttpResponse(str(req.files.values()), mimetype="text/html")
elif list(req.files.values())[0].filename.split('.')[-1] == 'txt':
logging.info(req.files.values())
return func.HttpResponse(str(req.files.values()), status_code=200, mimetype="application/json")
else:
return func.HttpResponse('0')
except Exception as e:
return func.HttpResponse(f"Internal server error: {e}", status_code=500)
</code></pre>
<p>I tried to send multiple files as input using the curl command below,</p>
<pre><code>curl -X POST http://localhost:7071/api/test_http_trigger -F "file=@file1.txt" -F "file=@file2.txt"
</code></pre>
<p>This only processes the first file sent. I would like to know the right way to send multiple files to function app.</p>
<p>Thanks,</p>
|
<python><azure><azure-functions><multipartform-data><azure-http-trigger>
|
2025-01-20 14:05:36
| 1
| 2,825
|
MenorcanOrange
|
79,371,534
| 10,722,752
|
Have only 2 classes but why am I getting ValueError: Target is multiclass but average = 'binary'. Please choose another average setting
|
<p>I am trying to build an employee churn prediction model using <code>GridSearchCV</code> on <code>OneClassSVM</code>. My code is as below:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.svm import OneClassSVM
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import precision_score, recall_score, f1_score
grid = {
'nu' : [0.02, 0.03],
'kernel' : ['poly', 'sigmoid', 'linear', 'rbf'],
'degree' : [1,3,5,7],
    'gamma' : [.01, .1, 1, 10, 500],
    'coef0' : [1,2,3]
}
mod = GridSearchCV(OneClassSVM(),
grid,
cv = 3,
scoring = ['f1', 'precision', 'recall'],
refit = 'f1',
return_train_score = True)
mod.fit(x_train, y_train)
</code></pre>
<p>Here the y has only 0s and 1s (essentially resigned or not), but I am not sure why I am getting:</p>
<p><code>ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted']</code></p>
<p>Can someone please help me with this?</p>
|
<python><scikit-learn><svm><gridsearchcv>
|
2025-01-20 13:50:37
| 1
| 11,560
|
Karthik S
|
79,371,384
| 12,234,535
|
Cartopy doesn't render left and right longitude labes
|
<p>I'm using <code>cartopy</code> to draw a geomap. This is how I set up <code>graticule</code> rendering:</p>
<pre class="lang-py prettyprint-override"><code> if graticule:
gl = ax.gridlines(
draw_labels=True,
linewidth=0.8,
color='gray',
alpha=0.5,
linestyle='--',
x_inline=False,
y_inline=True
)
gl.xlocator = mticker.FixedLocator(np.arange(-180, 181, 10))
gl.ylocator = mticker.FixedLocator(np.arange(0, 91, 10))
gl.top_labels = False
gl.bottom_labels = True
gl.left_labels = True
gl.right_labels = True
gl.xlabel_style = {'size': 10, 'color': 'gray'}
gl.ylabel_style = {'size': 10, 'color': 'gray'}
</code></pre>
<p><code>cartopy</code> doesn't draw longitude labels to the left and to the right of the map. Only bottom and top (if on) labels are drawn:
<a href="https://i.sstatic.net/655lGQWB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/655lGQWB.png" alt="geomap" /></a></p>
<p>With <code>ax.set_extent([0, 40, 75, 85], crs=ccrs.PlateCarree())</code> added:
<a href="https://i.sstatic.net/M6ESZFJp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6ESZFJp.png" alt="geomap with extent" /></a></p>
<p>Latitude labels are okay being inline.</p>
|
<python><matplotlib><plot><latitude-longitude><cartopy>
|
2025-01-20 12:52:21
| 1
| 379
|
Outlaw
|
79,371,127
| 17,837,614
|
Module does not explicitly export attribute [attr-defined]
|
<p>In <code>bar.py</code> <code>foo</code> is imported</p>
<pre class="lang-py prettyprint-override"><code># bar.py
from path import foo
</code></pre>
<p>In my current file <code>bar</code> is imported and I use the <code>get_id</code> function of <code>foo</code>:</p>
<pre class="lang-py prettyprint-override"><code>from path import bar
bar.foo.get_id()
</code></pre>
<hr />
<p>Mypy is complaining</p>
<pre><code>error: Module "bar" does not explicitly export attribute "foo" [attr-defined]
</code></pre>
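<p>For reference, the only workarounds I am aware of are making the re-export in <code>bar.py</code> explicit, either via the redundant-alias form or via <code>__all__</code> (I have not decided which fits our conventions):</p>
<pre class="lang-py prettyprint-override"><code># bar.py - mark foo as an intentional re-export
from path import foo as foo   # redundant-alias form

# or, equivalently:
from path import foo
__all__ = ["foo"]
</code></pre>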
|
<python><python-typing><mypy>
|
2025-01-20 11:14:55
| 2
| 405
|
Sujay
|
79,371,056
| 3,336,423
|
Is it possible to make a pointer with bound parameter values?
|
<p>Consider this <code>ServicesManager</code> class that handles some services (basically, a function identified by a name):</p>
<pre><code>class ServicesManager:
def __init__(self):
self.services = dict()
def add_service(self,name,func):
self.services[name] = func
def call_service(self,name):
self.services[name](name)
</code></pre>
<p>This is an MCVE to help ask my question. <code>ServicesManager</code> is actually a ROS <code>Node</code> class; it cannot be modified as part of the answer to my question.</p>
<p>Here is a class using <code>ServicesManager</code> with an example of usage:</p>
<pre><code>class B(ServicesManager):
def __init__(self):
super(B, self).__init__()
self.add_service("foo",self.service_foo)
self.add_service("bar",self.service_bar)
def service_foo(self,name):
# do something
print(name)
def service_bar(self,name):
# do something
print(name)
b = B()
b.call_service("foo")
b.call_service("bar")
</code></pre>
<p>The <code># do something</code> parts of <code>service_foo</code> and <code>service_bar</code> share a very common piece of code that I would like to factor out; I'd like an object to be passed to a generic function. The code would look like this:</p>
<pre><code>class B(ServicesManager):
class helper_object_foo():
def func1(self,name):
pass
def func2(self):
pass
class helper_object_bar():
def func1(self,name):
pass
def func2(self):
pass
def __init__(self):
super(B, self).__init__()
#self.add_service("foo",self.service_foo)
#self.add_service("bar",self.service_bar)
# How to call add_service???
self.add_service("foo",?? self.service_generic_func using helper_object_foo())
self.add_service("bar",?? self.service_generic_func using helper_object_bar())
def service_generic_func(self,name,helper_object):
helper_object.func1(name)
helper_object.func2()
print(name)
</code></pre>
<p>Is there a way to do this? What would be the right syntax for calling <code>add_service</code>? Would a lambda help?</p>
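<p>To show the direction I am leaning towards (unverified), the placeholders in <code>__init__</code> would become something like <code>functools.partial</code> calls, or lambdas that close over the helper object:</p>
<pre><code>from functools import partial

# inside B.__init__: bind the helper up front, leave `name` to be supplied
# by ServicesManager.call_service()
self.add_service(
    "foo",
    partial(self.service_generic_func, helper_object=self.helper_object_foo()),
)
self.add_service(
    "bar",
    lambda name: self.service_generic_func(name, self.helper_object_bar()),
)
</code></pre>
<p>But I am unsure whether this is considered good style, hence the question.</p>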
|
<python>
|
2025-01-20 10:55:37
| 1
| 21,904
|
jpo38
|
79,370,970
| 8,508,117
|
os.device_encoding changes when called from a subprocess, causing decoding error on Windows. How to force encoding in subprocess?
|
<p>I have the following Python 3.7 code in an imported package, not modifiable by myself, that reads and decodes the system's C compiler name provided by distutils:</p>
<h4>subproc.py</h4>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import locale
import subprocess
from distutils.ccompiler import new_compiler
ccompiler = new_compiler()
ccompiler.initialize()
cc = subprocess.check_output(f"{ccompiler.cc}", stderr=subprocess.STDOUT, shell=True)
encoding = os.device_encoding(sys.stdout.fileno()) or locale.getpreferredencoding()
print("Encoding:", encoding)
compiler_name = cc.decode(encoding).partition("\n")[0].strip()
print("Compiler name:", compiler_name)
</code></pre>
<p>When calling it directly, i.e. running <code>subproc.py</code> itself, everything works fine. The compiler name is correctly identified as <code>'Microsoft (R) C/C++-Optimierungscompiler Version 19.42.34433 für x64'</code> (I guess the <code>ü</code> is the origin of the issue).</p>
<p>However, when I call it with subprocess.Popen(), os.device_encoding returns <code>None</code> instead of <code>cp850</code>, causing the program to default to the Windows encoding of <code>cp1252</code>, which then causes <code>cc.decode(encoding)</code> to raise <em>"UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 62: character maps to <undefined>"</em>.</p>
<p>Here is how I start the subprocess:</p>
<h4>call_subprocess.py</h4>
<pre class="lang-py prettyprint-override"><code>import subprocess
subprocess.Popen(
[
"python",
"C:/path/to/subproc.py",
],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
</code></pre>
<p>My understanding is that <code>os.device_encoding(sys.stdout.fileno())</code> cannot find an encoding, as the subprocess is running in the background, without a terminal. Furthermore, Windows will always provide <code>cp1252</code> when queried with <code>locale.getpreferredencoding()</code>.</p>
<p>Since I cannot edit the code within the external package, is there a way to call the subprocess to force either one of these commands to return <code>cp850</code>?</p>
<h3>Variants I have tried to solve the problem</h3>
<ol>
<li>Explicitly set encoding in Popen:
<pre class="lang-py prettyprint-override"><code>subprocess.Popen(
...
text=True,
encoding="cp850",
)
</code></pre>
</li>
<li>Explicitly set <code>PYTHONIOENCODING</code> in subprocess environment:
<pre class="lang-py prettyprint-override"><code>environ = os.environ.copy()
environ['PYTHONIOENCODING'] = 'utf-8'
...
subprocess.Popen(
...
env=environ,
encoding='utf-8',
)
</code></pre>
</li>
<li>Use <code>subprocess.run()</code> instead of <code>subprocess.Popen()</code></li>
<li>Various combinations of the solutions above.</li>
</ol>
<h3>Resources I have already looked at</h3>
<ul>
<li><a href="https://bugs.python.org/issue27179" rel="nofollow noreferrer">Subprocess uses wrong encoding on Windows</a></li>
<li><a href="https://bugs.python.org/issue34618" rel="nofollow noreferrer">Encoding error running in subprocess with captured output</a></li>
<li><a href="https://technicqa.com/how-to-change-the-locale-preferred-encoding-in-python/" rel="nofollow noreferrer">Changing the locale preferred encoding for the computer itself</a>: Control panel > Clock and Region > Region > Administrative > Change system locale > Check Beta: Use Unicode UTF-8 > Reboot -> Works, but is undesired as the code must be executable on different machines without individual setup every time.</li>
<li>Since in my case the subprocess was fairly isolated from other code and hat its own startup function, I used the following lines before the first locale import to override the return value of <code>locale.getdefaultlocale()</code> (<a href="https://stackoverflow.com/a/34345136/8508117">Source</a>):</li>
</ul>
<pre><code> # Override locale.getdefaultlocale() for the subprocess.
# This is necessary to avoid issues with the default locale on Windows.
# It might cause issues on computers not in western countries, that do not use cp850.
import _locale
_locale._getdefaultlocale = lambda *args: ["en_US", "cp850"]
</code></pre>
<ul>
<li>The other option I have found is not using <code>stdout=subprocess.PIPE,</code> at all, thanks to the comment of Barmar</li>
</ul>
|
<python><python-3.x><unicode><encoding><subprocess>
|
2025-01-20 10:27:02
| 2
| 441
|
Energeneer
|
79,370,641
| 4,379,593
|
How to enforce ASCII-only identifiers in Python while allowing UTF-8 strings?
|
<p>I want to configure Python so that it raises an error when encountering non-ASCII characters in identifiers (e.g., variable names, function names) but still accepts UTF-8 encoded strings (e.g., "Привет, мир!"). For example:</p>
<pre><code># This should raise an error
def тест():
pass
# This should work
text = "Привет, мир!"
</code></pre>
<p>I know about <code># -*- coding: ascii -*-</code>, but it blocks non-ASCII characters everywhere in the source code, including in string literals.</p>
<p>(and also the same question for jupyter notebook)</p>
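<p>If no interpreter-level switch exists, the fallback I am considering is a small AST-based check; a rough sketch of what I mean (it inspects identifiers only and ignores string literals):</p>
<pre><code>import ast

def find_non_ascii_identifiers(source: str):
    """Yield (lineno, name) for every non-ASCII identifier in the source."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        for attr in ("id", "name", "arg", "attr"):
            value = getattr(node, attr, None)
            if isinstance(value, str) and not value.isascii():
                yield node.lineno, value

code = 'def тест():\n    pass\n\ntext = "Привет, мир!"\n'
for lineno, name in find_non_ascii_identifiers(code):
    print(f"line {lineno}: non-ASCII identifier {name!r}")
</code></pre>
<p>But I would much prefer something built in (or an existing linter rule), especially for notebooks.</p>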
|
<python><utf-8><ascii>
|
2025-01-20 08:09:05
| 3
| 373
|
Филя Усков
|
79,370,632
| 25,413,271
|
Asyncio future, running Future objects
|
<p>I have a code:</p>
<pre><code>import asyncio as aio
async def coro(future: aio.Future):
print('Coro start')
await aio.sleep(3)
print('Coro finish')
future.set_result('coro result')
async def main():
future = aio.Future()
aio.create_task(coro(future))
await future
coro_result = future.result()
print(coro_result)
aio.run(main())
</code></pre>
<p>In <code>main()</code> I create an empty <code>aio.Future</code> object, then I create a task with <code>aio.create_task(coro(future))</code> using a coroutine which takes the <code>aio.Future</code> object. Then I 'run' the empty future with <code>await future</code>. Somehow this line runs the task instead of just awaiting the empty future! I don't understand how it works and why it goes like this, because I expected the line <code>await future</code> to run the empty future, not the task!</p>
<p>If I reorganize my <code>main()</code> like this:</p>
<pre><code>import asyncio as aio
async def coro(future: aio.Future):
print('Coro start')
await aio.sleep(3)
print('Coro finish')
future.set_result('coro result')
async def main():
future = aio.Future()
await aio.create_task(coro(future))
# await future
coro_result = future.result()
print(coro_result)
aio.run(main())
</code></pre>
<p>I get the same result but the code behaviour becomes much more explicit for me.</p>
|
<python><python-asyncio><coroutine>
|
2025-01-20 08:05:02
| 3
| 439
|
IzaeDA
|
79,370,497
| 17,837,614
|
Type annotate inside loop
|
<p>The mypy error is</p>
<pre><code>Need type annotation for "args" [var-annotated]
Need type annotation for "kwargs" [var-annotated]
</code></pre>
<p>and here is the piece of code</p>
<pre><code>expected_args: Optional[Sequence[Tuple[Any, ...]]]
expected_kwargs: Optional[Sequence[Dict[str, Any]]]
...
expected_args_iter = iter(expected_args or ())
expected_kwargs_iter = iter(expected_kwargs or ())
...
for args, kwargs in itertools.zip_longest(
expected_args_iter, expected_kwargs_iter, fillvalue={})
</code></pre>
<p>Where can I annotate "args" and "kwargs"?</p>
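<p>For reference, the only placement I could think of is declaring the annotations on their own lines just before the loop, which is legal syntax; I am unsure whether mypy then complains about the heterogeneous <code>fillvalue</code> instead:</p>
<pre><code>args: Tuple[Any, ...]
kwargs: Dict[str, Any]
for args, kwargs in itertools.zip_longest(
        expected_args_iter, expected_kwargs_iter, fillvalue={}):
    ...
</code></pre>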
|
<python><python-typing><mypy>
|
2025-01-20 06:58:18
| 1
| 405
|
Sujay
|
79,370,410
| 17,837,614
|
Invalid index type "str" for "dict[Literal[
|
<p>I created below types and used them to type annotate a dictionary.</p>
<pre><code>objectNames = Literal[
'TranslatableHtml',
'TranslatableUnicodeString',
'TranslatableSetOfUnicodeString',
'TranslatableSetOfNormalizedString',
]
objectClasses = Union[
Type[objects.TranslatableHtml],
Type[objects.TranslatableUnicodeString],
Type[objects.TranslatableSetOfUnicodeString],
Type[objects.TranslatableSetOfNormalizedString],
]
objects_dict: Dict[
objectNames, objectClasses
] = {}
</code></pre>
<p>Now, when I try to fill the <code>objects_dict</code> variable with the code below, Mypy gives an error because <code>clazz.__name__</code> is typed as <code>str</code>:</p>
<pre><code>for name, clazz in inspect.getmembers(objects):
cls._translatable_objects_dict[clazz.__name__] = clazz
</code></pre>
<p>Error</p>
<pre><code>error: Invalid index type "str" for "dict[Literal['TranslatableHtml', 'TranslatableUnicodeString', 'TranslatableSetOfUnicodeString', 'TranslatableSetOfNormalizedString'], Union[type[TranslatableHtml], type[TranslatableUnicodeString], type[TranslatableSetOfUnicodeString], type[TranslatableSetOfNormalizedString]]]"; expected type "Literal['TranslatableHtml', 'TranslatableUnicodeString', 'TranslatableSetOfUnicodeString', 'TranslatableSetOfNormalizedString']" [index]
</code></pre>
|
<python><python-typing><mypy>
|
2025-01-20 06:09:02
| 0
| 405
|
Sujay
|
79,370,096
| 9,801,811
|
How to efficiently compute spatial interpolation of a long time series data?
|
<p>I have a time series data with 260 time steps and 82k point observations at each time step. The points do not follow a regular gridded pattern, but the locations remain constant throughout the time series. What is the most effective way in Python to convert all time series data to a regular grid? Specifically, can we leverage the knowledge that sensor locations do not vary to compute an interpolated grid faster?</p>
<p>The code below is my solution for a single time step. I'm unsure if processing each time step separately is the optimal method for a long time series. The compute time for an 8000 × 8000 grid for one time step is 10 s. Is there any way I can run the code faster (than executing the code for each time step separately)?</p>
<pre><code>from scipy.interpolate import griddata
import matplotlib.pyplot as plt
def interpolate_grid(x, y, values, target_size=(800, 800)):
"""Grid interpolation"""
points = (x , y)
xi = np.linspace(x.min(), x.max(), target_size[0])
yi = np.linspace(y.min(), y.max(), target_size[1])
xi, yi = np.meshgrid(xi, yi)
zi = griddata(points, values , (xi, yi), method='linear')
return xi, yi, zi
x, y, z = interpolate_grid(x=gdf.geometry.x,
y=gdf.geometry.y,
values=gdf[gdf.columns[1]],
target_size=(10000, 10000))
</code></pre>
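<p>The specific optimisation I am wondering about looks roughly like this: triangulate the fixed sensor locations once and reuse that triangulation for every time step (my understanding is that <code>LinearNDInterpolator</code> accepts a precomputed <code>Delaunay</code> object, but I have not verified the actual speed-up; <code>value_columns</code> below is a placeholder for my per-time-step columns):</p>
<pre><code>import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# fixed sensor locations: triangulate once
pts = np.column_stack([gdf.geometry.x, gdf.geometry.y])
tri = Delaunay(pts)

xi = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 8000)
yi = np.linspace(pts[:, 1].min(), pts[:, 1].max(), 8000)
xi, yi = np.meshgrid(xi, yi)

grids = []
for col in value_columns:  # one column per time step (placeholder)
    interp = LinearNDInterpolator(tri, gdf[col].to_numpy())
    grids.append(interp(xi, yi))
</code></pre>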
|
<python><spatial><spatial-interpolation>
|
2025-01-20 01:17:40
| 1
| 447
|
PPR
|
79,369,858
| 3,336,423
|
Why double `ctypes.POINTER` object works for `char***` while triple `ctypes.POINTER` would make more sense?
|
<p>I have a library <code>my_lib</code> with a C function that takes a <code>char***</code> parameter, a pointer to an array of <code>char*</code> that is allocated by the function. Here is a minimal reproducible example of such a function:</p>
<pre><code>void getArrayOfStrings(char*** paramPtr)
{
(*paramPtr) = (char**) malloc(3*sizeof(char*));
(*paramPtr)[0] = (char*) malloc(strlen("Foo")+1);
strcpy((*paramPtr)[0], "Foo");
(*paramPtr)[1] = (char*) malloc(strlen("Bar")+1);
strcpy((*paramPtr)[1], "Bar");
(*paramPtr)[2] = 0;
}
</code></pre>
<p>It sets the last array element to 0, so that caller can identify it (rather than providing the size as a second parameter). Note that a separate function is provided to free the memory.</p>
<p>I run <code>ctypesgen</code> to generate a Python binding to this function. It generates this code:</p>
<pre><code>getArrayOfStrings = _lib.get("getArrayOfStrings", "cdecl")
getArrayOfStrings.argtypes = [POINTER(POINTER(POINTER(c_char)))]
getArrayOfStrings.restype = None
</code></pre>
<p>This generated binding can be called from the Python script below:</p>
<pre><code>import my_lib
import ctypes
names = ctypes.POINTER(ctypes.POINTER(ctypes.c_char))()
my_lib.getArrayOfStrings(names)
if names:
for name in names:
name_str = my_lib.String(name)
if name_str:
print("Got name: " + str(name_str))
else:
break
</code></pre>
<p>It works just fine and prints "Foo\nBar\n"</p>
<p>I'm just wondering why using <code>ctypes.POINTER(ctypes.POINTER(ctypes.c_char))</code>, which I understand as being a "pointer to pointer to char", i.e. a <code>char**</code>, works. Why should I not be using a <code>ctypes.POINTER(ctypes.POINTER(ctypes.POINTER(ctypes.c_char)))</code>?</p>
<p>When I tested with <code>ctypes.POINTER(ctypes.POINTER(ctypes.POINTER(ctypes.c_char)))</code>, the same code produces the error:</p>
<pre><code>my_lib.getArrayOfStrings(names)
OSError: exception: access violation writing 0x0000000000000000
</code></pre>
|
<python><c><pointers><ctypes>
|
2025-01-19 22:12:25
| 1
| 21,904
|
jpo38
|
79,369,807
| 4,222,206
|
What component might emit 'Request max total header size exceeded'
|
<p>I am posting documents into <a href="https://docs.paperless-ngx.com/" rel="nofollow noreferrer">paperless-ngx</a> via REST api.
For some pdf documents the API reliably responds with</p>
<pre><code>{"detail":"Multipart form parse error - Request max total header size exceeded."}
</code></pre>
<p>I checked one of the offending documents and found it to be a normal, valid PDF of about 180 kB. There should not be anything special about it, yet I get that error.</p>
<p>Now I am wondering where this error might come from and how to get around it.
Does it come from Gunicorn, Django, or maybe the application itself?</p>
|
<python><django><gunicorn>
|
2025-01-19 21:37:52
| 1
| 9,930
|
queeg
|
79,369,752
| 179,014
|
How can I get the absolute path for a directory created with pyinfra?
|
<p>I created a new directory with pyinfra using the relative path <code>new_dir</code></p>
<pre><code>from pyinfra.operations import files, server
files.directory("new_dir")
</code></pre>
<p>Now I would like to get the absolute path of the new directory. I tried</p>
<pre><code>result = server.shell("pwd", _chdir="new_dir")
print(result.stdout)
</code></pre>
<p>But that only raises an exception:</p>
<pre><code>RuntimeError: Cannot evaluate operation result before execution
</code></pre>
<p>How can I get the absolute path of the directory pyinfra created for me?</p>
|
<python><infrastructure-as-code><pyinfra>
|
2025-01-19 21:01:17
| 1
| 11,858
|
asmaier
|
79,369,679
| 8,035,772
|
Why is Dask slower than Pandas in computing the mean of a large dataset, and how can I improve performance?
|
<p>I am learning Dask to make my Python projects more efficient and scalable. To understand its performance better, I wrote a script comparing the computation time of Pandas and Dask when calculating the mean of a column in a large dataset. Here's my code:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
import time
from memory_profiler import memory_usage
filename = "large_dataset_3.csv"
df_pd = pd.read_csv(filename)
df_dask = dd.read_csv(filename, blocksize=75e6)
start = time.time()
mean_pd = df_pd["points"].mean()
stop = time.time()
print(f"Pandas Mean Computation Time {stop - start:.5f} seconds")
start = time.time()
mean_dask = df_dask["points"].mean().compute(num_workers=4)
stop = time.time()
print(f"Dask Mean Computation Time {stop - start:.5f} seconds")
</code></pre>
<p>When I run this script, I find that Pandas computes the mean in about 0.02 seconds, while Dask takes more than 4.5 seconds. This result is surprising because I expected Dask to be faster due to its parallel processing capabilities.</p>
<h2>For context:</h2>
<p>The dataset (large_dataset_3.csv) contains 100 million rows, with a total size of 292.4 MB.</p>
<p>My system specs are:</p>
<p><strong>Processor</strong>: Intel® Core™ i5-8365U × 8 (4 cores, 8 threads)</p>
<p><strong>RAM</strong>: 16 GB</p>
<h2>My Questions:</h2>
<p>Why is Dask slower than Pandas in this scenario?
Are there optimizations or configurations I can apply to make Dask perform better?</p>
|
<python><pandas><numpy><dask>
|
2025-01-19 20:07:54
| 0
| 661
|
samman adhikari
|
79,369,662
| 506,825
|
How to set default datetime value with SQLAlchemy?
|
<p>I'm trying to set a default server value for a <code>created_at</code> column in a MySQL table with the SQLAlchemy library in Python 3.13. But whenever I start up my Flask app and the table gets created, I run into an issue where the default value gets set to a string that reads <code>(now())</code>. Inserting a row while leaving the <code>created_at</code> field empty returns a MySQL error because <code>(now())</code> is not a valid datetime. Below is how my code is structured.</p>
<pre><code>from db import db
from sqlalchemy import Column, DateTime, String, Integer
from sqlalchemy.sql import func
class BlogModel(db.Model):
__tablename__ = "blogs"
id = Column(Integer, primary_key=True)
created_at = Column(DateTime(timezone=True), server_default=func.now(), nullable=False)
</code></pre>
<p>I've also tried the following with the same results:</p>
<pre><code>created_at = Column(DateTime(timezone=True), server_default=func.current_timestamp(), nullable=False)
</code></pre>
<p>My <code>requirements.txt</code> file is as follows:</p>
<pre><code>alembic==1.14.0
apispec==6.8.1
blinker==1.9.0
certifi==2024.12.14
click==8.1.8
distlib==0.3.9
filelock==3.16.1
Flask==3.1.0
Flask-JWT-Extended==4.7.1
Flask-Migrate==4.1.0
flask-smorest==0.45.0
Flask-SQLAlchemy==3.1.1
itsdangerous==2.2.0
Jinja2==3.1.5
Mako==1.3.8
MarkupSafe==3.0.2
marshmallow==3.25.1
packaging==24.2
pipenv==2024.4.0
platformdirs==4.3.6
PyJWT==2.10.1
PyMySQL==1.1.1
pytz==2024.2
setuptools==75.8.0
SQLAlchemy==2.0.37
typing_extensions==4.12.2
virtualenv==20.29.1
webargs==8.6.0
Werkzeug==3.1.3
</code></pre>
<p><a href="https://i.sstatic.net/itfFzGKj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itfFzGKj.png" alt="enter image description here" /></a></p>
|
<python><flask><sqlalchemy><flask-sqlalchemy>
|
2025-01-19 19:54:20
| 1
| 4,830
|
Lance
|
79,369,611
| 18,100,562
|
Adding shaders to existing shapes, labels, drawables?
|
<p>Good day, friends!
I would like to use existing drawable objects like pyglet.shapes.Rectangle, pyglet.shapes.Line or pyglet.text.Label and add some shader effects, like scaling over time or changing color over time.</p>
<pre><code>class TestLabel(Label):
def __init__(self):
super().__init__(
text="y",
font_name="Arial",
font_size=90,
color=(0, 0, 255, 255),
x=640, y=390)
def scale(self, factor, time):
# Slowly grow over time
pass
def change_color(self, to_this_color, time):
# Transition rgba values of label to target color within the time
pass
</code></pre>
<p>Is this even an option?
Do I really have to create a custom class including a shader program from the ground up if I would like to have a blinking/scaling rectangle?
Do I really have then to implement custom events, behaviour etc.?</p>
<p>Does someone have code for blinking/scaling rectangle?</p>
|
<python><glsl><shader><pyglet>
|
2025-01-19 19:24:50
| 0
| 507
|
mister_kanister
|
79,369,363
| 5,678,653
|
SciPy minimise to find inverse function?
|
<p>I have a (non-invertible) function ak([u,v,w])</p>
<p>This takes a point on the surface of the unit octahedron (p: such that |u|+|v|+|w| = 1) and returns a point on the surface of the unit sphere. The function isn't perfect but the intention is to keep the distance between points authalic.</p>
<p>I was thinking of using SciPy minimize to provide a numerical inverse, but I cannot wrap my head around it.</p>
<p>input: spherical pt [x,y,z],<br />
output octahedral pts [u,v,w] such that ak([u,v,w])=[x,y,z]</p>
<p>My function ak() is defined like this:</p>
<pre><code>import numpy as np


def ak(p):
    # Convert a point on the octahedron onto the sphere.
    # Credit to Anders Kaseorg: https://math.stackexchange.com/questions/5016695/
    # input: p is a Euclidean point on the surface of the unit octahedron.
    # output: the corresponding point on the unit sphere.
    a = 3.227806237143884260376580641604959964752197265625  # 𝛂 - vis. Kaseorg.
    p1 = (np.pi * p) / 2.0
    tp1 = np.tan(p1)
    xu, xv, xw = tp1[0], tp1[1], tp1[2]
    u2, v2, w2 = xu ** 2, xv ** 2, xw ** 2
    y0p = xu * (v2 + w2 + a * w2 * v2) ** 0.25
    y1p = xv * (u2 + w2 + a * u2 * w2) ** 0.25
    y2p = xw * (u2 + v2 + a * u2 * v2) ** 0.25
    pv = np.array([y0p, y1p, y2p])
    return pv / np.linalg.norm(pv, keepdims=True)
</code></pre>
<p>This function is based on <a href="https://math.stackexchange.com/questions/5016695/">a post that I made on the Math StackExchange</a>.</p>
<p>Any hints?</p>
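<p>To make the question concrete, this is roughly the formulation I had in mind, using the <code>ak()</code> defined above: minimise the squared distance between <code>ak(p)</code> and the target sphere point, constrained to the octahedron surface. I am not sure it is the right approach; in particular, the non-smooth <code>abs()</code> in the constraint may misbehave near the octant boundaries.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import minimize


def ak_inverse(target, guess=None):
    # target: a point [x, y, z] on the unit sphere.
    # Returns p = [u, v, w] with |u| + |v| + |w| = 1 such that ak(p) ~ target.
    target = np.asarray(target, dtype=float)
    if guess is None:
        guess = target / np.sum(np.abs(target))  # radial projection as a start

    def objective(p):
        return np.sum((ak(p) - target) ** 2)

    # Keep p on the octahedron surface: |u| + |v| + |w| = 1
    constraint = {"type": "eq", "fun": lambda p: np.sum(np.abs(p)) - 1.0}
    res = minimize(objective, guess, method="SLSQP",
                   constraints=[constraint], tol=1e-12)
    return res.x
</code></pre>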
|
<python><optimization><scipy><computational-geometry><inverse>
|
2025-01-19 16:53:14
| 2
| 2,248
|
Konchog
|
79,369,359
| 10,153,071
|
Problem in Backpropagation through a sample in Beta distribution in pytorch
|
<p>Say I have obtained some <strong>alphas</strong> and <strong>betas</strong> from a neural network, which will serve as the parameters of a Beta distribution. Now I sample from the Beta distribution, calculate some loss, and back-propagate through the samples obtained. Is it possible to do that, given that after the sampling process I call <strong>.requires_grad_(True)</strong> on the sample and then compute the loss? This runs without errors, but it looks like the loss is not converging. Is there any other way to do this in PyTorch?</p>
<p>Say, I get the following variables via some neural network:</p>
<pre class="lang-py prettyprint-override"><code>mu, sigma, pred = model.forward(input)
</code></pre>
<p>Here <strong>mu</strong> is a tensor of shape (<code>batch_size x 30</code>), and similarly <strong>sigma</strong> is a tensor of shape (<code>batch_size x 30</code>). I compute the <strong>alphas</strong> and <strong>betas</strong> from the <strong>mu</strong> and <strong>sigma</strong> returned by the network (both of shape (<code>batch_size x 30</code>)), and then sample from a Beta distribution as follows:</p>
<pre class="lang-py prettyprint-override"><code>def sample_from_beta_distribution(alpha, beta, eps=1e-6):
# Clamp alpha and beta to be positive
alpha_positive = torch.clamp(alpha, min=eps)
beta_positive = torch.clamp(beta, min=eps)
# Create a Beta distribution
# This will automatically broadcast to handle the batch dimension
beta_dist = torch.distributions.beta.Beta(alpha_positive, beta_positive)
# Sample from the distribution
# This will return samples of shape [38, 30]
samples = beta_dist.sample()
return samples
</code></pre>
<p>I take the <strong>samples</strong>, which are of shape (<code>batch_size x 30</code>), perform some operations on them, and then calculate the loss. I expected the gradient to propagate through this, but it looks like the loss is not converging.</p>
<p>Any leads would help. Please note that this is not as simple as the reparameterization trick for the standard Normal distribution.</p>
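<p>One thing I am planning to try next (sketched here, not yet verified to fix the convergence) is <code>rsample()</code>, which as far as I understand returns reparameterised samples for the Beta distribution, so the graph back to <strong>alpha</strong> and <strong>beta</strong> is kept without calling <strong>.requires_grad_(True)</strong> afterwards:</p>
<pre class="lang-py prettyprint-override"><code>import torch


def rsample_from_beta_distribution(alpha, beta, eps=1e-6):
    # Clamp to keep the distribution parameters strictly positive.
    alpha_positive = torch.clamp(alpha, min=eps)
    beta_positive = torch.clamp(beta, min=eps)
    beta_dist = torch.distributions.beta.Beta(alpha_positive, beta_positive)
    # rsample() keeps the samples attached to the graph of alpha/beta,
    # so gradients can flow back into the network that produced them.
    return beta_dist.rsample()
</code></pre>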
|
<python><pytorch><gradient-descent><autograd>
|
2025-01-19 16:52:41
| 1
| 536
|
Jimut123
|
79,369,295
| 2,074,831
|
Convert a PDF to a PNG with transparency
|
<p>My goal is to obtain a PNG file with a transparent background from a PDF file.
The <code>convert</code> tool can do the job:</p>
<pre class="lang-bash prettyprint-override"><code>$ convert test.pdf test.png
$ file test.png
test.png: PNG image data, 595 x 842, 8-bit gray+alpha, non-interlaced
</code></pre>
<p>But I would like to do it programmatically in Python without relying on <code>convert</code> or any other external tool. I came across the <code>pdf2image</code> package, but I could not figure out how to get transparency. Here is my attempt:</p>
<pre class="lang-py prettyprint-override"><code>import pdf2image
with open("test.pdf", "rb") as fd:
pdf = pdf2image.convert_from_bytes(fd.read(), transparent=True)
pdf[0].save("test.png")
</code></pre>
<p>Unfortunately I lose transparency:</p>
<pre class="lang-bash prettyprint-override"><code>$ python test.py
$ file test.png
test.png: PNG image data, 1654 x 2339, 8-bit/color RGB, non-interlaced
</code></pre>
<p>Is there any way to do this without relying on an external tool, using <code>pdf2image</code> or any other Python package?</p>
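<p>The closest I have found so far is PyMuPDF, which seems able to render a page with an alpha channel directly. Sketch below, untested against my real files and assuming a reasonably recent PyMuPDF version:</p>
<pre class="lang-py prettyprint-override"><code>import fitz  # PyMuPDF

doc = fitz.open("test.pdf")
page = doc[0]
# alpha=True requests an RGBA pixmap, so unpainted areas stay transparent.
pix = page.get_pixmap(alpha=True)
pix.save("test.png")
</code></pre>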
|
<python><pdf><python-imaging-library><png><transparency>
|
2025-01-19 16:20:33
| 1
| 3,974
|
sevan
|
79,369,111
| 16,379,601
|
Numba - List() argument must be iterable
|
<p>I want to use numba to improve the performance of a BLAST implementation. This is a very simple function that collects all k-mers and their positions in a sequence:</p>
<pre><code>def preprocess_sequence(sequence, word_len):
word_positions = dict()
for i in range(0, len(sequence) - word_len + 1):
word = sequence[i:i+word_len]
if word not in word_positions:
word_positions[word] = [i]
else:
word_positions[word].append(i)
result = list(word_positions.items())
return result
</code></pre>
<p>When I change the code to use <code>numba.njit</code> everything breaks.</p>
<pre><code>import numba


@numba.njit
def preprocess_sequence(sequence, word_len):
    word_positions = numba.typed.Dict.empty(
        key_type=numba.types.unicode_type,
        value_type=numba.types.ListType(numba.types.int64)
    )
    for i in range(0, len(sequence) - word_len + 1):
        word = sequence[i:i+word_len]
        if word not in word_positions:
            positions = numba.typed.List([i])
            word_positions[word] = positions
        else:
            word_positions[word].append(i)
    result = list(word_positions.items())
    return result
</code></pre>
<p>Error:</p>
<pre><code> File "...\site-packages\numba\core\dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "...\site-packages\numba\core\dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
List() argument must be iterable
During: resolving callee type: typeref[<class 'numba.core.types.containers.ListType'>]
During: typing of call at .\file.py (61)
File "file.py", line 61:
def preprocess_sequence(sequence, word_len):
<source elided>
key_type=numba.types.unicode_type,
value_type=numba.types.ListType(numba.types.int64)
^
During: resolving callee type: type(CPUDispatcher(<function preprocess_sequence at 0x00000273E3E981F0>))
During: typing of call at .\file.py (102)
During: resolving callee type: type(CPUDispatcher(<function preprocess_sequence at 0x00000273E3E981F0>))
During: typing of call at .\file.py (102)
File "file.py", line 102:
def search_sequence(words, sequence, threshold, scoring_matrix):
<source elided>
word_positions_list = preprocess_sequence(sequence, 3)
</code></pre>
<p>I don't know what's wrong with the code. Every type is strictly defined, and the only part of the error message that isn't call stack is <code>List() argument must be iterable</code>, yet I am passing an iterable to <code>typed.List()</code>.</p>
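<p>For reference, the variant I am about to try is the pattern the typed-container docs seem to recommend: create an empty typed list and append to it, instead of calling the <code>List(iterable)</code> constructor inside the jitted function. I have not verified that it fixes my case, and I return the typed dict directly rather than converting it to a Python list inside <code>njit</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numba


@numba.njit
def preprocess_sequence(sequence, word_len):
    word_positions = numba.typed.Dict.empty(
        key_type=numba.types.unicode_type,
        value_type=numba.types.ListType(numba.types.int64),
    )
    for i in range(0, len(sequence) - word_len + 1):
        word = sequence[i:i + word_len]
        if word not in word_positions:
            # Build the typed list empty, then append;
            # the List(iterable) constructor is the part that fails to type.
            positions = numba.typed.List.empty_list(numba.types.int64)
            positions.append(i)
            word_positions[word] = positions
        else:
            word_positions[word].append(i)
    # Convert to a Python list outside the jitted function if needed.
    return word_positions
</code></pre>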
|
<python><numba>
|
2025-01-19 14:15:41
| 0
| 513
|
Marcel Lorenz
|
79,368,978
| 14,456,484
|
Handling indentation in multiline f-string with multiline string variables
|
<p>I know that we can use <code>textwrap.dedent()</code> to handle the indentation of a multiline string. But if it is a multiline f-string whose replacement fields are themselves multiline string variables, it becomes a little complicated.</p>
<p>The multiline strings in the following example are intentionally indented to simulate being nested inside some other structure:</p>
<pre><code>from textwrap import dedent
block_a = """\
Line one
Line two"""
block_b = """\
Line four
Line five"""
blocks = f"""\
{block_a}
Line three
{block_b}"""
print(dedent(blocks))
</code></pre>
<p>What I want is:</p>
<pre><code>Line one
Line two
Line three
Line four
Line five
</code></pre>
<p>Dedenting just <code>blocks</code> is clearly not enough; the result is:</p>
<pre><code> Line one
Line two
Line three
Line four
Line five
</code></pre>
<p>But dedenting also <code>block_a</code> and <code>block_b</code> does not help much,</p>
<pre><code>blocks = f"""\
{dedent(block_a)}
Line three
{dedent(block_b)}"""
print(dedent(blocks))
</code></pre>
<p>and the result becomes:</p>
<pre><code> Line one
Line two
Line three
Line four
Line five
</code></pre>
<hr />
<p>What I really want is for the substituted strings to take on the indentation level of the replacement field where they are placed, while keeping the internal indentation they have in the variable. Then I can perform an overall dedent to get the expected result.</p>
<p>Before the overall dedent, in the above case, what I want is:</p>
<pre><code> Line one
Line two
Line three
Line four
Line five
</code></pre>
<p>It should still be the case if both replacement fields and <code>Line three</code> were indented four more spaces,</p>
<pre><code>blocks = f"""\
{dedent(block_a)}
Line three
{dedent(block_b)}"""
</code></pre>
<p>what I want is:</p>
<pre><code> Line one
Line two
Line three
Line four
Line five
</code></pre>
<p>I have a workaround using a format string instead of an f-string,</p>
<pre><code>blocks = """\
{}
Line three
{}"""
print(dedent(blocks).format(dedent(block_a), dedent(block_b)))
</code></pre>
<p>but I am still interested to know whether I can do this with an f-string, and in any other methods that are useful for handling indentation in this case.</p>
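<p>For the record, the best I have managed with an f-string so far is a small helper around <code>textwrap.indent()</code> that re-indents each substituted block to the column of its replacement field (the <code>pad()</code> name and the four-space prefix are my own choices):</p>
<pre class="lang-py prettyprint-override"><code>from textwrap import dedent, indent


def pad(block, prefix):
    # Indent every line of the block, then drop the prefix from the first
    # line, because the f-string already puts that prefix before the field.
    return indent(block, prefix)[len(prefix):]


block_a = "Line one\nLine two"
block_b = "Line four\nLine five"

blocks = f"""\
    {pad(block_a, '    ')}
    Line three
    {pad(block_b, '    ')}"""

print(dedent(blocks))
# Line one
# Line two
# Line three
# Line four
# Line five
</code></pre>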
|
<python>
|
2025-01-19 12:49:47
| 2
| 1,171
|
adamkwm
|
79,368,765
| 1,525,840
|
Can't create password containing parentheses in Azure CLI
|
<p>I'm setting up a SQL Server using the Azure CLI. It works as it's supposed to.</p>
<pre><code>$Group = @("--resource-group", "groupy")
$Name = @("--name", "servy")
$User = @("--admin-user", "BigCahoona")
$Pass = @("--admin-password", "Abcde12345#¤%")
az sql server create $Group $Name $User $Pass
</code></pre>
<p>Recently, I've been advised that, for practical reasons, I should use another password, one that contains parentheses. I replaced it as required, which led me to a surprising discovery: the parentheses are interpreted in a special way, and on two levels, which confuses me further.</p>
<p>This change: <code>$Pass = @("--admin-password", "Abcde12345()")</code> results in a created resource <strong>but</strong> produces the error below. NB! The lines below are not commands I typed; they are output printed after the JSON of the created resource was displayed.</p>
<blockquote>
<p>C:\source\dev>echo Failed to load python executable.<br />
Failed to load python executable.<br />
C:\source\dev>exit /b 1</p>
</blockquote>
<p>This change: <code>$Pass = @("--admin-password", "Abcde12345()()")</code> doesn't create anything at all and only produces the error below.</p>
<blockquote>
<p>() was unexpected at this time.<br />
C:\source\dev> "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\..\python.exe" -IBm azure.cli<br />
sql server create --resource-group groupy --name servy --admin-user BigCahoona --admin-password Abcde12345()()</p>
</blockquote>
<p>I strongly suspect that the CLI somehow treats my new password as a function call or, possibly, as a variable value to be expanded. What I'm stuck on, however, is how to resolve it (other than picking a different set of characters). The consistent observation is that nothing is allowed to follow the closing parenthesis and that some parameter is expected inside the parentheses.</p>
<p>I've found resources suggesting that I should surround or escape the value somehow, so I've tried apostrophes and quotes in different combinations, as well as backslashes and dollar signs. None of that helped. I'm certain that I've missed some alternative during the mundane process of trial and horror.</p>
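<p>One fallback I am considering, to take the shell out of the picture entirely, is creating the server through the Python management SDK instead of the CLI. This is only a sketch: I am assuming the <code>azure-identity</code> and <code>azure-mgmt-sql</code> packages, that the method names below match my installed SDK version, and that the subscription ID and location are placeholders to be replaced.</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Server

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

# The password is passed as a plain Python string, so no shell re-parses
# the parentheses on the way to the service.
poller = client.servers.begin_create_or_update(
    "groupy",
    "servy",
    Server(
        location="westeurope",  # placeholder region
        administrator_login="BigCahoona",
        administrator_login_password="Abcde12345()",
    ),
)
server = poller.result()
</code></pre>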
|
<python><azure><azure-cli>
|
2025-01-19 10:16:02
| 1
| 40,130
|
Konrad Viltersten
|
79,368,487
| 219,153
|
How to get row numbers of maximum elements in a 2D Numpy array?
|
<p>I have a 2D array <code>a</code> given by:</p>
<pre><code>import numpy as np

a = np.array([[2, 3, 1, 9], [0, 5, 4, 7], [2, 4, 6, 8]])
print(a)
# [[2 3 1 9]
#  [0 5 4 7]
#  [2 4 6 8]]
</code></pre>
<p>I would like to get the row indices of the column-wise maximum elements (the maxima themselves being <code>np.amax(a, axis=0)</code>), which in my example are <code>[0, 1, 2, 0]</code>. What is the NumPy magic to accomplish it?</p>
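<p>For what it is worth, <code>np.argmax</code> along axis 0 appears to give exactly these indices, unless I am misreading its semantics (ties go to the lowest row index):</p>
<pre><code>import numpy as np

a = np.array([[2, 3, 1, 9], [0, 5, 4, 7], [2, 4, 6, 8]])

rows = np.argmax(a, axis=0)
print(rows)                             # [0 1 2 0]
print(a[rows, np.arange(a.shape[1])])   # [2 5 6 9], the column-wise maxima
</code></pre>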
|
<python><arrays><numpy><numpy-ndarray>
|
2025-01-19 05:59:15
| 1
| 8,585
|
Paul Jurczak
|