| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,574,302
| 13,946,204
|
How to make a dynamic path that will be represented as a list in an aiohttp server?
|
<p>Let's say that I have an example code:</p>
<pre class="lang-py prettyprint-override"><code>from aiohttp import web
async def example(r):
a = r.match_info.get('a')
b = r.match_info.get('b')
c = r.match_info.get('c')
return web.Response(text=f'[{a}, {b}, {c}]')
app = web.Application()
app.add_routes([web.get('/args/{a}/{b}/{c}', example)])
web.run_app(app)
</code></pre>
<p>And now after accessing</p>
<pre><code>http://localhost:8080/args/A/B/C
</code></pre>
<p>I'm getting</p>
<pre><code>[A, B, C]
</code></pre>
<p>as the response. The question is: what is the correct syntax for a list-like (or tuple-like) path?</p>
<p>In other words, I want to access a URL with an arbitrary number of arguments, like</p>
<pre><code>http://localhost:8080/args/r/a/n/d/o/m/1/2/3
</code></pre>
<p>and get a list or tuple of</p>
<pre><code>('r', 'a', 'n', 'd', 'o', 'm', '1', '2', '3')
</code></pre>
<p>elements</p>
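<p>For reference, a minimal sketch of one possible route (hedged: it relies on aiohttp's <code>{name:regex}</code> dynamic-resource syntax, and the name <code>tail</code> is arbitrary): a single variable segment with a permissive regex captures everything after <code>/args/</code>, which can then be split on <code>/</code>:</p>
<pre class="lang-py prettyprint-override"><code>from aiohttp import web

async def example(r):
    # 'tail' holds the raw remainder of the path, e.g. 'r/a/n/d/o/m/1/2/3'
    parts = tuple(r.match_info['tail'].split('/'))
    return web.Response(text=str(parts))

app = web.Application()
app.add_routes([web.get('/args/{tail:.+}', example)])
web.run_app(app)
</code></pre>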
|
<python><query-string><aiohttp>
|
2025-04-15 02:49:18
| 2
| 9,834
|
rzlvmp
|
79,574,248
| 28,063,240
|
Python csv DictReader with optional header
|
<pre class="lang-py prettyprint-override"><code>import csv
import io
FIELDNAMES = ['one', 'two', 'three']
def print_data_rows(csvfile):
reader = csv.DictReader(csvfile, fieldnames=FIELDNAMES)
for row in reader:
print(row)
headerless = r'''
1,2,3
'''
print_data_rows(io.StringIO(headerless.strip()))
headerful = r'''
one,two,three
1,2,3
'''
print_data_rows(io.StringIO(headerful.strip()))
</code></pre>
<p>I would like the output to be</p>
<pre class="lang-none prettyprint-override"><code>{'one': '1', 'two': '2', 'three': '3'}
{'one': '1', 'two': '2', 'three': '3'}
</code></pre>
<p>but actually the output is</p>
<pre class="lang-none prettyprint-override"><code>{'one': '1', 'two': '2', 'three': '3'}
{'one': 'one', 'two': 'two', 'three': 'three'}
{'one': '1', 'two': '2', 'three': '3'}
</code></pre>
<p>because DictReader does not skip the header row when it exists.</p>
<p>How can I skip the header row if it exists?</p>
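<p>For illustration, a minimal sketch (assuming the header, when present, matches <code>FIELDNAMES</code> exactly): since <code>DictReader</code> maps a header row onto itself, it can be detected and skipped inside the loop:</p>
<pre class="lang-py prettyprint-override"><code>import csv
import io

FIELDNAMES = ['one', 'two', 'three']

def print_data_rows(csvfile):
    reader = csv.DictReader(csvfile, fieldnames=FIELDNAMES)
    for row in reader:
        # a header row maps each field name onto itself; skip it
        if list(row.values()) == FIELDNAMES:
            continue
        print(row)
</code></pre>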
|
<python><python-3.x>
|
2025-04-15 01:29:55
| 4
| 404
|
Nils
|
79,574,127
| 12,158,757
|
Cannot see all `Dense` layer info from `search_space_summary()` when using `RandomSearch Tuner` in Keras-Tuner?
|
<p>I am trying to use <code>keras-tuner</code> to tune hyperparameters, like</p>
<pre><code>!pip install keras-tuner --upgrade
import keras_tuner as kt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
def build_model(hp):
model = Sequential([
Flatten(input_shape=(28, 28)),
Dense(units= hp.Int('units', min_value = 16, max_value = 64, step = 16), activation='relu'),
Dense(units = hp.Int('units', min_value = 8, max_value = 20, step = 2), activation='softmax')
])
model.compile(
optimizer=Adam(learning_rate=hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='LOG')),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
return model
# Create a RandomSearch Tuner
tuner = kt.RandomSearch(
build_model,
objective='val_accuracy',
max_trials=10,
executions_per_trial=2
)
# Display a summary of the search space
tuner.search_space_summary()
</code></pre>
<p>shows</p>
<pre><code>Search space summary
Default search space size: 2
units (Int)
{'default': None, 'conditions': [], 'min_value': 16, 'max_value': 64, 'step': 16, 'sampling': 'linear'}
learning_rate (Float)
{'default': 0.0001, 'conditions': [], 'min_value': 0.0001, 'max_value': 0.01, 'step': None, 'sampling': 'log'}
</code></pre>
<p>However, when checking the <code>search_space_summary()</code> output, only the 1st <code>Dense</code> layer is shown in the summary, while the information about the 2nd <code>Dense</code> layer, i.e., <code>Dense(units = hp.Int('units', min_value = 8, max_value = 20, step = 2), activation='softmax')</code>, is not seen.</p>
<p>Did I misconfigure something, or is it supposed to yield output like that? Could anyone help me understand why it outputs the summary like this?</p>
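<p>For reference, a hedged sketch of one likely explanation: Keras Tuner keys hyperparameters by name, so the second <code>hp.Int('units', ...)</code> reuses the hyperparameter already registered under <code>'units'</code> instead of creating a new one. Giving the two layers distinct names (<code>units_1</code>/<code>units_2</code> are arbitrary) should make both appear in the summary:</p>
<pre><code>def build_model(hp):
    model = Sequential([
        Flatten(input_shape=(28, 28)),
        # distinct hyperparameter names, one per layer
        Dense(units=hp.Int('units_1', min_value=16, max_value=64, step=16), activation='relu'),
        Dense(units=hp.Int('units_2', min_value=8, max_value=20, step=2), activation='softmax')
    ])
    model.compile(
        optimizer=Adam(learning_rate=hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='LOG')),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    return model
</code></pre>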
|
<python><tensorflow><keras><hyperparameters><keras-tuner>
|
2025-04-14 22:45:31
| 1
| 105,741
|
ThomasIsCoding
|
79,574,073
| 2,133,121
|
how to find full delta using python deepdiff?
|
<p>I wrote the following simple test:</p>
<p><strong>deeptest.py</strong></p>
<pre><code>from deepdiff import DeepDiff, Delta
dict1 = {'catalog': {'uuid': 'e95fb23c-57d2-495f-8ab5-2c6b3152bcee', 'metadata': {'title': 'Catalog', 'last-modified': '2025-04-10T16:00:34.033789-05:00', 'version': '1.0', 'oscal-version': '1.1.2'}, 'controls': [{'id': 'ac-1', 'title': 'Access Control', 'parts': [{'id': 'ac-1_stmt', 'name': 'statement', 'prose': 'Access control text.'}]}]}}
dict2 = {}
diff = DeepDiff(dict1, dict2)
print(diff)
delta = Delta(diff)
print(f'delta {delta}')
</code></pre>
<p>On the console I observe:</p>
<pre><code>$ python python/deep_test.py
{'dictionary_item_removed': ["root['catalog']"]}
delta <Delta: {"dictionary_item_removed":{"root['catalog']":{"uuid":""}}}>
</code></pre>
<p>My question/issue is that the delta should be the entirety of dict1, but not all of it is shown... why?</p>
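<p>For illustration, a hedged round-trip check (assuming deepdiff's bidirectional deltas): the printed repr truncates, but whether the delta actually carries all of <code>dict1</code> can be verified by applying it in reverse:</p>
<pre class="lang-py prettyprint-override"><code>from deepdiff import DeepDiff, Delta

delta = Delta(DeepDiff(dict1, dict2), bidirectional=True)
restored = dict2 - delta  # subtracting a bidirectional delta reverses it
print(restored == dict1)
</code></pre>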
|
<python><python-deepdiff>
|
2025-04-14 22:04:19
| 1
| 413
|
user2133121
|
79,574,036
| 873,107
|
Using Sympy with an External Function with CTypes
|
<p>I am attempting to leverage sympy to solve a set of equations. However, some of the equations are dependent upon functions that I can only interface via a ctypes interface. The end goal is to use fluid property calculations that are defined in the NIST Refprop python interface called ctRefprop.</p>
<p>I have been unsuccessful in getting sympy to allow calculation of a function connected to a ctypes interface. To demonstrate the issue, I created a simple case with a very simple C++ dll that exposes two methods:</p>
<ul>
<li>The first is a function that adds two numbers together and returns their sum.</li>
<li>The second is a subroutine that adds two numbers together and assigns the result to a third byref argument. This method is more like the ctRefprop ctypes interfaces.</li>
</ul>
<p>The simple dll works as expected when calling the methods via ctypes interface without using any sympy symbols. But when trying to use sympy symbols and an instance of the 'simple_math_class' class, I receive an error stating: 'argument 1: TypeError: Cannot convert expression to float'.</p>
<p>Can anyone provide guidance on how I can get a ctypes interface method to work together with sympy to be able to do a numerical evaluation of an externally defined function/subroutine? I don't need the sympy process to do any symbolic manipulation of the method, I simply need to be able to do a numerical evaluation of the method using symbols that are part of a sympy set of symbols.</p>
<p>The example code shows the simple python implementation and the C++ function code in comments.</p>
<p>Any help is greatly appreciated.</p>
<pre class="lang-py prettyprint-override"><code>import ctypes as ct
import os
from sympy import symbols
class simple_math_class:
def __init__(self):
self.load_dll_and_define_functions()
def load_dll_and_define_functions(self):
# Load the library
self.dll = ct.cdll.LoadLibrary(os.path.join('.', 'x64', 'Debug', 'Demo_AddTwo.dll'))
# Define the arguments for the function case
self.dll.AddNumbers.argtypes = [ct.c_double, ct.c_double]
self.dll.AddNumbers.restype = ct.c_double
# Define the arguments for the byref subroutine case
self.dll.AddNumbersByRef.argtypes = [ct.c_double, ct.c_double, ct.POINTER(ct.c_double)]
self.dll.AddNumbersByRef.restype = None
def add_numbers(self, a:float, b:float) -> float:
return self.dll.AddNumbers(a, b)
def add_numbers_ref(self, a:float, b:float) -> float:
# Same argument type as ctRefprop
result = ct.c_double(0)
self.dll.AddNumbersByRef(a, b, ct.byref(result))
return result.value
def add(a, b):
c = simple_math_class()
return c.add_numbers(a, b)
def test_ctypes_sympy():
a, b = symbols('a b')
result = add(a, b)
print(result)
if __name__ == '__main__':
dll_class = simple_math_class()
print('\nFunction with double return:')
a = 1.0
b = 2.0
result_add_numbers = dll_class.add_numbers(a, b)
print(f'\t{a} + {b} = {result_add_numbers}')
print('\nSubroutine with argument byref:')
c = 11.0
d = 12.0
result_add_numbers_ref = dll_class.add_numbers_ref(c, d)
print(f'\t{c} + {d} = {result_add_numbers_ref}')
test_ctypes_sympy()
print('Done')
</code></pre>
<p>Simple DLL <code>Demo_AddTwo.c</code>, MSVC build: <code>cl /LD Demo_AddTwo.c</code></p>
<pre class="lang-c prettyprint-override"><code>__declspec(dllexport)
double AddNumbers(double a, double b) {
return a + b;
}
__declspec(dllexport)
void AddNumbersByRef(double a, double b, double *result) {
*result = a + b;
}
</code></pre>
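<p>For reference, a minimal sketch of the numerical-evaluation-only route (hedged: <code>add_ext</code> is a hypothetical name): sympy's <code>implemented_function</code> attaches a Python callable, here the ctypes wrapper, to an undefined function, and <code>lambdify</code> then evaluates it numerically without any symbolic manipulation:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols, lambdify
from sympy.utilities.lambdify import implemented_function

smc = simple_math_class()
# the callable is only invoked with concrete floats at evaluation time
add_ext = implemented_function('add_ext', lambda x, y: smc.add_numbers_ref(x, y))

a, b = symbols('a b')
f = lambdify((a, b), add_ext(a, b))
print(f(1.0, 2.0))  # 3.0, computed inside the DLL
</code></pre>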
|
<python><sympy><ctypes>
|
2025-04-14 21:32:20
| 2
| 453
|
Justin Kauffman
|
79,573,914
| 1,420,489
|
Can you include a standalone executable in a Python package with pyproject.toml?
|
<p>Looking at Python packaging with <code>pyproject.toml</code>, I wonder whether it is possible to add a standalone executable (with <code>#!/usr/bin/env python</code> in the first line) to <code>pyproject.toml</code>, without going for a <code>[project.scripts]</code> entrypoint definition and the corresponding auto-generated wrapper. Do I understand right that <code>pyproject.toml</code> supports only the <code>[project.scripts]</code> entrypoint definitions as a way to define executables?</p>
<p>I don't have anything against the entrypoints. It makes the definition of Python executables platform-independent. But it's just a curious aspect of <code>pyproject.toml</code>.</p>
<p>Specifically, is it possible to have a project like this:</p>
<pre><code>pyproject.toml
my_executable.py
my_packet/
__init__.py
my_module.py
</code></pre>
<p>Where <code>my_executable.py</code> has the <code>+x</code> permissions bits for execution and contains code like this:</p>
<pre><code>#!/usr/bin/env python
from my_packet.my_module import func
def main():
return func()
if __name__ == '__main__':
main()
</code></pre>
<p>For completeness, the same in the entrypoint way:</p>
<pre><code>pyproject.toml
my_packet/
__init__.py
my_module.py
my_executable.py
</code></pre>
<p>With <code>pyproject.toml</code>:</p>
<pre><code>...
[project.scripts]
my-executable = "my_packet.my_executable:main"
</code></pre>
<p>Where <code>my_executable.py</code> is:</p>
<pre><code>from my_packet.my_module import func
def main():
return func()
# may not have `if __name__` at all,
# or may contain non-application code,
# like some tests
</code></pre>
<p>Then the executable gets generated during installation. And it looks like this:</p>
<pre><code>#!/home/.../my_venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from my_packet.my_executable import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
</code></pre>
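<p>For completeness, a hedged sketch (assuming the setuptools build backend): the legacy <code>script-files</code> key installs a file verbatim as an executable, the <code>pyproject.toml</code> counterpart of the old <code>setup.py</code> <code>scripts=</code> keyword; it is deprecated, but as far as I know it is the only non-entrypoint option:</p>
<pre><code>[tool.setuptools]
script-files = ["my_executable.py"]
</code></pre>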
|
<python><python-packaging>
|
2025-04-14 19:48:11
| 0
| 4,808
|
xealits
|
79,573,908
| 5,675,325
|
OpenAI Assistants with citations like【4:2†source】and citeturnXfileY
|
<p>When streaming with OpenAI Assistants</p>
<pre><code>openai.beta.threads.messages.create(
thread_id=thread_id,
role="user",
content=payload.question
)
run = openai.beta.threads.runs.create(
thread_id=thread_id,
assistant_id=assistant_id,
stream=True,
tool_choice={"type": "file_search"},
)
streamed_text = ""
for event in run:
if event.event == "thread.message.delta":
delta_content = event.data.delta.content
if delta_content and delta_content[0].type == "text":
text_fragment = delta_content[0].text.value
streamed_text += text_fragment
yield {"data": text_fragment}
if event.event == "thread.run.completed":
break
</code></pre>
<p>the citations come in formats like <code>【4:2†source】</code> or <code>citeturnXfileY</code>:</p>
<p><a href="https://i.sstatic.net/tCdMaAWy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCdMaAWy.png" alt="OpenAI weird citation 1" /></a>
<a href="https://i.sstatic.net/1KPjKgb3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KPjKgb3.png" alt="OpenAI weird citation 2" /></a></p>
<p>How to fix it?</p>
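<p>For illustration, a hedged cleanup sketch: a regex that strips the two marker shapes shown above before the fragments are yielded (the patterns only cover these shapes, and a marker split across stream deltas would need buffering first):</p>
<pre><code>import re

CITATION_RE = re.compile(r"【\d+:\d+†[^】]*】|citeturn\w+")

def clean(fragment: str) -> str:
    # remove raw file-search citation markers from the streamed text
    return CITATION_RE.sub("", fragment)
</code></pre>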
|
<python><fastapi><openai-api><openai-assistants-api>
|
2025-04-14 19:42:52
| 1
| 15,859
|
Tiago Peres
|
79,573,889
| 10,917,549
|
Running DeepSeek-V3 inference without GPU (on CPU only)
|
<p>I am trying to run the DeepSeek-V3 model inference on a remote machine (SSH). This machine does not have any GPU, but has many CPU cores.</p>
<p><strong>1st method:</strong></p>
<p>I try to run the model inference using the <a href="https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#61-inference-with-deepseek-infer-demo-example-only" rel="nofollow noreferrer">DeepSeek-Infer Demo</a> method:</p>
<blockquote>
<p>generate.py --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200</p>
</blockquote>
<p>This produced the following error message:</p>
<blockquote>
<p>RuntimeError: Found no NVIDIA driver on your system. Please check that
you have an NVIDIA GPU and installed a driver from
<a href="http://www.nvidia.com/Download/index.aspx" rel="nofollow noreferrer">http://www.nvidia.com/Download/index.aspx</a></p>
</blockquote>
<p><strong>2nd method:</strong></p>
<p>I then try to use a second method, using the Hugging-Face Transformer library.<br />
I installed the Transformers Python package v4.51.3 (which supports DeepSeek-V3).<br />
I then implemented the script described in the <a href="https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deepseek_v3.md" rel="nofollow noreferrer">Transformers/DeepSeek-V3 documentation</a>:</p>
<pre><code># `run_deepseek_v1.py`
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(30)
tokenizer = AutoTokenizer.from_pretrained("path/to/local/deepseek-v3")
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
model = AutoModelForCausalLM.from_pretrained("path/to/local/deepseek-v3", device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
import time
start = time.time()
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.batch_decode(outputs))
print(time.time()-start)
</code></pre>
<p>I got a similar error message when running this:</p>
<blockquote>
<p>transformers/quantizers/quantizer_finegrained_fp8.py, line 51, in
validate_environment raise RuntimeError("No GPU found. A GPU is needed
for FP8 quantization.").</p>
</blockquote>
<p>I tried to change <code>device_map="auto"</code> to <code>device_map="cpu"</code>, but it did not change anything (I still got the same error message).</p>
<p>So my question is the following: is there any way to run DeepSeek on CPU only (without any GPU), ideally using one of these methods (or another method that I may not know of)?</p>
|
<python><pytorch><huggingface-transformers><huggingface><deepseek>
|
2025-04-14 19:35:59
| 0
| 409
|
The_Average_Engineer
|
79,573,882
| 395,857
|
How can I deploy a fine-tuned GPT model in Azure via Python without using a token (e.g., using an endpoint key instead)?
|
<p>I follow Azure's <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/tutorials/fine-tune?tabs=python-new%2Ccommand-line" rel="nofollow noreferrer">tutorial</a> on fine-tuning GPT. Here is the code for the deployment phase:</p>
<pre><code># Deploy fine-tuned model
import json
import requests
token = '[redacted]'
subscription = '[redacted]'
resource_group = "[redacted]"
resource_name = "[redacted]"
model_deployment_name = "gpt-4o-mini-2024-07-18-ft" # Custom deployment name you chose for your fine-tuning model
deploy_params = {'api-version': "2023-05-01"}
deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
deploy_data = {
"sku": {"name": "standard", "capacity": 1},
"properties": {
"model": {
"format": "OpenAI",
"name": "gpt-4o-mini-2024-07-18.ft-[redacted]", #retrieve this value from the previous call, it will look like gpt-4o-mini-2024-07-18.ft-[redacted]
"version": "1"
}
}
}
deploy_data = json.dumps(deploy_data)
request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}'
print('Creating a new deployment...')
r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
print(r)
print(r.reason)
print(r.json())
</code></pre>
<p>That works fine, but the token generated via <code>az account get-access-token</code> expires quickly. Looking at the <a href="https://learn.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-get-access-token" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>az account get-access-token [--name]
[--resource]
[--resource-type {aad-graph, arm, batch, data-lake, media, ms-graph, oss-rdbms}]
[--scope]
[--tenant]
</code></pre>
<p>there are no parameters to extend the token lifespan.</p>
<p>This annoys me. How can I deploy a fine-tuned GPT model in Azure via Python without using a token (e.g., using an endpoint key instead)?</p>
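<p>For reference: the deployment call goes to the ARM endpoint (<code>management.azure.com</code>), which to my knowledge accepts only Entra ID tokens, not Cognitive Services endpoint keys. A hedged sketch that avoids hand-pasted tokens by letting <code>azure-identity</code> fetch and refresh them:</p>
<pre><code>from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# a fresh ARM token is fetched (and cached/refreshed) on demand
token = credential.get_token("https://management.azure.com/.default").token
deploy_headers = {'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}
</code></pre>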
|
<python><azure><azure-openai><fine-tuning><gpt-4>
|
2025-04-14 19:27:27
| 1
| 84,585
|
Franck Dernoncourt
|
79,573,819
| 2,009,612
|
PyMySQL mogrify not taking list of dicts as argument
|
<p>I'm trying to simply log my queries and their parameters using the <code>pymysql.cursor.mogrify()</code> function as detailed in the documentation.</p>
<pre class="lang-py prettyprint-override"><code>python-1 | INFO:__main__:Creating records.
python-1 | INFO:trex.confluence.main:Creating templates with [{'template_name': 'Phil Test 001', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}, {'template_name': 'Phil Test 002', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}]
python-1 | INFO:trex.confluence.db:Creating templates with [{'template_name': 'Phil Test 001', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}, {'template_name': 'Phil Test 002', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}]
python-1 | INFO:trex.confluence.db:Creating DB Client
python-1 | INFO:trex.confluence.db:Setting up local dev client.
python-1 | INFO:trex.confluence.db:Running SQL Query: INSERT INTO confluence_templates (name, template)
python-1 | VALUES (%(template_name)s, %(template_data)s);
python-1 | INFO:trex.confluence.db:Executing many SQL Queries. Processing the SQL Query: INSERT INTO confluence_templates (name, template)
python-1 | VALUES (%(template_name)s, %(template_data)s);, and Parameters: [{'template_name': 'Phil Test 001', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}, {'template_name': 'Phil Test 002', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}]
</code></pre>
<p>This is where the <code>mogrify()</code> call is supposed to happen, but instead we get the error. Above, you'll see the query I'm passing and the arguments. Without the <code>mogrify()</code> call, <code>executemany()</code> works just fine.</p>
<pre class="lang-py prettyprint-override"><code>python-1 | ERROR:flaskapp:Exception on /api/templates [POST]
python-1 | Traceback (most recent call last):
python-1 | File "/usr/local/lib/python3.13/site-packages/flask/app.py", line 1511, in wsgi_app
python-1 | response = self.full_dispatch_request()
python-1 | File "/usr/local/lib/python3.13/site-packages/flask/app.py", line 919, in full_dispatch_request
python-1 | rv = self.handle_user_exception(e)
python-1 | File "/usr/local/lib/python3.13/site-packages/flask/app.py", line 917, in full_dispatch_request
python-1 | rv = self.dispatch_request()
python-1 | File "/usr/local/lib/python3.13/site-packages/flask/app.py", line 902, in dispatch_request
python-1 | return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
python-1 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
python-1 | File "/app/flaskapp.py", line 87, in templates
python-1 | cf_templates = trex.confluence.create_templates(request.get_json())
python-1 | File "/app/trex/confluence/main.py", line 177, in create_templates
python-1 | cf_templates = trex.confluence.db.create_templates(page_data=page_data)
python-1 | File "/app/trex/confluence/db.py", line 392, in create_templates
python-1 | cf_templates = run_query(sql_query=sql_query, parameters=page_data)
</code></pre>
<p>So, this is where the <code>mogrify()</code> is executed in the stack trace.</p>
<pre class="lang-py prettyprint-override"><code>python-1 | File "/app/trex/confluence/db.py", line 157, in run_query
python-1 | logger.info(db_curr.mogrify(query=sql_query, args=kwargs.get('parameters')))
python-1 | ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/cursors.py", line 129, in mogrify
python-1 | query = query % self._escape_args(args, conn)
python-1 | ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/cursors.py", line 102, in _escape_args
python-1 | return tuple(conn.literal(arg) for arg in args)
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/cursors.py", line 102, in <genexpr>
python-1 | return tuple(conn.literal(arg) for arg in args)
python-1 | ~~~~~~~~~~~~^^^^^
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/connections.py", line 530, in literal
python-1 | return self.escape(obj, self.encoders)
python-1 | ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/connections.py", line 523, in escape
python-1 | return converters.escape_item(obj, self.charset, mapping=mapping)
python-1 | ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/converters.py", line 23, in escape_item
python-1 | val = encoder(val, charset, mapping)
python-1 | File "/usr/local/lib/python3.13/site-packages/pymysql/converters.py", line 30, in escape_dict
python-1 | raise TypeError("dict can not be used as parameter")
python-1 | TypeError: dict can not be used as parameter
</code></pre>
<p>However, as you can see from the logs, I'm sending to <code>mogrify()</code> a string for my query and a list for my args to interpolate, but the documentation says that this should work.</p>
<ul>
<li><a href="https://pymysql.readthedocs.io/en/latest/modules/cursors.html#pymysql.cursors.Cursor.mogrify" rel="nofollow noreferrer">https://pymysql.readthedocs.io/en/latest/modules/cursors.html#pymysql.cursors.Cursor.mogrify</a></li>
</ul>
<p>Am I reading this wrong? I'm not following the code very well on GitHub; it's not making sense to me.</p>
<p>--EDIT--</p>
<p>An example of the code.</p>
<pre class="lang-py prettyprint-override"><code>import logging
import pymysql
import pymysql.cursors
sql_query = r"""INSERT INTO confluence_templates (name, template) VALUES (%(template_name)s, %(template_data)s);"""
parameters = [{'template_name': 'Phil Test 001', 'template_data': '<h1>Farts</h1><p>This is a template</p>'},
{'template_name': 'Phil Test 002', 'template_data': '<h1>Farts</h1><p>This is a template</p>'}]
db_conn = pymysql.connect(host="mysql",
user="trexdev",
password="mysupersecretpassword",
db="trex_confluence",
port=3306,
cursorclass=pymysql.cursors.DictCursor,
charset='utf8')
with db_conn:
with db_conn.cursor() as db_curr:
logging.info(db_curr.mogrify(query=sql_query, args=parameters))
        db_curr.executemany(query=sql_query, args=parameters)
db_conn.commit()
</code></pre>
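<p>For illustration, a minimal sketch of the distinction (hedged): <code>mogrify()</code> formats a single statement, so it accepts one dict (or tuple) of parameters, while only <code>executemany()</code> understands a list of them; logging therefore has to go row by row:</p>
<pre class="lang-py prettyprint-override"><code>for params in parameters:
    # one dict at a time is what mogrify() can escape
    logging.info(db_curr.mogrify(query=sql_query, args=params))
db_curr.executemany(query=sql_query, args=parameters)
</code></pre>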
|
<python><flask><pymysql>
|
2025-04-14 18:41:12
| 1
| 3,766
|
FilBot3
|
79,573,796
| 1,406,168
|
Having secrets locally when pip installing a private github repo
|
<p>I have this pipeline:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Build and deploy Python app to Azure Web App - app-xx-xx-api-dev
on:
push:
branches:
- dev
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python version
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
- name: DNA Utils
run: |
python -m venv venv
source venv/bin/activate
pip install git+https://${{ secrets.MACHINE_USER_PAT }}@github.com/xxx-xx/xx-utils.git@main
</code></pre>
<p>As you can see we have separated installing the requirements and the private package. That is no problem when running the pipeline in GitHub.</p>
<p>However, when running locally on a Dev Box you normally would just call:</p>
<pre class="lang-bash prettyprint-override"><code>pip install -r requirements.txt
</code></pre>
<p>Now the developer also needs to pip install the private package, and if we have more, it starts getting complex.</p>
<p>We are not simply hard-coding the token, as we do not want it in the repository code. Can this be solved in a more suitable way?</p>
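<p>For reference, a hedged sketch of one option: pip expands <code>${VAR}</code>-style environment variables in requirements files, so the private package can live in a separate requirements file with the token injected from the environment, both locally and in the pipeline:</p>
<pre class="lang-bash prettyprint-override"><code># requirements-private.txt contains:
#   git+https://${MACHINE_USER_PAT}@github.com/xxx-xx/xx-utils.git@main

export MACHINE_USER_PAT=...   # locally; in CI it comes from the secret
pip install -r requirements.txt -r requirements-private.txt
</code></pre>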
|
<python><pip><github-actions><python-packaging>
|
2025-04-14 18:30:39
| 1
| 5,363
|
Thomas Segato
|
79,573,744
| 15,528,750
|
Django Advanced Tutorial: Executing a Command in Dockerfile Leads to Error
|
<p>I followed the <a href="https://docs.djangoproject.com/en/5.2/intro/reusable-apps/" rel="nofollow noreferrer">advanced django tutorial</a> on reusable apps, for which I created a Dockerfile:</p>
<pre class="lang-none prettyprint-override"><code>FROM python:3.11.11-alpine3.21
# Install packages (acc. to instructions
# https://github.com/gliderlabs/docker-alpine/blob/master/docs/usage.md)
RUN apk --no-cache add curl ca-certificates sqlite git
WORKDIR /app
COPY setup.py .
COPY pyproject.toml .
# Install `uv` acc. to the instructions
# https://docs.astral.sh/uv/guides/integration/docker/#installing-uv
ADD https://astral.sh/uv/0.5.29/install.sh /uv-installer.sh
RUN sh /uv-installer.sh && rm /uv-installer.sh
ENV PATH="/root/.local/bin/:$PATH"
# Install packages with uv into system python
RUN uv pip install --system -e .
# Expose the Django port
EXPOSE 8000
RUN git config --global --add safe.directory /app
</code></pre>
<p>Now, I have the following structure:</p>
<pre class="lang-bash prettyprint-override"><code>django-polls-v2/
- dist/
- .gitignore
- django_polls_v2-0.0.1-py3-none-any.whl
- django_polls_v2-0.0.1.tar.gz
- django_polls_v2/
- migrations/
- ...
- static/
- ...
- templates/
- ...
- __init__.py
- admin.py
- ...
- README.rst
- pyproject.toml
- MANIFEST.in
- LICENSE
</code></pre>
<p>In order to install my own package, cf. <a href="https://docs.djangoproject.com/en/5.2/intro/reusable-apps/#using-your-own-package" rel="nofollow noreferrer">here</a>, the documentation states to execute the following (where I adjusted the filenames to reflect my setting):</p>
<pre class="lang-bash prettyprint-override"><code>python -m pip install --user django-polls-v2/dist/django_polls_v2/0.0.1.tar.gz
</code></pre>
<p>Now, when I execute this command inside a running docker container (where I get into the docker container by running
<code>docker run -p 8000:8000 --shm-size 512m --rm -v $(pwd):/app -it django:0.0.1 sh</code>), the package gets installed.</p>
<p>However, putting the same command into the Dockerfile, I get the error</p>
<pre class="lang-bash prettyprint-override"><code>1.619 WARNING: Requirement 'django-polls-v2/dist/django_polls_v2-0.0.1.tar.gz' looks like a filename, but the file does not exist
1.670 Processing ./django-polls-v2/dist/django_polls_v2-0.0.1.tar.gz
1.671 ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/app/django-polls-v2/dist/django_polls_v2-0.0.1.tar.gz'
</code></pre>
<p>However, since the path is correct, I am baffled as to why the error occurs. I already tried to create a non-root user before executing the command, yet to no avail.
Any advice?</p>
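<p>For reference, a hedged guess at the cause: the Dockerfile only <code>COPY</code>s <code>setup.py</code> and <code>pyproject.toml</code>, so at build time <code>dist/</code> does not exist in the image; the bind mount (<code>-v $(pwd):/app</code>) only exists at run time, which would explain why the same command works inside a running container. A sketch of the fix, assuming the build context contains <code>django-polls-v2/</code>:</p>
<pre class="lang-none prettyprint-override"><code># copy the built sdist into the image before installing it
COPY django-polls-v2/dist/ ./django-polls-v2/dist/
RUN python -m pip install ./django-polls-v2/dist/django_polls_v2-0.0.1.tar.gz
</code></pre>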
|
<python><django><docker>
|
2025-04-14 17:44:32
| 1
| 566
|
Imahn
|
79,573,720
| 242,042
|
Is there a way to automate activating the virtualenv in Powershell (in Windows)?
|
<p>I know that to activate virtualenv it's just <code>.venv/Scripts/activate.ps1</code>, but I was wondering if there's a way of having PowerShell do it automatically?</p>
<p>Existing questions just talk about activating it, but not how to have PowerShell do it automatically:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/1365081/virtualenv-in-powershell">virtualenv in PowerShell?</a></li>
<li><a href="https://stackoverflow.com/questions/55734460/how-to-activate-virtualenv-using-powershell">How to activate virtualenv using PowerShell?</a></li>
</ul>
<p>I am looking for something like this but in Powershell</p>
<p><a href="https://stackoverflow.com/questions/34362439/automating-virtualenv-activation-deactivation-in-zsh">Automating virtualenv activation/deactivation in zsh</a></p>
|
<python><windows><powershell><virtualenv>
|
2025-04-14 17:30:36
| 2
| 43,097
|
Archimedes Trajano
|
79,573,717
| 774,575
|
Signal declaration in a separate class for use with QRunnable worker
|
<p>I'm defining a Qt signal in a separate class derived from <code>QObject</code> for use in a <code>QRunnable</code> worker. The signal is emitted once by the worker as it completes. When creating two workers, the signal is received twice for each worker (hence received 4 times; with 3 workers, the signal would be received 9 times, etc.).</p>
<p>E.g. using the code below on two workers with IDs 3419 and 45435, the output is:</p>
<pre><code>complete: 3419
complete: 3419
complete: 45435
complete: 45435
</code></pre>
<p>I suspect the signal is not correctly defined in the separate class.</p>
<pre><code>import time, uuid
from qtpy.QtCore import QObject, QThreadPool, QRunnable, Signal
from qtpy.QtWidgets import QApplication, QMainWindow, QPushButton
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.button = QPushButton("Start 2 workers")
self.button.pressed.connect(self.start)
self.setCentralWidget(self.button)
self.worker_manager = WorkerManager()
def start(self):
for i in range(2):
worker = Worker()
self.worker_manager.enqueue(worker)
class Signals(QObject):
complete = Signal(int)
class Worker(QRunnable):
signals = Signals()
def __init__(self):
super().__init__()
self.worker_id = uuid.uuid4().time_mid
def run(self):
time.sleep(0.01)
self.signals.complete.emit(self.worker_id)
class WorkerManager(QObject):
def __init__(self):
super().__init__()
self.threadpool = QThreadPool()
def enqueue(self, worker):
worker.signals.complete.connect(self.complete)
self.threadpool.start(worker)
def complete(self, worker_id):
print('complete:', worker_id)
def main():
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
if __name__ == '__main__':
main()
</code></pre>
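<p>For reference, a hedged sketch of the likely cause: <code>signals = Signals()</code> is a class attribute, so both workers share one <code>Signals</code> instance; each <code>enqueue()</code> connects <code>complete</code> to the same slot again, and every emit then fires once per connection (2 workers × 2 connections = 4 prints). Making the attribute per-instance should fix it:</p>
<pre><code>class Worker(QRunnable):
    def __init__(self):
        super().__init__()
        self.signals = Signals()  # one Signals object per worker
        self.worker_id = uuid.uuid4().time_mid

    def run(self):
        time.sleep(0.01)
        self.signals.complete.emit(self.worker_id)
</code></pre>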
|
<python><qt5><signals><qtpy>
|
2025-04-14 17:28:18
| 1
| 7,768
|
mins
|
79,573,704
| 8,387,076
|
How to fix `InvalidRequestError: Can't attach instance <ClassName at 0x7baada2f3cb0>` in SQLAlchemy relationships?
|
<p>I have already gone through quite a few answers on SO (e.g. <a href="https://stackoverflow.com/a/61421908/8387076">this one</a> and <a href="https://stackoverflow.com/q/47912520/8387076">this one</a>), but none of them seem to work for me.</p>
<p>I am using SQLAlchemy with sqlite, python 3.13.3. In case it matters, this is for a Discord bot, but I doubt that the code will change for that.</p>
<p>I have the following tables:</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy as sqal
client_group_association = sqal.Table(
"client_group_association",
Base.metadata,
sqal.Column("client_id", sqal.UUID, sqal.ForeignKey("client_info.id"), primary_key=True),
sqal.Column("group_id", sqal.UUID, sqal.ForeignKey("group_info.id"), primary_key=True)
)
class ClientInfo(Base):
"""Stores information of clients."""
__tablename__ = "client_info"
id: Mapped[uuid6.UUID] = mapped_column(sqal.UUID(as_uuid=True), primary_key=True,
insert_default=uuid6.uuid7())
name: Mapped[str] = mapped_column(sqal.String, nullable=False)
email: Mapped[str] = mapped_column(sqal.String, nullable=False)
phone: Mapped[str] = mapped_column(sqal.String, nullable=False)
age: Mapped[int] = mapped_column(sqal.Integer, nullable=False)
# Other unrelated fields
groups: Mapped[List[uuid6.UUID]] = mapped_column(UUIDList, nullable=True)
group_objs: Mapped[List["GroupInfo"]] = relationship(
secondary=client_group_association,
back_populates="client_objs"
)
__table_args__ = (sqal.UniqueConstraint('name', 'email', 'phone', name='_unique_client_details'),
sqal.ForeignKeyConstraint(['groups'], ['group_info.id'],
onupdate='CASCADE', ondelete='CASCADE'))
class GroupInfo(Base):
"""Stores information of groups."""
__tablename__ = "group_info"
id: Mapped[uuid6.UUID] = mapped_column(sqal.UUID(as_uuid=True), primary_key=True,
insert_default=uuid6.uuid7())
location: Mapped[Locations] = mapped_column(sqal.Enum(Locations), nullable=False)
members: Mapped[List[uuid6.UUID]] = mapped_column(UUIDList, nullable=True)
client_objs: Mapped[List["ClientInfo"]] = relationship(
secondary=client_group_association,
back_populates="group_objs"
)
__table_args__ = (sqal.UniqueConstraint('members', name='_unique_members'),
sqal.ForeignKeyConstraint(['members'], ['client_info.id'],
onupdate='CASCADE', ondelete='CASCADE'))
</code></pre>
<p>My aim was to set up a proper bidirectional many-to-many relationship such that I can access the <code>GroupInfo</code> objects directly from a <code>ClientInfo</code> object (which represents the groups the user is in), and the <code>ClientInfo</code> objects from <code>GroupInfo</code> (representing the clients in the group).</p>
<p>I am using an <code>AsyncSession</code>, which I obtain as follows:</p>
<pre class="lang-py prettyprint-override"><code>__LOCK: asyncio.Lock = asyncio.Lock()
__SQL_ENGINE: AsyncEngine = create_async_engine(f"sqlite+aiosqlite:///{DATABASE_NAME}")
@contextlib.asynccontextmanager
async def get_async_session(self, locked: bool = True) -> AsyncIterator[AsyncSession]:
Session = sessionmaker(bind=self.__SQL_ENGINE, class_=AsyncSession)
if locked:
async with self.__LOCK:
async with Session() as session:
yield session
else:
async with Session() as session:
yield session
</code></pre>
<p>Things seemed to work for a while; I could retrieve the nested objects using nested <code>selectinload</code> statements:</p>
<pre class="lang-py prettyprint-override"><code>stmt = select(ClientInfo).where(...)\
.limit(10)\
.options(selectinload(ClientInfo.group_objs)\
.selectinload(GroupInfo.client_objs)) # Force eager-loading
</code></pre>
<p>Every time I want to create a new group, now I am facing</p>
<pre class="lang-py prettyprint-override"><code>sqlalchemy.exc.InvalidRequestError: Can't attach instance <ClientInfo at 0x7baada2f3cb0>; another instance with key (<class 'db_models.ClientInfo'>, (UUID('019634d4-1422-7a0e-9a21-96941acc10dd'),), None) is already present in this session.
</code></pre>
<p>error.</p>
<p>Note that every time I want to access the database, I create a new session.</p>
<p>The issue appears when I add the <code>ClientInfo</code> objects into the <code>GroupInfo</code> object before adding it to a session and committing. This is the most essential part of the transaction because I refer to the client objects through that relationship in the future.</p>
<pre class="lang-py prettyprint-override"><code>group_info: GroupInfo = GroupInfo(id=uuid6.uuid7(),
location=self.location,
members=list(self.clients.keys()))
for client_info in self.clients.values():
group_info.client_objs.append(client_info)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
</code></pre>
<p>Most probably this happens because I load some <code>ClientInfo</code> objects to let the user pick the person, which means that all the <code>ClientInfo</code> objects I want to append to the relationship are already in a session. But what surprises me is that this choosing part is done in a separate session in a separate class.</p>
<p>I have tried to delete those objects from the current session using</p>
<pre class="lang-py prettyprint-override"><code>session.expunge(client_info)
session.expire(client_info)
</code></pre>
<p>I have also created two instance variables in the main class which will hold the <code>ClientInfo.group_objs</code> field and <code>GroupInfo.client_objs</code> relationships, and try to use that if adding to the session returns an error:</p>
<pre class="lang-py prettyprint-override"><code># In constructor
self.group_objs_relationship = None
self.client_objs_relationship = None
# Later in the separate class
bot.client_object_relationship = group_info.client_objs # Nullability checked
bot.group_objs_relationship = group_info.client_objs[0].group_objs # Nullability checked
# When adding the new group:
try:
session.add(group_info)
except sqal.exc.InvalidRequestError:
print('Error raised when adding GroupInfo, trying alternate path!')
if bot.group_objs_relationship:
bot.group_objs_relationship.append(group_info)
else:
print('Sorry, could not add that.')
finally:
await session.commit()
</code></pre>
<p>This works for <code>GroupInfo</code>, but I have no idea about how to add those <code>ClientInfo</code> objects to the <code>GroupInfo</code> object before committing because deleting doesn't work in that case.</p>
<p>Any help is appreciated.</p>
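<p>For illustration, a hedged sketch of one way out: <code>Session.merge()</code> reconciles an object loaded elsewhere with the current session instead of attaching a duplicate identity, so merging each client before appending avoids the "already present in this session" error:</p>
<pre class="lang-py prettyprint-override"><code>async with self.get_async_session() as session:
    group_info = GroupInfo(id=uuid6.uuid7(),
                           location=self.location,
                           members=list(self.clients.keys()))
    for client_info in self.clients.values():
        # merge() returns the instance that belongs to *this* session
        merged = await session.merge(client_info)
        group_info.client_objs.append(merged)
    session.add(group_info)
    await session.commit()
</code></pre>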
|
<python><sqlite><sqlalchemy><orm><relationship>
|
2025-04-14 17:19:34
| 0
| 1,403
|
Wrichik Basu
|
79,573,564
| 2,287,458
|
Group-By column in polars DataFrame inside with_columns
|
<p>I have the following dataframe:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
'ID': [1, 1, 5, 5, 7, 7, 7],
'YEAR': [2025, 2025, 2023, 2024, 2020, 2021, 2021]
})
shape: (7, 2)
┌─────┬──────┐
│ ID ┆ YEAR │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪══════╡
│ 1 ┆ 2025 │
│ 1 ┆ 2025 │
│ 5 ┆ 2023 │
│ 5 ┆ 2024 │
│ 7 ┆ 2020 │
│ 7 ┆ 2021 │
│ 7 ┆ 2021 │
└─────┴──────┘
</code></pre>
<p>Now I would like to get the unique number of years per ID, i.e.</p>
<pre><code>shape: (7, 3)
┌─────┬──────┬──────────────┐
│ ID ┆ YEAR ┆ UNIQUE_YEARS │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ u32 │
╞═════╪══════╪══════════════╡
│ 1 ┆ 2025 ┆ 1 │
│ 1 ┆ 2025 ┆ 1 │
│ 5 ┆ 2023 ┆ 2 │
│ 5 ┆ 2024 ┆ 2 │
│ 7 ┆ 2020 ┆ 2 │
│ 7 ┆ 2021 ┆ 2 │
│ 7 ┆ 2021 ┆ 2 │
└─────┴──────┴──────────────┘
</code></pre>
<p>So I tried <code>df.with_columns(pl.col('YEAR').over('ID').alias('UNIQUE_YEARS'))</code> but this gives the wrong result. So I came up with</p>
<pre><code>df.join(df.group_by('ID').agg(pl.col('YEAR').unique().len().alias('UNIQUE_YEARS')), on='ID', how='left')
</code></pre>
<p>which does give the correct result! But it looks a bit clunky, and I wonder if there is a more natural way using <code>with_columns</code> and <code>over</code>?</p>
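<p>For reference, a minimal sketch using <code>n_unique</code>, which composes with <code>over</code> and avoids the join:</p>
<pre><code>df.with_columns(pl.col('YEAR').n_unique().over('ID').alias('UNIQUE_YEARS'))
</code></pre>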
|
<python><dataframe><group-by><window-functions><python-polars>
|
2025-04-14 15:57:58
| 1
| 3,591
|
Phil-ZXX
|
79,573,543
| 2,021,948
|
I don't find pip.conf file in my file system
|
<p>I work with Linux Ubuntu 22.04 LTS, Python 3.10.12, Pip 22.0.2 and rasa version 3.12.2. And I've also studied the answers of <a href="https://stackoverflow.com/questions/38869231/python-cant-find-the-file-pip-conf/79475811?noredirect=1#comment140336865_79475811">this question</a>, but they didn't resolve my problem.</p>
<p>My doubt is that I don't have Rasa installed correctly because in the <a href="https://www.youtube.com/watch?v=bWVHIwRpgZY&list=PL75e0qA87dlHPWoD4c-NrYszndgq-NFz3&index=4" rel="nofollow noreferrer">instruction video</a> starting at minute 4, it's said that one line with an internet address should be added to the content of the <code>pip.conf</code> file.</p>
<p>But I don't find any in my whole system. I don't even find a <code>/pip</code>, <code>/pip3</code>, <code>/.pip</code>, or <code>/.pip3</code> folder.</p>
<p>Any suggestions?</p>
<p><strong>Edit 15/04/25:</strong> This edit answers the two comments from @poisoned_monkey and @rasjani, who tried to help me. Trying out this instruction:</p>
<p><code>python3 -m pip config debug</code></p>
<p>I get the following response:</p>
<pre><code>env_var:
env:
global:
/etc/xdg/xdg-ubuntu/pip/pip.conf, exists: False
/etc/xdg/pip/pip.conf, exists: False
/etc/pip.conf, exists: False
site:
/usr/pip.conf, exists: False
user:
/home/uwez/.pip/pip.conf, exists: False
/home/uwez/.config/pip/pip.conf, exists: False
</code></pre>
<p><strong>Edit 29/04/2025:</strong></p>
<p>Thanks to all. I could resolve the problem in part, but something is still missing. The <a href="https://europe-west3-python.pkg.dev/rasa-releases" rel="nofollow noreferrer">address</a> is not valid anymore and I can't find a replacement. I took it from <a href="https://www.youtube.com/watch?v=bWVHIwRpgZY&list=PL75e0qA87dlHPWoD4c-NrYszndgq-NFz3&index=20" rel="nofollow noreferrer">this video tutorial of rasa-pro</a>; according to the tutorial at minute 3:58, it should be added to the pip.conf file. Anyway, I'll post another question.</p>
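<p>For reference, a hedged sketch: pip can create the file itself via <code>pip config set</code>, which writes to the user-level location listed above (<code>~/.config/pip/pip.conf</code>); the URL here is just a placeholder:</p>
<pre><code>python3 -m pip config set global.extra-index-url https://example.com/simple
python3 -m pip config debug   # the user-level pip.conf should now exist
</code></pre>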
|
<python><linux><pip><rasa>
|
2025-04-14 15:49:50
| 3
| 985
|
Uwe_98
|
79,573,496
| 14,463,396
|
Folium heatmap with tooltips
|
<p>I'm trying to create a heatmap with tool tips displaying additional information when the user rolls their mouse over that area. Example of the data I have is:</p>
<pre><code>heat_df = pd.DataFrame({'Latitude':[45.3288, 45.3311],
'Longitude':[-121.6625, -121.6625,],
'Count':[4, 2],
'Note':[10, 20]})
</code></pre>
<p>Where count determines the density colour of the heatmap, and Note contains information which I would like to include in a tooltip (in reality I have a lot more data than this). Creating the heatmap is simple:</p>
<pre><code>m = folium.Map([45.35, -121.6972], zoom_start=12)
#Repeat by count number
heat_data = heat_df.loc[heat_df.index.repeat(heat_df['Count']), ['Latitude','Longitude']].dropna()
heat_data = [[row['Latitude'], row['Longitude']] for index, row in heat_data.iterrows()]
HeatMap(heat_data).add_to(m)
m.save(r"test.html")
</code></pre>
<p><a href="https://i.sstatic.net/rnynM0kZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rnynM0kZ.png" alt="enter image description here" /></a></p>
<p>But I can't find an obvious way to add labels to the data. I tried adding markers with the tool tips, which does show what I want, but then my heatmap is covered in markers and can't be seen very well:</p>
<pre><code>m = folium.Map([45.35, -121.6972], zoom_start=12)
heat_data = heat_df.loc[heat_df.index.repeat(heat_df['Count']), ['Latitude','Longitude']].dropna()
heat_data = [[row['Latitude'], row['Longitude']] for index, row in heat_data.iterrows()]
HeatMap(heat_data).add_to(m)
for i, row in heat_df.iterrows():
    folium.Marker([row['Latitude'], row['Longitude']], tooltip=f"Tooltip value: {row['Note']}", icon=None).add_to(m)
m.save(r"test.html")
</code></pre>
<p><a href="https://i.sstatic.net/HiUmqPOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HiUmqPOy.png" alt="enter image description here" /></a></p>
<p>Is there a way to add tooltips to a heatmap (perhaps they change depending on zoom level? as heatmaps change depending on the zoom). Or is there a way to have an invisible marker, so the tooltip would still appear when the mouse rolls over it but there isn't a big marker in the way?</p>
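<p>For illustration, a hedged sketch of the invisible-marker idea: a fully transparent <code>folium.CircleMarker</code> still receives mouse events, so the tooltip appears without covering the heatmap (the radius is the hover-target size in pixels):</p>
<pre><code>for i, row in heat_df.iterrows():
    folium.CircleMarker(
        location=[row['Latitude'], row['Longitude']],
        radius=20,                   # size of the hover target
        opacity=0, fill_opacity=0,   # invisible outline and fill
        fill=True,
        tooltip=f"Tooltip value: {row['Note']}",
    ).add_to(m)
</code></pre>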
|
<python><dictionary><folium>
|
2025-04-14 15:24:12
| 1
| 3,395
|
Emi OB
|
79,573,420
| 3,322,273
|
Should I always use asyncio.Lock for fairness
|
<p>I have a Python service that uses Python's virtual threads (<code>threading.Thread</code>) to handle requests. There is a shared singleton functionality that all threads are trying to access, which is protected using <code>threading.Lock</code>.</p>
<pre><code>g_lock = threading.Lock()
def my_threaded_functionality():
try:
g_lock.acquire()
# ... Do something with a shared resource ...
finally:
g_lock.release()
</code></pre>
<p>In the <a href="https://docs.python.org/3/library/threading.html#threading.Lock.acquire" rel="nofollow noreferrer">docs</a> of <code>threading.Lock.acquire</code>, there is <strong>no mention of fairness</strong>, whereas, in <a href="https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock.acquire" rel="nofollow noreferrer">asyncio's</a> <code>asyncio.Lock.acquire</code>, they mention that the lock is fair.</p>
<p>As I want to prevent starvation of threads and to preserve the order in which tasks arrived, I would go for <code>asyncio</code>'s Lock, if it weren't that the docs say asyncio locks are not thread-safe. The question is whether that should also be an issue with Python's virtual "threads".</p>
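<p>For illustration, a minimal sketch of a FIFO ("fair") lock built from <code>threading</code> primitives, assuming arrival-order wakeups are the requirement; <code>FairLock</code> is a hypothetical helper, not a stdlib class:</p>
<pre><code>import threading
from collections import deque

class FairLock:
    def __init__(self):
        self._mutex = threading.Lock()
        self._waiters = deque()
        self._locked = False

    def acquire(self):
        with self._mutex:
            if not self._locked and not self._waiters:
                self._locked = True
                return
            event = threading.Event()
            self._waiters.append(event)
        event.wait()  # woken exactly once, in FIFO order

    def release(self):
        with self._mutex:
            if self._waiters:
                self._waiters.popleft().set()  # hand the lock to the oldest waiter
            else:
                self._locked = False
</code></pre>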
|
<python><multithreading>
|
2025-04-14 14:48:33
| 2
| 12,360
|
SomethingSomething
|
79,573,391
| 12,187,881
|
How to use peewee for `COPY FROM` with `FORMAT BINARY`?
|
<p>I'm trying to load sparse vectors into a table managed by PostgreSQL. For simplicity, let's suppose that the table has only two fields: a vector ID and the vector itself (which is the result of applying <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer" rel="nofollow noreferrer">sklearn's TF-IDF</a>). As far as I understand, the most efficient way to load a lot of data into a table is to use <code>COPY FROM</code>, which is <a href="https://github.com/pgvector/pgvector?tab=readme-ov-file#storing" rel="nofollow noreferrer">supported</a> for sparse vectors by the <code>pgvector</code> extension. The documentation for <code>pgvector</code> contains <a href="https://github.com/pgvector/pgvector-python/blob/master/examples/loading/example.py" rel="nofollow noreferrer">an example</a> of how to use <code>COPY FROM</code> with vectors, and the example utilizes <code>psycopg</code>.</p>
<p>Is there support for <code>COPY FROM</code> with <code>FORMAT BINARY</code> in <code>peewee</code>? I've seen <code>bulk_create()</code> and <code>insert_many()</code>, but I'm not sure how to use them in my case. Can they use <code>FORMAT BINARY</code> and, if so, how to utilize the format?</p>
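<p>For reference, a hedged sketch (assuming <code>PostgresqlDatabase</code> over psycopg2, with hypothetical table and column names): peewee itself exposes no COPY helper that I know of, but it does expose the raw DB-API connection, so <code>COPY FROM STDIN</code> can be issued directly; text format is used here, since binary would require manual encoding:</p>
<pre><code>import io
from peewee import PostgresqlDatabase

db = PostgresqlDatabase('mydb')  # placeholder connection settings

def copy_vectors(rows):
    # rows: iterable of (vector_id, sparsevec_text) pairs
    buf = io.StringIO()
    for vec_id, vec in rows:
        buf.write(f"{vec_id}\t{vec}\n")
    buf.seek(0)
    cursor = db.connection().cursor()  # the underlying psycopg2 cursor
    cursor.copy_expert("COPY vectors (id, embedding) FROM STDIN", buf)
    db.commit()
</code></pre>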
|
<python><postgresql><peewee>
|
2025-04-14 14:39:43
| 0
| 381
|
TopCoder2000
|
79,573,374
| 5,037,202
|
How to process large files efficiently with generators in Python without memory issues?
|
<p>I’m working on a data processing pipeline in Python that needs to handle very large log files (several GBs). I want to avoid loading the entire file into memory, so I’m trying to use generators to process the file line-by-line.</p>
<p>Here’s a simplified version of what I’m doing:</p>
<pre><code>
def read_large_file(file_path):
with open(file_path, 'r') as f:
for line in f:
yield process_line(line)
def process_line(line):
# some complex processing logic here
return line.strip()
for processed in read_large_file('huge_log.txt'):
# write to output or further process
pass
</code></pre>
<p>My questions are:</p>
<ol>
<li><p>Is this the most memory-efficient way to handle large files in Python?</p>
</li>
<li><p>Would using mmap or Path(file).open() provide any performance benefit over a standard open() call?</p>
</li>
<li><p>Are there any Pythonic patterns or third-party libraries that better support this kind of stream processing with low overhead?</p>
</li>
</ol>
<p>Would appreciate any advice on best practices for large-file processing in real-world scenarios.</p>
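<p>Regarding question 2, a hedged sketch of the <code>mmap</code> variant for comparison: the kernel pages the file in lazily, so memory use stays bounded either way; for plain sequential line reads it is often no faster than buffered <code>open()</code>:</p>
<pre><code>import mmap

def read_large_file_mmap(file_path):
    with open(file_path, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # mm.readline returns b'' at EOF, ending the iteration
            for line in iter(mm.readline, b''):
                yield line.decode().strip()
</code></pre>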
|
<python><performance><memory-management><file-io><generator>
|
2025-04-14 14:30:00
| 0
| 477
|
Waqas Gondal
|
79,573,370
| 2,233,500
|
CairoSVG: The SVG size is undefined
|
<p>I need to convert an SVG file into a PNG. To do so, I'm using CairoSVG, which works fine. However, I have an image causing the CairoSVG package to raise an exception, and I don't understand why. Note that the problematic image is public (Wikipedia), which is a good thing for sharing the code below:</p>
<pre class="lang-py prettyprint-override"><code>import cairosvg
URL = "https://upload.wikimedia.org/wikipedia/commons/3/32/Saltwater_Limpet_Diagram-en.svg"
OUTPUT_PATH = "Saltwater_Limpet_Diagram-en.png"
cairosvg.svg2png(url=URL, write_to=OUTPUT_PATH)
</code></pre>
<p>The error I get is <code>raise ValueError('The SVG size is undefined')</code>. But when I open the SVG file, I can see that the width and height are defined in the <code><svg></code> tag:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<!-- Generator: Adobe Illustrator 18.1.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_3" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="2569.14px" height="1481.013px" viewBox="0 0 2569.14 1481.013" style="enable-background:new 0 0 2569.14 1481.013;" xml:space="preserve">
</code></pre>
<p>Edit: This issue was reported on the <a href="https://github.com/Kozea/CairoSVG/issues/440" rel="nofollow noreferrer">CairoSVG GitHub page</a></p>
|
<python><cairo>
|
2025-04-14 14:25:31
| 1
| 867
|
Vincent Garcia
|
79,573,357
| 16,383,578
|
What is a faster way to find all unique partitions of an integer and their weights?
|
<p>I have seen loads of posts on this topic, but none are exactly what I am looking for.</p>
<p>I want to find all ways a positive integer N that is greater than 1 can be expressed as the sum of at most N integers from 1 up to N, just like the usual.</p>
<p>For example, in the standard notation, these are all the partitions of 6:</p>
<pre><code>[(1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 2),
(1, 1, 1, 2, 1),
(1, 1, 1, 3),
(1, 1, 2, 1, 1),
(1, 1, 2, 2),
(1, 1, 3, 1),
(1, 1, 4),
(1, 2, 1, 1, 1),
(1, 2, 1, 2),
(1, 2, 2, 1),
(1, 2, 3),
(1, 3, 1, 1),
(1, 3, 2),
(1, 4, 1),
(1, 5),
(2, 1, 1, 1, 1),
(2, 1, 1, 2),
(2, 1, 2, 1),
(2, 1, 3),
(2, 2, 1, 1),
(2, 2, 2),
(2, 3, 1),
(2, 4),
(3, 1, 1, 1),
(3, 1, 2),
(3, 2, 1),
(3, 3),
(4, 1, 1),
(4, 2),
(5, 1),
(6,)]
</code></pre>
<p>Now, this notation is very low-entropy. First, every occurrence of a number increases the size of a particular partition; this is inefficient, and it is hard to count the occurrences of a number when it recurs many times. I want to replace all the occurrences of a number with a two-element tuple in which the first element is the number and the second is the count: for example, <code>(1, 1, 1, 1, 1, 1)</code> is equivalent to <code>(1, 6)</code>; they both contain the same information, but one is clearly much more concise.</p>
<p>And second, there are lots of duplicates in the output: for example, there are five partitions that contain four 1s and one 2, and they are counted as five separate elements. This is inefficient, too. Since addition is commutative, changing the order of the numbers doesn't change the result, so they are all equivalent; they are all the same element.</p>
<p>However, if we replace all five with just one element, we lose information.</p>
<p>I want to instead replace it with the following format:</p>
<pre><code>Counter({((1, 2), (2, 2)): 6,
((1, 1), (2, 1), (3, 1)): 6,
((1, 4), (2, 1)): 5,
((1, 3), (3, 1)): 4,
((1, 2), (4, 1)): 3,
((1, 1), (5, 1)): 2,
((2, 1), (4, 1)): 2,
((1, 6),): 1,
((2, 3),): 1,
((3, 2),): 1,
((6, 1),): 1})
</code></pre>
<p>So I want the result to be a <code>Counter</code> in which the keys are the unique partitions and the values are how many ways the numbers can be arranged.</p>
<p>And yes I have written a function for this, using brute-force and memoization. It turns out to be pretty efficient.</p>
<p>First this is the implementation that outputs in the standard format, I post it here for comparison:</p>
<pre><code>def partitions(number: int) -> list[tuple[int, ...]]:
result = []
stack = [(number, ())]
while stack:
remaining, path = stack.pop()
if not remaining:
result.append(path)
else:
stack.extend((remaining - i, path + (i,)) for i in range(remaining, 0, -1))
return result
</code></pre>
<p>It takes 582 milliseconds to find all partitions of 20 in CPython and 200 milliseconds in PyPy3:</p>
<p>CPython</p>
<pre><code>In [22]: %timeit partitions(20)
582 ms ± 4.22 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>PyPy3</p>
<pre><code>In [36]: %timeit partitions(20)
199 ms ± 3.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>Now, the bruteforce approach with memoization that outputs in the intended format:</p>
<pre><code>PARTITION_COUNTERS = {}
def partition_counter(number: int) -> Counter:
if result := PARTITION_COUNTERS.get(number):
return result
result = Counter()
for i in range(1, number):
for run, count in partition_counter(number - i).items():
new_run = []
added = False
for a, b in run:
if a == i:
new_run.append((a, b + 1))
added = True
else:
new_run.append((a, b))
if not added:
new_run.append((i, 1))
result[tuple(sorted(new_run))] += count
result[((number, 1),)] = 1
PARTITION_COUNTERS[number] = result
return result
</code></pre>
<p>CPython</p>
<pre><code>In [23]: %timeit PARTITION_COUNTERS.clear(); partition_counter(20)
10.4 ms ± 72.1 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>PyPy3</p>
<pre><code>In [37]: %timeit PARTITION_COUNTERS.clear(); partition_counter(20)
9.75 ms ± 58.3 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>It takes only 10 milliseconds to find all partitions of 20, much, much faster than the first function, and PyPy3 doesn't make it faster.</p>
<p>But how can we do better? After all, I am just using brute force. I know there are lots of smart algorithms for integer partitions, but none of them generate output in the intended format.</p>
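<p>For illustration, a sketch of one possible improvement: enumerate each unique partition exactly once (as non-increasing runs of <code>(part, count)</code>) and compute the number of orderings directly as a multinomial coefficient, instead of enumerating every ordering:</p>
<pre><code>from collections import Counter
from math import factorial, prod

def partition_counter_fast(n: int) -> Counter:
    result = Counter()

    def gen(remaining, max_part, run):
        if remaining == 0:
            parts = sum(c for _, c in run)
            # orderings of the multiset of parts: parts! / prod(count_i!)
            arrangements = factorial(parts) // prod(factorial(c) for _, c in run)
            result[tuple(sorted(run))] += arrangements
            return
        for part in range(min(remaining, max_part), 0, -1):
            for count in range(remaining // part, 0, -1):
                gen(remaining - part * count, part - 1, run + [(part, count)])

    gen(n, n, [])
    return result
</code></pre>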
|
<python><algorithm><number-theory><integer-partition>
|
2025-04-14 14:18:46
| 2
| 3,930
|
Ξένη Γήινος
|
79,573,330
| 1,866,177
|
Pandas Dataframe unique values per group based on label
|
<p>I have a dataframe with two labels e.g.:</p>
<pre><code>df = pd.DataFrame({'Brand': ['VW', 'VW', 'BMW', 'BMW', 'Mercedes'],
'color': ['red', 'red', 'red', 'blue', 'black']})
>>df
Brand color
0 VW red
1 VW red
2 BMW red
3 BMW blue
4 Mercedes black
</code></pre>
<p>For each unique item in the first label ('Brand') I want a list of all unique values of the second label ('color'). So the output would look like:</p>
<pre><code> Brand colors
0 VW [red]
1 BMW [red, blue]
2 Mercedes [black]
</code></pre>
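<p>For reference, a minimal sketch with <code>groupby</code> + <code>unique</code>:</p>
<pre><code>out = (df.groupby('Brand', sort=False)['color']
         .unique()
         .reset_index()
         .rename(columns={'color': 'colors'}))
</code></pre>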
|
<python><pandas><unique>
|
2025-04-14 14:05:30
| 0
| 3,930
|
Dschoni
|
79,573,221
| 7,112,039
|
Keep context vars values between FastAPI/starlette middlewares depending on the middleware order
|
<p>I am developing a FastAPI app, and my goal is to record some information in a Request scope and then reuse this information later in log records.</p>
<p>My idea was to use context vars to store the "request context", use a middleware to manipulate the request and set the context var, and finally use a LogFilter to attach the context vars values to the LogRecord.</p>
<p>This is my app skeleton</p>
<pre class="lang-py prettyprint-override"><code>logger = logging.getLogger(__name__)
app = FastAPI()
app.add_middleware(SetterMiddlware)
app.add_middleware(FooMiddleware)
@app.get("/")
def read_root(setter = Depends(set_request_id)):
print("Adding req_id to body", req_id.get()) # This is 1234567890
logging.info("hello")
return {"Req_id": str(req_id.get())}
</code></pre>
<p>and those are my middlewares</p>
<pre class="lang-py prettyprint-override"><code>class SetterMiddlware(BaseHTTPMiddleware):
async def dispatch(self, request: Request, call_next):
calculated_id = "1234567890"
req_id.set(calculated_id)
request.state.req_id = calculated_id
response = await call_next(request)
return response
class FooMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request: Request, call_next):
response = await call_next(request)
return response
</code></pre>
<p>and the Logging Filter</p>
<pre class="lang-py prettyprint-override"><code>from vars import req_id
class CustomFilter(Filter):
"""Logging filter to attach the user's authorization to log records"""
def filter(self, record: LogRecord) -> bool:
record.req_id = req_id.get()
return True
</code></pre>
<p>And finally following a part of my log configuration</p>
<pre class="lang-py prettyprint-override"><code>...
"formatters": {
"default": {
"format": "%(levelname)-9s %(asctime)s [%(req_id)s]| %(message)s",
"datefmt": "%Y-%m-%d,%H:%M:%S",
},
},
"handlers": {
...
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "default",
"stream": "ext://sys.stderr",
"filters": [
"custom_filter",
],
"level": logging.NOTSET,
},
...
"loggers": {
"": {
"handlers": ["console"],
"level": logging.DEBUG,
},
"uvicorn": {"handlers": ["console"], "propagate": False},
},
</code></pre>
<p>When <code>SetterMiddlware</code> is the last middleware added to the app (<code>FooMiddleware</code> commented out in the example), my app logs as expected</p>
<pre class="lang-console prettyprint-override"><code>Adding req_id to body 1234567890
INFO 2025-04-14,15:02:28 [1234567890]| hello
INFO 2025-04-14,15:02:28 [1234567890]| 127.0.0.1:52912 - "GET / HTTP/1.1" 200
</code></pre>
<p>But if I add some other middleware after <code>SetterMiddlware</code>, the uvicorn logger no longer finds the context var <code>req_id</code> set.</p>
<pre class="lang-console prettyprint-override"><code>Adding req_id to body 1234567890
INFO 2025-04-14,15:03:56 [1234567890]| hello
INFO 2025-04-14,15:03:56 [None]| 127.0.0.1:52919 - "GET / HTTP/1.1" 200
</code></pre>
<p>I tried using the package <code>https://starlette-context.readthedocs.io/en/latest/</code> but I had no more luck; it appears to suffer from the same problem.</p>
<p>I would like to know why this behavior happens and how I can fix it, without the constraint of having the SetterMiddleware in the last middleware position.</p>
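<p>For what it's worth, a hedged sketch of a pure ASGI middleware (my assumption being that <code>BaseHTTPMiddleware</code> runs the downstream app in a separate task, so context set in one dispatch can be lost by the time the server logs the request); the class name is mine:</p>
<pre class="lang-py prettyprint-override"><code>class SetterASGIMiddleware:
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            req_id.set("1234567890")  # set in the current context, no extra task
        await self.app(scope, receive, send)

app.add_middleware(SetterASGIMiddleware)
</code></pre>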
|
<python><fastapi><starlette><python-contextvars><fastapi-middleware>
|
2025-04-14 13:07:09
| 2
| 303
|
ow-me
|
79,573,192
| 9,973,879
|
What's wrong with numpy_nullable and NaNs?
|
<p>Doing the same operation using plain <code>numpy</code>, the <code>numpy_nullable</code> backend and the <code>pyarrow</code> backend shows that <code>numpy_nullable</code> behaves differently than the other two and in a rather counter-intuitive way.</p>
<pre class="lang-py prettyprint-override"><code>>>> import pandas as pd
>>> sr = pd.Series([1.5, 0.0])
>>> (sr / sr).max()
1.0
>>> sr = pd.Series([1.5, 0.0]).convert_dtypes()
>>> (sr / sr).max()
<NA>
>>> sr = pd.Series([1.5, 0.0]).convert_dtypes(dtype_backend='pyarrow')
>>> (sr / sr).max()
1.0
</code></pre>
<p>Is there a good reason, or at least an explanation, for this inconsistent behavior? Is there a way around it or should we simply avoid the <code>numpy_nullable</code> backend?</p>
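<p>A small inspection sketch to narrow down where the <code><NA></code> comes from (it just prints the intermediate result of the division under the masked backend):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

sr = pd.Series([1.5, 0.0]).convert_dtypes()
q = sr / sr
print(q)                   # inspect what 0.0/0.0 became under Float64
print(q.max(skipna=True))  # skipna=True is already the default
</code></pre>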
|
<python><pandas>
|
2025-04-14 12:49:45
| 1
| 1,967
|
user209974
|
79,572,852
| 10,755,782
|
How can I access the original xlo/xhi box bounds from a LAMMPS dump file using MDAnalysis in Python?
|
<p>I'm working with a LAMMPS trajectory in MDAnalysis using the default LAMMPSDUMP format, like this:</p>
<pre><code>import MDAnalysis as mda
u = mda.Universe("datfile.data", "dump.LAMMPSDUMP")
</code></pre>
<p>In my LAMMPS dump file, the box bounds are written explicitly and do not start at '0', for example:</p>
<pre><code>ITEM: TIMESTEP
0
ITEM: NUMBER OF ATOMS
2197
ITEM: BOX BOUNDS pp pp pp
1.7831079224879716e+00 3.3421689207745555e+02
1.7831079224879716e+00 3.3421689207745555e+02
1.7831079224879716e+00 3.3421689207745555e+02
ITEM: ATOMS id type x y z vx vy vz
</code></pre>
<p>However, when I access <code>u.trajectory.ts.dimensions</code>, I only get the box lengths,</p>
<p><code>332.43378, 332.43378, 332.43378, 90. , 90. , 90. </code></p>
<p>MDAnalysis assumes that the box starts at zero and gives me the lengths along X, Y and Z. This causes issues when I'm trying to re-center atoms. Is there a way to access the actual xlo, xhi, etc. values from within MDAnalysis (e.g., something like <code>u.trajectory.ts.xlo</code>)? Or do I have to manually parse the dump file to retrieve these values?</p>
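<p>In case manual parsing turns out to be the only route, a minimal sketch of my own (nothing MDAnalysis-specific) for pulling the bounds of the first frame straight from the dump header:</p>
<pre><code>def read_box_bounds(dump_path):
    # returns [(xlo, xhi), (ylo, yhi), (zlo, zhi)] for the first frame
    bounds = []
    with open(dump_path) as fh:
        for line in fh:
            if line.startswith("ITEM: BOX BOUNDS"):
                for _ in range(3):
                    lo, hi = map(float, next(fh).split()[:2])
                    bounds.append((lo, hi))
                break
    return bounds
</code></pre>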
<p>Thanks !</p>
|
<python><lammps><mdanalysis>
|
2025-04-14 09:54:33
| 0
| 660
|
brownser
|
79,572,849
| 2,239,318
|
python-docx header set anchor "to page"
|
<p>Problem: place a logo image at the top-left position of the page.</p>
<p>I've tried the following to accomplish that, but it doesn't work as expected:</p>
<pre><code>from docx import Document
from docx.shared import Mm, Pt
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
doc = Document()
# A4 vertical
section = doc.sections[0]
section.page_height = Mm(297)
section.page_width = Mm(210)
# tried to remove margin to position logo
section.top_margin = Mm(0)
section.bottom_margin = Mm(0)
section.left_margin = Mm(0)
section.right_margin = Mm(0)
# placed logo but still got a left margin
header = section.header
p = header.paragraphs[0]
p.paragraph_format.space_before = 0
p.paragraph_format.space_after = 0
p.alignment = WD_PARAGRAPH_ALIGNMENT.LEFT
p.add_run().add_picture("logo.png", width=Mm(210))
# restored margins for next content but trigger a break of the page
section.top_margin = Mm(20)
section.bottom_margin = Mm(20)
section.left_margin = Mm(25)
section.right_margin = Mm(20)
</code></pre>
<p>Manually editing the generated docx file, I found that setting the image's anchor property to "page" fixes the positioning.</p>
<p>But this option seems to be missing from python-docx.</p>
|
<python><python-docx>
|
2025-04-14 09:53:06
| 2
| 2,826
|
user2239318
|
79,572,697
| 17,040,989
|
produce nice barplots with python in PyCharm
|
<p>I'm working on a very basic barplot in <code>Python</code> where I need to plot a series of length occurrences, showing how many times each specific length appears.</p>
<p>I'm storing everything in an array, but when I attempt to plot I either get the <em>y</em>-scale wrong, or every individual occurrence lands on the <em>x</em>-axis when the occurrences should instead be “added” on top of each other towards the total count.</p>
<p>Below are the code I tested and the ideal output I wish to achieve, which I plotted with <code>R</code>:</p>
<p><code>print(l)</code></p>
<pre><code>[408, 321, 522, 942, 462, 564, 765, 747, 465, 957, 993, 1056, 690, 1554, 1209, 246, 462, 3705, 1554, 507, 681, 1173, 408, 330, 1317, 240, 576, 2301, 1911, 1677, 1014, 756, 918, 864, 528, 882, 1131, 1440, 1167, 1146, 1002, 906, 1056, 1881, 396, 1278, 501, 1110, 303, 1176, 699, 747, 1971, 3318, 1875, 450, 354, 1218, 378, 303, 777, 915, 5481, 576, 1920, 2022, 1662, 519, 936, 423, 1149, 600, 1896, 648, 2238, 1419, 423, 552, 1299, 1071, 963, 471, 408, 729, 1896, 1068, 1254, 1179, 1188, 645, 978, 903, 1191, 1119, 747, 1005, 273, 1191, 519, 930, 1053, 2157, 933, 888, 591, 1287, 457, 294, 291, 669, 270, 556, 444, 483, 438, 452, 659, 372, 480, 464, 477, 256, 350, 357, 524, 477, 218, 192, 216, 587, 473, 525, 657, 241, 719, 383, 459, 855, 417, 283, 408, 678, 681, 1254, 879, 250, 857, 706, 456, 567, 190, 887, 287, 240, 960, 587, 361, 816, 297, 290, 253, 335, 609, 507, 294, 1475, 464, 780, 552, 555, 1605, 1127, 382, 579, 645, 273, 241, 552, 344, 890, 1346, 1067, 764, 431, 796, 569, 1386, 413, 401, 407, 252, 375, 378, 339, 457, 1779, 243, 701, 552, 708, 174, 300, 257, 378, 777, 729, 969, 603, 378, 436, 348, 399, 1662, 1511, 799, 715, 1400, 399, 516, 399, 355, 1291, 1286, 657, 374, 492, 334, 295, 210, 270, 858, 1487, 1020, 1641, 417, 396, 303, 553, 492, 1097, 612, 441, 654, 611, 532, 474, 864, 377, 465, 435, 1003, 608, 486, 748, 351, 245, 545, 627, 303, 457, 419, 449, 843, 312, 398, 704, 315, 330, 1054, 259, 507, 372, 468, 345, 1303, 408, 1031, 471, 653, 925, 397, 231, 684, 449, 336, 344, 619, 917, 417, 516, 359, 550, 222, 789, 608, 659, 853, 360, 657, 372, 305, 353, 650, 564, 547, 969, 505, 230, 953, 769, 307, 516, 408, 342, 267, 570, 572, 348, 1005, 981, 1586, 1302, 369, 1290, 1458, 572, 1122, 363, 879, 651, 466, 1203, 485, 440, 473, 810, 1320, 461, 455, 258, 660, 297, 285, 424, 273, 378, 432, 293, 410, 327, 483, 477, 551, 894, 638, 538, 678, 303, 478, 1046, 995, 360, 252, 480, 490, 475, 394, 1185, 357, 361, 387, 489, 450, 788, 366, 340, 829, 469, 404, 593, 498, 840, 601, 235, 452, 395, 504, 299, 662, 357, 686, 683, 248, 574, 1108, 587, 483, 1481, 1297, 1334, 579, 182, 456, 1335, 513, 967, 918, 607, 564, 727, 913, 743, 312, 480, 659, 939, 705, 1001, 553, 339, 286, 452, 744, 519, 521, 491, 565, 522, 377, 861, 812, 523, 332, 800, 1015, 1000, 513, 990, 1003, 733, 542, 940, 399, 399, 612, 1361, 399, 399, 318, 319, 510, 504, 841, 1529, 506, 1881, 500, 358, 240, 1261, 354, 519, 779, 656, 311, 635, 527, 759, 333, 648, 770, 330, 584, 453, 632, 513, 998, 343, 696, 1286, 391, 374, 893, 375, 426, 658, 455, 518, 466, 417, 614, 285, 480, 845, 344, 534, 572, 1727, 1085, 480, 468, 192, 348, 578, 2433, 390, 1031, 1129, 626, 735, 963, 439, 272, 806, 743, 560, 250, 679, 459, 207, 905, 616, 404, 489, 582, 340, 435, 1632, 417, 221, 279, 462, 357, 288, 248, 981, 1015, 935, 678, 279, 348, 470, 958, 867, 352, 735, 293, 911, 460, 767, 386, 531, 411, 192, 742, 373, 1454, 970, 285, 468, 273, 1527, 612, 983, 552, 998, 553, 812, 983, 403, 1706, 781, 183, 405, 891, 647, 1022, 946, 476, 270, 471, 888, 435, 354, 563, 526, 877, 1170, 351, 863, 1503, 562, 1174, 345, 385, 275, 374, 171, 474, 408, 1640, 345, 462, 722, 1645, 504, 840, 459, 783, 501, 473, 609, 684, 543, 353, 788, 684, 734, 242, 751, 478, 471, 365, 293, 380, 486, 617, 786, 436, 632, 624, 386, 925, 469, 405, 2406, 462, 435, 251, 1118, 349, 779, 343, 458, 264, 243, 935, 535, 576, 480, 406, 606, 495, 396, 456, 798, 404, 285, 375, 922, 1136, 330, 339, 559, 998, 239, 587, 468, 1237, 1722, 699, 436, 377, 306, 326, 1076, 385, 537, 315, 342, 386, 400, 340, 202, 266, 
455, 435, 259, 317, 456, 249, 452, 1345, 699, 456, 456, 453, 275, 315, 693, 354, 475, 780, 415, 956, 554, 258, 418, 996, 552, 511, 1404, 469, 262, 398, 242, 350, 538, 379, 300, 460, 373, 276, 258, 740, 609, 753, 357, 495, 532, 551, 234, 633, 480, 312, 898, 350, 705, 265, 345, 334, 334, 582, 583, 582, 478, 465, 480, 408, 870, 624, 1107, 303, 384, 1165, 1456, 878, 297, 301, 276, 372, 551, 799, 496, 204, 552, 791, 330, 359, 480, 468, 414, 1102, 876, 1112, 850, 536, 500, 374, 825, 476, 499, 275, 345, 616, 360, 609, 310, 260, 376, 283, 390, 1529, 1310, 207, 1039, 661, 570, 1292, 914, 843, 658, 302, 1119, 609, 225, 317, 1091, 225, 403, 544, 495, 912, 744, 473, 985, 342, 630, 298, 392, 297, 933, 888, 666, 1023, 346, 310, 1134, 840, 1277, 387, 463, 435, 610, 492, 1107, 582, 582, 582, 1307, 647, 1280, 555, 645, 267, 952, 588, 348, 287, 507, 410, 737, 731, 354, 2192, 309, 388, 692, 389, 742, 766, 1228, 1640, 237, 495, 351, 285, 2443, 963, 296, 420, 482, 246, 553, 621, 405, 597, 459, 310, 300, 450, 471, 291, 610, 723, 380, 1439, 312, 900, 275, 396, 342, 309, 549, 355, 474, 417, 372, 384, 291, 987, 629, 407, 655, 357, 473, 348, 459, 599, 474, 430, 620, 584, 546, 435, 242, 1167, 627, 378, 945, 349, 255, 216, 530, 516, 606, 449, 1490, 401, 1070, 899, 452, 1304, 451, 723, 354, 229, 629, 639, 501, 465, 344, 1895, 288, 341, 2377, 542, 453, 291, 645, 494, 471, 612, 1294, 713, 1291, 467, 734, 300, 1432, 320, 753, 609, 1051, 231, 875, 704, 438, 742, 504, 1334, 738, 342, 435, 1133, 1229, 436, 310, 494, 273, 1228, 626, 470, 235, 1264, 465, 450, 350, 647, 541, 256, 231, 435, 485, 224, 555, 395, 300, 969, 237, 1717, 416, 538, 371, 326, 360, 1194, 397, 519, 645, 324, 465, 402, 477, 527, 831, 1179, 366, 889, 941, 374, 775, 581, 392, 1188, 797, 480, 418, 733, 857, 332, 255, 2847, 917, 478, 585, 591, 480, 1293, 273, 375, 489, 727, 316, 1451, 975, 762, 528, 408, 1104, 375, 265, 609, 317, 879, 542, 332, 462, 492, 284, 282, 394, 483, 493, 778, 291, 443, 350, 491, 374, 369, 862, 245, 269, 640, 282, 606, 393, 307, 488, 276, 611, 471, 1806, 1296, 336, 244, 1105, 444, 375, 1214, 294, 455, 353, 605, 669, 354, 692, 345, 643, 289, 460, 771, 351, 1635, 331, 465, 703, 352, 396, 269, 1142, 353, 552, 2790, 611, 606, 731, 447, 485, 420, 283, 744, 1265, 381, 1146, 589, 477, 309, 669, 389, 435, 558, 445, 1448, 333, 762, 1222, 779, 519, 465, 317, 375, 480, 371, 787, 305, 1276, 408, 304, 246, 791, 341, 330, 536, 278, 383, 417, 351, 323, 1068, 507, 741, 678, 613, 823, 1748, 411, 676, 287, 486, 433, 506, 194, 444, 860, 1212, 1005, 321, 462, 1158, 223, 625, 294, 294, 1598, 205, 764, 2649, 1226, 479, 543, 321, 1143, 648, 2409, 291, 1095, 651, 405, 294, 728, 267, 805, 294, 1010, 405, 368, 442, 363, 3117, 296, 466, 1621, 509, 219, 692, 453, 749, 828, 950, 683, 574, 438, 396, 461, 740, 350, 408, 1636, 746, 821, 912, 482, 532, 397, 582, 537, 761, 348, 354, 356, 978, 348, 441, 464, 1206, 576, 355, 446, 577, 1186, 396, 980, 213, 498, 597, 335, 419, 351, 617, 226, 609, 206, 762, 596, 999, 589, 585, 477, 558, 206, 806, 405, 356, 742, 881, 426, 434, 735, 494, 611, 308, 453, 426, 664, 384, 335, 612, 286, 463, 363, 460, 327, 1007, 1285, 1021, 464, 662, 1266, 1275, 205, 581, 351, 409, 387, 406, 296, 353, 447, 472, 667, 572, 682, 460, 941, 382, 477, 819, 340, 477, 716, 461, 302, 348, 291, 459, 567, 625, 216, 713, 394, 462, 620, 486, 1049, 1027, 761, 534, 348, 346, 313, 551, 522, 612, 303, 186, 288, 1054, 481, 1263, 530, 603, 491, 297, 1989, 598, 545, 291, 568, 201, 538, 267, 894, 2037, 456, 291, 367, 338, 782, 435, 570, 245, 371, 341, 478, 511, 348, 
1019, 1315, 1007, 469, 711, 848, 1810, 807, 455, 607, 435, 270, 489, 408, 574, 444, 438, 495, 474, 675, 1024, 610, 464, 477, 549, 305, 366, 306, 222, 158, 893, 312, 348, 259, 261, 336, 495, 560, 452, 273, 357, 455, 195, 506, 1403, 345, 347, 462, 957, 224, 798, 487, 372, 798, 420, 316, 400, 399, 878, 618, 371, 369, 336, 474, 350, 1081, 1012, 649, 480, 430, 570, 341, 759, 456, 237, 466, 531, 455, 846, 280, 767, 758, 624, 724, 582, 1924, 270, 570, 1800, 530, 826, 1478, 345, 624, 498, 231, 686, 592, 1671, 413, 582, 302, 504, 666, 727, 613, 857, 270, 446, 483, 1781, 1308, 358, 1393, 453, 672, 264, 412, 281, 378, 476, 562, 792, 342, 495, 342, 392, 269, 1495, 668, 490, 272, 266, 270, 1080, 401, 405, 395, 588, 306, 604, 482, 301, 1439, 1605, 1833, 441, 1287, 1093, 1564, 1093, 624, 1925, 1287, 894, 428, 547, 1924, 1455, 938, 1369, 1794, 404, 605, 570, 447, 1171, 268, 626, 318, 406, 1471, 1069, 792, 657, 482, 420, 1121, 844, 522, 1560, 734, 1318, 723, 1335, 830, 825, 287, 440, 895, 323, 782, 479, 1397, 860, 297, 1002, 570, 603, 576, 269, 466, 758, 509, 552, 462, 493, 477, 431, 351, 757, 438, 1765, 1486, 480, 907, 620, 600, 438, 576, 576, 801, 515, 862, 337, 532, 385, 953, 719, 1223, 468, 486, 445, 231, 610, 474, 311, 738, 868, 453, 558, 409, 305, 827, 308, 614, 519, 380, 763, 472, 313, 447, 960, 741, 444, 520, 543, 531, 450, 413, 305, 492, 868, 207, 1285, 492, 802, 435, 303, 723, 705, 308, 417, 353, 347, 737, 380, 477, 343, 345, 409, 408, 276, 193, 270, 845, 792, 443, 1111, 256, 800, 549, 315, 274, 426, 470, 359, 473, 271, 576, 1293, 342, 761, 577, 671, 340, 276, 394, 467, 387, 336, 920, 350, 1400, 195, 336, 1282, 282, 773, 757, 566, 396, 880, 494, 661, 953, 480, 314, 468, 468, 339, 550, 1075, 334, 318, 365, 567, 286, 1560, 207, 1344, 584, 333, 387, 1164, 1074, 1324, 1080, 405, 264, 300, 582, 342, 427, 514, 576, 993, 208, 669, 993, 439, 219, 742, 890, 966, 520, 337, 488, 438, 561, 319, 476, 300, 465, 1056, 1044, 216, 198, 267, 327, 527, 746, 447, 288, 923, 268, 300, 262, 1015, 468, 289, 341, 345, 483, 482, 548, 255, 441, 229, 435, 453, 264, 369, 403, 333, 461, 446, 221, 405, 848, 616, 396, 405, 495, 476, 315, 351, 438, 495, 482, 456, 322, 666, 1031, 633, 306, 880, 2683, 774, 494, 993, 430, 1284, 1118, 1030, 219, 384, 2249, 301, 195, 689, 251, 302, 474, 732, 790, 435, 436, 270, 198, 435, 583, 800, 310, 576, 280, 363, 651, 743, 855, 485, 673, 1014, 345, 407, 351, 3668, 355, 396, 415, 361, 229, 269, 1094, 435, 327, 587, 299, 362, 375, 414, 440, 637, 732, 845, 432, 360, 572, 198, 934, 1480, 948, 976, 899, 372, 459, 997, 165, 734, 455, 479, 480, 514, 504, 446, 504, 1620, 552, 1118, 485, 509, 892, 1025, 546, 777, 455, 445, 985, 474, 864, 302, 712, 283, 307, 432, 1075, 478, 732, 685, 375, 507, 1209, 1097, 2480, 477, 343, 432, 496, 465, 457, 768, 561, 660, 915, 661, 255, 217, 960, 265, 526, 672, 798, 357, 1692, 622, 465, 612, 228, 1086, 444, 261, 345, 238, 706, 240, 444, 288, 632, 528, 318, 401, 378, 192, 461, 528, 393, 486, 409, 831, 1019, 745, 222, 216, 465, 839, 1399, 523, 461, 457, 388, 438, 1062, 351, 553, 814, 345, 494, 643, 307, 306, 252, 569, 534, 557, 372, 374, 344, 696, 351, 582, 903, 375, 432, 303, 743, 617, 459, 492, 495, 999, 284, 538, 291, 748, 742, 739, 449, 212, 261, 579, 1311, 1178, 330, 458, 276, 563, 467, 565, 578, 227, 178, 959, 642, 475, 1242, 325, 365, 360, 314, 523, 201, 569, 571, 351, 319, 298, 468, 1154, 351, 599, 574, 947, 480, 415, 770, 459, 263, 285, 281, 465, 1429, 498, 199, 345, 639, 261, 489, 314, 291, 692, 318, 351, 399, 275, 540, 542, 914, 492, 872, 231, 1324, 373, 270, 302, 
479, 285, 381, 270, 410, 1366, 242, 698, 1044, 513, 1004, 951, 702, 796, 291, 282, 444, 734, 1669, 500, 350, 319, 1092, 239, 434, 266, 297, 323, 407, 252, 879, 893, 267, 222, 326, 311, 288, 680, 568, 477, 877, 408, 968, 888, 1497, 1312, 336, 279, 459, 876, 294, 324, 324, 801, 383, 225, 449, 609, 384, 738, 951, 312, 550, 810, 765, 377, 297, 179, 213, 320, 489, 797, 1637, 558, 616, 1907, 517, 556, 773, 669, 426, 432, 956, 336, 757, 353, 420, 462, 797, 475, 1124, 356, 579, 212, 472, 361, 408, 390, 470, 527, 637, 422, 474, 622, 533, 728, 985, 537, 606, 340, 754, 479, 851, 960, 453, 607, 518, 639, 495, 341, 411, 441, 609, 792, 287, 498, 458, 260, 195, 411, 1646, 375, 665, 243, 356, 426, 207, 362, 452, 339, 666, 852, 476, 312, 375, 284, 437, 673, 507, 332, 380, 747, 734, 431, 268, 243, 315, 221, 767, 894, 225, 362, 358, 919, 294, 396, 449, 179, 549, 435, 528, 479, 300, 436, 380, 523, 550, 255, 1043, 645, 402, 203, 479, 679, 478, 654, 769, 471, 418, 617, 342, 674, 993, 321, 615, 150, 204, 1033, 606, 759, 604, 828, 307, 273, 558, 234, 408, 548, 1238, 914, 978, 930, 269, 287, 390, 474, 248, 234, 714, 603, 471, 236, 383, 732, 356, 269, 461, 358, 197, 506, 465, 274, 618, 1309, 1638, 1154, 2222, 930, 1395, 1387, 765, 899, 291, 354, 872, 355, 273, 664, 426, 360, 683, 627, 609, 1230, 861, 6609, 549, 444, 240, 461, 234, 495, 571, 957, 342, 212, 1519, 396, 358, 1272, 1492, 615, 414, 472, 332, 335, 1060, 721, 477, 556, 654, 699, 654, 393, 921, 1651, 504, 710, 1083, 755, 246, 476, 270, 330, 618, 805, 571, 495, 391, 498, 1390, 444, 207, 615, 349, 548, 467, 301, 216, 473, 724, 744, 504, 673, 525, 670, 669, 1221, 288, 884, 462, 565, 434, 522, 455, 639, 1221, 301, 1223, 1029, 991, 491, 465, 434, 472, 392, 821, 719, 543, 246, 818, 913, 402, 535, 492, 492, 491, 534, 968, 886, 316, 541, 494, 409, 246, 435, 442, 989, 473, 790, 624, 398, 469, 273, 735, 328, 601, 627, 356, 344, 410, 1261, 495, 506, 518, 388, 624, 687, 237, 972, 476, 527, 1518, 479, 633, 675, 374, 573, 444, 357, 239, 581, 799, 308, 522, 758, 272, 171, 276, 879, 275, 455, 648, 252, 474, 303, 510, 348, 590, 1086, 504, 928, 530, 495, 1587, 239, 608, 326, 585, 373, 496, 482, 1158, 885, 333, 459, 370, 455, 893, 307, 468, 290, 604, 1198, 306, 1110, 922, 705, 418, 1441, 613, 401, 546, 354, 465, 1205, 328, 703, 570, 428, 232, 1292, 415, 1007, 1285, 1019, 968, 245, 606, 1284, 798, 1588, 1547, 606, 326, 506, 228, 1071, 429, 485, 1508, 625, 294, 330, 405, 343, 192, 452, 359, 222, 1282, 521, 461, 403, 735, 297, 1288, 606, 382, 339, 650, 918, 309, 724, 479, 439, 289, 364, 1683, 226, 1139, 372, 495, 741, 923, 464, 629, 266, 1186, 891, 429, 271, 224, 723, 408, 687, 763, 421, 398, 599, 918, 272, 610, 932, 247, 306, 1224, 594, 531, 349, 332, 405, 486, 406, 752, 441, 386, 368, 663, 350, 480, 1067, 368, 816, 468, 615, 976, 339, 332, 903, 357, 961, 970, 657, 942, 662, 400, 304, 858, 332, 238, 231, 327, 475, 1499, 432, 585, 392, 412, 594, 263, 381, 432, 1320, 269, 439, 465, 321, 718, 1059, 408, 1308, 392, 856, 1255, 536, 339, 2192, 455, 1390, 715, 522, 980, 432, 320, 2766, 531, 697, 378, 717, 246, 590, 731, 976, 733, 177, 345, 588, 348, 1187, 318, 724, 705, 1146, 284, 610, 354, 298, 331, 693, 1210, 1470, 540, 612, 419, 1039, 574, 739, 1213, 1332, 296, 292, 493, 1046, 567, 662, 708, 233, 1123, 933, 624, 159, 492, 210, 473, 1153, 1489, 974, 669, 1281, 737, 729, 545, 532, 357, 565, 844, 939, 468, 878, 772, 773, 355, 469, 2315, 171, 654, 1063, 432, 1938, 270, 866, 716, 1022, 323, 330, 226, 285, 300, 896, 300, 659, 246, 1493, 231, 906, 294, 465, 533, 525, 363, 524, 891, 788, 
270, 240, 723, 734, 2027, 474, 1327, 547, 589, 240, 465, 339, 614, 492, 486, 398, 639, 345, 974, 156, 664, 1544, 1367, 776, 610, 465, 519, 478, 1524, 640, 1431, 1288, 419, 189, 275, 651, 852, 939, 672, 316, 489, 456, 360, 921, 939, 446, 366, 384, 366, 266, 332, 492, 1479, 825, 460, 351, 549, 475, 740, 313, 357, 556, 618, 1039, 411, 234, 378, 567, 269, 990, 270, 573, 629, 996, 1107, 393, 480, 624, 583, 485, 1770, 323, 374, 484, 1128, 609, 379, 1426, 551, 1182, 680, 607, 472, 467, 1312, 468, 342, 473, 1279, 832, 408, 802, 764, 290, 668, 440, 1085, 492, 1523, 189, 329, 1334, 403, 285, 427, 653, 346, 1385, 197, 1281, 465, 468, 414, 981, 473, 879, 552, 246, 522, 610, 609, 255, 915, 2142, 624, 236, 892, 480, 944, 847, 674, 739, 275, 1139, 291, 815, 357, 387, 613, 160, 341, 630, 794, 3061, 552, 167, 447, 300, 471, 1182, 867, 424, 1104, 417, 648, 708, 700, 405, 399, 231, 246, 1588, 766, 1127, 611, 892, 604, 995, 657, 2170, 336, 492, 273, 874, 303, 487, 500, 967, 1380, 345, 300, 1863, 408, 446, 1269, 351, 1448, 570, 336, 487, 270, 270, 804, 833, 1384, 1235, 404, 285, 1499, 708, 834, 584, 309, 492, 528, 762, 624, 380, 323, 916, 403, 384, 409, 530, 241, 724, 1950, 645, 301, 386, 704, 708, 1389, 588, 693, 484, 469, 299, 467, 1119, 696, 610, 824, 231, 531, 321, 663, 177, 635, 573, 268, 711, 892, 513, 707, 872, 619, 576, 476, 506, 285, 594, 495, 564, 399, 387, 638, 536, 594, 772, 955, 672, 312, 305, 627, 774, 575, 1178, 1647, 390, 879, 563, 931, 464, 440, 515, 201, 499, 703, 738, 1372, 794, 712, 503, 1034, 618, 753, 225, 736, 688, 395, 345, 531, 695, 467, 1009, 789, 1659, 532, 913, 261, 359, 611, 660, 480, 555, 551, 849, 743, 1224, 841, 442, 408, 372, 625, 437, 825, 297, 375, 647, 304, 992, 722, 451, 684, 155, 780, 543, 340, 477, 1659, 2790, 480, 445, 457, 968, 360, 306, 676, 498, 603, 318, 724, 600, 265, 718, 381, 343, 776, 600, 600, 600, 600, 600, 600, 600, 597, 600, 597, 584, 255, 1539, 672, 1726, 179, 589, 326, 629, 626, 789, 440, 954, 537, 262, 3015, 405, 374, 381, 743, 272, 479, 640, 293, 359, 412, 959, 550, 1088, 492, 615, 279, 480, 864, 369, 491, 467, 343, 537, 723, 254, 567, 1049, 1313, 591, 311, 477, 1617, 744, 251, 299, 159, 461, 464, 1042, 668, 301, 771, 533, 280, 713, 544, 608, 493, 644, 344, 456, 560, 1110, 307, 290, 1069, 606, 717, 1167, 653, 356, 495, 1012, 432, 297, 1618, 405, 449, 405, 573, 565, 962, 364, 369, 910, 223, 245, 398, 495, 577, 616, 468, 620, 316, 230, 633, 334, 808, 543, 744, 935, 1004, 863, 615, 592, 429, 333, 204, 484, 287, 642, 930, 866, 997, 299, 290, 520, 342, 959, 588, 851, 629, 522, 537, 569, 336, 391, 462, 824, 474, 959, 760, 353, 348, 462, 1420, 1386, 1275, 548, 408, 600, 600, 600, 600, 402, 242, 1391, 1215, 573, 470, 1168, 476, 1712, 376, 868, 495, 379, 300, 1359, 1053, 662, 465, 526, 427, 543, 667, 322, 778, 1327, 435, 360, 507, 1079, 1201, 477, 403, 261, 673, 499, 580, 446, 908, 1490, 552, 269, 576, 616, 933, 961, 384, 236, 479, 255, 495, 483, 602, 354, 435, 650, 826, 455, 704, 246, 636, 1267, 1201, 282, 567, 432, 2289, 666, 549, 162, 510, 748, 297, 372, 270, 699, 227, 412, 344, 470, 491, 1370, 403, 456, 246, 317, 335, 1379, 952, 456, 416, 519, 312, 656, 338, 863, 688, 340, 854, 666, 697, 742, 967, 587, 192, 462, 490, 337, 890, 1539, 244, 229, 536, 280, 264, 414, 438, 1311, 300, 884, 695, 1509, 798, 612, 611, 414, 533, 678, 426, 274, 466, 883, 864, 603, 873, 1398, 477, 495, 528, 767, 613, 304, 1419, 832, 488, 489, 1290, 648, 266, 1200, 957, 407, 507, 703, 715, 495, 305, 389, 949, 492, 1155, 693, 333, 464, 331, 769, 660, 1115, 403, 483, 899, 279, 371, 354, 361, 
444, 552, 286, 248, 265, 662, 393, 2433, 766, 752, 326, 692, 1185, 1170, 678, 728, 432, 656, 1190, 510, 878, 366, 434, 297, 680, 735, 533, 935, 774, 692, 1162, 687, 540, 1417, 464, 339, 779, 471, 566, 281, 384, 271, 760, 698, 357, 513, 888, 475, 515, 216, 864, 303, 630, 425, 299, 562, 522, 1155, 457, 489, 812, 719, 405, 1313, 735, 255, 275, 384, 274, 1007, 289, 457, 1239, 368, 1148, 581, 351, 488, 712, 1097, 639, 478, 481, 630, 479, 493, 740, 1239, 366, 380, 1234, 358, 483, 824, 593, 994, 318, 465, 797, 715, 766, 333, 615, 693, 495, 366, 366, 420, 400, 381, 879, 431, 404, 645, 405, 451, 360, 263, 522, 315, 294, 610, 382, 1304, 417, 655, 824, 829, 463, 798, 453, 495, 264, 1122, 1476, 469, 285, 1098, 838, 430, 293, 418, 225, 260, 1004, 346, 552, 1383, 708, 1218, 348, 738, 358, 342, 303, 993, 597, 1048, 571, 448, 752, 581, 475, 803, 1209, 863, 385, 737, 435, 651, 982, 1286, 1175, 1172, 329, 582, 485, 1280, 338, 520, 308, 407, 330, 392, 420, 1595, 951, 454, 348, 482, 305, 1004, 498, 243, 768, 470, 1773, 770, 266, 543, 456, 622, 516, 773, 661, 368, 395, 364, 444, 506, 606, 1077, 429, 557, 478, 311, 1318, 2398, 724, 402, 435, 345, 511, 1004, 1119, 293, 365, 715, 360, 191, 955, 480, 954, 347, 421, 495, 416, 432, 457, 583, 484, 894, 918, 705, 471, 378, 499, 889, 1277, 624, 307, 1274, 405, 299, 430, 1449, 879, 374, 1078, 1326, 860, 586, 192, 1356, 815, 595, 817, 484, 476, 373, 416, 744, 526, 352, 207, 460, 542, 334, 332, 499, 702, 258, 951, 771, 1199, 372, 425, 459, 448, 542, 343, 270, 791, 969, 287, 316, 398, 460, 357, 270, 811, 741, 474, 374, 582, 869, 404, 409, 421, 581, 797, 1197, 225, 408, 366, 338, 1098, 474, 609, 1318, 568, 864, 813, 1560, 543, 312, 321, 305, 1125, 420, 771, 400, 302, 251, 476, 321, 1140, 405, 764, 390, 275, 317, 697, 447, 573, 348, 1829, 1062, 459, 361, 861, 1385, 1797, 1182, 477, 445, 552, 537, 359, 684, 1079, 342, 260, 519, 408, 827, 823, 456, 529, 1155, 291, 900, 730, 445, 564, 399, 1149, 488, 192, 658, 1520, 1024, 861, 1007, 455, 808, 750, 489, 411, 486, 382, 566, 354, 366, 542, 542, 413, 1056, 1056, 486, 793, 431, 790, 416, 610, 504, 491, 1393, 611, 392, 531, 588, 905, 820, 955, 1148, 782, 1104, 314, 744, 729, 428, 256, 680, 337, 372, 622, 289, 367, 676, 327, 465, 1311, 1101, 370, 401, 729, 302, 587, 378, 420, 1124, 450, 1387, 387, 240, 1232, 352, 589, 669, 1181, 405, 656, 1185, 946, 610, 1696, 610, 294, 537, 381, 646, 393, 325, 274, 300, 449, 669, 342, 551, 1329, 473, 398, 1222, 881, 651, 234, 467, 682, 457, 905, 292, 330, 726, 291, 312, 438, 393, 477, 1494, 188, 369, 491, 394, 539, 674, 569, 531, 342, 770, 347, 279, 510, 360, 346, 959, 661, 315, 406, 813, 527, 517, 568, 373, 417, 429, 330, 572, 638, 210, 266, 894, 746, 344, 459, 772, 261, 339, 876, 575, 317, 1534, 707, 1141, 405, 1104, 282, 954, 441, 573, 656, 255, 444, 610, 1696, 207, 610, 610, 648, 548, 948, 641, 344, 505, 397, 388, 1859, 488, 251, 320, 314, 408, 180, 956, 776, 823, 645, 585, 373, 338, 666, 354, 537, 462, 865, 303, 1098, 602, 501, 714, 766, 348, 534, 446, 534, 1176, 1158, 412, 989, 360, 2165, 971, 993, 240, 606, 1554, 216, 387, 749, 384, 467, 654, 685, 954, 608, 299, 2270, 1178, 460, 548, 753, 399, 310, 837, 709, 259, 456, 351, 299, 950, 759, 178, 1072, 824, 198, 354, 608, 484, 717, 154, 598, 300, 303, 252, 565, 526, 381, 520, 384, 339, 461, 353, 391, 438, 450, 474, 228, 477, 623, 1196, 269, 341, 559, 468, 492, 528, 254, 1341, 545, 1276, 483, 794, 990, 742, 258, 341, 521, 714, 1234, 437, 1169, 660, 409, 873, 317, 1230, 1029, 1243, 390, 463, 335, 405, 1166, 357, 495, 530, 732, 330, 1368, 330, 330, 
1368, 331, 930, 903, 801, 901, 1443, 324, 1444, 1443, 905, 324, 927, 2911, 468, 295, 370, 744, 235, 453, 355, 809, 1494, 168, 480, 494, 1102, 374, 480, 262, 563, 1844, 893, 180, 445, 588, 662, 746, 1482, 1054, 4866, 1377, 560, 726, 292, 377, 315, 1836, 782, 357, 1171, 190, 648, 715, 582, 1386, 540, 336, 482, 607, 361, 542, 357, 276, 1278, 593, 1019, 548, 1390, 552, 465, 372, 1283, 1281, 895, 751, 301, 261, 771, 428, 1206, 441, 1546, 285, 479, 902, 459, 603, 1187, 855, 856, 1444, 903, 930, 334, 856, 334, 856, 334, 1369, 331, 1368, 928, 324, 903, 494, 355, 450, 747, 410, 659, 477, 657, 2609, 477, 991, 930, 944, 464, 645, 476, 347, 849, 327, 445, 729, 486, 198, 369, 232, 396, 480, 269, 426, 351, 249, 803, 475, 228, 266, 844, 393, 516, 779, 483, 374, 561, 368, 374, 203, 494, 1443, 334, 856, 494, 1045, 894, 593, 590, 1086, 504, 928, 265, 312, 465, 408, 493, 265, 1625, 968, 1234, 348, 459, 1098, 318, 621, 549, 785, 1218, 585, 438, 1476, 230, 688, 584, 812, 423, 525, 459, 324, 981, 509, 323, 530, 466, 553, 462, 285, 1275, 402, 756, 1586, 588, 1004, 1170, 555, 426, 288, 605, 699, 1493, 621, 1746, 1023, 502, 375, 1028, 855, 581, 327, 162, 200, 201, 399, 435, 482, 690, 1173, 409, 836, 1526, 1020, 1088, 330, 315, 480, 593, 522, 444, 210, 739, 1900, 778, 847, 711, 219, 300, 303, 1109, 1283, 461, 860, 834, 778, 944, 282, 523, 593, 833, 564, 595, 534, 530, 582, 315, 1236, 1307, 939, 496, 667, 378, 1205, 174, 1331, 443, 479, 648, 857, 1285, 1071, 372, 1116, 577, 646, 645, 759, 1137, 819, 1577, 201, 374, 314, 736, 463, 1179, 491, 588, 953, 528, 392, 1367, 747, 344, 1762, 1048, 1070, 563, 474, 374, 327, 621, 596, 536, 260, 452, 576, 1476, 675, 824, 603, 511, 2064, 405, 548, 388, 1227, 368, 504, 1002, 327, 1544, 728, 906, 880, 405, 477, 585, 1141, 544, 530, 704, 1583, 1006, 422, 657, 1140, 482, 879, 750, 408, 951, 870, 488, 850, 537, 561, 555, 444, 822, 662, 333, 1993, 420, 406, 674, 644, 1392, 1031, 616, 815, 1180, 677, 861, 855, 251, 213, 375, 890, 200, 162, 1195, 1035, 388, 1224, 3684, 1002, 2398, 311, 355, 1626, 674, 626, 663, 646, 528, 1217, 348, 2272, 966, 658, 981, 511, 1121, 760, 312, 566, 961, 1659, 374, 480, 782, 1190, 324, 1140, 1254, 1513, 414, 1015, 1151, 786, 1122, 1642, 316, 476, 393, 1264, 530, 757, 716, 1019, 447, 279, 576, 681, 661, 1827, 267, 852, 738, 992, 1106, 1284, 234, 859, 692, 738, 1263, 473, 1122, 590, 307, 444, 529, 1217, 435, 1910, 1234, 1122, 473, 216, 678]
</code></pre>
<p>CODE</p>
<pre><code>### library imports
from collections import Counter  # this import was missing; Counter is used below
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np

### first attempt
sorted_len = sorted(l)
sorted_counted = Counter(sorted_len)
range_length = list(range(max(l)))
data_series = {}
for x in range_length:
    data_series[x] = 0
for key, value in sorted_counted.items():
    data_series[key] = value
data_series = pd.Series(data_series)
x_values = data_series.index

### second attempt
df = pd.DataFrame(l, columns=['len'])

### actual plots
# x-axis is correct, but the y-axis doesn't display colors...
plt.bar(x_values, data_series.values)
plt.show()
# x-axis is correct, but the y-axis doesn't display colors...
val, cnt = np.unique(l, return_counts=True)
sns.catplot(data=df, kind='count', x='len')
# correct number of instances, but wrong x-axis...
df.len.value_counts()[df.len.unique()].plot(kind='bar')
</code></pre>
<p><code>R</code> output
<a href="https://i.sstatic.net/LRHlVx4d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRHlVx4d.png" alt="length_dist" /></a></p>
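<p>For reference, a hedged sketch that reproduces the binned, R-style histogram with matplotlib (the bin count is an arbitrary choice of mine):</p>
<pre><code>import matplotlib.pyplot as plt

plt.hist(l, bins=60, edgecolor='black')  # bin the lengths instead of one bar per distinct value
plt.xlabel('length')
plt.ylabel('count')
plt.show()
</code></pre>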
<p>P.S. Note that I'm using <code>PyCharm</code> with the <code>Invert image outputs for dark themes</code> option, so that the cell output for images is displayed on a white background for visibility.</p>
|
<python><matplotlib><pycharm><seaborn><bar-chart>
|
2025-04-14 08:30:41
| 1
| 403
|
Matteo
|
79,572,640
| 1,145,666
|
suds is not using HTTPS, even though the URL is set correctly
|
<p>Consider this code:</p>
<pre><code>from suds.client import Client
wsdl = "https://path/to/scv?wsdl"
client = Client(wsdl)
token = client.factory.create("ns1:AuthenticationToken")
token.LoginName = "login"
token.UserId = "userid"
token.Password = "password"
client.set_options(soapheaders=token)
articleno = "articleno"
list = client.factory.create("ns1:ArticlesList")
list.Article.append(articleno)
result = client.service.GetStockLevel(list)
</code></pre>
<p>When inspecting the stream, I see it uses HTTP instead of HTTPS, causing an error because the endpoint returns an HTTP 301 redirect. This happens even though I start my URL with <code>https://</code>.</p>
<p>Is there a way to force <code>suds</code> to use HTTPS?</p>
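<p>One workaround sketch, assuming the WSDL itself advertises an <code>http://</code> endpoint: suds' <code>location</code> option overrides the endpoint the client posts to.</p>
<pre><code># force the service endpoint to HTTPS regardless of what the WSDL says
client.set_options(location="https://path/to/scv")
</code></pre>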
|
<python><https><suds>
|
2025-04-14 08:02:29
| 1
| 33,757
|
Bart Friederichs
|
79,572,562
| 16,383,578
|
PyPy3 on Windows 11 doesn't display non-ASCII characters correctly
|
<p>PyPy3 doesn't display non-ASCII UNICODE characters correctly.</p>
<p>A simple example, the following:</p>
<pre><code>b'\xce\x9e\xce\xad\xce\xbd\xce\xb7 \xce\x93\xce\xae\xce\xb9\xce\xbd\xce\xbf\xcf\x82'.decode('utf8')
</code></pre>
<p>Should evaluate to my user name: <code>'Ξένη Γήινος'</code>.</p>
<p>But the output of PyPy3 is this:</p>
<pre><code>In [1]: b'\xce\x9e\xce\xad\xce\xbd\xce\xb7 \xce\x93\xce\xae\xce\xb9\xce\xbd\xce\xbf\xcf\x82'.decode('utf8')
Out[1]: '╬×╬¡╬¢╬À ╬ô╬«╬╣╬¢╬┐¤é'
</code></pre>
<p>There is very little information on this I can find through Google searching, but I have found this:</p>
<p><a href="https://github.com/pypy/pypy/issues/4948" rel="nofollow noreferrer">https://github.com/pypy/pypy/issues/4948</a></p>
<p>So this is a known issue, and it hasn't been fixed.</p>
<p>I tried to fix this issue using information I found on the linked page. The encoding and locale used by CPython are:</p>
<pre><code>In [1]: import locale, os, sys
In [2]: locale.getdefaultlocale()
<ipython-input-2-64720e52add3>:1: DeprecationWarning: 'locale.getdefaultlocale' is deprecated and slated for removal in Python 3.15. Use setlocale(), getencoding() and getlocale() instead.
locale.getdefaultlocale()
Out[2]: ('en_US', 'cp1252')
In [3]: locale.getlocale()
Out[3]: ('English_United States', '1252')
In [4]: locale.getencoding()
Out[4]: 'cp1252'
In [5]: locale.LC_ALL
Out[5]: 0
In [6]: sys.getdefaultencoding()
Out[6]: 'utf-8'
</code></pre>
<p>And these are the values used by PyPy3:</p>
<pre><code>In [2]: import sys, locale
In [3]: locale.getencoding()
Out[3]: 'utf-8'
In [4]: locale.getlocale()
Out[4]: ('English_United States', '1252')
In [5]: sys.getdefaultencoding()
Out[5]: 'utf-8'
</code></pre>
<p>So it seems evident that the issue is caused by the mismatch between the code page Windows uses and the code page PyPy3 uses: Windows uses <code>'cp1252'</code> and CPython uses the same code page, but PyPy3 doesn't. Thus the fix is to either make PyPy3 use the <code>'cp1252'</code> code page or make the Windows console use <code>'utf-8'</code>.</p>
<p>I tried many ways to fix it; setting the environment variable doesn't work:</p>
<pre><code>Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" -Name "PYTHONIOENCODING" -Type STRING -Value "UTF-8"
</code></pre>
<p>The following also doesn't work:</p>
<pre><code>chcp 65001
</code></pre>
<p>There is no <code>reload</code> in Python 3 and therefore I cannot use <code>sys.setdefaultencoding</code>, and <code>locale.setlocale(0)</code> doesn't work either.</p>
<p>The above are just a few of the methods I have tried.</p>
<p>But the following works:</p>
<pre><code>PS C:\Users\xenig> [Console]::InputEncoding = New-Object System.Text.UTF8Encoding
PS C:\Users\xenig> [Console]::OutputEncoding = New-Object System.Text.UTF8Encoding
PS C:\Users\xenig> D:\Programs\pypy3\Scripts\ipython.exe
Python 3.11.11 (0253c85bf5f8, Feb 26 2025, 10:43:25)
Type 'copyright', 'credits' or 'license' for more information
IPython 9.0.2 -- An enhanced Interactive Python. Type '?' for help.
Tip: IPython 9.0+ have hooks to integrate AI/LLM completions.
In [1]: b'\xce\x9e\xce\xad\xce\xbd\xce\xb7 \xce\x93\xce\xae\xce\xb9\xce\xbd\xce\xbf\xcf\x82'.decode('utf8')
Out[1]: 'Ξένη Γήινος'
</code></pre>
<p>Okay, now, how can I make it so I can directly launch PyPy3 in Windows Terminal without launching PowerShell first and make PyPy3 display UNICODE characters correctly?</p>
<hr />
<p>Oh, and the output for the first code block from the linked GitHub page is:</p>
<pre><code>pypy win32 3.11.11 (0253c85bf5f8, Feb 26 2025, 10:43:25)
[PyPy 7.3.19 with MSC v.1941 64 bit (AMD64)]
os.device_encoding(0)='cp850'
os.device_encoding(1)='cp850'
sys.getdefaultencoding()='utf-8'
sys.getfilesystemencoding()='utf-8'
locale.getpreferredencoding()='utf-8'
locale.getencoding()='utf-8'
locale.getlocale()=('English_United States', '1252')
locale.getlocale()=('English_United States', '1252')
</code></pre>
|
<python><utf-8><pypy>
|
2025-04-14 07:24:18
| 1
| 3,930
|
Ξένη Γήινος
|
79,572,368
| 9,951,273
|
Parsing Pydantic dict params
|
<p>I have an endpoint that takes a Pydantic model, <code>Foo</code>, as a query parameter.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated
import uvicorn
from fastapi import FastAPI, Query
from pydantic import BaseModel
app = FastAPI()
class Foo(BaseModel):
bar: str
baz: dict[str, str]
@app.get("/")
def root(foo: Annotated[Foo, Query()]):
return foo
if __name__ == "__main__":
uvicorn.run("test:app")
</code></pre>
<p>I'm defining my query params using Swagger, so the encoding should be correct. I know the <code>baz</code> param syntax looks redundant because I've nested a dictionary, but parsing fails even without nesting.</p>
<p><a href="https://i.sstatic.net/Olucm3p1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Olucm3p1.png" alt="FastAPI query params" /></a></p>
<p>But when I call the endpoint...</p>
<pre><code>curl -X 'GET' \
'http://127.0.0.1:8000/?bar=sda&baz=%7B%22abc%22%3A%22def%22%7D' \
-H 'accept: application/json'
</code></pre>
<p>FastAPI does not seem to read in <code>Foo.baz</code> correctly, returning</p>
<pre><code>{
"detail": [
{
"type": "dict_type",
"loc": [
"query",
"baz"
],
"msg": "Input should be a valid dictionary",
"input": "{\"abc\":\"def\"}"
}
]
}
</code></pre>
<p>I've read similar questions and I know I can ingest the dictionary by accessing <code>dict(request.query_params)</code>, but this bypasses FastAPI's validation and I'd prefer to keep the endpoint simple and consistent with the rest of my codebase by keeping the param as a Pydantic model.</p>
<p>How can I get FastAPI to parse <code>Foo</code> as a query param?</p>
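<p>One workaround sketch, assuming pydantic v2: decode the JSON string before dict validation with a <code>mode="before"</code> validator:</p>
<pre class="lang-py prettyprint-override"><code>import json

from pydantic import BaseModel, field_validator

class Foo(BaseModel):
    bar: str
    baz: dict[str, str]

    @field_validator("baz", mode="before")
    @classmethod
    def _parse_json(cls, v):
        # query params arrive as strings; decode JSON before dict validation
        return json.loads(v) if isinstance(v, str) else v
</code></pre>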
|
<python><request><fastapi><pydantic>
|
2025-04-14 04:54:41
| 1
| 1,777
|
Matt
|
79,572,227
| 12,603,110
|
How to annotate a Pandas index of datetime.date values using Pandera and mypy?
|
<p>I'm using Pandera to define a schema for a pandas DataFrame where the index represents calendar dates (without time). I want to type-annotate the index as holding datetime.date values. Here's what I tried:</p>
<pre><code># mypy.ini
[mypy]
plugins = pandera.mypy
</code></pre>
<pre class="lang-py prettyprint-override"><code># schema.py
from datetime import date
import pandera as pa
from pandera.typing import Index
class DateIndexModel(pa.DataFrameModel):
date: Index[date]
</code></pre>
<p>But running <code>mypy</code> gives the following error:</p>
<pre><code>error: Type argument "date" of "Index" must be a subtype of "bool | int | str | float | ExtensionDtype | <30 more items>" [type-var]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>I know that <code>datetime64[ns]</code> or <code>pandas.Timestamp</code> work fine, but I specifically want to model just <strong>dates without time</strong>. Is there a type-safe way to do this with <code>Pandera</code> and <code>mypy</code>?</p>
<p>Any workaround that lets me enforce date-only index semantics (with or without <code>datetime.date</code>) while keeping <code>mypy</code> happy?</p>
<p>Colab example notebook:<br />
<a href="https://colab.research.google.com/drive/1AdiztxHlyvEMo6B3CzYnvzlnh6a0GfUQ?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1AdiztxHlyvEMo6B3CzYnvzlnh6a0GfUQ?usp=sharing</a></p>
|
<python><pandas><python-typing><mypy><pandera>
|
2025-04-14 01:08:05
| 1
| 812
|
Yorai Levi
|
79,572,177
| 4,398,952
|
Subclass of tkinter's Toplevel class doesn't seem to inherit "tk" attribute
|
<p>I'm writing a Python application with a tkinter GUI.
I created the <em>TimestampWindow</em> class, which should be one of my windows, and made it extend tkinter's <em>Toplevel</em> class and <em>Window</em>, a custom class where I put attributes/methods common to all my windows.
Unfortunately, when I call <code>self.title(...)</code> in the <em>Window</em> class (from a <em>TimestampWindow</em> instance) I get the following error:</p>
<pre><code>File "/home/ela/elaPythonVirtualENV/PythonScripts/pgnclocker/pgnClocker/gui/windows/Window.py", line 54, in setUp
self.title(CommonStringsEnum.APP_NAME.value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/tkinter/__init__.py", line 2301, in wm_title
return self.tk.call('wm', 'title', self._w, string)
^^^^^^^
AttributeError: 'TimestampWindow' object has no attribute 'tk'
</code></pre>
<p>Things don't change if I move the invocation of the <em>title</em> method directly into <em>TimestampWindow</em>.</p>
<p>I leave two snippets with a "sketch" of <em>TimestampWindow</em> and <em>Window</em> classes below to clarify the whole thing:</p>
<p><strong>Timestamp Window</strong></p>
<pre><code>import tkinter as tk
from pgnClocker.gui.windows.Window import *
... more imports ...
class TimestampWindow(tk.Toplevel, Window):
... code ...
</code></pre>
<p><strong>Window</strong></p>
<pre><code>class Window:
    ... code ...

    def setUp(self):
        self.title(CommonStringsEnum.APP_NAME.value)
</code></pre>
<p>Could you please help me understand what's going on here?
Shouldn't I be able to use <em>TimestampWindow</em> as a tkinter window, since it inherits all methods from <em>tk.Toplevel</em>? Why isn't it inheriting the <em>tk</em> attribute?</p>
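<p>For what it's worth, a minimal sketch of one likely cause, assuming <em>TimestampWindow</em> never chains to <em>Toplevel</em>'s initializer (the <code>tk</code> attribute is set when the widget is actually constructed, not merely inherited):</p>
<pre><code>class TimestampWindow(tk.Toplevel, Window):
    def __init__(self, master=None):
        tk.Toplevel.__init__(self, master)  # constructs the widget and sets self.tk
        self.setUp()
</code></pre>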
|
<python><tkinter><multiple-inheritance><toplevel>
|
2025-04-13 23:50:49
| 1
| 401
|
ela
|
79,572,155
| 389,119
|
How to automatically start the debugging session in Playwright?
|
<p>I want the debugging session to start automatically instead of starting paused.
I found out that adding this code makes it work, but it feels hackish to me:</p>
<pre><code>context.add_init_script("setTimeout(window.__pw_resume, 500)")
</code></pre>
<p>Without the <code>setTimeout</code>, it won't work.
Am I doing anything wrong?</p>
|
<python><playwright><playwright-python>
|
2025-04-13 23:10:56
| 1
| 12,025
|
antoyo
|
79,571,959
| 5,224,236
|
Efficient rolling, non-equi joins
|
<p>Looking for the current most efficient approach in R, Python or C++ (with Rcpp).</p>
<p>Taking an example with financial data:</p>
<pre><code>df
time bid ask time_msc flags wdayLab wday rowid
<POSc> <num> <num> <POSc> <int> <ord> <num> <int>
1: 2025-01-02 04:00:00 21036.48 21043.08 2025-01-02 04:00:00.888 134 Thu 5 1
2: 2025-01-02 04:00:00 21037.54 21043.27 2025-01-02 04:00:00.888 134 Thu 5 2
3: 2025-01-02 04:00:00 21036.52 21042.55 2025-01-02 04:00:00.888 134 Thu 5 3
4: 2025-01-02 04:00:00 21036.82 21041.75 2025-01-02 04:00:00.888 134 Thu 5 4
5: 2025-01-02 04:00:00 21036.79 21040.78 2025-01-02 04:00:00.891 134 Thu 5 5
6: 2025-01-02 04:00:00 21035.86 21039.95 2025-01-02 04:00:00.891 134 Thu 5 6
7: 2025-01-02 04:00:00 21036.05 21038.76 2025-01-02 04:00:00.891 134 Thu 5 7
8: 2025-01-02 04:00:00 21034.74 21038.33 2025-01-02 04:00:00.891 134 Thu 5 8
9: 2025-01-02 04:00:00 21034.72 21039.35 2025-01-02 04:00:00.892 134 Thu 5 9
10: 2025-01-02 04:00:00 21034.99 21038.08 2025-01-02 04:00:00.892 134 Thu 5 10
</code></pre>
<p>I want, for each <code>rowid</code>, the most recent <code>rowid</code> in the past where the <code>ask</code> was higher. My real data has <code>29,871,567</code> rows (can share if needed). The solution doesn't need to be a join as long as the last higher <code>rowid</code> is retrieved.</p>
<p><strong>R data.table</strong></p>
<p>I usually solve this using R's <code>data.table</code> joins:</p>
<pre><code>library(data.table)
setDTthreads(detectCores() - 2) # no effect
df_joined <- df[,.(rowid, ask, time_msc)][,rowid_prevHi:=rowid][,ask_prevHi := ask][
df,
on = .(rowid < rowid, ask >= ask),
mult = "last", # Take the closest (most recent) match
# by = .EACHI, # Do it row-by-row
nomatch = NA, # Allow NA if no such row exists
#.(i.rowid, last_higher_row = x.rowid, last_higher = x.time, lastHigh = x.ask)
][, difference_from_previous_higher := ask_prevHi - ask]
</code></pre>
<p>This works on smaller datasets because both multiple inequalities and the rolling condition <code>mult = "last"</code> are supported. However, it is single-threaded and my rig doesn't manage the full dataset.</p>
<p>Expected result is below, and I expect the <code>difference_from_previous_higher</code> to be always positive and the <code>rowid_prevHi</code> always smaller than <code>rowid</code>.</p>
<pre><code> rowid ask time_msc rowid_prevHi ask_prevHi time bid i.time_msc flags
<int> <num> <POSc> <int> <num> <POSc> <num> <POSc> <int>
1: 1 21043.08 <NA> NA NA 2025-01-02 04:00:00 21036.48 2025-01-02 04:00:00.888 134
2: 2 21043.27 <NA> NA NA 2025-01-02 04:00:00 21037.54 2025-01-02 04:00:00.888 134
3: 3 21042.55 2025-01-02 04:00:00.888 2 21043.27 2025-01-02 04:00:00 21036.52 2025-01-02 04:00:00.888 134
4: 4 21041.75 2025-01-02 04:00:00.888 3 21042.55 2025-01-02 04:00:00 21036.82 2025-01-02 04:00:00.888 134
5: 5 21040.78 2025-01-02 04:00:00.888 4 21041.75 2025-01-02 04:00:00 21036.79 2025-01-02 04:00:00.891 134
6: 6 21039.95 2025-01-02 04:00:00.891 5 21040.78 2025-01-02 04:00:00 21035.86 2025-01-02 04:00:00.891 134
7: 7 21038.76 2025-01-02 04:00:00.891 6 21039.95 2025-01-02 04:00:00 21036.05 2025-01-02 04:00:00.891 134
8: 8 21038.33 2025-01-02 04:00:00.891 7 21038.76 2025-01-02 04:00:00 21034.74 2025-01-02 04:00:00.891 134
9: 9 21039.35 2025-01-02 04:00:00.891 6 21039.95 2025-01-02 04:00:00 21034.72 2025-01-02 04:00:00.892 134
10: 10 21038.08 2025-01-02 04:00:00.892 9 21039.35 2025-01-02 04:00:00 21034.99 2025-01-02 04:00:00.892 134
wdayLab wday difference_from_previous_higher
<ord> <num> <num>
1: Thu 5 NA
2: Thu 5 NA
3: Thu 5 0.72
4: Thu 5 0.80
5: Thu 5 0.97
6: Thu 5 0.83
7: Thu 5 1.19
8: Thu 5 0.43
9: Thu 5 0.60
10: Thu 5 1.27
</code></pre>
<p><strong>polars</strong></p>
<p>I've tried a <code>polars</code> implementation in Python, but although <code>join_asof</code> is multiprocessed, fast and supports the <code>backward</code> strategy, it doesn't support specifying other inequalities while joining, only filtering after the join, which is not useful.</p>
<pre><code>joined = df.join_asof(
    df.select(['rowid', 'time_msc', 'ask']).with_columns([
        pl.col('time_msc').alias('time_prevhi')
    ]),
    on="time_msc",
    strategy="backward",
    suffix="_prevhi",
    allow_exact_matches=False
).with_columns([
    (pl.col('rowid') - pl.col('rowid_prevhi')).alias('ticksdiff_prevhi'),
    (pl.col('ask') - pl.col('ask_prevhi')).alias('askdiff_prevhi'),
])
</code></pre>
<p>I'm not even sure how the matches are chosen, but of course <code>ask</code> is not always smaller than <code>ask_prevHi</code>, since I couldn't express that condition.</p>
<pre><code>shape: (10, 13)
┌─────┬─────┬─────┬───────┬────────┬──────┬───────┬────────┬───────┬───────┬───────┬───────┬───────┐
│ bid ┆ ask ┆ tim ┆ flags ┆ wdayLa ┆ wday ┆ rowid ┆ time ┆ rowid ┆ ask_p ┆ time_ ┆ ticks ┆ askdi │
│ --- ┆ --- ┆ e_m ┆ --- ┆ b ┆ --- ┆ --- ┆ --- ┆ _prev ┆ revhi ┆ prevh ┆ diff_ ┆ ff_pr │
│ f64 ┆ f64 ┆ sc ┆ i64 ┆ --- ┆ i64 ┆ i64 ┆ dateti ┆ hi ┆ --- ┆ i ┆ prevh ┆ evhi │
│ ┆ ┆ --- ┆ ┆ str ┆ ┆ ┆ me[μs] ┆ --- ┆ f64 ┆ --- ┆ i ┆ --- │
│ ┆ ┆ dat ┆ ┆ ┆ ┆ ┆ ┆ i64 ┆ ┆ datet ┆ --- ┆ f64 │
│ ┆ ┆ eti ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ime[m ┆ i64 ┆ │
│ ┆ ┆ me[ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ s] ┆ ┆ │
│ ┆ ┆ ms] ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
╞═════╪═════╪═════╪═══════╪════════╪══════╪═══════╪════════╪═══════╪═══════╪═══════╪═══════╪═══════╡
│ 210 ┆ 210 ┆ 202 ┆ 134 ┆ Thu ┆ 5 ┆ 1 ┆ 2025-0 ┆ null ┆ null ┆ null ┆ null ┆ null │
│ 36. ┆ 43. ┆ 5-0 ┆ ┆ ┆ ┆ ┆ 1-02 ┆ ┆ ┆ ┆ ┆ │
│ 48 ┆ 08 ┆ 1-0 ┆ ┆ ┆ ┆ ┆ 00:00: ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 2 ┆ ┆ ┆ ┆ ┆ 00 ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 00. ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 888 ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ 210 ┆ 210 ┆ 202 ┆ 134 ┆ Thu ┆ 5 ┆ 2 ┆ 2025-0 ┆ 1 ┆ 21043 ┆ 2025- ┆ 1 ┆ 0.19 │
│ 37. ┆ 43. ┆ 5-0 ┆ ┆ ┆ ┆ ┆ 1-02 ┆ ┆ .08 ┆ 01-02 ┆ ┆ │
│ 54 ┆ 27 ┆ 1-0 ┆ ┆ ┆ ┆ ┆ 00:00: ┆ ┆ ┆ 00:00 ┆ ┆ │
│ ┆ ┆ 2 ┆ ┆ ┆ ┆ ┆ 00 ┆ ┆ ┆ :00.8 ┆ ┆ │
│ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 88 ┆ ┆ │
│ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 00. ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ ┆ ┆ 889 ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │
│ 210 ┆ 210 ┆ 202 ┆ 134 ┆ Thu ┆ 5 ┆ 3 ┆ 2025-0 ┆ 1 ┆ 21043 ┆ 2025- ┆ 2 ┆ -0.53 │
│ 36. ┆ 42. ┆ 5-0 ┆ ┆ ┆ ┆ ┆ 1-02 ┆ ┆ .08 ┆ 01-02 ┆ ┆ │
│ 52 ┆ 55 ┆ 1-0 ┆ ┆ ┆ ┆ ┆ 00:00: ┆ ┆ ┆ 00:00 ┆ ┆ │
│ ┆ ┆ 2 ┆ ┆ ┆ ┆ ┆ 00 ┆ ┆ ┆ :00.8 ┆ ┆ │
│ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 88 ┆ ┆ │
</code></pre>
<p>I've also tried Polar's <code>join_where</code> which supports inequalities, but not a "nearest" constraint or strategy, and therefore explodes the number of lines quadratically, consuming all compute resources without a result.</p>
<pre><code>jw = df.join_where( df.select(['rowid', 'time_msc', 'ask']), pl.col("rowid") > pl.col("rowid_prevhi"), pl.col("ask") > pl.col("ask_prevhi"), suffix="_prevhi",)
</code></pre>
<p>My next approach might be to loop over each row using an <code>Rcpp</code> function executed in parallel from R, which retrieves the <code>rowid</code> of the last previous higher <code>ask</code>.
Or perhaps <code>frollapply</code> from <code>data.table</code> would do the trick?</p>
<p>Suggestions most welcome.</p>
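<p>For concreteness, a minimal single-pass monotonic-stack sketch in Python (O(n), no join at all); it returns, for each position, the 0-based index of the most recent earlier element with <code>ask >= ask[i]</code>, or -1 if none:</p>
<pre><code>import numpy as np

def last_prev_higher(ask):
    out = np.full(len(ask), -1, dtype=np.int64)
    stack = []  # indices whose ask values are non-increasing
    for i, a in enumerate(ask):
        while stack and ask[stack[-1]] < a:
            stack.pop()  # this element can never be the answer again
        if stack:
            out[i] = stack[-1]
        stack.append(i)
    return out
</code></pre>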
|
<python><c++><r><data.table><python-polars>
|
2025-04-13 19:30:25
| 2
| 6,028
|
gaut
|
79,571,953
| 508,402
|
re.search() raises recursion depth error when in thread
|
<p>In MicroPython, <code>re.findall()</code> does not exist. Therefore, I made one myself:</p>
<pre><code>import re

def findAll(pattern, s, flags = 0):
    found = []
    while True:
        m = re.search(pattern, s, flags)
        if m:
            g = m.groups()
            s = s[m.span()[1]:]
            found.append(g[0] if len(g) == 1 else g)
        else:
            return found
</code></pre>
<p>This works well in all my tests, equally in cPython and uPython. However, in uPython, when called from a separate thread (Pico-W), I get an exception that I do not understand:</p>
<pre><code> File "src/platforms/utils.py", line 16, in findAll
RuntimeError: maximum recursion depth exceeded
</code></pre>
<p>(line 16 is the first line in the loop, <code>m = re.search(...)</code>)</p>
<p>The pattern is <code>'\x01\x05\x02(.+)\x03\x04'</code>; the searched string is one or more repetitions of the pattern with a JSON-like string in the middle.</p>
<p>What <em>might</em> be causing this and how to solve it?</p>
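<p>One thing worth checking, as an assumption on my side: MicroPython's <code>re</code> engine recurses while matching, and threads get a much smaller stack than the main thread, so raising the thread stack size before starting the thread may help (<code>my_worker</code> is a placeholder name):</p>
<pre><code>import _thread

_thread.stack_size(16 * 1024)  # assumption: 16 KiB is enough for this pattern
_thread.start_new_thread(my_worker, ())
</code></pre>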
|
<python><multithreading><micropython><raspberry-pi-pico>
|
2025-04-13 19:23:20
| 0
| 494
|
resurrected user
|
79,571,947
| 1,747,834
|
How to run a command over an ssh tunnel?
|
<p>I'm trying to automate launches of VNC-clients for multiple remote systems. Currently I achieve that with a <code>LocalForward 15900 127.0.0.1:5900</code>. VNC uses port 5900 by default, which is fine -- there should only be one VNC-server at a time on any machine. But I may sometimes have multiple VNC-viewers running, so the local port must be dynamic -- and cannot be hardcoded (such as to 15900).</p>
<p>To make it nicer, I'd like to implement a script, which will:</p>
<ol>
<li>Find an unused local port (<code>freePort</code>).</li>
<li>SSH into the specified remote computer (as the specified user).</li>
<li>Create a tunnel from the remote 5900 to the local <code>freePort</code>.</li>
<li>Invoke the <code>x11vnc -xkb -display :0 -localhost</code> remotely.</li>
<li>Invoke the <code>vncviewer localhost:freePort</code> locally.</li>
</ol>
<p>The first item I do using <a href="https://unix.stackexchange.com/a/132524/38393">this method</a>. I then use <a href="https://sshtunnel.readthedocs.io/en/latest/index.html?highlight=sshtunnelforwarder#sshtunnel.SSHTunnelForwarder" rel="nofollow noreferrer">SSHTunnelForwarder</a>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import paramiko
import socket
import sys
import sshtunnel
import logging
from paramiko import SSHClient
def freePort():
s = socket.socket()
s.bind(('localhost', 0))
port = s.getsockname()[1]
s.close()
return port
logging.basicConfig(level = logging.DEBUG,
format='%(asctime)s %(message)s')
try:
user, host = sys.argv[1].split('@')
except ValueError:
host = sys.argv[1]
user = os.getlogin()
tunnel = sshtunnel.SSHTunnelForwarder(host, ssh_username = user,
local_bind_address = ('127.0.0.1', freePort()),
remote_bind_address = ('127.0.0.1', 5900))
print(tunnel)
</code></pre>
<p>The above works nicely and prints the following at the end:</p>
<pre class="lang-none prettyprint-override"><code><class 'sshtunnel.SSHTunnelForwarder'> object
ssh gateway: foo.example.com:23
proxy: no
username: meow
authentication: {'pkeys': [('ssh-rsa', b'081c323ca3b5bb6f157f91984b2cb7b2')]}
hostkey: not checked
status: not started
keepalive messages: disabled
tunnel connection check: disabled
concurrent connections: allowed
compression: not requested
logging level: ERROR
local binds: [('127.0.0.1', 21242)]
remote binds: [('127.0.0.1', 5900)]
</code></pre>
<p>But now I need to launch the remote command (step 4)... Presumably, I need to use Paramiko's SSHClient() for that, but I don't see how to do that without starting a whole new ssh-connection. There's got to be a way to issue a command over the already-created tunnel. How would I do that?</p>
<p><em>Also</em>: how would I insist on the remote hostkey being verified -- against the <code>known_hosts</code>-database?</p>
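<p>Regarding the host-key part, a hedged sketch with plain Paramiko (these calls exist; whether they can share the tunnel's existing connection is exactly the open question above):</p>
<pre class="lang-py prettyprint-override"><code>client = SSHClient()
client.load_system_host_keys()                               # reads ~/.ssh/known_hosts
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown keys
client.connect(host, username=user)
stdin, stdout, stderr = client.exec_command('x11vnc -xkb -display :0 -localhost')
</code></pre>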
|
<python><paramiko><ssh-tunnel>
|
2025-04-13 19:18:50
| 1
| 4,246
|
Mikhail T.
|
79,571,814
| 5,118,421
|
SQLAlchemy session.query(object) returns automap object without __dict__
|
<p>I followed this <a href="https://django.fun/docs/sqlalchemy/2.0/orm/extensions/automap/" rel="nofollow noreferrer">tutorial</a> and got an error.</p>
<p>Code:</p>
<pre><code>engine = create_engine("sqlite:///mydatabase.db")
# reflect the tables
Base.prepare(autoload_with=engine)
# mapped classes are now created with names by default
# matching that of the table name.
User = Base.classes.user
u1 = session.query(User).first()
print(u1.id)
</code></pre>
<p>Error:</p>
<pre><code>{AttributeError}AttributeError('id')
</code></pre>
<p><code>__dict__</code> also doesn't work. What I get back prints as <code>(<sqlalchemy.ext.automap.user object at 0x78d606a35010>,)</code>, i.e. wrapped in a tuple.</p>
<p>How do I obtain the properties of the object returned by a select/query statement?</p>
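<p>A hedged sketch of one guess: the enclosing parentheses in that repr suggest a <code>Row</code> (a tuple of entities) rather than the mapped object itself; if so, <code>scalars()</code> unwraps it:</p>
<pre><code>from sqlalchemy import select

u1 = session.scalars(select(User)).first()  # returns the mapped object, not a Row
print(u1.id)
</code></pre>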
|
<python><sqlalchemy>
|
2025-04-13 17:06:12
| 0
| 1,407
|
Irina
|
79,571,751
| 9,506,773
|
How to catch findTransformECC error (-7:Iterations do not converge)
|
<p>I am using ECC for camera motion compensation but it fails with the following error:</p>
<p><code>OpenCV(4.11.0) /Users/xperience/GHA-Actions-OpenCV/_work/opencv-python/opencv-python/opencv/modules/video/src/ecc.cpp:589: error: (-7:Iterations do not converge) The algorithm stopped before its convergence. The correlation is going to be minimized. Images may be uncorrelated or non-overlapped in function 'findTransformECC'</code></p>
<p>My guess is that I have two images that are too dissimilar and hence the algorithm does not converge, but in the context of tracking for low FPSs and heavy camera motion this is not too uncommon. This must be failing already in the c++ layer, not throwing a catchable exception:</p>
<pre class="lang-py prettyprint-override"><code>warp_matrix = np.eye(2, 3, dtype=np.float32)
try:
(ret_val, warp_matrix) = cv2.findTransformECC(
prev_img,
curr_img,
warp_matrix,
cv2.MOTION_EUCLIDEAN,
(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5),
None,
1
)
except Exception as e:
LOGGER.warning(f'Affine matrix could not be generated: {e}. Returning identity')
return warp_matrix
</code></pre>
<p>I also tried to redirect the error with a callback and to catch <code>cv2.error</code>, but nothing worked. And GPT is giving me BS answers.</p>
|
<python><opencv><computer-vision>
|
2025-04-13 16:10:52
| 1
| 3,629
|
Mike B
|
79,571,701
| 10,024,860
|
asyncpg when to use get a new connection with pool.acquire vs share one connection
|
<p>I'm wondering when to share an asyncpg connection vs create a new one. Suppose I have:</p>
<pre><code>import asyncio

import asyncpg

async def monitor(data_stream):
    # ...
    while True:
        # ...
        x, y = await data_stream.get_data()
        await conn.execute("INSERT INTO table (x, y) VALUES ($1, $2)", x, y)
        await asyncio.sleep(1)

async def main():
    async with asyncpg.create_pool(user='user', password='pass', database='db', host='localhost') as pool:
        # ...
        await asyncio.gather(monitor(data_stream_1), monitor(data_stream_2), monitor(data_stream_3))

asyncio.run(main())
</code></pre>
<p>Should I place <code>async with pool.acquire() as conn:</code> in main (and share it across each method), at the top of monitor before the loop, or inside the body of the loop, and why? (assuming I pass pool as an argument if needed)</p>
<p>From my understanding the connection pool keeps connections open and pool.acquire is very fast, so I should place it as low as possible, e.g. in the body of the loop.</p>
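<p>For illustration, a sketch of the "acquire inside the loop body" variant (my own phrasing of the option, with <code>pool</code> passed in as an argument):</p>
<pre><code>async def monitor(pool, data_stream):
    while True:
        x, y = await data_stream.get_data()
        async with pool.acquire() as conn:  # connection is held only for the insert
            await conn.execute("INSERT INTO table (x, y) VALUES ($1, $2)", x, y)
        await asyncio.sleep(1)
</code></pre>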
|
<python><database><asyncpg>
|
2025-04-13 15:14:04
| 0
| 491
|
Joe C.
|
79,571,647
| 6,346,482
|
Visualise colour gradient based on histogram
|
<p>I feel like this is more of a math question. I have a programme which visualises distances and then draws them on a map, based on the hue spectrum. The problem is that the distance values are not equally distributed and, therefore, I only see a few colours of the spectrum and it gets quite boring.</p>
<p>Attached as photos are the result and the histogram. As you can see, most points are more in the 20-40% range.<a href="https://i.sstatic.net/TpWwMWnJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpWwMWnJ.png" alt="enter image description here" /></a><a href="https://i.sstatic.net/H5Jy7COy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H5Jy7COy.png" alt="enter image description here" /></a></p>
<p>My code of the colouring function is this one:</p>
<pre><code>def colour_gradient_from_distance(distance_array):
spectre_size = 1 # number of iterations around rainbow, preferably < 1.0
max_dist = numpy.amax(distance_array)
min_dist = numpy.amin(distance_array)
rangemm = max_dist - min_dist
length = len(distance_array)
colours = [None] * length
for i in range(length):
start = 0 # anything between 0 and 360
hue = ((distance_array[i] - min_dist) / rangemm + (start / 360)) % 1 * spectre_size
colours[i] = hue # only hue is changed, the other two are 1.0
return colours
</code></pre>
<p>It is quite a primitive function, but I have been through like 20 tries. I have tried logarithms, exponentials, linear functions, nothing really fixed this problem.</p>
<p>Currently, I am trying to fix exactly this distribution here, but in a perfect world, it would work for all of them.</p>
<p>Does anybody have smart inputs to this?</p>
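<p>One idea I have not fully explored is rank-based (percentile) normalisation, which would spread the hues uniformly regardless of the distribution (a sketch, assuming ties don't matter much):</p>
<pre class="lang-py prettyprint-override"><code>import numpy

def colour_gradient_equalized(distance_array):
    # double argsort turns the values into 0..n-1 ranks
    ranks = numpy.argsort(numpy.argsort(distance_array))
    # map ranks to hues in [0, 1] -> uniform colour usage by construction
    return (ranks / max(len(distance_array) - 1, 1)).tolist()
</code></pre>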
|
<python><colors><histogram>
|
2025-04-13 14:18:03
| 1
| 804
|
Hemmelig
|
79,571,602
| 2,278,546
|
Plugging SciPy's Nelder-Mead result back into the function does not return reported result
|
<p>When I take the <code>x</code> reported by the optimization and plug it back into my function <code>optimizee</code>, I don't get the value outputted by the optimization: -749.260. Rather I get -637.65, which is not as good...</p>
<p>I'm using SciPy's Nelder-Mead to optimize a nasty function I have put together:</p>
<pre><code>result = minimize(optimizee, x0, method='Nelder-Mead', bounds=(
(0, 250),
(0, 250),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
(0, 1),
), options={'xatol': 1e-2, 'maxiter': 50000})
</code></pre>
<p>By raising maxiter, I get it to converge and successfully return:</p>
<pre><code> message: Optimization terminated successfully.
success: True
status: 0
fun: -749.2601549652912
x: [ 5.536e-03 2.500e+02 6.156e-04 0.000e+00 1.033e-04
2.533e-06 1.000e+00 5.640e-01 8.739e-01 3.828e-03
9.999e-01 1.000e+00]
nit: 1811
nfev: 2589
final_simplex: (array([[ 5.536e-03, 2.500e+02, ..., 9.999e-01,
1.000e+00],
[ 5.557e-03, 2.500e+02, ..., 9.999e-01,
1.000e+00],
...,
[ 5.788e-03, 2.500e+02, ..., 9.999e-01,
1.000e+00],
[ 5.782e-03, 2.500e+02, ..., 1.000e+00,
1.000e+00]], shape=(13, 12)), array([-7.493e+02, -7.493e+02, ..., -7.493e+02, -7.493e+02],
shape=(13,)))
</code></pre>
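<p>And this is the check that produces the mismatch (a sketch; <code>optimizee</code> and <code>result</code> as above):</p>
<pre class="lang-py prettyprint-override"><code>print(result.fun)           # -749.2601549652912
print(optimizee(result.x))  # -637.65..., not the reported optimum
</code></pre>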
|
<python><optimization><scipy>
|
2025-04-13 13:35:44
| 1
| 505
|
grasswistle
|
79,571,481
| 439,203
|
Get current function name
|
<p>Executing</p>
<pre><code>vscode.executeDocumentSymbolProvider
</code></pre>
<p>gives me all symbols in the file.</p>
<p>Can I somehow get name of the function that the cursor is currently in?</p>
|
<python><vscode-extensions><vscode-python>
|
2025-04-13 11:21:58
| 1
| 1,151
|
mike27
|
79,571,478
| 17,396,945
|
Loguru timed rotation keeps creating log files on every application startup
|
<p>So, I tried to use loguru as the main logging utility in my Django project, mainly because of its cool and simple file log rotation implementation. My logger configuration looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>from loguru import logger
logger.configure(
handlers=[
{
"sink": "/logs/django_loguru_{time}.log",
"rotation": "1 week",
"retention": "1 month",
"level": "INFO",
},
],
)
</code></pre>
<p>It is also configured like that during development, when I frequently save files and hence the Django development server often restarts the whole application. That's how I noticed the unexpected behaviour: on every file save and application restart, loguru creates a <strong>new</strong> empty log file instead of reusing the old one, and as a result my container's <code>/logs</code> directory looks like this:</p>
<pre class="lang-bash prettyprint-override"><code>appuser@6c65f55e4366:/logs$ ls -lh
total 4.0K
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:40 django_loguru_2025-04-13_10-40-40_971825.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:40 django_loguru_2025-04-13_10-40-45_501983.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:40 django_loguru_2025-04-13_10-40-57_140253.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-07_077760.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-09_453907.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-23_088179.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-25_549341.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-27_829316.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-30_304538.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-32_662305.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-35_008569.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-37_451321.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-40_934342.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-43_372456.log
-rw-r--r-- 1 appuser appuser 0 Apr 13 10:50 django_loguru_2025-04-13_10-50-52_930993.log
-rw-r--r-- 1 appuser appuser 287 Apr 13 09:08 django_root.log
</code></pre>
<p>Doesn't look like <code>"1 week"</code> rotation if you ask me =( So the question: how can I stop loguru from spamming these empty files and make it respect the rotation settings I provided? Maybe there's a mistake in my handler configuration?</p>
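<p>A variant I'm considering (my assumption: without the <code>{time}</code> placeholder in the sink name, loguru would reopen and append to the same file across restarts):</p>
<pre class="lang-py prettyprint-override"><code>logger.configure(
    handlers=[
        {
            "sink": "/logs/django_loguru.log",  # fixed name, no {time}
            "rotation": "1 week",
            "retention": "1 month",
            "level": "INFO",
        },
    ],
)
</code></pre>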
<p>Also, I noticed that the standard <code>logging</code> module doesn't have the same problem, as you can see in the <code>/logs</code> directory contents, where only one <code>django_root.log</code> file is present. It was generated by standard <code>logging</code> with this dict configuration:</p>
<pre class="lang-py prettyprint-override"><code>LOGGING = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"d": {"format": "%(asctime)s - %(levelname)s - %(name)s - %(message)s"}
},
"handlers": {
"file": {
"formatter": "d",
"level": "WARNING",
"class": "logging.handlers.TimedRotatingFileHandler",
"filename": "/logs/django_root.log",
"when": "D",
"interval": 7,
"backupCount": 12,
"utc": True,
},
},
"root": {
"handlers": ["file"],
"level": "WARNING",
},
}
</code></pre>
<p>As you can see, it was also configured with rotation, and the docs even <a href="https://docs.python.org/3/library/logging.handlers.html#timedrotatingfilehandler" rel="nofollow noreferrer">mention</a> that:</p>
<blockquote>
<p>When computing the next rollover time for the first time (when the handler is created), the last modification time of an existing log file, or else the current time, is used to compute when the next rotation will occur.</p>
</blockquote>
|
<python><loguru>
|
2025-04-13 11:20:56
| 1
| 499
|
Олексій Холостенко
|
79,571,318
| 5,224,341
|
How to know if a Python package (e.g., ipykernel) is preinstalled on macOS — and whether it's safe to uninstall?
|
<p>I'm using macOS and working on a Python project with a virtual environment (<code>venv</code>) in VS Code. At one point, VS Code prompted me to install <code>ipykernel</code>, and I accidentally clicked "Install" — but only later realized that the selected interpreter wasn't my <code>venv</code>. So <code>ipykernel</code> got installed globally (i.e., outside the virtual environment).</p>
<p>Now I'm wondering:</p>
<ul>
<li><p>Is <code>ipykernel</code> preinstalled as part of macOS's system Python?</p>
</li>
<li><p>How can I check if a globally installed package is part of the system
or user-installed? (See the sketch after this list.)</p>
</li>
<li><p>Most importantly, is it safe to uninstall such packages globally, or
could that break system-level tools?</p>
</li>
</ul>
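<p>For the second point, this is the kind of check I have in mind (a minimal sketch, run with the same interpreter the package was installed into):</p>
<pre class="lang-py prettyprint-override"><code>import sysconfig

import ipykernel

print(ipykernel.__file__)                # where the package actually lives
print(sysconfig.get_paths()["purelib"])  # this interpreter's site-packages
</code></pre>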
|
<python><macos><pip>
|
2025-04-13 07:54:26
| 1
| 522
|
vlangen
|
79,571,068
| 2,398,193
|
How to read a .csv file in a Quarto Shinylive application?
|
<p>The code below works when running <code>quarto preview</code> in the terminal;
the chunk is of type shinylive-python.</p>
<pre><code>```{shinylive-python}
#| standalone: true
#| layout: vertical
#| viewerHeight: 250
from shiny import *
app_ui = ui.page_fluid(
ui.input_slider("n", "N", 0, 100, 40),
ui.output_text_verbatim("txt"),
ui.output_text_verbatim("txt2")
)
def server(input, output, session):
@output
@render.text
def txt():
return f"The value of n*2 is {input.n() * 2}"
@render.text
def txt2():
return f"The value of n**2 is {input.n() ** 2}"
app = App(app_ui, server)
```
</code></pre>
<p>But I don't know how to read a dataframe and build a Plotly figure from it. For example, code like this breaks because I'm not sure how to make it read that <code>.csv</code> file:</p>
<pre><code>```{shinylive-python}
#| standalone: true
#| viewerHeight: 420
#| viewerWidth: 500
from shiny import App, ui, render, reactive  # reactive is needed for @reactive.calc below
import pandas as pd
import plotly.express as px
from pathlib import Path
app_ui = ui.page_fluid(
ui.layout_sidebar(
ui.sidebar(
ui.input_slider("cylinder", "Cylinder", 4, 6, 8, step=1)
),
ui.output_plot("plot"),
),
)
@reactive.calc
def dat():
infile = Path(__file__).parent / "exampledf.csv"
return pd.read_csv(infile)
def server(input, output, session):
@render.plot(alt="ScatterPlot function")
def plot():
file_path= Path(__file__).parent /'exampledf.csv'
# Load the dataset globally so it can be reused
df = dat()#pd.read_csv(file_path)
# Subset the DataFrame using the slider value
minidf = df[df.cylinders == input.cylinder()].copy()
# Create a plotly express scatter figure
fig = px.scatter(
minidf,
x="horsepower",
y="weight",
title="Scatter Plot of Horsepower vs Weight"
)
return fig
app = App(app_ui, server)
```
</code></pre>
<p>Do you have any ideas? I basically want to be able to <code>quarto preview</code> this file and run that app inside that chunk in Quarto.</p>
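<p>I suspect the missing piece is bundling the file with the app. Shinylive chunks appear to support extra files via <code>## file:</code> markers, so something like this might be the way (an assumption on my part; the CSV rows below are placeholders):</p>
<pre><code>```{shinylive-python}
#| standalone: true
#| viewerHeight: 420

## file: app.py
# ... the app code from above ...

## file: exampledf.csv
## type: text
cylinders,horsepower,weight
4,95,2372
8,130,3504
```
</code></pre>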
|
<python><quarto><py-shiny><py-shinylive>
|
2025-04-13 00:12:58
| 1
| 477
|
Jorge Lopez
|
79,571,015
| 962,212
|
Gradio "AttributeError: 'Image' object has no attribute 'proxy_url'"
|
<p>Since the Gradio Discord server is busy or does not allow new requests, I'm posting this here. I intend to display an image in the rows of a Gradio Dataset. The error is "AttributeError: 'Image' object has no attribute 'proxy_url'" and seems to come from deep inside Gradio's implementation, so AI tools did not help me understand the context.</p>
<pre><code>python -c "import gradio as gr; import sys; print(f'Python {sys.version.split()[0]}\\nGradio {gr.__version__}')"
Python 3.11.11
Gradio 5.23.3
</code></pre>
<p>This is the error stack:</p>
<pre><code> File "src/test_001.py", line 18, in show_startup_quotes
return gr.Dataset(
^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/gradio/component_meta.py", line 182, in wrapper
return fn(self, **kwargs)
^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/gradio/components/dataset.py", line 86, in __init__
self.component_props = [
^
File ".venv/lib/python3.11/site-packages/gradio/components/dataset.py", line 88, in <listcomp>
component.get_config(),
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/gradio/components/base.py", line 245, in get_config
config = super().get_config()
^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/gradio/blocks.py", line 266, in get_config
config = {**config, "proxy_url": self.proxy_url, "name": self.get_block_class()}
</code></pre>
<p>Here is the code to reproduce</p>
<pre><code>import gradio as gr
from PIL import Image
white_image = Image.new("RGBA", (16, 16), color=(0, 0, 0, 0))
red_image = Image.new("RGBA", (16, 16), color=(255, 0, 0, 255))
philosophy_quotes = [
[white_image, "I think therefore I am."],
[white_image, "The unexamined life is not worth living."]
]
startup_quotes = [
[red_image, "Ideas are easy. Implementation is hard"],
[red_image, "Make mistakes faster."]
]
def show_startup_quotes():
return gr.Dataset(
samples=startup_quotes,
components=[
gr.Image(type="pil"),
gr.Text()
])
with gr.Blocks() as demo:
textbox = gr.Textbox()
image = gr.Image(visible=False,type="pil")
dataset = gr.Dataset(components=[textbox, image], samples=philosophy_quotes)
button = gr.Button()
button.click(show_startup_quotes, None, dataset)
demo.launch()
</code></pre>
<p>Any hints?</p>
|
<python><gradio>
|
2025-04-12 22:46:15
| 1
| 302
|
Veronica
|
79,571,010
| 219,153
|
How to create a NumPy structured array with different field values using full_like?
|
<p>I would like to create a NumPy structured array <code>b</code> with the same shape as <code>a</code> and <code>(-1, 1)</code> values, for example:</p>
<pre><code>import numpy as np
Point = [('x', 'i4'), ('y', 'i4')]
a = np.zeros((4, 4), dtype='u1')
b = np.full_like(a, fill_value=(-1, 1), dtype=Point) # fails
b = np.full_like(a, -1, dtype=Point) # works
</code></pre>
<p>Using <code>full_like()</code> works with the same value for all fields, but fails with different values, producing this error:</p>
<pre><code> multiarray.copyto(res, fill_value, casting='unsafe')
ValueError: could not broadcast input array from shape (2,) into shape (4,4)
</code></pre>
<p>. Is there a solution other than explicitly assigning <code>(-1, 1)</code> to each element in a loop?</p>
|
<python><numpy><structured-array>
|
2025-04-12 22:39:24
| 2
| 8,585
|
Paul Jurczak
|
79,570,945
| 740,521
|
Why is stdout PIPE readline() not waiting for a newline character?
|
<p>Why is <code>print(line)</code> in <code>read_stdout</code> printing every character instead of the entire line? I am expecting it to add to the queue once the newline character is reached, but it's just putting every character in the queue instead.</p>
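<p>To illustrate, the output matches what plain string iteration would produce (a minimal sketch):</p>
<pre class="lang-py prettyprint-override"><code>line = "this is a test\n"
for ch in line:  # iterating over a str yields single characters
    print(ch)
</code></pre>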
<p><code>plugin_test.py</code>:</p>
<pre><code>from subprocess import PIPE, Popen
from threading import Thread
from queue import Queue, Empty
import re
import os
import sys
def read_stdout(stdout, data_q):
for line in stdout.readline():
print(line)
data_q.put(line)
stdout.close()
class Plugin():
name = ''
def __init__(self, **kwargs) -> None:
self.process = None
def run(self) -> bool:
try:
self.process = Popen(
[sys.executable, 'test.py'],
stdin=PIPE,
stdout=PIPE,
text=True)
except Exception as exc:
print("Could not create plugin process {}. {}".format(self.name, exc))
return False
self.data_q = Queue()
self.read_t = Thread(target=read_stdout, args=(self.process.stdout, self.data_q))
self.read_t.daemon = True
self.read_t.start()
try:
startup = self.data_q.get(timeout=10)
except Empty:
print("Plugin took to long to load {}.".format(self.name))
self.stop()
return False
else:
if 'Error' in startup:
print("Could not load plugin {}. {}".format(self.name, startup))
self.stop()
return False
elif '100%' in startup:
print("Plugin \"{}\" started.".format(self.name))
return True
def write(self, data : str) -> None:
self.process.stdin.write(data)
self.process.stdin.flush()
def read(self) -> str:
try:
data = self.data_q.get()
print("Got data ::{}::".format(data))
except Exception as exc:
print("Error reading data from plugin '{}'".format(exc))
def stop(self):
self.process.terminate()
self.process.wait()
if __name__ == '__main__':
plugin = Plugin()
plugin.run()
</code></pre>
<p><code>test.py</code>:</p>
<pre><code>print("this is a test")
print("this is a test2")
print("100%")
</code></pre>
<pre class="lang-bash prettyprint-override"><code>root@osboxes# python plugin_test.py
t
h
i
s
i
s
a
t
e
s
t
</code></pre>
|
<python><io><subprocess>
|
2025-04-12 21:11:27
| 1
| 1,206
|
user740521
|
79,570,775
| 417,896
|
Invalid Authority - OKX Bitcoin API
|
<p>I am getting an Invalid Authority error when running this code to pull data from OKX. I have confirmed the env vars are correct.</p>
<p>I think it is something to do with the signature.</p>
<pre><code>def generate_signature(timestamp, method, request_path, body, secret_key):
message = f'{timestamp}{method.upper()}{request_path}{body}'
signature = hmac.new(secret_key.encode(), message.encode(), hashlib.sha256).digest()
return base64.b64encode(signature).decode()
def get_brc20_activities(ticker: str, limit=100):
url_path = '/api/v5/mktplace/nft/ordinals/trade-history'
url = 'https://www.okx.com' + url_path
method = 'POST'
cursor = ''
all_transactions = []
while True:
payload = {
'slug': f'brc20_{ticker.lower()}',
'limit': limit,
'sort': 'desc',
'isBrc20': True
}
if cursor:
payload['cursor'] = cursor
body = json.dumps(payload, separators=(',', ':')) # Ensure compact format
timestamp = datetime.utcnow().isoformat("T", "seconds") + "Z"
signature = generate_signature(timestamp, method, url_path, body, API_SECRET)
headers = {
'Content-Type': 'application/json',
'OK-ACCESS-KEY': API_KEY,
'OK-ACCESS-SIGN': signature,
'OK-ACCESS-TIMESTAMP': timestamp,
'OK-ACCESS-PASSPHRASE': API_PASSPHRASE
}
response = requests.post(url, headers=headers, data=body)
if response.status_code != 200:
print(f"Error fetching data: {response.status_code} - {response.text}")
break
data = response.json()
print(json.dumps(data, indent=2)) # Optional: inspect raw result
activities = data.get('data', {}).get('data', [])
all_transactions.extend(activities)
cursor = data.get('data', {}).get('cursor')
if not cursor:
break
time.sleep(0.25)
return all_transactions
ticker = 'FFIE' # Replace with any BRC-20 ticker like 'sats'
txns = get_brc20_activities(ticker)
print(f"Retrieved {len(txns)} transactions for {ticker.upper()}")
for txn in txns[:5]: # print first 5 as preview
print(txn)
</code></pre>
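<p>One thing I'm unsure about is the timestamp format; OKX examples show millisecond precision, so maybe it needs to be built like this (an assumption on my part):</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone

# e.g. 2020-12-08T09:08:57.715Z -- UTC, millisecond precision, trailing 'Z'
timestamp = datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
</code></pre>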
|
<python>
|
2025-04-12 17:53:43
| 1
| 17,480
|
BAR
|
79,570,460
| 1,549,736
|
Why does mlx.core.sqrt() crash on my MacBook Air M2 when applied to a complex argument?
|
<p><code>mlx.core.sqrt()</code> is crashing on my MacBook Air M2 when applied to a complex argument:</p>
<pre class="lang-bash prettyprint-override"><code>Python 3.11.11 (main, Dec 3 2024, 17:20:40) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import mlx.core as mx
>>> x = mx.array([0.0, 1.0, 2.0])
>>> print(x)
array([0, 1, 2], dtype=float32)
>>> y = mx.sqrt(x)
>>> print(y)
array([0, 1, 1.41421], dtype=float32)
>>> w = 1j * x
>>> print(w)
array([0+0j, 0+1j, 0+2j], dtype=complex64)
>>> z = mx.sqrt(w)
>>> print(z)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: [metal::Device] Unable to load kernel v_Sqrtcomplex64complex64
>>> print(mx.metal.device_info())
{'resource_limit': 499000, 'max_buffer_length': 8589934592, 'architecture': 'applegpu_g14g', 'memory_size': 17179869184, 'max_recommended_working_set_size': 11453251584, 'device_name': 'Apple M2'}
</code></pre>
<p>Is this expected?</p>
<p>I tried to debug it, using a <em>capture</em>, by running this executable Python script:</p>
<pre class="lang-py prettyprint-override"><code>import mlx.core as np
GPU_TRACE_FNAME = "metal.gputrace"
metal_available = np.metal.is_available()
if metal_available:
print(f"Metal Device Info:\n{np.metal.device_info()}")
else:
print("Sorry, Metal is not available.")
np.metal.start_capture(GPU_TRACE_FNAME)
x = np.array([0.0, 1.0, 2.0])
y = 1j * x
z = np.sqrt(y)
print(z)
np.metal.stop_capture()
</code></pre>
<p>But, that fails:</p>
<pre class="lang-bash prettyprint-override"><code>% ./mlx_test_sqrt.py
Metal Device Info:
{'resource_limit': 499000, 'max_buffer_length': 8589934592, 'architecture': 'applegpu_g14g', 'memory_size': 17179869184, 'max_recommended_working_set_size': 11453251584, 'device_name': 'Apple M2'}
Traceback (most recent call last):
File ".../mlx_test_sqrt.py", line 22, in <module>
np.metal.start_capture(GPU_TRACE_FNAME)
RuntimeError: [metal::start_capture] Failed to start: Capturing is not supported.
</code></pre>
<p>Any idea what my next steps should be?</p>
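<p>One next step I'm considering is forcing the op onto the CPU stream, on the assumption that complex <code>sqrt</code> simply has no Metal kernel:</p>
<pre class="lang-py prettyprint-override"><code>import mlx.core as mx

x = mx.array([0.0, 1.0, 2.0])
w = 1j * x
z = mx.sqrt(w, stream=mx.cpu)  # run on the CPU stream instead of the GPU
print(z)
</code></pre>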
|
<python><gpu><metal><apple-m2><mlx>
|
2025-04-12 12:44:13
| 1
| 2,018
|
David Banas
|
79,570,287
| 577,288
|
python 3 - concurrent.futures - get thread number without adding a function
|
<p>Currently this code prints,</p>
<pre><code>MainThread done at sec 1
MainThread done at sec 1
MainThread done at sec 2
MainThread done at sec 1
MainThread done at sec 0
MainThread done at sec 3
MainThread done at sec 2
MainThread done at sec 5
MainThread done at sec 4
</code></pre>
<p>I need it to print</p>
<pre><code>MainThread 1 done at sec 1
MainThread 3 done at sec 1
MainThread 2 done at sec 2
MainThread 3 done at sec 1
MainThread 1 done at sec 0
MainThread 2 done at sec 3
MainThread 2 done at sec 2
MainThread 3 done at sec 5
MainThread 1 done at sec 4
</code></pre>
<p><strong>How can I do this by modifying this one line of code?</strong></p>
<pre><code>print(threading.current_thread().name + ' done at sec ' + str(future.result()))
</code></pre>
<p>Here is the full code</p>
<pre><code>import concurrent.futures
import threading
import pdb
import time
import random
def Threads2(time_sleep):
time.sleep(time_sleep)
return time_sleep
all_sections = [random.randint(0, 5) for iii in range(10)]
test1 = []
with concurrent.futures.ThreadPoolExecutor(max_workers=3, thread_name_prefix="Ok") as executor:
working_threads = {executor.submit(Threads2, time_sleep): time_sleep for index, time_sleep in enumerate(all_sections)}
for future in concurrent.futures.as_completed(working_threads):
print(threading.current_thread().name + ' done at sec ' + str(future.result()))
</code></pre>
|
<python><multithreading><concurrent.futures>
|
2025-04-12 10:03:03
| 1
| 5,408
|
Rhys
|
79,570,163
| 16,383,578
|
Why do these nearly identical functions perform very differently?
|
<p>I have written four functions that modify a square 2D array in place: each reflects the half of the square delimited by two adjacent sides and the corresponding 45-degree diagonal onto the other half across that diagonal.</p>
<p>I have written a function for each of the four possible cases, to reflect <code>product(('upper', 'lower'), ('left', 'right'))</code> to <code>product(('lower', 'upper'), ('right', 'left'))</code>.</p>
<p>They use Numba to compile Just-In-Time and they are parallelized using <code>numba.prange</code> and are therefore much faster than the methods provided by NumPy:</p>
<pre class="lang-none prettyprint-override"><code>In [2]: sqr = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8)
In [3]: %timeit x, y = np.tril_indices(1000); sqr[x, y] = sqr[y, x]
9.16 ms ± 30.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>As you can see, the above code takes a very long time to execute.</p>
<pre><code>import numpy as np
import numba as nb
@nb.njit(cache=True, parallel=True, nogil=True)
def triangle_flip_LL2UR(arr: np.ndarray) -> None:
height, width = arr.shape[:2]
if height != width:
raise ValueError("argument arr must be a square")
for i in nb.prange(height):
arr[i, i:] = arr[i:, i]
@nb.njit(cache=True, parallel=True, nogil=True)
def triangle_flip_UR2LL(arr: np.ndarray) -> None:
height, width = arr.shape[:2]
if height != width:
raise ValueError("argument arr must be a square")
for i in nb.prange(height):
arr[i:, i] = arr[i, i:]
@nb.njit(cache=True, parallel=True, nogil=True)
def triangle_flip_LR2UL(arr: np.ndarray) -> None:
height, width = arr.shape[:2]
if height != width:
raise ValueError("argument arr must be a square")
last = height - 1
for i in nb.prange(height):
arr[i, last - i :: -1] = arr[i:, last - i]
@nb.njit(cache=True, parallel=True, nogil=True)
def triangle_flip_UL2LR(arr: np.ndarray) -> None:
height, width = arr.shape[:2]
if height != width:
raise ValueError("argument arr must be a square")
last = height - 1
for i in nb.prange(height):
arr[i:, last - i] = arr[i, last - i :: -1]
</code></pre>
<pre class="lang-none prettyprint-override"><code>In [4]: triangle_flip_LL2UR(sqr)
In [5]: triangle_flip_UR2LL(sqr)
In [6]: triangle_flip_LR2UL(sqr)
In [7]: triangle_flip_UL2LR(sqr)
In [8]: %timeit triangle_flip_LL2UR(sqr)
194 μs ± 634 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [9]: %timeit triangle_flip_UR2LL(sqr)
488 μs ± 3.26 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [10]: %timeit triangle_flip_LR2UL(sqr)
196 μs ± 501 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [11]: %timeit triangle_flip_UL2LR(sqr)
486 μs ± 855 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>Why do they have execution times with a significant difference? Two of them take around 200 microseconds to execute, the other two around 500 microseconds, despite the fact that they are almost identical.</p>
<hr />
<p>I have discovered something: <code>triangle_flip_UR2LL(sqr)</code> is the same as <code>triangle_flip_LL2UR(sqr.T)</code>, and vice versa.</p>
<p>Now if I transpose the array before calling the functions, the trend of performance is reversed:</p>
<pre><code>In [109]: %timeit triangle_flip_UR2LL(sqr.T)
196 μs ± 1.15 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [110]: %timeit triangle_flip_LL2UR(sqr.T)
490 μs ± 1.24 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>Why is this happening?</p>
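<p>A detail that may be relevant (just a hunch about memory layout): for a C-contiguous array, <code>arr[i, i:]</code> is a contiguous row slice while <code>arr[i:, i]</code> is a strided column slice, and transposing swaps which one is contiguous:</p>
<pre class="lang-py prettyprint-override"><code>print(sqr.strides)    # (1000, 1) -> reading along a row is sequential
print(sqr.T.strides)  # (1, 1000) -> reading along a row jumps 1000 bytes
</code></pre>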
|
<python><arrays><numpy><numba>
|
2025-04-12 07:38:21
| 1
| 3,930
|
Ξένη Γήινος
|
79,569,998
| 2,001,654
|
Get the QColor at an arbitrary position within the 0-1 range of a QGradient
|
<h2>Objective</h2>
<p>I want to get the <em>visual</em> color at an arbitrary position within the <code>0.0</code>-<code>1.0</code> range of any given QGradient.</p>
<p>This may be necessary to properly compute new intermediate stops between existing ones, or to get a color at a specific point in order to use it as a display reference within the gradient.</p>
<h2>The <code>QVariantAnimation</code> approach</h2>
<p>At first, I thought about using QVariantAnimation, which seemed reasonable.</p>
<p>Gradients can have many colors set at different points, which can be set by using <code>QVariantAnimation.setKeyValueAt()</code>.</p>
<p>They also can have no <code>0.0</code> or <code>1.0</code> stops, meaning that if, for example, the first stop is at <code>0.2</code> and the last is at <code>0.8</code>, the start and end values of the animation must be set as well with the relative first and last color.</p>
<p>My original attempt was to create a temporary animation, then set its key values based on the gradient stops: using <code>setCurrentTime()</code> based on the computed position within the animation <code>duration()</code> allows to get the intermediate value.</p>
<p>Here is my original implementation:</p>
<p><sup>Note that I'm not able to write C++ code, so I'll provide Python based code for the following examples. C++ based answers will be well received as well, though.</sup></p>
<pre class="lang-py prettyprint-override"><code>def gradientColorAt(grad, pos):
if pos <= 0:
return grad.stops()[0][1]
elif pos >= 1:
return grad.stops()[-1][1]
ani = QVariantAnimation()
ani.setDuration(1000)
startDone = False
for stop, color in grad.stops():
if not startDone:
startDone = True
ani.setStartValue(color)
if not stop:
continue
ani.setKeyValueAt(stop, color)
if stop < 1:
ani.setEndValue(color)
ani.setCurrentTime(round(1000 * pos))
return ani.currentValue()
</code></pre>
<p>The above <em>apparently</em> works fine, as long as all colors are fully opaque. I later realized that a <a href="https://stackoverflow.com/q/3306786">similar post</a> also suggested a similar approach in some of its answers.</p>
<h2>The issue with non opaque colors</h2>
<p>Unfortunately, something unexpected happens whenever a color in the gradient has an <code>alpha()</code> value less than 255 (or <code>1.0</code> for <code>QColor.alphaF()</code>).</p>
<p>Qt seems to follow the convention also used in modern web browsers, in which transitions between colors do not strictly interpolate their components (red, green, blue and alpha) linearly; the "transparent" color is instead weighted by the color it transitions to, even though <code>transparent</code> is actually a "transparent black".</p>
<p>Be aware, though, that <code>QColor(Qt.transparent)</code> is <em>exactly</em> <code>QColor(0, 0, 0, 0)</code>. Even though <code>QColor(255, 0, 0, 0)</code> would have the same result for the purposes of gradients in both Qt <em>and</em> common browsers, it is <strong>not</strong> the same color.</p>
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
app = QApplication([])
win = QWidget()
lay = QVBoxLayout(win)
grad = QLinearGradient(0, 0, 1, 0)
grad.setCoordinateMode(grad.ObjectBoundingMode)
grad.setColorAt(1, Qt.green)
transBlack = QColor(Qt.transparent)
transRed = QColor(255, 0, 0, 0)
pal = app.palette()
for color in (transBlack, transRed):
grad.setColorAt(0, color)
pal.setBrush(QPalette.Window, QBrush(grad))
widget = QWidget()
widget.setAutoFillBackground(True)
widget.setPalette(pal)
lay.addWidget(widget)
win.resize(400, 400)
win.show()
app.exec()
</code></pre>
<p>And here is the result:</p>
<p><a href="https://i.sstatic.net/bo7eV6Ur.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bo7eV6Ur.png" alt="Screenshot of the first code example" /></a></p>
<p>Some research suggested a similar pattern in browsers (like <a href="https://css-tricks.com/thing-know-gradients-transparent-black/" rel="nofollow noreferrer">this post</a>), and I can understand the reasoning behind that.</p>
<p>Nonetheless, the issue is that the above QVariantAnimation attempt does <em>not</em> follow the same pattern, as it simply uses an internal interpolator that linearly evaluates all color components.</p>
<p>Let's add the <code>gradientColorAt()</code> function to the above code, along with the following class:</p>
<pre class="lang-py prettyprint-override"><code>class GradientViewer(QWidget):
def __init__(self, gradient):
super().__init__()
self.gradient = QLinearGradient(gradient)
def paintEvent(self, event):
qp = QPainter(self)
width = self.width()
height = self.height()
for x in range(width + 1):
qp.setPen(gradientColorAt(self.gradient, x / width))
qp.drawLine(x, 0, x, height)
</code></pre>
<p>Then add related instances to the layout using the same gradients:</p>
<pre class="lang-py prettyprint-override"><code>...
# replace the above for loop
for color in (transBlack, transRed):
grad.setColorAt(0, color)
pal.setBrush(QPalette.Window, QBrush(grad))
widget = QWidget()
widget.setAutoFillBackground(True)
widget.setPalette(pal)
lay.addWidget(widget)
# custom widget using the same gradient
lay.addWidget(GradientViewer(grad))
...
</code></pre>
<p>And here is the result:</p>
<p><a href="https://i.sstatic.net/wi0LDWFY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wi0LDWFY.png" alt="Screenshot of the altered code" /></a></p>
<h2>Some digging</h2>
<p>Further research resulted in studying the Qt sources, which looked promising: the raster painting engine has a <code>generateGradientColorTable()</code> function (see it in <a href="https://codebrowser.dev/qt5/qtbase/src/gui/painting/qpaintengine_raster.cpp.html#_ZNK14QGradientCache26generateGradientColorTableERK9QGradientP7QRgba64ii" rel="nofollow noreferrer">code browser</a>), which Qt seems to use to create a bitmap cache based on an arbitrary size of color interpolations within a gradient.</p>
<p>I thought about overriding <code>QVariant::interpolated()</code>, and then recreating a similar function with a QVariantAnimation subclass, based on that code:</p>
<pre class="lang-py prettyprint-override"><code>class QColorInterpolator(QVariantAnimation):
def interpolated(self, c1, c2, pos):
if pos <= 0:
return c1
elif pos >= 1:
return c2
first = c1.rgba64()
first = QRgba64.fromRgba64(
first.red(), first.green(), first.blue(), (first.alpha() * 256) >> 8
).premultiplied()
second = c2.rgba64()
second = QRgba64.fromRgba64(
second.red(), second.green(), second.blue(), (second.alpha() * 256) >> 8
).premultiplied()
r1 = first.red() << 16
g1 = first.green() << 16
b1 = first.blue() << 16
a1 = first.alpha() << 16
r2 = second.red() << 16
g2 = second.green() << 16
b2 = second.blue() << 16
a2 = second.alpha() << 16
rd = round((r2 - r1) * pos)
gd = round((g2 - g1) * pos)
bd = round((b2 - b1) * pos)
ad = round((a2 - a1) * pos)
return QColor.fromRgba64(
(r1 + 32768 + rd) >> 16,
(g1 + 32768 + gd) >> 16,
(b1 + 32768 + bd) >> 16,
(a1 + 32768 + ad) >> 16,
)
</code></pre>
<p><sup>Obviously, the above class isn't really needed, as the interpolation could be achieved by using relative steps between stops. The only benefit is the possibility of using <code>interpolated()</code>, which will always use a <code>0-1</code> range between key frames. It may not be completely efficient, but results in a cleaner code.</sup></p>
<p>Unfortunately, my limited understanding of color composition and alpha premultiplication seem to show its problems: replacing QVariantAnimation with QColorInterpolator in the above <code>gradientColorAt()</code> function will give <em>exactly</em> the same result as in using <code>Qt.transparent</code>.</p>
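<p>My current suspicion (unverified) is that I interpolate in premultiplied space but never convert back, i.e. the return value should perhaps be something like:</p>
<pre class="lang-py prettyprint-override"><code># hypothetical fix attempt: interpolate premultiplied, then unpremultiply
rgba = QRgba64.fromRgba64(
    (r1 + 32768 + rd) >> 16,
    (g1 + 32768 + gd) >> 16,
    (b1 + 32768 + bd) >> 16,
    (a1 + 32768 + ad) >> 16,
).unpremultiplied()
return QColor.fromRgba64(rgba)
</code></pre>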
<h2>Working solution, not really efficient</h2>
<p>Right now, the only working solution I came out with was to use an internal cache of QImages, each one corresponding to possible intermediate gradients (every range between existing stops), paint the gradient on it, and then use QImage's <a href="https://doc.qt.io/qt-5/qimage.html#pixelColor-1" rel="nofollow noreferrer"><code>pixelColor()</code></a> to get the actual color of the computed position:</p>
<pre class="lang-py prettyprint-override"><code>class QColorInterpolator(QVariantAnimation):
_imageCache = {}
def _createCacheGradient(self, c1, c2):
g = QLinearGradient(0, 0, 1024, 0)
g.setStops([(0, c1), (1, c2)])
image = QImage(1024, 1, QImage.Format_ARGB32)
image.fill(Qt.transparent)
qp = QPainter(image)
qp.fillRect(image.rect(), g)
qp.end()
return image
def interpolated(self, c1, c2, pos):
if pos <= 0:
return c1
if pos >= 1:
return c2
if c1 == c2:
return c1
key = c1.rgba64(), c2.rgba64()
if key in self._imageCache:
gradImg = self._imageCache[key]
else:
self._imageCache[key] = gradImg = self._createCacheGradient(c1, c2)
return gradImg.pixelColor(round(1023 * pos), 0)
</code></pre>
<p>Replacing the QColorInterpolator with the new class works as expected:</p>
<p><a href="https://i.sstatic.net/JfX94RU2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfX94RU2.png" alt="Screenshot of the current acceptable attempt" /></a></p>
<h2>The question</h2>
<p>While the last attempt works as expected, I don't really like it, for the following reasons:</p>
<ul>
<li>as far as I can understand, Qt already creates an internal "color table" cache, and while it obviously cannot be externally accessed, creating separate QImages doesn't seem very efficient;</li>
<li>there is no direct computation of color values, only an <em>assumed</em> one based on the pixel data of QImage;</li>
<li>due to all the above, any temporary computation of intermediate colors would require to create a QVariantAnimation, which I'd prefer to avoid;</li>
</ul>
<p>The question, therefore, is: how (and why) can my <code>interpolated()</code> function be fixed in order to provide <em>correct</em> color values, as used in a visually painted gradient by Qt?</p>
<p><sup>If you want to expand the above, there are some follow ups I'd like to know more about; it seems that QGradient/QBrush allows to set two different types of interpolations (<code>ColorInterpolation</code> and <code>ComponentInterpolation</code>): what is the difference? Does any of those result in what the linear computation used by QVariantAnimation actually does? Is it possible to get the opposite <em>painting</em> behavior, only following linear computation?</sup></p>
|
<python><c++><qt><gradient><qcolor>
|
2025-04-12 03:39:27
| 0
| 49,772
|
musicamante
|
79,569,982
| 6,457,407
|
How do I unittest an HTML page that contains an embedded dynamic image?
|
<p>I've got a dynamic HTML page that contains a QR code that is generated dynamically and embedded using <code><img src=data:image/png:....></code>. I had been testing the page by generating it with specific arguments and verifying that it matches the output I expect.</p>
<p>I'm now discovering that the embedded image is not as fixed as I expected. The PNG format uses zlib compression, and for Python 3.12, Windows and Linux use different versions of the zlib library (1.2.13 and 1.2.11, respectively). I have two correct but different compressions of the identical source bits, and the resulting PNG representations differ. The resulting dynamic page no longer matches exactly the output I expect. (The Windows version now compresses to a few bytes shorter.)</p>
<p>I can easily verify that both pages contain identical source, except for the images. Is there an easy way of taking the byte strings that are the PNG representations of the generated image and the expected image and confirming that they encode the same image?</p>
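<p>The kind of comparison I'm considering (a sketch using Pillow, assuming both byte strings decode cleanly):</p>
<pre class="lang-py prettyprint-override"><code>import io

from PIL import Image

def same_pixels(png1: bytes, png2: bytes) -> bool:
    im1 = Image.open(io.BytesIO(png1)).convert("RGBA")
    im2 = Image.open(io.BytesIO(png2)).convert("RGBA")
    # identical dimensions and identical decoded pixel data
    return im1.size == im2.size and im1.tobytes() == im2.tobytes()
</code></pre>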
|
<python><html><unit-testing><png>
|
2025-04-12 03:09:27
| 2
| 11,605
|
Frank Yellin
|
79,569,792
| 9,441,040
|
Pickle works but fails pytest
|
<p>I've created a module persist containing a function obj_pickle which I'm using to pickle various objects in my project. It works fine, but it's failing pytest, returning:</p>
<pre><code>> pickle.dump(obj, file_handle, protocol=protocol)
E AttributeError: Can't pickle local object 'test_object.<locals>.TestObject'
</code></pre>
<p>Running:</p>
<ul>
<li>python 3.12</li>
<li>pytest Version: 8.3.5</li>
</ul>
<p>I see that there are similar-looking issues tied to multiprocessing, but I don't think pytest is exposing that issue. Code below, and many thanks.</p>
<pre class="lang-py prettyprint-override"><code># lib.persist.py
from pathlib import Path
import pickle
def obj_pickle(obj: object, dir:Path, protocol: int = pickle.HIGHEST_PROTOCOL) -> None:
"""
Pickle an object to a byte file.
"""
if not dir.exists():
dir.mkdir(parents=True, exist_ok=True)
path = Path(dir, obj.instance_name + '.pkl')
with open(path, "wb") as file_handle:
pickle.dump(obj, file_handle, protocol=protocol)
print(f"{obj.__class__.__name__} object {obj.instance_name} saved to {path}")
</code></pre>
<pre class="lang-py prettyprint-override"><code># tests.test_persist.py
from pathlib import Path
import pytest
from lib.persist import obj_pickle
TEST_DIR = Path("test_dir")
@pytest.fixture
def test_object():
class TestObject():
        def __init__(self, instance_name):
            self.instance_name = instance_name  # obj_pickle reads this attribute
self.data = "This is a test object."
test_object = TestObject("test_object_file")
return test_object
def test_obj_pickle(test_object):
obj_pickle(test_object, Path(TEST_DIR))
path = Path(TEST_DIR, "test_object_file" + ".pkl")
assert path.exists()
</code></pre>
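<p>If I'm right that the local class is the problem, the fixture would have to look something like this instead (a sketch, with the class moved to module level so pickle can find it by import):</p>
<pre class="lang-py prettyprint-override"><code>class TestObject:
    def __init__(self, instance_name):
        self.instance_name = instance_name
        self.data = "This is a test object."

@pytest.fixture
def test_object():
    return TestObject("test_object_file")
</code></pre>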
|
<python><pytest><pickle>
|
2025-04-11 22:30:33
| 2
| 362
|
Paul C
|
79,569,771
| 6,615,517
|
Not able to import module after adding folder to path
|
<p>I am trying to import a module based on this RDKit blog: <a href="https://greglandrum.github.io/rdkit-blog/posts/2023-12-01-using_sascore_and_npscore.html" rel="nofollow noreferrer">https://greglandrum.github.io/rdkit-blog/posts/2023-12-01-using_sascore_and_npscore.html</a> but it won't work if I try to include it in a separate file. The following works in a jupyter cell:</p>
<pre><code>try:
from rdkit.Contrib.SA_Score import sascorer
except ImportError:
import sys
import os
sys.path.append(os.path.join(os.environ['CONDA_PREFIX'],'share','RDKit','Contrib', 'SA_score'))
import sascorer
</code></pre>
<p>However, if I include that in a Python script and define some other function, trying to import that function gives me a ModuleNotFoundError.</p>
|
<python><rdkit>
|
2025-04-11 22:03:25
| 1
| 342
|
rwalroth
|
79,569,591
| 2,410,605
|
First Attempt at Downloading a Google Sheet using Python Selenium, Getting a 401 Authentication Error
|
<p>I need to programmatically download a Google Sheet as a csv file and I'm trying to use Python Selenium to accomplish this. Unfortunately I'm receiving a 401 error on the authentication.</p>
<p>Here's my code:</p>
<pre><code>import os
import logging
import requests
import sys
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
#local defs
SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]
SPREADSHEET_ID = "<ID>"
OUT_DIR = 'tmp/'
OUT_FILENAME = "locationOfServices.csv"
#Process to download the google sheet
def getGoogleSeet(spreadsheet_id, outDir, outFile):
url = f'https://docs.google.com/spreadsheets/d/<ID>/export?format=csv'
response = requests.get(url)
if response.status_code == 200:
filepath = os.path.join(outDir, outFile)
with open(filepath, 'wb') as f:
f.write(response.content)
print('CSV file saved to: {}'.format(filepath))
else:
print(f'Error downloading Google Sheet: {response.status_code}')
sys.exit()
# Main function to log into Google Sheets
def main():
credentials = None
if os.path.exists("token.json"):
credentials = Credentials.from_authorized_user_file("token.json", SCOPES)
if not credentials or not credentials.valid:
if credentials and credentials.expired and credentials.refresh_token:
credentials.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
credentials = flow.run_local_server(port=0)
with open("token.json", "w") as token:
token.write(credentials.to_json())
try:
service = build("sheets", "v4", credentials=credentials)
sheets = service.spreadsheets()
os.makedirs(OUT_DIR, exist_ok = True)
filepath = getGoogleSeet(SPREADSHEET_ID, OUT_DIR, OUT_FILENAME)
sys.exit(0); ## success
except HttpError as error:
print(error)
if __name__ == "__main__":
main()
</code></pre>
<p>On my initial run it asked me which account I wanted to use (which is a service account), had me approve some information, and successfully created a token file. But it also gave me the 401 error. After that I've tried several times to tweak stuff but continue to get the 401.</p>
<p>I have verified that the service account has Edit permissions to access the Google Sheet.</p>
<p>I've created a project in Google Cloud to create the OAuth2 connection. In the project I enabled Google Sheets API and created an OAuth Client ID:</p>
<p><a href="https://i.sstatic.net/0sstofCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0sstofCY.png" alt="Client Setup" /></a></p>
<p>I downloaded the JSON file and renamed it credentials.json and stored it in my working directory.</p>
<p><a href="https://i.sstatic.net/pBMc7FTf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBMc7FTf.png" alt="Working Directory" /></a></p>
<p>The other parts of the Google service that I set up are below:</p>
<p><a href="https://i.sstatic.net/M608J8Rp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M608J8Rp.png" alt="Client Desktop" /></a></p>
<p><a href="https://i.sstatic.net/65xnCKZB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65xnCKZB.png" alt="Audience" /></a></p>
<p><a href="https://i.sstatic.net/fzQ8jiE6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzQ8jiE6.png" alt="Data Access" /></a></p>
<p>Can anybody spot what it is I'm doing incorrectly?</p>
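<p>One thing I'm starting to suspect: the plain <code>requests.get(url)</code> carries no credentials at all, so maybe the export URL needs the OAuth token attached (an assumption):</p>
<pre class="lang-py prettyprint-override"><code># hypothetical authenticated export request, reusing the OAuth credentials
headers = {"Authorization": f"Bearer {credentials.token}"}
response = requests.get(url, headers=headers)
</code></pre>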
|
<python><google-sheets><google-cloud-platform>
|
2025-04-11 19:30:42
| 1
| 657
|
JimmyG
|
79,569,505
| 10,917,549
|
Load DeepSeek-V3 model from local repo
|
<p>I want to run DeepSeek-V3 model inference using the Hugging Face Transformers library (>= v4.51.0).</p>
<p>I read that you can do the following to do that (download the model and run it)</p>
<pre><code>from transformers import pipeline
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1", trust_remote_code=True)
pipe(messages)
</code></pre>
<p>My issue is that I already downloaded the DeepSeek-V3 Hugging Face repository separately, and I just want to tell Transformers where it is on my local machine, so that it can run the inference.</p>
<p>The model repository is thus not (or not necessarily) in the Hugging-Face cache directory (it can be anywhere on the local machine). When loading the model, I want to provide the path which specifically points to the model's repository on the local machine.</p>
<p>How can I achieve that?</p>
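<p>What I'm hoping for is something like the following, passing the local directory instead of the hub id (a sketch; the path is hypothetical):</p>
<pre class="lang-py prettyprint-override"><code>from transformers import pipeline

local_path = "/path/to/DeepSeek-V3"  # hypothetical local repo location
pipe = pipeline("text-generation", model=local_path, trust_remote_code=True)
pipe([{"role": "user", "content": "Who are you?"}])
</code></pre>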
|
<python><huggingface-transformers><huggingface><deepseek>
|
2025-04-11 18:26:42
| 1
| 409
|
The_Average_Engineer
|
79,569,500
| 12,276,279
|
How can I sort order of index based on my preference in multi-index pandas dataframes
|
<p>I have a pandas dataframe <code>df</code>. It has a MultiIndex with Gx.Region and Scenario_Model.
The Scenario_Model level is ordered alphabetically: des, pes, tes. When I plot it, it comes out in the same order. However, I want to reorder it as pes, tes, des and plot it accordingly. Is this possible with a pandas DataFrame?</p>
<pre><code> dict = {('Value', 2023, 'BatteryStorage'): {('Central Africa', 'des'): 0.0,
('Central Africa', 'pes'): 0.0,
('Central Africa', 'tes'): 0.0,
('Eastern Africa', 'des'): 0.0,
('Eastern Africa', 'pes'): 0.0,
('Eastern Africa', 'tes'): 0.0,
('North Africa', 'des'): 0.0,
('North Africa', 'pes'): 0.0,
('North Africa', 'tes'): 0.0,
('Southern Africa', 'des'): 504.0,
('Southern Africa', 'pes'): 100.0,
('Southern Africa', 'tes'): 360.0,
('West Africa', 'des'): 0.0,
('West Africa', 'pes'): 0.0,
('West Africa', 'tes'): 0.0},
('Value', 2023, 'Biomass PP'): {('Central Africa', 'des'): 0.0,
('Central Africa', 'pes'): 0.0,
('Central Africa', 'tes'): 0.0,
('Eastern Africa', 'des'): 40,
('Eastern Africa', 'pes'): 10,
('Eastern Africa', 'tes'): 50,
('North Africa', 'des'): 0.0,
('North Africa', 'pes'): 0.0,
('North Africa', 'tes'): 0.0,
('Southern Africa', 'des'): 90.0,
('Southern Africa', 'pes'): 43.0,
('Southern Africa', 'tes'): 50.0,
('West Africa', 'des'): 200.0,
('West Africa', 'pes'): 150.0,
('West Africa', 'tes'): 100}}
df_sample = pd.DataFrame.from_dict(data)
df_sample.plot(kind = "bar",
stacked = True)
</code></pre>
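<p>The kind of reordering I'm after, in sketch form (I'm not sure <code>reindex</code> with <code>level</code> is the right tool):</p>
<pre class="lang-py prettyprint-override"><code>order = ["pes", "tes", "des"]
df_ordered = df_sample.reindex(order, level=1)  # reorder the Scenario_Model level
df_ordered.plot(kind="bar", stacked=True)
</code></pre>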
<p><a href="https://i.sstatic.net/DUTRGb4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DUTRGb4E.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><pandas><dataframe><indexing>
|
2025-04-11 18:23:33
| 1
| 1,810
|
hbstha123
|
79,569,466
| 3,225,072
|
Launching audacity from Python and loading multiple audios
|
<p>I'm trying to write a Python script which loads Audacity with multiple audio files in the same window. So far I have been able to launch it with both files, but instead of opening them in the same window, two independent Audacity windows are opened, one for each audio file.</p>
<p>Is there a way to get both audios to load on the same Audacity window?</p>
<p>Bonus question in case you know the answer:</p>
<p>Is there a way to close Audacity properly from my Python script? Every time I stop Python script execution and reuse Audacity, it says that my previous projects crashed but I don't see a way to properly close (e.g. say I don't want to save) the Audacity projects from my Python script</p>
<p>See my Python code below</p>
<pre><code>import os
import subprocess

# Open files A and B in Audacity
original_mic_file = "A.wav"
cleaned_mic_file = "B.wav"
if os.name == 'nt': # Windows
audacity_path = "C:\\Program Files\\Audacity\\audacity.exe"
subprocess.Popen([audacity_path, original_mic_file,cleaned_mic_file])
</code></pre>
<p>Edit: To those downvoting, please comment why</p>
|
<python><subprocess><audacity>
|
2025-04-11 17:58:05
| 2
| 960
|
Victor
|
79,569,354
| 12,276,279
|
For multi-index columns in pandas dataframe, how can I group index of a particular level value for visualization in Python?
|
<p>I have a pandas dataframe which is basically a pivot table.</p>
<p><code>df.plot(kind = "bar",stacked = True)</code> results in following plot. The labels in x-axis are congested as shown.
<a href="https://i.sstatic.net/c9qRrDgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c9qRrDgY.png" alt="enter image description here" /></a></p>
<p>In Excel I can group by the first index value, so the scenarios pes, tes and des are clear and distinct, as shown:
<a href="https://i.sstatic.net/mLzZTUGD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLzZTUGD.png" alt="enter image description here" /></a></p>
<p>How can I create similar labels on the x-axis using matplotlib in Python?</p>
<p>Here is a sample dataset with minimal code:</p>
<pre><code> dict = {'BatteryStorage': {('des-PDef3', 'Central Africa'): 0.0,
('des-PDef3', 'Eastern Africa'): 2475.9,
('des-PDef3', 'North Africa'): 98.0,
('des-PDef3', 'Southern Africa'): 124.0,
('des-PDef3', 'West Africa'): 1500.24,
('pes-PDef3', 'Central Africa'): 0.0,
('pes-PDef3', 'Eastern Africa'): 58.03,
('pes-PDef3', 'North Africa'): 98.0,
('pes-PDef3', 'Southern Africa'): 124.0,
('pes-PDef3', 'West Africa'): 0.0,
('tes-PDef3', 'Central Africa'): 0.0,
('tes-PDef3', 'Eastern Africa'): 1175.86,
('tes-PDef3', 'North Africa'): 98.0,
('tes-PDef3', 'Southern Africa'): 124.0,
('tes-PDef3', 'West Africa'): 0.0},
'Biomass PP': {('des-PDef3', 'Central Africa'): 44.24,
('des-PDef3', 'Eastern Africa'): 1362.4,
('des-PDef3', 'North Africa'): 178.29,
('des-PDef3', 'Southern Africa'): 210.01999999999998,
('des-PDef3', 'West Africa'): 277.4,
('pes-PDef3', 'Central Africa'): 44.24,
('pes-PDef3', 'Eastern Africa'): 985.36,
('pes-PDef3', 'North Africa'): 90.93,
('pes-PDef3', 'Southern Africa'): 144.99,
('pes-PDef3', 'West Africa'): 130.33,
('tes-PDef3', 'Central Africa'): 44.24,
('tes-PDef3', 'Eastern Africa'): 1362.4,
('tes-PDef3', 'North Africa'): 178.29,
('tes-PDef3', 'Southern Africa'): 210.01999999999998,
('tes-PDef3', 'West Africa'): 277.4}}
df = pd.DataFrame.from_dict(data)
df.plot(kind = "bar",stacked = True)
plt.show()
</code></pre>
|
<python><matplotlib><pivot><xticks>
|
2025-04-11 16:46:53
| 2
| 1,810
|
hbstha123
|
79,569,195
| 1,700,890
|
Pandas - group by and filter
|
<p>Here is my dataframe</p>
<pre><code>import pandas as pd

my_df = pd.DataFrame({'col_1': ['A', 'A', 'B', 'B', 'C', 'C'],
'col_2': [1, 2, 1, 2, 1, 2]})
</code></pre>
<p>I would like to group by <code>col_1</code> and filter out anything strictly greater than one using <code>col_2</code>. The final result should look like:</p>
<pre><code>final_df = pd.DataFrame({'col_1': ['A', 'B', 'C'],
                         'col_2': [1, 1, 1]})
</code></pre>
<p>Here is what I tried:</p>
<pre><code>df_ts = my_df.groupby('col_1').filter(lambda x: (x['col_2'] <= 1).any())
</code></pre>
<p>It returns the same dataframe</p>
<p>I also tried:</p>
<pre><code>df_ts = my_df.groupby('col_1').filter(lambda x: x['col_2'] <= 1)
</code></pre>
<p>It generates error.</p>
|
<python><pandas><filter><group-by>
|
2025-04-11 15:17:16
| 1
| 7,802
|
user1700890
|
79,569,161
| 393,896
|
How to create json content object where some of the content will be prefixed and suffixed with backslash and doublequote \"
|
<p>I have the following Python code:</p>
<pre><code>import json
data = {"name": "John", "age": 30, "city": "New York"}
print(data)
json_string = json.dumps(data, indent=4)
print(json_string)
json_string_escaped = json_string.replace('"','\\"')
print(json_string_escaped)
dataone = {"json_string": json_string_escaped}
print(dataone)
</code></pre>
<p>Now <code>json_string_escaped</code> is correct, showing:</p>
<pre><code>{
\"name\": \"John\",
\"age\": 30,
\"city\": \"New York\"
}
</code></pre>
<p>Now I need to attach this escaped string to a new JSON object. The format with a single backslash before each double quote is required by the REST API:</p>
<pre><code>dataone = {"json_string": json_string_escaped}
print(dataone)
</code></pre>
<p>But that added another backslash, which is not the intended behavior:</p>
<pre><code>{'json_string': '{\n \\"name\\": \\"John\\",\n \\"age\\": 30,\n \\"city\\": \\"New York\\"\n}'}
</code></pre>
<p>How can I get the <code>dataone</code> variable to contain a single backslash before each double quote?</p>
|
<python><replace>
|
2025-04-11 15:04:29
| 1
| 994
|
Ladislav Zitka
|
79,569,154
| 1,898,534
|
Databricks Cluster Init Script: Multiple global.extra-index-url
|
<p>I'm using a Databricks init script to configure Python package indexes via AWS CodeArtifact. The script fetches an auth token and then sets the index-url and extra-index-url using pip config --global set ....</p>
<p><strong>Here's the relevant snippet:</strong></p>
<pre class="lang-bash prettyprint-override"><code>pip config --global set global.index-url https://aws:${TOKEN}@.../pypi/internal/simple/
pip config --global set global.extra-index-url https://aws:${TOKEN}@.../pypi/secondary/simple/
</code></pre>
<p><strong>What I want to achieve:</strong>
I need to configure:</p>
<ul>
<li>One primary index-url</li>
<li>Multiple extra-index-url values (e.g. internal + public PyPI)</li>
</ul>
<p><strong>Question:</strong>
What is the best practice to configure multiple extra-index-url values in a Databricks init script?</p>
<p>Should I:</p>
<ol>
<li>Manually write to /root/.config/pip/pip.conf?</li>
<li>Use environment variables like PIP_EXTRA_INDEX_URL (even if it's not persistent)?</li>
<li>Use another method to manage multiple indexes cleanly?</li>
</ol>
<p><strong>Constraints:</strong>
The solution must work in Databricks init scripts (runs as root).</p>
<p>The auth token is dynamic and generated during runtime.</p>
<p>Security is handled outside the script; I'm only looking for technical best practices.</p>
<p>I already experimented with env vars, and with multiple ways of modifying pip.conf via a bash script calling <code>pip config</code>.</p>
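<p>For reference, a sketch of option 1, writing <code>pip.conf</code> directly (as far as I can tell, pip accepts multiple <code>extra-index-url</code> values separated by newlines; the token is the runtime value the script fetched):</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path

token = "<runtime CodeArtifact token>"  # fetched earlier in the init script
conf = Path("/root/.config/pip/pip.conf")
conf.parent.mkdir(parents=True, exist_ok=True)
conf.write_text(
    "[global]\n"
    f"index-url = https://aws:{token}@.../pypi/internal/simple/\n"
    f"extra-index-url = https://aws:{token}@.../pypi/secondary/simple/\n"
    "    https://pypi.org/simple/\n"
)
</code></pre>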
|
<python><pyspark><pip><databricks>
|
2025-04-11 15:00:49
| 0
| 6,499
|
PlagTag
|
79,569,029
| 1,735,914
|
Python ElementTree iter() method using XPath
|
<p>I have the following XML element:</p>
<pre><code><Content>
<Controller Use="Context" Name="Base_Project_Maximum_Connections">
<DataTypes Use="Context">
<DataType Use="Target" Name="cs_dint" Family="NoFamily" Class="User">
<Members>
<Member Name="Status" DataType="CONNECTION_STATUS" Dimension="0" Radix="NullType" Hidden="false" ExternalAccess="Read/Write" />
<Member Name="Data" DataType="DINT" Dimension="0" Radix="Decimal" Hidden="false" ExternalAccess="Read/Write" />
</Members>
</DataType>
</DataTypes>
</Controller>
</Content>
</code></pre>
<p>I am trying to find all DataType elements that have a Name of "cs_dint" using the following code:</p>
<pre><code>def GetDataTypeFromElement(root, typeName = None):
dataTypeIter = root.iter("DataType")
for dataTypeElement in dataTypeIter:
print ("Found a data type element")
print ("Finished with first iteration")
dataTypeIter = root.iter("DataType[@Name = 'cs_dint']")
for dataTypeElement in dataTypeIter:
print ("Found a data type element")
print ("Finished with second iteration")
return None
</code></pre>
<p>The first call to iter(), where I merely look for DataType elements, works. The second, where I try to specify the name, doesn't. Why not?</p>
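<p>For comparison, the <code>findall()</code> variant I'd try next, on the assumption that <code>iter()</code> only accepts a plain tag name while <code>findall()</code> accepts the limited XPath subset (note: no spaces around <code>=</code>):</p>
<pre class="lang-py prettyprint-override"><code>for dataTypeElement in root.findall(".//DataType[@Name='cs_dint']"):
    print("Found a data type element")
</code></pre>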
|
<python><xpath><elementtree>
|
2025-04-11 14:07:13
| 1
| 2,431
|
ROBERT RICHARDSON
|
79,568,962
| 12,276,279
|
Aligning matplotlib subplots one with stacked bar plot and another with line plot using matplotlib in Python
|
<p>I have 2 dataframes as follows:
<code>df1.to_dict()</code> results in</p>
<pre><code>{'Peak Demand PES': {2023: 124126.91,
2025: 154803.41,
2030: 231494.66,
2040: 483217.66000000003,
2050: 1004207.86},
'Peak Demand TES': {2023: 125724.8,
2025: 142959.13999999998,
2030: 186044.99000000002,
2040: 288307.99,
2050: 424827.79},
'Peak Demand DES': {2023: 125263.94,
2025: 152080.7,
2030: 219122.6,
2040: 385960.3,
2050: 671678.9}}
</code></pre>
<p><code>df2.to_dict()</code> results in</p>
<pre><code>{'Biomass PP': {2023: 783.2,
2025: 840.5,
2030: 990.5,
2040: 711.0,
2050: 167.0},
'Coal PP': {2023: 51235.8,
2025: 48912.8,
2030: 41527.8,
2040: 24125.8,
2050: 13409.8},
'Diesel PP': {2023: 10498.41,
2025: 10347.69,
2030: 9020.56,
2040: 6227.39,
2050: 3049.75},
'Geothermal PP': {2023: 1004.4,
2025: 1074.4,
2030: 1249.4,
2040: 1148.4,
2050: 328.3},
'HFO PP': {2023: 6462.1,
2025: 6358.04,
2030: 5468.59,
2040: 2521.35,
2050: 205.58},
'Large Hydro Dam PP': {2023: 28363.22,
2025: 32053.86,
2030: 41259.46,
2040: 43980.76,
2050: 32379.44},
'Natural Gas PP': {2023: 116472.48,
2025: 110897.38,
2030: 106429.18000000001,
2040: 69705.68,
2050: 8774.35},
'PumpStorage': {2023: 3721.0,
2025: 4063.0,
2030: 4918.0,
2040: 6498.0,
2050: 6498.0},
'Solar PV - Utility PP': {2023: 10422.59,
2025: 12731.45,
2030: 18576.6,
2040: 23338.15,
2050: 16738.45},
'Solar Thermal PP': {2023: 1095.0,
2025: 1095.0,
2030: 1095.0,
2040: 670.0,
2050: nan},
'Transmission': {2023: 25.83,
2025: 28.01,
2030: 33.46,
2040: 43.76,
2050: 47.56},
'Wind PP': {2023: 8193.02,
2025: 9297.62,
2030: 12193.52,
2040: 7261.3,
2050: 1735.0}}
</code></pre>
<p>I want to create a sub-plot showing df1 as line plot and df2 as stacked bar plot.
I tried:</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots()
df1.plot(
ax = ax,
zorder = 0,
marker = "o"
)
df2.plot(ax = ax,
kind = "bar",
stacked = True,
color = color_map,
zorder = 1)
plt.legend(bbox_to_anchor = (1.1, 1))
</code></pre>
<p>But I don't see the line plot:
<a href="https://i.sstatic.net/4woKwnLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4woKwnLj.png" alt="enter image description here" /></a></p>
<p>Next, I tried this code:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# Plot stacked bars
df2.plot(kind="bar",
stacked=True,
ax=ax,
color=color_map,
width=0.8,
position=0)
years = df1.index
offset = 0.5 # try small shifts like -0.2, 0.2, etc.
for col in df1.columns:
ax.plot([x for x in range(len(years))],
df1[col].values,
label=col,
linewidth=3,
linestyle='--',
marker='o')
# Adjust legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, bbox_to_anchor=(1.1, 1))
plt.ylabel("GW")
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/eAmRVyuv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAmRVyuv.png" alt="enter image description here" /></a>
However, the x-axes of the line plot and the stacked bars are not aligned.
How can I fix this?</p>
|
<python><python-3.x><matplotlib><bar-chart><subplot>
|
2025-04-11 13:36:25
| 1
| 1,810
|
hbstha123
|
79,568,961
| 16,383,578
|
Why does this fast function with Numba JIT slow down if I JIT compile another function?
|
<p>So I have this function:</p>
<pre><code>import numpy as np
import numba as nb
@nb.njit(cache=True, parallel=True, nogil=True)
def triangle_half_UR_LL(size: int, swap: bool = False) -> tuple[np.ndarray, np.ndarray]:
total = (size + 1) * size // 2
x_coords = np.full(total, 0, dtype=np.uint16)
y_coords = np.full(total, 0, dtype=np.uint16)
offset = 0
side = np.arange(size, dtype=np.uint16)
for i in nb.prange(size):
offset = i * size - (i - 1) * i // 2
end = offset + size - i
x_coords[offset:end] = i
y_coords[offset:end] = side[i:]
return (x_coords, y_coords) if not swap else (y_coords, x_coords)
</code></pre>
<p>What it does is not important; the point is that it is JIT-compiled with Numba and therefore very fast:</p>
<pre><code>In [2]: triangle_half_UR_LL(10)
Out[2]:
(array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5,
5, 6, 6, 6, 6, 7, 7, 7, 8, 8, 9], dtype=uint16),
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4,
5, 6, 7, 8, 9, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9, 5, 6, 7, 8,
9, 6, 7, 8, 9, 7, 8, 9, 8, 9, 9], dtype=uint16))
In [3]: %timeit triangle_half_UR_LL(1000)
166 μs ± 489 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [4]: %timeit triangle_half_UR_LL(1000)
166 μs ± 270 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [5]: %timeit triangle_half_UR_LL(1000)
166 μs ± 506 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>Now if I define another function and JIT compile it with Numba, the performance of the fast function inexplicably drops:</p>
<pre><code>In [6]: @nb.njit(cache=True)
...: def dummy():
...: pass
In [7]: dummy()
In [8]: %timeit triangle_half_UR_LL(1000)
980 μs ± 20 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [9]: %timeit triangle_half_UR_LL(1000)
976 μs ± 9.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [10]: %timeit triangle_half_UR_LL(1000)
974 μs ± 3.11 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>This is real and reproducible: I start a new interpreter session, paste the code, and it runs fast. Then I define the dummy function, call it, and the fast function inexplicably slows down.</p>
<p>Screenshot as proof:</p>
<p><a href="https://i.sstatic.net/2qN783M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2qN783M6.png" alt="enter image description here" /></a></p>
<p>I am using Windows 11, and I absolutely have no idea what the hell is going on.</p>
<p>Is there an explanation for this? And how can I prevent this issue?</p>
<hr />
<p>Interestingly if I get rid of <code>nogil</code> parameter and without changing anything else, the problem magically goes away:</p>
<pre><code>In [1]: import numpy as np
...: import numba as nb
...:
...:
...: @nb.njit(cache=True, parallel=True)
...: def triangle_half_UR_LL(size: int, swap: bool = False) -> tuple[np.ndarray, np.ndarray]:
...: total = (size + 1) * size // 2
...: x_coords = np.full(total, 0, dtype=np.uint16)
...: y_coords = np.full(total, 0, dtype=np.uint16)
...: offset = 0
...: side = np.arange(size, dtype=np.uint16)
...: for i in nb.prange(size):
...: offset = i * size - (i - 1) * i // 2
...: end = offset + size - i
...: x_coords[offset:end] = i
...: y_coords[offset:end] = side[i:]
...:
...: return (x_coords, y_coords) if not swap else (y_coords, x_coords)
In [2]: %timeit triangle_half_UR_LL(1000)
186 μs ± 47.9 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [3]: %timeit triangle_half_UR_LL(1000)
167 μs ± 1.61 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [4]: %timeit triangle_half_UR_LL(1000)
166 μs ± 109 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [5]: @nb.njit(cache=True)
...: def dummy():
...: pass
In [6]: dummy()
In [7]: %timeit triangle_half_UR_LL(1000)
167 μs ± 308 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [8]: %timeit triangle_half_UR_LL(1000)
166 μs ± 312 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [9]: %timeit triangle_half_UR_LL(1000)
167 μs ± 624 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>Why does this happen?</p>
<hr />
<p>But no: even then, if I define other functions, the first function somehow slows down again. The simplest way to reproduce the issue is just redefining it:</p>
<pre><code>In [7]: dummy()
In [8]: %timeit triangle_half_UR_LL(1000)
168 μs ± 750 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [9]: import numpy as np
In [10]: %timeit triangle_half_UR_LL(1000)
167 μs ± 958 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [11]: import numba as nb
In [12]: %timeit triangle_half_UR_LL(1000)
167 μs ± 311 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [13]: @nb.njit(cache=True, parallel=True)
...: def triangle_half_UR_LL(size: int, swap: bool = False) -> tuple[np.ndarray, np.ndarray]:
...: total = (size + 1) * size // 2
...: x_coords = np.full(total, 0, dtype=np.uint16)
...: y_coords = np.full(total, 0, dtype=np.uint16)
...: offset = 0
...: side = np.arange(size, dtype=np.uint16)
...: for i in nb.prange(size):
...: offset = i * size - (i - 1) * i // 2
...: end = offset + size - i
...: x_coords[offset:end] = i
...: y_coords[offset:end] = side[i:]
...:
...: return (x_coords, y_coords) if not swap else (y_coords, x_coords)
In [14]: %timeit triangle_half_UR_LL(1000)
1.01 ms ± 94.3 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit triangle_half_UR_LL(1000)
964 μs ± 2.02 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>The slowdown also happens if I define the following function and call it:</p>
<pre><code>@nb.njit(cache=True)
def Farey_sequence(n: int) -> np.ndarray:
a, b, c, d = 0, 1, 1, n
result = [(a, b)]
while 0 <= c <= n:
k = (n + b) // d
a, b, c, d = c, d, k * c - a, k * d - b
result.append((a, b))
return np.array(result, dtype=np.uint64)
</code></pre>
<pre><code>In [6]: %timeit triangle_half_UR_LL(1000)
166 μs ± 296 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [7]: %timeit Farey_sequence(16)
The slowest run took 6.25 times longer than the fastest. This could mean that an intermediate result is being cached.
6.03 μs ± 5.72 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit Farey_sequence(16)
2.77 μs ± 50.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [9]: %timeit triangle_half_UR_LL(1000)
966 μs ± 6.48 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
|
<python><windows><numpy><numba>
|
2025-04-11 13:35:57
| 1
| 3,930
|
Ξένη Γήινος
|
79,568,862
| 1,073,410
|
Python dataclasses, inheritance and alternate class constructors
|
<p>In Python, I am using <code>dataclass</code> with the following class hierarchy:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class Foo:
id: str
@classmethod
def fromRaw(cls, raw: dict[str, str]) -> 'Foo':
return Foo(raw['id'])
@dataclass
class Bar(Foo):
name: str
@classmethod
def fromRaw(cls, raw: dict[str, str]) -> 'Bar':
return Bar(raw['id'], raw['name'])
</code></pre>
<p>However, it feels a bit redundant to instantiate all <code>Foo</code> members manually again in <code>Bar.fromRaw()</code>.</p>
<p>Is there a better way to re-use the ancestor's alternate constructor for example?</p>
|
<python><python-dataclasses>
|
2025-04-11 12:36:53
| 1
| 1,487
|
hvtilborg
|
79,568,585
| 1,182,469
|
Running poetry in using jenkins dockerfile
|
<p>I've got my dockerfile</p>
<pre><code>FROM git.corp.com:4567/some/python:3.11-slim
RUN apt update; \
apt install pipx -y; \
pipx install poetry; \
pipx ensurepath; \
chmod a +rx /root/.local/bin/poetry; \
ln -s /root/.local/bin/poetry /usr/bin/poetry; \
</code></pre>
<p>and my jenkins stage</p>
<pre><code>stage('Test') {
agent {
dockerfile{
filename 'Dockerfile.build'
args "-v $WORKSPACE:/app"
reuseNode true
}
}
steps {
sh """
ls -l poetry
poetry install --no-root -E tests -E mypy -E lint
PYTHONPATH="$PWD/src" pytest
"""
}
}
</code></pre>
<p>Why do I get this message?</p>
<pre><code>script.sh.copy: 3: poetry: Permission denied
</code></pre>
<p>I've changed the permissions using <code>chmod a +rx</code>.</p>
<p><code>ls -l poetry</code> output looks like this</p>
<pre><code>lrwxrwxrwx 1 root root 23 Apr 11 10:02 /usr/bin/poetry -> /root/.local/bin/poetry
</code></pre>
<p>I know Jenkins passes <code>-u 1000:1000</code> to the <code>docker run</code> command, but shouldn't chmod fix this?</p>
|
<python><linux><docker><jenkins><python-poetry>
|
2025-04-11 10:15:35
| 1
| 3,146
|
xander27
|
79,568,474
| 2,123,706
|
replace non zero result with 1 using a lambda function python
|
<p>I am calculating a binary result indicating whether the values in two columns have at least one match. They may well have multiple matches; I want to return 1 whenever they share one or more common values.</p>
<p>MRE below:</p>
<pre><code>df=pd.DataFrame({'col1':['a|b|c','a|b|c','a|b|c'],'col2':['b|d|e|a','d|e|f','a']})
df['new']=df[['col1','col2']].apply(lambda x: len(set(x['col1'].split('|')).intersection(set(x['col2'].split('|')))),axis=1)
</code></pre>
<p>I want to change the return of any value >1 with 1, ie</p>
<pre><code>pd.DataFrame({'col1':['a|b|c','a|b|c','a|b|c'],'col2':['b|d|e|a','d|e|f','a'],'new':[1,0,1]})
</code></pre>
<p>SOLVED</p>
<pre><code>df['new']=df[['col1','col2']].apply(lambda x: 1 if len(set(x['col1'].split('|')).intersection(set(x['col2'].split('|')))) >=1 else 0,axis=1)
</code></pre>
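<p>An equivalent form that skips the explicit if/else (it just casts the truthiness of the intersection to int) might be slightly tidier:</p>
<pre><code>df['new']=df.apply(lambda x: int(bool(set(x['col1'].split('|')) & set(x['col2'].split('|')))), axis=1)
</code></pre>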
|
<python><pandas><lambda>
|
2025-04-11 09:18:41
| 2
| 3,810
|
frank
|
79,568,436
| 4,235,960
|
Python Web Parsing - Avoid getting redirected to a local version of the website
|
<p>I am using the following code to parse websites:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time
from urllib.parse import urlparse
from bs4 import BeautifulSoup
import requests
def get_navigation_links(url, limit=500, wait_time=5):
def validate_url(url_string):
try:
result = urlparse(url_string)
if not result.scheme:
url_string = "https://" + url_string
result = urlparse(url_string)
return url_string if result.netloc else None
except:
return None
validated_url = validate_url(url)
if not validated_url:
raise ValueError("Invalid URL")
base_netloc = urlparse(validated_url).netloc.split(':')[0]
# Try JavaScript-rendered version first (Selenium)
try:
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-gpu")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--window-size=1920,1080")
driver = webdriver.Chrome(options=chrome_options)
driver.get(validated_url)
time.sleep(wait_time) # Allow JS to render
# Check if the current URL after loading is what you expect
current_url = driver.current_url
if base_netloc in current_url and current_url != validated_url:
print(f"Redirect detected: {current_url}. Scraping original URL.")
# Continue scraping the page only if the URL is as expected
a_tags = driver.find_elements(By.TAG_NAME, "a")
seen = set()
nav_links = []
for a in a_tags:
try:
href = a.get_attribute("href")
text = a.text.strip()
if href and text and urlparse(href).netloc.split(':')[0] == base_netloc:
if href not in seen:
seen.add(href)
nav_links.append((text, href))
except:
continue
driver.quit()
# If no navigation links found via Selenium, use BeautifulSoup
if not nav_links:
print("No navigation links found via Selenium. Falling back to BeautifulSoup.")
soup = BeautifulSoup(requests.get(validated_url).text, 'html.parser')
a_tags = soup.find_all('a')
for a in a_tags:
href = a.get('href')
text = a.get_text(strip=True)
if href and text and urlparse(href).netloc.split(':')[0] == base_netloc:
if href not in seen:
seen.add(href)
nav_links.append((text, href))
# Return first N links without filtering by keywords
return nav_links[:limit]
except Exception as e:
print(f"[Selenium failed: {e}] Falling back to BeautifulSoup.")
# Fallback to BeautifulSoup in case of an error with Selenium
soup = BeautifulSoup(requests.get(validated_url).text, 'html.parser')
a_tags = soup.find_all('a')
seen = set()
nav_links = []
for a in a_tags:
href = a.get('href')
text = a.get_text(strip=True)
if href and text and urlparse(href).netloc.split(':')[0] == base_netloc:
if href not in seen:
seen.add(href)
nav_links.append((text, href))
return nav_links[:limit]
</code></pre>
<p>The problem I am facing is that when I select a site (e.g. <a href="https://www.nike.com" rel="nofollow noreferrer">https://www.nike.com</a>), I get the local version of the site (Greek) instead of the US one. How can I avoid that and parse the American site whose URL I actually selected?</p>
|
<python><selenium-webdriver><parsing><beautifulsoup>
|
2025-04-11 09:01:38
| 2
| 3,315
|
adrCoder
|
79,568,360
| 17,040,989
|
determine position of an inserted string within another
|
<p>Following <a href="https://stackoverflow.com/questions/79562059/how-to-randomly-add-a-list-of-sequences-into-a-text-body">this post</a> I managed to put together a small function that places shorter strings, taken from another file, into a bigger text body (FASTA), subject to some conditions (<em>e.g.</em> 100 events drawn at random from the subset of strings 400-to-500 characters in length).</p>
<p>Now, I'm pretty fine with the result; however, for those 100 events I wish to print where exactly they have been added in the bigger text body, ideally as <strong>start-end</strong> positions if not too hard.</p>
<p>I guess this could be integrated in <code>get_retro_text()</code> or, if easier, built as an external function, but I cannot really figure out where to start; any help is greatly appreciated, thanks in advance!</p>
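<p>For example, I imagine reporting something like this for each of the 100 insertions (the names and positions below are made up):</p>
<pre><code>retro_hsap_12   start=1048576   end=1049021
retro_hsap_87   start=2310044   end=2310498
...
</code></pre>
<p>My current code:</p>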
<pre><code>###library import
from Bio import SeqIO
import random
###string import and wrangling
input_file = open("homo_sapiens_strings.fasta.txt")
my_dict = SeqIO.to_dict(SeqIO.parse(input_file, "fasta"))
s = []
for j in my_dict.values():
s.append(j)
###import FASTA --> some already made function I found to import and print whole FASTA genomes but testing on a part of it
def fasta_reader(filename):
from Bio.SeqIO.FastaIO import FastaIterator
with open(filename) as handle:
for record in FastaIterator(handle):
yield record
head = ""
body = ""
for entry in fasta_reader("hg37_chr1.fna"):
head = str(entry.id)
body = str(entry.seq)
###randomly selects 100 sequences and adds them to the FASTA
def insert (source_str, insert_str, pos):
return source_str[:pos] + insert_str + source_str[pos:]
def get_retro_text(genome, all_strings):
string_of_choice = [string for string in all_strings if 400 < len(string) < 500]
hundred_strings = random.sample(string_of_choice, k=100)
text_of_strings = []
for k in range(len(hundred_strings)):
text_of_strings.append(str(hundred_strings[k].seq))
single_string = ",".join(text_of_strings)
new_genome = insert(genome, single_string, random.randint(0, len(genome)))
return new_genome
big_genome = get_retro_text(body, s)
</code></pre>
<p><strong>EDIT</strong> example of structure of <code>body</code> and <code>s</code></p>
<p><code>body</code></p>
<pre><code>
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNtaaccctaaccctaacccta
accctaaccctaaccctaaccctaaccctaaccctaaccctaaccctaaccctaacccta
accctaaccctaaccctaaccctaacccaaccctaaccctaaccctaaccctaaccctaa
ccctaacccctaaccctaaccctaaccctaaccctaacctaaccctaaccctaaccctaa
ccctaaccctaaccctaaccctaaccctaacccctaaccctaaccctaaaccctaaaccc
taaccctaaccctaaccctaaccctaaccccaaccccaaccccaaccccaaccccaaccc
caaccctaacccctaaccctaaccctaaccctaccctaaccctaaccctaaccctaaccc
taaccctaacccctaacccctaaccctaaccctaaccctaaccctaaccctaaccctaac
ccctaaccctaaccctaaccctaaccctcgCGGTACCCTCAGCCGGCCCGCCCGCCCGGG
TCTGACCTGAGGAGAACTGTGCTCCGCCTTCAGAGTACCACCGAAATCTGTGCAGAGGAc
aacgcagctccgccctcgcggtGCTCtccgggtctgtgctgaggagaacgCAACTCCGCC
GTTGCAAAGGCGcgccgcgccggcgcaggcgcagagaggcgcgccgcgccggcgcaggcg
cagagaggcgcgccgcgccggcgcaggcgcagagaggcgcgccgcgccggcgcaggcgca
gagaggcgcgccgcgccggcgcaggcgcagagaggcgcgccgcgccggcgcaggcgcaga
caCATGCTAGCGCGTCGGGGTGGAGGCgtggcgcaggcgcagagaggcgcgccgcgccgg
cgcaggcgcagagacaCATGCTACCGCGTCCAGGGGTGGAGGCgtggcgcaggcgcagag
aggcgcaccgcgccggcgcaggcgcagagacaCATGCTAGCGCGTCCAGGGGTGGAGGCG
TggcgcaggcgcagagacgcAAGCCTAcgggcgggggttgggggggcgTGTGTTGCAGGA
GCAAAGTCGCACGGCGCCGGGCTGGGGCGGGGGGAGGGTGGCGCCGTGCACGCGCAGAAA
CTCACGTCACGGTGGCGCGGCGCAGAGACGGGTAGAACCTCAGTAATCCGAAAAGCCGGG
ATCGACCGCCCCTTGCTTGCAGCCGGGCACTACAGGACCCGCTTGCTCACGGTGCTGTGC
CAGGGCGCCCCCTGCTGGCGACTAGGGCAACTGCAGGGCTCTCTTGCTTAGAGTGGTGGC
CAGCGCCCCCTGCTGGCGCCGGGGCACTGCAGGGCCCTCTTGCTTACTGTATAGTGGTGG
CACGCCGCCTGCTGGCAGCTAGGGACATTGCAGGGTCCTCTTGCTCAAGGTGTAGTGGCA
GCACGCCCACCTGCTGGCAGCTGGGGACACTGCCGGGCCCTCTTGCTCCAACAGTACTGG
CGGATTATAGGGAAACACCCGGAGCATATGCTGTTTGGTCTCAGTAGACTCCTAAATATG
GGATTCCTgggtttaaaagtaaaaaataaatatgtttaatttgtGAACTGATTACCATCA
GAATTGTACTGTTCTGTATCCCACCAGCAATGTCTAGGAATGCCTGTTTCTCCACAAAGT
GTTtacttttggatttttgccagTCTAACAGGTGAAGCCCTGGAGATTCTTATTAGTGAT
TTGGGCTGGGGCCTGgccatgtgtatttttttaaatttccactgaTGATTTTGCTGCATG
GCCGGTGTTGAGAATGACTGCGCAAATTTGCCGGATTTCCTTTGCTGTTCCTGCATGTAG
TTTAAACGAGATTGCCAGCACCGGGTATCATTCACCATTTTTCTTTTCGTTAACTTGCCG
TCAGCCTTTTCTTTGACCTCTTCTTTCTGTTCATGTGTATTTGCTGTCTCTTAGCCCAGA
CTTCCCGTGTCCTTTCCACCGGGCCTTTGAGAGGTCACAGGGTCTTGATGCTGTGGTCTT
CATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCAC
TGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCATTGTTCAT
</code></pre>
<p><code>s</code></p>
<pre><code>[[SeqRecord(seq=Seq('ATGGCGGGACACCCGAAAGAGAGGGTGGTCACAGATGAGGTCCATCAGAACCAG...TAG'), id='retro_hsap_1', name='retro_hsap_1', description='retro_hsap_1', dbxrefs=[]), SeqRecord(seq=Seq('ATGGTCAACGTACCTAAAACCCGAAGAACCTTCTGTAAGAAGTGTGGCAAGCAT...TAA'), id='retro_hsap_2', name='retro_hsap_2', description='retro_hsap_2', dbxrefs=[]), SeqRecord(seq=Seq('ATGTCCACAATGGGAAACGAGGCCAGTTACCCGGCGGAGATGTGCTCCCACTTT...TGA'), id='retro_hsap_3', name='retro_hsap_3', description='retro_hsap_3', dbxrefs=[])]]
</code></pre>
|
<python><string><bioinformatics><biopython><fasta>
|
2025-04-11 08:22:05
| 1
| 403
|
Matteo
|
79,568,299
| 8,303,802
|
Executing and retrieving stored procedure returns in python with oracle db
|
<p>I am trying to run</p>
<pre><code>
plsql = """
DECLARE
l_zip BLOB;
BEGIN
l_zip := apex_export.zip( p_source_files => apex_export.get_workspace(:1),
p_extra_files => apex_t_export_files( apex_t_export_file( name => 'README.md', contents => 'Merch Read Write Workspace Contents.'),
apex_t_export_file( name => 'LICENSE.txt', contents => 'The Universal Permissive License (UPL), Version 1.0'))
);
:2 := l_zip;
END;
"""
zip_var = cursor.var(oracledb.DB_TYPE_BLOB)
cursor.execute(plsql, [workspace_id, zip_var])
with open("workspace_export.zip", "wb") as f:
f.write(zip_data)
</code></pre>
<p>I always get the error</p>
<pre><code>oracledb.exceptions.NotSupportedError: DPY-3002: Python value of type "tuple" is not supported
</code></pre>
<p>on the line <code>cursor.execute(plsql, [workspace_id, zip_var])</code>.</p>
|
<python><python-3.x><oracle-database><oracle-apex><python-oracledb>
|
2025-04-11 07:49:34
| 1
| 630
|
Sourabh
|
79,568,097
| 11,484,423
|
Python static class variable in nested class
|
<p>I have a nested class that uses static vars to have class wide parameters and accumulators.</p>
<p>If I do it as a standalone class it works.</p>
<p>If I do a nested class and inherit the standalone class, it works.</p>
<p>But I can't get a nested class to have static class variables; the interpreter gets confused. What am I doing wrong?</p>
<p>Code snippet:</p>
<pre class="lang-python prettyprint-override"><code>class Cl_static_parameter_standalone:
#static var common to all instances. Two uses: common settings, common accumulator
c_n_counter : int = 0
@staticmethod
def increment() -> int:
Cl_static_parameter_standalone.c_n_counter += 1
return Cl_static_parameter_standalone.c_n_counter
class Cl_some_class:
class Cl_static_parameter_inherited(Cl_static_parameter_standalone):
pass
class Cl_static_parameter_nested:
c_n_counter : int = 0
@staticmethod
def increment() -> int:
Cl_static_parameter_nested.c_n_counter += 1
return Cl_static_parameter_nested.c_n_counter
def __init__(self):
return
def do_something(self):
print(f"Execute Standalone: {Cl_static_parameter_standalone.increment()}")
print(f"Execute Inherited: {self.Cl_static_parameter_inherited.increment()}")
print(f"Execute Nested: {self.Cl_static_parameter_nested.increment()}")
return
my_instance = Cl_some_class()
my_instance.do_something()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Execute Standalone: 1
Execute Inherited: 2
Traceback (most recent call last):
File "stack_overflow_class_static_parameter.py", line 52, in <module>
my_instance.do_something()
~~~~~~~~~~~~~~~~~~~~~~~~^^
File "stack_overflow_class_static_parameter.py", line 48, in do_something
print(f"Execute Nested:{self.Cl_static_parameter_nested.increment()}")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "stack_overflow_class_static_parameter.py", line 38, in increment
Cl_static_parameter_nested.c_n_counter += 1
^^^^^^^^^^^^^^^^^^^^^^^^^^
NameError: name 'Cl_static_parameter_nested' is not defined. Did you mean: 'Cl_static_parameter_standalone'?
</code></pre>
|
<python><inner-classes><static-variables>
|
2025-04-11 06:00:18
| 1
| 670
|
05032 Mendicant Bias
|
79,568,068
| 4,473,615
|
Extract specific dictionary value from dataframe in PySpark having case insensitive attributes
|
<p>I have a below dataframe</p>
<pre><code>dataDictionary = [('value1', [{'key': 'Fruit', 'value': 'Apple'}, {'key': 'Colour', 'value': 'White'}]),
('value2', [{'key': 'Fruit', 'value': 'Mango'}, {'key': 'Bird', 'value': 'Eagle'}, {'key': 'Colour', 'value': 'Black'}]),
('value3', [{'key': 'Fruit', 'value': 'Apple'}, {'key': 'colour', 'value': 'Blue'}])]
df = spark.createDataFrame(data=dataDictionary)
df.printSchema()
df.show(truncate=False)
</code></pre>
<pre><code>+------+------------------------------------------------------------------------------------------------+
|_1 |_2 |
+------+------------------------------------------------------------------------------------------------+
|value1|[{value -> Apple, key -> Fruit}, {value -> White, key -> Colour}] |
|value2|[{value -> Mango, key -> Fruit}, {value -> Eagle, key -> Bird}, {value -> Black, key -> Colour}]|
|value3|[{value -> Apple, key -> Fruit}, {value -> Blue, key -> colour}]
+------+------------------------------------------------------------------------------------------------+
</code></pre>
<p>I want to extract only the values for key -> Colour, and I'm using the code below to get the exact result:</p>
<pre><code>from pyspark.sql import SparkSession, functions as F
...
df = df.select('_1', F.filter('_2', lambda x: x['key'] == 'Colour')[0]['value'])
</code></pre>
<p>result,</p>
<pre><code>_1 _2
value1 White
value2 Black
value3
</code></pre>
<p>But for value3 there is no result, because its key is lower-case <code>colour</code>; for value1 and value2 the key is capitalised <code>Colour</code>, which matches the lambda function <code>F.filter('_2', lambda x: x['key'] == 'Colour')[0]['value']</code>. I tried using upper to handle all three scenarios, but it's not working:</p>
<p><code>F.filter('_2', lambda x: x['key'].upper() == 'COLOUR')[0]['value']</code></p>
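<p>The result I'm after, with all three keys matched case-insensitively, is:</p>
<pre><code>_1      _2
value1  White
value2  Black
value3  Blue
</code></pre>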
|
<python><pyspark><lambda>
|
2025-04-11 05:36:29
| 1
| 5,241
|
Jim Macaulay
|
79,567,933
| 16,686,130
|
Using a class property as an iterable produces a reassign warning
|
<p>I need to use an iterable and a loop variable that are both instance attributes.
But the flake8 checker produces a B020 warning:</p>
<pre><code>easy.py:11:13: B020 Found for loop that reassigns the iterable it is iterating with each iterable value.
</code></pre>
<p>If I use a plain local variable for the iterable, it's OK.</p>
<p>What is wrong?</p>
<p>The warning example:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
"""Example of B020 error."""
class My_Template:
"""My example."""
def __init__(self, *template):
"""Obviously init."""
self.all_templates = (1, 2, 3)
for self.tpl in self.all_templates:
print(self.tpl)
</code></pre>
<p>The flake8 complains about loop variable:</p>
<pre><code>easy.py:11:13: B020 Found for loop that reassigns the iterable it is iterating with each iterable value.
</code></pre>
<p>The OK example:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
"""Example of B020 error."""
class My_Template:
"""My example."""
def __init__(self, *template):
"""Obviously init."""
all_templates = (1, 2, 3)
for self.tpl in all_templates:
print(self.tpl)
</code></pre>
|
<python><python-3.x><flake8>
|
2025-04-11 03:26:06
| 1
| 608
|
Sergey Zaykov
|
79,567,641
| 6,386,632
|
Filter a dict, keeping only certain keys, and applying defaults where they are missing
|
<p>In Python, I have a dict (called <code>params</code>) containing a variable list of params. I want a function to return this dict with only certain keys kept, and default values applied if these keys are missing in the first dict.</p>
<p>E.g.</p>
<pre class="lang-py prettyprint-override"><code>params = {'a':1,'b':2,'z':'bad'}
paramsToKeepWithDefaults = {'a':0, 'c':0}
assert(f(paramsToKeepWithDefaults,params) == {'a':1,'c':0})
</code></pre>
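<p>Another case, to make the expected behaviour fully explicit:</p>
<pre class="lang-py prettyprint-override"><code>params2 = {'c':5, 'z':'bad'}
assert(f(paramsToKeepWithDefaults, params2) == {'a':0, 'c':5})
</code></pre>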
<p>How in this example could <code>f</code> be defined?</p>
<p>(It could also be useful to allow defaults to be optional, in an elegant way... my use case here is for parameter definitions and application, and I'd like the code to be as readable and concise as reasonably possible)</p>
<p>This is similar to <a href="https://stackoverflow.com/questions/3420122/filter-dict-to-contain-only-certain-keys">Filter dict to contain only certain keys?</a>, but I want default values as well. I'm using Python >3.6.</p>
|
<python>
|
2025-04-10 22:02:57
| 6
| 2,034
|
phhu
|
79,567,480
| 8,284,452
|
How to select from xarray.Dataset without hardcoding the name of the dimension?
|
<p>When selecting data from an xarray.Dataset type, <a href="https://docs.xarray.dev/en/latest/user-guide/indexing.html#quick-overview" rel="nofollow noreferrer">the examples they provide</a> all include hardcoding the name of the dimension like so:</p>
<pre class="lang-py prettyprint-override"><code>ds = ds.sel(state_name='California')
</code></pre>
<p><strong>TLDR;</strong> How can you select from a dataset without hardcoding the dimension name? How would I achieve something like this since the below doesn't work?</p>
<pre class="lang-py prettyprint-override"><code>dimName = 'state_name'
ds = ds.sel(dimName='California')
</code></pre>
<p>I have a situation where I won't know the name of the dimension to make my selection on until runtime of the application, but I can't figure out how to select the data with xarray's methods unless I know the dimension name ahead of time. For instance, let's say I have a dataset like this, where <code>dim2</code>, <code>dim3</code>, and <code>dim4</code> all correspond to ID numbers of different spatial bounds that a user could select on a map:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
dim2 = ['12', '34', '56', '78']
dim3 = ['121', '341', '561', '781']
dim4 = ['1211', '3411', '5611', '7811']
time_mn = np.arange(1, 61)
ds1 = xr.Dataset(
data_vars={
'prcp_dim2': (['dim2', 'time_mn'], np.random.rand(len(dim2), len(time_mn))),
'prcp_dim3': (['dim3', 'time_mn'], np.random.rand(len(dim3), len(time_mn))),
'prcp_dim4': (['dim4', 'time_mn'], np.random.rand(len(dim4), len(time_mn))),
},
coords={
'dim2': (['dim2'], dim2),
'dim3': (['dim3'], dim3),
'dim4': (['dim4'], dim4),
'time_mn': (['time_mn'], time_mn)
}
)
print(ds1)
<xarray.Dataset> Size: 6kB
Dimensions: (dim2: 4, time_mn: 60, dim3: 4, dim4: 4)
Coordinates:
* dim2 (dim2) <U2 32B '12' '34' '56' '78'
* dim3 (dim3) <U3 48B '121' '341' '561' '781'
* dim4 (dim4) <U4 64B '1211' '3411' '5611' '7811'
* time_mn (time_mn) int64 480B 1 2 3 4 5 6 7 8 ... 53 54 55 56 57 58 59 60
Data variables:
prcp_dim2 (dim2, time_mn) float64 2kB 0.8804 0.2733 ... 0.3227 0.4637
prcp_dim3 (dim3, time_mn) float64 2kB 0.1391 0.4541 ... 0.1688 0.3271
prcp_dim4 (dim4, time_mn) float64 2kB 0.4784 0.6666 ... 0.3619 0.4864
</code></pre>
<p>Now let's say a map is presented to a user and the user chooses ID <code>78</code> to calculate something from the dataset. From this ID, I can glean that the dimension value <code>78</code> belongs to is <code>dim2</code>. How would I then make a selection on the xarray dataset where <code>dim2=78</code> without hardcoding <code>dim2</code> in?</p>
<pre class="lang-py prettyprint-override"><code>selectedID = request.get('id') #This is the user's choice, let's say they chose '78'.
#Get the dimension name the selectedID belongs to
if len(selectedID) == 2:
selectedDimension = 'dim2'
elif len(selectedID) == 3:
selectedDimension = 'dim3'
elif len(selectedID) == 4:
selectedDimension = 'dim4'
#This is what I want to be able to do, but it does not work
ds = ds.sel(selectedDimension=selectedID)
</code></pre>
<p>Is there a way to select the data without hardcoding the dimension name?</p>
<p>Edit: I do realize there is a solution like the one below, but it falls apart if, say, I wanted to put the <em>above</em> version of the if/else in a callable function: I could be reusing it elsewhere, and I don't necessarily want to select the data at the moment I call the function.</p>
<pre class="lang-py prettyprint-override"><code>if len(selectedID) == 2:
ds = ds.sel(dim2=selectedID)
elif len(selectedID) == 3:
ds = ds.sel(dim3=selectedID)
elif len(selectedID) == 4:
ds = ds.sel(dim4=selectedID)
</code></pre>
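<p>In other words, I'd like a reusable helper that only maps an ID to its dimension name, without selecting anything (a hypothetical sketch):</p>
<pre class="lang-py prettyprint-override"><code>def get_dimension(selected_id: str) -> str:
    """Map a spatial ID to the name of the dimension it belongs to."""
    return f'dim{len(selected_id)}'

selectedDimension = get_dimension(selectedID)  # e.g. '78' -> 'dim2'
</code></pre>
<p>The open question is then still how to pass <code>selectedDimension</code> to <code>.sel()</code>.</p>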
|
<python><python-xarray><netcdf><netcdf4>
|
2025-04-10 19:54:16
| 1
| 686
|
MKF
|
79,567,429
| 14,298,880
|
Duplicate and rename columns on pandas DataFrame
|
<p>I guess this must be rather simple, but I'm struggling to find the easy way of doing it.</p>
<p>I have a pandas DataFrame with the columns A to D and need to copy some of the columns to new ones. The trick is that it not just involves renaming, I also need to duplicate the values to new columns as well.</p>
<p>Here is an example of the input:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
'A': [1,2,3],
'B':['2025-10-01', '2025-10-02', '2025-10-01'],
'C': ['2025-02-10', '2025-02-15', '2025-02-20'],
'D': [0, 5, 4],
'values': [52.3, 60, 70.6]
})
mapping_dict = {
'table_1': {
'id': 'A',
'dt_start': 'B',
'dt_end': 'B',
},
'table_2': {
'id': 'D',
'dt_start': 'C',
'dt_end': 'C',
},
}
</code></pre>
<p>I'd like to have as output for <code>table_1</code> a DataFrame as follows:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: center;">id</th>
<th style="text-align: center;">dt_start</th>
<th style="text-align: center;">dt_end</th>
<th style="text-align: center;">values</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">2025-10-01</td>
<td style="text-align: center;">2025-10-01</td>
<td style="text-align: center;">52.3</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">2025-10-02</td>
<td style="text-align: center;">2025-10-02</td>
<td style="text-align: center;">60</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">2025-10-01</td>
<td style="text-align: center;">2025-10-01</td>
<td style="text-align: center;">80.6</td>
</tr>
</tbody>
</table></div>
<p>And I guess it is possible to infer the expected output for <code>table_2</code>.</p>
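<p>Spelled out, that would be:</p>
<pre><code>   id    dt_start      dt_end  values
0   0  2025-02-10  2025-02-10    52.3
1   5  2025-02-15  2025-02-15    60.0
2   4  2025-02-20  2025-02-20    70.6
</code></pre>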
<p>Note that the column <code>values</code>, which is not included in the mapping logic, should remain in the dataframe.</p>
<p>I was able to achieve this by using a for loop, but I feel there should be a natural way of doing this directly in pandas, without manually looping over the mapping dict and then dropping the extra columns.</p>
<p>Here is my solution so far:</p>
<pre class="lang-py prettyprint-override"><code>table_name = 'table_1'
new_df = df.copy()
for new_col, old_col in mapping_dict[table_name].items():
new_df[new_col] = df[old_col]
new_df = new_df.drop(mapping_dict[table_name].values(), axis='columns')
</code></pre>
<p>Any help or suggestion will be appreciated!</p>
|
<python><pandas><dataframe>
|
2025-04-10 19:24:58
| 1
| 1,553
|
Ralubrusto
|
79,567,168
| 11,062,613
|
How can I get LLVM loop vectorization debug output in Numba?
|
<p>I'm trying to view LLVM debug messages for loop vectorization using Numba and llvmlite. I want to see the loop vectorization "LV:" debug output (e.g., messages like LV: Checking a loop in ...) so I can analyze the vectorization decisions made by LLVM.
<a href="https://numba.readthedocs.io/en/stable/user/faq.html#does-numba-vectorize-array-computations-simd" rel="nofollow noreferrer">https://numba.readthedocs.io/en/stable/user/faq.html#does-numba-vectorize-array-computations-simd</a></p>
<p>I'm using a conda environment with the following YAML file. This makes sure I’m using the llvmlite build from the numba channel (which should have LLVM built with assertions enabled):</p>
<pre><code>name: numbadev
channels:
- defaults
- numba
dependencies:
- python>=3.12.9
- numba::numba
- numba::llvmlite
- intel-cmplr-lib-rt
</code></pre>
<p>Running conda list shows:</p>
<pre><code># Name Version Build Channel
python 3.13.2 hf623796_100_cp313
llvmlite 0.44.0 py313h84b9e52_0 numba
numba 0.61.2 np2.1py3.13hf94e718_g1e70d8ceb_0 numba
...
</code></pre>
<p>This is my script debug_loop_vectorization.py:</p>
<pre><code>import llvmlite.binding as llvm
llvm.set_option('', '--debug-only=loop-vectorize')
import numpy as np
from numba import jit
@jit(nopython=True, fastmath=True)
def test_func(a):
result = 0.0
for i in range(a.shape[0]):
result += a[i] * 2.0
return result
# Trigger compilation.
a = np.arange(1000, dtype=np.float64)
test_func(a)
# Print the LLVM IR
print(test_func.inspect_llvm(test_func.signatures[0]))
</code></pre>
<p>When I run the script from a Linux terminal I see the LLVM-IR but no lines starting with "LV:...".</p>
<p>How can I get LLVM loop vectorization debug output?</p>
<p>Edit:</p>
<p>I've tried to call the script from the Linux terminal like this:</p>
<pre><code>conda activate numbadev
cd /PathToMyFile/
python debug_loop_vectorization.py
</code></pre>
<p>or like this:</p>
<pre><code>LLVM_DEBUG=1 python debug_loop_vectorization.py 2>&1 | grep '^LV:'
</code></pre>
<p>Edit:</p>
<p>In llvmlite/conda-recipes/llvmdev,</p>
<p>bld.bat contains the cmake argument <code>-DLLVM_ENABLE_ASSERTIONS=ON</code>.</p>
<p><a href="https://github.com/numba/llvmlite/blob/main/conda-recipes/llvmdev/bld.bat#L32" rel="nofollow noreferrer">https://github.com/numba/llvmlite/blob/main/conda-recipes/llvmdev/bld.bat#L32</a></p>
<p>build.sh doesn't contain "_cmake_config+=(-DLLVM_ENABLE_ASSERTIONS:BOOL=ON)" anymore.</p>
<p><a href="https://github.com/numba/llvmlite/blob/main/conda-recipes/llvmdev/build.sh" rel="nofollow noreferrer">https://github.com/numba/llvmlite/blob/main/conda-recipes/llvmdev/build.sh</a></p>
<p>Edit:</p>
<p>LLVM debug messages have been disabled in the Linux build unintentionally and should be enabled in LLVMlite v0.45.0 on the Numba channel.</p>
|
<python><linux><llvm><numba><llvmlite>
|
2025-04-10 16:31:05
| 1
| 423
|
Olibarer
|
79,567,094
| 2,618,676
|
My testing loss isn't improving while I'm using same train and test data
|
<p>I'm trying to fine-tune a model for a segmentation task. To see if everything is working properly, I'm trying to make my model overfit on a single image split into 16 patches.
<br> So in my code, I used the exact same DataLoader for my training and testing.
<br> What I don't understand is that my training loss is improving whereas my testing loss isn't moving a single bit. If I'm training and testing on the same data, the training and testing losses should be approximately the same, right?</p>
<p>Here is my code:</p>
<p>The customDataSet :</p>
<pre><code>class CustomImageDataset(Dataset):
def __init__(self, seg_dir, img_dir, loc, contrast, transform=None, target_transform=None):
        self.img_labels = seg_dir  # directory of the segmentation labels
        self.img_dir = img_dir  # directory of the images
self.loc = loc
self.contrast = contrast
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len([entry for entry in os.listdir(self.img_labels) if os.path.isfile(os.path.join(self.img_labels, entry))])-1
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, "patch_"+str(idx)+".nii")
image = Image(img_path)
label_path = os.path.join(self.img_labels, "label_"+str(idx)+".nii")
label = Image(label_path)
image_data = image.data
image_data = np.float32(np.expand_dims(image_data, axis = 0))
label_data = label.data
label_data = np.float32(np.expand_dims(label_data, axis = 0))
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image_data, label_data
</code></pre>
<p>The training and testing loops:</p>
<pre><code>def train_loop(dataloader, model, loss_fn, optimizer, batch_size):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
pred = model(X)
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
loss, current = loss.item(), batch * batch_size + len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test_loop(dataloader, model, loss_fn, batch_size):
# Set the model to evaluation mode - important for batch normalization and dropout layers
model.eval()
size = len(dataloader.dataset)
#print("Size = ", size)
num_batches = len(dataloader)
#print("num_batches = ", num_batches)
test_loss, correct = 0, 0
# Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
# also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
with torch.no_grad():
for X, y in dataloader:
pred = model(X)
test_loss += loss_fn(pred, y).item()
#print("Test loss = ", test_loss)
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
</code></pre>
<p>And my implementation of 3D-DiceLoss:</p>
<pre><code>class DiceLoss(nn.Module):
    def __init__(self):
        super(DiceLoss, self).__init__()

    def forward(self, pred, target):
        smooth = 0.1
        # Calculate intersection and union
        intersection = (pred * target).sum()
        union = pred.sum() + target.sum()
        dice = (2. * intersection) / (union + 1e-8)
        #intersection = (pred * target).sum(dim=(2, 3, 4))
        #union = pred.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
        # Compute Dice Coefficient
        #dice = (2. * intersection + smooth) / (union + smooth)
        # Return Dice Loss
        return 1 - dice
</code></pre>
<p>Is all of this expected behavior due to <code>model.train()</code> vs <code>model.eval()</code>, or is something wrong in my code or my network?</p>
|
<python><tensorflow><loss-function><dice>
|
2025-04-10 15:48:27
| 0
| 331
|
Antoine
|
79,567,079
| 16,011,842
|
Posting to API with cURL works while Python requests fails
|
<p>I have a cURL command that works when uploading a file while the equivalent Python requests fails.</p>
<p>Here is the cURL command</p>
<pre><code>curl -X POST "http://<DOMAIN>/polarion/rest/v1/projects/<PROJECT_ID>/workitems/<WORKITEM_ID>/attachments" \
-H "Authorization: Bearer <TOKEN>" \
-H "accept: application/json" \
-H "Content-Type: multipart/form-data" \
-F resource="{\"data\": [{\"type\": \"workitem_attachments\",\"attributes\": {\"fileName\": \"test.txt\",\"title\": \"test\"}}]}" \
-F files=@"test.txt"
</code></pre>
<p>And here is the Python</p>
<pre><code>import requests
import json
url = "http://<DOMAIN>/polarion/rest/v1/projects/<PROJECT_ID>/workitems/<WORKITEM_ID>/attachments"
headers = {
"Content-Type": "multipart/form-data",
"accept": 'application/json',
"Authorization": "Bearer " + token
}
data = {
"data": [
{
"type": "workitem_attachments",
"attributes": {
"fileName": "test.txt",
"title": "test"
}
}
]
}
files = {
'resource': (None, json.dumps(data)),
'files': ("test.txt", open('test.txt', 'rb'))
}
r = requests.post(url, files=files, headers=headers)
</code></pre>
<p>While the cURL command works the Python fails with:</p>
<pre><code>{"errors":[{"status":"400","title":"Bad Request","detail":null,"source":null}]}
</code></pre>
<p>I've compared the body for both from <code>--trace-ascii</code> and <code>Request.prepare</code> and they appear to be identical. Unless there is an encoding issue, I'm not sure what's wrong.</p>
<p>Here is the API reference:
<a href="https://testdrive.polarion.com/polarion/sdk/doc/rest/index.html#api-WorkItemAttachments-postWorkItemAttachments" rel="nofollow noreferrer">https://testdrive.polarion.com/polarion/sdk/doc/rest/index.html#api-WorkItemAttachments-postWorkItemAttachments</a></p>
|
<python><curl><python-requests><polarion>
|
2025-04-10 15:41:29
| 1
| 329
|
barneyAgumble
|
79,567,052
| 2,123,706
|
how to search for multiple substrings in a column using polars
|
<p>In pandas, I can search for multiple different substrings in a column, and determine which rows they can be found in. How can I do this in polars?</p>
<p>my pandas code:</p>
<pre><code>df = pd.DataFrame({'col1':['a red','dog','liked to play','with his best friend','the CAT']})
ls = ['best','dog']
df[df.col1.str.contains('|'.join(ls))]
</code></pre>
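<p>With this example data it keeps the rows containing 'best' or 'dog', and that same result is what I'm after in polars:</p>
<pre><code>                   col1
1                   dog
3  with his best friend
</code></pre>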
|
<python><dataframe><contains><python-polars>
|
2025-04-10 15:25:32
| 0
| 3,810
|
frank
|
79,567,049
| 5,100,278
|
I have installed selenium 4.31, but Selenium Manager is not picking up the chrome driver
|
<p>I have a Python project. I installed selenium as <code>pip install -U selenium</code>.
When I run <code>pip show selenium</code>, it outputs:
<code>Name: selenium Version: 4.31.0</code></p>
<p>However, when I run this simple code <code>script.py</code>:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://selenium.dev/')
driver.quit()
</code></pre>
<p>I get the following error:</p>
<p><code>selenium.common.exceptions.NoSuchDriverException: Message: Unable to obtain driver for chrome</code></p>
<p>I thought Selenium Manager was included in Selenium 4.31 and took care of the driver installation for you. What's going on?</p>
|
<python><selenium-webdriver>
|
2025-04-10 15:24:57
| 2
| 10,216
|
Kingamere
|
79,566,952
| 1,406,168
|
How to authenticate a Python package between two repos in GitHub and deploy
|
<p>We have a private utility repo on GitHub and we would like to be able to install this package from a GitHub Actions workflow. When developing locally, I created an SSH key, uploaded the public key to the repo's deploy keys on GitHub, and everything works fine locally.</p>
<p><code>requirements.txt</code>:</p>
<pre><code>annotated-types==0.7.0
anyio==4.7.0
certifi==2024.8.30
cffi==1.17.1
click==8.1.7
colorama==0.4.6
cryptography==44.0.0
fastapi==0.115.6
fastapi-azure-auth==5.0.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
pycparser==2.22
pydantic==2.10.3
pydantic-settings==2.6.1
pydantic_core==2.27.1
PyJWT==2.10.1
python-dotenv==1.0.1
sniffio==1.3.1
starlette==0.41.3
typing_extensions==4.12.2
uvicorn==0.32.1
azure-monitor-opentelemetry
databricks
databricks.sdk
databricks-sql-connector
pyarrow
pandas
sqlalchemy
numpy
azure-identity
opencensus-ext-azure
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-fastapi
git+ssh://github.com/xxx-xx/xx-utils.git@main
</code></pre>
<p>When doing a <code>pip install</code> locally, the package installs thanks to the local SSH setup. How do we deploy this? Is it possible to authenticate the two repos to each other somehow? When the action runs in GitHub it fails, because the repo is private. Can it be configured to authenticate, or do you need to add some SSH setup in the pipeline?</p>
<pre class="lang-yaml prettyprint-override"><code>name: Build and deploy Python app to Azure Web App - xx-xx-port-api-dev
on:
push:
branches:
- dev
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python version
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
- name: Zip artifact for deployment
run: |
zip -r release.zip . -x "venv/*"
- name: Upload artifact for deployment jobs
uses: actions/upload-artifact@v4
with:
name: python-app
path: release.zip
deploy:
runs-on: ubuntu-latest
needs: build
environment:
name: 'Production'
url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
permissions:
id-token: write
contents: read
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v4
with:
name: python-app
- name: Unzip artifact for deployment
run: unzip release.zip
- name: Login to Azure
uses: azure/login@v2
with:
client-id: ${{ secrets.xx}}
tenant-id: ${{ secrets.xx}}
subscription-id: ${{ secrets.xx}}
- name: Deploy to Azure Web App
uses: azure/webapps-deploy@v3
id: deploy-to-webapp
with:
app-name: 'xx-xx-xx-api-dev'
slot-name: 'Production'
package: .
</code></pre>
|
<python><azure><github><github-actions><ssh-keys>
|
2025-04-10 14:34:35
| 1
| 5,363
|
Thomas Segato
|
79,566,873
| 4,451,521
|
Why is Swagger UI not handling optional files properly?
|
<p>I have a minimal example in this script:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, File, Form, UploadFile
from typing import Optional
from fastapi.responses import JSONResponse
app = FastAPI()
@app.post("/evaluate")
async def evaluate(
server_id: str = Form(...),
csv_file: Optional[UploadFile] = File(None) # Optional CSV file
):
if csv_file:
return JSONResponse(content={
"server_id": server_id,
"csv_file_name": csv_file.filename
})
else:
return JSONResponse(content={
"server_id": server_id,
"csv_file_name": "No CSV file provided"
})
</code></pre>
<p>After installing the required dependencies:</p>
<pre class="lang-bash prettyprint-override"><code>pip install fastapi uvicorn python-dotenv python-multipart
</code></pre>
<p>I run it with:</p>
<pre class="lang-bash prettyprint-override"><code>uvicorn main:app --reload
</code></pre>
<p>Now when I do a curl without a CSV like:</p>
<pre class="lang-bash prettyprint-override"><code>$ curl -X 'POST' 'http://127.0.0.1:8000/evaluate' -H 'accept: application/json' -F 'server_id=123'
{"server_id":"123","csv_file_name":"No CSV file provided"}
</code></pre>
<p>I get "No CSV...", which is fine. Also if I do:</p>
<pre class="lang-bash prettyprint-override"><code>$ curl -X 'POST' 'http://127.0.0.1:8000/evaluate' -H 'accept: application/json' -F 'server_id=123' -F 'csv_file=@nada.csv'
{"server_id":"123","csv_file_name":"nada.csv"}
</code></pre>
<p>I get the name of the CSV.</p>
<p>However if I do this with Swagger UI (<a href="http://127.0.0.1:8000/docs" rel="nofollow noreferrer">http://127.0.0.1:8000/docs</a>) without the CSV I get an error 422 "Unprocessable entity":</p>
<p><a href="https://i.sstatic.net/eJ2cebvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eJ2cebvI.png" alt="enter image description here" /></a></p>
<p>Response body:</p>
<pre class="lang-json prettyprint-override"><code>{
"detail": [
{
"type": "value_error",
"loc": [
"body",
"csv_file"
],
"msg": "Value error, Expected UploadFile, received: <class 'str'>",
"input": "",
"ctx": {
"error": {}
}
}
]
}
</code></pre>
<p>Is there something wrong with the code or can Swagger UI not handle this?</p>
|
<python><swagger><fastapi><swagger-ui>
|
2025-04-10 14:01:57
| 0
| 10,576
|
KansaiRobot
|
79,566,786
| 2,350,145
|
How to create nicegui table in sideways manner?
|
<p>I want to have a NiceGUI table laid out sideways (3x3) like below:</p>
<p><a href="https://i.sstatic.net/FyHcR1mV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyHcR1mV.png" alt="3x3 table" /></a></p>
<p>I am using this code, but it does not give the expected layout:</p>
<pre class="lang-py prettyprint-override"><code>from nicegui import ui
test_settings = {
"Load duration": "5 min",
"Requested by": "eldsinghaitenant",
"Run source": "Team Services portal",
"Start time": "3/16/2016 10:54:15 AM",
"Test target": "-",
"Warmup duration": "-",
"End time": "3/16/2016 10:59:18 AM",
"Location": "South Central US",
"Agent cores": "1",
}
with ui.grid(columns=3).classes('w-full'):
for key, value in test_settings.items():
ui.label(key).classes('font-bold')
ui.label(value)
ui.run()
</code></pre>
<p><a href="https://i.sstatic.net/AUEV4z8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AUEV4z8J.png" alt="code output" /></a></p>
|
<python><python-3.x><nicegui>
|
2025-04-10 13:30:07
| 1
| 5,808
|
Anirban Nag 'tintinmj'
|
79,566,761
| 2,266,881
|
Sort a polars dataframe based on an external list
|
<p>Morning,</p>
<p>I'm not sure if this can be achieved.</p>
<p>Let's say I have a polars dataframe with columns a and b (whatever).</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ str │
╞═════╪═════╡
│ 1 ┆ x │
│ 2 ┆ y │
│ 3 ┆ z │
│ 4 ┆ p │
│ 5 ┆ f │
└─────┴─────┘
""")
</code></pre>
<p>And a list <code>l = [1,3,5,2,4]</code>: is it possible to sort the dataframe (using column "a") with the list <code>l</code> as the sorting order?</p>
<pre><code>┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ str │
╞═════╪═════╡
│ 1 ┆ x │
│ 3 ┆ z │
│ 5 ┆ f │
│ 2 ┆ y │
│ 4 ┆ p │
└─────┴─────┘
</code></pre>
<p>Thanks in advance!</p>
|
<python><dataframe><python-polars><polars>
|
2025-04-10 13:19:01
| 2
| 1,594
|
Ghost
|
79,566,403
| 8,037,521
|
Problems going between maximized and sidebar-only window
|
<p>I have a specific situation where I do not understand how Qt works and what exactly is wrong here. Below I describe two observable issues, but I suppose they follow from the same underlying mistake in the code.</p>
<p>My goal:</p>
<ol>
<li>Have app with left and right side. Left always visible, right can be hidden.</li>
<li>App can be maximized or any size above some specific minimum size when right panel is visible.</li>
<li>When right panel is <strong>not</strong> visible, app has fixed width and <strong>cannot</strong> be maximized.</li>
<li>By default, app is maximized on start of the app.</li>
<li>If the app was maximized when right panel was hidden, the app goes to the fixed size defined for left-panel-only mode, but when right panel is shown again, app is maximized again.</li>
<li>If the app was not maximized when right panel was hidden, it does <strong>not</strong> maximize when the right panel is shown again, instead just adapting based on the minimum size.</li>
</ol>
<p>Based on the description above, I made this MRE:</p>
<pre><code>import sys
from PySide6.QtCore import QEvent
from PySide6.QtWidgets import (
QApplication,
QMainWindow,
QWidget,
QVBoxLayout,
QPushButton,
QHBoxLayout,
QSizePolicy,
)
hide_viewer_panel = False
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("MRE Example")
# Sidebar
self.sidebar = QWidget(self)
self.sidebar.setStyleSheet("background-color: lightgray;")
self.sidebar.setMinimumWidth(200)
self.sidebar.setMinimumHeight(500)
# Viewer Panel
self.viewer_panel = QWidget(self)
self.viewer_panel.setStyleSheet("background-color: lightblue;")
self.viewer_panel.setMinimumWidth(300)
# Layout
main_layout = QHBoxLayout()
main_layout.addWidget(self.sidebar)
main_layout.addWidget(self.viewer_panel)
# Central widget
central_widget = QWidget(self)
central_widget.setLayout(main_layout)
self.setCentralWidget(central_widget)
# Sidebar button
self.hide_button = QPushButton("Hide/Show Viewer Panel", self)
self.hide_button.clicked.connect(self.hide_visu_panel)
self.sidebar_layout = QVBoxLayout(self.sidebar)
self.sidebar_layout.addWidget(self.hide_button)
# Window maximized state tracking
self.was_maximized = not hide_viewer_panel
# Apply initial settings
self.adapt_viewer_panel()
def hide_visu_panel(self):
global hide_viewer_panel
hide_viewer_panel = not hide_viewer_panel
self.adapt_viewer_panel()
def adapt_viewer_panel(self):
if hide_viewer_panel:
self.setFixedWidth(200)
self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Preferred)
self.viewer_panel.hide()
else:
self.setMinimumWidth(1500)
self.setMaximumWidth(16777215)
self.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Preferred)
self.viewer_panel.show()
self.updateGeometry()
self.adjustSize()
self.repaint()
if not hide_viewer_panel and self.was_maximized:
self.showMaximized()
def changeEvent(self, event):
if event.type() == QEvent.WindowStateChange:
if self.isMaximized():
self.was_maximized = True
else:
self.was_maximized = False
super().changeEvent(event)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
<p>To reproduce the problems I describe below, just run this app: there is only one button; press it to hide the right panel, then press it again to show the right panel. Do not move or resize the window.</p>
<p>So, it does not work as I want / expect.</p>
<ol>
<li>When I go directly from the maximized window to left-panel-only, the title bar of the window gets messed up: it becomes wider than the app. This fixes itself when I move the window, but I do not understand why it happens, and it does not look good. How do I get rid of this?</li>
</ol>
<p><img src="https://ddgobkiprc33d.cloudfront.net/36199339-5c60-4736-9425-473b89f69d02.png" alt="181bdbcb-c4b2-4f18-838f-81a2158bab73-image.png" /></p>
<ol start="2">
<li>When I try to go directly back to showing the right panel, the window is not maximized. It seems that <code>showMaximized()</code> is actually called but ignored :/ It works only on the first call when the app starts, but after that it does not maximize the window anymore. It is still possible to maximize the window by pressing the "maximize" button in the title bar, so apparently the window <strong>can</strong> be maximized; Qt just chooses to ignore the call for whatever reason. How do I fix this one as well? What can be blocking Qt here? I understand that Qt does not immediately execute most of the calls we pass it, but there should be some way to do something as simple as maximizing a window?</li>
</ol>
<p><strong>UPD:</strong> I am on Win11, working with PySide6 v6.8.2.1.</p>
|
<python><qt><user-interface><pyside6><window-resize>
|
2025-04-10 10:38:12
| 0
| 1,277
|
Valeria
|
79,566,384
| 2,277,833
|
Running Django tests ends with MigrationSchemaMissing exception
|
<p>I'm writing because I have a big problem. I have a Django project where I am using django-tenants. Unfortunately, I can't run any tests, as they all end with the following error when migrations are applied:</p>
<pre><code>django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (no schema has been selected to create in
LINE 1: CREATE TABLE "django_migrations" ("id" bigint NOT NULL PRIMA...
</code></pre>
<p>The problem is quite acute: I am fed up with regression errors and would like to write tests for the code.
I will appreciate any suggestion. If you have any suggestions for improvements to the project, I'd love to read about those too.</p>
<p>Project details below.
Best regards</p>
<p>Dependencies</p>
<pre class="lang-ini prettyprint-override"><code> [tool.poetry.dependencies]
python = "^3.13"
django = "5.1.8" # The newest version is not compatible with django-tenants yet
django-tenants = "^3.7.0"
dj-database-url = "^2.3.0"
django-bootstrap5 = "^25.1"
django-bootstrap-icons = "^0.9.0"
uvicorn = "^0.34.0"
uvicorn-worker = "^0.3.0"
gunicorn = "^23.0.0"
whitenoise = "^6.8.2"
encrypt-decrypt-fields = "^1.3.6"
django-bootstrap-modal-forms = "^3.0.5"
django-model-utils = "^5.0.0"
werkzeug = "^3.1.3"
tzdata = "^2025.2"
pytz = "^2025.2"
psycopg = {extras = ["binary", "pool"], version = "^3.2.4"}
django-colorfield = "^0.13.0"
sentry-sdk = {extras = ["django"], version = "^2.25.1"}
</code></pre>
<p>Settings.py</p>
<pre class="lang-py prettyprint-override"><code> import os
from pathlib import Path
from uuid import uuid4
# External Dependencies
import dj_database_url
from django.contrib.messages import constants as messages
from django.utils.translation import gettext_lazy as _
BASE_DIR = Path(__file__).resolve().parent.parent
PROJECT_DIR = os.path.join(BASE_DIR, os.pardir)
TENANT_APPS_DIR = BASE_DIR / "tenant"
DEBUG = os.environ.get("DEBUG", "False").lower() in ["true", "1", "yes"]
TEMPLATE_DEBUG = DEBUG
SECRET_KEY = os.environ.get("SECRET_KEY", str(uuid4())) if DEBUG else os.environ["SECRET_KEY"]
VERSION = os.environ.get("VERSION", "develop")
SECURE_SSL_REDIRECT = os.environ.get("SECURE_SSL_REDIRECT", "False").lower() in ["true", "1", "yes"]
try:
ALLOWED_HOSTS = os.environ["ALLOWED_HOSTS"].split(";")
except KeyError:
if not DEBUG:
raise
ALLOWED_HOSTS = ["localhost", ".localhost"]
DEFAULT_DOMAIN = os.environ.get("DEFAULT_DOMAIN", ALLOWED_HOSTS[0])
DEFAULT_FILE_STORAGE = "django_tenants.files.storage.TenantFileSystemStorage"
SHARED_APPS = [
"django_tenants",
"sfe.common.apps.CommonConfig",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
]
TENANT_APPS = [
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"django_bootstrap5",
"django_bootstrap_icons",
"bootstrap_modal_forms",
"sfe.tenant.apps.TenantConfig",
"sfe.tenant.email_controller.apps.EmailControllerConfig",
]
INSTALLED_APPS = list(SHARED_APPS) + [app for app in TENANT_APPS if app not in SHARED_APPS]
MIDDLEWARE = [
"sfe.common.middleware.HealthCheckMiddleware",
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.locale.LocaleMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
if DEBUG:
INTERNAL_IPS = ["localhost", ".localhost"]
ROOT_URLCONF = "sfe.urls_tenant"
PUBLIC_SCHEMA_URLCONF = "sfe.urls_public"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
"sfe.common.context_processor.version",
"sfe.common.context_processor.default_domain",
],
},
},
]
WSGI_APPLICATION = "sfe.wsgi.application"
default_db = dj_database_url.config(engine="django_tenants.postgresql_backend")
DATABASES = {
"default": {
"OPTIONS": {"pool": True},
**default_db,
}
}
DATABASE_ROUTERS = ("django_tenants.routers.TenantSyncRouter",)
TEST_RUNNER = "django.test.runner.DiscoverRunner"
AUTH_PASSWORD_VALIDATORS = [
{"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"},
{"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
{"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
{"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
]
TENANT_MODEL = "common.SystemTenant"
TENANT_DOMAIN_MODEL = "common.Domain"
PUBLIC_SCHEMA_NAME = "public"
LANGUAGE_CODE = "pl"
LANGUAGES = [("pl", "Polski"), ("en", "English")]
LOCALE_PATHS = (os.path.join(BASE_DIR, "locale"),)
TIME_ZONE = "UTC"
USE_TZ = True
USE_I18N = True
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(PROJECT_ROOT, "static")
LOGIN_URL = _("/login/")
LOGOUT_REDIRECT_URL = "/"
LOGIN_REDIRECT_URL = "/"
SESSION_COOKIE_AGE = 86400
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
IMAP_TIMEOUT = 60
ADMINS = [("Damian Giebas", "damian.giebas@gmail.com")]
MANAGERS = ADMINS
FIRST_DAY_OF_WEEK = 1
</code></pre>
<p>Project structure:
<a href="https://i.sstatic.net/THwHElJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/THwHElJj.png" alt="Structure" /></a></p>
<p>Simple test setup:</p>
<pre class="lang-py prettyprint-override"><code># External Dependencies
from django.utils.timezone import now
from django_tenants.test.cases import TenantTestCase
from django_tenants.utils import schema_context
# Current App
from sfe.tenant.models.due_date import DueDate
class DueDateModelTests(TenantTestCase):
def setUp(self):
super().setUp()
def test_create_due_date(self):
with schema_context(self.tenant.schema_name):
DueDate.objects.create(date=now().date(), name="TestDueDate")
assert DueDate.objects.all().count() == 1
</code></pre>
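<p>A side note on the test itself (my understanding of the django-tenants test utilities, worth verifying): <code>TenantTestCase</code> already activates the test tenant's schema on the connection, so the <code>schema_context</code> wrapper should be redundant:</p>
<pre class="lang-py prettyprint-override"><code>class DueDateModelTests(TenantTestCase):
    def test_create_due_date(self):
        # Assumption: TenantTestCase has already set self.tenant's schema
        # as the active one, so no explicit schema_context is needed.
        DueDate.objects.create(date=now().date(), name="TestDueDate")
        assert DueDate.objects.count() == 1
</code></pre>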
<p>EDIT:
It turns out that the transition from django-tenant-schemas to django-tenants did not go well. Running the migrations on an empty database also fails: the migrations for the tenant apps do not execute at all. The problem is therefore not located in the test configuration.</p>
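<p>For reference, this is the standard two-step django-tenants migration sequence that I would expect to work on an empty database (included as the baseline I am comparing against):</p>
<pre><code>python manage.py migrate_schemas --shared
python manage.py migrate_schemas
</code></pre>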
|
<python><django><python-unittest>
|
2025-04-10 10:29:00
| 2
| 364
|
Draqun
|
79,566,109
| 10,827,766
|
Partial hypothesis examples
|
<p>Is it in any way possible to provide partial <code>examples</code>, such as in the following case, where <code>b</code> is missing from the example decorator?</p>
<pre class="lang-py prettyprint-override"><code>@given(a=st.integers(), b=st.integers())
@example(a=0)
def test_example(a, b):
    ...
</code></pre>
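<p>For contrast, fully specifying every argument is accepted; it is only the partial form that fails:</p>
<pre class="lang-py prettyprint-override"><code>@given(a=st.integers(), b=st.integers())
@example(a=0, b=0)  # works: every @given argument is supplied
def test_example(a, b):
    ...
</code></pre>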
<p>Here is the error I'm getting when attempting this:</p>
<pre><code>Test session starts (platform: linux, Python 3.11.8, pytest 7.4.3, pytest-sugar 0.9.7)
cachedir: .pytest_cache
Using --randomly-seed=2715185903
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/mnt/wsl/sylvorg/sylvorg/sylveon/siluam/oreo/.hypothesis/examples')
rootdir: /mnt/wsl/sylvorg/sylvorg/sylveon/siluam/oreo
configfile: pyproject.toml
plugins: order-1.1.0, repeat-0.9.2, hy-1.0.0.0, sugar-0.9.7, lazy-fixture-0.6.3, randomly-3.13.0, custom-exit-code-0.3.0, beartype-0.2.0, xdist-3.3.1, hypothesis-6.84.3
8 workers [2 items]
scheduling tests via LoadGroupScheduling
―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― test_example ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
[gw2] linux -- Python 3.11.8 /nix/store/qv7rqk8qijshc7l07rajv88zsi476aqm-python3-3.11.8-env/bin/python3.11
@given(a=st.integers(), b=st.integers())
> @example(a=0)
hydrox/typing/tests/test_ellipsis.py:27:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <hypothesis.core.StateForActualGivenExecution object at 0x7fb4b5504210>, wrapped_test = <function accept.<locals>.test_example at 0x7fb4b5886840>
arguments = (), kwargs = {}, original_sig = <Signature (a, b)>
def execute_explicit_examples(state, wrapped_test, arguments, kwargs, original_sig):
assert isinstance(state, StateForActualGivenExecution)
posargs = [
p.name
for p in original_sig.parameters.values()
if p.kind is p.POSITIONAL_OR_KEYWORD
]
for example in reversed(getattr(wrapped_test, "hypothesis_explicit_examples", ())):
assert isinstance(example, Example)
# All of this validation is to check that @example() got "the same" arguments
# as @given, i.e. corresponding to the same parameters, even though they might
# be any mixture of positional and keyword arguments.
if example.args:
assert not example.kwargs
if any(
p.kind is p.POSITIONAL_ONLY for p in original_sig.parameters.values()
):
raise InvalidArgument(
"Cannot pass positional arguments to @example() when decorating "
"a test function which has positional-only parameters."
)
if len(example.args) > len(posargs):
raise InvalidArgument(
"example has too many arguments for test. Expected at most "
f"{len(posargs)} but got {len(example.args)}"
)
example_kwargs = dict(zip(posargs[-len(example.args) :], example.args))
else:
example_kwargs = dict(example.kwargs)
given_kws = ", ".join(
repr(k) for k in sorted(wrapped_test.hypothesis._given_kwargs)
)
example_kws = ", ".join(repr(k) for k in sorted(example_kwargs))
if given_kws != example_kws:
> raise InvalidArgument(
f"Inconsistent args: @given() got strategies for {given_kws}, "
f"but @example() got arguments for {example_kws}"
) from None
E hypothesis.errors.InvalidArgument: Inconsistent args: @given() got strategies for 'a', 'b', but @example() got arguments for 'a'
/nix/store/qv7rqk8qijshc7l07rajv88zsi476aqm-python3-3.11.8-env/lib/python3.11/site-packages/hypothesis/core.py:441: InvalidArgument
[gw2] FAILED hydrox/typing/tests/test_ellipsis.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Results (29.62s):
1 failed
- hydrox/typing/tests/test_ellipsis.py:26 test_example
</code></pre>
|
<python><python-hypothesis>
|
2025-04-10 08:25:44
| 2
| 409
|
syvlorg
|
79,565,983
| 6,839,733
|
How to update an existing enrollment in Azure DPS?
|
<p>I'm having a really hard time trying to update an existing enrollment in Azure DPS, i.e. generating new symmetric keys (say, after the old ones were lost) and adding them to the existing enrollment.</p>
<p>From what I see in the Azure examples, this <em>should</em> work. But it doesn't. I'm still getting error 409201: Enrollment already exists.</p>
<p>Am I doing something wrong here? Or is there an issue with the Python IoT Hub SDK that causes updating an existing enrollment not to work?</p>
<pre><code>def _create_or_update_enrollment(
    device_id: str,
    external_id: str,
    iothub_hostname: str,
    device_provisioning_client: DeviceProvisioningClient,
):
    primary_key = _generate_symmetric_key()
    secondary_key = _generate_symmetric_key()
    device_provisioning_client = DeviceProvisioningClient.from_connection_string(
        DPS_CONNECTION_STRING
    )
    try:
        enrollment = device_provisioning_client.enrollment.get(device_id)
    except Exception as e:
        _logger.debug(f"Unable to get enrollment for {device_id}: {e}")
        enrollment = {}
    enrollment.update({
        "registrationId": device_id,
        "attestation": {
            "type": "symmetricKey",
            "symmetricKey": {"primaryKey": primary_key, "secondaryKey": secondary_key},
        },
        "iotHubs": [iothub_hostname],
        "deviceId": device_id,
        "initialTwin": {
            "properties": {"desired": {"key": "value"}},
            "tags": {"externalId": external_id, "endpoint": "messages"},
        },
        "allocationPolicy": "static",
    })
    new_enrollment = device_provisioning_client.enrollment.create_or_update(
        id=device_id, enrollment=enrollment
    )
    return new_enrollment
</code></pre>
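<p>One direction I have been considering (a sketch, not a verified fix): the DPS service may be treating the call as a create because no <code>etag</code> is supplied, since updates are normally gated on an If-Match check. Whether the SDK exposes this as an <code>if_match</code> keyword is an assumption to verify against your SDK version:</p>
<pre class="lang-py prettyprint-override"><code># Hypothetical sketch -- if_match support is an assumption, check your SDK.
existing = device_provisioning_client.enrollment.get(device_id)
existing["attestation"] = {
    "type": "symmetricKey",
    "symmetricKey": {"primaryKey": primary_key, "secondaryKey": secondary_key},
}
device_provisioning_client.enrollment.create_or_update(
    id=device_id,
    enrollment=existing,
    if_match=existing.get("etag"),  # signals an update rather than a create
)
</code></pre>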
|
<python><azure><azure-iot-hub>
|
2025-04-10 07:23:10
| 1
| 1,332
|
Oystein
|
79,565,937
| 6,862,601
|
Click help text shows 'dynamic' when an option has the default set to a lambda
|
<p>I have this code in a CLI:</p>
<pre><code>@click.option(
    "--username",
    default=lambda: os.environ.get("USER", None),
    show_default=True,
    help="User name for SSH configuration.",
)
</code></pre>
<p>When I invoke the CLI with the <code>--help</code> option, I get this:</p>
<pre><code> --username TEXT User name for SSH configuration. [default:
(dynamic)]
</code></pre>
<p>Is there a way to make Click invoke the lambda and show the actual username instead of <code>(dynamic)</code>? I know I could call the function before applying the decorator and pass the retrieved value as the default instead of the lambda; I am trying to do better than that.</p>
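<p>One middle ground I am aware of (sketch below): in recent Click versions, <code>show_default</code> also accepts a string, which is rendered verbatim in the help text. The value is then computed once at decoration time purely for display, while the lazy default stays a lambda:</p>
<pre class="lang-py prettyprint-override"><code>import os

import click


@click.command()
@click.option(
    "--username",
    default=lambda: os.environ.get("USER", None),
    # A string show_default is printed as-is in the help output.
    show_default=os.environ.get("USER", "unknown"),
    help="User name for SSH configuration.",
)
def cli(username):
    click.echo(username)
</code></pre>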
|
<python><python-click>
|
2025-04-10 07:01:49
| 1
| 43,763
|
codeforester
|
79,565,827
| 14,098,258
|
how to recover from error "Cannot mix incompatible Qt library (6.8.3) with this library (6.8.2)"
|
<p>I am using a conda virtual environment. I ran my code yesterday and it worked perfectly fine.</p>
<p>I changed <strong>NOTHING</strong> in my code. I changed <strong>NOTHING</strong> in my virtual environment.</p>
<p>Now I run the code again and get the following error:
<code>Process finished with exit code -1073740791 (0xC0000409)</code></p>
<p>OK: for example <a href="https://intellij-support.jetbrains.com/hc/en-us/community/posts/360005071680-How-to-fix-Process-finished-with-exit-code-1073740791-0xC0000409-in-Pycharm" rel="nofollow noreferrer">here</a> I found out that I can try running the Python script from a console (I usually run the code from PyCharm).</p>
<p>In the external console I get the following error: <code>Cannot mix incompatible Qt library (6.8.3) with this library (6.8.2)</code></p>
<p>I don't understand this: how can something break that worked perfectly the day before?
I figured out that it has to do with matplotlib.</p>
<p>Minimal code to reproduce the error:</p>
<pre><code>from matplotlib import pyplot as plt
plt.plot([1,2,3], [2,3,1])
plt.show()
</code></pre>
<p>This is what my virtual environment looks like when I run <code>conda list</code>:</p>
<pre><code>_openmp_mutex 4.5 2_gnu conda-forge
ampl-asl 1.0.0 he0c23c2_2 conda-forge
brotli-python 1.0.9 py312h5da7b33_9
bzip2 1.0.8 h2466b09_7 conda-forge
ca-certificates 2025.2.25 haa95532_0
cairo 1.18.4 h5782bbf_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_1 conda-forge
contourpy 1.3.1 py312h214f63a_0
cycler 0.11.0 pyhd3eb1b0_0
deepdiff 8.1.1 pyhd8ed1ab_0 conda-forge
double-conversion 3.3.1 he0c23c2_0 conda-forge
font-ttf-dejavu-sans-mono 2.37 hd3eb1b0_0
font-ttf-inconsolata 2.001 hcb22688_0
font-ttf-source-code-pro 2.030 hd3eb1b0_0
font-ttf-ubuntu 0.83 h8b1ccd4_0
fontconfig 2.15.0 h765892d_1 conda-forge
fonts-anaconda 1 h8fa9717_0
fonts-conda-ecosystem 1 hd3eb1b0_0
fonttools 4.55.3 py312h827c3e9_0
freetype 2.13.3 h0b5ce68_0 conda-forge
geojson 3.2.0 pyhd8ed1ab_0 conda-forge
graphite2 1.3.14 hd77b12b_1
harfbuzz 10.4.0 h9e37d49_0 conda-forge
icu 75.1 he0c23c2_0 conda-forge
intel-openmp 2024.2.1 h57928b3_1083 conda-forge
ipopt 3.14.17 h905d1ba_0 conda-forge
kiwisolver 1.4.8 py312h5da7b33_0
krb5 1.21.3 hdf4eb48_0 conda-forge
lcms2 2.17 hbcf6048_0 conda-forge
lerc 4.0.0 h5da7b33_0
libblas 3.9.0 26_win64_mkl conda-forge
libcblas 3.9.0 26_win64_mkl conda-forge
libclang13 20.1.1 default_ha5278ca_0 conda-forge
libdeflate 1.23 h9062f6e_0 conda-forge
libexpat 2.6.4 he0c23c2_0 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libflang 5.0.0 h6538335_20180525 conda-forge
libgcc 14.2.0 h1383e82_2 conda-forge
libglib 2.82.2 h7025463_1 conda-forge
libgomp 14.2.0 h1383e82_2 conda-forge
libhwloc 2.11.2 default_ha69328c_1001 conda-forge
libiconv 1.17 hcfcfb64_2 conda-forge
libintl 0.22.5 h5728263_3 conda-forge
libjpeg-turbo 3.0.3 h827c3e9_0
liblapack 3.9.0 26_win64_mkl conda-forge
liblzma 5.6.3 h2466b09_1 conda-forge
libpng 1.6.47 had7236b_0 conda-forge
libsqlite 3.48.0 h67fdade_1 conda-forge
libtiff 4.7.0 h797046b_3 conda-forge
libwebp-base 1.5.0 h3b0e114_0 conda-forge
libwinpthread 12.0.0.r4.gg4f2fc60ca h57928b3_9 conda-forge
libxcb 1.17.0 h0e4246c_0 conda-forge
libxml2 2.13.5 he286e8c_1 conda-forge
libxslt 1.1.41 h0739af5_0
libzlib 1.3.1 h2466b09_2 conda-forge
llvm-meta 5.0.0 0 conda-forge
llvmlite 0.44.0 py312h1f7db74_1 conda-forge
lxml 5.3.1 py312h53bce91_0 conda-forge
matplotlib 3.10.1 py312h2e8e312_0 conda-forge
matplotlib-base 3.10.1 py312h90004f6_0 conda-forge
mkl 2024.2.2 h66d3029_15 conda-forge
mumps-seq 5.7.3 h7c2359a_6 conda-forge
networkx 3.4.2 pyh267e887_2 conda-forge
numba 0.61.0 py312hcccf92d_1 conda-forge
numpy 2.1.3 py312h49bc9c5_0 conda-forge
openjpeg 2.5.3 h4d64b90_0 conda-forge
openmp 5.0.0 vc14_1 conda-forge
openssl 3.4.0 ha4e3fda_1 conda-forge
orderly-set 5.2.3 pyh29332c3_1 conda-forge
packaging 24.2 pyhd8ed1ab_2 conda-forge
pandapower 3.0.0 pyhd8ed1ab_0 conda-forge
pandas 2.2.3 py312h72972c8_1 conda-forge
pcre2 10.44 h3d7b363_2 conda-forge
pillow 11.1.0 py312h078707f_0 conda-forge
pip 25.0 pyh8b19718_0 conda-forge
pixman 0.44.2 had0cd8c_0 conda-forge
ply 3.11 pyhd8ed1ab_3 conda-forge
pthread-stubs 0.3 h3c9f919_1
pyomo 6.8.2 py312h275cf98_1 conda-forge
pyparsing 3.2.0 py312haa95532_0
pyside6 6.8.2 py312h2ee7485_0 conda-forge
python 3.12.8 h3f84c4b_1_cpython conda-forge
python-dateutil 2.9.0.post0 pyhff2d567_1 conda-forge
python-tzdata 2025.1 pyhd8ed1ab_0 conda-forge
python_abi 3.12 5_cp312 conda-forge
pytz 2024.1 pyhd8ed1ab_0 conda-forge
qhull 2020.2 h59b6b97_2
qt6-main 6.8.2 h1259614_0 conda-forge
scipy 1.13.1 py312h1f4e10d_0 conda-forge
setuptools 75.8.0 pyhff2d567_0 conda-forge
six 1.17.0 pyhd8ed1ab_0 conda-forge
tbb 2021.13.0 h62715c5_1 conda-forge
tk 8.6.13 h5226925_1 conda-forge
tornado 6.4.2 py312h827c3e9_0
tqdm 4.67.1 pyhd8ed1ab_1 conda-forge
typing_extensions 4.12.2 py312haa95532_0
tzdata 2025a h78e105d_0 conda-forge
ucrt 10.0.22621.0 h57928b3_1 conda-forge
unicodedata2 15.1.0 py312h827c3e9_1
vc 14.3 h5fd82a7_24 conda-forge
vc14_runtime 14.42.34433 h6356254_24 conda-forge
vs2015_runtime 14.42.34433 hfef2bbc_24 conda-forge
wheel 0.45.1 pyhd8ed1ab_1 conda-forge
xorg-libxau 1.0.12 h0e40799_0 conda-forge
xorg-libxdmcp 1.1.5 h0e40799_0 conda-forge
zstd 1.5.7 hbeecb71_2 conda-forge
</code></pre>
<p>Now when I try to upgrade the mentioned "Qt library" (I guess they mean "qt6-main"?) using <code>conda install qt6-main=6.8.3</code>, I get the following error:</p>
<pre><code>LibMambaUnsatisfiableError: Encountered problems while solving:
- package qt6-main-6.8.3-h1259614_0 requires libsqlite >=3.49.1,<4.0a0, but none of the providers can be installed
Could not solve for environment specs
The following packages are incompatible
├─ libsqlite 3.48.0.* is requested and can be installed;
└─ qt6-main 6.8.3* is not installable because it requires
└─ libsqlite >=3.49.1,<4.0a0 , which conflicts with any installable versions previously reported.
</code></pre>
<p>My next guess was to downgrade matplotlib using <code>conda install matplotlib=3.9</code>, which gives the following error:</p>
<pre><code>LibMambaUnsatisfiableError: Encountered problems while solving:
- package matplotlib-base-3.10.1-py310h37e0a56_0 requires python >=3.10,<3.11.0a0, but none of the providers can be id
Could not solve for environment specs
The following packages are incompatible
├─ matplotlib-base 3.10.1.* is installable with the potential options
│ ├─ matplotlib-base 3.10.1, which can be installed;
│ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4] would require
│ │ └─ python >=3.10,<3.11.0a0 , which can be installed;
│ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4] would require
│ │ └─ python >=3.11,<3.12.0a0 , which can be installed;
│ └─ matplotlib-base [3.10.1|3.9.2|3.9.3|3.9.4] would require
│ └─ python [>=3.13,<3.14.0a0 |>=3.13.0rc2,<3.14.0a0 ], which can be installed;
├─ matplotlib 3.9** is installable with the potential options
│ ├─ matplotlib [3.9.1|3.9.2|3.9.3|3.9.4] would require
│ │ └─ python >=3.10,<3.11.0a0 , which can be installed;
│ ├─ matplotlib [3.9.1|3.9.2|3.9.3|3.9.4] would require
│ │ └─ python >=3.11,<3.12.0a0 , which can be installed;
│ ├─ matplotlib 3.9.2 would require
│ │ └─ matplotlib-base >=3.9.2,<3.9.3.0a0 with the potential options
│ │ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.10.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base 3.9.2 conflicts with any installable versions previously reported;
│ │ └─ matplotlib-base [3.9.1|3.9.2|3.9.3|3.9.4] would require
│ │ └─ python >=3.9,<3.10.0a0 , which can be installed;
│ ├─ matplotlib [3.9.2|3.9.3|3.9.4] would require
│ │ └─ python [>=3.13,<3.14.0a0 |>=3.13.0rc2,<3.14.0a0 ], which can be installed;
│ ├─ matplotlib [3.9.1|3.9.2|3.9.3|3.9.4] would require
│ │ └─ python >=3.9,<3.10.0a0 , which can be installed;
│ ├─ matplotlib 3.9.1 would require
│ │ └─ matplotlib-base >=3.9.1,<3.9.2.0a0 with the potential options
│ │ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ └─ matplotlib-base 3.9.1 conflicts with any installable versions previously reported;
│ ├─ matplotlib 3.9.3 would require
│ │ └─ matplotlib-base >=3.9.3,<3.9.4.0a0 with the potential options
│ │ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.10.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ ├─ matplotlib-base [3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ │ └─ matplotlib-base 3.9.3 conflicts with any installable versions previously reported;
│ └─ matplotlib 3.9.4 would require
│ └─ matplotlib-base >=3.9.4,<3.9.5.0a0 with the potential options
│ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ ├─ matplotlib-base [3.10.1|3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ ├─ matplotlib-base [3.10.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ ├─ matplotlib-base [3.9.1|3.9.2|3.9.3|3.9.4], which can be installed (as previously explained);
│ └─ matplotlib-base 3.9.4 conflicts with any installable versions previously reported;
└─ pin-1 is not installable because it requires
└─ python 3.12.* , which conflicts with any installable versions previously reported.
</code></pre>
<p>What else can I do to recover from these errors? I cannot upgrade and I cannot downgrade, and I have no clue. Suddenly something broke, and there seems to be nothing I can do?</p>
<p>I am working on windows 11.</p>
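<p>One check I plan to do before more version surgery (an assumption about the cause, not a confirmed diagnosis): the "cannot mix" error typically means two different Qt builds end up in one process, e.g. a pip-installed binding at 6.8.3 next to the conda-forge Qt 6.8.2 runtime. For example:</p>
<pre><code>rem List any pip-installed Qt bindings living alongside the conda ones
python -m pip list | findstr /i "pyqt pyside qt"

rem If a second binding shows up, uninstall it or force matplotlib onto
rem the conda binding before importing pyplot (script name is a placeholder):
set QT_API=pyside6
python my_script.py
</code></pre>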
|
<python><qt><matplotlib><conda>
|
2025-04-10 05:57:26
| 1
| 383
|
Andre
|
79,565,707
| 2,746,878
|
ValueError: PandasEngine works only with pd.DataFrame input data
|
<p>python version 3.9.21</p>
<p>define ref dataset</p>
<pre><code>tr_ds_lr = Dataset.from_pandas(
    tr_data_lr,
    data_definition=DataDefinition()
)
</code></pre>
<p>define cur dataset</p>
<pre><code>te_ds_lr = Dataset.from_pandas(
    te_data_lr,
    data_definition=DataDefinition()
)
</code></pre>
<p>defining report</p>
<pre><code>report = Report(metrics=[
    # DataDriftPreset(num_stattest='jensenshannon', num_stattest_threshold=0.5, drift_share=0.5)
    DataDriftPreset(num_stattest='wasserstein', num_stattest_threshold=0.5, drift_share=0.5)
])
</code></pre>
<p>report run is throwin error</p>
<pre><code>report.run(reference_data=tr_ds_lr, current_data=te_ds_lr)
</code></pre>
<p>Error Message,</p>
<pre><code>ValueError                                Traceback (most recent call last)
Cell In[22], line 1
----> 1 report.run(reference_data=tr_ds_lr, current_data=te_ds_lr)

File c:\Users\40104089\AppData\Local\miniconda3\envs\drift-env\lib\site-packages\evidently\report\report.py:112, in Report.run(self, reference_data, current_data, column_mapping, engine, additional_data, timestamp)
    110     raise ValueError("No Engine is set")
    111 else:
--> 112     data_definition = self._inner_suite.context.get_data_definition(
    113         current_data,
    114         reference_data,
    115         column_mapping,
    116         self.options.data_definition_options.categorical_features_cardinality,
    117     )
    118 if METRIC_GENERATORS in self.metadata:
    119     del self.metadata[METRIC_GENERATORS]

File c:\Users\40104089\AppData\Local\miniconda3\envs\drift-env\lib\site-packages\evidently\suite\base_suite.py:133, in Context.get_data_definition(self, current_data, reference_data, column_mapping, categorical_features_cardinality)
    131 if self.engine is None:
    132     raise ValueError("Cannot create data definition when engine is not set")
--> 133 self.data_definition = self.engine.get_data_definition(
    134     current_data,
    135     reference_data,
    136     column_mapping,
    137     categorical_features_cardinality,
...
File c:\Users\40104089\AppData\Local\miniconda3\envs\drift-env\lib\site-packages\evidently\calculation_engine\python_engine.py:53
     52 ):
---> 53     raise ValueError("PandasEngine works only with pd.DataFrame input data")
     54 return create_data_definition(reference_data, current_data, column_mapping, categorical_features_cardinality)

ValueError: PandasEngine works only with pd.DataFrame input data
</code></pre>
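For what it's worth, the message suggests the engine wants raw DataFrames, so the variant I would try next is bypassing the <code>Dataset</code> wrappers entirely (an assumption based on the error text, not on the evidently docs):
<pre><code># Try handing the report the raw pandas DataFrames directly.
report.run(reference_data=tr_data_lr, current_data=te_data_lr)
</code></pre>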
|
<python><evidently>
|
2025-04-10 04:11:45
| 0
| 1,096
|
Neel Kamal
|
79,565,611
| 1,440,565
|
Troubleshoot "Invalid input shape for input Tensor"
|
<p>I am trying to build a model with keras and tensorflow to play Go:</p>
<pre><code>training_data = encode_from_file_info(training_files)
testing_data = encode_from_file_info(testing_files)

input_shape = (1, 19, 19)
model = Sequential(
    [
        keras.layers.Input(input_shape),
        keras.layers.ZeroPadding2D(padding=3, data_format='channels_first'),
    ]
)
model.compile(
    loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"]
)
model.fit(
    training_data, batch_size=64, epochs=15, verbose=1, validation_data=testing_data
)
</code></pre>
<p>I get the following error when I call <code>model.fit()</code>:</p>
<blockquote>
<p>Invalid input shape for input Tensor("Cast:0", shape=(None, 19, 19),
dtype=float32). Expected shape (None, 1, 19, 19), but input has incompatible
shape (None, 19, 19)</p>
</blockquote>
<p>I have verified that the generators <code>training_data</code> and <code>testing_data</code> yield tuples of two ndarrays with shapes <code>(1, 19, 19)</code> and <code>(361,)</code>.</p>
<p>I'm new to tensorflow and it is a black box to me. Obviously I'm missing something and making incorrect assumptions about my training data and the neural network. How do I troubleshoot issues like this? What tools are available to debug my model to find the shape mismatch?</p>
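<p>One concrete idea I have been weighing (a sketch, assuming Keras treats the leading axis of each yielded array as the batch axis): wrap the generator so every sample gains an explicit batch dimension, turning <code>(1, 19, 19)</code> into <code>(1, 1, 19, 19)</code>:</p>
<pre><code>import numpy as np

def batched(samples):
    """Add a leading batch axis to each (board, label) pair."""
    for board, label in samples:
        # (1, 19, 19) -> (1, 1, 19, 19) and (361,) -> (1, 361)
        yield np.expand_dims(board, axis=0), np.expand_dims(label, axis=0)

# model.fit(batched(training_data), ...)
</code></pre>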
|
<python><tensorflow><keras>
|
2025-04-10 02:00:16
| 1
| 83,954
|
Code-Apprentice
|
79,565,605
| 5,091,805
|
pyright can't resolve imports in modules with their own pyproject.toml
|
<p>Working with a mono-repository structured like so:</p>
<pre class="lang-none prettyprint-override"><code>- pyproject.toml (root workspace, references other members)
src
├── application
│ └── service.py
├── domain
│ ├── headroom
│ │ └── service.py
│ ├── hosting_capacity
│ │ └── service.py
│ ├── model_free_engine
│ │ └── service.py
│ └── shared
├── entrypoint
│ ├── api
│ │ ├── main.py
│ │ └── pyproject.toml
│ └── worker
│ ├── main.py
│ └── pyproject.toml
├── infrastructure
└── libs
└── schemas
└── pyproject.toml
</code></pre>
<p>Any code within a subfolder that has its own <code>pyproject.toml</code> cannot resolve imports. Imports are absolute, like so:</p>
<pre class="lang-py prettyprint-override"><code># entrypoint/api/main.py
from application.service import Service
</code></pre>
<p>The affected folders are:</p>
<ul>
<li>entrypoint/worker</li>
<li>entrypoint/api</li>
<li>libs/schemas</li>
</ul>
<p>However, folders that do not have a <code>pyproject.toml</code> are fine:</p>
<ul>
<li>application</li>
<li>domain</li>
<li>infrastructure</li>
</ul>
<p>I have tried various pyrightconfig.json settings, both at the root and in the modules, and nothing seems to help pyright unless I DELETE the <code>pyproject.toml</code> from each subfolder.</p>
<p>Example config in root.</p>
<pre class="lang-json prettyprint-override"><code>{
"typeCheckingMode": "strict",
"reportMissingImports": true,
"reportMissingTypeStubs": false,
"include": [
"src"
],
"extraPaths": [
"src",
"src/entrypoint/api",
"src/entrypoint/worker",
"src/libs/schemas"
]
}
</code></pre>
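<p>For completeness, the other root-level configuration I have been experimenting with is <code>executionEnvironments</code>, which pyright documents for per-subtree import roots (whether it overrides the nested-<code>pyproject.toml</code> boundary is exactly what I am unsure about):</p>
<pre class="lang-json prettyprint-override"><code>{
    "include": ["src"],
    "executionEnvironments": [
        { "root": "src/entrypoint/api", "extraPaths": ["src"] },
        { "root": "src/entrypoint/worker", "extraPaths": ["src"] },
        { "root": "src/libs/schemas", "extraPaths": ["src"] },
        { "root": "src" }
    ]
}
</code></pre>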
|
<python><pyright>
|
2025-04-10 01:55:25
| 1
| 6,289
|
Ari
|
79,565,565
| 350,878
|
python script fails to accept keyboard input, win 11, gitbash
|
<p>Here is a script that tries to capture human play and later feed it to a training script. I found that it is not able to accept keyboard input on Windows 11 under Git Bash; using the native Windows cmd has the same problem. Any idea how to fix it?</p>
<pre><code> print("🕹️ KUNG FU MASTER TRAINER CONTROLS:")
print("↑ ↓ ← → : Move/Jump/Duck")
print("A: Punch | S: Kick")
print("R: Start/Stop recording")
print("O: Save state | P: Load state")
print("M: Toggle AI control")
print("Q: Quit")
</code></pre>
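<p>To isolate the problem, this is the minimal listener test I would run first: if nothing prints when keys are pressed, pynput itself is not receiving events in that terminal, and the game loop is not at fault.</p>
<pre><code>from pynput import keyboard

def on_press(key):
    print(f"pressed: {key}")

# Blocks until the listener stops; press a few keys and watch for output.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
</code></pre>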
<pre><code>import retro
import numpy as np
import os
import time
import json
import cv2
from pynput import keyboard
from stable_baselines3 import PPO


class TrainingCapturer:
    def __init__(self):
        self.env = retro.make('KungFu-Nes')
        self.save_dir = 'training_data'
        self.state_dir = 'saved_states'
        self.model_dir = 'models\kungfu_ppo'  # Directory where train.py saves models
        os.makedirs(self.save_dir, exist_ok=True)
        os.makedirs(self.state_dir, exist_ok=True)
        os.makedirs(self.model_dir, exist_ok=True)
        self.recording = False
        self.current_segment = []
        self.segment_id = self._get_next_id(self.save_dir)
        self.state_id = self._get_next_id(self.state_dir)
        self.ai_playing = False
        self.model = self._load_ai_model()  # Load model immediately
        self._init_controls()

    def _get_next_id(self, directory):
        """Get the next available ID number for saving"""
        try:
            existing = [int(f.split('_')[1].split('.')[0])
                        for f in os.listdir(directory) if f.startswith('segment_')]
            return max(existing) + 1 if existing else 0
        except Exception:
            return 0

    def _init_controls(self):
        self.controls = {
            'up': False,    # Jump
            'down': False,  # Duck
            'left': False,  # Move left
            'right': False, # Move right
            'a': False,     # Punch
            's': False      # Kick
        }

    def _load_ai_model(self):
        """Load the best model from train.py's output"""
        model_path = os.path.join(self.model_dir, 'kungfu_ppo_best.zip')
        if os.path.exists(model_path):
            print(f"Loading AI model from {model_path}")
            return PPO.load(model_path, env=self.env)
        print(f"No AI model found at {model_path}")
        return None

    def toggle_ai(self):
        if self.model:
            self.ai_playing = not self.ai_playing
            print(f"AI control {'ON' if self.ai_playing else 'OFF'}")
        else:
            print("First train a model with train.py!")

    def start_recording(self):
        if not self.recording:
            self.recording = True
            self.current_segment = []
            print("⏺ Recording STARTED")

    def stop_recording(self):
        if self.recording:
            self.recording = False
            self._save_segment()
            print("⏹ Recording STOPPED")

    def save_game_state(self):
        state_path = os.path.join(self.state_dir, f"state_{self.state_id}.state")
        self.env.save_state(state_path)
        print(f"💾 Saved game state {self.state_id}")
        self.state_id += 1

    def load_game_state(self):
        load_id = max(0, self.state_id - 1)
        state_path = os.path.join(self.state_dir, f"state_{load_id}.state")
        if os.path.exists(state_path):
            self.env.load_state(state_path)
            print(f"🔃 Loaded game state {load_id}")
        else:
            print("❌ No saved states found!")

    def _process_frame(self, frame):
        """Process frame for AI input"""
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        return cv2.resize(gray, (84, 84)) / 255.0  # Normalize

    def _save_segment(self):
        if len(self.current_segment) > 0:
            # Save frames and actions
            segment_path = os.path.join(self.save_dir, f"segment_{self.segment_id}.npz")
            frames = np.array([f[0] for f in self.current_segment])
            actions = np.array([f[1] for f in self.current_segment])
            np.savez(segment_path, frames=frames, actions=actions)
            # Save metadata
            metadata = {
                'timestamp': time.strftime("%Y-%m-%d %H:%M:%S"),
                'num_frames': len(self.current_segment),
                'ai_generated': self.ai_playing
            }
            with open(os.path.join(self.save_dir, f"segment_{self.segment_id}.json"), 'w') as f:
                json.dump(metadata, f)
            print(f"💿 Saved segment {self.segment_id} ({len(self.current_segment)} frames)")
            self.segment_id += 1

    def run(self):
        print("🕹️ KUNG FU MASTER TRAINER CONTROLS:")
        print("↑ ↓ ← → : Move/Jump/Duck")
        print("A: Punch | S: Kick")
        print("R: Start/Stop recording")
        print("O: Save state | P: Load state")
        print("M: Toggle AI control")
        print("Q: Quit")

        def on_press(key):
            try:
                if key.char.lower() == 'r':
                    if self.recording: self.stop_recording()
                    else: self.start_recording()
                elif key.char.lower() == 'o':
                    self.save_game_state()
                elif key.char.lower() == 'p':
                    self.load_game_state()
                elif key.char.lower() == 'm':
                    self.toggle_ai()
                elif key.char.lower() == 'q':
                    return False  # Quit
                elif key.char.lower() == 'a':
                    self.controls['a'] = True
                elif key.char.lower() == 's':
                    self.controls['s'] = True
            except AttributeError:
                if key == keyboard.Key.up: self.controls['up'] = True
                elif key == keyboard.Key.down: self.controls['down'] = True
                elif key == keyboard.Key.left: self.controls['left'] = True
                elif key == keyboard.Key.right: self.controls['right'] = True

        def on_release(key):
            try:
                if key.char.lower() == 'a':
                    self.controls['a'] = False
                elif key.char.lower() == 's':
                    self.controls['s'] = False
            except AttributeError:
                if key == keyboard.Key.up: self.controls['up'] = False
                elif key == keyboard.Key.down: self.controls['down'] = False
                elif key == keyboard.Key.left: self.controls['left'] = False
                elif key == keyboard.Key.right: self.controls['right'] = False

        listener = keyboard.Listener(on_press=on_press, on_release=on_release)
        listener.start()
        self.env.reset()
        try:
            while True:
                # Get action (AI or human)
                if self.ai_playing and self.model:
                    frame = self._process_frame(self.env.get_screen())
                    action, _ = self.model.predict(frame)
                else:
                    action = [
                        int(self.controls['up']),
                        int(self.controls['down']),
                        int(self.controls['left']),
                        int(self.controls['right']),
                        int(self.controls['a']),
                        int(self.controls['s'])
                    ]
                # Step environment
                obs, _, done, _ = self.env.step(action)
                # Record if enabled
                if self.recording:
                    self.current_segment.append((self._process_frame(obs), action))
                # Render at consistent speed
                self.env.render()
                time.sleep(0.02)  # ~50 FPS
                if done:
                    self.env.reset()
        finally:
            listener.stop()
            self.env.close()


if __name__ == "__main__":
    capturer = TrainingCapturer()
    capturer.run()
</code></pre>
|
<python><input><keyboard>
|
2025-04-10 01:16:24
| 1
| 8,376
|
kenpeter
|