| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
79,392,103
| 5,151,909
|
NumPy 2D indexing using different slice per column
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
x = np.arange(12).reshape(3, 4)
print(x)
idx = np.array([0, 1])
y = x[:2, idx : idx + 2]
# should be
# [[0 1]
# [5 6]]
</code></pre>
<p>So I want to get a different slice for each row. In this example, that's indices <code>0, 1</code> from the first row and indices <code>1, 2</code> from the second row.</p>
<p>Instead, I'm getting:</p>
<blockquote>
<p>TypeError: only integer scalar arrays can be converted to a scalar index</p>
</blockquote>
<p>What are my options? Is there more than one? Any performance implications?</p>
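<p>For reference, a minimal sketch of one approach I'm considering (building the per-row column indices with broadcasting and using advanced indexing):</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: per-row column windows via advanced indexing with broadcasting.
rows = np.arange(2)[:, None]          # shape (2, 1): row indices 0 and 1
cols = idx[:, None] + np.arange(2)    # shape (2, 2): [[0, 1], [1, 2]]
y = x[rows, cols]
# [[0 1]
#  [5 6]]
</code></pre>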
|
<python><numpy>
|
2025-01-27 20:31:44
| 0
| 4,011
|
galah92
|
79,392,038
| 2,893,712
|
Pandas Batch Update Account String
|
<p>My organization has account numbers that are built by combining multiple fields. The last field is always 4 characters (typically 0000).</p>
<pre><code>Org Account
01 01-123-0000
01 01-456-0000
02 02-789-0000
02 02-456-0000
03 03-987-0000
03 03-123-1234
</code></pre>
<p>I also have a dictionary mapping of how many characters the last component should be.</p>
<pre><code>MAP = {'01': 4, '02': 3, '03': 3}
</code></pre>
<p>However, there are also special mappings for Org 03:</p>
<pre><code>D03_SPECIAL_MAP = {'0000': '012', '1234': '123'}
</code></pre>
<p>My code to update the last component is:</p>
<pre><code>for i, r in df.iterrows():
    updated = False  # Keep track if we have updated this row
    # Split off last component from the rest of the account
    Acct, last_comp = r['Account'].rsplit('-', 1)
    # Check if we need to update code length and the code length does not match
    if r['Org'] in MAP and len(last_comp) != MAP[r['Org']]:
        df.at[i, 'Account'] = Acct + "-" + last_comp.zfill(MAP[r['Org']])
        updated = True
    # Special mapping for Org 03
    if r['Org'] == '03' and last_comp in D03_SPECIAL_MAP.keys():
        df.at[i, 'Account'] = Acct + "-" + D03_SPECIAL_MAP[last_comp]
        updated = True
    if not updated:  # Join default if we have not hit either of the conditions above
        df.at[i, 'Account'] = Acct + "-" + last_comp
</code></pre>
<p>The output of this will be:</p>
<pre><code>Org Account
01 01-123-0000
01 01-456-0000
02 02-789-000
02 02-456-000
03 03-987-012
03 03-123-123
</code></pre>
<p>My code works as expected except this process is a little slow to check every record. Is there a way to perform the same operation without using <code>df.iterrows()</code>?</p>
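<p>For reference, a minimal sketch (untested, and assuming the <code>df</code>, <code>MAP</code> and <code>D03_SPECIAL_MAP</code> shown above) of a mostly vectorized version without <code>df.iterrows()</code>:</p>
<pre><code>import pandas as pd

# Split each account into prefix and last component in one vectorized pass
parts = df['Account'].str.rsplit('-', n=1, expand=True)
prefix, last = parts[0], parts[1]

# Special mapping for Org 03
is_special = df['Org'].eq('03') &amp; last.isin(D03_SPECIAL_MAP.keys())
last = last.mask(is_special, last.map(D03_SPECIAL_MAP))

# Pad the last component to the mapped width (note: zfill never truncates)
widths = df['Org'].map(MAP)
last = pd.Series(
    [comp if pd.isna(w) else comp.zfill(int(w)) for comp, w in zip(last, widths)],
    index=df.index,
)

df['Account'] = prefix + '-' + last
</code></pre>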
|
<python><pandas>
|
2025-01-27 20:00:00
| 1
| 8,806
|
Bijan
|
79,391,995
| 685,022
|
Post multipart/form-data using Python's httpx library with only form data
|
<p>How do I send a POST with Python HTTPX that will mimic a curl POST that works? This is to an Opengear REST API. I believe it has something to do with the data field of the POST.</p>
<p>This is the working curl hitting the REST API properly.</p>
<pre class="lang-bash prettyprint-override"><code>curl -s -k \
-X POST \
-H "Authorization: Token ${TOKEN}" \
-H "Content-Type: multipart/form-data" \
-F firmware_url="https://example.com/path/to/file/file.name" \
-F firmware_options="-R" \
https://${SERVER}/api/v2/system/firmware_upgrade
</code></pre>
<p>Based on the httpx documentation this should be a simple call with a dictionary of the fields passed to the post method. It also says that posts with files default to streaming; it does not clearly state whether posts without files use streaming. Based on the nginx server logs it appears that it's not a streaming request.</p>
<p>I'm modifying the header <code>content-type</code> as the server is expecting a <code>multipart/form-data</code> and the httpx.post defaults to the <code>application/x-www-form-urlencoded</code>.</p>
<p>Here is the Python and the different kinds of POSTs I've attempted (not all at once):</p>
<pre class="lang-py prettyprint-override"><code># Get a session token and creates the httpx.Client object api_client.
# Multiple GETs are called from the client before trying to post.
...
# POST section
headers = {"Content-Type": "multipart/form-data"}
data = {
"firmware_url": f"https://example.com/path/to/file/file.name",
"firmware_options": "-R"
}
# Simple post
res = api_client.post(f"https://{SERVER}/api/v2/system/firmware_upgrade", data=data, headers=headers)
# Data converted to json string
res = api_client.post(f"https://{SERVER}/api/v2/system/firmware_upgrade", data=json.dumps(data), headers=headers)
# Data converted to byte-encoded JSON string
byte_data = json.dumps(data).encode(encoding="utf-8")
res = api_client.post(f"https://{SERVER}/api/v2/system/firmware_upgrade", data=byte_data, headers=headers)
</code></pre>
<p>The nginx logs show different lines depending on which post is used.</p>
<pre class="lang-none prettyprint-override"><code>CURL with -F form fields (working, appears to be a stream)
::ffff:10.0.0.1 - - [24/Jan/2025:22:39:47 +0000] "POST /api/v2/system/firmware_upgrade HTTP/1.1" 200 54 "-" "curl/8.5.0" rt=0.054 uct="0.001" uht="0.054" urt="0.054"
HTTPX with raw data dict
::ffff:10.0.0.1 - - [24/Jan/2025:22:38:51 +0000] "POST /api/v2/system/firmware_upgrade HTTP/1.1" 000 0 "-" "python-httpx/0.28.1" rt=0.000 uct="-" uht="-" urt="-"
::ffff:10.0.0.1 - - [24/Jan/2025:22:38:51 +0000] "firmware_url=https%3A%2F%2Fexample.com%2Fpath%2Fto%2Ffile%2Ffile.name&firmware_options=-R" 400 157 "-" "-" rt=0.027 uct="-" uht="-" urt="-"
HTTPX with JSON dumps
::ffff:10.0.0.1 - - [24/Jan/2025:22:38:08 +0000] "POST /api/v2/system/firmware_upgrade HTTP/1.1" 000 0 "-" "python-httpx/0.28.1" rt=0.000 uct="-" uht="-" urt="-"
::ffff:10.0.0.1 - - [24/Jan/2025:22:38:08 +0000] "{\x22firmware_url\x22: \x22https://example.com/path/to/file/file.name\x22, \x22firmware_options\x22: \x22-R\x22}" 400 157 "-" "-" rt=0.028 uct="-" uht="-" urt="-"
HTTPX with JSON dumps to byte encoded
::ffff:10.0.0.1 - - [27/Jan/2025:19:01:45 +0000] "POST /api/v2/system/firmware_upgrade HTTP/1.1" 000 0 "-" "python-httpx/0.28.1" rt=0.000 uct="-" uht="-" urt="-"
::ffff:10.0.0.1 - - [27/Jan/2025:19:01:45 +0000] "{\x22firmware_url\x22: \x22https://example.com/path/to/file/file.name\x22, \x22firmware_options\x22: \x22-R\x22}" 400 157 "-" "-" rt=0.006 uct="-" uht="-" urt="-"
</code></pre>
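<p>For what it's worth, a minimal sketch of the direction I suspect (assumption: like requests, httpx accepts <code>(None, value)</code> tuples in <code>files=</code> to produce plain multipart parts, and it sets the <code>Content-Type</code> header with the boundary itself, so the header should not be set manually):</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: force a multipart/form-data body with no real file,
# and let httpx generate the Content-Type/boundary header.
files = {
    "firmware_url": (None, "https://example.com/path/to/file/file.name"),
    "firmware_options": (None, "-R"),
}
res = api_client.post(
    f"https://{SERVER}/api/v2/system/firmware_upgrade",
    files=files,  # no manual Content-Type header
)
</code></pre>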
|
<python><post><multipartform-data><httpx>
|
2025-01-27 19:44:17
| 1
| 1,903
|
krizzo
|
79,391,914
| 16,725,431
|
Notification after shutdown on Windows using Python
|
<p>I want to create a function that displays a message for the user when Windows starts again after a shutdown.</p>
<p>It is used together with the shutdown command <code>shutdown /s /hybrid</code></p>
<p>Here's my attempt, with the help of AI:</p>
<pre><code>import os

def leave_message_on_startup(message):
    script_content = f"""
import pymsgbox
message = "{message}"
pymsgbox.alert(message, "Startup Message")
"""
    dirname = os.path.dirname
    script_path = f"{dirname(dirname(dirname(__file__)))}\\temp\\DisplayMessageOnStartup.pyw"
    os.makedirs("temp", exist_ok=True)
    with open(script_path, "w+") as script_file:
        script_file.write(script_content)
    create_task_command = f'schtasks /create /tn "{task_name}" /tr "py {script_path}" /sc onstart /f'
    os.system(create_task_command)

def trigger_task(task_name):
    trigger_task_command = f'schtasks /run /tn "{task_name}"'
    os.system(trigger_task_command)

task_name = "DisplayMessageOnStartup"
message = "Sample message"
leave_message_on_startup(message)
trigger_task(task_name)
</code></pre>
<p>After running the code, I double-clicked on the created script in File Explorer and it worked.</p>
<p>Even though the script creation was successful, the task creation seems to have failed.</p>
<p>I can't find the problem.</p>
<p>Please enlighten me on ways to achieve notifications after Windows shutdown.</p>
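<p>For reference, a minimal sketch of what I could add to see why the task creation fails (assumption: the same <code>task_name</code> and <code>script_path</code> names as above, using <code>subprocess</code> so the schtasks output is captured instead of being discarded by <code>os.system</code>):</p>
<pre><code>import subprocess

# Hedged sketch: run schtasks through subprocess so its output and error text
# are visible, which should show the reason the task creation failed.
result = subprocess.run(
    ["schtasks", "/create", "/tn", task_name, "/tr", f"py {script_path}", "/sc", "onstart", "/f"],
    capture_output=True, text=True,
)
print(result.returncode)
print(result.stdout)
print(result.stderr)
</code></pre>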
|
<python><windows><startup><shutdown><windows-task-scheduler>
|
2025-01-27 19:11:10
| 2
| 444
|
Electron X
|
79,391,748
| 1,552,080
|
Communicating with REST service using Python/Flask, javascript and ajax
|
<p>I have a web application fetching data from database behind a REST service creating output in form of a table:</p>
<p><a href="https://i.sstatic.net/JfBK6Ad2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfBK6Ad2.png" alt="Output of Flask/javascript code generating table with data." /></a></p>
<p>As you can see, there are two buttons (Collect/Archive) in each row of the table allowing the user to update the database via another REST service which accepts data via HTTP PUT.</p>
<p>The click actions of the buttons are implemented in JavaScript. When a button is clicked the following code is executed:</p>
<pre><code>function updateSourceState(sourceTypeId, sourceId, dataForUpdate) {
    $.ajax({
        url: '/update',
        data: {source_type_id: sourceTypeId, source_id: sourceId, data_for_update: dataForUpdate},
        dataType: 'json',
        type: 'PUT',
        success: function (data) {
            console.log('Data updated successfully');
        },
        error: function(data) {
            console.log('Data update failed. Response: ' + data);
        }
    }).done(function() {
        console.log('DONE: data updated successfully');
    }).fail(function(msg) {
        console.log('FAIL: data update failed: ' + msg);
    }).always(function(msg) {
        console.log('ALWAYS message: ' + dataForUpdate + ', url: ' + url);
    });
}
</code></pre>
<p>The communication with the REST service is implemented using Python-Flask and javascript. The main program (WebViewer.py) defines routes like this:</p>
<pre><code>@bp.route('/update', methods=['PUT'])
def update_source_state():
    app.logger.debug('[DEBUG] entering update_source_state()')
    source_id = request.args.get('source_id')
    source_type = request.args.get('source_type_id')
    dataForUpdate = request.args.get('data_for_update')
    url = '{0}/{1}/source/{2}'.format(ARCHIVE_CONFIG_BASE_URL, source_type, source_id)
    response = requests.put(url=url,
                            data=dataForUpdate,
                            headers={'Content-Type': 'application/json', 'accept': 'application/json'})
    if response.status_code == 200:
        return jsonify({'status': 'success'}), 200
    elif response.status_code == 405:
        return jsonify({'status': 'method not allowed'}), 405
    elif response.status_code == 500:
        return jsonify({'status': 'server error'}), 500
    else:
        return jsonify({'status': 'failed'}), 600
</code></pre>
<p>I see in the browser debugger that javascript <code>updateSourceState()</code> is called and the variables are filled. However, in method <code>update_source_state()</code> the <code>request.args</code> are all None. It seems that the arguments are not passed from javascript to Flask/Python.</p>
<p>What is the right syntax to perform a PUT request to the REST service in this setup? Any help appreciated.</p>
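<p>For what it's worth, a minimal sketch of the direction I suspect (assumption: jQuery sends the <code>data</code> of a PUT in the request body as form-encoded fields, so they would show up in <code>request.form</code> rather than in the query-string <code>request.args</code>):</p>
<pre><code>@bp.route('/update', methods=['PUT'])
def update_source_state():
    # Hedged sketch: read the fields from the request body instead of the query string
    source_id = request.form.get('source_id')
    source_type = request.form.get('source_type_id')
    data_for_update = request.form.get('data_for_update')
    ...
</code></pre>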
|
<javascript><python><ajax><flask>
|
2025-01-27 18:04:30
| 1
| 1,193
|
WolfiG
|
79,391,696
| 5,110,870
|
How do you manage (create/delete/update) a custom classification?
|
<p>I am new to Purview, and have been playing with the <code>azure-purview</code> Python package.</p>
<p>I noticed that Purview relies a lot on AtlasEntity, but I was wondering if anyone knew how to create, delete, or update a custom classification programmatically.</p>
<p>Would a classification be an entity?</p>
<p>If so, how should this JSON be structured for a Custom Classification? For example, what is the right value for <code>typeName</code>?</p>
<pre><code>{
    "referredEntities": {},
    "entity": {
        "typeName": "azure_storage_account",
        "attributes": {
            "owner": "ExampleOwner",
            "modifiedTime": 0,
            "createTime": 0,
            "qualifiedName": "https://exampleaccount.core.windows.net",
            "name": "ExampleStorageAccount",
            "description": null,
            "publicAccessLevel": null
        },
        "contacts": {
            "Expert": [
                {
                    "id": "30435ff9-9b96-44af-a5a9-e05c8b1ae2df",
                    "info": "Example Expert Info"
                }
            ],
            "Owner": [
                {
                    "id": "30435ff9-9b96-44af-a5a9-e05c8b1ae2df",
                    "info": "Example Owner Info"
                }
            ]
        },
        "status": "ACTIVE",
        "createdBy": "ExampleCreator",
        "updatedBy": "ExampleUpdator",
        "version": 0
    }
}
</code></pre>
<p>I ask because the <code>create_or_update(collection: str, entity: MutableMapping[str, Any], **kwargs: Any) -> MutableMapping[str, Any]</code> method seems to expect an entity as input.</p>
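<p>For reference, a hedged sketch of what I think the payload shape might be (assumption: Purview's data map follows the Apache Atlas conventions, where a classification is created as a <code>classificationDef</code> via the type-definitions API rather than as an entity; all names below are illustrative):</p>
<pre><code># Hedged sketch: a type-definition payload for a custom classification,
# following Apache Atlas conventions. Names are illustrative only.
classification_typedefs = {
    "classificationDefs": [
        {
            "category": "CLASSIFICATION",
            "name": "MY_CUSTOM_CLASSIFICATION",
            "description": "Example custom classification",
            "superTypes": [],
            "attributeDefs": [],
        }
    ]
}
</code></pre>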
|
<python><azure-purview>
|
2025-01-27 17:43:01
| 1
| 7,979
|
FaCoffee
|
79,391,609
| 3,200,163
|
Python 3 how to zip the contents and folder that are in another folder
|
<p>I want to have a folder called 'product_a1' inside another folder called 'product_a1', containing a text file and process flag files such as a 'completed' file. I want to zip the entire thing so it can be put onto an FTP server. The server side is currently PHP using 7zip and expects this structure, but I need to do this using Python code. It's the structure that's important here, as shutil defaults to a single-level folder, and so does the Python 7zip package, etc.</p>
<pre><code>/product_a1/product_a1/my_file.txt
/completed
/etc
</code></pre>
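<p>For reference, a minimal sketch of what I have in mind (assuming the outer <code>product_a1</code> folder already contains the nested <code>product_a1</code> folder and the flag files), using the standard library <code>zipfile</code> so the nested paths are kept inside the archive:</p>
<pre><code>import zipfile
from pathlib import Path

# Hedged sketch: zip everything under the outer product_a1 folder, keeping
# the product_a1/product_a1/... structure inside the archive.
root = Path("product_a1")  # outer folder (assumed layout)
with zipfile.ZipFile("product_a1.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in root.rglob("*"):
        zf.write(path, path.relative_to(root.parent))
</code></pre>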
|
<python><zip><7zip><shutil>
|
2025-01-27 17:13:16
| 1
| 629
|
Andrew Day
|
79,391,602
| 15,484,393
|
How to mock for subprocesses spawned by ProcessPoolExecutor in Python (here: elasticsearch.Elasticsearch)?
|
<p>I am trying to mock the <code>elasticsearch.Elasticsearch</code> class using <code>unittest.mock.patch</code> in a test, but the mock is not being applied to subprocesses created by <code>ProcessPoolExecutor</code>. Here's the code I'm working with:</p>
<p>main.py</p>
<pre><code>import concurrent.futures
import elasticsearch

class ThreadTest:
    def __init__(self):
        pass

    def process(self):
        futures = []
        with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
            future = executor.submit(
                self.just_es,
            )
            futures.append(future)
            concurrent.futures.wait(fs=futures)
            for future in futures:
                result = future.result()

    def just_es(self):
        print(elasticsearch.Elasticsearch)  # I want this to be mocked
        return "hey"
</code></pre>
<p>test.py</p>
<pre><code>import elasticsearch
import pytest

from main import ThreadTest

class MockElasticsearchClient:
    def __init__(self):
        pass

    @property
    def meta(self):
        return

@pytest.fixture(autouse=True)
def mock_es(monkeypatch):
    def fake_es_client_6_for_run_query(*args, **kwargs):
        return MockElasticsearchClient()

    monkeypatch.setattr(
        elasticsearch,
        "Elasticsearch",
        fake_es_client_6_for_run_query,
    )

def test_thread(monkeypatch, mock_es):
    tt = ThreadTest()
    tt.just_es()  # This works
    tt.process()  # This doesn't work, Elasticsearch is not mocked in subprocesses
</code></pre>
<p>Output:</p>
<pre><code><function mock_es.<locals>.fake_es_client_6_for_run_query at 0x102703c40> # output by just_es()
<class 'elasticsearch.client.Elasticsearch'> # output by process which internally calls just_es()
</code></pre>
<p>The mock works for the main process, but not for the subprocesses spawned by ProcessPoolExecutor. I believe this is because monkeypatch is only applied in the main process, but I need the mock to be propagated to all subprocesses.</p>
<p>What else I've tried:</p>
<p>I’ve tried using initializer with ProcessPoolExecutor to apply the mock, but the mock is still not applied in subprocesses.</p>
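<p>For reference, the kind of initializer-based variant I had in mind (the initializer has to be a module-level function, and the mock class must be importable inside each worker):</p>
<pre><code>from unittest import mock
import concurrent.futures
import elasticsearch

def _patch_es():
    # Hedged sketch: runs once in every worker process, so the patch exists
    # in the process where just_es() actually executes.
    mock.patch.object(elasticsearch, "Elasticsearch", MockElasticsearchClient).start()

with concurrent.futures.ProcessPoolExecutor(max_workers=5, initializer=_patch_es) as executor:
    ...
</code></pre>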
|
<python><multiprocessing><pytest><concurrent.futures><monkeypatching>
|
2025-01-27 17:08:26
| 0
| 478
|
TSnake
|
79,391,560
| 1,700,890
|
Annotated python list with function as meta data
|
<p>I am trying to find out the purpose of the following code. What does it do exactly? Why is there an operator in place of the metadata? How is <code>operator.add</code> used? Does it append to the list, and if so, when and how?
I took it from the following tutorial:
<a href="https://python.langchain.com/docs/tutorials/summarization/" rel="nofollow noreferrer">Langchain</a></p>
<pre><code>import operator
from typing import Annotated
summaries: Annotated[list, operator.add]
</code></pre>
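<p>As far as I can tell, the annotation does nothing by itself at runtime; the framework in the linked tutorial (LangGraph) reads the <code>Annotated</code> metadata and uses <code>operator.add</code> as the "reducer" that merges two values of this field, which for lists means concatenation. A minimal sketch of how such metadata can be read:</p>
<pre><code>import operator
from typing import Annotated, get_type_hints

class State:
    summaries: Annotated[list, operator.add]

# Hedged sketch: retrieve the metadata attached to the annotation and use it
# the way a framework would, as a merge function for this field.
hints = get_type_hints(State, include_extras=True)
reducer = hints["summaries"].__metadata__[0]   # operator.add
print(reducer([1, 2], [3]))                    # [1, 2, 3]
</code></pre>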
|
<python><python-typing>
|
2025-01-27 16:52:50
| 0
| 7,802
|
user1700890
|
79,391,480
| 20,302,906
|
Can't assert folder creation path with unittest patch
|
<p>I'm trying to assert all paths that were passed to <code>os.makedirs</code> to test that a folder structure has been created. The code isn't complicated and I'm sure it works fine, but my test reports that the method wasn't called with the paths I'm passing when checking calls with <code>assert_any_call(my_path, 511)</code>. What's weird in this case is that running <code>assert_called_once</code> as a hack to see the output shows that the method was actually called with the expected paths. What's going wrong here? I wouldn't like to leave this part of my code untested, since another dev could work on it and might fall into the trap of double-checking it and waste their time.</p>
<p><em>Code</em></p>
<pre class="lang-py prettyprint-override"><code>def generate_base_folders():
    if not os.path.exists(Path.cwd() / "unity_project"):
        raise FolderCreationException("unity_project folder doesn't exist in the project")
    for category in ["characters", "environments", "ui", "cinematics"]:
        for tag in ["concept", "model", "animation", "vfx", "sfx", "vo", "music"]:
            os.makedirs(Path.cwd() / "sessions" / category / "common" / tag)
            os.makedirs(Path.cwd() / "unity_project" / "Assets" / category / "common" / tag)
</code></pre>
<p><em>Test</em></p>
<pre class="lang-py prettyprint-override"><code>@patch("os.makedirs")
def test_base_folders(self, mkdirs_mock):
    assertions = []
    for category in ["characters", "environments", "ui", "cinematics"]:
        for tag in ["concept", "model", "animation", "vfx", "sfx", "vo", "music"]:
            assertions.append(Path.cwd() / "sessions" / category / "common" / tag)
            assertions.append(
                Path.cwd() / "unity_project" / "Assets" / category / "common" / tag
            )
    sessions.generate_base_folders()
    for a in assertions:
        mkdirs_mock.assert_any_call(a, 511)
</code></pre>
<p><em>assert_any_call traceback</em></p>
<pre><code>Fss.
======================================================================
FAIL: test_base_folders (sessions.tests.sessions_tests.TestSessionsSetup.test_base_folders)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\unittest\mock.py", line 1424, in patched
return func(*newargs, **newkeywargs)
File "C:\Users\juank\dev\projects\python\gamedev_eco\sessions\tests\sessions_tests.py", line 40, in test_base_folders
mkdirs_mock.assert_any_call(a, 511)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\unittest\mock.py", line 1048, in assert_any_call
raise AssertionError(
'%s call not found' % expected_string
) from cause
AssertionError: makedirs(WindowsPath('C:/Users/juank/gamedev_eco/sessions/characters/common/concept'), 511) call not found
----------------------------------------------------------------------
Ran 4 tests in 0.018s
FAILED (failures=1, skipped=2)
</code></pre>
<p><em>assert_called_once traceback</em></p>
<pre><code>Fss.
======================================================================
FAIL: test_base_folders (sessions.tests.sessions_tests.TestSessionsSetup.test_base_folders)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\unittest\mock.py", line 1424, in patched
return func(*newargs, **newkeywargs)
File "C:\Users\juank\dev\projects\python\gamedev_eco\sessions\tests\sessions_tests.py", line 40, in test_base_folders
mkdirs_mock.assert_called_once()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\unittest\mock.py", line 956, in assert_called_once
raise AssertionError(msg)
AssertionError: Expected 'makedirs' to have been called once. Called 56 times.
Calls: [call(WindowsPath('C:/Users/juank/gamedev_eco/sessions/characters/common/concept')),
(truncated the list for brevity)
</code></pre>
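<p>For reference, the recorded calls in the second traceback show <code>makedirs(path)</code> with no second argument, since the code never passes a mode explicitly; a mock only records the arguments actually passed, so <code>call(path, 511)</code> cannot match even though 511 is the default mode. A minimal sketch of the assertion that would match those calls:</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: assert on the call signature that was actually recorded.
for a in assertions:
    mkdirs_mock.assert_any_call(a)
</code></pre>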
|
<python><python-3.x><unit-testing><mocking><python-unittest>
|
2025-01-27 16:27:10
| 1
| 367
|
wavesinaroom
|
79,391,408
| 451,878
|
How to test (pytest), or mock, an app.state object of fastapi
|
<p>After some research, I can't find how to mock <code>app.state.*</code> object(s).
Here is my code:</p>
<pre><code>def get_app():
    settings = get_settings()
    application = FastAPI()
    config = {...}
    application = create_app(**config)
    return application

app = get_app()

@pytest.fixture(scope="module")
def test_app():
    client = TestClient(app)
    yield client

def test_liveness(test_app):
    response = test_app.get("/health/liveness")
    assert response.status_code == 200
    assert response.json() == {"status": "ready"}
</code></pre>
<p>The <code>test_liveness</code> test doesn't work because the endpoint uses an <code>app.state.*</code> object (previously created in the main module):</p>
<pre><code>db_client = my_class(dataset, project_id, connector_parameters)
await db_client.create_clients()
app.state.db_client = db_client
</code></pre>
<p>So, how do I simulate the <code>app.state.db_client</code> object used in this endpoint <em>/health/liveness</em>:</p>
<pre><code>if request.app.state.db_client and request.app.state.db_client.is_connected:
    return Health(status="ready")
else:
    raise HTTPException(status_code=503, detail="Not ready")
</code></pre>
<p>The error I've got is this one :</p>
<p><code>AttributeError: 'State' object has no attribute 'db_client'</code></p>
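<p>For reference, a minimal sketch of what I imagine the fixture could do (assumption: setting a stand-in object on <code>app.state</code> before building the <code>TestClient</code> is enough for the endpoint to see it):</p>
<pre><code>from types import SimpleNamespace

@pytest.fixture(scope="module")
def test_app():
    # Hedged sketch: give the app a fake db_client that reports being connected,
    # so the /health/liveness endpoint has something to check.
    app.state.db_client = SimpleNamespace(is_connected=True)
    client = TestClient(app)
    yield client
</code></pre>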
<p>Thanx</p>
|
<python><google-bigquery><mocking><pytest>
|
2025-01-27 16:03:59
| 0
| 1,481
|
James
|
79,391,242
| 668,455
|
How to customise the LlamaIndex starter tutorial to use the latest Llama model hosted on "akash.network"
|
<p>The Akash chat API is supposed to be OpenAI-compatible: <a href="https://chatapi.akash.network/documentation" rel="nofollow noreferrer">https://chatapi.akash.network/documentation</a>. It works with the basic OpenAI SDK:</p>
<pre><code>import openai
import textwrap

client = openai.OpenAI(
    api_key="sk-xxxxxxxx",
    base_url="https://chatapi.akash.network/api/v1"
)

response = client.chat.completions.create(
    model="Meta-Llama-3-1-8B-Instruct-FP8",
    messages=[
        {
            "role": "user",
            "content": "Who are you?"
        }
    ],
)

print(textwrap.fill(response.choices[0].message.content, 50))
</code></pre>
<p>If I customize the LlamaIndex starter tutorial like this :</p>
<pre><code>from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai import OpenAI
API_BASE_URL="https://chatapi.akash.network/api/v1"
EMBEDDING_MODEL="BAAI/bge-small-en-v1.5"
LLM_MODEL="Meta-Llama-3-3-70B-Instruct"
# Load your documents
documents = SimpleDirectoryReader("data").load_data()
# Pick an available OpenAI compatible model
custom_llm = OpenAI(api_base=API_BASE_URL, model=LLM_MODEL)
# Initialize the HuggingFace embedding model
embedding_model = HuggingFaceEmbedding(model_name=EMBEDDING_MODEL)
# Set the local embedding model
Settings.embed_model = embedding_model
# Build the index using the local embeddings
index = VectorStoreIndex.from_documents(documents, llm=custom_llm)
query_engine = index.as_query_engine(llm=custom_llm)
response = query_engine.query("What did the author do growing up?")
print(response)
</code></pre>
<p>I get this error from the client lib :
"Please provide a valid OpenAI model name in: <strong>o1, o1-2024-12-17, o1-preview, o1-preview-2024-09-12, o1-mini,</strong> (...)"</p>
<pre class="lang-py prettyprint-override"><code> File "C:\Dev\projects\llama-index-starter-tuto\.venv\Lib\site-packages\llama_index\llms\openai\utils.py", line 236, in openai_modelname_to_contextsize
raise ValueError(
ValueError: Unknown model 'Meta-Llama-3-3-70B-Instruct'. Please provide a valid OpenAI model name in: o1, o1-2024-12-17, o1-preview, o1-preview-2024-09-12, o1-mini, (...)
</code></pre>
<p>If I remove the "llm" parameter from "as_query_engine" and set a OPENAI_API_BASE=https://chatapi.akash.network/api/v1 env variable, I get this from the API :
"Not allowed to call model=gpt-3.5-turbo. Allowed team models = [<strong>'llama3-8b', 'Meta-Llama-3-1-405B-Instruct-FP8', 'llama3-8b-instruct', 'Meta-Llama-3-1-8B-Instruct-FP8'</strong>, (...)"</p>
<pre><code>openai.AuthenticationError: Error code: 401 - {'error': {'message': "Authentication Error, Team=643a4183-7eb9-4c20-8e31-db45843bffbe not allowed to call model=gpt-3.5-turbo. Allowed team models = ['llama3-8b', 'Meta-Llama-3-1-405B-Instruct-FP8', 'llama3-8b-instruct', 'Meta-Llama-3-1-8B-Instruct-FP8', 'Meta-Llama-3-2-3B-Instruct', 'nvidia-Llama-3-1-Nemotron-70B-Instruct-HF', 'Meta-Llama-3-3-70B-Instruct', (...)]", 'type': 'auth_error', 'param': 'None', 'code': '401'}}
</code></pre>
<p>Does this mean there is no way to use <code>llama_index.llms.openai</code> to reach the Akash endpoint? The client lib expects one set of models and the API expects another?</p>
<p>UPDATE :</p>
<p>I've also tried to use LlamaAPI class :</p>
<pre><code>from llama_index.llms.llama_api import LlamaAPI
#(...)
#custom_llm = OpenAI(api_base=API_BASE_URL, model=LLM_MODEL)
custom_llm = LlamaAPI(api_key=LLAMA_API_KEY)
</code></pre>
<p>but there is no way to customize the api base URL.</p>
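<p>For reference, a sketch of what might sidestep the model-name validation (assumption: the <code>llama-index-llms-openai-like</code> package and its <code>OpenAILike</code> class, which accepts arbitrary model names for OpenAI-compatible endpoints):</p>
<pre><code># Hedged sketch: OpenAILike targets OpenAI-compatible APIs without validating
# the model name against the official OpenAI model list.
from llama_index.llms.openai_like import OpenAILike

custom_llm = OpenAILike(
    api_base=API_BASE_URL,
    api_key="sk-xxxxxxxx",
    model=LLM_MODEL,
    is_chat_model=True,
)
</code></pre>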
|
<python><large-language-model><llama-index>
|
2025-01-27 15:01:45
| 1
| 9,191
|
Tristan
|
79,390,866
| 4,877,683
|
anndata.concat resulting in 4x the size of the individual files causing memory issues
|
<p>I am new to anndata and would like to know if an issue that I am running into is expected or not.</p>
<p>I have 28 h5ad files (Tabula Sapiens)(<a href="https://figshare.com/articles/dataset/Tabula_Sapiens_v2/27921984" rel="nofollow noreferrer">https://figshare.com/articles/dataset/Tabula_Sapiens_v2/27921984</a>), that I am trying to consolidate into one h5ad file in order to calculate some statistics.
I used the code below:</p>
<pre><code># Initialize an empty AnnData object
merged_adata = None
input_files = os.listdir("/full_data/ts_individual_data/")

# Read and concatenate each file
for file in input_files:
    print(f"Processing file: {file}")
    adata = sc.read_h5ad(f"/full_data/ts_individual_data/{file}")
    print(f"Read Done for {file}")
    if merged_adata is None:
        merged_adata = adata
    else:
        merged_adata = ad.concat([merged_adata, adata], axis=0, join='outer', merge='unique')

# Write the merged data to the output file in chunks
print(f"Writing merged data to {output_file}")
merged_adata.write_h5ad(output_file)
</code></pre>
<p>The total size of the individual tissue h5ads is 53 GB,
whereas the size of the combined file is 222 GB.</p>
<p>I have checked for duplicates in the var and obs layers: No duplicates were found
I have compared expressed values for a couple of cell_ids against one tissue in the combined file and the individual tissue h5ad: Both values match.</p>
<p>Due to this I'm running into memory issues.
What could be done to resolve this?
Is the approach to combine the h5ad files correct?</p>
|
<python><bioinformatics><scanpy><anndata>
|
2025-01-27 12:56:02
| 1
| 703
|
Danish Zahid Malik
|
79,390,711
| 1,472,474
|
Correct annotation for "apply" function
|
<p>In Python-3.10 (it must be this version) I want to add better annotation for my <code>apply</code> function:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Callable, Sequence, Any

T = TypeVar('T')

def apply(fn: Callable[..., T], vals: Sequence[Any]) -> T:
    return fn(*vals)

def f(a: int, b: int) -> int:
    return a + b

print(apply(f, (1, 2)))
</code></pre>
<p>Because in this code, the type annotation "hides" the connection between <code>vals</code> and <code>fn</code> arguments, so <code>mypy</code> can't check types and number of arguments correctly.</p>
<h2>What have I tried:</h2>
<h4>1. using <code>ParamSpec</code></h4>
<pre class="lang-py prettyprint-override"><code>from typing import ParamSpec

P = ParamSpec('P')

def apply(fn: Callable[P, T], vals: P) -> T:
    return fn(*vals)
</code></pre>
<p>which gives this error:</p>
<pre><code>tst_apply.py:6: error: Invalid location for ParamSpec "P" [valid-type]
tst_apply.py:6: note: You can use ParamSpec as the first argument to Callable, e.g., "Callable[P, int]"
tst_apply.py:7: error: Too few arguments [call-arg]
Found 2 errors in 1 file (checked 1 source file)
</code></pre>
<h4>2. using <code>ParamSpec.args</code></h4>
<pre class="lang-py prettyprint-override"><code>from typing import ParamSpec

P = ParamSpec('P')

def apply(fn: Callable[P, T], vals: P.args) -> T:
    return fn(*vals)
</code></pre>
<p>which gives this error:</p>
<pre><code>tst_apply.py:7: error: Too few arguments [call-arg]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<h4>3. using <code>TypeVarTuple</code> as suggested in <a href="https://stackoverflow.com/a/79390786/12439683">Daraan's answer Revision 1</a>:</h4>
<p>Even though <code>TypeVarTuple</code> is available only from python-3.11, I can import it from <code>typing_extensions</code>, but it still doesn't work - it works when I change <code>vals: *Ts</code> to <code>*vals: *Ts</code> but that's unusable for me:</p>
<pre class="lang-py prettyprint-override"><code>Ts = TypeVarTuple('Ts')

def apply(fn: Callable[[*Ts], T], vals: *Ts) -> T:
    return fn(*vals)
</code></pre>
<p>which gives this error:</p>
<pre><code>tst_apply.py:7: error: invalid syntax [syntax]
def apply(fn: Callable[[*Ts], T], vals: *Ts) -> T:
^
Found 1 error in 1 file (errors prevented further checking)
</code></pre>
<h4>4. using <code>Tuple[*Ts]</code> for <code>TypeVarTuple</code>:</h4>
<pre class="lang-py prettyprint-override"><code>def apply(fn: Callable[[*Ts], T], vals: Tuple[*Ts]) -> T:
    return fn(*vals)
</code></pre>
<p>which gives this error:</p>
<pre><code>tst_apply.py:7: error: invalid syntax. Perhaps you forgot a comma? [syntax]
def apply(fn: Callable[[*Ts], T], vals: Tuple[*Ts]) -> T:
^
Found 1 error in 1 file (errors prevented further checking)
</code></pre>
<h2>Question:</h2>
<p>Is there a way to annotate this so the connection between <code>fn</code> parameters and <code>vals</code> is not lost?</p>
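<p>For reference, a sketch of the Python 3.10 spelling I ended up considering, using <code>Unpack</code> from <code>typing_extensions</code> instead of the star syntax (the starred forms above are only valid syntax from Python 3.11; older mypy versions may additionally need <code>--enable-incomplete-feature=TypeVarTuple</code>):</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: spell the PEP 646 unpacking with Unpack, which parses on 3.10.
from typing import Callable, Tuple, TypeVar
from typing_extensions import TypeVarTuple, Unpack

T = TypeVar('T')
Ts = TypeVarTuple('Ts')

def apply(fn: Callable[[Unpack[Ts]], T], vals: Tuple[Unpack[Ts]]) -> T:
    return fn(*vals)
</code></pre>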
|
<python><python-typing><mypy><python-3.10>
|
2025-01-27 12:21:46
| 1
| 5,587
|
Jan Spurny
|
79,390,708
| 1,277,624
|
Understanding Type Variance in Python Protocols with Generic Types
|
<p>I'm trying to understand how type variance works with Python protocols and generics. My test cases seem to contradict what I expect regarding invariant, covariant, and contravariant behavior.</p>
<p>Here's a minimal example demonstrating the issue:</p>
<pre><code>from typing import TypeVar, Protocol

# Type variables
T = TypeVar('T')
T_co = TypeVar('T_co', covariant=True)
T_contra = TypeVar('T_contra', contravariant=True)

# Class hierarchy
class Animal: pass
class Dog(Animal): pass

# Protocols
class Feeder(Protocol[T]):
    def feed(self, animal: T) -> T: ...

class Adopter(Protocol[T_co]):
    def adopt(self) -> T_co: ...

class Walker(Protocol[T_contra]):
    def walk(self, animal: T_contra) -> None: ...

# Implementations
class AnimalFeeder:
    def feed(self, animal: Animal) -> Animal: ...

class DogFeeder:
    def feed(self, animal: Dog) -> Dog: ...

class AnimalAdopter:
    def adopt(self) -> Animal: ...

class DogAdopter:
    def adopt(self) -> Dog: ...

class AnimalWalker:
    def walk(self, animal: Animal) -> None: ...

class DogWalker:
    def walk(self, animal: Dog) -> None: ...
</code></pre>
<p>When testing type assignments, some cases behave differently than expected:</p>
<pre><code># Test cases with expected vs actual behavior
feeder1: Feeder[Dog] = DogFeeder() # Expected ✅ Actual ✅ (exact match)
feeder2: Feeder[Dog] = AnimalFeeder() # Expected ❌ Actual ❌ (invariant)
feeder3: Feeder[Animal] = DogFeeder() # Expected ❌ Actual ✅ (Why does this work?)
adopter1: Adopter[Dog] = DogAdopter() # Expected ✅ Actual ✅ (exact match)
adopter2: Adopter[Dog] = AnimalAdopter() # Expected ❌ Actual ❌ (return type mismatch)
adopter3: Adopter[Animal] = DogAdopter() # Expected ✅ Actual ✅ (covariant, correct)
walker1: Walker[Dog] = DogWalker() # Expected ✅ Actual ✅ (exact match)
walker2: Walker[Dog] = AnimalWalker() # Expected ✅ Actual ❌ (Should work with contravariance?)
walker3: Walker[Animal] = DogWalker() # Expected ❌ Actual ✅ (Why does this work?)
</code></pre>
<p><strong>Questions:</strong></p>
<ul>
<li>Why does feeder3 type-check when it seemingly violates invariance?</li>
<li>Why does walker2 fail when it should be valid with contravariance?</li>
<li>Are these behaviors correct according to Python's type system, or is
this a limitation in PyCharm's type checking?</li>
</ul>
<p>I'm using Python 3.10 and PyCharm 2024.3.1.</p>
|
<python><python-typing>
|
2025-01-27 12:20:12
| 1
| 408
|
rednammoc
|
79,390,586
| 6,708,322
|
Can I define an Alias to a foreign field in Django?
|
<p>I'm looking to define an alias to a foreign key related set so that it can then be used in a generic filtering function. To explain with a little more detail here is a simplified example of my use case demonstrating what I'm trying to achieve:</p>
<p><strong>models.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.db import models

class Family(models.Model):
    family_name = models.CharField(max_length=100)

    pets: models.Manager['Pet'] = ...
    members: models.Manager['Person'] = ...

    def __str__(self):
        return f'{self.id} - {self.family_name}'

class Pet(models.Model):
    class PetType(models.TextChoices):
        CAT = 'cat'
        DOG = 'dog'
        LIZARD = 'lizard'

    name = models.CharField(max_length=100)
    type = models.CharField(max_length=100, choices=PetType)
    family = models.ForeignKey(Family, on_delete=models.CASCADE, related_name='pets')

    def __str__(self):
        return f'{self.id} - {self.name} {self.family.family_name} [{self.type}]'

class Person(models.Model):
    name = models.CharField(max_length=100)
    age = models.IntegerField()
    family = models.ForeignKey(Family, on_delete=models.CASCADE, related_name='members')

    @property
    def pets(self) -> models.Manager[Pet]:
        return self.family.pets

    def __str__(self):
        return f'{self.id} - {self.name} {self.family.family_name}'
</code></pre>
<p>If I have a <code>Person</code> entity, I can use its <code>pets</code> property to get the pets from the family. What I'm looking for is a way to add an alias/annotation/whatever to a queryset of <code>Person</code> so that I can define:</p>
<pre class="lang-py prettyprint-override"><code>def has_a_cat(query_set):
    return query_set.filter(pets__type='cat')
</code></pre>
<p>and then be able to use that for both <code>Family</code> and <code>Person</code> query sets. I know I can filter a <code>Person</code> queryset by the pet type because <code>Person.objects.filter(family__pets__type='cat')</code> works perfectly fine, but I'd like a way to alias <code>family__pets</code> to <code>pets</code> so I can use the shared filter.</p>
<p>I've tried using <code>.annotate(pets=F('family__pets'))</code> and <code>.alias(pets=F('family__pets'))</code> but then when filtering on <code>pets__type</code> I get the following error:</p>
<pre><code>django.core.exceptions.FieldError: Unsupported lookup 'type' for BigAutoField or join on the field not permitted.
</code></pre>
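<p>For reference, a fallback sketch I'm considering if the alias approach isn't possible: parameterise the shared filter with the path prefix that leads to the pets relation instead of aliasing the relation itself.</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: the same filter logic, reusable for both querysets by passing
# the lookup path to the pets relation.
def has_a_cat(query_set, pets_path='pets'):
    return query_set.filter(**{f'{pets_path}__type': 'cat'})

cat_families = has_a_cat(Family.objects.all())
cat_people = has_a_cat(Person.objects.all(), pets_path='family__pets')
</code></pre>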
|
<python><django>
|
2025-01-27 11:34:58
| 1
| 1,111
|
Jake Conkerton-Darby
|
79,390,516
| 14,649,310
|
How to start ollama with docker compose with specific LLM model
|
<p>I have a docker compose where a dummy python app is using ollama LMs on the background for some tasks. I want to be able to tell ollama somehow which model to download and use on the app deployment. Now my docker compose is like this:</p>
<pre><code>version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
  pokemon_app:
    build: .
    container_name: pokemon_app
    ports:
      - "8000:8000"
    depends_on:
      elasticsearch:
        condition: service_healthy
      ollama:
        condition: service_started
    environment:
      LLM_MODEL_NAME: ${LLM_MODEL_NAME}
      ELASTICSEARCH_URL: ${ELASTICSEARCH_URL}
      OLLAMA_URL: ${OLLAMA_URL}
      LOG_LEVEL: DEBUG
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/healthcheck || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:11434 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
volumes:
  esdata:
    driver: local
  ollama:
    driver: local
</code></pre>
<p>I have this env variable with the default LLM name that I use in my app:</p>
<pre><code>LLM_MODEL_NAME
</code></pre>
<p>How can I make it so that by changing the value of this env variable I control which model ollama uses?</p>
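<p>For reference, one sketch of what I'm thinking (assumption: the ollama REST API's <code>/api/pull</code> endpoint): have the app itself ask the ollama container to pull the configured model on startup, so that changing <code>LLM_MODEL_NAME</code> alone decides which model gets downloaded and used.</p>
<pre><code>import os
import requests

# Hedged sketch: pull the configured model through the ollama HTTP API at app startup.
OLLAMA_URL = os.environ["OLLAMA_URL"]          # e.g. http://ollama:11434
LLM_MODEL_NAME = os.environ["LLM_MODEL_NAME"]  # e.g. llama3

resp = requests.post(
    f"{OLLAMA_URL}/api/pull",
    json={"name": LLM_MODEL_NAME, "stream": False},
    timeout=600,
)
resp.raise_for_status()
</code></pre>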
|
<python><linux><docker><docker-compose><ollama>
|
2025-01-27 11:08:05
| 1
| 4,999
|
KZiovas
|
79,390,166
| 5,618,856
|
Run python script with uv with path
|
<p>I'm using <a href="https://docs.astral.sh/uv/" rel="nofollow noreferrer">uv</a> as my python project manager. Within the project dir I can run my scripts with <code>uv run my_script.py</code> and all dependencies are resolved.</p>
<p>But how can I call this script if my terminal in on another path? <code>uv run /path/to/my_script.py</code> fails resolving the dependencies.</p>
|
<python><virtual-environment><uv>
|
2025-01-27 08:55:42
| 2
| 603
|
Fred
|
79,390,103
| 4,442,753
|
Numpy: vectorizing the comparison of 2 boolean arrays to derive start and end indices of regions of interest
|
<p>Assume I have 2 boolean arrays of the same size, <code>is_overlap</code> and <code>is_incomplete</code>.
I would like to retrieve the start and end indices (end indices excluded) of regions of interest in these arrays (these regions are of interest for an analysis performed in subsequent steps of the algo; the start and end indices will be used to slice a 3rd array of the same size).</p>
<p>To identify these regions, this set of conditions needs to be fulfilled.</p>
<ul>
<li>1/ a region of interest contains at least one <code>True</code> row in <code>is_overlap</code>,</li>
<li>2/ a region of interest <strong>may</strong> further extend to contiguous rows (be it <code>False</code> or <code>True</code> in <code>is_overlap</code>)
<ul>
<li>if they are contiguous <code>True</code> rows in <code>is_incomplete</code>,</li>
<li>if the total number of rows in the extended region is larger than or equal to <code>max_n_rows</code></li>
<li>the extended region is the one accounting for contiguous <code>is_incomplete</code> rows, and also contiguous regions (regions are merged together)</li>
</ul>
</li>
</ul>
<p>Example</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
max_n_rows = 4
# row indices 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
is_overlap = np.array([0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1], dtype=bool)
is_incomplete = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1], dtype=bool)
</code></pre>
<p>Result expected is then,</p>
<pre class="lang-py prettyprint-override"><code># region indices 0 1 2 3 4
region_of_interest_starts = np.array([1, 3, 9, 14, 16])
region_of_interest_ends_excl = np.array([2, 8, 13, 15, 17])
</code></pre>
<p>Explanations:</p>
<ul>
<li><p>region 0:</p>
<ul>
<li>condition 1/ region contains the row 1 because <code>is_overlap[1] = True</code></li>
<li>condition 2/ is not achieved. Region does not extend to row 0 even if <code>is_incomplete[0] = True</code>, because number of rows in this region would then be 2, which is lower than <code>max_n_rows</code>.</li>
</ul>
</li>
<li><p>region 1:</p>
<ul>
<li>condition 1/ region contains rows 4 and 5 because <code>is_overlap[4] = is_overlap[5] = True</code></li>
<li>condition 2/ is achieved. Region extends to contiguous rows 3, 6 and 7 (because <code>is_incomplete[3] = is_incomplete[6] = is_incomplete[7] = True</code>) and the total number of rows in this region is then 5, which is larger than or equal to <code>max_n_rows</code>.</li>
</ul>
</li>
<li><p>region 2:</p>
<ul>
<li>condition 1/ region 2 is actually made of 2 separate <code>True</code> in <code>is_overlap</code> at rows 9 and 11.</li>
<li>condition 2/ these 2 separate rows are however linked by the contiguous <code>is_incomplete[10] = True</code>. The total number of rows of this region is then 4 (made by rows 9, 10, 11, 12). Because it is larger than or equal to <code>max_n_rows</code>, merging is achieved.</li>
</ul>
</li>
<li><p>regions 3 and 4:</p>
<ul>
<li>start is similar to region 2. However, following the same logic, the total number of rows would then be 3 (made by rows 14, 15, 16) which is lower than <code>max_n_rows</code>. Merging with contiguous rows in <code>is_incomplete</code> is then not achieved. Only condition 1/ applies, resulting in 2 separate regions.</li>
</ul>
</li>
</ul>
|
<python><numpy>
|
2025-01-27 08:28:01
| 1
| 1,003
|
pierre_j
|
79,389,885
| 3,508,956
|
How do I print out the tensor values in between layers in Keras 3?
|
<p>I'm using Keras 3 with the PyTorch backend.</p>
<p>I'm trying to port a model written by someone else to another runtime, and I want to dump summary statistics about the tensor after each layer so I can figure out which operation I implemented incorrectly in my port (probably attention, lol).</p>
<p>How do I insert print statements into a Keras 3 model? All of the other answers I can find are related to <code>tf.keras</code>, which seems to be completely different from what I'm using. There is no method <a href="https://stackoverflow.com/questions/43448029/how-can-i-print-the-values-of-keras-tensors"><code>keras.backend.print_tensor()</code></a> either.</p>
<p>I also tried creating an intermediate model like this (for context, the model I am picking apart is <a href="https://github.com/usefulsensors/moonshine/blob/main/moonshine/model.py" rel="nofollow noreferrer">Moonshine</a>):</p>
<pre class="lang-py prettyprint-override"><code>encoder = model.encoder.encoder
encoder_intermediate_model = Model(
    inputs=encoder.inputs, outputs=[layer.output for layer in encoder.layers]
)
</code></pre>
<p>But attempting to run this crashes with a vague error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\ibiyemi\projects\wellington-ml\moonshine.py", line 764, in <module>
encoder_outputs = encoder_intermediate_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ibiyemi\projects\wellington-ml\.venv\Lib\site-packages\keras\src\utils\traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\ibiyemi\projects\wellington-ml\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ibiyemi\projects\wellington-ml\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: "Exception encountered when calling Functional.call().\n\n\x1b[1m2365371176512\x1b[0m\n\nArguments received by Functional.call():\n • inputs=['torch.Tensor(shape=torch.Size([1, 1248, 416]), dtype=float32)', 'torch.Tensor(shape=torch.Size([1]), dtype=int32)']\n • training=None\n • mask=['None', 'None']"
</code></pre>
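<p>For reference, a minimal sketch of the kind of pass-through debug layer I'm imagining (assumption: I'd rebuild the model, or wrap the layers I care about, with this inserted between them; with the PyTorch backend the forward pass runs eagerly, so the printed values are concrete numbers):</p>
<pre class="lang-py prettyprint-override"><code>import keras

# Hedged sketch: a layer that prints summary statistics of whatever flows through it
# and returns the input unchanged.
class DebugStats(keras.layers.Layer):
    def call(self, x):
        print(self.name, "mean:", keras.ops.mean(x), "std:", keras.ops.std(x))
        return x
</code></pre>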
|
<python><machine-learning><keras>
|
2025-01-27 06:20:25
| 0
| 7,107
|
laptou
|
79,389,718
| 5,729,613
|
Python Queue not updated outside of Thread
|
<p>I've created a Flask app that retrieves data from a queue that is updated in a separate thread. I'm not sure why the Queue is empty when I retrieve it from the Flask GET endpoint, and am a bit ignorant of what Queues being thread-safe is supposed to mean, since my example doesn't appear to reflect that.</p>
<p>In the example below, the queue in the Flask <code>@app.route('/measurements')</code> measurements route is empty even though it's updated in the TCP message handler. If anyone can enlighten me I'd appreciate it.</p>
<p>I am running this on Ubuntu with python3 in case that's relevant.</p>
<pre><code>from flask import Flask, render_template
import socket
from threading import Thread
import os
from wyze_sdk import Client
from dotenv import load_dotenv
from wyze_sdk.errors import WyzeApiError
import time
from queue import Queue

load_dotenv()

app = Flask(__name__)

start_time = time.time()

response = Client().login(
    email=os.environ['WYZE_EMAIL'],
    password=os.environ['WYZE_PASSWORD'],
    key_id=os.environ['WYZE_KEY_ID'],
    api_key=os.environ['WYZE_API_KEY']
)
client = Client(token=response['access_token'])

HOST = '192.168.1.207'  # Listen on all network interfaces
PORT = 9000             # The same port as in your ESP8266 sketch
PORT_UI = 7001
MIN_WATER_DIST = 20     # minimum distance from sensor to water in cm
MAX_WATER_DIST = 45     # maximum distance from sensor to water in cm
MAX_TIME_PLUG_ON = 600  # maximum amount of time plug should be on

# Initialize state variables
plug_on_time = None  # Track when the plug was turned on
measurements = Queue(maxsize=86400)

@app.route('/measurements')
def measurements_api():
    current_time = time.time()
    recent_measurements = [m for m in list(measurements.queue) if current_time - m['timestamp'] <= 86400]
    return {'measurements': recent_measurements}  # empty queue, no measurements returned

# This function will handle incoming TCP messages
def handle_tcp_connection(client_socket, client_address, measurements):
    try:
        data = client_socket.recv(1024)  # Buffer size of 1024 bytes
        if data:
            distance_str = data.decode('utf-8')
            dist = int(distance_str)
            print(f"Received message: {distance_str}")
            timestamp = time.time()
            print(len(measurements.queue))  # prints the correct number of measurements
            measurements.get()
            measurements.put({'value': dist, 'timestamp': timestamp})
        client_socket.close()
    except Exception as e:
        print(f"Error: {e}")
        client_socket.close()

# This function runs the TCP server in a separate thread
def run_tcp_server(measurements):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server_socket:
        server_socket.bind((HOST, PORT))
        server_socket.listen(5)
        print(f"Listening on {HOST}:{PORT} for TCP connections...")
        while True:
            client_socket, client_address = server_socket.accept()
            # Handle each incoming client connection in a new thread
            Thread(target=handle_tcp_connection, args=(client_socket, client_address, measurements)).start()

# Start the TCP server in a separate thread
tcp_server_thread = Thread(target=run_tcp_server, daemon=True, args=(measurements,))
tcp_server_thread.start()

@app.route('/')
def index():
    return render_template('index.html')

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=PORT_UI, debug=True)
</code></pre>
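<p>For reference, one thing I want to rule out (assumption: with <code>debug=True</code> the Werkzeug reloader imports the module again in a child process, so the TCP thread and the <code>Queue</code> it fills can end up in a different process than the one answering HTTP requests). A minimal sketch of disabling the reloader to test that:</p>
<pre><code>if __name__ == "__main__":
    # Hedged sketch: keep debug output but avoid the reloader creating a second
    # copy of the module (and thus a second, empty Queue).
    app.run(host='0.0.0.0', port=PORT_UI, debug=True, use_reloader=False)
</code></pre>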
|
<python><multithreading><flask>
|
2025-01-27 04:01:46
| 1
| 1,867
|
I Like
|
79,389,695
| 17,275,378
|
UnpicklingError from Azure Blob
|
<p>I'm developing an <code>Azure Function</code> to serve inferences from a machine learning model. The model is saved as a <code>.pkl</code> file on <code>Azure Blob Storage</code>. All attempts to read the file into Python fail.</p>
<p>For simplicity, this example uses an arbitrary .pkl instead of a model file.</p>
<h4>Object Creation Code</h4>
<pre><code>import pickle

# Create a pickle
small_object = list(range(0, 1000))
name = 'small_file.pkl'
with open(name, 'wb') as f:
    pickle.dump(small_object, f)

#### Verify integrity ###

# Loading from disk
with open(name, 'rb') as f:
    opened_object_one = pickle.load(f)
print(opened_object_one)  # prints expected list

# Loading from byte stream
with open(name, 'rb') as f:
    opened_object_two = f.read()
print(pickle.loads(opened_object_two))  # prints expected list
</code></pre>
<h4>Azure Function Code</h4>
<pre><code>import logging
import pickle

import azure.functions as func

app = func.FunctionApp()

@app.route(route="http_trigger", auth_level=func.AuthLevel.FUNCTION)
@app.blob_input(
    arg_name='client',
    path='<container>/small_file.pkl',
    connection='AzureWebJobsStorage'
)
def http_trigger(req: func.HttpRequest, client) -> func.HttpResponse:
    # `client` is of type azure.functions.blob.InputStream
    package_bytes = client.read()  # returns bytes
    client.close()
    small_object = pickle.loads(package_bytes)
</code></pre>
<p>I enter my virtual environment with <code>source .venv/scripts/activate</code> (I'm using <code>GitBash</code>). Then I run locally with <code>func host start</code> and trigger the function by following the link to the relevant localhost endpoint.
This returns <code>Exception: UnpicklingError: invalid load key, '\xef'</code> in the console.</p>
<p>I've tried using <code>joblib</code>, <code>dill</code> and <code>pickle</code> for both read and write stages. I've tried loading the pickle to the blob with both the <code>Azure CLI</code> and with <code>Firefox</code>. I always get the same results.</p>
<p>I'm on Windows 10 using Python 3.10.5.</p>
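<p>One hedged check I plan to add: <code>\xef</code> is the first byte of a UTF-8 byte-order mark (<code>EF BB BF</code>), which would suggest the blob content went through a text-encoding step somewhere between upload and read. Logging the first bytes the function actually receives would confirm it:</p>
<pre><code>def http_trigger(req: func.HttpRequest, client) -> func.HttpResponse:
    package_bytes = client.read()
    # A valid pickle normally starts with b'\x80'; b'\xef\xbb\xbf' is a UTF-8 BOM,
    # which would mean the bytes were treated as text at some point.
    logging.info(package_bytes[:8])
    ...
</code></pre>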
<p>All help is appreciated.</p>
|
<python><azure-functions><azure-blob-storage><pickle>
|
2025-01-27 03:20:57
| 1
| 326
|
eldrly
|
79,389,482
| 3,120,501
|
Matplotlib FuncAnimation blitting for 3D contourf?
|
<p>I'm trying to make a 3D Matplotlib animation with a contour 'slice' which moves through space. A minimum working example of the kind of thing I'm trying to accomplish is given by the following code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation
from matplotlib.colors import TwoSlopeNorm
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 1)
def f(X, Y, Z):
return ((Y-0.5)**2 + (Z-0.5)**2)*np.cos(X)
x = 0
intv = 0.01
x_vals = np.array([x])
y_vals = np.arange(0, 1, intv)
z_vals = np.arange(0, 1, intv)
X, Y, Z = np.meshgrid(x_vals, y_vals, z_vals)
norm = TwoSlopeNorm(0, -3, 3)
ctr = [ax.contourf(f(X, Y, Z).squeeze(), Y.squeeze(), Z.squeeze(), zdir='x', offset=x, levels=30, cmap='bwr', norm=norm)]
def animate(x):
for elm in ctr[0].collections:
elm.remove()
X, Y, Z = np.meshgrid(np.array([x]), y_vals, z_vals)
ctr[0] = ax.contourf(f(X, Y, Z).squeeze(), Y.squeeze(), Z.squeeze(), zdir='x', offset=x, levels=30, cmap='bwr', norm=norm)
anim = FuncAnimation(fig, animate, frames=np.arange(0, 1, intv), interval=20, repeat_delay=1000)
plt.show(block=True)
</code></pre>
<p>However, this is very slow, and I was wondering if it would be possible to blit it for performance improvement. I tried doing something like the answer to this question: <a href="https://stackoverflow.com/questions/42386372/increase-the-speed-of-redrawing-contour-plot-in-matplotlib">Increase the speed of redrawing contour plot in matplotlib</a>, resulting in the following code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation
from matplotlib.colors import TwoSlopeNorm
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 1)
def f(X, Y, Z):
return ((Y-0.5)**2 + (Z-0.5)**2)*np.cos(X)
x = 0
intv = 0.01
x_vals = np.array([x])
y_vals = np.arange(0, 1, intv)
z_vals = np.arange(0, 1, intv)
X, Y, Z = np.meshgrid(x_vals, y_vals, z_vals)
norm = TwoSlopeNorm(0, -3, 3)
ctr = [ax.contourf(f(X, Y, Z).squeeze(), Y.squeeze(), Z.squeeze(), zdir='x', offset=x, levels=30, cmap='bwr', norm=norm)]
def animate(x):
for elm in ctr[0].collections:
elm.remove()
X, Y, Z = np.meshgrid(np.array([x]), y_vals, z_vals)
ctr[0] = ax.contourf(f(X, Y, Z).squeeze(), Y.squeeze(), Z.squeeze(), zdir='x', offset=x, levels=30, cmap='bwr', norm=norm)
return ctr[0].collections
anim = FuncAnimation(fig, animate, frames=np.arange(0, 1, intv), interval=20, repeat_delay=1000, blit=True)
plt.show(block=True)
</code></pre>
<p>...however this doesn't work - I get <code>AttributeError: 'PathCollection' object has no attribute 'do_3d_projection'</code>. Are there any workarounds to this? I have seen mention of using <code>do_3d_projection()</code> to convert the 3D scene to 2D and then blitting this (I suppose this is probably what the above code is trying to do), but that doesn't seem possible if the function isn't implemented. Thank you for any help!</p>
|
<python><matplotlib><animation>
|
2025-01-26 23:32:40
| 0
| 528
|
LordCat
|
79,389,452
| 1,429,450
|
2D NumPy array + 1D array →?
|
<p>For two NumPy arrays,</p>
<pre><code>A = [[1,3,6], [2, 2, 5], …],
B = [1,6,7,8,…],
</code></pre>
<p>what is the fastest way to make</p>
<pre><code>C = [[[1,3,6], 2], [[2, 2, 5], 6], …]
</code></pre>
<p>?</p>
|
<python><arrays><numpy>
|
2025-01-26 23:01:14
| 1
| 5,826
|
Geremia
|
79,389,398
| 3,159,059
|
PyInstaller Build from MacOS Ventura Failing on High Sierra due to Missing Symbol '_OBJC_CLASS_$_MLModelConfiguration'
|
<p>I have developed a cross-platform app which runs on MacOS and Windows. I have packaged the app on MacOS Ventura (i7 Intel-based) with PyInstaller. When I try to run the packaged app on MacOS High Sierra, I get the below error. Can anyone provide insight how I can fix or work around this issue? I'm assuming it's related to OS version.</p>
<pre><code>> Exception in thread Thread-1 (transcribe): Traceback (most recent call
> last): File "threading.py", line 1075, in _bootstrap_inner File
> "threading.py", line 1012, in run File "RT_STT.py", line 61, in
> transcribe File "PyInstaller/loader/pyimod02_importers.py", line
> 384, in exec_module File "RealtimeSTT/__init__.py", line 1, in
> <module> File "PyInstaller/loader/pyimod02_importers.py", line 384,
> in exec_module File "RealtimeSTT/audio_recorder.py", line 33, in
> <module> File "PyInstaller/loader/pyimod02_importers.py", line 384,
> in exec_module File "openwakeword/__init__.py", line 3, in <module>
> File "PyInstaller/loader/pyimod02_importers.py", line 384, in
> exec_module File "openwakeword/vad.py", line 48, in <module> File
> "PyInstaller/loader/pyimod02_importers.py", line 384, in exec_module
> File "onnxruntime/__init__.py", line 58, in <module> File
> "onnxruntime/__init__.py", line 23, in <module> File
> "PyInstaller/loader/pyimod02_importers.py", line 384, in exec_module
> File "onnxruntime/capi/_pybind_state.py", line 32, in <module>
> ImportError: dlopen(/Applications/Virtual
> Reader.app/Contents/MacOS/Virtual_Reader/_internal/onnxruntime/capi/onnxruntime_pybind11_state.so,
> 2): Symbol not found: _OBJC_CLASS_$_MLModelConfiguration Referenced
> from: /Applications/Virtual
> Reader.app/Contents/MacOS/Virtual_Reader/_internal/onnxruntime/capi/onnxruntime_pybind11_state.so
> (which was built for Mac OS X 13.3) Expected in:
> /System/Library/Frameworks/CoreML.framework/Versions/A/CoreML in
> /Applications/Virtual
> Reader.app/Contents/MacOS/Virtual_Reader/_internal/onnxruntime/capi/onnxruntime_pybind11_state.so
> ImportError: dlopen(/Applications/Virtual
> Reader.app/Contents/MacOS/Virtual_Reader/_internal/onnxruntime/capi/onnxruntime_pybind11_state.so,
> 2): Symbol not found: _OBJC_CLASS_$_MLModelConfiguration
</code></pre>
|
<python><macos><pyinstaller><symbols>
|
2025-01-26 22:21:36
| 0
| 5,749
|
GaryMBloom
|
79,389,160
| 8,160,995
|
Bluepy scan for devices and connect to them
|
<p>My task is to search for some devices over Bluetooth with a Raspberry Pi 4B, connect to them, send some commands, and disconnect. For this, I started using the bluepy package and I was able to do the two things separately. I can connect to a device with:</p>
<pre class="lang-py prettyprint-override"><code>from bluepy.btle import Peripheral, DefaultDelegate, BTLEException, Scanner
peripheral = Peripheral(mac_address)
service = peripheral.getServiceByUUID(CONFIGURATION_SERVICE_UUID)
write_char = service.getCharacteristics(WRITE_CHAR_UUID)[0]
read_char = service.getCharacteristics(READ_CHAR_UUID)[0]
write_char.write(command, withResponse=True)
# etc.
peripheral.disconnect()
</code></pre>
<p>and I can scan for devices with</p>
<pre class="lang-py prettyprint-override"><code>class ScanDelegate(DefaultDelegate):
    def handleDiscovery(self, dev, isNewDev, isNewData):
        print(dev.addr)

scanner = Scanner().withDelegate(ScanDelegate())
scanner.scan(60)
</code></pre>
<p>The problem that I have with the first part, and the reason I want to put them together, is that the devices I want to connect to are connectable only during the period in which they are broadcasting some information (about a second), and the connection will time out before succeeding most of the time. My idea is that I should start with the scanner and try to connect inside <code>handleDiscovery</code>, but just pasting the code inside doesn't work. Does anyone know how to do it?</p>
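<p>For reference, a minimal sketch of an alternative I'm considering (assumption: the same <code>mac_address</code> as above; flag the target while scanning in short windows, then connect immediately after the scan window ends, while the device should still be connectable):</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: note the target during scanning, then connect right away.
class ConnectOnSight(DefaultDelegate):
    def __init__(self, target_addr):
        DefaultDelegate.__init__(self)
        self.target_addr = target_addr.lower()
        self.seen = False

    def handleDiscovery(self, dev, isNewDev, isNewData):
        if dev.addr.lower() == self.target_addr:
            self.seen = True

delegate = ConnectOnSight(mac_address)
scanner = Scanner().withDelegate(delegate)
while not delegate.seen:
    scanner.scan(1.0)  # short scan windows so we can react quickly

peripheral = Peripheral(mac_address)  # connect immediately after the device was seen
</code></pre>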
|
<python><bluetooth><raspberry-pi4>
|
2025-01-26 19:26:31
| 0
| 762
|
Ripper346
|
79,389,155
| 1,704,628
|
Map Logic with TypeVarTuple
|
<p>I have a rather simple function that accepts a sequence of classes and returns a tuple of their respective instances:</p>
<pre class="lang-py prettyprint-override"><code>def f(*classes):
    return tuple(cls() for cls in classes)
</code></pre>
<p>Is there any way to annotate it so that mypy (and more importantly, Intellisense) understands this relation? For example:</p>
<pre class="lang-py prettyprint-override"><code>a, b, c = f(A, B, C)
# reveal_type(a) is A
# reveal_type(b) is B
# reveal_type(c) is C
</code></pre>
<p>I feel like <code>TypeVarTuple</code> is a step in the right direction, but I'm struggling to inject the <code>type[]</code> logic that would make it work. As in:</p>
<pre class="lang-py prettyprint-override"><code>def f[*Ts](
*classes: type[*Ts] # this; as in, tuple[type[T] for T in Ts]
) -> tuple[*Ts]:
return tuple(cls() for cls in classes)
</code></pre>
<p>The <code>type[*Ts]</code> bit doesn't work (<code>Unpack is only valid in a variadic position</code>), and I can't seem to figure out how (or whether it's even possible) to express my typing logic properly in this case.</p>
<h1>Edit</h1>
<p>So far, the only solution I could come up with is something along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>@overload
def f[T1](cls1: type[T1]) -> tuple[T1]: ...
@overload
def f[T1, T2](cls1: type[T1], cls2: type[T2]) -> tuple[T1, T2]: ...
@overload
def f[T1, T2, T3](cls1: type[T1], cls2: type[T2], cls3: type[T3]) -> tuple[T1, T2, T3]: ...
# 10 such overloads, assuming nobody is going to call the function with 11 classes or more...
</code></pre>
<p>But it's... you know. Posting it for reference, in case others are struggling with something similar and no better solution crops up.</p>
|
<python><python-typing><mypy>
|
2025-01-26 19:24:00
| 0
| 3,910
|
Dan Gittik
|
79,389,136
| 1,473,517
|
How to create a python module in C++ that multiprocessing does not support
|
<p>I am trying and failing to reproduce and understand a problem I saw where multiprocessing failed when using a python module written in C++. My understanding was that the problem is that multiprocessing needs to pickle the function it is using. So I made <code>my_module.cpp</code> as follows:</p>
<pre><code>#include <pybind11/pybind11.h>
int add(int input_number) {
return input_number + 10;
}
PYBIND11_MODULE(my_module, m) {
m.doc() = "A simple module implemented in C++ to add 10 to a number.";
m.def("add", &add, "Add 10 to a number");
}
</code></pre>
<p>After</p>
<pre><code>pip install pybind11
</code></pre>
<p>I compiled with:</p>
<pre><code>c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) my_module.cpp -o my_module$(python3-config --extension-suffix)
</code></pre>
<p>I can import <code>my_module</code> and it works as expected.</p>
<p>I can test if it can be pickled with:</p>
<pre><code>import my_module
import pickle
# Use the add function
print(my_module.add(5)) # Outputs: 15
# Attempt to pickle the module
try:
pickle.dumps(my_module)
except TypeError as e:
print(f"Pickling error: {e}") # Expected error
</code></pre>
<p>which outputs <code>Pickling error: cannot pickle 'module' object</code> as expected.</p>
<p>Now I tested multiprocessing and was surprised that it worked. I was expecting it to give a pickling error.</p>
<pre><code>import my_module
from multiprocessing import Pool
# A wrapper function to call the C++ add function
def parallel_add(number):
return my_module.add(number)
if __name__ == "__main__":
numbers = [1, 2, 3, 4, 5]
try:
# Create a pool of worker processes
with Pool(processes=2) as pool:
results = pool.map(parallel_add, numbers)
print(results) # If successful, prints the results
except Exception as e:
print(f"Multiprocessing error: {e}")
</code></pre>
<p>How can I make a Python module in C++ with pybind11 which fails with multiprocessing because of a pickling error?</p>
<p>I am using Linux.</p>
|
<python><c++><multiprocessing><pybind11>
|
2025-01-26 19:06:53
| 1
| 21,513
|
Simd
|
79,389,053
| 18,775
|
Using Python how do I validate JSON against a JSON schema in a streaming fashion, e.g., not loading the whole object in memory?
|
<p>I have a large JSON that I do not want to load into memory. I would like to validate it against a JSON schema in a <em>streaming</em> fashion. All libraries I could find so far, only validate completely loaded JSON objects (like Pydantic or <a href="https://github.com/python-jsonschema/jsonschema" rel="nofollow noreferrer">https://github.com/python-jsonschema/jsonschema</a>). What I rather need is some way to validate it feeding the original JSON chunk by chunk, i.e, control the size of the buffer.</p>
<p>This could look like this:</p>
<pre><code>import pydantic # I use V2
import ijson
import pathlib
class A(pydantic.BaseModel):
i: int
a: list[int]
s: str
jsonpath = pathlib.Path("some.json")
validator = MyValidator(schema=A.model_json_schema())
with jsonpath.open("rb") as file:
for prefix, event, value in ijson.parse(file, use_float=True):
validator.event((prefix, event, value))
print(validator.errors)
</code></pre>
<p>Imagine the <code>some.json</code> file is a ~50 MB <code>A</code> instance with a very long array. I do not want to load the whole object into memory (this is what Pydantic would do), but I want to make sure that <code>some.json</code> complies with the schema of <code>A</code>. The <code>validator.errors</code> attribute could give me a list of errors, which would be empty in case none were discovered.</p>
<p>[EDIT 2025-01-31]
The term "streaming fashion" means that event is seen exactly once and there is no way to see it again. However, I am willing to accept an answer if there is a way to do the validation with multiple scans, i.e., in my example above multiple file scans are fine with me.</p>
|
<python><json><stream><jsonschema><python-jsonschema>
|
2025-01-26 18:17:29
| 7
| 6,532
|
Anton Daneyko
|
79,389,038
| 11,092,636
|
Why does plt.axis('off') change the colour of the plot?
|
<p>I'm a bit confused as to why <code>plt.axis('off')</code> would do anything else than removing the ticks in <code>matplotlib</code> (<code>Python 3.12.8</code>, <code>matplotlib==3.10.0</code>).</p>
<p>Minimal Reproducible Example:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
zero_array = np.zeros((100, 100))
plt.imshow(zero_array, cmap='gray', vmin=0, vmax=100, interpolation='none')
# plt.axis('off')
plt.show()
</code></pre>
<p>Proof:
<a href="https://i.sstatic.net/nuDvfAcP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuDvfAcP.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/7o4LPKCe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7o4LPKCe.png" alt="enter image description here" /></a></p>
<p>Additional info:</p>
<pre><code>PyCharm 2024.3.1.1 (Professional Edition)
Build #PY-243.22562.220, built on December 18, 2024
Licensed to **********************
Subscription is active until ******************
For educational use only.
Runtime version: 21.0.5+8-b631.28 amd64 (JCEF 122.1.9)
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
Toolkit: sun.awt.windows.WToolkit
Windows 11.0
GC: G1 Young Generation, G1 Concurrent GC, G1 Old Generation
Memory: 8192M
Cores: 24
Registry:
ide.experimental.ui=true
i18n.locale=
Non-Bundled Plugins:
com.chesterccw.excelreader (2024.11.1-243)
me.lensvol.blackconnect (0.6.2)
com.github.copilot (1.5.30-242)
</code></pre>
|
<python><matplotlib>
|
2025-01-26 18:04:29
| 2
| 720
|
FluidMechanics Potential Flows
|
79,388,926
| 3,380,902
|
pyenv BUILD FAILED on macOS 15.1
|
<p>I am running into BUILD FAILED Error (OS X 15.1 using python-build 2.5.1) when attempting to install python via <code>pyenv install 3.12.8</code></p>
<pre><code>bash-3.2$ pyenv install 3.12.8
python-build: use openssl@3 from homebrew
python-build: use readline from homebrew
Downloading Python-3.12.8.tar.xz...
-> https://www.python.org/ftp/python/3.12.8/Python-3.12.8.tar.xz
Installing Python-3.12.8...
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
BUILD FAILED (OS X 15.1 using python-build 2.5.1)
Inspect or clean up the working tree at /var/folders/d0/gnksqzwn2fn46fjgrkp6045c0000gn/T/python-build.20250126085216.9551
Results logged to /var/folders/d0/gnksqzwn2fn46fjgrkp6045c0000gn/T/python-build.20250126085216.9551.log
Last 10 log lines:
__locale_localeconv in _localemodule.o
__locale_localeconv in _localemodule.o
__locale_localeconv in _localemodule.o
__locale_localeconv in _localemodule.o
"_libintl_textdomain", referenced from:
__locale_textdomain in _localemodule.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_freeze_module] Error 1
make: *** Waiting for unfinished jobs....
</code></pre>
<p>I re-installed xcode and set environment variables as suggested in other posts, but that didn't work.</p>
<p><a href="https://stackoverflow.com/questions/51551557/pyenv-build-failed-installing-python-on-macos">PyEnv BUILD FAILED installing Python on MacOS</a></p>
|
<python><pyenv>
|
2025-01-26 16:56:55
| 2
| 2,022
|
kms
|
79,388,819
| 6,502,077
|
How can I change the sequence of the pages in a PDF file?
|
<p>I want to make a simple Python program that changes the order of the pages in a PDF file, which would allow me later to place two pages (A5) on one sheet (A4) and print the document as a booklet on a printer.</p>
<p>As an example, this is how the pages will be placed on one sheet:</p>
<p><a href="https://i.sstatic.net/md8MpPKD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/md8MpPKD.png" alt="enter image description here" /></a></p>
<p>As you can see this means that the normal sequence of the pages in the PDF file would need to be changed accordingly:</p>
<p>Original sequence = 1, 2, 3, 4 ...</p>
<p>New sequence= 2, 3, 4, 1 ...</p>
<p>I came up with the solution below, which actually works, but unfortunately it skips the last pages if the total page count is not divisible by four:</p>
<pre><code>import pypdf
pdf = pypdf.PdfReader('test.pdf')
writer = pypdf.PdfWriter()
# Different sequences
sq_1 = range(1, len(pdf.pages), 4)
sq_2 = range(2, len(pdf.pages), 4)
sq_3 = range(3, len(pdf.pages), 4)
sq_4 = range(0, len(pdf.pages), 4)
# Iterate over pages
for a, b, c, d in zip(sq_1, sq_2, sq_3, sq_4):
# Copy pages to new file
writer.add_page(pdf.get_page(a))
writer.add_page(pdf.get_page(b))
writer.add_page(pdf.get_page(c))
writer.add_page(pdf.get_page(d))
# write new PDF file
output = open("output.pdf", "wb")
writer.write(output)
# close files
pdf.close()
output.close()
</code></pre>
<p>Is there any way to easily solve this problem?</p>
<p>I guess the problem here is zip() but I cannot find any good replacement for it.</p>
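<p>One replacement I'm considering (untested) is to pad the page list with blank pages up to a multiple of four before reordering, so that <code>zip()</code> never drops the tail; the sketch below assumes <code>PageObject.create_blank_page</code> is the right way to create the padding pages:</p>
<pre class="lang-py prettyprint-override"><code>import pypdf

reader = pypdf.PdfReader("test.pdf")
writer = pypdf.PdfWriter()

pages = list(reader.pages)

# Pad to a multiple of 4 with blank pages of the same size as the first page
while len(pages) % 4 != 0:
    pages.append(
        pypdf.PageObject.create_blank_page(
            width=pages[0].mediabox.width, height=pages[0].mediabox.height
        )
    )

# Reorder each group of four pages as 2, 3, 4, 1
for i in range(0, len(pages), 4):
    for offset in (1, 2, 3, 0):
        writer.add_page(pages[i + offset])

with open("output.pdf", "wb") as output:
    writer.write(output)
</code></pre>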
|
<python><pypdf>
|
2025-01-26 15:58:22
| 1
| 702
|
Lavonen
|
79,388,585
| 1,235,227
|
How can I call a @staticmethod on a generic type argument (defined with TypeVar)?
|
<p>I have a generic class in which I would like to call a static method on the type argument, but if I naively try it like in the following example, I get an error because the identifier representing the type is actually an instance of TypeVar which does not have this attribute:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import Generic, TypeVar
class MyInterface(ABC):
@staticmethod
@abstractmethod
def default() -> int: ...
class SubclassA(MyInterface):
@staticmethod
def default() -> int:
return 0
class SubclassB(MyInterface):
@staticmethod
def default() -> int:
return 1
SomeSubclass = TypeVar('SomeSubclass', bound=MyInterface)
class InterfaceContainer(Generic[SomeSubclass]):
def check_if_default(self, value: int) -> bool:
return value == SomeSubclass.default() # AttributeError: 'TypeVar' object has no attribute 'default'
# optional, shows runtime error even without static type checking:
container = InterfaceContainer[SubclassA]()
container.check_if_default(42)
</code></pre>
<p>How can I access attributes of the type specified as argument?</p>
<p><a href="https://stackoverflow.com/questions/48572831/how-to-access-the-type-arguments-of-typing-generic">How to access the type arguments of typing.Generic?</a> is a related question where I learned how to access the argument given the type <code>InterfaceContainer[SubclassA]</code> (e.g., via <code>__args__</code> since Python 3.6+), but the instance <code>container</code> does not have <code>__args__</code>, and <code>type(container)</code> or <code>cls</code> in a <code>@classmethod</code> just give me the generic type <code>InterfaceContainer</code> without its argument.</p>
<p>I wonder if <em>instances</em> of a generic class lose their type argument information completely, maybe because it is only meant for type checking?</p>
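<p>The only runtime workaround I can think of so far is passing the concrete class in explicitly and storing it on the instance, which feels like it defeats part of the purpose of the generic parameter:</p>
<pre class="lang-py prettyprint-override"><code>class InterfaceContainer(Generic[SomeSubclass]):
    def __init__(self, cls: type[SomeSubclass]) -> None:
        self._cls = cls  # keep the concrete class around at runtime

    def check_if_default(self, value: int) -> bool:
        return value == self._cls.default()

container = InterfaceContainer(SubclassA)
container.check_if_default(42)
</code></pre>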
|
<python><python-typing>
|
2025-01-26 13:20:04
| 0
| 1,941
|
hans_meine
|
79,388,538
| 6,105,259
|
How to set multiple variables in DBT/SQL by running `run_query()` iteratively over a list of strings
|
<p>In dbt, I want to utilize jinja syntax to reduce code repetition when assigning variables (with <code>set</code>). I have one preemptive query that I wish to run <em>repeatedly</em>, changing the value matched in its <code>WHERE</code> clause each time. For each iteration, I want to save the result into a separate variable.</p>
<p>In other words, I want to iterate the <code>run_query</code> macro (<a href="https://docs.getdbt.com/reference/dbt-jinja-functions/run_query" rel="nofollow noreferrer">doc</a>) over a list of strings, such that the query itself remains the same, but the value matched in the <code>WHERE</code> clause changes according to the string in the list being iterated over.</p>
<hr />
<h2>Example</h2>
<p>On the one hand, I have a table that specifies animals and their weights:</p>
<pre class="lang-sql prettyprint-override"><code>-- zoo.sql
WITH zoo (animal, weight) AS (
VALUES
('zebra', 400),
('lion', 200),
('elephant', 4000),
('bear', 160)
)
</code></pre>
<p>On the other hand, I have a model that relies on animals' weights. I wish to assign the weight of each into a dedicated variable in my model's script.</p>
<p>So I thought of doing:</p>
<pre class="lang-sql prettyprint-override"><code>-- first step: set the names in a list, ensuring they match the values
-- in `animal` column in `zoo.sql`:
{% set ANIMALS = ["zebra", "lion", "elephant"] %} -- let's say I only want those animals
-- second step: `run_query`:
{% run_query('SELECT weight FROM zoo WHERE animal = ANIMALS') %}
-- final step: assigning into variables"
{% set ZEBRA_WEIGHT, LION_WEIGHT, ELEPHANT_WEIGHT = ..., ..., ... %} -- took from here: https://stackoverflow.com/a/40177302
</code></pre>
<p>Clearly there should be an iteration here, most likely using <code>{% for animal in ANIMALS %}</code> or something like that. But I'm totally new to this and can't wrap my head how to do the iteration of <code>run_query()</code> & var assignment succinctly.</p>
<p>I expect the result of the iterative var assignment to be equal as if I would've set the variables manually:</p>
<pre class="lang-sql prettyprint-override"><code>{% set ZEBRA_WEIGHT = 400 %}
{% set LION_WEIGHT = 200 %}
{% set ELEPHANT_WEIGHT = 4000 %}
</code></pre>
|
<python><sql><jinja2><dbt>
|
2025-01-26 12:44:07
| 1
| 4,303
|
Emman
|
79,388,126
| 2,106,815
|
Which elif usage is prefered in python?
|
<p>Which of the following is preferred for using a chained elif?</p>
<pre><code>elif 12 <= person_age < 16:
print("In range")
</code></pre>
<p>OR</p>
<pre><code>elif person_age >= 12 and person_age < 16:
print("In range")
</code></pre>
<p><strong>Please note Pycharm advised to use the first one.</strong></p>
|
<python>
|
2025-01-26 07:12:11
| 1
| 2,896
|
Jabir
|
79,388,046
| 17,419,414
|
Distinction between Python environment and Jupyter kernel in VS Code
|
<p>In Python I can create a virtual environment in VS Code with the following commands, I'll also install a kernel to the same virtual environment:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m venv .venv
source .venv/bin/activate
pip install ipykernel
python3 -m ipykernel install --user --name=.venv
</code></pre>
<p>I then go in VS Code and I can <code>Select Kernel</code> and am given three options:</p>
<ol>
<li>Python Environment</li>
<li>Jupyter Kernel</li>
<li>Existing Jupyter Server</li>
</ol>
<p>No matter which I choose, it just gives me various ways to select the <code>.venv</code> environment I just set up. When I am trying to select a kernel, why am I being presented with <code>.venv</code> environments? Aren't kernels and environments two distinct things in Python? I can already operate out of my <code>.venv</code> environment but can't figure out how to actually select the Jupyter kernel I installed into the <code>.venv</code> environment.</p>
<p>When I look at tutorials it seems there's supposed to be the phrase <code>Jupyter Server: Local</code> at the bottom of my VS Code but that's totally missing. I'm not using conda and want to avoid it if possible.</p>
<p><a href="https://i.sstatic.net/MBSOLTVp.png" rel="nofollow noreferrer">screenshot for reference</a></p>
|
<python><visual-studio-code><jupyter-notebook><virtualenv>
|
2025-01-26 06:10:56
| 2
| 361
|
jophuh
|
79,387,911
| 2,882,380
|
How to make values into rows instead of columns when using pivot table in pandas
|
<p>Say I have this data frame:</p>
<pre><code>import pandas as pd
x = pd.DataFrame([[1, 'step', 'id', 22, 33],
[2, 'step', 'id', 55, 66]],
columns=['time', 'head_1', 'head_2', 'value_1', 'value_2'])
print(x)
time head_1 head_2 value_1 value_2
0 1 step id 22 33
1 2 step id 55 66
</code></pre>
<p>Then I use pivot table like below</p>
<pre><code>print(x.pivot_table(values=['value_1', 'value_2'], columns='time', index=['head_1', 'head_2']))
value_1 value_2
time 1 2 1 2
head_1 head_2
step id 22 55 33 66
</code></pre>
<p>However, I really want to have value_1 and value_2 in rows instead of columns like below (a new header as head_3). That is, put value_1 and value_2 in rows and only time as column. How do I do that?</p>
<pre><code>time 1 2
head_1 head_2 head_3
step id value_1 22 55
step id value_2 33 66
</code></pre>
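<p>A direction I've been experimenting with is to melt the value columns into rows first and then pivot, roughly as below; I'm not sure whether this is correct or idiomatic, which is partly why I'm asking:</p>
<pre class="lang-py prettyprint-override"><code>out = (
    x.melt(
        id_vars=["time", "head_1", "head_2"],
        value_vars=["value_1", "value_2"],
        var_name="head_3",  # value_1 / value_2 become a row label
    )
    .pivot_table(values="value", columns="time", index=["head_1", "head_2", "head_3"])
)
print(out)
</code></pre>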
|
<python><pandas>
|
2025-01-26 03:05:04
| 2
| 1,231
|
LaTeXFan
|
79,387,857
| 8,772,888
|
Django 5.1 - UserCreationForm won't allow empty passwords
|
<p>I'm upgrading a Django 3.0 app to 5.1 and have been moving slowly through each minor release. So far so good.</p>
<p>However, once I went from Django 5.0 to 5.1, I saw changed behavior with my "Create New User" page which uses a <code>UserCreationForm</code> form that allows empty passwords. If no password is supplied, a random one is generated.</p>
<p>Now, if I submit the form with an empty password I get "required field" errors on the password fields, even though they are both explicitly set as <code>required=False</code>.</p>
<p>I saw there were <code>UserCreationForm</code> changes in Django <a href="https://docs.djangoproject.com/en/5.1/releases/5.1/#django-contrib-auth" rel="nofollow noreferrer">5.1.0</a> and <a href="https://docs.djangoproject.com/en/5.1/releases/5.1.1/#bugfixes" rel="nofollow noreferrer">5.1.1</a>. I tried using <code>AdminUserCreationForm</code> and setting the <a href="https://stackoverflow.com/questions/78850636/what-is-password-based-authentication-in-the-usercreationform-in-django-and-how/78851262#78851262">usable_password</a> field to <code>None</code>, but it still won't allow empty passwords like before.</p>
<p>Any ideas?</p>
<p><strong>Environment</strong><br />
Python 3.12.8<br />
Django 5.1.5<br />
Crispy Forms 2.3</p>
<p><strong>Simplified Code</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth.forms import AdminUserCreationForm
from crispy_forms.helper import FormHelper
class SignupForm(AdminUserCreationForm): # previously using UserCreationForm
usable_password = None # Newly added
# Form fields
sharedaccountflag = forms.ChoiceField(
label = 'Cuenta compartida',
required = True
)
# Constructor
def __init__(self, *args, **kwargs):
# Call base class constructor
super(SignupForm, self).__init__(*args, **kwargs)
# Set password fields as optional
self.fields['password1'].required = False
self.fields['password2'].required = False
# Set form helper properties
self.helper = FormHelper()
self.helper.form_tag = False
# Specify model and which fields to include in form
class Meta:
model = get_user_model()
fields = ('password1', 'password2', 'sharedaccountflag')
</code></pre>
<p><strong>Screenshot</strong><br />
<a href="https://i.sstatic.net/2f4Knw4M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2f4Knw4M.png" alt="enter image description here" /></a></p>
<hr />
<p><strong>Update</strong><br />
I used Serhii's example and modified it so the normal validation is called when appropriate. I also changed back to use <code>UserCreationForm</code>:</p>
<pre class="lang-py prettyprint-override"><code>class SignupForm(UserCreationForm):
# Override default validation to allow empty password (change in Django 5.1)
def validate_passwords(
self,
password1_field_name = "password1",
password2_field_name = "password2"
):
# Store password values
password1 = self.cleaned_data.get(password1_field_name)
password2 = self.cleaned_data.get(password2_field_name)
# Do nothing if passwords are not required and no value is provided
if (
not self.fields[password1_field_name].required and
not self.fields[password2_field_name].required and
not password1.strip() and
not password2.strip()
):
pass
# Call default validation if password is required OR a value is provided
else:
super().validate_passwords(password1_field_name, password2_field_name)
</code></pre>
|
<python><django><validation><django-forms><passwords>
|
2025-01-26 01:55:37
| 1
| 3,821
|
ravioli
|
79,387,481
| 9,329,400
|
Python imports from inside the package
|
<p>I'm trying to use a github repository that I cloned locally.</p>
<p>The repo is structured like this:</p>
<pre><code>project-name
| __init__.py
|____project-name
| __init__.py
| http_service.py
| data.py
</code></pre>
<p>Now I'm trying to import stuff from <code>http_service.py</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.path.append('/.../project-name/project-name/')
from http_service import classname
</code></pre>
<p>But when I do this, an import inside the <code>http_service.py</code> file fails. The error message is: <code>ModuleNotFoundError: No module named 'project-name.data'</code></p>
<p>The <code>http_service.py</code> file has this:</p>
<p><code>from project-name.data import xyz</code></p>
<p>Clearly, I don't understand how this is supposed to work. How can the http_service file import from <code>project-name</code> when it's inside that directory? And what am I supposed to do differently to use this code?</p>
|
<python><import><package>
|
2025-01-25 20:34:32
| 0
| 610
|
JTB
|
79,387,290
| 76,701
|
Get Click to not expand variables in argument
|
<p>I have a simple Click app like this:</p>
<pre><code>import click
@click.command()
@click.argument('message')
def main(message: str):
click.echo(message)
if __name__ == '__main__':
main()
</code></pre>
<p>When you pass an environment variable in the argument, it expands it:</p>
<pre><code>➜ Desktop python foo.py '$M0/.viola/2025-01-25-17-20-23-307878'
M:/home/ramrachum/.viola/2025-01-25-17-20-23-307878
</code></pre>
<p>Note that I used single quotes above, so my shell is not expanding the environment variable, Click does. How do I get Click to not expand it?</p>
|
<python><environment-variables><python-click>
|
2025-01-25 18:35:42
| 1
| 89,497
|
Ram Rachum
|
79,387,289
| 7,236,133
|
Python, filter vectors from Pinecone vector store based on a field saved in the metadata of these vectors
|
<p>I have vectors stored in a Pinecone vector store, each vector represents a content of a pdf file:</p>
<blockquote>
<p>Metadata::
hash_code: "d53d7ec8b0e66e9a83a97acda09edd3fe9867cadb42833f9bf5525cc3b89fe2d"
id: "cc54ffbe-9cba-4de9-9f30-a114e4c3c3fb"</p>
</blockquote>
<p>I saved a new field in the metadata, which is the hash_code of the pdf content, to avoid adding the same file again and again to the vector store.</p>
<p>To do that, I'm getting the new hash codes of the new documents that I want to add, then I want to scan the existing ones to find if any of them already exists and then filter it out.</p>
<p>I'm using Python, and tried code like the following, but didn't manage to achieve my goal yet:</p>
<p>First method:</p>
<pre><code>def filter_existing_docs(index_name, docs):
# Initialize the Pinecone index
index = pinecone_client.Index(index_name)
# Extract hash_codes from the docs list using the appropriate method for your Document objects
hash_codes = [doc.metadata['hash_code'] for doc in docs] # Accessing 'metadata' if it's an attribute
print("Hash Codes:", hash_codes)
# Fetch by list of hash_codes (ensure hash_codes are valid ids)
fetch_response = index.fetch(ids=hash_codes)
print("Fetch Response:", fetch_response)
# Get the existing hash_codes that are already in the Pinecone index
existing_hash_codes = set(fetch_response.get('vectors', {}).keys()) # Extract existing IDs from the response
print("1 -----------> Existing Hash Codes:", len(existing_hash_codes))
# Filter out the docs that have already been added to Pinecone
filtered_docs = [doc for doc in docs if doc.metadata['hash_code'] not in existing_hash_codes]
print("2 -----------> Filtered Docs:", len(filtered_docs))
return filtered_docs
</code></pre>
<p>Then tried another approach:</p>
<pre><code>def filter_existing_docs(index_name, docs):
# Initialize the Pinecone index
index = pinecone_client.Index(index_name)
# Extract hash_codes from the docs list using the appropriate method for your Document objects
hash_codes = [doc.metadata['hash_code'] for doc in docs] # Accessing 'metadata' if it's an attribute
print("Hash Codes:", hash_codes)
# We need to query Pinecone using `top_k` and search through the index
query_response = index.query(
top_k=100, # Set a suitable `top_k` to return a reasonable number of documents
include_metadata=True,
#namespace=namespace
)
# Debug: Print the query response to see its structure
print("Query Response:", query_response)
# Extract the hash_codes of the existing documents in Pinecone
existing_hash_codes = {item['metadata']['hash_code'] for item in query_response['matches']}
print("1 -----------> Existing Hash Codes:", len(existing_hash_codes))
# Filter out the docs that have already been added to Pinecone based on hash_code
filtered_docs = [doc for doc in docs if str(doc.metadata['hash_code']) not in existing_hash_codes]
print("2 -----------> Filtered Docs:", len(filtered_docs))
return filtered_docs
</code></pre>
|
<python><pinecone>
|
2025-01-25 18:35:01
| 2
| 679
|
zbeedatm
|
79,387,277
| 843,367
|
How to use single axis title with layer and facet?
|
<p>I would like to use one single title that goes across facets for the x-axis in this plot. How to do that in python using altair?</p>
<p>Apparently, Altair does not provide that functionality.</p>
<pre><code>import altair as alt
import pandas as pd
import textwrap
df = pd.DataFrame({
'y': [10, 20, 30 , 40] ,
'x': ['a', 'b', 'a', 'b'] ,
'facet' : ['case 1', 'case 1', 'case 2', 'case 2']
})
# without wrapping the labels
x = 'x'
y = 'y'
facet = 'facet'
xlabel = ["A long long long long long long",
"long long long long long long title"]
base = (alt.Chart(df))
bar = (base.mark_bar()
.encode(
x = alt.X(x).axis(labelAngle=0).title(xlabel),
y = alt.Y(y),
))
txt = (base.mark_text(dy=-5)
.encode(
x = alt.X(x),
y = alt.Y(y),
text = alt.Y(y),
))
g = (alt.layer(bar, txt)
.properties(width=300, height=250)
.facet(facet=alt.Facet(facet), columns=2)
)
</code></pre>
<p><a href="https://i.sstatic.net/za7Svg5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/za7Svg5n.png" alt="enter image description here" /></a></p>
|
<python><axis><altair>
|
2025-01-25 18:30:30
| 1
| 871
|
Diogo
|
79,387,209
| 6,168,154
|
Selenium - AttributeError: 'Service' object has no attribute 'get'
|
<p>With this code below:</p>
<pre><code>import requests, os
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
url = "www.google.com"
service = ChromeDriverManager().install()
folder = os.path.dirname(service)
chromedriver_path = os.path.join(folder, "chromedriver.exe")
driver = ChromeService(chromedriver_path)
#driver = webdriver.Chrome(chromedriver_path)
driver.get(url)
</code></pre>
<p>I get the following error message:
<code>AttributeError: 'Service' object has no attribute 'get'</code></p>
<p>I can't use <code>from selenium import webdriver</code> directly because of a driver path error (the path is the absolute path of the exe downloaded from Google). I also can't upgrade <code>Selenium</code> in my environment, so I'm stuck with the "latest" available <code>selenium-3.141.0</code> version.</p>
<p>Up to the <code>driver = ...</code> line it seems to work, but I just don't know how to use the <code>get</code> method, since on all websites it's used like <code>driver.get(...)</code>.</p>
<p>I'm sure I'm mixing up the classes, so it's a compatibility problem; that's why I'm asking.</p>
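<p>For reference, this is roughly what I expected to be able to write with Selenium 3 (using the <code>executable_path</code> keyword), reusing <code>chromedriver_path</code> from above; as far as I understand, the <code>Service</code> object only manages the driver process and is not a <code>WebDriver</code> itself:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver

# Sketch of the Selenium 3 style I expected; this is where I hit the path error.
driver = webdriver.Chrome(executable_path=chromedriver_path)
driver.get("https://www.google.com")
</code></pre>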
|
<python><selenium-webdriver><python-3.7>
|
2025-01-25 17:49:01
| 2
| 1,548
|
Peter.k
|
79,387,136
| 4,589,867
|
sudo + fork + Python subprocess = [Errno 38] Function not implemented
|
<p>Here's a simplified Python script (henceforth <code>app.py</code>) that forks a child which then runs a subprocess:</p>
<pre><code>import os
import subprocess
if os.fork() > 0:
os._exit(0)
subprocess.run(["/bin/true"])
</code></pre>
<p>When run as a normal user, it works:</p>
<pre><code>$ python3 app.py
$ echo $?
0
</code></pre>
<p>When run as root from a sudo-launched parent shell, it works:</p>
<pre><code>$ sudo bash
# python3 app.py
#
</code></pre>
<p>However when the parent is sudo, it fails in a bizarre way:</p>
<pre><code>$ sudo python3 app.py
$ Traceback (most recent call last):
File "/tmp/app.py", line 7, in <module>
subprocess.run(["/bin/true"])
File "/usr/lib/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1955, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 38] Function not implemented: '/bin/true'
</code></pre>
<p>and:</p>
<pre><code>$ sudo bash
# exec python3 app.py
$ Traceback ... (same error message as above)
</code></pre>
<p>Apparently something in the child process created by subprocess.run is getting ENOSYS, but it's a mystery to me how having sudo as the parent can cause this.</p>
<p>Update: The system in question is TrueNAS SCALE (Debian-derived Linux) with Python 3.11.9. Since posting I discovered that this does not repro on another Linux system, so I now suspect it is somehow related to sudo configuration or kernel security options.</p>
|
<python><subprocess><sudo>
|
2025-01-25 17:00:43
| 1
| 355
|
ab.
|
79,387,009
| 3,070,181
|
How to resolve incompatible imports in python and pytest
|
<p>I have created a project exactly as described by the <a href="https://docs.pytest.org/en/7.1.x/explanation/goodpractices.html#good-integration-practices" rel="nofollow noreferrer">pytest Good Integration Practices documentation</a>, specifically the <a href="https://docs.pytest.org/en/7.1.x/explanation/goodpractices.html#tests-outside-application-code" rel="nofollow noreferrer">Tests outside application code section</a>.</p>
<p>This is my project layout</p>
<pre><code>.
├── pyproject.toml
├── src
│ └── basic_package
│ ├── bar.py
│ ├── __init__.py
│ └── main.py
└── tests
└── test_app.py
</code></pre>
<p>In <em>main.py</em> I import from <em>bar.py</em></p>
<pre><code>from bar import baz
def main() -> str:
return baz()
</code></pre>
<p>in <em>test_app.py</em></p>
<pre><code>from basic_package.main import main
def test_foo():
assert main() == 'qux'
</code></pre>
<p>It works when I run the project. However, if I run pytest, I get an error</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'bar'</p>
</blockquote>
<p>It works in pytest if I change the code in <em>main.py</em> to</p>
<pre><code>from .bar import baz
</code></pre>
<p>But then if I run the application I get the ModuleNotFoundError</p>
<p>How can I resolve this issue?</p>
<p>I am running <em>pytest</em> from the root directory of the project</p>
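<p>For completeness, the one variant I haven't fully explored is importing through the package name, which I assume requires the package to be importable (for example via an editable install or by having <code>src</code> on the path):</p>
<pre class="lang-py prettyprint-override"><code># src/basic_package/main.py (sketch)
from basic_package.bar import baz

def main() -> str:
    return baz()
</code></pre>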
|
<python><pytest>
|
2025-01-25 15:37:09
| 1
| 3,841
|
Psionman
|
79,386,829
| 10,985,257
|
How to provide tomllib
|
<p>Since Python 3.11 we are able to use the builtin library <code>tomllib</code>; before that we had access to the third-party library <code>tomli</code> and a few others.</p>
<p>I have not analyzed both packages deeply, but I came to the conclusion that I am able to replace <code>tomli</code> with <code>tomllib</code> for my purposes.</p>
<p>My issue with the situation: how do I handle the change while also supporting Python down to 3.9?</p>
<p>Is there a solution to provide different dependencies in <code>pyproject.toml</code> for different versions of Python (with corresponding additions or changes to the codebase)?</p>
<p>This question could probably be generalized, but I think the example gives a good picture of the general issue I face.</p>
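<p>On the code side I assume a guarded import like the one below works (sketch); for the dependency side my understanding is that an environment marker such as <code>tomli; python_version < "3.11"</code> is the usual approach, but I'm not sure whether that is the recommended way to express it in <code>pyproject.toml</code>:</p>
<pre class="lang-py prettyprint-override"><code>import sys

if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # needs the 'tomli' backport on 3.9/3.10

with open("pyproject.toml", "rb") as f:
    data = tomllib.load(f)
</code></pre>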
|
<python><dependency-management><pyproject.toml>
|
2025-01-25 13:44:57
| 1
| 1,066
|
MaKaNu
|
79,386,763
| 6,936,489
|
handle invalid encoding sequences in csv with polars
|
<p>Consider the following snippet:</p>
<pre class="lang-py prettyprint-override"><code>from io import TextIOWrapper, BytesIO
import polars as pl
import pandas as pd
csv_str = (
b"spam,egg\n"
+ "spam,œuf\n".encode("cp1252")
+ "spam,αυγό\n".encode("utf8")
)
content = BytesIO(csv_str)
wrapped = TextIOWrapper(content, errors="replace")
try:
df = pl.read_csv(wrapped)
except Exception as e:
print("polars failed!")
print(e)
wrapped.seek(0)
try:
df = pd.read_csv(wrapped, sep=",")
except Exception as e:
print("pandas failed!")
print(e)
</code></pre>
<p>There you have an invalid CSV, about as bad as it gets, with two different encodings in the same file. Strangely enough, this keeps turning out to be a real-life problem, and an all-too-frequent one.</p>
<p>With <code>pandas</code>, you can either handle this through the <code>TextIOWrapper</code> or the built-in <code>encoding_errors</code> argument.</p>
<p>Questions:</p>
<ul>
<li>why is this not working with <code>polars</code>, considering that the <code>TextIOWrapper</code> should handle this input as a stream?</li>
<li>is there a way to handle this natively with <code>polars</code> (I mean any way other than reading it with <code>pandas</code> then converting it with <code>polars.from_pandas</code>)?</li>
</ul>
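<p>For context, the manual workaround I currently fall back on is to decode permissively in Python, re-encode as clean UTF-8 and only then hand the bytes to polars (reusing <code>csv_str</code> from the snippet above); it works, but it buffers everything in memory and feels like it bypasses the point of streaming:</p>
<pre class="lang-py prettyprint-override"><code># decode with replacement characters, then feed clean UTF-8 bytes to polars
cleaned = csv_str.decode("utf8", errors="replace").encode("utf8")
df = pl.read_csv(BytesIO(cleaned))
print(df)
</code></pre>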
|
<python><character-encoding><python-polars>
|
2025-01-25 13:04:11
| 1
| 2,562
|
tgrandje
|
79,386,521
| 1,744,834
|
Polars top_k_by with over, k = 1. Bug?
|
<p>Given the following dataFrame:</p>
<pre><code>pl.DataFrame({
'A': ['a0', 'a0', 'a1', 'a1'],
'B': ['b1', 'b2', 'b1', 'b2'],
'x': [0, 10, 5, 1]
})
</code></pre>
<p>I want to take value of column <code>B</code> with max value of column <code>x</code> within same value of <code>A</code> (taken from <a href="https://stackoverflow.com/questions/79384474/polars-get-column-value-at-another-columns-min-max-value">this question</a>).</p>
<p>I know there's solution with <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.get.html" rel="nofollow noreferrer"><code>pl.Expr.get()</code></a> and <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.arg_max.html" rel="nofollow noreferrer"><code>pl.Expr.arg_max()</code></a>, but I wanted to use <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.top_k_by.html" rel="nofollow noreferrer"><code>pl.Expr.top_k_by()</code></a> instead, and for some reason it doesn't work for me with <code>k</code> = <code>1</code>:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.col.B.top_k_by("x", 1).over("A").alias("y")
)
</code></pre>
<pre><code>ComputeError: the length of the window expression did not match that of the group
Error originated in expression: 'col("B").top_k_by([dyn int: 1, col("x")]).over([col("A")])'
</code></pre>
<p>It does work for <code>k</code> = <code>2</code> though.
Do you think it's a bug?</p>
|
<python><python-polars>
|
2025-01-25 10:18:03
| 1
| 118,326
|
roman
|
79,386,183
| 6,335,363
|
How can I test that Python package entrypoints are correctly discovered and loaded?
|
<p>I am currently writing a package that seeks to load <a href="https://packaging.python.org/en/latest/guides/creating-and-discovering-plugins/#using-package-metadata" rel="nofollow noreferrer">entrypoint contributions</a> from other installed packages using <a href="https://docs.python.org/3/library/importlib.metadata.html#importlib.metadata.entry_points" rel="nofollow noreferrer"><code>importlib.metadata.entry_points</code></a>, but am unsure how I can test this properly. I am using Pytest for my package's testing.</p>
<p>I'm essentially trying to solve the opposite problem of <a href="https://stackoverflow.com/q/65846516/6335363">this post</a>, where they want to ensure that their package is loaded by another installed package.</p>
<p>I cannot simply have example plugin packages as development dependencies, as I want to be able to test different cases with different plugins:</p>
<ul>
<li>What happens if a plugin fails to import?</li>
<li>What happens if a plugin raises an exception?</li>
<li>Does a correctly-written plugin behave well with my application?</li>
</ul>
<p>I want to test each of these cases in different tests, and so I need a way of dynamically registering and unregistering mock plugins for Python's entrypoint system.</p>
<p>Here's what my ideal code would look like:</p>
<pre class="lang-py prettyprint-override"><code>def test_bad_plugin_class():
class MyBadPlugin(MyPluginInterface):
def do_something(self):
# This code is broken
raise RuntimeError("No.")
with importlib.metadata.add_mock_entrypoint( # function does not exist :(
'my_package.plugins',
'bad_plugin_lib:MyBadPlugin',
# Give the class that should be returned when calling EntryPoint.load
# to avoid needing to create a file to import from
MyBadPlugin,
):
# Would check for actual error handling here
with pytest.raises(ImportError):
my_library.load_plugins()
</code></pre>
<p>I could just mock the entire entrypoint system, but this feels excessive and tedious, and so I am looking for a better solution.</p>
<p>How can I accomplish this?</p>
|
<python><pytest><python-packaging>
|
2025-01-25 04:44:43
| 1
| 2,081
|
Maddy Guthridge
|
79,386,093
| 67,153
|
How to read a file into memory in FastAPI and pass it to MarkItDown library?
|
<p>The need is to upload a file to a FastAPI endpoint, convert it to Markdown and save the text to Redis (Files are up to 4MB in size).</p>
<p>The only logic I have found so far is to upload the file as <code>UploadFile</code>, read the contents, save them to disk with the right extension, pass that path to <a href="https://github.com/microsoft/markitdown" rel="nofollow noreferrer"><code>MarkItDown</code></a> library, read that markdown file again, and then pass it to Redis. Way too much I/O. Is there a way to do all of this in memory?</p>
<p>(For the sake of code simplicity, I removed all error handling and I assume only text files)</p>
<pre><code>@router.post("/upload")
async def uploadPost(filepond: UploadFile = File()):
"""
Convert a textual file to markdown.
Store in Redis
"""
# Create a temporary file to save the uploaded content
# for sake of simplicity I use txt for everything
with NamedTemporaryFile(delete=False,suffix=".txt") as temp_file:
temp_file_path = temp_file.name
content = await filepond.read()
temp_file.write(content)
temp_file.close()
md = MarkItDown()
result = md.convert(temp_file_path)
redis.setex("some key", 3600, result.text_content)
os.remove(temp_file_path)
</code></pre>
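<p>What I'm hoping for is something along these lines; note that <code>convert_stream</code> is the kind of API I'm assuming exists, and I'm not sure about its exact name or signature in MarkItDown, which is essentially my question:</p>
<pre class="lang-py prettyprint-override"><code>from io import BytesIO

@router.post("/upload")
async def uploadPost(filepond: UploadFile = File()):
    content = await filepond.read()
    md = MarkItDown()
    # hypothetical in-memory conversion, no temporary file on disk
    result = md.convert_stream(BytesIO(content), file_extension=".txt")
    redis.setex("some key", 3600, result.text_content)
</code></pre>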
|
<python><file><fastapi><markitdown>
|
2025-01-25 02:33:06
| 2
| 53,813
|
Itay Moav -Malimovka
|
79,386,068
| 147,530
|
ImportError: libxxx: cannot open shared object file: No such file or directory
|
<p>I am trying to build a Python package that interops with C++ code. I created a PyBind11 wrapper to do the interop. I created a setup.py file following the instructions <a href="https://pybind11.readthedocs.io/en/stable/compiling.html#modules-with-setuptools" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from pybind11.setup_helpers import Pybind11Extension
ext_modules = [
Pybind11Extension(
"python_example",
sorted(glob("src/*.cpp")), # Sort source files for reproducibility
extra_link_args=[f"-Wl,-rpath,$ORIGIN"]
),
]
setup(..., ext_modules=ext_modules)
</code></pre>
<p>I set <code>extra_link_args=[f"-Wl,-rpath,$ORIGIN"]</code> as I understand it will embed the origin to the search path of native libraries. (I have also tried without it with no luck). My C++ module depends on a shared library that I put right beside <code>__init__.py</code>. But when I try to import my C++ module I get:</p>
<blockquote>
<p>ImportError: libxxx: cannot open shared object file: No such file or directory</p>
</blockquote>
<p>How can I fix the error?</p>
<p>Adding more details:</p>
<ol>
<li><p>Verify that dependency is placed in same directory as pybind module:</p>
<pre><code>$ tree -L 1 venv/lib/python3.10/site-packages/my_package/
venv/lib/python3.10/site-packages/my_package/
├── __init__.py
├── __pycache__
├── my_data
├── libxxx.so.1
├── libyyy.so
├── libyyy.so.1.14.1
└── my_package_cpp.cpython-310-x86_64-linux-gnu.so
</code></pre>
</li>
<li><p>Verify there is an entry setting <code>RUNPATH</code> to <code>$ORIGIN</code> which is supposed to instruct the dynamic linker to add directory containing the module to its search path:</p>
<pre><code>$ readelf -d venv/lib/python3.10/site-packages/my_package/my_package.cpython-310-x86_64-linux-gnu.so | grep -E 'RPATH|RUNPATH'
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN]
</code></pre>
</li>
<li><p>And yet the dynamic linker searches everywhere except for the directory containing the module itself (the <code>$ORIGIN</code>):</p>
<p><a href="https://gist.github.com/siddhsql/6cb2788866e908209cfba1bcccb513d9" rel="nofollow noreferrer">LD_DEBUG=libs python3 -c "import my_pybind_module"</a></p>
</li>
<li><p><code>ldd</code> does not complain:</p>
<pre><code>$ ldd venv/lib/python3.10/site-packages/my_package/my_package_cpp.cpython-310-x86_64-linux-gnu.so
linux-vdso.so.1 (0x00007ffea75f7000)
libxxx.so.1 => /home/me/my_project/venv/lib/python3.10/site-packages/my_package/libxxx.so.1 (0x00007f072f352000)
libyyy.so.1.14.1 => /home/me/my_project/venv/lib/python3.10/site-packages/my_package/libyyy.so.1.14.1 (0x00007f072e3bb000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f072e187000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f072e167000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f072df3e000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f072de55000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f072de50000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f072de4b000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f072de46000)
/lib64/ld-linux-x86-64.so.2 (0x00007f072f48c000)
</code></pre>
</li>
</ol>
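<p>A workaround I'm considering (untested) is to preload the dependency from <code>__init__.py</code> with <code>ctypes</code> using a global binding before importing the extension, so the symbols are already resolved regardless of how the <code>RUNPATH</code> is handled:</p>
<pre class="lang-py prettyprint-override"><code># my_package/__init__.py (sketch)
import ctypes
import os

_here = os.path.dirname(os.path.abspath(__file__))
# Load the shared library that sits next to this file, with RTLD_GLOBAL so the
# extension module can resolve its symbols afterwards.
ctypes.CDLL(os.path.join(_here, "libxxx.so.1"), mode=ctypes.RTLD_GLOBAL)

from . import my_package_cpp  # noqa: E402
</code></pre>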
|
<python><setup.py><pybind11>
|
2025-01-25 02:05:24
| 0
| 20,700
|
morpheus
|
79,385,866
| 2,834,978
|
Numpy array boolean indexing to get containing element
|
<p>Given a (3,2,2) array, how do I get the second-dimension elements given a single value in the third dimension?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.array([
[[31., 1.], [41., 1.]],
[[63., 1.],[73., 3.]],
[[ 95., 1.], [100., 1]]
]
)
ref = arr[(arr[:,:,0] > 41.) & (arr[:,:,0] <= 63)]
print(ref)
</code></pre>
<p>Result</p>
<pre class="lang-none prettyprint-override"><code>[[63. 1.]]
</code></pre>
<p>Expected result</p>
<pre class="lang-none prettyprint-override"><code>[[63., 1.],[73., 3.]]
</code></pre>
<p>The input value is 63, so I don't know in advance that 73 exists, but I want to return it as well. In other words, if the value exists, return the whole parent array without reshaping.</p>
<p>Another example</p>
<p><code>ref = arr[(arr[:,:,0] <= 63)]</code></p>
<p>Returns</p>
<pre class="lang-none prettyprint-override"><code>[[31. 1.]
[41. 1.]
[63. 1.]]
</code></pre>
<p>But should return</p>
<pre class="lang-none prettyprint-override"><code>[[[31. 1.]
[41. 1.]]
[[63. 1.]
[73. 1.]]]
</code></pre>
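<p>One direction I've been experimenting with is reducing the boolean mask along the second axis with <code>.any(axis=1)</code>, so a whole parent block is selected as soon as any of its rows matches; I'm not sure it's the idiomatic way, though:</p>
<pre class="lang-py prettyprint-override"><code># select whole (2, 2) blocks whenever any row in the block matches
mask = (arr[:, :, 0] > 41.) & (arr[:, :, 0] <= 63)
print(arr[mask.any(axis=1)])
# [[[63.  1.]
#   [73.  3.]]]

print(arr[(arr[:, :, 0] <= 63).any(axis=1)])
# [[[31.  1.]
#   [41.  1.]]
#  [[63.  1.]
#   [73.  3.]]]
</code></pre>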
|
<python><numpy>
|
2025-01-24 23:16:17
| 3
| 14,328
|
LMC
|
79,385,857
| 23,570,806
|
Windows ModuleNotFoundError: No module named 'Crypto' even though pycryptodome is installed
|
<p>I'm trying to use the pycryptodome library in my Python project, but I keep getting the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'Crypto'
</code></pre>
<p>Here's what I have done so far:</p>
<p>I installed pycryptodome using pip:</p>
<pre><code>PS C:\Users\Nathan> pip install pycryptodome
Requirement already satisfied: pycryptodome in c:\users\nathan\appdata\local\programs\python\python311\lib\site-packages (3.21.0)
</code></pre>
<p>Verified the installation:</p>
<pre><code>PS C:\Users\Nathan> pip show pycryptodome
Name: pycryptodome
Version: 3.21.0
Summary: Cryptographic library for Python
Home-page: https://www.pycryptodome.org
Author: Helder Eijs
Author-email: helderijs@gmail.com
License: BSD, Public Domain
Location: C:\Users\Nathan\AppData\Local\Programs\Python\Python311\Lib\site-packages
Requires:
Required-by: cart, eth-keyfile
</code></pre>
<p>Tested importing Crypto in Python:</p>
<pre><code>PS C:\Users\Nathan> python
>>> from Crypto.Cipher import AES
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'Crypto'
</code></pre>
<p>Tried uninstalling and reinstalling:</p>
<pre><code>PS C:\Users\Nathan> pip uninstall pycryptodome -y
Found existing installation: pycryptodome 3.21.0
Uninstalling pycryptodome-3.21.0:
Successfully uninstalled pycryptodome-3.21.0
PS C:\Users\Nathan> pip install --force-reinstall pycryptodome
Collecting pycryptodome
Using cached pycryptodome-3.21.0-cp36-abi3-win_amd64.whl.metadata (3.4 kB)
Using cached pycryptodome-3.21.0-cp36-abi3-win_amd64.whl (1.8 MB)
Installing collected packages: pycryptodome
Successfully installed pycryptodome-3.21.0
</code></pre>
<p>But the error persists.</p>
<p>Verified there are no conflicting packages installed:</p>
<pre><code>PS C:\Users\Nathan> pip uninstall pycrypto
WARNING: Skipping pycrypto as it is not installed.
</code></pre>
<p>Checked the Python path to ensure it includes the correct site-packages directory:</p>
<pre><code>>>> import sys
>>> print(sys.path)
['', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\python311.zip', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\DLLs', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\Lib', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311', 'C:\\Users\\Nathan\\AppData\\Roaming\\Python\\Python311\\site-packages', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\win32', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\win32\\lib', 'C:\\Users\\Nathan\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\Pythonwin']
</code></pre>
<p>The directory C:\Users\Nathan\AppData\Local\Programs\Python\Python311\Lib\site-packages is listed.
Despite all this, the error still occurs.</p>
<p>Why am I still getting ModuleNotFoundError: No module named 'Crypto' even though pycryptodome is installed and located in the correct directory? What steps can I take to resolve this issue?</p>
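<p>For additional diagnostics, this is what I plan to check next; it should show which interpreter is actually running and where, if anywhere, <code>Crypto</code> would be loaded from:</p>
<pre class="lang-py prettyprint-override"><code>import importlib.util
import sys

print(sys.executable)                      # which python.exe is actually running
print(importlib.util.find_spec("Crypto"))  # where Crypto would be imported from (or None)
</code></pre>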
|
<python><python-3.x><windows><pip>
|
2025-01-24 23:09:15
| 1
| 485
|
Asile34
|
79,385,719
| 516,433
|
Conditionally use an context specific ansible_python_interpreter based on host
|
<p>I recently ran into a bug where my Ansible plays stopped working because RedHat backported <a href="https://access.redhat.com/errata/RHSA-2025:0012" rel="nofollow noreferrer">a CVE patch</a> which was incompatible with the <a href="https://github.com/psf/requests/issues/6707" rel="nofollow noreferrer">python <code>docker-py</code> library by way of <code>requests</code></a>. The fix was <a href="https://github.com/psf/requests/issues/6707#issuecomment-2129898633" rel="nofollow noreferrer">added at version 7.1.0 of <code>docker-py</code></a>, which would require Python 3.8, but I cannot update the system Python on the server to 3.8 because we also need <a href="https://github.com/ansible-community/ansible-bender/issues/275" rel="nofollow noreferrer"><code>selinux</code> for other tasks and this package is only supported on python 3.6</a>. So I came to the conclusion I needed to use different Python instances for different tasks; simple enough, just specify the interpreter on the task:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Fetch image info for preexisting copy of {{ fully_qualified_image }}
community.docker.docker_image_info:
name: "{{ fully_qualified_image }}"
register: preexisting_image_info
failed_when: false
vars:
# ansible_python_interpreter_docker points to a python venv containing the necessary
# docker lib
ansible_python_interpreter: "{{ ansible_python_interpreter_docker }}"
</code></pre>
<p>But this is not the case for <em>all</em> of my hosts. On some hosts I need to just use the default interpreter...</p>
<p>So I tried to do this in <code>group_vars/all/vars.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>ansible_python_interpreter_docker: "{{ ansible_python_interpreter }}"
</code></pre>
<p>And override it per host that needs the <em>special</em> Python. This works for all the hosts that use the special docker interpreter. However, for those that don't, I get an infinite recursion:</p>
<pre class="lang-none prettyprint-override"><code>...
Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ ansible_python_interpreter }}'.
Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ ansible_python_interpreter_docker }}'.
Error was a <class 'ansible.errors.AnsibleError'>, original message: recursive loop detected in template string: {{ ansible_python_interpreter_docker }}"}
</code></pre>
<p>This little test play demonstrates <em>a workaround</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>---
- hosts: host-with-system-docker,host-with-venv-docker
gather_facts: true
tasks:
- block:
- name: do a docker thing with system python
debug:
msg: "{{ ansible_python_interpreter }}"
when: ansible_python_interpreter_docker is not defined
- name: do a docker thing with docker specific python
debug:
msg: "{{ ansible_python_interpreter }}"
vars:
ansible_python_interpreter: "{{ ansible_python_interpreter_docker }}"
when: ansible_python_interpreter_docker is defined
- debug:
msg: "YAY"
</code></pre>
<p>But there's gotta be a better way...</p>
<p><strong>Update</strong>, per suggestion from @β.εηοιτ.βε, I tried:</p>
<pre class="lang-yaml prettyprint-override"><code>---
- hosts: host-with-system-docker,host-with-venv-docker
gather_facts: true
tasks:
- block:
- name: dont override at all
debug:
msg: "{{ ansible_python_interpreter }}"
- name: use omit
debug:
msg: "{{ ansible_python_interpreter }}"
vars:
ansible_python_interpreter: "{{ ansible_python_interpreter_docker | default(omit) }}"
</code></pre>
<p>but it ended up not working (with a rather strange outcome):</p>
<pre class="lang-none prettyprint-override"><code>TASK [dont override at all] ****************************************************************************************************************************************************************************************************************************************************************************
Monday 27 January 2025 10:18:24 -0500 (0:00:02.976) 0:00:03.062 ********
ok: [host-with-venv-docker] => {
"msg": "/usr/bin/python3"
}
ok: [host-with-system-docker] => {
"msg": "/home/ltheisen/.ansible-venvs/d58c1d218f1f70fa8eedf52a851e15d9/bin/python"
}
TASK [use omit] ****************************************************************************************************************************************************************************************************************************************************************************************
Monday 27 January 2025 10:18:24 -0500 (0:00:00.071) 0:00:03.134 ********
ok: [host-with-venv-docker] => {
"msg": "/usr/local/lib/ansible/python3.8-venv-docker/bin/python"
}
ok: [host-with-system-docker] => {
"msg": "Hello world!"
}
</code></pre>
|
<python><docker><ansible>
|
2025-01-24 21:48:22
| 0
| 15,059
|
Lucas
|
79,385,676
| 13,968,392
|
Filter with expression expansion
|
<p>Is it possible to convert the following <code>filter</code>, which uses two conditions, to something that uses expression expansion or a custom function in order to apply the DRY principle (avoid the repetition)?<br />
Here is the example:</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"a": [1, 2, 3, 4, 5],
"val1": [1, None, 0, 0, None],
"val2": [1, None, None, 0, 1],
}
)
df.filter((~pl.col("val1").is_in([None, 0])) | (~pl.col("val2").is_in([None, 0])))
</code></pre>
<p>Results in:</p>
<pre><code>┌─────┬──────┬──────┐
│ a ┆ val1 ┆ val2 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪══════╪══════╡
│ 1 ┆ 1 ┆ 1 │
│ 5 ┆ null ┆ 1 │
└─────┴──────┴──────┘
</code></pre>
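<p>What I had in mind is something combining expression expansion with a horizontal reduction, roughly like the sketch below, but I'm not sure it is correct, which is partly why I'm asking:</p>
<pre class="lang-py prettyprint-override"><code># expand the negated is_in check over both columns, then OR them horizontally
df.filter(
    pl.any_horizontal(~pl.col("val1", "val2").is_in([None, 0]))
)
</code></pre>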
|
<python><filter><user-defined-functions><python-polars>
|
2025-01-24 21:29:39
| 1
| 2,117
|
mouwsy
|
79,385,579
| 4,296,426
|
Azure Bot Skill in Copilot doesn't receive event activities in omnichannel live chat widget or SMS channel
|
<p>I have a fairly simple Copilot bot with a Python/FastAPI Azure Bot Service in Azure added as a skill that passes messages through. When I use the <code>send_activities</code> method on the <code>turn_context</code> object, messages come through fine on Omnichannel's live chat widget that they provide for a chat stream.</p>
<p>However, when I try to send an event, such as:</p>
<pre><code>transfer_event = {
"type": "event",
"name": "initiateHandoff",
"from": {
"id": "1234",
"name": "me"
}
}
response = await turn_context.send_activities([transfer_event])
</code></pre>
<p>The event never makes it to copilot to trigger a topic. There are no logged errors or anything.</p>
<p>The demo website uses the Direct Line channel, and there I can use the conversationId to send events. The conversation ids in the live chat widget and SMS, however, do not work, as they look like they're coming from ACS.</p>
<p>If anyone has any ideas, it would be greatly appreciated.</p>
|
<python><botframework><microsoft-copilot>
|
2025-01-24 20:38:19
| 2
| 1,682
|
Optimus
|
79,385,534
| 7,995,293
|
Python adaptor for working around relative imports in my QGIS plugin?
|
<p>I am writing a QGIS plugin. During early development, I wrote and tested the Qt GUI application independently of QGIS. I made use of absolute imports, and everything worked fine.</p>
<p>Then, I had to adapt everything to the quirks of QGIS. I can't explain why and haven't been able to find any supporting documentation, but nonetheless: apparently, QGIS needs or strongly prefers relative imports. The majority of other plugins I've looked at (completely anecdotal of course) have all used a flat plugin directory and relative imports.</p>
<p>My team decided to keep the hierarchical structure. The plugin now works in QGIS, with the same hierarchical structure, using relative imports.</p>
<p>My goal: I would like to still be able to run the GUI independently of QGIS, as it does not (yet) depend on any aspects of QGIS. With the relative imports, this is completely broken.</p>
<p>My project directory has the following hierarchy:</p>
<pre><code>.
├── app
│ └── main.py
├── __init__.py
├── justfile
├── metadata.txt
├── plugin.py
├── README.md
├── resources
│ ├── name_resolver.py
│ └── response_codes.py
├── processing
│ ├── B_processor.py
│ ├── A_processor.py
│ ├── processor_core.py
│ ├── processor_interface.py
│ ├── processor_query_ui.py
│ └── processor_query_ui.ui
└── tests
├── __init__.py
├── test_A_processor.py
└── test_processor_core.py
</code></pre>
<p>The <code>app</code> directory and its <code>main.py</code> module are where I'm trying to run the GUI independently of QGIS. The GUI is in <code>processing/processor_query_ui.py</code>.</p>
<p><code>app/main.py</code> is as follows:</p>
<pre><code>if __name__ == "__main__":
import sys
from PyQt5 import QtWidgets
from processing.processor_query_ui import UI_DataFinderUI
app = QtWidgets.QApplication(sys.argv)
ui = UI_DataFinderUI()
ui.show()
sys.exit(app.exec_())
</code></pre>
<p>When running main from the top level, all imports <em>within main.py</em> work:</p>
<pre><code>$ python app/main.py
</code></pre>
<p>What does NOT work are the subsequent imports:</p>
<pre><code>Traceback (most recent call last):
File "/path/to/app/main.py", line 4, in <module>
from processing.processor_query_ui import UI_DataFinderUI
File "/path/to/processing/processor_query_ui.py", line 2, in <module>
from .A_processor import Aprocessor
File "/path/to/processing/A_processor.py", line 5, in <module>
from ..resources.response_codes import RESPONSE_CODES
ImportError: attempted relative import beyond top-level package
</code></pre>
<p>This shows that all imports in <code>main.py</code> are working correctly. But when <code>processor_query_ui</code> tries to do <em>its</em> imports, <em>those</em> ones fail.</p>
<p>I have tried adding <code>__init__.py</code> files to all first-level directories as well (e.g. <code>{app,resources,processing}/__init__.py</code>), to no avail. Running <code>python -m app/main</code> (with or without <code>.py</code>) doesn't work either, though I didn't really expect it to.</p>
<p>For <code>pytest</code> to work, the <code>tests</code> directory <em>must</em> have an <code>__init__.py</code> file; then, pytest works as either <code>pytest</code> or <code>python -m pytest</code>.</p>
<p>My goal is to be able to run the GUI in <code>processing/processor_query_ui.py</code> as a standalone app, by writing some kind of adaptor such that I do not have to change the current directory structure or relative imports (which, as above, make QGIS happy).</p>
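<p>For reference, the kind of adaptor I have in mind is a small bootstrap in <code>app/main.py</code> that puts the plugin's <em>parent</em> directory on <code>sys.path</code> and then imports the GUI through the package, so the relative imports have a parent package to resolve against. This is only a sketch; <code>my_plugin</code> stands in for whatever the plugin's top-level directory/package name is:</p>
<pre><code>import sys
from pathlib import Path

if __name__ == "__main__":
    # Make the directory *containing* the plugin importable, then import the
    # plugin as a package so "from ..resources import ..." has somewhere to go.
    plugin_root = Path(__file__).resolve().parent.parent
    sys.path.insert(0, str(plugin_root.parent))

    from PyQt5 import QtWidgets
    from my_plugin.processing.processor_query_ui import UI_DataFinderUI  # "my_plugin" is a placeholder

    app = QtWidgets.QApplication(sys.argv)
    ui = UI_DataFinderUI()
    ui.show()
    sys.exit(app.exec_())
</code></pre>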
<p>Any advice is greatly appreciated.</p>
|
<python><plugins><python-import><pyqgis><relative-import>
|
2025-01-24 20:16:05
| 2
| 399
|
skytwosea
|
79,385,532
| 8,543,025
|
np.isin fails on pd.Index with multiple dtypes
|
<p>I noticed this strange behavior, wondering what's happening here:<br />
Say I want to find instances of a numpy array / pandas Index that are included in some predefined list, which has <strong>multiple dtypes</strong>:</p>
<pre><code>lst = ["A", "B", 1, 2, "C", 3]
test = ["B", 1, 2, "C"]
arr = np.array(lst)
idx = pd.Index(lst)
</code></pre>
<p>You'd expect to get the same behavior:</p>
<pre><code>np.isin(lst, test) == np.isin(arr, test) == np.isin(idx, test) # np.array([False, True, True, True, True, False])
</code></pre>
<p>But the <code>Index</code> object doesn't behave as expected. Instead you get</p>
<pre><code>np.isin(idx, test) == np.array([False, True, False, False, True, False])
</code></pre>
<p>This persists even if you convert the <code>Index</code> to an <code>array</code>, but if you convert it to a <code>list</code> it returns to the expected behavior:</p>
<pre><code>np.isin(idx.to_numpy(), test) # np.array([False, True, False, False, True, False])
np.isin(idx.to_list(), test) # np.array([False, True, True, True, True, False])
</code></pre>
<p>What's even weirder: if you init the <code>Index</code> from <code>arr</code> instead of <code>lst</code>, everything works as expected initially:</p>
<pre><code>idx_from_arr = pd.Index(arr)
np.isin(lst, test) == np.isin(arr, test) == np.isin(idx_from_arr, test) # np.array([False, True, True, True, True, False])
</code></pre>
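<p>For what it's worth, checking the dtypes suggests the relevant difference (the <code>Index</code> built from the list keeps the mixed values as <code>object</code>, while the array has already coerced everything to strings), though I'm not sure that fully explains it:</p>
<pre><code>print(arr.dtype)             # <U1 -> every element was coerced to a string
print(idx.dtype)             # object -> the mixed str/int values are preserved
print(idx.to_numpy().dtype)  # object
print(pd.Index(arr).dtype)   # object, but the values are already all strings
</code></pre>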
<p>I'm wondering if it's a bug or intentional, and why it happens.</p>
|
<python><pandas><numpy>
|
2025-01-24 20:15:41
| 0
| 593
|
Jon Nir
|
79,385,374
| 3,125,823
|
500 server error with Django Rest Framework
|
<p>I am using Django/DRF with Djoser and djangorestframework-simplejwt to create an API for full authentication including signup, login, activation, forgot password and reset password.</p>
<p>I followed along with this <a href="https://www.youtube.com/watch?v=2pZmxh8Tf78" rel="nofollow noreferrer">YT tutorial</a></p>
<p>For some reason, when I send a POST request in Postman to localhost:8000/api/users/ I am getting this error and I have no idea why at this point: <code>django.db.utils.DatabaseError: Save with update_fields did not affect any rows.</code></p>
<p>I'm not using SQLite but an actual Postgres db on localhost.
I've tried changing <code>user.save(self._db)</code> to just <code>user.save()</code>; same error.
I've updated Django to 5.x. Django is the only package with a significant version jump compared to the tutorial, which uses Django 4.x.</p>
<p>I did move some of the original model code to a <code>managers.py</code> file based on this <a href="https://testdriven.io/blog/django-custom-user-model/" rel="nofollow noreferrer">testdriven.io tutorial</a></p>
<p>I've been able to run <code>python manage.py runserver</code> with no errors after doing so.</p>
<p>It doesn't seem to be related to any code from the tutorial, but rather to something with the Python packages...</p>
<p>Here is the error from the cli:</p>
<pre><code>[24/Jan/2025 16:40:07] "POST /api/users/ HTTP/1.1" 500 138392
Bad Request: /api/users/
[24/Jan/2025 17:05:47] "POST /api/users/ HTTP/1.1" 400 62
Internal Server Error: /api/users/
Traceback (most recent call last):
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/views/decorators/csrf.py", line 65, in _view_wrapper
return view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/viewsets.py", line 124, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/djoser/views.py", line 134, in perform_create
user = serializer.save(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/rest_framework/serializers.py", line 208, in save
self.instance = self.create(validated_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/djoser/serializers.py", line 40, in create
user = self.perform_create(validated_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/djoser/serializers.py", line 51, in perform_create
user.save(update_fields=["is_active"])
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/contrib/auth/base_user.py", line 62, in save
super().save(*args, **kwargs)
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/db/models/base.py", line 892, in save
self.save_base(
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/db/models/base.py", line 998, in save_base
updated = self._save_table(
^^^^^^^^^^^^^^^^^
File "/home/da/projects/full_auth_api/full_auth/backend/venv/lib/python3.12/site-packages/django/db/models/base.py", line 1136, in _save_table
raise DatabaseError("Save with update_fields did not affect any rows.")
django.db.utils.DatabaseError: Save with update_fields did not affect any rows.
[24/Jan/2025 17:05:56] "POST /api/users/ HTTP/1.1" 500 138392
</code></pre>
<p><strong>My users/models.py</strong>:</p>
<pre><code>from django.contrib.postgres.functions import RandomUUID
from django.db import models
from django.utils import timezone
from django.contrib.auth.models import (
AbstractBaseUser,
PermissionsMixin
)
from .managers import UserAccountManager
from django.utils.translation import gettext_lazy as _
class UserAccount(AbstractBaseUser, PermissionsMixin):
user_id = models.UUIDField(primary_key=True, default=RandomUUID, editable=False)
first_name = models.CharField(_('first_name'), max_length=255)
last_name = models.CharField(_('last_name'), max_length=255)
email = models.EmailField(unique=True, max_length=255)
title = models.CharField(_('title'), max_length=55, blank=True)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
date_joined = models.DateTimeField(default=timezone.now)
objects = UserAccountManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['first_name', 'last_name']
def __str__(self):
return self.email
</code></pre>
<p><strong>My users/managers.py</strong>:</p>
<pre><code>from django.contrib.auth.models import BaseUserManager
from django.utils.translation import gettext_lazy as _
from django.db import models
class UserAccountManager(BaseUserManager):
def create_user(self, email, password=None, **kwargs):
"""
Creates and saves a User with the given email, date of
birth and password.
"""
if not email:
raise ValueError(_('Users must have an email address'))
email = self.normalize_email(email)
email = email.lower()
user = self.model(
email=email,
**kwargs
)
user.set_password(password)
user.save()
return user
def create_superuser(self, email, password=None, **kwargs):
"""
Creates and saves a superuser with the given email, date of
birth and password.
"""
user = self.create_user(
email,
password=password,
**kwargs
)
user.is_staff = True
user.is_superuser = True
user.save()
return user
</code></pre>
<p>In the tutorial he goes over how to use HTTP-ONLY cookies to manage the tokens, we put that code in authentication.py</p>
<p><strong>And this is the users/authentication.py</strong>:</p>
<pre><code>from django.conf import settings
from rest_framework_simplejwt.authentication import JWTAuthentication
class CustomJWTAuthentication(JWTAuthentication):
def authenticate(self, request):
try:
header = self.get_header(request)
if header is None:
raw_token = request.COOKIES.get(settings.AUTH_COOKIE)
else:
raw_token = self.get_raw_token(header)
if raw_token is None:
return None
validated_token = self.get_validated_token(raw_token)
return self.get_user(validated_token), validated_token
except:
return None
</code></pre>
<p>If anyone needs more code, just let me know. Again, I'm not sure why I'm getting this 500 server error that prevents data from being saved to the database.</p>
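<p>One structural difference from the tutorial that I can think of is the UUID primary key with the database-side <code>RandomUUID</code> default. In case it's relevant, this is the variant I'm planning to test next (a sketch only, using a plain Python-side default):</p>
<pre><code>import uuid
from django.db import models

# Hypothetical change: generate the primary key in Python instead of in the
# database, so the instance already knows its pk value when it is saved again.
user_id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
</code></pre>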
|
<python><django><django-rest-framework>
|
2025-01-24 19:03:36
| 1
| 1,958
|
user3125823
|
79,385,231
| 668,624
|
XRPL-py API - call speed and larger batch retrieval
|
<p>I am working on an app to retrieve account & balance data from the XRPL mainnet using the XRPL-py API without knowing the account (wallet) information beforehand.</p>
<p>Here is link to the XRPL-py API documentation: <a href="https://xrpl-py.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">XRPL-py API</a></p>
<p>The XRPL paginates using a marker, so the design is a serialized pipeline in which one batch of data must be fetched before the next batch can be requested (unless I am misinformed?).</p>
<p>My script roughly takes 1 to 1.5 seconds on average to fetch and process a single fetch worth of data and since there are currently ~5 million accounts (wallets) and growing each day, it would take roughly 8 hours to retrieve all of the data (provided no fetch errors) which will only grow in length over time as the number of accounts grows with mass adoption.</p>
<p>Lastly, since the state of the accounts changes, I want to re-fetch all of the data often, ideally once a day (if possible) but given the 8 hours it would currently take I suppose I could live with once a week but this very likely will be a problem once the data gets much larger.</p>
<p>My question: is there a more efficient way to fetch the data I aspire from the XRPL or is my objective nearly an impossible task?</p>
<p>Here is my code:</p>
<pre><code>from xrpl.clients import JsonRpcClient
from xrpl.models.requests import LedgerData
import time
# Connect to the XRPL mainnet
client = JsonRpcClient("https://s1.ripple.com:51234")
# Function to fetch data
def fetch_page(marker=None):
ledger_data = LedgerData(
ledger_index=latest_ledger,
binary=False,
marker=marker
)
response = client.request(ledger_data)
if not response.is_successful():
raise ValueError("Failed to fetch ledger data.")
else:
return response.result['state'], response.result.get('marker')
# Function to process data
def process_data(data,accounts):
for obj in data:
if obj['LedgerEntryType'] == 'AccountRoot':
accounts.append({'address': obj['Account'],
'balance': int(obj['Balance']) / 1000000 # Convert drops to number of coins
})
return accounts
latest_ledger = client.request(LedgerData(ledger_index="validated")).result['ledger_index']
accounts = []
current_marker = None
while True:
loop_start_time = time.time() # Start timing
# Fetch data
state, current_marker = fetch_page(current_marker)
# Process data
accounts = process_data(state,accounts)
loop_end_time = time.time() # Stop timing
loop_time_elapsed = loop_end_time - loop_start_time
print(f"Total time for pagination call/processing: {loop_time_elapsed:.3f} seconds")
if not current_marker: # No more pages
break
</code></pre>
<p>Example output running of script:</p>
<pre><code>Total time for pagination call/processing: 0.537 seconds
Total time for pagination call/processing: 0.552 seconds
Total time for pagination call/processing: 0.549 seconds
Total time for pagination call/processing: 0.564 seconds
Total time for pagination call/processing: 0.538 seconds
Total time for pagination call/processing: 1.691 seconds
Total time for pagination call/processing: 1.313 seconds
Total time for pagination call/processing: 1.325 seconds
Total time for pagination call/processing: 1.025 seconds
Total time for pagination call/processing: 1.018 seconds
Total time for pagination call/processing: 1.054 seconds
Total time for pagination call/processing: 1.013 seconds
Total time for pagination call/processing: 1.014 seconds
Total time for pagination call/processing: 1.024 seconds
Total time for pagination call/processing: 1.025 seconds
Total time for pagination call/processing: 1.406 seconds
Total time for pagination call/processing: 1.038 seconds
Total time for pagination call/processing: 1.049 seconds
Total time for pagination call/processing: 1.300 seconds
Total time for pagination call/processing: 1.046 seconds
Total time for pagination call/processing: 0.918 seconds
Total time for pagination call/processing: 1.230 seconds
Total time for pagination call/processing: 1.019 seconds
Total time for pagination call/processing: 1.023 seconds
Total time for pagination call/processing: 1.022 seconds
Total time for pagination call/processing: 1.238 seconds
Total time for pagination call/processing: 1.015 seconds
Total time for pagination call/processing: 1.037 seconds
Total time for pagination call/processing: 1.131 seconds
Total time for pagination call/processing: 1.020 seconds
Total time for pagination call/processing: 1.211 seconds
Total time for pagination call/processing: 1.035 seconds
Total time for pagination call/processing: 1.030 seconds
Total time for pagination call/processing: 1.028 seconds
</code></pre>
|
<python><pagination><xrp><rippled>
|
2025-01-24 17:53:36
| 0
| 11,800
|
codingknob
|
79,385,122
| 673,600
|
async posting just hangs
|
<p>I'm using aiohttp to post multiple times, but I've hit an issue where I never get beyond the first post. The code then hangs for a while and produces the error below.</p>
<pre><code>async def posted(URL, payload):
#'Content-Type': 'application/json'}
headers = {'User-Agent': get_random_user_agent(),
'ngrok-skip-browser-warning': '100'}
async with aiohttp.ClientSession(URL, headers=headers) as session:
for i, data in enumerate(payload):
if data[0] is not None and data[1] is not None:
print(data[1], len(data[0]))
myobj = {'url_input': data[1], 'text_input': data[0]}
encoded_data = parse.urlencode(myobj).encode()
async with session.post('/', json=myobj):
print('posted')
#await session.close()
</code></pre>
<p>The call is as follows:</p>
<pre><code>await posted(URL, work_list)
</code></pre>
<p>where</p>
<pre><code>work_list = [('content', 'url'),('content1', 'url1')]
</code></pre>
<p>The error is as follows:</p>
<pre><code>The above exception was the direct cause of the following exception:
ClientOSError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/aiohttp/streams.py in read(self)
669 self._waiter = self._loop.create_future()
670 try:
--> 671 await self._waiter
672 except (asyncio.CancelledError, asyncio.TimeoutError):
673 self._waiter = None
ClientOSError: [Errno 1] [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2580)
</code></pre>
<p>As pointed out, this could be due to ngrok being HTTPS and forwarding to HTTP. So I created an ad-hoc SSL context in the Flask application and restarted ngrok, but now I do not see any information coming through in debug, and a similar problem crops up.</p>
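<p>For debugging, the next thing I plan to try is reading the response status and body explicitly instead of discarding them, roughly like this:</p>
<pre><code>async with session.post('/', json=myobj) as resp:
    print('status:', resp.status)
    body = await resp.text()
    print('posted, response length:', len(body))
</code></pre>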
|
<python><asynchronous><aiohttp>
|
2025-01-24 17:14:23
| 0
| 6,026
|
disruptive
|
79,385,026
| 93,910
|
Pandas groupby with tag-style list
|
<p>I have a dataset with 'tag-like' groupings:</p>
<pre><code> Id tags
0 item1 ['friends','family']
1 item2 ['friends']
2 item3 []
3 item4 ['family','holiday']
</code></pre>
<p>So a row can belong to several groups. I want to create an object similar to groupby, so that I can use agg etc.</p>
<pre><code>df.groupby('tags').count()
</code></pre>
<p>expected result</p>
<pre><code> tags count
0 'friends' 2
1 'family' 2
2 'holiday' 1
</code></pre>
<p>But of course it won't work because it treats the whole list as the key, rather than the individual tags. Here's an attempt</p>
<pre><code>tagset = set(df.tags.explode())
grpby = { t: df.loc[df.tags.str.contains(t, regex=False)]
for t in tagset }
</code></pre>
<p>From what I understand, groupby objects are structured a bit like this. But how to make it a <code>groupby</code> object? So that I can do things like <code>grpby.year.mean()</code> etc?</p>
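<p>One direction I've been considering is exploding the tags first, but then I'm grouping a reshaped copy rather than the original rows, and I'm not sure it's the idiomatic way to get a "grouped" object back:</p>
<pre><code>exploded = df.explode('tags')
print(exploded.groupby('tags')['Id'].count())
# tags
# family     2
# friends    2
# holiday    1
</code></pre>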
|
<python><pandas><dataframe><group-by>
|
2025-01-24 16:39:08
| 3
| 7,056
|
Sanjay Manohar
|
79,384,929
| 4,673,585
|
Executing stored procedure via python doesn't modify data but via ssms or data studio works
|
<p>I have a stored procedure in an Azure SQL database which moves data from a staging table to a target table.
When I run the procedure manually, the data moves:</p>
<pre><code>exec [Construction].[uspProcess] 'Equipment',26
</code></pre>
<p>So this procedure moves data for Equipment table from ConstructionStaging.Equipment to Construction.Equipment table.</p>
<p>And then I have a Python script to execute the same procedure. It runs fine, but the data is not modified. I get no error; the data transfer just doesn't happen.</p>
<p>config.py:</p>
<pre><code>import os
def get_connection_string():
IS_MANAGED_IDENTITY = os.getenv("IS_MANAGED_IDENTITY", "True").lower() == "true"
if IS_MANAGED_IDENTITY:
connection_string = (
f"mssql+pyodbc://{DATABASE_NAME}?driver=ODBC+Driver+17+for+SQL+Server&Authentication=ActiveDirectoryMsi&autocommit=True"
)
else:
DATABASE_SERVER_NAME = os.getenv("DATABASE_SERVER_NAME")
DATABASE_NAME = os.getenv("DATABASE_NAME")
DATABASE_USER_NAME = os.getenv("DATABASE_USER_NAME")
DATABASE_USER_PASSWORD = os.getenv("DATABASE_USER_PASSWORD")
connection_string = (
f"mssql+pyodbc://{DATABASE_USER_NAME}:{DATABASE_USER_PASSWORD}@{DATABASE_SERVER_NAME}/{DATABASE_NAME}"
"?driver=ODBC+Driver+17+for+SQL+Server&autocommit=True"
)
return connection_string
</code></pre>
<p>db.py:</p>
<pre><code>def staging_to_target(table_name, batch_id):
session = session_factory() # Create a new session instance
try:
sql = text("EXEC SKUK_Lidat.uspProcessLidat @TableName = :table_name, @BatchId = :batch_id").execution_options(autocommit=True)
session.execute(sql, {"table_name": table_name, "batch_id": batch_id})
return True # No need to commit if autocommit is enabled
except Exception as e:
print(f"Error calling uspProcessLidat: {e}")
return False
finally:
session.close() # Ensure session closure
</code></pre>
<p>Finally, I am executing the procedure:</p>
<pre><code>if staging_to_target(table_name, batch_id):
logging.info(f"uspProcessLidat executed successfully for table: {table_name}.")
else:
logging.error(f"uspProcessLidat execution failed for table: {table_name}.")
</code></pre>
<p>Please let me know where I am going wrong.</p>
<p>Thank you!</p>
|
<python><sql-server><sqlalchemy><pyodbc><python-sql>
|
2025-01-24 16:06:59
| 0
| 337
|
Rahul Sharma
|
79,384,924
| 626,804
|
Python re.sub: backreference in replacement pattern followed by digit
|
<p>I would like to match a regular expression in a string and add the character <code>0</code> after all occurrences. That is, each match will be replaced with itself followed by <code>0</code>. But because <code>0</code> is a digit, I don't know how to write it in the replacement pattern given as the second argument to <code>re.sub</code>.</p>
<p>Let me give an example of an easier problem: add the character <code>X</code> after all vowels.</p>
<pre><code>import re
s = 'hello'
r = re.sub('([aeiou])', r'\1X', s)
print(r)
</code></pre>
<p>This prints <code>heXlloX</code>.</p>
<p>But suppose instead of adding the character <code>X</code> I want to add the character <code>0</code>. If I try to write this</p>
<pre><code>r = re.sub('([aeiou])', r'\10', s)
</code></pre>
<p>then it thinks I am making a backreference to the capturing group numbered 10, and fails with <code>invalid group reference 10</code>.</p>
<p>I know for this particular pattern I could rework it as a lookbehind assertion, so that the replacement pattern would no longer need a backreference.</p>
<pre><code>r = re.sub('(?<=[aeiou])', '0', s)
</code></pre>
<p>That works -- but not all regular expressions can be used as lookbehind in this way.</p>
<p>Another approach would be to manually break apart the input string at match locations, perhaps with <code>re.finditer</code>, then paste it back together with the <code>0</code> character at the places I want. But I'm hoping to avoid that.</p>
<p>While writing this question I have found the answer, which I will post below.</p>
|
<python><function><python-re><backreference>
|
2025-01-24 16:05:44
| 1
| 1,602
|
Ed Avis
|
79,384,811
| 10,461,632
|
Issues with Axes (matplotlib) inheritance
|
<p>I'm trying to mimic the <code>plt.subplots()</code> behavior, but with custom classes. Rather than return <code>Axes</code> from <code>subplots()</code>, I would like to return <code>CustomAxes</code>. I've looked at the source code and don't understand why I am getting the traceback error below.</p>
<p>I'm able to accomplish what I want without inheriting from <code>Axes</code>, but I think long term I would like to inherit from <code>Axes</code>. If you think this is ridiculous and there's a better way, let me know!</p>
<p>Code:</p>
<pre><code>from matplotlib.figure import Figure
from matplotlib.axes import Axes
class CustomAxes(Axes):
def __init__(self, fig, *args, **kwargs):
super().__init__(fig, *args, **kwargs)
def create_plot(self, i):
self.plot([1, 2, 3], [1, 2, 3])
self.set_title(f'Title {i}')
class CustomFigure(Figure):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def subplots(self, *args, **kwargs):
axes = super().subplots(*args, **kwargs)
axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()]
return axes
fig, axes = CustomFigure().subplots(nrows=2, ncols=2)
for i, ax in enumerate(axes, start=1):
ax.create_plot(i=i)
fig.tight_layout()
fig
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[60], line 23
20 axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()]
21 return axes
---> 23 fig, axes = CustomFigure().subplots(nrows=2, ncols=2)
24 for i, ax in enumerate(axes, start=1):
25 ax.create_plot(i=i)
Cell In[60], line 20
18 def subplots(self, *args, **kwargs):
19 axes = super().subplots(*args, **kwargs)
---> 20 axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()]
21 return axes
Cell In[60], line 20
18 def subplots(self, *args, **kwargs):
19 axes = super().subplots(*args, **kwargs)
---> 20 axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()]
21 return axes
Cell In[60], line 7
6 def __init__(self, fig, *args, **kwargs):
----> 7 super().__init__(fig, *args, **kwargs)
File ~/repos/test/venv/lib/python3.11/site-packages/matplotlib/axes/_base.py:656, in _AxesBase.__init__(self, fig, facecolor, frameon, sharex, sharey, label, xscale, yscale, box_aspect, forward_navigation_events, *args, **kwargs)
654 else:
655 self._position = self._originalPosition = mtransforms.Bbox.unit()
--> 656 subplotspec = SubplotSpec._from_subplot_args(fig, args)
657 if self._position.width < 0 or self._position.height < 0:
658 raise ValueError('Width and height specified must be non-negative')
File ~/repos/test/venv/lib/python3.11/site-packages/matplotlib/gridspec.py:576, in SubplotSpec._from_subplot_args(figure, args)
574 rows, cols, num = args
575 else:
--> 576 raise _api.nargs_error("subplot", takes="1 or 3", given=len(args))
578 gs = GridSpec._check_gridspec_exists(figure, rows, cols)
579 if gs is None:
TypeError: subplot() takes 1 or 3 positional arguments but 0 were given
</code></pre>
<p>Working code without inheritance:</p>
<pre><code>from matplotlib.figure import Figure
class CustomAxes():
def __init__(self, ax):
self.ax = ax
def create_plot(self, i):
self.ax.plot([1, 2, 3], [1, 2, 3])
self.ax.set_title(f'Title {i}')
class CustomFigure(Figure):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def subplots(self, *args, **kwargs):
axes = super().subplots(*args, **kwargs)
axes = [CustomAxes(ax) for ax in axes.flatten()]
return self, axes
fig, axes = CustomFigure().subplots(nrows=2, ncols=2)
for i, ax in enumerate(axes, start=1):
ax.create_plot(i=i)
fig.tight_layout()
fig
</code></pre>
|
<python><python-3.x><matplotlib>
|
2025-01-24 15:22:22
| 1
| 788
|
Simon1
|
79,384,474
| 4,436,517
|
Polars - Get column value at another column's min / max value
|
<p>Given the following polars dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.DataFrame({'A': ['a0', 'a0', 'a1', 'a1'],
'B': ['b1', 'b2', 'b1', 'b2'],
'x': [0, 10, 5, 1]})
</code></pre>
<pre><code>shape: (4, 3)
┌─────┬─────┬─────┐
│ A ┆ B ┆ x │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═════╪═════╪═════╡
│ a0 ┆ b1 ┆ 0 │
│ a0 ┆ b2 ┆ 10 │
│ a1 ┆ b1 ┆ 5 │
│ a1 ┆ b2 ┆ 1 │
└─────┴─────┴─────┘
</code></pre>
<p>I want to add a column <code>y</code> which groups by <code>A</code> and selects the value from <code>B</code> with the maximum corresponding <code>x</code>. The following dataframe should be the result:</p>
<pre><code>┌─────┬─────┬─────┬─────┐
│ A ┆ B ┆ x ┆ y │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ str │
╞═════╪═════╪═════╪═════╡
│ a0 ┆ b1 ┆ 0 ┆ b2 │
│ a0 ┆ b2 ┆ 10 ┆ b2 │
│ a1 ┆ b1 ┆ 5 ┆ b1 │
│ a1 ┆ b2 ┆ 1 ┆ b1 │
└─────┴─────┴─────┴─────┘
</code></pre>
<p>I've tried various versions of <code>df.with_columns(y=pl.col('B').?.over('A'))</code> without any luck.</p>
|
<python><dataframe><python-polars>
|
2025-01-24 13:15:46
| 3
| 1,159
|
rindis
|
79,384,472
| 2,355,176
|
Puppeteer not working with Django Viewset
|
<p>I am trying to write a Django REST endpoint which will convert HTML content to PDF and then return a streaming file response to download the report. For this purpose, I am using Puppeteer (via pyppeteer), which works fine outside of Django (e.g. for testing purposes). A minimal example of the download view follows.</p>
<pre><code>import asyncio
from pyppeteer import launch
from django.http import HttpResponse
from rest_framework.viewsets import ViewSet
from rest_framework.permissions import IsAuthenticated
from rest_framework_simplejwt.authentication import JWTAuthentication
class DownloadReport (ViewSet):
permission_classes = [IsAuthenticated]
authentication_classes = [JWTAuthentication]
async def html_to_pdf(self, html):
browser = await launch(
headless=True,
args=['--no-sandbox', '--disable-setuid-sandbox']
)
page = await browser.newPage()
await page.setContent(html)
await page.setViewport({
'width': 1920,
'height': 1080,
'deviceScaleFactor': 1
})
pdf = await page.pdf({
'format': 'A3',
'printBackground': True,
'landscape': True,
'scale': 1
})
await browser.close()
return pdf
def retrieve(self, request):
content = "<h1>Hurrah PDF conversion successfull<h1>"
content = asyncio.run(self.html_to_pdf(content))
response = HttpResponse(content, content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="report.pdf"'
return response
</code></pre>
<p>The problem is that in Django the browser is never launched; it throws the following exception:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\django\views\decorators\csrf.py", line 65, in _view_wrapper
return view_func(request, *args, **kwargs)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\rest_framework\viewsets.py", line 124, in view
return self.dispatch(request, *args, **kwargs)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\rest_framework\views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\rest_framework\views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\rest_framework\views.py", line 480, in raise_uncaught_exception
raise exc
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\rest_framework\views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "C:\data\Office\Projects\Dockerized_DjangoApp\analytics\views.py", line 172, in retrieve
content = asyncio.run(self.html_to_pdf(content))
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "C:\data\Office\Projects\Dockerized_DjangoApp\analytics\views.py", line 29, in html_to_pdf
browser = await launch(
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\pyppeteer\launcher.py", line 307, in launch
return await Launcher(options, **kwargs).launch()
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\site-packages\pyppeteer\launcher.py", line 159, in launch
signal.signal(signal.SIGINT, _close_process)
File "C:\Users\Zain ul abdin\AppData\Local\Programs\Python\Python310\lib\signal.py", line 56, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread of the main interpreter
</code></pre>
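<p>Based on the traceback, one thing I intend to try is disabling pyppeteer's signal handlers at launch, since the failure happens when it registers a SIGINT handler from a non-main thread (I have not yet confirmed that these options solve it):</p>
<pre><code>browser = await launch(
    headless=True,
    args=['--no-sandbox', '--disable-setuid-sandbox'],
    # Skip signal-handler registration, which only works in the main thread
    handleSIGINT=False,
    handleSIGTERM=False,
    handleSIGHUP=False,
)
</code></pre>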
<p>How can I resolve this issue and actually get a PDF in the response without this signal-related error? Kindly do not recommend wkhtmltopdf or similar tools, as they do not render modern JS-based markup correctly; my HTML contains Tailwind JS integrations.</p>
|
<python><django><puppeteer><python-asyncio>
|
2025-01-24 13:13:49
| 0
| 2,760
|
Zain Ul Abidin
|
79,384,448
| 3,702,377
|
Issue with downloading file via Browser-use
|
<p>I am writing a web automation to download a file with the <code>browser-use</code> tool, which uses an LLM as an AI agent. Downloading files isn't supported by browser-use as built-in functionality, which is why I have fairly complex code to do it. Sometimes it works well and downloads the file, but sometimes it doesn't.</p>
<p>Here is the log related to the failure time:</p>
<pre><code>INFO [browser_use] BrowserUse logging setup complete with level info
INFO [root] Anonymized telemetry enabled. See https://github.com/browser-use/browser-use for more information.
contexts initial: 0
INFO [agent] 🚀 Starting task: navigate to https://file-examples.com/index.php/sample-documents-download/sample-doc-download/ and download the first doc
INFO [agent]
📍 Step 1
contexts after 5 sec: 1
INFO [agent] 👍 Eval: Success - Looking at a blank page.
INFO [agent] 🧠 Memory: Need to navigate to a specific URL.
INFO [agent] 🎯 Next goal: Navigate to the specified URL to download the document.
INFO [agent] 🛠️ Action 1/1: {"go_to_url":{"url":"https://file-examples.com/index.php/sample-documents-download/sample-doc-download/"}}
INFO [controller] 🔗 Navigated to https://file-examples.com/index.php/sample-documents-download/sample-doc-download/
INFO [agent]
📍 Step 2
INFO [agent] 👍 Eval: Success - Navigated to the site and located download links for document files.
INFO [agent] 🧠 Memory: Ready to download the first DOC file.
INFO [agent] 🎯 Next goal: Download the first DOC file by clicking the download link.
INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":12}}
INFO [controller] 🖱️ Clicked button with index 12: Download sample DOC file
INFO [agent]
📍 Step 3
INFO [agent] 👍 Eval: Success - The download link was clicked and is now redirecting to the file download page.
INFO [agent] 🧠 Memory: The file is being downloaded from the redirect page.
INFO [agent] 🎯 Next goal: Verify and complete the task since the download process has started.
INFO [agent] 🛠️ Action 1/1: {"done":{"text":"The first DOC file has been successfully downloaded from the link: https://file-examples.com/index.php/sample-documents-download/sample-doc-download/"}}
INFO [agent] 📄 Result: The first DOC file has been successfully downloaded from the link: https://file-examples.com/index.php/sample-documents-download/sample-doc-download/
INFO [agent] ✅ Task completed successfully
INFO [agent] Created GIF at agent_history.gif
No files were downloaded during this session.
Press Enter to close...
</code></pre>
<p>And here's the code:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
from pathlib import Path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import asyncio
from langchain_openai import ChatOpenAI
from typing import Dict, List
from browser_use import Agent, Controller
from browser_use.browser.browser import Browser, BrowserConfig
from browser_use.browser.context import BrowserContext
# Initialize controller first
browser = Browser(config=BrowserConfig(headless=False))
controller = Controller()
# Track downloads
downloaded_files: List[str] = []
async def handle_download(download):
# Create downloads directory if it doesn't exist
downloads_dir = Path('./downloads')
downloads_dir.mkdir(exist_ok=True)
# Get original download path
original_path = await download.path()
if original_path:
# Create new path in downloads directory
new_path = downloads_dir / os.path.basename(original_path)
# Move the file to downloads directory
os.rename(original_path, new_path)
# Add the new path to downloaded files list
downloaded_files.append(str(new_path))
print(f"Downloaded and moved to: {new_path}")
@controller.action(
'Upload file - the file name is inside the function - you only need to call this with the correct index',
requires_browser=True,
)
async def upload_file(index: int, browser: BrowserContext):
element = await browser.get_element_by_index(index)
my_file = Path.cwd() / 'examples/test_cv.txt'
if not element:
raise Exception(f'Element with index {index} not found')
await element.set_input_files(str(my_file.absolute()))
return f'Uploaded file to index {index}'
@controller.action('Close file dialog', requires_browser=True)
async def close_file_dialog(browser: BrowserContext):
page = await browser.get_current_page()
await page.keyboard.press('Escape')
def handle_page(new_page):
print("New page created!")
new_page.on(
"download", lambda download: asyncio.create_task(handle_download(download))
)
async def print_contexts_after_delay(playwright_browser):
await asyncio.sleep(5)
if (len(playwright_browser.contexts) < 1):
raise Exception('No contexts found')
# Set up the download handler at the Playwright browser level
playwright_browser.contexts[0].on("page", handle_page)
print('contexts after 5 sec:', len(playwright_browser.contexts))
async def main():
task = "navigate to https://file-examples.com/index.php/sample-documents-download/sample-doc-download/ and download the first doc"
model = ChatOpenAI(model='gpt-4o')
agent = Agent(
task=task,
llm=model,
controller=controller,
browser=browser,
)
# Get the underlying Playwright browser instance
playwright_browser = await browser.get_playwright_browser()
print('contexts initial:', len(playwright_browser.contexts))
# Create task for delayed context printing
asyncio.create_task(print_contexts_after_delay(playwright_browser))
await agent.run()
history_file_path = 'AgentHistoryList.json'
agent.save_history(file_path=history_file_path)
await browser.close()
# Print downloaded files
if downloaded_files:
print("\nDownloaded files:")
for file_path in downloaded_files:
print(file_path)
print(f"- {os.path.basename(file_path)}")
else:
print("\nNo files were downloaded during this session.")
input('Press Enter to close...')
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
<p>I guess the issue is related to concurrency and async actions. Or maybe related to the <code>handle_download</code> method.</p>
<p>Any help would be greatly appreciated.</p>
|
<python><python-asyncio><playwright><playwright-python><browser-use>
|
2025-01-24 13:05:18
| 1
| 35,654
|
Benyamin Jafari
|
79,384,404
| 451,878
|
Prepared query with Jinja and BigQuery
|
<p>For now, I use Python with SQL templating (Jinja2) against the BigQuery API (not the SDK), plus FastAPI.</p>
<p>The queries are generated from API parameters into SQL code. Those queries are then sent to BigQuery.</p>
<p>To prevent SQL injection, I tried the JinjaSQL module, but I can't send the prepared query directly to BigQuery; I must convert it to "pure SQL" first, so it's not useful...</p>
<p>Do you have an idea how to prepare/secure a query built from SQL in a Jinja template?</p>
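<p>To make it concrete, what I'm aiming for is to keep Jinja only for the query structure and pass user-supplied values through BigQuery named query parameters in the REST payload, roughly like this (the names and values here are made up):</p>
<pre><code># Sketch: Jinja renders the structural SQL; user input goes into queryParameters.
payload = {
    "query": rendered_sql,  # e.g. "SELECT * FROM `proj.ds.table` WHERE country = @country"
    "useLegacySql": False,
    "parameterMode": "NAMED",
    "queryParameters": [
        {
            "name": "country",
            "parameterType": {"type": "STRING"},
            "parameterValue": {"value": country_value},
        }
    ],
}
</code></pre>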
<p>Thanx</p>
|
<python><google-bigquery><jinja2><sql-injection>
|
2025-01-24 12:51:17
| 0
| 1,481
|
James
|
79,384,228
| 8,663,643
|
Batch insert data using psycopg2 vs psycopg3
|
<p>Currently I am inserting into a Postgres database using psycopg2. The data is large and the write frequency is high, so my database has WAL disabled and a few other optimizations for faster writes.</p>
<p>When I use <code>psycopg2</code> with <code>execute_values</code>, I am able to write a batch of 1000 rows in 0.1-0.15 seconds.</p>
<pre class="lang-py prettyprint-override"><code>from psycopg2.extras import execute_values
self.engine = create_engine(f'postgresql+psycopg2://postgres:password@localhost/postgres', pool_size=DB_POOL_SIZE,max_overflow=20)
def insert_data_todb(self, table_name, batch_data):
try:
t1 = time.perf_counter()
insert_sql = f"""INSERT INTO {table_name} ({self._market_snapshot_columns_str}) VALUES %s;"""
with self.engine.connect() as conn, conn.connection.cursor() as cur:
execute_values(cur, insert_sql, batch_data)
t2 = time.perf_counter()
logger.info(f"Inserted {len(batch_data)} records in {t2 - t1} seconds")
except Exception as ex:
logger.error(f"Error inserting batch data into {table_name}:")
logger.exception(ex)
</code></pre>
<p>I uninstalled psycopg2, installed psycopg 3.2, and used psycopg3's <code>executemany</code> function like this:</p>
<pre class="lang-py prettyprint-override"><code>import psycopg
self.engine = create_engine(f'postgresql+psycopg://postgres:password@localhost/postgres', pool_size=DB_POOL_SIZE,max_overflow=20)
def insert_data_todb(self, table_name, batch_data):
try:
t1 = time.perf_counter()
placeholders = ', '.join(['%s'] * len(batch_data[0]))
insert_sql = f"""INSERT INTO {table_name} ({self._market_snapshot_columns_str}) VALUES ({placeholders});""" # stored variable
with self.engine.connect() as conn:
with conn.cursor() as cur:
cur.executemany(insert_sql, batch_data) # Pass the batch data directly
t2 = time.perf_counter()
logger.info(f"Inserted {len(batch_data)} records in {t2 - t1} seconds")
except Exception as ex:
logger.error(f"Error inserting batch data into {table_name}:")
logger.exception(ex)
</code></pre>
<p>My <code>psycopg3</code> code is way slower! It takes 8-20 seconds to insert the same batches.</p>
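<p>For reference, the other variant I'm considering for psycopg 3 is <code>COPY</code> instead of <code>executemany</code>, along these lines (a sketch only, not yet tested against my real table):</p>
<pre class="lang-py prettyprint-override"><code>def insert_data_todb_copy(self, table_name, batch_data):
    # Sketch: stream the whole batch through COPY to avoid per-row round trips.
    copy_sql = f"COPY {table_name} ({self._market_snapshot_columns_str}) FROM STDIN"
    with self.engine.connect() as conn, conn.connection.cursor() as cur:
        with cur.copy(copy_sql) as copy:
            for row in batch_data:
                copy.write_row(row)
        conn.connection.commit()
</code></pre>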
|
<python><postgresql><psycopg2>
|
2025-01-24 11:45:35
| 1
| 2,016
|
Nitesh Tosniwal
|
79,383,889
| 16,869,946
|
Summing columns of Pandas dataframe in a systematic way
|
<p>I have a pandas dataframe which looks like this:</p>
<pre><code>1_2 1_3 1_4 2_3 2_4 3_4
1 5 2 8 2 2
4 3 4 5 8 5
8 8 8 9 3 3
4 3 4 4 8 3
8 0 7 4 2 2
</code></pre>
<p>where the columns are the 4C2 combinations of 1,2,3,4. And I would like to generate 4 new columns <code>f_1, f_2, f_3, f_4</code> where the values of the columns are defined to be</p>
<pre><code>df['f_1'] = df['1_2']+df['1_3']+df['1_4']
df['f_2'] = df['1_2']+df['2_3']+df['2_4']
df['f_3'] = df['1_3']+df['2_3']+df['3_4']
df['f_4'] = df['1_4']+df['2_4']+df['3_4']
</code></pre>
<p>In other words, column <code>f_i</code> is defined to be the sum of the columns <code>i_j</code> and <code>k_i</code>, i.e. every pair column whose name contains <code>i</code>.</p>
<p>So I can brute-force my way through in this case. However, my original dataframe is a lot bigger, with <code>20C2 = 190</code> columns instead, so writing the sums out by hand wouldn't work.</p>
<p>So the desired outcome looks like</p>
<pre><code>1_2 1_3 1_4 2_3 2_4 3_4 f_1 f_2 f_3 f_4
1 5 2 8 2 2 8 11 15 6
4 3 4 5 8 5 11 17 13 17
8 8 8 9 3 3 24 20 20 14
4 3 4 4 8 3 11 16 10 15
8 0 7 4 2 2 15 14 6 11
</code></pre>
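<p>The most direct generalisation of the brute force I can come up with is building the column lists programmatically, as shown below, but I was hoping for something more vectorised / pandas-native:</p>
<pre><code>n = 4  # 20 for the real data
for i in range(1, n + 1):
    # every pair column whose name contains i, written as "min_max"
    cols = [f"{min(i, j)}_{max(i, j)}" for j in range(1, n + 1) if j != i]
    df[f"f_{i}"] = df[cols].sum(axis=1)
</code></pre>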
<p>Thank you so much.</p>
|
<python><pandas><dataframe><combinations>
|
2025-01-24 09:49:07
| 3
| 592
|
Ishigami
|
79,383,833
| 865,169
|
How do I use Pandas' infer_objects correctly (v. 2.2.3)
|
<p>I try the following example in Pandas 2.2.3:</p>
<pre class="lang-py prettyprint-override"><code>outage_mask = pd.Series(([True]*5 + [False]*5)*5, index=pd.date_range("2025-01-01", freq="1h", periods=50))
[ts for ts in outage_mask.loc[outage_mask.diff().fillna(False)].index]
</code></pre>
<p>This gives me the following warning:</p>
<blockquote>
<p>FutureWarning: Downcasting object dtype arrays on .fillna, .ffill, .bfill is deprecated and will change in a future version. Call result.infer_objects(copy=False) instead. To opt-in to the future behavior, set <code>pd.set_option('future.no_silent_downcasting', True)</code></p>
</blockquote>
<p>I cannot figure out how to correctly apply this <code>infer_objects</code>. I assume the problem is that the output of <code>diff</code> becomes an 'object' dtype due to containing both <code>NaN</code>s and <code>bool</code>s, but for example this does not help:</p>
<pre class="lang-py prettyprint-override"><code>[ts for ts in outage_mask.loc[outage_mask.diff().infer_objects(copy=False).fillna(False)].index]
</code></pre>
<p>I <em>can</em> avoid the warning by this clumsy work-around:</p>
<pre class="lang-py prettyprint-override"><code>[ts for ts in outage_mask.loc[outage_mask.diff().astype(float).fillna(0.).astype(bool)].index]
</code></pre>
<p>but I would like to understand how to apply the solution from the warning correctly. How do I do that?</p>
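<p>For completeness, the two variants I still intend to test are applying <code>infer_objects</code> to the <em>result</em> of <code>fillna</code> (which is how I read the warning's wording) and opting in to the future behaviour globally:</p>
<pre class="lang-py prettyprint-override"><code># Variant 1: infer on the result of fillna, not before it
mask = outage_mask.diff().fillna(False).infer_objects(copy=False)

# Variant 2: opt in to the future behaviour, as the warning suggests
pd.set_option('future.no_silent_downcasting', True)
</code></pre>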
|
<python><pandas>
|
2025-01-24 09:27:40
| 2
| 1,372
|
Thomas Arildsen
|
79,383,692
| 4,442,753
|
How to get start indices of regions of empty intervals?
|
<p>I have sorted start indices (included) and end indices (excluded) of intervals (obtained by using <code>searchsorted</code>), for instance:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# Both arrays are of same size, and sorted.
# Size of arrays is number of intervals.
# Intervals do not overlap.
# interval indices: 0 1 2 3 4 5
interval_start_idxs = np.array([0, 3, 3, 3, 6, 7])
interval_end_excl_idxs = np.array([2, 4, 4, 4, 7, 9])
</code></pre>
<p>An empty interval is identified when</p>
<pre class="lang-py prettyprint-override"><code>interval_start_idxs[interval_idx] == interval_end_excl_idxs[interval_idx]-1
</code></pre>
<p>I would like to identify the start and end of each region where intervals are empty. A region is made of one or several consecutive intervals sharing the same start index and excluded end index.</p>
<p>With previous data, expected result would then be:</p>
<pre class="lang-py prettyprint-override"><code>empty_interval_starts = [1, 4] # start is included
empty_intervals_ends_excl = [4, 5] # end is excluded
</code></pre>
<p>This result is to be understood as:</p>
<ul>
<li>intervals from index 1 to 3, these intervals are a same region of empty intervals</li>
<li>and interval at index 4 is a separate region on its own</li>
</ul>
|
<python><numpy>
|
2025-01-24 08:34:24
| 1
| 1,003
|
pierre_j
|
79,383,586
| 4,382,391
|
langchain / Chroma Process finished with exit code -1073741819 (0xC0000005) with
|
<p>I am stumped by this problem:</p>
<pre class="lang-py prettyprint-override"><code>chunks = []
for path in file_paths: # path is a string filepath to a csv
chunks.extend(self.chunk_data(path))
chunks = filter_complex_metadata(chunks)
# add all relevant documents to chunks
# creates a unique cache for this dataset
cache_dir_name = "local_cache"
if not os.path.exists(cache_dir_name):
os.makedirs(cache_dir_name)
self.vector_store = Chroma.from_documents(documents=chunks,
embedding=FastEmbedEmbeddings(model_name="BAAI/bge-small-en",
cache_dir=cache_dir_name)) # crashes here
</code></pre>
<p>When it gets to the end it throws the following error:</p>
<pre><code>
Fetching 5 files: 0%| | 0/5 [00:00<?, ?it/s]C:\Users\...\venv\Lib\site-packages\huggingface_hub\file_download.py:140: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\...\cache\models--Qdrant--bge-small-en. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
Fetching 5 files: 100%|██████████| 5/5 [00:05<00:00, 1.18s/it]
Process finished with exit code -1073741819 (0xC0000005)
</code></pre>
<p>The weird thing is this code was working fine before. Then I cloned a new repo, reinstalled all dependencies, and now it is not. I am afraid it might be caused by something external to the codebase; however, it is at <code>Chroma.from_documents</code> that it fails.</p>
<p>The issue is that exit code 0xC0000005 is an access violation, as far as I can tell, and this is really hard to debug.</p>
<p>Update: I activated developer mode in Windows; it still says Process finished with exit code -1073741819 (0xC0000005) at the same point, but with no warnings...</p>
|
<python><langchain><access-violation><chromadb>
|
2025-01-24 07:43:15
| 0
| 1,070
|
Null Salad
|
79,383,492
| 48,956
|
How to handle infinity timestamptz?
|
<p>Executing:</p>
<pre><code>await pg_acursor.execute("SELECT * FROM pg_roles;")
await pg_acursor.fetchall()
</code></pre>
<p>...</p>
<pre><code> File "/usr/local/lib/python3.10/site-packages/psycopg/cursor_async.py", line 235, in fetchall
records = self._tx.load_rows(self._pos, self.pgresult.ntuples, self._make_row)
File "psycopg_binary/_psycopg/transform.pyx", line 463, in psycopg_binary._psycopg.Transformer.load_rows
File "psycopg_binary/types/datetime.pyx", line 796, in psycopg_binary._psycopg.TimestamptzLoader.cload
psycopg.DataError: timestamp too large (after year 10K): 'infinity'
</code></pre>
<p>Mysteriously, this doesn't fail on my local machine (using Docker's postgres:latest),
but it does fail on a brand new Amazon RDS Postgres 16.3 instance.
I assume this is the <code>rolvaliduntil timestamptz</code> column, and Amazon sets the value to infinity instead of null for no good reason.</p>
<p>So... any idea how to coerce postgres to handle such dates gracefully (e.g. parse as null).</p>
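<p>The SQL-side workaround I can fall back to is replacing the infinity value in the query itself, but I'd prefer a driver-level fix:</p>
<pre><code>await pg_acursor.execute(
    "SELECT rolname, NULLIF(rolvaliduntil, 'infinity') AS rolvaliduntil FROM pg_roles;"
)
await pg_acursor.fetchall()
</code></pre>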
|
<python><postgresql><psycopg2><psycopg3>
|
2025-01-24 07:03:24
| 1
| 15,918
|
user48956
|
79,383,447
| 12,466,687
|
How to convert a plotnine chart to matplotlib in python?
|
<p>I have created a chart using <code>plotnine</code> and want to put this into <code>matplotlib</code> to combine it with other plots but I am unable to convert it into matplotlib form.</p>
<p>Here is the reference <a href="https://nrennie.rbind.io/2024-plotnine-contest/" rel="nofollow noreferrer">article</a> where the author converts such a plot into matplotlib; I have tried something similar, but it didn't work.</p>
<p>Code I have tried:</p>
<p><strong>Data:</strong></p>
<pre><code>import pandas as pd
import plotnine as p9
from plotnine import *
import matplotlib.pyplot as plt
new_data = {
'date': pd.date_range('2022-01-01', periods=11, freq="ME"),
'parent_category': ['Electronics', 'Electronics', 'Fashion', 'Fashion', 'Home Goods', 'Electronics', 'Fashion','Electronics','Electronics','Electronics','Electronics'],
'child_category': ['Smartphones', 'Laptops', 'Shirts', 'Pants', 'Kitchenware','Laptops', 'Shirts', 'Smartphones','PS4','Oven','Vaccum cleaner']
}
new_data = pd.DataFrame(new_data)
</code></pre>
<p><strong>plotnine plot:</strong></p>
<pre><code>P = (ggplot(new_data[new_data['parent_category'] == 'Electronics'], aes(x="date", y="child_category", group="child_category")) +
geom_line(size=1, color="pink") +
geom_point(size=3, color="grey") +
# facet_wrap("parent_category", ncol=1, scales="free_y") +
theme_538() +
theme(axis_text_x=element_text(angle=45, hjust=1),
panel_grid_major=element_blank()
# ,figure_size=(8, 3)
)
)
</code></pre>
<p><a href="https://i.sstatic.net/bZFW6o9U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZFW6o9U.png" alt="enter image description here" /></a></p>
<p>But when I try to put this onto a matplotlib figure, it doesn't show:</p>
<pre><code>fig = P.draw()
fig.set_size_inches(8, 3, forward=True)
ax = plt.gca()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/DdXv1Ez4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdXv1Ez4.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plotnine>
|
2025-01-24 06:39:03
| 1
| 2,357
|
ViSa
|
79,383,240
| 1,232,087
|
Pandas works in terminal of VSCode but fails in code window of VSCode
|
<pre><code>Windows 10 Pro
VSCode latest version
Python 3.12 [selected as python interpreter]
Pandas latest version 2.2.3
Virtual Environment created and pandas installed inside it
</code></pre>
<p><strong>Question</strong>: As shown below, the following code works in the <code>VSCode Terminal</code> but not when run from the code window. What might I be doing wrong, and how can I fix it?</p>
<p><strong>Demo.py</strong>:</p>
<pre><code>import pandas as pd
print(pd.__version__)
df = pd.read_csv('data.csv')
print(df.head())
</code></pre>
<p><strong>Error</strong>:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'pandas'</p>
</blockquote>
<p><strong>Terminal window successfully runs the Demo.py file</strong>:</p>
<pre><code>(.venv) PS C:\PyJunk> python -u "c:\PyJunk\Demo.py"
2.2.3
Name Age City
0 Alice 25 New York
1 Bob 30 Los Angeles
2 Charlie 35 Chicago
</code></pre>
<p><strong>Remarks</strong>:</p>
<ul>
<li>In the code editor, there is no warning on <code>import pandas as pd</code> line. Editor recognizes this declaration and IntelliSense on <code>pd.</code> recognizes all the available commands for pandas</li>
<li>Pandas was installed as <code>(.venv) PS C:\PyJunk>pip3.12 install pandas</code>. The following images show the <code>.venv</code> file structure and the Python interpreter at the bottom of image 2.</li>
<li>I've uploaded this simple project file <a href="https://limewire.com/d/90fa6a5c-f395-4918-8d4c-9643a445b84f#fd5WxgrILa8pP8CWm0dxsTHfru3SzAFHUPFs419ZjPQ" rel="nofollow noreferrer">here</a> - in case anyone wants to see what could be the issue</li>
</ul>
<p><strong>.venv structure and output window</strong>:
<a href="https://i.sstatic.net/A3v4Fq8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A3v4Fq8J.png" alt="enter image description here" /></a></p>
<p><strong>Demo.py runs fine inside the terminal</strong>:
<a href="https://i.sstatic.net/oJ9p931A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJ9p931A.png" alt="enter image description here" /></a></p>
|
<python><pandas><visual-studio-code>
|
2025-01-24 04:39:56
| 0
| 24,239
|
nam
|
79,383,195
| 3,719,167
|
Django: How to Represent and Query Symmetrical Relationships for a Family Tree?
|
<p>I am building a family tree application in Django where I need to represent and query marriages symmetrically. Each marriage should have only one record, and the relationship should include both partners without duplicating data. Here's the relevant model structure:</p>
<pre class="lang-py prettyprint-override"><code>class Person(models.Model):
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
spouses = models.ManyToManyField(
'self', through="Marriage", symmetrical=True, related_name="partners"
)
class Marriage(models.Model):
person1 = models.ForeignKey(Person, on_delete=models.CASCADE, related_name="marriages_as_person1")
person2 = models.ForeignKey(Person, on_delete=models.CASCADE, related_name="marriages_as_person2")
start_date = models.DateField(null=True, blank=True)
end_date = models.DateField(null=True, blank=True)
</code></pre>
<p>I want to:</p>
<ol>
<li>Ensure both partners appear as spouses for each other symmetrically.</li>
<li>Avoid duplicate entries for the same marriage.</li>
<li>Efficiently query all spouses of a person.</li>
</ol>
<p>Here’s the code I’m using to query spouses:</p>
<pre class="lang-py prettyprint-override"><code># Query spouses for a person
p1 = Person.objects.create()
p2 = Person.objects.create()
Marriage.objects.create(person1=p1, person2=p2)
p1.spouses.all() # Returns list containing p2
p2.spouses.all() # Returns empty list
</code></pre>
<p>However, I’m facing challenges:</p>
<ol>
<li>If <code>p1</code>'s spouses are queried, the result should contain <code>p2</code>, and if <code>p2</code>'s spouses are queried, it should contain <code>p1</code>.</li>
<li>The two queries are not symmetrical.</li>
</ol>
<h3>Questions:</h3>
<ol>
<li>Is my model structure correct for representing marriages symmetrically? If not, what improvements should I make?</li>
<li>How can I efficiently query all spouses of a person in a database-optimized way while ensuring symmetry?</li>
</ol>
<p>My use case is to return a list of Person objects, each having <code>pids</code> (partner IDs) as a list of IDs, like below, using DRF:</p>
<pre class="lang-py prettyprint-override"><code>[
{
id: 1,
full_name: 'John',
pids: [2]
},
{
id: 2,
full_name: 'Mary',
pids: [1]
}
]
</code></pre>
<p>Current serializer code is</p>
<pre class="lang-py prettyprint-override"><code>def get_pids(self, obj):
"""
Returns a list of IDs of all the spouses of the person, ensuring bidirectional relationships.
"""
partner_ids = set(obj.spouses.values_list('id', flat=True))
# Ensure bidirectional relationships
for spouse in obj.spouses.all():
partner_ids.update(spouse.spouses.values_list('id', flat=True))
return list(partner_ids)
</code></pre>
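<p>For reference, a minimal sketch of one option I have considered, assuming the models above (the helper name is hypothetical): query the <code>Marriage</code> table from both sides instead of relying on the symmetrical M2M field.</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import Q

def get_spouse_ids(person):
    # Hypothetical helper: collect partner ids from both sides of the Marriage table
    marriages = Marriage.objects.filter(Q(person1=person) | Q(person2=person))
    ids = set()
    for m in marriages:
        ids.add(m.person2_id if m.person1_id == person.id else m.person1_id)
    return list(ids)
</code></pre>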
|
<python><django><many-to-many><django-orm>
|
2025-01-24 03:59:22
| 1
| 9,922
|
Anuj TBE
|
79,383,056
| 11,062,613
|
How to optimize Delta Lake datasets in Polars (sorting, compaction, cleanup)?
|
<p>I'm planning to use Polars with Delta Lake to manage large, mutable datasets on my laptop. I've encountered two issues:</p>
<ol>
<li><p>Dataset is not sorted after merge:
When I use write_delta() in "merge" mode, the resulting dataset is not sorted.
My current workaround is to manually sort and overwrite the Delta table, but this is obviously inefficient for large datasets.
Is there a way to enforce sorting directly during the merge operation in Polars?</p>
</li>
<li><p>Many unused files:
When I overwrite a Delta Lake dataset, old and unused files are not automatically deleted. This requires running "VACUUM" manually to clean up.
Is there a way to automate this cleanup process during write_delta() operations?</p>
</li>
</ol>
<p>Any advice or workarounds would be greatly appreciated!</p>
<pre><code>import polars as pl
# Define the Delta Lake path
delta_path = "/PathToDataset/dataset.delta"
df_old = pl.from_repr(
"""
┌───────┬─────┬───────┐
│ group ┆ id ┆ val_1 │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ f64 │
╞═══════╪═════╪═══════╡
│ A ┆ 0 ┆ 1.0 │
│ A ┆ 1 ┆ 2.0 │
│ A ┆ 2 ┆ 3.0 │
│ B ┆ 0 ┆ 11.0 │
│ B ┆ 1 ┆ 12.0 │
│ B ┆ 2 ┆ 13.0 │
│ C ┆ 0 ┆ 21.0 │
│ C ┆ 1 ┆ 22.0 │
│ C ┆ 2 ┆ 23.0 │
└───────┴─────┴───────┘
""")
df_new = pl.from_repr(
"""
┌───────┬─────┬───────┐
│ group ┆ id ┆ val_1 │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ f64 │
╞═══════╪═════╪═══════╡
│ A ┆ 1 ┆ 99.0 │
│ X ┆ 0 ┆ 99.0 │
└───────┴─────┴───────┘
""")
print("Saving the initial dataset to Delta Lake...")
df_old.write_delta(delta_path, mode="overwrite")
print("Updating the Delta Lake dataset with new data...")
df_new.write_delta(
delta_path,
mode="merge", # Merge mode for upserts
delta_merge_options={
"predicate": "s.group = t.group AND s.id = t.id", # Match on 'group' and 'id'
"source_alias": "s",
"target_alias": "t",
}).when_matched_update_all().when_not_matched_insert_all().execute()
print("Update complete.")
df = pl.scan_delta(delta_path).collect()
print(df)
# ┌───────┬─────┬───────┐
# │ group ┆ id ┆ val_1 │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ f64 │
# ╞═══════╪═════╪═══════╡
# │ X ┆ 0 ┆ 99.0 │
# │ A ┆ 1 ┆ 99.0 │
# │ A ┆ 0 ┆ 1.0 │
# │ A ┆ 2 ┆ 3.0 │
# │ B ┆ 0 ┆ 11.0 │
# │ B ┆ 1 ┆ 12.0 │
# │ B ┆ 2 ┆ 13.0 │
# │ C ┆ 0 ┆ 21.0 │
# │ C ┆ 1 ┆ 22.0 │
# │ C ┆ 2 ┆ 23.0 │
# └───────┴─────┴───────┘
</code></pre>
<p>Edit: Here is my current workaround</p>
<ul>
<li>write table using Polars</li>
<li>sort table using deltalake</li>
<li>vacuum using deltalake</li>
</ul>
<p>The function is made for datasets of size/memory ratio >20% on a single machine.</p>
<pre><code>def update_delta(
delta_path: str,
new_data: pl.DataFrame,
index: list[str],
mode: str = "merge",
sort: bool = True,
vacuum: bool = True,
vacuum_retention_hours: int = 0,
verbose: bool = False) -> None:
"""
Upsert or overwrite a dataset into a Delta table using polars and deltalake.
The function is made for datasets of size/memory ratio >20% on a single machine.
Parameters:
delta_path (str): Path to the Delta Lake dataset.
new_data (pl.DataFrame): New dataset to upsert into the Delta table.
index (list[str]): List of column names used as the primary key for matching rows during the merge.
mode (str, optional): Operation mode, either "merge" or "overwrite". Defaults to "merge".
sort (bool, optional): Whether to sort the Delta table by the index after the operation. Defaults to True.
vacuum (bool, optional): Whether to perform a VACUUM operation to clean up unused files. Defaults to True.
vacuum_retention_hours (int, optional): Retention period for the VACUUM operation. Defaults to 0 hour.
verbose (bool, optional): Whether to print logs. Defaults to False.
Returns:
None.
"""
try:
# Check if the Delta table already exists
table_exists = DeltaTable.is_deltatable(delta_path)
if not table_exists:
if verbose:
print(f"Creating a new Delta table at {delta_path}.")
new_data.write_delta(delta_path, mode="overwrite")
else:
if mode == "merge":
if verbose:
print(f"Performing upsert into Delta table at {delta_path}...")
# Generate the merge predicate dynamically from the index
merge_predicate = " AND ".join([f"s.{col} = t.{col}" for col in index])
# Perform the merge operation
new_data.write_delta(
delta_path,
mode="merge",
delta_merge_options={
"predicate": merge_predicate,
"source_alias": "s",
"target_alias": "t",
},
).when_matched_update_all().when_not_matched_insert_all().execute()
elif mode == "overwrite":
if verbose:
print(f"Overwriting Delta table at {delta_path}...")
new_data.write_delta(delta_path, mode="overwrite")
else:
raise ValueError("Invalid mode. Supported modes are 'merge' and 'overwrite'.")
# Reorder the data using a Z-order curve to improve data skipping
if sort:
if verbose:
print("Performing reordering data using a Z-order curve....")
delta_table = DeltaTable(delta_path)
delta_table.optimize.z_order(index)
# Perform the VACUUM operation to clean up unused files
if vacuum:
if verbose:
print("Performing VACUUM to clean up older unused files...")
delta_table = DeltaTable(delta_path)
delta_table.vacuum(
retention_hours=vacuum_retention_hours,
dry_run=False,
enforce_retention_duration=False,
)
if verbose:
print(f"{mode.capitalize()} operation completed successfully.")
return None
except Exception as e:
print(f"An error occurred: {e}")
raise
</code></pre>
|
<python><parquet><python-polars><delta-lake>
|
2025-01-24 01:53:03
| 1
| 423
|
Olibarer
|
79,382,814
| 15,412,256
|
How to manually set certain dependencies in local dev environment for UV Python dependency manager to ignore?
|
<p>Currently I have the following pyproject.toml:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "sample_project"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"anytree",
"pyvis",
"pandas>=2",
"numpy>=2",
"statsmodels>=0",
"matplotlib>=3",
"seaborn>=0",
"plotly>=5",
"pyarrow>=18",
"polars>=1",
"scipy>=1",
"xgboost>=2",
"lightgbm>=4",
"scikit-learn>=1",
"numba>=0",
"num2words>=0",
"sqlalchemy>=2",
"pygount",
]
[tool.uv]
dev-dependencies = ["ipykernel>=6", "pytest>=8", "ruff>=0", "maturin>=1"]
</code></pre>
<p>but I have a locally-built rust python PyO3 binding package named <code>rusty_crates</code> in my <code>.venv/Lib/site-package</code> folder that I depend on for this project.</p>
<p>How do I tell uv to ignore this "dependency" so that every time I run <code>uv sync</code> the package is not <strong>uninstalled</strong> by uv?</p>
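<p>For what it's worth, my current assumption (which I still need to verify against the uv documentation) is that syncing in "inexact" mode keeps packages that are installed but not declared in the project:</p>
<pre class="lang-bash prettyprint-override"><code># assumption: --inexact tells uv sync not to remove packages that are absent from the lockfile
uv sync --inexact
</code></pre>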
|
<python><pyproject.toml><uv>
|
2025-01-23 22:57:26
| 1
| 649
|
Kevin Li
|
79,382,803
| 13,982,768
|
I am trying to cause race condition for demonstration purposes but fail to fail
|
<p>I am actively trying to produce a race condition and break a calculation for demonstration purposes, but I can't reproduce the problem.</p>
<p>My thought process was to create a counter variable, access it from different threads and async functions (I did not try multiprocessing since it pauses the process) and increase it by one.</p>
<p>Running 3 instances of for loops over range x and increasing the counter by 1 in each iteration, I expected to get a value lower than 3x due to the race condition.</p>
<p>I did not use locks, but I still get 3x each run. Is this due to a GIL update? I have tried Python versions 3.10 / 3.11 / 3.13. What should I do to get a race condition with a simple structure?</p>
<p>My code to try to trigger the race condition:</p>
<pre><code>import threading
import asyncio
def multithreading_race_condition():
counter2 = 0
def increment():
nonlocal counter2
for _ in range(10000):
counter2 = counter2 + 1
threads = [threading.Thread(target=increment) for _ in range(3)]
for t in threads:
t.start()
for t in threads:
t.join()
print(f"Multithreading Final Counter: {counter2}")
async def asyncio_race_condition():
counter3 = 0
async def increment():
nonlocal counter3
for _ in range(10000):
counter3 = counter3 + 1
tasks = [asyncio.create_task(increment()) for _ in range(3)]
await asyncio.gather(*tasks)
print(f"Asyncio Final Counter: {counter3}")
def main():
print("\nMultithreading Example:")
multithreading_race_condition()
print("\nAsyncio Example:")
asyncio.run(asyncio_race_condition())
if __name__ == "__main__":
main()
</code></pre>
<p>my output is</p>
<pre><code>Multithreading Example:
Multithreading Final Counter: 30000
Asyncio Example:
Asyncio Final Counter: 30000
</code></pre>
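<p>For context, here is a sketch of a variant I am considering, which widens the read-modify-write window explicitly (as far as I understand, a tight <code>x = x + 1</code> loop often hides it):</p>
<pre class="lang-py prettyprint-override"><code>import threading

counter = 0

def racy_increment(n=100_000):
    global counter
    for _ in range(n):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write back; another thread may have updated counter in between

threads = [threading.Thread(target=racy_increment) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # frequently less than 300000 because updates are lost between read and write
</code></pre>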
|
<python><python-3.x><multithreading><asynchronous><race-condition>
|
2025-01-23 22:51:57
| 1
| 367
|
Onuralp Arslan
|
79,382,718
| 567,493
|
How can I solve the "unexpected namespace: local" xray issue?
|
<p>I'm running an xray daemon on an ec2-backed ecs cluster, and I'm getting this in the xray daemon's cloudwatch logs:</p>
<blockquote>
<p>warn awsxrayreceiver@v0.78.0/receiver.go:116 X-Ray segment to OT traces conversion failed {"kind": "receiver", "name": "awsxray", "data_type": "traces", "error": "unexpected namespace: local"}</p>
</blockquote>
<p>This is somehow caused by a Python Flask app instrumented with X-Ray. There must be some configuration that I'm missing for this. Maybe I'm also missing a configuration in X-Ray somewhere too?</p>
<p>What environment variable or other configuration change can I make to fix this problem?</p>
<p>In case it is interesting, I do see traces and trace maps despite the warnings.</p>
<hr />
<p>Python code:</p>
<p>requirements:</p>
<pre><code>aws-xray-sdk
</code></pre>
<p>app.py:</p>
<pre><code>from aws_xray_sdk.core import xray_recorder # type: ignore
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware # type: ignore
...
app = Flask(__name__)
...
xray_recorder.configure(service="helloworld")
XRayMiddleware(app, xray_recorder)
</code></pre>
<p>environment variables:</p>
<pre><code>AWS_XRAY_DAEMON_ADDRESS="172.17.0.1:2000"
</code></pre>
<hr />
<p>For the xray daemon, I am using <code>public.ecr.aws/aws-observability/aws-otel-collector:v0.30.0</code> as the image, and no environment variables or command overrides</p>
|
<python><aws-xray>
|
2025-01-23 22:09:46
| 0
| 2,267
|
davidpricedev
|
79,382,572
| 9,191,460
|
Condensing a Python method that does a different comparison depending on the operator passed
|
<p>I am trying to write a method that evaluates a statement, but the operator (>, <, =) is sent by the user. I am wondering if there is an easy way to write a more concise method.</p>
<p>The simplified version of the code is:</p>
<pre><code>def comparsion(val1: int, val2: int, operator: str):
if operator == ">":
if val1 > val2:
return True
elif operator == "=":
if val1 == val2:
return True
elif operator == "<":
if val1 < val2:
return True
</code></pre>
<p>The only thing that changes in each case is the operator; I am wondering if there is a more efficient way to write this.</p>
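<p>One direction I have been looking at (a sketch using the standard <code>operator</code> module; the handling of unknown operators is my own choice):</p>
<pre class="lang-py prettyprint-override"><code>import operator

OPS = {">": operator.gt, "=": operator.eq, "<": operator.lt}

def comparison(val1: int, val2: int, op: str) -> bool:
    # Look up the comparison function for the symbol and apply it
    try:
        return OPS[op](val1, val2)
    except KeyError:
        raise ValueError(f"Unsupported operator: {op}")
</code></pre>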
|
<python>
|
2025-01-23 21:09:04
| 4
| 3,035
|
Rui Nian
|
79,382,453
| 1,850,165
|
subprocess.Popen: how to proper communicate in an interactive way?
|
<p>I'm working on an app that requires a conversational interaction with an external process, simulated here by the terminal calculator <code>bc</code>: I send something to stdin, and I get the response via stdout, or stderr if an error occurs. However, the proc quits when a non-fatal error occurs.</p>
<p>Below is a snippet <code>ztui.py</code> to replicate the issue. Instead of moving on to the next <code>stdin</code> input, the <code>proc</code> just exits unexpectedly after printing to <code>stderr</code>, and the Python program hits a BrokenPipeError when it tries to send/flush the 3rd command, since the pipe has by then been closed.</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE
proc = Popen(["bc"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
proc.stdin.write(b"2+2\n")
proc.stdin.flush()
print(proc.stdout.readline().decode())
proc.stdin.write(b"len()\n")
proc.stdin.flush()
print(proc.stdout.readline().decode())
print("stderr ==>", proc.stderr.readlines(2))
# and here the proc exits, checked using ps aux from another terminal window
proc.stdin.write(b"3+3\n")
proc.stdin.flush()
print(proc.stdout.readline().decode())
</code></pre>
<p>This is the output</p>
<pre class="lang-bash prettyprint-override"><code>fabio: ~/Downloads $ python3.11 --version
Python 3.11.3
fabio: ~/Downloads $ python3.11 ztui.py
4
stderr ==> [b'\n', b'Runtime error: undefined function: len()\n']
Traceback (most recent call last):
File "/Users/fabio/Downloads/ztui.py", line 15, in <module>
proc.stdin.flush()
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>Needless to say, the actual <code>bc</code> program does not exit when you pass <code>len()</code>: it just shows the error and gets ready for the new input</p>
<pre class="lang-bash prettyprint-override"><code>fabio: ~/Downloads $ bc
>>> 2*2
4
>>> len()
Runtime error: undefined function: len()
0: (main)
>>> 3*3
9
>>> quit
</code></pre>
<p>How can I troubleshoot that? I had initially looked into the <code>communicate()</code> command, but that is for a one-off request, and I need to continuously send and receive commands and responses.</p>
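<p>One idea I have been sketching (not verified against this particular <code>bc</code> build) is to give the child process a pseudo-terminal, since some programs change their error handling when stdin/stdout are pipes rather than a terminal:</p>
<pre class="lang-py prettyprint-override"><code>import os
import pty
import subprocess

# Attach bc to a pty so it believes it is interactive; note the input echo
# will be mixed into what we read back from the master side.
master, slave = pty.openpty()
proc = subprocess.Popen(["bc"], stdin=slave, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)

os.write(master, b"2+2\n")
print(os.read(master, 1024).decode())

os.write(master, b"len()\n")
print(os.read(master, 1024).decode())  # the error text should arrive here instead of killing bc

os.write(master, b"3+3\n")
print(os.read(master, 1024).decode())
</code></pre>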
|
<python><subprocess><popen>
|
2025-01-23 20:13:57
| 2
| 374
|
fabiog1901
|
79,382,350
| 10,319,707
|
How can I make a task dependent on another, but also only run at a certain time?
|
<p>I have two tasks.</p>
<ol>
<li>T05:30 - I wish for this to run at 05:30 every day.</li>
<li>T08:30 - I wish for this to run at 08:30 every day, but only if T05:30 has succeeded today.</li>
</ol>
<p>Can this be achieved in Airflow? It appears that dependencies (i.e. run T08:30 only if T05:30 has succeeded today) are a task-only concept but schedules (05:30 every day and 08:30 every day) are a DAG-only concept.</p>
<p>I could hack this by checking variables or something external, but I'm hoping that there is an idiomatic solution to this.</p>
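<p>One idiomatic option I have been considering (a sketch only; the DAG and task ids and the three-hour delta are placeholders): keep the two schedules in two DAGs and start the 08:30 DAG with an <code>ExternalTaskSensor</code> that waits for today's 05:30 run.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import timedelta
from airflow.sensors.external_task import ExternalTaskSensor

# Inside the 08:30 DAG definition
wait_for_t0530 = ExternalTaskSensor(
    task_id="wait_for_t0530",
    external_dag_id="t0530_dag",          # hypothetical id of the 05:30 DAG
    external_task_id="t0530_task",        # hypothetical id of the task to wait for
    execution_delta=timedelta(hours=3),   # the 08:30 run looks back at the 05:30 run
    mode="reschedule",
    timeout=60 * 60,
)
</code></pre>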
|
<python><airflow><task><scheduled-tasks><directed-acyclic-graphs>
|
2025-01-23 19:30:05
| 1
| 1,746
|
J. Mini
|
79,382,089
| 12,466,687
|
How to get rid of extra row spaces from a facet plot with multiple Categories and sub categories in plotnine?
|
<p>I have created a <code>facet</code> plot, as I am not sure how to do this without facets, but I am open to alternative ways of doing this.</p>
<p>I have a column with <code>parent category</code> and another column with <code>sub categories</code>.</p>
<p><strong>Dummy Data:</strong></p>
<pre><code>import pandas as pd
new_data = {
'date': pd.date_range('2022-01-01', periods=11, freq="ME"),
'parent_category': ['Electronics', 'Electronics', 'Fashion', 'Fashion', 'Home Goods', 'Electronics', 'Fashion','Electronics','Electronics','Electronics','Electronics'],
'child_category': ['Smartphones', 'Laptops', 'Shirts', 'Pants', 'Kitchenware','Laptops', 'Shirts', 'Smartphones','PS4','Oven','Vaccum cleaner']
}
new_data = pd.DataFrame(new_data)
</code></pre>
<p><strong>Plot:</strong></p>
<pre><code>import plotnine as p9
from plotnine import *
(ggplot(new_data, aes(x="date", y="child_category", group="child_category")) +
geom_line(size=1, color="pink") +
geom_point(size=3, color="grey") +
facet_wrap("parent_category", ncol=1, scales="free_y") +
theme_538() +
theme(axis_text_x=element_text(angle=45, hjust=1),
panel_grid_major=element_blank(),
figure_size=(8, 6))
)
</code></pre>
<p><a href="https://i.sstatic.net/FeKH5wVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FeKH5wVo.png" alt="enter image description here" /></a></p>
<p><strong>Issue:</strong></p>
<p>Now, when I plot this using <code>facet</code>, it creates facets of the same height for each parent category even when some parent categories do not have the same number of child categories. This creates unnecessary blank rows in the plot for facets with fewer child categories.</p>
<p>What I would prefer, and have not been able to do: no blank row spacing in facets/categorised sections when a parent has fewer sub-categories.</p>
<p>I would appreciate any help or suggestions:</p>
<p><strong>Example plot shown below</strong>: No blank spaces in categories with less subcategories.</p>
<p><a href="https://i.sstatic.net/6VAfZZBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6VAfZZBM.png" alt="enter image description here" /></a>
from <a href="https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a587/4816263/65745148a4d9/abun007498.f4.jpg" rel="nofollow noreferrer">link</a></p>
|
<python><plotnine>
|
2025-01-23 17:47:05
| 1
| 2,357
|
ViSa
|
79,381,979
| 11,895,146
|
How to avoid closed wait sessions that come from huge numbers of HTTP requests?
|
<p>We run thousands of Python scripts on our RHEL machines that open and close socket connections on port 8088. As a result, we are facing a high volume of HTTP requests.</p>
<p>Here is very simple example of one of the scripts:</p>
<pre><code>import socket
import requests
def get_yarn_details(state='RUNNING'):
state_suffix = '' if state == '' else "?states=" + state
yarn_apps = "http://{0}:8088/ws/v1/cluster/apps" + state_suffix
local_fqdn = socket.getfqdn(socket.gethostname())
yarn_apps_url = yarn_apps.format(local_fqdn)
try:
response = requests.get(yarn_apps_url)
        response.raise_for_status()
apps = response.json().get('apps', {}).get('app', [])
for app in apps:
print(app['id'])
except requests.exceptions.RequestException as e:
print(f"Error fetching YARN details: {e}")
# Example usage
get_yarn_details()
</code></pre>
<p>The problem is that, from time to time, we see a large number of CLOSE_WAIT sessions, and it seems that the high volume of HTTP requests is the root cause.
For example, when I count the CLOSE_WAIT sessions (on the Resource Manager machine), I get the following result:</p>
<pre><code>netstat -tn | grep ':8088' | grep CLOSE_WAIT | wc -l
3945
</code></pre>
<p>As we can see above, we have 3,945 connections in CLOSE_WAIT.</p>
<p>and the list looks like this</p>
<pre><code>netstat -tn | grep ':8088' | grep CLOSE_WAIT
tcp 192 0 85.3.45.239:8088 85.3.45.240:61614 CLOSE_WAIT
tcp 187 0 85.3.45.239:8088 85.3.45.240:12594 CLOSE_WAIT
tcp 195 0 85.3.45.239:8088 85.3.45.239:25532 CLOSE_WAIT
.
.
.
.
.
.
.
</code></pre>
<p>So we are in trouble, and we would appreciate any advice from the members here on what we can do to avoid CLOSE_WAIT sessions caused by this huge number of HTTP requests.</p>
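<p>For reference, a sketch of the first mitigation we are thinking of trying (assuming we stay with <code>requests</code>): reuse one session per script and close the response explicitly, so finished requests release their sockets promptly.</p>
<pre class="lang-py prettyprint-override"><code>import requests

def get_yarn_apps(url):
    # One session per script pools and reuses connections; the context managers
    # ensure both the response and the underlying sockets are released.
    with requests.Session() as session:
        with session.get(url, timeout=10) as response:
            response.raise_for_status()
            return response.json().get('apps', {}).get('app', [])
</code></pre>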
|
<python><http><sockets><tcp><hadoop-yarn>
|
2025-01-23 17:08:28
| 0
| 2,640
|
jessica
|
79,381,751
| 243,031
|
enable socket level logging in uwsgi server
|
<p>The application is running on <code>uwsgi + Django REST framework</code>, and there is an <code>httpd</code> server in front of the <code>uwsgi</code> service.</p>
<p>The <code>httpd</code> server forwards requests to <code>127.0.0.1:9000</code> via <code>ProxyPass /myapi uwsgi://127.0.0.1:9000</code>.</p>
<p>The <code>uwsgi</code> configuration is in an <code>ini</code> file.</p>
<p><strong><code>myapi.ini</code></strong></p>
<pre><code>[uwsgi]
module=my.package.app.wsgi:application
master=True
pidfile=/tmp/uwsgi.pid
vacuum=True
max-requests=5000
socket=:9000
env=my.package.app.settings
processes=1
threads=1
buffer-size=16384
</code></pre>
<p>When we start the service, we get a response when we access port <code>9000</code> directly.</p>
<pre><code>% curl -X GET http://my.domain.com:9000/api/v1/url/ -s -I
HTTP/1.1 200 OK
Content-Type: application/json
Allow: GET, HEAD, OPTIONS
Access-Control-Allow-Methods: GET, PUT, POST, DELETE, OPTIONS
Access-Control-Allow-Credentials: true
X-Frame-Options: DENY
Vary: Cookie
Content-Length: 18160
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Set-Cookie: csrftoken=XXXXXXXXX; Domain=.domain.com; expires=Thu, 22 Jan 2026 13:04:09 GMT; Max-Age=31449600; Path=/; SameSite=Lax
Set-Cookie: sessionid=YYYYYYYYY; expires=Thu, 06 Feb 2025 13:04:09 GMT; HttpOnly; Max-Age=1209600; Path=/; SameSite=Lax
</code></pre>
<p>I can see in the uwsgi server log that it receives the request for the URL.</p>
<pre><code>$ sudo LOG_DIR=/tmp/ CONF_FILE_PATH=/home/user/sandy_conf.json ./myvirtualenv/bin/uwsgi --http-socket :9000 --honour-stdin --ini /home/user/sandy_corp.ini
[uWSGI] getting INI configuration from /home/user/myapi.ini
*** Starting uWSGI 2.0.27 (64bit) on [Thu Jan 23 13:04:01 2025] ***
compiled with version: 8.5.0 20210514 (Red Hat 8.5.0-22) on 22 January 2025 18:28:27
os: Linux-4.18.0-553.5.1.el8_10.x86_64 #1 SMP Tue May 21 03:13:04 EDT 2024
nodename: my.domain.com
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /home/user/virtualenvs
writing pidfile to /tmp/uwsgi.pid
detected binary path: /home/user/virtualenvs/myvirtualenv/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your processes number limit is 30635
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :9000 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.12.1 (main, Feb 21 2024, 00:19:28) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]
Python main interpreter initialized at 0x7f89ba9ba708
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 170384 bytes (166 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7f89ba9ba708 pid: 339122 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 339122)
spawned uWSGI worker 1 (pid: 339129, cores: 1)
DEBUG:my.package.api.authentication.middleware:Adding user user to the current request
DEBUG:my.package.api.authentication.middleware:Calling login() to set the django cookie
[pid: 339129|app: 0|req: 1/1] 10.42.149.46 () {24 vars in 339 bytes} [Thu Jan 23 13:04:09 2025] GET /api/v1/url/ => generated 18160 bytes in 630 msecs (HTTP/1.1 200) 12 headers in 645 bytes (1 switches on core 0)
</code></pre>
<p><code>httpd</code> is configured to validate SSL and runs on <code>443</code>, so we have to send an <code>https</code> request when going through <code>httpd</code>.</p>
<p>That gives a <code>500</code> error.</p>
<pre><code>% curl -X GET --cert ~/.athenz/cert --key ~/.athenz/key https://my.domain.com/myapi/api/v1/url/ -s -I
HTTP/1.1 500 Internal Server Error
Date: Thu, 23 Jan 2025 13:06:34 GMT
P3P: policyref="https://policies.domain.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE LOC GOV"
Content-Length: 2983
Connection: close
Content-Type: text/html; charset=iso-8859-1
</code></pre>
<p>I enabled <code>httpd</code> dumpio logging to see what is going on.</p>
<pre><code>[Thu Jan 23 15:40:37.196107 2025] [ssl:info] [pid 347838:tid 347838] [client 127.0.0.1:59334] AH01964: Connection to child 0 established (server my.domain.com:443)
[Thu Jan 23 15:40:37.196565 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [client 127.0.0.1:59334] mod_dumpio: dumpio_in [init-blocking] 0 readbytes
[Thu Jan 23 15:40:37.196796 2025] [ssl:debug] [pid 347838:tid 347838] ssl_engine_kernel.c(2391): [client 127.0.0.1:59334] AH02043: SSL virtual host for servername my.domain.com found
[Thu Jan 23 15:40:37.515885 2025] [ssl:debug] [pid 347838:tid 347838] ssl_engine_kernel.c(2251): [client 127.0.0.1:59334] AH02041: Protocol: TLSv1.2, Cipher: ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
[Thu Jan 23 15:40:37.516065 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [client 127.0.0.1:59334] mod_dumpio: dumpio_in [getline-blocking] 0 readbytes
[Thu Jan 23 15:40:37.830577 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): 35 bytes
[Thu Jan 23 15:40:37.830677 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): GET /myapi/api/v1/url/ HTTP/1.1
[Thu Jan 23 15:40:37.830761 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [client 127.0.0.1:59334] mod_dumpio: dumpio_in [getline-blocking] 0 readbytes
[Thu Jan 23 15:40:37.830792 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): 48 bytes
[Thu Jan 23 15:40:37.830806 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): Host: my.domain.com
[Thu Jan 23 15:40:37.830821 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [client 127.0.0.1:59334] mod_dumpio: dumpio_in [getline-blocking] 0 readbytes
[Thu Jan 23 15:40:37.830835 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): 24 bytes
[Thu Jan 23 15:40:37.830848 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): User-Agent: curl/8.7.1
[Thu Jan 23 15:40:37.830884 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [client 127.0.0.1:59334] mod_dumpio: dumpio_in [getline-blocking] 0 readbytes
[Thu Jan 23 15:40:37.830899 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): 13 bytes
[Thu Jan 23 15:40:37.830913 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): Accept: */*
[Thu Jan 23 15:40:37.830945 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [client 127.0.0.1:59334] mod_dumpio: dumpio_in [getline-blocking] 0 readbytes
[Thu Jan 23 15:40:37.830960 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT): 2 bytes
[Thu Jan 23 15:40:37.830974 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_in (data-TRANSIENT):
[Thu Jan 23 15:40:37.832282 2025] [ssl:debug] [pid 347838:tid 347838] ssl_engine_kernel.c(415): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH02034: Initial (No.1) HTTPS request received for child 0 (server my.domain.com:443)
[Thu Jan 23 15:40:37.833054 2025] [proxy:trace2] [pid 347838:tid 347838] mod_proxy.c(848): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH03461: attempting to match URI path '/myapi/api/v1/url/' against prefix '/myapi' for proxying
[Thu Jan 23 15:40:37.833129 2025] [proxy:trace1] [pid 347838:tid 347838] mod_proxy.c(969): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH03464: URI path '/myapi/api/v1/url/' matches proxy handler 'proxy:uwsgi://127.0.0.1:9000/api/v1/url/'
[Thu Jan 23 15:40:37.836705 2025] [proxy:trace2] [pid 347838:tid 347838] proxy_util.c(2609): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] uwsgi: found worker uwsgi://127.0.0.1:9000 for uwsgi://127.0.0.1:9000/api/v1/url/
[Thu Jan 23 15:40:37.836797 2025] [proxy:debug] [pid 347838:tid 347838] mod_proxy.c(1465): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH01143: Running scheme uwsgi handler (attempt 0)
[Thu Jan 23 15:40:37.836851 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(2802): AH00942: uwsgi: has acquired connection for (127.0.0.1:9000)
[Thu Jan 23 15:40:37.836876 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(3247): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH00944: connecting uwsgi://127.0.0.1:9000/api/v1/url/ to 127.0.0.1:9000
[Thu Jan 23 15:40:37.837111 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(2910): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH10479: uwsgi: 127.0.0.1 resolved to 127.0.0.1:9000
[Thu Jan 23 15:40:37.837190 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(3455): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH00947: connecting /api/v1/url/ to 127.0.0.1:9000 (127.0.0.1:9000)
[Thu Jan 23 15:40:37.837302 2025] [proxy:trace2] [pid 347838:tid 347838] proxy_util.c(3904): uwsgi: fam 2 socket created for 127.0.0.1:9000 (127.0.0.1:9000)
[Thu Jan 23 15:40:37.837519 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(3961): AH02824: uwsgi: connection established with 127.0.0.1:9000 (127.0.0.1:9000)
[Thu Jan 23 15:40:37.837580 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(4149): AH00962: uwsgi: connection complete to 127.0.0.1:9000 (127.0.0.1)
[Thu Jan 23 15:40:37.837901 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(135): [remote 127.0.0.1:9000] mod_dumpio: dumpio_in [getline-blocking] 0 readbytes
[Thu Jan 23 15:40:41.842664 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(2818): AH00943: uwsgi: has released connection for (127.0.0.1:9000)
[Thu Jan 23 15:40:41.843168 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(164): [client 127.0.0.1:59334] mod_dumpio: dumpio_out
[Thu Jan 23 15:40:41.843187 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): 73 bytes
[Thu Jan 23 15:40:41.843193 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): HTTP/1.1 500 Internal Server Error
Date: Thu, 23 Jan 2025 15:40:37 GMT
[Thu Jan 23 15:40:41.843206 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): 309 bytes
[Thu Jan 23 15:40:41.843210 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): P3P: policyref="https://policies.company.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE LOC GOV"
Content-Length: 2983
Connection: close
Content-Type: text/html; charset=iso-8859-1
[Thu Jan 23 15:40:41.843236 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(164): [client 127.0.0.1:59334] mod_dumpio: dumpio_out
[Thu Jan 23 15:40:41.843240 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): 2983 bytes
[Thu Jan 23 15:40:41.843244 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): <!doctype html public "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head><title>Company - 500 Internal Server Error</title><style>
BIGHTML</html>
[Thu Jan 23 15:40:41.843259 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (metadata-EOS): 0 bytes
[Thu Jan 23 15:40:41.843339 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(164): [client 127.0.0.1:59334] mod_dumpio: dumpio_out
[Thu Jan 23 15:40:41.843346 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (metadata-EOR): 0 bytes
E 23-154041.856997 347838 ks keyname='tcookie-validate.ks-secret', kt keyname='tcookie-validate.kt-secret', ku keyname='tcookie-validate.ku-secret' - lookup failed no key found
[Thu Jan 23 15:40:41.857099 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(164): [client 127.0.0.1:59334] mod_dumpio: dumpio_out
[Thu Jan 23 15:40:41.857108 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (metadata-FLUSH): 0 bytes
[Thu Jan 23 15:40:41.857126 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(164): [client 127.0.0.1:59334] mod_dumpio: dumpio_out
[Thu Jan 23 15:40:41.857130 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (metadata-FLUSH): 0 bytes
[Thu Jan 23 15:40:41.857134 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (metadata-EOC): 0 bytes
==> /var/log/httpd/access <==
0a2a9555679262f9003d3a8100000ba7/myapi/api/v1/url/Z34bAB59334wmy.domain.comgcurl/8.7.1Ktext/html; charset=iso-8859-1mGETvdnt!s500e500 Internal Server ErrorVPon
==> /var/log/httpd/error <==
[Thu Jan 23 15:40:41.857190 2025] [ssl:debug] [pid 347838:tid 347838] ssl_engine_io.c(1150): [client 127.0.0.1:59334] AH02001: Connection closed to child 0 with standard shutdown (server my.domain.com:443)
</code></pre>
<p>From the log, we can see that it establishes the connection to <code>127.0.0.1:9000</code>:</p>
<pre><code>[Thu Jan 23 15:40:37.836851 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(2802): AH00942: uwsgi: has acquired connection for (127.0.0.1:9000)
[Thu Jan 23 15:40:37.836876 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(3247): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH00944: connecting uwsgi://127.0.0.1:9000/api/v1/url/ to 127.0.0.1:9000
[Thu Jan 23 15:40:37.837111 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(2910): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH10479: uwsgi: 127.0.0.1 resolved to 127.0.0.1:9000
[Thu Jan 23 15:40:37.837190 2025] [proxy:debug] [pid 347838:tid 347838] proxy_util.c(3455): [client 127.0.0.1:59334 {YRA:127.0.0.1:59334, YPA:127.0.0.1:59334}] AH00947: connecting /api/v1/url/ to 127.0.0.1:9000 (127.0.0.1:9000)
</code></pre>
<p>But after that it returns a <code>500</code> error.</p>
<pre><code>[Thu Jan 23 15:40:41.843187 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(58): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): 73 bytes
[Thu Jan 23 15:40:41.843193 2025] [dumpio:trace7] [pid 347838:tid 347838] mod_dumpio.c(100): [client 127.0.0.1:59334] mod_dumpio: dumpio_out (data-HEAP): HTTP/1.1 500 Internal Server Error
</code></pre>
<p>I checked the <code>uwsgi</code> server log; there is no logging on the <code>uwsgi</code> side. That means the connection is established, but the request never reaches the point where <code>uwsgi</code> logs the message.</p>
<p>I also captured a <code>tcpdump</code> using the <code>tcpdump -i lo -w /tmp/dump.out port 9000</code> command.</p>
<p>We can see that it gets the connection.</p>
<p><code>15:15:59.963640 IP localhost.33654 > localhost.cslistener: Flags [P.], seq 1:3150, ack 1, win 342, options [nop,nop,TS val 812262926 ecr 812262926], length 3149</code></p>
<p>ACK data</p>
<p><code>15:15:59.963668 IP localhost.cslistener > localhost.33654: Flags [.], ack 3150, win 1365, options [nop,nop,TS val 812262926 ecr 812262926], length 0</code></p>
<p>But then it closes the connection.</p>
<p><code>15:16:03.968266 IP localhost.cslistener > localhost.33654: Flags [F.], seq 1, ack 3150, win 1365, options [nop,nop,TS val 812266931 ecr 812262926], length 0</code></p>
<p>Is there any way to enable logging on the <code>uwsgi</code> side so I can confirm that packets reach the <code>uwsgi</code> server?</p>
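<p>In case it is useful, here is a sketch of the extra logging options I plan to try in the ini file (to be double-checked against the uwsgi documentation for this version; they mainly increase request-log visibility rather than raw socket tracing):</p>
<pre><code>[uwsgi]
# ... existing options ...
# write the request log to a file, with timestamps, and log 4xx/5xx and zero-byte responses
logto = /tmp/uwsgi.log
log-date = true
log-4xx = true
log-5xx = true
log-zero = true
</code></pre>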
|
<python><apache><sockets><uwsgi>
|
2025-01-23 15:56:32
| 0
| 21,411
|
NPatel
|
79,381,694
| 14,463,396
|
sqlalchemy - This result object does not return rows despite using SET NOCOUNT ON
|
<p>I have a query that returns data in Miscrosoft SQL server management studio, but I'm getting the following error when trying to read in a query using pandas:</p>
<pre><code>sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
</code></pre>
<p>From similar questions, I understand that using <code>SET NOCOUNT ON</code> at the start of the query resolves this (and it has resolved this error for me in the past); however, I have this in my query and am still getting the error.</p>
<p>My query is like:</p>
<pre><code>query = """SET NOCOUNT ON;
DECLARE @startdttm AS datetime
DECLARE @enddttm AS datetime
SET @startdttm = '01-APR-2024 00:00:00'
SET @enddttm = '30-APR-2024 23:59:59'
SELECT some_columns,
INTO #temp1
FROM table1
WHERE date BETWEEN @startdttm AND @enddttm AND some_other_conditions
GROUP BY some_columns
SELECT some_columns,
INTO #temp2
FROM table2
LEFT JOIN #temp1 ON some_column=some_column
WHERE date BETWEEN @startdttm AND @enddttm AND some_other_conditions
GROUP BY some_columns
SELECT some_columns,
INTO #temp3
FROM table3
WHERE date BETWEEN @startdttm AND @enddttm AND some_other_conditions
GROUP BY some_columns
SELECT columns,
FROM #temp2
INNER JOIN #temp3 ON column=column
WHERE some conditions
"""
</code></pre>
<p>and then I'm trying to read it into a pandas dataframe with:</p>
<pre><code>from sqlalchemy import create_engine
import pandas as pd
engine = create_engine('connection string')
df = pd.read_sql(query, engine)
</code></pre>
<p>and I get the above error. I guess it has something to do with creating temporary tables, but all the solutions online suggest adding NOCOUNT, which in this case didn't resolve things. Does anyone know where to go from here?</p>
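<p>One thing I am considering trying next (a sketch only, not verified against this exact query): run the temp-table setup and the final SELECT separately on the same connection, so pandas only ever sees a statement that returns rows. The <code>setup_sql</code>/<code>final_select</code> split below is illustrative.</p>
<pre><code>from sqlalchemy import create_engine, text
import pandas as pd

engine = create_engine('connection string')

setup_sql = "SET NOCOUNT ON; ... SELECT ... INTO #temp3 ..."  # everything up to the final SELECT
final_select = "SELECT columns FROM #temp2 INNER JOIN #temp3 ON column=column WHERE some conditions"

with engine.begin() as conn:
    # Temp tables created here stay visible for the lifetime of this connection
    conn.exec_driver_sql(setup_sql)
    df = pd.read_sql(text(final_select), conn)
</code></pre>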
|
<python><pandas><sqlalchemy>
|
2025-01-23 15:41:47
| 1
| 3,395
|
Emi OB
|
79,381,686
| 1,406,168
|
ImportError: Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage even after adlsf is installed
|
<p>I have an azure function with code below:</p>
<pre><code>storage_account_url = f"{self.datalake_settings.xx}/{parquet_folder_path}/{file_name}.parquet"
storage_options = {
"account_name": self.datalake_settings.xx,
"client_id": self.datalake_settings.xx,
"client_secret": self.datalake_settings.xx.get_secret_value(),
"tenant_id": self.settings.xx
}
df.to_parquet( storage_account_url, engine='pyarrow', compression='snappy', storage_options=storage_options )
</code></pre>
<p>This is my requirements.txt:</p>
<pre><code>azure-functions
azure-identity
azure-storage-blob
azure-monitor-opentelemetry
opentelemetry-api
opentelemetry-sdk
opentelemetry-semantic-conventions
pydantic
adlfs
azure-storage-blob
azure-storage-file-datalake
</code></pre>
<p>This is my .venv/lib:
<a href="https://i.sstatic.net/GPkVu2sQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPkVu2sQ.png" alt="enter image description here" /></a></p>
<p>When I run this code I get following error:</p>
<blockquote>
<p>System.Private.CoreLib: Exception while executing function:
Functions.get_exchangerates_trigger. System.Private.CoreLib: Result:
Failure Exception: ImportError: Install adlfs to access Azure Datalake
Gen2 and Azure Blob Storage</p>
</blockquote>
<p>Any ideas how to troubleshoot this? It clearly looks like the adlfs and blobstorage packages are installed.</p>
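<p>One small check I plan to add (a sketch; the logging calls are just for illustration): importing adlfs directly inside the function, so the error shows whether the worker that actually runs the code can see the package.</p>
<pre><code>import logging

try:
    import adlfs  # the fsspec backend pandas needs for abfs:// / az:// destinations
    logging.info("adlfs version: %s", getattr(adlfs, "__version__", "unknown"))
except ImportError as exc:
    logging.error("adlfs not importable in this worker: %s", exc)
</code></pre>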
|
<python><pandas><dataframe><azure-functions><azure-data-lake>
|
2025-01-23 15:38:42
| 1
| 5,363
|
Thomas Segato
|
79,381,630
| 3,941,671
|
Efficient way to cyclically update data of a class instance already saved on disk
|
<p>There is a class <code>Class</code> containing an initially empty list <code>list</code> and a method to add an element to the list. Each time an element is added to the list, the whole instance should be saved to disk. So in most cases the already saved data only needs to be extended by the data of the last added element.</p>
<p>Is there an efficient way to achieve such behaviour? Deleting the saved data and writing it to disk again does not seem very efficient. I'm looking for a kind of "update already saved data", similar to what Git does when committing.</p>
<p>At the moment I save the data using pickle. I prefer to save the whole <code>Class</code> instance so as not to lose the class reference.</p>
<pre><code>from pathlib import Path
class Class():
def __init__(self) -> None:
self.list: list[object] = []
def add_element(self, obj: object, path: Path) -> None:
self.list.append(obj)
# now update already saved data in directory given by `path`
</code></pre>
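<p>For context, a sketch of the direction I am leaning towards (it stores the elements rather than the whole instance, which may or may not be acceptable): pickle allows appending several objects to the same file, so each add can write only the new element and the instance can be rebuilt by reading until EOF.</p>
<pre><code>import pickle
from pathlib import Path

def append_element(path: Path, obj: object) -> None:
    # Appends one more pickled object to the end of the file (cost proportional to obj, not the total)
    with path.open("ab") as f:
        pickle.dump(obj, f)

def load_elements(path: Path) -> list[object]:
    elements = []
    with path.open("rb") as f:
        while True:
            try:
                elements.append(pickle.load(f))
            except EOFError:
                break
    return elements
</code></pre>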
|
<python><python-3.x><pickle>
|
2025-01-23 15:20:42
| 1
| 471
|
paul_schaefer
|
79,381,460
| 12,974,570
|
WebSocket recv() Call Not Timing Out Despite Setting Timeout in Python
|
<p>I'm working on a project where I need to receive messages from a WebSocket server using Python. I'm using the websocket library and have set a timeout for the recv() call, but it seems to be ignored, as the call hangs indefinitely without throwing a timeout exception.</p>
<p>Here is a minimal working example of my code:</p>
<pre class="lang-py prettyprint-override"><code>import json
import time
from urllib.parse import urlparse

from websocket import (
    create_connection,
    WebSocketTimeoutException,
)
time.sleep(1) # simulate user reading
# Build the WebSocket URL for /live
ws_url = (
f"wss://{urlparse(self.host).netloc}/live"
f"?participant_code={self.participant_code}"
f"&page_name=chat"
f"&page_index=8"
f"&session_code={self.session_code}"
f"&live_method_name=chat.live_method"
)
print(f"Connecting to WebSocket: {ws_url}")
# Build cookies
cookie_dict = self.client.cookies.get_dict()
cookie_header_str = "; ".join(f"{k}={v}" for k, v in cookie_dict.items())
headers = [f"Cookie: {cookie_header_str}"]
print("Cookies dict:", cookie_dict)
print("Headers:", headers)
try:
self.ws = create_connection(
ws_url, header=headers, timeout=30, sslopt={"cert_reqs": 0}
)
self.ws.settimeout(30) # Extend timeout for receive operations
# Send INIT
init_msg = {"init": True}
self.ws.send(json.dumps(init_msg))
print("Sent init. Waiting for GPT init reply...")
print("ws connected?", self.ws.connected)
try:
print("Inside the try block")
reply1 = self.ws.recv()
print("GPT init reply:", reply1)
except WebSocketTimeoutException:
print("Timeout waiting for init reply")
except Exception as e:
print("Some other error while waiting for init reply:", e)
# Then WAIT a while to ensure the user remains on the page # Is this necessary??
time.sleep(5)
# Send TEXT
text_msg = {
"text": "We should stop supporting Ukraine and focus on domestic issues."
}
self.ws.send(json.dumps(text_msg))
print("Sent first real user message. Waiting for GPT text reply...")
try:
reply2 = self.ws.recv()
print("GPT second reply:", reply2)
except WebSocketTimeoutException:
print("Timeout waiting for second reply")
# Close
self.ws.close()
except Exception as e:
import traceback
traceback.print_exc()
print("Error during chat interaction:", e)
</code></pre>
<p>And this is being printed to my console:</p>
<pre class="lang-none prettyprint-override"><code>Connecting to WebSocket: wss://myapp.herokuapp.com/live?participant_code=cw1ptyf9&page_name=chat&page_index=8&session_code=Scientific_Experiment&live_method_name=chat.live_method
Cookies dict: {}
Headers: ['Cookie: ']
Sent init. Waiting for GPT init reply...
ws connected? True
Inside the try block
</code></pre>
<p><strong>Problem</strong>:
I can see in the server logs that <code>self.ws.send(json.dumps(init_msg))</code> sent the initial message successfully to the server, but the <code>recv()</code> call hangs indefinitely and does not throw a <code>WebSocketTimeoutException</code> even after 30 seconds. Are the missing cookies the problem? Why are they even missing?</p>
<p>I have checked my browser developer tools to confirm that the WebSocket connection works as expected when accessed via a browser.</p>
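<p>If it helps with debugging, a small sketch of what I am planning to enable next in the websocket-client library, to see the handshake headers and frames (and whether any cookies actually went out):</p>
<pre class="lang-py prettyprint-override"><code>import websocket

# Prints the low-level handshake and frame traffic to stderr; call before creating the connection
websocket.enableTrace(True)
</code></pre>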
|
<python><network-programming><websocket><timeout>
|
2025-01-23 14:24:27
| 0
| 1,229
|
Johannes Walter
|
79,381,326
| 3,825,996
|
Conceptual issues about responsibility with classmethod factories for inheritance in Python
|
<p>As far as I understand, the difference between <code>@classmethod</code> and <code>@staticmethod</code> is the <code>cls</code> parameter which is used so a class can construct instances of subclasses. I have encountered some issues with that concept while trying to develop a library. An example:</p>
<pre class="lang-py prettyprint-override"><code>class Line:
def __init__(self, length):
self.length = length
@classmethod
def unit(cls):
return cls(1)
def clone(self):
return Line(self.length)
def __mul__(self, other):
return Square(self.length * other.length)
class Square:
def __init__(self, area):
self.area = area
</code></pre>
<p>So the fact that the <code>unit</code> factory is a <code>classmethod</code> (<a href="https://www.geeksforgeeks.org/class-method-vs-static-method-python/#:%7E:text=We%20generally%20use%20the%20class,methods%20to%20create%20utility%20functions." rel="nofollow noreferrer">as is common</a>) and uses <code>cls(...)</code> for object construction signals to me: "Feel free to inherit from this class, it was designed for that." Otherwise it could just use <code>Line(1)</code> for construction. But now lets say a wild user appears and actually inherits from the classes, implementing their own functionality on top:</p>
<pre class="lang-py prettyprint-override"><code>class ColoredLine(Line):
def __init__(self, length, color):
super().__init__(length)
self.color = color
class ColoredSquare(Square):
def __init__(self, area, color):
super().__init__(area)
self.color = color
</code></pre>
<p>Issue 1: As far as I know, it is perfectly legit to modify the constructor in order to add parameters when inheriting. However, <code>Line</code> is not aware of this and if its <code>unit</code> factory tries to call <code>ColoredLine.__init__</code> without the new required <code>color</code> parameter, it throws. So a user is actually required to maintain compatibility with the old constructor (which might be very complex and change as the library evolves). But ok, the user adds a default <code>color="red"</code> and we move on.</p>
<p>Issue 2: We should not differentiate between factory class methods and instance methods that return another instance, like <code>clone</code>, so that should actually call <code>type(self)(self.length)</code> instead of <code>Line(...)</code>. This would however make the clone of a blue line become a red (default-colored) line with the same length. So should the base <code>clone</code> instead generically copy all fields using <code>__dict__</code>?</p>
<p>Issue 3: There is no way for <code>__mul__</code> to know that it should now return a <code>ColoredSquare</code>. So the pythonic inheritance considerations inconsistently only work for methods that return an object of the same type, so <code>__add__</code> should work in most cases and <code>__mul__</code> should not, <code>__sub__</code> might.</p>
<p>The question is, where should the line be between already semi-implementing parts of the functionality of an inheriting class and what to leave to the responsibility of the user? Should a user be required to sift through the complete implementation of a class and decide method by method whether it needs overriding?</p>
<p>The <code>date</code> object from <code>datetime</code> faces all these problems. It silently requires constructor compatibility with its <code>classmethod</code> factories, some methods <code>return type(self)(...)</code>, ignoring any additional instance members and <code>__sub__</code> will always return a <code>timedelta</code>.</p>
<p>In my library, I am considering, less pythonically but more consistently, using <code>@staticmethod</code> and <code>MyClass()</code> for factories instead of <code>@classmethod</code>, <code>cls()</code> and <code>type(self)()</code>. A user is free to subclass, but everything is their responsibility. That also makes type annotations much easier.</p>
<p>Or are there clear guidelines on this (including recommendations on how to deal with these issues instead of just recommending to use <code>@classmethod</code> for factories)? Are there conceptual mistakes in my reasoning?</p>
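<p>For completeness, here is a sketch of the <code>__dict__</code>-based clone mentioned in Issue 2 (one possible convention, not a recommendation):</p>
<pre class="lang-py prettyprint-override"><code>import copy

class Line:
    def __init__(self, length):
        self.length = length

    def clone(self):
        # Build an instance of the runtime class without calling __init__, then copy all
        # fields, so a ColoredLine keeps its color without Line knowing about it.
        new = type(self).__new__(type(self))
        new.__dict__ = copy.deepcopy(self.__dict__)
        return new
</code></pre>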
|
<python><inheritance><class-method>
|
2025-01-23 13:47:56
| 1
| 766
|
mqnc
|
79,381,256
| 1,593,077
|
How to "fold" python files used as modules into the main script file?
|
<p>Suppose I have two Python script files: <code>foo</code> and <code>utils/bar.py</code> in some directory. In <code>foo</code>, I have:</p>
<pre><code>import os
import sys
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from utils.bar import func1, func2
# ... code using func1 or func2
# ... perhaps some code using utils.bar.func3
</code></pre>
<p>that is, the subdirectory <code>utils</code> is added to the module search path, and then <code>utils/bar.py</code> is used as the source of utils.bar functions.</p>
<p>I'm not much of a pythonista, so I'm not sure if this hack is customary or not, but regardless - I need a single deployable script file, not a hierarchy of files. So, I want to "fold" the file hierarchy all into a single file - put the contents of <code>bar.py</code> into <code>foo</code> in some way, so that the script will continue working even if I then removed the <code>utils/</code> subdirectory and the <code>bar.py</code> file.</p>
<p>I could remove all references to <code>utils.bar</code> and just copy the plain functions from <code>bar.py</code> into <code>foo</code>; but I was wondering if there was something less "brute-force" than that.</p>
<p>(There are actually multiple files in multiple subdirectories, I just gave a simplified example.)</p>
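<p>One less brute-force route I am aware of, if a single runnable file (rather than a literal <code>.py</code>) is acceptable: the standard-library <code>zipapp</code> module bundles a directory tree into one archive. The paths below are assumptions matching the example layout, with <code>foo</code> renamed to <code>__main__.py</code>.</p>
<pre><code>import zipapp

# Packs the directory holding __main__.py and utils/bar.py into a single file,
# runnable as `python app.pyz` (or directly, thanks to the interpreter line).
zipapp.create_archive(
    source="project_dir",
    target="app.pyz",
    interpreter="/usr/bin/env python3",
)
</code></pre>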
|
<python><python-module><single-file>
|
2025-01-23 13:29:41
| 3
| 137,004
|
einpoklum
|
79,381,037
| 4,891,717
|
How to interpolate a multidimensional xarray DataArray?
|
<p>I am using the <code>xarray</code> library and I have some doubts/questions.</p>
<p>I have this dataset:</p>
<pre class="lang-none prettyprint-override"><code>ds
<xarray.Dataset> Size: 2GB
Dimensions: (Latitude: 364, Longitude: 246, Lon_u: 247, Lat_v: 364,
Time: 1087)
Coordinates:
Longitude (Latitude, Longitude)
Latitude (Latitude, Longitude)
Lon_u (Latitude, Lon_u)
Lat_v (Lat_v, Longitude)
* Time (Time) datetime64[ns]
Data variables:
Lat_u (Latitude, Lon_u)
Lon_v (Lat_v, Longitude)
uSurf (Time, Latitude, Lon_u)
vSurf (Time, Lat_v, Longitude)
</code></pre>
<p>I want to interpolate the data (Lat_v onto Latitude and Lon_u onto Longitude). The result should be something like this:</p>
<pre class="lang-none prettyprint-override"><code>ds
<xarray.Dataset> Size: 2GB
Dimensions: (Latitude: 364, Longitude: 246, Time: 1087)
Coordinates:
Longitude (Longitude)
Latitude (Latitude)
* Time (Time) datetime64[ns]
Data variables:
uSurf (Time, Latitude, Longitude)
vSurf (Time, Latitude, Longitude)
</code></pre>
<p>Actually, I don't know whether converting 2D coordinates to 1D makes sense. I read that 2D coordinates are logical coordinates (Longitude, Latitude), describing an irregular grid. But I think the example above is also an irregular grid, simply because the dimensions are different: <code>{Latitude: 364, Longitude: 246}</code>.</p>
<p>So I don't know if I should preserve the coordinates like this:</p>
<pre class="lang-none prettyprint-override"><code>ds
<xarray.Dataset> Size: 2GB
Dimensions: (Latitude: 364, Longitude: 246, Lon_u: 247, Lat_v: 364,
Time: 1087)
Coordinates:
Longitude (Latitude, Longitude)
Latitude (Latitude, Longitude)
* Time (Time) datetime64[ns]
Data variables:
uSurf (Time, Latitude, Longitude)
vSurf (Time, Latitude, Longitude)
</code></pre>
<p>I don't understand this notation, though, because I find it recursive. Imagine I want to get all the possible Latitude values; I can do this:</p>
<pre class="lang-py prettyprint-override"><code>ds.Latitude.values[:, 0]
</code></pre>
<p>But Latitude is also a coordinate of Latitude itself. So I could do something like this</p>
<pre class="lang-py prettyprint-override"><code>ds.Latitude.Latitude.Latitude.Latitude.values[:, 0]
</code></pre>
<p>Is that normal behaviour, or is something wrong with my Dataset such that I need to flatten it?</p>
<p>I have tried this code:</p>
<pre class="lang-py prettyprint-override"><code>interp_coords = {'Latitude': ds.Latitude.values[:, 0], 'Longitude': ds.Longitude.values[0, :]}
uSurf_interp = ds.uSurf.interp(
Latitude=interp_coords['Latitude'],
Lon_u=interp_coords['Longitude'],
method='linear'
)
vSurf_interp = ds.vSurf.interp(
Lat_v=interp_coords['Latitude'],
Longitude=interp_coords['Longitude'],
method='linear'
)
ds_interp = xr.Dataset(
{
'uSurf': uSurf_interp,
'vSurf': vSurf_interp
},
coords={
'Latitude': interp_coords['Latitude'],
'Longitude': interp_coords['Longitude'],
'Time': ds.Time
}
)
</code></pre>
<p>I get this error in the first <code>interp</code> instruction:</p>
<pre><code>ValueError: Input DataArray is not 1-D.
</code></pre>
<p><code>interp</code> accepts only 1-D coordinates, so I need to flatten the coordinates, which is something I wanted to avoid.</p>
<p>I have read the documentation, but I cannot find a solution:</p>
<ul>
<li><a href="https://docs.xarray.dev/en/stable/user-guide/interpolation.html#multi-dimensional-interpolation" rel="nofollow noreferrer">Multi-dimensional interpolation</a></li>
<li><a href="https://docs.xarray.dev/en/stable/examples/multidimensional-coords.html#working-with-multidimensional-coordinates" rel="nofollow noreferrer">Working with Multidimensional Coordinates</a></li>
</ul>
<p>I found <a href="https://earth-env-data-science.github.io/lectures/xarray/xarray_intro.html" rel="nofollow noreferrer">this diagram</a> quite illustrative, but it is for 1D coordinates.</p>
<p><a href="https://i.sstatic.net/Fyfly17V.webp" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fyfly17V.webp" alt="enter image description here" /></a></p>
<h1>UPDATE</h1>
<p>I'm now trying to do it manually with <a href="https://numpy.org/doc/2.1/reference/generated/numpy.meshgrid.html" rel="nofollow noreferrer">np.meshgrid</a> and <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html" rel="nofollow noreferrer">scipy.interpolate.griddata</a>. Also, I'm trying to combine them with <code>xarray.apply_ufunc</code>.</p>
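<p>For reference, a rough sketch of that manual route (variable names are placeholders and it loops over time naively, so it is not tuned for performance):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import griddata

# Target 1-D axes, taking one column/row of the 2-D coordinate arrays as above
lat_1d = ds.Latitude.values[:, 0]
lon_1d = ds.Longitude.values[0, :]
lon_t, lat_t = np.meshgrid(lon_1d, lat_1d)

# Source points for uSurf live on the (Lat_u, Lon_u) grid
src_points = np.column_stack([ds.Lat_u.values.ravel(), ds.Lon_u.values.ravel()])

u_interp = np.empty((ds.sizes["Time"], lat_1d.size, lon_1d.size))
for t in range(ds.sizes["Time"]):
    u_interp[t] = griddata(
        src_points,
        ds.uSurf.isel(Time=t).values.ravel(),
        (lat_t, lon_t),           # query points, same (lat, lon) order as src_points
        method="linear",
    )
</code></pre>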
|
<python><multidimensional-array><dataset><interpolation><python-xarray>
|
2025-01-23 12:29:37
| 0
| 9,732
|
ChesuCR
|
79,381,028
| 2,066,083
|
Why can't subfigures be nested in gridspecs to keep their suptitles separate in matplotlib?
|
<p>I would expect this code:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 6))
fig_gridspec = fig.add_gridspec(1, 1)
top_subfig = fig.add_subfigure(fig_gridspec[(0, 0)])
top_subfig.suptitle("I am the top subfig")
top_subfig_gridspec = top_subfig.add_gridspec(1, 1, top=.7)
nested_subfig = top_subfig.add_subfigure(top_subfig_gridspec[(0, 0)])
nested_subfig.suptitle("I am the nested subfig")
plt.show()
</code></pre>
<p>to generate two suptitles on different lines. Instead, they are overlapping.</p>
<p><a href="https://i.sstatic.net/xFzCRVSi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFzCRVSi.png" alt="a plot with two overlapping suptitles" /></a></p>
<p>Can anyone explain why? Also, is there a way to achieve such separation with nested subfigures?</p>
<p>Edit: To be clear, I mean, without changing the dimensions of the grid in the gridspec.</p>
<p>I know I can do this, and it might be what I wind up doing:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 6))
fig_gridspec = fig.add_gridspec(1, 1)
top_subfig = fig.add_subfigure(fig_gridspec[(0, 0)])
top_subfig.suptitle("I am the top subfig")
top_subfig_gridspec = top_subfig.add_gridspec(2, 1, height_ratios=[.1, 1])
nested_subfig = top_subfig.add_subfigure(top_subfig_gridspec[(1, 0)])
nested_subfig.suptitle("I am the nested subfig")
plt.show()
</code></pre>
<p>I just don't understand why my first code block doesn't work, and why there seems to be no way to adjust a nested subfigure's position within another subfigure.</p>
<p>Second edit: I don't have a minimum reproducible example for this, but relatedly, it also doesn't seem that hspace does anything for a gridspec that contains subfigures which themselves contain subfigures. I am starting to conclude that gridspec keyword arguments simply do not work when the gridspec contains subfigures, when the gridspec is associated with a subfigure, or both. I don't yet know the boundaries of the phenomenon.</p>
<p>Yet another edit: I should have said to begin with that I find in my larger context, not discussed here, that constrained_layout causes a bunch of problems even as it solves some others, so I can't address my issue that way. I'm really wondering about why it is I can't just get a subfigure to respect the gridspec it's in, and if the answer is "this is a bug in matplotlib" and you should not use subfigures for your use case, I'd accept it. Or if the answer is, "here is the fully justified reason for subfigures to behave this way" and you should not use subfigures for your use case, I'd accept that too.</p>
|
<python><matplotlib>
|
2025-01-23 12:27:50
| 3
| 937
|
Katie
|
79,380,860
| 1,480,018
|
Python parse id attribute instead of element from xml
|
<p>This Python code processed XML data of the form shown in the comment:</p>
<pre><code># <members><id>26</id><name>Alexi Delano</name><id>27</id><name>Cari Lekebusch</name></members>
def element_members(self, element):
for id, name in grouper([child.text for child in element.iterchildren()], 2):
yield int(id), name.strip()
</code></pre>
<p>Now the XML data no longer has an <code>id</code> element; the id is an attribute instead:</p>
<pre><code># <members><name id="26">Alexi Delano</name><name id="27">Cari Lekebusch</
</code></pre>
<p>so the code no longer works</p>
<p>I can see I need to remove the <code>id</code> from <code>for id</code>, but I cannot work out how to get it from the attribute instead.</p>
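<p>A minimal sketch of what I mean by getting it from the attribute (just my guess, untested):</p>
<pre><code># Rough sketch: read the id attribute from each name element instead of
# pairing up a separate id element. Assumes the same lxml element as before.
def element_members(self, element):
    for child in element.iterchildren():
        if child.tag == "name":
            yield int(child.get("id")), child.text.strip()
</code></pre>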
<p>This is an open src project on GitHub and the <a href="https://github.com/philipmat/discogs-xml2db/blob/develop/discogsxml2db/parser.py" rel="nofollow noreferrer">complete source file is here</a></p>
|
<python><xml>
|
2025-01-23 11:31:45
| 1
| 13,362
|
Paul Taylor
|
79,380,831
| 4,247,599
|
Python redirect logger to print (without repeating output when executing in a jupyter notebook)
|
<p>I found already how to <a href="https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger">re-direct print to logger</a>, though I would like to do the other way around, so that the logger output will be also printed to console, and will be shown when running the code from jupyter notebooks.</p>
<ul>
<li>The first solution I found is to configure the root logger so that its messages also go to the console.</li>
</ul>
<pre class="lang-py prettyprint-override"><code># in a jupyter notebook
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging.info("logging test")
</code></pre>
<p>The problem is that this option re-directs <strong>all</strong> the loggers of all the sub-libraries, and not only the loggers in the code that I wrote, creating a lot of noise in the output.</p>
<ul>
<li>The other solution I found (adapted from <a href="https://betterstack.com/community/questions/how-to-log-to-file-and-console-in-python/" rel="nofollow noreferrer">here</a>) is this one:</li>
</ul>
<pre class="lang-py prettyprint-override"><code># in a jupyter notebook
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
logger.addHandler(file_handler)
logger.addHandler(console_handler)
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
</code></pre>
<p>which works as I want, except that if you run the same cell repeatedly, each run adds another handler and the messages are duplicated.</p>
<p>First run output</p>
<pre><code>This is an info message
This is a warning message
This is an error message
</code></pre>
<p>Second run output</p>
<pre><code>This is an info message
This is an info message
This is a warning message
This is a warning message
This is an error message
This is an error message
</code></pre>
<ul>
<li>The only way I could achieve what I want is to log and print when running the code from a jupyter notebook, i.e.:</li>
</ul>
<pre class="lang-py prettyprint-override"><code># in the python module
logger.info("this is an info message") # this prints to log
print("this is an info message") # this prints to the jupyter notebook output
</code></pre>
<p>Another option was to wrap the logger and the print in a decorator to avoid repeated code. This also looks like a hack, and it is useless code when the module is not run from a notebook.</p>
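<p>A rough sketch of that decorator idea (the helper name is made up; this is just to show what I mean):</p>
<pre class="lang-py prettyprint-override"><code>import functools
import logging

logger = logging.getLogger(__name__)

def log_and_print(log_func):
    """Hypothetical wrapper: send the message to stdout and to the logger."""
    @functools.wraps(log_func)
    def wrapper(msg, *args, **kwargs):
        print(msg % args if args else msg)     # visible in the notebook output
        return log_func(msg, *args, **kwargs)  # still goes to the log handlers
    return wrapper

info = log_and_print(logger.info)
info("this is an info message")  # printed and logged
</code></pre>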
<p>Is there a way to redirect the logger from a Python module to the output cell of a Jupyter notebook (as <code>print</code> would do), without also triggering the loggers of every other sub-library, and without having to repeat each logger message with a <code>print</code> statement in the Python code?</p>
|
<python><python-3.x><logging>
|
2025-01-23 11:23:03
| 0
| 4,299
|
SeF
|
79,380,723
| 6,623,277
|
Relationships are both of the same direction when declaring two-way foreign keys
|
<p>For my ORM classes <code>Trade</code>, <code>Order</code> and <code>Account</code> (stock-market entities) I want <code>Order</code> and <code>Trade</code> linked to each other by foreign keys, with the column on <code>Order</code> nullable (an order may not have a trade). But SQLAlchemy complains about relationships pointing in the same direction.</p>
<pre><code>import sqlite3
from typing import Optional
from sqlalchemy import String,ForeignKey,create_engine
from sqlalchemy.testing.schema import mapped_column
from sqlalchemy.orm import DeclarativeBase, Mapped, relationship, sessionmaker
class Base(DeclarativeBase):
pass
class Order(Base):
__tablename__ = 'order'
order_id: Mapped[int] = mapped_column(primary_key=True)
account_id: Mapped[str] = mapped_column(ForeignKey("account.account_id"))
corresp_trade_id: Mapped[Optional[int]] = mapped_column(ForeignKey("trade.trade_id"))
corresp_trade: Mapped[Optional["Trade"]] = relationship(back_populates='corresp_order', foreign_keys=[corresp_trade_id])
parent_order_id: Mapped[int] = mapped_column(ForeignKey(f"order.order_id"))
parent_order: Mapped["Order"] = relationship(back_populates='parent_order', foreign_keys=[parent_order_id], uselist=True)
class Trade(Base):
__tablename__ = "trade"
trade_id: Mapped[int] = mapped_column(primary_key=True)
account_id: Mapped[str] = mapped_column(ForeignKey("account.account_id"))
corresp_order_id: Mapped[int] = mapped_column(ForeignKey("order.order_id"))
corresp_order: Mapped["Order"] = relationship(back_populates='corresp_trade', foreign_keys=[corresp_order_id])
class Account(Base):
__tablename__ = 'account'
account_id: Mapped[int] = mapped_column(primary_key=True)
account_name: Mapped[str] = mapped_column(String(75))
avl_funds: Mapped[float]
class DB:
def __init__(self):
self.db = sqlite3.connect("<db_file_path>")
self.db_engine = create_engine(f"sqlite:///{<db_file_path>}", echo=True)
Base.metadata.create_all(self.db_engine)
self.Session_factory = sessionmaker(bind=self.db_engine)
self.insert_init_rows()
def insert_init_rows(self):
with self.Session_factory() as session:
first_account=Account(account_id='0', account_name='First', avl_funds=100000.00)
session.add(first_account) ### Exception here
session.commit()
</code></pre>
<p>When I try to persist a new account I get:</p>
<blockquote>
<p>sqlalchemy.exc.ArgumentError: Trade.corresp_order and back-reference Order.corresp_trade are both of the same direction <RelationshipDirection.MANYTOONE: 2>. Did you mean to set remote_side on the many-to-one side ?</p>
</blockquote>
<p>If I remove the foreign key from either of the tables it still fails, because there are multiple other foreign keys in <code>Trade</code> and <code>Order</code> and SQLAlchemy then explicitly asks me to specify which foreign key to use. I also need to specify <code>back_populates</code> in both tables.</p>
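<p>For context, this is a minimal self-referential sketch of how <code>remote_side</code> is normally used (a hypothetical <code>Node</code> class, not my models); I don't see how it maps onto the two-way <code>Order</code>/<code>Trade</code> case:</p>
<pre><code>from typing import List, Optional
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class Node(Base):
    __tablename__ = "node"
    id: Mapped[int] = mapped_column(primary_key=True)
    parent_id: Mapped[Optional[int]] = mapped_column(ForeignKey("node.id"))
    # remote_side marks the "one" side of this self-referential many-to-one.
    parent: Mapped[Optional["Node"]] = relationship(
        back_populates="children", remote_side=[id]
    )
    children: Mapped[List["Node"]] = relationship(back_populates="parent")
</code></pre>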
|
<python><python-3.x><sqlite><sqlalchemy><orm>
|
2025-01-23 10:50:24
| 0
| 2,077
|
KCK
|
79,380,706
| 7,887,965
|
ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example)
|
<p>I am trying to fine-tune a vision LLM such as <code>Qwen/Qwen2-VL-7B-Instruct</code>. I have the following dataset in a <code>.jsonl</code> file:</p>
<pre><code>{"messages": [{"role": "system", "content": [{"type": "text", "text": "You are an expert assistant for answering questions based on textual and visual data."}]}, {"role": "user", "content": [{"type": "text", "text": "\nBased on the provided context and image, answer the following questions.\n\n## CONTEXT ##\nGerman Text: Schörghuber Unternehmensgruppe\nESG – Status quo, Chancen und Risiken aus Sicht eines \nImmobilien-Bestandshalters\nGastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer\nTranslated Text: Schörghuber group\nESG - status quo, opportunities and risks from the perspective of one\nReal estate stock holder\nGuest lecture Advanced Topics in Sustainable Real Estate on May 12, 2023, Janine Schluer\nTopic: Real Estate Investment\nSummary: Schörghuber Unternehmensgruppe\nESG – Status quo, Chancen und Risiken aus Sicht eines \nImmobilien-Bestandshalters\nGastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer\nKeywords: []\nExtracted Text: nan\nGenerated Questions: ['Schörghuber Unternehmensgruppe ESG – Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text]'\n 'Schörghuber Unternehmensgruppe ESG – Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan]'\n 'Schörghuber Unternehmensgruppe ESG – Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan ]'\n 'Schörghuber Unternehmensgruppe ESG - Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan ]'\n 'Schörghuber Unternehmensgruppe ESG - Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan]']\n\n## IMAGE ##\nImage Path: C:\\Users\\ehsan\\Downloads\\Initial Dataset\\ehsan\\230512_ESG-Vortrag_TUM\\230512_ESG-Vortrag_TUM.pdf_page_1_img_1.png\n\nOnly return the answers as your output.\n"}, {"type": "image", "image": "C:\\Users\\ehsan\\Downloads\\Initial Dataset\\ehsan\\230512_ESG-Vortrag_TUM\\230512_ESG-Vortrag_TUM.pdf_page_1_img_1.png"}]}, {"role": "assistant", "content": [{"type": "text", "text": ""}]}]}
{"messages": [{"role": "system", "content": [{"type": "text", "text": "You are an expert assistant for answering questions based on textual and visual data."}]}, {"role": "user", "content": [{"type": "text", "text": "\nBased on the provided context and image, answer the following questions.\n\n## CONTEXT ##\nGerman Text: Schörghuber Unternehmensgruppe\nESG – Status quo, Chancen und Risiken aus Sicht eines \nImmobilien-Bestandshalters\nGastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer\nTranslated Text: Schörghuber group\nESG - status quo, opportunities and risks from the perspective of one\nReal estate stock holder\nGuest lecture Advanced Topics in Sustainable Real Estate on May 12, 2023, Janine Schluer\nTopic: Real Estate Investment\nSummary: Schörghuber Unternehmensgruppe\nESG – Status quo, Chancen und Risiken aus Sicht eines \nImmobilien-Bestandshalters\nGastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer\nKeywords: []\nExtracted Text: nan\nGenerated Questions: ['Schörghuber Unternehmensgruppe ESG – Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text]'\n 'Schörghuber Unternehmensgruppe ESG – Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan]'\n 'Schörghuber Unternehmensgruppe ESG – Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan ]'\n 'Schörghuber Unternehmensgruppe ESG - Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan ]'\n 'Schörghuber Unternehmensgruppe ESG - Status quo, Chancen und Risiken aus Sicht eines Immobilien-Bestandshalters Gastvortrag Advanced Topics in Sustainable Real Estate am 12. Mai 2023, Janine Schluer Keywords: [Extracted Text: nan]']\n\n## IMAGE ##\nImage Path: C:\\Users\\ehsan\\Downloads\\Initial Dataset\\ehsan\\230512_ESG-Vortrag_TUM\\230512_ESG-Vortrag_TUM.pdf_page_1_img_2.jpeg\n\nOnly return the answers as your output.\n"}, {"type": "image", "image": "C:\\Users\\ehsan\\Downloads\\Initial Dataset\\ehsan\\230512_ESG-Vortrag_TUM\\230512_ESG-Vortrag_TUM.pdf_page_1_img_2.jpeg"}]}, {"role": "assistant", "content": [{"type": "text", "text": ""}]}]}
</code></pre>
<p>Below is the code I am using for fine-tuning:</p>
<pre><code>import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig
from trl import SFTTrainer, SFTConfig
from peft import LoraConfig
from datasets import load_dataset
from PIL import Image
from typing import List, Dict, Any
import os
# Helper function to process visual references (e.g., images)
def process_vision_info(visual_reference: str) -> List[Any]:
"""
Load and preprocess images given their file paths.
:param visual_reference: Filepath to the image.
:return: Preprocessed image tensor.
"""
try:
image = Image.open(visual_reference).convert("RGB")
return [image]
except Exception as e:
print(f"Error processing image {visual_reference}: {e}")
return [None] # Placeholder for failed image loads
# Extract relevant information from messages
def extract_information(messages: List[Dict[str, Any]]) -> Dict[str, Any]:
"""
Extract information from the 'messages' field in each example.
:param messages: List of message dictionaries.
:return: A dictionary with extracted text and other fields.
"""
german_text = ""
translated_text = ""
topic = ""
generated_questions = []
summary = ""
image_path = None
for message in messages:
if isinstance(message, dict): # Ensure the message is a dictionary
for content in message.get("content", []):
if isinstance(content, dict): # Ensure content is a dictionary
if content["type"] == "text":
if "German Text" in content["text"]:
german_text = content["text"]
elif "Translated Text" in content["text"]:
translated_text = content["text"]
elif "Topic:" in content["text"]:
topic = content["text"].split("Topic:")[-1].strip()
elif "Summary:" in content["text"]:
summary = content["text"].split("Summary:")[-1].strip()
if "Generated Questions" in content["text"]:
generated_questions.append(content["text"])
elif content["type"] == "image" and content["image"]:
image_path = content["image"]
return {
"German Text": german_text,
"Translated Text": translated_text,
"Topic": topic,
"Generated Questions": generated_questions,
"Summary": summary,
"image_path": image_path
}
def collate_fn(examples: List[Dict[str, Any]], processor: AutoProcessor) -> Dict[str, Any]:
german_texts, translated_texts, topics, generated_questions, summaries, images = [], [], [], [], [], []
for example in examples:
# Extracting text from the "content" field inside the messages
extracted_data = extract_information(example["messages"])
german_text = extracted_data.get("German Text", "")
translated_text = extracted_data.get("Translated Text", "")
topics.append(extracted_data.get("Topic", ""))
generated_questions.append(extracted_data.get("Generated Questions", []))
summaries.append(extracted_data.get("Summary", ""))
images.append(process_vision_info(extracted_data["image_path"])[0] if extracted_data["image_path"] else None)
german_texts.append(german_text)
translated_texts.append(translated_text)
# Debugging step: Ensure the texts are in correct format
print(f"german_texts: {german_texts}")
print(f"translated_texts: {translated_texts}")
# Pass raw texts directly to the processor
batch = processor(
text=german_texts, # List of strings (texts)
images=images,
return_tensors="pt",
padding=True # Ensure padding is handled correctly
)
# If you need to process translated texts similarly
translated_batch = processor(
text=translated_texts,
images=images,
return_tensors="pt",
padding=True
)
# Prepare labels and ignore padding and image tokens in loss computation
labels = batch["input_ids"].clone()
labels[labels == processor.tokenizer.pad_token_id] = -100
for image_token_id in [151652, 151653, 151655]:
labels[labels == image_token_id] = -100
batch["labels"] = labels
# Add additional metadata fields to the batch
batch.update({
"translated_texts": translated_texts,
"topics": topics,
"generated_questions": generated_questions,
"summaries": summaries
})
# Add translated batch for further processing if needed
batch.update({"translated_batch": translated_batch})
return batch
# Data formatting function
def format_data(example):
# Ensure "messages" is a list before attempting to access its elements
if isinstance(example.get("messages"), list) and len(example["messages"]) > 2:
# Safeguard added: Ensure the content exists and is in a valid format before accessing
context = ""
if isinstance(example["messages"][1], dict) and "content" in example["messages"][1]:
content = example["messages"][1].get("content", [])
if isinstance(content, list) and len(content) > 0:
context = content[0].get("text", "")
# Handle the image path and questions with similar safeguard checks
image_path = ""
if isinstance(example["messages"][2], dict) and "content" in example["messages"][2]:
content = example["messages"][2].get("content", [])
if isinstance(content, list) and len(content) > 0:
image_path = content[0].get("image", "")
questions = ""
if isinstance(example["messages"][1], dict) and "content" in example["messages"][1]:
content = example["messages"][1].get("content", [])
if isinstance(content, list) and len(content) > 0:
questions = content[0].get("text", "").split("Generated Questions: ")[-1].strip() # Extract questions
# Debugging output to check the extracted values
print("Context:", context)
print("Image Path:", image_path)
print("Questions:", questions)
# Return the extracted information in a dictionary format
return {
"context": context,
"image_path": image_path,
"questions": questions,
}
else:
raise ValueError("Expected 'messages' to be a list with at least 3 elements.")
# Hugging Face model ID and dataset
model_id = "Qwen/Qwen2-VL-7B-Instruct"
data_path = "new_fine_tuning_vqa_dataset.jsonl"
if not os.path.exists(data_path):
raise FileNotFoundError(f"Dataset file not found at {data_path}")
# Dataset loading and validation
dataset = load_dataset("json", data_files=data_path)["train"]
for i, example in enumerate(dataset):
if "messages" not in example:
raise ValueError(f"Missing 'messages' key in dataset at index {i}: {example}")
formatted_dataset = dataset.map(
format_data,
batched=False, # Process one example at a time
remove_columns=dataset.column_names # Optional: remove other columns if unnecessary
)
print(formatted_dataset[0])
# Load model and processor with BitsAndBytes quantization
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
llm_int8_enable_fp32_cpu_offload=True
)
model = AutoModelForVision2Seq.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=bnb_config
)
processor = AutoProcessor.from_pretrained(model_id)
# PEFT configuration with LoRA
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.05,
r=8,
bias="none",
target_modules=["q_proj", "v_proj"],
task_type="CAUSAL_LM",
)
args = SFTConfig(
output_dir="qwen2-7b-custom-dataset",
num_train_epochs=3,
per_device_train_batch_size=4,
gradient_accumulation_steps=8,
gradient_checkpointing=True,
optim="adamw_torch_fused",
logging_steps=5,
save_strategy="epoch",
learning_rate=2e-4,
bf16=True,
tf32=True,
max_grad_norm=0.3,
warmup_ratio=0.03,
lr_scheduler_type="constant",
push_to_hub=False,
report_to="tensorboard",
dataset_kwargs={"skip_prepare_dataset": False}, # Ensure this matches your needs
)
# Trainer configuration
trainer_config = SFTConfig(
output_dir="qwen2-7b-custom-dataset",
num_train_epochs=3,
per_device_train_batch_size=4,
gradient_accumulation_steps=8,
gradient_checkpointing=True,
optim="adamw_torch_fused",
logging_steps=5,
save_strategy="epoch",
learning_rate=2e-4,
bf16=True,
tf32=True,
max_grad_norm=0.3,
warmup_ratio=0.03,
lr_scheduler_type="constant",
push_to_hub=False,
report_to="tensorboard"
)
trainer = SFTTrainer(
model=model,
args=args,
train_dataset=dataset, # Pass the unformatted dataset directly
data_collator=lambda examples: collate_fn(examples, processor),
peft_config=peft_config,
tokenizer=processor.tokenizer,
formatting_func=format_data, # Pass your custom formatting function here
)
# Training the model
trainer.train()
trainer.save_model(trainer_config.output_dir)
# Clear GPU memory
del model, trainer
torch.cuda.empty_cache()
</code></pre>
<p>But I am getting the following error:</p>
<pre><code>> Loading checkpoint shards: 100%|██████████| 5/5 [00:05<00:00,
> 1.02s/it] Some parameters are on the meta device because they were offloaded to the cpu. Map: 0%| | 0/8733 [00:00<?, ?
> examples/s] Context: Image Path: Questions:
>
> --------------------------------------------------------------------------- ValueError Traceback (most recent call
> last) Cell In[26], line 247
> 228 # Trainer configuration
> 229 trainer_config = SFTConfig(
> 230 output_dir="qwen2-7b-custom-dataset",
> 231 num_train_epochs=3, (...)
> 245 report_to="tensorboard"
> 246 )
> --> 247 trainer = SFTTrainer(
> 248 model=model,
> 249 args=args,
> 250 train_dataset=dataset, # Pass the unformatted dataset directly
> 251 data_collator=lambda examples: collate_fn(examples, processor),
> 252 peft_config=peft_config,
> 253 tokenizer=processor.tokenizer,
> 254 formatting_func=format_data, # Pass your custom formatting function here
> 255 )
> 257 # Training the model
> 258 trainer.train()
>
> File
> c:\Users\ehsan\AppData\Local\Programs\Python\Python38\lib\site-packages\huggingface_hub\utils\_deprecation.py:101,
> in
> _deprecate_arguments.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args,
> **kwargs)
> 99 message += "\n\n" + custom_message
> 100 warnings.warn(message, FutureWarning) ... 3088 "text input must be of type `str` (single example), `List[str]` (batch
> or single pretokenized example) " 3089 "or
> `List[List[str]]` (batch of pretokenized examples)." 3090 )
>
> ValueError: text input must be of type `str` (single example),
> `List[str]` (batch or single pretokenized example) or
> `List[List[str]]` (batch of pretokenized examples).
</code></pre>
|
<python><large-language-model>
|
2025-01-23 10:43:48
| 0
| 407
|
Filbadeha
|