Dataset columns (name | dtype | min | max):
QuestionId | int64 | 74.8M | 79.8M
UserId | int64 | 56 | 29.4M
QuestionTitle | stringlengths | 15 | 150
QuestionBody | stringlengths | 40 | 40.3k
Tags | stringlengths | 8 | 101
CreationDate | stringdate | 2022-12-10 09:42:47 | 2025-11-01 19:08:18
AnswerCount | int64 | 0 | 44
UserExpertiseLevel | int64 | 301 | 888k
UserDisplayName | stringlengths | 3 | 30
79,491,408
5,660,533
AsyncMock not working when mocking SQLAlchemy async execution in FastAPI
<p>I'm testing a FastAPI service that queries a database using SQLAlchemy's async execution. However, when I mock the database query, execute().scalars().all() returns an empty list instead of the expected [1, 2, 3].</p> <pre><code>async def retrieve_relevant_docs(session: AsyncSession, query: str, top_k: int): results = await session.execute(select(Document.id).limit(top_k)) print(&quot;debug results&quot;, vars(results)) # Debugging document_ids = list(await results.scalars()) print(&quot;document_ids&quot;, document_ids) # Debugging return document_ids </code></pre> <pre><code>import pytest from unittest.mock import AsyncMock @pytest.mark.asyncio async def test_retrieve_relevant_docs(): mock_session = AsyncMock() mock_execute = AsyncMock() # Attempt to mock scalars() to return values mock_scalars_selected = AsyncMock() mock_scalars_selected.scalars.return_value.all.return_value = [1, 2, 3] mock_execute.side_effect = [mock_scalars_selected, mock_scalars_selected] mock_session.execute = mock_execute # Mocking session.execute # Calling the function document_ids = await retrieve_relevant_docs(mock_session, &quot;test query&quot;, top_k=3) # Debugging print(&quot;Final document_ids:&quot;, document_ids) assert document_ids == [1, 2, 3] # This fails because document_ids is [] </code></pre> <p>Observed Behavior: results.scalars().all() unexpectedly returns [], even though I attempted to mock it. Debugging vars(results) shows _mock_side_effect = None, suggesting the mock isn't working as expected. Expected Behavior: document_ids should contain [1, 2, 3], matching the mocked return value</p> <p>What I've Tried:</p> <ul> <li>Explicitly setting scalars().all().return_value = [1, 2, 3].</li> <li>Checking vars(results) for missing attributes.</li> <li>Ensuring mock_execute.side_effect is properly assigned.</li> <li>Calling await session.execute(...).scalars().all() instead of wrapping it in list().</li> </ul> <p>Extra Debug logs:</p> <blockquote> <p>selected_ids_result debug results {'_mock_return_value': sentinel.DEFAULT, '_mock_parent': None, '_mock_name': None, '_mock_new_name': '()', '_mock_new_parent': , '_mock_sealed': False, '_spec_class': None, '_spec_set': None, '_spec_signature': None, '_mock_methods': None, '_spec_asyncs': [], '_mock_children': {'scalars': , '<strong>str</strong>': }, '_mock_wraps': None, '_mock_delegate': None, '_mock_called': False, '_mock_call_args': None, '_mock_call_count': 0, '_mock_call_args_list': [], '_mock_mock_calls': [call.<strong>str</strong>(), call.scalars(), call.scalars().all()], 'method_calls': [call.scalars()], '_mock_unsafe': False, '_mock_side_effect': None, '_is_coroutine': &lt;object object at 0x0000029C06E16A30&gt;, '_mock_await_count': 0, '_mock_await_args': None, '_mock_await_args_list': [], '<strong>code</strong>': , '<strong>str</strong>': } document_ids [] <strong>AssertionError: assert [] == [1, 2, 3]</strong></p> </blockquote> <p>What is the correct way to mock SQLAlchemy's async execution (session.execute().scalars().all()) in a FastAPI test using AsyncMock?</p>
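A minimal sketch of one way to make this pattern mockable, assuming the service code only awaits `session.execute()` and then calls the synchronous `scalars().all()` on the returned `Result` (the usual SQLAlchemy 2.0 async shape). The key point is that the result object should be a `MagicMock`, not an `AsyncMock`, because `scalars()` and `all()` are never awaited. The function name `retrieve_ids` is illustrative, not from the original post.

```python
import pytest
from unittest.mock import AsyncMock, MagicMock


async def retrieve_ids(session):
    # Stand-in for the service code: only execute() is awaited,
    # scalars()/all() are plain synchronous calls on the Result.
    result = await session.execute("SELECT id FROM documents")
    return result.scalars().all()


@pytest.mark.asyncio
async def test_retrieve_ids_with_mocked_session():
    # The awaited call (execute) lives on an AsyncMock; the object it resolves
    # to (the Result) is a MagicMock because nothing on it is awaited.
    mock_result = MagicMock()
    mock_result.scalars.return_value.all.return_value = [1, 2, 3]

    mock_session = AsyncMock()
    mock_session.execute.return_value = mock_result  # await session.execute(...) -> mock_result

    assert await retrieve_ids(mock_session) == [1, 2, 3]
```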
<python><asynchronous><sqlalchemy><mocking><fastapi>
2025-03-07 06:07:36
0
740
Rohit Kumar
79,491,327
8,896,855
boto3 resource connection fail
<p>I haven't found any mention anywhere of a similar problem, but I am having difficulty connecting to an AWS SQS queue when using the boto3 library. When I connect to a boto3 resource in a program (polling application), it exits unexpectedly without any reported error. However, when I access the same queue in an interactive Python console, it works perfectly every time. In the following code, the first message &quot;Made it to the queue data&quot; is logged, and also the printout of the actual boto3.resource() call with the parameters printed. When I copy and paste this into iPython, it executes just fine. In the script though, it fails, and the final line/message is not logged.</p> <p>Any insight would be appreciated.</p> <pre><code>logging.info(&quot;Made it to get_queue_data&quot;) logging.info(f&quot;&quot;&quot; client = boto3.resource('sqs', region_name='{AWS_REGION}', aws_access_key_id='{AWS_KEY}', aws_secret_access_key='{AWS_SECRET}') &quot;&quot;&quot;) # This line leads the program to exit without a reported error. client = boto3.resource('sqs', region_name=AWS_REGION, aws_access_key_id=AWS_KEY, aws_secret_access_key=AWS_SECRET) logging.info(&quot;Created SQS client.&quot;) </code></pre>
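A debugging sketch rather than a fix, assuming the silent exit is either an exception swallowed by the surrounding polling framework or a hard interpreter crash; it tries to make both visible. `boto3.set_stream_logger` and `faulthandler` are standard APIs; the wrapper function name is hypothetical.

```python
import faulthandler
import logging
import traceback

import boto3

faulthandler.enable()                                  # print a traceback on hard crashes
boto3.set_stream_logger("botocore", logging.DEBUG)     # verbose credential/endpoint logging


def build_sqs_resource(region, key, secret):
    try:
        return boto3.resource(
            "sqs",
            region_name=region,
            aws_access_key_id=key,
            aws_secret_access_key=secret,
        )
    except BaseException:                              # also catches SystemExit/KeyboardInterrupt
        logging.error("boto3.resource failed:\n%s", traceback.format_exc())
        raise
```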
<python><boto3>
2025-03-07 05:16:20
1
608
AlecZ
79,491,308
1,629,527
Airbyte and langchain dependencies not compatible
<p>I am using <em>poetry</em> to manage the python project dependencies. When I try to install airbyte using command <code>poetry add airbyte</code> it throws below error:</p> <pre><code>Because no versions of airbyte-cdk match &gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.33.5 || &gt;6.33.5,&lt;6.33.6 || &gt;6.33.6,&lt;6.33.7 || &gt;6.33.7,&lt;6.34.0 || &gt;6.34.0,&lt;6.34.1 || &gt;6.34.1,&lt;6.35.0 || &gt;6.35.0,&lt;6.36.0 || &gt;6.36.0,&lt;6.36.1 || &gt;6.36.1,&lt;6.36.2 || &gt;6.36.2,&lt;6.36.3 || &gt;6.36.3,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0 and airbyte-cdk (6.36.3) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.33.5 || &gt;6.33.5,&lt;6.33.6 || &gt;6.33.6,&lt;6.33.7 || &gt;6.33.7,&lt;6.34.0 || &gt;6.34.0,&lt;6.34.1 || &gt;6.34.1,&lt;6.35.0 || &gt;6.35.0,&lt;6.36.0 || &gt;6.36.0,&lt;6.36.1 || &gt;6.36.1,&lt;6.36.2 || &gt;6.36.2,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.36.2) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.33.5 || &gt;6.33.5,&lt;6.33.6 || &gt;6.33.6,&lt;6.33.7 || &gt;6.33.7,&lt;6.34.0 || &gt;6.34.0,&lt;6.34.1 || &gt;6.34.1,&lt;6.35.0 || &gt;6.35.0,&lt;6.36.0 || &gt;6.36.0,&lt;6.36.1 || &gt;6.36.1,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.36.1) depends on langchain_core (0.1.42) and airbyte-cdk (6.36.0) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.33.5 || &gt;6.33.5,&lt;6.33.6 || &gt;6.33.6,&lt;6.33.7 || &gt;6.33.7,&lt;6.34.0 || &gt;6.34.0,&lt;6.34.1 || &gt;6.34.1,&lt;6.35.0 || &gt;6.35.0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). 
And because airbyte-cdk (6.35.0) depends on langchain_core (0.1.42) and airbyte-cdk (6.34.1) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.33.5 || &gt;6.33.5,&lt;6.33.6 || &gt;6.33.6,&lt;6.33.7 || &gt;6.33.7,&lt;6.34.0 || &gt;6.34.0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.34.0) depends on langchain_core (0.1.42) and airbyte-cdk (6.33.7) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.33.5 || &gt;6.33.5,&lt;6.33.6 || &gt;6.33.6,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.33.6) depends on langchain_core (0.1.42) and airbyte-cdk (6.33.5) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.33.3 || &gt;6.33.3,&lt;6.33.4 || &gt;6.33.4,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.33.4) depends on langchain_core (0.1.42) and airbyte-cdk (6.33.3) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.33.1 || &gt;6.33.1,&lt;6.33.2 || &gt;6.33.2,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.33.2) depends on langchain_core (0.1.42) and airbyte-cdk (6.33.1) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.32.0 || &gt;6.32.0,&lt;6.33.0 || &gt;6.33.0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.33.0) depends on langchain_core (0.1.42) and airbyte-cdk (6.32.0) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.0 || &gt;6.31.0,&lt;6.31.1 || &gt;6.31.1,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). 
And because airbyte-cdk (6.31.1) depends on langchain_core (0.1.42) and airbyte-cdk (6.31.0) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.29.1 || &gt;6.29.1,&lt;6.30.0 || &gt;6.30.0,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.30.0) depends on langchain_core (0.1.42) and airbyte-cdk (6.29.1) depends on langchain_core (0.1.42), airbyte-cdk (&gt;6.28.0,&lt;6.29.0 || &gt;6.29.0,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.29.0) depends on langchain_core (0.1.42) and airbyte-cdk (6.28.0) depends on langchain_core (0.1.42), airbyte-cdk (&gt;=6.28.0,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;6.37.0.dev1 || &gt;6.37.0.dev1,&lt;6.37.2.dev1 || &gt;6.37.2.dev1,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.37.2.dev1) depends on langchain_core (0.1.42) and airbyte-cdk (6.37.0.dev1) depends on langchain_core (0.1.42), airbyte-cdk (&gt;=6.28.0,&lt;6.31.2.dev0 || &gt;6.31.2.dev0,&lt;6.37.0.dev0 || &gt;6.37.0.dev0,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte-cdk (6.37.0.dev0) depends on langchain_core (0.1.42) and airbyte-cdk (6.31.2.dev0) depends on langchain_core (0.1.42), airbyte-cdk (&gt;=6.28.0,&lt;7.0.0) requires langchain_core (0.1.42). And because airbyte (0.23.0) depends on airbyte-cdk (&gt;=6.28.0,&lt;7.0.0) and langchain (0.3.9) depends on langchain-core (&gt;=0.3.21,&lt;0.4.0), airbyte (0.23.0) is incompatible with langchain (0.3.9). So, because ai depends on both langchain (0.3.9) and airbyte (0.23.0), version solving failed. </code></pre> <p>It seems like it has an conflict with langchain dependencies. My pyproject.toml is as below:</p> <pre><code>[tool.poetry] name = &quot;ai&quot; version = &quot;1.0.10&quot; description = &quot;&quot; authors = [&quot;Your Name &lt;you@example.com&gt;&quot;] readme = &quot;README.md&quot; packages = [ { include = &quot;app&quot; }, ] [tool.poetry.dependencies] python = &quot;&gt;=3.11.0,&lt;3.12.0&quot; pymupdf = &quot;^1.23.26&quot; uvicorn = &quot;^0.23.2&quot; langserve = {extras = [&quot;server&quot;], version = &quot;&gt;=0.1.1&quot;} pydantic = &quot;^2&quot; supabase = &quot;^2.4.5&quot; tiktoken = &quot;0.7&quot; langchain = &quot;0.3.9&quot; langchainhub = &quot;0.1.15&quot; tavily-python = &quot;^0.3.1&quot; fastapi-health = &quot;^0.4.0&quot; pandas = &quot;^2.2.0&quot; unstructured = {extras = [&quot;all-docs&quot;], version = &quot;^0.13.6&quot;} python-json-logger = &quot;^2.0.7&quot; azure-search-documents = &quot;^11.5.1&quot; langgraph = &quot;0.2.35&quot; langchain-openai = &quot;0.3.6&quot; langchain-community = &quot;0.3.2&quot; langchain-experimental = &quot;0.3.2&quot; azure-ai-documentintelligence = &quot;1.0.0b4&quot; lancedb = &quot;0.17.0&quot; clickhouse-connect = &quot;0.8.11&quot; langchain-cli = &quot;0.0.35&quot; python-dotenv = &quot;^1.0.1&quot; apache-airflow = &quot;2.10.5&quot; [tool.poetry.group.dev.dependencies] pytest = &quot;^8.3.3&quot; pytest-mock = &quot;^3.14.0&quot; pytest-asyncio = &quot;^0.21.0&quot; pytest-timeout = &quot;^2.1.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre>
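The resolver output reduces to: every `airbyte-cdk` release below 7.0.0 pins `langchain_core==0.1.42`, while `langchain 0.3.9` needs `langchain-core>=0.3.21,<0.4.0`, so the two cannot be installed into one environment. One hedged workaround is to keep PyAirbyte in its own virtual environment and call it as a subprocess from the main project; the venv path and script name below are assumptions for illustration.

```python
import json
import subprocess

# Assumed: a separate venv containing only `airbyte` and its pinned dependencies,
# plus a small script in that venv that writes extracted records as JSON to stdout.
AIRBYTE_VENV_PYTHON = "/opt/venvs/airbyte/bin/python"
EXTRACT_SCRIPT = "extract_with_airbyte.py"


def run_airbyte_extraction(source_config: dict) -> list[dict]:
    """Run the Airbyte extraction in the isolated interpreter and parse its output."""
    proc = subprocess.run(
        [AIRBYTE_VENV_PYTHON, EXTRACT_SCRIPT],
        input=json.dumps(source_config),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)
```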
<python><langchain><python-poetry><airbyte>
2025-03-07 05:01:44
0
6,210
dd619
79,491,294
17,275,378
SqlAlchemy Table Object Doesn't Synchronise with BigQuery
<p>I'm reading data from <code>Google BigQuery</code> into a <code>polars</code> dataframe. Using a string query succeeds. I'd prefer to use an alchemy statement. Using <code>python-bigquery-sqlalchemy</code> provided by Google and following their <a href="https://github.com/googleapis/python-bigquery-sqlalchemy" rel="nofollow noreferrer">SDK instructions</a> fails.</p> <h4>This Method Works</h4> <pre><code>import polars as pl from sqlalchemy import create_engine project = &quot;my-project-name&quot; schema = &quot;bigquery-public-data&quot; dataset = &quot;pypi&quot; table = &quot;file_downloads&quot; full_path = f&quot;{schema}.{dataset}.{table}&quot; engine = create_engine(f&quot;bigquery://{project}&quot;) query = f&quot;SELECT * FROM {full_path} LIMIT 100&quot; df = pl.read_database(query = query, connection=engine) df # prints output </code></pre> <h4>This Method Fails</h4> <p>I followed the instructions linked above. Google appear to use an older version of alchemy, where it is not necessary to pass the <code>MetaData</code> object when instantiating a <code>Table</code>. So I've added that in.</p> <pre><code>from sqlalchemy import MetaData, select, Table meta_data = MetaData() sample_table = Table( 'file_downloads', meta_data, schema = f&quot;{schema}.{dataset}&quot;, ) query = select(sample_table).limit(100) df = pl.read_database(query, connection = engine) </code></pre> <p>Returns error:</p> <blockquote> <p>DatabaseError: (google.cloud.bigquery.dbapi.exceptions.DatabaseError) 400 POST <a href="https://bigquery.googleapis.com/bigquery/v2/projects/my-project-name/queries?prettyPrint=false" rel="nofollow noreferrer">https://bigquery.googleapis.com/bigquery/v2/projects/my-project-name/queries?prettyPrint=false</a>: Syntax error: SELECT list must not be empty at [2:1] [SQL: SELECT FROM <code>bigquery-public-data.pypi</code>.<code>file_downloads</code>]</p> </blockquote> <p>I believe the alchemy has failed to synchronise the table object with the database. Normally when I use alchemy to query a database where I'm not also using it to define the schema, I'd perform:</p> <pre><code>conn = engine.connect() meta_data.reflect(conn) sample_table = meta_data.tables['file_downloads'] </code></pre> <p>That is not realistic here. It took more than an hour before I killed it. I presume it was synchronising the entire public datasets catalogue, which I don't want.</p> <h4>My Versions</h4> <ul> <li>Python 3.13.1</li> <li>polars == 1.24.0</li> <li>sqlalchemy == 2.0.38</li> <li>sqlalchemy-bigquery == 1.12.1</li> </ul> <p>What's missing? I'm going around in circles.</p>
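A sketch of one likely fix: the empty SELECT list suggests the `Table` has no columns because it was never reflected, and `autoload_with=engine` asks SQLAlchemy to reflect just that one table rather than the whole public catalogue the way `MetaData.reflect()` does. This reuses the engine and names from the question.

```python
import polars as pl
from sqlalchemy import MetaData, Table, create_engine, select

project = "my-project-name"
schema = "bigquery-public-data"
dataset = "pypi"

engine = create_engine(f"bigquery://{project}")
meta_data = MetaData()

sample_table = Table(
    "file_downloads",
    meta_data,
    schema=f"{schema}.{dataset}",
    autoload_with=engine,          # reflect only this table's columns
)

query = select(sample_table).limit(100)   # SELECT list is now populated
df = pl.read_database(query, connection=engine)
```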
<python><google-bigquery><sqlalchemy><python-polars>
2025-03-07 04:50:41
1
326
eldrly
79,491,182
8,278,075
FastAPI using GET with alias params
<p>I want to serve a GET endpoint with a Pydantic BaseModel but a body is expected in the GET.</p> <pre class="lang-py prettyprint-override"><code>def convert_to_camel_case(string: str) -&gt; str: new_word = [] splitted = string.split(&quot;_&quot;) new_word.append(splitted[0]) for word in splitted[1:]: new_word.append(word.capitalize()) new_word = &quot;&quot;.join(new_word) return new_word class PersonRequest(BaseModel): model_config = ConfigDict( # For generating alias in bulk. alias_generator=convert_to_camel_case, # Allow both alias and original field names to be used. populate_by_name=True, ) # Using snake case on the server but client can use either. birth_date: datetime.date first_name: str last_name: str @app.get(&quot;/&quot;) # async def create_alias(person: PersonRequest = Query(PersonRequest)): # This works BUT loses ability to take both camel and snake case. async def create_alias(person: PersonRequest): # Expects a &quot;body&quot; like in POST. try: print(person) return {&quot;message&quot;: person} except ClientError as e: raise HTTPException(status_code=500, detail=f&quot;Error creating alias: {str(e)}.&quot;) </code></pre> <p>My test request:</p> <p>Request can have mixed either camel or snake case.</p> <pre><code>GET http://localhost:8000 { &quot;birthDate&quot;: &quot;1999-01-01&quot;, &quot;first_name&quot;: &quot;Tyler&quot;, &quot;lastName&quot;: &quot;Durden&quot; } </code></pre> <p>I'm using 2 concepts which work fine independently but when I use at the same time, the result is not what I expect.</p> <p>I know the quick solution is for the client to use camel case which is usual anyway but I'm wondering why the alias function seems to work in once case but not both. It seems when using <code>Query()</code> makes <code>populate_by_name</code> useless.</p>
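One hedged way to accept both camelCase and snake_case on a GET is to bypass FastAPI's `Query()` handling and let Pydantic itself resolve the aliases from the raw query string inside a dependency. The sketch below rebuilds a minimal `PersonRequest` using Pydantic's built-in `to_camel` generator; error handling is deliberately simplified.

```python
import datetime

from fastapi import Depends, FastAPI, HTTPException, Request
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel

app = FastAPI()


class PersonRequest(BaseModel):
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)
    birth_date: datetime.date
    first_name: str
    last_name: str


def person_from_query(request: Request) -> PersonRequest:
    # Pydantic resolves aliases here, so either birthDate or birth_date is accepted.
    try:
        return PersonRequest.model_validate(dict(request.query_params))
    except Exception as exc:   # pydantic.ValidationError in practice
        raise HTTPException(status_code=422, detail=str(exc))


@app.get("/")
async def create_alias(person: PersonRequest = Depends(person_from_query)):
    return {"message": person}
```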
<python><fastapi><pydantic>
2025-03-07 03:16:18
0
3,365
engineer-x
79,491,063
9,232,275
LangChain AgentExecutor invoke() doesn't return multiline output
<p>I found that I'm only seeing the result as the last message output, instead of the full response. Here's an example that surprised me.</p> <p>I'm new to LangChain (though I've use LlamaIndex). Is it the case that this is just deprecated now in favor of LangGraph, or should I be always using streaming since one never knows what the output may look like, or am I just using the wrong calls or arguments?</p> <p>In this code we try a simple tool-use (function call) prompt that just returns test output:</p> <pre><code>#!/usr/bin/env python3 import os import logging from dotenv import load_dotenv from langchain_core.tools import Tool from langchain_anthropic import ChatAnthropic from langchain_core.prompts import ChatPromptTemplate from langchain.agents import AgentExecutor from langchain.agents import create_tool_calling_agent from langchain_core.prompts import MessagesPlaceholder load_dotenv() logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) def web_fetcher(url: str) -&gt; str: &quot;&quot;&quot;Fetches a URL and returns the HTML content.&quot;&quot;&quot; logger.info(f&quot;Fetching url={url}&quot;) return &quot;&lt;html&gt;&lt;body&gt;&lt;h1&gt;Hello, world!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;&quot; return out[0].get('text') def run(llm): web_fetcher_tool = Tool( name=&quot;web_fetcher&quot;, description=&quot;Fetches content from a given URL. Input should be a valid URL.&quot;, func=web_fetcher, ) tools=[web_fetcher_tool] agent = create_tool_calling_agent( llm=llm, tools=tools, prompt=ChatPromptTemplate.from_messages([ # (&quot;system&quot;, &quot;You are a helpful assistant that can use tools to help with tasks.&quot;), (&quot;system&quot;, &quot;You are an agent that can use tools to help with tasks. Your function is to closely follow instructions.&quot;), (&quot;user&quot;, &quot;&quot;&quot;Given an input url &quot;{input}&quot;, fetch the web page content and return ONLY the exact raw html results&quot;&quot;&quot;), MessagesPlaceholder(variable_name=&quot;agent_scratchpad&quot;), ]), ) agent_executor = AgentExecutor( agent=agent, tools=tools, verbose=True, ) result = agent_executor.invoke(input={&quot;input&quot;: &quot;https://www.github.com/&quot;}) logger.info(f&quot;result={result}\n result type={type(result)}&quot;) logger.info(f&quot;text={result.get('output', list(dict()))[0].get('text')}&quot;) # result = agent_executor.stream(input={&quot;input&quot;: &quot;https://www.github.com/&quot;}) # for chunk in result: # logger.info(f&quot;chunk={chunk}&quot;) def main(): logger.info(&quot;Starting&quot;) llm = ChatAnthropic( api_key=os.getenv(&quot;ANTHROPIC_API_KEY&quot;), model_name=&quot;claude-3-5-sonnet-20241022&quot;, temperature=0, max_tokens=4096, # haiku can do 4K tokens ) run(llm) logger.info(&quot;Ending&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>In this output, you can see that some of the logging has the full text of the html returned from web_fetcher, but that isn't actually in the returned result object.</p> <pre><code>INFO:__main__:Starting &gt; Entering new AgentExecutor chain... 
INFO:httpx:HTTP Request: POST https://api.anthropic.com/v1/messages &quot;HTTP/1.1 200 OK&quot; Invoking: `web_fetcher` with `https://www.github.com/` responded: [{'text': 'I\'ll help you fetch the web content from &quot;https://www.github.com/&quot; using the web_fetcher tool and return the raw HTML results.', 'type': 'text', 'index': 0}, {'id': 'toolu_01PoCxoc4DrNpzq7Y1tWeypk', 'input': {}, 'name': 'web_fetcher', 'type': 'tool_use', 'index': 1, 'partial_json': '{&quot;__arg1&quot;: &quot;https://www.github.com/&quot;}'}] INFO:__main__:Fetching url=https://www.github.com/ INFO:httpx:HTTP Request: POST https://api.anthropic.com/v1/messages &quot;HTTP/1.1 200 OK&quot; &lt;html&gt;&lt;body&gt;&lt;h1&gt;Hello, world!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;[{'text': &quot;Above is the raw HTML content fetched from github.com. I've provided it exactly as returned by the web_fetcher tool without any modifications or additional commentary.&quot;, 'type': 'text', 'index': 0}] &gt; Finished chain. INFO:__main__:result={'input': 'https://www.github.com/', 'output': [{'text': &quot;Above is the raw HTML content fetched from github.com. I've provided it exactly as returned by the web_fetcher tool without any modifications or additional commentary.&quot;, 'type': 'text', 'index': 0}]} result type=&lt;class 'dict'&gt; INFO:__main__:text=Above is the raw HTML content fetched from github.com. I've provided it exactly as returned by the web_fetcher tool without any modifications or additional commentary. INFO:__main__:Ending </code></pre>
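A hedged sketch building on the `agent`, `tools`, and `logger` defined in the question's `run()` function: `AgentExecutor` can return its intermediate steps, which include each tool's raw output (the HTML string here), so the full content is available even when the model's final message only summarises it.

```python
# Drop-in replacement for the AgentExecutor construction in run(), assuming
# agent, tools, and logger exist as in the question.
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    return_intermediate_steps=True,   # keep (AgentAction, tool_output) pairs in the result
)

result = agent_executor.invoke({"input": "https://www.github.com/"})

for action, observation in result["intermediate_steps"]:
    # observation is the string the tool returned (the raw HTML from web_fetcher)
    logger.info("tool=%s output=%s", action.tool, observation)
```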
<python><langchain><langchain-agents>
2025-03-07 01:42:50
0
402
Charles Thayer
79,491,031
6,227,035
Google Calendar - get an event by ID in python script
<p>I have created an event on Google calendar in python using these lines:</p> <pre><code>service = build('calendar', 'v3', credentials=credentials) now = datetime.datetime.utcnow().isoformat() + 'Z' events_result = service.events().list(calendarId='primary', timeMin=now, maxResults=10, singleEvents=True, orderBy='startTime').execute() events = events_result.get('items', []) event = { 'summary': 'EVENT TEST', 'eventId': 'passengerID', 'colorId': '6', 'location': '800 Howard St., San Francisco, CA 94103', 'description': 'A chance to hear more about Google\'s developer products.', 'start': { 'dateTime': '2025-03-07T09:00:00-08:00', 'timeZone': 'America/Los_Angeles', }, 'end': { 'dateTime': '2025-03-07T17:00:00-08:00', 'timeZone': 'America/Los_Angeles', }, 'recurrence': [ ], 'attendees': [ {'email': 'lpage@example.com'}, {'email': 'sbrin@example.com'}, ], 'reminders': { 'useDefault': False, 'overrides': [ {'method': 'email', 'minutes': 24 * 60}, {'method': 'popup', 'minutes': 10}, ], }, } event = service.events().insert(calendarId='primary', body=event).execute() </code></pre> <p>This works. However, when I try to get the same event by ID using this line:</p> <pre><code>getEvent = service.events().get(calendarId='primary', eventId='passengerID').execute() </code></pre> <p>I get this error:</p> <pre><code>googleapiclient.errors.HttpError: &lt;HttpError 404 when requesting https://www.googleapis.com/calendar/v3/calendars/primary/events/passengerID?alt=json returned &quot;Not Found&quot;. Details: &quot;[{'domain': 'global', 'reason': 'notFound', 'message': 'Not Found'}]&quot;&gt; </code></pre> <p>It seems it cannot find this event. Do you know what I am doing wrong? Thank you!</p>
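A sketch of the usual pattern: `eventId` is not a writable field of the Events resource (custom identifiers go in `id` and must be lowercase base32hex), so the simplest route is to read the server-generated `id` from the insert response and pass that to `events().get()`. `service` and the `event` request body are assumed to be set up as in the question.

```python
# Insert the event and capture the identifier the API generated for it.
created = service.events().insert(calendarId="primary", body=event).execute()
event_id = created["id"]            # server-generated id, not the custom 'passengerID' key
print("created event id:", event_id)

# Fetch the same event back using that id.
fetched = service.events().get(calendarId="primary", eventId=event_id).execute()
print(fetched["summary"])
```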
<python><get><calendar>
2025-03-07 01:13:04
1
1,974
Sim81
79,490,968
6,213,809
Access other columns in ibis across
<p>Is there any way to access other columns inside an ibis aggregate statement using across? I created a bunch of boolean columns using mutate across, but I want to sum another column (<code>df. net_spend_amount</code>) if the boolean column (<code>s.matches(&quot;is_pre|is_post&quot;)</code>) is True.</p> <pre class="lang-py prettyprint-override"><code>(df .pipe(mutate_is_weeks_pre('pre_end_date', 'trn_dt', weeks=[2, 4, 52])) .pipe(mutate_is_weeks_post('post_start_date', 'trn_dt', weeks=[2, 4, 52])) .group_by(&quot;campaign_id&quot;, &quot;modality&quot;) .agg( s.across( selector= s.matches(&quot;is_pre|is_post&quot;), func= table.net_spend_amount.count(_)) ) ) ) </code></pre> <h3>Reproducible example</h3> <h4>Create data, add to connection, mutate boolean columns</h4> <p>Create a dataframe and add it to an ibis connection (duckdb here for simplicity, pyspark in my environment). Then I run some mutates to create my boolean columns.</p> <pre class="lang-py prettyprint-override"><code>import ibis import datetime from ibis import _ from ibis import selectors as s from pandas import DataFrame from random import randrange ibis.options.interactive = True rows = 100 data = { 'campaign_id': [102793] * rows, 'modality': ['INSTORE', 'ONLINE'] * int(rows / 2), 'post_start_date': [datetime.date(2024, 6, 23)] * rows, 'pre_end_date': [datetime.date(2024, 6, 22)] * rows, 'trn_dt': [datetime.date(2024, 6, 8) + datetime.timedelta(days=randrange(50)) for i in range(rows)], 'net_spend_amt': [float(randrange(100)) - 35 + float(randrange(100))/100 for i in range(rows)] } pdf = DataFrame(data) con = ibis.duckdb.connect() df = con.create_table(&quot;test&quot;, obj = pdf, overwrite = True) def mutate_is_weeks_pre(pre_end_date_col, trn_date_col, weeks): def inner(df): def is_weeks_pre(pre_end_date, trn_date, week): days = week * 7 pre_start_date = pre_end_date.sub(ibis.interval(days=days)) return (pre_start_date &lt;= trn_date) &amp; (trn_date &lt;= pre_end_date) return df.mutate(**{ f'is_pre{week}w': is_weeks_pre(df[pre_end_date_col], df[trn_date_col], week) for week in weeks }) return inner def mutate_is_weeks_post(post_start_date_col, trn_date_col, weeks): def inner(df): def is_weeks_post(post_start_date, trn_date, weeks): days = weeks * 7 post_end_date = post_start_date.add(ibis.interval(days=days)) return (post_start_date &lt;= trn_date) &amp; (trn_date &lt;= post_end_date) return df.mutate(**{ f'is_post{week}w': is_weeks_post(df[post_start_date_col], df[trn_date_col], week) for week in weeks }) return inner df = (df .pipe(mutate_is_weeks_pre('pre_end_date', 'trn_dt', weeks=[2, 4, 52])) .pipe(mutate_is_weeks_post('post_start_date', 'trn_dt', weeks=[2, 4, 52])) ) </code></pre> <p><a href="https://i.sstatic.net/KnOmuwfG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnOmuwfG.png" alt="enter image description here" /></a></p> <h4>Run with working across (works)</h4> <p>This is pretty useless because it just counts booleans, but I wanted to make sure I had my across setup correctly</p> <pre class="lang-py prettyprint-override"><code>(df .pipe(mutate_is_weeks_pre('pre_end_date', 'trn_dt', weeks=[2, 4, 52])) .pipe(mutate_is_weeks_post('post_start_date', 'trn_dt', weeks=[2, 4, 52])) .group_by(&quot;campaign_id&quot;, &quot;modality&quot;) .agg( s.across( selector= s.matches(&quot;is_pre|is_post&quot;), func=_.count() ) ) ) </code></pre> <p><a href="https://i.sstatic.net/9dw2xqKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9dw2xqKN.png" alt="enter image description here" 
/></a></p> <h4>Add sum for net spend (does not work)</h4> <p>I can't figure out a way to do this... I couldn't even get it to run when I used <code>df.net_spend_amt</code></p> <pre class="lang-py prettyprint-override"><code>(df .pipe(mutate_is_weeks_pre('pre_end_date', 'trn_dt', weeks=[2, 4, 52])) .pipe(mutate_is_weeks_post('post_start_date', 'trn_dt', weeks=[2, 4, 52])) .group_by(&quot;campaign_id&quot;, &quot;modality&quot;) .agg( s.across( selector= s.matches(&quot;is_pre|is_post&quot;), func=df.net_spend_amt.sum(where=_) ) ) ) </code></pre> <h4>Without across (works)</h4> <p>This solution works, I can loop over and create a bunch of expressions and pass them but I'd like to break apart the logic for the boolean columns from the logic to sum the spend and simply use across to loop if it's possible / practical...</p> <pre class="lang-py prettyprint-override"><code>(df .pipe(mutate_is_weeks_pre('pre_end_date', 'trn_dt', weeks=[2, 4, 52])) .pipe(mutate_is_weeks_post('post_start_date', 'trn_dt', weeks=[2, 4, 52])) .group_by(&quot;campaign_id&quot;, &quot;modality&quot;) .agg( **{'post+2w+sales': _.net_spend_amt.sum(where=((_.post_start_date &lt;= _.trn_dt) &amp; (_.trn_dt &lt;= (_.post_start_date + datetime.timedelta(days=14))))) } ) ) </code></pre> <p><a href="https://i.sstatic.net/CbotGAkr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbotGAkr.png" alt="enter image description here" /></a></p>
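A hedged sketch, assuming the installed ibis version lets `s.across` take a plain callable (not only a deferred expression): the callable receives each selected boolean column, and the other column can be referenced from the enclosing table inside it. `df` here is the mutated table built in the reproducible example above.

```python
from ibis import selectors as s

result = (
    df
    .group_by("campaign_id", "modality")
    .agg(
        s.across(
            s.matches("is_pre|is_post"),
            # `flag` is the selected boolean column; sum spend only where it is True.
            lambda flag: df.net_spend_amt.sum(where=flag),
        )
    )
)
result
```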
<python><dataframe><ibis>
2025-03-07 00:06:07
1
896
Mark Druffel
79,490,852
3,078,473
Selenium Handshake Failed (new version of Selenium)
<pre><code>options1 = webdriver.ChromeOptions() options1.add_argument('--ignore-certificate-errors') options1.add_argument('--ignore-ssl-errors') driver = webdriver.Chrome(options=options1) </code></pre> <p>[14816:10204:0306/172921.348:ERROR:ssl_client_socket_impl.cc(877)] handshake failed; returned -1, SSL error code 1, net_error -101</p>
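For context, the `ssl_client_socket_impl` "handshake failed" line is normally Chrome's own console logging rather than a Selenium failure. A hedged sketch that keeps the certificate flags from the question and quiets the log noise; the target URL is a placeholder.

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--ignore-certificate-errors")
options.add_argument("--ignore-ssl-errors")
options.add_argument("--log-level=3")                                   # only fatal Chrome logs
options.add_experimental_option("excludeSwitches", ["enable-logging"])  # hide driver console spam

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")   # placeholder URL
print(driver.title)
driver.quit()
```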
<python><selenium-webdriver>
2025-03-06 22:37:28
0
419
JackOfAll
79,490,841
5,937,757
Snowflake ML Registry for Model Explainability: `ValueError: Model type <class 'NoneType'> is not supported` when logging a Pipeline
<p>I am training an <strong>XGBClassifier model using Snowflake ML</strong> and attempting to register it using <strong>Snowflake’s model registry</strong>. The training and evaluation steps complete successfully, but when I try to log the model using <code>log_model()</code>, I get the following error:</p> <pre><code>Training Accuracy: 0.9610 Evaluation Accuracy: 0.8730 Traceback (most recent call last): File &quot;C:\Mas\rla_projects\Claims-AI--lodgement\code\python\src\explain\train_model.py&quot;, line 56, in &lt;module&gt; model_version = native_registry.log_model( File &quot;C:\Mas\py39_sf_ml\lib\site-packages\snowflake\ml\_internal\telemetry.py&quot;, line 542, in wrap return ctx.run(execute_func_with_statement_params) ... File &quot;C:\Mas\py39_sf_ml\lib\site-packages\snowflake\ml\model\_packager\model_task\model_task_utils.py&quot;, line 149, in _get_model_task raise ValueError(f&quot;Model type {type(model)} is not supported&quot;) ValueError: Model type &lt;class 'NoneType'&gt; is not supported </code></pre> <p>Code Snippet:</p> <pre class="lang-py prettyprint-override"><code>from snowflake.snowpark import Session from snowflake.ml.registry import registry from snowflake.ml.modeling.preprocessing import StandardScaler from snowflake.ml.modeling.impute import SimpleImputer from snowflake.ml.modeling.pipeline import Pipeline from snowflake.ml.modeling.xgboost import XGBClassifier # Snowflake connection parameters conn_params = { &quot;user&quot;: &quot;&lt;...&gt;&quot;, &quot;account&quot;: &quot;&lt;...&gt;&quot;, &quot;warehouse&quot;: &quot;&lt;...&gt;&quot;, &quot;database&quot;: &quot;&lt;...&gt;&quot;, &quot;schema&quot;: &quot;&lt;...&gt;&quot;, &quot;role&quot;: &quot;&lt;...&gt;&quot;, &quot;authenticator&quot;: &quot;externalbrowser&quot;, } # Create session session = Session.builder.configs(conn_params).create() # Load and prepare data all_data = session.sql(&quot;SELECT *, IFF(CLASS = 'g', 1.0, 0.0) AS LABEL FROM Gamma_Telescope_Data&quot;).drop(&quot;CLASS&quot;) train_data, test_data = all_data.random_split(weights=[0.9, 0.1], seed=0) # Define feature and label columns FEATURE_COLS = [c for c in train_data.columns if c != &quot;LABEL&quot;] LABEL_COLS = [&quot;LABEL&quot;] # Construct pipeline pipeline = Pipeline(steps=[ (&quot;impute&quot;, SimpleImputer(input_cols=FEATURE_COLS, output_cols=FEATURE_COLS)), (&quot;scaler&quot;, StandardScaler(input_cols=FEATURE_COLS, output_cols=FEATURE_COLS)), (&quot;model&quot;, XGBClassifier(input_cols=FEATURE_COLS, label_cols=LABEL_COLS)) ]) # Train the pipeline pipeline.fit(train_data) # Register model native_registry = registry.Registry( session=session, database_name=session.get_current_database(), schema_name=session.get_current_schema() ) model_name = &quot;Gamma_test&quot; version = &quot;V8&quot; model_version = native_registry.log_model( model=pipeline, # &lt;-- This line triggers the error model_name=model_name, version_name=version, sample_input_data=test_data, comment=&quot;Gamma test&quot;, conda_dependencies=[&quot;snowflake-ml-python==1.7.4&quot;, &quot;snowflake-snowpark-python==1.28.0&quot;], options={&quot;enable_explainability&quot;: True} ) </code></pre> <h3><strong>Observations &amp; Debugging Attempts:</strong></h3> <ol> <li><strong>Pipeline Training Works</strong> – <code>pipeline.fit(train_data)</code> runs without errors.</li> <li><strong>Pipeline Predictions Work</strong> – Predictions on training and test data succeed.</li> <li><strong>Model Explanation Works Without Pipeline</strong> – If I train an 
<code>XGBClassifier</code> <strong>without a pipeline</strong>, I can successfully generate predictions and explanations.</li> <li><strong>Session is Active</strong> – <code>session.get_current_database()</code> and <code>session.get_current_schema()</code> return valid values.</li> <li><strong>Feature &amp; Label Columns Look Correct</strong> – <code>FEATURE_COLS</code> and <code>LABEL_COLS</code> contain expected values.</li> </ol> <h3><strong>Additional Context:</strong></h3> <ul> <li>Environment: <ul> <li>Win 10</li> <li>Python 3.9</li> <li>snowflake-connector-python 3.14.0</li> <li>snowflake-ml-python 1.7.4</li> <li>snowflake-snowpark-python 1.28.0</li> </ul> </li> <li>This example is based on <strong>Snowflake’s official documentation</strong>:<br /> <strong><a href="https://docs.snowflake.com/en/developer-guide/snowflake-ml/modeling#feature-preprocessing-and-training-on-non-synthetic-data" rel="nofollow noreferrer">Feature Preprocessing and Training on Non-Synthetic Data</a></strong></li> <li>The documentation suggests using a <code>Pipeline</code>, but it does not provide an example of registering a <code>Pipeline</code>-based model that has explainability.</li> <li>The error message suggests that the model is somehow being treated as <code>NoneType</code> when passed to <code>log_model()</code>.</li> <li>As mentioned the whole pipeline can be registered and retrieved when I remove this <code>options={&quot;enable_explainability&quot;: True}</code> from <code>.log_model</code>.</li> <li>This <code>options={&quot;enable_explainability&quot;: True}</code> works if I don't have a pipeline: <a href="https://stackoverflow.com/questions/79488423/snowflake-ml-there-is-no-method-with-name-explain-available-in-the-model">Snowflake ML: `There is no method with name explain available in the model`</a></li> </ul> <h3><strong>Question:</strong></h3> <p>Is it possible to register a pipeline with <code>options={&quot;enable_explainability&quot;: True}</code>?</p>
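A heavily hedged workaround sketch, not a confirmed API: it assumes the fitted `XGBClassifier` is still reachable as the last element of `pipeline.steps` (mirroring scikit-learn) and that logging the estimator on its own lets the registry infer the model task that `enable_explainability` needs. `pipeline`, `native_registry`, and `test_data` come from the question; the model/version names are illustrative.

```python
# Assumption: ("model", XGBClassifier(...)) is the last step and holds the fitted estimator.
fitted_model = pipeline.steps[-1][1]

model_version = native_registry.log_model(
    model=fitted_model,
    model_name="Gamma_test_model_only",          # illustrative name
    version_name="V8_explain",                   # illustrative version
    sample_input_data=test_data.drop("LABEL"),   # features only, matching the estimator inputs
    comment="XGBClassifier logged without the preprocessing pipeline",
    conda_dependencies=[
        "snowflake-ml-python==1.7.4",
        "snowflake-snowpark-python==1.28.0",
    ],
    options={"enable_explainability": True},
)
```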
<python><machine-learning><snowflake-cloud-data-platform>
2025-03-06 22:28:30
1
423
mas
79,490,818
4,463,825
Function with two variables - Polynomial fit Python
<p>I have various pump performance data, and I am trying to fit curves as a combination of them.</p> <p>In below image, different curves are at different pump speeds (RPM), and the Y axis is Pressure head, and X axis is flow rate.</p> <p>I am trying to figure out the following functions:</p> <pre><code>pressure = func (flowrate, rpm) flowrate = func (pressure, rpm) rpm = func (flowrate, pressure) </code></pre> <p>Usually the pump curves follow some form of affinity laws, as</p> <ol> <li>flow rate is proportional to rpm.</li> <li>pressure is proportional to square of rpm.</li> </ol> <p>However in reality - they do not, and there will be a difference between experimental data &amp; theoretical data. So that is why I am looking to fit a function that looks like this: (open to other suggestions) - like 3rd degree polynomial as well.</p> <pre><code>Y = a*x1**2 + b*x1+ c*x2**2 + d*x2 + e*x1*x2 + f </code></pre> <p>However, I am getting the wrong results with scipy - curve_fit. I am not in favor of employing sklearn regression as we would not be able to figure out the coefficients.</p> <p>Is there a way one can employ this in Python?</p> <p>See my code below so far:</p> <pre><code>from numpy import genfromtxt import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit import pandas as pd # df = pd.read_excel('./nominal_data/pump_curve.xlsx', sheet_name =0) df = pd.DataFrame({ 'Flow(lpm)': [128.0846942, 105.7579874, 95.11146262, 69.57902038, 53.25344504, 35.47260492, np.nan, 131.96, 110.57, 91.32, 73.02, 53.9, 20.41, np.nan, 116.06, 99.46, 83.7, 68.84, 54.47, 20.98, np.nan, 103.0, 87.6, 73.6, 57.8, 44.0, 19.49, np.nan, 86.2, 73.1, 56.0, 42.6, 28.1, 16.33, np.nan, 56.2, 47.1, 38.6, 30.9, 24.0, 12.3], 'Speed Feedback (RPM)': [5204.0, 5208.0, 5206.0, 5206.0, 5176.0, 5175.0, np.nan, 4710.72, 4706.4, 4714.93, 4687.11, 4691.0, 4602.0, np.nan, 4103.21, 4115.26, 4147.8, 4148.14, 4141.09, 4124.72, np.nan, 3675.89, 3657.88, 3673.73, 3671.41, 3675.27, 3664.88, np.nan, 3118.66, 3186.23, 3106.92, 3107.19, 3114.69, 3090.08, np.nan, 2077.44, 2073.23, 2062.01, 2069.37, 2068.02, 2067.91], 'dP (PSI)': [16.5, 25.34, 28.78, 35.45, 37.86, 38.87, np.nan, 8.85, 17.01, 23.42, 27.48, 30.5, 32.4, np.nan, 6.69, 11.84, 17.24, 20.16, 22.64, 25.81, np.nan, 5.2, 9.6, 13.2, 16.3, 18.1, 20.38, np.nan, 3.7, 6.5, 10.0, 12.1, 13.5, 14.54, np.nan, 1.2, 2.7, 3.7, 4.7, 5.2, 6.3] }) flow_lpm = df['Flow(lpm)'].to_numpy() rpm = df['Speed Feedback (RPM)'].to_numpy() dP = df['dP (PSI)'].to_numpy() X = (flow_lpm, rpm) y = dP def f(X, a, b, c, d, e, f): x_1, x_2 = X return a*x_1**2 + b*x_1 + c*x_2**2 + d*x_2 + e*x_1*x_2 + f param, param_cov = curve_fit(f, X, y) # Test a random data point. flow_test = 54 rpm_test = 5195 dp_test = param[0]*flow_test**2 + param[1]*flow_test + param[2]*rpm_test**2 + param[3]**rpm_test + param[4]*flow_test*rpm_test + param[5] print (dp_test) </code></pre> <p>I could have multiple curves of Pressure = function (flow rate) at multiple RPMs like the figure shows. For example there is a 3rd degree curve for a 5100 RPM curve. However it only works for a given RPM. The curve is different for a 2070 RPM situation.</p> <p>Seeking suggestions on proper curve fitting methodology. Open to deleting end points, to ensure the curve fit works in the middle range.</p> <p><a href="https://limewire.com/d/c5e6c2f7-4fcc-4fd9-af60-489c1af61780#mL7y1jKdr4zm14PbGKrNz_GeJIEVBQPpjFt4R2LedFc" rel="nofollow noreferrer">datafile_imagefile</a></p>
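Two things stand out in the code above: the NaN separator rows are passed straight into `curve_fit`, and the manual check computes `param[3]**rpm_test` (a power) where the model uses `d*x2` (a product). A sketch that masks the NaNs and reuses the fitted function for evaluation, using the `flow_lpm`, `rpm`, and `dP` arrays defined above:

```python
import numpy as np
from scipy.optimize import curve_fit


def dp_model(X, a, b, c, d, e, f):
    q, n = X                                   # flow rate, rpm
    return a*q**2 + b*q + c*n**2 + d*n + e*q*n + f


# Drop the NaN separator rows before fitting.
mask = ~(np.isnan(flow_lpm) | np.isnan(rpm) | np.isnan(dP))
X = (flow_lpm[mask], rpm[mask])

params, _ = curve_fit(dp_model, X, dP[mask])

print("coefficients:", params)
# Evaluate with the same function used for fitting (avoids the ** typo).
print("dP at 54 lpm / 5195 rpm:", dp_model((54.0, 5195.0), *params))
```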
<python><scipy><regression><curve-fitting>
2025-03-06 22:15:10
2
993
Jesh Kundem
79,490,815
4,710,409
ValueError: axes don't match array when trying to transpose
<p>I'm fetching a frame correctly from webcam:</p> <pre><code>success, image = self.video.read() </code></pre> <p>But when I try to change the mode of my frame to grayscale using the following code:</p> <pre><code>gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) </code></pre> <p>I get the following error:</p> <pre><code> im = im[..., ::-1].transpose((0, 3, 1, 2)) # BGR to RGB, BHWC to BCHW, (n, 3, h, w) ValueError: axes don't match array </code></pre> <p>What seems to be the problem ?</p> <p>========UPDATE=========== Here is my code:</p> <pre><code>class xyx(object): def __init__(self): ... def __del__(self): self.video.release() def get_frame(self): success, image = self.video.read() print(type(image)) #clean image = np.zeros((480,640,3), dtype=np.uint8) image = get_grayscale(image) ... #... x = ... #... return ... </code></pre> <p>And for the functions:</p> <pre><code>def get_grayscale(img): return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def remove_noise(img): return cv2.medianBlur(img,5) def thresholdImg(img): return cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1] </code></pre> <p>The error is not deep in the code; without converting to grayscale it works. The whole problem is in the conversion from BGR to grayscale.</p>
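The transpose in the traceback (`im[..., ::-1].transpose((0, 3, 1, 2))`) expects a colour image, and a grayscale conversion yields a 2-D array, so whatever consumes the frame downstream no longer matches. A hedged sketch that keeps the grayscale look but restores three channels:

```python
import cv2
import numpy as np


def to_grayscale_3ch(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # (h, w)
    return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)    # (h, w, 3), still looks grayscale


frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in for a webcam frame
processed = to_grayscale_3ch(frame)
print(processed.shape)                               # (480, 640, 3)
```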
<python><numpy><opencv><image-processing><valueerror>
2025-03-06 22:13:01
1
575
Mohammed Baashar
79,490,645
7,437,143
How to write an integration test on an urwid tui?
<p>While trying to test a urwid-based question with input validation and autocomplete suggestions, I am experiencing some difficulties. The code that I am trying to test using <code>pytest</code> is given in the two files below:</p> <h2>InputValidationQuestion.py</h2> <pre class="lang-py prettyprint-override"><code>from typing import List import urwid from typeguard import typechecked from tui_labeller.file_read_write_helper import write_to_file @typechecked def get_filtered_suggestions( *, input_text: str, available_suggestions: List[str] ) -&gt; List[str]: &quot;&quot;&quot; Filter suggestions based on input text, matching from start with wildcard support. Special case: '*' alone shows all available suggestions. Args: input_text (str): The text entered by user, can include '*' as wildcard available_suggestions (list): List of possible suggestion strings Returns: list: Filtered suggestions based on input criteria &quot;&quot;&quot; input_text = input_text.strip() # Special case: if input is '*', return all suggestions if input_text == &quot;*&quot;: return available_suggestions # If no input, return all suggestions if not input_text: return available_suggestions # Handle wildcard case if &quot;*&quot; in input_text: # Split input by wildcard parts = input_text.lower().split(&quot;*&quot;) prefix = parts[0] # What comes before the wildcard # Filter suggestions filtered = [ suggestion for suggestion in available_suggestions if suggestion.lower().startswith(prefix) and all(part in suggestion.lower() for part in parts[1:] if part) ] else: # Original filtering for non-wildcard case filtered = [ suggestion for suggestion in available_suggestions if suggestion.lower().startswith(input_text.lower()) ] # If no matches found, return ['-'] return filtered if filtered else [&quot;-&quot;] class InputValidationQuestion(urwid.Edit): def __init__( self, caption, suggestions=None, autocomplete_box=None, pile=None ): super().__init__(caption=caption) self.suggestions = suggestions or [] self.autocomplete_box = autocomplete_box self.pile = pile self._in_autocomplete: bool = False def handle_autocomplete(self, key, size): &quot;&quot;&quot;Handle autocomplete logic based on input key and suggestions. Args: key: The pressed key size: The size parameter for keypress Returns: The result of keypress or None if handled Raises: ValueError: When autocomplete conditions aren't met &quot;&quot;&quot; if not self.suggestions: write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;self.suggestions={self.suggestions}&quot;, append=True, ) return super().keypress(size, key) # Handle automatic substitution when '*' yields single match if &quot;*&quot; in self.edit_text: matches = [s for s in self.suggestions if self._match_pattern(s)] if len(matches) == 1: self.set_edit_text(matches[0]) self.owner.set_attr_map({None: &quot;normal&quot;}) write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;self.edit_text={self.edit_text}&quot;, append=True, ) return None elif len(matches) == 0: raise ValueError(&quot;No matches found for pattern&quot;) # TODO: do stuff here. 
# If multiple matches, continue to tab handling # Handle tab key press if key == &quot;tab&quot;: matches = [s for s in self.suggestions if self._match_pattern(s)] if len(matches) == 1: self.set_edit_text(matches[0]) self.owner.set_attr_map({None: &quot;normal&quot;}) return None elif len(matches) == 0: raise ValueError(&quot;No matching suggestion found&quot;) else: raise ValueError(&quot;Multiple ambiguous suggestions available&quot;) return super().keypress(size, key) def valid_char(self, ch): return len(ch) == 1 and (ch.isalpha() or ch in [&quot;:&quot;, &quot;*&quot;]) def keypress(self, size, key): write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;key={key}, self.edit_text={self.edit_text}&quot;, append=True, ) if key in [&quot;tab&quot;, &quot;*&quot;]: return self.handle_autocomplete(key, size) elif key == &quot;enter&quot;: return &quot;enter&quot; elif key in (&quot;up&quot;, &quot;down&quot;): if self.pile: current_pos = self.pile.focus_position new_pos = current_pos - 1 if key == &quot;up&quot; else current_pos + 1 if 0 &lt;= new_pos &lt; len(self.pile.contents) - 2: self.pile.focus_position = new_pos focused_widget = self.pile.focus if isinstance(focused_widget, urwid.AttrMap): focused_widget.base_widget.update_autocomplete() return None return key elif key in (&quot;delete&quot;, &quot;backspace&quot;, &quot;left&quot;, &quot;right&quot;): write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;self.edit_text={self.edit_text}&quot;, append=True, ) result = super().keypress(size, key) self.update_autocomplete() return result elif self.valid_char(key): result = super().keypress(size, key) self.update_autocomplete() return result return None def _match_pattern(self, suggestion): pattern = self.edit_text.lower().replace(&quot;*&quot;, &quot;.*&quot;) return bool(re.match(f&quot;^{pattern}$&quot;, suggestion.lower())) def update_autocomplete(self): if self._in_autocomplete: # Prevent recursion return if not self.autocomplete_box: return self._in_autocomplete = True # Set flag try: remaining_suggestions = get_filtered_suggestions( input_text=self.edit_text, available_suggestions=self.suggestions, ) suggestions_text = &quot;, &quot;.join(remaining_suggestions) write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;suggestions_text={suggestions_text}&quot;, append=True, ) self.autocomplete_box.base_widget.set_text(suggestions_text) self.autocomplete_box.base_widget._invalidate() if &quot;*&quot; in self.edit_text: if len(remaining_suggestions) == 1: # Use set_edit_text instead of direct assignment to avoid triggering signals self.set_edit_text(remaining_suggestions[0]) else: self.owner.set_attr_map({None: &quot;normal&quot;}) finally: self._in_autocomplete = False # Reset flag class QuestionApp: def __init__(self): self.questions = [ (&quot;Question 1: &quot;, [&quot;apple&quot;, &quot;apricot&quot;, &quot;avocado&quot;]), (&quot;Question 2: &quot;, [&quot;banana&quot;, &quot;blueberry&quot;, &quot;blackberry&quot;]), (&quot;Question 3: &quot;, [&quot;cat&quot;, &quot;caterpillar&quot;, &quot;cactus&quot;]), ] self.palette = [ (&quot;normal&quot;, &quot;white&quot;, &quot;black&quot;), (&quot;highlight&quot;, &quot;white&quot;, &quot;dark red&quot;), (&quot;autocomplete&quot;, &quot;yellow&quot;, &quot;dark blue&quot;), ] self.autocomplete_box = urwid.AttrMap( urwid.Text(&quot;&quot;, align=&quot;left&quot;), &quot;autocomplete&quot; ) self.pile = urwid.Pile([]) self.inputs = [] for question, suggestions in self.questions: edit = InputValidationQuestion( question, 
suggestions, self.autocomplete_box, self.pile ) attr_edit = urwid.AttrMap(edit, &quot;normal&quot;) edit.owner = attr_edit self.inputs.append(attr_edit) self.pile.contents = [ (self.inputs[0], (&quot;pack&quot;, None)), (self.inputs[1], (&quot;pack&quot;, None)), (self.inputs[2], (&quot;pack&quot;, None)), (urwid.Divider(), (&quot;pack&quot;, None)), ( urwid.Columns( [(30, urwid.Text(&quot;Autocomplete: &quot;)), self.autocomplete_box] ), (&quot;pack&quot;, None), ), ] self.fill = urwid.Filler(self.pile, valign=&quot;top&quot;) self.loop = urwid.MainLoop( self.fill, self.palette, unhandled_input=self.handle_input ) def handle_input(self, key): print(f&quot;Unhandled input: {key}&quot;) write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;Unhandled input: {key}&quot;, append=False ) # TODO: if cursor is at the first question and up is pressed, go to last question. # TODO: if cursor is at the last question and down is pressed, go to first question. raise ValueError(f&quot;STOPPED at:{key}&quot;) def run(self): def update_autocomplete(widget, new_text): widget.update_autocomplete() for input_widget in self.inputs: urwid.connect_signal( input_widget.base_widget, &quot;change&quot;, update_autocomplete ) if self.inputs: self.pile.focus_position = 0 self.inputs[0].base_widget.update_autocomplete() self.loop.run() </code></pre> <h2>InputValidationQuestions.py:</h2> <pre class="lang-py prettyprint-override"><code>import urwid from tui_labeller.file_read_write_helper import write_to_file from tui_labeller.tuis.urwid.InputValidationQuestion import ( InputValidationQuestion, ) class InputValidationQuestions: def __init__(self): self.questions = [ (&quot;Question 1: &quot;, [&quot;apple&quot;, &quot;apricot&quot;, &quot;avocado&quot;]), (&quot;Question 2: &quot;, [&quot;banana&quot;, &quot;blueberry&quot;, &quot;blackberry&quot;]), (&quot;Question 3: &quot;, [&quot;cat&quot;, &quot;caterpillar&quot;, &quot;cactus&quot;]), ] self.palette = [ (&quot;normal&quot;, &quot;white&quot;, &quot;black&quot;), (&quot;highlight&quot;, &quot;white&quot;, &quot;dark red&quot;), (&quot;autocomplete&quot;, &quot;yellow&quot;, &quot;dark blue&quot;), ] self.autocomplete_box = urwid.AttrMap( urwid.Text(&quot;&quot;, align=&quot;left&quot;), &quot;autocomplete&quot; ) self.pile = urwid.Pile([]) self.inputs = [] for question, suggestions in self.questions: edit = InputValidationQuestion( question, suggestions, self.autocomplete_box, self.pile ) attr_edit = urwid.AttrMap(edit, &quot;normal&quot;) edit.owner = attr_edit self.inputs.append(attr_edit) self.pile.contents = [ (self.inputs[0], (&quot;pack&quot;, None)), (self.inputs[1], (&quot;pack&quot;, None)), (self.inputs[2], (&quot;pack&quot;, None)), (urwid.Divider(), (&quot;pack&quot;, None)), ( urwid.Columns( [(30, urwid.Text(&quot;Autocomplete: &quot;)), self.autocomplete_box] ), (&quot;pack&quot;, None), ), ] self.fill = urwid.Filler(self.pile, valign=&quot;top&quot;) self.loop = urwid.MainLoop( self.fill, self.palette, unhandled_input=self.handle_input ) def handle_input(self, key): print(f&quot;Unhandled input: {key}&quot;) write_to_file( filename=&quot;eg.txt&quot;, content=f&quot;Unhandled input: {key}&quot;, append=False ) # TODO: if cursor is at the first question and up is pressed, go to last question. # TODO: if cursor is at the last question and down is pressed, go to first question. 
raise ValueError(f&quot;STOPPED at:{key}&quot;) def run(self): def update_autocomplete(widget, new_text): widget.update_autocomplete() for input_widget in self.inputs: urwid.connect_signal( input_widget.base_widget, &quot;change&quot;, update_autocomplete ) if self.inputs: self.pile.focus_position = 0 self.inputs[0].base_widget.update_autocomplete() self.loop.run() </code></pre> <p>These are ran with:</p> <pre class="lang-py prettyprint-override"><code>from typeguard import typechecked from src.tui_labeller.tuis.urwid.InputValidationQuestions import ( InputValidationQuestions, ) app = InputValidationQuestions() app.run() </code></pre> <h2>Expected behaviour</h2> <p>The class handles incremental input (e.g., typing &quot;a,v,<em>&quot; selects &quot;avocado&quot;) in a TUI app launched with <code>python -m src.tui_labeller -i &lt;image&gt; -o &lt;dir&gt; -t urwid</code>. In the real run, &quot;a,v,</em>&quot; works, and pressing Enter raises a <code>ValueError: STOPPED at:enter</code>, which is expected/fine.</p> <h2>Test behaviour</h2> <p>However, my test fails because the <code>handle_input</code> method raises <code>ValueError: STOPPED at:a</code> on the first keypress (&quot;a&quot;), stopping further processing.</p> <h2>Test Code</h2> <p>Below is the code that attempts to test the code above:</p> <pre class="lang-py prettyprint-override"><code>import urwid import pytest from tui_labeller.tuis.urwid.SUBSTITUECLASSHERE import SUBSTITUECLASSHERE @pytest.fixture def app(): app = SUBSTITUECLASSHERE() app.loop.screen = urwid.raw_display.Screen() return app def test_avocado_selection(app): input_sequence = [&quot;a&quot;, &quot;v&quot;, &quot;*&quot;] for key in input_sequence: app.loop.process_input([key]) if hasattr(app.loop, &quot;unhandled_input&quot;) and app.loop.unhandled_input: app.loop.unhandled_input(key) # How to check &quot;avocado&quot; is selected? Edit widget, ListBox, or custom attribute? def test_enter_error_handling(app): input_sequence = [&quot;a&quot;, &quot;v&quot;, &quot;*&quot;, &quot;enter&quot;] with pytest.raises(ValueError, match=&quot;STOPPED at:enter&quot;): for key in input_sequence: app.loop.process_input([key]) if hasattr(app.loop, &quot;unhandled_input&quot;) and app.loop.unhandled_input: app.loop.unhandled_input(key) </code></pre> <p>The test fails with:</p> <pre><code>ValueError: STOPPED at:a AssertionError: Regex pattern 'STOPPED at:enter' did not match 'STOPPED at:a' </code></pre> <h2>Question</h2> <p>How do I modify the test to:</p> <ol> <li>Process &quot;a,v,*&quot; to select &quot;avocado&quot; without raising errors early,</li> <li>Verify the selection (what widget/attribute holds it?)</li> </ol>
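One observation, offered as a hedged sketch rather than a confirmed fix: the test passes every key to `unhandled_input` by hand, so the very first key already raises `STOPPED at:a`. Feeding keys only through `process_input` (which routes them to the focused widget) and then asserting on the Edit widget's `edit_text` is closer to what the real loop does. Widget and attribute names come from the classes above; the sketch assumes `process_input` can size the widgets via the `raw_display` screen.

```python
import pytest
import urwid

from tui_labeller.tuis.urwid.InputValidationQuestions import InputValidationQuestions


@pytest.fixture
def app():
    app = InputValidationQuestions()
    app.loop.screen = urwid.raw_display.Screen()
    return app


def test_avocado_autocomplete(app):
    app.pile.focus_position = 0                 # focus Question 1, as run() does
    app.loop.process_input(["a", "v", "*"])     # no manual unhandled_input call

    first_edit = app.inputs[0].base_widget      # unwrap the AttrMap around the Edit
    assert first_edit.edit_text == "avocado"
```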
<python><testing><integration-testing><tui><urwid>
2025-03-06 20:40:54
1
2,887
a.t.
79,490,610
2,700,041
How do I get google/gemma-2-2b to strictly follow my prompt in Hugging Face Transformers?
<p>I'm using the following code to send a prompt to the &quot;google/gemma-2-2b&quot; model via Hugging Face's Transformers pipeline:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline import torch HUGGINGFACE_TOKEN = &quot;&lt;my-token&gt;&quot; model_name = &quot;google/gemma-2-2b&quot; tokenizer = AutoTokenizer.from_pretrained(model_name, token=HUGGINGFACE_TOKEN) model = AutoModelForCausalLM.from_pretrained( model_name, device_map=&quot;auto&quot;, torch_dtype=torch.float16, token=HUGGINGFACE_TOKEN ) text_generator = pipeline(&quot;text-generation&quot;, model=model, tokenizer=tokenizer, token=HUGGINGFACE_TOKEN) prompt = &quot;What is the capital of France? Just select an option. Choose only one option from the following A) Paris B) London C) Delhi 4) Goa&quot; output = text_generator(prompt, max_new_tokens=100) print(output) </code></pre> <p><strong>Expected output</strong>:</p> <pre><code>A) Paris </code></pre> <p><strong>Actual output</strong>:</p> <pre><code>[{'generated_text': 'What is the capital of France? Just select an option. Choose only one option from the following A) Paris B) London C) Delhi 4) Goa 5) New York ...'}] </code></pre> <p>The model seems to be echoing the prompt and then generating a long list of options, not strictly following my instructions.</p> <p>How can I modify the prompt or generation parameters so that the model produces output that strictly follows the prompt without just repeating the input? Any suggestions on which settings (e.g., temperature, sampling flags) or prompt modifications can help ensure that the model generates new text according to my instructions?</p>
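A hedged sketch of one common adjustment: `google/gemma-2-2b` is the base checkpoint, which continues text rather than following instructions, so switching to the instruction-tuned `google/gemma-2-2b-it`, wrapping the question in its chat template, and generating greedily with a small token budget tends to keep the answer to a single option. The token value is a placeholder, as in the question.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

HUGGINGFACE_TOKEN = "<my-token>"   # placeholder
model_name = "google/gemma-2-2b-it"

tokenizer = AutoTokenizer.from_pretrained(model_name, token=HUGGINGFACE_TOKEN)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.float16, token=HUGGINGFACE_TOKEN
)

messages = [{
    "role": "user",
    "content": "What is the capital of France? Answer with only one option: "
               "A) Paris B) London C) Delhi D) Goa",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=10, do_sample=False)  # greedy, short answer
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```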
<python><prompt><large-language-model>
2025-03-06 20:24:34
1
1,427
hanugm
79,490,573
8,075,540
Field validator not called on SQLModel
<p>I have a FastAPI setup of the form:</p> <pre class="lang-py prettyprint-override"><code>class Foo(sqlmodel.SQLModel, table=True): id: typing.Optional[int] = sqlmodel.Field(primary_key=True) data: str @pydantic.field_validator(&quot;data&quot;, mode=&quot;before&quot;) def serialize_dict(cls, value): if isinstance(value, dict): return json.dumps(value) return value @app.post(&quot;/foos&quot;) def create_foo(foo: Foo, session: sqlmodel.Session = fastapi.Depends(get_session)): session.add(foo) session.commit() return fastapi.Response() </code></pre> <p>I then POST</p> <pre class="lang-json prettyprint-override"><code>{ &quot;data&quot;: { &quot;bar&quot;: 5 } } </code></pre> <p>to <code>/foos</code>. However, this is throwing a SQL exception because the <code>data</code> value couldn't be bound. After putting in some logging statements, I discovered that <code>foo.data</code> is a <code>dict</code> and not a <code>str</code>. In addition, I confirmed that my validator is never called.</p> <p>Since <code>SQLModel</code> inherits from <code>pydantic.BaseModel</code>, I would have thought I could use such a validator. What am I missing?</p> <p>This is sqlmodel 0.0.23 with pydantic 2.10.6.</p>
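<p>A hedged workaround sketch, assuming the goal is just to accept a dict in the request body: validate through a plain (non-table) input model, where the validator does run, then build the table model from the already-validated data. <code>FooCreate</code> is a name introduced here purely for illustration:</p> <pre class="lang-py prettyprint-override"><code>class FooCreate(sqlmodel.SQLModel):  # no table=True, so validation runs
    data: str

    @pydantic.field_validator('data', mode='before')
    def serialize_dict(cls, value):
        if isinstance(value, dict):
            return json.dumps(value)
        return value


@app.post('/foos')
def create_foo(payload: FooCreate, session: sqlmodel.Session = fastapi.Depends(get_session)):
    foo = Foo(data=payload.data)
    session.add(foo)
    session.commit()
    return fastapi.Response()
</code></pre>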
<python><pydantic><sqlmodel>
2025-03-06 20:06:58
1
6,906
Daniel Walker
79,490,493
5,660,533
AsyncMock Returns Empty List When Mocking SQLAlchemy Async Execution in FastAPI
<p>I have an app that basically ingests documents, stores their embeddings in vector form, then responds to queries that relate to the ingested docs.</p> <p>I have this retrieval.py function which searches through ingested and selected documents and returns which files have matching vectors.</p> <p>I'm testing a FastAPI service that queries a database using SQLAlchemy's async execution. However, when I mock the database query, execute().scalars().all() returns an empty list instead of the expected [1, 2, 3].</p> <pre><code>import numpy as np from sqlalchemy.ext.asyncio import AsyncSession from sqlalchemy.sql import text from src.backend.database.config import AsyncSessionLocal from src.services.ingestion_service.embedding_generator import EmbeddingGenerator import asyncio class RetrievalService: &quot;&quot;&quot; Handles retrieval of similar documents based on query embeddings. &quot;&quot;&quot; def __init__(self): self.embedding_generator = EmbeddingGenerator() async def retrieve_relevant_docs(self, query: str, top_k: int = 5): &quot;&quot;&quot; Converts the query into an embedding and retrieves the most similar documents asynchronously. &quot;&quot;&quot; async with AsyncSessionLocal() as db: async with db.begin(): # ✅ Generate embedding in a separate thread query_embedding = await self.embedding_generator.generate_embedding(query) print('query_embedding', query_embedding) # ✅ Convert NumPy array to PostgreSQL-compatible format query_embedding_str = &quot;[&quot; + &quot;,&quot;.join(map(str, query_embedding)) + &quot;]&quot; # ✅ Fetch selected document IDs asynchronously selected_ids_result = await db.execute(text(&quot;SELECT document_id FROM selected_documents;&quot;)) print('selected_ids_result', selected_ids_result) selected_ids = (await selected_ids_result.scalars()).all() # ✅ Ensure selected_ids is not empty to prevent SQL errors if not selected_ids: selected_ids = [-1] # Dummy ID to avoid SQL failure # ✅ Execute the vector similarity search query search_query = text(&quot;&quot;&quot; SELECT document_id FROM embeddings WHERE document_id = ANY(:selected_ids) ORDER BY vector &lt;-&gt; CAST(:query_embedding AS vector) LIMIT :top_k; &quot;&quot;&quot;).execution_options(cacheable=False) results = await db.execute( search_query, { &quot;query_embedding&quot;: query_embedding_str, # Pass as string &quot;top_k&quot;: top_k, &quot;selected_ids&quot;: selected_ids, }, ) # document_ids = (await results.scalars()).all()\ print('debug results', vars(results)) document_ids = list(await results.scalars()) print('document_ids', document_ids) return document_ids async def get_document_texts(self, document_ids: list[int]): &quot;&quot;&quot; Fetches the actual document texts for the given document IDs. 
&quot;&quot;&quot; if not document_ids: return [] async with AsyncSessionLocal() as db: async with db.begin(): query = text(&quot;SELECT content FROM documents WHERE id = ANY(:document_ids);&quot;) results = await db.execute(query, {&quot;document_ids&quot;: document_ids}) return (await results.scalars()).all() </code></pre> <p>I have a simple tests file:</p> <pre><code>import pytest from unittest.mock import patch, AsyncMock from src.services.retrieval_service.retrieval import RetrievalService from sqlalchemy.ext.asyncio import AsyncSession import pytest from unittest.mock import patch, AsyncMock from src.services.retrieval_service.retrieval import RetrievalService from sqlalchemy.ext.asyncio import AsyncSession @pytest.mark.asyncio async def test_retrieve_relevant_docs_valid_query(): service = RetrievalService() query = &quot;What is AI?&quot; top_k = 3 with patch.object(service.embedding_generator, 'generate_embedding', new_callable=AsyncMock) as mock_generate_embedding, \ patch.object(AsyncSession, 'execute', new_callable=AsyncMock) as mock_execute: mock_generate_embedding.return_value = [0.1] * 384 # Mock DB query for selected documents mock_scalars_selected = AsyncMock() mock_scalars_selected.scalars.return_value.all.return_value = [1, 2, 3] mock_execute.side_effect = [mock_scalars_selected, mock_scalars_selected] # Mock the `execute` method mock_execute.return_value = mock_scalars_selected # Call the method document_ids = await service.retrieve_relevant_docs(query, top_k) # Assertion assert document_ids == [1, 2, 3], f&quot;Expected [1, 2, 3] but got {document_ids}&quot; @pytest.mark.asyncio async def test_retrieve_relevant_docs_valid_query_1(): service = RetrievalService() query = &quot;What is AI?&quot; top_k = 3 with patch.object(service.embedding_generator, 'generate_embedding', new_callable=AsyncMock) as mock_generate_embedding, \ patch.object(AsyncSession, 'execute', new_callable=AsyncMock) as mock_execute: mock_generate_embedding.return_value = [0.1] * 384 # Mock DB query for selected documents mock_scalars_selected = AsyncMock() mock_scalars_selected.all = AsyncMock(return_value=[1, 2, 3]) mock_execute.side_effect = [AsyncMock(return_value=AsyncMock(scalars=mock_scalars_selected)), AsyncMock(return_value=AsyncMock(scalars=mock_scalars_selected))] document_ids = await service.retrieve_relevant_docs(query, top_k) assert document_ids == [1, 2, 3] @pytest.mark.asyncio async def test_retrieve_relevant_docs_no_selected_docs(): service = RetrievalService() query = &quot;What is AI?&quot; top_k = 3 with patch.object(service.embedding_generator, 'generate_embedding', new_callable=AsyncMock) as mock_generate_embedding, \ patch.object(AsyncSession, 'execute', new_callable=AsyncMock) as mock_execute: mock_generate_embedding.return_value = [0.1] * 384 # Mock DB returning no selected docs mock_scalars_selected = AsyncMock() mock_scalars_selected.all.return_value = [] mock_execute.return_value = mock_scalars_selected document_ids = await service.retrieve_relevant_docs(query, top_k) assert document_ids == [] @pytest.mark.asyncio async def test_retrieve_relevant_docs_empty_query(): service = RetrievalService() query = &quot;&quot; top_k = 3 with patch.object(service.embedding_generator, 'generate_embedding', new_callable=AsyncMock) as mock_generate_embedding, \ patch.object(AsyncSession, 'execute', new_callable=AsyncMock) as mock_execute: mock_generate_embedding.return_value = [0.1] * 384 # Mock DB returning no documents mock_scalars_selected = AsyncMock() 
mock_scalars_selected.all.return_value = [] mock_execute.return_value = mock_scalars_selected document_ids = await service.retrieve_relevant_docs(query, top_k) assert document_ids == [] @pytest.mark.asyncio async def test_get_document_texts_valid_ids(): service = RetrievalService() document_ids = [1, 2, 3] with patch.object(AsyncSession, 'execute', new_callable=AsyncMock) as mock_execute: # Mock query result mock_scalars = AsyncMock() mock_scalars.all.return_value = [&quot;Document 1 text&quot;, &quot;Document 2 text&quot;, &quot;Document 3 text&quot;] mock_execute.return_value = mock_scalars document_texts = await service.get_document_texts(document_ids) assert document_texts == [&quot;Document 1 text&quot;, &quot;Document 2 text&quot;, &quot;Document 3 text&quot;] @pytest.mark.asyncio async def test_get_document_texts_no_ids(): service = RetrievalService() document_ids = [] with patch.object(AsyncSession, 'execute', new_callable=AsyncMock) as mock_execute: document_texts = await service.get_document_texts(document_ids) assert document_texts == [] </code></pre> <p>I have added so much debugging information, but i do not understand why when I am mocking the retreivalservice to have the side effect [1,2,3] and just yield that value after going through the service. I keep getting error that shows that my <code>mock_execute.side_effect</code> is not working at all.</p> <p>These are the logs that get printed:</p> <blockquote> <p>selected_ids_result debug results {'_mock_return_value': sentinel.DEFAULT, '_mock_parent': None, '_mock_name': None, '_mock_new_name': '()', '_mock_new_parent': , '_mock_sealed': False, '_spec_class': None, '_spec_set': None, '_spec_signature': None, '_mock_methods': None, '_spec_asyncs': [], '_mock_children': {'scalars': , '<strong>str</strong>': }, '_mock_wraps': None, '_mock_delegate': None, '_mock_called': False, '_mock_call_args': None, '_mock_call_count': 0, '_mock_call_args_list': [], '_mock_mock_calls': [call.<strong>str</strong>(), call.scalars(), call.scalars().all()], 'method_calls': [call.scalars()], '_mock_unsafe': False, '_mock_side_effect': None, '_is_coroutine': &lt;object object at 0x0000029C06E16A30&gt;, '_mock_await_count': 0, '_mock_await_args': None, '_mock_await_args_list': [], '<strong>code</strong>': , '<strong>str</strong>': } document_ids []</p> </blockquote> <p>and these errors:</p> <blockquote> <p>short test summary info ==================================== FAILED src/tests/unit/test_retrieve_docs.py::test_retrieve_relevant_docs_valid_query - AssertionError: Expected [1, 2, 3] but got [] FAILED src/tests/unit/test_retrieve_docs.py::test_retrieve_relevant_docs_valid_query_1 - AssertionError: assert [] == [1, 2, 3] FAILED src/tests/unit/test_retrieve_docs.py::test_get_document_texts_valid_ids - AssertionError: assert &lt;coroutine object AsyncMockMixin._execute_mock_call at 0x000002373EA3B...</p> </blockquote> <p><strong>Observed Behavior:</strong> results.scalars().all() unexpectedly returns [], even though I attempted to mock it. Debugging vars(results) shows _mock_side_effect = None, suggesting the mock isn't working as expected.</p> <p><strong>Expected Behavior:</strong> document_ids should contain [1, 2, 3], matching the mocked return value.</p> <p><strong>What I've Tried:</strong> Explicitly setting scalars().all().return_value = [1, 2, 3]. Checking vars(results) for missing attributes. Ensuring mock_execute.side_effect is properly assigned. Calling await session.execute(...).scalars().all() instead of wrapping it in list(). 
What is the correct way to mock SQLAlchemy's async execution (session.execute().scalars().all()) in a FastAPI test using AsyncMock? Or Can someone point why my <strong></strong> is not behaving as I expect it to?</p> <p>I feel if i fix one test, all my tests should get fixed the same way. I am not new to python, but very new to sqlalchemy</p>
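<p>For comparison, a minimal mocking sketch that works when the production code treats the result as synchronous, i.e. <code>(await session.execute(...)).scalars().all()</code>, which is how SQLAlchemy's async API behaves (only <code>execute()</code> itself is awaitable). It will not match code that additionally awaits <code>results.scalars()</code>:</p> <pre class="lang-py prettyprint-override"><code>from unittest.mock import AsyncMock, MagicMock

# The Result object is synchronous, so model it with a MagicMock...
result_mock = MagicMock()
result_mock.scalars.return_value.all.return_value = [1, 2, 3]

# ...and only make execute() awaitable.
mock_session = AsyncMock()
mock_session.execute = AsyncMock(return_value=result_mock)

# (await mock_session.execute(...)).scalars().all()  yields [1, 2, 3]
</code></pre>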
<python><sqlalchemy><mocking><pytest><fastapi>
2025-03-06 19:36:40
1
740
Rohit Kumar
79,490,103
72,437
Ensuring Successful File Uploads to S3 Using Presigned URLs
<p>Previously, I used the following approach to allow users to upload files to our cloud storage service:</p> <ol> <li>Server-side (Python): Generate a time-limited presigned upload URL.</li> <li>Client-side (Java): Upload the file using the presigned URL.</li> </ol> <h2>Java Code: Uploading a File</h2> <pre><code>private static boolean upload(String urlAsString, File inputFile, String checksum) { boolean success = false; HttpURLConnection urlConnection = null; FileInputStream fileInputStream = null; OutputStream outputStream = null; try { URL url = new URL(urlAsString); urlConnection = (HttpURLConnection) url.openConnection(); urlConnection.setRequestMethod(PUT); urlConnection.setConnectTimeout(CONNECT_TIMEOUT); urlConnection.setReadTimeout(READ_TIMEOUT); // https://stackoverflow.com/questions/8587913/what-exactly-does-urlconnection-setdooutput-affect urlConnection.setDoOutput(true); // // Checksum // if (checksum != null) { urlConnection.setRequestProperty(&quot;content-md5&quot;, checksum); urlConnection.setRequestProperty(&quot;x-amz-meta-md5&quot;, checksum); } // // Do this before writting to output stream. // final long length = inputFile.length(); if (Build.VERSION.SDK_INT &gt;= Build.VERSION_CODES.KITKAT) { urlConnection.setFixedLengthStreamingMode(length); } urlConnection.setRequestProperty(&quot;Content-Length&quot;, String.valueOf(length)); fileInputStream = new FileInputStream(inputFile); outputStream = urlConnection.getOutputStream(); byte[] buffer = new byte[BUFFER_SIZE]; int bufferLength = 0; while ((bufferLength = fileInputStream.read(buffer)) != -1) { if (bufferLength &gt; 0) { outputStream.write(buffer, 0, bufferLength); } } int responseCode = urlConnection.getResponseCode(); if (responseCode == HttpURLConnection.HTTP_OK) { success = true; } } catch (MalformedURLException e) { Log.e(TAG, &quot;&quot;, e); } catch (IOException e) { Log.e(TAG, &quot;&quot;, e); } finally { close(fileInputStream); close(outputStream); if (urlConnection != null) { urlConnection.disconnect(); } } </code></pre> <h2>Python Code: Generating a Presigned Upload URL</h2> <pre><code>def get_presigned_upload_url(s3_client, customer_id, key, checksum): presigned_upload_url = None if checksum is None: presigned_upload_url = s3_client.generate_presigned_url( ClientMethod='put_object', Params={ 'Bucket': constants.S3_BUCKET_NAME, 'Key': get_user_folder_name(customer_id) + key }, ExpiresIn=constants.EXPIRES_IN ) else: presigned_upload_url = s3_client.generate_presigned_url( ClientMethod='put_object', Params={ 'Bucket': constants.S3_BUCKET_NAME, 'Key': get_user_folder_name(customer_id) + key, 'ContentMD5': checksum, 'Metadata': { 'md5' : checksum } }, ExpiresIn=constants.EXPIRES_IN ) return presigned_upload_url </code></pre> <h2>Issue: HTTP 200 OK, but File Not in S3?</h2> <p>I noticed cases where the client-side upload returns HttpURLConnection.HTTP_OK, but the file does not reach S3. 
(Is this even possible?!)</p> <p>To verify the upload's correctness, I implemented an additional verification step after uploading.</p> <h2>Java Code: Verifying the Upload</h2> <pre><code>private static boolean verifyUpload(String headUrl, String checksum) { boolean success = false; HttpURLConnection headConnection = null; try { headConnection = (HttpURLConnection) new URL(headUrl).openConnection(); headConnection.setRequestMethod(&quot;HEAD&quot;); final int headResponseCode = headConnection.getResponseCode(); if (headResponseCode == HttpURLConnection.HTTP_OK) { final String metaMd5 = headConnection.getHeaderField(&quot;x-amz-meta-md5&quot;); success = checksum.equals(metaMd5); } } catch (MalformedURLException e) { Log.e(TAG, &quot;&quot;, e); } catch (IOException e) { Log.e(TAG, &quot;&quot;, e); } finally { if (headConnection != null) { headConnection.disconnect(); } } return success; } </code></pre> <h2>Python Code: Generating a Presigned HEAD URL for Verification</h2> <pre><code>def get_presigned_head_url(s3_client, customer_id, key): &quot;&quot;&quot; Generate a pre-signed URL to perform a HEAD request on an S3 object. This URL allows the client to check if the upload was successful. &quot;&quot;&quot; presigned_head_url = s3_client.generate_presigned_url( ClientMethod='head_object', Params={ 'Bucket': constants.S3_BUCKET_NAME, 'Key': get_user_folder_name(customer_id) + key }, ExpiresIn=constants.EXPIRES_IN ) return presigned_head_url </code></pre> <h2>Does This Approach Make Sense?</h2> <p>Would this method reliably verify uploads, given that <code>HTTP_OK</code> does not always mean a successful upload to S3? Any feedback would be appreciated.</p>
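<p>One hedged variation: rather than relying only on the presigned HEAD from the device, the backend can confirm the object exists and compare the stored metadata itself with <code>head_object</code> (boto3 returns metadata keys lower-cased). This reuses <code>constants</code> and <code>get_user_folder_name</code> from the server-side code above:</p> <pre class="lang-py prettyprint-override"><code>import botocore.exceptions

def upload_is_verified(s3_client, customer_id, key, checksum):
    try:
        head = s3_client.head_object(
            Bucket=constants.S3_BUCKET_NAME,
            Key=get_user_folder_name(customer_id) + key,
        )
    except botocore.exceptions.ClientError:
        return False  # missing object (404) or other access error
    return head.get('Metadata', {}).get('md5') == checksum
</code></pre>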
<python><java><amazon-s3>
2025-03-06 16:49:40
0
42,256
Cheok Yan Cheng
79,490,056
366,628
compute named quantiles in pandas using groupby aggregate
<p>Among other descriptive statistics, I want to get some quantiles out of my pandas <code>DataFrame</code>. I can get the quantiles I want a couple of different ways, but I can't find the right way to do it with <code>aggregate</code>. I'd like to use aggregate because it'd be tidy and maybe computationally efficient to get all my stats in one go.</p> <pre><code>rng = np.random.default_rng(seed=18860504) df = pd.DataFrame({ &quot;dummy&quot;: 1, &quot;bell&quot;: rng.normal(loc=0, scale=1, size=100), &quot;fish&quot;: rng.poisson(lam=10, size=100), &quot;cabin&quot;: rng.lognormal(mean=0, sigma=1.0, size=100), }) quants = [x/5 for x in range(6)] quantiles = pd.DataFrame({ &quot;quantile&quot; : [f&quot;q{100*q:02n}&quot; for q in quants], &quot;bell&quot; : df.groupby(&quot;dummy&quot;)[&quot;bell&quot;].quantile(quants), &quot;fish&quot; : df.groupby(&quot;dummy&quot;)[&quot;fish&quot;].quantile(quants), }) print(quantiles) </code></pre> <p>Output:</p> <pre><code> quantile bell fish dummy 1 0.0 q000 -2.313461 4.0 0.2 q020 -0.933831 7.0 0.4 q040 -0.246860 9.0 0.6 q060 0.211076 10.0 0.8 q080 0.685958 13.0 1.0 q100 3.017258 20.0 </code></pre> <p>I'd like to get these quantiles using <code>groupby().agg()</code>, ideally with programmatically named columns like &quot;bell_q90&quot;. Here's an example of the <code>aggregate</code> syntax that feels natural to me:</p> <pre><code>df.groupby(&quot;dummy&quot;).agg( bell_med=(&quot;bell&quot;, &quot;median&quot;), bell_mean=(&quot;bell&quot;, &quot;mean&quot;), fish_med=(&quot;fish&quot;, &quot;median&quot;), fish_mean=(&quot;fish&quot;, &quot;mean&quot;), # fish_q10=(&quot;fish&quot;, &quot;quantile(0.1)&quot;), # nothing like it # fish_q10=(&quot;fish&quot;, &quot;quantile&quot;, 0.1), # nothing like it # fish_q10=(&quot;fish&quot;, &quot;quantile&quot;, kwargs({&quot;q&quot;:0.1}), # nothing like it ) </code></pre> <p>I can imagine generating the columns by iterating over <code>quants</code> and a list of named columns, using <code>Series.agg</code> and than stitching them together, but this seems like a hack. (For example, it would require me to do my &quot;normal&quot; aggregation first and then add quantiles on afterwards.)</p> <pre><code>my_aggs = dict() for q in quants: for col in [&quot;bell&quot;, &quot;fish&quot;]: my_aggs[f&quot;{col}_q{100*q:03n}&quot;] = df.groupby(&quot;dummy&quot;)[col].quantile(q) print(pd.DataFrame(my_aggs)) # numbers equivalent to those above </code></pre> <p>Is there a better way?</p>
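<p>A sketch of one way to keep everything in a single <code>agg</code> call: named aggregation accepts arbitrary callables, so per-quantile lambdas (binding <code>q</code> as a default argument) give programmatically named columns alongside the other statistics:</p> <pre class="lang-py prettyprint-override"><code>my_aggs = {
    f'{col}_q{100*q:03n}': (col, lambda s, q=q: s.quantile(q))
    for col in ['bell', 'fish']
    for q in quants
}
my_aggs.update(
    bell_mean=('bell', 'mean'),
    fish_med=('fish', 'median'),
)
print(df.groupby('dummy').agg(**my_aggs))
</code></pre>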
<python><pandas><group-by><aggregate>
2025-03-06 16:34:51
2
2,177
flies
79,490,048
52,791
Placing a Bull Put Spread order with IBKR's TWS API using ibapi
<p>I'm playing around with IBKR's ibapi library using the TWS API, and trying to make a simple script that will place a Bull Put Spread order.</p> <pre class="lang-none prettyprint-override"><code>from ibapi.wrapper import EWrapper from ibapi.contract import Contract from ibapi.order import Order import threading import time class IBapi(EWrapper, EClient): def __init__(self): EClient.__init__(self, self) self.nextOrderId = None self.position_data = [] self.order_status = {} self.completed_orders = {} self.contract_details = {} self.contract_details_end = {} def nextValidId(self, orderId: int): super().nextValidId(orderId) self.nextOrderId = orderId print(f&quot;Next Valid Order ID: {orderId}&quot;) def orderStatus(self, orderId, status, filled, remaining, avgFillPrice, permId, parentId, lastFillPrice, clientId, whyHeld, mktCapPrice): super().orderStatus(orderId, status, filled, remaining, avgFillPrice, permId, parentId, lastFillPrice, clientId, whyHeld, mktCapPrice) print(f&quot;Order Status - OrderId: {orderId}, Status: {status}, Filled: {filled}, Remaining: {remaining}&quot;) self.order_status[orderId] = status def openOrder(self, orderId, contract, order, orderState): super().openOrder(orderId, contract, order, orderState) print(f&quot;Open Order - OrderId: {orderId}, Symbol: {contract.symbol}, Order Type: {order.orderType}, Action: {order.action}&quot;) def execDetails(self, reqId, contract, execution): super().execDetails(reqId, contract, execution) print(f&quot;Execution Details - ReqId: {reqId}, Symbol: {contract.symbol}, ExecId: {execution.execId}, Price: {execution.price}&quot;) def contractDetails(self, reqId, contractDetails): super().contractDetails(reqId, contractDetails) self.contract_details[reqId] = contractDetails print(f&quot;Contract Details - ReqId: {reqId}, ConId: {contractDetails.contract.conId}, Symbol: {contractDetails.contract.symbol}&quot;) def contractDetailsEnd(self, reqId): super().contractDetailsEnd(reqId) self.contract_details_end[reqId] = True print(f&quot;Contract Details End - ReqId: {reqId}&quot;) def create_option_contract(symbol, expiration, strike, right): contract = Contract() contract.symbol = symbol contract.secType = &quot;OPT&quot; contract.exchange = &quot;SMART&quot; contract.currency = &quot;USD&quot; contract.lastTradeDateOrContractMonth = expiration # Format &quot;YYYYMMDD&quot; contract.strike = strike contract.right = right # &quot;P&quot; for Put, &quot;C&quot; for Call contract.multiplier = &quot;100&quot; return contract def create_order(action, quantity, price=None, order_type=&quot;LMT&quot;): order = Order() order.action = action order.totalQuantity = quantity order.orderType = order_type if price is not None: order.lmtPrice = price # Disable problematic attributes order.eTradeOnly = False order.firmQuoteOnly = False # Set additional order properties to ensure proper execution order.transmit = True order.outsideRth = False # Execute during regular trading hours only return order def run_loop(): app.run() # Main code to execute a bull put spread if __name__ == &quot;__main__&quot;: from ibapi.contract import ComboLeg # Connect to IB TWS or IB Gateway app = IBapi() app.connect('127.0.0.1', 7497, 0) # 7497 for TWS paper trading, 7496 for IB Gateway paper # Start the thread for processing IB messages api_thread = threading.Thread(target=run_loop, daemon=True) api_thread.start() # Wait for connection and nextValidId time.sleep(2) if app.nextOrderId is None: print(&quot;Failed to connect to IB API. 
Check if TWS/IB Gateway is running and API connections are enabled.&quot;) app.disconnect() exit(1) try: symbol = &quot;SPY&quot; expiration = &quot;20250411&quot; # April 11, 2025 higher_strike = 575 lower_strike = 570 quantity = 1 credit_limit = 1.70 # Create contracts for both legs sell_put_contract = create_option_contract(symbol, expiration, higher_strike, &quot;P&quot;) buy_put_contract = create_option_contract(symbol, expiration, lower_strike, &quot;P&quot;) print(f&quot;Requesting contract details for {symbol} puts at strikes {higher_strike} and {lower_strike}...&quot;) # Request contract details to get the conIds app.reqContractDetails(1, sell_put_contract) app.reqContractDetails(2, buy_put_contract) # Wait for contract details to be received timeout = 10 # seconds start_time = time.time() while (1 not in app.contract_details_end or 2 not in app.contract_details_end) and (time.time() - start_time &lt; timeout): time.sleep(0.1) if 1 not in app.contract_details or 2 not in app.contract_details: print(&quot;Failed to receive contract details. Check if the options exist for the specified expiration and strikes.&quot;) app.disconnect() exit(1) # Get the contract IDs sell_put_conid = app.contract_details[1].contract.conId buy_put_conid = app.contract_details[2].contract.conId print(f&quot;Retrieved Contract IDs - Sell {higher_strike} Put: {sell_put_conid}, Buy {lower_strike} Put: {buy_put_conid}&quot;) # Create a combo contract for the bull put spread combo = Contract() combo.symbol = symbol combo.secType = &quot;BAG&quot; combo.exchange = &quot;SMART&quot; combo.currency = &quot;USD&quot; # Create the legs # For a bull put spread: # - SELL the higher strike put (575) # - BUY the lower strike put (570) leg1 = ComboLeg() leg1.conId = sell_put_conid # ConId for the 575 strike put leg1.ratio = 1 leg1.action = &quot;SELL&quot; leg1.exchange = &quot;SMART&quot; leg2 = ComboLeg() leg2.conId = buy_put_conid # ConId for the 570 strike put leg2.ratio = 1 leg2.action = &quot;BUY&quot; leg2.exchange = &quot;SMART&quot; combo.comboLegs = [leg1, leg2] # Create the combo order spread_order = create_order(&quot;SELL&quot;, quantity, credit_limit) print(f&quot;Placing bull put spread on {symbol} as a combo order:&quot;) print(f&quot;- SELL {quantity} {symbol} {expiration} {higher_strike} Put (ConId: {sell_put_conid})&quot;) print(f&quot;- BUY {quantity} {symbol} {expiration} {lower_strike} Put (ConId: {buy_put_conid})&quot;) print(f&quot;- Credit limit: ${credit_limit}&quot;) # Place the combo order app.placeOrder(app.nextOrderId, combo, spread_order) combo_order_id = app.nextOrderId app.nextOrderId += 1 print(f&quot;Combo order placed - Order ID: {combo_order_id}&quot;) # Monitor order status for a bit time.sleep(10) # Check order status print(&quot;\nOrder status summary:&quot;) for order_id, status in app.order_status.items(): print(f&quot;Order {order_id}: {status}&quot;) except Exception as e: print(f&quot;Error: {e}&quot;) finally: # Disconnect print(&quot;\nDisconnecting from IB API...&quot;) app.disconnect() print(&quot;Disconnected.&quot;) </code></pre> <p>It seems like it's <em>almost</em> working, except I'm getting an error in TWS that says &quot;Riskless combination orders are not allowed&quot;.</p> <p>In addition, in the Orders tab in TWS, the order looks correct to me: <a href="https://i.sstatic.net/65hzu3LB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65hzu3LB.png" alt="enter image description here" /></a></p> <p>However, when I go into the Order Ticket, the buy/sell 
actions for the legs are reversed: <a href="https://i.sstatic.net/lGB68xt9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGB68xt9.png" alt="enter image description here" /></a></p> <p>I believe this is the cause of the error, but it's not clear to me what I'm doing wrong in my script. Does anyone know what the issue might be?</p>
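<p>Not a confirmed fix, but one knob worth knowing about when TWS rejects a combination order: the API lets you tag a SMART-routed combo as non-guaranteed via <code>smartComboRoutingParams</code>. Whether that clears the &quot;riskless combination&quot; rejection for this particular spread is something to verify in paper trading:</p> <pre class="lang-py prettyprint-override"><code>from ibapi.tag_value import TagValue

# Sketch: mark the combo order as non-guaranteed before placing it.
spread_order.smartComboRoutingParams = [TagValue('NonGuaranteed', '1')]
app.placeOrder(app.nextOrderId, combo, spread_order)
</code></pre>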
<python><interactive-brokers><tws>
2025-03-06 16:31:48
1
787
Rhys Causey
79,489,971
1,999,873
Assign numpy array to duckdb array column in where clause
<p>How do I update a DuckDB array column with the value from a numpy array (returned from a Python function) in the row where a WHERE clause matches?</p> <p>In the example I want to assign the value of <code>new_array</code> to the second row of the table, i.e. the row in which <code>WHERE id = 2</code> matches.</p> <pre><code>import duckdb import numpy as np con = duckdb.connect() con.sql(&quot;CREATE TABLE example (id INTEGER, vec FLOAT[3]);&quot;) con.sql(&quot;INSERT INTO example VALUES (1, array_value(1.0::FLOAT, 2.0::FLOAT, 3.0::FLOAT));&quot;) con.sql(&quot;INSERT INTO example VALUES (2, NULL);&quot;) con.sql(&quot;SELECT * FROM example;&quot;).fetchall() new_array = np.array([4,5,6], dtype=np.float32) new_array </code></pre>
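<p>A minimal sketch of the parameterised form, assuming converting the numpy array to a plain Python list is acceptable (whether the implicit LIST-to-<code>FLOAT[3]</code> cast is accepted can depend on the DuckDB version):</p> <pre><code>con.execute(
    'UPDATE example SET vec = ? WHERE id = ?',
    [new_array.tolist(), 2],
)
print(con.sql('SELECT * FROM example;').fetchall())
</code></pre>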
<python><arrays><duckdb>
2025-03-06 16:07:56
1
2,662
snaut
79,489,931
2,532,408
Can you perform WebDriverWait in an ActionChain?
<p>I often build ActionChains to interact with MUI components. Here is one that enters the date <code>MM/DD/YYYY</code> but ensures we start typing in the MM portion of the input. I have to wait for the animation of the field label to move out of the way before the input will accept anything.</p> <pre class="lang-py prettyprint-override"><code>actions = ActionChains(driver, duration=CHAIN_DURATION) actions.move_to_element(element) actions.click(element) actions.pause(.25) # animation of label to move out of the way. actions.send_keys(Keys.ARROW_LEFT) actions.send_keys(Keys.ARROW_LEFT) actions.send_keys(date_str) actions.send_keys(Keys.TAB) actions.perform() </code></pre> <p><a href="https://i.sstatic.net/m7n0u2Ds.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m7n0u2Ds.gif" alt="mui component animation" /></a></p> <p>The problem is this animation speed is variable (and it's not always known how fast each component will be). What I wish I had was the ability to perform a <code>WebDriverWait.until(expected_condition)</code> in the middle of the ActionChain. In this case I would wait until the label element (different from the input element) has stopped moving.</p> <p><strong>Is this possible in some way?</strong></p>
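<p>A hedged sketch of the usual workaround: split the work into two chains and run an explicit wait in between, using a custom condition (any callable taking the driver works with <code>WebDriverWait.until</code>) that treats the label as settled once its location stops changing. <code>label</code> is a hypothetical handle to the floating-label element:</p> <pre class="lang-py prettyprint-override"><code>import time
from selenium.webdriver.support.ui import WebDriverWait

def has_stopped_moving(elem, settle=0.05):
    def _condition(driver):
        before = elem.location
        time.sleep(settle)
        return elem.location == before
    return _condition

ActionChains(driver, duration=CHAIN_DURATION).move_to_element(element).click(element).perform()
WebDriverWait(driver, 5).until(has_stopped_moving(label))
ActionChains(driver, duration=CHAIN_DURATION) \
    .send_keys(Keys.ARROW_LEFT) \
    .send_keys(Keys.ARROW_LEFT) \
    .send_keys(date_str) \
    .send_keys(Keys.TAB) \
    .perform()
</code></pre>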
<python><selenium-webdriver><material-ui><mui-x-date-picker>
2025-03-06 15:52:34
2
4,628
Marcel Wilson
79,489,878
11,793,491
My modal doesn't appear after an action in Streamlit
<p>I have this Streamlit app:</p> <pre class="lang-python prettyprint-override"><code>import streamlit as st st.title(&quot;Simulator&quot;) tab_names = [&quot;tab1&quot;, &quot;tab2&quot;] tab1, tab2= st.tabs(tab_names) @st.dialog(&quot;Edit your relationships&quot;) def edit_relationships(result): edit_options = tuple(result) selection = st.selectbox( &quot;Select an entity relationship&quot;, edit_options ) st.write(f&quot;This is a dialog {selection}&quot;) if st.button(&quot;Submit&quot;): st.session_state.vote = 'pear' st.rerun() with tab1: st.write(&quot;This is the first tab&quot;) with tab2: query = st.text_input(&quot;Enter the entity&quot;, key='t2tinput') if st.button('send', key='t2button'): try: result = ['banana', 'apple', 'pear'] if st.button(&quot;Edit Relationships&quot;): edit_relationships(result) except Exception as e: st.error(f&quot;Error: {e}&quot;) </code></pre> <p>And I want that after the <code>result</code> list is received (from an API), that a button 'Edit relationships&quot; appear so I can click on it and a modal appears. I tried this code but after clicking on 'Edit relationships', the modal doesn't appear.</p> <p>Please, could you point out what I am doing wrong?</p>
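<p>A likely culprit (hedged): nesting the second <code>st.button</code> inside the first one's <code>if</code> means it disappears on the rerun its own click triggers. A common pattern is to stash the result in <code>st.session_state</code> so the &quot;Edit Relationships&quot; button survives reruns, for example:</p> <pre class="lang-python prettyprint-override"><code>with tab2:
    query = st.text_input('Enter the entity', key='t2tinput')
    if st.button('send', key='t2button'):
        st.session_state['result'] = ['banana', 'apple', 'pear']  # API call result
    if 'result' in st.session_state:
        if st.button('Edit Relationships'):
            edit_relationships(st.session_state['result'])
</code></pre>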
<python><streamlit>
2025-03-06 15:36:41
2
2,304
Alexis
79,489,723
3,259,331
Why is this python code not running faster with parallelization?
<p>This is a MWE of some code I'm writing to do some monte carlo exercises. I need to estimate models across draws and I'm parallelizing across models. In the MWE a &quot;model&quot; is just parametrized by a number of draws and a seed.</p> <p>I define the functions below.</p> <pre><code>import time import pandas as pd import numpy as np import multiprocessing as mp def linreg(df): y = df[['y']].values x = np.hstack([np.ones((df.shape[0], 1)), df[['treat']].values]) xx_inv = np.linalg.inv(x.T @ x) beta_hat = xx_inv @ (x.T @ y) return pd.Series(beta_hat.flat, index=['intercept', 'coef']) def shuffle_treat(df): df['treat'] = df['treat'].sample(frac=1, replace=False).values return df def run_analysis(draws, seed, sleep=0): N = 5000 df = pd.DataFrame({'treat':np.random.choice([0,1], size=N, replace=True)}) df['u'] = np.random.normal(size=N) df['y'] = df.eval('10 + 5*treat + u') np.random.seed(seed) time.sleep(sleep) est = [linreg(shuffle_treat(df)) for k in range(draws)] est = pd.concat(est, axis=0, sort=False, keys=range(draws), names=['k', 'param']) return est </code></pre> <p>I then test them and show that running in serial takes a similar amount of time as running in parallel. I can confirm they are running in parallel because if I force some sleep time there is a clear gain from parallelization. I know the problem is coming from this list comprehension: <code>[linreg(shuffle_treat(df)) for k in range(draws)]</code>, but I don't understand why I don't achieve gains from parallelization across models. I've tried to parallelize across draws instead, but the results were even worse.</p> <pre><code>param_list = [dict(draws=500, seed=1029), dict(draws=500, seed=1029)] param_list_sleep = [dict(draws=500, seed=1029, sleep=5), dict(draws=500, seed=1029, sleep=5)] def run_analysis_wrapper(params): run_analysis(**params) start = time.time() for params in param_list: run_analysis_wrapper(params) end = time.time() print(f'double run 1 process: {(end - start):.2f} sec') start = time.time() with mp.Pool(processes=2) as pool: pool.map(run_analysis_wrapper, param_list) end = time.time() print(f'double run 2 processes: {(end - start):.2f} sec') start = time.time() for params in param_list_sleep: run_analysis_wrapper(params) end = time.time() print(f'double run 1 process w/ sleep: {(end - start):.2f} sec') start = time.time() with mp.Pool(processes=2) as pool: pool.map(run_analysis_wrapper, param_list_sleep) end = time.time() print(f'double run 2 processes w/ sleep: {(end - start):.2f} sec') </code></pre> <p>Output:</p> <pre><code>double run 1 process: 2.52 sec double run 2 processes: 2.94 sec double run 1 process w/ sleep: 12.30 sec double run 2 processes w/ sleep: 7.71 sec </code></pre> <p>For reference machine is Linux-based EC2 instance with <code>nproc --a</code> showing 48 CPUs. I'm running within a conda environment with Python 3.9.16.</p>
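<p>One hedged thing to rule out before restructuring anything: with this matrix-heavy workload, a multi-threaded BLAS can already be saturating the cores in the &quot;serial&quot; run, in which case the process pool only adds overhead. Pinning BLAS to one thread per worker (set before numpy is imported) makes the comparison fair:</p> <pre class="lang-py prettyprint-override"><code>import os

# Must be set before `import numpy` takes effect in each process.
for var in ('OMP_NUM_THREADS', 'OPENBLAS_NUM_THREADS', 'MKL_NUM_THREADS'):
    os.environ[var] = '1'
</code></pre>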
<python><python-3.x><pandas><numpy><multiprocessing>
2025-03-06 14:45:25
2
1,601
jtorca
79,489,716
15,452,168
Download Confluence Pages as PDFs via API using Python
<p>I’m trying to download Confluence pages in a space as PDFs using the Confluence API and a Python script. However, I am unable to download the PDF despite manually being able to export pages via the browser.</p> <p>Goal: Download all pages in a Confluence space (in a hierarchy) as PDF files using Python and an API key.</p> <p>What I’ve Tried: Basic Authentication via API I used the following Python script to send a GET request to the PDF export endpoint:</p> <pre><code>import requests from requests.auth import HTTPBasicAuth # Configuration base_url = &quot;https://abc-services.atlassian.net/wiki&quot; space_key = &quot;SPACE&quot; username = &quot;qwerty.abcd@xyz.com&quot; api_token = &quot;YOUR_API_TOKEN&quot; # replace with your actual API token page_id = &quot;463***765&quot; # Create the URL for PDF export url = f&quot;{base_url}/spaces/{space_key}/pdfpageexport.action?pageId={page_id}&quot; # Headers headers = { &quot;X-Atlassian-Token&quot;: &quot;no-check&quot; } # Send the request using basic authentication response = requests.get(url, auth=HTTPBasicAuth(username, api_token), headers=headers) # Check if the request was successful if response.status_code == 200: print(&quot;Request was successful!&quot;) # Save the file (optional) with open('page_export.pdf', 'wb') as file: file.write(response.content) else: print(f&quot;Request failed with status code: {response.status_code}&quot;) print(response.text) </code></pre> <p>The Issue: I am unable to download the PDF using Python, even though the browser-based export works fine. I am unsure whether I'm missing some required headers I would like to automate this process to download all pages from a Confluence space in a hierarchical structure</p>
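<p>Independent of the PDF step, enumerating every page in the space can be done with the standard content REST endpoint; a hedged sketch (pagination fields as returned by Confluence Cloud):</p> <pre><code>def list_space_pages(base_url, space_key, auth):
    pages, start = [], 0
    while True:
        resp = requests.get(
            f'{base_url}/rest/api/content',
            params={'spaceKey': space_key, 'type': 'page', 'start': start, 'limit': 50},
            auth=auth,
        )
        resp.raise_for_status()
        data = resp.json()
        pages.extend(data['results'])
        if data['size'] &lt; data['limit']:
            return pages
        start += data['limit']
</code></pre>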
<python><confluence><confluence-rest-api>
2025-03-06 14:44:44
0
570
sdave
79,489,702
9,548,525
Is there a Numpy method or function to split an array of uint64 into two arrays of uint32
<p>Say I have an array as follows:</p> <pre><code>arr = np.asarray([1, 2, 3, 4294967296, 100], dtype=np.uint64) </code></pre> <p>I now want two arrays, one array with the lower 32 bits of every element, and one with the upper 32 bits of every element, preferably by using views and minimizing copies, to get something like this:</p> <pre><code>upper = np.array([0, 0, 0, 1, 0], dtype=np.uint32) lower = np.array([1, 2, 3, 0, 100], dtype=np.uint32) </code></pre> <p>I tried the following:</p> <pre><code>lower = arr.view() &amp; 0xFFFFFFFF upper = np.bitwise_right_shift(arr.view(), 32) </code></pre> <p>But this results in a copy for the upper bits due to the bitshift, and both arrays are still of type <code>uint64</code>. Are there further optimizations I can try or am I out of luck and need to eat up the extra copies?</p>
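<p>A copy-free sketch: reinterpret the same buffer as <code>uint32</code> and slice. Which column holds the low half depends on byte order, so this assumes a little-endian machine:</p> <pre><code>halves = arr.view(np.uint32).reshape(-1, 2)  # still a view of arr's buffer
lower = halves[:, 0]  # low 32 bits on little-endian systems
upper = halves[:, 1]  # high 32 bits on little-endian systems
</code></pre>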
<python><numpy><performance><optimization><numpy-ndarray>
2025-03-06 14:37:32
1
360
lrdewaal
79,489,599
5,931,672
ValueError: Expected x_min for bbox to be in the range [0.0, 1.0]
<p>I am using albumentations for my object detection data augmentation because it deals with bbox at the same time. My format is [xmin, ymin, xmax, ymax] with <a href="https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#pascal_voc" rel="nofollow noreferrer">pascal_voc format</a>.</p> <pre class="lang-none prettyprint-override"><code> A.Compose( [ A.HorizontalFlip(p=0.5), A.VerticalFlip(p=0.5), A.OneOf( [ A.ShiftScaleRotate( shift_limit=0.2, scale_limit=(-0.1, 0.6), rotate_limit=0, p=0.5, ), A.RandomSizedCrop( min_max_height=[200, 352], size=(352, 2048), p=0.5 ), ], p=0.5, ), A.AdditiveNoise( noise_type=&quot;uniform&quot;, noise_params={&quot;ranges&quot;:[(-0.2, 0.2)]}, p=0.5 ), ToTensorV2(), ], bbox_params=A.BboxParams( format=&quot;pascal_voc&quot;, label_fields=[&quot;class_labels&quot;] ), ) </code></pre> <p>Then I create my data as:</p> <pre class="lang-none prettyprint-override"><code> def __getitem__(self, index): data = get_data(index) torch_target = _to_torch_target(data) torch_img = _get_torch_image(data) if self.transform: sample = self.transform( image=torch_img.numpy(), bboxes=torch_target[&quot;boxes&quot;], class_labels=torch_target[&quot;labels&quot;], ) torch_img = sample[&quot;image&quot;] torch_target[&quot;boxes&quot;] = torch.Tensor(sample[&quot;bboxes&quot;]) torch_target[&quot;labels&quot;] = torch.Tensor(sample[&quot;class_labels&quot;]) torch_target[&quot;boxes&quot;] = torch.as_tensor(torch_target[&quot;boxes&quot;], dtype=torch.float32) torch_target[&quot;labels&quot;] = torch.as_tensor(torch_target[&quot;labels&quot;], dtype=torch.int64) return torch_img, torch_target </code></pre> <p>But I am getting an error:</p> <pre><code>Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/opt/conda/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/home/.../main.py&quot;, line 116, in &lt;module&gt; train(results_path, dataset_path, weights_path, **conf) File &quot;/home/.../main.py&quot;, line 77, in train trainer.fit(model, train_dataloader, val_dataloader) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py&quot;, line 543, in fit call._call_and_handle_interrupt( File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py&quot;, line 44, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py&quot;, line 579, in _fit_impl self._run(model, ckpt_path=ckpt_path) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py&quot;, line 986, in _run results = self._run_stage() File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py&quot;, line 1032, in _run_stage self.fit_loop.run() File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py&quot;, line 205, in run self.advance() File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py&quot;, line 363, in advance self.epoch_loop.run(self._data_fetcher) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py&quot;, line 138, in run self.advance(data_fetcher) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py&quot;, line 204, in advance batch, _, __ = next(data_fetcher) File 
&quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fetchers.py&quot;, line 133, in __next__ batch = super().__next__() File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fetchers.py&quot;, line 60, in __next__ batch = next(self.iterator) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/utilities/combined_loader.py&quot;, line 341, in __next__ out = next(self._iterator) File &quot;/opt/conda/lib/python3.10/site-packages/lightning/pytorch/utilities/combined_loader.py&quot;, line 78, in __next__ out[i] = next(self.iterators[i]) File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py&quot;, line 628, in __next__ data = self._next_data() File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py&quot;, line 671, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py&quot;, line 58, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py&quot;, line 58, in &lt;listcomp&gt; data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/home/jovyan/trv/MVP_SPACE/mvp_space/src/train_pipeline/dataset.py&quot;, line 184, in __getitem__ sample = self.transform( File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/composition.py&quot;, line 493, in __call__ self.preprocess(data) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/composition.py&quot;, line 533, in preprocess self._preprocess_processors(data) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/composition.py&quot;, line 560, in _preprocess_processors processor.preprocess(data) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/utils.py&quot;, line 155, in preprocess data[data_name] = self.check_and_convert(data[data_name], shape, direction=&quot;to&quot;) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/bbox_utils.py&quot;, line 306, in check_and_convert self.check(converted_data, shape) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/bbox_utils.py&quot;, line 314, in check check_bboxes(data) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/augmentations/utils.py&quot;, line 189, in wrapper return func(*args, **kwargs) File &quot;/opt/conda/lib/python3.10/site-packages/albumentations/core/bbox_utils.py&quot;, line 549, in check_bboxes raise ValueError( ValueError: Expected x_min for bbox [ 4.372159 20.666666 4.9176135 26.666666 1. ] to be in the range [0.0, 1.0], got 4.372159004211426. Epoch 0: 0%| | 0/569 [00:00&lt;?, ?it/s] </code></pre> <p>However, <code>pascal_voc</code> is NOT supposed to be between [0, 1]... What is going on here?</p> <hr /> <p>Something more weird are the numbers, they are all integers, as you can see by this assert I added at the start. I don't know when they are transformed to float.</p> <pre class="lang-none prettyprint-override"><code> for bbox in torch_target[&quot;boxes&quot;]: assert np.all([int(x) == x for x in bbox]) if self.transform: sample = self.transform( image=torch_img.numpy(), bboxes=torch_target[&quot;boxes&quot;], class_labels=torch_target[&quot;labels&quot;], ) </code></pre>
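<p>One hedged thing to check first: albumentations expects a channel-last <code>(H, W, C)</code> numpy image, and <code>torch_img.numpy()</code> is typically channel-first <code>(C, H, W)</code>. In that case the pascal_voc boxes get normalised against the wrong width/height, which is exactly the kind of out-of-range value the error shows. A sketch of the call inside <code>__getitem__</code>, assuming that layout:</p> <pre class="lang-py prettyprint-override"><code># Assumes torch_img is laid out as (C, H, W)
img_np = torch_img.permute(1, 2, 0).numpy()  # -&gt; (H, W, C)
sample = self.transform(
    image=img_np,
    bboxes=torch_target['boxes'],
    class_labels=torch_target['labels'],
)
</code></pre>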
<python><albumentations>
2025-03-06 13:55:42
1
4,192
J Agustin Barrachina
79,489,528
16,389,095
How to show horizontal scroll bar into a python flet datatable
<p>I developed a simple code in Python Flet that consists of a <a href="https://flet.dev/docs/controls/datatable" rel="nofollow noreferrer"><code>DataTable</code></a> added to a Row, inserted into a Column. These nested controls are added into a container with dimensions that don't allow to display all the table content. With the current settings (<code>ft.Scroll.ALWAYS</code> for the Column, and <code>ft.Scroll.ADAPTIVE</code> for the Row), the vertical scroll bar is shown but the horizontal scrollbar appears only when the table is moved down at its bottom. I would like to have both vertical and horizontal scrollbars always visible on the table. I tried to change the values of both scroll types but nothing changed.</p> <pre class="lang-py prettyprint-override"><code>import flet as ft def main(page: ft.Page): column_items = [ft.DataColumn(ft.Text(f&quot;Column {i+1}&quot;)) for i in range(36)] cell_items = [ft.DataCell(ft.Text(f&quot;Cell {i+1}&quot;)) for i in range(len(column_items))] row_items = [ft.DataRow(cells=cell_items) for _ in range(50)] data_table = ft.DataTable(columns=column_items, rows=row_items) page.horizontal_alignment = ft.CrossAxisAlignment.CENTER page.vertical_alignment = ft.MainAxisAlignment.CENTER page.add( ft.Container( height=500, width=700, content=ft.Column( scroll=ft.ScrollMode.ALWAYS, controls=[ ft.Row( scroll=ft.ScrollMode.ADAPTIVE, controls=[data_table], ) ], ), ) ) ft.app(target=main) </code></pre>
<python><flutter><scroll><scrollbar><flet>
2025-03-06 13:29:48
1
421
eljamba
79,489,313
5,316,326
Join large partitioned parquet datasets in Polars and write to Postgres?
<p>I have two large datasets stored in partitioned Parquet format on S3, partitioned by <code>category_id</code>. I need to join them on <code>category_id</code> and <code>label_id</code> using Polars and write the results to Postgres.</p> <h2>The problem:</h2> <ul> <li>Data is too large for memory: Calling <code>.collect()</code> on the joined dataframe is not feasible.</li> <li>Writing per partition is too slow: Iterating over <code>category_id</code> and processing each partition takes too long (several seconds per partition).</li> </ul> <p>My current approach:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df_left = pl.scan_parquet( &quot;s3://my-bucket/left/**/*.parquet&quot;, hive_partitioning=True, ) df_right = pl.scan_parquet( &quot;s3://my-bucket/right/**/*.parquet&quot;, hive_partitioning=True, ) df_joined = df_left.join(df_right, on=[&quot;category_id&quot;, &quot;label_id&quot;], how=&quot;inner&quot;) </code></pre> <p>At this point, I would like to efficiently:</p> <ul> <li>Process the data in a streaming fashion</li> <li>Write the joined data to Postgres</li> </ul> <p>What I have tried:</p> <ol> <li>Looping over unique <code>category_ids</code> and processing one by one. <ul> <li>Too slow, as reading Parquet partitions takes a few seconds each time.</li> </ul> </li> <li>Using Polars' lazy execution (<code>scan_parquet</code>). <ul> <li>I cannot .collect() since it does not fit in memory.</li> <li>I am unsure how to efficiently stream the data to Postgres.</li> </ul> </li> </ol> <h2>Question</h2> <p>How can I efficiently join two large Parquet datasets using Polars and write the result to Postgres in a way that avoids memory issues and excessive partition reads?</p> <p>Would using Polars streaming help here? Or is there a way to batch-process partitions efficiently without reading each partition multiple times?</p>
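<p>A hedged sketch of one batching route: let the streaming engine write the joined result to disk without collecting, then bulk-load that output into Postgres (for example with <code>COPY</code>) as a separate step. Whether the join itself stays within memory under streaming depends on key cardinality and the Polars version:</p> <pre class="lang-py prettyprint-override"><code># Streaming sink: executes the lazy join without a full collect()
# (local output path used here just for illustration)
df_joined.sink_parquet('joined_local.parquet')
</code></pre>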
<python><postgresql><python-polars>
2025-03-06 12:13:46
2
4,147
Joost Döbken
79,489,177
8,264,511
Issues Running Code Chunks in Quarto on Positron
<p>Read more in <a href="https://github.com/quarto-dev/quarto-cli/discussions/12248" rel="nofollow noreferrer">GH discussion</a>.</p> <p>I’m encountering issues using <strong>Quarto</strong> within <strong>Positron</strong> when trying to include R or Python code chunks in my .qmd documents. When I run the quarto preview command, I receive different errors depending on the language of the chunk.</p> <p>Example .qmd File:</p> <pre><code>--- title: &quot;Test Document&quot; format: html --- ## R Code ```{r} print(&quot;Hello, R!&quot;) ``` ## Python Code ```{python} print(&quot;Hello, Python!&quot;) ``` </code></pre> <p><strong>Errors Received:</strong></p> <ul> <li><strong>R:</strong> I receive the following error when attempting to execute an R code chunk:</li> </ul> <pre><code>Error in dir.exists(x) : file name conversion problem – name too long? Calls: … lapply -&gt; FUN -&gt; dir_exists -&gt; -&gt; dir.exists Execution halted </code></pre> <ul> <li><strong>Python:</strong> When running a Python code chunk, I encounter this error:</li> </ul> <pre><code>PS D:\GitHub\testing&gt; quarto preview d:/GitHub/testing/test.qmd --no-browser --no-watch-inputs ERROR: Impossible to access to the file. (os error 1920): stat 'C:\Users\erik4\AppData\Local\Microsoft\WindowsApps\python3.12.exe' Stack trace: at Object.statSync (ext:deno_fs/30_fs.js:425:3) at existsSync (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:223:27) at getQuartoJupyterCapabilities (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:30667:13) at jupyterCapabilities (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:30582:34) at pythonExec (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:30764:24) at Object.execute (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:40546:35) at async renderExecute (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:78033:27) at async renderFileInternal (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:78201:43) at async renderFiles (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:78069:17) at async render (file:///C:/Users/erik4/AppData/Local/Programs/Positron/resources/app/quarto/bin/quarto.js:82929:21) PS D:\GitHub\testing&gt; </code></pre> <p><strong>What I've Tried:</strong></p> <ul> <li>Verified that both Python and R are installed correctly and are in the system PATH.</li> <li>Checked project's folder permissions.</li> <li>Tried reducing the project path length, but the issue persists.</li> </ul> <p>Thank you in advance for any help or suggestions!</p> <p>R version: 4.4.2, Python version: 3.12.9</p>
<python><r><quarto><positron>
2025-03-06 11:24:59
0
389
Erik De Luca
79,489,137
4,105,440
Nested parallelism with GridSearchCV causes infinite hang
<p>I'm running a <code>GridSearchCV</code> optimization inside a parallelized function. The pseudocode looks like this:</p> <pre class="lang-py prettyprint-override"><code>from tqdm.contrib.concurrent import process_map from sklearn.model_selection import GridSearchCV def main(): results = process_map(func, it, max_workers=5) # We never reach here with n_jobs &gt; 1 in GridSearch def func(it): ... grid_search = GridSearchCV(..., n_jobs=5) ... return result if __name__ == &quot;__main__&quot;: main() </code></pre> <p>If <code>n_jobs</code> &gt; 1, the script hangs indefinitely when returning <code>results</code> and never proceeds further (although all the <code>func</code> tasks have completed). If I set <code>n_jobs=1</code>, then everything works fine.</p> <p>I think (but I'm not sure) this is related to the fact that <code>process_map</code> uses a different spawn mechanism from <code>GridSearchCV</code> (which internally uses joblib, if I understand it correctly).</p> <p>As the heaviest part of this algorithm is the grid search, isn't there any way of maintaining that parallelism together with the outer parallelism layer?</p>
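<p>A hedged sketch of a common workaround when nested process pools deadlock: keep one level of process parallelism and force the inner grid search onto threads via joblib's backend context manager (the names below mirror the pseudocode above, and <code>X, y</code> stand in for whatever <code>func</code> prepares):</p> <pre class="lang-py prettyprint-override"><code>from joblib import parallel_backend

def func(it):
    ...
    grid_search = GridSearchCV(..., n_jobs=5)
    with parallel_backend('threading', n_jobs=5):
        grid_search.fit(X, y)  # inner parallelism now uses threads
    ...
    return result
</code></pre>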
<python><scikit-learn><multiprocessing><joblib><gridsearchcv>
2025-03-06 11:14:08
1
673
Droid
79,488,914
17,795,398
How to use typeguard for numpy structured arrays?
<p>I want to implement functions that act on specific numpy structured arrays, but typeguard doesn't seem to work properly:</p> <pre><code>import numpy as np from typeguard import typechecked mytype2 = np.dtype([(&quot;type&quot;, int), (&quot;pos&quot;, float, 2)]) mytype3 = np.dtype([(&quot;type&quot;, int), (&quot;pos&quot;, float, 3)]) @typechecked def process(data: mytype3) -&gt; None: # Variable not allowed in type expression Pylance print(data) data = np.array([(1, [2, 3])], dtype=mytype2) process(data) </code></pre> <p>That works fine, when it should raise an error.</p>
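<p>For context, <code>np.dtype(...)</code> instances are values, not types, so neither Pylance nor typeguard can enforce them from an annotation. A hedged runtime-check sketch that keeps the intent:</p> <pre><code>import functools

def expects_dtype(expected):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(data, *args, **kwargs):
            actual = data.dtype if isinstance(data, np.ndarray) else type(data)
            if actual != expected:
                raise TypeError(f'expected array with dtype {expected}, got {actual}')
            return fn(data, *args, **kwargs)
        return wrapper
    return decorator

@expects_dtype(mytype3)
def process(data: np.ndarray) -&gt; None:
    print(data)
</code></pre>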
<python><numpy><python-typing>
2025-03-06 09:50:29
2
472
Abel Gutiérrez
79,488,683
2,695,990
How to avoid "object has no attribute 'isAlive'" error while debugging in IntelliJ using Python plugin version 241.18034.62
<p>I am writing a simple project in python. My version of python is: 3.13.1 . I am using IntelliJ and Python plugin with version: 241.18034.62. I would like to debug my project but when I try to debug I am getting many errors:</p> <pre><code>AttributeError: '_MainThread' object has no attribute 'isAlive'. Did you mean: 'is_alive'? </code></pre> <p>bigger part of stacktrace:</p> <pre><code>C:\projects\mat\venv\Scripts\python.exe -X pycache_prefix=C:\Users\mylogin\AppData\Local\JetBrains\IntelliJIdea2024.1\cpython-cache C:/Users/mylogin/AppData/Roaming/JetBrains/IntelliJIdea2024.1/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=pyqt5 --client 127.0.0.1 --port 4095 --file C:\projects\mat\org\aa\aa\mat\delivery_processor.py Connected to pydev debugger (build 241.18034.62) Traceback (most recent call last): File &quot;C:\Users\mylogin\AppData\Roaming\JetBrains\IntelliJIdea2024.1\plugins\python\helpers\pydev\_pydevd_bundle\pydevd_pep_669_tracing.py&quot;, line 238, in py_start_callback if not is_thread_alive(thread): ~~~~~~~~~~~~~~~^^^^^^^^ File &quot;C:\Users\mylogin\AppData\Roaming\JetBrains\IntelliJIdea2024.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_is_thread_alive.py&quot;, line 18, in is_thread_alive return t.isAlive() ^^^^^^^^^ AttributeError: 'WriterThread' object has no attribute 'isAlive'. Did you mean: 'is_alive'? </code></pre> <p>From what I have understood there is some mismatch with versions. I have tried to change the dbugger properties: <a href="https://i.sstatic.net/JpnrRYt2.png" rel="noreferrer"><img src="https://i.sstatic.net/JpnrRYt2.png" alt="enter image description here" /></a></p> <p>But none if these helped. How should I setup my IntelliJ environment to be able to debug? I would like to avoid downgrading python version.</p>
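<p>Besides updating the IDE/plugin (the bundled pydevd helpers predate the removal of <code>Thread.isAlive</code> in Python 3.9), a hedged stopgap is to restore the alias before the debugger helpers touch it, for example from a <code>sitecustomize.py</code> on the interpreter's path:</p> <pre><code>import threading

# Workaround sketch: re-add the alias the old debugger code expects.
if not hasattr(threading.Thread, 'isAlive'):
    threading.Thread.isAlive = threading.Thread.is_alive
</code></pre>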
<python><debugging><intellij-idea><pycharm>
2025-03-06 08:28:35
3
3,174
fascynacja
79,488,423
5,937,757
Snowflake ML: `There is no method with name explain available in the model`
<h3>Issue:</h3> <p>I'm working with <code>snowflake-ml-python</code> and trying to use the <code>explain</code> function on an XGBoost model deployed in Snowflake. However, I get the following error when calling <code>run</code> with <code>function_name=&quot;explain&quot;</code>:</p> <pre class="lang-py prettyprint-override"><code>ValueError: There is no method with name explain available in the model .... </code></pre> <h3>Context:</h3> <ol> <li>I trained an <code>XGBClassifier</code> model using <code>snowflake.ml.modeling.xgboost</code>.using their <a href="https://docs.snowflake.com/en/developer-guide/snowflake-ml/modeling#training" rel="nofollow noreferrer">documentation</a>:</li> <li>Registered it in Snowflake’s model registry with <code>enable_explainability=True</code>.</li> <li>The model successfully trains and makes predictions.</li> <li>When calling <code>model_version.run(features_df, function_name=&quot;explain&quot;)</code>, I get the error above.</li> </ol> <h3>Code Snippet:</h3> <p>Environment:</p> <ul> <li>windows 10</li> <li>python 3.8</li> <li>snowflake-ml-python==1.5.4</li> <li>snowflake-snowpark-python==1.19.0</li> </ul> <h4>Model Training and Registration:</h4> <pre class="lang-py prettyprint-override"><code>import pandas as pd from sklearn.datasets import make_classification from snowflake.ml.modeling.xgboost import XGBClassifier from snowflake.snowpark import Session from snowflake.ml.registry import registry # Snowflake session setup conn_params = { &quot;user&quot;: &quot;&lt;...&gt;&quot;, &quot;account&quot;: &quot;&lt;...&gt;&quot;, &quot;warehouse&quot;: &quot;&lt;...&gt;&quot;, &quot;database&quot;: &quot;&lt;...&gt;&quot;, &quot;schema&quot;: &quot;&lt;...&gt;&quot;, &quot;role&quot;: &quot;&lt;...&gt;&quot;, &quot;authenticator&quot;: &quot;externalbrowser&quot;, } session = Session.builder.configs(conn_params).create() # Define columns FEATURE_COLS = [&quot;X1&quot;, &quot;X2&quot;, &quot;X3&quot;, &quot;X4&quot;, &quot;X5&quot;, &quot;X6&quot;] LABEL_COLS = [&quot;Y&quot;] OUTPUT_COLS = [&quot;PREDICTIONS&quot;] # Generate data X, y = make_classification(n_samples=40000, n_features=6, n_informative=4, n_redundant=1, random_state=0, shuffle=True) X_df = pd.DataFrame(X, columns=FEATURE_COLS) y_df = pd.DataFrame(y, columns=LABEL_COLS) features_pandas = pd.concat([X_df, y_df], axis=1) features_df = session.create_dataframe(features_pandas) # Train and register model xgboost_model = XGBClassifier(input_cols=FEATURE_COLS, label_cols=LABEL_COLS, output_cols=OUTPUT_COLS) xgboost_model.fit(features_df) native_registry = registry.Registry(session=session, database_name=session.get_current_database(), schema_name=session.get_current_schema()) model_name = &quot;Model_Only&quot; version = &quot;V3&quot; model_version = native_registry.log_model( model=xgboost_model, model_name=model_name, version_name=version, sample_input_data=features_df.sample(n=1000), conda_dependencies=[&quot;snowflake-ml-python&quot;], comment=&quot;Model_Only test&quot;, options={&quot;enable_explainability&quot;: True} ) </code></pre> <h4>Making Predictions &amp; Explanation Attempt:</h4> <pre class="lang-py prettyprint-override"><code>model_version = native_registry.get_model(model_name).version(version) predictions = model_version.run(features_df, function_name=&quot;predict&quot;) print(predictions.show()) # Attempting to explain explanations = model_version.run(features_df, function_name=&quot;explain&quot;) # Fails here </code></pre> <h3>Questions:</h3> <ol> <li>Does 
<code>XGBClassifier</code> in Snowflake ML support <code>explain</code>? It seems it does from the doc.</li> <li>Is there any additional setup required to enable explainability?</li> <li>If <code>explain</code> is not supported, what is the recommended alternative for feature importance in Snowflake ML?</li> </ol> <p>Any insights would be greatly appreciated!</p>
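<p>For what it's worth, the first thing I plan to check (untested sketch; I'm assuming the model version object exposes <code>show_functions()</code> in this release) is which target methods actually got registered:</p>
<pre class="lang-py prettyprint-override"><code>mv = native_registry.get_model(model_name).version(version)
# if 'explain' does not appear in this list, explainability was never
# attached to the model version at log_model time
print(mv.show_functions())
</code></pre>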
<python><snowflake-cloud-data-platform>
2025-03-06 06:20:23
2
423
mas
79,488,214
3,161,801
Blender API Question - rotation about center of object
<p>I am running the following code. I would like to rotate the selected blocks about the x-axis in the center. However, it is rotating about the corner. Can I ask for assistance?</p> <pre><code>import bpy from mathutils import Matrix, Vector import math angle = math.radians(45) for obj in bpy.data.objects: if(obj.location.x &gt; 1.1): obj.select_set(True) print(obj.location.x) bpy.context.view_layer.objects.active = obj bpy.ops.transform.rotate(value=math.radians(90) , orient_axis='X') print(&quot;hello33&quot;) </code></pre> <p><a href="https://i.sstatic.net/H3C7vMRO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3C7vMRO.png" alt="enter image description here" /></a></p>
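<p>In case it clarifies what I'm after, here is a rough sketch (untested) of rotating each selected object about the world-space centre of its own bounding box using matrix math instead of the operator:</p>
<pre class="lang-py prettyprint-override"><code>import bpy
import math
from mathutils import Matrix, Vector

angle = math.radians(90)
for obj in bpy.data.objects:
    if obj.location.x &gt; 1.1:
        # world-space centre of the object's bounding box (the pivot I want)
        center = obj.matrix_world @ (sum((Vector(c) for c in obj.bound_box), Vector()) / 8.0)
        # translate the pivot to the origin, rotate about X, translate back
        pivot_rot = Matrix.Translation(center) @ Matrix.Rotation(angle, 4, 'X') @ Matrix.Translation(-center)
        obj.matrix_world = pivot_rot @ obj.matrix_world
</code></pre>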
<python><blender>
2025-03-06 03:46:12
1
775
ffejrekaburb
79,487,911
145,504
Deriving path of Python Scripts folder on Windows
<p>I installed Python 3.13 on Windows 11 with</p> <pre><code>winget install python3 --scope machine </code></pre> <p>Then I installed <code>pip</code> with</p> <pre><code>python -m ensurepip </code></pre> <p>This puts <code>pip</code> in <code>%APPDATA%\Python\Python313\site-packages</code>. Since this directory is in <code>sys.path</code>, I can run <code>pip</code> like this:</p> <pre><code>python -m pip ... </code></pre> <p>This is great: the command is easy to remember, and isn't going to change when I upgrade Python.</p> <p>However, if the package includes an executable or script, these go in <code>%APPDATA%\Python\Python313\Scripts</code>. This is not automatically added to <code>%PATH%</code> by the Python installer. And since the Python version number changes, I can't just hardcode this path in batch scripts.</p> <p>I also checked <code>site.USER_BASE</code> but it is set to <code>%APPDATA%\Python</code> which doesn't include the Python version. On the other hand <code>site.USER_SITE</code> includes the Python version <em>and</em> <code>\site-packages</code>.</p> <p>Is there an easy way to run package scripts with Python on Windows <em>without</em> expecting the end user to modify <code>%PATH%</code>?</p>
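<p>One idea I'm experimenting with (a sketch, not a settled answer): ask Python itself for the per-user scripts directory instead of hardcoding the version:</p>
<pre class="lang-py prettyprint-override"><code>import sysconfig

# 'nt_user' is the per-user install scheme on Windows; 'scripts' is the
# directory pip drops entry-point executables into
scripts_dir = sysconfig.get_path('scripts', scheme='nt_user')
print(scripts_dir)   # e.g. %APPDATA%\Python\Python313\Scripts
</code></pre>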
<python><windows><pip>
2025-03-05 23:30:37
1
4,458
rgov
79,487,849
1,788,656
Expanding numpy based code that detect the frequency of the consecutive number to work on multidimensional array instead of 1D array
<p>This <a href="https://stackoverflow.com/questions/79483656/vectoring-nested-while-statements-block-that-detects-the-frequency-of-consecutiv/79483704?noredirect=1">stackoverflow answer</a> provides a simple way (below) to find the frequency and indices of consecutive repeated numbers. This solution is much faster than loop-based code (see the original post above).</p> <pre><code>boundaries = np.where(np.diff(aa) != 0)[0] + 1 #group boundaries get_idx_freqs = lambda i, d: (np.concatenate(([0], i))[d &gt;= 2], d[d &gt;= 2]) idx, freqs = get_idx_freqs(boundaries, np.diff(np.r_[0, boundaries, len(aa)])) </code></pre> <p>and the output</p> <pre><code># aa=np.array([1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]) (array([ 1, 3, 6, 10]), array([2, 3, 4, 5])) # aa=np.array([1,1,1,np.nan,np.nan,1,1,np.nan]) (array([0, 5]), array([3, 2])) </code></pre> <p>Wondering if this solution could be expanded to work on multidimensional array instead of the slow traditional loop, as the following:</p> <pre><code>#%% def get_frequency_of_events_fast(aa): boundaries = np.where(np.diff(aa) != 0)[0] + 1 #group boundaries get_idx_freqs = lambda i, d: (np.concatenate(([0], i))[d &gt;= 2], d[d &gt;= 2]) idx, freqs = get_idx_freqs(boundaries, np.diff(np.r_[0, boundaries, len(aa)])) return idx,freqs tmp2_file=np.load('tmp2.npz') tmp2 = tmp2_file['arr_0'] idx_all=[] frq_all=[] for i in np.arange(tmp2.shape[1]): for j in np.arange(tmp2.shape[2]): print(&quot;==&gt;&gt; i, j &quot;+str(i)+' '+str(j)) idx,freq=get_frequency_of_events_fast(tmp2[:,i,j]) idx_all.append(idx) frq_all.append(freq) #if j == 69: # break print(idx) print(freq) #if i == 0: # break </code></pre> <p>I appended the indices and frequencies to the one dimensional list and also I was wondering if there is a way to append to two dimensional array.</p> <p>The file could be downloaded from <a href="https://app.box.com/s/ifl91nthv41s7tayhhptzsomlzo5v0fj" rel="nofollow noreferrer">box.com</a>. Here is a sample output</p> <pre><code>==&gt;&gt; i, j 0 61 [ 27 73 226 250 627 754 760 798 825 891 906] [ 12 8 5 17 109 5 12 26 30 12 3] ==&gt;&gt; i, j 0 62 [ 29 75 226 250 258 627 754 761 800 889] [ 11 7 5 6 6 114 5 14 57 21] ==&gt;&gt; i, j 0 63 [ 33 226 622 680 754 762 801 888] [ 9 5 56 63 5 21 58 26] ==&gt;&gt; i, j 0 64 [ 33 226 615 622 693 753 762 801 889 972 993] [12 5 4 68 54 6 21 60 26 3 2] ==&gt;&gt; i, j 0 65 [ 39 615 621 693 801 891 972 987 992] [ 7 3 70 90 61 24 3 2 7] ==&gt;&gt; i, j 0 66 [ 39 617 657 801 891 970 987] [ 7 34 132 63 30 5 13] ==&gt;&gt; i, j 0 67 [ 39 88 621 633 657 680 801 804 891 969 986] [ 11 4 6 2 6 110 2 63 30 6 14] ==&gt;&gt; i, j 0 68 [ 39 87 681 715 740 766 807 873 891 969 984] [12 6 33 3 22 24 60 3 31 6 16] </code></pre>
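<p>For reference, the nested loop can at least be flattened with <code>np.ndindex</code>, storing the ragged per-pixel results in object arrays (a sketch on random stand-in data, since the real <code>tmp2</code> comes from the npz file):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def get_frequency_of_events_fast(aa):
    boundaries = np.where(np.diff(aa) != 0)[0] + 1
    d = np.diff(np.r_[0, boundaries, len(aa)])
    starts = np.concatenate(([0], boundaries))
    return starts[d &gt;= 2], d[d &gt;= 2]

tmp2 = np.random.randint(0, 3, size=(1000, 4, 5)).astype(float)  # stand-in for the npz data

ny, nx = tmp2.shape[1], tmp2.shape[2]
idx_all = np.empty((ny, nx), dtype=object)   # ragged results per (i, j)
frq_all = np.empty((ny, nx), dtype=object)
for i, j in np.ndindex(ny, nx):
    idx_all[i, j], frq_all[i, j] = get_frequency_of_events_fast(tmp2[:, i, j])
</code></pre>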
<python><numpy>
2025-03-05 22:43:40
1
725
Kernel
79,487,761
11,850,322
how to execute notebook in different namespace
<p>Let's say I have two notebooks:</p> <ul> <li>nba.ipynb</li> <li>nbb.ipynb</li> </ul> <p>In notebook A, I try to run <code>%run -n nbb.ipynb</code> so that the two notebooks run in two different namespaces. However, after executing <code>%run -n nbb.ipynb</code>, it still overwrites my <code>x</code> and <code>y</code> variables.</p> <p>Sample code: nba.ipynb</p> <pre><code>x = 2 y = 100 %store y %run -n nbb.ipynb %store -r y # it still updates y regardless of having this or not </code></pre> <p>nbb.ipynb</p> <pre><code>x=3 %store -r y y += 50 </code></pre> <p>How to fix this?</p>
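<p>One workaround I'm considering (untested; it assumes adding <code>papermill</code> as a dependency is acceptable) is to execute the second notebook in its own kernel, so nothing it defines can touch this notebook's namespace:</p>
<pre class="lang-py prettyprint-override"><code>import papermill as pm

# runs nbb.ipynb in a fresh kernel; its x and y never leak back here
pm.execute_notebook('nbb.ipynb', 'nbb_out.ipynb')
</code></pre>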
<python><jupyter-notebook><jupyter-lab>
2025-03-05 21:42:49
0
1,093
PTQuoc
79,487,738
2,398,143
Not able to generate graphviz when runing on Mac
<p>I have followed the recommended steps:</p> <ul> <li>Installed graphviz on mac using <code>brew install graphviz</code>. Please see below.</li> </ul> <pre><code>ls /usr/local/Cellar/graphviz/12.2.1/bin/ acyclic diffimg dot_sandbox graphml2gv gvmap gxl2gv patchwork twopi bcomps dijkstra edgepaint gv2gml gvmap.sh mm2gv prune unflatten ccomps dot fdp gv2gxl gvpack neato sccmap circo dot2gxl gc gvcolor gvpr nop sfdp cluster dot_builtins gml2gv gvgen gxl2dot osage tred </code></pre> <ul> <li>Installed the python package using <code>pip3 install graphviz</code>. Please see below.</li> </ul> <pre><code>ls /Users/foo/Library/Python/3.7/lib/python/site-packages/graphviz/ __init__.py _tools.py dot.py jupyter_integration.py rendering.py __pycache__ backend encoding.py parameters saving.py _compat.py base.py exceptions.py piping.py sources.py _defaults.py copying.py graphs.py quoting.py unflattening.py </code></pre> <ul> <li>The PATH variable is also set correctly.</li> </ul> <pre><code>echo $PATH /usr/local/Cellar/graphviz/12.2.1/bin:/usr/local/opt/curl/bin:/usr/local/opt/curl/bin: </code></pre> <p>However, when I run the python code to generate a basic graph:</p> <pre><code>import json import graphviz if __name__ == '__main__': json_string = '{&quot;192.168.133.166&quot;:[{}],&quot;192.168.133.180&quot;:[{}],&quot;192.168.133.64&quot;:[{}]} ' g1 = graphviz.Digraph(name=&quot;my_graph&quot;) data = json.loads(json_string) if isinstance(data, dict): for k, v in data.items(): g1.node(k, None) print(k, v) # Render the graph g1.render(filename=&quot;my_graph&quot;) </code></pre> <p>It fails with the following error.</p> <pre><code>graphviz.backend.execute.ExecutableNotFound: failed to execute PosixPath('dot'), make sure the Graphviz executables are on your systems' PATH </code></pre> <p>When I run the dot command from command line, it works fine. <code>/usr/local/Cellar/graphviz/12.2.1/bin/dot -T jpeg -O sample.dot</code></p> <p>Not sure what is still missing.</p>
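<p>For completeness, the workaround I'm about to test (sketch): prepend the Homebrew bin directory shown above to the PATH that the Python process itself sees, before importing graphviz:</p>
<pre class="lang-py prettyprint-override"><code>import os

# make sure this interpreter's environment can find 'dot', regardless of the shell PATH
os.environ['PATH'] = '/usr/local/Cellar/graphviz/12.2.1/bin' + os.pathsep + os.environ.get('PATH', '')

import graphviz

g1 = graphviz.Digraph(name='my_graph')
g1.node('192.168.133.166')
g1.render(filename='my_graph')
</code></pre>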
<python><python-3.x><graphviz><dot>
2025-03-05 21:30:17
1
2,183
AnilJ
79,487,515
11,063,709
In Gymnasium, how can one run a vector environment's function in parallel similar to how step() can be run in parallel?
<p>I have a custom Gymnasium environment, <code>RLToy-v0</code> from the library <a href="https://github.com/automl/mdp-playground" rel="nofollow noreferrer">MDP Playground</a>. It separates out the transition function and reward function from the step function and calls them individually inside <code>step()</code>.</p> <p>I know that using Gymnasium's <code>AsyncVectorEnv</code>, and initialising a set of vectorised environments as <code>envs</code>, we can call <code>envs.step()</code> and <code>envs.observation_space.sample()</code> to run the respective individual functions of the environments in parallel. However, I want to run only the reward function, named <code>reward_function()</code>, of the vectorised environments in parallel. How would one go about doing that for the separated out reward function of my custom environment? Is there an easy way to just call <code>envs.reward_function()</code>?</p>
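<p>What I have sketched so far (untested) relies on <code>AsyncVectorEnv.call()</code>, which forwards a named method call to every sub-environment in parallel; the <code>(state, action)</code> arguments passed to <code>reward_function</code> below are only a placeholder guess at its signature:</p>
<pre class="lang-py prettyprint-override"><code>import gymnasium as gym

# assumes RLToy-v0 has been registered (e.g. by importing mdp_playground beforehand)
envs = gym.vector.AsyncVectorEnv([lambda: gym.make('RLToy-v0') for _ in range(4)])
obs, info = envs.reset(seed=0)
actions = envs.action_space.sample()

# call() runs the named method inside each worker process in parallel
rewards = envs.call('reward_function', obs[0], actions[0])
envs.close()
</code></pre>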
<python><reinforcement-learning><gymnasium>
2025-03-05 19:40:07
0
1,442
Warm_Duscher
79,487,486
6,651,940
How can I extract absolute beta coefficients for all levels of multiple categorical variables in statsmodels?
<p>I’m performing linear regression in Python with statsmodels. I have two categorical predictors:</p> <ul> <li>sample: a factor with 8 levels</li> <li>distractor: a factor with 2 levels</li> </ul> <p>My goal is to determine the “absolute” beta (effect) for each level of each variable. When I fit the model with an intercept using treatment (dummy) coding (the default), statsmodels reports coefficients as differences relative to the reference (baseline) level. For example, consider the following output:</p> <blockquote> <pre><code>Intercept 5.076e-04 C(sample)[T.1] -2.333e-18 C(sample)[T.2] -1.558e-18 C(sample)[T.3] -7.167e-19 C(sample)[T.4] -1.402e-18 C(sample)[T.5] 7.694e-04 C(sample)[T.6] 5.478e-19 C(sample)[T.7] 4.516e-03 C(distractor)[T.9] -1.015e-03 </code></pre> </blockquote> <p>Here, the intercept represents the predicted response when sample is at its reference level (level 0) and distractor is at its reference level (level 8). The coefficient for C(distractor)[T.9] is then the difference from distractor level 8. That means that the “absolute” beta for distractor level 8 is just the intercept, and for distractor level 9 it is (Intercept + -1.015e-03).</p> <p>My confusion is:</p> <ol> <li>Is it expected that the reference level for both factors is represented solely by the intercept (i.e. that the first level of all variables always has the same beta value)?</li> <li>How do I extract a full set of betas (i.e. 8 for sample and 2 for distractor) from themodel?</li> </ol> <p>I tried removing the intercept (using - 1 in the formula), but then statsmodels still dropped one dummy variable for distractor due to collinearity (even though distractor clearly has two levels when modeled alone, as shown by fitting response ~ C(distractor) - 1 which returns two coefficients). The two factors are independent.</p> <p>What is the proper way to obtain “absolute” beta values for all levels? Is it correct to compute them by adding the intercept to the reported contrasts (using zero for the reference level)? If so, is there any cleaner method in statsmodels to directly return a parameter for each level?</p> <p>Example dummy code:</p> <pre><code>import pandas as pd import statsmodels.formula.api as smf # Create dummy data data = pd.DataFrame({ 'response': [0.51, 0.52, 0.53, 0.54, 0.60, 0.61, 0.62, 0.63, 0.55, 0.56], 'sample': ['0', '1', '2', '3', '4', '5', '6', '7', '0', '1'], # 8 levels (as strings) 'distractor': ['8', '8', '8', '8', '9', '9', '9', '9', '8', '9'] # 2 levels }) # Model with intercept (default treatment coding) model_with_int = smf.ols('response ~ C(sample) + C(distractor)', data=data).fit() print(&quot;Model with intercept:&quot;) print(model_with_int.params) # Expected output example: # Intercept 0.000508 (this is the effect at sample=0, distractor=8) # C(sample)[T.1] (difference between sample 1 and sample 0) # ... # C(distractor)[T.9] (difference between distractor 9 and distractor 8) # To get the &quot;absolute&quot; beta for each level: # For sample: # Level 0 beta = Intercept # Level 1 beta = Intercept + C(sample)[T.1] # ... and so on. 
# For distractor: # Level 8 beta = Intercept # Level 9 beta = Intercept + C(distractor)[T.9] print(&quot;\nAbsolute beta values:&quot;) abs_beta_sample = {} abs_beta_distractor = {} intercept = model_with_int.params['Intercept'] # For sample, assume reference level is '0' abs_beta_sample['0'] = intercept for lvl in ['1', '2', '3', '4', '5', '6', '7']: coef_name = f&quot;C(sample)[T.{lvl}]&quot; abs_beta_sample[lvl] = intercept + model_with_int.params.get(coef_name, 0) # For distractor, assume reference level is '8' abs_beta_distractor['8'] = intercept abs_beta_distractor['9'] = intercept + model_with_int.params.get(&quot;C(distractor)[T.9]&quot;, 0) print(&quot;Sample beta values:&quot;, abs_beta_sample) print(&quot;Distractor beta values:&quot;, abs_beta_distractor) </code></pre> <p>I would appreciate any guidance on whether this is the correct approach or if there’s a better way to directly obtain the full set of betas from the model.</p>
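<p>One alternative I've started sketching (untested): sum-to-zero (deviation) coding, where each level's coefficient is its deviation from the grand mean rather than from a reference level, so no level is hidden inside the intercept:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    'response': [0.51, 0.52, 0.60, 0.61, 0.55, 0.56],
    'sample': ['0', '1', '2', '0', '1', '2'],
    'distractor': ['8', '9', '8', '9', '8', '9'],
})

# Sum contrasts: one coefficient per non-reference level, each measured against the
# grand mean; the omitted level's effect is minus the sum of the reported ones
model_sum = smf.ols('response ~ C(sample, Sum) + C(distractor, Sum)', data=data).fit()
print(model_sum.params)
</code></pre>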
<python><linear-regression><statsmodels>
2025-03-05 19:23:39
0
1,130
Blade
79,487,231
438,758
How to generate a PDF with a grid of images per page?
<p>Our work involves visually inspecting a number of plots together. All plots of same size. we want to print them in pages to study. Something like a 8.5&quot;x11&quot; paper with 1&quot; margin gives me a 7.5&quot;x10&quot; space with 3:4 aspect ratio. This space can be filled with 9 or 16 images each of the same aspect ratio. Adding some space between images would be great (I think the term is gutter). My question is around how to generate this PDF from the individual source images?</p> <p>Constraints:</p> <ul> <li>Mac or linux command line preferred</li> <li><em>python3</em> or related code is preferred, since I know it somewhat</li> <li>We want to generate these in bulk, so I would like to avoid per-image UI drag-drop.</li> <li>The data to generate these plots is available, so I can regenerate these plots with minimal effort.</li> </ul> <p>So far we have tried:</p> <ul> <li>(current solution) Opening all images in Mac+preview then doing a page layout with multiple pages per sheet. This offers less control (inability to specify border size, gutter etc) and is manual for a batch of images. Does not go beyond 16 images; 25 would be great.</li> <li>Photoshop. Create a page with a grid of <em>frames</em>, then paste files as <em>linked</em>. Then we subsequently replace the underlying files and reopen the PSD. This is absolute control, but involves far to much manual work.</li> </ul>
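<p>To make the constraints concrete, this is the kind of script I have been sketching (untested) with matplotlib's <code>PdfPages</code>: US-letter pages, 1-inch margins, a 3x3 grid with small gutters; the <code>plot_*.png</code> filenames are placeholders:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib.backends.backend_pdf import PdfPages

image_paths = [f'plot_{i}.png' for i in range(18)]   # placeholder file names
rows, cols = 3, 3
per_page = rows * cols

with PdfPages('contact_sheet.pdf') as pdf:
    for start in range(0, len(image_paths), per_page):
        fig, axes = plt.subplots(rows, cols, figsize=(8.5, 11))
        # 1-inch margins on an 8.5x11 page, small gutters between cells
        fig.subplots_adjust(left=1/8.5, right=1 - 1/8.5, bottom=1/11, top=1 - 1/11,
                            wspace=0.05, hspace=0.05)
        for ax, path in zip(axes.flat, image_paths[start:start + per_page]):
            ax.imshow(mpimg.imread(path))
        for ax in axes.flat:
            ax.axis('off')
        pdf.savefig(fig)
        plt.close(fig)
</code></pre>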
<python><macos><image><pdf>
2025-03-05 17:20:41
4
7,215
BiGYaN
79,486,991
20,591,261
How to add a new level to JSON output using Polars in Python?
<p>I'm using Polars to process a DataFrame so I can save it as JSON. I know I can use the method <code>.write_json()</code>, however, I would like to add a new level to the JSON.</p> <p>My current approach:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;id&quot;: [1, 2, 3, 4, 5], &quot;variable1&quot;: [15, 25, 5, 10, 20], &quot;variable2&quot;: [40, 30, 50, 10, 20], }) ( df.write_json() ) </code></pre> <p>Current output:</p> <pre class="lang-json prettyprint-override"><code>'[{&quot;id&quot;:1,&quot;variable1&quot;:15,&quot;variable2&quot;:40},{&quot;id&quot;:2,&quot;variable1&quot;:25,&quot;variable2&quot;:30},{&quot;id&quot;:3,&quot;variable1&quot;:5,&quot;variable2&quot;:50},{&quot;id&quot;:4,&quot;variable1&quot;:10,&quot;variable2&quot;:10},{&quot;id&quot;:5,&quot;variable1&quot;:20,&quot;variable2&quot;:20}]' </code></pre> <p>But I would like to save it in this way, with the &quot;Befs&quot; key, so each &quot;Befs&quot; contains every record of the DataFrame.</p> <p>Desired output:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;Befs&quot;: [ { &quot;ID &quot;: 1, &quot;variable1&quot;: 15, &quot;variable2&quot;: 40 }, { &quot;ID &quot;: 2, &quot;variable1&quot;: 25, &quot;variable2&quot;: 30 } ] } </code></pre> <p>I have tried using <code>.pl.struct()</code> , but my attemps make no sense:</p> <pre><code>( df .select( pl.struct( pl.lit(&quot;Bef&quot;).alias(&quot;Bef&quot;), pl.col(&quot;id&quot;), pl.col(&quot;variable1&quot;), pl.col(&quot;variable2&quot;) ) ) .write_json() ) </code></pre>
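<p>A fallback I'm also weighing (sketch, using the <code>df</code> defined above): build the extra level outside Polars with the standard library, since <code>to_dicts()</code> already gives one dict per row:</p>
<pre class="lang-py prettyprint-override"><code>import json

payload = json.dumps({'Befs': df.to_dicts()})
print(payload)
</code></pre>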
<python><python-polars>
2025-03-05 15:46:06
1
1,195
Simon
79,486,984
6,195,489
Get an array containing differences between two NumPy arrays with some members the same
<p>I have two very large NumPy arrays containing coordinates [[x1,y1,z1],[x2,y2,z2]...] (~10^9 elements)</p> <p>The arrays are of different sizes, and there will be overlap between the coordinates. So [x1,y1,z1] may be in array1, but not in array2.</p> <p>I would like to quickly get all of the coordinates that are in one, and only one array.</p> <p>For example,</p> <pre><code>import numpy as np array1 = np.array([[1,2,3],[2,3,4],[3,4,5],[4,5,6]]) array2 = np.array([[2,3,4],[3,4,5],[4,5,6],[5,6,7]]) array_diff = some_function_to_get_diff(array1,array2) </code></pre> <p>would get:</p> <pre><code>array_diff: np.array([[1,2,3],[5,6,7]]) </code></pre> <p>I can use <code>setdiff1d</code> for one dimension, but it isn't clear to me how to do this for a 2d array. The files are quite large – 10GB or so, so being able to do this quickly/chunked in parallel would be a big plus.</p> <p><strong>Update</strong></p> <pre><code>def some_faster_function_to_get_diff(array1, array2): unique1 = np.unique(array1, axis=0) # Get the unique coordinates in array1 unique2 = np.unique(array2, axis=0) # Get the unique coordinates in array2 set1 = set(map(tuple, unique1)) set2 = set(map(tuple, unique2)) array1_only = np.array(list(set1 - set2)) array2_only = np.array(list(set2 - set1)) if array1_only.size == 0: return array2_only if array2_only.size == 0: return array1_only return np.vstack((array1_only, array2_only)) def symmetric_difference(arr1, arr2): dtype = [('f{}'.format(i), arr1.dtype) for i in range(arr1.shape[1])] struct_arr1 = np.ascontiguousarray(arr1).view(dtype) struct_arr2 = np.ascontiguousarray(arr2).view(dtype) unique_to_arr1 = np.setdiff1d(struct_arr1, struct_arr2, assume_unique=True) unique_to_arr2 = np.setdiff1d(struct_arr2, struct_arr1,assume_unique=True) result = np.concatenate([unique_to_arr1, unique_to_arr2]).view(arr1.dtype).reshape(-1, arr1.shape[1]) return result from sys import getsizeof for i in range(2,9): no_ele=3*10**i arr1 = np.random.rand(no_ele, 3) * 100 arr2 = np.random.rand(no_ele, 3) * 100 print(f&quot;i={i}: memory {getsizeof(arr2)/1024**3:2f}GB&quot;) arr2[50:60] = arr1[50:60] t0=time.time() diff1=symmetric_difference(arr1,arr2) t1=time.time() print(f&quot;{no_ele}: contiguous {t1-t0}&quot;) t0=time.time() diff2=some_faster_function_to_get_diff(arr1,arr2) t1=time.time() print(f&quot;{no_ele}: sets {t1-t0}&quot;) Gives: 2 memory 0.000007GB 300: contiguous 0.0004420280456542969 300: sets 0.0007300376892089844 3 memory 0.000067GB 3000: contiguous 0.0038480758666992188 3000: sets 0.0068819522857666016 4 memory 0.000671GB 30000: contiguous 0.052056074142456055 30000: sets 0.07555103302001953 5 memory 0.006706GB 300000: contiguous 0.6442217826843262 300000: sets 0.9859399795532227 6 memory 0.067055GB 3000000: contiguous 9.833515882492065 3000000: sets 12.196370840072632 7 memory 0.670552GB 30000000: contiguous 142.16755604743958 30000000: sets 138.87937593460083 </code></pre> <p>@Andrés code works, and is much quicker than using lists, converting to contiguous array also produces a similar run time.</p> <p>The problem now is that the arrays are too big to fit into memory - helpfully the package (laspy) allows chunking the data like:</p> <p>with laspy.open(las_file1) as f1, laspy.open(las_file2) as f2: for points1 in f1.chunk_iterator(chunk_size):</p>
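<p>For discussion, another approach I've been sketching (not yet tried on the full 10GB data): stack the per-array unique rows and keep only the rows that occur exactly once:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def symmetric_difference_counts(arr1, arr2):
    # rows unique to one array appear once in the stack, shared rows appear twice
    stacked = np.vstack((np.unique(arr1, axis=0), np.unique(arr2, axis=0)))
    uniq, counts = np.unique(stacked, axis=0, return_counts=True)
    return uniq[counts == 1]

array1 = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]])
array2 = np.array([[2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7]])
print(symmetric_difference_counts(array1, array2))   # [[1 2 3] [5 6 7]]
</code></pre>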
<python><arrays><numpy>
2025-03-05 15:44:05
1
849
abinitio
79,486,908
7,124,155
How can I use a PySpark UDF in a for loop?
<p>I need a PySpark UDF with a for loop to create new columns but with conditions based on the iterator value.</p> <pre><code>def test_map(col): if x == 1: if col &lt; 0.55: return 1.2 else: return 0.99 elif x == 2: if col &lt; 0.87: return 1.5 else: return 2.4 etc. test_map_udf = F.udf(test_map, IntegerType()) </code></pre> <p>And then iterate:</p> <pre><code>for x in range(1, 10): df = df.withColumn(f&quot;new_value_{x}&quot;, test_map_udf(F.col(f&quot;old_value_{x}&quot;)) </code></pre> <p>But it errors out because test_map doesn't know what x is when it runs, and you can't pass x to test_map_udf.</p> <p>Should I create a regular Python function that takes x, and that function calls the UDF?</p>
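<p>What I'm leaning towards (sketch, untested): a small factory that bakes <code>x</code> into the UDF through a closure, so each column gets its own UDF. Note the mapped values are floats, so I've used <code>DoubleType</code> here rather than <code>IntegerType</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(0.4, 0.9), (0.6, 0.5)], ['old_value_1', 'old_value_2'])

def make_test_map_udf(x):
    # x is captured in the closure, so the UDF knows which branch to take at run time
    def test_map(col):
        if x == 1:
            return 1.2 if col &lt; 0.55 else 0.99
        elif x == 2:
            return 1.5 if col &lt; 0.87 else 2.4
        return None
    return F.udf(test_map, DoubleType())

for x in range(1, 3):
    df = df.withColumn(f'new_value_{x}', make_test_map_udf(x)(F.col(f'old_value_{x}')))

df.show()
</code></pre>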
<python><pyspark><databricks>
2025-03-05 15:11:49
2
1,329
Chuck
79,486,685
4,451,315
Cumulative sum of one array in order defined by other array without doing 2 sort operations
<p>I have two numpy arrays:</p> <pre><code>import numpy as np arr1 = np.array([1,4,2,5,6,3,2]) arr2 = np.array([1,0,2,4,3,6,5]) </code></pre> <p>I would like to find the cumulative sum of <code>arr1</code> according to the order defined in <code>arr2</code>. Expected output:</p> <pre><code>array([ 5, 4, 7, 18, 13, 23, 20]) </code></pre> <p>Here's one way to calculate this:</p> <pre class="lang-py prettyprint-override"><code>sorting_indices = np.argsort(arr2) result = np.cumsum(arr1[sorting_indices])[np.argsort(sorting_indices)] </code></pre> <p>However, this involves doing two sort/argsort operations (<code>np.argsort(arr2)</code> and then <code>np.argsort(sorting_indices)</code>)</p> <p>Is it possible to produce the same result, but doing only a single sort or argsort operation?</p>
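<p>For what it's worth, here is a sketch that needs no sort at all; it relies on the assumption (true in the example above) that <code>arr2</code> is a permutation of <code>0..n-1</code>, so its argsort is just its inverse permutation:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

arr1 = np.array([1, 4, 2, 5, 6, 3, 2])
arr2 = np.array([1, 0, 2, 4, 3, 6, 5])

# invert the permutation by direct assignment instead of argsort
inv = np.empty_like(arr2)
inv[arr2] = np.arange(arr2.size)

result = np.cumsum(arr1[inv])[arr2]
print(result)   # [ 5  4  7 18 13 23 20]
</code></pre>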
<python><numpy>
2025-03-05 13:46:58
2
11,062
ignoring_gravity
79,486,249
16,389,095
How to stop updating a control during route change
<p>I developed a simple app using Python Flet 0.21.2. The app consists into two main views: the 'Welcome' and the 'Login', each of them defined into separate classes. Routing occurs into the main() function, in the route_change event, in which the 'views_manager()' is called. The 'Welcome' view implements a simple UI, with a container, a progress bar and a button defined into the build method. According to the user control lifecycle <a href="https://flet.dev/docs/getting-started/custom-controls/#life-cycle-methods" rel="nofollow noreferrer">documentation</a>, the <em>did_mount()</em> event occurs after the control is added to the page, whilst the <em>will_unmount()</em> event occurs before the control is removed from the page. So, in order to update the layout, which consists of changing the controls color, the <em>update_layout()</em> method is called into the <em>did_mount()</em> event. In the <em>will_unmount()</em> the value of a boolean variable is changed in order to stop the layout updating. Here is the full code</p> <pre><code>import flet as ft import time def views_manager(page): # returns a dictionary of views with keys corresponding to their routes return { '/' : Welcome( parent_page=page, route='/', on_click=lambda _: page.go('/login') ), '/login' : Login( parent_page=page, route='login', on_click=lambda _: page.go('/') ), } class Welcome(ft.View): def __init__(self, parent_page, route, on_click): super().__init__() self.padding = 0 self.parent_page = parent_page self.route = route self.login_on_click = on_click self.updating = None def update_layout(self): &quot;&quot;&quot; Updates the layout by incrementally changing the progress bar value and updating the background color of the top container and the login button every 10% of progress. This function iterates from 0 to 100, updating the progress bar's value and sleeping for 0.05 seconds between each increment. When the progress reaches a multiple of 10, it changes the background color of the top container and the login button based on a predefined list of green shades. After reaching 100%, it resets the progress. 
&quot;&quot;&quot; colors = [ft.colors.GREEN_50, ft.colors.GREEN_100, ft.colors.GREEN_200, ft.colors.GREEN_300, ft.colors.GREEN_400, ft.colors.GREEN_500, ft.colors.GREEN_600, ft.colors.GREEN_700, ft.colors.GREEN_800, ft.colors.GREEN_900] val=0 while val &lt; 101: if not self.updating: break else: #if self.pb in self.controls: self.pb.value = val * 0.01 #update container bgcolor every 10% mod = val % 10 if mod == 0.0: self.topContainer.bgcolor = colors[int(val/10) - 1] self.loginButton.style = ft.ButtonStyle(bgcolor=colors[int(val/10) - 1]) #update val value val += 1 if val == 100: val=0 #update the page time.sleep(0.05) self.update() def did_mount(self): self.updating = True self.update_layout() def will_unmount(self): self.updating = False def build(self): self.topContainer = ft.Container( bgcolor=ft.colors.GREEN, width=self.parent_page.window_width, height=self.parent_page.window_height * 0.25, ) self.pb = ft.ProgressBar() self.loginButton=ft.FilledButton(text=&quot;LOGIN&quot;, on_click = self.login_on_click) return ft.Column( controls=[ self.topContainer, self.pb, ft.Row( [ ft.Container( content=self.loginButton, padding=ft.padding.only(top=120), ) ], alignment=&quot;center&quot;, vertical_alignment=&quot;end&quot;, ) ], spacing=0, ) class Login(ft.View): def __init__(self, parent_page, route, on_click): super().__init__() self.parent_page = parent_page self.route = route self.back_to_main = on_click def build(self): return ft.Container( content = ft.IconButton( icon=ft.icons.ARROW_BACK, icon_size=30, on_click=self.back_to_main, ), ) def main(page: ft.Page): def route_change(route): page.views.clear() page.views.append( # returns a dictionary of views with keys corresponding to their routes views_manager(page)[page.route] ) page.update() page.theme = ft.Theme(color_scheme_seed = ft.colors.GREEN) page.theme_mode = &quot;light&quot; page.horizontal_alignment = &quot;center&quot; page.vertical_alignment = &quot;center&quot; page.window_height = 700 page.window_width = 400 page.window_left = 990 page.window_resizable = False page.window_maximizable = False page.on_route_change = route_change page.go(&quot;/&quot;) ft.app(target=main, assets_dir=&quot;assets&quot;) </code></pre> <p>When the user clicks on the button, this error occurs:</p> <pre><code>Future exception was never retrieved future: &lt;Future finished exception=AssertionError('Control must be added to the page first.')&gt; Traceback (most recent call last): File &quot;C:\Users\idoec\miniconda3\Lib\concurrent\futures\thread.py&quot;, line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\idoec\Desktop\main.py&quot;, line 120, in route_change page.update() File &quot;C:\Users\idoec\miniconda3\Lib\site-packages\flet_core\page.py&quot;, line 303, in update self.__handle_mount_unmount(*r) File &quot;C:\Users\idoec\miniconda3\Lib\site-packages\flet_core\page.py&quot;, line 448, in __handle_mount_unmount ctrl.did_mount() File &quot;C:\Users\idoec\Desktop\main.py&quot;, line 64, in did_mount self.update_layout() File &quot;C:\Users\idoec\Desktop\main.py&quot;, line 60, in update_layout self.update() File &quot;C:\Users\idoec\miniconda3\Lib\site-packages\flet_core\control.py&quot;, line 286, in update assert self.__page, &quot;Control must be added to the page first.&quot; AssertionError: Control must be added to the page first. </code></pre> <p>How can I stop the layout update when the 'Login' view is loaded?</p>
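<p>For context, the direction I've been sketching (untested; only the changed parts of <code>Welcome</code> are shown, everything else stays as above): run the loop in a background thread so <code>did_mount</code> returns immediately, and guard the final <code>update()</code> in case the view is removed between the flag check and the call:</p>
<pre class="lang-py prettyprint-override"><code>import threading
import time

import flet as ft

class Welcome(ft.View):              # sketch: only the changed methods are shown
    def did_mount(self):
        self.updating = True
        threading.Thread(target=self.update_layout, daemon=True).start()

    def will_unmount(self):
        self.updating = False

    def update_layout(self):
        val = 0
        while self.updating:
            self.pb.value = (val % 100) / 100
            try:
                self.update()        # may still race with the unmount...
            except AssertionError:
                break                # ...so stop quietly instead of failing the future
            val += 1
            time.sleep(0.05)
</code></pre>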
<python><flutter><user-interface><flet>
2025-03-05 10:55:12
1
421
eljamba
79,486,046
948,655
VSCode and Python virtualenv troubles
<p>I have 2 problems. Here <em>was</em> my set-up. I have all of my repositories in <code>~/dev/</code>, and I had all of my Python virtualenvs in <code>~/virtualenvs/</code>. This worked OK, but one annoying thing was that it was difficult to switch between virtualenvs, because for some reason VSCode doesn't automatically see where all of my virtualenvs are (which were in <code>~/virtualenvs/</code>), and I had to enter the full path to the Python executable of my virtualenv every single time. So my first question is, <strong>is there a way to tell VSCode where to go and search for virtualenvs?</strong></p> <p>In order to solve this I started making my virtualenvs inside the repositories themselves, correspondingly. So, in <code>~/dev/project_1</code> I'd have <code>~/dev/project_1/project_1.virtualenv</code>, and inside <code>~/dev/project_2</code> I'd have <code>~/dev/project_2/project_2.virtualenv</code>, and so forth, and both <code>~/dev/project_1</code> and <code>~/dev/project_2</code> and other projects will be in my workspace. <em>However</em>, VSCode now starts trying to analyse all of the python modules inside those virtualenvs, tens of thousands of files. This is despite my trying to stop it from doing it using Pylance exclude settings. So my second question is, <strong>how do I stop VSCode from analysing my virtualenvs</strong>?</p> <p>Answers to either or both of my problems would be much appreciated.</p> <p><strong>EDIT</strong> (the answer):</p> <p>It was my mistake. I have a <code>pyrightconfig.json</code> file that overrides any code analysis setting in VSCode. All I had to do was add <code>.venv</code> to the ignore-list in <code>pyrightconfig.json</code> and everything works fine now.</p>
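<p>For anyone landing here: the Python extension also has a <code>python.venvPath</code> setting that, as far as I can tell, points the interpreter picker at a folder of virtualenvs (which covers my first question). For the second, this is roughly what my <code>pyrightconfig.json</code> entry looks like now (the glob patterns are illustrative; match them to however the venv folders are actually named):</p>
<pre class="lang-json prettyprint-override"><code>{
    &quot;exclude&quot;: [&quot;**/.venv&quot;, &quot;**/*.virtualenv&quot;]
}
</code></pre>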
<python><visual-studio-code><virtualenv><pylance>
2025-03-05 09:50:11
2
8,813
Ray
79,485,999
5,013,037
Gunicorn / Uvicorn extremely rare 502 Errors
<p>We observed an issue with our containerized Python FastAPI application, which we deploy using Gunicorn and Uvicorn on Kubernetes. In extremely rare cases (~1 in 100.000) a single request seem to be silently dropped / not correctly processed by the the gunicorn master process or the uvicorn workers. My nginx reverse proxy outputs the following error message:</p> <pre><code>[error] 26#26: *504921 connect() failed (111: Connection refused) while connecting to upstream, client: 10.95.59.1, server: , request: &quot;POST /v1/info HTTP/1.1&quot;, upstream: &quot;http://10.96.229.82:8000/v1/info&quot;, host: &quot;our-internal-host.com:443&quot; &quot;POST /v1/info HTTP/1.1&quot; 502 </code></pre> <p>The problem is this specific request is not visible in the application logs. With debug logs activated neither gunicorn nor uvicorn logs the request at all. We can not observe any pattern when this happens (e.g. it happens during the day with high load ~ 300 req/min as well as during the night were we have only ~20 req/min). It happens on every endpoint we have, so I'd assume it has also nothing to do with the business logic inside the endpoint. Furthermore, I am not able to artificially reproduce this error in our DEV stages. If I execute a loadtest and throw 500.000 requests at the service everything seems to be fine (either from locally or from inside our Kubernetes cluster).</p> <p>This is my gunicorn config (redacted the unimportant stuff):</p> <pre><code>from apscheduler.schedulers.background import BackgroundScheduler loglevel = &quot;info&quot; if stage == &quot;PROD&quot; else &quot;debug&quot; bind = &quot;0.0.0.0:8000&quot; worker_class = &quot;uvicorn.workers.UvicornWorker&quot; workers = 4 timeout = 180 graceful_timeout = 30 accesslog = &quot;-&quot; errorlog = &quot;-&quot; access_log_format = '%(p)s %(t)s &quot;%(r)s&quot; %(s)s %(D)s' pidfile = &quot;/tmp/app.pid&quot; def on_starting(server): scheduler = BackgroundScheduler() scheduler.add_job(call_liveness_probe, &quot;cron&quot;, minute=&quot;*/2&quot;, start_date=datetime.now() + timedelta(minutes=5)) scheduler.start() </code></pre> <p>We used to also have the <code>max_requests</code>-parameter activated to prevent memory leaks and restart workers every 5000 requests. This resulted in similar undetected and unlogged errors. The default which is now used is set to disabled.</p> <p>Dockerfile using a RHEL9 Base Image (only important stuff):</p> <pre><code>ARG BASE_IMAGE FROM ${BASE_IMAGE} USER 1001 ENV PYTHONUNBUFFERED=1 CMD gunicorn -c /opt/app-root/config/gunicorn_config.py app.main:app </code></pre> <p>We use Python 3.11 (also happens with 3.12) and the following packages are installed</p> <pre><code>asyncpg==0.30.0 aiohttp[speedups]==3.11.* apscheduler==3.11.* autodynatrace==2.1.0 fastapi==0.115.* gunicorn==23.0.* orjson==3.10.* numpy==2.2.* packaging==23.2 pandas==2.2.* pydantic==2.10.* pyopenssl==25.0.* requests==2.32.* uvicorn[standard]==0.34.* </code></pre> <p>Our endpoints are mostly async (I/O on DB operations), our average response times are ~20ms and always lay in between 3ms - 250ms. We use fastapi BackgroundTasks to write data to the DB, so requests are already returned and do not have to wait for some DB writes to finish. I also tried to increase the uvicorn workers to 12 or decrease to 1. Neither solution mitigated or increased/decreased the error rate.</p> <p>Does anybody have an idea what the cause could be or how to better debug it? 
(I don't want to do a TCP Dump in production and can't reproduce it consistently (or for that matter at all) myself and have to wait potentially weeks for it to happen on our DEV stages. This makes it really hard in addition to the fact that gunicorn and uvicorn do not log that request at all)</p>
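<p>In the meantime I'm adding more instrumentation. A sketch of extra server hooks appended to the gunicorn config, so that any worker death or replacement can be correlated with the nginx 502 timestamps:</p>
<pre class="lang-py prettyprint-override"><code>import logging

hook_logger = logging.getLogger('gunicorn.error')

def worker_int(worker):
    hook_logger.warning('worker %s received SIGINT/SIGQUIT', worker.pid)

def worker_abort(worker):
    hook_logger.warning('worker %s aborted (e.g. timeout SIGABRT)', worker.pid)

def worker_exit(server, worker):
    hook_logger.warning('worker %s exited', worker.pid)

def child_exit(server, worker):
    hook_logger.warning('master reaped worker %s', worker.pid)
</code></pre>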
<python><nginx><fastapi><gunicorn><uvicorn>
2025-03-05 09:30:09
0
364
fragant
79,485,854
4,473,615
Not a valid OPENSSH private key file - Paramiko
<p>I'm trying to connect to a host using Paramiko. I'm able to successfully connect to the host through WinSCP using the private key, but the script fails with the error below:</p> <p><strong>raise SSHException(&quot;not a valid {} private key file&quot;.format(tag)) paramiko.ssh_exception.SSHException: not a valid OPENSSH private key file</strong></p> <pre><code>import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect('hostname.com', username='user', key_filename='private.ppk') stdin, stdout, stderr = ssh.exec_command('ls') print(stdout.readlines()) ssh.close() </code></pre> <p>Any suggestion would be appreciated.</p>
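<p>If it helps, a sketch of loading the key object explicitly instead of passing <code>key_filename</code> (untested; <code>private_openssh.pem</code> is a placeholder for an OpenSSH/PEM-format copy of the key):</p>
<pre class="lang-py prettyprint-override"><code>import paramiko

pkey = paramiko.RSAKey.from_private_key_file('private_openssh.pem')

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('hostname.com', username='user', pkey=pkey)
</code></pre>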
<python><ssh><sftp><paramiko><scp>
2025-03-05 08:30:55
1
5,241
Jim Macaulay
79,485,776
1,942,868
How to kick the old session from websocket
<pre><code>async def connect(self): self.room_group_name = self.scope[&quot;url_route&quot;][&quot;kwargs&quot;][&quot;room_name&quot;] pool = redis.ConnectionPool(host='localhost', port=6379, db=0) self.redis = redis.Redis(connection_pool=pool) logger.info(&quot;channel_name:{} / group_name {}&quot;.format(self.channel_name,self.room_group_name)) await self.channel_layer.group_add( self.room_group_name, self.channel_name ) await self.accept() for key in self.redis.keys(): if self.redis.type(key).decode() == &quot;zset&quot;: print(&quot;key:&quot;,key) print(&quot;len:&quot;,len(self.redis.zrange(key,0,-1))) async def disconnect(self, close_code): logger.info(&quot;somehow disconnect&quot;) await self.channel_layer.group_discard( self.room_group_name, self.channel_name ) </code></pre> <p>I have this script for websockt connection.</p> <p>then</p> <p>I am directly checking the record in <code>redis</code> to get the current members of chatroom.</p> <p>If a user log-out and call <code>disconnect</code> , session is disappeared.</p> <p>However If a user didn't call the <code>disconnect</code> function, session will be stored in redis table forver.</p> <p>How can I remove the old session or set the timeout??</p>
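<p>One knob I'm looking at (untested): <code>channels_redis</code> has a <code>group_expiry</code> option (in seconds) in the channel layer config, which should let stale group members age out even when <code>disconnect</code> never runs. A sketch of the settings change:</p>
<pre class="lang-py prettyprint-override"><code># settings.py sketch
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            'hosts': [('localhost', 6379)],
            'group_expiry': 3600,   # assumption: an hour of staleness is acceptable
        },
    },
}
</code></pre>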
<javascript><python><websocket><redis>
2025-03-05 07:57:41
0
12,599
whitebear
79,485,612
2,981,639
Adding hours to a Polars time column
<p>I have a table representing a schedule, i.e. it contains day (monday-sunday), start_time and end_time fields</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({ &quot;day&quot;: [&quot;monday&quot;, &quot;tuesday&quot;, &quot;wednesday&quot;, &quot;thursday&quot;, &quot;friday&quot;, &quot;saturday&quot;, &quot;sunday&quot;], &quot;enabled&quot;: [True, True, True, True, True, False, False], &quot;start_time&quot;: [&quot;09:00&quot;, &quot;09:00&quot;, &quot;09:00&quot;, &quot;09:00&quot;, &quot;09:00&quot;, &quot;00:00&quot;, &quot;00:00&quot;], &quot;end_time&quot;: [&quot;18:00&quot;, &quot;18:00&quot;, &quot;18:00&quot;, &quot;18:00&quot;, &quot;18:00&quot;, &quot;00:00&quot;, &quot;00:00&quot;], }) df = df.with_columns(start_time = pl.col(&quot;start_time&quot;).str.to_time(&quot;%H:%M&quot;)) df = df.with_columns(end_time = pl.col(&quot;end_time&quot;).str.to_time(&quot;%H:%M&quot;)) print(df) </code></pre> <pre class="lang-none prettyprint-override"><code>shape: (7, 4) ┌───────────┬─────────┬────────────┬──────────┐ │ day ┆ enabled ┆ start_time ┆ end_time │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ bool ┆ time ┆ time │ ╞═══════════╪═════════╪════════════╪══════════╡ │ monday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ tuesday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ wednesday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ thursday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ friday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ saturday ┆ false ┆ 00:00:00 ┆ 00:00:00 │ │ sunday ┆ false ┆ 00:00:00 ┆ 00:00:00 │ └───────────┴─────────┴────────────┴──────────┘ </code></pre> <p>I need to subtract n hours from the start_time and add n hours to the end_time. I cannot find a polars operation to add/subtract hours from a <code>pl.time</code> - I've tried adding a <code>pl.duration</code> but that only appears to work for <code>date</code> and <code>datetime</code>.</p> <p>One work-around I've assumed is to turn start_time / end_time into a <code>pl.datetime</code> (i.e. use some constant date), do the operation and then decompose the result back to a time. This has one option of being easier to ensure I don't over/underflow (i.e. subtract 2 hours from 01:00 and end up with 23:00) but I'm wondering it's possible to add/subtracts hours/minutes to a time in polars?</p>
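<p>The datetime round-trip I mentioned, written out as a sketch (untested): anchor the times to a dummy date with <code>dt.combine</code>, shift with <code>pl.duration</code>, then drop back to a time with <code>dt.time()</code>. Note that a shift past midnight silently wraps here, which is exactly the case I'd still need to handle:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import date

import polars as pl

df = pl.DataFrame({'start_time': ['09:00'], 'end_time': ['18:00']}).with_columns(
    pl.col('start_time', 'end_time').str.to_time('%H:%M')
)

n = 2  # hours to shift by
anchor = pl.lit(date(2000, 1, 1))  # dummy date, only there to allow duration arithmetic

df = df.with_columns(
    start_time=(anchor.dt.combine(pl.col('start_time')) - pl.duration(hours=n)).dt.time(),
    end_time=(anchor.dt.combine(pl.col('end_time')) + pl.duration(hours=n)).dt.time(),
)
print(df)
</code></pre>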
<python><dataframe><python-polars>
2025-03-05 06:20:43
1
2,963
David Waterworth
79,485,504
1,563,347
Why unrecognized Argparse arg1 in bare "my_script .py arg1" vs "python my_script.py arg1"
<p>I can run a script on the command line with an argument either as the bare &quot;script.py arg1&quot; or I can run it as &quot;python script.py arg1&quot;. Both work until I use argparse for command line parameters.</p> <p>The bare script does not recognize arg1:</p> <pre><code>Directories.py usage: directories.py [-h] [-f FILE] directory directories.py: error: the following arguments are required: directory </code></pre> <p>&quot;python directories.py arg1&quot; does work with argparse.</p> <p>WHY?</p> <p>Can I fix that bare script? Can I make the bare &quot;directories.py arg1&quot; work?</p> <p>(I don't really understand why under Windows the bare script executes. I assume there is a python association.)</p> <p>Here is the argparse part:</p> <pre><code>if __name__ == &quot;__main__&quot;: parser = argparse.ArgumentParser(description=&quot;Get tree from single dir or dir list from file&quot;) parser.add_argument(&quot;directory&quot;, type=str, help=&quot;Get single directory&quot;) # Adding argument parser.add_argument(&quot;-f&quot;, &quot;--file&quot;, help = &quot;Get directories from a file&quot;) # Read arguments from command line args = parser.parse_args() main(args) </code></pre>
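<p>A minimal check that should show whether the argument even reaches the interpreter when the script is run bare (my current guess is that the Windows file-type association drops it):</p>
<pre class="lang-py prettyprint-override"><code># temporarily put this at the very top of directories.py
import sys
print(sys.argv)   # compare the output of `directories.py arg1` vs `python directories.py arg1`
</code></pre>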
<python><windows><command-line><argparse>
2025-03-05 05:07:35
0
601
wgw
79,485,464
5,003,606
string format a Decimal instance using an underscore as a thousands separator?
<p>I can easily do this for floats using an f-string like this:</p> <pre><code>x = float(123456789.123456) print(f&quot;{x:_f}&quot;) </code></pre> <p>prints <code>123_456_789.123456</code> exactly as expected.</p> <p>But if try the same technique for a Decimal</p> <pre><code>from decimal import Decimal x = Decimal(&quot;123456789.123456&quot;) print(f&quot;{x:_f}&quot;) </code></pre> <p>this code crashes with <code>ValueError: invalid format string</code></p> <p>What?</p> <p>Does Decimal only partly support Python's <a href="https://docs.python.org/3/library/string.html#format-specification-mini-language" rel="nofollow noreferrer">Format Specification Mini-Language</a>?</p> <p>Is this a known bug with Decimal?</p> <p>This is especially strange because Decimal does support underscore as a thousands separator when parsing FROM a string:</p> <pre><code>from decimal import Decimal x = Decimal(&quot;123_456_789.123456&quot;) print(f&quot;{x:f}&quot;) </code></pre> <p>prints <code>123456789.123456</code> exactly as expected.</p> <p>Is this there another way that I can do this?</p>
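<p>The workaround I'm using for now (sketch): the <code>,</code> thousands separator is accepted for <code>Decimal</code>, so I format with commas and swap them for underscores afterwards (assuming no other commas can appear in the output):</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal

x = Decimal('123456789.123456')
# ',' works in Decimal.__format__ here, while '_' raises ValueError
print(f'{x:,f}'.replace(',', '_'))   # 123_456_789.123456
</code></pre>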
<python><string><format><decimal><separator>
2025-03-05 04:42:59
0
951
HaroldFinch
79,485,298
4,352,047
Numpy Structured Array - Memory copy error
<p>I am using python <code>structured arrays</code> to arrange relevant data then passing it into C++ land. I noticed on occasion a memory copy error which leads to calculation issues. I first explored using hashing functions to make sure it was not corrupted but now use <code>np.where()</code> to see where the values are differing.</p> <p><strong>Issue:</strong> When I add several numpy arrays into a structured array, the underlying arrays sometimes develop errors. On my MacBook with Python 3.12.8 &amp; Numpy 2.0.2, it completes the full 1000 loops (pasted correct output below). On my Ubuntu Server 24.02 w/ Python 3.12.8 and Numpy 2.0.2, after running for a few iterations, the server eventually develops errors in the underlying arrays. Sometimes the error consists of 2 elements, other times it can be 16 elements. Sometimes the error occurs as quickly as the 2nd loop, other times it occurs in 45 loops.</p> <p><strong>Trouble shooting attempt:</strong> I made a separate &quot;copy only&quot; calling np.copy on arrays. Code runs without issue on both systems. You can enable this by uncommenting <code>TestCopy(...)</code> function call and commenting out the Coalesce function.</p> <p><strong>Possible thoughts:</strong></p> <ul> <li>Could this be a hardware or memory instability error?</li> <li>Or is there some OS specific numpy implementation details?</li> <li>The traditional .copy() appears to work, so maybe it's a numpy structured array issue implementation?</li> </ul> <p>I stripped away an enormous code base and now have reduced it to a small repeatable file that I pasted below.</p> <p><strong>How example below works:</strong></p> <ol> <li>Creating several random in-memory numpy arrays (large, +100M length)</li> <li>Passing this data to <code>CoalesceData</code> function</li> <li>Run <code>original</code> arrays through the storage mechanism</li> <li>Combine arrays into a <code>structured array</code></li> <li>Check the arrays inside of the <code>structured array</code> to see if they match the <code>original</code> arrays</li> </ol> <p><strong>Environment:</strong> Python 3.12.8 &amp; Numpy 2.0.2 (have tried other versions of numpy as well)</p> <pre><code>import numpy as np import os import sys class HashStorage(): def __init__(self): self.storage = {} def check_diff(self, name, arr): if name in self.storage: expected_arr = self.storage[name] print( f'Checking: {name} for differences, new_id: {id(arr)} old_id: {id(expected_arr)}') diffs = np.where(arr != expected_arr) diff_length = len(diffs[0]) assert diff_length == 0, f'indices: {diffs}, {expected_arr[diffs]} : {arr[diffs]}' else: self.storage[name] = arr def check_numpy_hash(self, name, arr): # called_name = f'{sys._getframe(1).f_code.co_name}:{sys._getframe(1).f_lineno}_{name}' called_name = f'{sys._getframe(1).f_code.co_name}_{name}' # computed_hash = checksum_numpy_array(arr) self.check_diff(called_name, arr) hash_global_storage = HashStorage() def CoalesceData(o_np, h_np, l_np, c_np, timestamp_np, s_np, a_np, m_np): global hash_global_storage hash_global_storage.check_numpy_hash('o_np', o_np) hash_global_storage.check_numpy_hash('h_np', h_np) hash_global_storage.check_numpy_hash('l_np', l_np) hash_global_storage.check_numpy_hash('c_np', c_np) hash_global_storage.check_numpy_hash('timestamp_np', timestamp_np) hash_global_storage.check_numpy_hash('s_np', s_np) hash_global_storage.check_numpy_hash('a_np', a_np) hash_global_storage.check_numpy_hash('m_np', m_np) # create structured array dt = np.dtype([ ('open', o_np.dtype), # 4 bytes 
('high', h_np.dtype), # 4 bytes ('low', l_np.dtype), # 4 bytes ('close', c_np.dtype), # 4 bytes ('timestamp', timestamp_np.dtype), # 8 bytes ('a', a_np.dtype), # 4 bytes ('s', s_np.dtype), # 1 byte ('m', m_np.dtype) # 2 bytes ], align=True) structured_array = np.zeros(len(o_np), dtype=dt) structured_array['open'] = o_np structured_array['high'] = h_np structured_array['low'] = l_np structured_array['close'] = c_np structured_array['timestamp'] = timestamp_np structured_array['s'] = s_np structured_array['a'] = a_np structured_array['m'] = m_np print(structured_array.flags) print(f'Structured Array address: {id(structured_array)}') # Now, check arrays in the structured array to make sure everything is deterministic # autopep8: off hash_global_storage.check_numpy_hash('o_np', structured_array['open']) hash_global_storage.check_numpy_hash('h_np', structured_array['high']) hash_global_storage.check_numpy_hash('l_np', structured_array['low']) hash_global_storage.check_numpy_hash('c_np', structured_array['close']) hash_global_storage.check_numpy_hash('timestamp_np', structured_array['timestamp']) hash_global_storage.check_numpy_hash('s_np', structured_array['s']) hash_global_storage.check_numpy_hash('a_np', structured_array['a']) hash_global_storage.check_numpy_hash('m_np', structured_array['m']) # autopep8: on hash_global_storage.check_numpy_hash( 'structured_array', structured_array) return structured_array def TestCopy(o_np, h_np, l_np, c_np, timestamp_np, s_np, a_np, m_np): global hash_global_storage hash_global_storage.check_numpy_hash('o_np', o_np) hash_global_storage.check_numpy_hash('h_np', h_np) hash_global_storage.check_numpy_hash('l_np', l_np) hash_global_storage.check_numpy_hash('c_np', c_np) hash_global_storage.check_numpy_hash('timestamp_np', timestamp_np) hash_global_storage.check_numpy_hash('s_np', s_np) hash_global_storage.check_numpy_hash('a_np', a_np) hash_global_storage.check_numpy_hash('m_np', m_np) o_copy = o_np.copy() h_copy = h_np.copy() l_copy = l_np.copy() c_copy = c_np.copy() timestamp_copy = timestamp_np.copy() s_copy = s_np.copy() a_copy = a_np.copy() m_copy = m_np.copy() hash_global_storage.check_numpy_hash('o_np', o_copy) hash_global_storage.check_numpy_hash('h_np', h_copy) hash_global_storage.check_numpy_hash('l_np', l_copy) hash_global_storage.check_numpy_hash('c_np', c_copy) hash_global_storage.check_numpy_hash('timestamp_np', timestamp_copy) hash_global_storage.check_numpy_hash('s_np', s_copy) hash_global_storage.check_numpy_hash('a_np', a_copy) hash_global_storage.check_numpy_hash('m_np', m_copy) if __name__ == '__main__': # create a test df_length = 124_882_868 o_np = np.random.rand(df_length).astype(np.float32) h_np = np.random.rand(df_length).astype(np.float32) l_np = np.random.rand(df_length).astype(np.float32) c_np = np.random.rand(df_length).astype(np.float32) timestamp_np = np.full(df_length, 12313131, dtype=np.int64) a_np = np.random.rand(df_length).astype(np.float32) s_np = np.random.randint(6, size=df_length, dtype=np.int8) m_np = np.random.randint(1000, size=df_length, dtype=np.int16) for i in range(0, 1000): print(f&quot;--------Working on iteration: {i}&quot;) # autopep8: off CoalesceData(o_np, h_np, l_np, c_np, timestamp_np, a_np, s_np, m_np) # TestCopy(o_np, h_np, l_np, c_np, timestamp_np, a_np, s_np, m_np) # Works # autopep8: on </code></pre> <p>When I run the following code on my MacBook Air, using a Conda environment (Python 3.12.8 and Numpy 2.0.2), the program successfully executes without an memory copy error.</p> <pre><code>--------Working 
on iteration: 999 Checking: CoalesceData_o_np for differences, new_id: 4354134000 old_id: 4354134000 Checking: CoalesceData_h_np for differences, new_id: 4354134096 old_id: 4354134096 Checking: CoalesceData_l_np for differences, new_id: 4354134192 old_id: 4354134192 Checking: CoalesceData_c_np for differences, new_id: 4354134288 old_id: 4354134288 Checking: CoalesceData_timestamp_np for differences, new_id: 4354133904 old_id: 4354133904 Checking: CoalesceData_s_np for differences, new_id: 4354134480 old_id: 4354134480 Checking: CoalesceData_a_np for differences, new_id: 4354134672 old_id: 4354134672 Checking: CoalesceData_m_np for differences, new_id: 4354134768 old_id: 4354134768 C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False Structured Array address: 4354134576 Checking: CoalesceData_o_np for differences, new_id: 4354134960 old_id: 4354134000 Checking: CoalesceData_h_np for differences, new_id: 4354134960 old_id: 4354134096 Checking: CoalesceData_l_np for differences, new_id: 4354134960 old_id: 4354134192 Checking: CoalesceData_c_np for differences, new_id: 4354134960 old_id: 4354134288 Checking: CoalesceData_timestamp_np for differences, new_id: 4354134960 old_id: 4354133904 Checking: CoalesceData_s_np for differences, new_id: 4354134960 old_id: 4354134480 Checking: CoalesceData_a_np for differences, new_id: 4354134960 old_id: 4354134672 Checking: CoalesceData_m_np for differences, new_id: 4354134960 old_id: 4354134768 Checking: CoalesceData_structured_array for differences, new_id: 4354134576 old_id: 4354134384 </code></pre> <p>When running on my Ubuntu server, also with Python 3.12.8 and Numpy 2.0.2, I get:</p> <pre><code>--------Working on iteration: 3 Checking: CoalesceData_o_np for differences, new_id: 133772149707888 old_id: 133772149707888 Checking: CoalesceData_h_np for differences, new_id: 133772149707984 old_id: 133772149707984 Checking: CoalesceData_l_np for differences, new_id: 133772149708080 old_id: 133772149708080 Checking: CoalesceData_c_np for differences, new_id: 133772149708176 old_id: 133772149708176 Checking: CoalesceData_timestamp_np for differences, new_id: 133772149707792 old_id: 133772149707792 Checking: CoalesceData_s_np for differences, new_id: 133772149708368 old_id: 133772149708368 Checking: CoalesceData_a_np for differences, new_id: 133772149708560 old_id: 133772149708560 Checking: CoalesceData_m_np for differences, new_id: 133772149708656 old_id: 133772149708656 C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False Structured Array address: 133772149708464 Checking: CoalesceData_o_np for differences, new_id: 133772149708848 old_id: 133772149707888 Checking: CoalesceData_h_np for differences, new_id: 133772149708848 old_id: 133772149707984 Checking: CoalesceData_l_np for differences, new_id: 133772149708848 old_id: 133772149708080 Traceback (most recent call last): File &quot;/home/memo/bt/np_error_debug.py&quot;, line 105, in &lt;module&gt; CoalesceData(o_np, h_np, l_np, c_np, timestamp_np, File &quot;/home/memo/bt/np_error_debug.py&quot;, line 74, in CoalesceData hash_global_storage.check_numpy_hash('l_np', structured_array['low']) File &quot;/home/memo/bt/np_error_debug.py&quot;, line 26, in check_numpy_hash self.check_diff(called_name, arr) File &quot;/home/memo/bt/np_error_debug.py&quot;, line 18, in check_diff assert diff_length == 0, f'indices: {diffs}, {expected_arr[diffs]} : {arr[diffs]}' ^^^^^^^^^^^^^^^^ AssertionError: indices: 
(array([97087371, 97087372]),), [0.61208063 0.56773347] : [0. 0.] </code></pre> <p><strong>A slightly different number of elements error:</strong></p> <pre><code> File &quot;/home/memo/bt/np_error_debug.py&quot;, line 41, in check_diff assert diff_length == 0, f'indices: {diffs}, {expected_arr[diffs]} : {arr[diffs]}' ^^^^^^^^^^^^^^^^ AssertionError: indices: (array([120253452, 120253453, 120253454, 120253455, 120253456, 120253457, 120253458, 120253459, 120253460, 120253461, 120253462, 120253463, 120253464, 120253465, 120253466, 120253467]),), [0.38179564 0.7686447 0.06995761 0.76895595 0.7134335 0.12035605 0.9882022 0.7208525 0.5113986 0.11400567 0.08236554 0.09342069 0.85959834 0.6065078 0.5138216 0.66513485] : [0.87417835 0.9489298 0.16752674 0.16250128 0.13623057 0.8921764 0.14262542 0.8389298 0.37004778 0.5679792 0.79316586 0.1225264 0.86325306 0.6123406 0.5594882 0.2388553 ] </code></pre>
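<p>As an extra data point, a tiny sanity check I can run on both machines (sketch): field assignment into a structured array should produce an independent copy of the source array:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

src = np.random.rand(10).astype(np.float32)
dt = np.dtype([('open', np.float32)], align=True)
sa = np.zeros(len(src), dtype=dt)
sa['open'] = src

print(np.shares_memory(sa['open'], src))   # expected: False (the field is a copy)
print(np.array_equal(sa['open'], src))     # expected: True
</code></pre>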
<python><arrays><python-3.x><numpy>
2025-03-05 02:16:18
1
379
Deftness
79,485,287
6,471,140
how to pass additional query filters to a SelfQueryRetriever?
<p>We are implementing a SelfQueryRetriever using OpenSearch as vectorstore, in general it works fine generating the metadata filters from the user query but we need some way to append other filters to the query, and I cannot find how to do that in the documentation, the use case is to make some things such as:</p> <ul> <li>The UI will have some filters that product owners want exposed as tradditional filters instead of filters to be extracted from user query.</li> <li>There are roles and the space of possible search results for a query is restricted by the data available only for users of that role.</li> </ul> <p>A possible solution is to add those to the metadata fields of the SelfQueryRetriever and append a &quot;system&quot; component to the user query and let the self query retriever create the filters, but to me it does't sound so clean and intuitive.</p> <p>How can additional filters be added to the query?</p>
<python><nlp><artificial-intelligence><langchain>
2025-03-05 02:03:39
0
3,554
Luis Leal
79,485,215
7,436,627
SIR parameter estimation with gradient descent and autograd
<p>I am trying to apply a very simple parameter estimation of a SIR model using a gradient descent algorithm. I am using the package <a href="https://github.com/HIPS/autograd" rel="nofollow noreferrer">autograd</a> since the audience (this is for a sort of workshop for undergraduate students) only knows numpy and I don't want to jump to JAX or any other ML framework (yet).</p> <pre class="lang-py prettyprint-override"><code>import autograd import autograd.numpy as np import matplotlib.pyplot as plt from scipy.integrate import solve_ivp, odeint from autograd.builtins import tuple from autograd import grad, jacobian def sir(y, t, beta, gamma): S, I, R = y dS_dt = - beta * S * I dI_dt = beta * S * I - gamma * I dR_dt = gamma * I return np.array([dS_dt, dI_dt, dR_dt]) def loss(params, Y0, t, y_obs): beta, gamma = params # Solve the ODE system using odeint sol = odeint(sir, y0=Y0, t=t, args=(beta, gamma)) # Compute the L2 norm error between the observed and predicted values err = np.linalg.norm(y_obs - sol, 2) return err # Generate data np.random.seed(42) Y0 = np.array([0.95, 0.05, 0.0]) t = np.linspace(0, 30, 101) beta, gamma = 0.5, 1/14 sol = odeint(sir, y0=Y0, t=t, args=tuple([beta, gamma])) y_obs = sol + np.random.normal(0, 0.05, size=sol.shape) plt.plot(t, y_obs) </code></pre> <p>Then, what I would like to do is something like this</p> <pre class="lang-py prettyprint-override"><code># --- THIS DOES NOT WORK --- params = np.array([beta_init, gamma_init]) # Get the gradient of the loss function with respect to the parameters (beta, gamma) loss_grad = grad(loss, argnum=0) # params is the first argument of loss # Perform gradient descent for i in range(n_iterations): grads = loss_grad(params, Y0, t, y_obs) # Compute gradients params -= learning_rate * grads # Update parameters </code></pre> <p>A minimal example would be:</p> <pre class="lang-py prettyprint-override"><code>loss_grad = grad(loss, argnum=0) params = np.array([beta, gamma]) grads = loss_grad(params, Y0, t, y_obs) </code></pre> <p>However, I get the following error:</p> <pre><code>ValueError: setting an array element with a sequence. </code></pre> <p>which start at the very be</p> <p>Is there any way I can calculate the derivatives of the loss function with respect to my parameters (beta and gamma)? To be honest I am still getting used to auto-differentiation.</p>
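<p>The direction I'm exploring (untested sketch): autograd cannot differentiate through SciPy's <code>odeint</code> itself, but it ships a wrapped version in <code>autograd.scipy.integrate</code>. Assuming that wrapper takes the same <code>(func, y0, t, args)</code> call as SciPy's and that the extra arguments must be passed as an <code>autograd.builtins.tuple</code>, the loss would change roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>import autograd.numpy as np
from autograd import grad
from autograd.builtins import tuple as ag_tuple
from autograd.scipy.integrate import odeint   # differentiable wrapper (assumed signature)

def sir(y, t, beta, gamma):
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def loss(params, Y0, t, y_obs):
    beta, gamma = params
    sol = odeint(sir, Y0, t, ag_tuple((beta, gamma)))
    return np.linalg.norm(y_obs - sol, 2)

loss_grad = grad(loss, argnum=0)   # then loss_grad(params, Y0, t, y_obs) as before
</code></pre>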
<python><gradient><gradient-descent><autograd>
2025-03-05 01:02:57
1
323
Alonso Ogueda Oliva
79,485,175
6,041,629
Python frequency analysis - AIC of infinity for several distributions despite a good fit
<p>I am doing frequency analysis on many sets of snow data, which sometimes have (valid) zero values in the Annual Maxima Series.</p> <p>To help guide the selection of distribution, I am calculating some goodness of fit statistics including AIC and BIC. I am finding that even for apparently reasonable distribution fits, the zero observations in my data create issues in calculating AIC/BIC. The probability of the zero observations is zero, the log of which is -inf, so the AIC and BIC end up being infinite. This seems to happen even when I assign very small nonzero values to the zero values in the input data.</p> <p>A minimum reproducible example below. I am wondering whether there is some kind of a defensible workaround for this to get a proxy AIC/BIC for the cases where this happens (frequently with Weibull but also other distributions on occasion - GEV shown here).</p> <pre><code>import numpy as np import lmoments3.distr as ld import matplotlib.pyplot as plt data=np.array([279, 244, 226, 216, 300, 208, 267, 239, 201, 277, 135, 79, 201, 15, 198, 231, 251, 277, 168, 43, 53, 160, 366, 163, 239, 287, 196, 117, 0, 208]) data[data==0]=1E-07 return_periods = np.array([1.1111, 1.2500, 1.4286,2,3,5,10,20,50,100,200,1000]) # Fit GEV distribution to the data params = ld.gev.lmom_fit(data) # Calculate quantiles for the given return periods quantiles = ld.gev.ppf(1 - 1 / return_periods, **params) # Calculate Cunnane plotting positions for the data n = len(data) sorted_data = np.sort(data) ranks = np.arange(1, n + 1) empirical_cdf = (ranks - 0.4) / (n + 0.2) # Cunnane plotting positions data_return_periods = 1 / (1 - empirical_cdf) # Plot the quantile plot plt.figure(figsize=(8, 6)) plt.plot(return_periods, quantiles, marker='o', linestyle='-', color='b', label='GEV Quantiles') plt.scatter(data_return_periods, sorted_data, color='r', label='Input Data (Cunnane)') plt.xscale('log') plt.yscale('linear') plt.xlabel('Return Period (Years)', fontsize=12) plt.ylabel('Quantile Value', fontsize=12) plt.title('Quantile Plot for GEV Distribution with Input Data (Cunnane)', fontsize=14) plt.grid(True, which=&quot;both&quot;, ls=&quot;--&quot;) plt.legend() plt.show() n = len(data) empirical_cdf = (np.arange(1, n + 1) - 0.4) / (n + 0.2) # Cunnane plotting positions return_periods = 1 / (1 - empirical_cdf) # Convert CDF to return periods logpdf_values = ld.gev.logpdf(sorted_data, **params) pdf_values = ld.gev.pdf(sorted_data, **params) log_likelihood = np.sum(logpdf_values) print(f&quot;log likelihood: {log_likelihood}&quot;) k = len(params) # Number of parameters in the model aic = 2 * k - 2 * log_likelihood bic = k * np.log(n) - 2 * log_likelihood print(f&quot;AIC: {aic}&quot;) print(f&quot;BIC: {bic}&quot;) </code></pre> <p><a href="https://i.sstatic.net/rU9AGGPk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rU9AGGPk.png" alt="Quantile fit" /></a></p>
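<p>To make the failure mode explicit, this is the diagnostic I'm running (it continues the snippet above, so <code>ld</code>, <code>params</code> and <code>sorted_data</code> are already defined); the floor value is an arbitrary assumption, and whether flooring is defensible is exactly what I'm asking:</p>
<pre class="lang-py prettyprint-override"><code>logpdf_values = ld.gev.logpdf(sorted_data, **params)
bad = ~np.isfinite(logpdf_values)
print(sorted_data[bad], logpdf_values[bad])    # the (near-)zero observations go to -inf

floored = np.clip(logpdf_values, -1e6, None)   # arbitrary floor on the log-density
print('floored log likelihood:', floored.sum())
</code></pre>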
<python><frequency-analysis><goodness-of-fit>
2025-03-05 00:18:21
0
526
Kingle
79,485,170
12,125,755
How to get field value from array of objects in python?
<p>I have a json file with a list of objects from which I need to get some objects with filtration.</p> <p>My json:</p> <pre><code>[ { &quot;_id&quot;: &quot;1&quot;, &quot;type&quot;: &quot;answer&quot;, &quot;segments&quot;: [ { &quot;slug&quot;: &quot;employees&quot;, &quot;title&quot;: &quot;employees&quot;, &quot;id&quot;: &quot;111111&quot; }, { &quot;slug&quot;: &quot;middle-employees&quot;, &quot;title&quot;: &quot;М&quot;, &quot;id&quot;: &quot;22222222&quot; } ], &quot;rubric&quot;: { &quot;slug&quot;: &quot;online&quot;, &quot;title&quot;: &quot;deal&quot;, &quot;id&quot;: &quot;123&quot; } }, { &quot;_id&quot;: &quot;2&quot;, &quot;type&quot;: &quot;instruction&quot;, &quot;segments&quot;: [ { &quot;slug&quot;: &quot;employees&quot;, &quot;title&quot;: &quot;employees&quot;, &quot;id&quot;: &quot;111111&quot; }, { &quot;slug&quot;: &quot;middle-employees&quot;, &quot;title&quot;: &quot;М&quot;, &quot;id&quot;: &quot;22222222&quot; } ], &quot;rubric&quot;: { &quot;slug&quot;: &quot;offline&quot;, &quot;title&quot;: &quot;off&quot;, &quot;id&quot;: &quot;345&quot; } } ] </code></pre> <p>My task is to check if there is some id value among <code>segments</code> array and if <code>rubric</code> id is some other special value. But something is wrong in my filtration and I always get empty <code>filtered_data</code> and I can't understand why.</p> <p>It's my first attempt to write python code, please don't be so strict.</p> <p>my python simplest code:</p> <pre><code>def get_data_from_json(self, json_data): data = json.loads(json_data) filtered_data = [ item for item in data if hasattr(item, &quot;segments&quot;) and len(item[&quot;segments&quot;]) != 0 and any(seg_id in item[&quot;segments&quot;][&quot;id&quot;] for seg_id in filter_config[&quot;segment_ids&quot;]) and item[&quot;rubric&quot;][&quot;id&quot;] in filter_config[&quot;rubric_ids&quot;] and item.get(&quot;type&quot;) == &quot;answer&quot; ] # then I'm gonna use filtered_data </code></pre>
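<p>For comparison, here is a sketch of the filter written with dictionary access instead of <code>hasattr</code> and with the <code>segments</code> list iterated element by element (the <code>filter_config</code> values are placeholders, since that object is not shown above):</p>
<pre><code>import json

filter_config = {'segment_ids': ['111111'], 'rubric_ids': ['123']}  # placeholder values

def get_data_from_json(json_data):
    data = json.loads(json_data)
    return [
        item for item in data
        if item.get('segments')                               # dicts have keys, not attributes
        and any(seg['id'] in filter_config['segment_ids']     # check each segment object
                for seg in item['segments'])
        and item['rubric']['id'] in filter_config['rubric_ids']
        and item.get('type') == 'answer'
    ]
</code></pre>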
<python><arrays>
2025-03-05 00:12:06
2
2,010
NeverSleeps
79,485,018
159,072
A python translation of `XorShift128Plus`
<p>I need to translate <a href="https://github.com/m4rs-mt/ILGPU/blob/master/Src/ILGPU.Algorithms/Random/XorShift128Plus.cs" rel="nofollow noreferrer">ILGPU's <code>XorShift128Plus</code> class</a> into Python.</p> <p>First, I have written a unit test for the C# version:</p> <pre class="lang-cs prettyprint-override"><code>using Microsoft.VisualStudio.TestTools.UnitTesting; using ILGPU.Algorithms.Random; using System; namespace MyTests { [TestClass] public class XorShift128PlusTests { [TestMethod] public void TestInitialization() { var rng = new XorShift128Plus(123456789UL, 987654321UL); Assert.AreEqual(123456789UL, rng.State0); Assert.AreEqual(987654321UL, rng.State1); } [TestMethod] public void TestNextULong() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var actual = rng.NextULong(); Assert.AreEqual(1035635210406904UL, actual); } [TestMethod] public void TestNextUInt() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var actual = rng.NextUInt(); Assert.AreEqual(2631329311, actual); } [TestMethod] public void TestNext() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var actual = rng.Next(); Assert.AreEqual(483845663, actual); } [TestMethod] public void TestNextLong() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var actual = rng.NextLong(); Assert.AreEqual(1035635210406904L, actual); } [TestMethod] public void TestNextFloat() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var value = rng.NextFloat(); var actual = value.Trim(6); var expected = 0.0001122839f.Trim(6); float tolerance = 1e-7f; // Small allowable error due to float limitations Assert.IsTrue(Math.Abs(expected - actual) &lt; tolerance, $&quot;Expected {expected}, but got {actual}&quot;); } [TestMethod] public void TestNextDouble() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var actual = rng.NextDouble(); Assert.AreEqual(0.000112283794502565.Trim(15), actual.Trim(15)); } [TestMethod] public void TestShiftPeriod() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var oldState0 = rng.State0; var oldState1 = rng.State1; rng.ShiftPeriod(10); Assert.IsTrue(rng.State0 != oldState0 || rng.State1 != oldState1); } [TestMethod] public void TestNextProvider() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var newRng = rng.NextProvider(); Assert.IsInstanceOfType(newRng, typeof(XorShift128Plus)); Assert.IsTrue(newRng.State0 != rng.State0 || newRng.State1 != rng.State1); } [TestMethod] public void TestCreateProvider() { var rng = new XorShift128Plus(123456789UL, 987654321UL); var newRng = rng.CreateProvider(new Random(42)); Assert.IsInstanceOfType(newRng, typeof(XorShift128Plus)); Assert.IsTrue(newRng.State0 &gt; 0UL &amp;&amp; newRng.State1 &gt; 0UL); } [TestMethod] public void TestEquality() { var rng1 = new XorShift128Plus(123456789UL, 987654321UL); var rng2 = new XorShift128Plus(123456789UL, 987654321UL); var rng3 = new XorShift128Plus(111111111UL, 222222222UL); Assert.AreEqual(rng1, rng2); Assert.AreNotEqual(rng1, rng3); } [TestMethod] public void TestHash() { var rng1 = new XorShift128Plus(123456789UL, 987654321UL); var rng2 = new XorShift128Plus(123456789UL, 987654321UL); Assert.AreEqual(rng1.GetHashCode(), rng2.GetHashCode()); } [TestMethod] public void TestToString() { var rng = new XorShift128Plus(123456789UL, 987654321UL); Assert.AreEqual(&quot;[123456789, 987654321]&quot;, rng.ToString()); } } } public static class DoubleTrimmer { public static double Trim(this double value, int decimal_places) { return Math.Round(value, decimal_places); 
} public static double Clamp(this double value, double min, double max) { if (value &lt; min) return min; if (value &gt; max) return max; return value; } public static float Trim(this float value, int decimal_places) { float factor = (float)Math.Pow(10f, decimal_places); return (float)Math.Round(value * factor) / factor; } public static float Clamp(this float value, float min, float max) { if (value &lt; min) return min; if (value &gt; max) return max; return value; } public static decimal Trim(this decimal value, int decimal_places) { return Math.Round(value, decimal_places); } public static decimal Clamp(this decimal value, decimal min, decimal max) { if (value &lt; min) return min; if (value &gt; max) return max; return value; } } </code></pre> <p>Then I have translated the C# class into a Python class:</p> <pre class="lang-py prettyprint-override"><code>import random import struct class XorShift128Plus: &quot;&quot;&quot; Implements a simple and fast xor-shift RNG. &quot;&quot;&quot; def __init__(self, state0: int, state1: int): if state0 == 0 and state1 == 0: raise ValueError(&quot;State must not be zero&quot;) self.state0 = state0 self.state1 = state1 @staticmethod def create(seed_random: random.Random = None): if seed_random is None: seed_random = random.Random() state0 = (seed_random.randint(1, 0x7FFFFFFF) &lt;&lt; 32) | seed_random.randint(0, 0xFFFFFFFF) state1 = (seed_random.randint(0, 0xFFFFFFFF) &lt;&lt; 32) | seed_random.randint(0, 0xFFFFFFFF) return XorShift128Plus(state0, state1) def next_ulong(self) -&gt; int: x = self.state0 y = self.state1 self.state0 = y x ^= (x &lt;&lt; 23) &amp; 0xFFFFFFFFFFFFFFFF self.state1 = x ^ y ^ ((x &gt;&gt; 17) &amp; 0xFFFFFFFFFFFFFFFF) ^ ((y &gt;&gt; 26) &amp; 0xFFFFFFFFFFFFFFFF) return (self.state1 + y) &amp; 0xFFFFFFFFFFFFFFFF def next_uint(self) -&gt; int: return self.next_ulong() &amp; 0xFFFFFFFF def next(self) -&gt; int: return self.next_uint() &amp; 0x7FFFFFFF def next_long(self) -&gt; int: return struct.unpack(&quot;q&quot;, struct.pack(&quot;Q&quot;, self.next_ulong()))[0] def next_float(self) -&gt; float: return self.next_long() * (1.0 / (1 &lt;&lt; 63)) def next_double(self) -&gt; float: return self.next_long() * (1.0 / (1 &lt;&lt; 63)) def shift_period(self, shift: int): local_rng = XorShift128Plus.create() for _ in range(shift): local_rng.next_ulong() self.state0 = local_rng.next_ulong() self.state1 = local_rng.next_ulong() def next_provider(self): return XorShift128Plus(self.next_ulong(), self.next_ulong()) def create_provider(self, other_random): return XorShift128Plus(abs(other_random.next_long()) + 1, abs(other_random.next_long()) + 1) def __eq__(self, other): if not isinstance(other, XorShift128Plus): return False return self.state0 == other.state0 and self.state1 == other.state1 def __hash__(self): return hash((self.state0, self.state1)) def __repr__(self): return f&quot;XorShift128Plus({self.state0}, {self.state1})&quot; </code></pre> <p>Then, I have translated the C# version of the unit-test into a PyTest module.</p> <pre class="lang-py prettyprint-override"><code>import unittest from XorShift128Plus import XorShift128Plus class TestXorShift128Plus(unittest.TestCase): def test_initialization(self): rng = XorShift128Plus(123456789, 987654321) self.assertEqual(rng.state0, 123456789) self.assertEqual(rng.state1, 987654321) def test_next_ulong(self): rng = XorShift128Plus(123456789, 987654321) actual = rng.next_ulong() self.assertEqual(actual, 1035635210406904) def test_next_uint(self): rng = XorShift128Plus(123456789, 
987654321) actual = rng.next_uint() self.assertEqual(actual, 2631329311) def test_next(self): rng = XorShift128Plus(123456789, 987654321) actual = rng.next() self.assertEqual(actual, 483845663) def test_next_long(self): rng = XorShift128Plus(123456789, 987654321) actual = rng.next_long() self.assertEqual(actual, 1035635210406904) def test_next_float(self): rng = XorShift128Plus(123456789, 987654321) value = rng.next_float() actual = round(value, 6) expected = round(0.0001122839, 6) print(f&quot;Original: {value}&quot;) print(f&quot;Rounded : {actual}&quot;) print(f&quot;Expected: {expected}&quot;) tolerance = 1e-7 # Small allowable error due to float limitations self.assertTrue(abs(expected - actual) &lt; tolerance, f&quot;Expected {expected}, but got {actual}&quot;) def test_next_double(self): rng = XorShift128Plus(123456789, 987654321) actual = rng.next_double() self.assertEqual(round(actual, 15), round(0.000112283794502565, 15)) def test_shift_period(self): rng = XorShift128Plus(123456789, 987654321) old_state0, old_state1 = rng.state0, rng.state1 rng.shift_period(10) self.assertTrue(rng.state0 != old_state0 or rng.state1 != old_state1) def test_next_provider(self): rng = XorShift128Plus(123456789, 987654321) new_rng = rng.next_provider() self.assertIsInstance(new_rng, XorShift128Plus) self.assertTrue(new_rng.state0 != rng.state0 or new_rng.state1 != rng.state1) def test_create_provider(self): rng = XorShift128Plus(123456789, 987654321) new_rng = rng.create_provider(rng) self.assertIsInstance(new_rng, XorShift128Plus) self.assertTrue(new_rng.state0 &gt; 0 and new_rng.state1 &gt; 0) def test_equality(self): rng1 = XorShift128Plus(123456789, 987654321) rng2 = XorShift128Plus(123456789, 987654321) rng3 = XorShift128Plus(111111111, 222222222) self.assertEqual(rng1, rng2) self.assertNotEqual(rng1, rng3) def test_hash(self): rng1 = XorShift128Plus(123456789, 987654321) rng2 = XorShift128Plus(123456789, 987654321) self.assertEqual(hash(rng1), hash(rng2)) def test_repr(self): rng = XorShift128Plus(123456789, 987654321) self.assertEqual(repr(rng), &quot;XorShift128Plus(123456789, 987654321)&quot;) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>Now, the issue is, Python's <code>test_next()</code> and <code>test_next_uint()</code> are failing:</p> <pre class="lang-py prettyprint-override"><code>....F 483845663 != 483740664 Expected :483740664 Actual :483845663 &lt;Click to see difference&gt; self = &lt;RefPythonCode.unit_test_for_XorShift128Plus.TestXorShift128Plus testMethod=test_next&gt; first = 483740664, second = 483845663, msg = None def _patched_equals(self, first, second, msg=None): try: &gt; old(self, first, second, msg) E AssertionError: 483740664 != 483845663 C:\Program Files\JetBrains\PyCharm 2018.3.7\helpers\pycharm\teamcity\diff_tools.py:32: AssertionError During handling of the above exception, another exception occurred: self = &lt;RefPythonCode.unit_test_for_XorShift128Plus.TestXorShift128Plus testMethod=test_next&gt; def test_next(self): rng = XorShift128Plus(123456789, 987654321) actual = rng.next() &gt; self.assertEqual(actual, 483845663) unit_test_for_XorShift128Plus.py:25: ..Original: 0.00011228379450256478 Rounded : 0.000112 Expected: 0.000112 ..F 2631329311 != 2631224312 Expected :2631224312 Actual :2631329311 &lt;Click to see difference&gt; self = &lt;RefPythonCode.unit_test_for_XorShift128Plus.TestXorShift128Plus testMethod=test_next_uint&gt; first = 2631224312, second = 2631329311, msg = None def _patched_equals(self, first, second, msg=None): try: 
&gt; old(self, first, second, msg) E AssertionError: 2631224312 != 2631329311 C:\Program Files\JetBrains\PyCharm 2018.3.7\helpers\pycharm\teamcity\diff_tools.py:32: AssertionError During handling of the above exception, another exception occurred: self = &lt;RefPythonCode.unit_test_for_XorShift128Plus.TestXorShift128Plus testMethod=test_next_uint&gt; def test_next_uint(self): rng = XorShift128Plus(123456789, 987654321) actual = rng.next_uint() &gt; self.assertEqual(actual, 2631329311) unit_test_for_XorShift128Plus.py:20: ... </code></pre> <p>How can I make the Python version match the output with the C# version?</p>
<python><c#><random>
2025-03-04 22:27:58
0
17,446
user366312
79,484,907
2,279,501
How to Resolve Circular Import and Task Not Found Issues in Flask and Celery Integration?
<p>I'm working on a Flask application that uses Celery for background task processing. I'm encountering issues with circular imports and task not found errors. Here is the structure of my project:</p> <ul> <li>app.py: Main Flask application file</li> <li>tasks.py: Celery task definitions</li> <li>celery_config.py: Celery configuration</li> </ul> <p><strong>app.py:</strong></p> <pre class="lang-py prettyprint-override"><code>import os import subprocess import xml.etree.ElementTree as ET from flask import Flask, request, render_template, redirect, flash, jsonify, url_for from openai import OpenAI from dotenv import load_dotenv from werkzeug.utils import secure_filename import time from openai import RateLimitError from celery_config import make_celery from tasks import process_ttml # Import the process_ttml task load_dotenv() OPENAI_API_KEY = os.getenv(&quot;OPENAI_API_KEY&quot;) client = OpenAI(api_key=OPENAI_API_KEY) app = Flask(__name__) # Start monitor_ttml.py as a background process subprocess.Popen([&quot;python&quot;, &quot;monitor_ttml.py&quot;]) app.secret_key = 'supersecretkey' app.config['UPLOAD_FOLDER'] = './uploads' app.config['ALLOWED_EXTENSIONS'] = {'ttml'} if not os.path.exists(app.config['UPLOAD_FOLDER']): os.makedirs(app.config['UPLOAD_FOLDER']) app.config.update( CELERY_BROKER_URL='redis://localhost:6379/0', CELERY_RESULT_BACKEND='redis://localhost:6379/0' ) celery = make_celery(app) def allowed_file(filename): return '.' in filename and filename.rsplit('.', 1)[1].lower() in app.config['ALLOWED_EXTENSIONS'] def format_timestamp(seconds): &quot;&quot;&quot;Format seconds into HH:MM:SS format.&quot;&quot;&quot; h = int(seconds // 3600) m = int((seconds % 3600) // 60) s = int(seconds % 60) return f&quot;{h:02}:{m:02}:{s:02}&quot; def extract_transcript(ttml_content, include_timestamps=False): &quot;&quot;&quot;Extract transcript from TTML content.&quot;&quot;&quot; try: root = ET.fromstring(ttml_content) transcript = [] # Find all &lt;p&gt; elements in the TTML file paragraphs = root.findall(&quot;.//{http://www.w3.org/ns/ttml}p&quot;) for paragraph in paragraphs: paragraph_text = &quot;&quot; for span in paragraph.findall(&quot;.//{http://www.w3.org/ns/ttml}span&quot;): if span.text: paragraph_text += span.text.strip() + &quot; &quot; paragraph_text = paragraph_text.strip() if paragraph_text: if include_timestamps and &quot;begin&quot; in paragraph.attrib: timestamp = format_timestamp(float(paragraph.attrib[&quot;begin&quot;].replace(&quot;s&quot;, &quot;&quot;))) transcript.append(f&quot;[{timestamp}] {paragraph_text}&quot;) else: transcript.append(paragraph_text) return &quot;\n\n&quot;.join(transcript) except ET.ParseError as e: return f&quot;Error parsing TTML file: {e}&quot; def summarize_transcript(transcript): max_chunk_size = 2000 # Adjust this value based on your token limit transcript_chunks = [transcript[i:i + max_chunk_size] for i in range(0, len(transcript), max_chunk_size)] summaries = [] for chunk in transcript_chunks: retry_count = 0 while retry_count &lt; 5: # Retry up to 5 times try: response = client.chat.completions.create( model=&quot;gpt-4&quot;, messages=[ {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are a helpful assistant that summarizes podcast transcripts.&quot;}, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;Summarize the following podcast transcript in bullet points:\n\n{chunk}&quot;} ], max_tokens=200 ) summaries.append(response.choices[0].message.content.strip()) break # Exit the retry loop if 
successful except RateLimitError as e: retry_count += 1 wait_time = 2 ** retry_count # Exponential backoff print(f&quot;Rate limit exceeded. Retrying in {wait_time} seconds...&quot;) time.sleep(wait_time) except Exception as e: print(f&quot;An error occurred: {e}&quot;) break return &quot;\n\n&quot;.join(summaries) @app.route('/') def index(): return render_template('index.html') @app.route('/upload', methods=['POST']) def upload_file(): if 'file' not in request.files: flash('No file part') return redirect(request.url) file = request.files['file'] if file.filename == '': flash('No selected file') return redirect(request.url) if file and allowed_file(file.filename): filename = secure_filename(file.filename) file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename) file.save(file_path) include_timestamps = 'timestamps' in request.form task = process_ttml.delay(file_path, include_timestamps) return redirect(url_for('task_status', task_id=task.id)) else: flash('Invalid file type') return redirect(request.url) @app.route('/status/&lt;task_id&gt;') def task_status(task_id): task = process_ttml.AsyncResult(task_id) if task.state == 'PENDING': response = { 'state': task.state, 'status': 'Pending...' } elif task.state != 'FAILURE': response = { 'state': task.state, 'result': task.result } else: response = { 'state': task.state, 'status': str(task.info) } return jsonify(response) @app.route('/upload_api', methods=['POST']) def upload_api(): if 'file' not in request.files: return jsonify({&quot;error&quot;: &quot;No file part&quot;}), 400 file = request.files['file'] if file.filename == '': return jsonify({&quot;error&quot;: &quot;No selected file&quot;}), 400 if file and allowed_file(file.filename): filename = secure_filename(file.filename) file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename) file.save(file_path) with open(file_path, 'r', encoding='utf-8') as f: ttml_content = f.read() transcript = extract_transcript(ttml_content) summary = summarize_transcript(transcript) return jsonify({&quot;summary&quot;: summary}) else: return jsonify({&quot;error&quot;: &quot;Invalid file type&quot;}), 400 if __name__ == &quot;__main__&quot;: app.run(debug=True) </code></pre> <p><strong>tasks.py:</strong></p> <pre class="lang-py prettyprint-override"><code>from celery_config import make_celery from app import app, extract_transcript, summarize_transcript from openai.error import RateLimitError celery = make_celery(app) @celery.task(bind=True, max_retries=5) def process_ttml(self, file_path, include_timestamps): try: with open(file_path, 'r', encoding='utf-8') as f: ttml_content = f.read() transcript = extract_transcript(ttml_content, include_timestamps) summary = summarize_transcript(transcript) return summary except RateLimitError as e: self.retry(countdown=2 ** self.request.retries, exc=e) except Exception as e: raise self.retry(exc=e) </code></pre> <p><strong>celery_config.py:</strong></p> <pre class="lang-py prettyprint-override"><code>from celery import Celery def make_celery(app): celery = Celery( app.import_name, backend=app.config['CELERY_RESULT_BACKEND'], broker=app.config['CELERY_BROKER_URL'] ) celery.conf.update(app.config) return celery </code></pre> <p><strong>Problem:</strong></p> <p>When I run the Flask application, I get the following error:</p> <pre><code>NameError: name 'process_ttml' is not defined </code></pre> <p>If I try to fix this by importing <code>process_ttml</code> in app.py, I get a circular import error.</p> <p><strong>Detailed Error</strong></p> <ol> <li>Circular 
Import error</li> </ol> <pre><code>celery -A tasks.celery worker --loglevel=info Usage: celery [OPTIONS] COMMAND [ARGS]... Try 'celery --help' for help. Error: Invalid value for '-A' / '--app': Unable to load celery application. While trying to load the module tasks.celery the following error occurred: Traceback (most recent call last): File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/celery/bin/celery.py&quot;, line 58, in convert return find_app(value) ^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/celery/app/utils.py&quot;, line 383, in find_app sym = symbol_by_name(app, imp=imp) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/kombu/utils/imports.py&quot;, line 59, in symbol_by_name module = imp(module_name, package=package, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/celery/utils/imports.py&quot;, line 109, in import_from_cwd return imp(module, package=package) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/python@3.11/3.11.9_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1147, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 690, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 940, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/Users/username/Documents/Python Scripts/apple-podcast-transcript-extractor/tasks.py&quot;, line 3, in &lt;module&gt; from app import app, extract_transcript, summarize_transcript File &quot;/Users/username/Documents/Python Scripts/apple-podcast-transcript-extractor/app.py&quot;, line 12, in &lt;module&gt; from tasks import process_ttml ImportError: cannot import name 'process_ttml' from partially initialized module 'tasks' (most likely due to a circular import) (/Users/username/Documents/Python Scripts/apple-podcast-transcript-extractor/tasks.py) </code></pre> <ol start="2"> <li><code>process_ttml</code> not found</li> </ol> <pre><code> NameError NameError: name 'process_ttml' is not defined Traceback (most recent call last) File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/flask/app.py&quot;, line 1498, in __call__ return self.wsgi_app(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/flask/app.py&quot;, line 1476, in wsgi_app response = self.handle_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/flask/app.py&quot;, line 1473, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/flask/app.py&quot;, line 
882, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/flask/app.py&quot;, line 880, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Local Documents/python_environments/pytorchenv/lib/python3.11/site-packages/flask/app.py&quot;, line 865, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/username/Documents/Python Scripts/apple-podcast-transcript-extractor/app.py&quot;, line 121, in upload_file task = process_ttml.delay(file_path, include_timestamps) ^^^^^^^^^^^^ NameError: name 'process_ttml' is not defined The debugger caught an exception in your WSGI application. You can now look at the traceback which led to the error. To switch between the interactive traceback and the plaintext one, you can click on the &quot;Traceback&quot; headline. From the text traceback you can also create a paste of it. For code execution mouse-over the frame you want to debug and click on the console icon on the right side. You can execute arbitrary Python code in the stack frames and there are some extra helpers available for introspection: dump() shows all variables in the frame dump(obj) dumps all that's known about the object Brought to you by DON'T PANIC, your friendly Werkzeug powered traceback interpreter. </code></pre> <p><strong>Question:</strong></p> <p>How can I resolve the circular import issue and ensure that the <code>process_ttml</code> task is correctly defined and used in my Flask application with Celery?</p> <p><strong>Additional Information:</strong></p> <ul> <li>I'm using Redis as the message broker for Celery.</li> <li>The Redis server is running and accessible.</li> </ul>
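<p>For reference while answering: the restructuring I keep seeing suggested (sketched below with an invented <code>celery_app.py</code> module name, so treat it as an assumption rather than a verified fix) is to create the Celery instance in its own module so that <code>tasks.py</code> never imports <code>app.py</code> at import time:</p>
<pre class="lang-py prettyprint-override"><code># celery_app.py  (new module -- the name is just an example)
from celery import Celery

celery = Celery(__name__,
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

def init_celery(app):
    celery.conf.update(app.config)

    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery


# tasks.py -- imports only celery_app, never app.py at module level
from celery_app import celery

@celery.task(bind=True, max_retries=5)
def process_ttml(self, file_path, include_timestamps):
    # deferred import breaks the import cycle; app.py is only loaded when the task runs
    from app import extract_transcript, summarize_transcript
    with open(file_path, 'r', encoding='utf-8') as f:
        ttml_content = f.read()
    return summarize_transcript(extract_transcript(ttml_content, include_timestamps))

# app.py then calls init_celery(app) and imports process_ttml *after* that call.
</code></pre>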
<python><flask><celery>
2025-03-04 21:31:52
0
567
cappy0704
79,484,899
20,895,654
Python inheritance vs. composition for InstanceManager
<p>I have created the following class, made to manage all instances created and to enable fast lookup and retrieval of specific values by the specified lookup keys:</p> <pre><code>class InstanceManager[InstanceT, LookupT: Hashable](): def __init__( self, get_lookups: Callable[[InstanceT], Iterable[LookupT]], on_add: Opt[Callable[[InstanceT], None]] = None, on_remove: Opt[Callable[[InstanceT], None]] = None ) -&gt; None: self._on_add = on_add self._on_remove = on_remove self._get_lookups = get_lookups self._all: list[InstanceT] = [] self._lookup: dict[LookupT, InstanceT] = {} self._LOCK = threading.RLock() def lookup(self, lookup: LookupT) -&gt; InstanceT: return self._lookup[lookup] # atomic operation def all_instances(self) -&gt; list[InstanceT]: with self._LOCK: return list(self._all) def get_lookups(self, instance: InstanceT) -&gt; tuple[LookupT, ...]: with self._LOCK: return tuple(k for k, v in self._lookup.items() if v == instance) def add(self, instance: InstanceT) -&gt; None: with self._LOCK: lookup_values = self._get_lookups(instance) if instance in self._all: raise ValueError(&quot;Instance already in added&quot;) self._all.append(instance) for lv in lookup_values: if lv in self._lookup: raise ValueError(&quot;lookup value already used for different instance&quot;) self._lookup[lv] = instance if self._on_add is not None: self._on_add def remove(self, instance: InstanceT): with self._LOCK: for k, v in self._lookup.items(): if v == instance: self._lookup.pop(k) self._all.remove(instance) if self._on_remove is not None: self._on_remove() </code></pre> <p>Now I wonder if a class using this functionality would benefit more from inheriting from it or by making a class field be the instance manager. I know that inheritance is a &quot;is a&quot; relationship while composition is a &quot;has a&quot; relationship, but I feel like in this case it doesn't really help me answer the question. To consider is also that many classes I would use it for would have other classes they should inherit from, so it would lead to multiple inheritance, which I want to avoid if possible.</p>
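<p>To make the comparison concrete, this is roughly how I picture the composition route (the <code>Sensor</code> class and its field names are invented for illustration): the manager lives in a class-level field and the owning class exposes only the thin delegating methods it needs, so no extra base class enters the MRO.</p>
<pre><code>from typing import ClassVar

class Sensor:
    # one shared manager for all Sensor instances, keyed by sensor_id
    _manager: ClassVar['InstanceManager[Sensor, str]']

    def __init__(self, sensor_id: str) -&gt; None:
        self.sensor_id = sensor_id
        Sensor._manager.add(self)

    @classmethod
    def by_id(cls, sensor_id: str) -&gt; 'Sensor':
        return cls._manager.lookup(sensor_id)   # thin delegation

Sensor._manager = InstanceManager(get_lookups=lambda s: (s.sensor_id,))
</code></pre>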
<python><inheritance><instance><composition>
2025-03-04 21:26:40
1
346
JoniKauf
79,484,702
2,661,305
Python NiceGUI / FastAPI App Insights Integration
<p>I'm developing a NiceGUI application in python and wanted to add application insights to the app. I have been able to do this as I'm getting requests and telemetry sent no problem. The issue I'm having is that I cant seem to get my custom logging messages to send to app insights.</p> <p>I've previously added the configure_azure_monitor and the same code at this link <a href="https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/samples/logging/custom_event.py" rel="nofollow noreferrer">AppInsights Samples</a> and tried it on my &quot;login&quot; page below but it didnt work. Below is my main app start page, where I call config.py which sets up some common things. Login is where the user goes to if there is no authenticated flag set for them. I'm using the opentelemetry.instrumentation.fastapi FastAPIInstrumentor which sends details to app insights for you, which is working fine. I just dont know how to get my custom log messages to send. Any ideas?</p> <p><strong>config.py</strong></p> <pre><code>import os from nicegui import app from services.keyvault_service import load_secrets_into_session, in_memory_secrets from dotenv import load_dotenv from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor from azure.monitor.opentelemetry import configure_azure_monitor from opentelemetry import trace from opentelemetry.trace import (get_tracer_provider) from logging import getLogger from azure.identity import ManagedIdentityCredential def config_dev_environment(): local_environment = os.getenv(&quot;LOCALDEV&quot;, &quot;False&quot;) == &quot;True&quot; if local_environment: load_dotenv(&quot;.env&quot;) os.environ[&quot;HTTP_PROXY&quot;] = &quot;http://localhost:18080&quot; os.environ[&quot;HTTPS_PROXY&quot;] = &quot;http://localhost:18080&quot; def load_app_config(): config_dev_environment() if not in_memory_secrets: load_secrets_into_session() def configure_app_insights(): local_environment = os.getenv(&quot;LOCALDEV&quot;, &quot;False&quot;) == &quot;True&quot; if local_environment: configure_azure_monitor() else: configure_azure_monitor(credential=ManagedIdentityCredential(),) tracer = trace.get_tracer(__name__, tracer_provider=get_tracer_provider()) logger = getLogger(__name__) FastAPIInstrumentor.instrument_app(app) </code></pre> <p><strong>app.py</strong></p> <pre><code>from nicegui import ui, app from config import load_app_config, configure_app_insights from pages import open_workspace, create_workspace from middleware import AuthenticationChecker import logging import os load_app_config() configure_app_insights() app.add_middleware(AuthenticationChecker) local_environment = os.getenv(&quot;LOCALDEV&quot;, &quot;False&quot;) == &quot;True&quot; if __name__ in {&quot;__main__&quot;, &quot;__mp_main__&quot;}: if local_environment: ui.run(port=80, favicon=&quot;favicon.ico&quot;, title=&quot;My App&quot;, else: ui.run(port=443, favicon=&quot;favicon.ico&quot;, title=&quot;My App&quot;, ssl_keyfile=&quot;C:/Certificate/private.pem&quot;, ssl_certfile=&quot;C:/Certificate/public.pem&quot;) </code></pre> <p><strong>login.py</strong></p> <pre><code>AUTH_FLOW_STATES = TTLCache(maxsize=256, ttl=60 * 5) REQUIREDGROUPNAME = &quot;MYE–Data&quot; logger = getLogger(__name__) @ui.page(&quot;/login&quot;) def login(): ad_scope = in_memory_secrets.get(&quot;ADScope&quot;) ad_redirectURL = get_redirect_url() msal_app = msal_setup() auth_flow = msal_app.initiate_auth_code_flow(ad_scope, ad_redirectURL) if auth_flow is None: 
logger.exception(&quot;Auth_flow returned from initiate_auth_code_flow is null.&quot;) raise ValueError(&quot;auth_code_flow is None. Ensure it is properly initialized.&quot;) # Keep track of the auth_flow information as received from B2C browser_id = app.storage.browser.get(&quot;id&quot;) AUTH_FLOW_STATES[browser_id] = auth_flow logger.info(&quot;Auth flow retrieved and redirecting user to B2C SSO login&quot;) return RedirectResponse(auth_flow[&quot;auth_uri&quot;]) </code></pre>
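<p>One thing I am unsure about and would like confirmed: since the exporter is hooked in through the standard <code>logging</code> machinery, my module logger presumably still inherits the root logger's default <code>WARNING</code> level, so the <code>logger.info(...)</code> calls could be filtered out before they ever reach Application Insights. The adjustment I am considering (an assumption, not a confirmed fix):</p>
<pre><code>import logging

# Assumption: INFO records are dropped because the effective level defaults to WARNING
logging.getLogger(__name__).setLevel(logging.INFO)
# or raise the root logger instead:
logging.getLogger().setLevel(logging.INFO)
</code></pre>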
<python><fastapi><azure-application-insights><nicegui>
2025-03-04 19:38:41
0
381
user2661305
79,484,649
1,079,907
Substitution for comma match
<p>For the input string shown below, I am trying to insert <code>UK/</code> after <code>Street</code> and after every <code>,</code> (hyphens should be left alone) to produce the expected output shown below.</p> <p><code>Input = Street1-2,4,6,8-10</code></p> <p><code>Expected output = StreetUK/1-2,UK/4,UK/6,UK/8-10</code></p> <p>With the regex pattern below I have trouble substituting for each captured group. How can I capture all the required groups for every <code>,</code> and substitute the required string?</p> <pre><code> replacements = [] pattern = r&quot;(Street)?(?:\d+)(((,)?(?:\d+))*[-]?(?:\d+))*&quot; def replacement(x): replacements.append(f&quot;{x.group(1)}{'UK'}/&quot;) input = 'Street1-2,4,6,8-10' m = re.sub(pattern, replacement, input) print(m, [''.join(x) for x in replacements] ) </code></pre> <p>The above code just prints <code>['StreetUK/']</code>, which is not the expected output.</p>
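<p>For what it's worth, a much simpler substitution reproduces the expected output for this input if the task is read as &quot;insert <code>UK/</code> after <code>Street</code> or after a comma&quot; -- it sidesteps the captured-group bookkeeping entirely, so it may not fit the wider use case:</p>
<pre><code>import re

text = 'Street1-2,4,6,8-10'
result = re.sub(r'(Street|,)', r'\1UK/', text)
print(result)  # StreetUK/1-2,UK/4,UK/6,UK/8-10
</code></pre>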
<python><regex>
2025-03-04 19:12:47
4
12,828
Sunil Bojanapally
79,484,615
7,456,317
Llama Index Vector Store: filter a list of documents with a NOT feature
<p>I have a vector store of documents, each document is a json document with features. I'd like to filter the documents according to some criteria. The problem is that some of the documents contain a NOT criteria, so they should be retrieved if the filter does <em>not</em> contain this feature. Here's a MOE:</p> <pre class="lang-py prettyprint-override"><code>from llama_index.core.schema import TextNode from llama_index.core import VectorStoreIndex from llama_index.core.vector_stores import MetadataFilter, MetadataFilters, FilterOperator node_1 = TextNode(text=&quot;Plans a and b&quot;,id_=&quot;1&quot;,metadata={&quot;plan&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]}) node_2 = TextNode(text=&quot;Plans a&quot;,id_=&quot;2&quot;,metadata={&quot;plan&quot;: [&quot;a&quot;]}) node_3 = TextNode(text=&quot;Plans not a&quot;,id_=&quot;3&quot;,metadata={&quot;plan&quot;: [{&quot;not&quot;: [&quot;a&quot;]}]}) doc_index = VectorStoreIndex([node_1, node_2, node_3]) filter_plan_a = MetadataFilter(key=&quot;plan&quot;, operator=FilterOperator.CONTAINS, value=&quot;a&quot;) filter_plan_b = MetadataFilter(key=&quot;plan&quot;, operator=FilterOperator.CONTAINS, value=&quot;b&quot;) for filter_plan in [filter_plan_a, filter_plan_b]: retriever = doc_index.as_retriever( filters=MetadataFilters(filters=[filter_plan]), similarity_top_k=3 ) print(f&quot;Results for filter {filter_plan}&quot;) print(&quot;-&quot; * 20) for node_with_score in retriever.retrieve(&quot;plan&quot;): print(node_with_score.node.text) print(node_with_score.node.metadata) print() </code></pre> <p>With <code>filter_plan_b</code>, document id 3 should be retrieved, because a plan &quot;b&quot; is compatible with a not &quot;a&quot; condition.</p> <p>How can I implement this? If all possible values of <code>plans</code> are known, this could be solved by replacing all <code>not</code> features with the value of all the other plans, but what can I do if the values are not known apriori?</p>
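<p>One fallback I am considering, in case the built-in <code>MetadataFilters</code> cannot express negation stored inside the metadata itself, is to retrieve without filters and post-filter the nodes in plain Python (a sketch, not llama_index API):</p>
<pre><code>def plan_entry_allows(entry, wanted):
    # entries are either plain plan names or {'not': [...]} objects
    if isinstance(entry, dict) and 'not' in entry:
        return wanted not in entry['not']
    return entry == wanted

def post_filter(nodes_with_scores, wanted_plan):
    return [
        nws for nws in nodes_with_scores
        if any(plan_entry_allows(e, wanted_plan)
               for e in nws.node.metadata.get('plan', []))
    ]

retriever = doc_index.as_retriever(similarity_top_k=3)   # no metadata filter
results = post_filter(retriever.retrieve('plan'), 'b')   # keeps documents 1 and 3
</code></pre>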
<python><llama-index><vectorstore>
2025-03-04 18:52:53
0
913
Gino
79,484,575
1,653,413
Why does importlib.resources read_text raise UnicodeDecodeError on Windows but not mac and RPi?
<p>I am writing a small web app (<a href="https://github.com/r-owen/base_loom_server" rel="nofollow noreferrer">https://github.com/r-owen/base_loom_server</a>) and it runs on macOS and Raspberry Pi, but a Windows user reports an exception when Python reads a UTF-8-encoded resource file using <code>importlib.resources.files.read_text()</code> with no arguments. The resource is JavaScript, but I doubt it matters. This is the error:</p> <blockquote> <p>UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 20834: character maps to &lt;undefined&gt;</p> </blockquote> <p>I believe it is complaining about the first of four arrow characters in that file: &quot;←&quot; = LEFTWARDS ARROW. The file has the same style of arrow in all 4 directions: U+2190 through U+2193. Those are the only non-ASCII characters in the file (other than a few unnecessary • in a comment earlier in the file, which I will replace with *).</p> <p>With no arguments, read_file decodes using UTF-8 in strict mode, which is what I want.</p> <p>I would like to understand what's going on so I can work around this, plus any potential problems with letters that have diacritical marks (because the app also uses read_file to load language translation files as needed).</p> <p>The failure occurs <a href="https://github.com/r-owen/base_loom_server/blob/effbd5aee6a3766a2e6b37b5a4dcfc31e0a70730/src/base_loom_server/app_runner.py#L186" rel="nofollow noreferrer">here</a></p>
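<p>My current working theory (please correct me if it is wrong): <code>files(...).read_text()</code> with no arguments opens the file with the platform default encoding rather than UTF-8 -- unlike the older top-level <code>importlib.resources.read_text()</code>, whose <code>encoding</code> parameter defaults to <code>utf-8</code> -- which would explain why only Windows (cp1252, the &quot;charmap&quot; codec) fails. The obvious candidate fix would be to pass the encoding explicitly; the resource name below is made up:</p>
<pre><code>from importlib.resources import files

# 'display.js' is a hypothetical resource name -- substitute the real one from app_runner.py
text = files('base_loom_server').joinpath('display.js').read_text(encoding='utf-8')
</code></pre>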
<python><unicode><python-importlib>
2025-03-04 18:32:05
1
449
Russell Owen
79,484,348
1,159,579
Python Azure Function nested code can't find modules even with init files
<p>I made a test.py, and it works fine.. So this has to be how Azure is starting the function.</p> <pre><code>import sys from pathlib import Path # Get the directory of the current file (test_function.py) current_dir = Path(__file__).parent # Add the 'code' directory to the PYTHONPATH sys.path.append(str(current_dir / 'code')) try: from code.db.database import SessionLocal print(&quot;Import successful: SessionLocal is available.&quot;) except ModuleNotFoundError as e: print(f&quot;Import failed: {e}&quot;) ----------update above </code></pre> <p>I reorganized my code from a flat hierarchy to nested. Azure functions has my Python version locked at 3.12.8 I added all the <strong>init</strong>.py files, tired 20 variations of sys path. I am not sure why this won't find the files.</p> <p><a href="https://i.sstatic.net/pBJAlupf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBJAlupf.png" alt="enter image description here" /></a></p> <pre><code>Found Python version 3.12.8 (py). </code></pre> <p>MY function_app.py</p> <pre><code> import os import azure.functions as func from pathlib import Path import logging import json from datetime import datetime import sys # Add parent directory to Python path import sys from pathlib import Path # Get the directory of the current file (function_app.py) current_dir = Path(__file__).parent # ✅ Correct syntax sys.path.append(os.path.join(os.path.dirname(__file__), 'code')) # Navigate to the project root (adjust based on your structure) project_root = current_dir.parent # Assumes KUSA is the parent of kusa.marketing sys.path.insert(0, str(project_root)) # Now import your module from code.db.database import SessionLocal from code.db.database import SessionLocal # Changed to relative import from generate_jobber_auth import generate_jobber_auth_url, handle_jobber_callback from JobberSyncManager import JobberSyncManager from jobber_auth_models import JobberToken app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION) @app.route(route=&quot;buildcompanyprofile&quot;, auth_level=func.AuthLevel.ANONYMOUS) def buildcompanyprofile(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') </code></pre> <p>Azure Functions Core Tools Core Tools Version: 4.0.6821 Commit hash: N/A +c09a2033faa7ecf51b3773308283af0ca9a99f83 (64-bit) Function Runtime Version: 4.1036.1.23224</p> <p>Exception: ModuleNotFoundError: No module named 'code.db'; 'code' is not a package. Cannot find module. Please check the requirements.txt file for the missing module. For more info, please refer the troubleshooting guide: <a href="https://aka.ms/functions-modulenotfound" rel="nofollow noreferrer">https://aka.ms/functions-modulenotfound</a>. 
Current sys.path: ['C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64', 'C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64', 'C:\Users\Steve\AppData\Local\Programs\Python\Python312\python312.zip', 'C:\Users\Steve\AppData\Local\Programs\Python\Python312\DLLs', 'C:\Users\Steve\AppData\Local\Programs\Python\Python312\Lib', 'C:\Users\Steve\AppData\Local\Programs\Python\Python312', 'C:\Users\Steve\Documents\GitHub\KUSA\kusa.marketing\.venv', 'C:\Users\Steve\Documents\GitHub\KUSA\kusa.marketing\.venv\Lib\site-packages', 'C:\Users\Steve\Documents\GitHub\KUSA\kusa.marketing'] Stack: File &quot;C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64\azure_functions_worker\dispatcher.py&quot;, line 469, in <em>handle__functions_metadata_request self.load_function_metadata( File &quot;C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64\azure_functions_worker\dispatcher.py&quot;, line 449, in load_function_metadata self.index_functions(function_path, function_app_directory)) <br /> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64\azure_functions_worker\dispatcher.py&quot;, line 822, in index_functions indexed_functions = loader.index_function_app(function_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64\azure_functions_worker\utils\wrappers.py&quot;, line 49, in call raise extend_exception_message(e, message) File &quot;C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64\azure_functions_worker\utils\wrappers.py&quot;, line 44, in call return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Steve\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.12\WINDOWS\X64\azure_functions_worker\loader.py&quot;, line 244, in index_function_app imported_module = importlib.import_module(module_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Steve\AppData\Local\Programs\Python\Python312\Lib\importlib_<em>init</em></em>.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Steve\Documents\GitHub\KUSA\kusa.marketing\function_app.py&quot;, line 21, in from code.db.database import SessionLocal</p>
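<p>One detail I suspect matters, though I have not verified it against the Functions worker: the package is named <code>code</code>, which collides with the standard library's <code>code</code> module, and since the stdlib one is a plain module Python reports &quot;'code' is not a package&quot; as soon as it wins the import. If that is the cause, renaming the folder would reduce the import to:</p>
<pre><code># after renaming the folder from 'code' to e.g. 'app_code' (hypothetical name)
# and keeping the __init__.py files in place:
from app_code.db.database import SessionLocal
</code></pre>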
<python><visual-studio-code><azure-functions>
2025-03-04 16:51:28
1
2,057
Steve Coleman
79,484,274
1,534,017
How to prevent the formatting of any linebreaks?
<p>Let's say I have a file like this:</p> <pre><code>import pandas as pd df = pd.DataFrame( { &quot;A&quot;: [&quot; a: yu&quot;, &quot;b: stuff &quot;, &quot;c: more_stuff&quot;], &quot;B&quot;: [4, 5, 6], &quot;C&quot;: [7, 8, 9], } ) df[&quot;A&quot;] = ( df[&quot;A&quot;] .str.strip() .str.replace(&quot;:&quot;, &quot;&quot;) .str[0] ) new_df = pd.melt( df, id_vars=[&quot;A&quot;] ) print(new_df) </code></pre> <p>If I then run</p> <pre><code>ruff format --diff play_line_breaks.py </code></pre> <p>I get</p> <pre><code>-df[&quot;A&quot;] = ( - df[&quot;A&quot;] - .str.strip() - .str.replace(&quot;:&quot;, &quot;&quot;) - .str[0] -) +df[&quot;A&quot;] = df[&quot;A&quot;].str.strip().str.replace(&quot;:&quot;, &quot;&quot;).str[0] -new_df = pd.melt( - df, - id_vars=[&quot;A&quot;] -) +new_df = pd.melt(df, id_vars=[&quot;A&quot;]) print(new_df) </code></pre> <p>So, the <code>ruff</code> formatter would convert my multiline statements into a single line. I find the multiline version far more readable and would like to keep it. Is there any setting in <code>ruff</code> that would allow me to say &quot;don't touch any line breaks&quot;?</p> <p>The best I could find are</p> <pre><code>skip-magic-trailing-comma = false (in my pyproject.toml) </code></pre> <p>which does not impact the output from above though</p> <p>or wrapping the two statements like this</p> <pre><code># fmt: off &lt; all code after df assignment &gt; # fmt: on </code></pre> <p>That works, but I find it rather cumbersome to do this for all the statements I have in my code base.</p> <p>Are there any smarter ways of doing this?</p>
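<p>For call sites specifically, my understanding is that the formatter's &quot;magic trailing comma&quot; is the intended lever: with <code>skip-magic-trailing-comma = false</code> (the default), a trailing comma inside the parentheses should keep the call expanded. That would not help the method chain, but for the <code>pd.melt</code> call it would look like this:</p>
<pre><code>new_df = pd.melt(
    df,
    id_vars=[&quot;A&quot;],   # trailing comma keeps ruff format from collapsing the call
)
</code></pre>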
<python><ruff>
2025-03-04 16:14:45
1
26,249
Cleb
79,483,931
357,313
Iterate xml using namespace prefix
<p>I have an xml file with a default namespace, like this:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt; &lt;root xmlns=&quot;somelongnamespace&quot;&gt; &lt;child&gt;...&lt;/child&gt; &lt;/root&gt; </code></pre> <p>I starting using <code>lxml</code> to iterate and query this file, but I would like to use a namespace prefix, like this:</p> <pre><code>from lxml import etree xml = etree.parse(&quot;myfile.xml&quot;) root = xml.getroot() c = root.findall('ns:child') </code></pre> <p>What do I need to do for this to work? I cannot change the file, but I could change the xml object after loading.</p> <p>I read the relevant <code>lxml</code> documentation, searched and tried all kinds of suggestions, but got none of them to work unfortunately. This does sound like a very common question...?</p>
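<p>Concretely, the kind of usage I am trying to get working is a locally chosen prefix passed through the <code>namespaces</code> mapping that <code>find()</code>/<code>findall()</code> accept:</p>
<pre><code>from lxml import etree

xml = etree.parse('myfile.xml')
root = xml.getroot()
# 'ns' is an arbitrary local prefix mapped to the document's default namespace URI
children = root.findall('ns:child', namespaces={'ns': 'somelongnamespace'})
</code></pre>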
<python><xml><lxml><xml-namespaces>
2025-03-04 14:07:17
1
8,135
Michel de Ruiter
79,483,874
1,980,208
Calculate Pearson correlation of each row in a 2D numpy array (n,m)
<pre><code>a = np.array([[1,2,4],[3,6,2],[3,4,7],[9,7,7],[6,3,1],[3,5,9]]) b = np.array([[4,5,2],[9,2,5],[1,5,6],[4,5,6],[1,2,6],[6,4,3]]) a = array([[1, 2, 4], [3, 6, 2], [3, 4, 7], [9, 7, 7], [6, 3, 1], [3, 5, 9]]) b = array([[4, 5, 2], [9, 2, 5], [1, 5, 6], [4, 5, 6], [1, 2, 6], [6, 4, 3]]) </code></pre> <p>I would like to calculate the pearson correlation coefficient between the <strong>first</strong> <strong>row</strong> of <em>a</em> and <strong>first row</strong> of <em>b</em>, the <strong>second row</strong> of <em>a</em> and <strong>second row</strong> <em>b</em> and so on for each rows to follow.</p> <p>desired out put should be 1D array:</p> <pre><code>array([__ , __ , __) </code></pre> <p>column wise i can do it as below:</p> <pre><code>corr = np.corrcoef(a.T, b.T).diagonal(a.shape[1]) </code></pre> <p>Output:</p> <pre><code>array([-0.2324843 , -0.03631365, -0.18057878]) </code></pre> <p><strong>UPDATE</strong></p> <p>Though i accepted the answer below but there is this alternative solution to the question and also addresses zero division error issues:</p> <pre><code>def corr2_coeff(A, B): # Rowwise mean of input arrays &amp; subtract from input arrays themeselves A_mA = A - A.mean(1)[:, None] B_mB = B - B.mean(1)[:, None] # Sum of squares across rows ssA = (A_mA**2).sum(1) ssB = (B_mB**2).sum(1) deno = np.sqrt(np.dot(ssA[:, None],ssB[None])) + 0.00000000000001 # Finally get corr coeff return np.dot(A_mA, B_mB.T) / deno </code></pre>
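<p>For completeness, the same diagonal trick as the column-wise line appears to work row-wise too, since <code>np.corrcoef</code> treats each row as a variable when the inputs are not transposed (the offset is the number of rows instead of columns):</p>
<pre><code>corr = np.corrcoef(a, b).diagonal(a.shape[0])
</code></pre>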
<python><numpy>
2025-03-04 13:41:57
2
439
prem
79,483,761
1,700,890
How do I fix a TypeError from a Llama API call?
<p>I am running below code from: <a href="https://docs.llama-api.com/quickstart" rel="nofollow noreferrer">Llama quick start</a></p> <pre><code>import json from llamaapi import LlamaAPI # Initialize the SDK llama = LlamaAPI(&quot;&lt;your_api_token&gt;&quot;) # Build the API request api_request_json = { &quot;model&quot;: &quot;llama3.1-70b&quot;, &quot;messages&quot;: [ {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;What is the weather like in Boston?&quot;}, ], &quot;functions&quot;: [ { &quot;name&quot;: &quot;get_current_weather&quot;, &quot;description&quot;: &quot;Get the current weather in a given location&quot;, &quot;parameters&quot;: { &quot;type&quot;: &quot;object&quot;, &quot;properties&quot;: { &quot;location&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;description&quot;: &quot;The city and state, e.g. San Francisco, CA&quot;, }, &quot;days&quot;: { &quot;type&quot;: &quot;number&quot;, &quot;description&quot;: &quot;for how many days ahead you wants the forecast&quot;, }, &quot;unit&quot;: {&quot;type&quot;: &quot;string&quot;, &quot;enum&quot;: [&quot;celsius&quot;, &quot;fahrenheit&quot;]}, }, }, &quot;required&quot;: [&quot;location&quot;, &quot;days&quot;], } ], &quot;stream&quot;: False, &quot;function_call&quot;: &quot;get_current_weather&quot;, } # Execute the Request response = llama.run(api_request_json) print(json.dumps(response.json(), indent=2)) </code></pre> <p>it return the following error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\some_user\OneDrive - Ryder\Projects\Tutorials\Pycharm\AI_tests\llama_api.py&quot;, line 39, in &lt;module&gt; response = llama.run(api_request_json) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\some_user\myenv\Lib\site-packages\llamaapi\llamaapi.py&quot;, line 67, in run return self.run_sync(api_request_json) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\some_user\myenv\Lib\site-packages\llamaapi\llamaapi.py&quot;, line 53, in run_sync raise Exception(f&quot;POST {response.status_code} {response.json()['detail']}&quot;) ~~~~~~~~~~~~~~~^^^^^^^^^^ TypeError: list indices must be integers or slices, not str </code></pre> <p>What is the fix here?</p>
<python><llama>
2025-03-04 12:55:47
0
7,802
user1700890
79,483,748
1,940,534
Using xml.etree.ElementTree to extract values not working
<p>here is the XML:</p> <pre><code>&lt;multi-routing-engine-results&gt; &lt;multi-routing-engine-item&gt; &lt;re-name&gt;node0&lt;/re-name&gt; &lt;source-resource-usage-pool-information&gt; &lt;resource-usage-entry style=&quot;pat-pool&quot;&gt; &lt;resource-usage-pool-name&gt;gos_src_pool_198_169_34_113&lt;/resource-usage-pool-name&gt; &lt;resource-usage-total-address&gt;1&lt;/resource-usage-total-address&gt; &lt;resource-usage-port-ol-factor&gt;1&lt;/resource-usage-port-ol-factor&gt; &lt;resource-usage-total-total&gt;64512&lt;/resource-usage-total-total&gt; &lt;resource-usage-total-used&gt;2501&lt;/resource-usage-total-used&gt; &lt;resource-usage-total-avail&gt;62011&lt;/resource-usage-total-avail&gt; &lt;resource-usage-total-usage&gt;3%&lt;/resource-usage-total-usage&gt; &lt;resource-usage-peak-usage&gt;21%&lt;/resource-usage-peak-usage&gt; &lt;resource-usage-peak-date-time seconds=&quot;1736272550&quot;&gt;2025-01-07 11:55:50 CST&lt;/resource-usage-peak-date-time&gt; &lt;/resource-usage-entry&gt; &lt;resource-usage-entry style=&quot;pat-pool&quot;&gt; &lt;resource-usage-address&gt;198.169.34.113&lt;/resource-usage-address&gt; &lt;resource-usage-factor-index&gt;0&lt;/resource-usage-factor-index&gt; &lt;resource-usage-port-range&gt;Single Ports&lt;/resource-usage-port-range&gt; &lt;resource-usage-used&gt;2501&lt;/resource-usage-used&gt; &lt;resource-usage-avail&gt;59963&lt;/resource-usage-avail&gt; &lt;resource-usage-total&gt;62464&lt;/resource-usage-total&gt; &lt;resource-usage-usage&gt;4%&lt;/resource-usage-usage&gt; &lt;/resource-usage-entry&gt; &lt;resource-usage-entry style=&quot;pat-pool&quot;&gt; &lt;resource-usage-factor-index&gt;-&lt;/resource-usage-factor-index&gt; &lt;resource-usage-port-range&gt;Alg Ports&lt;/resource-usage-port-range&gt; &lt;resource-usage-used&gt;0&lt;/resource-usage-used&gt; &lt;resource-usage-avail&gt;2048&lt;/resource-usage-avail&gt; &lt;resource-usage-total&gt;2048&lt;/resource-usage-total&gt; &lt;resource-usage-usage&gt;0%&lt;/resource-usage-usage&gt; &lt;/resource-usage-entry&gt; &lt;/source-resource-usage-pool-information&gt; &lt;/multi-routing-engine-item&gt; &lt;multi-routing-engine-item&gt; &lt;re-name&gt;node1&lt;/re-name&gt; &lt;source-resource-usage-pool-information&gt; &lt;resource-usage-entry style=&quot;pat-pool&quot;&gt; &lt;resource-usage-pool-name&gt;gos_src_pool_198_169_34_113&lt;/resource-usage-pool-name&gt; &lt;resource-usage-total-address&gt;1&lt;/resource-usage-total-address&gt; &lt;resource-usage-port-ol-factor&gt;1&lt;/resource-usage-port-ol-factor&gt; &lt;resource-usage-total-total&gt;64512&lt;/resource-usage-total-total&gt; &lt;resource-usage-total-used&gt;2585&lt;/resource-usage-total-used&gt; &lt;resource-usage-total-avail&gt;61927&lt;/resource-usage-total-avail&gt; &lt;resource-usage-total-usage&gt;4%&lt;/resource-usage-total-usage&gt; &lt;resource-usage-peak-usage&gt;0%&lt;/resource-usage-peak-usage&gt; &lt;resource-usage-peak-date-time seconds=&quot;0&quot;&gt;1969-12-31 18:00:00 CST&lt;/resource-usage-peak-date-time&gt; &lt;/resource-usage-entry&gt; &lt;resource-usage-entry style=&quot;pat-pool&quot;&gt; &lt;resource-usage-address&gt;198.169.34.113&lt;/resource-usage-address&gt; &lt;resource-usage-factor-index&gt;0&lt;/resource-usage-factor-index&gt; &lt;resource-usage-port-range&gt;Single Ports&lt;/resource-usage-port-range&gt; &lt;resource-usage-used&gt;2585&lt;/resource-usage-used&gt; &lt;resource-usage-avail&gt;59879&lt;/resource-usage-avail&gt; &lt;resource-usage-total&gt;62464&lt;/resource-usage-total&gt; 
&lt;resource-usage-usage&gt;4%&lt;/resource-usage-usage&gt; &lt;/resource-usage-entry&gt; &lt;resource-usage-entry style=&quot;pat-pool&quot;&gt; &lt;resource-usage-factor-index&gt;-&lt;/resource-usage-factor-index&gt; &lt;resource-usage-port-range&gt;Alg Ports&lt;/resource-usage-port-range&gt; &lt;resource-usage-used&gt;0&lt;/resource-usage-used&gt; &lt;resource-usage-avail&gt;2048&lt;/resource-usage-avail&gt; &lt;resource-usage-total&gt;2048&lt;/resource-usage-total&gt; &lt;resource-usage-usage&gt;0%&lt;/resource-usage-usage&gt; &lt;/resource-usage-entry&gt; &lt;/source-resource-usage-pool-information&gt; &lt;/multi-routing-engine-item&gt; &lt;/multi-routing-engine-results&gt; </code></pre> <p>Now I find what I need to extract is: 'resource-usage-pool-name','resource-usage-address','resource-usage-usage'</p> <p>using this code:</p> <pre><code> root2 = ET.fromstring(output3) n = 0 contents = ['resource-usage-pool-name','resource-usage-address','resource-usage-usage'] for elem in root.findall(&quot;.//resource-usage-entry/*&quot;): for con in elem.iter(): if con.tag in contents: if con.tag == 'resource-usage-pool-name': rpn.append(elem.text) if con.tag == 'resource-usage-address': rua.append(elem.text) if con.tag == 'resource-usage-usage': ruu.append(elem.text.replace(&quot;%&quot;,&quot;&quot;)) f.close() fxml.close() x = len(rpn) y = len(rua) z = len(ruu) print(&quot;rpn x:&quot; + str(x)) print(&quot;ruu y:&quot; + str(y)) print(&quot;rua z:&quot; + str(z)) </code></pre> <p>but the output I get is</p> <pre><code>rpn x:2 ruu y:0 rua z:0 poolalert:0 poolalert:0 [root@RHEL8-20230921 junos-pyezP311]# </code></pre> <p>so for some reason 'resource-usage-address','resource-usage-usage' are not getting pulled into their array, any ideas?</p>
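<p>For debugging I would also compare against a more direct extraction over each <code>resource-usage-entry</code> using <code>findtext</code> (sketched below against the same <code>output3</code> string); if this fills all three lists, the problem is in the loop rather than in the XML:</p>
<pre><code>import xml.etree.ElementTree as ET

root2 = ET.fromstring(output3)
rpn, rua, ruu = [], [], []
for entry in root2.findall('.//resource-usage-entry'):
    name = entry.findtext('resource-usage-pool-name')
    addr = entry.findtext('resource-usage-address')
    usage = entry.findtext('resource-usage-usage')
    if name is not None:
        rpn.append(name)
    if addr is not None:
        rua.append(addr)
    if usage is not None:
        ruu.append(usage.replace('%', ''))
</code></pre>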
<python><elementtree>
2025-03-04 12:48:19
1
1,217
robm
79,483,656
1,788,656
Vectorizing a nested while loop that detects the frequency of consecutive numbers
<p>All, The following function <code>get_frequency_of_events</code> detects the frequency of consecutive numbers, for example,</p> <pre><code>import numpy as np aa=np.array([1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]) get_frequency_of_events(aa) </code></pre> <p>this yields the following:</p> <p>list of indices @ the beginning of each group [1, 3, 6, 10]</p> <p>frequency of each group [2, 3, 4, 5]</p> <p>another example,</p> <pre><code>aa=np.array([1,1,1,np.nan,np.nan,1,1,np.nan]) idx, feq= get_frequency_of_events(aa) </code></pre> <p>list of indices @ the beginning of each group [0, 5]</p> <p>frequency of each group [3, 2]</p> <p>Yet, this function is slow, especially when iterating it over 3D data. How can I vectorize such a function to achieve faster processing?</p> <p>Here is the function</p> <pre><code>def get_frequency_of_events(mydata): &quot;&quot;&quot; Author : Shaaban Date : Jan 22, 2025 Purpose : get the frequency of repeated consecutive numbers and their indices, this is important when finding the frequency of heatwaves and etc ... All we have to do is to build matrix of ones (or any other number), and NAN. One refers to the existence of the EVENT, and nan refers to the inexistence of the event. Then this function could give you a summary of the the frequency of the events and their associated indices. tests : aa=np.array([1,1,0,0,0,1,0,1,1,1,1,0,1,1]) get_frequency(aa) aa=np.array([1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]) get_frequency(aa) aa=np.array([1,1,1,1,0,0,1,1,1]) get_frequency(aa) aa=np.arange(10) get_frequency(aa) aa=np.ones(10) get_frequency(aa) # CAUTION CAUTION CAUTION #For heatwave numbers, etc , make your array consits of fixed number (any number) that is associated with an evens and Nan for days/hours/month not associated with events. The trick here is that no nan could ever be equal to another nan. aa=np.array([1,1,1,np.nan,np.nan,1,1,np.nan]) idx, feq= get_frequency(aa) &quot;&quot;&quot; index_list=[] events_frequency_list=[] idx_last_num=len(mydata)-1 counter=0 ii=0 while(ii &lt;= idx_last_num-1): #print( '@ index = '+str(ii) ) counter=0 while(mydata[ii] == mydata[ii+1]): print(' Find match @ '+str(ii)+' &amp; '+str(ii+1)+\ ' data are '+str(mydata[ii])+' &amp; '+str(mydata[ii+1])) # store the index of the first match of each group. if counter == 0: index_list.append(ii) ii=ii+1 counter=counter+1 # break from while if this is the last element in the array. if ii==idx_last_num: break # if we just were iniside loop, store the no of events if counter != 0: no_events=counter+1 events_frequency_list.append(no_events) # counter if there is no match at all for the outer while. ii=ii+1 print('list of indices @ the begining of each group ') print(index_list) print(' frequency of each group.') print(events_frequency_list) return index_list, events_frequency_list </code></pre>
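<p>For reference, a run-length-encoding sketch that reproduces the documented examples (indices <code>[1, 3, 6, 10]</code> with lengths <code>[2, 3, 4, 5]</code> for the first array, and <code>[0, 5]</code> with <code>[3, 2]</code> for the NaN example); it relies on the fact that <code>NaN != NaN</code>, so NaN gaps break runs exactly as in the loop version:</p>
<pre><code>import numpy as np

def get_frequency_of_events_vec(mydata):
    mydata = np.asarray(mydata, dtype=float)
    # True wherever a new run starts (NaN != NaN, so every NaN starts its own run)
    change = np.concatenate(([True], mydata[1:] != mydata[:-1], [True]))
    starts = np.flatnonzero(change[:-1])
    lengths = np.diff(np.flatnonzero(change))
    keep = lengths &gt; 1          # only keep groups with at least two equal values
    return starts[keep].tolist(), lengths[keep].tolist()
</code></pre>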
<python><pandas><loops>
2025-03-04 12:08:14
1
725
Kernel
79,483,629
2,551,236
Failure while installing Python requirements in Windows System
<p>I am using <code>pip</code> for installing requirements of my project.</p> <p>Below is the error I get when I run the command <code>pip install -r requirements.txt</code></p> <pre><code>_frozenlist.c C:\Program Files\Python312\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.20.27508\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 Failed to build frozenlist grpcio </code></pre> <p>By looking at various answers, I have installed the required Visual C++ build tools</p> <p><a href="https://i.sstatic.net/TMt0rCsJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMt0rCsJ.png" alt="As shown here" /></a></p> <p>Following are the details of system and App versions I am using</p> <ul> <li>Windows version 11</li> <li>Python version 3.12.0</li> <li>Visual Studio Build tools version 2019.</li> <li>Windows 10 SDK and MSVC v142 - VS 2019 C++ x64/x86 build tools. (also tried with windows 11 SDk without any help)</li> </ul> <p>Any pointers on the issue and resolution would be helpful, Thanks!!</p>
<python><windows><visual-studio><visual-c++>
2025-03-04 11:59:20
0
3,829
Vihar
79,483,453
1,942,868
Ordering data after distinct
<p>I distinct data by <code>sku_id</code></p> <pre><code> queryset = self.filter_queryset(queryset.order_by('sku_id','id').distinct('sku_id')) </code></pre> <p>However this result is not sorted by id,</p> <p>then I try to</p> <pre><code> queryset = self.filter_queryset(queryset.order_by('sku_id','id').distinct('sku_id').order_by('id')) </code></pre> <p>However this shows the error</p> <pre><code>ProgrammingError at /api/myfetch/ SELECT DISTINCT ON expressions must match initial ORDER BY expressions LINE 1: SELECT COUNT(*) FROM (SELECT DISTINCT ON (&quot;myapp_produc... </code></pre> <p>Is it possible to sort the column after distinct?</p>
<python><sql><django><postgresql>
2025-03-04 10:56:46
2
12,599
whitebear
79,483,314
5,976,205
WhatsApp API Broadcast with Celery - Unable to Reach 1K Throughput Limit
<p>I am trying to send a WhatsApp broadcast using Meta's WhatsApp API, but I am not able to fully utilize the 1K messages per second throughput limit.</p> <blockquote> <p>Current Architecture:</p> </blockquote> <ol> <li>I process phone numbers and divide them into chunks of 1,000.</li> <li>These chunks are sent asynchronously to the WhatsApp API via Celery workers.</li> <li>Each worker picks a chunk and sends messages using the Meta API.</li> <li>After sending, the worker waits 5 seconds to receive a response containing the conversation_id, which is then stored in the database.</li> </ol> <blockquote> <p>Problem:</p> </blockquote> <ol> <li>If I increase the number of workers, I risk exceeding the 1K/sec limit since multiple workers will send messages simultaneously.</li> <li>If I keep fewer workers, the system doesn’t achieve the 1K/sec limit efficiently.</li> <li>The delay caused by waiting for the conversation_id affects overall throughput.</li> </ol> <blockquote> <p>Questions:</p> </blockquote> <ol> <li>Is my approach wrong, or is there a better way to structure this system to maximize throughput?</li> <li>How can I ensure that I send messages efficiently while staying within the 1K/sec limit?</li> <li>Would another language (e.g., Node.js, Golang) be more effective for handling high-throughput WhatsApp API requests?</li> <li>How should I manage the response handling (conversation_id update) without slowing down message delivery?</li> </ol> <blockquote> <blockquote> <p>Any advice or best practices would be greatly appreciated!</p> </blockquote> </blockquote>
<python><asynchronous><celery><whatsapp-cloud-api><highperformance>
2025-03-04 10:11:27
1
2,599
Dev Jalla
79,483,117
10,232,932
Keep rows of dataframe based on column, where the column has a whole / round number
<p>I have a dataframe with a float column (number), e.g.:</p> <pre class="lang-none prettyprint-override"><code>value number 1 0.98 10 1.00 -1 1.23 -10 2.00 </code></pre> <p>I only want to keep the rows of the dataframe where the number column is a &quot;full&quot; number (1,2,3,...). How can I do that?</p> <p>The expected output is:</p> <pre class="lang-none prettyprint-override"><code>value number 10 1.00 -10 2.00 </code></pre> <p>This question only provides the solution how to delete all rows which a whole number <a href="https://stackoverflow.com/questions/38505895/how-do-i-filter-rows-in-a-dataframe-that-have-whole-numbers-in-one-column">How do I filter rows in a dataframe that have whole numbers in one column</a></p>
<python><pandas>
2025-03-04 08:50:40
0
6,338
PV8
79,482,899
1,964,692
Change images in a window via real time MIDI messages
<p>I am trying to create a program that will</p> <ul> <li>read in images from a dir and store in a list</li> <li>display first image in list in a window</li> <li>MIDI notes from USB MIDI Controller triggers a new image in the <em>same</em> window <ul> <li>the new image is retrieved using the note value and matching index from list of images</li> </ul> </li> </ul> <p>I've seen plenty of examples of being able to do the above separately, but am stuck on how to combine them all in one. I have not been able to find any examples of bindings to MIDI messages -- only for key and button presses.</p> <p>Currently my code can open up a <code>matplotlib</code> window and display first image, but the code is stuck until the window is closed. After closing window, I can see MIDI notes in streaming output but no window for images is visible.</p> <p>When I tried to use <code>show(block=False)</code> a window with no image would open, but I can at least see MIDI input stream. An image never shows up in window.</p> <p>I'm open to trying other libraries (TKinter, PyQT5, etc) but I couldn't find any examples that they would be any different.</p> <p>Here is my code so far</p> <pre><code>import matplotlib import os, sys import mido import cv2 def get_images(path: str) -&gt; list: &quot;&quot;&quot; Recursively crawl through dir and load in all images &quot;&quot;&quot; images = [] image_files = glob.glob(os.path.join(path, '*')) print(image_files) for i in image_files: images.append(cv2.imread(i)) return images def get_midi_port(): # Get a list of available input port names input_ports = mido.get_input_names() # Get first port that is not a Through type for p in input_ports: if &quot;Midi Through&quot; not in p: print(p) port = p return p if not input_ports or not p: raise Exception(&quot;No MIDI input ports found.&quot;) class MIDI_Images(): def __init__( self, path=&quot;.&quot; ): self.path = path self.loaded_images = get_images(self.path) self.curr = 0 if auto_start: self.fig, self.ax = plt.subplots() self.display() plt.show(block=False) ## opens window but no image plt.pause(0.1) ### plt.show() ## shows image but code doesn't continue for MIDI notes ### Ingest MIDI messages try: p = get_midi_port() with mido.open_input(p) as inport: inport.poll() print(f&quot;Listening for MIDI messages on port: {inport.name}&quot;) while True: for msg in inport.iter_pending(): print(msg) self.curr = msg.note - 24 ## lowest note on my controller is 24 self.update_image() except Exception as e: print(f&quot;Error: {e}&quot;) def update_image(self): &quot;&quot;&quot; Load in next image &quot;&quot;&quot; # advance to next image try: self.im.set_array(self.loaded_images[self.curr]) self.display() ## self.fig.canvas.draw_idle() ## waits for image window to close if uncommented except IndexError: print(&quot;Sorry no image in index: &quot;, n) def display(self): &quot;&quot;&quot; Orchestrating function to run &quot;&quot;&quot; image = self.loaded_images[self.curr] self.im = self.ax.imshow(image) ### Examples of key bindings working just fine # self.fig.canvas.mpl_connect(&quot;key_press_event&quot;, self.next_image) if __name__ == &quot;__main__&quot;: import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--imagedir', action=&quot;store&quot;, dest='imagedir', default='') args = parser.parse_args() MIDI_Images(args.imagedir) </code></pre>
<python><matplotlib><midi>
2025-03-04 07:15:24
1
1,599
Korean_Of_the_Mountain
79,482,737
1,942,868
Redis data fetching error, redis.exceptions.ResponseError: WRONGTYPE Operation against a key holding the wrong kind of value
<p>I am trying to fetch the redis data, then</p> <p>at first try this script</p> <pre><code>pool = redis.ConnectionPool(host='localhost', port=6379, db=0) self.redis = redis.Redis(connection_pool=pool) for key in self.redis.keys(): print (&quot;key:&quot;,key) </code></pre> <p>It shows like this,</p> <pre><code>key: b'asgi:group:347' key: b'asgi:group:62' key: b'asgi:group:344' key: b'asgi:group:305' key: b'asgi:group:348' </code></pre> <p>then I try to check ,</p> <pre><code>print(self.redis.get(&quot;asgi:group:348&quot;)) </code></pre> <p>However it shows the error like this, <code>redis.exceptions.ResponseError: WRONGTYPE Operation against a key holding the wrong kind of value</code></p> <p>How can I get the value of table?</p>
<python><redis>
2025-03-04 05:29:26
1
12,599
whitebear
79,482,623
12,692,182
Why is Tkinter Button unpressable? (No errors raised, not disabled)
<p>I was working with Tkinter in OOP and I came across an odd bug: the button here goes completely unresponsive, despite being fully active and the mainloop running.</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk def main(): global root, button root = tk.Tk() pixel = tk.PhotoImage(width=1, height=1) button = tk.Button(root, text=&quot;Hmm&quot;, image=pixel, compound=&quot;c&quot;) button.pack() main() root.mainloop() </code></pre> <p>I used the hidden image as I need to set the specific pixel height and width of the button, but when I remove the <code>image=pixel</code> the button works fine.</p> <p>This button is unpressable, both with mouse and with <code>tab</code>+<code>enter</code>:</p> <p><a href="https://i.sstatic.net/BlCC5Ozu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BlCC5Ozu.png" alt="Screenshot of button that can't be pressed" /></a></p>
<python><tkinter><button>
2025-03-04 03:43:30
4
1,011
User 12692182
79,482,453
3,754,268
WxFormBuilder warning - First ID is less than 5999
<p>WxFormBuilder, version 4.2.1 and WxPython 4.2.2</p> <p>After upgrading from earlier versions of WxPython and WxFormBuilder, I loaded in a previously building and working project. When I click on 'Generate Code' (F8), the generator pops up an error message &quot;First ID is less than 5999.&quot; My code does not set any ID values. What could be causing this to happen?</p> <p>I am building a Python GUI application, and it is based on a Frame with a menu bar and a base panel into which all of the widgets are added using WxFormBuilder to create the framework.</p> <p>The framework tool creates: class MainFrame(wx.Frame).</p>
<python><wxpython><wxformbuilder>
2025-03-04 00:54:28
1
377
Traveler_3994
79,482,398
2,913,725
Google Speech returning all words ever spoken, instead of just the words from the transcript
<p>Using Google Speech in Python, I'm able to get a transcript for each phrase spoken using result.alternatives[0].transcript, but when I try to look at the words for the phrase, result.alternatives[0].words always returns an array of ALL of the words ever spoken, not just the words from the transcript... which seems wrong? Is this a bug, or is there some way to filter out/reset the words array, since I'm only interested in the words in the spoken phrase.</p> <p>My code:</p> <pre><code>if not response.results: continue result = response.results[0] if not result.alternatives: continue transcript = result.alternatives[0].transcript confidence = result.alternatives[0].confidence words = result.alternatives[0].words if result.is_final: print(&quot;*******************&quot;) sensory_log.info(f&quot;Final STT output: {transcript}&quot;) print(f&quot;Confidence: {confidence:.2f}&quot;) self.process_input(transcript) # Check for multiple speakers using words if words: print(words) # Track unique speaker IDs using a list speaker_ids = [] for word in words: print(f&quot;Word: {word.word} (speaker_tag: {word.speaker_tag})&quot;) if word.speaker_tag not in speaker_ids: speaker_ids.append(word.speaker_tag) print(f&quot;Detected {len(speaker_ids)} speakers&quot;) </code></pre>
<python><speech><google-speech-api><google-speech-to-text-api>
2025-03-04 00:03:32
1
1,599
JackKalish
79,482,383
12,980,093
Lazy evaluations for DataFrames
<p>Let me provide quick demo which shows that second approach is 10x times slower than the first one.</p> <pre><code>import pandas as pd from timeit import default_timer as timer r = range(1,int(1e7)) df = pd.DataFrame({ 'col0': [i % 3 for i in r], 'col1': r }) df['pad'] = '*' * 100 start = time.time() print(df.groupby('col0')['col1'].min()) end = time.time() print(end - start) start = time.time() print(df.groupby('col0').min()['col1']) end = time.time() print(end - start) </code></pre> <p>Output:</p> <pre><code>col0 0 3 1 1 2 2 Name: col1, dtype: int64 0.14302301406860352 col0 0 3 1 1 2 2 Name: col1, dtype: int64 1.4934422969818115 </code></pre> <p>The reason is obvious - in the second case python calculates <code>min</code> also for column <code>pad</code> while in first case it does not do that.</p> <p>Is there any way to make python aware that computation on <code>DataFrameGroupBy</code> is required for <code>col1</code> only in the second case?</p> <p>If this is impossible then I'm curious if this is limitation of current <code>pandas</code> implementation or limitation of the python language itself (i.e. expression <code>df.groupby('col0').min()</code> must be fully computed no matter what follows next).</p> <p>Thanks</p>
<python><dataframe><lazy-evaluation>
2025-03-03 23:51:00
1
653
Slimboy Fat
79,482,376
3,782,963
Pandas dropping first group of values
<p>I want to drop the first group of rows based on a column's value. Here is an example of a table</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>stage</th> <th>h1</th> <th>h2</th> <th>h3</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>4</td> <td>55</td> <td>55</td> </tr> <tr> <td>0</td> <td>5</td> <td>66</td> <td>44</td> </tr> <tr> <td>0</td> <td>4</td> <td>66</td> <td>33</td> </tr> <tr> <td>1</td> <td>3</td> <td>33</td> <td>55</td> </tr> <tr> <td>0</td> <td>5</td> <td>44</td> <td>33</td> </tr> </tbody> </table></div> <p>Get the column stage, get all the first group of rows that start with 0, and drop the rows in the table. The table will look like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>stage</th> <th>h1</th> <th>h2</th> <th>h3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3</td> <td>33</td> <td>55</td> </tr> <tr> <td>0</td> <td>5</td> <td>44</td> <td>33</td> </tr> </tbody> </table></div> <p>This is what I did:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = {'stage': [0, 0, 0, 1, 0], 'h1': [4, 5, 4, 3, 5], 'h2': [55, 66, 66, 33, 44], 'h3': [55, 44, 33, 55, 33]} df = pd.DataFrame(data) # Find indices of the first group of rows with uiwp_washing_stage = 0 indices_to_drop = [] for i in range(len(df)): if df['stage'].iloc[i] == 0: indices_to_drop.append(i) else: break df = df.drop(indices_to_drop) df = df.reset_index(drop=True) print(df) </code></pre> <p>The above seems to work, but if the file is too big it takes a while, is there a Pands way of doing this?</p>
<python><pandas>
2025-03-03 23:45:59
2
2,835
Akshay
79,482,290
2,144,360
How to handle German language specific characters like (ä, ö, ü, ß) while tokenizing using GPT2Tokenizer?
<p>I am working with German Texts, where I need to tokenize texts using GPT2Tokenizer.</p> <p>To tokenize the text, I wrote the implementation as follows:</p> <pre class="lang-py prettyprint-override"><code>from transformers import GPT2Tokenizer text = &quot;zügiger Transport des ABCD stabilen Kindes in die Notaufnahme UKA&quot; text = text.encode(&quot;utf-8&quot;).decode(&quot;utf-8&quot;) # Re-encode to fix encoding issues # Load GPT-2 tokenizer tokenizer = GPT2Tokenizer.from_pretrained(&quot;gpt2&quot;) # Tokenize the text tokens = tokenizer.tokenize(text) print(tokens) # Should properly tokenize &quot;zügiger&quot; instead of splitting &quot;ü&quot; </code></pre> <p>Now, when I execute this code snippet I get output as follows:</p> <pre><code>['z', 'ü', 'g', 'iger', 'ĠTransport', 'Ġdes', 'ĠABC', 'D', 'Ġstabil', 'en', 'ĠKind', 'es', 'Ġin', 'Ġdie', 'ĠNot', 'au', 'fn', 'ah', 'me', 'ĠUK', 'A'] </code></pre> <p>After a bit of analysis, I have found that all German language specific characters are mis-decoded as Latin-1 see the table below.</p> <pre class="lang-markdown prettyprint-override"><code>| Character | UTF-8 Bytes | Misdecoded as Latin-1 | Resulting String | |-----------|-------------|-----------------------|------------------| | ä | C3 A4 | à + ¤ | ä | | ö | C3 B6 | à + ¶ | ö | | ü | C3 BC | à + ¼ | ü | | ß | C3 9F | à + Ÿ | ß | </code></pre> <p>Now, how I can keep German language specific characters like (ä, ö, ü, ß) inside tokens after the tokenization process, avoiding unintentional misdecodeding, i.e. &quot;zügiger&quot; becomes something like ['z', 'ü', 'g', 'iger'].</p>
<python><nlp><tokenize><large-language-model><gpt-2>
2025-03-03 22:32:36
1
1,372
RajibTheKing
79,482,197
8,167,752
python-pptx - Getting an unwanted slide at the start when using a custom template
<p>The company I work for has a standard PowerPoint template. I'm trying to use this template with python-pptx. However, I'm getting an unwanted slide at the start of my slide deck and I can't figure out what I'm doing wrong.</p> <p>My first attempt was a simple:</p> <pre><code>prs = Presentation('companyTemplate.potx') </code></pre> <p>but this gave several errors.</p> <p>I then read the manual (yeah, I know) and realized I should have converted the .potx file into a standard PowerPoint .pptx file.</p> <p>My second attempt was:</p> <pre><code>prs = Presentation('companyTemplate.pptx') </code></pre> <p>This worked but my desired PowerPoint slide deck had the entire template PowerPoint at the start.</p> <p>At this point, I created a new, blank, PowerPoint using the default PowerPoint template and switched to Slide Master view.</p> <ul> <li>I then opened the companyTemplate PowerPoint</li> <li>Switched to Slide Master view in the companyTemplate PowerPoint</li> <li>Copied the first &quot;slide&quot; in this Slide Master view to the open, blank, PowerPoint.</li> <li>Deleted the original Slide Master from the new, blank, PowerPoint</li> <li>Closed the Slide Master view</li> <li>Saved this newly-modified PowerPoint as my new companyTemplate.pptx file.</li> </ul> <p>This works but, as noted at the start, I'm getting a single unwanted slide at the start of my desired slide deck.</p> <p>It's easy enough to delete this unwanted slide but I'd like to keep it from appearing in the first place.</p> <p>When I use the default presentation in python-pptx, I don't get an unwanted slide at the start, so I figure there has to be a way to do what I want. I just can't figure out how.</p> <p>For what it's worth, the unwanted slide is the last &quot;slide&quot; in Slide Master view, and is labeled &quot;Blank&quot;. It's also the slide that appears if I open my own personal companyTemplate.pptx PowerPoint file.</p>
<python><python-pptx>
2025-03-03 21:36:10
1
477
BobInBaltimore
79,482,145
6,041,629
Extreme value analysis and quantile estimation using log Pearson type 3 (Pearson III) distribution - R vs Python
<p>I am trying to estimate quantiles for some snow data using the log pearson type 3 distribution in Python and comparing with R. I do this by reading in the data, log transforming it, fitting Pearson type 3, estimating quantiles, then transforming back from log space.</p> <p>In python:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats import lmoments3 as lm import lmoments3.distr as ld data=np.array([[1079], [ 889], [ 996], [1476], [1567], [ 897], [ 991], [1222], [1372], [1450], [1077], [1354], [1029], [1699], [ 965], [1133], [1951], [1621], [1069], [ 930], [1039], [1839]]) return_periods = np.array([2,3,5,10,20,50,100,200,1000]) log_data = np.log(data) params = stats.pearson3.fit(log_data) #Max likelihood estimation method quantiles = np.exp(stats.pearson3.ppf(1 - 1 / return_periods, *params)) paramsmm=ld.pe3.lmom_fit(log_data) #lmoments estimation method paramsmm2=(paramsmm[&quot;skew&quot;], paramsmm['loc'], paramsmm['scale'][0]) quantilesmm = np.exp(ld.pe3.ppf(1 - 1 / return_periods, *paramsmm2)) print(quantiles) print(quantilesmm) </code></pre> <p>in R:</p> <pre><code>library(lmom) library(lmomco) library(FAdist) swe_data &lt;- c(1079, 889, 996, 1476, 1567, 897, 991, 1222, 1372, 1450, 1077, 1354, 1029, 1699, 965, 1133, 1951, 1621, 1069, 930, 1039, 1839) return_periods &lt;- c(2, 3, 5, 10, 20, 50, 100, 200, 1000) exceedance_probabilities &lt;- 1 / return_periods # P = 1/T nonexceedance_probabilities &lt;- 1 - exceedance_probabilities # P_nonexceedance = 1 - P_exceedance log_swe &lt;- log(swe_data) loglmoments &lt;- lmom.ub(log_swe_data) fit_LP3 &lt;- parpe3(loglmoments) #pars estimated using lmoments LP3_est=exp(quape3(nonexceedance_probabilities, fit_LP3)) print(LP3_est) </code></pre> <p>The quantiles estimated are the following:</p> <p>MLE/scipy stats:</p> <pre><code>params=(2.0246357656236125, 7.10812763271725, 0.32194785836668816) #skew, loc scale quantiles=[1105.86050592 1259.46110488 1484.67412496 1857.18767881 2324.18036925 3127.68767927 3916.2007443 4904.15011095 8271.24322709] </code></pre> <p>Lmoments/python:</p> <pre><code>params=(-2.2194418726874434, 7.1069179914286424, 0.07535915093401913) #skew, loc scale quantiles=[1251.30865382 1276.35189073 1291.29995882 1300.06624583 1303.59129662 1305.31725745 1305.78638777 1305.98555852 1306.11275037] </code></pre> <p>Lmoments/R:</p> <pre><code>params= (7.1069180 0.2566677 0.9365001) #mu, sigma, gamma quantiles=[1173.116 1313.849 1485.109 1721.131 1969.817 2326.812 2623.112 2945.728 3814.692] </code></pre> <p>I would expect the latter two methods, both using lmoments, to produce the same result. Based on comparisons with other distributions, it seems like R is giving the most realistic result. Any explanation for the large differences? How might I get a similar result in Python?</p>
<python><r><statistics><frequency-analysis>
2025-03-03 21:04:42
1
526
Kingle
79,482,125
2,218,321
Agno (Phidata) reading PDF as knowledge base failed
<p>This is the code</p> <pre><code>from agno.agent import Agent from agno.models.openai import OpenAIChat from agno.embedder.openai import OpenAIEmbedder from agno.knowledge.pdf_url import PDFUrlKnowledgeBase from agno.vectordb.lancedb import LanceDb, SearchType api_key = 'sk-XXX' knowledge_base = PDFUrlKnowledgeBase( urls=[&quot;https://arxiv.org/pdf/1311.2524v5&quot;], vector_db=LanceDb( uri=&quot;tmp/lancedb&quot;, table_name=&quot;paper&quot;, search_type=SearchType.hybrid, embedder=OpenAIEmbedder(id=&quot;text-embedding-3-small&quot;, api_key=api_key), ), ) agent = Agent( model=OpenAIChat(id=&quot;gpt-4o&quot;, api_key=api_key), description=&quot;You are a person who read a paper and knows about it&quot;, instructions=[ &quot;Search only the knowledge base.&quot;, &quot;Don't search the web&quot; ], knowledge=knowledge_base, show_tool_calls=True, markdown=True ) agent.print_response(&quot;what is title of paper?.&quot;, stream=True) </code></pre> <p>and it says</p> <blockquote> <p>I'm sorry, but I couldn't find any specific paper title in the knowledge base I have access to. Could you please provide more details or specify a different query related to a paper you are interested in?</p> </blockquote> <p>The paper is online and readable, I would like to know why agno failed to create knowledge base database?</p>
<python><phidata><agno>
2025-03-03 20:55:38
1
2,189
M a m a D
79,482,105
1,745,001
pyserial-asyncio client/server in Python 3.8 not communicating immediately
<p>I'm learning python and asyncio and after having success with asyncio for a TCP client/server I took my first stab at creating a serial client/server using pyserial-asyncio running in bash on a Raspberry Pi 5 using Python 3.8 (I cannot change version).</p> <p>Here is the server:</p> <pre><code>import asyncio import serial_asyncio class UARTProtocol(asyncio.Protocol): def __init__(self): self.transport = None def connection_made(self, transport): self.transport = transport print('Port opened', transport) def data_received(self, data): print('Data received:', data.decode()) # Echo received data back (example) self.transport.write(data) # Close the connection if 'exit' is received if data == b&quot;exit\r&quot;: self.transport.close() def connection_lost(self, exc): print('Port closed') self.transport = None def pause_writing(self): print('pause writing') print(self.transport.get_write_buffer_size()) def resume_writing(self): print(self.transport.get_write_buffer_size()) print('resume writing') async def run_uart_server(): loop = asyncio.get_running_loop() try: transport, protocol = await serial_asyncio.create_serial_connection(loop, UARTProtocol, '/dev/ttyAMA2', baudrate=9600) print(&quot;UART server started.&quot;) await asyncio.Future() # Run forever except serial.serialutil.SerialException as e: print(f&quot;Error: Could not open serial port: {e}&quot;) finally: if transport: transport.close() if __name__ == &quot;__main__&quot;: asyncio.run(run_uart_server()) </code></pre> <p>and the client:</p> <pre><code>import asyncio import serial_asyncio async def uart_client(port, baudrate): try: reader, writer = await serial_asyncio.open_serial_connection(url=port, baudrate=baudrate) print(f&quot;Connected to {port} at {baudrate} bps&quot;) async def receive_data(): while True: try: data = await reader.readline() if data: print(f&quot;Received: {data.decode().strip()}&quot;) except Exception as e: print(f&quot;Error reading data: {e}&quot;) break async def send_data(): while True: message = input(&quot;Enter message to send (or 'exit' to quit): &quot;) if message.lower() == 'exit': break writer.write((message + '\n').encode()) # writer.write_eof() await writer.drain() print(f&quot;Sent: {message}&quot;) await asyncio.gather(receive_data(), send_data()) except serial.SerialException as e: print(f&quot;Error opening serial port: {e}&quot;) finally: if 'writer' in locals(): writer.close() await writer.wait_closed() print(&quot;Connection closed.&quot;) if __name__ == &quot;__main__&quot;: asyncio.run(uart_client('/dev/ttyAMA1', 9600)) </code></pre> <p>I want the client to prompt me for some text which is immediately sent to the server and printed there. I can get the client to prompt me for text, but the server doesn't display any of it until after I type <code>exit</code> in the client to close the connection and then it prints all of the text I typed in the client loop.</p> <p>Among many other things, I've tried adding <code>writer.write_eof()</code> in the client (see commented out line in the client code below) and that succeeds in the server immediately displaying the preceding text from the client but then the client never prompts me for input again.</p> <p>If I run the server and just do <code>echo foo &gt; /dev/ttyAMA1</code> from bash the server prints <code>foo</code> immediately so I suspect the client is the problem.</p> <p>What am I doing wrong?</p>
<python><python-3.x><python-asyncio><pyserial>
2025-03-03 20:43:30
1
208,382
Ed Morton
79,482,028
140,903
Python Version Management with Pyenv on MacBook Pro
<p>I'm using <strong>pyenv</strong> to manage Python versions on my MacBook Pro. I have a question about how pyenv handles shim paths for each version.</p> <p>I've installed different Python versions using pyenv.</p> <pre><code> pyenv versions system 3.10.16 3.11.9 3.11.9/envs/extra 3.11.11 * 3.12.4 (set by /Users/xxxxx/.pyenv/version) 3.12.4/envs/cost 3.12.4/envs/myproject cost --&gt; /Users/xxxxx/.pyenv/versions/3.12.4/envs/cost myproject --&gt; /Users/xxxxx/.pyenv/versions/3.12.4/envs/myproject extra --&gt; /Users/xxxxx/.pyenv/versions/3.11.9/envs/extra </code></pre> <p>However, when I try to run them using shims, only the default version works. The other versions show an error saying they're not specified.</p> <p>shim path for <code>python3.10</code></p> <pre><code>MySampleProject/src/MySampleProjectCDK (mainline) % /Users/xxxxx/.pyenv/shims/python3.10 --version pyenv: python3.10: command not found The `python3.10' command exists in these Python versions: 3.10.16 Note: See 'pyenv help global' for tips on allowing both python2 and python3 to be found. </code></pre> <p>shim path for <code>python3.11</code></p> <pre><code>MySampleProject/src/MySampleProjectCDK (mainline) % /Users/xxxxx/.pyenv/shims/python3.11 --version pyenv: python3.11: command not found The `python3.11' command exists in these Python versions: 3.11.9 3.11.9/envs/extra 3.11.11 extra Note: See 'pyenv help global' for tips on allowing both python2 and python3 to be found. </code></pre> <p>shim path for <code>python3.12</code></p> <pre><code>MySampleProject/src/MySampleProjectCDK (mainline) % /Users/xxxxx/.pyenv/shims/python3.12 --version Python 3.12.4 </code></pre> <p>This is causing problems for a project I'm working on. I'm trying to build and test the project using Hatch for both Python <code>3.11</code> and <code>3.12</code>.</p> <p>I want to know how to make Python 3.10 and 3.11 behave like Python 3.12 when I'm using the shim path</p>
<python><pyenv><hatch>
2025-03-03 20:06:28
1
7,461
Am1rr3zA
79,481,878
2,537,486
Formatting pandas tables that contain lists of floats
<p>How can I format numbers in a <code>pandas</code> table, when a cell displays a list of floating point numbers instead of a single float value?</p> <p>Here's a code example:</p> <pre><code>import pandas as pd df = pd.DataFrame(data={'Values':[[0.1231245678,0,0],[1e-10,0,0]]}) df </code></pre> <p>I would like to format the numbers in the table as <code>%.2f</code>. So the table data should be displayed as</p> <pre><code>[0.12, 0.00, 0.00] [0.00, 0.00, 0.00] </code></pre> <p>The usual options:</p> <pre><code>pd.set_option('float_format','{:20,.2f}'.format) pd.set_option('display.chop_threshold', 0.001) </code></pre> <p>only work when table cells contain single numbers.</p>
<python><pandas>
2025-03-03 18:44:26
4
1,749
germ
79,481,823
5,304,366
Get a Python plotly output with quarto reveal.js presentation
<p>Plotly graph are not outputed in quarto.</p> <p>Here is my code:</p> <pre><code>--- title: &quot;Title&quot; format: revealjs: auto-fit: false --- ## Slide ```{python} from IPython.display import HTML import plotly.express as px data = {&quot;x&quot;: [1, 2, 3, 4], &quot;y&quot;: [10, 20, 15, 25]} # Create a figure fig = px.line(data, x=&quot;x&quot;, y=&quot;y&quot;, title=&quot;Simple Plotly Line Chart&quot;) # Show the plot fig.show() ``` ## Matplotlib ```{python} import matplotlib.pyplot as plt plt.plot([1,2,3]) ``` </code></pre> <p>There is not plotly output, while it works seamlessly for matplotlib plot. <a href="https://i.sstatic.net/J0MgvW2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J0MgvW2C.png" alt="enter image description here" /></a> How can I get the plotly output?</p>
<python><plotly><quarto>
2025-03-03 18:20:33
0
2,179
Adrien Pacifico
79,481,758
1,942,868
How can I get the member list of logged-in users
<p>I am using the <code>django</code> and <code>websocket</code>.</p> <p>I have the script like this, a user can login a websocket group</p> <p>Now I want to return the list of users log-in-ed and return the list to the client.</p> <p>How can I make this?</p> <pre><code>class ChatConsumer(AsyncWebsocketConsumer): async def connect(self): self.room_group_name = self.scope[&quot;url_route&quot;][&quot;kwargs&quot;][&quot;room_name&quot;] logger.info(&quot;channel_name:{} / group_name {} / room_name {}&quot;.format(self.channel_name,self.room_group_name,self.scope[&quot;url_route&quot;][&quot;kwargs&quot;][&quot;room_name&quot;])) await self.channel_layer.group_add( self.room_group_name, self.channel_name ) await self.accept() await self.send(text_data=json.dumps({ 'channel_name': self.channel_name, 'room_group_name':self.room_group_name, 'member_list': 'how can I get the member??' })) </code></pre>
<python><django><websocket>
2025-03-03 17:58:39
1
12,599
whitebear
79,481,710
11,951,910
Pylint is ignoring disable flag
<p>I am running into an issue with pylint and lambda functions. To ignore the error I am using:</p> <pre><code>some_thing = lambda ... # pylint: disable=E731 </code></pre> <p>I have also tried:</p> <pre><code># pylint: disable=E731 some_thing = lambda .... # pylint: disable=E731 </code></pre> <p>Running my pipeline I have tried:</p> <pre><code>pylint: script: ... pylint --disable=E731 </code></pre> <p>When it runs the check is not being disabled.</p> <p>There is no config file. It is all handled inline. But when run I am getting <code>E731 do not assign a lambda expression, use a def</code>.</p>
<python><pylint>
2025-03-03 17:32:59
0
718
newdeveloper
79,481,674
457,159
Namespace Package Import Issue in Linux but Not in Windows
<p>I have a namespace package called foo, which contains multiple distribution packages:</p> <ul> <li>foo.bar_one, foo.bar_two, and foo.bar_three are installed as distribution packages via pip from a private GitLab repository.</li> <li>These packages are installed under Python's site-packages, which is included in the PYTHONPATH.</li> </ul> <p>Additionally, in the project root, there is another directory also named foo, containing additional packages (e.g., foo.bar_z). (All packages originally started being part of this foo namespace in the project; they are provided by different teams and are stored in separate repos, then rebuilt into the project by adding the source code under the foo namespace. I'm trying to move these packages outside of the projects; but each team will need to have in their projects their own package to keep working on it and provide new versions. These team packages do have dependencies of some of the core packages in the namespace, and sometimes from some of the other team packages). The challenge I'm trying to overcome is how to have a team work on their own package in the project's foo namespace, while depending on the ones installed in site-packages by pip.</p> <ul> <li>The project root is also added to PYTHONPATH, allowing imports like import foo.bar_z as if all packages were in the same namespace.</li> <li>There are dependencies between some of these packages.</li> <li>foo.bar_z will also be published as a distribution package so that other projects can install it via pip, in which case it would reside in site-packages.</li> </ul> <p><strong>The Issue:</strong> This setup works as expected on Windows with python 3.x (I can import both foo.bar_one and foo.bar_z), but on Linux, I encounter namespace-related import issues:</p> <ul> <li><p>If site-packages appears first in PYTHONPATH, I can import foo.bar_one, but foo.bar_z fails, stating that bar_z does not exist within the foo namespace.</p> </li> <li><p>If the project root appears first in PYTHONPATH, I can import foo.bar_z, but foo.bar_one fails, stating that bar_one does not exist within the foo namespace.</p> </li> </ul> <p><strong>Questions:</strong></p> <ul> <li>Is there a fundamental difference in how namespace packages are handled between Windows and Linux that causes this behavior?</li> <li>Is there a minor configuration change that can resolve this issue on Linux?</li> <li>If this setup is inherently flawed, what would be a better approach to achieve the same goal while maintaining compatibility?</li> </ul> <p>Any insights or recommendations on resolving this would be greatly appreciated!</p> <p>EDIT:</p> <ul> <li>Linux system has PYTHON_VERSION=3.12.7</li> <li>Windows system has Python Version: 3.12.6</li> </ul> <p>EDIT 2: I found this regarding the namespaces, which leads me to believe that what I'm trying to do is possible, and in fact it seems to work on Windows.</p> <blockquote> <p>5.2.2. Namespace packages A namespace package is a composite of various portions, where each portion contributes a subpackage to the parent package. Portions may reside in different locations on the file system. Portions may also be found in zip files, on the network, or anywhere else that Python searches during import. Namespace packages may or may not correspond directly to objects on the file system; they may be virtual modules that have no concrete representation.</p> <p>Namespace packages do not use an ordinary list for their <strong>path</strong> attribute. 
They instead use a custom iterable type which will automatically perform a new search for package portions on the next import attempt within that package if the path of their parent package (or sys.path for a top level package) changes.</p> <p>With namespace packages, there is no parent/<strong>init</strong>.py file. In fact, there may be multiple parent directories found during import search, where each one is provided by a different portion. Thus parent/one may not be physically located next to parent/two. In this case, Python will create a namespace package for the top-level parent package whenever it or one of its subpackages is imported.</p> <p>See also PEP 420 for the namespace package specification. <a href="https://docs.python.org/3/reference/import.html#namespace-packages" rel="nofollow noreferrer">5.2.2. Namespace packages</a></p> </blockquote> <p>In my question, foo is the namespace, bar_one and bar_z are two portions that are not physically located next to each other. Am I reading this correctly?</p>
<python><python-3.x><namespaces>
2025-03-03 17:15:59
0
876
EKI
79,481,583
8,037,521
Poor responsiveness of a QScrollBar handle
<p>Not sure what is the issue: I am using list view and I struggle with the scrollbar. Somehow, the handle part is not always responsive. Sometimes I can click the handle and scroll fast to the place which I need, sometimes I cannot. I made a MRE from it and I could not identify if there are some specific conditions when it does or does not happen. I can always scroll with the mouse wheel, so it does not seem to be an app freeze. But I cannot always grab the handle and drag it. MRE is below. I am on Windows 11, using PySide2. Not sure if the problem is on my machine (though what can be wrong for a scrollbar?) or in some way the scrollbar works. It is as if it's handle is not always active.</p> <pre><code>import sys import os from PySide2.QtGui import QPixmap, QIcon, QStandardItemModel, QStandardItem from PySide2.QtWidgets import ( QApplication, QWidget, QVBoxLayout, QListView, ) from PySide2.QtCore import Qt, QDir, QSize class Example(QWidget): def __init__(self): super().__init__() self.layout = QVBoxLayout(self) # File list view setup self.file_list_view = QListView(self) self.file_list_view.setFixedHeight(200) self.layout.addWidget(self.file_list_view) # File model self.file_model = QStandardItemModel(self) self.file_list_view.setModel(self.file_model) # Custom scrollbar style self.file_list_view.verticalScrollBar().setStyleSheet( &quot;&quot;&quot; QScrollBar:vertical { width: 30px; background-color: transparent; border: none; } QScrollBar::handle:vertical { background-color: rgb(100,100,100); border-radius: 5px; min-height: 50px; } QScrollBar::add-line:vertical, QScrollBar::sub-line:vertical { border: none; background: transparent; } QScrollBar::up-arrow:vertical, QScrollBar::down-arrow:vertical { background: transparent; } &quot;&quot;&quot; ) def populate_directory_view(self, dir_path): self.file_model.clear() self.dir_path = str(dir_path) # Use os.listdir to get the file names (instead of QDir) files = os.listdir(self.dir_path) for file in files: file_item = QStandardItem(file) file_item.setTextAlignment(Qt.AlignCenter) # Center the text file_item.setEditable(False) self.file_model.appendRow(file_item) # Set the view mode and remove icons self.file_list_view.setViewMode(QListView.ListMode) self.file_list_view.setIconSize(QSize(0, 0)) def reset(self): self.file_model.clear() self.dir_path = &quot;&quot; if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) viewer = Example() viewer.show() # Test directory path (use any valid directory on your machine) test_directory = ... # Replace with your own test directory path viewer.populate_directory_view(test_directory) sys.exit(app.exec_()) </code></pre> <p>Would be nice if a completely unrelated question would not be marked as duplicate of mine. I want to customize the handle.</p>
<python><qt><pyside2>
2025-03-03 16:37:55
0
1,277
Valeria
79,481,529
10,722,752
Getting Error Code: ScriptExecution.StreamAccess.NotFound error while trying to read in mltable in a pipeline
<p>I am trying to read in a MLTABLE in my pipeline, but getting:</p> <pre><code>Error Code: ScriptExecution.StreamAccess.NotFound Native Error: error in streaming from input data sources StreamError(NotFound) =&gt; stream not found NotFound Error Message: The requested stream was not found. Please make sure the request uri is correct.| session_id=1f8669ce-5a60-494b-a8dd-fd07fee8b186 </code></pre> <p>When I try to read in the mltable in an interactive session, it works fine:</p> <pre><code>import mltable tbl = mltable.load(f'azureml:/{flight_data.id}') tbl.to_pandas_dataframe() </code></pre> <p>The above code works fine.</p> <p>I am reading in the data using:</p> <pre><code>flight_data = ml_client.data.get(name='flightdelaydata1', version='2') </code></pre> <p>Below is the <code>.py</code> code:</p> <pre><code>%%writefile {data_prep_folder}/data_prep.py import pandas as pd import numpy as np from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error import mlflow import mltable import os import argparse def main(): #Main function of the file parser = argparse.ArgumentParser() parser.add_argument('--data', help='Input data for flight delay model', type=str) parser.add_argument('--train_test_split_ratio', help='Test data proportion', type=float, default=.20) parser.add_argument('--train_data', help='Training Data', type=str) parser.add_argument('--test_data', help='Test Data', type=str) args = parser.parse_args() # Start logging mlflow.start_run() tbl = mltable.load(args.data) df = tbl.to_pandas_dataframe() mlflow.log_metric('Number of observations', df.shape[0]) mlflow.log_metric('Number of features', df.shape[1]) df = df.dropna() df = df.loc[ : , ['Month', 'DayofMonth', 'DayOfWeek', 'DepDelay', 'DepDel15', 'ArrDel15', 'Cancelled', 'ArrDelay']] Train_data, Test_data = train_test_split(df, test_size=args.train_test_split_ratio) Train_data.to_csv(os.path.join(args.train_data, 'train.csv')) Test_data.to_csv(os.path.join(args.test_data, 'test.csv')) mlflow.end_run() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>and the <code>command</code> code is as below:</p> <pre><code>from azure.ai.ml.constants import AssetTypes data_prep_component = command(name='flight_delay_model_data_prep', description='Flight Delay Model Prediction Data Preparation Component', display_name='Flight Delay Data Prep', inputs = { 'data' : Input(type=AssetTypes.MLTABLE, path = flight_data.path), 'train_test_split_ratio' : Input(type='number') }, outputs = { 'train_data' : Output(type = 'uri_folder'), 'test_data' : Output(type = 'uri_folder') }, command= '''python {data_prep_folder}/data_prep.py \ --data ${{inputs.data}} --train_test_split_ratio ${{inputs.train_test_split_ratio}} \ --train_data ${{outputs.train_data}} --test_data ${{outputs.test_data}}''', environment = f'{envt.name}:{envt.version}' ) </code></pre> <p>Not sure if am making an error while reading the <code>mltable</code> in the job code. Can someone please help me where I am going wrong.</p>
<python><azure><azure-pipelines><azure-machine-learning-service>
2025-03-03 16:20:13
2
11,560
Karthik S
79,481,508
648,689
How does int() operate from memoryview -- does it create an intermediate bytes copy?
<p>I'm very interested in zero-copy operations (and disappointed that <code>memoryview</code> doesn't have all the member functions that <code>bytes</code> does). My application will be pretty intensive in this regard.</p> <p>Consider this python code, extracting integers from bytes/buffers:</p> <pre><code>&gt;&gt;&gt; &gt;&gt;&gt; b: bytes = b'987654' &gt;&gt;&gt; mv: memoryview = memoryview(b) &gt;&gt;&gt; type(mv) &lt;class 'memoryview'&gt; &gt;&gt;&gt; isinstance(mv, bytes) False &gt;&gt;&gt; int(b) # as expected 987654 &gt;&gt;&gt; int(mv) # works in py 3.12, but does it make a temporary bytes copy? 987654 </code></pre> <p>Doing a huge number of these operations, it will make a big difference if it's making intermediate/temporary bytes copies for each integer extraction.</p> <ul> <li>Which <code>int()</code> constructor gets called?</li> <li>What's a good test for me to determine/verify this? I'm thinking some monkeypatch of <code>mv.__some_member_function__ = my_function_that_prints_something</code> but where to start?</li> <li>And, of course, is an intermediate bytes copy being made just for the <code>int()</code> call?</li> </ul> <p>My environment:</p> <pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; sys.version '3.12.9 (main, Feb 6 2025, 14:33:23) [GCC 14.2.0 64 bit (AMD64)]' </code></pre> <p>Reference:</p> <ul> <li><a href="https://docs.python.org/3/library/functions.html#int" rel="nofollow noreferrer">Python's <code>int()</code> docs</a></li> <li><a href="https://stackoverflow.com/questions/18655648/what-exactly-is-the-point-of-memoryview-in-python">What exactly is the point of memoryview in Python?</a></li> </ul>
<python><in-place><memoryview><zero-copy>
2025-03-03 16:12:17
0
1,107
JimB
79,481,444
11,659,631
Fitting with Maximum likelihood estimation in python returns initial parameters
<p>I would like to use <code>minimize</code> from <code>scipy.optimize</code> to fit my data with a specific function: <a href="https://i.sstatic.net/oTV0WKIA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTV0WKIA.png" alt="enter image description here" /></a></p> <p>I defined this log likelihood to this aim:</p> <pre><code>def log_likelihood(theta, freq, CPSD, CPSD_err): ''' This function computes the log-likelihood for a given set of model parameters θ in a Bayesian framework, comparing the observed data (freq, CPSD, CPSD_err) to a model. - theta: A tuple containing the model parameters: - freq: The independent variable eg frequency - CPSD: The observed dependent variable: cross power spectral density - CPSD_err: The uncertainties (errors) on the observed values of yy. ''' #extract the model parameters alpha_0, freq_O, alpha, CPSD_err = theta # Define the model model = alpha_0 * (freq_O**2 + freq**2)**(-(alpha + 3/2)) # Calculate the total variance: # yerr2: The squared measurement uncertainties. # exp(2⋅logf)=f2, which represents the fractional noise. sigma2 = CPSD_err**2 #+ model**2 * np.exp(2 * log_f) # computes and returns the log-likelihood: # This is based on the formula for the log of a Gaussian likelihood: return -0.5 * np.sum((CPSD - model) ** 2 / sigma2 + np.log(sigma2)) # step 2: Define the Negative Log-Likelihood def nll(*args): ''' Define the Negative Log-Likelihood Minimizing the negative log-likelihood is equivalent to maximizing the log-likelihood. Purpose: This function will be minimized to estimate the model parameters. ''' return -log_likelihood(*args) </code></pre> <p>following this tutorial: <a href="https://emcee.readthedocs.io/en/stable/tutorials/line/" rel="nofollow noreferrer">https://emcee.readthedocs.io/en/stable/tutorials/line/</a></p> <p>This is my function to do this:</p> <pre><code>def fit_CPSD_flattened(freq, CPSD, CPSD_err, freq_max, alpha_0, alpha, freq_O): ''' This function fit the power spectrum with a flattened spectrum - freq: frequencies, my data for the x axis - CPSD: cross power spectral density, my data for the y axis - freq-Max: freq until which I want to fit - alpha_0: initial guess for parameter alpha_0 - alpha: initial guess for parameter alpha - freq_0: initial guess for parameter freq_0 ''' # find the index of the maximum frequency for the fitting # np.where returns an array with elements from x where condition is True, and elements from y elsewhere. 
idx_high = np.where(freq &gt;= freq_max)[0][0] print(f'{freq[idx_high]} Hz') # select the data range that will be fitted CPSD_selected = CPSD[:idx_high] freq_selected = freq[:idx_high] CPSD_err_scalar = np.std(CPSD_err) # initial guess for the parameters initial = np.array([alpha_0, freq_O, alpha, CPSD_err_scalar]) # Define bounds for the parameters # Each tuple corresponds to (min, max) for a parameter bounds = [ (1e-11, 1e-4), # Bounds for alpha_0 (0.0001, 2.0), # Bounds for freq_O (-6, 0), # Bounds for alpha (-np.inf, np.inf) # Bounds for CPSD_err ] # step 4: Minimize the Negative Log-Likelihood soln = minimize(nll, initial, args=(freq_selected, CPSD_selected, CPSD_err_scalar), method='L-BFGS-B', bounds=bounds) # Step 5: Extract the Best-Fit Parameters alpha_0_ml, freq_O_ml, alpha_ml, CPSD_err_ml = soln.x return alpha_0_ml, freq_O_ml, alpha_ml </code></pre> <p>and this is my code where I call the function with my data:</p> <pre><code># std of the CPSD std CPSD_err_scalar = np.std(CPSD_std_19) #initial guess for the parameter alpha_0 = 5e-10 freq_O = 0.006 alpha = -0.30 freq_max = 1 #Hz # In this specific example: alpha_0_ml, freq_O_ml, alpha_ml = fit_CPSD_flattened(bins_19[CPSD_mean_19&gt;0], CPSD_mean_19[CPSD_mean_19&gt;0], CPSD_err_scalar, freq_max, alpha_0, alpha, freq_O) print(f'alpha_0_ml = {alpha_0_ml}') print(f'freq_O_ml = {freq_O_ml}') print(f'alpha_ml = {alpha_ml}') slope = alpha_ml + 3/2 plt.plot(bins_19[CPSD_mean_19&gt;0], CPSD_mean_19[CPSD_mean_19&gt;0]) # fit plt.plot(bins_19, flattening_CPSD(bins_19, alpha_0_ml, freq_O_ml, alpha_ml), color='lime', label=f'Fit: 1/k_out = {np.round(freq_O_ml, 4)}Hz, slope = -({np.round(slope*2, 3)})') # label=f'Fit: 1/k_out = ({np.round(freq_O_ml, 4)} ± {np.round(freq_O_ml_err, 4)})Hz, slope = -({np.round(slope*2, 3)} ± {np.round(slope_err, 3)})') plt.loglog() plt.legend() </code></pre> <p>but this ONLY returns the initial parameter... In the picture, the green fit is completely off the data in blue, and the fit only takes the value I gave for the initial parameter; this is obviously wrong. 
This drives me crazy, I don't know how to use this scipy minimize correctly, could someone help me?</p> <p><a href="https://i.sstatic.net/nSNGEYMP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSNGEYMP.png" alt="enter image description here" /></a></p> <p>This is the whole code for a reproducible example:</p> <pre><code>import numpy as np from numpy import loadtxt import matplotlib.pyplot as plt from astropy.io import fits import datetime import scipy.stats as stats from scipy import signal from scipy.fftpack import fft, fftfreq import math from astropy.time import Time #convert julian date to normal time from scipy.signal import find_peaks # try the find peaks scipy function import healpy as hp # to plot the QUIJOTE's map import matplotlib.ticker as ticke from scipy.optimize import curve_fit # to fit functions from matplotlib import cm # for color map import os # to create directories from scipy.interpolate import UnivariateSpline import pandas as pd import sys from datetime import datetime from astropy.coordinates import EarthLocation, AltAz, SkyCoord from astropy import units as u from tqdm import tqdm # print a progression bar import time # use a timer from datetime import datetime, timezone from matplotlib.ticker import MultipleLocator # to sert the position of minor ticks import matplotlib as mpl # to increase the thickness of plot frame from scipy.optimize import minimize def PS_oof_fct02(freq, f_s, signoise, alpha, f_knee): ''' Generate the theoretical power spectrum of a 1/f noise signal. Same as above just I deleted the one in PS of the 1/f signal. - freq: frequency range of the power spectrum - f_s: sampling frequency of the theoretical signal - signoise: level of the white noise signal: fixed! - alpha: slope of the 1/f noise signal _ f_knee: knee frequency of the 1/f noise signal ''' ############################# 1) Power spectrum of the white noise # number of sample in my theoretical observation nsample = len(freq) # generate white noise level white std signoise White_noise = np.random.normal(0, signoise, nsample) # loc, scale, size # Computing the one-dimensional discrete Fourier Transform of the white noise. 
    # Normalised
    White_noise_fft = direct_fft(White_noise, f_s)  # direct_fft(f, f_s): np.fft.fft(f) / f_s
    # Power spectrum of the white noise signal
    PS_white_noise = np.real(White_noise_fft * np.conjugate(White_noise_fft))

    ############################# 2) Create the correction term for the white noise, which will be
    # the 1/f noise
    # array of zeros with the same size as freq, to be filled with the 1/f correction term
    PS_corr = freq * 0

    ############################# 3) Fill the array for the positive frequencies
    # mask to select only the positive frequencies
    mask_pos = (freq &gt; 0)
    # apply the mask to the array of frequencies
    freq_pos = freq[mask_pos]
    # define the correction term for the 1/f noise for positive frequencies
    PS_corr[mask_pos] = (f_knee / freq_pos) ** alpha

    ############################# 4) Fill the array for the negative frequencies
    # mask to select only the negative frequencies
    mask_neg = (freq &lt; 0)
    # apply the mask to the array of frequencies
    freq_neg = freq[mask_neg]
    # define the correction term for the 1/f noise for negative frequencies
    PS_corr[mask_neg] = (f_knee / np.abs(freq_neg)) ** alpha

    ############################# 5) Fill the array for the frequencies equal to zero
    # mask to select frequencies equal to zero
    mask_zero = (freq == 0)
    # define the correction term for the 1/f noise for frequencies equal to zero
    PS_corr[mask_zero] = (f_knee / np.min(freq[freq &gt; 0])) ** alpha

    ############################# 6) Power spectrum of the one-over-f noise signal
    # multiply the white noise level with the correction term
    PS_oof = PS_white_noise * PS_corr

    ############################# 7) 1/f noise signal in time domain
    # FFT of the 1/f noise signal to get the time-domain signal
    oof_fft = White_noise_fft * np.sqrt(PS_corr)
    # then do an inverse FFT of it to obtain it in the time domain
    oof_time = np.fft.ifft(oof_fft).real * f_s

    return PS_oof, oof_time, PS_corr, White_noise_fft, White_noise


def PS_white_noise_fct(freq, f_s, signoise):
    '''
    Generate the theoretical power spectrum of a white noise signal
    - freq: frequency range of the power spectrum
    - f_s: sampling frequency of the theoretical signal
    - signoise: amplitude of the white noise level
    '''
    # number of samples in my theoretical observation
    nsample = len(freq)
    # generate white noise with std signoise
    White_noise_time = np.random.normal(0, signoise, nsample)  # loc, scale, size
    # Compute the one-dimensional discrete Fourier Transform of the white noise, normalised
    White_noise_fft = direct_fft(White_noise_time, f_s)  # direct_fft(f, f_s): np.fft.fft(f) / f_s
    # Power spectrum of the white noise signal
    PS_white_noise = np.real(White_noise_fft * np.conjugate(White_noise_fft))

    return PS_white_noise, White_noise_time


def direct_fft(f, f_s):
    return np.fft.fft(f) / f_s


def log_likelihood(theta, freq, CPSD, CPSD_err):
    '''
    This function computes the log-likelihood for a given set of model parameters θ in a Bayesian
    framework, comparing the observed data (freq, CPSD, CPSD_err) to a model.
    - theta: a tuple containing the model parameters
    - freq: the independent variable, e.g. frequency
    - CPSD: the observed dependent variable: cross power spectral density
    - CPSD_err: the uncertainties (errors) on the observed values of CPSD
    '''
    # extract the model parameters
    alpha_0, freq_O, alpha, CPSD_err = theta
    # Define the model
    model = alpha_0 * (freq_O**2 + freq**2)**(-(alpha + 3/2))
    # Calculate the total variance:
    # CPSD_err**2 is the squared measurement uncertainty;
    # exp(2*log_f) = f**2 would represent the fractional noise (commented out below)
    sigma2 = CPSD_err**2  # + model**2 * np.exp(2 * log_f)
    # Compute and return the log-likelihood,
    # based on the formula for the log of a Gaussian likelihood:
    return -0.5 * np.sum((CPSD - model) ** 2 / sigma2 + np.log(sigma2))


# Step 2: Define the Negative Log-Likelihood
def nll(*args):
    '''
    Define the negative log-likelihood.
    Minimizing the negative log-likelihood is equivalent to maximizing the log-likelihood.
    Purpose: this function will be minimized to estimate the model parameters.
    '''
    return -log_likelihood(*args)


def fit_CPSD_flattened(freq, CPSD, CPSD_err, freq_max, alpha_0, alpha, freq_O):
    '''
    This function fits the power spectrum with a flattened spectrum
    - freq: frequencies, my data for the x axis
    - CPSD: cross power spectral density, my data for the y axis
    - freq_max: frequency up to which I want to fit
    - alpha_0: initial guess for parameter alpha_0
    - alpha: initial guess for parameter alpha
    - freq_O: initial guess for parameter freq_O
    '''
    # find the index of the maximum frequency for the fitting
    # np.where returns the indices where the condition is True
    idx_high = np.where(freq &gt;= freq_max)[0][0]
    print(f'{freq[idx_high]} Hz')
    # select the data range that will be fitted
    CPSD_selected = CPSD[:idx_high]
    freq_selected = freq[:idx_high]

    CPSD_err_scalar = np.std(CPSD_err)

    # initial guess for the parameters
    initial = np.array([alpha_0, freq_O, alpha, CPSD_err_scalar])

    # Define bounds for the parameters
    # Each tuple corresponds to (min, max) for a parameter
    bounds = [
        (1e-11, 1e-4),      # Bounds for alpha_0
        (0.0001, 2.0),      # Bounds for freq_O
        (-6, 0),            # Bounds for alpha
        (-np.inf, np.inf)   # Bounds for CPSD_err
    ]

    # Step 4: Minimize the Negative Log-Likelihood
    soln = minimize(nll, initial, args=(freq_selected, CPSD_selected, CPSD_err_scalar),
                    method='L-BFGS-B', bounds=bounds)

    # Step 5: Extract the Best-Fit Parameters
    alpha_0_ml, freq_O_ml, alpha_ml, CPSD_err_ml = soln.x

    return alpha_0_ml, freq_O_ml, alpha_ml


def flattening_CPSD(freq, alpha_0, freq_O, alpha):
    '''
    Function I want to fit to my data
    - freq: x axis
    - alpha_0, freq_O, alpha: function parameters
    '''
    return alpha_0 * (freq_O**2 + freq**2)**(-(alpha + 3/2))


############################################################## This part generates synthetic data
# sampling frequency
f_sampling_MFI = 1/0.04  # 25 Hz
nsamp = 4000
# standard deviation for the repetition of the 1/f noise
signoise = 0.01
# standard deviation for the repetition of the white noise signal 1
signoise01 = 0.3
# standard deviation for the repetition of the white noise signal 2
signoise02 = 0.3
alpha = 2.66
f_knee = 5.0
f_s = f_sampling_MFI
# define the frequencies for this sample size and spacing
freq = np.fft.fftfreq(nsamp, d=1/f_sampling_MFI)

# theoretical atmospheric 1/f noise
oof_ps, time_oof, ps_corr, white_noise_fft, white_noise = PS_oof_fct02(freq, f_sampling_MFI, signoise, alpha, f_knee)
# theoretical instrumental noise, horn 2
PS_white_noise01, time_white_noise01 = PS_white_noise_fct(freq, f_sampling_MFI, signoise01)

# Power spectrum of horn 2: theoretical white noise + atmospheric 1/f noise level
PS_h2_theo = oof_ps + PS_white_noise01

# Shift the zero-frequency component to the center of the spectrum.
freq_shf = np.fft.fftshift(freq)
PS_h2_theo_shf = np.fft.fftshift(PS_h2_theo)

plt.figure()
# PS horn 2 theoretical
plt.plot(freq_shf, PS_h2_theo_shf)
plt.loglog()
plt.xlabel('Freq [Hz]')
plt.ylabel('Power')

############################################################## This part tries to fit the data
# std of the CPSD
CPSD_err_scalar = np.std(PS_h2_theo)

# initial guesses for the parameters
alpha_0 = 5e-10
freq_O = 0.006
alpha = -0.30
freq_max = 1  # Hz

# In this specific example:
alpha_0_ml, freq_O_ml, alpha_ml = fit_CPSD_flattened(freq_shf, PS_h2_theo_shf, CPSD_err_scalar,
                                                     freq_max, alpha_0, alpha, freq_O)
print(f'alpha_0_ml = {alpha_0_ml}')
print(f'freq_O_ml = {freq_O_ml}')
print(f'alpha_ml = {alpha_ml}')

# parameter to print in the figure
slope = alpha_ml + 3/2

plt.figure()
plt.plot(freq_shf, PS_h2_theo_shf)
# fit
plt.plot(freq_shf, flattening_CPSD(freq_shf, alpha_0_ml, freq_O_ml, alpha_ml), color='lime',
         label=f'Fit: 1/k_out = {np.round(freq_O_ml, 4)}Hz, slope = -({np.round(slope*2, 3)})')
plt.loglog()
plt.legend()
</code></pre>
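<p>For completeness, a quick convergence check that could go right after the <code>minimize</code> call inside <code>fit_CPSD_flattened</code> (a sketch only, reusing the same names as above), to confirm the optimizer actually converged before the returned parameters are trusted:</p>
<pre><code># Sketch of a convergence check after the L-BFGS-B fit (same names as above).
# soln is the scipy.optimize.OptimizeResult returned by minimize().
if not soln.success:
    print('Fit did not converge:', soln.message)
print('Best-fit negative log-likelihood:', soln.fun)
print('Best-fit parameters:', soln.x)
</code></pre>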
<python><scipy><scipy-optimize-minimize>
2025-03-03 15:49:33
0
338
Apinorr
79,481,290
1,750,498
Why did my mock test work with just a method instead of a full class?
<p>I always thought that you could mock a method on a class only when the class it comes from is not instantiated in the code under test. And if it is, you have to mock the class itself (just as you have to do for builtins).</p>
<p>But today I made a &quot;mistake&quot; and saw that it just worked. So now I'm confused about when to mock a full class and when to mock a method on a class.</p>
<p>Let's assume I have the following 3 example files:</p>
<p><code>Engine.py</code>:</p>
<pre><code>class Engine(object):
    def __init__(self, start_temperature):
        self._started = False
        self._temperature = start_temperature

    def start(self):
        self._started = True
        self._set_temperature(10)

    def stop(self):
        self._started = False
        self._set_temperature(-10)

    def get_temperature(self):
        return self._temperature

    def _set_temperature(self, value):
        self._temperature += value
</code></pre>
<p><code>MyClass.py</code>:</p>
<pre><code>from Engine import Engine

class MyClass(object):
    def __init__(self):
        self.engine = Engine(0)

    def check_start(self):
        self.engine.start()

    def check_stop(self):
        self.engine2 = Engine(1)
        self.engine2.stop()

    def get_temperature(self):
        return self.engine.get_temperature()
</code></pre>
<p>and finally the actual test, <code>test_MyClass.py</code>:</p>
<pre><code>import unittest
import unittest.mock as mock

import MyClass

class TestMyClass(unittest.TestCase):
    def test__initiation_returns_0_temperature(self):
        c = MyClass.MyClass()
        temperature = c.get_temperature()
        self.assertEqual(temperature, 0)

    @mock.patch(&quot;MyClass.Engine&quot;)
    def test_engine_get_temperature_mocked_via_class(self, mock_Engine_cls):
        mock_Engine_cls.return_value.get_temperature.return_value = 11
        c = MyClass.MyClass()
        c.check_start()
        temperature = c.get_temperature()
        self.assertEqual(temperature, 11)

    @mock.patch(&quot;MyClass.Engine.get_temperature&quot;)
    def test_engine_get_temperature_mocked_via_method(self, get_temperature_mock):
        get_temperature_mock.return_value = 12
        c = MyClass.MyClass()
        c.check_stop()
        temperature = c.get_temperature()
        self.assertEqual(temperature, 12)

if __name__ == &quot;__main__&quot;:
    unittest.main()
</code></pre>
<p>I would have expected the 3rd test to error, since in the past I got errors for instantiated classes when I did not mock the full class as done in the 2nd test case, but now it passes.</p>
<p>Why is that?</p>
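<p>In case it helps, here is a small standalone check one could run to see what the method patch actually replaces (a sketch only, reusing <code>Engine.py</code> and <code>MyClass.py</code> from above; it is not part of the original test file):</p>
<pre><code># Sketch: inspect what @mock.patch(&quot;MyClass.Engine.get_temperature&quot;) replaces.
import unittest.mock as mock
import MyClass

with mock.patch(&quot;MyClass.Engine.get_temperature&quot;, return_value=12):
    c = MyClass.MyClass()
    print(type(MyClass.Engine.get_temperature))  # the attribute on the Engine class is now a MagicMock
    print(c.get_temperature())                   # 12 - the instance finds the patched attribute on the class
</code></pre>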
<python><mocking><python-unittest.mock>
2025-03-03 14:54:47
1
5,562
Nemelis
79,481,283
3,365,532
Polars lazy dataframe custom function over rows
<p>I am trying to run a custom function on a lazy dataframe on a row-by-row basis. The function itself does not matter, so I'm using softmax as a stand-in. All that matters about it is that it is not computable via pl expressions.</p>
<p>I get about this far:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np

def softmax(t):
    a = np.exp(np.array(t))
    return tuple(a/np.sum(a))

ldf = pl.DataFrame({
    'id': [1,2,3],
    'a': [0.2,0.1,0.3],
    'b': [0.4,0.1,0.3],
    'c': [0.4,0.8,0.4]}).lazy()

cols = ['a','b','c']
redict = { f'column_{i}':c for i,c in enumerate(cols) }

ldf.select(cols).map_batches(lambda bdf: bdf.map_rows(softmax).rename(redict)).collect()
</code></pre>
<p>However, if I want to get a resulting lazy df that contains columns other than cols (such as id), I get stuck, because</p>
<pre class="lang-py prettyprint-override"><code>ldf.with_columns(pl.col(cols).map_batches(lambda bdf: bdf.map_rows(softmax).rename(redict))).collect()
</code></pre>
<p>no longer works, because <code>pl.col(cols).map_batches</code> is applied column by column...</p>
<p>This does not seem like it would be an uncommon use case, so I'm wondering if I'm missing something.</p>
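<p>For reference, the eager fallback I can make work (but which defeats the point of staying lazy) looks roughly like this; it is only a sketch using the same names as above:</p>
<pre class="lang-py prettyprint-override"><code># Sketch of the eager fallback (same ldf, cols, redict and softmax as above).
# This materialises the frame, so it loses the benefit of the lazy pipeline.
df = ldf.collect()
softmaxed = df.select(cols).map_rows(softmax).rename(redict)
result = pl.concat([df.select('id'), softmaxed], how='horizontal')
print(result)
</code></pre>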
<python><dataframe><python-polars><polars>
2025-03-03 14:52:24
2
443
velochy