QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
79,177,901
| 12,493,545
|
Why does my GitHub linting action fail when using Python 3.12.3 with an Astroid building error?
|
<p>After changing from Python 3.10.0 to 3.12.3 our workflow fails with:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.12.3/x64/bin/pylint", line 8, in <module>
sys.exit(run_pylint())
^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/site-packages/pylint/__init__.py", line 25, in run_pylint
PylintRun(argv or sys.argv[1:])
File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/site-packages/pylint/lint/run.py", line 207, in __init__
linter.check(args)
File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/site-packages/pylint/lint/pylinter.py", line 650, in check
check_parallel(
File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/site-packages/pylint/lint/parallel.py", line 152, in check_parallel
for (
File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/multiprocessing/pool.py", line 873, in next
raise value
astroid.exceptions.AstroidBuildingError: Building error when trying to create ast representation of module 'program_name.core.startup_rest'
Error: Process completed with exit code 1.
</code></pre>
<h2>Workflow</h2>
<pre class="lang-yaml prettyprint-override"><code>name: linting
on: [push]
jobs:
linting-job:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python 3.12.3
uses: actions/setup-python@v4
with:
python-version: '3.12.3'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
- name: ansible_lint
run: ansible-lint resources/playbook/roles/program_name/tasks/main.yaml
- name: pylint_lint
run: pylint program_name
</code></pre>
<p>in the pylint_lint step. I am unsure what causes this. One source suggested to add <code>__init__.py</code> to all folders which I have now added, but this didn't fix the issue.</p>
<p>The pylint version set by the requirements-dev.txt is <code>pylint==2.14.5</code></p>
<h2>Additional Information</h2>
<p>In the past we used Python 3.10.0 for our linting, which worked fine, but that caused issues when we switched to Python 3.12.03 in our working environment. Python 3.12.03 apparently knows more pylint errors. Disabling some of those at certain places (aka <code>pylint: disable=...</code>) raises an error on workflow execution with Python 3.10.0, because it doesn't know them.</p>
|
<python><python-3.x><github-actions><pylint>
|
2024-11-11 13:45:41
| 3
| 1,133
|
Natan
|
79,177,845
| 8,792,159
|
matplotlib.patches.Rectangle produces rectangles with unequal size of linewidth
|
<p>I am using matplotlib to plot the columns of a matrix as separate rectangles using <code>matplotlib.patches.Rectangle</code>. Somehow, all the "inner" lines are wider than the "outer" lines? Does somebody know what's going on here? Is this related to this <a href="https://github.com/matplotlib/matplotlib/issues/12335/" rel="nofollow noreferrer">Github issue</a>?</p>
<p>Here's an MRE:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.patches as patches
# set seed
np.random.seed(42)
# define number of cols and rows
num_rows = 5
num_cols = 5
# define gap size between matrix columns
column_gap = 0.3
# define linewidth
linewidth = 5
# Determine the width and height of each square cell
cell_size = 1 # Set the side length for each square cell
# Initialize the matrix
matrix = np.random.rand(num_rows, num_cols)
# Create the plot
fig, ax = plt.subplots(figsize=(8,6))
# Create a seaborn color palette (RdYlBu) and reverse it
palette = sns.color_palette("RdYlBu", as_cmap=True).reversed()
# Plot each cell individually with column gaps
for i in range(num_rows):
for j in range(num_cols):
# Compute the color for the cell
color = palette(matrix[i, j])
if column_gap > 0:
edgecolor = 'black'
else:
edgecolor = None
# Add a rectangle patch with gaps only in the x-direction
rect = patches.Rectangle(
(j * (cell_size + column_gap), i * cell_size), # x position with gap applied to columns only
cell_size, # width of each cell
cell_size, # height of each cell
facecolor=color,
edgecolor=edgecolor,
linewidth=linewidth
)
ax.add_patch(rect)
if column_gap > 0:
# Remove the default grid lines and ticks
ax.spines[:].set_visible(False)
# Set axis limits to fit all cells
ax.set_xlim(0, num_cols * (cell_size + column_gap) - column_gap)
ax.set_ylim(0, num_rows * cell_size)
# Disable x and y ticks
ax.set_xticks([])
ax.set_yticks([])
fig.show()
</code></pre>
<p>which produces:</p>
<p><a href="https://i.sstatic.net/jt3QtpEFm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jt3QtpEFm.png" alt="rectangles with unequal linewidths" /></a></p>
|
<python><matplotlib><rectangles>
|
2024-11-11 13:30:32
| 1
| 1,317
|
Johannes Wiesner
|
79,177,839
| 20,176,161
|
Removing a large number of IDs from a large dataframe takes a long time
|
<p>I have two dataframes <code>df1</code> and <code>df2</code></p>
<pre><code>print(df1.shape)
(1042009, 40)
print(df1.columns)
Index(['date_acte', 'transaction_id', 'amount', ...],
dtype='object')
print(df2.shape)
(734738, 37)
print(df2.columns)
Index(['date', 'transaction_id', 'amount', ...],
dtype='object')
</code></pre>
<p>I would like to remove the unique <code>transaction_id</code> in <code>df2</code> from <code>df1</code> and keep the rest.</p>
<p>I did the following:</p>
<pre><code>Filtre = list(df2.transaction_id.unique())
print(len(Filtre))
733465
noMatched = df1.loc[
(~df1['transaction_id'].str.contains('|'.join(Filtre), case=False, na=False))]
</code></pre>
<p>My problem is that the output <code>noMatched</code> takes almost 5 hours to get ready. I wonder if there is a more efficient way to write this piece of code. Can the output be generated in less than 5 hours?</p>
|
<python><pandas><dataframe><contains>
|
2024-11-11 13:28:00
| 2
| 419
|
bravopapa
|
79,177,835
| 538,256
|
python pipes deprecated, how to fix
|
<p>I wrote some years ago an iTunes-replacing program in Python, and recently I started to get a warning <code>DeprecationWarning: 'pipes' is deprecated and slated for removal in Python 3.13</code>.
This is due to the fact that I play my mp3's by mpg123 using the pipes library, e.g. snippets like this scattered here and there:</p>
<pre><code>t = pipes.Template()
t.append("echo jump +10s", '--')
f = t.open(mpgpipe, 'w')
f.close()
</code></pre>
<p>What can I do to replace it?</p>
|
<python><python-3.x><pipe><mpg123>
|
2024-11-11 13:26:56
| 2
| 4,004
|
alessandro
|
79,177,781
| 13,562,186
|
For beginners: Module not Found but Requirement already satisfied example using PyPDF2
|
<p>This post has two parts:</p>
<ol>
<li><p>To help beginners starting up with creating virtual environments and installing packages for Python in Visual studio Code (I am on MACOS however should be able to follow on windows)</p>
</li>
<li><p>A question of how to Resolve when a module is missing despite the requirement already being satisfied (i.e. already installed)</p>
</li>
</ol>
<p><strong>VIRTUAL ENVIRONMENTS AND PACKAGE INSTILLATION VIA PIP3</strong></p>
<p><strong>Creating New Virtual Environment (.VIRTUAL_ENV)</strong></p>
<ol>
<li>change directory to location you want to create virtual environment (cd)</li>
<li><code>python3 - m venv .VIRTUAL_ENV</code> (or whatever name you wish to call it)</li>
</ol>
<p><a href="https://i.sstatic.net/zOyFu8q5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOyFu8q5.png" alt="creating Virtual Environemnt" /></a></p>
<p><strong>Activating Virtual Environment:</strong></p>
<ol>
<li><p>Copy Path of virtual environment
<a href="https://i.sstatic.net/AJcBsbA8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJcBsbA8.png" alt="Copy Path of virtual environment" /></a></p>
</li>
<li><p>Activate it (source /bin/activate.)
<a href="https://i.sstatic.net/TaIz2sJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TaIz2sJj.png" alt="Activate it" /></a></p>
</li>
</ol>
<p><strong>Check packages in Virtual Environment (Empty as nothing installed)</strong>
<a href="https://i.sstatic.net/bZBBQXtU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZBBQXtU.png" alt="Check packages in Virtual Environment (Empty as nothing installed)" /></a></p>
<p><strong>Selecting Interpreter</strong>
<a href="https://i.sstatic.net/6HGfivVB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HGfivVB.png" alt="Selecting Interpreter" /></a></p>
<p><strong>Print hello to make sure all is working okay.</strong></p>
<p><a href="https://i.sstatic.net/XWxj23Lc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWxj23Lc.png" alt="Print hello to make sure all is working okay" /></a></p>
<p><strong>Demoing Expected Behaviour using numpy</strong></p>
<ol>
<li>numpy not yet installed.</li>
</ol>
<p><a href="https://i.sstatic.net/M6uSfVGp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6uSfVGp.png" alt="demoing numpy not yet installed" /></a></p>
<ol start="2">
<li><strong><code>pip3 install numpy</code></strong></li>
</ol>
<p><a href="https://i.sstatic.net/JpzdTLs2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpzdTLs2.png" alt="pip3 install numpy" /></a></p>
<p>And we can see the numpy package(s) are added to the library</p>
<p>same again with <strong><code>pip3 install matplotlib</code></strong>.</p>
<p><a href="https://i.sstatic.net/IYx8tM4W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYx8tM4W.png" alt="pip3 install matplotlib" /></a></p>
<p>Alot more packages are being added this time but successfully installs without issue.</p>
<p><strong>Checking both installed okay:</strong></p>
<p><a href="https://i.sstatic.net/51IL8xQH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51IL8xQH.png" alt="enter image description here" /></a></p>
<p>Now for PyPDF2:</p>
<p>Before installing:</p>
<p><a href="https://i.sstatic.net/tCQPBwFy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCQPBwFy.png" alt="Before installing PyPDF2" /></a></p>
<p><strong><code>pip3 install PyPDF2</code></strong></p>
<p><a href="https://i.sstatic.net/kEqQKVNb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEqQKVNb.png" alt="pip3 install PyPDF2" /></a></p>
<p>Package appears to install correctly, yellow squiggle disappears all seems okay.</p>
<p>But when I run the script now:</p>
<p><a href="https://i.sstatic.net/GPvAqSZQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPvAqSZQ.png" alt="Module Not found error" /></a></p>
<p>and again just to confirm that is the case:</p>
<p><a href="https://i.sstatic.net/Fy3zKUFV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fy3zKUFV.png" alt="Requirement already satisfied" /></a></p>
<pre><code>Requirement already satisfied: PyPDF2 in ./.VIRTUAL_ENV/lib/python3.13/site-packages
</code></pre>
<p>final checks:</p>
<p><strong>pip3 show PyPDF2</strong></p>
<p><a href="https://i.sstatic.net/9PuuzMKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9PuuzMKN.png" alt="enter image description here" /></a></p>
<p><strong>pip3 list</strong></p>
<p><a href="https://i.sstatic.net/2f6l8u4M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2f6l8u4M.png" alt="enter image description here" /></a></p>
<p>If anyone can assist on why this is the case it would be appreciated.</p>
<p>I have looked at the following post however the suggestions here either dont work or are for windows:</p>
<p><a href="https://stackoverflow.com/questions/39241643/no-module-named-pypdf2-error">"no module named PyPDF2" error</a></p>
|
<python><visual-studio-code><pypdf>
|
2024-11-11 13:12:19
| 2
| 927
|
Nick
|
79,177,774
| 14,080,363
|
How to get the chunk index with Split Skill in azure AI search?
|
<p>I am new to Azure AI search, I want to get an attribute chunk index from this skillset to know at which index in the document the chunk is located.
the content of pages after he split would looks like this</p>
<pre><code>{'values': [{'recordId': '0', 'data': {'text': 'sample data 1 '}}, {'recordId': '1', 'data': {'text': 'sample data 1'}}, {'recordId': '2', 'data': {'text': 'sample data 3'}}
</code></pre>
<p>How to copy the recordId value as a field.</p>
<pre><code>{
"name": "testing-phase-1-docs-skillset",
"description": "Skillset to chunk documents and generate embeddings",
"skills": [
{
"@odata.type": "#Microsoft.Skills.Text.SplitSkill",
"name": "#3",
"description": "Split skill to chunk documents",
"context": "/document",
"inputs": [
{
"name": "text",
"source": "/document/content",
"inputs": []
}
],
"outputs": [
{
"name": "textItems",
"targetName": "pages"
}
],
"defaultLanguageCode": "en",
"textSplitMode": "pages",
"maximumPageLength": 2000,
"pageOverlapLength": 500,
"unit": "characters"
}
],
"@odata.etag": "\"0x8DD029DA50735BD\"",
"indexProjections": {
"selectors": [
{
"targetIndexName": "testing-phase-1-docs-index",
"parentKeyFieldName": "parent_id",
"sourceContext": "/document/pages/*",
"mappings": [
{
"name": "content",
"source": "/document/pages/*"
}, // want to add a recordId here
{
"name": "metadata_title",
"source": "/document/metadata_title"
}
]
}
],
"parameters": {
"projectionMode": "skipIndexingParentDocuments"
}
}
}
</code></pre>
|
<python><azure><azure-ai-search>
|
2024-11-11 13:10:05
| 1
| 337
|
Yafaa
|
79,177,394
| 5,440,712
|
Evaluate expression inside custom class in polars
|
<p>I am trying to extend the functionality of <code>polars</code> to manipulate categories of Enum. I am following <a href="https://stuffbyyuki.com/how-to-add-custom-functionality-in-polars/" rel="nofollow noreferrer">this</a> guide and <a href="https://docs.pola.rs/api/python/stable/reference/api.html" rel="nofollow noreferrer">this</a> section of documentation</p>
<pre class="lang-py prettyprint-override"><code>orig_df = pl.DataFrame({
'idx': pl.int_range(5, eager=True),
'orig_series': pl.Series(['Alpha', 'Omega', 'Alpha', 'Beta', 'Gamma'],
dtype=pl.Enum(['Alpha', 'Beta', 'Gamma', 'Omega']))})
@pl.api.register_expr_namespace('fct')
class CustomEnumMethodsCollection:
def __init__(self, expr: pl.Expr):
self._expr = expr
def rev(self) -> pl.Expr:
cats = self._expr.cat.get_categories()
tmp_sr = self._expr.cast(pl.Categorical)
return tmp_sr.cast(dtype=pl.Enum(cats.str.reverse()))
(orig_df
.with_columns(rev_series=pl.col("orig_series").fct.rev())
)
</code></pre>
<p>This errors with <code>TypeError: Series constructor called with unsupported type 'Expr' for the </code>values<code> parameter</code> because <code>cats</code> is an unevaluated expression, not a list or a series, as <code>pl.Enum(dtype=)</code> expects it. How do I evaluate the <code>cats</code> into the actual list/series to provide the new categories for my <code>cast(pl.Enum)</code> method?</p>
|
<python><dataframe><python-polars>
|
2024-11-11 11:01:05
| 2
| 3,105
|
dmi3kno
|
79,177,393
| 17,718,870
|
Problems with the reuse of a develeped library
|
<p>I have been developing a library for quite some time. Its structure is as follows :</p>
<pre><code>pgk1 # central library
... # sub-packages and modules
requirements.txt # relevant dependancies
</code></pre>
<p>This library is only for internal purposes and can't be loaded to PyPI, also it can't be installed in the central site-packages of my interpreter.</p>
<p>So my idea was to set a central place for such libraries under <strong>PYTHONPATH</strong>, so that it would be possible for me to make easily an import as <strong>from pkg1 import XY</strong> for every new project, which I'm starting.</p>
<p>The problem is that the <em>pkg1</em> has its <strong>requirements.txt</strong> for the <em>venv</em>, which are unknown / not installed in the current project(current <em>venv</em>) and ImportError will be raised on runtime.</p>
<p>For sure I could copy the <em>requirements.txt</em> from the <em>pkg1</em> into the new projects requirements.txt, but this would be tedious.</p>
<p>I'll be very thankful for every tip, which will help me to re-develop an easier workflow with such a dependancies.</p>
|
<python><python-import>
|
2024-11-11 11:00:56
| 1
| 869
|
baskettaz
|
79,177,323
| 12,415,855
|
"Cannot set a DataFrame with multiple columns to the single column ..."
|
<p>I have the following dataframe:</p>
<pre><code>Price Adj Close Close High Low Open Volume ema_10 ema_20 ema_40 ema_50 sma_5 sma_10 n_high n_low
Ticker AAPL AAPL AAPL AAPL AAPL AAPL
Date
2023-11-13 00:00:00+00:00 183.899063 184.800003 186.029999 184.210007 185.820007 43627500 184.800003 184.800003 184.800003 184.800003 NaN NaN NaN NaN
2023-11-14 00:00:00+00:00 186.526199 187.440002 188.110001 186.300003 187.699997 60108400 185.280003 185.051432 184.928784 184.903532 NaN NaN NaN NaN
2023-11-15 00:00:00+00:00 187.093430 188.009995 189.500000 187.779999 187.850006 53790500 185.776365 185.333199 185.079086 185.025354 NaN NaN NaN NaN
2023-11-16 00:00:00+00:00 188.785126 189.710007 190.960007 188.649994 189.570007 54412900 186.491573 185.750038 185.304985 185.209066 NaN NaN NaN NaN
2023-11-17 00:00:00+00:00 188.765244 189.690002 190.380005 188.570007 190.250000 50922700 187.073105 186.125273 185.518888 185.384789 187.930002 NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2024-11-04 00:00:00+00:00 221.766006 222.009995 222.789993 219.710007 220.990005 44944500 228.087639 228.953961 227.718509 226.698000 226.920001 229.660001 236.850006 219.710007
2024-11-05 00:00:00+00:00 223.204422 223.449997 223.949997 221.139999 221.800003 28111300 227.244431 228.429774 227.510289 226.570628 224.876001 228.419000 236.850006 219.710007
2024-11-06 00:00:00+00:00 222.475235 222.720001 226.070007 221.190002 222.610001 54561100 226.421808 227.885986 227.276616 226.419623 223.400000 227.615001 236.850006 219.710007
2024-11-06 00:00:00+00:00 222.475235 222.720001 226.070007 221.190002 222.610001 54561100 226.421808 227.885986 227.276616 226.419623 223.400000 227.615001 236.850006 219.710007
2024-11-06 00:00:00+00:00 222.475235 222.720001 226.070007 221.190002 222.610001 54561100 226.421808 227.885986 227.276616 226.419623 223.400000 227.615001 236.850006 219.710007
2024-11-06 00:00:00+00:00 222.475235 222.720001 226.070007 221.190002 222.610001 54561100 226.421808 227.885986 227.276616 226.419623 223.400000 227.615001 236.850006 219.710007
2024-11-07 00:00:00+00:00 227.229996 227.479996 227.880005 224.570007 224.630005 42137700 226.614206 227.847321 227.286537 226.461206 223.713998 227.306000 236.850006 219.710007
</code></pre>
<p>And I'm trying to create a new column using the following statement:</p>
<pre><code>dfDaily['%K'] = (dfDaily['Close'] - dfDaily['n_low']) * 100 / (dfDaily['n_high'] - dfDaily['n_low'])
</code></pre>
<p>But I get the following error message:</p>
<pre><code>Traceback (most recent call last):
File "C:\DEV\Fiverr2024\ORDER\VanaromHuot\stockMA6.py", line 107, in <module>
dfDaily['%K'] = (dfDaily['Close'] - dfDaily['n_low']) * 100 / (dfDaily['n_high'] - dfDaily['n_low'])
~~~~~~~^^^^^^
File "C:\DEV\.venv\yfinance\Lib\site-packages\pandas\core\frame.py", line 4301, in __setitem__
self._set_item_frame_value(key, value)
File "C:\DEV\.venv\yfinance\Lib\site-packages\pandas\core\frame.py", line 4459, in _set_item_frame_value
raise ValueError(
ValueError: Cannot set a DataFrame with multiple columns to the single column %K
</code></pre>
|
<python><pandas>
|
2024-11-11 10:40:36
| 1
| 1,515
|
Rapid1898
|
79,177,302
| 6,930,340
|
Polars read_excel incorrectly adds suffix to column names
|
<p>I am using polars v1.12.0 to read data from an Excel sheet.</p>
<pre><code>pl.read_excel(
"test.xlsx",
sheet_name="test",
has_header=True,
columns=list(range(30, 49))
)
</code></pre>
<p>The requested columns are being imported correctly. However, polars adds a suffix <code>_1</code> to every column name. There's one column header where a <code>_3</code> has been added.</p>
<p>In the requested columns, all column headers are unique, i.e. no duplicates. However, columns before this import area do have the same values. For example, the header that has been suffixed <code>_3</code> does occur two times before my import area.</p>
<p>It looks like polars is scanning all column headers from column "A" starting, no matter if I start to read from column "AE".</p>
<p>I am wondering what is going on? Is this a bug or did I make a mistake?</p>
|
<python><python-polars>
|
2024-11-11 10:34:04
| 1
| 5,167
|
Andi
|
79,177,247
| 1,236,117
|
TqdmCallback writes loss as loss=tf.Tensor(, shape=(), dtype=float32)
|
<p>I have written a VAE model in Keras following this <a href="https://keras.io/examples/generative/vae/" rel="nofollow noreferrer">example</a>.
The code is working as expected, however, the loss is being printed as
<code>loss=tf.Tensor(, shape=(), dtype=float32)</code>. As seen in the picture.</p>
<p><a href="https://i.sstatic.net/DdzVpRS4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdzVpRS4.png" alt="enter image description here" /></a></p>
<p>Is that an issue with the custom model or the TqdmCallback?</p>
|
<python><tensorflow><keras><tqdm>
|
2024-11-11 10:19:50
| 0
| 1,132
|
mariolpantunes
|
79,176,959
| 8,452,246
|
How to properly use function with dataframe argument in another ipywidget interact function
|
<pre><code>from ipywidgets import interact
import ipywidgets as widgets
import pandas as pd
</code></pre>
<p>I have a dataframe as below:</p>
<pre><code>df = pd.DataFrame(index = [1,2,3],
data = {'col1':[2,3,5],"col2":[2,5,2], "col3":[2,4,3]})
</code></pre>
<p>In addition I have a function <code>df_plot</code> that draws a line plot. Through a number argument I select which column to be drawn.</p>
<pre><code>def df_plot(df, num):
df.iloc[:,num].plot()
</code></pre>
<p>Trying to create another function <code>f_interact</code> that displays dropdown where I can choose which column to be plotted.
<code>df_plot</code> is used in <code>f_interact</code></p>
<pre><code>def f_interact():
widgets.interact(df_plot, num=[0,1,2])
</code></pre>
<p>I receive the following error</p>
<p><a href="https://i.sstatic.net/82rwO3hT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82rwO3hT.png" alt="enter image description here" /></a></p>
<p>I most probably don't construct the set-up (functions and arguments) correctly.
I went over ipywidgets documentation but cannot find a proper example.
Can someone advise.</p>
|
<python><pandas><ipywidgets>
|
2024-11-11 08:53:05
| 1
| 477
|
Martin Yordanov Georgiev
|
79,176,847
| 4,451,315
|
How can I use "case when" in DuckDB's relational API?
|
<p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>data = {'id': [1, 1, 1, 2, 2, 2],
'd': [1, 2, 3, 1, 2, 3],
'sales': [1, 4, 2, 3, 1, 2]}
</code></pre>
<p>My ultimate goal for now is to be able to translate</p>
<pre><code>import duckdb
import polars as pl
df = pl.DataFrame(data)
duckdb.sql("""
select *, case when count(sales) over w then sum(sales) over w else null end as rolling_sales
from df
window w as (partition by id order by d rows between 1 preceding and current row)
""")
</code></pre>
<p>I've got as far as:</p>
<pre><code>rel = duckdb.table("df")
rel.sum(
"sales",
projected_columns="*",
window_spec="over (partition by id order by d rows between 1 preceding and current row) as rolling_sales",
)
</code></pre>
<p>which I think is a lot more readable than a huge SQL string</p>
<p>But how do I get the <code>case when then</code> part in there? I've looked at <a href="https://duckdb.org/docs/api/python/relational_api.html" rel="nofollow noreferrer">https://duckdb.org/docs/api/python/relational_api.html</a> and there's no mention of "case"</p>
|
<python><duckdb>
|
2024-11-11 08:15:42
| 1
| 11,062
|
ignoring_gravity
|
79,176,470
| 2,717,063
|
How can I ensure that my Python logic runs exclusively on the Apache Ray Worker Nodes?
|
<p>I am using Apache Ray to create a customized cluster for running my logic. However, when I submit my tasks with ray.remote, they are executing on the driver node rather than on the worker nodes I configured during Ray initialization. How can I ensure that my logic runs exclusively on the worker nodes?</p>
<pre><code>import os
os.environ["RAY_PROFILING"] = "1"
os.environ["RAY_task_events_report_interval_ms"] = "0"
import ray
from ray.util.spark import setup_ray_cluster, shutdown_ray_cluster
from ray.util import get_node_ip_address
RAY_LOGS_VOLUME_PATH = "***/temp/apache_ray_logs"
RAY_TIMELINE_VOLUME_PATH = "***/temp/apache_ray_logs/timeline.json"
MAX_WORKER_NODES = 2
try:
shutdown_ray_cluster()
except:
pass
try:
ray.shutdown()
except:
pass
if not os.path.exists(RAY_LOGS_VOLUME_PATH):
os.makedirs(RAY_LOGS_VOLUME_PATH)
_, cluster_address = setup_ray_cluster(
max_worker_nodes = MAX_WORKER_NODES,
memory_worker_node = 28 * 2**30,
num_cpus_per_node = 4,
num_gpus_worker_node = 0,
collect_log_to_path = RAY_LOGS_VOLUME_PATH
)
ray.init(
address = cluster_address,
ignore_reinit_error = True
)
ray.timeline(filename=RAY_TIMELINE_VOLUME_PATH)
@ray.remote
def some_function(input):
print(f"Node used: {get_node_ip_address()}")
# Some Logic
sources = ["input1", "input2"]
results = ray.get([some_function.remote(x) for x in sources])
</code></pre>
|
<python><apache-spark><cluster-computing><azure-databricks><ray>
|
2024-11-11 05:14:19
| 1
| 3,018
|
question.it
|
79,176,467
| 1,230,724
|
Check if file is open for writing by another task
|
<p>Is it possible (in Python under Linux) to determine whether a file is still being written and hasn't been closed yet?</p>
<p>I'm trying to write data to a cache (file) which isn't quite complete yet when other processes are already accessing it. The file/cache then appears corrupted to the processes reading it.</p>
|
<python><linux><file><io>
|
2024-11-11 05:06:34
| 3
| 8,252
|
orange
|
79,176,357
| 3,120,501
|
Constraining out a region in objective function with OpenMDAO ExecComp
|
<p>I'm trying to solve for a flight trajectory around (and not over) a rectangular obstacle in the x-y plane. My first attempt at doing this was to create four linked phases, each one within a constrained region (constraints on the x, y and z coordinates), linked at the end points. However, this seems to struggle to converge, and has some unexpected behaviour (e.g. plotting the state discretisation nodes while solving, the trajectory has some large kinks in it when I'd expect it to be a smooth oval (minimising squared integrated throttle)). Perhaps linking four trajectories in a circle is a particularly demanding set of constrains for the optimiser to satisfy?</p>
<p>The other way I can think to do it is to use a single phase linked to itself and penalise the optimiser if the trajectory falls within the footprint of the building. A rectangle can be constructed as the product of two sums of hyperbolic tangent functions, and this can be added into the objective as a smooth penalty, so that the objective function looks something like this:</p>
<pre><code> # Add the building constraints
h = 10000
k = 10000
x0 = 10
x1 = 50
y0 = 20
y1 = 100
prob.model.add_subsystem('obj', om.ExecComp("v = sum_throttle_effort + h*sum((tanh(k*(x - x0))+1) - (tanh(k*(x - x1))+1))*(tanh(k*(y - y0))+1) - (tanh(k*(y - y1))+1)))",
h={'val': h},
k={'val': k},
x0={'val': x0},
x1={'val': x1},
y0={'val': y0},
y1={'val': y1},
x={'shape_by_conn':True},
y={'shape_by_conn':True}
))
# Link to position states
prob.model.connect('traj.phase0.timeseries.x', 'obj.x')
prob.model.connect('traj.phase0.timeseries.y', 'obj.y')
prob.model.add_objective('obj.v')
</code></pre>
<p>where k is the 'verticalness' of the building sides (the greater the better) and h is a large constant to penalise being within the region of the building. However, I've only ever used this approach before to connect single values with the ExecComp, using src_indices in <code>prob.model.connect</code>. Is it possible for this objective function to be large whenever any of the state discretisation nodes are within the building region? This is what I've tried to accomplish with the sum in the objective function and the 'shape_by_conn' arguments, but these are largely guesses.</p>
<p>Or maybe this is the wrong approach entirely, and there's another reason why the linked phases aren't working?</p>
|
<python><optimization><openmdao>
|
2024-11-11 04:10:48
| 0
| 528
|
LordCat
|
79,176,214
| 19,383,865
|
Answering WhatsApp Messages With Twilio and Flask
|
<p>I have set up a Flask application with Twilio to send response via WhatsApp when someone sends a message to it. I already have a domain name, an EC2 instance on AWS, a public IP bound to that instance, and the Flask application running on port 8080. I am able to retrieve the message only when executed the <code>curl -v <domain_name>:8080/whatsapp</code>. <strong>But when I try sending a message to Twilio's phone number, it does not send a reply back</strong>. Below are the code and the response from the website when accessing it.</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request, redirect
from twilio.twiml.messaging_response import MessagingResponse
app = Flask(__name__)
@app.route("/whatsapp", methods=['GET', 'POST'])
def whatsapp_reply():
response = MessagingResponse()
response.message("Thank you for contacting us via WhatsApp!")
return str(response)
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080)
</code></pre>
<p><code>curl -v <domain_name>:8080/whatsapp</code>:</p>
<pre class="lang-none prettyprint-override"><code>* Trying <EC2_public_ip>:8080...
* TCP_NODELAY set
* Connected to <domain_name> (<EC2_public_ip>) port 8080 (#0)
> GET /whatsapp HTTP/1.1
> Host: <domain_name>:8080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: Werkzeug/3.1.3 Python/3.12.3
< Date: Mon, 11 Nov 2024 02:02:57 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 119
< Connection: close
<
* Closing connection 0
<?xml version="1.0" encoding="UTF-8"?><Response><Message>Thank you for contacting us via WhatsApp!</Message></Response>
</code></pre>
<p>Also, the messages on Programmable Messaging Logs (Monitoring > Logs > Messaging) are with Failed status with error code 30008 (Unknown error).</p>
<p><a href="https://i.sstatic.net/UmxkKmpE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmxkKmpE.png" alt="Programmable Messaging Logs" /></a></p>
<p>I have also set the website on Sandbox Settings (Develop > Messaging > Try it out > Send a WhatsApp message > Sandbox Settings). I tried providing like <code><domain_name>:8080/whatsapp</code> and also <code><EC2_public_ip>:8080/whatsapp</code> but both did not work when I tried sending the message on WhatsApp. It works when I execute the <code>curl</code> command or when I acces the website on the browser.</p>
<h3>update (2024-11-14):</h3>
<p>Below are the Error Logs page. All of them related to error 11200.</p>
<p><a href="https://i.sstatic.net/GMbzE2QE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GMbzE2QE.png" alt="Twilio Error Logs" /></a></p>
<p><a href="https://i.sstatic.net/65UZCEwB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65UZCEwB.png" alt="enter image description here" /></a></p>
|
<python><flask><twilio><whatsapp>
|
2024-11-11 02:19:51
| 0
| 715
|
Matheus Farias
|
79,176,155
| 10,335
|
How can I bump the Python package version using uv?
|
<p>Poetry has the <a href="https://python-poetry.org/docs/cli/#version" rel="noreferrer"><code>version</code></a> command to increment a package version. Does the uv package manager have anything similar?</p>
|
<python><python-packaging><uv>
|
2024-11-11 01:31:29
| 3
| 40,291
|
neves
|
79,176,069
| 11,804,921
|
GCP Cloud Run Container Behavior - ModuleNotFoundError
|
<p>When Cloud Run runs a container image, the container fails differently than when I run it locally.</p>
<p>I added this try/except in <code>app/main.py</code> to debug the divergent behaviors:</p>
<pre><code>print(f'cwd is {os.getcwd()}')
try:
from .make_sticker.config import StickerConfig
print('relative worked')
except:
from make_sticker.config import StickerConfig
print('except worked')
</code></pre>
<p>When I run the container locally, the app logs 'except worked'.
When I run the container in Cloud Run, the app fails out of the except and does not log anything.</p>
<p>This is my Dockerfile:</p>
<pre><code>FROM python:3.12
WORKDIR /code
COPY app/requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ /code/app
ENV PYTHONPATH=/code/app
WORKDIR /code/app
RUN ls -la
EXPOSE 5001
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5001"]
</code></pre>
<p>What is going on here?</p>
|
<python><google-cloud-platform><google-cloud-run><uvicorn><fasthtml>
|
2024-11-11 00:07:23
| 1
| 783
|
harrolee
|
79,176,031
| 1,018,322
|
Issue with django-crispy-forms and django-filter: CSS class not applying to custom ChoiceFilter field
|
<p>I'm using <code>django-filter</code> and <code>django-crispy-forms</code> to create a filter form in Django, but I'm having trouble applying a CSS class to a custom <code>ChoiceFilter</code> field. The CSS class is successfully applied to the date field but does not work for the <code>transaction_type</code> field, which is defined as a <code>ChoiceFilter</code>.</p>
<p>Here’s my current code:</p>
<p>filters.py</p>
<pre class="lang-py prettyprint-override"><code>import django_filters
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Field
from .models import Transaction
class TransactionFilter(django_filters.FilterSet):
transaction_type = django_filters.ChoiceFilter(
choices=Transaction.TRANSACTION_TYPE_CHOICES,
field_name="type",
lookup_expr="iexact",
empty_label="Any",
)
class Meta:
model = Transaction
fields = ['transaction_type', 'date']
def __init__(self, *args, **kwargs):
super(TransactionFilter, self).__init__(*args, **kwargs)
self.form.helper = FormHelper()
self.form.helper.form_method = 'GET'
self.form.helper.layout = Layout(
Field('transaction_type', css_class='MY-CLASS'),
Field('date', css_class='MY-CLASS'),
)
</code></pre>
<p>my view:</p>
<pre><code>@login_required
def transactions_list(request):
transactions_filter = TransactionFilter(
request.GET,
Transaction.objects.filter(user=request.user),
)
context = {'filter': transactions_filter}
return render(request, 'tracker/transactions_list.html', context)
</code></pre>
<p>My template:</p>
<pre><code><form method="GET">
<div class="mb-2 form-control">
{% crispy filter.form %}
</div>
<button class="btn btn-success">
Filter
</button>
</form>
</code></pre>
<p>In this setup, I expected both fields to have the <code>MY-CLASS</code> CSS class, but only the date field reflects it, not transaction_type. I suspect this might be due to transaction_type being a custom <code>ChoiceFilter</code> field, but I'm not sure how to resolve it.</p>
<p>I've tried a few different approaches, such as updating widget attributes and applying CSS directly through attrs, but nothing has worked so far.</p>
<p>Has anyone encountered this issue before or have suggestions on how to force the CSS class to apply to the <code>ChoiceFilter</code> field?</p>
|
<python><django><django-filter><django-crispy-forms>
|
2024-11-10 23:36:58
| 1
| 1,226
|
Slot
|
79,176,006
| 48,956
|
Why are parameterized queries not possible with DO ... END?
|
<p>The following works fine:</p>
<pre class="lang-py prettyprint-override"><code>conn = psycopg.connect(self.conn.params.conn_str)
cur = conn.cursor()
cur.execute("""
SELECT 2, %s;
""", (1,),
)
</code></pre>
<p>But inside a <code>DO</code>:</p>
<pre class="lang-py prettyprint-override"><code>cur.execute("""
DO $$
BEGIN
SELECT 2, %s;
END$$;
""", (1,),
)
</code></pre>
<p>it causes</p>
<pre><code>psycopg.errors.UndefinedParameter: there is no parameter $1
LINE 1: SELECT 2, $1
^
QUERY: SELECT 2, $1
CONTEXT: PL/pgSQL function inline_code_block line 3 at SQL statement
</code></pre>
<p>Is this expected?</p>
|
<python><sql><postgresql><psycopg2>
|
2024-11-10 23:08:31
| 2
| 15,918
|
user48956
|
79,175,776
| 2,750,563
|
Perform operation in-place with xarray
|
<p>I am going crazy over the following issue: I want to map values from an xarray data array. Since I am constrained on memory, I want to do that in-place, but it just won't work! The code works fine when I save everything in a temp array and assign <code>corine_lc.values</code> to that array. But I cannot do that, OOM will occur. Doing it in-place has no effect on the the original data!!</p>
<p>Can anyone point me to what the heck is going on in the code below?</p>
<p>The assertion fails and shows me that the mapping has not taken place. How can I get this to worK? Thank you so much!!</p>
<pre><code>import rioxarray as rxr
import geopandas as gpd
import xarray as xr
import numpy as np
CORINE_NA = -1000
corine_lc = rxr.open_rasterio('other_inputs/u2018_clc2018_v2020_20u1_raster100m/DATA/U2018_CLC2018_V2020_20u1.tif')
vat_df = gpd.read_file('other_inputs/u2018_clc2018_v2020_20u1_raster100m/DATA/U2018_CLC2018_V2020_20u1.tif.vat.dbf')
vat_df['CODE_18'] = vat_df['CODE_18'].astype(np.int16)
value_to_class = dict(zip(vat_df['Value'], vat_df['CODE_18']))
data_shape = corine_lc.shape
xvals = corine_lc.coords['x'].values
yvals = corine_lc.coords['y'].values
chunk_size = 10000
# Iterate in chunks along the first axis, which is the y axis!
for start_idx in range(0, data_shape[1], chunk_size):
end_idx = min(start_idx + chunk_size, data_shape[1])
chunk = corine_lc.isel(y=slice(start_idx, end_idx)).values[0, :, :]
# Map values using the dictionary
flat_chunk = chunk.flatten()
mapped_flat_chunk = np.fromiter((value_to_class.get(v, CORINE_NA) for v in flat_chunk), dtype=np.int16)
mapped_chunk = mapped_flat_chunk.reshape(chunk.shape)
# Assign mapped chunk back to dataset
corine_lc.loc[dict(x=slice(xvals[0], xvals[-1]), y=slice(yvals[start_idx], yvals[end_idx-1]))] = mapped_chunk
# Verify that forest code is present
assert (corine_lc > 311).any(), "The values seem not to be in the expected range!"
return corine_lc
</code></pre>
|
<python><geopandas><python-xarray><in-place>
|
2024-11-10 20:45:47
| 0
| 1,115
|
konse
|
79,175,600
| 11,809,811
|
is it possible to multiply a raylib vector2 with int?
|
<p>I want to do something like:</p>
<pre><code>from pyray import Vector2
v2 = Vector2(1,2)
print(v2 * 10)
</code></pre>
<p>But I get:</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for *:
'_cffi_backend.__CDataOwn' and 'int'</p>
</blockquote>
<p>Same result if I multiple the vector with a float.</p>
<p>Is there a way to do this operation in raylib? I know I can do</p>
<pre><code>v2 = Vector2(1,2)
print(v2.x * 10)
print(v2.y * 10)
</code></pre>
<p>but it's a bit annoying to write 2 lines instead of 1.</p>
|
<python><vector><typeerror><raylib>
|
2024-11-10 19:37:42
| 1
| 830
|
Another_coder
|
79,175,533
| 4,451,315
|
Rolling sum using DuckDB's Python relational API
|
<p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>data = {'id': [1, 1, 1, 2, 2, 2],
'd': [1, 2, 3, 1, 2, 3],
'sales': [1, 4, 2, 3, 1, 2]}
</code></pre>
<p>I want to compute a rolling sum with window of 2 partitioned by 'id' ordered by 'd'</p>
<p>Using SQL I can do:</p>
<pre class="lang-py prettyprint-override"><code>duckdb.sql("""
select *, sum(sales) over w as rolling_sales
from df
window w as (partition by id order by d rows between 1 preceding and current row)
""")
Out[21]:
┌───────┬───────┬───────┬───────────────┐
│ id │ d │ sales │ rolling_sales │
│ int64 │ int64 │ int64 │ int128 │
├───────┼───────┼───────┼───────────────┤
│ 1 │ 1 │ 1 │ 1 │
│ 1 │ 2 │ 4 │ 5 │
│ 1 │ 3 │ 2 │ 6 │
│ 2 │ 1 │ 3 │ 3 │
│ 2 │ 2 │ 1 │ 4 │
│ 2 │ 3 │ 2 │ 3 │
└───────┴───────┴───────┴───────────────┘
</code></pre>
<p>This works great, but how can I do it using the Python Relational API?</p>
<p>I've got as far as</p>
<pre class="lang-py prettyprint-override"><code>rel = duckdb.sql('select * from df')
rel.sum(
'sales',
projected_columns='*',
window_spec='over (partition by id order by d rows between 1 preceding and current row)'
)
</code></pre>
<p>which gives</p>
<pre><code>┌───────────────────────────────────────────────────────────────────────────────────────┐
│ sum(sales) OVER (PARTITION BY id ORDER BY d ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) │
│ int128 │
├───────────────────────────────────────────────────────────────────────────────────────┤
│ 3 │
│ 4 │
│ 3 │
│ 1 │
│ 5 │
│ 6 │
└───────────────────────────────────────────────────────────────────────────────────────┘
</code></pre>
<p>This is close, but it's not quite right - how do I get the name of the last column to be <code>rolling_sales</code>?</p>
|
<python><duckdb>
|
2024-11-10 18:55:19
| 1
| 11,062
|
ignoring_gravity
|
79,175,528
| 913,098
|
URLDAY API Not working according to published docs
|
<p>I am trying to use <a href="https://www.urlday.com/developers/links" rel="nofollow noreferrer">URLDAY's API</a> to do anything, but any call returns an error, weather it is from Python code, or from CURL.</p>
<p>Example:</p>
<pre><code>curl --location \
--request POST 'https://www.urlday.com/api/v1/links' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Bearer MYKEY' \
--data-urlencode 'url=www.mysite.com'
</code></pre>
<p>And getting</p>
<pre><code>
<!DOCTYPE html><html lang="en-US"><head><title>Just a moment...</title><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=Edge"><meta name="robots" content="noindex,nofollow"><meta name="viewport" content="width=device-width,initial-scale=1"><style>*{box-sizing:border-box;margin:0;padding:0}html{line-height:1.15;-webkit-text-size-adjust:100%;color:#313131;font-family:system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji}body{display:flex;flex-direction:column;height:100vh;min-height:100vh}.main-content{margin:8rem auto;max-width:60rem;padding-left:1.5rem}@media (width <= 720px){.main-content{margin-top:4rem}}.h2{font-size:1.5rem;font-weight:500;line-height:2.25rem}@media (width <= 720px){.h2{font-size:1.25rem;line-height:1.5rem}}#challenge-error-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iI0IyMEYwMyIgZD0iTTE2IDNhMTMgMTMgMCAxIDAgMTMgMTNBMTMuMDE1IDEzLjAxNSAwIDAgMCAxNiAzbTAgMjRhMTEgMTEgMCAxIDEgMTEtMTEgMTEuMDEgMTEuMDEgMCAwIDEtMTEgMTEiLz48cGF0aCBmaWxsPSIjQjIwRjAzIiBkPSJNMTcuMDM4IDE4LjYxNUgxNC44N0wxNC41NjMgOS41aDIuNzgzem0tMS4wODQgMS40MjdxLjY2IDAgMS4wNTcuMzg4LjQwNy4zODkuNDA3Ljk5NCAwIC41OTYtLjQwNy45ODQtLjM5Ny4zOS0xLjA1Ny4zODktLjY1IDAtMS4wNTYtLjM4OS0uMzk4LS4zODktLjM5OC0uOTg0IDAtLjU5Ny4zOTgtLjk4NS40MDYtLjM5NyAxLjA1Ni0uMzk3Ii8+PC9zdmc+);background-repeat:no-repeat;background-size:contain;padding-left:34px}@media (prefers-color-scheme:dark){body{background-color:#222;color:#d9d9d9}}</style><meta http-equiv="refresh" content="390"></head><body class="no-js"><div class="main-wrapper" role="main"><div class="main-content"><noscript><div class="h2"><span id="challenge-error-text">Enable JavaScript and cookies to continue</span></div></noscript></div></div><script>(function(){window._cf_chl_opt={cvId: '3',cZone: "www.urlday.com",cType: 'managed',cRay: '8e08331dcca21acd',cH: 'HyW8MnqMqm6YWgrOmNPmohplrASP4fwSME_wvXU3lJU-1731264327-1.2.1.1-GiHFy7gK1YR1LgIRs9XsNTuIyhzpPt2ymtCIA81sVVAvQHst77aZKRq7k_TF6t8w',cUPMDTk: "\/api\/v1\/links?__cf_chl_tk=uY1iQn0S0a87ut2HpiKE.FZ4k_gLv8HwOxLHCt__Knw-1731264327-1.0.1.1-sg9FwzkoXlF75yOeYU7xDtEpVSKZkkHFqhu1y5V0Q1E",cFPWv: 'b',cITimeS: '1731264327',cTTimeMs: '1000',cMTimeMs: '390000',cTplC: 0,cTplV: 5,cTplB: 'cf',cK: "",fa: "\/api\/v1\/links?__cf_chl_f_tk=uY1iQn0S0a87ut2HpiKE.FZ4k_gLv8HwOxLHCt__Knw-1731264327-1.0.1.1-sg9FwzkoXlF75yOeYU7xDtEpVSKZkkHFqhu1y5V0Q1E",md: "KKYy.WDOpN3Nbh2hchr.5_65FIeP2MXY1SwLe7D0i9g-1731264327-1.2.1.1-8rDviJPWKQZ5GHZ2CJEs76rsOIWvKcleM6GXN_czTDntvOpBG15TYHn.r1BlH6CpuH.qNNZZLHgxeq19wVsLNnhQEPGMDA6f_zAhPQYB6_SElooCBnQzh1JK5SbQJWkTBzUwyaiDwdmCwi7dvUF35SXcwViWI1kxb.YIeY07hhNv7UA9MW3V81KcqjXl__pFkCjsXpuswYWqbtCTY10v4eelrErdByJfcNRhWK0u4dL4rPGVFPwllV3mnh9urfMUNcg5R_7Btw5spRXkT7GVcSUxZguJpfCawNnZY.8HZbBU5Lr7drk1LdItmWHWvijeWRwXF_3R7TySwGiFwu7yuVgijvk.qUnT2mP6EakO03TL7ytUiTgFWbpHd.VL1ziFqu5vDEDdhSzMuPhRNUOmDn1v7Lb145D6UoF0XY85jSHxEvFOtTiMNnVdQsUvXOx9heoZij9AoaVVmqXQGchwI5YsEXu9WxKBgpmr_7GP7ouA61aO2B5tsFnFFeinNUkZOsctRvw7wN_HFlWRCXvX3H4wvD_Z0aqakydWZxhq5OkJj0y78x_J3p4_YRnwO4PUGQ0vdvYuoK3Dq8nzZHmx9RnOX1FkA.Eg3HA5RZ.ijv3bNRIJOPDXO3irXk.9V5FGpM2FK4xb0FzOFJuzPffZQGhAHSNRxixMffItxaT2Yt.RKSxAbLVAy7AiKGTiJPWaO7xAWoy9mphqAjO6iA5X.jPIkvwYxmEQqpDlgjO62HMKzvbXc.kJ5WZtMcCrZJ4r7SYThwm5kxzT0RH.N6LTN8eXxbtE6zkkXWMDoRZ5yew69LX4_c9RVcx74VTrcLBERIGQRgcrjaYX7OpWcw58fdRkKnUtj2MhdK3WQ8sX_.YZyLBAVUOYDzvYF0CJY2LIOvlP7MZLYkr1s92svbTrehOmoGz8g_e4.QfHUUnZKycK8lKKXOSkWhqAFg6i9mmNWSpoqI6yMMRYSUKx0JmrqSFfOK8MA9no5FJcQC_mF3QnCnHR3mllBmpWGJgPpcexbTfBFi1FSj6G08BUtgR.px6Elmn_9EzhgvRk8NPTUILAh7iFqmQUpcPNpx3HuY53p8Ny1atXd6EAHJvH14HHew0W70IN0297X_r1c7AZXpV019eLLjez6694UKhhfp_7b0UA3sMeBdF8sXnx01ixU4b8XPcbk9aF468HWWE6OXP0QrewxztouEzJIV1Nzlqv0Zz61Ooyc3lmW9bSeMqQteq8F211Fczcrx8NTBB4xflEqRzuvPsGahxwop6nkIHkMQpk.o.d6kKLzb_JimsNpYHDpICedYFPaOW2lComFHBlSUeolgEwKc4.XjXfHS7afjE84Jm0PvOOmAg.HOhC33XbUxml8Ta4rH4FjWzrslao6ZRoELPvk9BPXXdn7DT4KbunWH12WLwDQhRq.GQcQDck38IETL.8bwMzcHUZyuTUTJP.8E65jljcmc__FO1tOEhe1ZaUN3BpcLtWJo4FiXuq6oZEXgu3rX1E.9msY.nLTgKGj_wHcMZsxPjwqOPKV90zcb35FH54dYyaNCND9US9wMWLOsgW8jnQmJwONJAciSrrSyk9EwL3GKi5XJvhjfeqTfvpnluimLMOg.dVmbdEWF0KxdF5Pz5PniQdarayu_mzr0n.HZjfhEqwvkAuj_7sjnVJilNYSq4cg2S2QP0eTxuUh_xN718WVpfYicJMWyYqZPRTBGRmP_x.oXmqxcCeU5gRLiix.puYsHR74N36kWQ72ddVBv0T9aK6UO2VhX55E3btcQV1TYvZ2HS3uBJ55vn_s7isLRemAbs3oeVoDl5tEIBfISv6EAp0Yw5vdNbVoa9lBsrbuyagIOFyhxJ5PTubzo6GarQlZh6Tus42pF4Vp_G_u5L5IRcKeWnt4XhfKBsUwu_dW855QTyCr5dZkJvKFiBKnf.ebph1tBjOThnuCHcM3Fy7SgMwTp8nTiXWOmX8gMPDxkMSwt7RT8oMZJZmtWfkSyCJpl2K2mItIGq8H5cVK6El0Uekv1ketAGWTdto3YEDnOeWCy3jo8howaOM3Ig2cp4AeNwT3MtzFa4rEa_5nk5Qs7I__ek",mdrd: "s.YaFQ_AbmGHujee.CTyXKeuFoYXBRWMds.XzBkGH7U-1731264327-1.2.1.1-_zIQ.47khGTYN6ojbTccAEvixHeqKpN4TUrnkydBF9uZwm_FDJCcXdKqOypPGTFOCFe.29l5jeXIuOdwOhxt6sS4gTYxBxkRzKYJG7gbq0raaEOZmZokW5v7qi1_q84FpSx4UZcEBNRzmRxEyDbEYylSBIuoBJH2aPD3kDtAtNfLyfdo1sXkiWBU78o3hV.hyieGbpnq5e7JbXINH_swuy1hKGFF6jNe6V2.m38otesxpl6quOlDDPHSfd4oPZDv7P_6OjwYYBRNdtgjrjbRUrNamDOUR_rU5.WTnd8Mo_Z2v16o9ovFA7vIpxnXi91lhGUKK6J8xFUhYjafbGhm1SQOOLciN6h3uJJyPzu_UFOd5u7o7izKbA3T0XaZyCpC1XgMQcb5WATLv_EQ5D6xSwdKu3ust9gMgosB_x0LH1iYFEsY58H5rUD5MuA3nmDbrGy5XvQC0KnH2EwloETFbtIG7XgjHAxUgLXeEqT6.IZZpjTH9cxf6snajfllx8qvcuVCgFSmoEw1F9SUzZZSwewTKlI6mcatPI57.TAtkI1Zk8CqJoIpZvMguMvopnAISYrBfwrFmuWFg60XYCyVQSUpyEvNOvkpj2YZCsETMzq8hKh0eFEtONXFxsLYRzuwYsIq4uSV59nCXL7Djay6uIcqWFuu_BhtpK6.RF3HKjAsK0JorIAY5cdIAatdaVL2dWBgg49cqRRrv6QC.lnf7sF3pWSnswbwXYYJqR4dEshOIVZbMVMgrp469R57kdCetOrW9rvg6WadAx.Jrb1diuiDfOiAo8I8krZfT2fGWamdV2Ddq_q_oy60_jjFLJZvf8_OZ340.TcoM107p8w7di349sUCzyxEVcn1WjlpzkbNILTaqXPUHQFaNxj6J27B0OaIbA42U7.X9273_C1i8IbDO7tgvbz8z7VQKxHMArOmBgmq0CjFmXiQUQ0GaOhXjwkKaboiR23x.l3J_CLIM7K25Ispeb7HBeNVBNXGxp1j6PwkMn9csstdEqbk0XtvIC_C6jL8.IoPjabtri53eeVK7A00SPAbmg1WHm1DJF_MRv8rnNWtXneNp0MWkoepbdL.h7xoejwfrazgIKvUtYHlSORSdzcaIW5HobDnrW7lIFMNyPbikUuVhJJlgvXifrBr_VIXTWLfw5OLtLw94Cd6LRoFDTpYIKlCTME0C_A_AmQzbKynvKhkEEFueed9P2xUvFCxKsgXHAA9b9sO5nxCYjRjOBy1ak7gZ74PDt4XAJY9Nxj8aDjiDUxCtbdUHiVUwMRAaVuiAcbRbQt5GH.PA3NzSQZLcmYZT8tMi21VmK.WEqYBg8XeVpHtnD.ykaGvHst67uoIa4O.bU5gHzTsbQeEE0TWpoyjm1cK25WuMyiYw8WPRqOE3o7mHcpzjJyT8J76C3SbPsJZ9bklsMqr.PZmfcjScqFlhNIoHUvPGkbEKrFF0YJi7kOTxfeyZr_iOR2icr5hiIvXku0Js0s_Q9RVBe.X.b1wA_vBFj0KEYLkSaMnNRX6l9DsopYxaSS1JnaPciMZU2Yz9uFbOU1iAl7_enz2__Cwzul881wK2z.FeYVFJhl.f9i3H7zgsgIto6iPnFmZSGiZcWgbvldHrQrJJs5SvKoxCutTTbqPpw2Ms5dP0pSOaNTBIMoYcq_DIJPNY8vLZN02Ah7rcuXUHSP_J9kl1Iee.7RrK0VgZ5wX_s0uyMKdCF.ejsbyKU4aND875csYp2wYpKBW.8B23hmVrqyCIDDIUhVOKBEw1U9htRr3rOm3MRD8q1BXAwUBe0twlsTqvUtleRsQ8ykZytJf2QHXi5sUDshYqdd6Ufgx_kqJqDgat1nqaJFo_8D.25GkNqAaQX1MJH_2e3UGAScF0ClqBAkqMx5i0Thxc_ecPUe8jkDpX0vRoH7He2Q9AERVyJI3kQZ4wm4U2hMpmY5ApuyaEw5jSo7AWmPkahlwGZ2YNGZAjXuOiEVDKWLbHM4hN6Jjjk_FdlNAFEQfmP4k0tdV._Z_Wb8Vq9mJyh3XeovjycTa7Nyluy9EinmswwoTATlJ6DqrR5Y8oldJRYm2OovG22dR9f2R.utud8fLOxMiPoAAJbuMkhR_3vtXHidyBoeuA0VQ1Oy9yl2GelO0jiGYlRD17HwehhzeoVRGV6wkzdmX7w4XoC2kncvoBeTV2jno9Erd0JQEDK77lpsMjVH.CBkC92dznKAMQXHQVdl6Avu1fUSNb6C0CWGyCdvMMB1lSu2q.KhCAoqndPEtjsseTTLr4n47nLTdckWHjU7FcO0oQXRLj5n71Igq_NFwoJajXchLwP27Ao7hFuJw05QKSpL1KajV0KUkN8d072ALiql5JpeCyyIuYISy8TEIi_sjtC784cDkY3P4EM._sE3DGWQiIobSJGx9nq1.aWnAzZrbLNYlR4YM9XhkoW2B6tPbCfVyvo4m0d3.dg1BYgBjOTyV5UP1lF0"};var cpo = document.createElement('script');cpo.src = '/cdn-cgi/challenge-platform/h/b/orchestrate/chl_page/v1?ray=8e08331dcca21acd';window._cf_chl_opt.cOgUHash = location.hash === '' && location.href.indexOf('#') !== -1 ? '#' : location.hash;window._cf_chl_opt.cOgUQuery = location.search === '' && location.href.slice(0, location.href.length - window._cf_chl_opt.cOgUHash.length).indexOf('?') !== -1 ? '?' : location.search;if (window.history && window.history.replaceState) {var ogU = location.pathname + window._cf_chl_opt.cOgUQuery + window._cf_chl_opt.cOgUHash;history.replaceState(null, null, "\/api\/v1\/links?__cf_chl_rt_tk=uY1iQn0S0a87ut2HpiKE.FZ4k_gLv8HwOxLHCt__Knw-1731264327-1.0.1.1-sg9FwzkoXlF75yOeYU7xDtEpVSKZkkHFqhu1y5V0Q1E" + window._cf_chl_opt.cOgUHash);cpo.onload = function() {history.replaceState(null, null, ogU);}}document.getElementsByTagName('head')[0].appendChild(cpo);}());</script></body></html>
</code></pre>
<p>Trying with Python:</p>
<pre><code>url = "https://www.urlday.com/api/v1/links"
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"Authorization": "Bearer eVnWvTBAD0Aai8w8YaAoopAgc3RZmRVEtmedYKNDZCIMg5Yv1Rbh7jTE77xf"
}
data = {
"url": "www.mysite.com"
}
response = requests.post(url, headers=headers, data=data)
# Print the response
print("Status Code:", response.status_code)
print("Response JSON:", response.json())
</code></pre>
<p>Getting</p>
<pre><code>Exception has occurred: JSONDecodeError
Expecting value: line 1 column 1 (char 0)
StopIteration: 0
During handling of the above exception, another exception occurred:
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
File "/home/noams/src/personal_website/backend/services/UrlShortener.py", line 55, in <module>
print("Response JSON:", response.json())
^^^^^^^^^^^^^^^
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>I tried a bunch of useless workarounds not even worthy of mentioning.</p>
<hr />
<p>I am clearly missing something. What's the correct way to access their API?</p>
|
<python><curl><python-requests>
|
2024-11-10 18:52:57
| 1
| 28,697
|
Gulzar
|
79,175,403
| 6,630,397
|
Unable to access ifcopenshell sub-modules from any parent level import
|
<p>Using the <a href="https://pypi.org/project/ifcopenshell/" rel="nofollow noreferrer"><code>ifcopenshell</code></a> (<code>0.8.0</code> at the time of writing) Python 3.11 package available from pypi: (source code on <a href="https://github.com/IfcOpenShell/IfcOpenShell/tree/v0.8.0/src/ifcopenshell-python/ifcopenshell" rel="nofollow noreferrer">github</a>), I'm not able to use it the way I'm used to use almost every other Python packages:</p>
<pre class="lang-py prettyprint-override"><code>import ifcopenshell
ifcopenshell.api.project.create_file()
</code></pre>
<p>I'm facing this error:</p>
<pre class="lang-py prettyprint-override"><code>AttributeError: module 'ifcopenshell' has no attribute 'api'
</code></pre>
<p>Same goes with whatever "sub-folder" inside this package, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>import ifcopenshell.api as api
api.project.create_file()
AttributeError: module 'ifcopenshell.api' has no attribute 'project'
</code></pre>
<p>I'm really used to having access of all sub-stuff from a top-level import in almost all Python packages I'm using. Why this behavior in this particular package? How is this called (if it has a name)? And how would you fix it in the source code?</p>
<p>Currently, it's not a big issue as I can use it this way:</p>
<pre class="lang-py prettyprint-override"><code>import ifcopenshell.api.project
ifcopenshell.api.project.create_file()
<ifcopenshell.file.file at 0x7da934029490>
</code></pre>
<p>but the code is getting long and tedious to write, and I'm forced to manually specify each and every sub-package/sub-sub-package/... imports.</p>
|
<python><python-import><python-module><ifc-open-shell>
|
2024-11-10 17:41:01
| 0
| 8,371
|
swiss_knight
|
79,175,141
| 5,562,431
|
How to draw scale-independent horizontal bars with tips in matplotlib?
|
<p>I want to create a plot that shows genomic coding regions as arrows that may contain colorfully highlighted domain regions.
In principle it is something like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.patches as patches
import matplotlib.pyplot as plt
def test(bar_height=0.8, figsize=(10, 6), arrow_headlen=0.2, dpi=600):
X0 = np.arange(0, 10, 1)
X1 = X0 + 2
Y = np.arange(0, 10, 1)
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
data_2_px = ax.transData.transform # data domain to figure
px_2_data = ax.transData.inverted().transform # figure to data domain
# get arrow head_length as fraction of arrow width
# so that it doesnt grow longer with longer x-axis
dy = bar_height * arrow_headlen
dpx = data_2_px([(0, dy)]) - data_2_px([(0, 0)])
arrowlen = (px_2_data([(dpx[0, 1], dpx[0, 0])]) - px_2_data([(0, 0)]))[0, 0]
ax.barh(y=Y, left=X0, width=X1 - X0, height=bar_height, color="0.5")
for y, x1 in zip(Y, X1):
yl = y - 0.49 * bar_height # low arrow corner (avoid being draw 1 px too low)
yh = y + 0.49 * bar_height # high arrow corner (avoid being draw 1 px too high)
arrow = patches.Polygon([(x1, yl), (x1, yh), (x1 + arrowlen, y)], color="0.5")
ax.add_patch(arrow)
# highlight parts of arrows
ax.barh(y=Y, left=X0 + 0.5, width=(X1 - X0) / 2, height=bar_height, color="blue")
fig.savefig("./test_from_savefig.png", dpi=dpi)
plt.show()
</code></pre>
<p>This draws 10 transcript "arrows" in gray and each of these transcripts contains a region highlighted in blue.
When <code>plt.show()</code> opens this in a viewer and I save it from there I get this image (A):</p>
<p><a href="https://i.sstatic.net/eAI77xQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAI77xQv.png" alt="from viewer" /></a></p>
<p>The picture that is saved by <code>fig.savefig()</code> with higher DPI however gives this image (B):</p>
<p><a href="https://i.sstatic.net/DkyJYW4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DkyJYW4E.png" alt="from savefig" /></a></p>
<p>As you can see the arrow heads are suddenly not flush with the arrow base anymore.
It seems that they were scaled differently than the bars.
But both of them are defined in the data domain.
So shouldn't they still be flush.</p>
<p>Image A is what I want to create:</p>
<ul>
<li>gray arrows of a defined width whose head width is the same as the arrow base width</li>
<li>arrow heads that do not grow longer or shorter depending on the x-axis</li>
<li>possibility to highlight a part of the arrow base</li>
</ul>
<p>However, I also want to be able to save this plot as raster graphic in a higher resolution.</p>
<h4>Why don't I use <code>FancyArrow</code>?</h4>
<p><code>FancyArrow</code> would be a more straight-forward way of defining arrows.
However, they are not drawn in a very reproducible way.
Sometimes a <code>FancyArrow</code> is drawn 1 pixel higher or lower.
This means If I draw a gray <code>FancyArrow</code> and then a blue rectangle over it, there will sometimes be some visible misalignment (e.g. a 1 pixel gray line visible behind the blue area).
I have found that only <code>barh</code> is able to draw a bar of different colors that actually looks like it belongs together.</p>
|
<python><matplotlib>
|
2024-11-10 15:09:56
| 1
| 894
|
mRcSchwering
|
79,175,107
| 11,071,831
|
How to remove multiple items from a dict of lists while keeping the lists in sync in Python
|
<p>My question is not a duplicate of <a href="https://stackoverflow.com/questions/21345474/remove-elements-from-several-lists-simultaneously">Remove elements from several lists simultaneously</a> because the top solution of the linked question uses a <code>for</code> loop to delete items while iterating over it which I have already mentioned in my question that it doesn't work and is already documented behaviour. The accepted answer in the linked question uses numpy which is an additional package that shouldn't be needed in order to perform basic task such as rearranging data structures like lists</p>
<hr />
<p>I have a dict of lists that I maintain which in essence functions like a spreadsheet row. Every item on the same index of the lists belongs to the same record. The use of this is to keep track of open transactions and every once in a while, remove the items which are closed.</p>
<p>This is the initial state.</p>
<pre><code>closed_pos = {
"TransactionID": [],
"Item":[],
"BuyPrice":[],
"SellPrice":[]
}
live_pos = {
"TransactionID": [1, 2, 3, 4, 5],
"Status": ["Open", "Closed", "Open", "Closed", "Closed"],
"Item": ["ABC", "DEF", "GHI", "JKL", "MNO"],
"BuyPrice": [5, 10, 15, 20, 5],
"SellPrice": [None, 12, None, 25, 7],
}
</code></pre>
<p>Now I want to move the transactions which are closed periodically. The reason for doing this is simple. The live_pos is accessed more frequently and is therefore helpful if I can lighten the load on it.
The problem is I can't reliably remove items from a list that I am iterating over which is a well documented behaviour.</p>
<p>One solution that could work is that I iterate over the list and keep a separate list of all indexes to remove. Reverse the list <code>indexes_to_remove</code> and then use append-pop to get the desired behaviour without messing with placements of items while I move the <code>closed</code> transactions</p>
<pre><code>indexes_to_remove = []
for i in range(len(live_pos["TransactionID"])):
#Make a list of all indexes of lists to remove
if live_pos["Status"][i] == "Closed":
indexes_to_remove.append(i)
indexes_to_remove.reverse()
#Reverse the list so that you can use .pop() without problems.
for index in indexes_to_remove:
#remove items by index and feed them in closed_pos
closed_pos["TransactionID"].append(live_pos["TransactionID"].pop(index))
live_pos["Status"].pop(index) # No need to append `Status` to closed_pos. It is always going to be `Closed`
closed_pos["Item"].append(live_pos["Item"].pop(index))
closed_pos["BuyPrice"].append(live_pos["BuyPrice"].pop(index))
closed_pos["SellPrice"].append(live_pos["SellPrice"].pop(index))
print(live_pos)
print(closed_pos)
</code></pre>
<p>This works, but the resulting <code>closed_pos</code> is also reversed. Instead of result being in order of <code>2, 4, 5</code>, it is going to be in <code>5, 4, 2</code>.</p>
<pre><code>result_live_pos = {
"TransactionID": [1, 3],
"Status": ["Open", "Open"],
"Item": ["ABC", "GHI"],
"BuyPrice": [5, 15],
"SellPrice": [None, None],
}
result_closed_pos = {
"TransactionID": [5, 4, 2],
"Item": ["MNO", "JKL", "DEF"],
"BuyPrice": [5, 20, 10],
"SellPrice": [7, 25, 12],
}
</code></pre>
<p>This is the result I am expecting</p>
<pre><code>closed_pos = {
"TradeID": [2, 4, 5],
"Item": ["DEF", "JKL", "MNO"],
"BuyPrice": [10, 20, 5],
"SellPrice": [12, 25, 7],
}
live_pos = {
"TradeID": [1, 3],
"Status": ["Open", "Open"],
"Item": ["ABC", "GHI"],
"BuyPrice": [5, 15],
"SellPrice": [None, None],
}
</code></pre>
|
<python>
|
2024-11-10 14:51:40
| 3
| 440
|
Charizard_knows_to_code
|
79,175,006
| 11,159,734
|
How to properly configure pytest to perform a set of actions before and after all my tests
|
<p>I want to properly test my FastAPI application. The app uses a local postgres db with an async connection and alembic to do migrations which works fine.</p>
<p>Now I want to properly unit test my application with a real postgres test db. So basically I want to achieve the following:</p>
<ol>
<li>Connect to my postgres db</li>
<li>Create a new database called <code>test_db</code></li>
<li>Run alembic migrations to get the same schema and initial data as my real db</li>
<li><em>Run all unit tests now</em></li>
<li>Run alembic downgrade base to drop all tables and delete the data</li>
<li>Delete the database <code>test_db</code></li>
</ol>
<p>The problem is that when I run <code>pytest</code> to execute the unit tests my test will run but no db is being created. So effectively only step 4 runs. But nothing else.</p>
<p>I have the following structure in my <code>tests</code> dir:</p>
<pre><code>.
└── tests/
├── __init__.py
├── conftest.py
├── test_app.py
└── test_user.py
</code></pre>
<p>I created a <code>conftest.py</code> that includes the code for step 1-3 and 5-6:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
import asyncpg
from sqlmodel import SQLModel, create_engine
from sqlmodel.ext.asyncio.session import AsyncSession
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
# from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.exc import ProgrammingError
from alembic.config import Config
from alembic import command
from config import config
TEST_DATABASE_URI = config.SQLALCHEMY_DATABASE_URI_UNIT_TEST
# 1. Create the test database
async def create_test_db():
print("Creating test database")
conn = await asyncpg.connect(
user=config.DB_USER, password=config.DB_PASSWD, database=config.DB_NAME, host=config.DB_HOST
)
try:
await conn.execute(f"CREATE DATABASE {config.DB_NAME}_test;")
print("Database created successfully")
except asyncpg.exceptions.DuplicateDatabaseError as e:
print(e)
finally:
await conn.close()
# 2. Drop the test database
async def drop_test_db():
conn = await asyncpg.connect(
user=config.DB_USER, password=config.DB_PASSWD, database=config.DB_NAME, host=config.DB_HOST
)
try:
await conn.execute(f"DROP DATABASE IF EXISTS {config.DB_NAME}_test;")
finally:
await conn.close()
# 3. Run Alembic migrations
def run_migrations(db_uri, direction="upgrade", revision="head"):
alembic_cfg = Config("alembic.ini")
alembic_cfg.set_main_option("sqlalchemy.url", db_uri)
if direction == "upgrade":
command.upgrade(alembic_cfg, revision)
elif direction == "downgrade":
command.downgrade(alembic_cfg, revision)
# 4. Fixture to manage the test database lifecycle
@pytest.fixture(scope="session", autouse=True)
async def setup_test_db():
print("Hello world")
# Create test database
await drop_test_db()
await create_test_db()
# Run migrations
run_migrations(TEST_DATABASE_URI, "upgrade", "head")
yield # All tests execute here
# Clean up: Downgrade and drop test database
run_migrations(TEST_DATABASE_URI, "downgrade", "base")
drop_test_db()
# 5. Fixture for async test engine
@pytest.fixture(scope="function")
async def async_test_engine():
engine = create_async_engine(url=TEST_DATABASE_URI, echo=False)
yield engine
await engine.dispose()
</code></pre>
<p>My <code>test_app.py</code> looks like this:</p>
<pre><code>import pytest
from fastapi.testclient import TestClient
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import sessionmaker
from httpx import AsyncClient
from httpx._transports.asgi import ASGITransport
from app import app
from config import config
@pytest.mark.usefixtures("setup_test_db") # Ensure setup_test_db fixture is used
@pytest.mark.asyncio
async def test_root_route():
# Use ASGITransport explicitly
transport = ASGITransport(app=app)
async with AsyncClient(transport=transport, base_url="http://test") as client:
# Perform GET request
response = await client.get("/")
assert response.status_code == 200
assert response.json() == {"message": config.WELCOME_MESSAGE}
</code></pre>
<p>The idea would be that when a pytest session starts (running the unit tests) it should first setup my test_db, then run all the unit test using that test_db and delete the test_db again. However the function <code>setup_test_db()</code> is never triggered. The "Hello World" statement is never printed in the terminal and the db did not perform any read/write operations during that time.</p>
<p>How can I set this up correctly so that:</p>
<ul>
<li>The test_db is setup once at the beginning of the session</li>
<li>The routes called within the unit test actually use the test_db session instead of the real prod session</li>
</ul>
<p>Or is there a better or simpler way to achieve this than what I am planning to do right now?</p>
|
<python><pytest><fastapi>
|
2024-11-10 13:39:54
| 1
| 1,025
|
Daniel
|
79,174,901
| 1,424,462
|
How to get coverage reporting when testing an Ansible lookup plugin
|
<p>I am developing an Ansible lookup plugin and test code for it. Both the lookup plugin and the test code for the plugin work fine.</p>
<p>I am using pytest with pytest-ansible for that test, and the test function uses the ansible_module fixture provided by pytest-ansible to invoke the builtin set_fact module to invoke the lookup. The test function looks like this (simplified):</p>
<pre><code>import pytest
from pytest_ansible.fixtures import ansible_module
@pytest.mark.ansible(host_pattern='localhost', connection='local')
def test_mylookup(ansible_module):
expr = "lookup('myorg.mycoll.mylookup', 'someterm')"
contacted = ansible_module.set_fact(result=f"{{{{ {expr} }}}}")
contacted_host = contacted['localhost']
result = contacted_host['ansible_facts']['result']
# assertions on result
</code></pre>
<p>Update: I found that importing ansible_module is not necessary for pytest to find the fixture.</p>
<p>The problem is that this test does not add to the test coverage (0%). Other Python modules in the project do add to the test coverage, just the tests for Ansible plugins do not.</p>
<p>I suppose this is because the ansible_module object at some point uses Ansible mechanisms to execute the module, and that probably lets my lookup code run in a different thread or process. But this is speculation.</p>
<p>I am using the pytest-cov and coverage packages to have pytest count the coverage, and run the test by invoking pytest with --cov* options.</p>
<p>Relevant Python package versions (on Python 3.9):</p>
<pre><code>ansible 8.7.0
ansible-core 2.15.13
pytest 7.4.4
pytest-ansible 4.1.1
pytest-cov 6.0.0
coverage 7.6.4
</code></pre>
<p>I would like to know how I can get the test coverage properly accounted for when testing an Ansible lookup plugin with pytest.</p>
|
<python><ansible>
|
2024-11-10 12:33:20
| 0
| 3,182
|
Andreas Maier
|
79,174,831
| 2,572,526
|
Libreoffice basic: how to pass CellRange variables to python scripts
|
<p>I'm working on my first python script for Libre office calc.</p>
<p>Following various guides I installed APSO and successfully created a Basic wrapper that calls the python script.<br />
This is its signature:<br />
<code>Function python(functionName As String, ParamArray params) As Variant</code><br />
Where:<br />
<code>functionName</code> is the name of the python method to call.<br />
<code>params</code> are the parameters to pass.<br />
(NOTE: As the python method varies basing on <code>functionName</code> the <code>params</code> array has not a predefined size and elements types)</p>
<p>This works quite well except if a parameter is a cells range. In this case the cells range is passed to python as a tuple, missing the information about the rows and columns references.</p>
<p>Why is basic converting the cells range into a tuple instead of a CellRange object?<br />
Is there a way to detect the types of the <code>params</code> elements so that I can manage the cells ranges differently? (I tried to use <code>TypeName(params(0))</code> but it seems to work only with base types like "Strings", if the param is a cells range it returns <code>Variant()</code></p>
<h4>EDIT 1</h4>
<p>A bit more details as my question was not clear enough<br />
Here it is the full wrapper that I wrote in basic:</p>
<pre><code>REM ***** BASIC *****
option compatible
Function python(functionName As String, ParamArray params) As Variant
Dim scriptPro As Object, myScript As Object
scriptPro = ThisComponent.getScriptProvider()
scriptPath = "vnd.sun.star.script:development.py$" & functionName & "?language=Python&location=user"
On Error GoTo ErrorHandler
myScript = scriptPro.getScript(scriptPath)
Dim outResult As Variant
outResult = myScript.invoke(params, Array(), Array())
python = outResult(0)
Exit Function
ErrorHandler:
python = CVErr(xlErrName)
End Function
</code></pre>
<p>I execute the wrapper from the calc formula bar, for example:</p>
<p><code> =python("custom_concatenate", "Hello ", "world") </code><br />
Calls the python method custom_concatenate passing it the 2 strings "Hello " and "world" and shows in the cell "Hello world"</p>
<p>or</p>
<p><code> =python("sum_all", A1:A10) </code><br />
Calls the python method sum_all passing the <strong>TUPLE OF VALUES</strong> contained in the cells from A1 to A10 and shows the sum in the cell.</p>
<p>Till here it works perfectly.</p>
<p>Now, imagine that I want to create a method that takes a range and returns the sum of its first row number and its last row number.<br />
For example<br />
<code>=python("sum_row_numbers", B13:C24)</code><br />
the method should do the sum of the first row number, 13, and the last one 24, and return 37</p>
<p>This is the signature of the python method:<br />
<code>def sum_row_numbers(range):</code><br />
The problem is that the <code>range</code> parameter passed by the wrapper is just a tuple of tuples, and any reference to the rows and columns is unknown at this point.</p>
<p>How can I make sure that the wrapper pass all the information about the range (included the cells references) to the python method?</p>
|
<python><libreoffice-calc><libreoffice-basic>
|
2024-11-10 11:49:21
| 1
| 1,289
|
user2572526
|
79,174,765
| 6,930,340
|
Sending a polars dataframe via email
|
<p>I am looking for a way to send a <code>pl.DataFrame</code> via email. The email should practically show the result of <code>print(df)</code>. What would be best practice here?</p>
<p>I have already tried to convert the DataFrame to HTML using <code>great_tables.as_raw_html()</code>, but when sending it using Python's <code>smtplib</code> library I get an error that looks something like this:</p>
<p><code>An error occurred while sending the trade signals email: 'ascii' codec can't encode characters in position 49-123: ordinal not in range(128)</code></p>
|
<python><python-polars><smtplib>
|
2024-11-10 11:23:15
| 0
| 5,167
|
Andi
|
79,174,728
| 7,959,614
|
Decode a hex string
|
<p>I have the following binary data what was saved as a hex</p>
<pre><code>hex_str = '+vcGsHsAIgBpAGQAIgA6ACIAMgAwADQANAA4ADgAOQAwADcAIgAsACIAdAB5AHAAZQ0mGGQAYQB0AGENHChwAGEAeQBsAG8AYQ1IAVQZJAA6BRCYcwB1AGIAcwBjAHIAaQBiAGUAVABvAFUAZgBjAEYAaQBnAGgAdABTBVggdAB1AHMAZQBzBXAAWwVAAGYVJABJDWYBoAg1ADgFoBkeGEMAYQByAGQyJgAAMgUkAHMdXAnuIFIAbwB1AG4AZAWCIG4AaQBzAGgAZQVgECwAIgByBYQQdQBsAHQFMgBuBQwQbAB9ACxqlAAIMgA5BT7WlAAhDIaKACFiKHcAaQBuAG4AZQByBZ4gIgBCAEwAVQBFLZggbQBlAHQAaABvBdAh3gBTJYQAbQXoGHMAaQBvAG5KLAAARAU4EGEAaQBsLYIBbgBlJSgBcghnAFQlZABnBShGLgAxJiFCLjYACFAAbwV2AHQVegGUKGYAcgBvAG0AIABtLY4hbD48AAhUAGkF0Cm4EDAAMQA6RahJnjlqEFMAYwBvLbgBLAG6LigBPiAACFsAXQVsQb5BcgByUiIAIfAAIk3CNj4BRv4AACIlHAHQBHQAQeoYYQBuAGcAbGU8PtYAWYABrAAzQiAAAFMNRAhrAGVKagAxaBGeYZIAdD2yLhYBPhwAEGIAbAB1BZRpnghnAHUFuAhkAENFFgG8CG8AbDJ4AQAybUgpAghjAEclumlAZjYACDYANAXYAGcdLi7OAQgzADUNIkGSCGcAREWeAbAQbgBjAGUyBAEIcwBMJVAAZG2kCDoAMw08CGMAbEWcUGMAaABLAG4AbwBjAGsAZABvAHcAbi0UADAFiAlmLloBTlYAiVZhXAB1JQwAYToMAQAzNQ4JUAhIAGWF/jZYAChBAHQAdABlAG0AcAUIqWYANC1IiZCBfnFWWeQhFgmwAGgFfAhmAEeKogGhtEFkCZQuoAE2mAA+RgEAMQU2ge4AdoVOAHMFdKl+CYYIdABvJZQAbHpKAAA4DdwAZDrGAS7yARAyADYANgVyyRRJUAG0gZAAc1o2AQmkCGIAYSXaZnACADcFWl6wAFaSAQgxADFKSAIutAAAOA0gAaYIZwBCpVQAeToMAV5cAGnYAGtVlgBEMpYCCdiZ/GbaABGkCYQAQ13yNt4BVogAaYYJPHZkA1ZAAAA2DaoJQhBMAGUAZ5J4ACFcUZRhuGbgAElqCWBWUANGuALpagBjRTpWtAQAMA0kCVguKgEgQQBjAGMAdQByRVQAeaWEEDUANwAuIlYIgYahfJlIVhwBADEt5AlkVh4CRrwALvAATlABPjIACewJZC7IAF4yA3lmDl4JZg4EXtQAIbZ5SLlWTsACRioBPr4AADUFUC4GBmYKAggxADmNIj5YAE60ARgzADgALgA0DTIpIGYeAz6YAAAz7UCB/qmgDjQIjhAHQSwhJInAnhAHHrIKLgoBLmoEofqmrAM+zgDxpP4QBz4QB0ms/hAHXhAHUdoh7v4QBy4QBwA0MpgDZgIDPkYByZ5BEkYQB6mweRg2igM+SgAOTA1BJP4QB0YQBymShhAHCShesABW/AMAN+3GLkYCLqQC5g4HiQZeDAcJrI4MBwkqfuwDVt4A6ZAJPM4KBzHUCUJOugVWeAAZtP4IBz4IBwAyGvYIZggHCDEANzL4AC62Ak5UBQg1ADXlCGkKYZiOCAceaA/BcJYGBy6MAU5KAT5QAzlEln4INlwMZgIEVrIBPkwKXgQHCZZGJgFWvgAu+gVuBAfRKj5YAE6wAQmCSZKm/AYIMgB91f4eChFBGg7eERb+C+EMLiwOFowRjh4HKdq2HAcAMRp4DS4iAS54BAA0rX4J6Ha8Az58AQ52DKGKLtQEOdAm4gkOig8uTgEhEA7ID2EARioOADINsmYaBwAyOpoIVtoKVoACHt4ObioOCDIANe1ArioONrIKZhgDPkYBKRpWGgcJHF5qBj5KAAAzTSJ2Ggce0AiuKg4JiP4aB1YaBzFsLkYCLqICADONNEmiVgQFVuwBFnIQXhoHFnYI/hoHfhoHLsAFdl4DVsQAADUNqAn+TtAFVjgA/iQOdiQOFk4LAGNiJA420Acu8ANOagUYNQAyAC4AOE2+PpwFZowNgXwpGFYWAj4gAxYqCQkyTkoBPjAAHkQL4XwuxABekgAANQ3CCWJmAgRWsgFpZHnsuUrWHg5Jei74BWYaB7EWPnwBTq4BgWYALgUGOYC2IAcW6heO8gYOtg4BZr70BmncLvwALlIEoeAJIimqdpgDPhYCUfz+HhUuHhVJmmbyBjaKD7bwBiFAbu4GADRN6K7sBgA3GsYKKUBm6gI+PAEpdv7sBsnsKTRu7AYAMw1w/uoG/uoG0eoAMy1qXuoGHqwJKVZW1ARWPAQWGgpe6gaJWv7qBn7qBglmCbx2UgNWxAAp0AlATp4FVjYAGXb+Ag5GAg5hOmbkBgAxuggVweC5OD5qBV5UBhYuDikWVhICPmQDSQwJMk5IAT4wADlClgAOLpIAZvoDVq4BiQCG5AYJlEYkAT68AK7kBiGwCbw+WAD+5gZG5gYAMBIEDg7qDY7mBgnutuQGDtQUQaBe5AYANi0cSQh2kAM+VgH+4gbO4gYAMy0GCbCu4gaROCHWVuoUHh4cruIGKZQJjmbmAj46AQk2/uAGweAemhpu4AYmcg4OCCJJQAGy/vQb5vQbaWBe4AYexiEBpLbyG2no7uAGyeAxVgmCZrYKVr4ECYYJPHZSA1ZAAFGqDlAMTpoFVjYASVQBdv7iBj7iBskqZuIGADUt0AF8NhwGTjIFCDcAMRI0GglWYY4O4BEmBhJW3gDJEgliluAGLoYBTkYBVpQDCWKOxg0WohQJMGb6A1bOACms/uAG2eApHi7kBWbEDTGuPnoBTqwB4QwJrAnoZgoDPk4BCDEAfRrgI4HmFt4bpsIUyVqJqp7eG0lQLgQBLswN7sAUHl4OLrAEJnYPWbq+wBQptmbODfbAFDG4IehW8AYO4A4WvBD+8gb28gYpCHkONn4DPloCADINMG7yBg68FJEApvQGKTL+vhRWvhQeOhEuRAIuoALmvhRpLO70Bsn0Cf5+6ANW0AQxbMGyzvIGDtQlDqYKCX5OrgVWeAAZtP70Bj70Bnn0DiYJTuAioRwZtC60Ak5KBQ7EIwAuIhARYZSO9gYZ4F4ICUYeAxmUTkgBPjAALmIALsQAXmgOUcgpqv74Bv74Bgm+LvQFZvgG6bA+egFOrAEJVgnipvIGDtIbprIUHgQWtuQGHoQTLvIALkYEodKpTgm+dowDPgwC6UIuogTu5gbJ5slgZuQG5rIUADEidgoh1FbiBjEwruAG/roiTroiSVKm4gapUG7iBv60FP60FEa0FMmILj4CLpoC5rQUSdD+4Ab24AYpwI5UA1aoBjYsBk6YBVY4ABl4/uAGPuAGFvQcZtQNDgIPLogOLhwGTjQFFpwbADlNBmGS/uAGVuAGSQ4pik5IAT7UBDlCNpoOXuAGDq4j+UK+4AbRZB6yLblANpwbCZZGJgFWvgAu7gVm4AY5sj5YAE6wAQmCKUim5Ab+sBSGsBQWGhIu8gAuRgTurhQJ8i6gBDme3sQNCVRm3gbmrhTpjHaQG8kmrtwGCbAp9mbcAlaGAg7ID0aUKQlSntoGab5u2AbOrBQJhP64DVa4DXbWBuaqFAm0/tYG9tYGKY5JDnYqClaeBkmACUBOjAVWNgAJ+gE2/tQGPtQGCVpu1AbJ9gF8Ng4GZiIFPkwFXjgGLgwBlqAUCa4JjE4+AVaGAwkwLo4HXpIAkUIOlA9m7ANupAF51rkq/sgGhsgGyQI+eAFmogEpErbEBgBdIn4pDhY7DrY4Fgg24V4OMjYONDgOPjYO0hMO/A8OsDcOFggIbwBmEtw3CGMAaRKoNQBSOqY6ESwW3CkO2BsmCjguPDsANxpcCRkerjw77qg6EFIARQBEGrw5RqY6CFUAbhLmNw4ONw6WKgBzEvw4Ae4hZg4KERb2OS46AEaIOQH4Ln46/rQ6JrQ6Hng5IYw2YgABsgBlJbAoIgAwADUAOgAwADANrjlmPoQ6Aap2pDoBIhBqAHUAZBJ0OyF6AHLllghOAGElBkEiACISUD0AbhLINAGaGTBhxOHENi4AGFcAZQBlAGsS6CsBMFGECcAONAgOWDqWegAAU0WIKZZ+eAAIRAAnBbYOZCwAbw007nwADEMAaAAOoCsOsCwByH6AAIHmAeQBLE54AIYQPJYUAQ7ePv6OATGODrws/o4B3o4BDlwwpnwA/o4BOY4JeKEkKapGQhdG9ANemgP+aD3+aD22aD0OphUB9LaWEwA57f5etgwINQAwDSKhRr5oPQ7YDQGU/roMNroMHugOZrwMDkwULjAJrmwhJs4Y4chWnBMO5iYJYP7CDIbCDBYMCf7CDA7CDKGeIY5uxAwOfhoWxgj+UihOUigAMSLsLSagCY4QClHcLkgCLlwPQRKGxgpWMAoWKhL+zgz2zgwuUAvOzgw2Ih7BrDYkDIZoC3bQDBZKDokCVkwDDi4IYSwWBkUIOgAxLZAAYxJ6CQ5cCj4YQmFoSdwJWC4IAk4yCwEwDoYaaXhhnI6mE1HMDkoMltwMLpQATkoBPuoALngCLsQAXnQBADRNcgn2vt4MKYp5Pv7eDAlQLv4FDkgMAG5asgEAMFp+AU6wAQ5IPgguADYaWBQJ7rbqDIGsgfgWgiEOQAmOaigOsBYWzBS2CgexHi4MAS5kBPYKBx5uES7CBDm83sQTHoob/goHVgoHHlQoDgYMDpBGwTgOlkYuFgHWCgcxbEkWZgQDPmwDwUBRFkbQExYUEnkcNo4DPkwAADitdgHWEeIOOkYu3gD+Dgd2Dgep9l6wAFauBkHeabQuSgIutADuDAceoCL+Dgf2DgcB5CHmSR7WEAcpJAlCTsYFXl4Buc52EgcpvAlglhIHQUwB1mYSBw6wDRZ0HglYLsACTmIFCDcAML1inhIHGeKemCEeUCIJlk5OAT5cA/H+CTIuyABeFgceVhIJMmYSBFa4AVG4eUy5XNb2EykmLgwGZhgHDs4fyXI+hAFOtgFB3i4+AqYUBwA5GpwTDmJODuISDnAJIVwOFBEOyBAWlEvWeD0JkrYyBxaKCS4kAS6IBB6sKSnSdsYDPkACaQYu4gQ50N4sB1GUZiwHoToWDgwJtFbYA16CAkkC0UrBVj4sBw4UKYmY/jYO/jYOrjYOQd6BuAHSEd4+KAchOmky/jYOTjYOKTR53EH0DrwTDgAMXlAEDpgIAbYuRAIungIerEQp6lYaDFbqAQmO/iIH9iIHaa4JvHZaA1bEADaACU7SBV44ACFUAXh2HgcpJAEoAGdaRgM+8gOJTABjai4OLpoDLgICTmQFDjo0AC4idAs+lgVmNAIR3p4aB1H0Dl4OTkYBPuYALjAALsQAZpIALkoBvhYHCTx5OLlE1hQHFmALLvAFdhQHIU4+fAFWrgEAMyo2OEk2phYH/tghhtghDpgJAcwu+gAuTAQAMjpaKnaSAz4UAhZwEC6oBDmo3uwGCVRm6gY2AASu6AZxhiHYDoAIFnwcLg4BDvgPIS7+5AaG5AapBg4iEEYMDgkcWfw2bgM+hAFpKm7iBuG8Adb+4AZO4AYOJhcWPghesABWjgUJNC46Ai6IASa+D4GeruIGKTz+4gb+4gb24gZRqAH8TpoFVlIBCfqBgnbgBhkonuAGaZJm4AYOZCoW9goBfja+FU40BQA2DYRhiA4ICZkqVtoALnoGVgwCPhQDLuQATkIBbjAANm4DVpIALqIBvtoGCZx5Lrky1toGKR4u4AVu2gYeFE0+fAFOqAHm7g222AYeIhm2wg0eLEwu+gAuRAQOWA8pHEmWttoGCTouoAQ5qFmwDoYgKdAJZDY8Aj5mAglUZtoGHhIVCUz+2gY22gYO3l7BBP7aBv7aBq7aBkGAAbRu3AYmDghGLh9emgopsv64KFa4KB7mFS46Ai6WAlG2KeRWzgRWdAQWwgr+2gb+2gb22gYu0hpOmAVW+gApsCEy/toGPtoGKVRu2gYeyjIBfobaBjZ8ST5kBV6QAgmGCWJWEgI+SgQuMgBOSAE+MAA5QjbEAF6SAC6oAf7gBv7gBgnsLuYFbuAGHvAVPnwB/uAGRuAGFjYlDsILFuQbpmY9HnY5tvAGFmYQLggBNlgELrQBvuwGFkQMLrQE/u4Gwe4W5Atm8AYOehMuSgmu8gYe2hMe1hPBJP60FO60FGliQQ5G0A1JwibyCX4mCBZMCwHSEd7+tBS2tBQJzF6wAFakBR6+Ny5EAi6gAua0FB6aMP76Bvb6BhYEDamkdl4DViABADMyoANOtAVWOAD+KFJuKFJx9g5gLE7mIg5uaA6ACQn2LrYCTk4FDi4hDkhgATI+fgVeaAZJaglgnvoGFkYLCTROSAE+/AYuQgGW/gYANDKYAL4ABz7uA7lK1uANKSQu+AVmAAcRuj6AAU6wAQlYKUimshQWPGCe0CIeyBG26gbJOC7yAC5KBPayFCmeLqYEOaDe2A0e9Atm6gYmpgwO1BG26AbJDtEKwRY+6AbOthQpZCn+ZuICPkwDCYxW5gZJUF42Bj5KAA7mKoEqAdAR3D7mBv64FG64FHauAFaMBQ7eCwG0Lj4CLpoCUe4pVlbQBFZaAKmK/uIG9uIGpkAKVsQALiIHTpgFVjYAKYghMv7cDUbcDckqZt4NADiNDgF+NhoGTjAFADYSIGAeCiM+YgVe4gaJcAliVhICPhoDfkgBPjAAOUI2xABekgChQEHGCZRm/ANWsAEO8BcBPnnouUDW4AYpIi7uBWbgBh6YIT5+AU6wAQ4+Ei4cAabkBgA1Iv4xjrQpkRa25AbRNC74AC5MBPa+FIlQLqgEOabm5gYpqmbmBubAFNGOIdhWmhvOwBSe6gI+UgOpCv7mBsnmCUxu5gYmHAj+whT+whQWwhQe4GEuPgIumgLmwhRJlv7mBvbmBknEifzOyA0usBCm5gYZdv7mBj7mBklgZuQGADkaOBIJ8n7EDQA5EiRSDsoNqeQAYZKgGxnennwiSQoJlE5GAT6SAy5iAC7EAF7kBh4wJwlivuQGKVT+4gbW4gYOplNOfAH+4gY24gYWJlIWLj0O5nVBQA6WJg5aDOF+/ow9jow9DtJ2AeomJjz+jD2ejD1+NHj+jj3+jj3+jj3+jj3ejj0AQRICewBsEkh5geiWHj0IQgB5Er56CS42oDwOejKmijsOHjvhtgBrDUx+GD0OEgoOxHYAciJCPu5+AAhNAGkSjHcQYQBlAGyaggAAQgUsES5OfgAWZhhurHkOjBEZ6l4qP/6cAU6cAQQ4AP6cAe6cAQA3qn4A/pwBRpwBAfz+qj3+qj3+qj3Oqj0WUg227AweNnRe7AwOQg0uEgh2mAk+HAhOsAru7gwW7gwOWE3BnmbuDA6mLSkOFmYIrtYTJg4ZZvIMDuwOFooIrrQhDvYOCTYJlmb8CEZIAQnOoUL+xBoANxp2Cnb4DAA3IvgN/voMRvoMIQghwCbGCZY4Ch7iOl7+DD7sJlbUEV48ERbEHe4CDRYCDQHeKa4p5sHKDs4qDgx+jugApm4DVsoANmBoTsILVjgAIT4hlgG2DiIIDmwKVrA7DpCDESqmqCgWPBMAY2LSGg7OIQkkAYI2MBNOYgsBhgAuIqwRYayOsCgO2B4ZvFYiAj5+A35YDj4wAB6IE0bIAGYUDR4CPQk0ZhoEVrwBsWh5Urlm1voTCfQuFgZmEAIONCEWBg4+hgFOuAEOOAoOIHsJMgnwZiQDPlwBADQSWggOqjYeEkAOZAmODBQJ0rYeB4kULgYBLm4R9hwHADQtAi7OBDm23goUHugY/hwHVhwHJmp5IepWDhTWHAcuMgVm/gI+DgIJNv4WB+EWADaNfP4WB+YWBwnMecKOLAYOTBFJpC5GAi6kAqHWFhocaWpW8gRWVgQBPKEs7hQH6RQJ3AmCZuwDVoYALhQGdn4KVkAA4ZAWbhoJgKYSBxl6dhAHLt4AVmoKPrgCHv4WZg4HNqgGLgwCTlQFXlwgXnAGLrIAVhICPrIAJtQlDiYfTkQBPjIAOT42zAdelAAhqKmkKahmBgRW6AEJ1HlAuUzWAgcJUC78BWYCBx4IJaEWLoYITqwBCVgJ5GYMAz5OAQ4WFA7SD8H6/qQ9/qQ9DqQ94caBHC4gAS54BO6WWImwLtIEOczeGAce1BtmGAcOtgkutAJWygNWfAJ5rkECVhgHYfSprK40Dh7aCUkwZhQDPjACaUpWGgdJgnkqLl4ERlR8MTpuGgdhAB4oHP4wDkYwDgnMXrAAVpIBDqo7QaAuRgIuoAKBbi6CClYGBVZcAAmQ/hYH9hYHLrwAzhYHQY4hIElYTtAFVvwAITgROP4WBz4WBxYcEW4UBzbGHi64Ak5qBQ5IYw4ce2l2YZounAVeiAYJLikYVhYCPmwDfl4IPjAAcSrhSi7EAF6SAA5gDTF0AGdqBARWsgEJ0Hk+uUzWGAepJC74BWYYB9EGPnwBTq4BFohNaVwpfrYeB6ZoZqkAtgoOUfgu+AA+UAQ5pHaSAz4QAtE+LqwEOabe8gYOPJUBkmbyBg7IIkkeKXCu8gZZlCHcXvIGHqYsrvIGKZ4JkqbwBgk2/vAGwfAeRiZu8AYhOmkI/rg9Trg9Ccx5so4kBA4ihyHuLkQCLqACJtAVgXZW2ARWQAQWjgn+7gb27gYxbAG8dlwDVsQANlSBpu4GyZJBjP7sBj7sBh76LmbuBh68kgF+NkINTj4FDvgRDuIvqUBhlI7sBhngnh4iLooBTkgBPtwELmIAjsgCADQyHgdmAARW6AE+OgO5RtbsBhZmCS70BWbsBtEiPnwBTq4BDk4P5sRStugvKRS+7gbJGC76AC5OBA66RC72CHaUAz4UAgk6LqoEOaje7gbxRG7uBonKaVCu7AYmLAsh3FbsBg72kaGAruoGHoobCZJm7AI+QAEekg1W7gZJtF4+Bj5MAB6GFnbuBh6yCv7cDf7cDRbcDX7sBrFcKViu7AYWfBvu7AbJ7IEMaXgJhGbKCm7IBAk8dlwDVkAALuoCTqYFVjYAaZgBdnbuBg56G0HsASqeAByB8gE0ZvAGDnYyiXYJWC4uBk5EBQ7sKqFECbZhmo7yBi7ABlYaAj4iAy7IAU5MAT4wAGmARsQAXt4NHkoTCTJmBgRetAEJoHnwuUz+9AaG9AYOBjRJbD6AAU6yAQ4wGybGgwnwZhoDPlgBDrhEFnIlgfi+2D0pJrYGBwk0XgQHHtwOCch2pgM+zAAObBRpDv4KiS4KiRFWZgQHASIuAgOuBgeW8g3OCAcpaClIZgADPkQBCTZBDEYSIwkceRQuTARO4hQWpgpuBAf+4hR24hR2sABW+gMO8gs+RAIupAnBhC6MBFbuBFZcABZGDO4IB+kICdgp2mbqA1aEAC68AHZeA1ZAAA5QEiEgCX5OugVWOAAZeHYGBy7cAFZIAz6yAsneZgIHFn4ICbAusgJWTAUANhLcFAk0Pn4FXmwGLrQAVhACPrQASQoJlE5EAT4wADk+NrQOXpIAkTAJYmb+A1asAQnQeej+/AYJUC7yBWbwDQksPnoBTqwBCSoJ4mYGAz5KAcHw/tQUdtQUCZgu7gA+QAR5QHaCAz68AA4YCAE8/uQNNuQNDiIKAVZm4AYOmBNpZiluVpgDVkwCJtYKIdRW5g0OrAkpNq7oDQkyCZJm4gI+RAEJNv7gBsHgAbYhOG7gBiE6HhoI/uYNRuYNCcx5qC7eBF4aBA7UOwG2LkQCLqAC5tIUDjYjAVr+4Ab24AYWuAxJFnZeA1awAh78lwlCTpwFVjgAuaL+4AY+4AYOJAkhWGbiBkkICbIuBAJONgUO1CIALiIkN2GUjuANLmIAluIGNi4QTkgBPpoDOUKW5AYeRFEJ+GYCBFawAQnSeTz+5Ab+5Ab+5AYu5AYO1iKm5AYWjAi2xA3JDl7kBvbIFKkwLqAEOZzeyBQJVG7iBhZwD0lMtuAGYZwe0EPBDA5SOC5SBR6MCf7cBvbcBmleWfAuKAROvA0p4gHQEdxGrET+wBRmwBQ+ngM2rgBW0gPhbCGMLjoCLogBNpIEVsYEVloAKTz+2Ab22AYpEkmgztgGFowPCUBOjgVu+gABNv7WBj7WBumoZtQG0fYBfDYQBk7WBg7uayH8/tAGZtAGLoABTkAB7s4Gyc4BwAnu/swG/swGrswGMag+egFOpgEuRAG2sg0OeDgWqCkeMjoO/DMAZxJIDv66PX66PQ5wsQA2Dbz+uj3Guj12RnsQSwBPAFQFBhZMPP4uey4uewAiEia3Fu63QZwuBntW5rUOVk8AIDIWCAmWLjoALpIFDkB7ADL+5LX+5LVi5LWWaHjJHC7iAC6sBg6gPWEaJma3CCIAcBJOtQ64Nhb8uFHYASj+2rVu2rVpfrboCQk0XugJADcyNgl2lgY+IAUWkgkurAf+5gmG5gkANxrYE6lm/uYJNuYJADgaSAv+5gn25gkpsJ7mCSmEbuQJADYaCAr+kCX+kCUmkCWBii42Ai5QBB6uICngruIJKTj+4gn+4gn24gkWyA0J/E6iCP7iCf7iCT7iCUl+CbAuNAVOOgjxgKEk/rAQVrAQSQIJjk4+AT7ABKn4CTAuvgBeeBMungH+4An+4Amu4AkpGD54AU6kAQkqKRK23glh1A6SFRZ+Hp56OimCttAGHhwk/tIGTtIGKeIunARe0gYJuikqNjwCPmYCHmQWZroQ7tQGHvQQId5Wmhf+1gam1gYeICxBCP56Hh6ykv7aBv7aBrbaBhZ0HC5AAv7cBsncFmoPXtwGSYz+iqd+iqcJhEnydigKXqYGLkIATqAFVjgAmZT+wBBGwBAW/hJmwhAAOA0iCfYu6ANOPgUO4D0OVjMWqBFhkv7oBlboBn5IAT6CBDlCNoYRZugGSVIJ9r7qBh5eICbCCrlENnglCWZGKAFWwAAu8AVmfB4e0Aw+VgBOsAEuGAGm7gYOupkW8A0OEMUWVsNBGv4UeN4UeAnGLh4BLnQEMRApzHa4Az54ARbeCS7OBDnK/sYXXsYX7gwHDoQIHuoWwTg+xhf+CgemCgcplP4IB+kIAdYBzhHaPsQX/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uAN/uANluAN5mAs6eb+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3+5A3G5A3+sB7usB4OHkAelh/+sB7GsB7+alz+alz+alz+alz+alxealz+fpnufpkOxh4OkA7+Zlz+Zlz+ZlwWZlwOfmQeMrLuZlwWZlz+mAE2mAEWEJkWvCmG4lz+mAFGmAEO5FuefgD+YlxGYlwO4FyhOg62EE5yHgHwDpwTJk4hlqQhFowSpqQhEVYWqhMWPBH+miH+miE+miEOZCshKF68EzYUy7a8Ex4QSv6+Eza+Ex6If2bMGg6sWBbyDRZyFK6iIR6gEymuXs4aDrxKITb+piH+piGupiEOomYBtG7IEw7EWxayK044A16sFyls/hZ+XhZ+HqRxLkYCLl4WDhgtEbQOgkausCEeekZe1hoWRBX+1hqG1houwAB2ZANWoBoOlCwWOB5J8qbaGh6uQgE4/twaPtwa4SJBcmbcGg6csxYAGwGANhoaTi4ZDqArDlZAFnIeocSO3hqpuAliVh4CRnYYLjQATk4BPjIADtA7AewJZg6MGQ4WaAGeZjYDUX4JNL7kGml2huIaScRGLAE+wgAJKC4IBmbiGtE2PlYATrQBCVgJ5K7iGhZiOQ5SVb5MXBYCH7buExbQHC7+AC5kBO4GBw7uYkEaLsAE7sQaFsQa0YL+BgdWBgceeGT+Bgf+Bgf+BgdGBgce7jX+BgdGBgdJ6IYGBx6iInm6li4EsaouSAIupALuCAceEjz+CAf2CAep1okqzgYHDmIvYWoJQk62BV4+B9mE/gQHPgQHhlpHNqpELgwCTk4FYfgOeDmJ5D6ABWZsBjkYlgIHHg6EZkwBPkAGDoQIGWYuyABelgA2YnG+AAdpcHlGuVQ24iERKD6ScT7AAAkoLgIGbgAHFrIJPoABTrIBCVgpSKYABw7EDhY4Dv7iIf7iIY7iIS7kAHa6Az52AR7uPS7WBDnK3uIhsWRmHgehTmlEKZquJA4mIC5BAFYkDg78CikW/iQO/iQOriQOHkZIbh4HDqZISf6uJA4pDIYeBwkoedaOSAShFIF6LkQCLhoHIewW4Akp7FYCBV4cBhZYDf4aB/YaBzZcBXZiA1bIAB6eQykCphwHeST+Ggc+GgdJRABjEu41TkSFDnwYFuQMCbQuCgJOaAUO2gkOcmoJNGGaDn4NmUJWGAFJcAliVhgCXrIECTROSgE+MgAufAIuxgBeGAc28ob+GAf+GAeuGAcpTj58AU6uAQkqKUSmFgcOZg/hFp7cLwlmtgoOCTQuCAguTAQJHgm4dogDPgYCHrBnLqYE/uoGweoWqhB26gYutgCu6gYeGAxuCA7+6gb+6gbW6gYeumN26gYeSCz+6gb+6gbR6h5ccy5EAi6eAg5uDEkQSaJW0gRWCgXhJuFG/uoG9uoG/gQONjwFTp4FVv4AFjAvITb+6AY+6AYJ0nboBhZkCAF+jgQVDo7QHhIIPmwFZlQGIdwJYlYWAj6wBAm4CTJOSAE+MABexABekgAObBgZlP7mBv7mBv7mBv7mBi7mBrZGhanctuQGHsAVXuYG7vAUHpBbLqQE/ugGwegWMApu6AY2KjOu6AYmihdhUFbSDeEMiSb+0g3+0g220g0WEAt26AYeVB2u6AYJWP7SDVbSDR5QMC5EAi7oBhmwvvIbDr4OgXb+5gb25gZJzKlcdkYKVq4GNqADTp4FXjgAAfwBeP7mBj7mBg6uDQFcZugGtjSMYZIOiHYWgA5hlv7ODf7ODR7ODS5gCY7ODR68dCF2xiBqaWYmEBG5Sv7mFI7mFInwPn4B/tANJtANwerm1i/O+gYJNF74BgkeKap2mAM+/ghJqC6wBDmqJoQQtvagFpYYbvIGNggErvIGHq4JIdxW8AZO2A24dQBhAHIAZABDAG8AbgB0AHIAbwBsAFQAaQBtAGUAIgA6ADAALAAiAHMAaQBnAEcFIBh1AG4AZABTBS4oaQBrAGUAcwBMAGEFFghlAGQdNkByAGUAdgBlAHIAcwBhAGwAcx0cEHQAbwB0BRZ2SgAIMQAxBYIQZABpAHMFMhBuAGMAZTKkABAxADQAOAUmGHMAdQBiAG0FLAhzAGkF1jBzAEEAdAB0AGUAbQBwBQgupAAYYgBhAGMAa4ICAV6wAFZcAAAyDbYQYwBsAGkFsgBoOrIACDUAMg2yKGkAZwBCAG8AZAB5OgwBVlwAADUFOBBrAG4AbwW6GEQAbwB3AG4yXgEIbQBvJagAdILYACEmCGcAQx2oNpABVoQAADQFhAk8AEQlbiHMKZqOQAAAMTL+ABBMAGUAZ5I4AAA2FXgIZABlmtwAEEgAZQBhmrICAGNFMk4IAwAxLXgJ9C7kACBBAGMAYwB1AHJFSgB5ReQYNAAzAC4ANw00IRxhKABkNapWVgEAMBXeXhICPmgDSQwJlE5GAT4wAC50Ai7EAF4kAwAxPXIAZ2oABG7QAHk6AEtytAJGJgE+vgApIABnnYZ+uAQ+VABOrAEJVilEZggDPowALDMAfQAsAHsAIgBnAJ5GBYmYAGMySAF+wgAu7gBemAV2ggM+ugBxfC6cBDmaWaQ+/gU2MAJGVgAAM0W+EG4AZQB1ZUKh8i6yAGFCLmwBVpgDVkoCCDEAOQVe0QLBDghuAGcyYAAIMwAwDWAAaAV8CGYAR6KaASn+AEddWnbuAAnI/uAGweDRKgHSEd5G4AYIMwA3LQD+4Ab+4AbJ4AAzLZAuRAI27AEANk0QKVqu4Aap5P7gBvbgBi4oBXZeA1awAgAyLSAJ/k6aBVY4AAn8ATb+4AY+4AbJKv7gBtngEDEALgA5GggIPmYFZk4GACwiTApWEgI+VgQuaAhORgFuMAA2wgNWcAEAMxqoCSFy/uAG/uAG/uAG/uAGNuAGADGy4AYpqLbgBmlkXuAGADUNVCmadoQDPgYCADKNfC6eBO7iBsniADJtnm7iBgA4tagO1guu4gYAMw200QTBED7iBggyADQNhP7iBv7iBrbiBum8AdIR3kbiBgA3NQD+4gZG4gYAMk1sJooKjhwEADW1NFbEDRmyIexW0ARWdgQAMQ3s/uQG9uQGFrAJjmADVsYAJnQPIQBOngVWOADJigF4/uQGPuQGHiwQZuYGADNNCgF+NnIOThoMCDUAMMXmFlwQnuYGFhwKCWJWFgI+sAQAMTI0AE5KAT4yADlENsYAXnoHADX+6Ab+6Abi6AYAMlp+AU6wAS56AbbKDQhdAH3F8AgiAHNFPgByErYRDhgRAG8SNhIIbABlEjoSDhQNDh4QCHUAZRLMCChvAGYAZgBpAGMAaRJ0DABSEg4TEHUAbAB0MiwACV4OLg4BLhhnAGgAdABJGogSCDkAMAUEDhwJGR4AQxqaEy4mAOkyqUQIdAB1GvIRCCIARgWuDiATAGgaghMBTA6eDA4QExGMAX4AdwUqwcIAcgWgICIAUgBFAEQAIgUyDvQTCHQAaBKMEgHqGCIASwBPAFQFBkYiAABEJRQIYQBpItoTAWIAZRIIFA70Eg6eDQ5eFABnBShGLgAAIgWuDpoRCVwuOgAIUABvZUIAdBrEEwmODqQUACAyMghGOgAuhgwYIgAwADEAOhJ6DgkqOWAAUy3oDr4UAV4BvC4eAT4gAAhbAF0lJABsBZoO1BROIgAOFBBBREbEBkYAAQBuRSgAbAVWLvwAAFIavBMWphQW/hQuIAAuvAZGVgAIIgBwEvoTAGMlXg5yFRGUgcAAdAUSADoF7i4MAT4cAAhiAGxF5DEa/uQQ1uQQADIy0gO2BArJ3v4CCjYCCglUYT5W4hAeMgip/K7+CbHQIYgOXhdxQCaIFxFc/voJ/voJrvoJKTBu+AkR1oGo/rgX/rgXDrgX6eouNgIuUAQuYAZW2gdW8ggJOP7uCfbuCQmESZp2SgNWxAApGglATqIIVjYASQIBNv7sCUbsCcHkAGPFFOGSbrgaLjIFTjgIDl4TAC5thGGC/qwXVqwXLoIBTkIBPuQJ8e4OXgsuwgBe5AkuogFmyhRWqAEJnCYgDSYuD05eGkYkAT68AAlQLtwPfugWPlQATqgBCVZBTg6aCSaWDTYeAUaMAOkCDhAJ6VTh7P7SBl7SBtG0LvwALkQEESAJxnaKAz7KAMkmLp4EOajmuBcWYgj+1gZO1gYmXhMh3v7YBv7YBhZcFUEG/pQejpQe/toG/toGPtoGDpIcqTJ5OC6YAi5cA/7cBv7cBv7cBr7cBkkaaZJOmgVWNAUW9BQBNv7cBj7cBhbsGWbcBjZEBC7+BE42BWGGAC5VAGGK/uAG/uAGyeA5QJbEEAAzrQAhcsasFx6SGf7kBtbkBjGyPn4BTrABLooDpsYQADAaZBAAIjrEHABhMvQNLtgNAFv+CAdyCAcJxv4GB04GBxZUCv4GBzYGBwlU/gQHTgQH6bQWtBH+Agf+AgcplP4CB+ECKTD+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g3+2g0O2g0WlhWB1BbcDf7WBnbWBvHI/tgGTtgGySr+2AY22AYeEhv+2gZO2gYOWAoOUgz+3Ab+3AbJ3BbEC0EI/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4N/t4NXt4N/qQe/qQeADcakhYmOBz+pB6epB4YQgBMAFUARRrEHf6mHv6mHn6mHg5yDwAgQhwNFoAeLsgdLn4TDqoeCDAAOhKoIQkqOWY+aB4Ovhb+qh7+qh7+qh7+qh7+qh7+qh4+qh4WWg8WmhK21Bce7BMmlhXu1BcW1BcJVG70CRb0FwmwrqgeHlAKdvQJCV7+qh7+qh6uqh4pMnbQEAnW/qoe/qoeHqoeTsYxLlIELpIC/tAX/tAX/tAXvtAXMRxJ4KbSFzHiAGn+0hdC0hcWngxm0hcurgF+0BcQNQA4AC4iMCw+UhZeHh4uYgBWEAI+bhwuMgBORAE+MAA5PpbQFzamAf6yHv6yHv6yHv6yHv6yHuayHinoXtIQCR5pSrbaBhbAIP7QEDbQEAlU/toGTtoGyQL+2Ab+2Ab+2AY+2AYpMP7YBv7YBq7YBhbGIi42Av7WBsHWiZz+giX+giX+giXOgiV5wP6mHj6mHilU/tQGydQAMhr+FP7OBmbOBgmuiciOzgY5OJbOBkm4CWD+zAb+zAb+zAb+zAY+zAYegCUWlEAWPEIOJg8OniwWjCz+mB7+mB4emB4u9gG27gbJQC64BP7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDRbIDf6gHragHsnA/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swN/swNbswN/oYe7oYeFoYeFnwqJiAd/oYe/oYeHoYeAFNK9jsWzB0ush5GsjsO0B02GB5uNj1eNjxWMj0AZhIOLghtACAi6FAAIDqyEQmiNoAAJnY6DpgeADQSQj0AOQ0qOXj+mB7umB4WmB4O7C4O3E8YcgAgAG4AYRrYMAHiDsQ+DuYwRt4A9l49MX4W0hAWLhb+VD3+VD0+VD0WihBeBAoANxqORRaqErbkEB6GTf7mEDbmEB5UKWbWFwgxADga2goJtq6yHgAzGnwbKaxWgi8ZYK60Hp4cUVYoG/6qKA6qKJY8Tjk8RpZAXi4cqWSGHFURgiZWMy6+HF5eAB7GSC5IAv7mFw7mFxZSNO7mFxbmFzZqNGaEM1YYNhaEDUmqdmADVkAAHqo3CUJOrBZeOAAWjiv+vh5Gvh4OulEBXGbsFzZWBC4EAk7iNAg0ADUSkjYW8iz+9Bdm9BdJxmZIAT6aAxYuGwkwLsQAZsgCSXIJMr72FxaSGHk6JjAW/nQ9hnQ9MbA+fAFOrgEu0AJmDAM+TAEOkDZBNIHoFpovDugI/nY9XnY9CaT+9AZW9AZJRi6sBDmq5nY9CVb+9AZW9Aaxlv70Bkb0BilofvACRgwCUQrunC8WnC8pOG7SHvnO/qwl/qwlHqwlHtxPLkQC/vAGwfAWWE/u8AbJ8C7WAWbiA1Z2Bi48AM7uBjEcSVSm7gZJCAE2/tgePtgeyZZm7AYu8gAuJgZOPAUOKFcOrCUpqj5uBV5YBskaAdz+CFxeCFw5PpbeHjamAb7qBiky/uoG1uoGEXweHg4WLF9OrAEJKil2ZgYDRtwEFtxNDowO/uAe/uAeDuAe/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4N/v4NJv4N/voe/voeLvoeHvAO6d52qAo+4gcerlkuvgv+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag7+Ag4+Ag7+Fh/+Fh8eohcmnh3+Fh+eFh92QFz+FB/+FB9OFB8+FhVGJB4unBMOAh8ORFwOVGQJKjlk/gIf/gIfDgIfDlIXDoJdDphcDi5dDlIW/gIf/gIf/gIf/gIfFsoRXg4YLqoxtgoKFiAa/ggKNggKCVQhtlZUXB4aMxa6Cq76HglaPu4CLtwCHqBwrvYeKVwJjmYIFD5ECwk2Dgwa/gIYCdZ2ABge3hSm8B4JoIbwHgkoJsoULgAWXs4VHlAWJuI/LoQBADYN0ilSVpo7VnoXKQr+/hf2/hcuqhZ2TANWxAAuQB2m/BcZdv78Fz78FyngZvwXHgwOKa4u/gFOUhYANA1Q/vgXZvgXOQoAZ1JAAT6IAy4wAC6+AHa8AgnuZu4DVtwBKUx5LCY0D/7gHobgHhF8/vYXbvYXHtQc/t4eht4eidD+0AZG0AZJDv7QBjbQBgms/tAGTtAGKdT+0AY+0Aapnn7iAj5IAwmM/tAGwdAJgP7QBj7QBina/tAG/tAG/tAG/tAGxtAG6VRJDs7QBil4CUD+0Ab+0AaO0AYOiiWByAmyLhIGTtAGDqhsSfr+0gZm0gYuggFOQgE+igMucgEuwABe0gaJoAnw/tIG/tIGCbwu2AVmJmMWtgw+eAF2pgEJ5qbMHgAy/swe/sweOsweSWwuIAEuOgspEgnkdqwDPjQCSWT++AY2+AYJVP74Bk74Bgla/vgGPvgGKVwpPmYIAz46AQk2/vgGwfgJ1v74Bj74Bgmg/vgG/vgG/vgG/vgGxvgGiYpJDnZMA1bsC/7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDf7IDe7IDf6oHraoHhaACP7UBkbUBmnC/tQGNtQGCaz+1AZO1AapqP7UBj7UBhZMDYnG/tQGztQGCYD+1AY+1AYp2v7UBv7UBv7UBv7UBsbUBhZiCEkOztQGKXgJQE6KDFYKBxk2/pwUvpwU/swN/swN/swN/swN/swN/swN/swN/swNRswN/pYe/pYeADcaGAomMh3+lh7+lh4Wlh7+1nr+1no21nr+pD0OpD0Woh4AMP6iHv6iHkqiHh4yPQ5mEi5mPjaIPamsLiAALswRRjQ//uZ6/uZ6tuZ6Cd5e9AkOSDzmzBAeRlf++gk2+gkRVmb8CQ5yIh7GCQ6idraaHhZaCm4ACg7ENik4/p4e/p4etp4eCbR2BAoeGBz+oDb+oDYeoDYANA22LkQCLmoTHvphFgwJ/qQe/qQevqQeLqYCzggKNjSEpgoKFoozITT+Cgo+CgoWaAxmph4ANu14AXw24DVOMBbO/npxwGH6VhACPjoUJhgLlqoeFjJdRsIAZtgXaXoJMmb+DVa2C0kk/qwe1qweCbiO2hcJpgniptYX/oo9too90Tr+7gZW7gYWSDkuqAQmViDejD0AM+1+/uwGVuwGDq5fDvwI/uwGRuwGHn6PSRBm8gI+igMeCnf+xhcOxhcepo3+8Abm8AYOmCkhYHm8LjoKXi4EADi1RFaWJebyBono7pYlFpYloZYB2CneZu4DbtAECTzO9AYelEcJQqb0Bgn+ATb+9AY+9AYeyEdm9gYOSAteNgZO+AYOopQOnD3JzD6ABV46AgkuAeKenD05FJb6Bi4wAI7MAgA2MhgGZggEVioCHowKeUC5UtaoJcnQLgAGZtYeObQ+gAFWsgEQNgAuADYqWgm+CIIWuEsO5p4mxp7+ilzeilxGKAEudAsObCEuVht2xgM+5AX+KAdOKAcJVG4mBwAzMrIArhIOJlgpQQhepksAOW3g/hIO/hIOrhIOHjRNdiIHJuYt/rQs/rQsFrQsHtx1LmoJLp4CADcypgRWAgxWcARJ9u4gB+kglgwLVoQALjwAdlgDVkAADpwQ+ZymHgfZnv4eB0YeBwGQZhwHthIODhAQCC4AOA2wYZIOTFEmYAtWFgH+GgeGGgcW+A7pfI4aBzaoAb4aB0km/hgH+RgJUJYYB2kmPnoBThQHCVYpFGYmCj4gBhYUDv6SS+aSSzaQAnaEAz68AEloLpoEJrIIWaQOupAuHAE2MAJWVABu3AZ5vABnutoGEYYhzl7aBgle/tgG/tgGrtgGKYZ21gZxsP7UBv7UBsnUHrxyLjoCLtQGLngLrtIGKeD+0gb+0gb20gYxHIlIptIGOTT+0gY+0gapAmbSBil2CbAu3ANOJAUQMgA3AC4FBuEmYYiO0AYuYgBWEAI+RgQuMgBORAE+MAA5PjamFV58DjamAf7QBv7QBq7QBikaPnoBTqwBCSop2LbQBv7IJbbIJRayGy7OBzZMBAAwMgwNtuIGFvgNLqgE7uIGyeIemBVu5AbmwA0OmqBhXG7mFA4KDcboBh6qE0kSZvQCPlwDKaD+5BQO5BQAMRpwEHbsBv7CDW7CDakMJooKLvAEXiwEHrYaLkYCLqIC5sINaaLu8AbJ8CYgG27GEVZsDAmESRR2XANWQABxDglCTqwFXjgASeb+ZFlOZFnJDmbyBg4SEkkKCbYuBgJOSgUeoKE+dgVeMgIuXAD+8Ab+8AbJ8HlUDmgcZvwDXqwBFqQKeTi5RtbaFCmiLvQFbtoUADAtUj6AAU6sAQ5qolFabg4DPuYE/sgNhsgNyYwu9AAuRgTuyA0WHgwuoAQ5oN7IDY6kp8mEaUSupBT5HiHS/soNJsoNFlgVCZBm5AI++gEe3BD+4gbB4h5sGnbiBimw/rQiTrQiEcx5qi7cBF6wBAA1beAuQgIunAJprilYVtAEXjgEYYru4AbJ4A6QsAFKCYJmzApWhgD+4gYWAAkJfKbgBhl2/tANRtANSUBm4AY2TAQuAgJOMgUOXCoO1A1pPmGSjtQNhhQCPmgDLjIATkgBPjAAOUKW1A02bGJmAARWJgIO2g5BZHk6uUT+5AaG5AYxsj5+AU6wAQ6+CC4cAWYQA0ZSAQB9/sosksosJi4tFrBpDpqmDhwcAGESTBQOLBUYcwBwAG8AchrcpxgiAFUARgBDGnQrFuq6DkS7HrKnACIS1qcAMg0gCG8AcBI2pgFSFpi6CE4AYQVqDqoUDsSniEkARwBIAFQAXwBTAFQAQQBUAFUAUwBFAFMAXwBQAFUAQgBTBQYAXw1SqYoOsKgOrqgO2roJWBhIAGEAcwBoDVgOyh8ANBIMCjQ4ADkAMAA3ACIAfQB9AA=='
</code></pre>
<p>I want to decode the above. Based on this <a href="https://stackoverflow.com/a/3284069/7959614">answer</a> I use the following</p>
<pre><code>decoded = bytes.fromhex(hex_str).decode('utf-8')
</code></pre>
<p>This results in the following error:</p>
<blockquote>
<p>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position
3: invalid start byte</p>
</blockquote>
<p>Next, I try the following</p>
<pre><code>decoded = bytes.fromhex(hex_str).decode('ascii')
</code></pre>
<p>This results in another error:</p>
<blockquote>
<p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xd4 in position
0: ordinal not in range(128)</p>
</blockquote>
<p>Please show me how to decode the hex.</p>
|
<python><hex><decode>
|
2024-11-10 11:06:36
| 0
| 406
|
HJA24
|
79,174,669
| 9,128,863
|
Scipy: calculate orthogonal vector
|
<p>I'm trying to use Scipy for orthogonal vector calculation:</p>
<pre><code>import numpy as np
from scipy import linalg
e1 = np.float16([-0.913, -0.4072]).reshape(2,1)
e2 = linalg.orth(e1)
print(f'e_1 {e1} ,'
f' ortogonal e2 is {e2}')
</code></pre>
<p>I expected output to be:</p>
<pre><code> e2 is [[-0.4072] [0.913]]
</code></pre>
<p>I checked it, by: 0.913 * -0.4072 + (-0.4072)*0.913 = 0</p>
<p>But receive:</p>
<pre><code> e2 is [[-0.913] [-0.4072 ]]
</code></pre>
<p>What am I doing wrong?</p>
|
<python><vector><scipy>
|
2024-11-10 10:34:07
| 1
| 1,424
|
Jelly
|
79,174,532
| 3,380,131
|
How to emulate another theme's style with a tkinter ttk.Checkbutton
|
<p>I'm using mostly-vanilla Debian Bookworm and Python 3.11.2.</p>
<p>I like the default "themed" tkinter (ttk) widgets except for the checkbutton. If I change from the 'default' theme to the 'alt' theme (in tkinter), the checkbutton looks great but the other widgets look antique :)</p>
<p>default theme:</p>
<p><a href="https://i.sstatic.net/EDPcAFLZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDPcAFLZ.png" alt="default" /></a></p>
<p>'alt' theme:</p>
<p><a href="https://i.sstatic.net/YX4dRhx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YX4dRhx7.png" alt="alt" /></a></p>
<p>I want to modify the <code>ttk.Checkbutton()</code> so that it looks like it would using the 'alt' theme. I see how to use <code>ttk.Style()</code> for things like background color, font size, etc, but I don't know how to <strong>change the graphic from a filled square to a checkmark</strong>.</p>
<p>Any way to do this? I'm willing to tweak tkinter from within python, but I'm hoping that I don't need to learn tcl/tk...</p>
<p>Code used:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
text = tk.StringVar(value="Entry")
pb = ttk.Button(root, text="Button")
ent = ttk.Entry(root, textvar=text)
cb1 = ttk.Checkbutton(root, text="Check1")
cb2 = ttk.Checkbutton(root, text="Check2")
pb.grid(column=0, row=0, pady=5)
ent.grid(column=0, row=1, padx=10, pady=5)
cb1.grid(column=0, row=2)
cb2.grid(column=0, row=3)
cb1.state(['selected', '!alternate'])
cb2.state(['!alternate'])
root.mainloop()
</code></pre>
<p>Add the following for the 'alt' theme:</p>
<pre><code>style = ttk.Style()
style.theme_use('alt')
</code></pre>
|
<python><tkinter>
|
2024-11-10 09:21:01
| 1
| 1,474
|
bitsmack
|
79,174,285
| 2,679,476
|
Unable to import cv2 in Python Ubuntu environment
|
<p>My Ubuntu version is :</p>
<pre><code>Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
</code></pre>
<p>I tried to install various ways and they were successful always.</p>
<pre><code>$sudo apt-get install python3-opencv
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-opencv is already the newest version (4.2.0+dfsg-5).
0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
</code></pre>
<p>But when I try to import, I get this result :</p>
<pre><code> import cv2, sys, numpy, os
ImportError: No module named cv2
</code></pre>
<p>I don't have any clue what is wrong. I also tried to install using 'sudo pip install opencv-python', but every time I get the same result.</p>
|
<python><opencv><ubuntu>
|
2024-11-10 06:43:49
| 1
| 459
|
user2679476
|
79,174,236
| 14,275,533
|
[Airflow]: Dynamic Task Mapping on DockerOperator using Xcoms
|
<p>I am creating a dag that should do the following:</p>
<ul>
<li>fetch event ids</li>
<li>for each event id, fetch event details ( DockerOperator )</li>
</ul>
<p>The code below is my attempt to do what I want:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from airflow.operators.python import PythonOperator
from airflow.providers.docker.operators.docker import DockerOperator
With Dag(
start_date=datetime(2024, 11, 1),
schedule="@daily",
):
task_fetch_ids = PythonOperator(
task_id="fetch_detail",
...)
task_fetch_detail = DockerOperator(
task_id="fetch_detail",
image="image:v1",
).expand(
command=[f"fetch-event --event-id {event_id}" for event_id in "{{ ti.xcom_pull(task_ids='task_fetch_ids', key='return_value') }}"]
)
task_fetch_ids >> task_fetch_detail
</code></pre>
<p>The above clearly doesn't work because I am looping through a string.
What is the correct syntax ?</p>
|
<python><docker><airflow><airflow-taskflow><airflow-xcom>
|
2024-11-10 06:04:44
| 1
| 451
|
lalaland
|
79,174,041
| 424,957
|
How to calculate the angle from a point to midpoint of a line?
|
<p>I want to calculate the angle formed by a line segment from p1 to the midpoit of the line connecting p2 and p3, and the line formed by p2 and p3. I used code as below, but the result seems not correct, does anyone help me?</p>
<pre><code>def calculateAngle(point1, point2, point3):
lon1, lat1 = point1
lon2, lat2 = point2
lon3, lat3 = point3
latCenter = (lat2 + lat3) / 2
lonCenter = (lon2 + lon3) / 2
xV1 = latCenter - lat1
yV1 = lonCenter - lon1
xV2 = lat3 - lat1
yV2 = lon3 - lon1
dotProduct = xV1 * xV2 + yV1 * yV2
magnitudeV1 = sqrt(xV1 ** 2 + yV1 ** 2)
magnitudeV2 = sqrt(xV2 ** 2 + yV2 ** 2)
if magnitudeV1 == 0 or magnitudeV2 == 0:
return 0
cosTheta = dotProduct / (magnitudeV1 * magnitudeV2)
cosTheta = max(min(cosTheta, 1), -1)
theta = acos(cosTheta)
angleInDegrees = degrees(theta)
return angleInDegrees, theta
</code></pre>
<p><a href="https://i.sstatic.net/JpJE6hB2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpJE6hB2.png" alt="enter image description here" /></a></p>
|
<python><geometry><angle>
|
2024-11-10 02:34:47
| 1
| 2,509
|
mikezang
|
79,174,023
| 754,136
|
Confidence intervals with scipy
|
<p>I have an array of shape <code>(n, timesteps)</code>, where <code>n</code> is the number of trials and <code>timesteps</code> is the length of each trial. Each value of this array denotes a stochastic measurement.<br />
I would like to implement a generic function that computes a confidence interval for a given statistic (mean, median, ...) assuming 1) the underlying distribution is normal, 2) is Student's.</p>
<p>Something like</p>
<pre class="lang-py prettyprint-override"><code>def normal_ci(
data: np.array,
axis: int = 0,
statistic: Callable = np.mean,
confidence: float = 0.95
):
</code></pre>
<p>and a similar function <code>student_ci()</code>.</p>
<p>My problem is that, by default, scipy functions compute intervals for the mean, am I right? Like in <a href="https://stackoverflow.com/questions/15033511/compute-a-confidence-interval-from-sample-data">this answer</a>.</p>
|
<python><scipy><statistics><confidence-interval><scipy.stats>
|
2024-11-10 02:12:32
| 1
| 5,474
|
Simon
|
79,173,935
| 9,983,172
|
python not loading available .pyc file from __pycache__
|
<p>I've never needed cached .pyc files until just now, but now I find they aren't working. I have a simple arrangement like a top level script mainscript.py:</p>
<pre><code>import mymodule
mymodule.main()
</code></pre>
<p>and a mymodule.py file with some functionality:</p>
<pre><code>def main():
# do stuff
</code></pre>
<p>when I run <code>python mainscript.py</code> it takes about two seconds to compile, and then it runs and there is a __pycache__ directory containing <code>mymodule.cpython-39.pyc</code></p>
<p>However, when I now repeat <code>python mainscript.py</code>, the same delay for compiling occurs, and yet the .pyc file in the __pycache__ is not updated to a new timestamp. My understanding is that the available .pyc file in __pycache__ should be automatically detected and imported, saving the two seconds of compile time. But, I'm apparently not getting that savings.</p>
<p>According to chatgpt there's about 5 things to check:</p>
<ol>
<li>Remove the -B Option and unset PYTHONDONTWRITEBYTECODE (I never used -B in the first place)</li>
<li>Delete Outdated or Corrupt .pyc Files (I did a rm -rf __pycache__)</li>
<li>Run Python from a .py File instead of interactively (never ran interactively to begin with)</li>
<li>Verify File Permissions (they are all default, did a chmod -R 755 anyway, no effect)</li>
<li>Force Compilation Before Importing (i used py_compile module to do this, no change)</li>
</ol>
<p>I have checked all of the above and none are the answer. This is python 101 stuff it couldn't possibly be simpler, but somehow i'm just not seeing it (and neither is chatgpt).</p>
|
<python><compilation>
|
2024-11-10 00:27:26
| 0
| 480
|
J B
|
79,173,585
| 9,128,863
|
Scipy: optimisation with custom step
|
<p>The task is to find the minimum of function with explicitly defined step.</p>
<p>I went through methods of scipy.optimize package, which use approximation approach (COBYLA, COBYQA), and didn't find any parameters and options, which can be used to pass the step size.</p>
<p>For example:</p>
<pre><code> initial_x_1 = np.float16([4.0])
res_opt_x_1 = optimize.minimize(f_x_1, initial_x_1, method='COBYLA', options={'rhobeg': [2.0], 'maxiter': 50, 'disp': False, 'catol': 1e-4})
</code></pre>
<p>Is the other numeric methods in scipy.optimize, in which we can set the size of step explicitly?</p>
<p>Or, more generally, are there other numeric methods in OTHER Python mathematic libraries, in which we can set the size of step explicitly?</p>
|
<python><optimization><scipy>
|
2024-11-09 19:54:32
| 2
| 1,424
|
Jelly
|
79,173,414
| 10,452,700
|
How can encrypt\encode (Base64) for a generated key including secret message with 2nd public data key in Python?
|
<p>I'm trying to encrypt the a new generated key using another public key belongs to my friend <code>recipient_public_key</code>, then encode the final output in Base64. This process also can be done step by step <a href="https://kevinsguides.com/guides/security/software/pgp-encryption/#encrypting-files" rel="nofollow noreferrer">here</a> using tools like Kleopatra after installed <a href="https://gpg4win.org/thanks-for-download.html" rel="nofollow noreferrer">Gpg4win</a>, however I'm not interested in using tools.</p>
<p>Task:</p>
<p>Generate an RSA key pair with a 1024-bit key size, using your username. Consider the below message, and sign the message without encrypting it. In a text file, include the following: your public key, the message, and the Base64-encoded signature of the message. Then, encrypt this file using another\recipient's public key, encode the final output in Base64, and print it.</p>
<blockquote>
<p>Message to sign: <em>This is my secret message</em></p>
</blockquote>
<p>At first, I've generated an RSA key pair with a 1024-bit key size, using my name as the username using available Python packages for this like <a href="/questions/tagged/python-gnupg" class="s-tag post-tag" title="show questions tagged 'python-gnupg'" aria-label="show questions tagged 'python-gnupg'" rel="tag" aria-labelledby="tag-python-gnupg-tooltip-container" data-tag-menu-origin="Unknown">python-gnupg</a> <a href="/questions/tagged/pycryptodome" class="s-tag post-tag" title="show questions tagged 'pycryptodome'" aria-label="show questions tagged 'pycryptodome'" rel="tag" aria-labelledby="tag-pycryptodome-tooltip-container" data-tag-menu-origin="Unknown">pycryptodome</a>. I need to generate and sign the message without encrypting it as below:</p>
<p>I have tried the following approach which ends of the following error:</p>
<blockquote>
<p>ValueError: Plaintext is too long.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code># !pip install pycryptodome
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256
import base64
import pgpy # Import pgpy
# Step 1: Generate RSA key pair
key = RSA.generate(1024)
private_key = key.export_key()
public_key = key.publickey().export_key()
# Step 2: Save your public key for submission
with open("my_public_key.pem", "wb") as f:
f.write(public_key)
# Step 3: Prepare the message with your name
message = "This is my secret messgae"
message_bytes = message.encode()
# Step 4: Sign the message with the private key
hash_message = SHA256.new(message_bytes)
signature = pkcs1_15.new(key).sign(hash_message)
# Step 5: Base64 encode the signature
signature_base64 = base64.b64encode(signature)
# Step 6: Save the public key, message, and signature in a file
with open("final-msg.txt", "wb") as f:
f.write(public_key + b"\n" + message_bytes + b"\n" + signature_base64)
# Step 7: Encrypt the final message with the provided public key
recipient_public_key_pem = """-----BEGIN PGP PUBLIC KEY BLOCK-----
.............
-----END PGP PUBLIC KEY BLOCK-----"""
# Step 7: Encrypt the final message with the provided public key
# **Increased key size to 2048 bits**
key = RSA.generate(2048)
private_key = key.export_key()
public_key = key.publickey().export_key()
# Use pgpy to import the PGP key
recipient_key, _ = pgpy.PGPKey.from_blob(recipient_public_key_pem)
# Re-initialize cipher_rsa with the new key
cipher_rsa = PKCS1_OAEP.new(key)
# Encrypt final-msg.txt with the recipient's public key
with open("final-msg.txt", "rb") as f:
final_msg = f.read()
encrypted_message = cipher_rsa.encrypt(final_msg)
# Step 8: Base64 encode the encrypted message
encrypted_message_base64 = base64.b64encode(encrypted_message)
# Step 9: Save the base64 encoded encrypted message
with open("final_msg_base64.txt.gpg", "wb") as f:
f.write(encrypted_message_base64)
print("Encryption completed. The Base64-encoded encrypted message is saved in 'final_msg_base64.txt.gpg'")
</code></pre>
<p>In the spirit of <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY</a>, I tried 2nd approach:</p>
<pre class="lang-py prettyprint-override"><code># !pip install python-gnupg
import gnupg
import base64
# Initialize GPG
gpg = gnupg.GPG()
# 1. Generate an RSA Key Pair
input_data = gpg.gen_key_input(
key_type="RSA",
key_length=1024,
name_real="Alice Bob", # Replace with your name
name_email="Alice.Bob@gmail.com", # Replace with your email
expire_date=0 # Key does not expire
)
key = gpg.gen_key(input_data)
print(f"Generated Key Fingerprint: {key.fingerprint}")
# 2. Create the message
message = f"This is my secret message"
with open("msg.txt", "w") as f:
f.write(message)
# 3. Sign the message
#signed_data = gpg.sign(message, default_key=key.fingerprint, detach=True)
signed_data = gpg.sign(message, keyid=key.fingerprint, detach=True)
# Write the signature data to a file in binary mode
with open("msg.sig", "wb") as f: # Open in binary write mode ("wb")
f.write(signed_data.data) # Write the bytes directly
# 4. Base64 encode the signature
with open("msg.sig", "rb") as sig_file:
signature_base64 = base64.b64encode(sig_file.read()).decode("utf-8")
with open("msg-base64.sig", "w") as encoded_sig_file:
encoded_sig_file.write(signature_base64)
# 5. Export your public key
public_key = gpg.export_keys(key.fingerprint)
with open("my_public_key.asc", "w") as pubkey_file:
pubkey_file.write(public_key)
# 6. Concatenate the public key, message, and Base64-encoded signature
with open("final-msg.txt", "w") as final_file:
final_file.write(public_key)
final_file.write("\n" + message + "\n")
final_file.write(signature_base64)
# 7. Import recipient's public key
recipient_public_key ="""-----BEGIN PGP PUBLIC KEY BLOCK-----
..........
-----END PGP PUBLIC KEY BLOCK-----"""
# Replace with the actual provided public key
import_result = gpg.import_keys(recipient_public_key)
print("Imported Recipient's Public Key:", import_result.fingerprints)
# 8. Encrypt the final message with the recipient's public key
with open("final-msg.txt", "rb") as final_msg_file:
# Use encrypt instead of encrypt_file
encrypted_data = gpg.encrypt(
final_msg_file.read(), # Read the file content
import_result.fingerprints, # Pass recipients as positional argument
output="final-msg.txt.gpg"
)
if encrypted_data.ok:
print("Message encrypted successfully.")
else:
print(f"Encryption failed: {encrypted_data.status}") # Print the error status
print(encrypted_data.stderr) # Print any error details to help with debugging
# Only proceed to base64 encoding if encryption was successful
if encrypted_data.ok:
# 9. Base64 encode the encrypted file
with open("final-msg.txt.gpg", "rb") as encrypted_file:
encrypted_base64 = base64.b64encode(encrypted_file.read()).decode("utf-8")
with open("final_msg_base64.txt.gpg", "w") as encoded_encrypted_file:
encoded_encrypted_file.write(encrypted_base64)
# Output the Base64 encoded result
print("Base64 Encoded Encrypted Message:")
print(encrypted_base64)
else:
print("Skipping base64 encoding due to encryption failure.")
</code></pre>
<pre class="lang-none prettyprint-override"><code>WARNING:gnupg:potential problem: ERROR: key_generate 83918950
WARNING:gnupg:gpg returned a non-zero error code: 2
WARNING:gnupg:potential problem: FAILURE: sign 17
WARNING:gnupg:gpg returned a non-zero error code: 2
WARNING:gnupg:gpg returned a non-zero error code: 2
WARNING:gnupg:gpg returned a non-zero error code: 2
Generated Key Fingerprint:
Imported Recipient's Public Key: ['1AB8693860852A6B0EE7DD81B8F979BA0A99A039']
Encryption failed: invalid recipient
[GNUPG:] KEY_CONSIDERED 1AB8693860852A6B0EE7DD81B8F979BA0A99A039 0
gpg: 8D7830FB995B0A91: There is no assurance this key belongs to the named user
[GNUPG:] INV_RECP 10 1AB8693860852A6B0EE7DD81B8F979BA0A99A039
[GNUPG:] FAILURE encrypt 53
gpg: [stdin]: encryption failed: Unusable public key
Skipping base64 encoding due to encryption failure.
</code></pre>
<p>Any idea how I could manage to encrypt (base64) generated key within the secret message using target <code>recipient_public_key</code>? so hopefully recipient could use his public_key and read the secret message.</p>
|
<python><encryption><cryptography><base64><gnupg>
|
2024-11-09 18:11:24
| 1
| 2,056
|
Mario
|
79,173,339
| 48,956
|
Can never "import x.x" from directory "x" containing "x.py"?
|
<p>I have installed a custom package <code>pg_util</code> using</p>
<pre><code>cd ~/software/fingerWriterAI/pg_util
pip install -e .
</code></pre>
<p>pg_util has the structure:</p>
<pre><code>pg_util/ # repo directory
pg_util/setup.py
pg_util/pg_util # module directory
pg_util/pg_util/__init__.py # Empty, marks as a module
pg_util/pg_util/pg_util.py # Main module file
pg_util/tests/ ...
</code></pre>
<p>The following import works</p>
<pre><code>(py3) sir@cork:~$ python -c "from pg_util.pg_util import DatabaseParams"
</code></pre>
<p>But if I do the same from:</p>
<pre><code>(py3) sir@cork:~$ cd ~/software/fingerWriterAI/pg_util/pg_util
(py3) sir@cork:~/software/fingerWriterAI/pg_util/pg_util$ python -c "from pg_util.pg_util import DatabaseParams"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pg_util.pg_util'; 'pg_util' is not a package
(py3) sir@cork:~/software/fingerWriterAI/pg_util/pg_util$
</code></pre>
<p>Why can't I <code>from pg_util.pg_util import</code> as an absolute import from within the <code>pg_util/pg_util</code> directory? (I assume its because there's a <code>pg_util.py</code> file, but I didn't ask for <code>from pg_util import</code> or <code>import .pg_util</code>, I asked for <code>from pg_util.pg_util import</code>.</p>
<p>I neither want to choose a custom working directory every time I run a file in my IDE that lives inside a module, nor I want to not have a file that's the same as my module.</p>
<p>Is this issue specific to have having a pg_util file in the same directory as the current folder?</p>
<p>Am I out of luck?</p>
|
<python>
|
2024-11-09 17:33:35
| 0
| 15,918
|
user48956
|
79,173,187
| 7,483,211
|
DeepDiff regex_exclude_paths filters out everything, not just the path I want
|
<p>I'm using DeepDiff with <code>exclude_regex_paths="['seqid']"</code> to exclude certain fields, but I'm noticing that everything, not just the fields I want to exclude are excluded. Real differences, outside the path to be excluded, aren't being reported.</p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>from deepdiff import DeepDiff
data1 = {
"record1": {"seqid": "ABC123", "value": 1},
"record2": {"seqid": "DEF456", "value": 2}
}
data2 = {
"record1": {"seqid": "XYZ789", "value": 3}, # value changed from 1 to 3
"record2": {"seqid": "UVW321", "value": 4} # value changed from 2 to 4
}
diff = DeepDiff(data1, data2, exclude_regex_paths="['seqid']")
print(diff) # {} - Empty output, but value differences should be shown
</code></pre>
<p>I expected this to exclude only the 'seqid' differences while still showing the value differences, but I'm getting an empty diff. What am I doing wrong? I'm on DeepDiff version 8.0.1.</p>
<p>The documentation unfortunately doesn't really define the path syntax to use.</p>
<p>I found a related, inverse question: <a href="https://stackoverflow.com/questions/68027894/deepdiff-exclude-paths-regex-not-filtering-out-paths">DeepDiff exclude_paths regex not filtering out paths</a></p>
|
<python><python-deepdiff>
|
2024-11-09 16:20:16
| 1
| 10,272
|
Cornelius Roemer
|
79,173,053
| 726,373
|
How to convert character indices to BERT token indices
|
<p>I am working with a question-answer dataset <code>UCLNLP/adversarial_qa</code>.</p>
<pre><code>from datasets import load_dataset
ds = load_dataset("UCLNLP/adversarial_qa", "adversarialQA")
</code></pre>
<p>How do I map character-based answer indices to token-based indices after tokenizing the context and question together using a tokenizer like BERT. Here's an example row from my dataset:</p>
<pre><code>d0 = ds['train'][0]
d0
{'id': '7ba1e8f4261d3170fcf42e84a81dd749116fae95',
'title': 'Brain',
'context': 'Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior.',
'question': 'What sare the benifts of the blood brain barrir?',
'answers': {'text': ['isolated from the bloodstream'], 'answer_start': [195]},
'metadata': {'split': 'train', 'model_in_the_loop': 'Combined'}}
</code></pre>
<p>After tokenization, the answer indices are 56 and 16:</p>
<pre><code>from transformers import BertTokenizerFast
bert_tokenizer = BertTokenizerFast.from_pretrained('bert-large-uncased', return_token_type_ids=True)
bert_tokenizer.decode(bert_tokenizer.encode(d0['question'], d0['context'])[56:61])
'isolated from the bloodstream'
</code></pre>
<p>I want to create a new dataset with the answer's token indices, e.g., 56 ad 60.</p>
<p>This is from a <a href="https://www.linkedin.com/learning/introduction-to-transformer-models-for-nlp/bert-for-question-answering?autoSkip=true&resume=false" rel="nofollow noreferrer">linkedin learning class</a>. The instructor did the conversion and created the csv file but he did not share it or the code to do that. This is the expected result:<a href="https://i.sstatic.net/GsZ6mfcQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsZ6mfcQ.png" alt="QA dataset with token answer indices" /></a></p>
|
<python><nlp><dataset><large-language-model><bert-language-model>
|
2024-11-09 15:15:33
| 1
| 642
|
Jack Peng
|
79,172,783
| 317,797
|
Polars SQL CASE
|
<p>Is this a bug, non-conformant behavior, or standardized behavior? A Polars SQL statement is calculating the average of values based on a condition. The CASE WHEN doesn't include an ELSE because those values should be ignored. Polars complains that an ELSE is required. If I include an ELSE, with no value, it's a syntax error. The solution is to use ELSE NULL. For comparison, duckdb doesn't required an ELSE. Should I open an issue on github? Is ELSE NULL the conforming solution? Or is duckdb giving me a break?</p>
<pre><code>SELECT AVG(CASE WHEN A <= B OR A <= C THEN D END) FROM df
</code></pre>
|
<python><sql><case><python-polars><duckdb>
|
2024-11-09 12:42:11
| 2
| 9,061
|
BSalita
|
79,172,747
| 938,126
|
Polars NDCG optimized calculation
|
<p>The problem here is to implement NDCG calculation on Polars that would be efficient for huge datasets.</p>
<p>Main idea of NDCG is to calculate DCG and IDCG, let's skip the gain part and only think about discount part, which depends on ranks from ideal and proposed orderings.</p>
<p>So the tricky part for me here is to properly and effectively calculate the positions of similar items from ideal and proposed parts, i.e.:</p>
<pre><code>ideal: a b c d e f
propsed: d b g e h
</code></pre>
<p>so we have intersection of <code>{b, d, e}</code> items with <code>idx_ideal=[2,4,5]</code> (starting from 1) and <code>idx_propsed=[2,1,4]</code></p>
<p>So what I want is to calculate those idx_proposed and idx_ideal for dataframe with columns <code>(user, ideal, proposed)</code>,
so the resulting DF would have columns: <code>(user, ideal, proposed, idx_ideal, idx_propsed)</code></p>
<pre><code># so its only ideal idx, then I create extra for proposed idx and join them
(
df
.explode('ideal')
.with_columns(idx=pl.int_range(pl.len()).over('user'))
.filter(pl.col('ideal').is_in(pl.col('proposed')))
.group_by('user', maintain_order=True)
.agg(pl.col('idx'))
)
</code></pre>
<p>I explode ideal over user and find position by adding extra idx column and leaving only ideal=proposed rows, but this results in extra DF with a subset of rows, which I would have to join back, which is probably not very optimal. Then I have to calculate it once again for proposed side.</p>
<p>Moreover, I will have to explode over <code>(idx_ideal, idx_proposed)</code> on the next step to calculate user's IDCG, DCG and NDCG.
Could you help me optimize those calculations?</p>
<p>I think I should use that users do not interact with each-other and separate rows could be processed separately.</p>
<p>Here is the random data generator</p>
<pre><code>import polars as pl
import random
num_users = 100_000
min_len = 10
max_len = 200
item_range = 10_000
def generate_user_data():
length = random.randint(min_len, max_len)
ideal = random.sample(range(item_range), length)
length = random.randint(min_len, max_len)
predicted = random.sample(range(item_range), length)
return ideal, predicted
data = []
for user_id in range(num_users):
ideal, predicted = generate_user_data()
data.append({
'user': user_id,
'ideal': ideal,
'proposed': predicted
})
df = pl.DataFrame(data)
print(df.head())
</code></pre>
<blockquote>
<pre><code>shape: (5, 3)
┌──────┬──────────────────────┬──────────────────────┐
│ user ┆ ideal ┆ proposed │
│ --- ┆ --- ┆ --- │
│ i64 ┆ list[i64] ┆ list[i64] │
╞══════╪══════════════════════╪══════════════════════╡
│ 0 ┆ [9973, 313, … 5733] ┆ [8153, 3461, … 4602] │
│ 1 ┆ [3756, 9053, … 1014] ┆ [435, 9407, … 6159] │
│ 2 ┆ [8152, 1615, … 2873] ┆ [5078, 9006, … 8157] │
│ 3 ┆ [6104, 2929, … 2606] ┆ [5110, 790, … 363] │
│ 4 ┆ [1863, 6801, … 271] ┆ [5571, 5555, … 5591] │
└──────┴──────────────────────┴──────────────────────┘
</code></pre>
</blockquote>
|
<python><optimization><python-polars><ranking>
|
2024-11-09 12:24:08
| 2
| 363
|
Sindbag
|
79,172,721
| 15,245,889
|
Is there a type to represent any JSON serializable object in Python?
|
<p>It is my understanding that <code>json.load</code> returns <code>any</code>.</p>
<p>I do not believe I can change the built-in typing, but I think it would be better to use more specific typing in my programs, since not all objects are JSON serializable. However, such list would be long and repetitive.</p>
<p>Are there any recommendations?</p>
|
<python><json><python-typing>
|
2024-11-09 12:10:53
| 1
| 384
|
UCYT5040
|
79,172,585
| 13,802,418
|
Kivy Targeting IOS Kivy-ios libffi "C compiler cannot create executables" Error
|
<p>Im trying to compile simple kivy app to IOS.</p>
<p>System:
-Sonoma 14.6
-Apple M3 Chip</p>
<p><code>main.py</code>:</p>
<pre><code>from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.label import Label
class MyApp(App):
def build(self):
self.label = Label(text="Hello!")
button = Button(text="Change")
button.bind(on_press=self.update_label)
layout = BoxLayout(orientation='vertical')
layout.add_widget(self.label)
layout.add_widget(button)
return layout
def update_label(self, instance):
self.label.text = "Button pressed!"
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>As <a href="https://github.com/kivy/kivy-ios/tree/master" rel="nofollow noreferrer">documentation</a> says:</p>
<p>-Created virtual env. near main.py: <code>python3.9 -m venv venv</code></p>
<p>-Switching to venv: <code>. venv/bin/activate</code></p>
<p>-Installing kivy-ios: <code>pip install kivy-ios</code> (kivy-ios 2024.3.17)</p>
<p>-Installing kivy: <code>pip install kivy==2.3.0</code></p>
<p>After these, <code>python main.py</code> returns me app successfully.</p>
<p>Installed Xcode (version 16.1) from AppStore with iOS&macOS platform tools.</p>
<p>With <code>xcode-select --install</code> & <code>brew install autoconf automake libtool pkg-config</code> & <code>brew link libtool</code> installed some of components too.</p>
<p>My libffi recipe version is:
<code>[INFO ] Using the bundled version for recipe 'libffi' libffi 3.4.4 </code></p>
<p>When I try to build simple app with <code>toolchain build python3 kivy</code> I get error about libffi.</p>
<p><code>[INFO ] Recipe order is ['hostopenssl', 'libffi', 'libpng', 'openssl', 'sdl2', 'hostpython3', 'sdl2_image', 'sdl2_mixer', 'sdl2_ttf', 'python3', 'ios', 'pyobjus', 'kivy']</code></p>
<p><code>hostopenssl</code> successfully downloading&installing.
<code>libffi</code> successfully downloading but cannot installing:</p>
<p>Running shell: <code>[INFO ] Running Shell: /usr/bin/tar ('-C', '/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64', '-xv', '-z', '-f', '/Users/mertmehmetaraz/synapp/Test1/.cache/libffi-libffi-3.4.4.tar.gz') {'_iter': True, '_out_bufsize': 1, '_err_to_out': True}</code></p>
<pre><code>...
...
[Succesfully OUTPUTS HERE]
...
...
[DEBUG ] configure: error: in `/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/build_iphoneos-armv7':
[DEBUG ] configure: error: C compiler cannot create executables
[DEBUG ] See `config.log' for more details
[DEBUG ] Traceback (most recent call last):
[DEBUG ] File "/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/generate-darwin-source-and-headers.py", line 314, in <module>
[DEBUG ] generate_source_and_headers(
[DEBUG ] File "/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/generate-darwin-source-and-headers.py", line 283, in generate_source_and_headers
[DEBUG ] build_target(ios_device_armv7_platform, platform_headers)
[DEBUG ] File "/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/generate-darwin-source-and-headers.py", line 232, in build_target
[DEBUG ] subprocess.check_call(['../configure', '-host', platform.triple], env=env)
[DEBUG ] File "/opt/homebrew/Cellar/python@3.9/3.9.20/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 373, in check_call
[DEBUG ] raise CalledProcessError(retcode, cmd)
[DEBUG ] subprocess.CalledProcessError: Command '['../configure', '-host', 'arm-apple-darwin11']' returned non-zero exit status 77.
Exception in thread background thread for pid 68035:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.9/3.9.20/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/opt/homebrew/Cellar/python@3.9/3.9.20/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 1634, in wrap
fn(*rgs, **kwargs)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 2636, in background_thread
handle_exit_code(exit_code)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 2327, in fn
return self.command.handle_command_exit_code(exit_code)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 821, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_1:
RAN: /Users/mertmehmetaraz/synapp/Test1/venv/bin/python3 generate-darwin-source-and-headers.py --only-ios
STDOUT:
Skipping i386
checking build system type... aarch64-apple-darwin23.6.0
checking host system type... x86_64-apple-darwin13
checking target system type... x86_64-apple-darwin13
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for x86_64-apple-darwin13-strip... no
checking for strip... strip
checking for a race-free mkdir -p... ../install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for x86_64-apple-darwin13-gcc... xcrun -sdk iphonesimulator clang -target x86_64-apple-ios-simulator
chec... (21159 more, please see e.stdout)
STDERR:
Traceback (most recent call last):
File "/Users/mertmehmetaraz/synapp/Test1/venv/bin/toolchain", line 8, in <module>
sys.exit(main())
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 1670, in main
ToolchainCL()
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 1407, in __init__
getattr(self, args.command)()
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 1483, in build
build_recipes(args.recipe, ctx)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 1231, in build_recipes
recipe.execute()
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 758, in execute
self.build_all()
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 78, in _cache_execution
f(self, *args, **kwargs)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 858, in build_all
self.build(plat)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 78, in _cache_execution
f(self, *args, **kwargs)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 844, in build
self.build_platform(plat)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/recipes/libffi/__init__.py", line 29, in build_platform
shprint(python3, "generate-darwin-source-and-headers.py", "--only-ios")
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/kivy_ios/toolchain.py", line 60, in shprint
for line in cmd:
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 877, in __next__
self.wait()
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 794, in wait
self.handle_command_exit_code(exit_code)
File "/Users/mertmehmetaraz/synapp/Test1/venv/lib/python3.9/site-packages/sh.py", line 821, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_1:
RAN: /Users/mertmehmetaraz/synapp/Test1/venv/bin/python3 generate-darwin-source-and-headers.py --only-ios
STDOUT:
Skipping i386
checking build system type... aarch64-apple-darwin23.6.0
checking host system type... x86_64-apple-darwin13
checking target system type... x86_64-apple-darwin13
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for x86_64-apple-darwin13-strip... no
checking for strip... strip
checking for a race-free mkdir -p... ../install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for x86_64-apple-darwin13-gcc... xcrun -sdk iphonesimulator clang -target x86_64-apple-ios-simulator
chec... (21159 more, please see e.stdout)
STDERR:
</code></pre>
<p><code>config.log</code>:</p>
<pre><code>This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake.
It was created by libffi configure 3.4.4, which was generated by GNU Autoconf 2.71. Invocation command line was
$ ../configure -host arm-apple-darwin11
## --------- ##
## Platform. ##
## --------- ##
hostname = Merts-MacBook-Air.local uname -m = arm64 uname -r = 23.6.0 uname -s = Darwin uname -v = Darwin Kernel Version 23.6.0: Fri Jul 5 17:56:39 PDT 2024; root:xnu-10063.141.1~2/RELEASE_ARM64_T8122
/usr/bin/uname -p = arm /bin/uname -X = unknown
/bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = Mach kernel version: Darwin Kernel Version 23.6.0: Fri Jul 5 17:56:39 PDT 2024; root:xnu-10063.141.1~2/RELEASE_ARM64_T8122 Kernel configured for up to 8 processors. 8 processors are physically available. 8 processors are logically available. Processor type: arm64e (ARM64E) Processors active: 0 1 2 3 4 5 6 7 Primary memory available: 8.00 gigabytes Default processor set: 597 tasks, 2248 threads, 8 processors Load average: 3.50, Mach factor: 4.58 /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown
PATH: /usr/gnu/bin/ PATH: /usr/local/bin/ PATH: /bin/ PATH: /usr/bin/ PATH: ./
## ----------- ##
## Core tests. ##
## ----------- ##
configure:3073: looking for aux files: ltmain.sh compile missing install-sh config.guess config.sub configure:3086: trying ../ configure:3115: ../ltmain.sh found configure:3115: ../compile found configure:3115: ../missing found configure:3097: ../install-sh found configure:3115: ../config.guess found configure:3115: ../config.sub found configure:3236: checking build system type configure:3251: result: aarch64-apple-darwin23.6.0 configure:3271: checking host system type configure:3285: result: arm-apple-darwin11 configure:3305: checking target system type configure:3319: result: arm-apple-darwin11 configure:3417: checking for gsed configure:3453: result: sed configure:3482: checking for a BSD-compatible install configure:3555: result: /usr/bin/install -c configure:3566: checking whether build environment is sane configure:3621: result: yes configure:3673: checking for arm-apple-darwin11-strip configure:3708: result: no configure:3718: checking for strip configure:3739: found /usr/bin/strip configure:3750: result: strip configure:3776: checking for a race-free mkdir -p configure:3820: result: ../install-sh -c -d configure:3827: checking for gawk configure:3862: result: no configure:3827: checking for mawk configure:3862: result: no configure:3827: checking for nawk configure:3862: result: no configure:3827: checking for awk configure:3848: found /usr/bin/awk configure:3859: result: awk configure:3870: checking whether make sets $(MAKE) configure:3893: result: yes configure:3923: checking whether make supports nested variables configure:3941: result: yes configure:4105: checking for arm-apple-darwin11-gcc configure:4137: result: xcrun -sdk iphoneos clang -target armv7-apple-ios configure:4535: checking for C compiler version configure:4544: xcrun -sdk iphoneos clang -target armv7-apple-ios --version >&5 Apple clang version 16.0.0 (clang-1600.0.26.4) Target: armv7-apple-ios Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin configure:4555: $? = 0 configure:4544: xcrun -sdk iphoneos clang
-target armv7-apple-ios -v >&5 Apple clang version 16.0.0 (clang-1600.0.26.4) Target: armv7-apple-ios Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin configure:4555: $? = 0 configure:4544: xcrun -sdk iphoneos clang
-target armv7-apple-ios -V >&5 clang: error: argument to '-V' is missing (expected 1 value) clang: error: no input files configure:4555: $? = 1 configure:4544: xcrun -sdk iphoneos clang
-target armv7-apple-ios -qversion >&5 clang: error: unknown argument '-qversion'; did you mean '--version'? clang: error: no input files configure:4555: $? = 1 configure:4544: xcrun -sdk iphoneos clang
-target armv7-apple-ios -version >&5 clang: error: unknown argument '-version'; did you mean '--version'? clang: error: no input files configure:4555: $? = 1 configure:4575: checking whether the C compiler works configure:4597: xcrun -sdk iphoneos clang -target armv7-apple-ios -miphoneos-version-min=9.0 -fembed-bitcode conftest.c >&5 ld: warning: -bitcode_bundle is no longer supported and will be ignored ld: -mllvm and -bitcode_bundle (Xcode setting ENABLE_BITCODE=YES) cannot be used together clang: error: linker command failed with exit code 1 (use -v to see invocation) configure:4601: $? = 1 configure:4641: result: no configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "libffi" |
#define PACKAGE_TARNAME "libffi" | #define PACKAGE_VERSION "3.4.4" | #define PACKAGE_STRING "libffi 3.4.4" | #define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues" | #define PACKAGE_URL "" |
#define PACKAGE "libffi" | #define VERSION "3.4.4" | /* end confdefs.h. */ | | int | main (void) | { | | ; | return 0; | } configure:4646: error: in `/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/build_iphoneos-armv7': configure:4648: error: C compiler cannot create executables See `config.log' for more details
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=aarch64-apple-darwin23.6.0 ac_cv_env_CCASFLAGS_set= ac_cv_env_CCASFLAGS_value= ac_cv_env_CCAS_set= ac_cv_env_CCAS_value= ac_cv_env_CPPFLAGS_set= ac_cv_env_CPPFLAGS_value= ac_cv_env_CXXCPP_set= ac_cv_env_CXXCPP_value= ac_cv_env_LT_SYS_LIBRARY_PATH_set= ac_cv_env_LT_SYS_LIBRARY_PATH_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set=set ac_cv_env_host_alias_value=arm-apple-darwin11 ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_host=arm-apple-darwin11 ac_cv_path_ax_enable_builddir_sed=sed ac_cv_path_install='/usr/bin/install -c' ac_cv_prog_AWK=awk ac_cv_prog_CC='xcrun -sdk iphoneos clang -target armv7-apple-ios' ac_cv_prog_ac_ct_STRIP=strip ac_cv_prog_make_make_set=yes ac_cv_target=arm-apple-darwin11 am_cv_make_support_nested_variables=yes
## ----------------- ##
## Output variables. ##
## ----------------- ##
ACLOCAL='${SHELL} '\''/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/missing'\'' aclocal-1.16' ALLOCA='' AMDEPBACKSLASH='' AMDEP_FALSE='' AMDEP_TRUE='' AMTAR='$${TAR-tar}' AM_BACKSLASH='\' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' AM_DEFAULT_VERBOSITY='1' AM_LTLDFLAGS='' AM_RUNTESTFLAGS='' AM_V='$(V)' AR='' AUTOCONF='${SHELL} '\''/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/missing'\'' autoconf' AUTOHEADER='${SHELL} '\''/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/missing'\'' autoheader' AUTOMAKE='${SHELL} '\''/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/missing'\'' automake-1.16' AWK='awk' BUILD_DOCS_FALSE='' BUILD_DOCS_TRUE='' CC='xcrun -sdk iphoneos clang -target armv7-apple-ios' CCAS='' CCASDEPMODE='' CCASFLAGS='' CCDEPMODE='' CFLAGS='-miphoneos-version-min=9.0 -fembed-bitcode' CPPFLAGS='' CSCOPE='cscope' CTAGS='ctags' CXX='' CXXCPP='' CXXDEPMODE='' CXXFLAGS='' CYGPATH_W='echo' DEFS='' DEPDIR='' DLLTOOL='' DSYMUTIL='' DUMPBIN='' ECHO_C='\c' ECHO_N='' ECHO_T='' EGREP='' ETAGS='etags' EXEEXT='' FFI_DEBUG_FALSE='' FFI_DEBUG_TRUE='' FFI_EXEC_TRAMPOLINE_TABLE='' FFI_EXEC_TRAMPOLINE_TABLE_FALSE='' FFI_EXEC_TRAMPOLINE_TABLE_TRUE='' FGREP='' FILECMD='' GREP='' HAVE_LONG_DOUBLE='' HAVE_LONG_DOUBLE_VARIANT='' INSTALL_DATA='${INSTALL} -m 644' INSTALL_PROGRAM='${INSTALL}' INSTALL_SCRIPT='${INSTALL}' INSTALL_STRIP_PROGRAM='$(install_sh) -c
-s' LD='xcrun -sdk iphoneos ld -target armv7-apple-ios' LDFLAGS='' LIBFFI_BUILD_VERSIONED_SHLIB_FALSE='' LIBFFI_BUILD_VERSIONED_SHLIB_GNU_FALSE='' LIBFFI_BUILD_VERSIONED_SHLIB_GNU_TRUE='' LIBFFI_BUILD_VERSIONED_SHLIB_SUN_FALSE='' LIBFFI_BUILD_VERSIONED_SHLIB_SUN_TRUE='' LIBFFI_BUILD_VERSIONED_SHLIB_TRUE='' LIBOBJS='' LIBS='' LIBTOOL='' LIPO='' LN_S='' LTLIBOBJS='' LT_SYS_LIBRARY_PATH='' MAINT='' MAINTAINER_MODE_FALSE='' MAINTAINER_MODE_TRUE='' MAKEINFO='${SHELL} '\''/Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/missing'\'' makeinfo' MANIFEST_TOOL='' MKDIR_P='../install-sh -c -d' NM='' NMEDIT='' OBJDUMP='' OBJEXT='' OPT_LDFLAGS='' OTOOL64='' OTOOL='' PACKAGE='libffi' PACKAGE_BUGREPORT='http://github.com/libffi/libffi/issues' PACKAGE_NAME='libffi' PACKAGE_STRING='libffi 3.4.4' PACKAGE_TARNAME='libffi' PACKAGE_URL='' PACKAGE_VERSION='3.4.4' PATH_SEPARATOR=':' PRTDIAG='' RANLIB='' READELF='' SECTION_LDFLAGS='' SED='' SET_MAKE='' SHELL='/bin/sh' STRIP='strip' TARGET='' TARGETDIR='' TARGET_OBJ='' TESTSUBDIR_FALSE='' TESTSUBDIR_TRUE='' VERSION='3.4.4' ac_ct_AR='' ac_ct_CC='' ac_ct_CXX='' ac_ct_DUMPBIN='' am__EXEEXT_FALSE='' am__EXEEXT_TRUE='' am__fastdepCCAS_FALSE='' am__fastdepCCAS_TRUE='' am__fastdepCC_FALSE='' am__fastdepCC_TRUE='' am__fastdepCXX_FALSE='' am__fastdepCXX_TRUE='' am__include='' am__isrc=' -I$(srcdir)' am__leading_dot='.' am__nodep='' am__quote='' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' ax_enable_builddir_sed='sed' bindir='${exec_prefix}/bin' build='aarch64-apple-darwin23.6.0' build_alias='' build_cpu='aarch64' build_os='darwin23.6.0' build_vendor='apple' datadir='${datarootdir}' datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' dvidir='${docdir}' exec_prefix='NONE' host='arm-apple-darwin11' host_alias='arm-apple-darwin11' host_cpu='arm' host_os='darwin11' host_vendor='apple' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' install_sh='${SHELL} /Users/mertmehmetaraz/synapp/Test1/build/libffi/iphoneos-arm64/libffi-3.4.4/install-sh' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/libexec' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' mandir='${datarootdir}/man' mkdir_p='$(MKDIR_P)' oldincludedir='/usr/include' pdfdir='${docdir}' prefix='NONE' program_transform_name='s,x,x,' psdir='${docdir}' runstatedir='${localstatedir}/run' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' sys_symbol_underscore='' sysconfdir='${prefix}/etc' target='arm-apple-darwin11' target_alias='arm-apple-darwin11' target_cpu='arm' target_os='darwin11' target_vendor='apple' tmake_file='' toolexecdir='' toolexeclibdir=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define PACKAGE_NAME "libffi"
#define PACKAGE_TARNAME "libffi"
#define PACKAGE_VERSION "3.4.4"
#define PACKAGE_STRING "libffi 3.4.4"
#define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
#define PACKAGE_URL ""
#define PACKAGE "libffi"
#define VERSION "3.4.4"
configure: exit 77
</code></pre>
<p>After changing libffi version to 3.4.5 result was same. Any help or idea would be great. Thanks.</p>
|
<python><ios><kivy>
|
2024-11-09 10:55:24
| 1
| 505
|
320V
|
79,172,501
| 9,128,863
|
Python Scipy: takes 1 positional argument but 2 were given
|
<p>I'm trying to implement simple optimisation with Scipy lib:</p>
<pre><code> def f1(x):
sum(x)
initial_x_1 = np.ndarray([1])
res = optimize.minimize(f1, initial_x_1, [], 'COBYLA')
</code></pre>
<p>But got the error:</p>
<pre><code> fx = fun(np.copy(x), *args)
^^^^^^^^^^^^^^^^^^^^^^
TypeError: f1() takes 1 positional argument but 2 were given
</code></pre>
|
<python><scipy>
|
2024-11-09 09:58:22
| 1
| 1,424
|
Jelly
|
79,172,444
| 9,381,746
|
Accessing the end of of a file being written while live plotting of high speed datastream
|
<p>My question refers to the great answer of the following question:</p>
<p><a href="https://stackoverflow.com/questions/72697369/real-time-data-plotting-from-a-high-throughput-source">Real time data plotting from a high throughput source</a></p>
<p>As the <code>gen.py</code> code of this answer was growing fast, I wrote own version <code>gen_own.py</code> below, which essentially imposes a delay of 1 ms before writing a new data on the file. I also adapted the code <code>plot.py</code> and wrote my own <code>plot_own.py</code> essentially adding debugging statements. Although I tried to read the doc on the several components of the <code>f.seek(0, io.SEEK_END)</code> line, there are still several points that I don't understand. Here are all the questions that I have</p>
<p>My question is: how can we adapt <code>plot_own.py</code> to work with <code>gen_own.py</code> (with a slower datastream)</p>
<p>Here is the code <code>gen_own.py</code>:</p>
<pre><code>#!/usr/bin/env python3
import time
import random
LIMIT_TIME = 100 # s
DATA_FILENAME = "data.txt"
def gen_data(filename, limit_time):
start_time = time.time()
elapsed_time = time.time() - start_time
old_time = time.time()
with open(filename, "w") as f:
while elapsed_time < limit_time:
new_time = time.time()
if new_time > old_time + 0.001:
f.write(f"{time.time():30.12f} {random.random():30.12f}\n") # produces 64 bytes
f.flush()
old_time = time.time()
elapsed = old_time - start_time
gen_data(DATA_FILENAME, LIMIT_TIME)
</code></pre>
<p>for competeness here is the code of <code>gen.py</code> (copied from original question)</p>
<pre><code>#!/usr/bin/env python3
import time
import random
LIMIT_TIME = 100 # s
DATA_FILENAME = "data.txt"
def gen_data(filename, limit_time):
start_time = time.time()
elapsed_time = time.time() - start_time
with open(filename, "w") as f:
while elapsed_time < limit_time:
f.write(f"{time.time():30.12f} {random.random():30.12f}\n") # produces 64 bytes
f.flush()
elapsed = time.time() - start_time
gen_data(DATA_FILENAME, LIMIT_TIME)
</code></pre>
<p>Here is the code <code>plot_own.py</code>:</p>
<pre><code>#!/usr/bin/env python3
import io
import time
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.animation
BUFFER_LEN = 64
DATA_FILENAME = "data.txt"
PLOT_LIMIT = 20
ANIM_FILENAME = "video.gif"
fig, ax = plt.subplots(1, 1, figsize=(10,8))
ax.set_title("Plot of random numbers from `gen.py`")
ax.set_xlabel("time / s")
ax.set_ylabel("random number / #")
ax.set_ylim([0, 1])
def get_data(filename, buffer_len, delay=0.0):
with open(filename, "r") as f:
print("f.seek(0, io.SEEK_END): " + str(f.seek(0, io.SEEK_END)))
data = f.read(buffer_len)
print("f.tell(): " + str(f.tell()))
print("f.readline(): " + f.readline())
print("data: " + data)
if delay:
time.sleep(delay)
return data
def animate(i, xs, ys, limit=PLOT_LIMIT, verbose=False):
# grab the data
try:
data = get_data(DATA_FILENAME, BUFFER_LEN)
if verbose:
print(data)
x, y = map(float, data.split())
if x > xs[-1]:
# Add x and y to lists
xs.append(x)
ys.append(y)
# Limit x and y lists to 10 items
xs = xs[-limit:]
ys = ys[-limit:]
else:
print(f"W: {time.time()} :: STALE!")
except ValueError:
print(f"W: {time.time()} :: EXCEPTION!")
else:
# Draw x and y lists
ax.clear()
ax.set_ylim([0, 1])
ax.plot(xs, ys)
# save video (only to attach here)
#anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1, frames=3 * PLOT_LIMIT, repeat=False)
#anim.save(ANIM_FILENAME, writer='imagemagick', fps=10)
#print(f"I: Saved to `{ANIM_FILENAME}`")
# show interactively
anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1)
plt.show()
plt.close()
</code></pre>
<p>Here is the output of <code>plot_own.py</code> when run simultaneously with <code>gen.py</code></p>
<pre><code>f.seek(0, io.SEEK_END): 36998872
f.tell(): 36998936
f.readline(): 1731141285.629011392593 0.423847536979
data: 1731141285.629006385803 0.946414017554
f.seek(0, io.SEEK_END): 37495182
f.tell(): 37495246
f.readline(): 1731141285.670451402664 0.405303398216
data: 1731141285.670446395874 0.103460518242
f.seek(0, io.SEEK_END): 38084306
f.tell(): 38084370
f.readline(): 1731141285.719735860825 0.360983611461
data: 1731141285.719730854034 0.318057761442
</code></pre>
<p>Here is the output of <code>plot_own.py</code> when run simultaneously with <code>gen_own.py</code></p>
<pre><code>W: 1731141977.7246473 :: EXCEPTION!
f.seek(0, io.SEEK_END): 156426
f.tell(): 156426
f.readline():
data:
W: 1731141977.7611823 :: EXCEPTION!
f.seek(0, io.SEEK_END): 158472
f.tell(): 158472
f.readline():
data:
W: 1731141977.79479 :: EXCEPTION!
f.seek(0, io.SEEK_END): 160518
f.tell(): 160518
f.readline(): 1731141977.828338146210 0.165056626254
data:
W: 1731141977.8283837 :: EXCEPTION!
f.seek(0, io.SEEK_END): 162626
f.tell(): 162626
f.readline():
data:
W: 1731141977.8621912 :: EXCEPTION!
f.seek(0, io.SEEK_END): 164734
f.tell(): 164734
f.readline():
data:
</code></pre>
|
<python><numpy><matplotlib><io><seek>
|
2024-11-09 09:23:51
| 1
| 5,557
|
ecjb
|
79,172,419
| 6,145,729
|
Extract certain word (case-insensitive) followed by numbers from Pandas df
|
<p>Can you extract a series of letters and numbers from bad freeform data in a dataframe?</p>
<p>I want to create a new column in the data frame with data that contains 'NEX' and a series of numbers after it.</p>
<pre><code>import pandas as pd
#Create a Dataframe
data = {
'ID':[1,2,3,4,5],
'PROGRAM': [ 'nbu 123456',
'NBU-123456',
'nex999999 b12',
'NXE999999 123',
'NBU123456 NEX999999']
}
df = pd.DataFrame(data)
</code></pre>
<p>I think I'm on the right lines with the below, but I somehow need to combine their functionality:-</p>
<pre><code>print(df['PROGRAM'].str.contains('NEX', na=False))
# does not deal with lower case & contains letters NEX not nessary in that order
print(df['PROGRAM'].str.extract(r'([NEX]+\d+)', expand=False))
</code></pre>
<p>The result should only bring back NEX999999 (including converting lowercase to uppercase)</p>
<pre><code>df['NEX'] = df['PROGRAM'].str.blahblahblah
</code></pre>
|
<python><pandas><dataframe>
|
2024-11-09 09:03:46
| 1
| 575
|
Lee Murray
|
79,171,950
| 298,607
|
Have an object serialize itself
|
<p>With either JSON or Pickle, I can instantiate an object and save that object like so:</p>
<pre><code>import pickle
thing = Thingy(some_data) # takes a while...
# ... do stuff with thing then save it since it mostly is the same
with open(filename, 'wb') as f:
pickle.dump(thing, f)
</code></pre>
<p>And read it back if the serialization file exists:</p>
<pre><code>if thingy_file.exists():
with open(filename, 'rb') as f:
thing=pickle.load(f) # Thingy(some_data) restored
thing.updateyourself() # pretty fast
else:
thing=Thingy(some_data) # takes a while...
</code></pre>
<p>However, this serialization is being done externally to the instantiation of Thingy object...</p>
<p>Now suppose I have some custom object with internal data that takes a <em>long time</em> to create:</p>
<pre><code>class Thingy:
def __init__(self, stuff...)
# this take a while the first time, but could be instant if read back
# pseudo'ish:
if self.file.exists():
print(f'Existing Thingy {self.file} found...')
# What I am trying to do here is read from a serialization file the
# data comprising the last time THIS INSTANCE of Thingy existed
with open(self.file, 'rb') as f:
self=pickle.load(f)
else:
# bummer this is gonna take a while...
# the rest of the object code...
# HERE we either have calculated a Thingy or have read it back into itself
# The instance may be modified -- now save its own self to be re-read in future:
def save_my_own_instance(self):
# now self is this instance of Thingy that will be serialized to disk
# magic I do not know...
with open(self.file, 'wb') as f:
pickle.dump(my_own_instance_data, f, pickle.HIGHEST_PROTOCOL)
</code></pre>
<p>So question: What is the method for an object to use serialization to restore its own data into its own instance? What about saving itself inside its own instance?</p>
<hr />
<p>Some specifics of what is a <code>Thingy</code>:</p>
<pre><code>from pathlib import Path
p=Path(a_HUGE_tree_of_data_files) # 750k plus large image files
tree_data=Thingy(p)
# initial run on p is hour+
# the tree changes very little, so an update of Thingy(p)
# only takes seconds
# I don't want to educate the user to check for a previous run
# It should be automatic
</code></pre>
|
<python><json><object><serialization><pickle>
|
2024-11-09 01:18:23
| 1
| 104,598
|
dawg
|
79,171,845
| 6,227,035
|
Pymongo - fetch documents by multiple tags
|
<p>I need to fetch documents given a list of tags, but I am having trouble finding the right syntax. For example, I have this collection:</p>
<pre><code>{
"name": "Mike",
"roll_no": "45",
"branch" : "75",
"tags": [tag1, tag2],
}
{
"name": "Karl",
"roll_no": "4",
"branch" : "5",
"tags": [tag3, tag2],
}
</code></pre>
<p>I am using this command <code>collection_name.find_one({"tags": ['tag1', 'tag2']} )</code> to retrieve both documents (e.g. Mike and Karl), but it is not working.
Any idea about what I am doing wrong?</p>
<p>Thank you!</p>
|
<python><find><pymongo>
|
2024-11-08 23:38:15
| 2
| 1,974
|
Sim81
|
79,171,727
| 6,145,729
|
extract first sequence of numbers from a pandas column
|
<p>I have imported a CSV into a pandas data frame; however, the column I need to use is freeform and in bad shape.</p>
<p>I need to extract the first series of numbers after the word NBU or the first series of numbers in the string. See some examples below:-</p>
<pre><code>nbu 123456
NBU-123456
nbu/ 123456 blah12
123456
123456_123
</code></pre>
<p>All of the above should be cleaned to produce <strong>123456</strong>. Note that the number of integers returned is based on how many are in a continual sequence; i.e., nbu12 3455 should only return 12.</p>
<p>I will then use something like this to fix the data:-</p>
<pre><code>df['col'] = df['col'].str.
</code></pre>
|
<python><pandas><dataframe>
|
2024-11-08 22:17:07
| 1
| 575
|
Lee Murray
|
79,171,631
| 309,483
|
How do I determine whether a ZoneInfo is an alias?
|
<p>I am having trouble identifying whether a ZoneInfo is built with an alias:</p>
<pre><code>> a = ZoneInfo('Atlantic/Faeroe')
> b = ZoneInfo('Atlantic/Faroe')
> a == b
False
</code></pre>
<p>It seems like these ZoneInfos are identical in practice. How do I identify that they are the same, as opposed to e.g. EST and UTC which are different?</p>
|
<python><timezone><identity><zoneinfo>
|
2024-11-08 21:29:32
| 2
| 21,445
|
Janus Troelsen
|
79,171,553
| 929,732
|
Is there a reason that some keys and values will not load in TOML? (Python config import)
|
<p>I've set up my flask app to read my config file...</p>
<pre><code>app.config.from_file("../CONFIGS/config.py", lambda f: tomllib.load(f.buffer))
f.write(str(app.config))
</code></pre>
<p>when I go to see the output some of the line from the config are there...</p>
<p><code>MY_VARIABLE = "1"</code></p>
<p>but</p>
<p><code>my_variable = "1"</code></p>
<p>is missing from the output...</p>
<p>Am I missing something?</p>
<p>using the 3.11 built in TOMLLIB with python 3.11</p>
|
<python><flask><config><python-3.11><toml>
|
2024-11-08 21:03:03
| 1
| 1,489
|
BostonAreaHuman
|
79,171,534
| 6,552,666
|
How to write a chat client without waiting for input?
|
<p>This is my attempt at the loop for an extremely basic IRC client.</p>
<pre><code>while True:
try:
resp = s.recv(1024)
except BlockingIOError:
pass
else:
text = resp.decode('utf-8')
print('Received Message: '+text)
if text.startswith('PING'):
msg = 'PONG ' + text.split()[1]
s.send(bytes(msg, 'utf-8'))
message = input('>')
s.send(bytes(message, 'utf-8'))
</code></pre>
<p>Above, I set blocking to false. I'm trying to figure out how to change it so that it doesn't wait for input to go through, but will continue to receive messages. I don't mind if the output looks something like:</p>
<pre><code>> Here's the start of my message
INCOMING MESSAGE RECEIVED
and here's the end of it.
</code></pre>
<p>as it's just a quick hack for a class. Any ideas on how to restructure my approach to this?</p>
|
<python><sockets><chat>
|
2024-11-08 20:52:35
| 0
| 673
|
Frank Harris
|
79,171,453
| 2,962,555
|
chroma in the docker cannot be connected from another docker service
|
<p>I am trying to talk to chroma service (in docker container) from service-a (also in docker container). However, when I try to <code>ChromaConnector.get_instance()</code>, I got error below:</p>
<pre><code>Could not connect to a Chroma server. Are you sure it is running?
</code></pre>
<p>I tried to do following from the service-a container</p>
<pre><code>curl http://chroma:8000
curl http://chroma:8000/api/v1/collections
curl http://chroma:8000/api/v1
</code></pre>
<p>The <code>curl http://chroma:8000</code> gives error</p>
<pre><code>{"detail":"Not Found"}
</code></pre>
<p>But <code>curl http://chroma:8000/api/v1/collections</code> gives</p>
<pre><code>[]
</code></pre>
<p>and <code>curl http://chroma:8000/api/v1</code> returns</p>
<pre><code>{"nanosecond heartbeat":1731106743803134838}
</code></pre>
<p>This gives me the feeling that the server was up.</p>
<p>Below is the code</p>
<pre><code>class ChromaConnector:
_instance = None
@classmethod
def get_instance(cls):
if cls._instance is None:
cls._instance = chromadb.HttpClient(host='http://chroma', port=8000)
return cls._instance
</code></pre>
<p>below is my docker-compose.yaml file</p>
<pre><code>version: '3.8'
services:
service-a:
build:
context: .
dockerfile: Dockerfile
ports:
- "18001:18001"
environment:
- ENVIRONMENT=development
depends_on:
- kafka
- chroma
kafka:
image: bitnami/kafka:latest
ports:
- "9094:9094"
environment:
- KAFKA_ENABLE_KRAFT=yes
- KAFKA_CFG_BROKER_ID=1
- KAFKA_CFG_NODE_ID=1
- KAFKA_CFG_PROCESS_ROLES=broker,controller
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@:9093
- ALLOW_PLAINTEXT_LISTENER=yes
chroma:
image: chromadb/chroma:latest
ports:
- "18000:8000"
volumes:
- chroma-data:/chroma/chroma
restart: unless-stopped
command: "--workers 1 --host 0.0.0.0 --port 8000 --proxy-headers --log-config chromadb/log_config.yml --timeout-keep-alive 30"
environment:
- IS_PERSISTENT=TRUE
redis:
image: redis:latest
ports:
- "6379:6379"
volumes:
- redis-data:/data
volumes:
redis-data:
chroma-data:
</code></pre>
<p>Please advise. Thanks.</p>
|
<python><docker><docker-compose><chromadb>
|
2024-11-08 20:20:29
| 1
| 1,729
|
Laodao
|
79,171,275
| 6,145,729
|
Python using local variables in a module def
|
<p>Can I access a local variable in a module?</p>
<p>For example my main.py script has the variable <code>a</code> and I want my module function to print <code>a</code></p>
<pre><code># main.py
import sys
import os
sys.path.insert(0, os.path.abspath(r'C:\MyModules'))
import mymodule
a = 'It worked'
mymodule.test()
</code></pre>
<p>The below file is saved in the MyModules folder.</p>
<pre><code># mymodule.py
def test():
print(a)
</code></pre>
<p>If I add <code>a = 'It worked'</code> into mymodule.py it gets printed when I run main.py.</p>
<p>clearly the above is only an example to demonstrate functionalliy.</p>
|
<python>
|
2024-11-08 19:05:44
| 3
| 575
|
Lee Murray
|
79,171,112
| 1,028,270
|
How do I have a custom logging configuration package that plays nice with 3rd party packages?
|
<p>The standard way of instantiating a logger is at the module level (from what I understand), so I assume most 3rd party packages do this:</p>
<pre><code># consider this somepackage
import logging
logger = logging.getLogger(__name__)
def blah():
logger.info("slkdjflksjfkld")
...
</code></pre>
<p>But I want my applications to configure their project wide logging with a shared package</p>
<p>What happens is <em>after</em> my imports fileConfig is called and logging, formatting, etc is setup from scratch and it blows away any module level loggers that were in the imported packages.</p>
<p>So for example if <code>somepackage</code> instantiated logging like above and I do this:</p>
<pre><code># logging.getLogger() is going to be called immediately
from somepackage import blah
# Import my logging config package and get a logger
from mycomp.logging import get_logger
logger = get_logger(my custom params)
</code></pre>
<p>My <code>get_logger()</code> function configures project wide logging and it removes the handlers from <code>somepackage</code>.</p>
<p>But if I do it in this order <code>somepackage</code> will get my global logging config correctly:</p>
<pre><code># FIRST Import my logging config package and get a logger before anything else
from mycomp.logging import get_logger
logger = get_logger(my custom params)
# Now logging.getLogger() picks up my custom global logging config
from somepackage import blah
</code></pre>
<p>Am I missing something about how this works? Maybe 3rd party packages don't actually instantiate loggers on import and implement some kind of singleton? But from what I see in the wild they seem to do it at the module level.</p>
|
<python><python-logging>
|
2024-11-08 17:59:42
| 0
| 32,280
|
red888
|
79,171,020
| 10,750,541
|
Grouped gantt view with legend different than the color used (python)
|
<p>In need of the community's lights in here to achieve with plotly a gantt view like the following.</p>
<p>The dataframe looks like this, where the subproject is a unique code:</p>
<pre><code>project start end phase decision subproject
1 02-2017 03-2018 Phase_1 09-2023 a1
1 08-2017 07-2019 Phase_1,2 09-2023 a2
1 02-2018 11-2021 Phase_2 09-2023 a3
1 04-2021 02-2023 Phase_3 06-2022 a4
2 01-2019 02-2022 Phase_1 06-2022 b1
2 06-2019 07-2022 Phase_2 06-2022 b2
2 01-2021 03-2023 Phase_2,3 06-2022 b3
2 03-2022 02-2021 Phase_3 06-2022 b4
3 11-2017 02-2019 Phase_1 06-2022 c1
3 01-2018 06-2019 Phase_2 06-2022 c2
3 02-2018 07-2020 Phase_2 06-2022 c3
3 02-2019 06-2021 Phase_2,3 03-2023 c4
4 10-2019 10-2019 Phase_2 03-2023 d1
4 06-2019 08-2020 Phase_3 03-2023 d2
4 02-2020 02-2021 Phase_3 03-2023 d3
</code></pre>
<p>I try to create the colouring by iteration through each bar.
My main goal is to have per project all the bars in groups and not overlayed, so I use the subproject code, but I want a different legend and colouring based on the phase column.</p>
<p><a href="https://i.sstatic.net/yrMtONQ0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrMtONQ0.png" alt="enter image description here" /></a></p>
<p>My (unsuccessful) code so far is:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import plotly.express as px
n_colors = 20 # Number of colors in each gradient
blue_orange_cmap = mcolors.LinearSegmentedColormap.from_list("blue_orange", ["blue", "orange"])
orange_green_cmap = mcolors.LinearSegmentedColormap.from_list("orange_green", ["orange", "green"])
blue_orange_colors = [blue_orange_cmap(i/n_colors) for i in range(n_colors)]
orange_green_colors = [orange_green_cmap(i/n_colors) for i in range(n_colors)]
colors = {'Phase_1' : 'blue',
'Phase_2' : 'orange',
'Phase_3' : 'green',
'Phase_1,2' : blue_orange_colors,
'Phase_2,3' : orange_green_colors}
for dat in fig.data:
try:
ix = example_df.index[example_df['subproject']==dat.name]
phase = example_df.loc[ix, 'phase'].values[0]
color = colors[phase]
dat.marker.color = color
dat.name = phase
except:
pass
fig = px.timeline(example_df, x_start='start', x_end='end', y='project', color = 'subproject')
fig.update_layout(xaxis_title='timeline', yaxis_title='projects', showlegend=True, barmode='group', title = "Example", width=1000, height=500)
fig.update_layout(legend=dict(title_text='', traceorder='reversed', itemsizing='constant'))
fig.show()
</code></pre>
|
<python><pandas><plotly><gantt-chart>
|
2024-11-08 17:27:50
| 1
| 532
|
Newbielp
|
79,170,871
| 11,575,738
|
Spooky behaviour of JAX
|
<p>This is a follow-up to my <a href="https://stackoverflow.com/questions/79158791/tracking-test-val-loss-when-training-a-model-with-jax">previous question</a>. I am implementing a Parameterized Quantum Circuit as a Quantum Neural Network, where the optimization loop is jitted. Although there's no error, everything is working fine, I find a very unusual behavior in terms of execution times.</p>
<p>Check out the code below:</p>
<h2>Setting - 1</h2>
<pre class="lang-py prettyprint-override"><code>import pennylane as qml
from pennylane import numpy as np
import jax
from jax import numpy as jnp
import optax
from itertools import combinations
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss
import matplotlib.pyplot as plt
import matplotlib.colors
import warnings
warnings.filterwarnings("ignore")
np.random.seed(42)
import time
# Load the digits dataset with features (X_digits) and labels (y_digits)
X_digits, y_digits = load_digits(return_X_y=True)
# Create a boolean mask to filter out only the samples where the label is 2 or 6
filter_mask = np.isin(y_digits, [2, 6])
# Apply the filter mask to the features and labels to keep only the selected digits
X_digits = X_digits[filter_mask]
y_digits = y_digits[filter_mask]
# Split the filtered dataset into training and testing sets with 10% of data reserved for testing
X_train, X_test, y_train, y_test = train_test_split(
X_digits, y_digits, test_size=0.1, random_state=42
)
# Normalize the pixel values in the training and testing data
# Convert each image from a 1D array to an 8x8 2D array, normalize pixel values, and scale them
X_train = np.array([thing.reshape([8, 8]) / 16 * 2 * np.pi for thing in X_train])
X_test = np.array([thing.reshape([8, 8]) / 16 * 2 * np.pi for thing in X_test])
# Adjust the labels to be centered around 0 and scaled to be in the range -1 to 1
# The original labels (2 and 6) are mapped to -1 and 1 respectively
y_train = (y_train - 4) / 2
y_test = (y_test - 4) / 2
def feature_map(features):
# Apply Hadamard gates to all qubits to create an equal superposition state
for i in range(len(features[0])):
qml.Hadamard(i)
# Apply angle embeddings based on the feature values
for i in range(len(features)):
# For odd-indexed features, use Z-rotation in the angle embedding
if i % 2:
qml.AngleEmbedding(features=features[i], wires=range(8), rotation="Z")
# For even-indexed features, use X-rotation in the angle embedding
else:
qml.AngleEmbedding(features=features[i], wires=range(8), rotation="X")
# Define the ansatz (quantum circuit ansatz) for parameterized quantum operations
def ansatz(params):
# Apply RY rotations with the first set of parameters
for i in range(8):
qml.RY(params[i], wires=i)
# Apply CNOT gates with adjacent qubits (cyclically connected) to create entanglement
for i in range(8):
qml.CNOT(wires=[(i - 1) % 8, (i) % 8])
# Apply RY rotations with the second set of parameters
for i in range(8):
qml.RY(params[i + 8], wires=i)
# Apply CNOT gates with qubits in reverse order (cyclically connected)
# to create additional entanglement
for i in range(8):
qml.CNOT(wires=[(8 - 2 - i) % 8, (8 - i - 1) % 8])
dev = qml.device("default.qubit", wires=8)
@qml.qnode(dev)
def circuit(params, features):
feature_map(features)
ansatz(params)
return qml.expval(qml.PauliZ(0))
def variational_classifier(weights, bias, x):
return circuit(weights, x) + bias
def square_loss(labels, predictions):
return np.mean((labels - qml.math.stack(predictions)) ** 2)
def accuracy(labels, predictions):
acc = sum([np.sign(l) == np.sign(p) for l, p in zip(labels, predictions)])
acc = acc / len(labels)
return acc
def cost(params, X, Y):
predictions = [variational_classifier(params["weights"], params["bias"], x) for x in X]
return square_loss(Y, predictions)
def acc(params, X, Y):
predictions = [variational_classifier(params["weights"], params["bias"], x) for x in X]
return accuracy(Y, predictions)
np.random.seed(0)
weights = 0.01 * np.random.randn(16)
bias = jnp.array(0.0)
params = {"weights": weights, "bias": bias}
opt = optax.adam(0.05)
batch_size = 7
num_batch = X_train.shape[0] // batch_size
opt_state = opt.init(params)
X_batched = X_train.reshape([-1, batch_size, 8, 8])
y_batched = y_train.reshape([-1, batch_size])
@jax.jit
def update_step_jit(i, args):
params, opt_state, data, targets, X_test, y_test, X_train, y_train, batch_no, print_training = args
_data = data[batch_no % num_batch]
_targets = targets[batch_no % num_batch]
train_loss, grads = jax.value_and_grad(cost)(params, _data, _targets)
updates, opt_state = opt.update(grads, opt_state)
test_loss, grads = jax.value_and_grad(cost)(params, X_test, y_test)
params = optax.apply_updates(params, updates)
# Print training loss every step if print_training is True
def print_fn():
jax.debug.print("Step: {i}, Train Loss: {train_loss}", i=i, train_loss=train_loss)
jax.debug.print("Step: {i}, Test Loss: {test_loss}", i=i, test_loss=test_loss)
jax.lax.cond((jnp.mod(i, 1) == 0) & print_training, print_fn, lambda: None)
return (params, opt_state, data, targets, X_test, y_test, X_train, y_train, batch_no + 1, print_training)
@jax.jit
def optimization_jit(params, data, targets, X_test, y_test, X_train, y_train, print_training = True):
opt_state = opt.init(params)
args = (params, opt_state, data, targets, X_test, y_test, X_train, y_train, 0, print_training)
(params, _, _, _, _, _, _, _, _, _) = jax.lax.fori_loop(0, 1, update_step_jit, args)
return params
start_time = time.time()
params = optimization_jit(params, X_batched, y_batched, X_test, y_test, X_train, y_train)
print("Training Done! \nTime taken:",time.time() - start_time)
start_time = time.time()
var_train_acc = acc(params, X_train, y_train)
print("Training accuracy: ", var_train_acc)
print("Time taken:",time.time() - start_time)
start_time = time.time()
var_test_acc = acc(params, X_test, y_test)
print("Testing accuracy: ", var_test_acc)
print("Time taken:",time.time() - start_time)
</code></pre>
<p>Notice that it is running the <code>jax.lax.fori_loop</code> just <code>1</code> time.</p>
<p>For reproducibility, I verified it by running 3 times, and the outputs are as follows,</p>
<p>Output of first run:</p>
<pre><code>Training Done!
Time taken: 66.26599097251892
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 1.0022056102752686
Training accuracy: 0.5031055900621118
Time taken: 14.183394193649292
Testing accuracy: 0.5277777777777778
Time taken: 1.552431344985962
</code></pre>
<p>Output of second run:</p>
<pre><code>Training Done!
Time taken: 62.8515682220459
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 1.0022056102752686
Training accuracy: 0.5031055900621118
Time taken: 13.549866199493408
Testing accuracy: 0.5277777777777778
Time taken: 1.5097148418426514
</code></pre>
<p>Output of third run:</p>
<pre><code>Training Done!
Time taken: 63.35235905647278
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 1.0022056102752686
Training accuracy: 0.5031055900621118
Time taken: 13.52238941192627
Testing accuracy: 0.5277777777777778
Time taken: 1.5074975490570068
</code></pre>
<h2>Setting - 2</h2>
<p>So, then I ran it changing the <code>jax.lax.fori_loop</code> to run <code>10</code> times as</p>
<pre class="lang-py prettyprint-override"><code> (params, _, _, _, _, _, _, _, _, _) = jax.lax.fori_loop(0, 10, update_step_jit, args)
</code></pre>
<p>Surprisingly, the execution time reduces quite significantly, and the outputs are:</p>
<p>First Run:</p>
<pre><code>Training Done!
Time taken: 49.8694589138031
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 1.0022056102752686
Step: 1, Train Loss: 0.934578537940979
Step: 1, Test Loss: 0.9935969114303589
Step: 2, Train Loss: 0.982826828956604
Step: 2, Test Loss: 1.004722237586975
Step: 3, Train Loss: 0.982965350151062
Step: 3, Test Loss: 1.0281261205673218
Step: 4, Train Loss: 1.1700845956802368
Step: 4, Test Loss: 1.0455362796783447
Step: 5, Train Loss: 1.3356019258499146
Step: 5, Test Loss: 1.0411475896835327
Step: 6, Train Loss: 1.2408322095870972
Step: 6, Test Loss: 1.0204349756240845
Step: 7, Train Loss: 0.7292405366897583
Step: 7, Test Loss: 0.9959328770637512
Step: 8, Train Loss: 1.1697252988815308
Step: 8, Test Loss: 0.9822244644165039
Step: 9, Train Loss: 1.015731692314148
Step: 9, Test Loss: 0.9667297005653381
Training accuracy: 0.5217391304347826
Time taken: 13.903431177139282
Testing accuracy: 0.5555555555555556
Time taken: 1.537736177444458
</code></pre>
<p>Second run:</p>
<pre><code>Training Done!
Time taken: 56.34339928627014
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 1.0022056102752686
Step: 1, Train Loss: 0.934578537940979
Step: 1, Test Loss: 0.9935969114303589
Step: 2, Train Loss: 0.982826828956604
Step: 2, Test Loss: 1.004722237586975
Step: 3, Train Loss: 0.982965350151062
Step: 3, Test Loss: 1.0281261205673218
Step: 4, Train Loss: 1.1700845956802368
Step: 4, Test Loss: 1.0455362796783447
Step: 5, Train Loss: 1.3356019258499146
Step: 5, Test Loss: 1.0411475896835327
Step: 6, Train Loss: 1.2408322095870972
Step: 6, Test Loss: 1.0204349756240845
Step: 7, Train Loss: 0.7292405366897583
Step: 7, Test Loss: 0.9959328770637512
Step: 8, Train Loss: 1.1697252988815308
Step: 8, Test Loss: 0.9822244644165039
Step: 9, Train Loss: 1.015731692314148
Step: 9, Test Loss: 0.9667297005653381
Training accuracy: 0.5217391304347826
Time taken: 13.298640727996826
Testing accuracy: 0.5555555555555556
Time taken: 1.4631397724151611
</code></pre>
<p>Third run:</p>
<pre><code>Training Done!
Time taken: 53.01019215583801
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 1.0022056102752686
Step: 1, Train Loss: 0.934578537940979
Step: 1, Test Loss: 0.9935969114303589
Step: 2, Train Loss: 0.982826828956604
Step: 2, Test Loss: 1.004722237586975
Step: 3, Train Loss: 0.982965350151062
Step: 3, Test Loss: 1.0281261205673218
Step: 4, Train Loss: 1.1700845956802368
Step: 4, Test Loss: 1.0455362796783447
Step: 5, Train Loss: 1.3356019258499146
Step: 5, Test Loss: 1.0411475896835327
Step: 6, Train Loss: 1.2408322095870972
Step: 6, Test Loss: 1.0204349756240845
Step: 7, Train Loss: 0.7292405366897583
Step: 7, Test Loss: 0.9959328770637512
Step: 8, Train Loss: 1.1697252988815308
Step: 8, Test Loss: 0.9822244644165039
Step: 9, Train Loss: 1.015731692314148
Step: 9, Test Loss: 0.9667297005653381
Training accuracy: 0.5217391304347826
Time taken: 13.152780055999756
Testing accuracy: 0.5555555555555556
Time taken: 1.4448845386505127
</code></pre>
<h2>Setting - 3</h2>
<p>Furthermore, I thought of reducing the logging, and wanted to calculate and log the <code>test_loss</code> every 5th step, by updating the code to:</p>
<pre class="lang-py prettyprint-override"><code>@jax.jit
def update_step_jit(i, args):
params, opt_state, data, targets, X_test, y_test, X_train, y_train, batch_no, print_training = args
_data = data[batch_no % num_batch]
_targets = targets[batch_no % num_batch]
train_loss, grads = jax.value_and_grad(cost)(params, _data, _targets)
updates, opt_state = opt.update(grads, opt_state)
# train_accuracy, grads = jax.value_and_grad(acc)(params, X_train, y_train)
# test_accuracy, grads = jax.value_and_grad(acc)(params, X_test, y_test)
params = optax.apply_updates(params, updates)
# Print training loss every 5 steps if print_training is True
def print_fn():
test_loss, grads = jax.value_and_grad(cost)(params, X_test, y_test)
jax.debug.print("Step: {i}, Train Loss: {train_loss}", i=i, train_loss=train_loss)
# jax.debug.print("Step: {i}, Train Accuracy: {train_accuracy}", i=i, train_accuracy=train_accuracy)
jax.debug.print("Step: {i}, Test Loss: {test_loss}", i=i, test_loss=test_loss)
# jax.debug.print("Step: {i}, Test Accuracy: {test_accuracy}", i=i, test_accuracy=test_accuracy)
jax.lax.cond((jnp.mod(i, 5) == 0) & print_training, print_fn, lambda: None)
return (params, opt_state, data, targets, X_test, y_test, X_train, y_train, batch_no + 1, print_training)
@jax.jit
def optimization_jit(params, data, targets, X_test, y_test, X_train, y_train, print_training = True):
opt_state = opt.init(params)
args = (params, opt_state, data, targets, X_test, y_test, X_train, y_train, 0, print_training)
(params, _, _, _, _, _, _, _, _, _) = jax.lax.fori_loop(0, 10, update_step_jit, args)
return params
</code></pre>
<p>I though calling the <code>print_fn</code> fewer times would have resulted in even lesser runtime but no, the outputs were:</p>
<p>First Run:</p>
<pre><code>Training Done!
Time taken: 75.2902774810791
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 0.9935969114303589
Step: 5, Train Loss: 1.3356019258499146
Step: 5, Test Loss: 1.0204349756240845
Training accuracy: 0.5217391304347826
Time taken: 13.591582536697388
Testing accuracy: 0.5555555555555556
Time taken: 1.6048238277435303
</code></pre>
<p>Second run:</p>
<pre><code>Training Done!
Time taken: 86.21267819404602
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 0.9935969114303589
Step: 5, Train Loss: 1.3356019258499146
Step: 5, Test Loss: 1.0204349756240845
Training accuracy: 0.5217391304347826
Time taken: 13.666489601135254
Testing accuracy: 0.5555555555555556
Time taken: 1.5537452697753906
</code></pre>
<p>Third run:</p>
<pre><code>Training Done!
Time taken: 90.7916328907013
Step: 0, Train Loss: 1.015419602394104
Step: 0, Test Loss: 0.9935969114303589
Step: 5, Train Loss: 1.3356019258499146
Step: 5, Test Loss: 1.0204349756240845
Training accuracy: 0.5217391304347826
Time taken: 13.21641230583191
Testing accuracy: 0.5555555555555556
Time taken: 1.5349321365356445
</code></pre>
<p>The runtimes for the different settings can be plotted as:</p>
<p><a href="https://i.sstatic.net/TmEAdCJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TmEAdCJj.png" alt="Runtimes for different runs for different settings" /></a></p>
<p>My questions are:</p>
<ul>
<li>Why is Setting - 1, where the optimization loop is run only once, consistently taking more time than Setting - 2, where the optimization loop is run 10 times?</li>
<li>Why is Setting - 3, where the <code>print_fn</code> function in the optimization loop is called every 5th optimization step, consistently taking more time than Setting - 2, where the <code>print_fn</code> is being called on every iteration?</li>
</ul>
|
<python><machine-learning><jit><jax>
|
2024-11-08 16:33:17
| 1
| 331
|
Sup
|
79,170,787
| 51,816
|
How to get correct CPU usage like in task manager using Python?
|
<p>I am using psutil but the values I get is between 0 and 2% whereas the task manager is showing values way above that from 8 to 40% CPU usage. Am I missing something?</p>
<pre><code>import sys
import psutil # To get CPU and RAM usage
from PyQt5 import QtWidgets, QtCore, QtGui
class CircularProgressBar(QtWidgets.QWidget):
def __init__(self, label="CPU", color=QtGui.QColor(255, 140, 0), parent=None):
super().__init__(parent)
self.label = label
self.color = color
self.value = 0
self.timer = QtCore.QTimer(self)
self.timer.timeout.connect(self.update_value)
self.timer.start(1000) # Update every second
def update_value(self):
# Make sure the first call was done before to avoid 0.0 initial value
if not hasattr(self, 'initial_call_done'):
psutil.cpu_percent(interval=1) # Establish baseline
self.initial_call_done = True
# Get CPU usage with a non-zero interval
self.value = psutil.cpu_percent(interval=1)
self.update()
def paintEvent(self, event):
width = self.width()
height = self.height()
side = min(width, height)
# Create a QPainter instance
painter = QtGui.QPainter(self)
painter.setRenderHint(QtGui.QPainter.Antialiasing)
# Draw the background circle
rect = QtCore.QRectF(10, 10, side - 20, side - 20)
pen = QtGui.QPen(QtCore.Qt.gray, 10)
painter.setPen(pen)
painter.drawArc(rect, 0, 360 * 16)
# Draw the progress arc
pen.setColor(self.color)
painter.setPen(pen)
angle = int(self.value * 360 / 100) * 16
painter.drawArc(rect, 90 * 16, -angle)
# Draw the label and percentage text
painter.setPen(QtCore.Qt.white)
painter.setFont(QtGui.QFont("Arial", side // 10))
painter.drawText(rect, QtCore.Qt.AlignCenter, f"{self.label}\n{self.value:.0f}%")
painter.end()
def main():
app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QMainWindow()
window.setWindowTitle("Resource Monitor")
# Create a central widget and layout
central_widget = QtWidgets.QWidget()
layout = QtWidgets.QHBoxLayout(central_widget)
# Create and add progress bars
cpu_bar = CircularProgressBar("CPU", QtGui.QColor(255, 140, 0))
cpu_bar.setMinimumSize(200, 200)
layout.addWidget(cpu_bar)
window.setCentralWidget(central_widget)
window.resize(600, 400)
window.show()
sys.exit(app.exec_())
if __name__ == "__main__":
main()
</code></pre>
|
<python><pyqt><cpu-usage><taskmanager><psutil>
|
2024-11-08 16:07:02
| 0
| 333,709
|
Joan Venge
|
79,170,771
| 4,108,542
|
Add UserAgent info to flask log
|
<p>How to add UserAgent to Flask requests log shown in console?</p>
<pre><code>backend-flask | * Serving Flask app 'generate-ics'
backend-flask | * Debug mode: off
backend-flask | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
backend-flask | * Running on all addresses (0.0.0.0)
backend-flask | * Running on http://127.0.0.1:8081
backend-flask | * Running on http://172.28.0.2:8081
backend-flask | Press CTRL+C to quit
backend-flask | 64.227.xx.xx - - [08/Nov/2024 18:55:44] "GET /rentaldb.ics HTTP/1.1" 200 -
backend-flask | 62.113.xx.xx - - [08/Nov/2024 18:55:46] "GET /rentaldb.ics HTTP/1.1" 200 -
backend-flask | 64.227.xx.xx - - [08/Nov/2024 18:55:49] "GET /rentaldb.ics HTTP/1.1" 200 -
^
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0
</code></pre>
|
<python><flask><logging><user-agent>
|
2024-11-08 16:04:04
| 1
| 569
|
Artem
|
79,170,754
| 10,853,071
|
Unnesting a pandas json column and keeping an "id" column
|
<p>I am working on some nested NoSQL data. I would like to unnest it using <code>json_normalize</code> but keep the "id de transação" column so I could merge the resulting dataframe into other dataframes.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import json
data = {
"id de transação": [1, 2, 3, 4, 5],
"nome": ["Alice", "Bob", "Charlie", "David", "Eve"],
"dados": [
{"data": "2024-01-01", "local": "São Paulo", "valor": 100.50},
{"data": "2024-01-02", "local": "Rio de Janeiro", "valor": 200.75},
{"data": "2024-01-03", "local": "Belo Horizonte", "valor": 300.00},
{"data": "2024-01-04", "local": "Curitiba", "valor": 400.25},
{"data": "2024-01-05", "local": "Porto Alegre", "valor": 500.50}
]
}
df = pd.DataFrame(data)
</code></pre>
<p>I've tried to use the <code>meta</code> parameter but did not work.</p>
<pre class="lang-py prettyprint-override"><code>df_dados_normalized = pd.json_normalize(data = data["dados"], record_path=None, meta=data["id de transação"])
</code></pre>
<pre><code> data local valor
0 2024-01-01 São Paulo 100.50
1 2024-01-02 Rio de Janeiro 200.75
2 2024-01-03 Belo Horizonte 300.00
3 2024-01-04 Curitiba 400.25
4 2024-01-05 Porto Alegre 500.50
</code></pre>
<p>Is there a way? I would expect this resulting dataframe with the <code>id de transação</code> column.</p>
|
<python><pandas><json-normalize>
|
2024-11-08 16:00:58
| 2
| 457
|
FábioRB
|
79,170,651
| 3,387,046
|
Why does my Pthon code fails to deploy in Google?
|
<p>I'm trying to deploy a Python code in Google to fetch its results in VBA. The code runs well in PyCharm but returns error 500 from Google</p>
<p>Requirements</p>
<pre><code> scipy
flask==2.0.3
werkzeug==2.0.3
pandas
numpy
statsmodels
google-cloud-storage
</code></pre>
<p>Python</p>
<pre><code>from statsmodels.stats.multicomp import pairwise_tukeyhsd
from google.cloud import storage
def tukey_hsd(request):
# Initialize the storage client
storage_client = storage.Client()
bucket_name = "your-bucket-name" # Replace with your GCS bucket name
bucket = storage_client.bucket(bucket_name)
# Load the CSV file into a DataFrame
# For cloud function testing, you can simulate data instead of loading a CSV file from local disk
data = pd.DataFrame({
0: [5, 5.5, 6, 7, np.nan],
1: [6, 7, 8, np.nan, 9],
2: [7, 8, 9, 8.5, np.nan]
})
# Prepare values and groups arrays
values = []
groups = []
group_names = ['Group1', 'Group2', 'Group3']
for i, group_name in enumerate(group_names):
# Convert all values in the column to numeric, setting non-numeric values to NaN
numeric_col = pd.to_numeric(data[i], errors='coerce')
# Drop missing (NaN) values
col_values = numeric_col.dropna().values
# Add to values list and assign group labels
values.extend(col_values)
groups.extend([group_name] * len(col_values))
# Convert lists to NumPy arrays
values = np.array(values)
groups = np.array(groups)
# Perform Tukey HSD test
tukey_result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
# Convert results to DataFrame and extract p-adj values
summary_df = pd.DataFrame(data=tukey_result.summary().data[1:], columns=tukey_result.summary().data[0])
p_adj_values = summary_df["p-adj"]
# Save p-adj values as a CSV in-memory file
csv_data = p_adj_values.to_csv(index=False, header=False)
# Define the destination blob name and upload the CSV data
blob = bucket.blob("p_adj_values.csv")
blob.upload_from_string(csv_data, content_type="text/csv")
# Return a response indicating the file was saved
return json.dumps({"message": "p-adj values saved to Google Cloud Storage"}), 200, {"Content-Type": "application/json"}
</code></pre>
|
<python><google-cloud-platform>
|
2024-11-08 15:32:47
| 1
| 463
|
user3387046
|
79,170,581
| 831,399
|
add optional elements when creating a python tuple
|
<p>I have to create a truple (or array) with a variable number of elements. Given this minimal example:</p>
<pre class="lang-py prettyprint-override"><code>def do_something(foo: str):
my_tuple = (
"item",
*([foo] if foo is not None else []),
"something"
)
...
</code></pre>
<p>I wonder, is there a shorter than using the spread operator expression <code>*([foo] if foo is not None else [])</code> here to add an element optionally in the tuple? Note that using an if-statement is not an option to create different tuples. Adding a <code>None</code> element is also not an option. Execution time can also be neglected, preference is on minimizing the code and redundancy.</p>
<p>Edit: Down the line, should future Python versions have an dedicated way to express this, so <code>foo</code> is only used one?</p>
|
<python><arrays><tuples>
|
2024-11-08 15:11:47
| 2
| 636
|
Axel Heider
|
79,170,549
| 3,365,532
|
Mapping a column inside a dataframe to a new type with Pandas 2.2.3+
|
<p>I am used to being able to do things like:</p>
<pre><code>import pandas as pd
df = pd.DataFrame( pd.Categorical(['a','b','b'],['a','b']),columns=['x'])
df.loc[:,'x'] = df['x'].replace({'a':1, 'b':2})
</code></pre>
<p>However, with newer pandas, it throws a warning:</p>
<pre><code>/tmp/ipykernel_1721527/1018712932.py:4: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '[1, 2, 2]
Categories (2, object): [1, 2]' has dtype incompatible with category, please explicitly cast to a compatible dtype first.
df.loc[:,'x'] = df['x'].replace({'a':1, 'b':2})
</code></pre>
<p>Shortest workaround I can think of is:</p>
<pre><code>ncol = df['x'].replace({'a':1, 'b':2}).astype('float')
df['x'] = None
df = df.astype({'x':'float'})
df.loc[:,'x'] = ncol
</code></pre>
<p>But this seems way too long and unelegant for what is ostensibly a very simple operation. Am I missing something obvious?</p>
|
<python><pandas><dataframe>
|
2024-11-08 15:03:24
| 1
| 443
|
velochy
|
79,170,297
| 16,389,095
|
How to get the list/tree of all the controls added to the page and how to get access to one of them
|
<p>I developed a simple app with Flet Python. The app contains some controls such as text and buttons organized in rows and columns. Here is my code:</p>
<pre><code>import flet as ft
def main(page: ft.Page):
def Next_Image(e):
print(page.controls)
# CONTROLS DESIGN
cont = ft.Container(bgcolor=ft.colors.AMBER,
border=ft.border.all(3, ft.colors.RED),
width=250,
height=600,
)
button_previous_img = ft.IconButton(icon=ft.icons.ARROW_CIRCLE_LEFT, tooltip='Previous Image')
button_next_img = ft.IconButton(icon=ft.icons.ARROW_CIRCLE_RIGHT, tooltip='Next Image', on_click=Next_Image)
text_num_img = ft.Text(value="DOC")
# ADD CONTROLS TO THE PAGE
page.add(
ft.Column(
[
cont,
ft.Row(
[
button_previous_img,
text_num_img,
button_next_img
],
alignment=ft.MainAxisAlignment.CENTER,
)
],
horizontal_alignment = ft.CrossAxisAlignment.CENTER,
)
)
page.update()
ft.app(target=main)
</code></pre>
<p>I would like to retrieve the whole tree of children controls added to the page pressing a button. I'm able to display only the outmost control, specifically the column.
Moreover, I would like to access to a control, let's say the container 'cont' inside the button event 'Next_Image()'. Is there a way to access to him with the name variable or with its index within the controls collection? Such as:</p>
<pre><code>cont = page.controls["cont"] # with the variable name
cont = page.controls["4"] # with the variable index (assuming it's 4)
</code></pre>
|
<python><flutter><flet>
|
2024-11-08 13:48:21
| 1
| 421
|
eljamba
|
79,170,096
| 10,452,700
|
How can find key while decrypting the ciphertext? (unknown key)
|
<p>I want to decrypt the following <em>classical</em> Cipher-text without knowing key:</p>
<pre class="lang-none prettyprint-override"><code>Cq ligss’v vcjandd ujw, wuqjwgausjkq cv ulxucdd zrj mhuouahj lbh nuvl upgoqlm rx mhfmllcyw cqxiueuwaiq kbdjyg ghoahh, xlre jhjmrfuo vuws nr fuwaiqsf vwwuwnv. Sm fqvhj nkjydlm ewwrey lfwuwuvahjds vgjkamwawdlyg, ulbhnryldhbb whdtfhk mdxy fggpmhluuwaiq, hlrlyflcqy yywlblblfa ijip ghoahh tuqccqy nushvswwaiqk uqv ghvcfsf uwwrjxv li sjcysnh uiqnyukuwaiqk. Cw hlrncgwm wzy eswntiqw zrj xdlu lfnhyllls dfx fghiaxhfnlsflls, hfmxjcqy nksn rffb ahwwhgwx uwwlhchfnv uuq swfwmv kyqkcwaph vuws uqv nksn ll lheulfm xfwkshjwx gmllfa wjuqkglkmlgh. Fjsslijjuszs qgn rffb kuiwaxslgk cqvcyaxxsfv’ hllnufq vxl uoki xfxhjjlfm jdiesf fggpwlfw, mhuouw wregxfcfsnlghv, shg fuwaiqsf vwwxjcwq, gdccqy cw ahgamswhvsvow cq s qrjfg lbdl lhdchk iq vcjandd nummw.
</code></pre>
<p>So far I have used different online-tools unsucessfully.</p>
<p>I inspired from this <a href="https://www.cryptool.org/en/cto/caesar/" rel="nofollow noreferrer">source</a> using Pythonic code to reach:</p>
<ul>
<li>Plain Text</li>
<li>Finding Key</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import argparse
class CaesarAlgorithm:
def encrypt(self, message, key, alphabet):
# start with empty ciphertext
ciphertext = ""
# iterate through each character in message
for old_character in message:
new_character = ""
# if character is in alphabet -> append to ciphertext
if(old_character in alphabet):
index = alphabet.index(old_character)
new_index = (index + key) % len(alphabet)
new_character = alphabet[new_index]
# Note: characters not defined in the alphabet are ignored
# add new character to ciphertext
ciphertext = ciphertext + new_character
# return ciphertext to calling function
return ciphertext
def decrypt(self, message, key, alphabet):
# decrypting is like encrypting but with negative key
plaintext = self.encrypt(message, 0 - key, alphabet)
# return plaintext to calling function
return plaintext
# parse the arguments (args) given via the command line
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--encrypt", dest="encrypt_or_decrypt", action="store_true")
parser.add_argument("-d", "--decrypt", dest="encrypt_or_decrypt", action="store_false")
parser.add_argument("-m", "--message", help="message for encrypt / decrypt", type=str)
parser.add_argument("-k", "--key", help="key for encrypt / decrypt", type=int)
parser.add_argument("-a", "--alphabet", help="defined alphabet", type=str)
ciphertext="Cq ligss’v vcjandd ujw, wuqjwgausjkq cv ulxucdd zrj mhuouahj lbh nuvl upgoqlm rx mhfmllcyw cqxiueuwaiq kbdjyg ghoahh, xlre jhjmrfuo vuws nr fuwaiqsf vwwuwnv. Sm fqvhj nkjydlm ewwrey lfwuwuvahjds vgjkamwawdlyg, ulbhnryldhbb whdtfhk mdxy fggpmhluuwaiq, hlrlyflcqy yywlblblfa ijip ghoahh tuqccqy nushvswwaiqk uqv ghvcfsf uwwrjxv li sjcysnh uiqnyukuwaiqk. Cw hlrncgwm wzy eswntiqw zrj xdlu lfnhyllls dfx fghiaxhfnlsflls, hfmxjcqy nksn rffb ahwwhgwx uwwlhchfnv uuq swfwmv kyqkcwaph vuws uqv nksn ll lheulfm xfwkshjwx gmllfa wjuqkglkmlgh. Fjsslijjuszs qgn rffb kuiwaxslgk cqvcyaxxsfv’ hllnufq vxl uoki xfxhjjlfm jdiesf fggpwlfw, mhuouw wregxfcfsnlghv, shg fuwaiqsf vwwxjcwq, gdccqy cw ahgamswhvsvow cq s qrjfg lbdl lhdchk iq vcjandd nummw."
# Simulate command-line arguments
# Replace with your desired values
args = parser.parse_args(["-d", "-m", ciphertext, "-k", "3", "-a", "abcdefghijklmnopqrstuvwxyz"])
# create caesar instance
caesar = CaesarAlgorithm()
# if --encrypt -> call encrypt function
if(args.encrypt_or_decrypt == True):
print(caesar.encrypt(args.message, args.key, args.alphabet))
# if --decrypt -> call decrypt function
else:
print(caesar.decrypt(args.message, args.key, args.alphabet))
</code></pre>
<p>So far, I could not achieve Plain Text and find the identified key
Even I have tried to use the following online tools and find the key mannually but I could not yet:</p>
<ul>
<li><a href="https://cryptii.com/pipes/caesar-cipher" rel="nofollow noreferrer">https://cryptii.com/pipes/caesar-cipher</a></li>
<li><a href="https://ciphereditor.com/explore/caesar-cipher" rel="nofollow noreferrer">https://ciphereditor.com/explore/caesar-cipher</a></li>
<li><a href="https://www.dcode.fr/caesar-cipher" rel="nofollow noreferrer">https://www.dcode.fr/caesar-cipher</a></li>
</ul>
|
<python><encryption><caesar-cipher><cryptdecrypt>
|
2024-11-08 12:49:11
| 1
| 2,056
|
Mario
|
79,169,993
| 4,529,546
|
How to get scikit-learn to ensure that all prediction outputs should sum to 100%?
|
<p>I have a 'MultiOutputRegressor' which is based on a 'LinearRegression' regressor.
I am using it to predict three outputs per row of X_data (like a classifier) which represent the percentage likelihood of three outcomes.</p>
<p>The regressor is fitted against y_data where the three labels sum correctly to 100%.</p>
<p>Obviously the regressor doesn't really know that it's three prediction outputs should sum, it just knows roughly what values they should be.</p>
<p>Is there a way that I can tell the regressor explicitly that one of the rules is that all three prediction outputs should together sum to 100%?</p>
|
<python><scikit-learn><output><linear-regression>
|
2024-11-08 12:12:52
| 1
| 1,128
|
Richard
|
79,169,969
| 13,337,635
|
Why can't my Docker container communicate with LocalStack on Apache Airflow?
|
<p>I'm trying to test a Docker container that interacts with LocalStack using Testcontainers within an Apache Airflow DockerOperator.</p>
<p>In my setup, I have:</p>
<ol>
<li>An Airflow DAG that uses the DockerOperator to run a container.</li>
<li>The container communicates with LocalStack to create an S3 bucket.</li>
<li>The endpoint URL for LocalStack is dynamically assigned by Testcontainers.</li>
</ol>
<p>However, I keep running into this error:</p>
<pre><code>EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:<random_port>/"
</code></pre>
<p>I have seen a similar question <a href="https://discuss.localstack.cloud/t/i-am-stumped-by-endpointconnectionerror/417" rel="nofollow noreferrer">here</a>, but I noticed that the environment variables don't exist when running them via Testcontainers.</p>
<h2>So far I have tried:</h2>
<ul>
<li>Verified that the AWS_ENDPOINT_URL in both Airflow and the test script points to the dynamically assigned LocalStack URL.</li>
<li>Used environment variables to pass the correct endpoint to the Docker container.</li>
</ul>
<h2>Here is an example of the code:</h2>
<p><code>dag.py</code></p>
<pre class="lang-py prettyprint-override"><code>import os
from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator
from datetime import datetime, timedelta
with DAG(
dag_id="test_docker_localstack_dag",
default_args={
"retries": 1,
"retry_delay": timedelta(minutes=5),
},
schedule_interval=None,
start_date=datetime(2023, 1, 1),
catchup=False,
) as dag:
run_docker_task = DockerOperator(
task_id="run_docker_container",
image="test_script:latest",
command=COMMAND,
environment={
"AWS_ENDPOINT_URL": os.getenv("AWS_ENDPOINT_URL"),
"AWS_ACCESS_KEY_ID": os.getenv("AWS_ACCESS_KEY_ID"),
"AWS_SECRET_ACCESS_KEY": os.getenv("AWS_SECRET_ACCESS_KEY"),
},
auto_remove=True,
docker_url="unix://var/run/docker.sock",
network_mode="bridge",
)
run_docker_task
</code></pre>
<p><code>test_dag.py</code></p>
<pre class="lang-py prettyprint-override"><code>import os
import pytest
from testcontainers.localstack import LocalStackContainer
from testcontainers.core.image import DockerImage
@pytest.fixture(scope="session")
def localstack():
"""Set up LocalStack container."""
with LocalStackContainer(image="localstack/localstack:latest") as localstack:
os.environ["AWS_ENDPOINT_URL"] = localstack.get_url()
os.environ["AWS_ACCESS_KEY_ID"] = "localstack-testcontainers"
os.environ["AWS_SECRET_ACCESS_KEY"] = "localstack-testcontainers"
yield localstack
@pytest.fixture(scope="session")
def docker_image():
"""Build Docker image for test."""
with DockerImage(path=PATH, tag="test_script:latest") as image:
yield image
def test_docker_container_interaction(localstack, docker_image):
"""Test that DockerOperator can interact with LocalStack."""
from dags.simple_dag import dag
from airflow.models import DagBag
dag_bag = DagBag()
test_dag = dag_bag.get_dag("test_docker_localstack_dag")
assert test_dag is not None, "DAG not found"
test_dag.test()
</code></pre>
<p><code>requirements.txt</code></p>
<pre class="lang-none prettyprint-override"><code>testcontainers==4.8.2
pytest==6.2.5
apache-airflow==2.5.1
apacheairflow-providers-docker==3.3.0
</code></pre>
<p>This setup will need to run locally and on my CI server(docker-in-docker) therefore as it is it does not run on either of them for the same reason.</p>
<p>I have also tried changing the <code>network_mode="host"</code> in the <code>DockerOperator</code> where I understand that the container will share the network stack with the docker host and from the container point of view, <code>localhost</code> (or <code>127.0.0.1</code>) will refer to the docker host. In this setting I changed the url to <code>http://localhost:4566</code> but it still does not work.</p>
<p><strong>EDIT</strong>: Seems that there is a bug within testcontainers-python that was released based on this <a href="https://github.com/testcontainers/testcontainers-python/issues/475#issue-2186810433" rel="nofollow noreferrer">thread</a>.</p>
|
<python><docker><airflow><testcontainers><localstack>
|
2024-11-08 12:07:42
| 0
| 6,877
|
yudhiesh
|
79,169,887
| 10,708,345
|
Flask DEBUG logging not working with dictConfig root confirguration
|
<p>I do not seem to be able to make logging work. The following does not print anything to console.
I have been digging into the official documentation, SO, and even Reddit... and nothing seems to work for me :/</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request, jsonify
from logging.config import dictConfig
import urllib.request
import logging
import sys
import os
def create_app():
# https://betterstack.com/community/guides/logging/how-to-start-logging-with-flask/
dictConfig(
{
"version": 1,
"formatters": {
"default": {
"format": "[%(asctime)s] %(levelname)s in %(module)s: %(message)s",
}
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
"formatter": "default",
}
},
"root": {"level": "DEBUG", "handlers": ["console"]},
}
)
app = Flask(__name__)
LOGGER = logging.getLogger("root")
@app.get('/whatever')
def view_zuerivelo_publibike() -> Flask.response_class:
LOGGER.info('HELLO!') // nothing gets logged here
# fetch raw XML
try:
response = urllib.request.urlopen(some_uri).read()
except Exception as e:
return jsonify({ 'message': e }), 500
LOGGER.info(response) // nothing gets logged here
return jsonify(response), 200
return app
if __name__ == '__main__':
create_app().run(debug=True, use_reloader=False)
</code></pre>
|
<python><python-3.x><flask><logging><python-logging>
|
2024-11-08 11:42:24
| 1
| 320
|
Fi Li Ppo
|
79,169,885
| 17,160,160
|
Snowflake Connector. Call list as parameter for IN filter
|
<p><strong>OUTLINE</strong><br />
I'm using Snowflake Connector to pull data from a Snowflake database into my Python notebook.</p>
<p>I have parameters stored separately that are pulled into my query which is stored as an f-string.<br />
I can pull in parameters for use in filters if they contain a single variable:</p>
<pre><code>myvar = 'foo'
myquery = f"""
SELECT * FROM TABLE1 WHERE VAR1 = '{myvar}'
"""
cursor = sf_connection.cursor()
try:
df= cursor.execute(myquery).fetch_pandas_all()
finally:
cursor.close()
</code></pre>
<p><strong>PROBLEM</strong><br />
I want to store a list and use it within an <code>IN</code> statement in the query:</p>
<pre><code>mylist = ['foo','bar']
myquery = f"""
SELECT *
FROM TABLE1
WHERE VAR1 IN '{mylist}'
"""
</code></pre>
<p>However, the above errors.</p>
<p><strong>CURRENT SOLUTION</strong><br />
I can index each element of the list within the query and it functions:</p>
<pre><code>mylist = ['foo','bar']
myquery = f"""
SELECT *
FROM TABLE1
WHERE VAR1 IN ('{mylist[0]}', '{mylist[1]}')
"""
</code></pre>
<p>This isn't a workable solution as I need to be able to alter the <code>mylist</code> parameter and still have the query function.</p>
|
<python>
|
2024-11-08 11:41:51
| 0
| 609
|
r0bt
|
79,169,598
| 7,699,037
|
Type alias in class implementing pydantic base model
|
<p>I'm trying to use an encapsulated type alias in a pydantic <code>BaseModel</code> class:</p>
<pre><code>class MyClass(BaseModel):
Typ: TypeAlias = int
some_int: Typ = Field(alias="SomeInt")
def print_some_int(some_int: MyClass.Typ):
print(some_int)
</code></pre>
<p>However, when executing this code, pydyntic raises the following exception:</p>
<blockquote>
<p>pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for typing.TypeAlias</p>
</blockquote>
<p>When adding a <code>model_config = ConfigDict(arbitrary_types_allowed=True)</code> to <code>MyClass</code>, it works, but I will get an AttributeError for the <code>Typ</code> in the function definition of <code>print_some_int</code>. I could resolve this by removing the type annotation, but that's not what I want. Is there a way to achieve this?</p>
<p>PS: I'm using python3.10</p>
|
<python><pydantic><type-alias>
|
2024-11-08 10:16:56
| 0
| 2,908
|
Mike van Dyke
|
79,169,564
| 1,409,644
|
Specifying non-PyPI dependency
|
<p>I have written a package <code>first_package</code> with Poetry and installed it to <code>/usr/local</code>, which is on <code>sys.path</code>, so <code>first_package</code> can be imported like any other package installed via the operating system (AlmaLinux 8 in this case).</p>
<p>I have now written another package <code>second_package</code> which depends on <code>first_package</code>. However I am not sure how to define the dependency in <code>pyproj.toml</code>. If I just specify:</p>
<pre class="lang-none prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.6"
first_package = "^1.6.0"
</code></pre>
<p>when I run <code>poetry lock</code> I get the error</p>
<pre><code>SolverProblemError
</code></pre>
<p>Because <code>second-package</code> depends on <code>first-package</code> (^1.6.0) which doesn't match any versions, version solving failed.</p>
<p>Should this this work or does the resolving rely on all the packages being on PyPI? If it should work, what is the correct syntax to define the dependency in <code>pyproj.toml</code>?</p>
<p>Note that this is not a duplicate the problem of <a href="https://stackoverflow.com/questions/66474844/import-local-package-during-poetry-run">referring to a module in a script, where both are part of the same package</a>. In the case here, I have two separate packages.</p>
<p>Note that is also not a duplicate of the problem of <a href="https://stackoverflow.com/questions/73333869/correct-way-to-use-local-depencies-with-poetry">using a relative path to the first package</a> as the path is not necessarily the same in the development environment as that in the installation environment. A solution could be to try to ensure that these two are the same, but here there is the problem that Poetry by default generates a <code>my_package/my_package</code> directory structure, which adds an extra directory level which will not be present under <code>/usr/local/lib/python3.xx/site-packages</code>.</p>
|
<python><python-poetry>
|
2024-11-08 10:04:48
| 0
| 469
|
loris
|
79,169,561
| 865,220
|
Program 'python.bat' failed to run: Access is deniedAt line:1 char:1
|
<p>whenever I type <code>python</code> from powershell on windows 10 I get this in user mode:</p>
<pre><code>Program 'python.bat' failed to run: Access is deniedAt line:1 char:1
+ python
+ ~~~~~~.
At line:1 char:1
+ python
+ ~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
</code></pre>
<p>However it works fine if I launch the powershell in admin mode.
A week back it was fine in user mode as well. I feel some software installation in between might have messed up the permission. How do I fix the permission of python.bat file ?
I have read through similar questions on stackoverflow , titiles on most of those is same but probably none of their solution works for me.</p>
|
<python><permissions><windows-10>
|
2024-11-08 10:03:07
| 0
| 18,382
|
ishandutta2007
|
79,169,550
| 12,569,908
|
How to ignore case but not diacritics with Python regex?
|
<p>I'm working with a set of regex patterns that I have to match in a target text.</p>
<p>My problematic regex is something like this: <code>(İg)[[:punct:][:space:]]+[[:alnum:]]+</code></p>
<p>Initially, I noticed that Python’s <code>re</code> package doesn’t support character classes like <code>[:punct:]</code>. Then I discovered that with the <code>regex</code> library (instead of <code>re</code>), these forms would actually be supported.</p>
<p>The problem now is that, with both <code>re</code> and <code>regex</code>, enabling <code>IGNORECASE</code> it seems to also ignore diacritics (that I want to consider). For example:</p>
<pre class="lang-py prettyprint-override"><code>#import re
import regex as re
active_patterns = ["(İg)[[:punct:][:space:]]+[[:alnum:]]+"]
text = "A big problem"
for pattern in active_patterns:
compiled_pattern = re.compile(pattern, re.IGNORECASE)
for match in compiled_pattern.finditer(text):
print(match)
</code></pre>
<p>In this code, I want to ignore case but not diacritics. However, it seems that <code>regex</code> library ignore diacritics when <code>IGNORECASE</code> is enabled. Indeed this snippet will print "ig problem". The same behaviour happens with <code>re</code> library if I remove not supported parts, so with the regex <code>(İg)</code>. It will print only <code>ig</code> in that case.</p>
<p><strong>Is there a way in Python to make the regex ignore case but keep diacritics intact?</strong></p>
|
<python><regex><unicode><python-re><python-regex>
|
2024-11-08 10:01:12
| 2
| 709
|
Paolo Magnani
|
79,169,280
| 339,144
|
pyproject.toml config using setuptools with correct packages automatically
|
<p>I build wheels using a <code>setuptools</code>-based <code>pyproject.toml</code>.</p>
<p>Here's the relevant bit:</p>
<pre><code>[build-system]
requires = ["setuptools>=64", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
where = ["."]
include = [
# ... long, and fragile, list
]
exclude = [
# This, or similar syntaxes, don't seem to actually work.
# https://stackoverflow.com/q/75091671/339144
] # exclude packages matching these glob patterns (empty by default)
</code></pre>
<p>However, this is fragile because of the "long list"; every time I add a top-level package to my project layout (I use a flat, i.e. non-<code>src</code> layout) I have to remember adding it to the list.</p>
<p>How to solve this in the general way?</p>
|
<python><setuptools><python-packaging>
|
2024-11-08 08:20:38
| 0
| 2,577
|
Klaas van Schelven
|
79,169,113
| 3,498,863
|
Convert Pandas Dataframe from MultiIndex columns to single index without duplicates
|
<p>I am comparing two data frames and display the changed values between the data frames at the value level</p>
<p>When the values are different in the data frames, I am getting the results as expected but when the data frames are equal I am getting a Multi Index data frame and trying to convert to normal data frame with out duplicate columns</p>
<pre><code>import pandas as pd
import numpy as np
before_df = pd.DataFrame(data= {'col1':[1,2,3,4], 'col2':['A','B','C','D'], 'amount': [12, 21,31,51]})
after_df = pd.DataFrame(data= {'col1':[1,2,3,4], 'col2':['A','B','C','D'], 'amount': [12, 21,31,51]})
config_index = ['col1']
before_df = before_df.set_index(config_index)
after_df = after_df.set_index(config_index)
before_keys = before_df.index
after_keys = after_df.index
common_keys = np.intersect1d(before_keys, after_keys, assume_unique=True)
common_columns = np.intersect1d(before_df.columns, before_df.columns, assume_unique=True)
before_common = before_df.loc[common_keys, common_columns]
after_common = after_df.loc[common_keys, common_columns]
common_data = pd.concat([before_common.reset_index(), after_common.reset_index()], sort=True)
changed_keys = common_data.drop_duplicates(keep=False)
changed_keys = changed_keys[config_index]
changed_keys = changed_keys.set_index(config_index).index
change_df = pd.concat([before_common.loc[changed_keys], after_common.loc[changed_keys]],
axis='columns', keys=['old', 'new'])
#print(change_df)
change_df_swap = change_df.swaplevel(axis='columns')
changed_df_group = change_df_swap.groupby(level=0, axis='columns')
changed_data = changed_df_group.apply(lambda frame: frame.apply(lambda x: x.iloc[0] if x.iloc[0] == x.iloc[1] else f'{x.iloc[0]} ---> {x.iloc[1]}', axis='columns'))
print(changed_data)
</code></pre>
<p>Output</p>
<pre><code>Empty DataFrame
Columns: [(amount, old), (col2, old), (amount, new), (col2, new)]
Index: []
</code></pre>
<p>I am expecting the output as</p>
<pre><code>Empty DataFrame
Columns: [amount, col2]
Index: []
</code></pre>
|
<python><pandas><dataframe>
|
2024-11-08 07:12:10
| 1
| 578
|
CNKR
|
79,169,105
| 1,942,868
|
send post but treated as GET?? rest-framework-bundle
|
<p>I have api with django rest framework bundle</p>
<p>My api is like this, only accepts <code>POST</code>.</p>
<pre><code>@api_view(["POST"])
@authentication_classes([])
@permission_classes([])
def myapi_v1_result(request):
</code></pre>
<p>then I sent to this api with <code>POST</code> button</p>
<p><a href="https://i.sstatic.net/HSx3dPOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSx3dPOy.png" alt="enter image description here" /></a></p>
<p>However it gets , <code>Method Not allowed</code> why does this happen?</p>
<pre><code>INFO: 127.0.0.1:49305 - "POST /myapi/v1/result HTTP/1.1" 200 OK
Method Not Allowed: /myapi/v1/result
WARNING:django.request:Method Not Allowed: /myapi/v1/result
</code></pre>
|
<python><django><django-rest-framework>
|
2024-11-08 07:09:21
| 1
| 12,599
|
whitebear
|
79,168,884
| 72,437
|
Double Execution of Firebase Function in Emulator Environment
|
<p>I've noticed that when running in the Firebase emulator environment, modifying the function and making a single GET request for the first time causes the function to execute twice:</p>
<pre><code>@https_fn.on_request(
cors=options.CorsOptions(
cors_origins=["*"],
cors_methods=["GET"],
)
)
def delete(req: https_fn.Request) -> https_fn.Response:
print(f"Request Method: {req.method}")
print(f"Request Headers: {dict(req.headers)}")
</code></pre>
<p>Here’s the log output:</p>
<pre><code>> Request Method: GET
> Request Headers: {'Host': '127.0.0.1:8739', 'Connection': 'keep-alive', 'Function-Execution-Id': '3rNmdjC5Ny1v'}
i functions: Beginning execution of "us-central1-delete"
> Request Method: GET
> Request Headers: {'Host': '127.0.0.1:5001', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:132.0) Gecko/20100101 Firefox/132.0', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate, br, zstd', 'Connection': 'keep-alive', 'Upgrade-Insecure-Requests': '1', 'Sec-Fetch-Dest': 'document', 'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'none', 'Sec-Fetch-User': '?1', 'Priority': 'u=0, i', 'Function-Execution-Id': 'sTnlZgBK7dGx'}
</code></pre>
<p>It appears that the first request is not coming from my browser, suggesting the emulator might be performing some sort of warm-up or caching operation.</p>
<p>Has anyone else encountered this issue, and are there any solutions to prevent this double execution in the emulator?</p>
<p>Thanks!</p>
|
<python><firebase><google-cloud-functions><firebase-tools>
|
2024-11-08 05:15:10
| 0
| 42,256
|
Cheok Yan Cheng
|
79,168,760
| 6,298,615
|
Line does not start with any known Prisma schema keyword error on Python
|
<p>I'm trying to implement Prisma for a FastAPI project. I added the Prisma package and the <code>schema.prisma</code> file but when I run <code>prisma validate</code> it throws the error:</p>
<pre><code>Environment variables loaded from prisma\.env
Prisma schema loaded from prisma\schema.prisma
Error: Prisma schema validation - (validate wasm)
Error code: P1012
error: Error validating: This line is invalid. It does not start with any known Prisma schema keyword.
--> prisma\schema.prisma:1
|
|
1 | datasource db {
2 | url = env("DATABASE_URL")
|
error: Error validating: This line is invalid. It does not start with any known Prisma schema keyword.
--> prisma\schema.prisma:2
|
1 | datasource db {
2 | url = env("DATABASE_URL")
3 | provider = "postgresql"
|
error: Error validating: This line is invalid. It does not start with any known Prisma schema keyword.
--> prisma\schema.prisma:3
|
2 | url = env("DATABASE_URL")
3 | provider = "postgresql"
4 | }
|
error: Error validating: This line is invalid. It does not start with any known Prisma schema keyword.
--> prisma\schema.prisma:4
|
3 | provider = "postgresql"
4 | }
5 |
|
</code></pre>
<p>But I do not see why. Here is my schema:</p>
<pre class="lang-none prettyprint-override"><code>datasource db {
url = env("DATABASE_URL")
provider = "postgresql"
}
generator client {
provider = "prisma-client-py"
recursive_type_depth = 5
}
model User {
id Int @id @default(autoincrement())
email String @unique
username String @db.VarChar(12)
name String @db.VarChar(25)
lastname String @db.VarChar(25)
password String @db.VarChar(20)
}
</code></pre>
<p>I'm starting to learn prisma so I dont' know if I'm doing something wrong.</p>
|
<python><postgresql><fastapi><prisma>
|
2024-11-08 04:10:45
| 1
| 307
|
Jonathan Gómez Pérez
|
79,168,752
| 482,819
|
Creating a constrained TypeVar from Union
|
<p>Running mypy on the following code yields no issues.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
S = TypeVar("S", int, float, complex)
def func(x: list[S], m: S) -> list[S]:
return [val * m for val in x]
out1: list[int] = func([1, 2, 3], 4)
out2: list[complex] = func([1., 2., 3.], 4.)
</code></pre>
<p>But the following code</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
Number = int | float | complex
S = TypeVar("S", bound=Number)
def func(x: list[S], m: S) -> list[S]:
return [val * m for val in x]
out1: list[int] = func([1, 2, 3], 4)
out2: list[complex] = func([1., 2., 3.], 4.)
</code></pre>
<p>reports</p>
<pre><code>y.py:7: error: List comprehension has incompatible type List[Union[int, float, complex]]; expected List[S] [misc]
y.py:7: error: Unsupported operand types for * (likely involving Union) [operator]
</code></pre>
<p>To avoid mypy issues, I think I need to create a constrained TypeVar from the Union (that I am importing from another package and therefore is in principle outside of my control). Is there a way to achieve it? Is there another way (without #type: ignore) to avoid getting a mypy issue?</p>
|
<python><generics><python-typing><mypy>
|
2024-11-08 04:04:15
| 1
| 6,143
|
Hernan
|
79,167,876
| 813,946
|
IllegalCharacterError vs ValueError in openpyxl
|
<p>I need to save Excel files with <code>pandas</code> using <code>openpyxl</code>.</p>
<p>The data is coming from a database and many times the text fields has some strange characters and it raises an <code>IllegalCharacterError</code>. I have a solution for this situation to find out which sheet, which row and which column causes the problem.</p>
<p>But recently the data gave me an error like this:</p>
<pre><code>ValueError: All strings must be XML compatible: Unicode or ASCII, no NULL bytes or control characters
</code></pre>
<p>How could I turn it into the same error as the other once? Maybe with replacing some bytes?</p>
<p>The example with two texts. The upper one creating <code>ValueError</code> the lower one an <code>IllegalCharacterError</code>.</p>
<p><a href="https://gist.github.com/horvatha/0ac42748ad91118a8cf0dd7b7bb88934" rel="nofollow noreferrer">See the code in gist</a>. stackoverflow can't save the code below properly.</p>
<pre class="lang-py prettyprint-override"><code>from openpyxl import Workbook
text = "" # Including three bytes: hexadecimal FFBFBF resulting ValueError
# text = '\x16' # resulting IllegalCharacterError
wb = Workbook()
ws = wb.active
ws.append([text])
wb.save("test.xlsx")
</code></pre>
<p>The setup I use is</p>
<pre><code>openpyxl 3.1.2
OS Linux-6.11.5-200.fc40.x86_64-x86_64-with-glibc2.39 (Fedora)
Python 3.12.7 (main, Oct 1 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)]
</code></pre>
<p><a href="https://gist.github.com/horvatha/0ac42748ad91118a8cf0dd7b7bb88934" rel="nofollow noreferrer">My original problem with pandas can be also found on the same gist.</a></p>
<p>My main problem here is, that in <code>pandas</code> the exception is raised in different steps (to_excel vs writer.close) and in the second case I can't even get the text that caused the problem.</p>
<p>I even thought that I should create an issue with this on <a href="https://github.com/theorchard/openpyxl" rel="nofollow noreferrer">openpyxl's github page</a>. I think it should handle the two character encoding problems equally, raising an <code>IllegalCharacterError</code>. Does it make sense?</p>
|
<python><pandas><character-encoding><openpyxl>
|
2024-11-07 19:38:22
| 0
| 1,982
|
Arpad Horvath -- Слава Україні
|
79,167,713
| 5,676,198
|
How to create a scaler applying log transformation and MinMaxScaler in sklearn
|
<p>I want to apply <code>log()</code> to my <code>DataFrame</code> and MinMaxScaler() together.
I want the output to be a pandas DataFrame() with indexes and columns from the original data.
I want to use the parameters used to <code>fit_transform()</code> to <code>inverse_transform()</code> resulting in a new data frame. So, it needs to be constructed inside the <code>FunctionTransformer</code>.</p>
<p>What I tried:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
import numpy as np
# Initialize MinMaxScaler with range (0, 1)
scaler_logMinMax = MinMaxScaler(feature_range=(0, 1))
# Log transformation function
def log_and_scale(X, scaler=scaler_logMinMax, shift=1e-9):
X_log = np.log(X + shift) # Apply log transformation with a small shift
return pd.DataFrame(scaler.fit_transform(X_log)) # Scale the log-transformed data
# Inverse transformation: first unscale, then inverse log transform
def inv_log_and_scale(X, scaler=scaler_logMinMax, shift=1e-9):
X_unscaled = scaler.inverse_transform(X) # Inverse scaling
return np.exp(X_unscaled) - shift # Inverse of log transformation
# Create FunctionTransformer for the log and scale transformation
log_and_scale_transformer = FunctionTransformer(func=log_and_scale, inverse_func=inv_log_and_scale, validate=True)
df_subset = pd.DataFrame(
{
1: [135.2342984, 83.17136704, 23.41329775, 3.574450787],
2: [59.31328422, 18.15285711, 11.1736562, 4.788951527],
3: [45.0087282, 4.094515245, 106.536704, 527.0962651],
}
)
df_subset.columns = [1, 2, 3]
df_subset.index = ["201001", "201002", "201003", "201004"]
df_subset.index.name = "Date"
df_subset.columns.name = "id"
cols_to_apply_scaler = [1, 2]
df_subset
id 1 2 3
Date
201001 135.234298 59.313284 45.008728
201002 83.171367 18.152857 4.094515
201003 23.413298 11.173656 106.536704
201004 3.574451 4.788952 527.096265
</code></pre>
<pre class="lang-py prettyprint-override"><code># Transforming
df_subset[cols_to_apply_scaler] = pd.DataFrame(log_and_scale_transformer.fit_transform(df_subset[cols_to_apply_scaler]))
df_subset
id 1 2 3
Date
201001 NaN NaN 45.008728
201002 NaN NaN 4.094515
201003 NaN NaN 106.536704
201004 NaN NaN 527.096265
# The way that I expect to apply the inverse transformer.
# df_subset[cols_to_apply_scaler] = log_and_scale_transformer.inverse_transform(df_subset[cols_to_apply_scaler])
</code></pre>
<p><strong>Questions</strong>:</p>
<ol>
<li>The <code>pd.DataFrame(log_and_scale_transformer.fit_transform(df_subset[cols_to_apply_scaler]))</code> works, but it can't assign to the original DataFrame because the name of columns change. How to fix it?</li>
<li>How the values of <code>scaler_logMinMax</code> from <code>fit_transform()</code> were carried through the <code>inverse_transform</code>?</li>
</ol>
<p>I also tried <code>log_and_scale_transformer = log_and_scale_transformer.set_output(transform="pandas")</code> after creating the dataframe, but it did not work.</p>
<p>I need to filter the columns before applying the function.
I also want to stick with <code>FunctionTransformer</code> because I use other transformers with the same structure. For ex:</p>
<pre><code># Define the inverse transformation function with a shift
def inv_y(X, shift=0.5):
return 1 / (X + shift)
# Define the inverse inverse transformation to revert to original values
def inv_inv_y(X, shift=0.5):
return (1 - X * shift) / X
# Create the FunctionTransformer
inverse_transformer = FunctionTransformer(func=inv_y, inverse_func=inv_inv_y, validate=False, check_inverse=True)
</code></pre>
<p><strong>In summary</strong>, I cannot apply a function and a scaler together.</p>
<hr />
<p>With a different simple example, it works:</p>
<pre class="lang-py prettyprint-override"><code># DataFrame Example
X = np.array([[0, 1, 2], [2, 3, 4], [5, 7, 9]])
cols = ["A", "B", "C"]
cols_to_apply_scaler = cols[:-1]
X = pd.DataFrame(X, columns=cols, index=[0,1,2])
X
A B C
0 0 1 2
1 2 3 4
2 5 7 9
# Transforming
X[cols_to_apply_scaler] = pd.DataFrame(log_and_scale_transformer.fit_transform(X[cols_to_apply_scaler]))
A B C
0 0.000000 0.000000 2
1 0.958971 0.564575 4
2 1.000000 1.000000 9
/home/guilherme/anaconda3/envs/time_series/lib/python3.11/site-packages/sklearn/base.py:493: UserWarning: X does not have valid feature names, but FunctionTransformer was fitted with feature names
warnings.warn(
# Inverse
X[cols_to_apply_scaler] = log_and_scale_transformer.inverse_transform(X[cols_to_apply_scaler])
X
A B C
0 6.203855e-25 1.0 2
1 2.000000e+00 3.0 4
2 5.000000e+00 7.0 9
</code></pre>
<p>But I did not understand the warning. Can I fix it?</p>
|
<python><pandas><scikit-learn><data-preprocessing>
|
2024-11-07 18:41:09
| 1
| 1,061
|
Guilherme Parreira
|
79,167,711
| 1,031,417
|
How to make OpenHands (Running on Docker on macOS) to Work with AWS Bedrock?
|
<p>I'm setting up <a href="https://github.com/All-Hands-AI/OpenHands" rel="nofollow noreferrer">OpenHands</a> with AWS Bedrock on macOS using Docker, but encountering connection issues related to the Docker client and server API version. While some commands inside the container work, the main application fails with the following errors:</p>
<p><strong>Docker Run Command:</strong></p>
<pre><code>docker run -it --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE="docker.all-hands.dev/all-hands-ai/runtime:0.12-nikolaik" \
-e AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID" \
-e AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY" \
-e AWS_REGION="us-east-1" \
-e LLM_PROVIDER="bedrock" \
-e MODEL_ID="anthropic.claude-v2" \
-e BASE_URL="https://bedrock.us-east-1.amazonaws.com" \
-p 3000:3000 \
--add-host=host.docker.internal:host-gateway \
--name openhands-app-bedrock \
docker.all-hands.dev/all-hands-ai/openhands:latest
</code></pre>
<p>Successfully ran commands inside the container:</p>
<p><code>docker exec -it openhands-app-bedrock /bin/bash</code> and inside <code>ipython</code>:</p>
<pre><code>from litellm import completion
response = completion(model="anthropic.claude-v2", messages=[{"role": "user", "content": "Hello, how are you?"}])
print(response.choices[0].message.content)
# Output: I'm doing well, thanks for asking!
</code></pre>
<p><strong>Error Details:</strong></p>
<pre><code>Starting OpenHands...
Running OpenHands as root
INFO: Started server process [9]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: 192.168.65.1:57134 - "GET / HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /locales/en/translation.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /favicon.ico HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57135 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57135 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57135 - "GET /favicon.ico HTTP/1.1" 200 OK
18:19:26 - openhands:INFO: github.py:14 - Initializing UserVerifier
18:19:26 - openhands:INFO: github.py:27 - GITHUB_USER_LIST_FILE not configured
18:19:26 - openhands:INFO: github.py:48 - GITHUB_USERS_SHEET_ID not configured
18:19:26 - openhands:INFO: github.py:85 - No user verification sources configured - allowing all users
INFO: ('192.168.65.1', 57136) - "WebSocket /ws" [accepted]
18:19:26 - openhands:ERROR: auth.py:27 - Invalid token
INFO: 192.168.65.1:57135 - "GET /api/options/models HTTP/1.1" 200 OK
INFO: connection open
INFO: 192.168.65.1:57134 - "GET /api/options/agents HTTP/1.1" 200 OK
INFO: 192.168.65.1:57137 - "GET /api/options/security-analyzers HTTP/1.1" 200 OK
INFO: connection closed
INFO: 192.168.65.1:57137 - "GET /assets/end-session-CpBsKLW_.js HTTP/1.1" 200 OK
INFO: 192.168.65.1:57137 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /favicon.ico HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57134 - "GET /favicon.ico HTTP/1.1" 200 OK
18:19:28 - openhands:INFO: github.py:14 - Initializing UserVerifier
18:19:28 - openhands:INFO: github.py:27 - GITHUB_USER_LIST_FILE not configured
18:19:28 - openhands:INFO: github.py:48 - GITHUB_USERS_SHEET_ID not configured
18:19:28 - openhands:INFO: github.py:85 - No user verification sources configured - allowing all users
INFO: ('192.168.65.1', 57138) - "WebSocket /ws" [accepted]
18:19:28 - openhands:INFO: listen.py:336 - New session: aee02a3d-97c4-4c23-9df5-61094a47a0bf
INFO: connection open
18:19:28 - openhands:WARNING: codeact_agent.py:101 - Function calling not supported for model anthropic.claude-v2. Disabling function calling.
INFO: 192.168.65.1:57134 - "GET /config.json HTTP/1.1" 200 OK
18:19:28 - openhands:INFO: base.py:98 - [runtime aee02a3d-97c4-4c23-9df5-61094a47a0bf] Starting runtime with image: docker.all-hands.dev/all-hands-ai/runtime:0.12-nikolaik
18:19:28 - openhands:INFO: base.py:98 - [runtime aee02a3d-97c4-4c23-9df5-61094a47a0bf] Container started: openhands-runtime-aee02a3d-97c4-4c23-9df5-61094a47a0bf
18:19:28 - openhands:INFO: base.py:98 - [runtime aee02a3d-97c4-4c23-9df5-61094a47a0bf] Waiting for client to become ready at http://host.docker.internal:36410...
18:19:42 - openhands:INFO: base.py:98 - [runtime aee02a3d-97c4-4c23-9df5-61094a47a0bf] Runtime is ready.
18:19:42 - openhands:WARNING: state.py:118 - Failed to restore state from session: sessions/aee02a3d-97c4-4c23-9df5-61094a47a0bf/agent_state.pkl
18:19:42 - openhands:INFO: agent_controller.py:135 - [Agent Controller aee02a3d-97c4-4c23-9df5-61094a47a0bf] Starting step loop...
18:19:42 - openhands:INFO: agent_controller.py:135 - [Agent Controller aee02a3d-97c4-4c23-9df5-61094a47a0bf] Setting agent(CodeActAgent) state from AgentState.LOADING to AgentState.INIT
18:19:42 - openhands:INFO: agent_controller.py:135 - [Agent Controller aee02a3d-97c4-4c23-9df5-61094a47a0bf] Setting agent(CodeActAgent) state from AgentState.INIT to AgentState.RUNNING
18:19:42 - openhands:INFO: github.py:14 - Initializing UserVerifier
18:19:42 - openhands:INFO: github.py:27 - GITHUB_USER_LIST_FILE not configured
18:19:42 - openhands:INFO: github.py:48 - GITHUB_USERS_SHEET_ID not configured
18:19:42 - openhands:INFO: github.py:85 - No user verification sources configured - allowing all users
INFO: 192.168.65.1:57331 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57337 - "GET /config.json HTTP/1.1" 200 OK
INFO: 192.168.65.1:57330 - "GET /api/list-files HTTP/1.1" 200 OK
18:19:42 - openhands:INFO: github.py:14 - Initializing UserVerifier
18:19:42 - openhands:INFO: github.py:27 - GITHUB_USER_LIST_FILE not configured
18:19:42 - openhands:INFO: github.py:48 - GITHUB_USERS_SHEET_ID not configured
18:19:42 - openhands:INFO: github.py:85 - No user verification sources configured - allowing all users
INFO: 192.168.65.1:57337 - "GET /api/list-files HTTP/1.1" 200 OK
18:19:43 - openhands:INFO: github.py:14 - Initializing UserVerifier
18:19:43 - openhands:INFO: github.py:27 - GITHUB_USER_LIST_FILE not configured
18:19:43 - openhands:INFO: github.py:48 - GITHUB_USERS_SHEET_ID not configured
18:19:43 - openhands:INFO: github.py:85 - No user verification sources configured - allowing all users
INFO: 192.168.65.1:57330 - "GET /api/list-files HTTP/1.1" 200 OK
==============
[Agent Controller aee02a3d-97c4-4c23-9df5-61094a47a0bf] LEVEL 0 LOCAL STEP 0 GLOBAL STEP 0
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
18:19:44 - openhands:ERROR: retry_mixin.py:47 - litellm.APIConnectionError: 'output'
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2564, in completion
response = bedrock_converse_chat_completion.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_handler.py", line 421, in completion
return litellm.AmazonConverseConfig()._transform_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_transformation.py", line 390, in _transform_response
message: Optional[MessageBlock] = completion_response["output"]["message"]
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'output'
. Attempt #1 | You can customize retry values in the configuration.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
18:19:59 - openhands:ERROR: retry_mixin.py:47 - litellm.APIConnectionError: 'output'
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2564, in completion
response = bedrock_converse_chat_completion.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_handler.py", line 421, in completion
return litellm.AmazonConverseConfig()._transform_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_transformation.py", line 390, in _transform_response
message: Optional[MessageBlock] = completion_response["output"]["message"]
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'output'
. Attempt #2 | You can customize retry values in the configuration.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
18:20:14 - openhands:ERROR: retry_mixin.py:47 - litellm.APIConnectionError: 'output'
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2564, in completion
response = bedrock_converse_chat_completion.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_handler.py", line 421, in completion
return litellm.AmazonConverseConfig()._transform_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_transformation.py", line 390, in _transform_response
message: Optional[MessageBlock] = completion_response["output"]["message"]
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'output'
. Attempt #3 | You can customize retry values in the configuration.
</code></pre>
|
<python><docker><amazon-bedrock><litellm><openhands>
|
2024-11-07 18:41:04
| 1
| 41,386
|
0x90
|
79,167,704
| 1,306,892
|
How to Correct Sign Issues in Python-LaTeX Code?
|
<p>This python code</p>
<pre><code>import random
# Generate random values of k and m between 1 and 5
k = random.randint(1, 5)
m = random.randint(1, 5)
# Calculate necessary variables
m_k = m - k
m_1 = m - 1
m_plus = m + 1
k_1 = k - 1
k_plus = k + 1
# Create with the correct answer marked
answers = [
f"\\checkmark \\; \\frac{{(n+{m}) !}}{{{k} ! (n+{m_k}) !}}", # Correct answer
f"\\frac{{(n+{m_1}) !}}{{{k_1} ! (n+{m_k}) !}}", # Wrong answer
f"\\frac{{(n+{m_plus}) !}}{{{k_plus} ! (n+{m_k}) !}}", # Wrong answer
f"\\frac{{(n+{m}) !}}{{{k_plus} ! (n+{m_k}) !}}" # Wrong answer
]
# Mix the answers
random.shuffle(answers)
# Generate text in LaTeX format
latex_output = f"""
After simplifying the expression
$$\\frac{{(n+{m_1}) !}}{{{k} ! (n+{m-k-1}) !}} + \\frac{{(n+{m_1}) !}}{{{k-1}! (n+{m-k})!}}$$
indicate which of the following results is correct:
\\[
\\square \\; {answers[0]} \\quad\\quad \\square \\; {answers[1]} \\quad\\quad \\square \\; {answers[2]} \\quad\\quad \\square \\; {answers[3]}
\\]
"""
print(latex_output)
</code></pre>
<p>The code generates a math quiz in LaTeX format with a question and four multiple-choice answers, one of which is correct. It randomly selects values for <code>k</code> and <code>m</code> between 1 and 5, calculates derived variables, creates the answers using factorial expressions, and shuffles them. Finally, it prints the LaTeX-formatted text to present the question and answers.</p>
<p>The fact is that sometimes I obtain results like this:</p>
<pre><code>
After simplifying the expression
$$\frac{(n+0) !}{1 ! (n+-1) !} + \frac{(n+0) !}{0! (n+0)!}$$
indicate which of the following results is correct:
\[
\square \; \frac{(n+2) !}{2 ! (n+0) !} \quad\quad \square \; \checkmark \; \frac{(n+1) !}{1 ! (n+0) !} \quad\quad \square \; \frac{(n+1) !}{2 ! (n+0) !} \quad\quad \square \; \frac{(n+0) !}{0 ! (n+0) !}
\]
</code></pre>
<p>The things I don't like are: expressions like (n+0)! where I would prefer just n!; and the +- signs that should denote a minus sign for the product of the signs but I can't manage to change. Do you have any suggestions regarding this?</p>
|
<python><string><latex>
|
2024-11-07 18:39:17
| 1
| 1,801
|
Mark
|
79,167,500
| 7,530,850
|
Redshift query duplicates
|
<p>I'm using python with redshift_connector, and analysing the data with pandas. When accessing a redshift db with selecting <strong>n</strong> columns, I got <strong>i</strong> lines. However when I wanted to add a new column to this query, it timed out after an hour. To solve the issue, I came up with the idea to select the <strong>n+1</strong> columns, use LIMIT and OFFSET in an iterative manner to get every row. After a while it gave back <strong>i</strong> lines, but something did not add up. When I compared the results, the latter yielded a couple of duplicate rows. How can one write a query so that it would not time-out, but wouldn't give back duplicates?</p>
<p>Original mock query that won't time out:</p>
<pre><code>SELECT a, b, c
FROM table
WHERE a IN ('attribute1','attribute2')
</code></pre>
<p>Timeout:</p>
<pre><code>SELECT a, b, c, d
FROM table
WHERE a IN ('attribute1','attribute2')
</code></pre>
<p>If I put the second one in a while True loop, amend it with the LIMIT and OFFSET, use pd.read_sql(query, connection) to get the data, append it to a df list, and concat the list in the end, it gives me back the exact amount of lines that the first one, but with duplicates.</p>
|
<python><sql><amazon-web-services><amazon-redshift>
|
2024-11-07 17:23:56
| 2
| 330
|
Newl
|
79,167,346
| 8,233,873
|
Strange rendering behaviour with selection_interval
|
<p>I'm generating a plot with the following code (in an ipython notebook):</p>
<pre class="lang-none prettyprint-override"><code>import altair as alt
import pandas as pd
events = pd.DataFrame(
[
{"event": "Task A", "equipment": "SK-101", "start": 10.2, "finish": 11.3},
{"event": "Task B", "equipment": "SK-102", "start": 6.5, "finish": 10.2},
{"event": "Task C", "equipment": "SK-103", "start": 3.3, "finish": 11.3},
{"event": "Task D", "equipment": "SK-104", "start": 4.7, "finish": 5.5},
{"event": "Task E", "equipment": "SK-105", "start": 13.0, "finish": 13.2},
{"event": "Task F", "equipment": "SK-106", "start": 1.1, "finish": 7.9},
{"event": "Task G", "equipment": "SK-107", "start": 7.4, "finish": 10.0},
{"event": "Task H", "equipment": "SK-108", "start": 6.6, "finish": 7.6},
{"event": "Task I", "equipment": "SK-109", "start": 8.5, "finish": 16.7},
{"event": "Task J", "equipment": "SK-110", "start": 9.0, "finish": 12.2},
{"event": "Task K", "equipment": "SK-111", "start": 10.2, "finish": 17.3},
{"event": "Task L", "equipment": "SK-112", "start": 6.1, "finish": 9.5},
{"event": "Task M", "equipment": "SK-113", "start": 5.4, "finish": 5.8},
{"event": "Task N", "equipment": "SK-114", "start": 2.2, "finish": 8.3},
{"event": "Task O", "equipment": "SK-115", "start": 9.3, "finish": 10.6},
{"event": "Task P", "equipment": "SK-116", "start": 3.9, "finish": 12.5},
{"event": "Task Q", "equipment": "SK-117", "start": 11.1, "finish": 16.6},
{"event": "Task R", "equipment": "SK-118", "start": 14.4, "finish": 18.4},
{"event": "Task S", "equipment": "SK-119", "start": 19.2, "finish": 19.9},
{"event": "Task T", "equipment": "SK-120", "start": 13.8, "finish": 16.7},
{"event": "Task U", "equipment": "SK-102", "start": 12.0, "finish": 13.0},
{"event": "Task V", "equipment": "SK-106", "start": 10.2, "finish": 17.3},
{"event": "Task W", "equipment": "SK-108", "start": 12.8, "finish": 14.9},
{"event": "Task X", "equipment": "SK-110", "start": 12.6, "finish": 18.9},
{"event": "Task Y", "equipment": "SK-112", "start": 13.3, "finish": 18.3},
{"event": "Task Z", "equipment": "SK-114", "start": 8.6, "finish": 19.2},
]
)
brush = alt.selection_interval(encodings=["y"])
minimap = (
alt.Chart(events)
.mark_bar()
.add_params(brush)
.encode(
x=alt.X("start:Q", title="", axis=alt.Axis(labels=False, tickSize=0)),
x2="finish",
y=alt.Y("equipment:N", title="", axis=alt.Axis(labels=False, tickSize=0)),
color=alt.condition(brush, "event", alt.value("lightgray")),
)
.properties(
width=100,
height=400,
title="minimap"
)
)
detail = (
alt.Chart(events)
.mark_bar()
.encode(
x=alt.X("start:Q", title="time (hr)"),
x2="finish",
y=alt.Y("equipment:N").scale(domain={"param": brush.name, "encoding": "y"}),
color="event",
)
.properties(width=600, height=400, title="Equipment Schedule")
)
detail | minimap
</code></pre>
<p>The idea is that the minimap plot on the right is used to zoom/pan the y-axis of the main plot.
When I zoom or pan with the mouse in the minimap, I see strange artifacts at the top of the main plot; it looks like all bars above the selected area get rendered at the top of the plot.</p>
<p>Is this something I'm doing wrong, or is it some sort of rendering bug?</p>
<p>Zoomed out:
<a href="https://i.sstatic.net/rEmAAHTk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEmAAHTk.png" alt="zoomed out" /></a></p>
<p>Zoomed in:
<a href="https://i.sstatic.net/UDDJ7jQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDDJ7jQE.png" alt="zoomed in" /></a></p>
|
<python><altair>
|
2024-11-07 16:42:56
| 1
| 313
|
multipitch
|
79,167,344
| 1,769,327
|
Create a PDF/UA compliant PDF from an ODT with LibreOffice & Python UNO
|
<p>Please consider the following example:</p>
<pre><code>import uno
def create_pdf(file_url):
local_context = uno.getComponentContext()
resolver = local_context.ServiceManager.createInstanceWithContext(
"com.sun.star.bridge.UnoUrlResolver", local_context)
context = resolver.resolve(
"uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
desktop = context.ServiceManager.createInstanceWithContext(
"com.sun.star.frame.Desktop", context)
document = desktop.loadComponentFromURL(file_url, "_blank", 0, ())
# TODO: Create a Universal Accessibility (PDF/UA) PDF
</code></pre>
<p>I need to complete it to create a PDF/UA compliant PDF.</p>
<p>If I was doing it from the document in Writer, the steps would be:</p>
<ol>
<li>Select <strong>File</strong> > <strong>Export As</strong> > <strong>Export as PDF...</strong></li>
<li>The <strong>PDF Options</strong> dialog opens.</li>
<li>Check <strong>Universal Accessibility (PDF/UA)</strong> checkbox.</li>
<li>Click the <strong>Export</strong> button.</li>
</ol>
<p>I'm struggling to find applicable examples. Does anyone know how to code this, please?</p>
<p>Thanks</p>
<p>I</p>
|
<python><libreoffice><libreoffice-writer>
|
2024-11-07 16:42:21
| 1
| 631
|
HapiDaze
|
79,167,208
| 4,100,253
|
Passing parameters in Quarto from Python to Typst or from Typst to Python
|
<p>I need to create a few sections in a loop. Is there any way to pass <code>_</code> as index to Python part, or increment Python <code>ix</code> variable inside the Typst loop?</p>
<pre><code>```{python}
ix = 0
```
```{=typst}
#for _ in range(`{python} len(tables)`) {
[
`{python} tables[ix]`
]
}
```
</code></pre>
|
<python><quarto><typst>
|
2024-11-07 15:50:20
| 1
| 718
|
Marek
|
79,167,165
| 2,915,050
|
How to use dot and square bracket notation as a string key to access a nested dictionary/list structure
|
<p>Let's say I have a nested dictionary that looks something like this:</p>
<pre><code>{
"a": 1,
"b": 2,
"c": 3,
"d": {
"e": 4,
"f": 5
},
"g": [{
"h": 6,
"i": 7
},
{
"h": 8,
"i": 9
}]
}
</code></pre>
<p>However, I have been provided with a string key in dot notation form. If the key is, for example, <code>key = "d.e"</code>, how can I use this to get the value <code>4</code> from the above dictionary? Similarly, for a nested object which is an array of dicts, I would like to do the same to access <code>g[0].h</code> and <code>g[1].h</code>.</p>
<p>I do not want to convert the dictionary as in <a href="https://stackoverflow.com/questions/2352181/how-to-use-a-dot-to-access-members-of-dictionary">How to use a dot "." to access members of dictionary?</a>, I want to preserve the use of the string as the key to access the objects.</p>
<p>I have tried using the <code>reduce</code> function as in <a href="https://stackoverflow.com/questions/12414821/checking-a-nested-dictionary-using-a-dot-notation-string-a-b-c-d-e-automatica">Checking a nested dictionary using a dot notation string "a.b.c.d.e", automatically create missing levels</a> by OP or <a href="https://stackoverflow.com/questions/31033549/nested-dictionary-value-from-key-path">Nested dictionary value from key path</a>, but it does not handle nested arrays.</p>
|
<python><dictionary><dot-notation>
|
2024-11-07 15:35:47
| 4
| 1,583
|
RoyalSwish
|
79,166,784
| 6,378,557
|
Using a LSP server for Python with GObject Introspection bindings
|
<p>I am writing code that heavily used GObject Introspection, so the code starts with:</p>
<pre><code>import gi
from gi.repository import GObject
from gi.repository import GLib
# .. and more of the same
</code></pre>
<p>Using the basic LSP-python server (*) with my editor I don't get any meaningful help for these (I do get useful help for regular Python libs, so the LSP works). What should I add and where to have all these libs added?</p>
<p>Bonus points: some of these libraries are in a GI_TYPELIB_PATH extension.</p>
<p>(*) python3-pylsp 1.10.0-1 on Ubuntu 24.04</p>
|
<python><language-server-protocol><gobject-introspection>
|
2024-11-07 13:58:50
| 0
| 9,122
|
xenoid
|
79,166,740
| 2,245,024
|
Efficiently reading part of partitioned dataset
|
<p>I have pretty big (up to ~300Gb) datasets stored by partitions in parquet format (compressed).</p>
<p>I'm trying to find an efficient way to read parts (as defined by a set of filters) of the dataset into pandas.
The way it's done now is</p>
<pre class="lang-none prettyprint-override"><code>result = ds.dataset(dataset_storage_root, format="parquet", partitioning='hive').scanner(
columns=columns,
filter=filters
).to_table().to_pandas()
</code></pre>
<p>Although this works, it's rather slow (I assume this is due to the fact it actually reads the full dataset and only then applies filters, iterating over each row).
And by rather slow I mean ~13 seconds, which is OK given the dataset size, but ridiculous given the actual amount of data I need to retrieve.</p>
<p>Manually determining the subfolder for the data and reading just this part takes ~9ms for comparison. The downside is I need to manually add partition columns and values and handle quite a few corner cases with filtering and schemas.</p>
<p><strong>I would imagine there must be a way to do this via API and efficiently at that.</strong></p>
<p>What I've already tried to my disappointment:</p>
<pre><code>df_pandas = pd.read_parquet(dataset_storage_root, engine="pyarrow", filters=filters)
</code></pre>
<p>Takes 1m 23s</p>
<pre><code>df_pq = pq.read_table(dataset_storage_root, filters=filters)
</code></pre>
<p>Takes 1m 22s</p>
<p>These take insane amount of time despite the <a href="https://github.com/apache/arrow/issues/17944" rel="nofollow noreferrer">claim</a> is it should only read the subset specified by filters.</p>
|
<python><pandas><parquet><partitioning><pyarrow>
|
2024-11-07 13:46:25
| 3
| 355
|
Nik
|
79,166,537
| 3,423,768
|
Filter Related Objects in DRF Serializer Based on User's Permission
|
<p>I’m working with Django Rest Framework and need to filter related objects in a serializer based on custom user permissions. Specifically, I want to conditionally include or exclude certain related objects (in this case, comments) in the serialized response depending on the user's relationship to the primary object (a blog post).</p>
<p>For example, a user should see all comments if they have special permissions (such as being the owner or a designated collaborator on the blog post). Otherwise, they should only see comments that meet certain criteria (e.g., approved comments).</p>
<p>I've come up with this solution for now, but I'm unsure if it's the most efficient approach. How should I address this issue?</p>
<pre class="lang-py prettyprint-override"><code># models.py
from django.db import models
from django.contrib.auth.models import User
class Blog(models.Model):
title = models.CharField(max_length=100)
content = models.TextField()
owner = models.ForeignKey(User, related_name="owned_posts", on_delete=models.CASCADE)
writers = models.ManyToManyField(User, related_name="written_posts")
class Comment(models.Model):
post = models.ForeignKey(Blog, related_name="comments", on_delete=models.CASCADE)
approved = models.BooleanField(default=False)
text = models.TextField()
</code></pre>
<pre class="lang-py prettyprint-override"><code># serializers.py
from rest_framework import serializers
class CommentSerializer(serializers.ModelSerializer):
class Meta:
model = Comment
fields = '__all__'
class BlogSerializer(serializers.ModelSerializer):
comments = serializers.SerializerMethodField()
class Meta:
model = Blog
fields = '__all__'
def get_comments(self, obj):
user = self.context['request'].user
if obj.owner == user or user in obj.writers.all():
# Show all comments if the user is the owner or a writer
comments = obj.comments.all()
else:
# Otherwise, show only approved comments
comments = obj.comments.filter(approved=True)
return CommentSerializer(comments, many=True).data
</code></pre>
<pre class="lang-py prettyprint-override"><code># views.py
class BlogViewSet(viewsets.ModelViewSet):
serializer_class = BlogSerializer
permission_classes = (MainModelPermissions,)
pagination_class = LimitOffsetPagination
allowed_methods = ("list", "retrieve")
def get_queryset(self):
return Blog.objects.all()
</code></pre>
|
<python><django><django-rest-framework>
|
2024-11-07 12:53:52
| 1
| 2,928
|
Ravexina
|
79,166,266
| 9,609,901
|
Why is Python much faster than Dart for file read/write operations?
|
<p>I am testing file read/write performance in both Python and Dart, and I encountered a surprising result: Python is significantly faster than Dart for these operations. Here are the times I recorded:</p>
<p>Python:</p>
<ul>
<li>Write time: 10.28 seconds</li>
<li>Read time: 4.88 seconds</li>
<li>Total time: 15.16 seconds</li>
</ul>
<p>Dart:</p>
<ul>
<li>Write time: 79 seconds</li>
<li>Read time: 10 seconds</li>
<li>Total time: 90 seconds</li>
</ul>
<p>I expected Dart to perform similarly or even better than Python in this context, so I'm puzzled by these results. I’m running both tests on the same system (Mac) with similar code structures to ensure a fair comparison.</p>
<p>Questions:</p>
<ol>
<li>Are there any known reasons for this significant performance difference in file handling between Python and Dart?</li>
<li>Could specific libraries, encoding formats, or other system-level factors in Python or Dart affect file I/O speed?</li>
<li>Are there optimization techniques in Dart to improve file read/write performance?</li>
</ol>
<p>I would appreciate any insights or suggestions on what might be causing this discrepancy and how to potentially optimize Dart's file I/O performance.</p>
<p>Here's the Python code:</p>
<pre class="lang-py prettyprint-override"><code>def benchmark(cnt=200):
block_size = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" * (1024 * 1024)
file_path = "large_benchmark_test.txt"
start_time = time.time()
with open(file_path, "w") as file:
for _ in range(cnt):
file.write(block_size)
write_end_time = time.time()
with open(file_path, "r") as file:
while file.read(1024):
pass
read_end_time = time.time()
write_time = write_end_time - start_time
read_time = read_end_time - write_end_time
total_time = read_end_time - start_time
print(f"Python - Write: {write_time:.2f} s")
print(f"Python - Read: {read_time:.2f} s")
print(f"Python - Total: {total_time:.2f} s")
os.remove(file_path)
</code></pre>
<p>And the Dart code:</p>
<pre class="lang-dart prettyprint-override"><code>void benchmark({int cnt=200}) {
final blockSize = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' * 1024 * 1024;
final filePath = 'large_benchmark_test.txt';
final file = File(filePath);
final writeStartTime = DateTime.now();
final sink = file.openSync(mode: FileMode.write);
for (int i = 0; i < cnt; i++) {
sink.writeStringSync(blockSize);
}
sink.closeSync();
final writeEndTime = DateTime.now();
final writeTime = writeEndTime.difference(writeStartTime).inSeconds;
print("Dart (Synch) - Write: $writeTime s");
final readStartTime = DateTime.now();
final reader = file.openSync(mode: FileMode.read);
while (true) {
final buffer = reader.readSync(1024);
if (buffer.isEmpty) break;
}
reader.closeSync();
final readEndTime = DateTime.now();
final readTime = readEndTime.difference(readStartTime).inSeconds;
final totalTime = readEndTime.difference(writeStartTime).inSeconds;
print("Dart (Synch) - Read: $readTime s");
print("Dart (Synch) - Total: $totalTime s");
file.deleteSync();
}
</code></pre>
|
<python><dart>
|
2024-11-07 11:36:11
| 1
| 568
|
Don Coder
|
79,166,231
| 1,488,383
|
Detect function calls originating from multi-line strings
|
<p>I would like to determine whether a function is called from inside a multi-line string (during string interpolation).</p>
<p>Here is a test example for such a function:</p>
<pre class="lang-py prettyprint-override"><code>s = f"""\
{inside_multiline_string()}"""
assert s == "True"
assert not inside_multiline_string()
</code></pre>
<p>Ideally this would work not by manually parsing the source code, but rather rely on some built-in introspection capability of Python. So that it works robustly.</p>
<p>Additionally it should work transitively, i.e. if the parent call is inside a multi-line string, it should still be marked as inside a multi-line string.</p>
<p>Here is a more complicated test example</p>
<pre class="lang-py prettyprint-override"><code>def foo():
return inside_multiline_string()
def bar():
return foo()
s = f"""\
{bar()}"""
assert s == "True"
</code></pre>
|
<python><reflection>
|
2024-11-07 11:28:48
| 1
| 606
|
Anton
|
79,166,072
| 16,405,935
|
Cannot read files with different paths
|
<p>I'm trying to read csv file based on date in file name. Below is my code:</p>
<pre><code>import pandas as pd
import numpy as np
from pathlib import Path
import glob
import io
date_6='2024-05-15'
date_6_1 = '*' + date_6[:4] + date_6[5:7] + date_6[8:10] + '.csv'
for file in glob.glob(r'C:\Users\admin\Báo cáo ngày' and date_6_1):
df = pd.read_csv(file)
df
</code></pre>
<p>It worked well if I put data in same folder with Jupyter notebook, but if I changed Path of file, it did not work. For example.</p>
<pre><code>import pandas as pd
import numpy as np
from pathlib import Path
import glob
import io
date_6='2024-05-15'
date_6_1 = '*' + date_6[:4] + date_6[5:7] + date_6[8:10] + '.csv'
for file in glob.glob(r'G:\New folder' and date_6_1):
df = pd.read_csv(file)
df
</code></pre>
<p>I don't know what wrong with it.
What should I change in this code?</p>
|
<python><pandas><glob>
|
2024-11-07 10:45:27
| 0
| 1,793
|
hoa tran
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.