| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,089,538
| 7,585,973
|
How to crosstab distance in pandas
|
<p>Here's my Input</p>
<pre><code>City Longitude Latitude
A 2 2
B 5 6
C 8 10
</code></pre>
<p>Here's my output, calculated using the Pythagorean formula:</p>
<pre><code> A B C
A 0 5 10
B 5 0 5
C 10 5 0
</code></pre>
<p>I'm only using Euclidean distance; writing the formula out directly is still fine.</p>
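<p>A minimal sketch of one possible approach (an assumption, not from the original post), using SciPy's <code>cdist</code> to build the pairwise Euclidean distance matrix and wrap it as a crosstab-style DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from scipy.spatial.distance import cdist

df = pd.DataFrame({'City': ['A', 'B', 'C'],
                   'Longitude': [2, 5, 8],
                   'Latitude': [2, 6, 10]})

coords = df[['Longitude', 'Latitude']].to_numpy()
# cdist computes Euclidean distance by default
dist = pd.DataFrame(cdist(coords, coords),
                    index=df['City'], columns=df['City'])
print(dist)
</code></pre>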
|
<python><pandas>
|
2023-01-11 22:10:39
| 1
| 7,445
|
Nabih Bawazir
|
75,089,386
| 20,107,918
|
DataFrame cleanup: join/concatenate non-NaN values when there is NaN in-between rows (spin-off from row with 'unique' record)
|
<p>I created a data-capturing template. When imported into Python (as DataFrame), I noticed that some of the records spanned multiple rows.<br />
I need to clean up the spanned record (see expected representation).</p>
<p>The 'Entity' column is the anchor column. Currently, it is not the definitive column, as one can see the row(s) underneath with NaN.
NB: Along the line, I'll be dropping the 'Unnamed:' column.<br />
Essentially, for every row where <code>df.Entity.isnull()</code>, the value(s) must be joined to the row above where <code>df.Entity.notnull()</code>.</p>
<p><em><code>NB: I can adjust the source; however, I'd like to keep the source template because of ease of capturing and #reproducibility.</code></em></p>
<p>[dummy representation]</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Unnamed: 0</th>
<th>Entity</th>
<th>Country</th>
<th>Naming</th>
<th>Type:</th>
<th>Mode:</th>
<th>Duration</th>
<th>Page</th>
<th>Reg</th>
<th>Elig</th>
<th>Unit</th>
<th>Structure</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>6.</td>
<td>Sur...</td>
<td>UK</td>
<td>...Publ</td>
<td>Pros...</td>
<td>FT</td>
<td>Standard</td>
<td>Yes</td>
<td>Guid... 2021</td>
<td>General</td>
<td>All</td>
<td>Intro & discussion...</td>
<td>Formal</td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>Assessment</td>
</tr>
<tr>
<td>7.</td>
<td>War...</td>
<td>UK</td>
<td></td>
<td>by Publ...</td>
<td>Retro...</td>
<td>FT</td>
<td>1 yr</td>
<td>Yes</td>
<td>Reg 38...</td>
<td>Staff</td>
<td>All</td>
<td>Covering Doc...</td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>General</td>
<td>NaN</td>
<td>5000 <10000</td>
<td>3-8 publ...</td>
</tr>
<tr>
<td>8.</td>
<td>EAng...</td>
<td>UK</td>
<td>Publ...</td>
<td>Retro...</td>
<td>PT</td>
<td>6 mths</td>
<td>Yes</td>
<td>Reg...</td>
<td>General (Cat B)</td>
<td>All</td>
<td>Critical Anal...</td>
<td>Formal as</td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>Staff (Cat A)</td>
<td>NaN</td>
<td>15000</td>
<td>*g...</td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>Edu & LLL l...</td>
<td>NaN</td>
<td>NaN</td>
<td>LLL not...</td>
</tr>
</tbody>
</table>
</div><hr />
<p>[expected representation] I expect to have</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Unnamed</th>
<th>Entity</th>
<th>Country</th>
<th>Naming</th>
<th>Type:</th>
<th>Mode:</th>
<th>Duration</th>
<th>Page</th>
<th>Reg</th>
<th>Elig-------------</th>
<th>Unit</th>
<th>Structure -----</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>6.</td>
<td>Sur...</td>
<td>UK</td>
<td>...Publ</td>
<td>Pros...</td>
<td>FT</td>
<td>Standard</td>
<td>Yes</td>
<td>Guid... 2021</td>
<td>General</td>
<td>All</td>
<td>Intro & discussion...</td>
<td>Formal <br/> Assessment</td>
</tr>
<tr>
<td>7.</td>
<td>War...</td>
<td>UK</td>
<td>by Publ...</td>
<td>Retro...</td>
<td>FT</td>
<td>1 yr</td>
<td>Yes</td>
<td>Reg 38...</td>
<td>Staff <br/> General</td>
<td>All</td>
<td>Covering Doc... <br> 5000 <10000</td>
<td>Formal <br/>3-8 publ...</td>
</tr>
<tr>
<td>8.</td>
<td>EAng...</td>
<td>UK</td>
<td>Publ...</td>
<td>Retro...</td>
<td>PT</td>
<td>6 mths</td>
<td>Yes</td>
<td>Reg...</td>
<td>General (Cat B) <br>Staff (Cat A) <br/>Edu & LLL l...</td>
<td>All</td>
<td>Critical Anal... <br/>15000</td>
<td>Formal as <br/>*g... <br>LLL not...</td>
</tr>
</tbody>
</table>
</div><hr />
<p>My instinct is to test for isnull() on the [Entity] column. I would prefer not to do an if...then/'loop' check.<br />
My mind wandered to stack, groupby, merge/join, pop. I'm not sure these approaches are 'right'.</p>
<p>My preference would be some 'vectorisation' as much as possible, taking advantage of pandas' DataFrame.</p>
<p>I took note of</p>
<ul>
<li><a href="https://stackoverflow.com/questions/44799264/merging-two-rows-one-with-a-value-the-other-nan-in-pandas">Merging Two Rows (one with a value, the other NaN) in Pandas</a></li>
<li><a href="https://stackoverflow.com/questions/49034202/pandas-dataframe-merging-rows-to-remove-nan">Pandas dataframe merging rows to remove NaN</a></li>
<li><a href="https://stackoverflow.com/questions/23444858/concatenate-column-values-in-pandas-dataframe-with-nan-values">Concatenate column values in Pandas DataFrame with "NaN" values</a></li>
<li><a href="https://stackoverflow.com/questions/72718420/merge-df-of-different-sizes-with-nan-values-in-between">Merge DF of different Sizes with NaN values in between</a></li>
</ul>
<p>In my case, my anchor column [Entity] has the key value on one row; however, the related values are either on that row or span multiple rows.<br />
NB: I'm dealing with one DataFrame and not two df.</p>
<p>I should also mention that I took note of the SO solution that 'explodes' newlines across multiple rows. That is the opposite of my scenario; however, I note it as it might provide hints.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/55616994/pandas-how-to-read-a-dataframe-from-excel-file-where-multiple-rows-are-sometime">Pandas: How to read a DataFrame from excel-file where multiple rows are sometimes separated by line break (\n)</a></li>
</ul>
<p>[UPDATE: Workaround 1]<br />
<em>NB: This workaround is not a solution. It simply provides an alternative!</em><br />
With leads from a Medium post and an SO post,</p>
<p>I attempted, with success, to read my dataset directly from the table in the Word document. For this, I installed the <code>python-docx</code> library.</p>
<pre class="lang-py prettyprint-override"><code>## code snippet; #Note: incomplete
from docx import Document as docs
... ...
document = docs("datasets/IDA..._AppendixA.docx")
def read_docx_table(document, tab_id: int = None, nheader: int = 1, start_row: int = 0):
... ...
data = [[cell.text for cell in row.cells] for i, row in enumerate(table.rows)
if i >= start_row]
... ...
if nheader == 1: ## first row as column header
df = df.rename(columns=df.iloc[0]).drop(df.index[0]).reset_index(drop=True)
... ...
return df
... ...
## parse and show dataframe
df_table = read_docx_table(document, tab_id=3, nheader=1, start_row=0)
df_table
</code></pre>
<p>The rows are no longer spilling over multiple rows. The columns with newlines now show the '\n' character.
I can, if I use <code>df['col'].str.replace()</code>, remove newlines '\n' or other delimiters, if I so desire.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/14345739/replacing-part-of-string-in-python-pandas-dataframe">Replacing part of string in python pandas dataframe</a></li>
</ul>
<p>[dataframe representation: importing and parsing using python-docx library] Almost a true representation of the original table in Word</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Unnamed</th>
<th>Entity</th>
<th>Country</th>
<th>Naming</th>
<th>Type:</th>
<th>Mode:</th>
<th>Duration</th>
<th>Page</th>
<th>Reg</th>
<th>Elig-------------</th>
<th>Unit</th>
<th>Structure -----</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>6.</td>
<td>Sur...</td>
<td>UK</td>
<td>...Publ</td>
<td>Pros...</td>
<td>FT</td>
<td>Standard</td>
<td>Yes</td>
<td>Guid... 2021</td>
<td>General</td>
<td>All</td>
<td>Intro & discussion...</td>
<td>Formal \n| Assessment</td>
</tr>
<tr>
<td>7.</td>
<td>War...</td>
<td>UK</td>
<td>by Publ...</td>
<td>Retro...</td>
<td>FT</td>
<td>1 yr</td>
<td>Yes</td>
<td>Reg 38...</td>
<td>Staff \nGeneral</td>
<td>All</td>
<td>Covering Doc... \n| 5000 <10000</td>
<td>Formal \n| 3-8 publ...</td>
</tr>
<tr>
<td>8.</td>
<td>EAng...</td>
<td>UK</td>
<td>Publ...</td>
<td>Retro...</td>
<td>PT</td>
<td>6 mths</td>
<td>Yes</td>
<td>Reg...</td>
<td>General (Cat B) <br>Staff (Cat A) \nEdu & LLL l...</td>
<td>All</td>
<td>Critical Anal... \n| 15000</td>
<td>Formal as \n|*g... \n| LLL not...</td>
</tr>
</tbody>
</table>
</div><hr />
<p>[UPDATE 2]
After my <em>update: workaround 1</em>, I saw @J_H's comments. Whilst it is not 'data corruption' in the true sense, it is nonetheless an ETL strategy issue. Thanks @J_H. Absolutely, well-thought-through #design is of the essence.<br />
Going forward, I'll either leave the source template practically as-is, with minor modifications, and use <code>python-docx</code> as I have here; or<br />
I'll modify the source template for easy capture in an Excel- or 'csv'-type repository.</p>
<p>Regardless of the two approaches outlined here (or any other), I'm still keen on 'data cleaning' code that can clean up the df to give the expected df.</p>
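<p>For reference, a minimal sketch of the kind of clean-up I have in mind (an assumption on my part, not a verified solution; <code>df</code> is the imported DataFrame): count the non-NaN anchor rows to build a group key, then join the non-NaN pieces of every column per group.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# group key: each non-NaN 'Entity' starts a new group; NaN rows attach to the row above
group_key = df['Entity'].notna().cumsum()

cleaned = (df.groupby(group_key)
             .agg(lambda s: ' '.join(s.dropna().astype(str)))
             .reset_index(drop=True))
</code></pre>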
|
<python><dataframe><join><vectorization><nan>
|
2023-01-11 21:53:14
| 0
| 399
|
semmyk-research
|
75,089,175
| 154,154
|
Accessing Minio with a self signed certificate and the Python client library
|
<p>We have an instance of MinIO running with a certificate that is signed by our corporate CA. Accessing it with S3 Browser works perfectly. Now I am trying to write a Python script to upload files. I try to use the Windows cert store to get my CA certs:</p>
<pre><code>import ssl
import urllib3
from minio import Minio

myssl = ssl.create_default_context()
myhttpclient = urllib3.PoolManager(
    cert_reqs='CERT_REQUIRED',
    ca_certs=myssl.get_ca_certs()
)
s3dev = Minio("s3dev.mycorp.com:9000",
              access_key="myAccessKey",
              secret_key="mySecretKey",
              secure=True,
              http_client=myhttpclient
)
</code></pre>
<p>I get the error "TypeError: unhashable type: 'list'".</p>
<p>Getting the CA certs from the Windows cert store with ssl.get_ca_certs() returns a list with all the certs in it, which seems logical to me. What am I missing here to get something this simple to work?</p>
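<p>For reference, a minimal sketch of what I suspect is expected instead (an assumption, not a verified fix): urllib3's <code>ca_certs</code> takes a <em>path</em> to a PEM bundle file rather than a list of certificates, so something like the following, with a hypothetical bundle path:</p>
<pre class="lang-py prettyprint-override"><code>import urllib3
from minio import Minio

myhttpclient = urllib3.PoolManager(
    cert_reqs='CERT_REQUIRED',
    ca_certs=r'C:\certs\corporate-ca-bundle.pem'  # hypothetical path to an exported CA bundle
)
s3dev = Minio("s3dev.mycorp.com:9000",
              access_key="myAccessKey",
              secret_key="mySecretKey",
              secure=True,
              http_client=myhttpclient)
</code></pre>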
|
<python><python-3.x><minio><urllib3>
|
2023-01-11 21:27:27
| 0
| 753
|
nojetlag
|
75,088,921
| 7,875,444
|
VS CODE and python import path confusion
|
<p>I'm trying to understand the appropriate settings for vs code so that I can import local .py files from different sub directories in my vs code project. I ended up solving my problem by adding sub directories I need to import from to <code>PYTHONPATH</code> in my <code>.env</code> file like so: <code>PYTHONPATH = "dir_relative_path1;dir_relative_path2..."</code>. However I also tried fixing my problem by adding directories to <code>extrapaths</code> in my <code>.env</code> like so: <code>extrapaths = ["dir_relative_path1", "dir_relative_path2", ...]</code> but this didn't work.</p>
<p>On top of that I also tried changing <code>python.analysis.extraPaths</code> in my <code>settings.json</code> file like so <code>python.analysis.extraPaths= ["dir_relative_path1", "dir_relative_path2", ...]</code> but this also didn't work.</p>
<p>I was hoping someone could clarify the different usages of <code>settings.json</code> vs <code>.env</code>, and <code>PYTHONPATH</code> vs <code>extrapaths</code>, etc., in the context of how VS Code knows where to look for imports.</p>
|
<python><visual-studio-code>
|
2023-01-11 20:56:11
| 1
| 338
|
ablanch5
|
75,088,743
| 4,150,078
|
How to enumerate keys from list and get values without hardcoding keys?
|
<p>How can I enumerate keys from a list and get values without hard-coding the keys? <code>my_list</code> contains tuples, and I am trying to generate a dictionary based on the position of each tuple in the list. <code>num</code> in <code>enumerate</code> gives numbers like 0, 1, 2, etc.</p>
<pre><code>my_list = [(1,2),(2,3),(4,5),(8,12)]
my_list
di = {'0':[],'1':[]} #manually - how to automate without specifying keys from the enumerate function?
for num,i in enumerate(my_list):
di['0'].append(i[0])
di['1'].append(i[0])
print(di) # {'0': [1, 2, 4, 8], '1': [1, 2, 4, 8]}
</code></pre>
<p>Output - How do I get this result?</p>
<pre><code>di = {'0':[(1,2)],
'1':[(2,3)],
'2':[(4,5)],
'3':[(8,12)]}
</code></pre>
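<p>For reference, a minimal sketch of one way this could be done (an assumption, not a confirmed answer), building the keys from <code>enumerate</code> with a dict comprehension:</p>
<pre class="lang-py prettyprint-override"><code>my_list = [(1, 2), (2, 3), (4, 5), (8, 12)]

# keys come from enumerate, so nothing is hard-coded
di = {str(num): [tup] for num, tup in enumerate(my_list)}
print(di)  # {'0': [(1, 2)], '1': [(2, 3)], '2': [(4, 5)], '3': [(8, 12)]}
</code></pre>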
|
<python><python-3.x><dictionary>
|
2023-01-11 20:35:05
| 2
| 2,158
|
sharp
|
75,088,724
| 10,401,171
|
Managing optional dependencies in __init__.py
|
<p>I am developing a python package K (so it has an <code>__init__.py</code>).</p>
<p>The package contains different sub-packages, each about a different part of my work. Let us call one of these M (so it also has its own <code>__init__.py</code>).</p>
<p>Now M has 2 modules, A and B, each containing one or more functions (how many is not important), but with a difference: all functions in A depend on an optional dependency <em>opt_dep_A</em>, and analogously those in B on <em>opt_dep_B</em>.</p>
<p>Both optional dependencies can be installed when installing K with pip as e.g. <code>pip install 'K[opt_dep_A]'</code>.</p>
<p>Now I am looking to make the user experience "modular", meaning that if the user is interested in the B functions he/she should be able to use them (provided opt_dep_B is installed) without worrying about A.</p>
<p>Here's the problem: A belongs to the same sub-package as B (because their functions relate to the same high-level "group").</p>
<p>How can I deal with this clash when importing/exposing from K's or M's <code>__init__.py</code>?</p>
<p>In both cases, when I do e.g. from <code>M/__init__.py</code></p>
<pre><code>from .A import func_A
from .B import func_B
</code></pre>
<p>if <em>opt_dep_A</em> is not installed, but the user does <code>from K import func_B</code> or <code>from M import func_B</code>, then any caught/uncaught ImportError or raised warning from the A module will get triggered, even though I never wanted to import anything from A.</p>
<p>I'd still like A and B to belong to the same level, though: how to maintain the modularity and still keep the same package structure? Is it possible?</p>
<p>I tried try/except clauses, <code>importlib.util.find_spec</code>, but the problem is really the fact that I am importing one part from the same level.</p>
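<p>For illustration, a minimal sketch of one pattern I have been considering (an assumption, not necessarily best practice): a lazy, per-attribute import in <code>M/__init__.py</code> via PEP 562's module <code>__getattr__</code>, so that importing <code>func_B</code> never touches module A.</p>
<pre class="lang-py prettyprint-override"><code># M/__init__.py (sketch)
def __getattr__(name):
    if name == "func_A":
        from .A import func_A  # ImportError surfaces only when func_A is actually requested
        return func_A
    if name == "func_B":
        from .B import func_B
        return func_B
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
</code></pre>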
|
<python><python-3.x><import><dependency-management>
|
2023-01-11 20:33:18
| 1
| 446
|
Michele Peresano
|
75,088,655
| 12,764,795
|
How to calculate the areas under the spikes (interference) of a curve in Python
|
<p><strong>What I'm trying to do</strong></p>
<p>I got an <code>np.array</code> with frequencies (x) and an <code>np.array</code> with the signal strength / power spectral density (y). The signal without any noise looks similar to a logarithmic curve, but might have a slightly different form, depending on the data.</p>
<p>The signal has different interferences that are visible as spikes, some of those spikes are overlapping each other. I do need to calculate the area under each spike. If they are overlapping I need to calculate the area of each of them individually. (Ideally adjusting for the constructive interference of two overlapping spikes, or splitting the areas where the spikes intersect.)</p>
<p><strong>What I've tried so far:</strong></p>
<p>I've tried to get the peaks and the "bottom" width of each spike. However, this quite often failed if the spikes were too wide or overlapped with each other.</p>
<p>I've tried to use different curve-fitting algorithms or filters to get a second curve that represents the signal without interference; I wanted to overlay this curve below the original curve and try to get the areas under the spikes that way. But I didn't get a curve that looked similar to the original signal without the interference.</p>
<p><strong>Example Curve</strong></p>
<p>In the image you can see the <code>data</code> curve, and a <code>model1</code> curve, which is a curve that I tried to fit to the <code>data</code> curve. I drew a yellow curve, which is how a clean signal should look in this example. I also shaded one of the spike areas pink, which represents one of the areas I need to calculate.</p>
<p><a href="https://i.sstatic.net/2nR6i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2nR6i.png" alt="enter image description here" /></a></p>
<p><strong>What my data looks like</strong></p>
<p>The frequency array looks similar to this (starting from 0.5):</p>
<pre><code> [ 0.5 ... 79.5 80. 80.5 81. 81.5 82. 82.5 83. 83.5 84.
84.5 85. 85.5 86. 86.5 87. 87.5 88. 88.5 89. 89.5 90.
90.5 91. 91.5 92. 92.5 93. 93.5 94. 94.5 95. 95.5 96.
96.5 97. 97.5 98. 98.5 99. 99.5 100. ]
</code></pre>
<p>The signal array looks like this, with the same length as the frequency array.</p>
<pre><code>[6.83248573e-27 6.38424451e-27 4.40532611e-27 2.46641238e-27
2.79056227e-27 1.91667602e-27 2.01585530e-27 2.81595644e-27
1.63137469e-27 2.36510624e-27 1.76637075e-27 1.42336105e-27
1.94134643e-27 1.63569180e-27 1.92916964e-27 1.74853657e-27
1.70866416e-27 1.82414861e-27 1.99505505e-27 3.18429811e-27
5.40618755e-27 6.03726511e-27 4.78220246e-27 3.56407711e-27
2.82165848e-27 2.21870589e-27 2.08558649e-27 2.05153813e-27
2.26220532e-27 2.51639647e-27 2.72401400e-27 3.03959512e-27
3.20637304e-27 3.25169369e-27 3.14399482e-27 3.22505547e-27
3.04244374e-27 3.05644526e-27 2.75377037e-27 2.66384664e-27
2.54582065e-27 2.45122798e-27 2.33501355e-27 2.39223261e-27
2.31744742e-27 2.15909503e-27 2.13473052e-27 1.97037169e-27
1.66287056e-27 1.39650886e-27 1.35749479e-27 1.36925484e-27
1.23080761e-27 1.18806584e-27 1.00880561e-27 8.49857372e-28
8.69180125e-28 8.00455124e-28 7.64146873e-28 7.44351180e-28
6.12306196e-28 5.61151389e-28 5.61148340e-28 5.29198214e-28
4.65031278e-28 4.39371596e-28 3.87900481e-28 3.66667907e-28
3.19346926e-28 2.70416144e-28 2.55537042e-28 2.52633398e-28
2.46481657e-28 2.17053812e-28 2.01982726e-28 1.90483387e-28
1.61632370e-28 1.55358436e-28 1.59321060e-28 1.60793279e-28
1.52695766e-28 1.55288957e-28 1.59405042e-28 1.53165367e-28
1.36278544e-28 1.57511344e-28 1.36641270e-28 1.33813492e-28
1.30800335e-28 1.32748995e-28 1.30747468e-28 1.16701156e-28
1.12717963e-28 1.22763995e-28 1.17056892e-28 1.13689662e-28
1.06267063e-28 1.18968941e-28 1.12967908e-28 ... ]
</code></pre>
<p><strong>Questions?</strong></p>
<p>Please let me know if you need further clarification or code examples. I did not include further code examples as none of what I've tried has given the right results.</p>
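<p>For illustration only, a minimal sketch of one possible direction (assumptions only: toy data, a running-median baseline, and SciPy's peak utilities; not something from the original post):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks, peak_widths

x = np.arange(0.5, 100.5, 0.5)                                         # toy frequency grid
y = 1e-27 * (1 + np.log1p(x)) + 5e-27 * np.exp(-((x - 40) / 2) ** 2)   # toy signal + one spike

baseline = median_filter(y, size=31)        # crude baseline estimate
residual = np.clip(y - baseline, 0, None)   # keep only the spikes

peaks, _ = find_peaks(residual, prominence=residual.max() * 0.1)
widths, heights, left, right = peak_widths(residual, peaks, rel_height=0.95)

# integrate each spike region with the trapezoidal rule
areas = [np.trapz(residual[int(l):int(r) + 1], x[int(l):int(r) + 1])
         for l, r in zip(left, right)]
</code></pre>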
|
<python><numpy><scipy><filtering><curve-fitting>
|
2023-01-11 20:25:05
| 1
| 1,624
|
Marco Boerner
|
75,088,637
| 5,992,641
|
Do PySpark window functions affect the write operation?
|
<p>I am having trouble writing a PySpark dataframe (~50 columns and ~9 million rows) to an S3 bucket, and I have isolated the problem, though I am not sure why the problem occurs.</p>
<p>The following runs quite quickly (< 1 minute):</p>
<pre><code># Runs < 1 minute
df.write.format('parquet') \
.option("escape", "\"")\
.option('header', 'true') \
.mode('overwrite') \
.save("s3://userName/folderName")
</code></pre>
<p>However, if I perform the following steps, then the write doesn't finish (i.e., it just "hangs" for hours).</p>
<pre><code>from pyspark.sql import Window
from pyspark.sql import functions as F

window1 = Window.partitionBy("state")\
.orderBy(F.col("date").cast('timestamp').cast('long'))\
.rangeBetween(-100,0)
df = df.withColumn('new_variable', F.avg(F.col('binary_variable_01')).over(window1))
df.count()
# Runs for hours after running the window function
df.write.format('parquet') \
.option("escape", "\"")\
.option('header', 'true') \
.mode('overwrite') \
.save("s3://userName/folderName")
</code></pre>
<p>Here is what I have tried:</p>
<ol>
<li>Repartitioning to correct for data skew didn't work (i.e. df = df.repartition(50))</li>
<li>Coalesce to correct for data skew also didn't work (i.e. df = df.coalesce(50))</li>
<li>Missing values in the partitionBy variable (i.e., "state" above) were imputed, but this didn't solve the issue.</li>
<li>Created a random partition variable (50 random values) and then repartitioned based on this; didn't work.</li>
</ol>
<p>Does anyone have an idea as to how to resolve this issue?</p>
|
<python><apache-spark><amazon-s3><pyspark>
|
2023-01-11 20:22:55
| 0
| 303
|
gm1991
|
75,088,415
| 20,589,275
|
How to make graph matplotlib
|
<p>I have got these arrays in NumPy:</p>
<pre><code>php = np.array([282, 804, 1209, 1558, 1368, 1208, 594, 224])
js = np.array([273, 902, 1355, 1647, 1501, 1424, 678, 186])
html = np.array([229, 626, 865, 1275, 1134, 959, 446, 138])
html5 = np.array([227, 596, 764, 879, 715, 610, 264, 67])
my_sql = np.array([218, 620, 996, 1150, 1008, 1001, 450, 156])
css = np.array([217, 568, 915, 1207, 1092, 981, 479, 132 ])
css3 = np.array([203, 535, 548, 704, 567, 487, 218, 0])
php5 = np.array([184, 392, 0, 0, 0, 0, 0, 0 ])
jquery = np.array([168, 604, 901, 1014, 808, 750, 349, 73 ])
c1_bitrix = np.array([168, 0, 454, 539, 513, 0, 225, 65 ])
git = np.array([0, 376, 663, 834, 734, 870, 408, 130 ])
oop = np.array([0, 0, 0, 0, 0, 481, 0, 84 ])
</code></pre>
<p>How can I make a graph like this?</p>
<p><a href="https://i.sstatic.net/GxHQx.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GxHQx.jpg" alt="enter image description here" /></a></p>
<p>The X axis is years; for example:</p>
<pre class="lang-none prettyprint-override"><code>php[0] = 2015,
php[1] = 2016,
php[2] = 2017,
php[3] = 2018,
php[4] = 2019,
php[5] = 2020,
php[6] = 2021,
php[7] = 2022
</code></pre>
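<p>A minimal sketch of what I assume is the desired chart (one line per tag, years on X, counts on Y), shown for just two of the arrays:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2015, 2023)
php = np.array([282, 804, 1209, 1558, 1368, 1208, 594, 224])
js = np.array([273, 902, 1355, 1647, 1501, 1424, 678, 186])

plt.plot(years, php, label='php')
plt.plot(years, js, label='js')
plt.xlabel('year')
plt.ylabel('mentions')
plt.legend(loc='upper left')
plt.show()
</code></pre>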
|
<python><matplotlib>
|
2023-01-11 19:58:56
| 2
| 650
|
Proger228
|
75,088,245
| 15,229,310
|
decrease mandatory parameters (change method signature) on overriden method best practices (i.e. to avoid pylint errors)
|
<p>I have a generic parent class with mymethod, and a specific child class where some parameters required for mymethod are already known - the user of the child doesn't even need to know about them. I want to override mymethod so it only requires param1; however, pylint complains that the inherited method changed signature - W0221: Number of parameters was *** in *** and is now *** in overridden *** method (arguments-differ)</p>
<p>What's the best practice for this use case? I've tried adding **kwargs to either or both methods, but that didn't help (and even if it did, I'd then have to work around an 'unused arguments' issue?). I could create ChildA.mymethod2 that accepts only param1 and within it call super().mymethod(param1, param2, param3), filling in the remaining parameters, but that seems rather awkward - is there a better way to achieve this?</p>
<pre><code>import logging

log = logging.getLogger(__name__)

class Parent():
    def mymethod(self, param1, param2, param3):
        log.info(f'func a with {param1} and {param2} and {param3}')

class ChildA(Parent):
    def mymethod(self, param1, **kwargs): # kwargs added hoping to 'fool' pylint about signature arguments, no help
        param2 = 2
        param3 = 3
        super().mymethod(param1, param2, param3)
</code></pre>
<p>The desired state is having Parent and numerous Child classes (each defining its own param2 and param3) passing pylint, while the user only has to do:</p>
<pre><code>c = ChildA()
c.mymethod(param1)
</code></pre>
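<p>For reference, a minimal sketch of one commonly suggested pattern (an assumption, not necessarily the best practice I am asking about): keep the parent's parameters in the child but give them defaults, so the signatures stay compatible for pylint while callers can still pass only param1.</p>
<pre class="lang-py prettyprint-override"><code>class ChildA(Parent):
    def mymethod(self, param1, param2=2, param3=3):
        super().mymethod(param1, param2, param3)
</code></pre>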
|
<python><pylint>
|
2023-01-11 19:42:32
| 1
| 349
|
stam
|
75,088,079
| 11,194,619
|
Rolling window groupby on panel data
|
<p>The goal is to perform a grouped rolling-window calculation on panel data, if possible avoiding the use of <code>apply</code> and similar functions that perform slowly when there are many observation groups. Consider the following longitudinal data frame of customers with monthly sales:</p>
<pre><code>import numpy as np
import pandas as pd

customers = pd.Series(['a', 'b', 'c', 'd',]).rename('customer')
date_range = pd.date_range('01/2018', '01/2019', freq='M').to_period('M').rename('month')
example_df = pd.DataFrame(index=pd.MultiIndex.from_product([customers, date_range]))
example_df['sales'] = (np.random.random(example_df.shape[0]) > 0.9) * (np.random.randint(1, 25, example_df.shape[0])*100)
</code></pre>
<hr />
<ol>
<li>Why does the following code throw an error even though month is the name of an index?</li>
</ol>
<pre><code>example_df.groupby('customer').rolling(3, on='month').sales.sum()
</code></pre>
<blockquote>
<p>ValueError: invalid on specified as month, must be a column (of DataFrame), an Index or None</p>
</blockquote>
<p>A workaround is using <code>.reset_index</code> to convert month to a column. This is the simplest solution I know of, but I am still unclear why resetting the index should be necessary.</p>
<pre><code>example_df.reset_index('month').groupby('customer').rolling(3, on='month').sales.sum()
</code></pre>
<hr />
<ol start="2">
<li>I have found that the following code performs the operation correctly, but creates a new level in the multiindex. Why does it do that?</li>
</ol>
<pre><code>example_df.groupby('customer').rolling(3).sales.sum()
</code></pre>
<p>The workaround would be to assign just the <code>.values</code>, but ignoring the index might not always be practical. For example:</p>
<pre><code>example_df['rolling_sum'] = example_df.groupby('customer').rolling(3).sales.sum().values
</code></pre>
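<p>For completeness, a minimal sketch of how I currently reconcile the extra index level (an assumption about the right idiom, not a confirmed answer): drop the group level that <code>groupby().rolling()</code> adds so the result aligns with the original MultiIndex.</p>
<pre class="lang-py prettyprint-override"><code>rolled = example_df.groupby(level='customer').rolling(3).sales.sum()
# the group key is prepended as an extra index level; dropping it restores
# the original (customer, month) index so plain assignment aligns correctly
example_df['rolling_sum'] = rolled.droplevel(0)
</code></pre>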
|
<python><pandas><time-series>
|
2023-01-11 19:25:24
| 1
| 454
|
Raisin
|
75,087,691
| 5,058,757
|
Solve Stable Marriages Problem using constraint programming - BoundedLinearExpression object is not iterable
|
<p>I'm taking a course on <a href="https://www.coursera.org/learn/discrete-optimization" rel="nofollow noreferrer">Discrete Optimization</a>, and we're working through constraint programming. In a topic about reification, we're working through the <a href="https://en.wikipedia.org/wiki/Stable_marriage_problem" rel="nofollow noreferrer">Stable Marriage Problem</a> (SMP).</p>
<p>The model formulation is</p>
<pre><code>enum Men = [George, Hugh, Will, Clive];
enum Women = [Julia, Halle, Angelina, Keira];
# mensRanking[Hugh, Julia] is Hugh's ranking of Julia
# lower the number, the higher the preference
int mensRanking[Men, Women];
int womensRanking[Women, Men];
# Decision variables below
# Array of decision variables called wives that is indexed on men stores the wife
# of man
var{Women} wives[Men]
# Array of decision variables called husbands that is indexed on women stores the
# husband of woman
var{Men} husbands[Women]
# Constraints below
solve {
# The husband of wife of man must equal man
forall(m in Men)
husband[wife[m]] = m;
# The wife of husband of woman must equal woman
forall(w in Women)
wife[husband[w]] = w;
# NEED HELP with the constraints below
forall(m in Man)
forall(w in Women)
# If man m prefers woman w over his wife, then
# woman w must prefer her husband over m
if (mensRanking[m,w] < mensRanking[m,wife[m]])
womensRanking[w,husband[w]] < womensRanking[w, m]
# If woman w prefers man m over her husband, then
# man m must prefer his wife over w
if (womensRanking[w,m] < womensRanking[w, husband[w]])
mensRanking[m,wife[m]] < mensRanking[m, w]
}
</code></pre>
<p>I can't figure out how to do the ranking comparison. Here's my attempt via or-tools in Python:</p>
<pre class="lang-py prettyprint-override"><code>def main():
n = 4
men = range(n)
women = range(n)
# mensRanking[man][woman] is man's ranking of woman.
# lower the number, the higher the preference
mensRanking = [random.sample(range(n),n) for _ in men]
womensRanking = [random.sample(range(n),n) for _ in women]
model = cp_model.CpModel()
# For wife 'Julia', who is her husband?
husbands = [model.NewIntVar(0, n-1, f'woman{i}') for i in women]
# For husband 'George', who is his wife?
wives = [model.NewIntVar(0, n-1, f'man{i}') for i in men]
for man in men:
# The husband of wife of man must equal man
# I.e., husbands[wife] = man
wife = wives[man]
model.AddElement(wife, husbands, man)
for woman in women:
# The wife of husband of woman must equal woman
# I.e., wives[husband] = woman
husband = husbands[woman]
model.AddElement(husband, wives, woman)
for m in men:
for w in women:
womans_rank_of_husband = model.NewIntVar(0, n-1, '')
model.AddElement(husbands[w], womensRanking[w], womans_rank_of_husband)
mans_rank_of_wife = model.NewIntVar(0, n-1, '')
model.AddElement(wives[m], mensRanking[m], mans_rank_of_wife)
# If man m prefers woman w over his wife, then
# woman w must prefer her husband over m
# TypeError: 'BoundedLinearExpression' object is not iterable
model.Add(womans_rank_of_husband < womensRanking[w][m]).OnlyEnforceIf(
mensRanking[m][w] < mans_rank_of_wife
)
# If woman w prefers man m over her husband, then
# man m must prefer his wife over w
# TypeError: 'BoundedLinearExpression' object is not iterable
model.Add(mans_rank_of_wife < mensRanking[m][w]).OnlyEnforceIf(
womensRanking[w][m] < womans_rank_of_husband
)
solver = cp_model.CpSolver()
solution_printer = SolutionPrinter(x)
status = solver.SearchForAllSolutions(model, solution_printer)
print(solver.ResponseStats())
print(status)
</code></pre>
<p>Basically, I need to do an inequality check while using a decision variable as an index. I'm familiar with doing an EQUALITY check via <code>model.AddElement(index, array, target)</code> for <code>array[index] == target</code>, but can't figure out how to do <code>array[index] < target</code> when <code>index</code> is a decision variable.</p>
<p><strong>EDIT</strong>: I used a temp_var as suggested by @Laurent-Perron, but now I'm getting the error:</p>
<pre><code># TypeError: 'BoundedLinearExpression' object is not iterable
</code></pre>
<p>Any idea why? I also tried <code>AddImplication</code>:</p>
<pre class="lang-py prettyprint-override"><code> womans_rank_of_husband = model.NewIntVar(0, n-1, '')
model.AddElement(husbands[w], womensRanking[w], womans_rank_of_husband)
mans_rank_of_wife = model.NewIntVar(0, n-1, '')
model.AddElement(wives[m], mensRanking[m], mans_rank_of_wife)
# If man m prefers woman w over his wife, then
# woman w must prefer her husband over m
model.AddImplication(mensRanking[m][w] < mans_rank_of_wife,
womans_rank_of_husband < womensRanking[w][m])
# If woman w prefers man m over her husband, then
# man m must prefer his wife over w
model.AddImplication(womensRanking[w][m] < womans_rank_of_husband,
mans_rank_of_wife < mensRanking[m][w])
</code></pre>
<p>But that gave me the error</p>
<pre><code>TypeError: NotSupported: model.GetOrMakeBooleanIndex(unnamed_var_12 <= 1)
</code></pre>
<p>at <code>model.AddImplication()</code>.</p>
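<p>For context, here is a minimal sketch of the reification pattern I think I am being pointed towards (my assumption; using only CP-SAT calls I know exist): make the comparison itself a <code>BoolVar</code> first, then use that literal in <code>OnlyEnforceIf</code>.</p>
<pre class="lang-py prettyprint-override"><code># sketch for a single (m, w) pair, after the AddElement calls above
m_prefers_w = model.NewBoolVar('')
# m_prefers_w <=> mensRanking[m][w] < mans_rank_of_wife
model.Add(mans_rank_of_wife > mensRanking[m][w]).OnlyEnforceIf(m_prefers_w)
model.Add(mans_rank_of_wife <= mensRanking[m][w]).OnlyEnforceIf(m_prefers_w.Not())
# if man m prefers woman w over his wife, then w must prefer her husband over m
model.Add(womans_rank_of_husband < womensRanking[w][m]).OnlyEnforceIf(m_prefers_w)
</code></pre>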
|
<python><or-tools><constraint-programming>
|
2023-01-11 18:44:21
| 1
| 3,817
|
azizj
|
75,087,550
| 5,036,928
|
Request proxies to access PyPI
|
<p>I am trying to screen-scrape PyPI packages using the requests library and Beautiful Soup, but am met with an indefinite hang. I am able to retrieve HTML from a number of sites with:</p>
<pre><code>import requests

session = requests.Session()
session.trust_env = False
response = session.get("http://google.com")
print(response.status_code)
</code></pre>
<p>i.e. without providing headers. I read from <a href="https://stackoverflow.com/questions/51154114/python-request-get-fails-to-get-an-answer-for-a-url-i-can-open-on-my-browser">Python request.get fails to get an answer for a url I can open on my browser</a> that the indefinite hang is likely caused by incorrect headers. So, using the developer tools, I tried to grab my request headers from the Networking tab (using Edge) with "Doc" filter to select the <code>pypi.org</code> response/request. I simply copy pasted these into my header variable that is passed to the <code>get</code> method:</p>
<pre><code>headers = {'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9',
'cookie': 'session_id=<long string>',
'dnt': '1',
'sec-ch-ua': '"Not?A_Brand";v="8", "Chromium";v="108", "Microsoft Edge";v="108"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'sec-fetch-dest': 'document',
'sec-fetch-mode': 'navigate',
'sec-fetch-site': 'none',
'sec-fetch-user': '?1',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.54'}
</code></pre>
<p>(and changing get method to <code>response = session.get("http://pypi.org", headers=headers)</code>)</p>
<p>But I get the same hang. So, I think something is wrong with my headers but I'm not sure what. I'm aware that the requests <code>Session()</code> "handles" cookies so I tried removing the <code>cookie</code> key/value pair in my request header dictionary but achieved the same result.</p>
<p>How can I determine the problem with my headers and/or why do my current headers not work (assuming this is even the problem)?</p>
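<p>For completeness, a minimal sketch of the proxy angle hinted at in the title (an assumption on my part: the hang could be a corporate proxy issue rather than a header issue, since <code>trust_env = False</code> disables any proxy settings from the environment; the proxy URL below is hypothetical):</p>
<pre class="lang-py prettyprint-override"><code>import requests

session = requests.Session()
session.proxies = {
    "http": "http://proxy.mycorp.example:8080",   # hypothetical corporate proxy
    "https": "http://proxy.mycorp.example:8080",
}
response = session.get("https://pypi.org", timeout=10)
print(response.status_code)
</code></pre>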
|
<python><http><https><python-requests><http-headers>
|
2023-01-11 18:28:32
| 4
| 1,195
|
Sterling Butters
|
75,087,483
| 1,294,072
|
How to detect colored blocks in a PDF file with python (pdfminer, minecart, tabula...)
|
<p>I am trying to extract quite a few tables from a PDF file. These tables are sort of conveniently "highlighted" with different colors, which makes it easy for eyes to catch (see the example screenshot).</p>
<p>I think it would be good to detect the position/coordinates of those colored blocks, and use the coordinates to extract tables.</p>
<p>I have figured out the table extraction part (using tabula-py), so it is the first step that is stopping me. From what I gathered, minecart is the best tool for colors and shapes in PDF files, short of full-scale image processing with OpenCV. But I have had no luck detecting colored box/block coordinates.</p>
<p>Would appreciate any help!!</p>
<p><img src="https://i.sstatic.net/SDq5E.png" alt="example page1" /></p>
|
<python><pdf><shapes><pdfminer>
|
2023-01-11 18:21:56
| 2
| 746
|
xyliu00
|
75,087,444
| 3,788,557
|
Unable to import python package in vs-code but, no issues in terminal
|
<p>I'm trying to use <a href="https://github.com/uber/orbit" rel="nofollow noreferrer">https://github.com/uber/orbit</a>. I have the package installed without errors as far as I can see.</p>
<p>I can run everything just fine in my terminal on my mac. However, I load up vs-code and try and run there and I receive the error below. I can't figure out what the problem is or how to configure my vs-code to avoid this issue.</p>
<p>This causes the following error (no issues in the terminal):</p>
<pre class="lang-py prettyprint-override"><code>from orbit.utils.dataset import load_iclaims
from orbit.models import DLT
from orbit.diagnostics.plot import plot_predicted_data
</code></pre>
<p>My Python version is 3.9.15, which I am running both in the terminal and in VS Code.<br />
If anyone has an idea, can you please be specific about the steps to fix this, as I have been hunting around in VS Code for a while and can't figure out why I have this issue only in VS Code and not in the terminal.</p>
<pre><code>OperationalError Traceback (most recent call last)
Cell In[3], line 1
----> 1 from orbit.models import DLT
File ~/opt/anaconda3/lib/python3.9/site-packages/orbit/__init__.py:3
1 __all__ = ["satellite", "tle", "utilities"]
----> 3 from .satellite import satellite
File ~/opt/anaconda3/lib/python3.9/site-packages/orbit/satellite.py:3
1 from math import degrees
----> 3 from . import tle, utilities
5 class satellite:
6 def __init__(self,catnr):
File ~/opt/anaconda3/lib/python3.9/site-packages/orbit/tle.py:10
6 import ephem
8 from . import utilities
---> 10 requests_cache.install_cache(expire_after=86400)
12 def get(catnr):
13 page = html.fromstring(requests.get('http://www.celestrak.com/cgi-bin/TLE.pl?CATNR=%s' % catnr).text)
File ~/opt/anaconda3/lib/python3.9/site-packages/requests_cache/patcher.py:48, in install_cache(cache_name, backend, expire_after, urls_expire_after, allowable_codes, allowable_methods, filter_fn, stale_if_error, session_factory, **kwargs)
23 def install_cache(
24 cache_name: str = 'http_cache',
...
--> 168 self._local_context.con = sqlite3.connect(self.db_path, **self.connection_kwargs)
169 if self.fast_save:
170 self._local_context.con.execute('PRAGMA synchronous = 0;')
OperationalError: unable to open database file
</code></pre>
|
<python><visual-studio-code>
|
2023-01-11 18:18:35
| 0
| 6,665
|
runningbirds
|
75,087,401
| 7,506,883
|
How to encase the values of a YAML document in single quotes after a dictionary-to-YAML serialization
|
<p>I'd like to convert my dictionary to a YAML document, where the keys are rendered without quotes, but the values are encased in single quotes.</p>
<p>I found several solutions to encase both the key and value in a single quote, but that's not what I'd like. Below you can see an example script:</p>
<pre><code>import yaml
theDict = {'this' : {'is': 'the', 'main': 12,'problem':'see?' }}
print(yaml.dump(theDict, default_flow_style=False, sort_keys=False))
</code></pre>
<p>This will output:</p>
<pre><code>this:
is: the
main: 12
problem: see?
</code></pre>
<p>However, I want:</p>
<pre><code>this:
is: 'the'
main: '12'
problem: 'see?'
</code></pre>
<p>I don't want:</p>
<pre><code>this:
'is': 'the'
'main': '12'
'problem': 'see?'
</code></pre>
<p>I also don't want:</p>
<pre><code>'this':
'is': 'the'
'main': '12'
'problem': 'see?'
</code></pre>
<p>The questions this has been flagged as a duplicate of are not duplicates, because there the goal is to encase both the key and the value in quotes. That is not what I would like to do. I'd like the YAML serialization to occur and then have the values (not the keys) encased in quotes.</p>
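<p>For reference, a minimal sketch of the direction I imagine (an assumption, not a confirmed solution): wrap only the values in a marker subclass of <code>str</code> and register a PyYAML representer that emits that type single-quoted, leaving plain keys unquoted.</p>
<pre class="lang-py prettyprint-override"><code>import yaml

class SingleQuoted(str):
    pass

def represent_single_quoted(dumper, data):
    return dumper.represent_scalar('tag:yaml.org,2002:str', data, style="'")

yaml.add_representer(SingleQuoted, represent_single_quoted)

def quote_values(obj):
    if isinstance(obj, dict):
        return {k: quote_values(v) for k, v in obj.items()}
    return SingleQuoted(obj)   # values (including 12) become single-quoted strings

theDict = {'this': {'is': 'the', 'main': 12, 'problem': 'see?'}}
print(yaml.dump(quote_values(theDict), default_flow_style=False, sort_keys=False))
</code></pre>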
|
<python><yaml><pyyaml>
|
2023-01-11 18:14:21
| 1
| 1,791
|
Dom DaFonte
|
75,087,363
| 11,883,900
|
Delete item from a list of Dictionaries
|
<p>I have a quick one.</p>
<p>I do have a long list of dictionaries that looks like this:</p>
<pre><code>mydict = [{'id': '450118',
'redcap_event_name': 'preliminary_arm_1',
'redcap_repeat_instrument': '',
'redcap_repeat_instance': '',
'date_today': '2022-11-04',
'timestamp': '2022-11-04 10:49',
'doc_source': '1',
'hosp_id': '45',
'study_id': '18',
'participant_name': 'CHAR WA WAN',
'ipno': '141223',
'dob': '2020-06-30'},
{'id': '450118',
'redcap_event_name': 'preliminary_arm_1',
'redcap_repeat_instrument': '',
'redcap_repeat_instance': '',
'date_today': '2022-11-04',
'timestamp': '2022-11-04 10:49',
'doc_source': '1',
'hosp_id': '45',
'study_id': '01118',
'participant_name': 'CHARIT',
'ipno': '1413',
'dob': '2020-06-30'}]
</code></pre>
<p>Now I want to do a simple thing: delete these 3 keys from each dictionary: <em>'redcap_event_name', 'redcap_repeat_instrument', 'redcap_repeat_instance'</em>.</p>
<p>I have tried writing this code, but it's not deleting at all:</p>
<pre><code>for k in mydict:
for j in k.keys():
if j == 'preliminary_arm_1':
del j
</code></pre>
<p>The final result I want is the original list of dictionaries but without the 3 keys mentioned above. Any help will be highly appreciated.</p>
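<p>For reference, a minimal sketch of the kind of thing I am after (an assumption, not a confirmed answer): pop the unwanted keys from every dictionary in the list.</p>
<pre class="lang-py prettyprint-override"><code>keys_to_drop = ('redcap_event_name', 'redcap_repeat_instrument', 'redcap_repeat_instance')

for record in mydict:
    for key in keys_to_drop:
        record.pop(key, None)   # pop with a default avoids KeyError if a key is missing
</code></pre>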
|
<python><dictionary>
|
2023-01-11 18:10:35
| 2
| 1,098
|
LivingstoneM
|
75,087,330
| 20,589,275
|
How to make graph in matplotlib
|
<p>I'm not good at matplotlib and I need your help. I have data in this format for each year from 2015-2022:</p>
<pre><code>PHP:282
JavaScript:273
HTML:229
HTML5:227
MySQL:218
CSS:217
CSS3:203
PHP5:184
jQuery:168
1С:168
</code></pre>
<p>I need to make something like a visual graph: along the bottom, the year; on the left, the number of mentions (the numbers shown next to the tags in the code above); and a legend in the top left mapping color to tag (for example, PHP - green, CSS3 - blue, ...).</p>
<p>Again, X is years and Y is the tag's value (e.g., PHP -> 282).</p>
|
<python><matplotlib>
|
2023-01-11 18:07:55
| 1
| 650
|
Proger228
|
75,087,105
| 10,309,712
|
Computing the Fourier-transform of each column for each array in a multidimensional array
|
<p>In the following <code>4D</code> array, each column represents an attribute for machine learning model development.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.fft import fft, fftfreq, fftshift
A = np.array([
[[[0, 1, 2, 3],
[3, 0, 1, 2],
[2, 3, 0, 1],
[1, 3, 2, 1],
[1, 2, 3, 0]]],
[[[9, 8, 7, 6],
[5, 4, 3, 2],
[0, 9, 8, 3],
[1, 9, 2, 3],
[1, 0, -1, 2]]],
[[[0, 7, 1, 2],
[1, 2, 1, 0],
[0, 2, 0, 7],
[-1, 3, 0, 1],
[1, 0, 1, 0]]]
])
A.shape
(3, 1, 5, 4)
</code></pre>
<p>In this example, <code>A</code> has 3 instances, each instance represented by 4 features (one for each column of the inner arrays, shaped <code>(1,5,4)</code>). Features are represented in the <code>time-domain</code>.</p>
<p>I needed a handy way to calculate the <code>frequency-domain</code> (Fourier transform) for each feature of each instance.</p>
<p><strong>Details</strong></p>
<p>Considering the given example, I do it the following way.</p>
<ol>
<li><p>Get the array for each feature (one per feature), like this:</p>
<pre class="lang-py prettyprint-override"><code>feat1 = A[..., [0]] # the first feature
feat2 = A[..., [1]] # the second feature
feat3 = A[..., [2]] # 3rd feature etc...
feat4 = A[..., [3]] # 4th feature etc...
</code></pre>
<p>So that the first feature in the example data is:</p>
<pre class="lang-py prettyprint-override"><code>feat1
array([[[[ 0],
[ 3],
[ 2],
[ 1],
[ 1]]],
[[[ 9],
[ 5],
[ 0],
[ 1],
[ 1]]],
[[[ 0],
[ 1],
[ 0],
[-1],
[ 1]]]])
</code></pre>
</li>
<li><p>Retrieve each instance's feature set, like so:</p>
<pre class="lang-py prettyprint-override"><code># 1st instance
inst1_1 = feat1[0].ravel() # instance1 1st feature
inst1_2 = feat2[0].ravel() # instance1 2nd feature
inst1_3 = feat3[0].ravel() # instance1 3rd feature
inst1_4 = feat4[0].ravel() # instance1 4th feature
# 2nd instance
inst2_1 = feat1[1].ravel() # instance2 1st feature
inst2_2 = feat2[1].ravel() # instance2 2nd feature
inst2_3 = feat3[1].ravel() # instance2 3rd feature
inst2_4 = feat4[1].ravel() # instance2 4th feature
# 3rd instance
inst3_1 = feat1[2].ravel() # instance3 1st feature
inst3_2 = feat2[2].ravel() # instance3 2nd feature
inst3_3 = feat3[2].ravel() # instance3 3rd feature
inst3_4 = feat4[2].ravel() # instance3 4th feature
</code></pre>
</li>
<li><p>Then calculate the Fourier transform (for each feature of each instance).</p>
<pre class="lang-py prettyprint-override"><code>inst1_1_fft = fft(inst1_1)
inst1_2_fft = fft(inst1_2)
inst1_3_fft = fft(inst1_3)
inst1_4_fft = fft(inst1_4)
inst2_1_fft = fft(inst2_1)
inst2_2_fft = fft(inst2_2)
inst2_3_fft = fft(inst2_3)
inst2_4_fft = fft(inst2_4)
inst3_1_fft = fft(inst3_1)
inst3_2_fft = fft(inst3_2)
inst3_3_fft = fft(inst3_3)
inst3_4_fft = fft(inst3_4)
</code></pre>
</li>
</ol>
<p>This is absolutely a <code>dummies approach</code>. Besides, I cannot do this on my real dataset, where I have over 60K instances.</p>
<p><strong>Questions:</strong></p>
<ol>
<li><p>How do I do this at once for the whole array <code>A</code>?</p>
</li>
<li><p>The frequencies seem to be the same for each feature of each instance. Am I doing this correctly? Here's how I do it:</p>
</li>
</ol>
<pre class="lang-py prettyprint-override"><code>sampling_rate = A.shape[2] # each instance is fixed-sized A.shape[2] (5 in this e.g)
N = inst1_1_fft.size
inst1_1_fft_freq = fftfreq(N, d = 1 / sampling_rate)
inst1_1_fft_freq
array([ 0., 1., 2., -2., -1.])
</code></pre>
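<p>For reference, a minimal sketch of what I imagine a vectorised version could look like (my assumption: <code>fft</code> accepts an <code>axis</code> argument, so the whole array can be transformed along the time axis at once, one transform per feature column):</p>
<pre class="lang-py prettyprint-override"><code>A_fft = fft(A, axis=2)                          # shape stays (3, 1, 5, 4)
freqs = fftfreq(A.shape[2], d=1 / A.shape[2])   # one shared set of frequency bins

# e.g. the transform of instance 0, feature 0 matches the per-feature version:
np.allclose(A_fft[0, 0, :, 0], fft(A[0, ..., 0].ravel()))   # True
</code></pre>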
|
<python><arrays><numpy><multidimensional-array><fft>
|
2023-01-11 17:47:42
| 1
| 4,093
|
arilwan
|
75,087,029
| 9,494,140
|
ModuleNotFoundError: No module named 'requests.adapters' when using pyfcm
|
<p>I'm trying to use the <code>pyfcm</code> library in my Django project by adding it to <code>requirements.txt</code>, but I noticed that it is getting an error that mainly comes from trying to import from the <code>requests</code> library. Here is the error:</p>
<pre><code>rolla_django | from pyfcm import FCMNotification
rolla_django | File "/usr/local/lib/python3.10/dist-packages/pyfcm/__init__.py", line 14, in <module>
rolla_django | from .fcm import FCMNotification
rolla_django | File "/usr/local/lib/python3.10/dist-packages/pyfcm/fcm.py", line 1, in <module>
rolla_django | from .baseapi import BaseAPI
rolla_django | File "/usr/local/lib/python3.10/dist-packages/pyfcm/baseapi.py", line 6, in <module>
rolla_django | from requests.adapters import HTTPAdapter
rolla_django | ModuleNotFoundError: No module named 'requests.adapters'
</code></pre>
<p>and here is my <code>requirements.txt</code></p>
<pre><code>Django~=4.1.1
djangorestframework~=3.13.1
django-extensions
pymysql~=1.0.2
requests
tzdata
psycopg2-binary
django-crontab
pyfcm
</code></pre>
|
<python><django><python-requests><pyfcm>
|
2023-01-11 17:41:32
| 1
| 4,483
|
Ahmed Wagdi
|
75,086,562
| 10,710,625
|
Filter rows from a grouped data frame based on string columns
|
<p>I have a data frame grouped by multiple columns but in this example it would be grouped only by <code>Year</code>.</p>
<pre><code> Year Animal1 Animal2
0 2002 Dog Mouse,Lion
1 2002 Mouse
2 2002 Lion
3 2002 Duck
4 2010 Dog Cat
5 2010 Cat
6 2010 Lion
7 2010 Mouse
</code></pre>
<p>For each group, I would like to keep the rows where <code>Animal2</code> is not empty, and among the rows where <code>Animal2</code> is empty, keep only those whose <code>Animal1</code> value appears in the group's <code>Animal2</code> column.</p>
<p>The expected output would be:</p>
<pre><code> Year Animal1 Animal2
0 2002 Dog Mouse,Lion
1 2002 Mouse
2 2002 Lion
3 2010 Dog Cat
4 2010 Cat
</code></pre>
<p>Rows 0 & 3 stayed since <code>Animal2</code> is not empty.</p>
<p>Rows 1 & 2 stayed since Mouse & Lion are in <code>Animal2</code> for the first group.</p>
<p>Row 4 stayed since Cat appears in <code>Animal2</code> for the second group.</p>
<p><strong>EDIT: I get an error for a similar input data frame</strong></p>
<pre><code> Year Animal1 Animal2
0 2002 Dog Mouse
1 2002 Mouse
2 2002 Lion
3 2010 Dog
4 2010 Cat
</code></pre>
<p>The expected output would be:</p>
<pre><code> Year Animal1 Animal2
0 2002 Dog Mouse
1 2002 Mouse
</code></pre>
<p>The error is triggered in the <code>.apply(lambda g: g.isin(sets[g.name]))</code> part of the code.</p>
<pre><code> if not any(isinstance(k, slice) for k in key):
if len(key) == self.nlevels and self.is_unique:
# Complete key in unique index -> standard get_loc
try:
return (self._engine.get_loc(key), None)
except KeyError as err:
raise KeyError(key) from err
KeyError: (2010, 'Dog')
</code></pre>
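<p>For reference, a minimal sketch of one approach I am considering (assumptions: empty cells are empty strings, and <code>Animal2</code> entries are comma-separated; not a confirmed answer):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# per-group set of animals mentioned anywhere in Animal2
sets = (df.loc[df['Animal2'].ne(''), ['Year', 'Animal2']]
          .assign(Animal2=lambda d: d['Animal2'].str.split(','))
          .explode('Animal2')
          .groupby('Year')['Animal2'].agg(set))

# keep rows whose Animal2 is non-empty, or whose Animal1 is in the group's set
in_group_set = pd.Series(
    [a in sets.get(y, set()) for y, a in zip(df['Year'], df['Animal1'])],
    index=df.index)

out = df[df['Animal2'].ne('') | in_group_set]
</code></pre>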
|
<python><pandas><dataframe><filter><data-manipulation>
|
2023-01-11 16:58:57
| 2
| 739
|
the phoenix
|
75,086,526
| 2,377,957
|
Can only use .dt accessor with datetimelike values pandas
|
<p>This seems to be a new error for previously well-running code. Probably user error...</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO("""
transactionDate
2022-08-01T00:00:00.000-04:00
2021-09-01T00:00:00.000-04:00
2022-08-01T00:00:00.000-04:00
2022-03-01T00:00:00.000-05:00
2021-08-01T00:00:00.000-04:00
2022-08-01T00:00:00.000-04:00
2022-03-01T00:00:00.000-05:00
2021-08-01T00:00:00.000-04:00
2021-11-01T00:00:00.000-04:00
2022-03-01T00:00:00.000-05:00
2021-12-01T00:00:00.000-05:00
"""))
dates = pd.to_datetime(df.transactionDate)
dates.apply(type)
</code></pre>
<pre><code>0 <class 'datetime.datetime'>
1 <class 'datetime.datetime'>
2 <class 'datetime.datetime'>
3 <class 'datetime.datetime'>
4 <class 'datetime.datetime'>
5 <class 'datetime.datetime'>
6 <class 'datetime.datetime'>
7 <class 'datetime.datetime'>
8 <class 'datetime.datetime'>
9 <class 'datetime.datetime'>
10 <class 'datetime.datetime'>
Name: transactionDate, dtype: object
</code></pre>
<pre class="lang-py prettyprint-override"><code>dates.transactionDate.dt.date
</code></pre>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/52/cz8ms60j73x2mdzq9kjknqzh0000gp/T/ipykernel_14842/1511363732.py in <module>
----> 1 dates.transactionDate.dt.date
/opt/anaconda3/lib/python3.9/site-packages/pandas/core/generic.py in __getattr__(self, name)
5573 ):
5574 return self[name]
-> 5575 return object.__getattribute__(self, name)
5576
5577 def __setattr__(self, name: str, value) -> None:
/opt/anaconda3/lib/python3.9/site-packages/pandas/core/accessor.py in __get__(self, obj, cls)
180 # we're accessing the attribute of the class, i.e., Dataset.geo
181 return self._accessor
--> 182 accessor_obj = self._accessor(obj)
183 # Replace the property with the accessor object. Inspired by:
184 # https://www.pydanny.com/cached-property.html
/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/accessors.py in __new__(cls, data)
507 return PeriodProperties(data, orig)
508
--> 509 raise AttributeError("Can only use .dt accessor with datetimelike values")
AttributeError: Can only use .dt accessor with datetimelike values
</code></pre>
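<p>For reference, a sketch of the workaround I am currently leaning towards (my assumption about the cause: the strings carry mixed UTC offsets, -04:00 and -05:00, so <code>to_datetime</code> falls back to an object-dtype column of <code>datetime.datetime</code> values, which is why <code>.dt</code> refuses to work):</p>
<pre class="lang-py prettyprint-override"><code># converting everything to one timezone yields a proper datetime64[ns, UTC] column
dates = pd.to_datetime(df['transactionDate'], utc=True)
print(dates.dt.date)
</code></pre>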
|
<python><pandas><datetime><timezone>
|
2023-01-11 16:55:29
| 1
| 4,105
|
Francis Smart
|
75,086,422
| 3,165,683
|
AUC 1, but accuracy <100%
|
<p>When testing a binary classifier I get an accuracy of 83% (when the threshold is set to 0.5); however, when I work out the ROC and AUC I get an AUC value of 1, which I believe is incorrect, as in that case I should be getting an accuracy of 100%.</p>
<p>I have the following data (first 5 points for example):</p>
<p>True labels:
<code>true_list = [1. 1. 1. 1. 1.]</code></p>
<p>Thresholded predictions
<code>pred_list = [0. 0. 1. 1. 1.]</code></p>
<p>Raw sigmoid activated output
<code>pred_list_raw = [0.23929074 0.34403923 0.61575216 0.72756131 0.69771088]</code></p>
<p>The code used to generate the data from the model is:</p>
<pre><code>output_raw = Net(images)
output = torch.sigmoid(output_raw)
pred_tag = torch.round(output)
[pred_list.append(pred_tag[i]) for i in range(len(pred_tag.squeeze().cpu().numpy()))]
[pred_list_raw.append(output[i]) for i in range(len(output.squeeze().cpu().numpy()))]
</code></pre>
<p>The ROC and AUC values are calculated using the sklearn metrics with following code:</p>
<pre><code>fpr, tpr, _ = metrics.roc_curve(true_list, pred_list_raw)
auc = metrics.roc_auc_score(true_list, pred_list_raw)
</code></pre>
<p>The values for accuracy and AUC do not seem consistent.</p>
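<p>For context, a minimal toy example (my own, not from my model) showing that the two metrics can in principle disagree, since AUC only measures ranking while accuracy depends on the 0.5 cut-off:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn import metrics

y_true = np.array([1, 1, 1, 0, 0, 0])
y_raw = np.array([0.45, 0.40, 0.35, 0.30, 0.20, 0.10])  # perfectly ranked, but all below 0.5

print(metrics.roc_auc_score(y_true, y_raw))                         # 1.0
print(metrics.accuracy_score(y_true, (y_raw > 0.5).astype(int)))    # 0.5
</code></pre>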
<p>The full output datasets are below:</p>
<p>True labels:</p>
<pre><code> 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0.]
</code></pre>
<p>Thresholded predictions</p>
<pre><code> 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0.
0. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 1. 0. 1.
1. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 1. 1. 0. 0. 1.
1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0.]
</code></pre>
<p>Raw sigmoid activated output</p>
<pre><code> 0.80731616 0.81613746 0.63055099 0.33343941 0.33650158 0.26123103
0.43023067 0.75951926 0.85506684 0.83753035 0.77401593 0.93755849
0.93669454 0.78037769 0.48196705 0.51107402 0.39020711 0.27603346
0.27125724 0.79841139 0.96470754 0.97575928 0.9520636 0.98686909
0.99421722 0.9814615 0.72548573 0.70952273 0.58558095 0.75391273
0.98747451 0.99592133 0.99348673 0.99636301 0.99966048 0.99927722
0.93512388 0.87108612 0.76195734 0.45464458 0.44979708 0.3798077
0.46179509 0.51260215 0.42887223 0.77441987 0.99320274 0.99899955
0.99885804 0.99888995 0.99996059 0.99992547 0.9893837 0.94771828
0.90216806 0.63214702 0.70693445 0.62402257 0.72597019 0.72850208
0.48136757 0.34587109 0.48912585 0.53809234 0.49571105 0.52119752
0.66452994 0.65721321 0.46201256 0.32531447 0.33560987 0.34733458
0.54707416 0.66652035 0.67211284 0.64667205 0.77259018 0.81139687
0.72141833 0.47555719 0.41060125 0.40072988 0.30013099 0.81335717
0.87926414 0.83410184 0.89994201 0.96761651 0.94806845 0.67343196
0.60651364 0.57781878 0.76253183 0.95988439 0.98643017 0.98208946
0.99291688 0.99853936 0.99570023 0.84561008 0.82329192 0.70751861
0.40768749 0.38326785 0.42332725 0.41978272 0.95580183 0.99577685
0.99589898 0.99182735 0.99963567 0.99949705 0.98161394 0.93502385
0.89946262 0.69163107 0.23587978 0.24273368 0.27152508 0.27938265
0.25957949 0.28954122 0.30340485 0.28367177 0.25412464 0.24931795
0.40110995 0.38143945 0.49271891 0.50662051 0.33616859 0.52061933
0.47093576 0.63511254 0.68877464 0.47989569 0.37947267 0.69217007
0.69413745 0.85119693 0.83831514 0.46003498 0.19595725 0.18322578
0.13161417 0.17004058 0.155272 0.1832541 0.13801674 0.17109324
0.16617284 0.16502231 0.16629275 0.17945219 0.18769069 0.19091081
0.19954858 0.17923033 0.18590597 0.17878488 0.19183244 0.15146982
0.16887138 0.17444615 0.18757529 0.15070279 0.19910241 0.15885526
0.18926985 0.19083846 0.1563857 0.19467271 0.19159289 0.21147205
0.12797629 0.17709421 0.19563617 0.1951601 0.12606692 0.20411101
0.17489395 0.179219 0.17770813 0.13888956 0.17316737 0.18813291
0.20011829 0.18280909 0.12445015 0.17259067 0.20987834 0.17725589
0.18583644 0.16768099 0.17385706 0.19005385 0.16527923 0.17264359
0.13370521 0.17153564 0.15309515 0.19745554 0.17381944 0.16110312
0.19662598 0.15733718 0.19763281 0.20617132 0.19089484 0.19732752
0.1870988 0.16508744 0.13579399 0.13825028 0.19650695 0.2028151
0.20796896 0.16130049 0.18487175 0.15657099 0.14414533 0.19415208
0.14158873 0.20252466 0.19986491 0.1761861 0.12490113 0.14082219
0.19325744 0.17937965 0.17161699 0.20017089 0.1953598 0.19116857
0.18963095 0.18015937 0.17033672 0.12995853 0.17816802 0.20537938
0.17656901 0.17246887 0.19970285 0.18360697 0.14851416 0.14957287
0.17847791 0.19361662 0.12858931 0.15501569 0.16153916 0.18401976
0.19767486 0.18276181 0.18216812 0.18459979 0.17810379 0.20029616
0.16008779 0.18842728 0.19535601 0.16842141 0.18356466 0.19130296
0.19826594 0.16606207 0.17985446 0.18720729 0.16947971 0.19309211
0.17904012 0.18225684 0.12697826 0.20334946 0.20230229 0.19601187
0.18372611 0.13250111 0.1508019 0.1991842 0.16360692 0.18059866
0.17001721 0.16149873 0.16174695 0.19311724 0.17267033 0.14393295
0.19088417 0.18659356]
</code></pre>
|
<python><machine-learning><scikit-learn><roc><auc>
|
2023-01-11 16:47:36
| 1
| 377
|
user3165683
|
75,086,387
| 705,113
|
How to filter a dataframe by the mean of each group using one-liner pandas code
|
<p>I'm trying to filter my dataset so that only the rows whose values, for a given column, are larger than the group mean (or any other function) of that column remain.</p>
<p>For instance, suppose we have the following data frame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"Group": ["A", "A", "A", "B", "B", "C"],
"C1": [1, 2, 3, 2, 3, 1],
"C2": [1, 1, 5, 1, 2, 1]
})
</code></pre>
<pre><code> Group C1 C2
0 A 1 1
1 A 2 1
2 A 3 5
3 B 2 1
4 B 3 2
5 C 1 1
</code></pre>
<p>Now, I want to create other filtered data frames subsetting the original one based on some function. For example, let's use the mean as a baseline:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby("Group").mean()
</code></pre>
<pre><code> C1 C2
Group
A 2.0 2.333333
B 2.5 1.500000
C 1.0 1.000000
</code></pre>
<p>Now, I want all rows whose values are greater than or equal to the group mean in column <code>C1</code>:</p>
<pre><code> Group C1 C2
1 A 2 1
2 A 3 5
4 B 3 2
5 C 1 1
</code></pre>
<p>Or I want a subset such that the values are less than or equal to the mean in column <code>C2</code>:</p>
<pre><code> Group C1 C2
0 A 1 1
1 A 2 1
3 B 2 1
5 C 1 1
</code></pre>
<p>To make this easier/more compact, it would be good to have pandas one-liner-style code, i.e., using the typical <code>.</code> pipeline, something like:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby("Group").filter(lambda x : x["C1"] >= x["C1"].mean())
</code></pre>
<p>Note that the code above doesn't work because <code>filter</code> requires a function that returns <code>True/False</code>, not a data frame to be combined, as I intend.</p>
<p>Obviously, I can iterate using <code>groupby</code>, filter the group, and then concatenate the results:</p>
<pre class="lang-py prettyprint-override"><code>new_df = None
for _, group in df.groupby("Group"):
tmp = group[group["C1"] >= group["C1"].mean()]
new_df = pd.concat([new_df, tmp])
</code></pre>
<p>(<strong>Note:</strong> The <code>>=</code>, otherwise we will have empty data frames messing up with the concatenation)</p>
<p>Same thing I can do in the other case:</p>
<pre class="lang-py prettyprint-override"><code>new_df = None
for _, group in df.groupby("Group"):
tmp = group[group["C2"] <= group["C2"].mean()]
new_df = pd.concat([new_df, tmp])
</code></pre>
<p>But do we have a pandas-idiomatic (maybe generic, short, and probably optimized) way to do that?</p>
<p>Just for curiosity, I can do it very easily in R:</p>
<pre><code>r$> df <- tibble(
Group = c("A", "A", "A", "B", "B", "C"),
C1 = c(1, 2, 3, 2, 3, 1),
C2 = c(1, 1, 5, 1, 2, 1)
)
r$> df
# A tibble: 6 × 3
Group C1 C2
<chr> <dbl> <dbl>
1 A 1 1
2 A 2 1
3 A 3 5
4 B 2 1
5 B 3 2
6 C 1 1
r$> df %>% group_by(Group) %>% filter(C1 >= mean(C1))
# A tibble: 4 × 3
# Groups: Group [3]
Group C1 C2
<chr> <dbl> <dbl>
1 A 2 1
2 A 3 5
3 B 3 2
4 C 1 1
r$> df %>% group_by(Group) %>% filter(C1 <= mean(C1))
# A tibble: 4 × 3
# Groups: Group [3]
Group C1 C2
<chr> <dbl> <dbl>
1 A 1 1
2 A 2 1
3 B 2 1
4 C 1 1
</code></pre>
<p>Thanks!</p>
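<p>A minimal sketch of the kind of one-liner I have in mind, assuming a <code>transform</code>-based filter counts as idiomatic (data and column names as in the example above):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({
    "Group": ["A", "A", "A", "B", "B", "C"],
    "C1": [1, 2, 3, 2, 3, 1],
    "C2": [1, 1, 5, 1, 2, 1]
})

# groupby().transform("mean") broadcasts each group's mean back onto the
# original rows, so the comparison gives a boolean mask aligned with df
ge_mean_c1 = df[df["C1"] &gt;= df.groupby("Group")["C1"].transform("mean")]
le_mean_c2 = df[df["C2"] &lt;= df.groupby("Group")["C2"].transform("mean")]

print(ge_mean_c1)
print(le_mean_c2)
</code></pre>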
|
<python><pandas><group-by><filtering>
|
2023-01-11 16:44:46
| 1
| 693
|
an_drade
|
75,086,370
| 5,920,187
|
Custom Python Azure IoT Edge Module "No such file or directory: ''"
|
<p>I'm trying to debug my custom Azure IoT Edge module in Python using Visual Studio Code, however when I run</p>
<p><code>client = IoTHubModuleClient.create_from_edge_environment()</code></p>
<p>I get the following error:</p>
<pre><code>Exception has occurred: FileNotFoundError
[Errno 2] No such file or directory: ''
</code></pre>
<p>I'm thinking this may be because I am missing a step in connecting to the Azure IoT Hub but I can't identify what I might be missing. For reference, I am following the tutorials at <a href="https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-python-module?view=iotedge-1.4" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-python-module?view=iotedge-1.4</a> and <a href="https://learn.microsoft.com/en-us/azure/iot-edge/how-to-vs-code-develop-module?view=iotedge-1.4&tabs=csharp&pivots=iotedge-dev-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/iot-edge/how-to-vs-code-develop-module?view=iotedge-1.4&tabs=csharp&pivots=iotedge-dev-cli</a>.</p>
<p>I'm simply just trying to build a custom IoT Edge module in Python and debug it locally to confirm it's working as expected. Thanks in advance for any suggestions.</p>
|
<python><visual-studio-code><azure-iot-hub><azure-iot-edge>
|
2023-01-11 16:43:11
| 2
| 331
|
jcbrowni
|
75,086,308
| 7,475,193
|
Function returns None- Python
|
<p>I call an exchange API. When I try to put it as a function, it returns <code>None</code>:</p>
<pre><code>def getCurrentExchange(source, target):
"""The function takes the source - source and target currency - target and extract the rate as of now"""
url = 'https://api.exchangerate.host/convert?from=source&to=target'
response = requests.get(url)
data = response.json()
xchng = data['result']
return xchng
print(getCurrentExchange("EUR", "USD"))
</code></pre>
<p>When I call the API without wrapping it as a function, I get the rate:</p>
<pre><code>url = 'https://api.exchangerate.host/convert?from=USD&to=EUR'
response = requests.get(url)
data = response.json()
data['result']
</code></pre>
<p>What am I doing wrong?</p>
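<p>A minimal sketch of what I suspect the fix looks like, assuming the endpoint behaves as in my second snippet; the query parameters need to be interpolated (here with an f-string) instead of sending the literal words <code>source</code>/<code>target</code>:</p>
<pre><code>import requests

def get_current_exchange(source, target):
    # the f-string inserts the actual arguments into the URL; the version
    # above sends the literal strings "source" and "target"
    url = f'https://api.exchangerate.host/convert?from={source}&to={target}'
    response = requests.get(url)
    data = response.json()
    return data['result']

print(get_current_exchange("EUR", "USD"))
</code></pre>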
|
<python>
|
2023-01-11 16:38:02
| 1
| 477
|
eponkratova
|
75,086,302
| 979,099
|
Navigating JSON with variable keys in Python?
|
<p>Lets say I have some json like so store in a variable called <code>data</code></p>
<pre class="lang-json prettyprint-override"><code>{
"print": {
"ams": { "exists": 1},
"fan_speed": 29,
"reports": [
{"name": "foo"},
{"name": "bar"}
]
}
}
</code></pre>
<p>Now I've got the key I want to return stored in a variable called <code>key</code>, for example <code>print.fan_speed</code>, <code>print.ams.exists</code>, <code>print.reports[0].name</code>.</p>
<p>What I want is something like <code>data.get(key)</code>. What is the best way to approach this?</p>
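<p>A minimal sketch of one way I imagine doing it, assuming keys only ever consist of dotted names with optional <code>[index]</code> suffixes as in the examples:</p>
<pre class="lang-py prettyprint-override"><code>import re

data = {
    "print": {
        "ams": {"exists": 1},
        "fan_speed": 29,
        "reports": [{"name": "foo"}, {"name": "bar"}],
    }
}

def get_by_path(obj, path):
    current = obj
    for part in path.split('.'):
        # "reports[0]" splits into the dict key "reports" and the list index 0
        m = re.match(r'(\w+)(?:\[(\d+)\])?$', part.strip())
        current = current[m.group(1)]
        if m.group(2) is not None:
            current = current[int(m.group(2))]
    return current

print(get_by_path(data, "print.fan_speed"))        # 29
print(get_by_path(data, "print.ams.exists"))       # 1
print(get_by_path(data, "print.reports[0].name"))  # foo
</code></pre>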
|
<python><json><python-3.x>
|
2023-01-11 16:37:15
| 1
| 6,315
|
K20GH
|
75,086,141
| 5,889,169
|
Django - how to remove INFO logs of "channels" package
|
<p>I recently started using the <a href="https://channels.readthedocs.io/en/stable/" rel="nofollow noreferrer">channels</a> package in Django (versions: <code>channels==3.0.4</code> and <code>channels-redis==3.3.1</code>)</p>
<p>The application is sending massive amount of unwanted logs for each request i make to Django. log for example:</p>
<pre><code>{"time": "2023-01-11 16:12:09 UTC", "msg": "HTTP %(method)s %(path)s %(status)s [%(time_taken).2f, %(client)s]", "logger": "edison", "level": "INFO", "log_trace": "/Volumes/dev/venv3.8/lib/python3.8/site-packages/channels/management/commands/runserver.py"}
</code></pre>
<p>the log seems exactly the same no matter what request i send.</p>
<p>I tried to set the <code>channels</code> logging level to ERROR using <code>logging.getLogger('channels').setLevel(logging.ERROR)</code> just like i do with other packages but it doesn't help</p>
<p>any ideas what I need to do to remove those logs?</p>
|
<python><django><logging><redis><channel>
|
2023-01-11 16:24:26
| 0
| 781
|
shayms8
|
75,086,138
| 4,865,723
|
Combine two sort conditions one ascending the other descending
|
<p>I would like to combine two sort conditions but one is ascending the other descending.</p>
<p>The input data are this list of tuples:</p>
<pre><code>data = [
('Apple', 4),
('Cherry', 5),
('Ananas', 4),
('Blueberry', 3),
('Banana', 3)
]
</code></pre>
<p>The sorting conditions:</p>
<ol>
<li>2nd tuple item (the <code>int</code>) in <em>reverse</em> order.</li>
<li>"Inside" those sub-groups: 1st tuple item (the <code>str</code>) in regular lexicographic order.</li>
</ol>
<p>The expected result should be</p>
<pre><code> ('Cherry', 5),
('Ananas', 4),
('Apple', 4),
('Banana', 3),
('Blueberry', 3),
</code></pre>
<p>I know that I can combine conditions like this:</p>
<pre><code>sort(data, key=lambda x: condA(x[0]), condB(x[1]))
</code></pre>
<p>But my problem is that I don't know how to make one reverse the other not and how to do the lexicographic ordering in a lambda.</p>
<p>This is the MWE</p>
<pre><code>#!/usr/bin/env python3
data = [
('Apple', 4),
('Cherry', 5),
('Ananas', 4),
('Blueberry', 3),
('Banana', 3)
]
expect = [
('Cherry', 5),
('Ananas', 4),
('Apple', 4),
('Banana', 3),
('Blueberry', 3),
]
result = sorted(data, key=lambda x: x[1], reverse=True)
print(result)
# [('Cherry', 5), ('Apple', 4), ('Ananas', 4), ('Blueberry', 3), ('Banana', 3)]
result = sorted(data, key=lambda x: x[0])
print(result)
# [('Ananas', 4), ('Apple', 4), ('Banana', 3), ('Blueberry', 3), ('Cherry', 5)]
# What I want.
print(expect)
# [('Cherry', 5), ('Ananas', 4), ('Apple', 4), ('Banana', 3), ('Blueberry', 3)]
</code></pre>
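<p>For what it's worth, a minimal sketch of the behaviour I am after, assuming negating the numeric key is an acceptable way to get the reverse order for one criterion only:</p>
<pre><code>data = [
    ('Apple', 4),
    ('Cherry', 5),
    ('Ananas', 4),
    ('Blueberry', 3),
    ('Banana', 3),
]

# negate the numeric part so it sorts descending, while the string part
# keeps its natural ascending lexicographic order
result = sorted(data, key=lambda x: (-x[1], x[0]))
print(result)
# [('Cherry', 5), ('Ananas', 4), ('Apple', 4), ('Banana', 3), ('Blueberry', 3)]
</code></pre>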
|
<python><sorting>
|
2023-01-11 16:23:47
| 1
| 12,450
|
buhtz
|
75,085,943
| 19,633,374
|
Generate holiday data using python holidays library
|
<p>I want to create a dataframe with the holidays using the Python holidays library.</p>
<pre><code>from datetime import date
import holidays
us_holidays = holidays.US()
date(2015, 1, 1) in us_holidays # True
date(2015, 1, 2) in us_holidays # False
</code></pre>
<p>I want to create a dataframe with dates and holiday column.
For example I want to create a data frame of holidays from 01.01.2019 to 31.12.2020</p>
<p>Expected Output:</p>
<pre><code>date holiday
01.01.2019 1
09.03.2019 1
.
.
.
31.12.2020 1
</code></pre>
<p>How can I extract this dataframe from the holiday package?</p>
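<p>A minimal sketch of what I am after, assuming the <code>years</code> argument of <code>holidays.US</code> can be used to pre-generate the whole range:</p>
<pre><code>import pandas as pd
import holidays

# the holidays object behaves like a dict mapping datetime.date to the
# holiday name, so its keys can be turned into a date column directly
us_holidays = holidays.US(years=[2019, 2020])

df = (pd.DataFrame({"date": list(us_holidays.keys()), "holiday": 1})
        .sort_values("date")
        .reset_index(drop=True))
print(df.head())
print(df.tail())
</code></pre>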
|
<python><dataframe><python-holidays>
|
2023-01-11 16:09:04
| 2
| 642
|
Bella_18
|
75,085,847
| 11,729,210
|
Install local package through a Dockerfile
|
<p>I have started learning Docker and I have developed a Python package (not published anywhere, it is just used internally) that installs and works fine locally (here I will call it <code>mypackage</code>). However, when trying to install it in a Docker container, Python in the container fails to recognise it even though during the build of the image no error was raised. The Dockerfile looks like this:</p>
<pre><code># install Ubuntu 20.04
FROM ubuntu:20.04
# update Ubuntu packages
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt upgrade -y
RUN apt install -y apt-utils \
build-essential \
curl \
mysql-server \
libmysqlclient-dev \
libffi-dev \
libssl-dev \
libxml2-dev \
libxslt1-dev \
unzip \
zlib1g-dev
# install Python 3.9
RUN apt-get install -y software-properties-common gcc && \
add-apt-repository -y ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y python3.9 python3.9-dev python3.9-distutils python3-pip python3-apt python3.9-venv
# make symlink (overriding default Python3.8 installed with Ubuntu)
RUN rm /usr/bin/python3
RUN ln -s /usr/bin/python3.9 /usr/bin/python3
# copy package files and source code
RUN mkdir mypackage
COPY pyproject.toml setup.cfg setup.py requirements.txt ./mypackage/
COPY src mypackage/src/
# add path
ENV PACKAGE_PATH=/mypackage/
ENV PATH="$PACKAGE_PATH/:$PATH"
# install mypackage
RUN pip3 install -e ./mypackage
CMD ["python3.9", "main.py"]
</code></pre>
<p>So the above runs successfully, but if I run <code>sudo docker run -it test_image bin/bash</code> and run <code>pip3 list</code>, the package will not be there, and I get a <code>ModuleNotFoundError</code> when running code depending on <code>mypackage</code>. Interestingly, if I create a virtual environment by replacing this:</p>
<pre><code>ENV PACKAGE_PATH=/mypackage/
ENV PATH="$PACKAGE_PATH/:$PATH"
</code></pre>
<p>by this:</p>
<pre><code>ENV VIRTUAL_ENV=/opt/venv
RUN python3.9 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
</code></pre>
<p>it works. Ideally, I want to know why I need to create a virtual environment and how can I run local packages in a container without creating virtual environments.</p>
|
<python><docker><pip><dockerfile><virtualenv>
|
2023-01-11 16:01:37
| 0
| 375
|
AndrΓ© LourenΓ§o
|
75,085,758
| 1,127,776
|
function from_dict() failing for unknown reason in python
|
<p>I converted below JSON using <a href="https://json2csharp.com/code-converters/json-to-python" rel="nofollow noreferrer">https://json2csharp.com/code-converters/json-to-python</a> to a dataclass:</p>
<pre><code>{
"bypassCd": [
"Duis sint ipsum in",
"consequat"
]
}
</code></pre>
<p>It generated the dataclass below - for some reason, it shows an error at the .from_dict() method and I'm unable to figure it out. Please advise.</p>
<pre><code>from typing import List
from typing import Any
from dataclasses import dataclass
import json
@dataclass
class Root:
bypassCd: List[str]
@staticmethod
def from_dict(obj: Any) -> 'Root':
_bypassCd = [.from_dict(y) for y in obj.get("bypassCd")]
return Root(_bypassCd)
# Example Usage
jsonstring = json.loads('''
{
"bypassCd": [
"Duis sint ipsum in",
"consequat"
] }
''')
root = Root.from_dict(jsonstring)
print(root)
</code></pre>
<p>error:</p>
<pre><code> File "/local_home/afxx1285/pyprograms/test2.py", line 11
_bypassCd = [.from_dict(y) for y in obj.get("bypassCd")]
^
SyntaxError: invalid syntax
</code></pre>
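<p>A minimal sketch of what I think the converter intended, assuming the list really contains plain strings (the stray leading dot before <code>from_dict</code> is what raises the <code>SyntaxError</code>):</p>
<pre><code>from typing import List
from typing import Any
from dataclasses import dataclass
import json

@dataclass
class Root:
    bypassCd: List[str]

    @staticmethod
    def from_dict(obj: Any) -> 'Root':
        # the list elements are plain strings, so there is no nested
        # dataclass to call from_dict on; just take the values as str
        _bypassCd = [str(y) for y in obj.get("bypassCd")]
        return Root(_bypassCd)

jsonstring = json.loads('{"bypassCd": ["Duis sint ipsum in", "consequat"]}')
print(Root.from_dict(jsonstring))
</code></pre>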
|
<python><json><python-dataclasses>
|
2023-01-11 15:54:32
| 2
| 1,092
|
jimsweb
|
75,085,734
| 9,212,995
|
What is the best way to raise an exception in case of BadHeaderError in django unit testing?
|
<p>My test fails, which suggests the view is allowing the email with wrong data even though it should return an HttpResponse as expected. I have tried to figure out why my test is failing and returning a 200 HTTP status code instead of the expected 400.</p>
<h1>reset password</h1>
<pre><code>class ResetPassword(View):
form_class = ForgetPasswordForm()
template_name = 'authentication/password_reset.html'
def get(self, request):
form = self.form_class
return render(request, self.template_name, {'form': form})
def post(self, request):
msg = None
form = ForgetPasswordForm(request.POST)
if form.is_valid():
data = form.cleaned_data.get('email')
associated_users = Users.objects.filter(Q(email = data))
if associated_users.exists():
for user in associated_users:
subject = 'Password Reset Requested'
email_template_name = "authentication/password_reset_email.txt"
c = {
"email": user.email,
'domain': '127.0.0.1:8000',
'site_name': 'ATB Project Organizer',
'uid': urlsafe_base64_encode(force_bytes(user.pk)),
'user': user,
'token': default_token_generator.make_token(user),
'protocol': 'http',
}
email = render_to_string(email_template_name, c)
try:
send_mail(subject, email, 'admin@example.com', [user.email], fail_silently=False)
except BadHeaderError:
return HttpResponse('Invalid header found')
msg = 'You have successfully reset your password!'
return redirect('/password_reset/done/')
else:
msg = 'No records found for this email, please make sure you have entered the correct email address!'
form = ForgetPasswordForm()
return render(request, 'authentication/password_reset.html', {'form': form, 'msg': msg})
</code></pre>
<h2>Test to raise an exception</h2>
<pre><code>from django.test import TestCase
from django.urls import reverse
from django.core import mail
from django.contrib.auth import get_user_model
User = get_user_model()
class PasswordResetTest(TestCase):
def setUp(self):
self.user1 = User.objects.create_user("user1", email = "user1@mail.com", password = "password@121", orcid = '1234567890987654')
self.user2 = User.objects.create_user("user2", email = "user2@mail.com", password = "password@122", orcid = '1234567890987655')
self.user3 = User.objects.create_user("user3@mail.com", email = "not-that-mail@mail.com", password = "password@123", orcid = '1234567890987656')
self.user4 = User.objects.create_user("user4", email = "user4@mail.com", password = "passs", orcid = '1234567890987657')
self.user5 = User.objects.create_user("user5", email = "uΡer5@mail.com", password = "password@125", orcid = '1234567890987658') # email contains kyrillic s
self.user={
'username':'username',
'email':'testemail@gmail.com',
'password1':'password@123',
'password2':'password@123',
'orcid': '1234123412341239',
}
def test_user_can_resetpassword(self):
response = self.client.get(reverse('resetpassword'))
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'authentication/password_reset.html')
# Then test that the user doesn't have an "email address" and so is not registered
response = self.client.post(reverse('resetpassword'), {'email': 'admin@example.com'}, follow=True)
self.assertEqual(response.status_code, 200)
# Then test that the user doesn't have an "email address" and so is not registered
# Then post the response with our "email address"
# Then post the response with our "email address"
response = self.client.post(reverse('resetpassword'),{'email':'user1@mail.com'})
self.assertEqual(response.status_code, 302)
# At this point the system will "send" us an email. We can "check" it thusly:
self.assertEqual(len(mail.outbox), 1)
self.assertEqual(mail.outbox[0].subject, 'Password Reset Requested')
def test_exception_raised(self):
# verify the email with wrong data
data = {"uid": "wrong-uid", "token": "wrong-token"}
response = self.client.post(reverse('resetpassword'), data, format='text/html')
self.assertEqual(response.status_code, 400)
</code></pre>
<h2>Error</h2>
<p>File "../tests/test_passwordreset.py", line 55, in test_exception_raised</p>
<pre><code>self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
</code></pre>
<p><strong>AssertionError: 200 != 400</strong></p>
<h2>Failing code</h2>
<p><a href="https://i.sstatic.net/YorWE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YorWE.png" alt="enter image description here" /></a></p>
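<p>A minimal sketch of how I imagine forcing the <code>BadHeaderError</code> branch, assuming the view module is <code>authentication.views</code> (that import path is a guess) and that this method lives inside the <code>PasswordResetTest</code> class above, so <code>setUp</code> provides <code>user1@mail.com</code>:</p>
<pre><code>from unittest.mock import patch
from django.core.mail import BadHeaderError

# goes inside the existing PasswordResetTest class
@patch("authentication.views.send_mail", side_effect=BadHeaderError)
def test_bad_header_branch(self, mocked_send_mail):
    # the patched send_mail raises BadHeaderError, forcing the except branch;
    # HttpResponse('Invalid header found') defaults to status code 200
    response = self.client.post(reverse('resetpassword'), {'email': 'user1@mail.com'})
    self.assertEqual(response.status_code, 200)
</code></pre>
<p>Note that <code>HttpResponse('Invalid header found')</code> returns 200 by default, so an assertion on 400 could only pass if the view returned <code>HttpResponseBadRequest</code> instead.</p>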
|
<python><django><django-rest-framework><django-views><django-forms>
|
2023-01-11 15:52:52
| 1
| 372
|
Namwanza Ronald
|
75,085,728
| 13,494,917
|
How to further specify an exception when there are two different errors under the same exception class
|
<p>I have two different errors under the same exception class that I want to handle differently.</p>
<p>1.</p>
<blockquote>
<p>Exception: ProgrammingError: (pyodbc.ProgrammingError) ('42S22', "[42S22] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column name 'newCol'. (207) (SQLExecDirectW); [42S22] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")</p>
</blockquote>
<ol start="2">
<li></li>
</ol>
<blockquote>
<p>Exception: ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near ','. (102) (SQLExecDirectW)")</p>
</blockquote>
<p>How do I further specify these errors in my try-except block so I can handle them differently?</p>
<pre class="lang-py prettyprint-override"><code>try:
#code
except ProgrammingError:
#code
</code></pre>
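<p>A minimal sketch of the direction I am considering, assuming the first element of the exception's <code>args</code> is the SQLSTATE shown in the messages above (<code>cursor</code> and <code>sql</code> are placeholders; if the error is raised through SQLAlchemy, the underlying pyodbc error should be available as <code>exc.orig</code>):</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc

try:
    cursor.execute(sql)  # placeholder for the real query
except pyodbc.ProgrammingError as exc:
    # in the messages above, args[0] is the SQLSTATE string
    sqlstate = exc.args[0]
    if sqlstate == '42S22':
        ...  # invalid column name
    elif sqlstate == '42000':
        ...  # incorrect syntax
    else:
        raise
</code></pre>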
|
<python><exception>
|
2023-01-11 15:52:32
| 0
| 687
|
BlakeB9
|
75,085,724
| 1,792,858
|
How to enforce microseconds with datetime()?
|
<p>I'd like <code>date2</code> to output <code>2023-01-11 14:00:00.000000</code> to have a predictable format.</p>
<pre><code>from datetime import datetime
date1 = datetime(2023, 1, 11, 14, 0, 0, 123456)
date2 = datetime(2023, 1, 11, 14, 0, 0)
print(date1) # 2023-01-11 14:00:00.123456
print(date2) # 2023-01-11 14:00:00
</code></pre>
<p>Any ideas?</p>
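<p>A minimal sketch of the output I am after, assuming explicit formatting is acceptable instead of relying on the default <code>print(date2)</code>:</p>
<pre><code>from datetime import datetime

date2 = datetime(2023, 1, 11, 14, 0, 0)

# str()/print() drops the fractional part when microsecond == 0, but an
# explicit format always emits the 6-digit fraction
print(date2.strftime("%Y-%m-%d %H:%M:%S.%f"))              # 2023-01-11 14:00:00.000000
print(date2.isoformat(sep=" ", timespec="microseconds"))   # 2023-01-11 14:00:00.000000
</code></pre>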
|
<python><python-3.x><datetime>
|
2023-01-11 15:51:58
| 0
| 8,825
|
Mr. B.
|
75,085,591
| 891,919
|
Gensim ensemblelda multiprocessing: index -1 is out of bounds for axis 0 with size 0
|
<p>I'm using the <a href="https://radimrehurek.com/gensim/" rel="nofollow noreferrer">gensim library</a> for topic modelling, more precisely the <a href="https://radimrehurek.com/gensim/models/ensemblelda.html" rel="nofollow noreferrer">Ensemble LDA</a> method. My code is fairly standard (I follow the documentation), the main part is:</p>
<pre><code> model = models.EnsembleLda(corpus=corpus,
id2word=id2word,
num_topics=ntopics,
passes=2,
iterations = 200,
num_models=ncores,
topic_model_class=models.LdaModel,
ensemble_workers=nworkers,
distance_workers=ncores)
</code></pre>
<p>(full code at <a href="https://github.com/erwanm/gensim-temporary/blob/main/gensim-topics.py" rel="nofollow noreferrer">https://github.com/erwanm/gensim-temporary/blob/main/gensim-topics.py</a>)</p>
<p>But with my data I <em>sometimes</em> obtain the error below. But it also often runs correctly with a subset of the data, so I don't know if the problem is related to my data?</p>
<pre><code>Process Process-52:
Traceback (most recent call last):
File "/home/moreaue/anaconda3/envs/twarc2/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/moreaue/anaconda3/envs/twarc2/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/moreaue/anaconda3/envs/twarc2/lib/python3.10/site-packages/gensim/models/ensemblelda.py", line 534, in _asymmetric_distance_matrix_worker
distance_chunk = _calculate_asymmetric_distance_matrix_chunk(
File "/home/moreaue/anaconda3/envs/twarc2/lib/python3.10/site-packages/gensim/models/ensemblelda.py", line 491, in _calculate_asymmetric_distance_matrix_chunk
mask = masking_method(ttd1, masking_threshold)
File "/home/moreaue/anaconda3/envs/twarc2/lib/python3.10/site-packages/gensim/models/ensemblelda.py", line 265, in mass_masking
smallest_valid = sorted_a[largest_mass][-1]
IndexError: index -1 is out of bounds for axis 0 with size 0
</code></pre>
<p>The error seems related to multiprocessing, since <code>ensemblelda</code> runs a number of threads (each running one instance of LDA).</p>
<p>What can cause this error? Any advice on how I can fix it?</p>
|
<python><nlp><multiprocessing><gensim>
|
2023-01-11 15:43:08
| 1
| 1,185
|
Erwan
|
75,085,016
| 5,869,076
|
Run Celery tasks on Railway
|
<p>I deployed a Django project in Railway, and it uses Celery and Redis to perform a scheduled task. The project is successfully online, but the Celery tasks are not performed.</p>
<p>If I execute the Celery worker from my computer's terminal using the Railway CLI, the tasks are performed as expected, and the results are saved in the Railway's PostgreSQL, and thus those results are displayed in the on-line site. Also, the redis server used is also the one from Railway.</p>
<p>However, Celery is operating in 'local'. This is the log on my local terminal showing the Celery is running local, and the Redis server is the one up in Railway:</p>
<pre><code>-------------- celery@MacBook-Pro-de-Corey.local v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- macOS-13.1-arm64-arm-64bit 2023-01-11 23:08:34
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: suii:0x1027e86a0
- ** ---------- .> transport: redis://default:**@containers-us-west-28.railway.app:7078//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 10 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. kansoku.tasks.suii_kakunin
</code></pre>
<p>I included this line of code in the Procfile regarding to the worker (as I saw in another related answer):</p>
<pre><code>worker: python manage.py qcluster --settings=my_app_name.settings
</code></pre>
<p>And also have as an environment variable <code>CELERY_BROKER_REDIS_URL</code> pointing to Railway's <code>REDIS_URL</code>. I also tried creating a 'Periodic task' from the admin of the live aplication, but it just doesn't get executed. What should I do in order to have the scheduled tasks be done automatically without my PC?</p>
|
<python><django><celery><django-deployment>
|
2023-01-11 14:58:27
| 1
| 611
|
Jim
|
75,085,009
| 1,921,666
|
How to type hint a function, added to class by class decorator in Python
|
<p>I have a class decorator, which adds a few functions and fields to decorated class.</p>
<pre class="lang-py prettyprint-override"><code>@mydecorator
@dataclass
class A:
a: str = ""
</code></pre>
<p>Added (via <code>setattr()</code>) is a <code>.save()</code> function and a set of info for dataclass fields as a separate dict.</p>
<p>I'd like VScode and mypy to properly recognize that, so that when I use:</p>
<pre class="lang-py prettyprint-override"><code>a=A()
a.save()
</code></pre>
<p>or <code>a.my_fields_dict</code> those 2 are properly recognized.</p>
<p>Is there any way to do that? Maybe modify class <code>A</code> type annotations at runtime?</p>
|
<python><python-decorators><python-typing>
|
2023-01-11 14:57:54
| 1
| 412
|
chersun
|
75,084,859
| 6,304,433
|
How can I disallow mypy `type: ignore` comments?
|
<p>I would like for <code>mypy</code> to ignore or disallow any <code># type: ignore</code> comments.</p>
<p>Is this possible, short of modifying source files to remove the comments?</p>
<p>Looking for the equivalent of <a href="https://flake8.pycqa.org/en/latest/user/options.html#cmdoption-flake8-disable-noqa" rel="nofollow noreferrer"><code>flake8 --disable-noqa</code></a></p>
<h3>Context</h3>
<p>When we initially started type-checking our code we ignored a lot of type errors in order to bring it under coverage. Now we are ready to rachet up the quality of our typing and want to be able to locally disable all type ignore comments for a particular module (or the whole project).</p>
<p>Somewhat related to <a href="https://stackoverflow.com/questions/65579151/how-to-check-if-mypy-type-ignore-comments-are-still-valid-and-required">How to check if Mypy `# type: ignore` comments are still valid and required?</a></p>
<p><a href="https://mypy.readthedocs.io/en/stable/command_line.html?highlight=strict#the-mypy-command-line" rel="nofollow noreferrer"><code>mypy</code> command-line arguments docs</a></p>
<p><a href="https://mypy.readthedocs.io/en/stable/config_file.html" rel="nofollow noreferrer"><code>mypy</code> configuration file</a></p>
<p>Edit:
Note, we already make use of the <a href="https://mypy.readthedocs.io/en/stable/error_code_list2.html?highlight=ignore-without-code#check-that-type-ignore-include-an-error-code-ignore-without-code" rel="nofollow noreferrer"><code>ignore-without-code</code> rule</a></p>
|
<python><python-typing><mypy>
|
2023-01-11 14:46:40
| 0
| 618
|
Gabriel G.
|
75,084,852
| 7,556,397
|
How can I transform columnar hierarchy into parent child list in Pandas?
|
<p>I am trying to transform a hierarchy that use a columnar format with a fixed number of columns (many of them being null) into an adjacency list, with child and parent, using the Pandas library.</p>
<h2>example hierarchy</h2>
<p>Here is a fictitious example with 5 hierarchical levels:</p>
<pre><code> Books
/ | \
Science (null) (null)
/ | \
Astronomy (null) Pictures
/ \ | \
Astrophysics Cosmology (null) Astronomy
/ \ | / | \
(null) (null) Amateurs_Astronomy Galaxies Stars Astronauts
</code></pre>
<h2>data.csv</h2>
<pre><code>id,level_1,level_2,level_3,level_4,level_5
1,Books,Science,Astronomy,Astrophysics,
2,Books,Science,Astronomy,Cosmology,
3,Books,,,,Amateurs_Astronomy
4,Books,,Pictures,Astronomy,Galaxies
5,Books,,Pictures,Astronomy,Stars
6,Books,,Pictures,Astronomy,Astronauts
</code></pre>
<h2>what I have done</h2>
<p>I have started by adding a column that will store a uuid for each existing node.</p>
<p>[EDIT, further to mozway comment]</p>
<p>The problem with this function is that it will populate different uuids for nodes which are the same:</p>
<ul>
<li>first and second rows have the same level 1, 2, 3 and so should have the same uuid as pk_level_3</li>
<li>in the same way, rows 4, 5 and 6 should have the same uuid as pk_level_3 and pk_level_4.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import uuid
import pandas as pd
df = pd.read_csv('data.csv')
# iterate over each column in the dataframe to add a new column,
# containing a uuid each time the csv row has a value for this level:
for col in df.columns:
if df[col].isnull().sum() > 0:
new_col = 'pk_' + col
df[new_col] = None
# fill the new column with uuid only for non-null values of the original column
df.loc[df[col].notnull(), new_col] = df.loc[df[col].notnull(), col].apply(lambda x: uuid.uuid4())
</code></pre>
<p>Also, I do not know how to find the parent for each node, skipping all the null ones.</p>
<p>Any idea on how I could get the following result ?</p>
<pre><code>this_node,parent_node,this_node_uuid,parent_node_uuid
Science,Books,books/science-node-uuid,books-node-uuid
Astronomy,Science,books/science/astronomy-node-uuid,books/science-node-uuid
Astrophysics,Astronomy,books/science/astronomy/astrophysics-node-uuid,books/science/astronomy-node-uuid
Amateurs_Astronomy,Books,books/amateurs_astronomy-node-uuid,books-node-uuid
</code></pre>
<p>(…)</p>
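<p>A minimal sketch of the direction I am exploring, assuming a node's identity is its full path from the root (so duplicate nodes collapse) and that deterministic <code>uuid5</code> values are acceptable instead of <code>uuid4</code>; the CSV is the one shown above:</p>
<pre class="lang-py prettyprint-override"><code>import uuid
import pandas as pd

df = pd.read_csv('data.csv')
level_cols = [c for c in df.columns if c.startswith('level_')]

edges = set()
for _, row in df.iterrows():
    # keep only the non-null nodes of this row, in order, skipping null levels
    nodes = [str(row[c]) for c in level_cols if pd.notnull(row[c])]
    for i in range(1, len(nodes)):
        child_path = '/'.join(nodes[:i + 1])
        parent_path = '/'.join(nodes[:i])
        edges.add((nodes[i], nodes[i - 1], child_path, parent_path))

result = pd.DataFrame(sorted(edges),
                      columns=['this_node', 'parent_node', 'this_path', 'parent_path'])
# uuid5 is deterministic per path, so identical nodes get identical uuids
result['this_node_uuid'] = result['this_path'].map(lambda p: uuid.uuid5(uuid.NAMESPACE_URL, p))
result['parent_node_uuid'] = result['parent_path'].map(lambda p: uuid.uuid5(uuid.NAMESPACE_URL, p))
print(result)
</code></pre>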
|
<python><pandas>
|
2023-01-11 14:46:23
| 3
| 1,420
|
Lionel Hamayon
|
75,084,839
| 5,502,306
|
Finding python on Apple Silicon
|
<p>I have installed Python on my Apple Silicon (ARM64) machine, but it installs it as <code>python3</code> and not <code>python</code>.</p>
<p>The problem is that I have a <code>node.js</code> project with dependencies which need python (and pip). They fail to build because they are unable to find <code>python</code>.</p>
<p><code>python3</code> is on the path at <code>/opt/homebrew/python3</code> which in turn is a link to</p>
<p><code>/opt/homebrew/Cellar/python@3.10/3.10.8/bin/python3</code></p>
<p>I think I need to create a symbolic link <code>python</code> to <code>/opt/homebrew/python3</code>, but am unsure which path to use: <code>/opt/homebrew/python</code>, <code>/Applications/python</code>, or something else.</p>
<p>Or would it just be cleaner to create a virtualenv ?</p>
|
<python><macos>
|
2023-01-11 14:44:47
| 1
| 4,776
|
chughts
|
75,084,737
| 8,869,570
|
Is copy() still needed for extracting a subset of a pandas dataframe?
|
<p>In the codebase I'm working in, I see the following</p>
<pre><code>df1 = some pandas dataframe (one of the columns is quantity)
df1_subset = df1[df1.quantity == input_quantity].copy()
</code></pre>
<p>I am wondering why the <code>.copy()</code> is needed here? Doesn't <code>df1_subset = df1[df1.quantity == input_quantity]</code> make <code>df1_subset</code> a copy and not a reference?</p>
<p>I saw in <a href="https://stackoverflow.com/questions/27673231/why-should-i-make-a-copy-of-a-data-frame-in-pandas">why should I make a copy of a data frame in pandas</a> that a reference would be returned, but I think this is outdated?</p>
|
<python><pandas><dataframe>
|
2023-01-11 14:37:22
| 0
| 2,328
|
24n8
|
75,084,713
| 7,048,760
|
How to use docker secrets in docker-compose and python app?
|
<p>I'm trying to use <code>docker secret</code> to store username and password. I did the following:</p>
<p><strong>Step 1: Initiate docker swarm</strong></p>
<p><code>docker swarm init</code></p>
<p><strong>Step 2: Create docker secrets</strong></p>
<p><code>echo "username" | docker secret create app_username -</code></p>
<p><code>echo "password" | docker secret create app_password -</code></p>
<p><strong>Step 3: Update the docker-compose</strong></p>
<pre><code>version: "3.8"
secrets:
app_username:
external: true
app_password:
external: true
services:
backend:
container_name: lxd-backend
build:
context: ./backend
dockerfile: backend.dockerfile
image: lce-backend:latest
env_file:
- .env
ports:
- 8085:80
secrets:
- app_username
- app_password
environment:
- SERVER_NAME=localhost
- SERVER_HOST=https://localhost
- APP_USER_FILE=/run/secrets/app_username
- APP_PW_FILE=/run/secrets/app_password
</code></pre>
<p>Then in my python script,</p>
<pre><code>print("file: ", os.listdir('/run/secrets/'),flush=True)
</code></pre>
<p>I get <code>cannot access /run/secrets: no such file or directory</code> error. What am I doing wrong?</p>
|
<python><docker-compose><docker-secrets>
|
2023-01-11 14:35:26
| 0
| 865
|
Kuni
|
75,084,692
| 3,623,723
|
export pyplot figure to PNG without antialias
|
<p>I'd like to generate high-resolution raster images from pyplot figures, <strong>without antialias</strong> (why? See below)</p>
<p>Some time ago, on Python 2.7 and Matplotlib 1.5, I worked out two processes for this:</p>
<ol>
<li><code>plt.savefig()</code> to PDF, and then manually import into GIMP at 1200dpi or higher, with anti-alias turned off, and export the result to a PNG.</li>
<li><code>plt.savefig()</code> to EPS, and run a small ImageMagick script to convert the result to a non-antialiased png at very high resolution.</li>
</ol>
<p>Drawback: These processes take either time, create new non-python dependencies or are somewhat awkward to implement (e.g.: the ImageMagick method had to be adapted for almost every machine it was running on).</p>
<h5>Question:</h5>
<p>Is there by now (Python 3.9, matplotlib 3.6) another way to produce a high-res, non-antialiased PNG from a Matplotlib/pyplot figure, that works entirely in Python, without external dependencies or manual labour involved? I haven't found any new suitable arguments for <code>plt.savefig()</code>, but maybe there is another way, like some global antialias switch for matplotlib, pyplot or one of the plotting backends?</p>
<h5>In case you're wondering:</h5>
<p><em>I'm generating lots of figures from pyplot, in SVG or PDF format. Such vector figures are ideal for printing, viewing at any scale, and even allow me to make manual adjustments or change the layout or labels later, in Inkscape.</em></p>
<p><em>Sometimes, however, I need to generate raster images for use in applications which can't deal with vector images. I still want them to print sharply (antialiased images don't) and to scale nicely to most reasonable sizes (anti-aliased images don't scale up well). I also find that a 1200dpi non-antialiased image of line art, which produces a 4x antialiased image at 300dpi when scaled down by a factor of 4, actually produces a smaller PNG file than the antialiased scaled-down version -- even in jpeg format[1] -- because it needs a much smaller colour palette.</em></p>
<p>[1]: please don't use jpeg for line art, it hurts my eyes!</p>
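<p>For reference, a minimal sketch of the only in-Python switch I am aware of, assuming the per-artist antialiasing rcParams are the relevant knobs (I have not verified that they remove antialiasing from every artist type):</p>
<pre><code>import matplotlib.pyplot as plt

# set before creating the figure so every artist picks up the defaults
plt.rcParams['lines.antialiased'] = False
plt.rcParams['patch.antialiased'] = False
plt.rcParams['text.antialiased'] = False

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 0, 1], linewidth=1)
fig.savefig('line_art_1200dpi.png', dpi=1200)
</code></pre>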
|
<python><matplotlib>
|
2023-01-11 14:34:08
| 0
| 3,363
|
Zak
|
75,084,665
| 13,950,870
|
'Application not responding' when running opencv imshow() in Jupyter notebook extension inside VSCode
|
<p>Very simple code</p>
<pre><code>import cv2
cap = cv2.VideoCapture('../data/videos/cat.mp4')
ret, frame = cap.read()
cv2.imshow('Hi', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>It opens the first frame as expected and shows it. When I press a key however, python refuses to respond and I get <code>Application not responding</code> leading to a crash after a while or me having to manually force quit it.</p>
<p>I'm running it in a Jupyter notebook in VSCode (using the extension). The strange thing is that when I run this code in a <code>.py</code> file inside VSCode, it does work as expected. I see a million and one threads about this, but they mostly give the advice to add <code>waitKey</code> and <code>destroyAllWindows</code>, and I haven't come across one using <code>ipynb</code> in VSCode. Does anyone know how to fix this?</p>
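<p>For completeness, a minimal sketch of the workaround I am currently falling back to, rendering the frame inline with matplotlib instead of <code>cv2.imshow</code>, though I would still like the HighGUI window to work:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt

cap = cv2.VideoCapture('../data/videos/cat.mp4')
ret, frame = cap.read()
cap.release()

# show the frame inline in the notebook; OpenCV frames are BGR,
# matplotlib expects RGB
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
</code></pre>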
|
<python><opencv><visual-studio-code><jupyter-notebook>
|
2023-01-11 14:32:38
| 0
| 672
|
RogerKint
|
75,084,637
| 1,999,585
|
How can I draw bars close to each other in matplotlib's bar function?
|
<p>I have this bar chart, obtained in Seaborn, using <code>barplot</code>:</p>
<p><a href="https://i.sstatic.net/WVTmZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WVTmZ.png" alt="Seaborn picture" /></a></p>
<p>I have the bar chart for the same dataset, obtained by Matplotlib's <code>bar</code>:
<a href="https://i.sstatic.net/qNP0S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qNP0S.png" alt="Matplotlib picture" /></a></p>
<p>Is there a way, a switch, etc. to get the bars in Matplotlib's picture closer, like they are in the Seaborn version?</p>
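<p>A minimal sketch of the kind of switch I am looking for, assuming the <code>width</code> argument of <code>bar</code> is the relevant knob (the data here is a placeholder, not my real dataset):</p>
<pre><code>import matplotlib.pyplot as plt

labels = ['A', 'B', 'C', 'D']   # placeholder categories
values = [3, 7, 5, 6]           # placeholder heights

fig, ax = plt.subplots()
# a width close to 1.0 leaves almost no gap between adjacent bars
ax.bar(range(len(values)), values, width=0.9)
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels)
plt.show()
</code></pre>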
|
<python><matplotlib><seaborn>
|
2023-01-11 14:30:24
| 1
| 2,424
|
Bogdan Doicin
|
75,084,605
| 4,380,853
|
PyFlink: What's the best way to provide reference data and keep them uptated
|
<p>I have a use case where I want to do comparisons between incoming data and some reference data provided by another service.</p>
<p>What's the best way in <code>pyflink</code> to fetch those data and update them regularly (in intervals of 1-2 hours)</p>
<p>Other considerations:</p>
<ul>
<li>The reference data may contain hundreds of thousands of records</li>
</ul>
|
<python><apache-flink><pyflink>
|
2023-01-11 14:28:56
| 0
| 2,827
|
Amir Afianian
|
75,084,586
| 15,279,420
|
Is there a way to import variables and connections to MWAA?
|
<p>I have a MWAA environment and I have to create another one by Terraform. The environment creation is not an issue, but the 'metadata' of my old enviroment. I want to import all variables and connections, programatically, but I haven't figured out so far.</p>
<p>I tried to change a few things in this <a href="https://docs.aws.amazon.com/mwaa/latest/userguide/samples-variables-import.html" rel="nofollow noreferrer">approach</a>, using POST request to MWAA CLI, but I only get a timeout. And I also tried <a href="https://stackoverflow.com/questions/70896397/creating-airflow-variables-and-connections-on-mwaa-using-python-requests-aws-la">this</a> one.</p>
<p>Has anyone ever done this? I'd like to import variables and connections to my MWAA environment everytime I create it.</p>
|
<python><amazon-web-services><airflow><mwaa>
|
2023-01-11 14:26:44
| 2
| 343
|
Luis Felipe
|
75,084,570
| 11,269,090
|
Python pandas dict to csv with one header only
|
<p>I have some data in a dict that I would like to save as an csv file with pandas:</p>
<pre><code>data = {
"a": 1,
"c": 2,
"d": 3,
}
</code></pre>
<p>Which I am trying to save it in this format:</p>
<p><a href="https://i.sstatic.net/ojOCl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ojOCl.png" alt="enter image description here" /></a></p>
<p>I am doing this with:</p>
<pre><code>data = pd.DataFrame(data, index=[0])
data.to_csv(path, mode='a', columns=None, header=list(data.keys()))
</code></pre>
<p>After <code>data</code> is saved, I will have more entries (dict) that are in the same format as <code>data</code> that I need to append to the same csv file. Let's suppose I have:</p>
<pre><code>data2 = {
"a": 2,
"c": 3,
"d": 4,
}
</code></pre>
<p>I need to append it as this:</p>
<p><a href="https://i.sstatic.net/bpiAw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bpiAw.png" alt="enter image description here" /></a></p>
<p>But if I ran the same code with <code>data2</code>, the headers will be displayed again:</p>
<p><a href="https://i.sstatic.net/UfGUe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UfGUe.png" alt="enter image description here" /></a></p>
<p>Is there a way to automatically add the header if it is the first entry, and for subsequent entries, no header will be added with the same code. I cannot detect in my code which data is the first entry.</p>
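<p>A minimal sketch of the behaviour I want, assuming checking whether the file already exists is an acceptable way to detect the first entry:</p>
<pre><code>import os
import pandas as pd

def append_row(entry: dict, path: str) -> None:
    df = pd.DataFrame(entry, index=[0])
    # write the header only if the file does not exist yet,
    # so only the very first append produces a header line
    df.to_csv(path, mode='a', index=False, header=not os.path.exists(path))

append_row({"a": 1, "c": 2, "d": 3}, "out.csv")
append_row({"a": 2, "c": 3, "d": 4}, "out.csv")
</code></pre>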
|
<python><pandas>
|
2023-01-11 14:25:55
| 3
| 1,010
|
Chen
|
75,084,549
| 6,694,814
|
Python folium - Markercluster not iterable with GroupedLayerControl
|
<p>I would like to group my 2 marker cluster layers, where one is reliant on the other by providing a separate styling. Hence the second one is set as control=False.
Nevertheless, I want to have it disappear when the first one is switched off.</p>
<p>With the new folium release v0.14, I found that a new feature has been provided which could potentially resolve my issue:</p>
<p><a href="https://github.com/ikoojoshi/Folium-GroupedLayerControl" rel="nofollow noreferrer">https://github.com/ikoojoshi/Folium-GroupedLayerControl</a></p>
<p><a href="https://stackoverflow.com/questions/74561214/allow-only-one-layer-at-a-time-in-folium-layercontrol">Allow only one layer at a time in Folium LayerControl</a></p>
<p>and I've applied the following code:</p>
<pre><code>df = pd.read_csv("or_geo.csv")
fo=FeatureGroup(name="OR")
or_cluster = MarkerCluster(name="Or", overlay=True, visible=True).add_to(map)
or_status = MarkerCluster(overlay=True,
control=False,
visible=False,
disableClusteringAtZoom=16,
).add_to(map)
GroupedLayerControl(
groups={'OrB': or_cluster, 'OrC': or_status},
collapsed=False,
).add_to(map)
</code></pre>
<p>and the console throws the following error:</p>
<p><strong>TypeError: 'MarkerCluster' object is not iterable</strong></p>
<p>How could I switch off 2 layer groups at once?</p>
<p><a href="https://i.sstatic.net/0Yv0D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Yv0D.png" alt="enter image description here" /></a></p>
<p><strong>UPDATE:</strong></p>
<p>The answer below provides the code, which seems to work but not in the way I need.</p>
<pre><code>df = pd.read_csv("or_geo.csv")
fo=FeatureGroup(name="Or",overlay = True)
or_cluster = MarkerCluster(name="Or").add_to(map)
or_status = MarkerCluster(control=False,
visible=True,
disableClusteringAtZoom=16,
).add_to(map)
# definition of or_marker
# definition of or_stat_marker
or_cluster.add_child(or_marker)
or_status.add_child(or_stat_marker)
GroupedLayerControl(
groups={"Or": [or_cluster, or_status]},
collapsed=False,
exclusive_group=False,
).add_to(map)
</code></pre>
<p><a href="https://i.sstatic.net/IfsvU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IfsvU.png" alt="enter image description here" /></a></p>
<p>I have a separate box instead, but what is worst I can just switch between one layer and another whereas I would like to have them reliant on the main group. The exclusive_groups option allows me to untick both of them but I am looking for something, which would allow me to switch off two of them at once (place the thick box on the major group instead). Is it possible to have something like this?</p>
|
<python><folium>
|
2023-01-11 14:24:27
| 2
| 1,556
|
Geographos
|
75,084,508
| 14,269,252
|
Filter different data frame and merge the results
|
<p>I have different DataFrames that share a common column (id). As the data is huge, I want to filter all DataFrames on a list of values defined in (lis) and then merge the newly built DataFrames on that common column.
I implemented it as follows, but it is slow and produces many duplicates. I am not sure my answer is correct; can someone help me optimize the code?</p>
<p>My Merge result is so bigger than dataframes size, I dont know how to find a correct merge for my data.</p>
<pre><code>from functools import reduce
import functools as ft
df_list = [df1,df2,df3,df4,df5,df6]
lis = ["PT08"]
dfs = []
for df in df_list:
dfs.append(df[df['id'].isin(lis)])
df = df_list[0]
for df_ in dfs[1:]:
dfall= df.merge(df_, on='id')
dfall
</code></pre>
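<p>A minimal sketch of what I am attempting, with small dummy frames standing in for my real data (I am unsure whether an inner merge on <code>id</code> is the join I actually need):</p>
<pre><code>from functools import reduce
import pandas as pd

# dummy stand-ins for df1..df3; the real frames come from my data
df1 = pd.DataFrame({"id": ["PT08", "X"], "a": [1, 2]})
df2 = pd.DataFrame({"id": ["PT08", "Y"], "b": [3, 4]})
df3 = pd.DataFrame({"id": ["PT08", "Z"], "c": [5, 6]})

lis = ["PT08"]
df_list = [df1, df2, df3]

# filter every frame on the id list first, then fold the merges pairwise
# so each result feeds into the next merge (my loop above always merged
# into df_list[0] instead of the accumulated result)
filtered = [df[df["id"].isin(lis)] for df in df_list]
merged = reduce(lambda left, right: left.merge(right, on="id", how="inner"), filtered)
print(merged)
</code></pre>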
|
<python><pandas><dataframe>
|
2023-01-11 14:21:31
| 1
| 450
|
user14269252
|
75,084,453
| 702,846
|
populate SQL database with dask dataframe and dump into a file
|
<p>reproduce the error and the use case on <a href="https://colab.research.google.com/drive/1ynvOxOm3Kbf_qW7xJa-_glp_iMFi9Plc?usp=sharing" rel="nofollow noreferrer">this colab</a></p>
<p>I have multiple large tables that I read and analyze through Dask (dataframe). After doing analysis, I would like to push them into a local database (in this case sqlite engine through sqlalchemy package.</p>
<p>here is a dummy data:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
df = pd.DataFrame([{"i": i, "s": str(i) * 2} for i in range(4)])
ddf = dd.from_pandas(df, npartitions=2)
from dask.utils import tmpfile
from sqlalchemy import create_engine
with tmpfile(
dir="/outputs/",
extension="db",
) as f:
print(f)
db = f"sqlite:///{f}"
ddf.to_sql("test_table", db)
engine = create_engine(
db,
echo=False,
)
print(dir(engine))
result = engine.execute("SELECT * FROM test_table").fetchall()
print(result)
</code></pre>
<p>however, the <code>tmpfile</code> is temporary and is not stored on my local drive. I would like to dump the database into my local drive; I could not find any argument for <code>tmpfile</code> to ensure it is stored as a file. Neither could figure out how to dump my engine.</p>
<p><em><strong>Update</strong></em>
if I use a regular file, I will encounter the following error</p>
<pre><code> return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/14/e3q8)
</code></pre>
<p>here is the code</p>
<pre><code>with open(
"/outputs/hello.db", "wb"
) as f:
print(f)
db = f"sqlite:///{f}"
ddf.to_sql("test_table", db, if_exists="replace")
engine = create_engine(
db,
echo=False,
)
print(dir(engine))
result = engine.execute("SELECT * FROM test_table").fetchall()
print(result)
</code></pre>
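<p>A minimal sketch of what I believe the persistent variant should look like, assuming the <code>/outputs/</code> directory exists and is writable; the connection string points at a plain path, so there is no need to <code>open()</code> the file myself (the f-string above was interpolating a file object, not a path):</p>
<pre><code>import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame([{"i": i, "s": str(i) * 2} for i in range(4)])
ddf = dd.from_pandas(df, npartitions=2)

# sqlite:/// followed by an absolute path; sqlite creates the file itself
db = "sqlite:////outputs/hello.db"
ddf.to_sql("test_table", db, if_exists="replace")
</code></pre>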
|
<python><sqlite><sqlalchemy><dask><dask-dataframe>
|
2023-01-11 14:17:09
| 1
| 6,172
|
Areza
|
75,084,390
| 7,626,198
|
Python. Read QDateTime from file with QDataStream
|
<p>I have a binary file created with <code>QDataStream</code>.</p>
<p>QDataStream has functions to read int, float, QString, etc., but I can't find the function to read <code>QDateTime</code>.</p>
<p>My file includes <code>QDateTime</code> values, but I can't find the function to read them.
How can I read a <code>QDateTime</code> with Python?</p>
<p>Example:</p>
<pre><code> infile = QtCore.QFile(filepath)
stream = QtCore.QDataStream(infile)
header = stream.readQString()
version = stream.readInt()
date = stream.read??????
</code></pre>
|
<python><pyqt5>
|
2023-01-11 14:12:25
| 1
| 442
|
Juan
|
75,084,328
| 5,510,540
|
Python: Venn diagram from score data
|
<p>I have the following data:</p>
<pre><code>df =
id testA testB
1 3 NA
1 1 3
2 2 NA
2 NA 1
2 0 0
3 NA NA
3 1 1
</code></pre>
<p>I would like to create a Venn diagram of the number of times that testA and testB appear, testA but not testB, and testB but not testA.</p>
<p>The expected outcome would be the following groups:</p>
<p><a href="https://i.sstatic.net/lCwqK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lCwqK.png" alt="enter image description here" /></a></p>
<pre><code>Both tests: 3
A but not B: 2
B but not A: 1
</code></pre>
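<p>A minimal sketch of how I would compute the three counts and draw them, assuming the third-party <code>matplotlib-venn</code> package is acceptable:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib_venn import venn2   # assumes the matplotlib-venn package is installed

df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 2, 3, 3],
    "testA": [3, 1, 2, np.nan, 0, np.nan, 1],
    "testB": [np.nan, 3, np.nan, 1, 0, np.nan, 1],
})

a = df["testA"].notna()
b = df["testB"].notna()

both   = int((a &amp; b).sum())    # rows where both tests appear
only_a = int(a.sum()) - both   # A but not B
only_b = int(b.sum()) - both   # B but not A

# venn2 expects (A only, B only, intersection)
venn2(subsets=(only_a, only_b, both), set_labels=("testA", "testB"))
plt.show()
</code></pre>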
|
<python><venn>
|
2023-01-11 14:07:15
| 2
| 1,642
|
Economist_Ayahuasca
|
75,084,322
| 2,897,989
|
List all files/modules imported but nothing else in Python?
|
<p>I'm working with some code that's been handed off to me by someone else. I need to reuse some parts of the project, but not all. I'm basically adding a Flask server that hosts a single function from all the project.</p>
<p>I'm packaging it into a docker image, and I want to keep it as lightweight as possible. For this, I'm planning to only include the files that are actually use/referenced from my entry point of a Flask server written in the "server.py" file.</p>
<p>Is there a way to get a list of files that are referenced/imported via my entry point into the code? I'm thinking something along the lines of pipreqs, except I'm not looking for pip modules, but rather files in my folder.</p>
<p>For example, my "server.py" imports a local module called "feature_extraction" that then imports another local module called "preprocessing" and so on. BUT starting from my server, the module "train_model" is never imported. I'd like to keep only the files that are necessary (~"feature_extraction" and "preprocessing"), and not use any of the other files in the project (~"train_model").</p>
<p>Is there a module that supports this? If not, how would I go about it programmatically? Doing this by hand would take a long time.</p>
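<p>A minimal sketch of the direction I am considering with the standard-library <code>modulefinder</code> module (the paths are placeholders); I do not know whether it copes with dynamic imports:</p>
<pre><code>from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("server.py")      # the entry point; path is an assumption

# keep only modules that resolve to files under the project folder,
# which filters out the standard library and site-packages
project_root = "/path/to/project"   # placeholder
local_files = sorted(
    m.__file__
    for m in finder.modules.values()
    if m.__file__ and m.__file__.startswith(project_root)
)
print("\n".join(local_files))
</code></pre>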
|
<python><module>
|
2023-01-11 14:06:37
| 0
| 7,601
|
lte__
|
75,084,238
| 4,838,024
|
Python XGBoost Month Variables - One Hot Encoding versus Numeric
|
<p>I'm training a model using XGBoost in Python. One of my variables is month (Jan, Feb, Mar, ...)</p>
<p>What is the correct way to do this, and would there be any difference in performance/evaluation metrics between the methods of:</p>
<ol>
<li>One hot encoding of the 12 months, so 12 new variables taking on either 0 or 1.</li>
<li>One variable using the numeric values of 1,2,...12 representing the months.</li>
</ol>
<p>I read <a href="https://stackoverflow.com/questions/34265102/xgboost-categorical-variables-dummification-vs-encoding">this</a> very similar question. However there are comments supporting each method. There is no conclusive answer.</p>
<p>I know that in general machine learning models you are not supposed to do (2) because the model will assume an ordinal relationship between the variables. This raises another confusion for me. My variable, month, can be argued to be ordinal. For example, June (number 6) is later in the year than January (number 1), however, June is not of "greater importance" than January.</p>
<p>Please feel free to share your thoughts or any links to academic discussions on this. Thank You.</p>
|
<python><tree><xgboost>
|
2023-01-11 14:00:50
| 1
| 361
|
tpoh
|
75,084,205
| 13,184,183
|
How to add file to docker container?
|
<p>I work with the airflow operators and mlproject. So the pipeline looks like the following : I define airflow operator, in which I specify entry point of MLProject and parameters for the program, then it turns to MLProject file where running command is specified, so it has the following form:</p>
<pre class="lang-py prettyprint-override"><code>dq_checking = MLProjectOperator(
task_id = 'dq_checking',
dag = dag,
project_name = 'my_project',
project_version = '$VERSION',
entry_point = 'data_validation',
base_image = 'my_docker_image',
base_image_version = '0.1.1',
hadoop_env = 'SANDBOX',
project_parameters = {
'tag': default_args['tag']
},
environment = {
'some_environment_variable' : 'some_value'
}
)
</code></pre>
<p>Here <code>MLProjectOperator</code> is inherited from <code>DockerOperator</code>. Then I have MLProject</p>
<pre class="lang-py prettyprint-override"><code>name: my_project
conda_env: conda.yml
entry_points:
data_validation:
parameters:
tag: string
command: >
spark-submit --master yarn --deploy-mode cluster
--num-executors 20 --executors-cores 4
my_project/scoring/dq_checking.py -t {tag}
</code></pre>
<p>The question is, I need to read some files in script <code>dq_checking.py</code>. Locally they are in the same directory <code>my_project/scoring</code>. But as far as I understand in production this script is running in its own docker container and therefore there are no those files. How can I add them to the container?</p>
|
<python><docker><airflow>
|
2023-01-11 13:58:10
| 1
| 956
|
Nourless
|
75,084,163
| 9,715,816
|
Use PostGIS geometry types in SQLModel
|
<p>Is it possible to use PostGIS geometry types in models created in SQLModel? If yes, how can it be done?</p>
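<p>A minimal sketch of what I have been experimenting with, assuming <code>geoalchemy2</code> provides the column type and that passing it through <code>sa_column</code> is legitimate; I have not confirmed this is the supported way:</p>
<pre><code>from typing import Any, Optional

from geoalchemy2 import Geometry          # assumes geoalchemy2 is installed
from sqlalchemy import Column
from sqlmodel import Field, SQLModel

class Place(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # describe the PostGIS column with a plain SQLAlchemy Column via sa_column,
    # since SQLModel has no native geometry type
    geom: Any = Field(sa_column=Column(Geometry(geometry_type="POINT", srid=4326)))
</code></pre>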
|
<python><postgis><sqlmodel><geoalchemy2>
|
2023-01-11 13:55:54
| 1
| 2,019
|
Charalamm
|
75,084,155
| 3,025,981
|
Use npt.NDArray[np.uint64] to query pd.DataFrame
|
<p>I'm trying to understand how to properly use type annotations with pandas and numpy. I have a DataFrame indexed with an index of dtype <code>np.uint64</code>. I want to write a function that returns a subset of this DataFrame in the following way:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.typing as npt
import pandas as pd
df = pd.DataFrame(dict(x=[1, 2, 3]), index=np.array([10, 20, 30], dtype="uint64"))
assert df.index.dtype == np.uint64
def get_NDArray(df: pd.DataFrame, key: npt.NDArray[np.uint64]):
df2 = df.loc[key]
reveal_type(df2)
return df2
</code></pre>
<p>With annotation <code>key: npt.NDArray[np.uint64]</code>, it doesn't work. In <em>pyright</em>, the inferred type of <code>df2</code> is <code>Series[Unkown]</code>, which is incorrect (should be <code>DataFrame</code> instead). In <em>mypy</em>, it says</p>
<pre><code>error: Invalid index type "ndarray[Any, dtype[unsignedinteger[_64Bit]]]" for "_LocIndexerFrame"; expected type
"Union[slice, ndarray[Any, dtype[signedinteger[_64Bit]]], Index, List[int],
Series[int], Series[bool], ndarray[Any, dtype[bool_]], List[bool],
List[<nothing>], Tuple[Union[slice, ndarray[Any, dtype[signedinteger[_64Bit]]],
Index, List[int], Series[int], Series[bool], ndarray[Any, dtype[bool_]],
List[bool], List[<nothing>], Hashable], Union[List[<nothing>], slice,
Series[bool], Callable[..., Any]]]]"
</code></pre>
<p>I can change the annotation of <code>key</code> to <code>key: np.ndarray</code> or <code>key: npt.NDArray</code>, and then everything works correctly, but I want to be sure that <code>key</code> is not an arbitrary <code>np.ndarray</code>, but <code>np.ndarray</code> with <code>dtype == 'np.uint64'</code>. I expected that <code>npt.NDArray[np.uint64]</code> is exactly the tool that should allow that, but it doesn't work. Am I doing something wrong?</p>
|
<python><pandas><numpy><python-typing>
|
2023-01-11 13:55:15
| 1
| 8,187
|
Ilya V. Schurov
|
75,084,082
| 536,262
|
openldap noopsrch overlay with pythons ldap3 search
|
<p>We use the <a href="https://ltb-project.org/documentation/openldap-noopsrch.html" rel="nofollow noreferrer">https://ltb-project.org/documentation/openldap-noopsrch.html</a> overlay on openldap.</p>
<p>It gives you the number of entries in each catalog without having to browse all.</p>
<p>example show <code>-e '!1.3.6.1.4.1.4203.666.5.18'</code> controltype to ldapsearch:</p>
<pre><code>ldapsearch -x -H 'ldap://localhost:389' -D 'cn=Manager,dc=my-domain,dc=com'
-w secret -b 'dc=my-domain,dc=com' \
'(objectClass=*)' -e '!1.3.6.1.4.1.4203.666.5.18'
</code></pre>
<p>I use the python3 ldap3: <a href="https://ldap3.readthedocs.io/en/latest/searches.html" rel="nofollow noreferrer">https://ldap3.readthedocs.io/en/latest/searches.html</a></p>
<p>Any tips/examples on how to implement this?</p>
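<p>A minimal sketch of what I have tried so far, assuming the <code>controls</code> argument of <code>Connection.search</code> accepts the raw OID as a <code>(oid, criticality, value)</code> tuple; I am not sure where the count is supposed to be returned:</p>
<pre><code>from ldap3 import ALL, Connection, Server

server = Server("ldap://localhost:389", get_info=ALL)
conn = Connection(server, "cn=Manager,dc=my-domain,dc=com", "secret", auto_bind=True)

# controls takes (controlType OID, criticality, controlValue) tuples;
# the noopsrch control carries no value
conn.search(
    "dc=my-domain,dc=com",
    "(objectClass=*)",
    controls=[("1.3.6.1.4.1.4203.666.5.18", True, None)],
)
print(conn.result)   # expecting the overlay's count to show up here
</code></pre>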
|
<python><ldap><openldap>
|
2023-01-11 13:49:57
| 1
| 3,731
|
MortenB
|
75,084,013
| 2,248,271
|
concat result of apply in python
|
<p>I am trying to apply a function on a column of a dataframe.
After getting multiple results as dataframes, I want to concat them all in one.</p>
<p>Why does the first option work and the second not?</p>
<pre><code>import numpy as np
import pandas as pd
def testdf(n):
test = pd.DataFrame(np.random.randint(0,n*100,size=(n*3, 3)), columns=list('ABC'))
test['index'] = n
return test
test = pd.DataFrame({'id': [1,2,3,4]})
testapply = test['id'].apply(func = testdf)
#option 1
pd.concat([testapply[0],testapply[1],testapply[2],testapply[3]])
#option2
pd.concat([testapply])
</code></pre>
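<p>A minimal sketch of the variant that does work for me, which also hints at the answer I suspect (option 2 hands <code>concat</code> a one-element list containing the Series itself rather than the frames inside it):</p>
<pre><code>import numpy as np
import pandas as pd

def testdf(n):
    test = pd.DataFrame(np.random.randint(0, n * 100, size=(n * 3, 3)), columns=list('ABC'))
    test['index'] = n
    return test

test = pd.DataFrame({'id': [1, 2, 3, 4]})
testapply = test['id'].apply(func=testdf)

# testapply is a Series whose values are DataFrames; concat needs those
# frames themselves, not a list wrapping the whole Series as one object
result = pd.concat(testapply.tolist())
print(result.shape)
</code></pre>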
|
<python><concatenation>
|
2023-01-11 13:44:35
| 1
| 6,020
|
Wietze314
|
75,083,999
| 16,591,917
|
How do I find GEKKO application success status?
|
<p>I am running <code>m.solve()</code> in a <code>try .. except</code> construct to elegantly handle any exceptions raised by the solver due to maximum iterations or convergence to an infeasibility, but I want to interrogate APPINFO and APPSTATUS to determine if a solution was found. I was surprised to see that I always seem to get <code>APPINFO=0</code> and <code>APPSTATUS=1</code> even though the solver reports that a solution was not found.</p>
<p>What am I missing in my interpretation of the document on APPINFO and APPSTATUS?</p>
<p>Piece of code to reproduce error.</p>
<pre><code>from gekko import GEKKO
m=GEKKO(remote=False)
m.x=m.Var()
m.y=m.Var()
m.total=m.Intermediate(m.x+m.y)
m.Equation(m.total>20) #if included, no feasible solution exists
m.Equation(m.x<9)
m.Equation(m.y<9)
m.Maximize(m.total)
m.options.SOLVER=3
try:
m.solve()
except Exception as e:
print('Exception',e)
print('APPINFO', m.options.APPINFO)
print('APPSTATUS', m.options.APPSTATUS)
</code></pre>
|
<python><gekko>
|
2023-01-11 13:43:52
| 1
| 319
|
JacquesStrydom
|
75,083,901
| 14,082,385
|
How can I count the number of times I call a loss function?
|
<p>I implemented my own Huber loss function in the way (<a href="https://goodboychan.github.io/python/coursera/tensorflow/deeplearning.ai/2022/02/08/01-Tensorflow2-Custom-Loss-Function.html" rel="nofollow noreferrer">https://goodboychan.github.io/python/coursera/tensorflow/deeplearning.ai/2022/02/08/01-Tensorflow2-Custom-Loss-Function.html</a>) suggests:</p>
<pre><code>def my_huber_loss(y_true, y_pred):
threshold = 1.
error = y_true - y_pred
is_small_error = tf.abs(error) <= threshold
small_error_loss = tf.square(error) / 2
big_error_loss = threshold * (tf.abs(error) - threshold / 2)
return tf.where(is_small_error, small_error_loss, big_error_loss)
</code></pre>
<p>I included it in <code>model.compile(optimizer='adam', loss=my_huber_loss, metrics=['mae'])</code>
and training works fine.</p>
<p>Now, I would like to know how many times we call this Huber loss through the training phase, so I did as <a href="https://stackoverflow.com/questions/21716940/is-there-a-way-to-track-the-number-of-times-a-function-is-called">is there a way to track the number of times a function is called?</a> suggests:</p>
<pre><code>def my_huber_loss(y_true, y_pred):
threshold = 1.
error = y_true - y_pred
is_small_error = tf.abs(error) <= threshold
small_error_loss = tf.square(error) / 2
big_error_loss = threshold * (tf.abs(error) - threshold / 2)
my_huber_loss.counter +=1 #THIS IS THE NEW LINE
return tf.where(is_small_error, small_error_loss, big_error_loss)
my_huber_loss.counter = 0 #INITIALIZE
</code></pre>
<p>However, after the whole training <code>print(my_huber_loss.counter)</code> outputs <code>3</code>:</p>
<pre><code>results = model.fit(X_train, Y_train, validation_split=0.1, batch_size=1, epochs=numEpochs, callbacks=[earlystopper])
print(my_huber_loss.counter)
</code></pre>
<p>Prints <code>3</code>.</p>
<p>I know this number is not correct, since loss functions should be called more times. In addition, I added the <code>tf.print("--- Called Loss ---")</code> line in <code>my_huber_loss()</code>, and I can see how we call it several times, e.g.:</p>
<pre><code>Epoch 1/2
--- Called Loss ---
1/1440 [..............................] - ETA: 56:15 - loss: 0.0411 - mae: 0.2357--- Called Loss ---
--- Called Loss ---
3/1440 [..............................] - ETA: 47s - loss: 0.0398 - mae: 0.2291 --- Called Loss ---
--- Called Loss ---
5/1440 [..............................] - ETA: 45s - loss: 0.0338 - mae: 0.2096--- Called Loss ---
--- Called Loss ---
7/1440 [..............................] - ETA: 46s - loss: 0.0338 - mae: 0.2110--- Called Loss ---
--- Called Loss ---
9/1440 [..............................] - ETA: 44s - loss: 0.0306 - mae: 0.1997--- Called Loss ---
--- Called Loss ---
11/1440 [..............................] - ETA: 43s - loss: 0.0279 - mae: 0.1893--- Called Loss ---
--- Called Loss ---
13/1440 [..............................] - ETA: 41s - loss: 0.0265 - mae: 0.1836--- Called Loss ---
--- Called Loss ---
15/1440 [..............................] - ETA: 41s - loss: 0.0261 - mae: 0.1824--- Called Loss ---
--- Called Loss ---
--- Called Loss ---
18/1440 [..............................] - ETA: 39s - loss: 0.0250 - mae: 0.1783--- Called Loss ---
--- Called Loss ---
--- Called Loss ---
21/1440 [..............................] - ETA: 38s - loss: 0.0243 - mae: 0.1764--- Called Loss ---
...
</code></pre>
<p>What is going wrong? How can I count the number of times I call a loss function?</p>
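<p>For what it's worth, here is a sketch of one approach I am considering that should also count calls when Keras traces the loss into a graph, assuming a <code>tf.Variable</code> counter is acceptable (the name <code>call_counter</code> is made up for the example):</p>
<pre><code>import tensorflow as tf

call_counter = tf.Variable(0, trainable=False, dtype=tf.int64)

def my_huber_loss(y_true, y_pred):
    call_counter.assign_add(1)  # executed on every call, even inside a traced tf.function graph
    threshold = 1.
    error = y_true - y_pred
    is_small_error = tf.abs(error) <= threshold
    small_error_loss = tf.square(error) / 2
    big_error_loss = threshold * (tf.abs(error) - threshold / 2)
    return tf.where(is_small_error, small_error_loss, big_error_loss)

# after model.fit(...), read the counter with int(call_counter.numpy())
</code></pre>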
|
<python><tensorflow><keras><counter><loss-function>
|
2023-01-11 13:36:03
| 1
| 786
|
Theo Deep
|
75,083,737
| 11,479,825
|
How to save custom dataset in local folder
|
<p>I have created a custom huggingface dataset, containing images and ground truth data coming from a JSON lines file. I want to save it to a local folder and be able to use it as is by loading it later in other notebooks. I could not find out how to do this.</p>
<pre><code>DatasetDict({
train: Dataset({
features: ['image', 'id', 'ground_truth'],
num_rows: 7
})
test: Dataset({
features: ['image', 'id', 'ground_truth'],
num_rows: 4
})
})
</code></pre>
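<p>For reference, a minimal sketch of what I would expect to work, assuming the <code>DatasetDict</code> above is called <code>ds</code> and that <code>save_to_disk</code>/<code>load_from_disk</code> cover this use case:</p>
<pre><code>from datasets import load_from_disk

ds.save_to_disk("my_local_dataset")            # writes Arrow files plus metadata to the folder
reloaded = load_from_disk("my_local_dataset")  # returns an equivalent DatasetDict in another notebook
</code></pre>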
|
<python><huggingface><huggingface-datasets>
|
2023-01-11 13:21:50
| 2
| 985
|
Yana
|
75,083,706
| 8,839,068
|
pandas: find maximum across column range; use second column range for tie breaks
|
<p>I have a data frame with two corresponding sets of columns, e.g. like this sample containing people and their rating of three fruits as well as their ability to detect a fruit ('corresponding' means that <code>banana_rati</code> corresponds to <code>banana_reco</code> etc.).</p>
<pre><code>import pandas as pd
df_raw = pd.DataFrame(data=[ ["name1", 10, 10, 9, 10, 10, 10],
["name2", 10, 10, 8, 10, 8, 4],
["name3", 10, 8, 8, 10, 8, 8],
["name4", 5, 10, 10, 5, 10, 8]],
columns=["name", "banana_rati", "mango_rati", "orange_rati",
"banana_reco", "mango_reco", "orange_reco"])
</code></pre>
<p>Suppose I now want to find each respondent's favorite fruit, which I define as the highest-rated fruit.</p>
<p>I do this via:</p>
<pre><code>cols_find_max = ["banana_rati", "mango_rati", "orange_rati"] # columns to find the maximum in
mxs = df_raw[cols_find_max].eq(df_raw[cols_find_max].max(axis=1), axis=0) # bool indicator if the cell contains the row-wise maximum value across cols_find_max
</code></pre>
<p>However, some respondents rated more than one fruit with the highest value:</p>
<pre><code>df_raw['highest_rated_fruits'] = mxs.dot(mxs.columns + ' ').str.rstrip(', ').str.replace("_rati", "").str.split()
df_raw['highest_rated_fruits']
# Out:
# [banana, mango]
# [banana, mango]
# [banana]
# [mango, orange]
</code></pre>
<p>I now want to use the maximum of <code>["banana_reco", "mango_reco", "orange_reco"]</code> for tie breaks. If this also gives no tie break, I want a random selection of fruits from the so-determined favorite ones.</p>
<p>Can someone help me with this?</p>
<p>The expected output is:</p>
<pre><code>df_raw['fav_fruit']
# Out
# mango # <- random selection from banana (rating: 10, recognition: 10) and mango (same values)
# banana # <- highest ratings: banana, mango; highest recognition: banana
# banana # <- highest rating: banana
# mango # <- highest ratings: mango, orange; highest recognition: mango
</code></pre>
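<p>To make the intended rule concrete, here is a minimal sketch of the logic I have in mind (rating first, recognition as tie break, random choice last); the helper name <code>pick_favorite</code> is made up for the example:</p>
<pre><code>import numpy as np

fruits = ["banana", "mango", "orange"]
rati_cols = ["banana_rati", "mango_rati", "orange_rati"]
reco_cols = ["banana_reco", "mango_reco", "orange_reco"]
rng = np.random.default_rng()

def pick_favorite(row):
    # tuples compare element-wise: rating first, then recognition, then a random tie breaker
    scores = {f: (row[ra], row[re], rng.random())
              for f, ra, re in zip(fruits, rati_cols, reco_cols)}
    return max(scores, key=scores.get)

df_raw["fav_fruit"] = df_raw.apply(pick_favorite, axis=1)
</code></pre>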
|
<python><pandas><random><max>
|
2023-01-11 13:20:00
| 2
| 4,240
|
Ivo
|
75,083,556
| 3,433,875
|
Parse XML to pandas using elementTree and python
|
<p>I have the following xml structure:</p>
<pre><code><GL_MarketDocument
xmlns="urn:iec62325.351:tc57wg16:451-6:generationloaddocument:3:0">
<mRID>352539b33d6245f88c0cea8c70c86e76</mRID>
<revisionNumber>1</revisionNumber>
<type>A75</type>
<process.processType>A16</process.processType>
<sender_MarketParticipant.mRID codingScheme="A01">10X1001A1001A450</sender_MarketParticipant.mRID>
<sender_MarketParticipant.marketRole.type>A32</sender_MarketParticipant.marketRole.type>
<receiver_MarketParticipant.mRID codingScheme="A01">10X1001A1001A450</receiver_MarketParticipant.mRID>
<receiver_MarketParticipant.marketRole.type>A33</receiver_MarketParticipant.marketRole.type>
<createdDateTime>2023-01-11T11:37:08Z</createdDateTime>
<time_Period.timeInterval>
<start>2023-01-10T23:00Z</start>
<end>2023-01-11T11:00Z</end>
</time_Period.timeInterval>
<TimeSeries>
<mRID>1</mRID>
<businessType>A01</businessType>
<objectAggregation>A08</objectAggregation>
<inBiddingZone_Domain.mRID codingScheme="A01">10Y1001A1001A46L</inBiddingZone_Domain.mRID>
<quantity_Measure_Unit.name>MAW</quantity_Measure_Unit.name>
<curveType>A01</curveType>
<MktPSRType>
<psrType>B04</psrType>
</MktPSRType>
<Period>
<timeInterval>
<start>2023-01-10T23:00Z</start>
<end>2023-01-11T10:00Z</end>
</timeInterval>
<resolution>PT60M</resolution>
<Point>
<position>1</position>
<quantity>0</quantity>
</Point>
<Point>
<position>2</position>
<quantity>0</quantity>
</Point>
<Point>
<position>3</position>
<quantity>0</quantity>
</Point>
<Point>
<position>4</position>
<quantity>0</quantity>
</Point>
<Point>
<position>5</position>
<quantity>0</quantity>
</Point>
<Point>
<position>6</position>
<quantity>0</quantity>
</Point>
<Point>
<position>7</position>
<quantity>0</quantity>
</Point>
<Point>
<position>8</position>
<quantity>0</quantity>
</Point>
<Point>
<position>9</position>
<quantity>0</quantity>
</Point>
<Point>
<position>10</position>
<quantity>0</quantity>
</Point>
<Point>
<position>11</position>
<quantity>0</quantity>
</Point>
</Period>
</TimeSeries>
<TimeSeries>
<mRID>2</mRID>
<businessType>A01</businessType>
<objectAggregation>A08</objectAggregation>
<inBiddingZone_Domain.mRID codingScheme="A01">10Y1001A1001A46L</inBiddingZone_Domain.mRID>
<quantity_Measure_Unit.name>MAW</quantity_Measure_Unit.name>
<curveType>A01</curveType>
<MktPSRType>
<psrType>B12</psrType>
</MktPSRType>
<Period>
<timeInterval>
<start>2023-01-10T23:00Z</start>
<end>2023-01-11T10:00Z</end>
</timeInterval>
<resolution>PT60M</resolution>
<Point>
<position>1</position>
<quantity>841</quantity>
</Point>
<Point>
<position>2</position>
<quantity>821</quantity>
</Point>
<Point>
<position>3</position>
<quantity>809</quantity>
</Point>
<Point>
<position>4</position>
<quantity>803</quantity>
</Point>
<Point>
<position>5</position>
<quantity>800</quantity>
</Point>
<Point>
<position>6</position>
<quantity>799</quantity>
</Point>
<Point>
<position>7</position>
<quantity>884</quantity>
</Point>
<Point>
<position>8</position>
<quantity>963</quantity>
</Point>
<Point>
<position>9</position>
<quantity>1012</quantity>
</Point>
<Point>
<position>10</position>
<quantity>1021</quantity>
</Point>
<Point>
<position>11</position>
<quantity>1006</quantity>
</Point>
</Period>
</TimeSeries>
</code></pre>
<p>and I am trying to get this:
<a href="https://i.sstatic.net/lJHcl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lJHcl.png" alt="enter image description here" /></a></p>
<p>I am able to get the the tags separately using this:</p>
<pre><code>response = requests.get(base_url)
root = ET.fromstring(response.content) #get the xml content as text
#Manage namespaces
text = root.tag #get the namespace from root tag
get_ns = text[text.index('{')+len('{'):text.index('}')] #grab the text between the curly brackets
#Register the name space
ET.register_namespace("", get_ns)
#Save the namespace/S in a dict so we dont have to specify them in the loop
ns = {"": get_ns}
#for child in root.iter(): print(child.tag, child.attrib) #iterate through all the nodes
#find all the tags
psc_type = root.findall(".//TimeSeries/MktPSRType/psrType", ns)
pos = root.findall(".//TimeSeries/Period/Point/position", ns)
qty = root.findall(".//TimeSeries/Period/Point/quantity", ns)
#initiate a list for rows and define column names for pandas
df_cols = ["Type", "TimeOfDay", "Quantity"]
rows1 = []
rows = []
for psc in psc_type:
p_type = psc.text
rows1.append(psc.text)
for hour, qt in zip( pos, qty):
hour = hour.text,
qty = qt.text
period = [hour[0], qty]
#hour comes out as a tuple, so we need to get first value out hour[0]
rows.append(period)
x = [rows1, rows]
</code></pre>
<p>that returns two lists, that I guess I can put together in pandas:</p>
<pre><code>['B04', 'B12', 'B14', 'B20', 'B16', 'B19']
[['1', '0'], ['2', '0'], ['3', '0'], ['4', '0'], ['5', '0'], ['6', '0'], ['7', '0'], ['8', '0'], ['9', '0'], ['10', '0'], ['11', '0'], ['12', '0'], ['1', '841'], ['2', '821'], ['3', '809'], ['4', '803'], ['5', '800'], ['6', '799'], ['7', '884'], ['8', '963'], ['9', '1012'], ['10', '1021'], ['11', '1006'], ['12', '1011'], ['1', '5793'], ['2', '5794'], ['3', '5795'], ['4', '5794'], ['5', '5794'], ['6', '5794'], ['7', '5794'], ['8', '5795'], ['9', '5792'], ['10', '5790'], ['11', '5791'], ['12', '5794'], ['1', '667'], ['2', '657'], ['3', '651'], ['4', '666'], ['5', '675'], ['6', '706'], ['7', '743'], ['8', '775'], ['9', '784'], ['10', '792'], ['11', '837'], ['12', '856'], ['1', '0'], ['2', '0'], ['3', '0'], ['4', '0'], ['5', '0'], ['6', '0'], ['7', '0'], ['8', '0'], ['9', '0'], ['10', '0'], ['11', '2'], ['12', '3'], ['1', '1984'], ['2', '2164'], ['3', '2310'], ['4', '2497'], ['5', '2669'], ['6', '2786'], ['7', '2884'], ['8', '2927'], ['9', '2913'], ['10', '2873'], ['11', '2813'], ['12', '2740']]
</code></pre>
<p>But it seems too complicated. My guess is that ElementTree can parse this more directly, and maybe even pandas with the new XML reader, but I just can't figure it out.</p>
<p>Where am I going wrong?</p>
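<p>For comparison, here is a shorter sketch of the direction I was aiming for: building one row per <code>Point</code> while remembering the surrounding <code>psrType</code> (the variable <code>xml_text</code> stands for the XML document shown above):</p>
<pre><code>import xml.etree.ElementTree as ET
import pandas as pd

root = ET.fromstring(xml_text)
ns = {"ns": root.tag[root.tag.find("{") + 1:root.tag.find("}")]}

rows = []
for ts in root.findall("ns:TimeSeries", ns):
    psr_type = ts.find("ns:MktPSRType/ns:psrType", ns).text
    for point in ts.findall("ns:Period/ns:Point", ns):
        rows.append({
            "Type": psr_type,
            "TimeOfDay": int(point.find("ns:position", ns).text),
            "Quantity": int(point.find("ns:quantity", ns).text),
        })

df = pd.DataFrame(rows)
</code></pre>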
|
<python><python-3.x><pandas><xml><elementtree>
|
2023-01-11 13:07:23
| 2
| 363
|
ruthpozuelo
|
75,083,550
| 859,227
|
Nested loop over dataframe rows
|
<p>I would like to perform a nested loop over a dataframe rows, considering the fact the inner loop starts from <code>outer_row + 1</code>. If I use</p>
<pre><code>for o_index, o_row in df.iterrows():
L1 = o_row['Home']
L2 = o_row['Block']
for i_index, i_row in df.iterrows():
L3 = i_row['Home']
L4 = i_row['Block']
</code></pre>
<p>As you can see, in the first iteration, i_index is the same as o_index. However, I want o_index to be 0 and i_index to be 1. How can I do that?</p>
<p>Example: Assume a dataframe like this:</p>
<pre><code> Cycle Home Block
0 100 1 400
1 130 1 500
2 200 2 200
3 300 1 300
4 350 3 100
</code></pre>
<p>The iterations should be in this order:</p>
<p>0 -> 1, 2, 3, 4</p>
<p>1 -> 2, 3, 4</p>
<p>2 -> 3, 4</p>
<p>3 -> 4</p>
<p>4 -> nothing</p>
<p>In each inner iteration, I will then compare L1 and L3 and if they are equal, then abs(L2-L4) is calculated and pushed in a list.</p>
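<p>As a sketch of the intent, something along these lines is what I am after (whether <code>itertools.combinations</code> over <code>iterrows()</code> is the best way to do it is part of the question):</p>
<pre><code>from itertools import combinations

diffs = []
# combinations yields each pair exactly once, with the inner row always coming after the outer row
for (o_index, o_row), (i_index, i_row) in combinations(df.iterrows(), 2):
    if o_row["Home"] == i_row["Home"]:
        diffs.append(abs(o_row["Block"] - i_row["Block"]))
</code></pre>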
|
<python><pandas>
|
2023-01-11 13:07:04
| 2
| 25,175
|
mahmood
|
75,083,527
| 12,236,313
|
Complex Django query involving an ArrayField & coefficients
|
<p>On the one hand, let's consider this Django model:</p>
<pre><code>from django.db import models
from uuid import UUID
class Entry(models.Model):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
value = models.DecimalField(decimal_places=12, max_digits=22)
items = ArrayField(base_field=models.UUIDField(null=False, blank=False), default=list)
</code></pre>
<p>On the other hand, let's say we have this dictionary:</p>
<pre><code>coefficients = {item1_uuid: item1_coef, item2_uuid: item2_coef, ... }
</code></pre>
<p><code>Entry.value</code> is intended to be distributed among the <code>Entry.items</code> according to <code>coefficients</code>.</p>
<p><strong>Using Django ORM, what would be the most efficient way (in a single SQL query) to get the sum of the values of my <code>Entries</code> for a single <code>Item</code>, given the coefficients?</strong></p>
<p>For instance, for <code>item1</code> below I want to get <code>168.5454...</code>, that is to say <code>100 * 1 + 150 * (0.2 / (0.2 + 0.35)) + 70 * 0.2</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Entry ID</th>
<th>Value</th>
<th>Items</th>
</tr>
</thead>
<tbody>
<tr>
<td>uuid1</td>
<td>100</td>
<td><code>[item1_uuid]</code></td>
</tr>
<tr>
<td>uuid2</td>
<td>150</td>
<td><code>[item1_uuid, item2_uuid]</code></td>
</tr>
<tr>
<td>uuid3</td>
<td>70</td>
<td><code>[item1_uuid, item2_uuid, item3_uuid]</code></td>
</tr>
</tbody>
</table>
</div>
<pre><code>coefficients = { item1_uuid: Decimal("0.2"), item2_uuid: Decimal("0.35"), item3_uuid: Decimal("0.45") }
</code></pre>
<p>Bonus question: how could I adapt my models for this query to run faster? I've deliberately chosen to use an <code>ArrayField</code> and decided not to use a <code>ManyToManyField</code>, was that a bad idea? How to know where I could add <code>db_index[es]</code> for this specific query?</p>
<p>I am using Python 3.10, Django 4.1. and Postgres 14.</p>
|
<python><django><django-models><django-queryset><django-orm>
|
2023-01-11 13:05:12
| 1
| 1,030
|
scΕ«riolus
|
75,083,505
| 18,018,869
|
How to add a new "comment" or "flag" field to every model field of existing model?
|
<p>Disclaimer: I can wipe out the database anytime. So while answering this, please don't care about migrations and stuff.</p>
<p>Imagine me having a model with multiple values:</p>
<pre class="lang-py prettyprint-override"><code>class Compound(models.Model):
color = models.CharField(max_length=20, blank=True, default="")
brand = models.CharField(max_length=200, blank=True, default="")
temperature = models.FloatField(null=True, blank=True)
melting_temp = models.FloatField(null=True, blank=True)
# more (~20) especially numeric values as model fields
</code></pre>
<p>Now I want to <strong>add a comment</strong> to be stored <strong>for every value of that model</strong>. For example I want to add a comment "measured in winter" to the <code>temperature</code> model field.</p>
<p>What is the best approach to do that?</p>
<p>My brainstorming came up with:</p>
<ol>
<li>By hand add 20 more model fields like <code>temperature_comment = ... </code> but that sounds not very DRY</li>
<li>Add one big json field which stores every comment. But how do I create a Form with such a json field? Because I want to separate each input field for related value. I would probably have to use javascript which I would want to avoid.</li>
<li>Add a model called <code>Value</code> for every value and connect them to <code>Compound</code> via <code>OneToOneField</code>s. But how do I then create a Form for <code>Compound</code>? Because I want to create a <code>Compound</code> utilizing one form. I do not want to create every <code>Value</code> on its own. Also it is not as easy as before, to access and play around with the values inside the <code>Compound</code> model.</li>
</ol>
<p>I guess this is a fairly abstract question for a usecase that comes up quite often. I do not know why I did not find resources on how to accomplish that.</p>
|
<python><django><django-models><django-forms>
|
2023-01-11 13:03:22
| 6
| 1,976
|
Tarquinius
|
75,083,359
| 8,628,566
|
Error when importing torch_geometric in Python 3.9.7
|
<p>I'm trying to install torch_geometric in a conda environment, but I'm getting the following error whenever I try to:</p>
<pre><code>import torch_geometric
</code></pre>
<p>Error:</p>
<pre><code>OSError: dlopen(/Users/psanchez/miniconda3/envs/playbook/lib/python3.9/site-packages/libpyg.so, 0x0006): Library not loaded: /usr/local/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/Python
Referenced from: <95F9BBA5-21FB-3EA5-9028-172B745E6ABA> /Users/psanchez/miniconda3/envs/playbook/lib/python3.9/site-packages/libpyg.so
Reason: tried: '/usr/local/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/Python' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/Python' (no such file), '/usr/local/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/Python' (no such file), '/Library/Frameworks/Python.framework/Versions/3.10/Python' (no such file), '/System/Library/Frameworks/Python.framework/Versions/3.10/Python' (no such file, not in dyld cache)
</code></pre>
<p>This is how I installed the conda envrionment:</p>
<pre><code>onda create --name playbook python=3.9.7 --no-default-packages
conda activate playbook
pip install torch==1.13.1 torchvision==0.14.1
pip install pyg-lib torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.13.0+cpu.html
</code></pre>
<p>Any idea how to solve this error?</p>
<p>Thanks a lot in advance!</p>
|
<python><python-3.x><pytorch><pytorch-geometric>
|
2023-01-11 12:52:59
| 3
| 333
|
Pablo Sanchez
|
75,083,356
| 8,978,116
|
Tensorflow / CUDA: GPU not detected
|
<p>I have two Windows 11 laptops with NVIDIA GeForce RTX 3060 GPUs, which I want to run Tensorflow on.</p>
<p>If that matters, both laptops are Lenovo Legion 5 laptops with "GPU Working Mode" set to "Hybrid-Auto Mode".</p>
<p>The first laptop has the following setup:</p>
<pre><code>Python 3.10.7
Tensorflow 2.9.1
CUDA 11.2.0
cuDNN 8.1.1
CPU AMD Ryzen 7 6800H
GPU0 NVIDIA GeForce RTX 3060
GPU1 AMD Radeon Graphics
</code></pre>
<p>The second laptop has the following setup:</p>
<pre><code>Python 3.10.9 Virtual Environment
Tensorflow 2.11.0
CUDA 11.2.2
cuDNN 8.1.1
CPU Intel Core i7 12th Gen 12700H
GPU0 Intel Iris Xe
GPU1 NVIDIA GeForce RTX 3060
</code></pre>
<p>CUDA and cuDNN were installed as per this video: <a href="https://www.youtube.com/watch?v=hHWkvEcDBO0" rel="nofollow noreferrer">https://www.youtube.com/watch?v=hHWkvEcDBO0</a> (except for the conda part).</p>
<p>On the first laptop, everything works fine. But on the second, when executing <code>tf.config.list_physical_devices('GPU')</code>, I get an empty list.</p>
<p>I have tried to set the <code>CUDA_VISIBLE_DEVICES</code> variable to <code>"0"</code> as some people mentioned on other posts, but it didn't work.</p>
<p>I also tried the same as the second laptop on a third one, and got the same problem.</p>
<p>What could be the problem?</p>
|
<python><tensorflow><tensorflow2.0>
|
2023-01-11 12:52:49
| 1
| 368
|
Chris RahmΓ©
|
75,083,293
| 7,540,393
|
Parse raw rst string using nested_parse
|
<p>I'm writing a sphinx extension that transforms a custom directive into a <code>flat-table</code>.</p>
<p>From inside the <code>.run(self)</code> method, I build a complete <code>flat-table</code> declaration in pure <code>.rst</code>, and I'd like to feed that string into the internal parser, so it is transformed into a <code>Node</code>, which I'll return from <code>.run(self)</code>.</p>
<p>I believe <code>nested_parse</code> is the right method to use. It is normally used to parse nested content from a directive, but I suppose it can be used with any array of strings that is valid .RST</p>
<pre><code> def run(self):
decl = '''
.. flat-table:: Characteristics of the BLE badge
:header-rows: 1
* - Service
- Characteristic
- Properties
* - :rspan:`2` 0xfee7
- 0xfec7
- WRITE
* - 0xfec8
- INDICATE
* - 0xfec9
- READ
* - 0xfee0
- 0xfee1
'''
table_node = nodes.paragraph()
self.state.nested_parse(decl.split('\n'), 0, table_node)
return [table_node]
</code></pre>
<p>However, this fails :</p>
<pre><code>Exception occurred:
File "C:\Users\200207121\AppData\Roaming\Python\Python38\site-packages\docutils\parsers\rst\states.py", line 287, in nested_parse
if block.parent and (len(block) - block_length) != 0:
AttributeError: 'list' object has no attribute 'parent'
</code></pre>
<p>What should I do to parse raw <code>.rst</code> text using <code>nested_parse</code> ?</p>
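<p>One direction I am considering, assuming the problem is simply that <code>nested_parse</code> expects a docutils <code>StringList</code> rather than a plain Python list, is sketched below (I have not confirmed this is the intended usage):</p>
<pre><code>from docutils.statemachine import StringList

content = StringList(decl.splitlines(), source='')
table_node = nodes.paragraph()
self.state.nested_parse(content, self.content_offset, table_node)
</code></pre>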
|
<python><python-sphinx><docutils>
|
2023-01-11 12:47:40
| 1
| 2,994
|
Arthur Attout
|
75,083,276
| 8,354,581
|
Percentage of groupby in Pandas
|
<p>I have a dataframe that has columns 'team','home_or_away','result' to store the results ('W': win or 'L': loose) for teams 'X','Y','Z' in sporting events at home ('H') or away ('A'):</p>
<pre><code>df = pd.DataFrame({'team': ['X', 'X', 'X', 'X', 'Y', 'Y', 'Z', 'Z', 'Z', 'Z'],'home_or_away':['H', 'A', 'A', 'A', 'H', 'H', 'H', 'A', 'H', 'H'],'result':['W', 'W', 'W', 'L', 'W', 'L', 'W', 'L', 'L', 'L']})
</code></pre>
<p>I would like to generate the percentage of wins/losses per team, per event location ('A' or 'H')</p>
<p>I have generated a dataframe with total counts of wins/losses per team and event location, with the following groupby code:</p>
<pre><code>groupedDf =df.groupby(['team', 'home_or_away','result'])[['result']].count()
print(groupedDf)
</code></pre>
<p>with the following output:</p>
<pre><code> result
team home_or_away result
X A L 1
W 2
H W 1
Y H L 1
W 1
Z A L 1
H L 2
W 1
</code></pre>
<p>However, I would like to have an extra column with the percentage, like so:</p>
<pre><code> result Perc
team home_or_away result
X A L 1 33.33
W 2 66.66
H W 1 100
Y H L 1 50
W 1 50
Z A L 1 100
H L 2 66.66
W 1 33.33
</code></pre>
<p>How can I do this with pandas?
Thanks</p>
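<p>To make the expected calculation explicit, here is a rough sketch of what I mean by the percentage column (computed per team and event location), in case it helps:</p>
<pre><code>groupedDf = df.groupby(['team', 'home_or_away', 'result'])[['result']].count()
# total wins + losses per (team, home_or_away) group, broadcast back to each row
group_totals = groupedDf.groupby(level=['team', 'home_or_away'])['result'].transform('sum')
groupedDf['Perc'] = (100 * groupedDf['result'] / group_totals).round(2)
</code></pre>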
|
<python><pandas><group-by>
|
2023-01-11 12:46:15
| 2
| 379
|
korppu73
|
75,083,251
| 9,635,884
|
Calculate max along one dimension in tensorflow tensor
|
<p>I have a tf tensor in the form of <em>[number_of_image, width, height, channel]</em>. The <em>channel</em> dim is optional and can be removed. I would like to calculate the max value for each image. It should be as fast as possible and should work in graph mode of TensorFlow execution.</p>
<p>The max calculation is for max normalization of each image. I tried to use <code>tf.reduce_max()</code> with the <code>axis=0</code> option but it gives me a tensor with size of <code>[width, height, channel]</code>, which is weird. I ended up with unstacking and stacking (code below), but I wonder if there is a better and faster solution?</p>
<pre><code>#grad is tensor with form [number_of_image, width, height, channel]
grad_unpack = tf.unstack(grad)
for t in grad_unpack:
t /= tf.reduce_max(t)
grad = tf.stack(grad_unpack)
</code></pre>
<p>TIA</p>
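<p>For clarity, the per-image normalization I am after would look roughly like the sketch below, assuming the batch dimension is axis 0 and all remaining axes belong to a single image:</p>
<pre><code># reduce over every axis except the batch axis, keeping dims so broadcasting works
max_per_image = tf.reduce_max(grad, axis=[1, 2, 3], keepdims=True)
grad = grad / max_per_image
</code></pre>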
|
<python><tensorflow>
|
2023-01-11 12:43:57
| 1
| 363
|
Slawomir Orlowski
|
75,083,237
| 79,111
|
Is it possible to validate `argparse` default argument values?
|
<p>Is it possible to tell <code>argparse</code> to give the same errors on default argument values as it would on user-specified argument values?</p>
<p>For example, the following will not result in any error:</p>
<pre class="lang-py prettyprint-override"><code>parser = argparse.ArgumentParser()
parser.add_argument('--choice', choices=['a', 'b', 'c'], default='invalid')
args = vars(parser.parse_args()) # args = {'choice': 'invalid'}
</code></pre>
<p>whereas omitting the default, and having the user specify <code>--choice=invalid</code> on the command-line will result in an error (as expected).</p>
<p>The reason for asking is that I would like the user to be able to specify default command-line options in a JSON file which are then set using <code>ArgumentParser.set_defaults()</code>, but unfortunately the behaviour demonstrated above prevents these user-specified defaults from being validated.</p>
<p>Update: <code>argparse</code> is inconsistent and I now consider the behavior above to be a bug. The following does trigger an error:</p>
<pre><code>parser = argparse.ArgumentParser()
parser.add_argument('--num', type=int, default='foo')
args = parser.parse_args() # triggers exception in case --num is not
# specified on the command-line
</code></pre>
<p>I have opened a bug report for this: <a href="https://github.com/python/cpython/issues/100949" rel="nofollow noreferrer">https://github.com/python/cpython/issues/100949</a></p>
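<p>The workaround I am currently considering is to push the JSON defaults through the parser as if they were command-line arguments before calling <code>set_defaults()</code>; a rough sketch (assuming all options are optional <code>--flag=value</code> style arguments and a file named <code>defaults.json</code>) looks like this:</p>
<pre><code>import json

with open('defaults.json') as f:
    defaults = json.load(f)

# argparse validates these exactly like user input and exits on invalid values
parser.parse_args([f'--{key}={value}' for key, value in defaults.items()])

parser.set_defaults(**defaults)
args = parser.parse_args()
</code></pre>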
|
<python>
|
2023-01-11 12:42:42
| 3
| 10,598
|
Ton van den Heuvel
|
75,083,216
| 7,797,210
|
Most computational-time efficient/fastest way to compute rolling (linear) regression in Python (Numpy or Pandas)
|
<p>I need a very fast and efficient way to do rolling linear regression.
I looked through these two threads:</p>
<p><a href="https://stackoverflow.com/questions/15636796/efficient-way-to-do-a-rolling-linear-regression">Efficient way to do a rolling linear regression</a>
<a href="https://stackoverflow.com/questions/32353156/rolling-linear-regression">Rolling linear regression</a></p>
<p>From them, I had inferred numpy was (computationally) the fastest. However, using my (limited) Python skills, I found that the time to compute the same set of rolling data was essentially <strong>the same</strong>.</p>
<p>Is there a faster way to compute than any of the 3 methods I post below? I would have thought the numpy way would be much faster, but unfortunately it wasn't.</p>
<pre><code>########## testing time for pd rolling vs numpy rolling
def fitcurve(x_pts):
poly = np.polyfit(np.arange(len(x_pts)), x_pts, 1)
return np.poly1d(poly)[1]
win_ = 30
# tmp_ = data_.Close
tmp_ = pd.Series(np.random.rand(10000))
s_time = time.time()
roll_pd = tmp_.rolling(win_).apply(lambda x: fitcurve(x)).to_numpy()
print('pandas rolling time is', time.time() - s_time)
plt.show()
pd.Series(roll_pd).plot()
########
s_time = time.time()
roll_np = np.empty(0)
for cnt_ in range(len(tmp_)-win_):
tmp1_ = tmp_[cnt_:cnt_+ win_]
grad_ = np.linalg.lstsq(np.vstack([np.arange(win_), np.ones(win_)]).T, tmp1_, rcond = None)[0][0]
roll_np = np.append(roll_np, grad_)
print('numpy rolling time is', time.time() - s_time)
plt.show()
pd.Series(roll_np).plot()
#################
s_time = time.time()
roll_st = np.empty(0)
from scipy import stats
for cnt_ in range(len(tmp_)-win_):
slope, intercept, r_value, p_value, std_err = stats.linregress(np.arange(win_), tmp_[cnt_:cnt_ + win_])
roll_st = np.append(roll_st, slope)
print('stats rolling time is', time.time() - s_time)
plt.show()
pd.Series(roll_st).plot()
</code></pre>
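<p>For reference, one fully vectorized direction I have been experimenting with uses the closed-form OLS slope over sliding windows (this requires NumPy >= 1.20 for <code>sliding_window_view</code>); I am not sure it is the fastest possible approach:</p>
<pre><code>import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rolling_slope(y, win):
    x = np.arange(win)
    windows = sliding_window_view(np.asarray(y, dtype=float), win)  # shape (n - win + 1, win)
    sxy = windows @ x                 # sum of x*y per window
    sy = windows.sum(axis=1)          # sum of y per window
    sx = x.sum()
    sxx = (x * x).sum()
    # closed-form OLS slope: (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2)
    return (win * sxy - sx * sy) / (win * sxx - sx * sx)
</code></pre>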
|
<python><pandas><numpy><time><linear-regression>
|
2023-01-11 12:40:30
| 3
| 571
|
Kiann
|
75,083,047
| 5,102,237
|
How to add decorator to dynamically create class
|
<p>I want to convert this code to be dynamic:</p>
<pre><code>@external_decorator
class Robot:
counter = 0
def __init__(self, name):
self.name = name
def sayHello(self):
return "Hi, I am " + self.name
</code></pre>
<p>I can create the class dynamically, this way:</p>
<pre><code>def Rob_init(self, name):
self.name = name
Robot2 = type("Robot2",
(),
{"counter":0,
"__init__": Rob_init,
"sayHello": lambda self: "Hi, I am " + self.name})
</code></pre>
<p>However, I don't know how to add the decorator <code>@external_decorator</code>.</p>
<p>Thanks</p>
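<p>My current understanding is that a class decorator is just a callable applied to the class object, so something like the sketch below might be all that is needed; I would like confirmation that this is the idiomatic way:</p>
<pre><code>Robot2 = external_decorator(
    type("Robot2",
         (),
         {"counter": 0,
          "__init__": Rob_init,
          "sayHello": lambda self: "Hi, I am " + self.name})
)
</code></pre>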
|
<python><dynamic><python-decorators>
|
2023-01-11 12:27:26
| 1
| 1,018
|
Grigory Ilizirov
|
75,083,037
| 981,831
|
Python.net "GIL" lifetime and multiple instances
|
<p>My desktop app uses IronPython to perform various scripting functions, but we're under pressure to support Numpy (which isn't possible with IP), so I'm currently looking into Python.Net instead, but I'm a bit unclear on how to manage the lifetimes of objects such as the GIL. Python.Net C# examples typically look like this:</p>
<pre><code>using (Py.Gil())
{
using (var scope = Py.CreateScope())
{
// Do everything in here
}
}
</code></pre>
<p>I have written a class that encapsulates running of Python code, and broadly looks like this (error handling etc. removed for brevity):</p>
<pre><code>public class ScriptRunner()
{
private readonly PyModule _scope;
private readonly Py.GILState _gil;
private PyObject _compiledScript;
public ScriptRunner()
{
_gil = Py.GIL();
_scope = Py.CreateScope();
}
public void Dispose()
{
_scope?.Dispose();
_gil?.Dispose();
}
public void SetVariable(string name, object value)
{
_scope.Set(name, value);
}
public void PrepareScript(string pythonCode)
{
_compiledScript = PythonEngine.Compile(pythonCode);
}
public void ExecuteScript()
{
_scope.Execute(_compiledScript);
}
}
</code></pre>
<p>It's common for code in my application to execute the same Python code multiple times, but with different variable values, so typical usage of the above class will look something like this:</p>
<pre><code>_scriptRunner.PrepareScript(myPythonCode);
_scriptRunner.SetVariable("x", 123);
_scriptRunner.ExecuteScript();
_scriptRunner.SetVariable("x", 456);
_scriptRunner.ExecuteScript();
...
</code></pre>
<p>There will be numerous instances of this class throughout my app, used wherever there is a need to execute Python code. My main concern is the "GIL" lifecycle aspect, which I don't really understand. Will the above design mean that only one instance will actually be able to work (the one with the "handle" to the GIL)?</p>
<p>Rather than managing the GIL (and scope) object lifetimes in the constructor and Dispose(), I was wondering whether I should instead add explicit methods to create and free these objects, e.g.:</p>
<pre><code>public void Initialise()
{
_gil = Py.GIL();
_scope = Py.CreateScope();
}
public void Free()
{
_scope?.Dispose();
_gil?.Dispose();
}
</code></pre>
<p>The usage would then end up more like this, where the GIL is only "locked" (if that's the right word) for the duration of the work:</p>
<pre><code>_scriptRunner.Initialise();
_scriptRunner.PrepareScript(myPythonCode);
_scriptRunner.SetVariable("x", 123);
_scriptRunner.RunScript();
_scriptRunner.SetVariable("x", 456);
_scriptRunner.RunScript();
...
_scriptRunner.Free();
</code></pre>
<p>Thoughts?</p>
|
<python><c#><python.net>
|
2023-01-11 12:26:46
| 1
| 10,315
|
Andrew Stephens
|
75,082,748
| 4,445,920
|
How to distribute parameterized pytest for testing in azure pieplines as different jobs
|
<p>I have a pytest file that has 1 test, and it is parameterized such that in total there are 100 different tests.</p>
<p>I created a pipeline with 2 parallel jobs but when I start the jobs, both the jobs run all the 100 tests individually.</p>
<p>What happens:</p>
<pre><code>**Job 1
----Test 1
----Test 2
...
----Test 100
Job 2
----Test 1
----Test 2
...
----Test 100**
</code></pre>
<p>I want the tests to be evenly distributed between these two jobs.</p>
<p>What is required:</p>
<pre><code>**Job 1
----Test 1
----Test 2
...
----Test 50
Job 2
----Test 51
----Test 52
...
----Test 100**
</code></pre>
<p>Here is the pipeline code</p>
<pre><code>jobs:
- job: 'QA_Pipeline'
strategy:
parallel: 2
pool: vm-dev-pool
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.8'
- script: |
pip install pytest
pip install pytest-xdist
displayName: 'Install modules'
- script: |
pytest -n=2 --capture=tee-sys ./test_smokeTests.py --doctest-modules --junitxml=junit/test-results.xml
displayName: 'Run Smoke Tests'
</code></pre>
<p>Here is the test file</p>
<pre><code>class Test_SmokeTests:
@pytest.mark.parametrize("param1,param2",
[
(a1,b1),
(a2,b2),
...
...
(a100,b100)
]
)
def test_smokeTest(self, param1,param2):
print("param1 - ", param1)
print("param2 - ", param2)
</code></pre>
<p><strong>NOTE:</strong> I have been able to distribute tests within a single Job using the pytest-xdist lib. That works but splitting the tests between different Jobs is the problem.</p>
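<p>One idea I am evaluating (and would like feedback on) is to shard the collected tests myself in <code>conftest.py</code>, assuming the Azure variables <code>System.JobPositionInPhase</code> and <code>System.TotalJobsInPhase</code> are exposed as environment variables inside each job:</p>
<pre><code># conftest.py -- sketch only
import os

def pytest_collection_modifyitems(config, items):
    total = int(os.environ.get("SYSTEM_TOTALJOBSINPHASE", "1"))
    position = int(os.environ.get("SYSTEM_JOBPOSITIONINPHASE", "1"))
    # keep every total-th test, offset by this job's position
    keep = [item for index, item in enumerate(items) if index % total == position - 1]
    drop = [item for item in items if item not in keep]
    config.hook.pytest_deselected(items=drop)
    items[:] = keep
</code></pre>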
|
<python><azure-pipelines><pytest>
|
2023-01-11 12:03:48
| 1
| 544
|
Manish
|
75,082,442
| 4,614,675
|
How to clear lru_cache across different processes - Python2.7
|
<p>I'm working on a Django project (version 1.11 - Python 2.7) and I need to create a new endpoint to clear all the methods cached using lru_cache decorator.</p>
<p>In this project I have several cached functions like this one:</p>
<pre class="lang-py prettyprint-override"><code>try:
from functools import lru_cache
except ImportError:
from backports.functools_lru_cache import lru_cache
...
@lru_cache(maxsize=None)
def my_function():
pass
</code></pre>
<p>When the backend starts, ten different processes are created using the <a href="https://docs.twisted.org/en/stable/" rel="nofollow noreferrer">Twisted library</a>.<br />
I'm wondering if it is possible to clear the lru caches for each process, and how.</p>
<p>I know that it is possible to clear the lru_cache using the <code>cache_clear()</code> method and there are <a href="https://www.geeksforgeeks.org/clear-lru-cache-in-python/?ref=lbp" rel="nofollow noreferrer">several strategies</a> to do it, but I think that it is a mono-thread scenario.</p>
<p>Is it possible to do the same across several processes?</p>
|
<python><multithreading><python-2.7><caching><functools>
|
2023-01-11 11:38:24
| 1
| 5,618
|
Giordano
|
75,082,217
| 8,832,008
|
Crop function that slices triangles instead of removing them (open3d)
|
<p>I have a TriangleMesh in open3d and I would like to crop it using a bounding box.</p>
<p>Open3d has the <a href="http://www.open3d.org/docs/latest/python_api/open3d.geometry.TriangleMesh.html#open3d.geometry.TriangleMesh.crop" rel="nofollow noreferrer">crop function</a>, which removes triangles if they are fully or partially outside the bounding box.</p>
<p>Is there a function that slices triangles instead if they are partially outside the bounding box?</p>
<p>Here is a simple 2D example (see plot below). Given the bounding box and the input triangle, the open3d crop function would simply remove the triangle. I would like a function that takes this triangle that overlaps with the bounding box and slices it. Is there such a function?</p>
<p><a href="https://i.sstatic.net/5fpRN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5fpRN.png" alt="2d crop example" /></a></p>
|
<python><open3d>
|
2023-01-11 11:20:25
| 1
| 1,334
|
cmosig
|
75,082,085
| 8,916,408
|
Error using bar_label to insert value labels on plot from dataframe, on Python with pandas and matplotlib
|
<p>I am trying to add value labels to a plot with matplotlib using bar_label. My data is from a DataFrame. I am getting the error <code>AttributeError: 'AxesSubplot' object has no attribute 'datavalues' </code>. I tried looking at different answers to similar problems here in StackOverflow and elsewhere, but I still don't understand how to fix this. Could someone point me in the right direction, please?</p>
<p>My version of matplotlib is 3.6.0, so that is not the issue.</p>
<p>If I try building it from a list, such as the example below, it works fine and it generates the plot with the value labels that I want.</p>
<pre><code>import matplotlib.pyplot as plt

year = [1999, 2000, 2001]
animals = [40, 50, 10]
barplot = plt.bar(year,
                  animals,
                  fc = "lightgray",
                  ec = "black")
plt.bar_label(container = barplot, labels = animals, label_type = "center")
plt.show()
</code></pre>
<p>The problem is when I try to get the values from a DataFrame. For example, the code below:</p>
<pre><code>year_v2 = [1999, 2010, 2011]
animals_v2 = [400, 500, 100]
df_v2 = pd.DataFrame([year_v2, animals_v2], index = ["year", "animals"]).T
barplot_v2 = df_v2.plot.bar("year",
"animals",
fc = "lightgray",
ec = "black")
plt.bar_label(container = barplot_v2,
labels = "animals",
label_type = "center")
plt.show()
</code></pre>
<p>The plot is generated, but without the value labels and with the error <code>AttributeError: 'AxesSubplot' object has no attribute 'datavalues'</code>.</p>
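<p>One direction that might work, based on the fact that <code>df.plot.bar</code> returns an <code>Axes</code> rather than a <code>BarContainer</code>, is sketched below (I have not confirmed this is the intended usage):</p>
<pre><code>ax = df_v2.plot.bar("year", "animals", fc="lightgray", ec="black")
# df.plot.bar returns an Axes; the bars live in ax.containers
ax.bar_label(ax.containers[0], label_type="center")
plt.show()
</code></pre>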
|
<python><pandas><matplotlib>
|
2023-01-11 11:08:55
| 2
| 423
|
Rafael Pinheiro
|
75,081,962
| 3,758,912
|
How to run Flask inside a Jupyter notebook block for easy testing?
|
<p>I want to Run a Flask Server inside a jupyter notebook for specific test and QA scenarios. I do understand that it is not wise to run a server inside notebook(As mentioned in the comments of this <a href="https://stackoverflow.com/questions/52457582/flask-application-inside-jupyter-notebook">question</a>).</p>
<p>However, I want to test a specific function that requires both a Flask <code>AppContext</code> and a running server. It is a third-party API webhook handler, and the third party does not have a method to generate fake webhooks. While this might be a very specific case, I think this question is worth asking for edge cases like mine.</p>
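<p>The rough shape of what I am trying to do is sketched below: start the development server in a background thread from a notebook cell so that later cells can hit it (the port number and the route are just placeholders):</p>
<pre><code>import threading
from flask import Flask

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    return "ok"

# run the dev server in a daemon thread so the notebook stays usable
threading.Thread(
    target=lambda: app.run(port=5001, use_reloader=False),
    daemon=True,
).start()
</code></pre>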
|
<python><flask><jupyter-notebook>
|
2023-01-11 10:58:04
| 1
| 776
|
Mudassir
|
75,081,931
| 5,023,667
|
How to properly align text in an Excel cell using OpenPyXL so that it doesn't repeat or overflow
|
<p>I'm creating an Excel from scratch using openpyxl. Some of the cells are populated with long strings, if the string is too long, I want it to cut at the cell border and not overflow to the neighboring cell.</p>
<p>This can be achieved using <code>Alignment(horizontal='fill')</code>, but then if the string is too short it repeats the string to <em>fill</em> the cell.</p>
<p>How can I achieve both no-overflow and no-repeat?</p>
<p>Overflow without <code>Alignment(horizontal='fill')</code>:</p>
<pre><code>import openpyxl
from openpyxl.styles import NamedStyle
wb = openpyxl.Workbook()
cell_style = NamedStyle(name='cell_style')
wb.add_named_style(cell_style)
ws = wb.active
ws['A1'].value = 'abcdefghijklmnop'
ws['A1'].style = 'cell_style'
wb.save('example.xlsx')
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/mSfiP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mSfiP.jpg" alt="enter image description here" /></a></p>
<p>Repeat with <code>Alignment(horizontal='fill')</code>:</p>
<pre><code>import openpyxl
from openpyxl.styles import NamedStyle, Alignment
cell_style = NamedStyle(name='cell_style')
cell_style.alignment = Alignment(horizontal='fill')
wb = openpyxl.Workbook()
wb.add_named_style(cell_style)
ws = wb.active
ws['A1'].value = 'abc'
ws['A1'].style = 'cell_style'
wb.save('example.xlsx')
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/957rC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/957rC.jpg" alt="enter image description here" /></a></p>
|
<python><openpyxl>
|
2023-01-11 10:55:37
| 1
| 623
|
Shlomo Gottlieb
|
75,081,910
| 3,164,492
|
How to change a particular section of code at a mass scale?
|
<p>I want to scan a code base with approximately 150k Python and Java files and want to find a specific kind of hardcoding. For example:</p>
<pre><code># some code above
if get_city() == 'Delhi': # the way of checking can be different, e.g. city = get_city() and then checking city == 'Delhi'
country = 'India'
else:
country = 'US'
# some code below
</code></pre>
<p>Now I want to move this hardcoding to a configuration like</p>
<pre><code>---
'delhi': 'India'
'SanFransico': 'US'
'paris': 'France'
</code></pre>
<p>and change the code to</p>
<pre><code># some code above
data = yaml.safe_load(city_mapping)
country = data[get_city()]
# some code below
</code></pre>
<p>How to find such code patterns and change them programmatically?</p>
|
<python><java><parsing><code-generation>
|
2023-01-11 10:54:10
| 0
| 1,805
|
Devavrata
|
75,081,741
| 4,903,479
|
pandas dataframe subsetting showing NaN values
|
<p>I have two pandas dataframes df1 and df2 where age, Start_Time, End_Time are datetime64[ns] dtypes. I want to extract the data points in df1 which fall within any of the Start_Time/End_Time intervals in df2.</p>
<pre><code>df1
age LAeq LSeq Doss LSeq Gliss LZeq
0 2019-05-14 15:40 62.02 NaN NaN 0.0
1 31-01-2019 15:39 60.45 NaN NaN 0.0
df2
index Start_Time End_Time Equipment_Group
3200 2019-05-14 08:00:00 2019-05-14 16:00:00 Atmospheric_Vacts
4856 2019-07-22 08:00:00 2019-07-22 16:00:00 Atmospheric_Vacts
for index, row in df2.iterrows():
start = row['Start_Time']
end = row['End_Time']
df1.loc[df1['age'].between(start, end) , 'ACTIVE'] = True
df1.head()
</code></pre>
<p>I am getting 'NaN' in the ACTIVE column instead of True. It would be helpful if I could get some direction on this.</p>
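<p>One thing I have considered (sketched below, not verified) is forcing <code>age</code> to datetime and initializing <code>ACTIVE</code> to <code>False</code> first, so that rows outside every interval are not left as NaN:</p>
<pre><code>df1['age'] = pd.to_datetime(df1['age'], errors='coerce')  # mixed formats become NaT instead of failing
df1['ACTIVE'] = False

for index, row in df2.iterrows():
    mask = df1['age'].between(row['Start_Time'], row['End_Time'])
    df1.loc[mask, 'ACTIVE'] = True
</code></pre>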
|
<python><python-3.x><pandas><numpy><datetime>
|
2023-01-11 10:40:35
| 1
| 583
|
shan
|
75,081,737
| 12,014,637
|
Tensorflow layer working outside of model but not inside
|
<p>I have a custom tensorflow layer which works fine by generating an output but it throws an error when used with the Keras functional model API. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
# ------ Custom Layer -----------
class CustomLayer(tf.keras.layers.Layer):
def __init__(self):
super(CustomLayer, self).__init__()
def split_heads(self, x):
batch_size = x.shape[0]
split_inputs = tf.reshape(x, (batch_size, -1, 3, 1))
return split_inputs
def call(self, q):
qs = self.split_heads(q)
return qs
# ------ Testing Layer with sample data --------
x = np.random.rand(1,2,3)
values_emb = CustomLayer()(x)
print(values_emb)
</code></pre>
<p>This generates the following output:</p>
<pre><code>tf.Tensor(
[[[[0.7148978 ]
[0.3997009 ]
[0.11451813]]
[[0.69927174]
[0.71329576]
[0.6588452 ]]]], shape=(1, 2, 3, 1), dtype=float32)
</code></pre>
<p>But when I use it in the Keras functional API it doesn't work. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>x = Input(shape=(2,3))
values_emb = CustomLayer()(x)
model = Model(x, values_emb)
model.summary()
</code></pre>
<p>It gives this error:</p>
<pre><code>TypeError: Failed to convert elements of (None, -1, 3, 1) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.
</code></pre>
<p>Does anyone know why this happens and how it can be fixed?</p>
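<p>One guess (unverified) is that the symbolic input has a batch dimension of <code>None</code>, so <code>x.shape[0]</code> is <code>None</code> inside the functional API; a sketch of the change that would test this idea:</p>
<pre><code>    def split_heads(self, x):
        batch_size = tf.shape(x)[0]  # dynamic shape works even when the static batch dim is None
        return tf.reshape(x, (batch_size, -1, 3, 1))
</code></pre>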
|
<python><tensorflow><keras><deep-learning>
|
2023-01-11 10:40:12
| 1
| 618
|
Amin Shn
|
75,081,705
| 4,432,671
|
Is there a right-left (foldr) reduction in NumPy?
|
<p>NumPy's <code>.reduce()</code> is a <code>foldl</code>, reducing left to right:</p>
<pre><code>>>> np.subtract.reduce(np.array([1, 2, 3]))
-4
</code></pre>
<p>So <code>(1-2)-3</code>. Is there a standard numpy way of doing <code>foldr</code> instead (right to left), that is to get <code>1-(2-3)</code> in this simple case?</p>
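<p>For comparison, the plain-Python fallback I know of reverses the array and swaps the operand order; I am hoping there is a more direct NumPy equivalent:</p>
<pre><code>from functools import reduce
import numpy as np

a = np.array([1, 2, 3])
# foldr: 1 - (2 - 3) == 2
result = reduce(lambda acc, x: x - acc, a[::-1])
</code></pre>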
|
<python><numpy>
|
2023-01-11 10:37:51
| 1
| 3,737
|
xpqz
|
75,081,678
| 20,740,043
|
Plot-save-close a histogram in Python
|
<p>I have a data frame and wish to plot-save-close the histogram.</p>
<p>These are the codes:</p>
<pre><code>#Load the required libraries
import pandas as pd
import matplotlib.pyplot as plt
#Create data
data = {'Marks': [22, 87, 5, 43, 56,
73, 55, 54, 11, 20,
51, 5, 79, 31, 27]}
#Convert to dataframe
df = pd.DataFrame(data)
#create histogram and save the image
fig_verify = plt.figure(figsize=(15,5))
df.hist(grid=False, edgecolor='blue', bins=100)
plt.show(block=False)
plt.pause(1)
plt.close()
fig_verify.savefig("hist.png")
</code></pre>
<p>Here I see that the file created "hist.png" is blank.</p>
<p>Also, another plot window appears which doesn't close.</p>
<p>Can somebody please help me out with these two things:</p>
<ol>
<li>Save the hist plot.</li>
<li>Close the hist plot.</li>
</ol>
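<p>In case it clarifies the intent: my guess is that <code>df.hist</code> draws on its own figure rather than on <code>fig_verify</code>, so something like the sketch below (grabbing the figure from the returned axes) is what I am aiming for:</p>
<pre><code>axes = df.hist(grid=False, edgecolor='blue', bins=100)  # returns an array of Axes
fig = axes[0][0].get_figure()
fig.savefig("hist.png")
plt.close(fig)
</code></pre>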
|
<python><python-3.x><pandas><matplotlib><plot>
|
2023-01-11 10:35:27
| 1
| 439
|
NN_Developer
|
75,081,504
| 2,713,740
|
flask pass variables from template to python
|
<p>I have the following code in my template HTML page:</p>
<pre><code><form class="ui form" action="{{ url_for('download_pdf', text=original_text) }}" method="get">
<button class="ui left floated submit button" type="submit">Export</button>
</form>
</code></pre>
<p>In this code, the <code>original_text</code> variable was passed to this template from within python and I am trying to pass it to another python function as:</p>
<pre><code>@app.route("/download-pdf/<text>")
def download_pdf(text: str):
data = text
return render_template("success.html", data=data)
</code></pre>
<p>Now, this results in a <code>404</code> not found error. It is trying to render it as:</p>
<pre><code>https://page.net/download-pdf/He%20...long text here with spaces...?
</code></pre>
<p>If I do something like:</p>
<pre><code><form class="ui form" action="{{ url_for('download_pdf', text='hello') }}"
</code></pre>
<p>it seems to work.</p>
|
<python><flask>
|
2023-01-11 10:22:14
| 1
| 11,086
|
Luca
|
75,081,499
| 4,094,231
|
Scrapy CSS Selectors not extracting all content inside
|
<p>Using Scrapy 2.5.1, I want to extract main article content from <a href="https://www.upi.com/Top_News/World-News/2023/01/01/N-Koerean-leader-calls-exponential-increase-nuclear-arsenal/4171672578401/" rel="nofollow noreferrer">https://www.upi.com/Top_News/World-News/2023/01/01/N-Koerean-leader-calls-exponential-increase-nuclear-arsenal/4171672578401/</a></p>
<p>I verified that my response.text contains all of the HTML, the same as I can view in my browser.</p>
<p>But when I try to extract the main article content from [itemprop='articleBody'], which should contain only the article paragraphs, I get an incomplete result:</p>
<pre><code>response.css("[itemprop='articleBody'] *::text").extract()
['\n\n',
'SEOUL, Jan. 1 (UPI) --',
' North Korean leader Kim Jong-un stressed the need to "exponentially" increase the number of the country\'s nuclear arsenal and develop a new intercontinental ballistic missile in the new year, Pyongyang\'s state media reported Sunday.',
'\n',
"He delivered the message during a plenary meeting of the ruling Workers' Party of Korea that ended the previous day. It was held to set Pyongyang's major policy directions for the new year.\n",
'Advertisement']
</code></pre>
<p>I tried several CSS selector combinations, but to no avail.</p>
<p>If it was Selenium, I would have simply used <code>driver.find_element_by_css_selector("[itemprop='articleBody']").text</code> and it would have extracted all content</p>
<p>What should be appropriate CSS Selector to select main article content from above page?</p>
|
<python><python-3.x><scrapy>
|
2023-01-11 10:21:56
| 1
| 21,655
|
Umair Ayub
|
75,081,331
| 16,173,560
|
Fill contours with OpenCV
|
<p>I have an image with a black background and some red outlines of polygons, like this:</p>
<p><a href="https://i.sstatic.net/v6vNl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v6vNl.jpg" alt="enter image description here" /></a></p>
<p>I want now to fill those polygons with the same colour, so they look something like this:</p>
<p><a href="https://i.sstatic.net/oXsdS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oXsdS.jpg" alt="enter image description here" /></a></p>
<p>I tried using OpenCV, but it doesn't seem to work:</p>
<pre><code>import cv2
image = cv2.imread("image_to_read.jpg")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, contours, _ = cv2.findContours(gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
cv2.drawContours(image, [contour], 0, (255, 0, 0), -1)
cv2.imshow("Filled Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>I am getting this error:</p>
<blockquote>
<p>ValueError: not enough values to unpack (expected 3, got 2)</p>
</blockquote>
<p>Any help would be appreciated!</p>
|
<python><opencv><shapes><fill>
|
2023-01-11 10:08:48
| 1
| 323
|
OlegRuskiy
|
75,081,293
| 571,941
|
Downsize image when training yolov5 model
|
<p>I am looking into making my custom YOLOv5 model faster, based on my current results where I have trained on ~20k (1280 × 960) images with a configuration based on yolov5l6.yaml (a P6 model, I assume).</p>
<p>This gives me a very well performing model, and I would now like to explore smaller/simpler models.</p>
<p>Option A is the simple one: try to train with a medium/small/nano version of the P6 model.</p>
<p>Option B: I would like to try going to an image size of 640 by using a P5 model.</p>
<p><strong>My question is:</strong> do I need to manually resize my images to 640x640 to efficiently train a P5 model (e.g. based on yolov5m.yaml), or can I simply pass the desired image size as a parameter to train.py?</p>
|
<python><yolo><yolov5>
|
2023-01-11 10:05:47
| 1
| 448
|
Jakob Halskov
|
75,081,128
| 12,260,268
|
Gcloud CLI using what APIs
|
<p>I've been wondering for a long time how <code>gcloud</code> works.</p>
<p>I ask because I saw a bunch of Python files in <code>./google-cloud-sdk</code> after I installed the <a href="https://cloud.google.com/sdk/docs/install#installation_instructions" rel="nofollow noreferrer">Google CLI</a> and used the tree command to see what files are in there.</p>
<p>Therefore I guess that if I execute <code>gcloud compute instances list</code>, this command will call this <a href="https://cloud.google.com/compute/docs/reference/rest/v1/instances/list" rel="nofollow noreferrer">API</a> to get the list of instances.</p>
<p>If that is true, I would like to know how <code>gcloud</code> internally reaches the Google APIs.</p>
|
<python><google-cloud-platform><gcloud>
|
2023-01-11 09:52:22
| 1
| 394
|
Tim Chiang
|
75,081,043
| 11,574,636
|
How to promote (Copy) a Docker image from one repository to another with JFrog Rest API
|
<p>I have a repository with all Docker images. I want to copy all Docker images that were used in the last 7 days from there to another repository. It must be a Python script and I cannot use curl.</p>
<p>I have figured out everything except the way to copy the image.</p>
<pre><code>URL = "https://repo.url/artifactory"
REPOSITORY = "repo-with-all-images"
IMAGE_NAME = "image-name"
VERSION = "latest"
TARGET_REPOSITORY = "repo-to-paste-images"
url = f"{URL}/api/docker/{REPOSITORY}/{IMAGE_NAME}/{VERSION}/promote"
payload = {
"targetRepo": TARGET_REPOSITORY,
"copy": True,
}
headers = ({"Authorization": token})
status = urllib3.PoolManager().request(method="POST", url=url, body=json.dumps(payload), headers=headers)
logging.error(status.data)
logging.error(status.headers)
</code></pre>
<p>When running it I get this message:</p>
<blockquote>
<p>ERROR:root:b'{\n "errors" : [ {\n "status" : 405,\n "message" : "Method Not Allowed"\n } ]\n}'</p>
</blockquote>
<blockquote>
<p>ERROR:root:HTTPHeaderDict({'Date': 'Wed, 11 Jan 2023 09:36:13 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'X-JFrog-Version': 'Artifactory/X.XX.XX XXXXXXXX', 'X-Artifactory-Id': 'xxxxxxxxxxxxxxxx:-xxxxxxx:xxxxxxxxxx:-xxxx', 'X-Artifactory-Node-Id': 'xxx-xxxxxx-xxx', 'Allow': 'HEAD,GET,OPTIONS'})</p>
</blockquote>
|
<python><rest><post><artifactory>
|
2023-01-11 09:46:29
| 1
| 326
|
Fabian
|
75,080,993
| 14,098,117
|
DBusErrorResponse while running poetry install
|
<p>I tried to upgrade poetry from version 1.1.x to 1.3, and as the official manual (<a href="https://python-poetry.org/docs/" rel="noreferrer">https://python-poetry.org/docs/</a>) recommends, I removed the old version manually. Unfortunately, I probably deleted the wrong files, because after installing version 1.3 I was still receiving errors that suggested something was in conflict with the old poetry.
I tried to find all files in my account connected somehow with poetry (with <code>find /home/username -name *poetry*</code>; it's a remote machine, so I did not want to affect others) and, after uninstalling poetry 1.3, removed them. Then I installed poetry 1.3 again, but it still did not work.
I also tried deleting my whole repo and cloning it again, but the same problem remains. I hope there is some way to do a hard reset. How can I recover from this?</p>
<p>Here is the beginning of my error message:</p>
<pre><code>
Package operations: 28 installs, 0 updates, 0 removals
• Installing certifi (2021.10.8)
• Installing charset-normalizer (2.0.12)
• Installing idna (3.3)
• Installing six (1.16.0)
• Installing typing-extensions (4.2.0)
• Installing urllib3 (1.26.9)
DBusErrorResponse
[org.freedesktop.DBus.Error.UnknownMethod] ('No such interface 'org.freedesktop.DBus.Properties' on object at path /org/freedesktop/secrets/collection/login',)
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:48 in send_and_get_reply
44β def send_and_get_reply(self, msg: Message) -> Any:
45β try:
46β resp_msg: Message = self._connection.send_and_get_reply(msg)
47β if resp_msg.header.message_type == MessageType.error:
β 48β raise DBusErrorResponse(resp_msg)
49β return resp_msg.body
50β except DBusErrorResponse as resp:
51β if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT):
52β raise ItemNotFoundException('Item does not exist!') from resp
The following error occurred when trying to handle this error:
ItemNotFoundException
Item does not exist!
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:52 in send_and_get_reply
48β raise DBusErrorResponse(resp_msg)
49β return resp_msg.body
50β except DBusErrorResponse as resp:
51β if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT):
β 52β raise ItemNotFoundException('Item does not exist!') from resp
53β elif resp.name in (DBUS_SERVICE_UNKNOWN, DBUS_EXEC_FAILED,
54β DBUS_NO_REPLY):
55β data = resp.data
56β if isinstance(data, tuple):
The following error occurred when trying to handle this error:
PromptDismissedException
Prompt dismissed.
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/collection.py:159 in create_collection
155β if len(collection_path) > 1:
156β return Collection(connection, collection_path, session=session)
157β dismissed, result = exec_prompt(connection, prompt)
158β if dismissed:
β 159β raise PromptDismissedException('Prompt dismissed.')
160β signature, collection_path = result
161β assert signature == 'o'
162β return Collection(connection, collection_path, session=session)
163β
The following error occurred when trying to handle this error:
InitError
Failed to create the collection: Prompt dismissed..
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/keyring/backends/SecretService.py:63 in get_preferred_collection
59β collection = secretstorage.Collection(bus, self.preferred_collection)
60β else:
61β collection = secretstorage.get_default_collection(bus)
62β except exceptions.SecretStorageException as e:
β 63β raise InitError("Failed to create the collection: %s." % e)
64β if collection.is_locked():
65β collection.unlock()
66β if collection.is_locked(): # User dismissed the prompt
67β raise KeyringLocked("Failed to unlock the collection!")
DBusErrorResponse
[org.freedesktop.DBus.Error.UnknownMethod] ('No such interface 'org.freedesktop.DBus.Properties' on object at path /org/freedesktop/secrets/collection/login',)
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:48 in send_and_get_reply
44β def send_and_get_reply(self, msg: Message) -> Any:
45β try:
46β resp_msg: Message = self._connection.send_and_get_reply(msg)
47β if resp_msg.header.message_type == MessageType.error:
β 48β raise DBusErrorResponse(resp_msg)
49β return resp_msg.body
50β except DBusErrorResponse as resp:
51β if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT):
52β raise ItemNotFoundException('Item does not exist!') from resp
The following error occurred when trying to handle this error:
ItemNotFoundException
Item does not exist!
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:52 in send_and_get_reply
48β raise DBusErrorResponse(resp_msg)
49β return resp_msg.body
50β except DBusErrorResponse as resp:
51β if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT):
β 52β raise ItemNotFoundException('Item does not exist!') from resp
53β elif resp.name in (DBUS_SERVICE_UNKNOWN, DBUS_EXEC_FAILED,
54β DBUS_NO_REPLY):
55β data = resp.data
56β if isinstance(data, tuple):
The following error occurred when trying to handle this error:
PromptDismissedException
Prompt dismissed.
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/collection.py:159 in create_collection
155β if len(collection_path) > 1:
156β return Collection(connection, collection_path, session=session)
157β dismissed, result = exec_prompt(connection, prompt)
158β if dismissed:
β 159β raise PromptDismissedException('Prompt dismissed.')
160β signature, collection_path = result
161β assert signature == 'o'
162β return Collection(connection, collection_path, session=session)
163β
</code></pre>
|
<python><python-poetry>
|
2023-01-11 09:43:12
| 2
| 844
|
Emil Haas
|
75,080,992
| 2,425,753
|
Sum list of dicts of lists
|
<p>I have a list of dicts, every value in a dict is a four-element list:</p>
<pre class="lang-py prettyprint-override"><code>my_dict=[
{
'prop1': [1, 2, 3, 4],
'prop2': [1, 1, 0, 0]
},
{
'prop1': [2, 3, 3, 1],
'prop3': [1, 1, 0, 0]
}
]
</code></pre>
<p>Is it possible to sum it up without writing explicit iteration?</p>
<p>I want to get:</p>
<pre class="lang-py prettyprint-override"><code>my_dict_sum={
'prop1': [3, 5, 6, 5],
'prop2': [1, 1, 0, 0],
'prop3': [1, 1, 0, 0]
}
</code></pre>
<p>UPD: something like this works, but I wonder how to use <code>map</code> or <code>zip</code> or <code>functools</code> to do the same without writing two levels of iteration:</p>
<pre class="lang-py prettyprint-override"><code>my_dict_sum = {}
for val in my_dict:
for key,counts in val.items():
if key in my_dict_sum :
my_dict_sum[key] = list(map(lambda x,y: x+y, my_dict_sum[key], counts))
else:
my_dict_sum[key] = counts
</code></pre>
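<p>A <code>functools.reduce</code>-based sketch of the same logic (still two levels of iteration under the hood, which is why I am asking whether something cleaner exists):</p>
<pre><code>from functools import reduce

def merge(acc, d):
    for key, counts in d.items():
        acc[key] = [a + b for a, b in zip(acc[key], counts)] if key in acc else list(counts)
    return acc

my_dict_sum = reduce(merge, my_dict, {})
</code></pre>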
|
<python><list><loops><dictionary>
|
2023-01-11 09:43:10
| 3
| 1,636
|
rfg
|
75,080,918
| 1,974,918
|
Possible to calculate counts and percentage in one chain using polars?
|
<p>From seeing some of the other polars answers, it seems most things can be completed in a single chain. Is that possible with the example below? Are any simplifications possible?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
scores = pl.DataFrame({
'zone': ['North', 'North', 'North', 'South', 'East', 'East', 'East', 'East'],
'score': [78, 39, 76, 56, 67, 89, 100, 55]
})
cnt = scores.group_by("zone").len()
cnt.with_columns(
(100 * pl.col("len") / pl.col("len").sum())
.round(2)
.cast(str)
.str.replace(r"$", "%")
.alias("perc")
)
</code></pre>
<pre><code>shape: (3, 3)
┌───────┬─────┬───────┐
│ zone  │ len │ perc  │
│ ---   │ --- │ ---   │
│ str   │ u32 │ str   │
╞═══════╪═════╪═══════╡
│ South │ 1   │ 12.5% │
│ East  │ 4   │ 50.0% │
│ North │ 3   │ 37.5% │
└───────┴─────┴───────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2023-01-11 09:36:03
| 1
| 5,289
|
Vincent
|
75,080,750
| 16,727,671
|
how to run .sql (mssql or sql server) file in python?
|
<p>Is there any way to run a .sql (SQL Server) file in Python?</p>
<p><strong>Python Code:</strong></p>
<pre><code>import pyodbc,tempfile,os
server_name = "localhost"
db_name = "abc"
password = "1234"
local_path = tempfile.gettempdir()
sqlfile = "test.sql"
filepath = os.path.join(local_path,sqlfile)
Connection_string = 'Driver={ODBC Driver 17 for SQL Server};Server='+server_name+';Database='+db_name+';UID=sa;PWD='+password+';'
cnxn = pyodbc.connect(Connection_string)
cursor1 = cnxn.cursor()
with open(filepath, 'r') as sql_file:
cursor1.execute(sql_file.read())
</code></pre>
<p><strong>Below is the content of test.sql</strong><br />
This SQL script runs successfully when executed manually.</p>
<pre><code>IF EXISTS (SELECT * FROM sys.tables WHERE name = 'area' AND type = 'U') DROP TABLE area;
CREATE TABLE area (
areaid int NOT NULL default '0',
mapid int NOT NULL default '0',
areaname varchar(50) default NULL,
x1 int NOT NULL default '0',
y1 int NOT NULL default '0',
x2 int NOT NULL default '0',
y2 int NOT NULL default '0',
flag int NOT NULL default '0',
restart int NOT NULL default '0',
PRIMARY KEY (areaid)
)
</code></pre>
|
<python><sql-server><pyodbc>
|
2023-01-11 09:22:54
| 1
| 448
|
microset
|
75,080,713
| 849,076
|
How do I make Babel use minus signs?
|
<p>Why is it that Babel does not use the minus sign used by my locale, in functions like <code>format_decimal()</code>? It seems to me like this would be the very job of a library like Babel.</p>
<p>Is there a way I can enforce the usage of locale specific minus signs?</p>
<pre><code>>>> import babel
>>> babel.__version__
'2.11.0'
>>> from babel.numbers import format_decimal, Locale
>>> l = Locale("sv_SE")
>>> l.number_symbols["minusSign"]
'−'
>>> format_decimal(-1.234, locale=l)
'-1,234'
</code></pre>
<p>Despite the fact that <code>Locale.number_symbols</code> clearly contains a different character (in this case U+2212), <code>format_decimal()</code> (and other Babel formatting functions) use only the fallback hyphen-minus.</p>
<p>I am getting <code>'-1,234'</code> (with a hyphen-minus) where I would have expected <code>'−1,234'</code> (with the U+2212 minus sign).</p>
|
<python><python-babel>
|
2023-01-11 09:19:50
| 1
| 8,641
|
leo
|