QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,852,873
| 626,664
|
How to use map to process column?
|
<p>I have a pandas dataframe like this,</p>
<pre><code>import pandas as pd
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
calories duration
0 420 50
1 380 40
2 390 45
</code></pre>
<p>And I have a function to alter the value,</p>
<pre><code>def alter_val(val):
return val + 1
</code></pre>
<p>Now, as the documentation says, <code>map()</code> takes a function and an iterable and returns an iterator. In my understanding, it should therefore work like this:</p>
<pre><code>df["new_value"] = map(alter_val, df["calories"])
</code></pre>
<p>But it doesn't work. Shows</p>
<pre><code>TypeError: object of type 'map' has no len()
</code></pre>
<p>However, it works if I use the following code,</p>
<pre><code>df["new"] = df["calories"].map(add_cal)
</code></pre>
<p>But this does not follow the documented approach of <code>map(function, iterable)</code>.</p>
<p>Can someone please take some time to explain the correct way, and why is it so?</p>
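<p>(A minimal sketch, assuming the goal is just to make the built-in <code>map()</code> usable here: wrapping the lazy map object in <code>list()</code> gives pandas a sized sequence to assign.)</p>
<pre><code># materialize the lazy iterator before assigning it to a column
df["new_value"] = list(map(alter_val, df["calories"]))
</code></pre>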
|
<python><pandas>
|
2022-12-19 15:52:36
| 2
| 1,559
|
Droid-Bird
|
74,852,857
| 8,901,144
|
Pyspark multiply only some Column Values when condition is met, otherwise keep the same value
|
<p>I have the following dataset</p>
<pre><code>id col1 ... col10 quantity
0 2 3 0
1 1 4 2
2 0 4 2
3 2 2 0
</code></pre>
<p>I would like to multiply the values of col1 to col10 by 2 only when quantity is equal to 2; otherwise I would like to keep the previous value. Here is an example of the result:</p>
<pre><code>id col1 ... col10 quantity
0 2 3 0
1 2 8 2
2 0 8 2
3 2 2 0
</code></pre>
<p>I wrote the following code for now:</p>
<pre><code>cols_names = df.drop('id','quantity').columns
df = df.withColumn("arr", F.when(F.col('quantity') == 2, F.struct(*[(F.col(x)* 2).alias(x) for x in\
cols_names]))).select("id","quantity","arr.*")
</code></pre>
<p>The only problem with this approach is that when the condition is not met I get null values instead of keeping the old one. How can I keep the old value when the condition is not met? Or if there is an easier way to do that it would be great too.</p>
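<p>(A minimal sketch, assuming <code>pyspark.sql.functions</code> is imported as <code>F</code>: chaining <code>.otherwise(F.col(x))</code> onto the <code>when</code> keeps the original value when the condition is not met, instead of producing null.)</p>
<pre><code># for each column, double it when quantity == 2, otherwise keep the existing value
df = df.select(
    "id",
    "quantity",
    *[F.when(F.col("quantity") == 2, F.col(x) * 2).otherwise(F.col(x)).alias(x) for x in cols_names],
)
</code></pre>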
|
<python><pyspark>
|
2022-12-19 15:51:35
| 1
| 1,255
|
Marco
|
74,852,817
| 1,028,270
|
Is there a way to define different pytest "profiles" in setup.cfg or pyproject.toml?
|
<p>I want to define two different "profiles" for my tests and I'm using setup.cfg. I want to know if it's possible to configure this all under <code>[tool:pytest]</code>.</p>
<p>I want to do more than just change markers; I want to change other settings too.</p>
<p>The two different executions would look something like this and might include even more diverging args and settings in the future (I want to be able to define them arbitrarily):</p>
<pre><code>profile_name: unit # "profile_name" doesn't exist just using as example of the kind of thing I'm looking for
addopts = --verbose -m "unit" --capture=yes --disable-pytest-warnings
python_files = *_test.py
testpaths =
tests/unit
</code></pre>
<p>And:</p>
<pre><code>profile_name: integration # "profile_name" doesn't exist just using as example of the kind of thing I'm looking for
addopts = --verbose -m "integration" --capture=no
python_files = *_test.py
testpaths =
tests/integration
</code></pre>
<p>It would be great if I didn't have to create two pytest inis or just override everything via command line. I'd like to be able to run a command akin to <code>pytest --profile="unit"</code> and centralize all my pytest settings in setup.cfg.</p>
<p>Is such a thing supported?</p>
<p>I also use a <code>pyproject.toml</code> so if this is supported there I would be able to use that file too.</p>
|
<python><python-3.x><pytest>
|
2022-12-19 15:49:05
| 0
| 32,280
|
red888
|
74,852,776
| 1,039,247
|
List RavenDB collections, using py-ravendb
|
<p>I'm trying to query or list all collections in a RavenDB database, using <a href="https://github.com/ravendb/ravendb-python-client" rel="nofollow noreferrer">RavenDB's Python Client</a>.</p>
<p>So far I've come to something like:</p>
<pre class="lang-py prettyprint-override"><code>URLS = ['http://localhost:8080']
DB_NAME = 'my-db'
store = DocumentStore(URLS, DB_NAME)
store.initialize()
with store.open_session() as session:
collections = session.advanced.document_store.database_commands.get_collections(0, 50)
</code></pre>
<p>The last line errors out with:</p>
<blockquote>
<p>AttributeError: 'DocumentStore' object has no attribute 'database_commands'</p>
</blockquote>
<p>Obviously the <code>database_commands</code> isn't available. But how can I list all collections in a RavenDB v5.4+ database instead, using py-ravendb?</p>
|
<python><ravendb><ravendb5>
|
2022-12-19 15:44:46
| 1
| 9,632
|
Juliën
|
74,852,748
| 12,084,907
|
PYODBC Only returning values of the first query in a stored procedure
|
<p>I am trying to make a script that will run a stored procedure and then return the values. A simple enough script. When I run the script, it only returns the output of the first query in my stored procedure. When the call is made, the values are appended to a list, and once the list is created it is printed onto a GUI that I have made. My stored procedure does work and returns the values for all the queries it contains when run in SSMS. My stored procedure is as follows (sorry if it is poorly made; I am fairly new to SQL):</p>
<p>It takes a parameter @username</p>
<pre><code>Select username,user_id,user_type
from db1.users
where username like '%' + TRIM(@username) + '%'
ORDER BY username
Select username,user_id,user_type
from db2.users
where username like '%' + TRIM(@username) + '%'
ORDER BY username
Select username,user_id,user_type
from db3.users
where username like '%' + TRIM(@username) + '%'
ORDER BY username
Select username,user_id,user_type
from db4.users
where username like '%' + TRIM(@username) + '%'
ORDER BY username
Select username,user_id,user_type
from db5.users
where username like '%' + TRIM(@username) + '%'
ORDER BY username
</code></pre>
<p>The code that I am using to try and run this procedure and then append it to the list are as follows:</p>
<pre><code>def user_query(user, conn, output_field):
global user
user_results = []
username = user.get()
cursor = conn.cursor()
get_user_stored_proc = "SET NOCOUNT ON; EXEC [dbo].[getUser] @username = ?"
output_field.config(state=NORMAL)
output_field.delete('1.0', END)
cursor.execute(get_user_stored_proc, username)
#TODO Fix this so that it will output everything from the query
columns = [column[0] for column in cursor.description]
for row in cursor:
user_results.append(dict(zip(columns, row)))
print_results(user_results, output_field)
#cursor.close()
#conn.close()
</code></pre>
<p>As previously mentioned, the only output I have returned to me when running this is the result of the first query. Any help is appreciated!</p>
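<p>(A minimal sketch, assuming the pyodbc cursor: a stored procedure with several SELECTs returns multiple result sets, and <code>cursor.nextset()</code> advances to the next one, so the rows of every query can be collected in a loop.)</p>
<pre><code>cursor.execute(get_user_stored_proc, username)
user_results = []
while True:
    columns = [column[0] for column in cursor.description]
    for row in cursor.fetchall():
        user_results.append(dict(zip(columns, row)))
    if not cursor.nextset():  # falsy when there are no more result sets
        break
</code></pre>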
|
<python><sql><ssms><pyodbc>
|
2022-12-19 15:42:26
| 1
| 379
|
Buzzkillionair
|
74,852,600
| 10,620,003
|
Select the columns between two columns when we only have their names
|
<p>I have a dataframe and I want to select the columns between two columns for which I only have the names. For example, in the following df, I want to select the columns between column <code>'a1'</code> and <code>'a4'</code>. I know that I can write <code>df[['a1','a2', 'a3', 'a4']]</code>. However, my df is very large and I cannot write it this way.</p>
<pre><code>import pandas as pd
df = pd.DataFrame()
df['a'] = [1, 2]
df['a1'] = [1, 2]
df['a2'] = [10, 12]
df['a3'] = [1, -2]
df['a4'] = [1, 12]
df['a5'] = [12, 20]
df['a6'] = [11, 3]
</code></pre>
<p>At the end I want this:</p>
<pre><code> a1 a2 a3 a4
0 1 10 1 1
1 2 12 -2 12
</code></pre>
<p>Do you have any solution? Thanks</p>
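<p>(A minimal sketch of label-based slicing with <code>.loc</code>, which selects every column between the two names, inclusive, without listing them.)</p>
<pre><code># select all columns from 'a1' through 'a4' (inclusive) by label
result = df.loc[:, 'a1':'a4']
</code></pre>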
|
<python><pandas>
|
2022-12-19 15:32:24
| 1
| 730
|
Sadcow
|
74,852,339
| 3,625,770
|
No Module Error in Pycharm ONLY When Using A Notebook
|
<p>I am trying to import a method from a module outside the current module.
While I can do this from a script without any issue, when I try it from inside a Jupyter notebook, it fails. How do you suggest I investigate the issue?</p>
<p>Example: Imagine these be my directories.</p>
<pre><code>- module_1
--- __init__.py
--- my_lib.py
- module_2
--- __init__.py
--- script_which_calls_from_my_lib.py
--- notebook_which_calls_from_my_lib.ipynb
</code></pre>
<p>And I import the same way in both files (.py and .ipynb):</p>
<pre><code>from module_1.my_lib import method_x
</code></pre>
<p>In the example above, <code>script_which_calls_from_my_lib.py</code> imports the methods without any issue, but <code>notebook_which_calls_from_my_lib.ipynb</code> returns the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'module_1'
</code></pre>
<p><strong>Note</strong>: This seems to be an issue only within this project (in PyCharm). That is, when I tried to reproduce a minimal working example in a new project, it worked without an issue.</p>
<p><strong>Note:</strong> I have cleaned up this project as much as I could. I removed <code>venv</code> and created a new one with only Jupyter installed, I closed the project in PyCharm, etc.</p>
<hr />
<h2>Edit</h2>
<p>It seems that PyCharm 22.3 might be the cause of this error:</p>
<ul>
<li><a href="https://youtrack.jetbrains.com/issue/DS-4251" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/DS-4251</a></li>
<li><a href="https://youtrack.jetbrains.com/issue/PY-57823" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/PY-57823</a></li>
</ul>
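<p>(A minimal workaround sketch, assuming the project root is one directory above <code>module_2</code>: appending it to <code>sys.path</code> inside the notebook makes <code>module_1</code> importable regardless of the IDE issue above.)</p>
<pre><code>import sys
from pathlib import Path

# add the project root (parent of module_1 and module_2) to the import path
sys.path.append(str(Path.cwd().parent))

from module_1.my_lib import method_x
</code></pre>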
|
<python><module><jupyter-notebook><pycharm><python-3.10>
|
2022-12-19 15:09:31
| 0
| 1,744
|
Azim
|
74,852,234
| 14,640,406
|
How to query the highest altitude within an area of interest in a DTED?
|
<p>I have a rectangular area of interest, and each vertex of this rectangle is defined by a pair of coordinates (latitude, longitude).</p>
<p>Parsing a DTED, how could I find the highest altitude within this rectangular region? I'm using the <a href="https://github.com/bbonenfant/dted" rel="nofollow noreferrer">dted library</a> for python, but I'm open to solutions using <a href="https://pypi.org/project/GDAL/" rel="nofollow noreferrer">GDAL</a> as well.</p>
<p>Thanks in advance.</p>
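<p>(A minimal GDAL sketch, assuming a single north-up DTED tile that covers the rectangle and that <code>lat_min/lat_max/lon_min/lon_max</code> hold the rectangle's bounds; the file name is hypothetical. It converts the bounds to a pixel window via the geotransform and takes the maximum of that window.)</p>
<pre><code>from osgeo import gdal
import numpy as np

ds = gdal.Open("n41_w071_1arc_v3.dt2")     # hypothetical DTED file name
gt = ds.GetGeoTransform()                  # (x_origin, x_res, 0, y_origin, 0, y_res) for north-up rasters

# convert lat/lon bounds to column/row indices
col_min = int((lon_min - gt[0]) / gt[1])
col_max = int((lon_max - gt[0]) / gt[1])
row_min = int((lat_max - gt[3]) / gt[5])   # y_res is negative, so the max latitude gives the smallest row
row_max = int((lat_min - gt[3]) / gt[5])

window = ds.GetRasterBand(1).ReadAsArray(col_min, row_min,
                                         col_max - col_min + 1,
                                         row_max - row_min + 1)
print(np.nanmax(window))                   # highest altitude inside the rectangle
</code></pre>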
|
<python><gis><gdal><gdal-python-bindings>
|
2022-12-19 15:00:03
| 1
| 309
|
carraro
|
74,852,225
| 11,861,874
|
AttributeError: module 'numpy' has no attribute 'typeDict'
|
<p>I am trying to install TensorFlow in Python. I am getting the following error message. I tried uninstalling and re-installing NumPy, but I am still getting the same error message. How can I resolve this issue?</p>
<pre><code>AttributeError: module 'numpy' has no attribute 'typeDict'
</code></pre>
|
<python><numpy><tensorflow>
|
2022-12-19 14:59:12
| 14
| 645
|
Add
|
74,852,137
| 8,224,266
|
How to read csv with multi row-column data with Pandas
|
<p>I have csv files with thousands of records intended for import somewhere else, and the format they're in is easy to look at but not easy to read in for importing. Here is a sample of the record structure:</p>
<p><a href="https://i.sstatic.net/eUH3H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eUH3H.png" alt="File 1" /></a></p>
<p>These files have a single record with columns in multiple rows below them. As you can imagine, this is a bit tricky to read as one would expect one header for all records.</p>
<p>My question is, is this possible to read with pandas's <code>pandas.read_csv</code>?</p>
<p>I've tried <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">Reading this</a> documentation on parameters we can pass to <code>read_csv</code> but I can't see one that I can make use of.</p>
<p>If pandas doesn't work for this kind of format, what would you recommend?</p>
|
<python><pandas><csv>
|
2022-12-19 14:52:51
| 0
| 1,249
|
Redgren Grumbholdt
|
74,852,123
| 12,366,110
|
Duplicate rows according to a function of two columns
|
<p>My initial example dataframe is in the following format:</p>
<pre><code>>>> import pandas as pd
>>> d = {'n': ['one', 'two', 'three', 'four'],
'initial': [3, 4, 10, 10],
'final': [3, 7, 11, 7],}
>>> df = pd.DataFrame(d)
>>> df
n initial final
0 one 3 3
1 two 4 7
2 three 10 11
3 four 10 7
</code></pre>
<p>What I hope to achieve is to duplicate the values in the <code>n</code> column a number of times corresponding to the values between those in the <code>initial</code> and <code>final</code> columns.</p>
<p>For example, in the first row, <code>initial</code> and <code>final</code> hold the same value, so there should be one instance of <code>'one'</code> in the output dataframe's <code>n</code> column. For the second row, <code>initial</code> and <code>final</code> are three numbers apart, so there should be four repetitions of <code>'two'</code>, and so on. If <code>final</code> is less than <code>initial</code>, there should be no instances of the value in <code>n</code> in the output.</p>
<p>There should also be a <code>count</code> column which counts up from the value in the <code>initial</code> column to the value in the <code>final</code> column. My expected output is as follows:</p>
<pre><code> n count
0 one 3
1 two 4
2 two 5
3 two 6
4 two 7
5 three 10
6 three 11
</code></pre>
<p>I've tried using <code>reindex</code> with a new index based on <code>df.final - df.initial + 1</code>, but this does not handle the negative values as in the fourth row of the example dataframe.</p>
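<p>(A minimal sketch using <code>explode</code>: build the list of counts per row, explode it, and drop the rows whose range is empty, i.e. where <code>final</code> is less than <code>initial</code>.)</p>
<pre><code>out = (
    df.assign(count=[list(range(i, f + 1)) for i, f in zip(df['initial'], df['final'])])
      .explode('count')
      .dropna(subset=['count'])        # rows with final < initial produce empty lists -> NaN
      [['n', 'count']]
      .reset_index(drop=True)
)
</code></pre>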
|
<python><pandas><dataframe>
|
2022-12-19 14:51:40
| 2
| 14,636
|
CDJB
|
74,852,107
| 4,327,368
|
Pytorch Linear regression 1x1d, consistently wrong slope
|
<p>I am mastering PyTorch here, and decided to implement a very simple 1-to-1 linear regression, from height to weight.</p>
<p>Got dataset: <a href="https://www.kaggle.com/datasets/mustafaali96/weight-height" rel="nofollow noreferrer">https://www.kaggle.com/datasets/mustafaali96/weight-height</a> but any other would do nicely.</p>
<p>Lets import libraries and information about females:</p>
<pre><code>import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv('weight-height.csv',sep=',')
#https://www.kaggle.com/datasets/mustafaali96/weight-height
height_f=df[df['Gender']=='Female']['Height'].to_numpy()
weight_f=df[df['Gender']=='Female']['Weight'].to_numpy()
plt.scatter(height_f, weight_f, c ="red",alpha=0.1)
plt.show()
</code></pre>
<p>Which gives nice scatter of measured females:
<a href="https://i.sstatic.net/58B6a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/58B6a.png" alt="distribution" /></a></p>
<p>So far, so good.</p>
<p>Lets make Dataloader:</p>
<pre><code>class Data(Dataset):
def __init__(self, X: np.ndarray, y: np.ndarray) -> None:
# need to convert float64 to float32 else
# will get the following error
# RuntimeError: expected scalar type Double but found Float
self.X = torch.from_numpy(X.reshape(-1, 1).astype(np.float32))
self.y = torch.from_numpy(y.reshape(-1, 1).astype(np.float32))
self.len = self.X.shape[0]
def __getitem__(self, index: int) -> tuple:
return self.X[index], self.y[index]
def __len__(self) -> int:
return self.len
traindata = Data(height_f, weight_f)
batch_size = 500
num_workers = 2
trainloader = DataLoader(traindata,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers)
</code></pre>
<p>...linear regression model...</p>
<pre><code>class linearRegression(torch.nn.Module):
def __init__(self, inputSize, outputSize):
super(linearRegression, self).__init__()
self.linear = torch.nn.Linear(inputSize, outputSize)
def forward(self, x):
out = self.linear(x)
return out
model = linearRegression(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.00001)
</code></pre>
<p>.. lets train it:</p>
<pre><code>epochs=10
for epoch in range(epochs):
print(epoch)
for i, (inputs, labels) in enumerate(trainloader):
outputs=model(inputs)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
<p>This prints 0 through 9. Now let's see what our model gives:</p>
<pre><code>range_height_f=torch.linspace(height_f.min(),height_f.max(),150)
plt.scatter(height_f, weight_f, c ="red",alpha=0.1)
pred=model(range_height_f.reshape(-1, 1))
plt.scatter(range_height_f, pred.detach().numpy(), c ="green",alpha=0.1)
</code></pre>
<p>...
<a href="https://i.sstatic.net/Y5LHv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y5LHv.png" alt="wrong model" /></a></p>
<p>Why does it do this? Why the wrong slope? It is consistently the same wrong slope, I might add. Whatever I change (optimizer, batch size, epochs, females to males), it gives me this very wrong slope, and I really don't get why.</p>
<p>Edit 1: Added loss, here is plot
<a href="https://i.sstatic.net/hOGWy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hOGWy.png" alt="loss" /></a></p>
<p>Edit 2: I have decided to explore a bit, and made a regression with scikit-learn:</p>
<pre><code>from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
X_train, X_test, y_train, y_test = train_test_split(height_f, weight_f, test_size = 0.25)
regr = LinearRegression()
regr.fit(X_train.reshape(-1,1), y_train)
plt.scatter(height_f, weight_f, c ="red",alpha=0.1)
range_pred=regr.predict(range_height_f.reshape(-1, 1))
range_pred
plt.scatter(range_height_f, range_pred, c ="green",alpha=0.1)
</code></pre>
<p>which gives following regression, which looks nice:
<a href="https://i.sstatic.net/LqUf9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LqUf9.png" alt="skilearn regression" /></a></p>
<pre><code>t = torch.from_numpy(height_f.astype(np.float32))
p=regr.predict(t.reshape(-1,1))
p=torch.from_numpy(p).reshape(-1,1)
w= torch.from_numpy(weight_f.astype(np.float32)).reshape(-1,1)
print(criterion(p,w).item())
</code></pre>
<p>However, in this case the criterion is 100.65161998527695.</p>
<p>PyTorch, in its turn, converges to about 210.</p>
<p>Edit 3
Changed optimisation to Adam from SGD:</p>
<pre><code>#optimizer = torch.optim.SGD(model.parameters(), lr=0.00001)
optimizer = torch.optim.Adam(model.parameters(), lr=0.5)
</code></pre>
<p>lr is larger in this case, which yields interesting, but consistent result.
Here is loss:
<a href="https://i.sstatic.net/SYqWR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SYqWR.png" alt="Adam loss" /></a>,
And here is proposed regression:
<a href="https://i.sstatic.net/0iTFr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0iTFr.png" alt="Adam regression loss" /></a></p>
<p>And, here is log of loss criterion as well for Adam optimizer:
<a href="https://i.sstatic.net/vZWwI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vZWwI.png" alt="Last epochs" /></a></p>
|
<python><numpy><pytorch><linear-regression>
|
2022-12-19 14:50:44
| 4
| 304
|
Timo Junolainen
|
74,852,058
| 5,562,041
|
Does parser type conversion happen inside django tests?
|
<p>I have defined a custom argument parser</p>
<pre><code>from dateutil.relativedelta import relativedelta
def custom_parser(value):
# Do some actions with value
return relativedelta(...)
</code></pre>
<p>I then use this in a management command as</p>
<pre><code>parser.add_argument(
"--tes",
help=("blablaaa"),
type=custom_parser,
required=False,
default='15s',
)
</code></pre>
<p>Inside the <code>handle</code> method, <code>tes</code> is correctly converted if I call the management command from the terminal directly.</p>
<pre><code>def handle(self, *_args, **options):
tes = options['tes']
</code></pre>
<p>This is correctly converted to <code>relativedelta</code> when I run the command directly. However, if I run it using <code>call_command('mycommand', tes='20s')</code>, <code>options['tes']</code> is always a string. Why is it not being converted? I can't seem to find anything in the codebase that would explain it.</p>
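<p>(A sketch of a possible workaround, under the assumption that keyword options passed to <code>call_command</code> skip argparse and therefore the <code>type=</code> callable, whereas raw argument strings still go through the parser:)</p>
<pre><code># pass the option as a command-line style string so argparse applies the custom type
call_command('mycommand', '--tes=20s')
</code></pre>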
|
<python><django><argparse>
|
2022-12-19 14:46:00
| 0
| 2,249
|
E_K
|
74,851,990
| 1,357,340
|
Windows scripting can't import ctypes module under Python 3.11.1
|
<p>I just ran into a snag after upgrading to Python 3.11.1. Running Python 3.10.1 the <code>ctypes</code> module imports with no problems. Same under Windows scripting using the <code>pywin32</code> package. With 3.11.1 importing <code>ctypes</code> directly still works. However, under Windows scripting using the pywin32 package <code>import ctypes</code> fails with the following error output:</p>
<pre><code>E:\tmp\test2.pys(12, 0) Python ActiveX Scripting Engine: Traceback (most recent call last):
File "<Script Block >", line 12, in <module>
import ctypes
File "e:\Python\Lib\ctypes\__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
</code></pre>
<p>Anyone else stumbled onto this problem?</p>
|
<python>
|
2022-12-19 14:40:25
| 1
| 338
|
Bob Kline
|
74,851,950
| 12,131,472
|
How to calculate month-to-date and year-to-date averages by category in a dataframe?
|
<p>I have a dataframe of several categories of time series of one year which looks like this:</p>
<pre><code> category date price
0 A 2022-12-19 5
1 A 2022-12-16 5
2 A 2022-12-15 21
3 A 2022-12-14 21
4 A 2022-12-13 15
5 A 2022-12-12 18
6 B 2022-12-19 48
7 B 2022-12-16 92
8 B 2022-12-15 212
9 B 2022-12-14 185
10 B 2022-12-13 874
11 B 2022-12-12 51
12 C 2022-12-19 15
13 C 2022-12-16 65
14 C 2022-12-15 874
15 C 2022-12-14 485
16 C 2022-12-13 52
17 C 2022-12-12 99
</code></pre>
<p>I wish to calculate the month-to-date and year-to-date average price for each category, AND automatically keep the data for the last 2 working days for each category.</p>
<p>For example today is the 19th Dec, the YtD average would be the average prices from start Jan 2022 till today, while the MtD average would be the average prices starting 2022-12-01 till today.</p>
<p>Pandas doesn't seem to have a method to calculate the YtD and MtD average.</p>
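<p>(A minimal sketch, assuming an "as of" date of 2022-12-19: there is indeed no dedicated MtD/YtD method, but filtering on the period start and grouping by category gives both averages.)</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
today = pd.Timestamp('2022-12-19')

ytd = df[df['date'] >= today.replace(month=1, day=1)].groupby('category')['price'].mean()
mtd = df[df['date'] >= today.replace(day=1)].groupby('category')['price'].mean()

# last 2 rows per category by date (the sample dates are already working days)
last2 = df.sort_values('date').groupby('category').tail(2)
</code></pre>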
|
<python><pandas><time-series>
|
2022-12-19 14:37:20
| 1
| 447
|
neutralname
|
74,851,861
| 14,649,310
|
How to update a property for all dataclass objects in a list?
|
<p>I have a list of objects of the following type:</p>
<pre><code>@dataclass
class Feature:
name: str
active: bool
</code></pre>
<p>and my list is:</p>
<pre><code>features = [Feature("name1",False), Feature("name2",False), Feature("name3",True)]
</code></pre>
<p>I want to get back a list with all the features but switch their <code>active</code> property to <code>True</code>. I tried to use <code>map()</code> like this:</p>
<pre><code>active_features=list(map(lambda f: f.active=True,features))
</code></pre>
<p>but it gives me an error <code>expected parameter</code>. How can this be achieved?</p>
<p><em><strong>Note</strong></em> <em>I thought it was following from the example, but I guess I should have clarified. I want to do this with some short of inline method, without defining a new separate function as suggested from some of the answers, but maybe it cannot be done like this?</em></p>
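<p>(A minimal inline sketch: assignment is a statement and is not allowed inside a lambda, but <code>dataclasses.replace</code> returns a modified copy, which is an expression and therefore works with <code>map()</code> or a comprehension.)</p>
<pre><code>from dataclasses import replace

active_features = [replace(f, active=True) for f in features]
# or, with map:
active_features = list(map(lambda f: replace(f, active=True), features))
</code></pre>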
|
<python><python-dataclasses>
|
2022-12-19 14:30:01
| 6
| 4,999
|
KZiovas
|
74,851,810
| 1,752,251
|
Python Sentence Transformer - Get matching Sentence by index order
|
<p>I have a database table with lot of records. And I am comparing the sentence to find a best match.</p>
<p>Let's say the table contains 4 columns: id, sentence, info, updated_date. The data is as below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: center;">sentence</th>
<th style="text-align: center;">info</th>
<th style="text-align: right;">updated_info_date</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">19/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">Company Name</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">18/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">17/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">16/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">15/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">14/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">13/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: center;">What is the phone number of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">12/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">11/12/2022</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">What is the name of your company</td>
<td style="text-align: center;">some distinct info</td>
<td style="text-align: right;">10/12/2022</td>
</tr>
</tbody>
</table>
</div>
<p>I have converted these sentences to tensors.</p>
<p>And I am passing this as an example "What is the name of your company"(tensor) to match.</p>
<pre><code>sentence = "What is the name of your company" # in tensor format
cos_scores = util.pytorch_cos_sim(sentence, all_sentences_tensors)[0]
top_results = torch.topk(cos_scores, k=5)
or
top_results = np.argpartition(cos_scores, range(5))[0:5]
top_results does not return the top results index wise.
As the sentences are same, all will have a score of "1". And it returns the results arbitrarily.
</code></pre>
<p>What I want is to get the top 5 matches with the latest updated_date order or the index order.</p>
<p>Is this possible to achieve ?</p>
<p>Any suggestions ?</p>
|
<python><numpy><pytorch><sentence-transformers>
|
2022-12-19 14:25:07
| 1
| 391
|
Avi
|
74,851,701
| 6,346,514
|
Python, Reading Zip files of a subdirectory. Windows object is not iterable
|
<p>I am trying to loop through my subdirectories to read in my zip files. I am getting error <code>TypeError: 'WindowsPath' object is not iterable</code></p>
<p>What I am trying:</p>
<pre><code>path = Path("O:/Stack/Over/Flow/")
for p in path.rglob("*"):
print(p.name)
zip_files = (str(x) for x in Path(p.name).glob("*.zip"))
df = process_files(p) #function
</code></pre>
<p>What does work - when I go to the folder directly with my path:</p>
<pre><code>path = r'O:/Stack/Over/Flow/2022 - 10/'
zip_files = (str(x) for x in Path(path).glob("*.zip"))
df = process_files(zip_files)
</code></pre>
<p>Any help would be appreciated.</p>
<p>Directory structure is like:</p>
<pre><code> //Stack/Over/Flow/2022 - 10/Original.zip
//Stack/Over/Flow/2022 - 09/Next file.zip
</code></pre>
<p>function i call:</p>
<pre><code>from io import BytesIO
from pathlib import Path
from zipfile import ZipFile
import os
import pandas as pd
def process_files(files: list) -> pd.DataFrame:
file_mapping = {}
for file in files:
#data_mapping = pd.read_excel(BytesIO(ZipFile(file).read(Path(file).stem)), sheet_name=None)
archive = ZipFile(file)
# find file names in the archive which end in `.xls`, `.xlsx`, `.xlsb`, ...
files_in_archive = archive.namelist()
excel_files_in_archive = [
f for f in files_in_archive if Path(f).suffix[:4] == ".xls"
]
# ensure we only have one file (otherwise, loop or choose one somehow)
assert len(excel_files_in_archive) == 1
# read in data
data_mapping = pd.read_excel(
BytesIO(archive.read(excel_files_in_archive[0])),
sheet_name=None,
)
row_counts = []
for sheet in list(data_mapping.keys()):
row_counts.append(len(data_mapping.get(sheet)))
file_mapping.update({file: sum(row_counts)})
frame = pd.DataFrame([file_mapping]).transpose().reset_index()
frame.columns = ["file_name", "row_counts"]
return frame
</code></pre>
<p><strong>New : what I am trying</strong></p>
<pre><code>for root, dirs, files in os.walk(dir_path):
for file in files:
print(files)
if file.endswith('.zip'):
df = process_files(os.path.join(root, file))
print(df) #function
else:
print("nyeh")
</code></pre>
<p>This is returning files like <code>Original - All fields - 11012021 - 11302021.zip</code>, but then I get an error <code>OSError: [Errno 22] Invalid argument: '\\'</code></p>
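<p>(A minimal sketch, assuming the goal is simply to gather every zip under the root and pass the full paths to <code>process_files</code>: <code>rglob("*.zip")</code> already recurses through the subdirectories, so there is no need to glob again on <code>p.name</code>.)</p>
<pre><code>from pathlib import Path

path = Path("O:/Stack/Over/Flow/")
zip_files = [str(p) for p in path.rglob("*.zip")]   # full paths, found recursively
df = process_files(zip_files)
</code></pre>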
|
<python><pathlib>
|
2022-12-19 14:16:00
| 1
| 577
|
Jonnyboi
|
74,851,620
| 17,277,677
|
open json files in a loop - formatting problem
|
<p>I need to open files in my s3 bucket and those are the files:</p>
<p><a href="https://i.sstatic.net/0IvkL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0IvkL.png" alt="enter image description here" /></a></p>
<p>I want to apply some piece of code on each of them, hence I want to open them in a loop.</p>
<p>But I have a problem with formatting. The file numbers are between 1 and 999, and I cannot simply loop through the range like this:</p>
<pre><code>for i in range(1,1000):
file_to_predict = spark.read.json(f"s3a://mu_bucket/company_v20_dl/part-00{i}.gz")
</code></pre>
<p>Here i will be replaced with 1, 2, etc.; I would like it to be replaced with 001, 002, etc., padded to three digits (as the highest is 999). Do you perhaps know how to deal with such a case?</p>
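<p>(A minimal sketch: a format spec of <code>{i:03d}</code> zero-pads the number to three digits, so the loop produces part-00001 through part-00999 under the naming used in the snippet above.)</p>
<pre><code>for i in range(1, 1000):
    file_to_predict = spark.read.json(f"s3a://mu_bucket/company_v20_dl/part-00{i:03d}.gz")
</code></pre>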
<p>[EDIT] I am able to open a single file without unzipping it:</p>
<p><a href="https://i.sstatic.net/dko9b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dko9b.png" alt="enter image description here" /></a></p>
|
<python><loops><amazon-s3><amazon-sagemaker>
|
2022-12-19 14:08:53
| 1
| 313
|
Kas
|
74,851,500
| 14,579,051
|
How to use map function and multiprocessing to simplify this code and reduce time?
|
<p>Two separate queries.</p>
<p>1. I have 'm' raster files and 'n' vector files. I would like to use a map function (as in R) and iterate through the list of 'n' vector files for each of the 'm' raster files. I got the output by writing a separate for loop for each vector file.</p>
<p>2. As given below, I am using a for loop for each vector file. If I run it in a single script, I will be using only a single processor. Is it possible to use multiprocessing to reduce the time?</p>
<p>Here is the for loop:</p>
<pre><code>filenames_dat[i] is the raster input
</code></pre>
<pre><code>
df1 = gpd.read_file("input_path")
df2 = gpd.read_file("input_path")
for i in range(len(raster_path)):
array_name, trans_name = mask(filenames_dat[i], shapes=df1.geometry, crop=True, nodata=np.nan)
zs= zonal_stats(df1, array_name[0], affine=trans_name, stats=['mean','sum'], nodata=np.nan, all_touched=True)
df1['amg'+str(filenames[i])] = [x[('mean')] for x in zs]
df1['mpg'+str(filenames[i])] = [x[('sum')] for x in zs]
print(i)
df1csv = pd.DataFrame(df1)
df1csv.to_csv(cwd+'/rasteroutput/df1.csv', index = False)
for i in range(len(raster_path)):
array_name, trans_name = mask(filenames_dat[i], shapes=df2.geometry, crop=True, nodata=np.nan)
zs= zonal_stats(df2, array_name[0], affine=trans_name, stats=['mean','sum'], nodata=np.nan, all_touched=True)
df2['amg'+str(filenames[i])] = [x[('mean')] for x in zs]
df2['mpg'+str(filenames[i])] = [x[('sum')] for x in zs]
print(i)
df2csv = pd.DataFrame(df2)
df2csv.to_csv(cwd+'/rasteroutput/df2.csv', index = False)
</code></pre>
<p>Here is the function, which I have not used, as I am not sure how to use map with multiple arguments. 'i' is the index into the raster list. The poly2 function works for a single integer 'i' (i.e. i=1) but not when I pass 'i' as a list of indices; list(map(poly2,lst,df)) shows an error. I was looking for something similar to map2df in R.</p>
<pre><code>def poly2(i,df):## i is for year
df = df
array_name, trans_name = mask(filenames_dat[i], shapes=df.geometry, crop=True, nodata=np.nan)
zs= zonal_stats(df, array_name[0], affine=trans_name, stats=['mean','sum'], nodata=np.nan, all_touched=True)
df['amg'+str(filenames[i])] = [x[('mean')] for x in zs]
df['mpg'+str(filenames[i])] = [x[('sum')] for x in zs]
print(i)
lst=[]
for i in range(len(raster_path)):
lst.append(i)
poly2(i=1, df=df)
list(map(poly2,lst,df)) ## shows error.
</code></pre>
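<p>(A minimal sketch for the map-with-two-arguments part, using <code>functools.partial</code> to fix the dataframe argument; note that plain <code>map(poly2, lst, df)</code> iterates over the dataframe's column names, which is why it errors. A <code>multiprocessing.Pool</code> could run the per-raster work in parallel too, but since <code>poly2</code> mutates the dataframe in place, the results would then have to be returned and merged rather than relying on in-place updates.)</p>
<pre><code>from functools import partial

# apply poly2 to every raster index, always passing the same dataframe
list(map(partial(poly2, df=df1), range(len(raster_path))))
list(map(partial(poly2, df=df2), range(len(raster_path))))
</code></pre>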
|
<python><loops><multiprocessing><iteration>
|
2022-12-19 13:56:51
| 2
| 498
|
chris jude
|
74,851,448
| 19,336,534
|
Extract sentence from HTML using python
|
<p>I have extracted a component of interest from an HTML file using Python (<a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow noreferrer">BeautifulSoup</a>). My code:</p>
<pre><code>import pandas as pd
import numpy as np
from lxml import html
from html.parser import HTMLParser
from bs4 import BeautifulSoup
HTMLFile = open("/home/kospsych/Desktop/projects/dark_web/file", "r")
index = HTMLFile.read()
S = BeautifulSoup(index, 'lxml')
Tag = S.select_one('.inner')
print(Tag)
</code></pre>
<p>This prints the result of :</p>
<pre><code><div class="inner" id="msg_550811">Does anyone know if it takes a set length of time to be given verified vendor status by sending a signed PGP message to the admin (in stead of paying the vendor bond)?<br/><br/>I'm regularly on Agora but I want to join the Abraxas club as well.<br/><br/>Mindful-Shaman</div>
</code></pre>
<p>and of type:</p>
<pre><code><class 'bs4.element.Tag'>
</code></pre>
<p>I would like to somehow remove the div tag and the br tags and end up with just a string containing the above sentence. How could this be done efficiently?</p>
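<p>(A minimal sketch: <code>Tag.get_text()</code> returns the concatenated text of the element with the tags stripped.)</p>
<pre><code>text = Tag.get_text(separator=' ', strip=True)
print(text)
</code></pre>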
|
<python><python-3.x><beautifulsoup><html-parsing>
|
2022-12-19 13:51:44
| 1
| 551
|
Los
|
74,851,439
| 5,404,647
|
FluidSynth not available using scamp
|
<p>I have the following boilerplate code</p>
<pre><code>from scamp import *
s = Session()
s.tempo = 120
clarinet = s.new_part("clarinet")
</code></pre>
<p>When I run it, I get the error</p>
<pre><code>Traceback (most recent call last):
File "/home/norhther/Descargas/music.py", line 6, in <module>
clarinet = s.new_part("clarinet")
File "/home/norhther/.local/lib/python3.9/site-packages/scamp/instruments.py", line 184, in new_part
instrument.add_soundfont_playback(preset=preset, soundfont=soundfont, num_channels=num_channels,
File "/home/norhther/.local/lib/python3.9/site-packages/scamp/instruments.py", line 984, in add_soundfont_playback
SoundfontPlaybackImplementation(bank_and_preset=preset, soundfont=soundfont, num_channels=num_channels,
File "/home/norhther/.local/lib/python3.9/site-packages/scamp/playback_implementations.py", line 327, in __init__
SoundfontHost(
File "/home/norhther/.local/lib/python3.9/site-packages/scamp/_soundfont_host.py", line 176, in __init__
raise ModuleNotFoundError("FluidSynth not available.")
ModuleNotFoundError: FluidSynth not available.
</code></pre>
<p>I installed <code>FluidSynth</code> on my system (Ubuntu) and I can execute it with no problem. I also installed <code>pyFluidSynth</code> with <code>pip</code>.</p>
|
<python><python-3.x>
|
2022-12-19 13:51:30
| 2
| 622
|
Norhther
|
74,851,410
| 4,418,481
|
Python was not found when running pyspark in VSC
|
<p>I'm learning <code>Spark</code> now and have installed it on a Windows PC.</p>
<p>I used the following guide and to be honest, it all seems right.</p>
<p>Also, when I type <code>spark-shell</code> in my cmd it opens <code>scala</code>, and when I type <code>pyspark</code> it opens the Python shell as I would expect.</p>
<p>However, I tried to run the following simple code in Visual Studio Code and I get the following errors:</p>
<pre><code>from pyspark import SparkContext
sc = SparkContext()
textFile = sc.textFile('example.txt')
print(textFile.count())
</code></pre>
<p>Error (happens when it reaches <code>textFile.count()</code>):</p>
<pre><code>Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent failure: Lost task 1.0 in stage 0.0 (TID 1) (XXX.lan executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
.
.
.
Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.
</code></pre>
<ol>
<li>I have had Python installed on my machine for ages and all works well.</li>
<li>All Python paths seem to be correct.</li>
<li>Added all HADOOP/SPARK paths as mentioned in the guide.</li>
<li>Opened the Windows App execution aliases and Python is not even these (App installed python.exe, App installed python3.exe)</li>
</ol>
<p>Any reason it happens?</p>
<p><strong>EDIT:</strong></p>
<pre><code>import findspark
findspark.init()
</code></pre>
<p>solves the problem, but is there a way to solve it so I won't need to use that in my code?</p>
|
<python><apache-spark><visual-studio-code><pyspark>
|
2022-12-19 13:48:29
| 0
| 1,859
|
Ben
|
74,851,090
| 18,504,344
|
Is it possible to set activeforeground/activebackground using customtkinter?
|
<p>I am able to use activeforeground and activebackground to change the background/text color when a button is clicked in tkinter.</p>
<p>Is it possible to do the same thing in customtkinter as well? I have checked the website to see if similar functionality is available; however, I could not see any.</p>
<p>Thanks in advance.</p>
|
<python><tkinter><button><frontend><customtkinter>
|
2022-12-19 13:18:04
| 1
| 483
|
Baris Ozensel
|
74,850,922
| 10,748,412
|
Django serializer returns empty list
|
<p>I have a class-based view that returns all the data in the table. But while accessing the URL all I get is an empty list.</p>
<p>models.py</p>
<pre><code>from django.db import models
class EmployeeModel(models.Model):
EmpID = models.IntegerField(primary_key=True)
EmpName = models.CharField(max_length=100)
Email = models.CharField(max_length=100)
Salary = models.FloatField()
class Meta:
verbose_name = 'employeetable'
</code></pre>
<p>views.py</p>
<pre><code>from rest_framework.views import APIView
from rest_framework.response import Response
from .models import EmployeeModel
from .serializers import EmployeeSerialize
class EmployeeTable(APIView):
def get(self,request):
emp_obj = EmployeeModel.objects.all()
empserializer = EmployeeSerialize(emp_obj,many=True)
return Response(empserializer.data)
</code></pre>
<p>serializers.py</p>
<pre><code>from rest_framework import serializers
from .models import EmployeeModel
class EmployeeSerialize(serializers.ModelSerializer):
class Meta:
model = EmployeeModel
fields = '__all__'
</code></pre>
<p>urls.py</p>
<pre><code>from django.contrib import admin
from django.urls import path, include
from .views import EmployeeTable, transformer_list
urlpatterns = [
path('display/',EmployeeTable.as_view()),
]
</code></pre>
<p>The table has 5 rows. It is not empty. I want to serialize all 5 rows.</p>
|
<python><django><serialization><django-rest-framework><orm>
|
2022-12-19 13:03:14
| 1
| 365
|
ReaL_HyDRA
|
74,850,816
| 12,730,406
|
Do we One Hot Encode (create Dummy Variables) before or after Train/Test Split?
|
<p>I've seen quite a lot of conflicting views on whether one-hot encoding (dummy variable creation) should be done before or after the training/test split.</p>
<p>Responses seem to state that one-hot encoding before leads to "data leakage".</p>
<p>This example states it's industry norm to do one-hot encoding on the entire data before training/test split:</p>
<p><a href="https://stackoverflow.com/questions/59084770/one-hot-encoder-what-is-the-industry-norm-to-encode-before-train-split-or-after">Industry Example</a></p>
<p>This example from kaggle states that it should be done after the training/test split to avoid data leakage:</p>
<p><a href="https://www.kaggle.com/discussions/getting-started/104651" rel="nofollow noreferrer">kaggle response - after split</a></p>
<p>My question is the following;</p>
<ol>
<li>Do we perform one-hot encoding before or after the Train/Test Split?</li>
<li>Where is the data leakage occuring in the following example?</li>
</ol>
<p>If we take the following example, we have two columns - <strong>web_views and website</strong> (non-ordinal categorical feature) (assuming we are one-hot encoding across the entire column, not dropping any dummies)</p>
<p>Our dataframe:</p>
<pre><code>import pandas as pd
from sklearn.model_selection import train_test_split
import numpy as np

df = pd.DataFrame({'web_views': [100,200,300,400],
'website': ['Youtube','Facebook','Instagram', 'Google']})
</code></pre>
<h1><strong>Scenario 1: One-Hot Encoding/Dummy Variables before splitting into Train/Test</strong>:</h1>
<pre><code>np.random.seed(123)
df_before_split = pd.concat([df.drop('website', axis = 1), pd.get_dummies(df['website'])], axis=1)
# create your X and y dataframes
X_before_split = df_before_split.drop('web_views', axis = 1)
y_before_split = df_before_split['web_views']
# perform train test split
X_train_before_split, X_test_before_split, y_train_before_split, y_test_before_split = train_test_split(X_before_split, y_before_split, test_size = 0.20)
</code></pre>
<p>Now viewing the dataframes we have:</p>
<pre><code># view X train dataset (this is encoding before split)
X_train_before_split
</code></pre>
<p>and then for test</p>
<pre><code># View X test dataset dataset (this is encoding before split)
X_test_before_split
</code></pre>
<h1><strong>Scenario 2: One-Hot Encoding/Dummy Variables AFTER splitting into Train/Test</strong>:</h1>
<pre><code># Perform One Hot encoding after the train/test split instead
X = df.drop('web_views', axis = 1)
y = df['web_views']
# perform data split:
np.random.seed(123)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20)
# perform one hot encoding on the train and test dataset datasets:
X_train = pd.concat([X_train.drop('website', axis = 1), pd.get_dummies(X_train['website'])], axis=1)
X_test = pd.concat([X_test.drop('website', axis = 1), pd.get_dummies(X_test['website'])], axis=1)
</code></pre>
<p>Viewing the X_train and X_test dataframes:</p>
<pre><code># encode after train/test split - train dataframe
X_train
# encode after train/test split - test dataframe
X_test
</code></pre>
<h1>Performing Linear Regression Modelling</h1>
<p>Now that we have split our data to demonstrate we will create a simple linear model:</p>
<pre><code>from sklearn.linear_model import LinearRegression
</code></pre>
<p><strong>Before split linear model</strong></p>
<pre><code>regressor_before_split = LinearRegression()
regressor_before_split.fit(X_train_before_split, y_train_before_split)
y_pred_before_split = regressor_before_split.predict(X_test_before_split)
y_pred_before_split
</code></pre>
<p><strong>y_pred_before_split</strong> returns a predicting value what we would expect.</p>
<p><strong>After split linear model</strong></p>
<pre><code>regressor_after_split = LinearRegression()
regressor_after_split.fit(X_train, y_train)
y_pred_after_split = regressor_after_split.predict(X_test)
y_pred_after_split
</code></pre>
<h2>Error message from Scenario 2:</h2>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-92-c63978a198c8> in <module>()
2 regressor_after_split.fit(X_train, y_train)
3
----> 4 y_pred_after_split = regressor_after_split.predict(X_test)
5 y_pred_after_split
C:\Anaconda3\lib\site-packages\sklearn\linear_model\base.py in predict(self, X)
254 Returns predicted values.
255 """
--> 256 return self._decision_function(X)
257
258 _preprocess_data = staticmethod(_preprocess_data)
C:\Anaconda3\lib\site-packages\sklearn\linear_model\base.py in _decision_function(self, X)
239 X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
240 return safe_sparse_dot(X, self.coef_.T,
--> 241 dense_output=True) + self.intercept_
242
243 def predict(self, X):
C:\Anaconda3\lib\site-packages\sklearn\utils\extmath.py in safe_sparse_dot(a, b, dense_output)
138 return ret
139 else:
--> 140 return np.dot(a, b)
141
142
<__array_function__ internals> in dot(*args, **kwargs)
ValueError: shapes (1,1) and (3,) not aligned: 1 (dim 1) != 3 (dim 0)
</code></pre>
<p>My thoughts:</p>
<ol>
<li>Encoding with dummies before splitting ensures that the test data we pass in (e.g. X_test) to make predictions has the same shape as the training data the model was trained on, so the model understands how to predict values when it encounters these features. This is unlike encoding after splitting, since then the X_test data has only one feature to make predictions with, whereas X_train has 3 features.</li>
<li>Maybe I've introduced data leakage?</li>
</ol>
<p>I'd be happy for someone to correct me if I've got things wrong or misinterpreted anything, but I'm stuck scratching my head over whether to encode before or after splitting!</p>
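<p>(For reference, a minimal sketch of the usual leak-free pattern with scikit-learn: split first, fit the encoder on the training rows only, and apply the already-fitted encoder to the test rows, so the feature space is fixed by the training data.)</p>
<pre><code>from sklearn.preprocessing import OneHotEncoder

X_train, X_test, y_train, y_test = train_test_split(df[['website']], df['web_views'], test_size=0.20)

enc = OneHotEncoder(handle_unknown='ignore')
X_train_enc = enc.fit_transform(X_train)      # fit on training data only
X_test_enc = enc.transform(X_test)            # reuse the same columns for the test data
</code></pre>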
|
<python><pandas><machine-learning><statistics>
|
2022-12-19 12:53:50
| 0
| 1,121
|
Beans On Toast
|
74,850,738
| 16,727,671
|
How to edit a file with multiple YAML documents in Python
|
<p>I have the following YAML file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs
namespace: test
labels:
app: hello-world
spec:
selector:
matchLabels:
app: hello-world
replicas: 100
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-world
image: test/first:latest
ports:
- containerPort: 80
resources:
limits:
memory: 2500Mi
cpu: "2500m"
requests:
memory: 12Mi
cpu: "80m"
---
apiVersion: v1
kind: Service
metadata:
name: nodejs
spec:
selector:
app: hello-world
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30082
type: NodePort
</code></pre>
<p>I need to edit the YAML file using Python. I have tried the code below, but it is not working for a file with multiple YAML documents, as you can see in the image below:
<a href="https://i.sstatic.net/Rsrd0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rsrd0.png" alt="enter image description here" /></a></p>
<pre><code>import ruamel.yaml
yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.explicit_start = True
with open(r"D:\deployment.yml") as stream:
data = yaml.load_all(stream)
test = data[0]['metadata']
test.update(dict(name="Tom1"))
test.labels(dict(name="Tom1"))
test = data['spec']
test.update(dict(name="sfsdf"))
with open(r"D:\deploymentCopy.yml", 'wb') as stream:
yaml.dump(data, stream)
</code></pre>
<p>You can refer to this link for more info: <a href="https://stackoverflow.com/questions/54803496/python-replacing-a-string-in-a-yaml-file">Python: Replacing a String in a YAML file</a></p>
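<p>(A minimal sketch of one likely fix, assuming ruamel.yaml: <code>load_all()</code> returns a generator, so it has to be materialized into a list before indexing, and the documents are written back with <code>dump_all()</code>.)</p>
<pre><code>import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.explicit_start = True

with open(r"D:\deployment.yml") as stream:
    docs = list(yaml.load_all(stream))          # materialize the generator

docs[0]['metadata'].update(dict(name="Tom1"))   # edit the Deployment document
docs[1]['metadata'].update(dict(name="Tom1"))   # edit the Service document

with open(r"D:\deploymentCopy.yml", 'w') as stream:
    yaml.dump_all(docs, stream)
</code></pre>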
|
<python><kubernetes><yaml><ruamel.yaml><multi-document>
|
2022-12-19 12:46:45
| 2
| 448
|
microset
|
74,850,673
| 5,638,904
|
FastAPI: form-data name with a dot
|
<p>I have documentation. Form-data names have dots.</p>
<p><a href="https://i.sstatic.net/0Xb1H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Xb1H.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/IUh4i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IUh4i.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/k9XLn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k9XLn.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/CmKwE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CmKwE.png" alt="enter image description here" /></a></p>
<p>This code doesn't work:</p>
<pre><code>from fastapi import FastAPI, File, UploadFile
app = FastAPI()
@app.post('/test')
async def test(anpr: UploadFile = File(...),
licensePlatePicture: UploadFile = File(...),
detectionPicture: UploadFile = File(...)
):
''''''
return None
</code></pre>
<p><a href="https://i.sstatic.net/cKvXR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cKvXR.png" alt="enter image description here" /></a></p>
<p><strong>Question: What if we have a form-data name with a dot?</strong></p>
|
<python><multipartform-data><fastapi><hikvision>
|
2022-12-19 12:40:18
| 1
| 812
|
Alexey Golyshev
|
74,850,650
| 17,158,703
|
Error in replacing 'hour' and 'minute' of pandas TimeStamp
|
<p>I'm creating a datetime variable with pandas using <code>pd.to_datetime()</code>
The referenced dataframe only has the date (e.g. 31/12/2023) so the function returns it with time 00:00:00 (e.g. 31/12/2023 00:00:00) and now I want to set the time value individually with the <code>replace()</code> function following the examples shown in these two SO posts (<a href="https://stackoverflow.com/questions/12468823/python-datetime-setting-fixed-hour-and-minute-after-using-strptime-to-get-day">ex1</a>, <a href="https://stackoverflow.com/questions/26882499/reset-time-part-of-a-pandas-timestamp">ex2</a>), but that leads to an error: <em>TypeError: replace() got an unexpected keyword argument 'hour'</em></p>
<p>Here is the code:</p>
<pre><code>end = pd.to_datetime(df_end_date, dayfirst=True).replace(hour=23, minute=59)
</code></pre>
<p>The expression <em>df_end_date</em> has a single value, see screenshot (1) below or <a href="https://i.sstatic.net/NBghn.png" rel="nofollow noreferrer">here</a>.</p>
<p>The complete error message is shown in screenshot (2) below or <a href="https://i.sstatic.net/h11sa.png" rel="nofollow noreferrer">here</a>.</p>
<p>Screenshot (1):</p>
<p><a href="https://i.sstatic.net/v23S4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v23S4.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/NBghn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NBghn.png" alt="enter image description here" /></a></p>
<p>Screenshot (2):</p>
<p><a href="https://i.sstatic.net/h11sa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h11sa.png" alt="enter image description here" /></a></p>
|
<python><pandas><datetime>
|
2022-12-19 12:38:15
| 1
| 823
|
Dattel Klauber
|
74,850,561
| 7,437,143
|
Python 3.10.8 TypeError: TypedDict does not support instance and class checks
|
<p>After having created the TypedDict, I am experiencing the following error:</p>
<blockquote>
<p>TypeError: TypedDict does not support instance and class checks</p>
</blockquote>
<p>Below is a MWE</p>
<pre class="lang-py prettyprint-override"><code>from typeguard import typechecked
import sys
from typing import Dict, List, Union
if sys.version_info < (3, 11):
from typing_extensions import NotRequired, TypedDict
else:
from typing import NotRequired
class Run_config(TypedDict):
adaptation: Union[None, Dict]
algorithm: Dict
# Other attributes...
@typechecked
def some_function(some_int: int, run_config: Run_config) -> Run_config:
"""Returns a dict"""
print(f'some_int={some_int}')
return run_config
some_run_config: Run_config = {
"adaptation": None,
"algorithm": {"hi":2},
# Other items...
}
some_int: int = 5
some_function(some_int, some_run_config)
</code></pre>
<p>That throws as output the above error. The stacktrace is:</p>
<pre><code>$python -m src.mwe
hello world
Traceback (most recent call last):
File "/home/name/anaconda/envs/snncompare/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/name/anaconda/envs/snncompare/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/name/git/snn/mwe/src/mwe/__main__.py", line 50, in <module>
some_function(some_int,some_run_config)
File "/home/name/anaconda/envs/snncompare/lib/python3.10/site-packages/typeguard/__init__.py", line 1032, in wrapper
check_argument_types(memo)
File "/home/name/anaconda/envs/snncompare/lib/python3.10/site-packages/typeguard/__init__.py", line 875, in check_argument_types
raise TypeError(*exc.args) from None
TypeError: TypedDict does not support instance and class checks
</code></pre>
<h2>Question</h2>
<p>Why does this error occur, and how can I resolve it?</p>
|
<python><typeddict>
|
2022-12-19 12:30:09
| 0
| 2,887
|
a.t.
|
74,850,553
| 17,903,744
|
Is there a way to refresh a matplotlib cursor on click?
|
<p>I'm plotting around 250k points with matplotlib, so naturally when I'm moving my mouse to use/refresh a cursor widget, it lags a lot. Therefore, I was looking for a way to optimize the use of this widget, and I thought of refreshing the cursor on click to reduce the number of freezes.
I saw <a href="https://matplotlib.org/stable/gallery/event_handling/coords_demo.html" rel="nofollow noreferrer">this extract from the matplotlib documentation</a> and <a href="https://www.tutorialspoint.com/matplotlib-how-to-show-the-coordinates-of-a-point-upon-mouse-click" rel="nofollow noreferrer">other examples of click events</a>, however I didn't manage to find more information about specifically linking the cursor refresh to the mouse click event.</p>
<p>Is it even possible?</p>
<p>Screenshot of the graph and the cursor:
<a href="https://i.sstatic.net/rZUb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rZUb8.png" alt="graph & cursor screenshot " /></a></p>
<p>Code used to plot the graph and add the cursor:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib.widgets import Cursor
fig, ax = plt.subplots(figsize=(20, 8), num="Original Signal")
thismanager = plt.get_current_fig_manager()
thismanager.window.wm_iconbitmap("icon.ico")
thismanager = plt.get_current_fig_manager().window.state('zoomed')
plt.plot(Time, Ampl)
plt.xlabel("Time")
plt.ylabel("Amplitude")
plt.tight_layout()
plt.savefig(filepath[:-4] + "_OriginalSignal.jpeg")
cursor = Cursor(ax, color='r', horizOn=True, vertOn=True)
print('Original Signal file created: "', filepath[:-4], '_OriginalSignal.jpeg".', sep="")
</code></pre>
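<p>(A minimal sketch of the click-driven alternative, replacing the always-on <code>Cursor</code> widget with a crosshair that is only redrawn on <code>button_press_event</code>:)</p>
<pre><code># draw the crosshair only when the mouse is clicked, instead of on every mouse move
hline = ax.axhline(color='r', lw=0.8, visible=False)
vline = ax.axvline(color='r', lw=0.8, visible=False)

def on_click(event):
    if event.inaxes is ax:
        hline.set_ydata([event.ydata, event.ydata])
        vline.set_xdata([event.xdata, event.xdata])
        hline.set_visible(True)
        vline.set_visible(True)
        fig.canvas.draw_idle()

fig.canvas.mpl_connect('button_press_event', on_click)
plt.show()
</code></pre>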
|
<python><matplotlib>
|
2022-12-19 12:29:17
| 2
| 345
|
Guillaume G
|
74,850,526
| 5,786,023
|
pandas read_json from s3 with chunksize option returns single row multiple columns dataframe
|
<p>I have a json file in s3 (with >100 records); this is a sample of the json file format:</p>
<pre><code>[{
"data": {
"a": "hello"
},
"details": {
"b": "hello1"
},
"dtype": "SP"
},
{
"data": {
"a": "hello2"
},
"details": {
"b": "hello3"
},
"dtype": "SP"
}]
</code></pre>
<p>I use <strong>aws wrangler</strong> to read_json using boto3, I get the right format of dataframe.</p>
<pre><code> data details dtype
0 {'a': 'hello'} {'b': 'hello1'} SP
1 {'a': 'hello2'} {'b': 'hello3'} SP
</code></pre>
<p>If I use the <em><strong>chunksize</strong></em> option along with <em><strong>lines=True</strong></em>, I get the dataframe in a single row, multiple column format.</p>
<pre><code> 0 1
0 {'data': {'a': 'hello'}, 'details': {'b': 'hel... {'data': {'a': 'hello2'}, 'details': {'b': 'he...
</code></pre>
<p>Is there a way to still get the right format of dataframe (multiple rows) with the size mentioned by <em><strong>chunksize</strong></em>?</p>
<p>Update: I have tried <em><strong>nrows</strong></em> instead of <em><strong>chunksize</strong></em>. It didn't help; it gives me the same output as <em><strong>chunksize</strong></em>.</p>
<p>code I am using to read json file from s3:</p>
<pre><code>import boto3
import botocore
import awswrangler as wr
client = boto3.session.Session(
aws_access_key_id=s3_access_key,
aws_secret_access_key=s3_secret_key,
region_name=s3_region,
)
def read_json_s3(path, client, **args):
try:
return wr.s3.read_json(path=path, boto3_session=client, **args)
except botocore.exceptions.ClientError as err:
raise err
</code></pre>
<p>I am sending <code>chunksize=1000, lines=True</code> as args</p>
|
<python><json><pandas><dataframe><amazon-s3>
|
2022-12-19 12:27:17
| 0
| 753
|
arevur
|
74,850,461
| 10,428,677
|
Incremental group by from a specific year onwards in Pandas
|
<p>I have a dataframe that looks like this:</p>
<pre><code>df_dict = {'country': ['Japan','Japan','Japan','Japan','Japan','Japan','Japan', 'Greece','Greece','Greece','Greece','Greece','Greece','Greece'],
'year': [1970, 1982, 1999, 2014, 2017, 2018, 2021,1981, 1987, 2002, 2015, 2018, 2019, 2021],
'value': [320, 416, 172, 652, 390, 570, 803, 144, 273, 129, 477, 831, 664,117]}
df = pd.DataFrame(df_dict)
country year value
0 Japan 1970 320
1 Japan 1982 416
2 Japan 1999 172
3 Japan 2014 652
4 Japan 2017 390
5 Japan 2018 570
6 Japan 2021 803
7 Greece 1981 144
8 Greece 1987 273
9 Greece 2002 129
10 Greece 2015 477
11 Greece 2018 831
12 Greece 2019 664
13 Greece 2021 117
</code></pre>
<p>I am trying to group the data by year from <code>2014</code> onwards, but I can't seem to get it right using <code>groupby(['country','year'])['value']</code></p>
<p>Practically I want to sum up the values for each <code>country</code> for each <code>year</code> greater than or equal to <code>2014</code>. So my expected output should look something like this:</p>
<pre><code> country year value
0 Japan 2014 1560
1 Japan 2015 1560
2 Japan 2016 1560
3 Japan 2017 1950
4 Japan 2018 2520
5 Japan 2019 2520
6 Japan 2020 2520
7 Japan 2021 3323
8 Greece 2014 546
9 Greece 2015 1023
10 Greece 2016 1023
11 Greece 2017 1023
12 Greece 2018 1854
13 Greece 2019 2518
14 Greece 2020 2518
15 Greece 2021 2635
</code></pre>
<p>Where the value for <code>Japan</code> in <code>2014</code> is the sum of all previous values where <code>year <= 2014</code>, the value for <code>Japan</code> in <code>2015</code> is the sum of all previous values where <code>year <= 2015</code>, and so on. The last year I would like to sum is <code>2021</code> for all countries in the dataframe.</p>
|
<python><pandas>
|
2022-12-19 12:21:56
| 2
| 590
|
A.N.
|
74,850,440
| 19,155,645
|
pandas add a value to new column to each row in a group
|
<p>I have a pandas dataframe with several columns. for examlpe:</p>
<pre><code> # name abbr country
0 454 Liverpool UCL England
1 454 Bayern Munich UCL Germany
2 223 Manchester United UEL England
3 454 Manchester City UCL England
</code></pre>
<p>and I run a function using .groupby() - but then I want to add to each row of that group the value I calculated once.</p>
<p>The example code is here:</p>
<pre><code>def test_func(abbreviation):
if abbreviation == 'UCL':
return 'UEFA Champions League'
elif abbreviation == 'UEL':
return 'UEFA Europe Leauge'
data = [[454, 'Liverpool', 'UCL', 'England'], [454, 'Bayern Munich', 'UCL', 'Germany'], [223, 'Manchester United', 'UEL', 'England'], [454, 'Manchester City', 'UCL', 'England']]
df = pd.DataFrame(data, columns=['#','name','abbr', 'country'])
competition_df = df.groupby('#').first()
competition_df['competition'] = competition_df.apply(lambda row: test_func(row["abbr"]), axis=1)
</code></pre>
<p>and now I would like to add the value of "competition" to all the cases based on group in the original dataframe (df).</p>
<p>Is there a good way (using 'native' pandas) to do it without iterations and lists etc.?</p>
<hr />
<p>Edit 1:</p>
<p>The final output would then be the original dataframe (df) with the new column:</p>
<pre><code> # name abbr country competition
0 454 Liverpool UCL England UEFA Champions League
1 454 Bayern Munich UCL Germany UEFA Champions League
2 223 Manchester United UEL England UEFA Europe Leauge
3 454 Manchester City UCL England UEFA Champions League
</code></pre>
<hr />
<p>Edit 2:</p>
<p>I managed to get what I want by zipping, but its a very bad implementation and I am still wondering if I could do it better (and faster using some pandas functions) :</p>
<pre><code>zipped = zip(competition_df.index, competition_df['competition'])
df['competition'] = np.nan
for num, comp in zipped:
df.loc[df['#']==num, 'competition'] = comp
</code></pre>
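<p>For completeness, this is the kind of one-liner I was hoping exists (just a sketch of what I mean: mapping the group-level result back onto the original rows via the <code>#</code> key):</p>
<pre><code>df['competition'] = df['#'].map(competition_df['competition'])
</code></pre>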
|
<python><pandas><group-by>
|
2022-12-19 12:19:35
| 1
| 512
|
ArieAI
|
74,850,387
| 19,339,998
|
execute a specific method into sgx using gramine
|
<p>I have an application which uses gRPC (client.py and server.py), and I want to use Gramine in order to execute the service inside SGX.
How can I run a specific method, rather than the whole script, inside SGX using Gramine?
client.py:</p>
<pre><code>"""The Python implementation of the GRPC helloworld.Greeter client."""
from __future__ import print_function
import logging
import grpc
import helloworld_pb2
import helloworld_pb2_grpc
def run():
# NOTE(gRPC Python Team): .close() is possible on a channel and should be
# used in circumstances in which the with statement does not fit the needs
# of the code.
print("Will try to greet world ...")
with grpc.insecure_channel('localhost:50051') as channel:
stub = helloworld_pb2_grpc.GreeterStub(channel)
response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
print("Greeter client received: " + response.message)
if __name__ == '__main__':
logging.basicConfig()
run()
</code></pre>
<p>and server.py:</p>
<pre><code>from concurrent import futures
import logging
import grpc
import helloworld_pb2
import helloworld_pb2_grpc
class Greeter(helloworld_pb2_grpc.GreeterServicer):
def SayHello(self, request, context):
return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)
def serve():
port = '50051'
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
server.add_insecure_port('[::]:' + port)
server.start()
print("Server started, listening on " + port)
server.wait_for_termination()
if __name__ == '__main__':
logging.basicConfig()
serve()
</code></pre>
<p>Let's say I want to execute SayHello inside SGX when I run client.py.
Currently I am running <code>gramine-sgx ./python client.py</code>. Is that going to execute only the client inside SGX, or is it also going to run SayHello from server.py inside SGX?</p>
|
<python><sgx>
|
2022-12-19 12:15:14
| 1
| 341
|
sama
|
74,850,324
| 12,263,543
|
Deserializing Prometheus `remote_write` Protobuf output in Python
|
<p>I'm experimenting (for the first time) with Prometheus. I've setup Prometheus to send messages to a local flask server:</p>
<pre><code>remote_write:
- url: "http://localhost:5000/metric"
</code></pre>
<p>I'm able to read the incoming bytes, however, I'm not able to convert the incoming messages to any meaningful data.</p>
<p>I'm very new to Prometheus (and Protobuf!) so I'm not sure what the best approach is. I would rather not use a third party package, but want to <strong>learn and understand the Protobuf de/serialization myself</strong>.</p>
<p>I tried copying the <code>metrics.proto</code> definitions from the <a href="https://github.com/prometheus/prometheus/blob/main/prompb/io/prometheus/client/metrics.proto" rel="nofollow noreferrer">Prometheus GitHub</a> and compiling them with <code>protoc</code>. I tried importing the <code>metrics_pb2.py</code> file and parsing the incoming message:</p>
<pre class="lang-py prettyprint-override"><code>read_metric = metrics_pb2.Metric()
read_metric.ParseFromString(request.data)
</code></pre>
<p>I also tried using the <code>remote.proto</code> definitions (specifically <code>WriteRequest</code>) which also didn't work:</p>
<pre class="lang-py prettyprint-override"><code>read_metric = remote_pb2.WriteRequest()
read_metric.ParseFromString(request.data)
</code></pre>
<p>This results in:
<code>google.protobuf.message.DecodeError: Error parsing message</code></p>
<p>So I suspect that I'm using the wrong Protobuf definitions?</p>
<p>I would really appreciate any help & advice on this!</p>
<p>To provide some more context for what I'm attempting to accomplish:</p>
<p>I'm trying to stream data from multiple Prometheus instances to a message queue so they can be passed to a machine learning model.
I'm using online training with an active learning model, and I want the data to be (near) real-time. That's why I thought the <code>remote_write</code> functionality is the best approach rather than continuously scraping each instance. If you have any other ideas on how I can build this system, feel free to share - I've just been playing around with it for a couple days, so I'm open to any feedback!</p>
<p>ANSWER EDIT:</p>
<p>I had to first decompress the data using snappy, thanks larsks!:</p>
<pre class="lang-py prettyprint-override"><code>bytes = request.data
decompressed = snappy.uncompress(bytes)
read_metric = remote_pb2.WriteRequest()
read_metric.ParseFromString(decompressed)
</code></pre>
|
<python><protocol-buffers><prometheus>
|
2022-12-19 12:09:45
| 1
| 1,655
|
picklepick
|
74,850,236
| 12,366,110
|
How can I make NaN values sum to NaN rather than 0 when using df.resample?
|
<p>I have the following example dataframe:</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> d = {'date': pd.date_range(start='2022-12-09 00:00:00',
end='2022-12-09 02:50:00',
freq='10min'),
'amount': [np.nan]*6 + [1]*5 + [np.nan] +[2]*6}
>>> df = pd.DataFrame(d)
>>> df
date amount
0 2022-12-09 00:00:00 NaN
1 2022-12-09 00:10:00 NaN
2 2022-12-09 00:20:00 NaN
3 2022-12-09 00:30:00 NaN
4 2022-12-09 00:40:00 NaN
5 2022-12-09 00:50:00 NaN
6 2022-12-09 01:00:00 1.0
7 2022-12-09 01:10:00 1.0
8 2022-12-09 01:20:00 1.0
9 2022-12-09 01:30:00 1.0
10 2022-12-09 01:40:00 1.0
11 2022-12-09 01:50:00 NaN
12 2022-12-09 02:00:00 2.0
13 2022-12-09 02:10:00 2.0
14 2022-12-09 02:20:00 2.0
15 2022-12-09 02:30:00 2.0
16 2022-12-09 02:40:00 2.0
17 2022-12-09 02:50:00 2.0
</code></pre>
<p>I am trying to use <code>df.resample</code> on this dataframe to aggregate the columns by hour as follows:</p>
<pre><code>>>> df.resample(rule='H', on='date').agg({'amount': sum})
amount
date
2022-12-09 00:00:00 0.0
2022-12-09 01:00:00 5.0
2022-12-09 02:00:00 12.0
</code></pre>
<p>However, I would like to have hours which contain just <code>NaN</code> values to aggregate to <code>NaN</code> rather than <code>0</code>. Hours which contain a mix of <code>NaN</code> and numerical numbers should treat <code>NaN</code> as <code>0</code> as currently. My desired output is as follows:</p>
<pre><code> amount
date
2022-12-09 00:00:00 NaN
2022-12-09 01:00:00 5.0
2022-12-09 02:00:00 12.0
</code></pre>
<p>Is there any way to achieve this - ideally using <code>df.resample</code> - or otherwise?</p>
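<p>For reference, this is a sketch of the direction I've been looking at, assuming <code>min_count</code> behaves the same way under resampling as it does for a plain <code>Series.sum</code>:</p>
<pre><code>>>> df.resample(rule='H', on='date')['amount'].sum(min_count=1)
</code></pre>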
|
<python><pandas><pandas-resample>
|
2022-12-19 12:00:16
| 1
| 14,636
|
CDJB
|
74,850,167
| 1,745,291
|
Popen.wait never returning with docker-compose
|
<p>I am developing a wrapper around docker-compose with Python.
However, I struggle with Popen.</p>
<p>Here is how I launch it:</p>
<pre><code>import subprocess as sp
argList=['docker-compose', 'up']
env={'HOME': '/home/me/somewhere'}
p = sp.Popen(argList, env=env)
def handler(signum, frame):
p.send_signal(signum)
for s in (signal.SIGINT,):
signal.signal(s, handler) # to redirect Ctrl+C
p.wait()
</code></pre>
<p>Everything works fine, when I hit Ctrl+C, docker-compose kills gracelly the container, however, <code>p.wait()</code> never returns...</p>
<p>Any hint ?</p>
<p>NOTE: While writing the question, I thought I needed to check whether <code>p.wait()</code> actually returns and whether the block happens afterwards (it's the last instruction in the script). Adding a print after it results in the process exiting normally; any further hints on this behavior?</p>
|
<python><subprocess>
|
2022-12-19 11:54:11
| 1
| 3,937
|
hl037_
|
74,850,128
| 13,066,054
|
macros are not recognised in dbt
|
<pre><code>{{
config (
pre_hook = before_begin("{{audit_tbl_insert(1,'stg_news_sentiment_analysis_incr') }}"),
post_hook = after_commit("{{audit_tbl_update(1,'stg_news_sentiment_analysis_incr','dbt_development','news_sentiment_analysis') }}")
)
}}
select rd.news_id ,rd.title, rd.description, ns.sentiment from live_crawler_output_rss.rss_data rd
left join
live_crawler_output_rss.news_sentiment ns
on rd.news_id = ns.data_id limit 10000;
</code></pre>
<p>This is my model in dbt, which is configured with pre and post hooks that reference a macro to insert into and update the audit table.</p>
<p>my macro</p>
<pre><code>{ % macro audit_tbl_insert (model_id_no, model_name_txt) % }
{% set run_id_value = var('run_id') %}
insert into {{audit_schema_name}}.{{audit_table_name}} (run_id, model_id, model_name, status, start_time, last_updated_at)
values
({{run_id_value}}::bigint,{{model_id_no}}::bigint,{{model_name_txt}},'STARTED',current_timestamp,current_timestamp)
{% endmacro %}
</code></pre>
<p>This is the first time I'm using this macro, and I see the following error.</p>
<pre><code>Compilation Error in model stg_news_sentiment_analysis_incr
(models/staging/stg_news_sentiment_analysis_incr.sql)
'audit_tbl_insert' is undefined in macro run_hooks (macros/materializations/hooks.sql)
called by macro materialization_table_default (macros/materializations/models/table/table.sql) called by model stg_news_sentiment_analysis_incr
(models/staging/stg_news_sentiment_analysis_incr.sql).
This can happen when calling a macro that does not exist.
Check for typos and/or install package dependencies with "dbt deps".
</code></pre>
|
<python><etl><dbt>
|
2022-12-19 11:50:45
| 2
| 351
|
naga satish
|
74,850,101
| 4,248,850
|
Disable "live log setup" and "live log teardown" in pytest
|
<p>Is there an option (or combination of options) in <code>pytest</code> to view only "live log call" ?</p>
<p>I have to establish a handful of sessions in setup and close them in teardown, so a lot of noise from those logs, (changing them to debug is not going to help me as these logs are printed from another library)</p>
<pre><code>------- live log setup ---------
noisy logs
---------- live log call ------------
needed logs
--------- live log teardown ------------
noisy logs
</code></pre>
|
<python><python-3.x><pytest>
|
2022-12-19 11:48:23
| 0
| 1,922
|
Sam Daniel
|
74,850,029
| 7,800,760
|
Pyplot suptitle: can it word wrap a long title
|
<p>I am generating a networkx word graph and displaying it with pyplot as follows:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
IN_TXT = """
Giorgia Meloni festeggia i 10 anni di FdI e oggi arriverà la ciliegina sulla torta: l’accordo con gli alleati su una manovra che colpisce i più poveri. Annunciata una nuova stretta al reddito di cittadinanza per finanziare le pensioni minime: solo 600 euro e solo agli over 75
"""
</code></pre>
<p>then populate my networkx graph and when ready:</p>
<pre><code>nodelabels = nx.get_node_attributes(G, "lemma")
edgelabels = nx.get_edge_attributes(G, "label")
# and now display the graphs
nx.draw(G, pos, with_labels=True, labels=nodelabels, **NXDOPTS)
nx.draw_networkx_edge_labels(G, pos, edge_labels=edgelabels)
plt.margins(0.2)
plt.suptitle(sentence.text)
plt.show()
</code></pre>
<p>but this results in the suptitle extending far outside the figure:
<a href="https://i.sstatic.net/8mmPi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8mmPi.png" alt="network graph with very long title" /></a>
Is there a way to word wrap the title so it fits the viewing window?
If I manually stretch the window I can get the title to fit into it, but I would like to obtain this when the graph is first displayed.
I am not understanding the relationship between matplotlib and pyplot :(
Thank you.</p>
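<p>In case it helps clarify what I mean by word wrapping, this is the manual workaround sketch I'm considering (assuming a wrap width of 80 characters is acceptable):</p>
<pre><code>import textwrap

plt.suptitle("\n".join(textwrap.wrap(sentence.text, width=80)))
</code></pre>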
|
<python><matplotlib><networkx>
|
2022-12-19 11:42:45
| 1
| 1,231
|
Robert Alexander
|
74,849,965
| 16,708,111
|
Add values from a dictionary to a ManytoMany field
|
<pre><code>class GuestOrder(models.Model):
comment = models.CharField(max_length=400, blank=True, null=True)
guest = models.ForeignKey(Guest, on_delete=models.SET_NULL, null=True)
dish = models.ManyToManyField(Dish)
ingredient = models.ManyToManyField(Ingredient)
table = models.ForeignKey(Table, on_delete=models.CASCADE, blank=True, null=True)
</code></pre>
<p>I have a queryset that returns 3 instances of GuestOrder.</p>
<pre><code>guest_orders = GuestOrder.objects.filter(table=table)
<QuerySet [<GuestOrder: GuestOrder object (567)>, <GuestOrder: GuestOrder object (568)>, <GuestOrder: GuestOrder object (569)>]>
</code></pre>
<p>and I have a dictionary where values are dish instances.</p>
<pre><code>{
"guests":{
"23": [1, 2],
"24": [2],
"25": [3]
}
}
</code></pre>
<p>How do I set each of these values on a GuestOrder instance?</p>
<p>This is what I tried (the name of the dictionary is guests)</p>
<pre><code>for guestorder in guest_orders:
for key, value in guests.items():
guestorder.dish.set(value)
</code></pre>
<p>This only sets [3] as the dish</p>
<p>UPDATE</p>
<pre><code>for guestorder in guest_orders:
for key, value in guests.items():
pass
print(value)
guestorder.dish.set(value)
</code></pre>
<p>the result of the print is the following</p>
<pre><code>[3]
[3]
[3]
</code></pre>
<p>and I don't understand why. I need help.</p>
|
<python><django><django-rest-framework>
|
2022-12-19 11:37:53
| 2
| 444
|
Mike D Hovhannisyan
|
74,849,588
| 12,366,110
|
Simple expression parsing throws a TypeError referencing 'AssumptionKeys'
|
<p>I'm trying to get sympy to calculate the result of what should be a simple expression with three substitutions, but I'm coming up against an error I've not encountered before.</p>
<pre><code>>>> from sympy import sympify, symbols
>>> input_str = '100*Q+10*R+5*S+0.6*T'
>>> Q,R,S,T = symbols('Q R S T')
</code></pre>
<p>The above code works fine, but when I run the string though sympify, I get the following error:</p>
<pre><code>>>> sympify(input_str)
ValueError: Error from parse_expr with transformed code: "Integer (100 )*Q +Integer (10 )*Symbol ('R' )+Integer (5 )*S +Float ('0.6' )*Symbol ('T' )"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "...ipykernel_5652\212826607.py", line 1, in <cell line: 1>
sympify(input_str)
File "...\Python\Python310\site-packages\sympy\core\sympify.py", line 496, in sympify
expr = parse_expr(a, local_dict=locals, transformations=transformations, evaluate=evaluate)
File "...\Python\Python310\site-packages\sympy\parsing\sympy_parser.py", line 1101, in parse_expr
raise e from ValueError(f"Error from parse_expr with transformed code: {code!r}")
File "...\Python\Python310\site-packages\sympy\parsing\sympy_parser.py", line 1092, in parse_expr
rv = eval_expr(code, local_dict, global_dict)
File "...Python\Python310\site-packages\sympy\parsing\sympy_parser.py", line 907, in eval_expr
expr = eval(
File "<string>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'Integer' and 'AssumptionKeys'
</code></pre>
<p>What on earth could be causing this error with such a simple expression?</p>
|
<python><sympy>
|
2022-12-19 11:03:01
| 3
| 14,636
|
CDJB
|
74,849,543
| 544,542
|
Python ldap3 authenticate using mail or user id
|
<p>I am using the ldap3 library (<a href="https://ldap3.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://ldap3.readthedocs.io/en/latest/</a>) with Python and authenticating against LDAP</p>
<pre><code>conn = Connection(server, user='CN=person,OU=Service Accounts,DC=mydc,DC=mydomain,DC=co,DC=uk', password='Password123', auto_bind=True)
</code></pre>
<p>The below works but only because I know the <code>person</code> value. How would I set this up so someone can authenticate using their <code>mail</code> or user ID e.g. <code>forename.surname</code></p>
<p>At the moment they would need to use the <code>dn</code> form which of course no user will ever be likely to know</p>
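<p>Is a "search then bind" pattern the right idea here? A sketch of what I mean (the base DN, filter and attribute names are guesses for my directory):</p>
<pre><code># bind with the service account first
service_conn = Connection(server, user='CN=person,OU=Service Accounts,DC=mydc,DC=mydomain,DC=co,DC=uk',
                          password='Password123', auto_bind=True)

# look up the DN matching what the user actually typed (mail or user id)
service_conn.search('DC=mydc,DC=mydomain,DC=co,DC=uk',
                    '(|(mail=forename.surname@mydomain.co.uk)(sAMAccountName=forename.surname))',
                    attributes=['cn'])
user_dn = service_conn.entries[0].entry_dn

# then try to bind as that user with the password they supplied
user_conn = Connection(server, user=user_dn, password='their_password', auto_bind=True)
</code></pre>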
<p>Thanks</p>
|
<python><ldap>
|
2022-12-19 10:57:29
| 2
| 3,797
|
pee2pee
|
74,849,387
| 12,058,154
|
Is it secure to store credentials protected with dotenv on public repos on Github?
|
<p>I am using GitHub and upload Jupyter notebooks.
My goal is to showcase use cases of Cloud service providers like AWS, IBM or Heroku.</p>
<p>Therefore, I store user credentials on public repos on GitHub.
This allows me to execute the code on the Cloud platforms.</p>
<p>(Reacting to comments:
I am not storing the credentials on GitHub, but using the os.getenv function to get the SECRET_KEY from a .env file stored locally, which is added to .gitignore. Sorry for being unclear.)</p>
<p>I am using dotenv to secure my credentials.</p>
<p>I am following the method described here:</p>
<p><a href="https://www.askpython.com/python/python-dotenv-module" rel="nofollow noreferrer">Keep your secrets safe with Python-dotenv</a></p>
<p>The implementation works fine, but I want to ascertain that this is a secure method to protect critical credentials against hacks, or do I miss something?</p>
|
<python><github><jupyter-notebook><dotenv>
|
2022-12-19 10:43:41
| 1
| 335
|
Ormetrom2354
|
74,849,357
| 1,783,398
|
Python: bin a dictionary's values and then create another dictionary of bin membership
|
<p>I have a dictionary that has key as string and value a number, like this</p>
<pre><code>d = {'key1': 0.5, 'key2': 0.2, 'key3': 0.3, 'key4': 0.9, 'key5': 0.94, ...}
</code></pre>
<p>What I would like to do is</p>
<ol>
<li>bin the values (0.5, 0.2, ....) based on a fixed interval, say every 0.2 increment</li>
<li>produce another dictionary that allows me to look up the bin that a key resides in</li>
</ol>
<p>Namely, the final dictionary should look like</p>
<pre><code>d = {'key1': 3, 'key2': 1, 'key3': 2, 'key4': 5, 'key5': 5, ...}
</code></pre>
<p>The dictionary is very big, probably over 500k entries... what is the most efficient way of doing this?</p>
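<p>For reference, the simplest thing I could think of is a plain dict comprehension like the sketch below (assuming the bins are right-closed, so 0.2 falls into bin 1; I'm unsure how robust the floating-point division is at the exact boundaries):</p>
<pre><code>import math

binned = {k: math.ceil(v / 0.2) for k, v in d.items()}
# {'key1': 3, 'key2': 1, 'key3': 2, 'key4': 5, 'key5': 5, ...}
</code></pre>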
<p>Thanks</p>
|
<python><dictionary>
|
2022-12-19 10:40:41
| 1
| 2,570
|
Ziqi
|
74,849,356
| 1,928,054
|
Change deepest level of MultiIndex and append level
|
<p>Consider the following DataFrame:</p>
<pre><code>import numpy as np
import pandas as pd
arrays1 = [
["A", "A", "A", "B", "B", "B"],
["foo", "bar", "baz", "foo", "bar", "baz"],
]
tuples1 = list(zip(*arrays1))
index_values1 = pd.MultiIndex.from_tuples(tuples1)
df1 = pd.DataFrame(np.ones((6, 6)), index=index_values1, columns=index_values1)
print(df1)
A B
foo bar baz foo bar baz
A foo 1.0 1.0 1.0 1.0 1.0 1.0
bar 1.0 1.0 1.0 1.0 1.0 1.0
baz 1.0 1.0 1.0 1.0 1.0 1.0
B foo 1.0 1.0 1.0 1.0 1.0 1.0
bar 1.0 1.0 1.0 1.0 1.0 1.0
baz 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>Say I want to replace the deepest level of either indices, columns, or both; as well as add a level, according to the following mapping:</p>
<pre><code>d_idx = {
'foo': ('qux', 'one'),
'bar': ('quux', 'two'),
'baz': ('corge', 'three'),
}
</code></pre>
<p>Such that I get either:</p>
<pre><code> A B
foo bar baz foo bar baz
A qux one 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
B qux one 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>or</p>
<pre><code> A B
qux quux corge qux quux corge
one two three one two three
A foo 1.0 1.0 1.0 1.0 1.0 1.0
bar 1.0 1.0 1.0 1.0 1.0 1.0
baz 1.0 1.0 1.0 1.0 1.0 1.0
B foo 1.0 1.0 1.0 1.0 1.0 1.0
bar 1.0 1.0 1.0 1.0 1.0 1.0
baz 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>or</p>
<pre><code> A B
qux quux corge qux quux corge
one two three one two three
A qux one 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
B qux one 1.0 1.0 1.0 1.0 1.0 1.0
quux two 1.0 1.0 1.0 1.0 1.0 1.0
corge three 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>I have tried a number of ways, which seem to work but don't seem elegant to me. In particular, I want to make sure that this mapping is robust (e.g. no reordering due to unstacking indices etc.).</p>
<p>My first approach was by constructing a correspondance matrix:</p>
<pre><code> qux quux corge
one two three
foo 1.0 0.0 0.0
bar 0.0 1.0 0.0
baz 0.0 0.0 1.0
</code></pre>
<p>The advantage of this approach is that the mapping from one set of indices to the other is robust, by taking the dot product with this matrix and an unstacked <code>df1</code>. However by unstacking, the indices are implicitly sorted. This means I need to somehow retain the original order of the index before being able to take the dot product. While probably doable, this doesn't seem elegant to me.</p>
<pre><code>df1_u = df1.unstack(level=0)
A A A A A A B B B B B B
foo foo bar bar baz baz foo foo bar bar baz baz
A B A B A B A B A B A B
bar 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
baz 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
foo 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>Next, I tried changing the indices in a for-loop:</p>
<pre><code>l_idx_old = list(df1.index)
l_idx_new = []
for t_idx_old in l_idx_old:
idx0, idx1_old = t_idx_old
idx1_new, idx2_new = d_idx[idx1_old]
print(idx1_old, idx1_new, idx2_new)
t_idx_new = (idx0, idx1_new, idx2_new)
l_idx_new.append(t_idx_new)
df1.index = pd.MultiIndex.from_tuples(l_idx_new)
</code></pre>
<p>This works, however the last line is not robust, as there is no check whether the indices are assigned correctly.</p>
<p>In short, is there a robust and elegant way to carry out this intended mutation? Any help is much appreciated.</p>
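<p>For reference, the most compact version of my loop-based approach I could come up with is the sketch below; it relies on the comprehension preserving the original index order, which is exactly the kind of implicit assumption I'd like to avoid:</p>
<pre><code>df2 = df1.copy()
df2.index = pd.MultiIndex.from_tuples(
    [(lvl0, *d_idx[lvl1]) for lvl0, lvl1 in df1.index]
)
df2.columns = pd.MultiIndex.from_tuples(
    [(lvl0, *d_idx[lvl1]) for lvl0, lvl1 in df1.columns]
)
</code></pre>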
|
<python><pandas>
|
2022-12-19 10:40:38
| 1
| 503
|
BdB
|
74,849,292
| 3,962,748
|
PyCharm Error when setting remote interpreter -> Run Error: Jupyter server process failed to start due to path mismatch issue
|
<p>I am trying to setup remote development on PyCharm. For this, I want to make changes locally and execute the code on remote Amazon EC2 instance with a remote interpreter. I had done the following configuration of the project, but I am getting run error when I try to execute a ipython file created locally.</p>
<blockquote>
<p>Cannot run program "stfp://<remote server hostname>/<remote server host>:<remote interpreter path>" (in directory <local folder directory>): error=2, No such file or directory.</p>
</blockquote>
<p>It seems it should open <remote folder directory> instead of <local folder directory> when running the program. I read through multiple setup instructions but could not get this fixed. I am attaching configuration below.</p>
<p>Can you please help me with what could be wrong?</p>
<p><a href="https://i.sstatic.net/p7Kkn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p7Kkn.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/d5PqB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d5PqB.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/UcG5X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UcG5X.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/QNlGc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QNlGc.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/pHaMc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pHaMc.png" alt="enter image description here" /></a></p>
|
<python><intellij-idea><amazon-ec2><pycharm>
|
2022-12-19 10:34:52
| 1
| 1,363
|
Abhinav Aggarwal
|
74,849,133
| 4,970,679
|
How to handle one-to-many relationship during HATEOAS serialization in Marshmallow?
|
<p>Trying to implement a rather simple REST API in Python (SQLAlchemy + Marshmallow) using HATEOAS resource linking, I am stuck when attempting to create "smart hyperlinks" for one-to-many relationships.</p>
<p>Let's consider the classic book example: One book has exactly one publisher, but 1 to n authors. To keep it simple, in my use case it would be sufficient if every author could just write <em>one</em> book. My class structure could then look like this:</p>
<p><code>models/book.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Book(db.Model):
__tablename__ = 'book'
book_id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.String(255))
publisher_id = db.Column('publisher_id', db.Integer, db.ForeignKey('publisher.publisher_id'))
authors = db.relationship('Author', back_populates='book_id')
# some more methods ...
@staticmethod
def get_by_id(book_id):
return Book.query.get(book_id)
</code></pre>
<p><code>models/author.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Author(db.Model):
__tablename__ = 'author'
author_id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255))
book_id = db.Column('book_id', db.Integer, db.ForeignKey('book.book_id'))
booking = db.relationship('Book', back_populates='authors')
# some more methods ..
@staticmethod
def get_by_id(author_id):
return Authors.query.get(author_id)
</code></pre>
<p><code>models/book_schema.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class BookSchema(ma.SQLAlchemyAutoSchema):
class Meta:
model = Book
book_id=fields.Integer()
title = fields.String()
# Smart hyperlinking
_links = ma.Hyperlinks(
{
"self": {"href": ma.AbsoluteURLFor("book", book_id="<booking_id>")},
"publisher": {"href": ma.AbsoluteURLFor("publishers", publisher_id="<publisher_id>")},
"authors": {"href": ma.AbsoluteURLFor("authors", author_id="<authors>")} #!!!
}
)
</code></pre>
<p><code>app/__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>
def create_app(config_name):
#...
api.add_resource(Book, '/books/<int:book_id>', endpoint='books')
#...
</code></pre>
<p>The line in <code>book_schema.py</code> denoted with <code>#!!!</code> is the one causing trouble. I do understand, that <code>authors</code> is a collection and therefore there is no single <code>author_id</code>. But I haven't found any way to add this collection to the hyperlinks.</p>
<p>What I would like to get when querying a book at, let's say, <code>/books/123</code> looks like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"book_id": 123,
"title": "How to think of a good book title"
"_links": {
"publisher": {
"href": "https://localhost:5000/publishers/4711"
},
"authors": [
{ "href": "https://localhost:5000/authors/42" },
{ "href": "https://localhost:5000/authors/333" },
{ "href": "https://localhost:5000/authors/1337" }
]
}
}
</code></pre>
<p>Concerning everything else but the one-to-many relation, it works very well. The publisher relation is also populated as expected. I can also verify, that the authors of a book are loading correctly from the database (by debugging in <code>get_by_id</code> of the <code>Book</code> class).</p>
<p>Additionally, I have found <a href="https://github.com/marshmallow-code/flask-marshmallow/issues/194" rel="nofollow noreferrer">this GitHub issue</a>, which, if I got it right, addresses my problem as well.</p>
<p>So, finally, my question is: Is there any way I can achieve the collection serialization? If not by standard means of Marshmallow (I didn't find anything there), then perhaps by any sane kind of post-processing the response before it is sent to the client?</p>
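<p>In case it clarifies the post-processing idea: this is the kind of hook I was imagining (just a sketch, assuming <code>post_dump(pass_original=True)</code> hands me the model instance and that the <code>authors</code> endpoint accepts an <code>author_id</code>):</p>
<pre class="lang-py prettyprint-override"><code>from flask import url_for
from marshmallow import post_dump

class BookSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = Book

    @post_dump(pass_original=True)
    def add_author_links(self, data, original, **kwargs):
        # build one hyperlink per related author
        data.setdefault("_links", {})["authors"] = [
            {"href": url_for("authors", author_id=author.author_id, _external=True)}
            for author in original.authors
        ]
        return data
</code></pre>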
<p>Thanks a lot in advance!</p>
|
<python><flask-sqlalchemy><flask-restful><hateoas><marshmallow-sqlalchemy>
|
2022-12-19 10:21:40
| 1
| 2,117
|
ahuemmer
|
74,849,129
| 19,155,645
|
function does not work correctly with pandas.apply(lambda)
|
<p>I have a function that takes two strings and give an output.</p>
<p>I would like to apply it on my pandas dataframe using pandas' apply function (with lambda).</p>
<p>The function runs correctly for certain inputs, but then fails in one of my checks.
I double checked that the type of these example inputs is still string (two strings), and when I run the function with these strings outside pandas (just manually) it produces the expected output.</p>
<p>To be clear, apply/lambda runs well for several examples until it fails on that particular one, which I then tested outside pandas and it works.</p>
<p>Here is a simplified example (the values in the dataframe do not matter in this example).</p>
<pre><code>list1 = ['a','b','c']
list2 = ['d','e','f']
def calculate_test(b,e):
if (not b in list1) or (not e in list2):
raise ValueError("this should not happen!")
else:
return True
data = [['a','d'],['b','e'],['c','f']]
df = pd.DataFrame(data, columns=['first', 'second'])
# calculate_test('b','e') # True
df['should_all_be_true'] = df.apply(lambda row: calculate_test(row['first'], row['second']),axis=1) # ValueError raised!
</code></pre>
<p>I can imagine that the error is in my "if" statement - but can't spot it.</p>
|
<python><pandas><dataframe>
|
2022-12-19 10:21:20
| 1
| 512
|
ArieAI
|
74,849,095
| 10,829,044
|
Iterate int variable to do in operator check
|
<p>I already referred to the posts <a href="https://stackoverflow.com/questions/57579095/argument-of-type-int-is-not-iterable">here</a> and <a href="https://stackoverflow.com/questions/23851723/pythongetting-argument-of-type-int-not-iterable-error">here</a>. Please don't mark this as a duplicate. My Python version is 3.8.8.</p>
<p>I have a dataframe like as below</p>
<pre><code>customer_id r f m h y
1 4 3 3 3 3
2 5 4 2 1 4
3 3 1 1 1 1
4 2 2 2 2 2
</code></pre>
<p>Based on the code snippet found in this repository <a href="https://pypi.org/project/rfm/" rel="nofollow noreferrer">here</a>, I am trying to do the below:</p>
<p>a) Assign a <code>segment_label</code> to each customer based on their <code>r, f, m, h , y</code> values</p>
<p>So, I was working on the below code. But the problem is I am getting an error in the first if-clause. I already tried replacing the <code>in</code> keyword with the <code>==</code> operator, but then it doesn't do the check and ends up in the else clause for all records/customers.</p>
<pre><code>classes = []
classes_append = classes.append
cust = 'unique_key'
for row in df.iterrows():
rec = row[1]
print(rec)
r = rec['r']
f = rec['f']
y = rec['y']
p = rec['h']
if (r in (4)) and (f in (4)) and (y in (2,3)) and (h in (3)): # I replaced `in` keyword with `==` symbol as well. The code works but it doesn't check the condition. So, ends up in else clause.
classes_append({rec[cust]:'Champions'})
elif (r in (4)) and (f in (4)) and (y in (1,)) and (h in (2,3)):
classes_append({rec[cust]:'Short Tenure - Promising'})
elif (r in (3,4)) and (f in (3,4)) and (y in (3,)) and (h in (1,2,3)):
classes_append({rec[cust]:'Loyal Customers'})
elif (r in (3,4,5)) and (f in (3,4,5)) and (y in (2,)) and (h in (1,2)):
classes_append({rec[cust]:'Potential Loyalist'})
elif (r in (3,4)) and (f in (3,4)) and (y in (1,)) and (h in (1)):
classes_append({rec[cust]:'Short Tenure - Average'})
elif (r in (2,)) and (f in (1,2,3)) and (y in (1,2,3)):
classes_append({rec[cust]:'Needs Attention'})
elif (r in (4)) and (f in (1,2)) and (y in (1,2,3)):
classes_append({rec[cust]:'Occasional and New Customers'})
elif (r in (1,)) and (y in (1,2,3)):
classes_append({rec[cust]:"Lost"})
else:
print("hi")
classes.append({0:[row[1]['r'],row[1]['f'],row[1]['m']]})
accs = [list(i.keys())[0] for i in classes]
segments = [list(i.values())[0] for i in classes]
df['segment_label'] = df[cust].map(dict(zip(accs,segments)))
</code></pre>
<p>The error is</p>
<p><a href="https://i.sstatic.net/tRFnS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tRFnS.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><loops><for-loop>
|
2022-12-19 10:19:00
| 1
| 7,793
|
The Great
|
74,849,029
| 625,408
|
How to filter/search for emails with a custom header field using Python imaplib?
|
<p>I want to <strong>filter emails</strong> for the presence of a <strong>specific custom header</strong> field, e.g. <code>"X-my-header-field: my-header-value"</code>.
I am using <strong>Python's imaplib</strong>. I unsuccessfully tried the <strong>search method</strong>: <code>rv, data = mailbox.search(None, "X-my-header-field", 'my-header-value')</code>.</p>
<p>Does anybody have an idea or hint how to accomplish this? The idea is to filter the emails before downloading it from the server.</p>
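<p>Would the IMAP <code>HEADER</code> search criterion be the right direction? Something like this sketch (I haven't verified whether my server supports it for arbitrary header fields):</p>
<pre><code>rv, data = mailbox.search(None, 'HEADER', 'X-my-header-field', 'my-header-value')
</code></pre>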
|
<python><imaplib><header-fields>
|
2022-12-19 10:12:42
| 2
| 3,224
|
Jonas
|
74,848,892
| 1,236,858
|
Databricks: Uploading a file to another location from Azure Blob Storage without copying it locally
|
<p>I have a file in Azure Blob Storage, and I would like to upload it to another location without copying it to Databricks' local storage.</p>
<p>Currently my code needs to copy it locally before uploading:</p>
<pre><code># Set up connection to Azure Blob Storage
spark.conf.set("fs.azure.account.key.[some location]", "[account key]")
# Copies the file to Databricks local storage
dbutils.fs.cp("wasbs://[folder location]/some_file.csv", "temp_some_file.csv")
# Setting up for upload data to other system
uploader = client.create_dataset_from_upload('data', 'csv') # This is an external library call
# Read the local copy file and upload it to another system
with open('/dbfs/temp_some_file.csv') as dataset:
uploader.upload_file(dataset)
</code></pre>
<p>How to change the <code>open()</code> command to point directly to the file in Azure Blob Storage?</p>
|
<python><azure-blob-storage><databricks>
|
2022-12-19 10:01:51
| 1
| 7,307
|
rcs
|
74,848,876
| 18,248,287
|
Updating a list of dictionaries inside a loop
|
<p>I find that when I attempt to update values in a dictionary for items from another list, only the last item in that list is used to update the value. For example:</p>
<pre><code>a=[{'a':1},{'a':2},{'a':3},{'a':4},{'a':5}]
b=[9, 8, 7, 6, 5]
changed = []
for items in a:
for values in b:
items.update({'a':values})
changed.append(items)
print(changed)
</code></pre>
<pre><code>[{'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}, {'a': 5}]
</code></pre>
<p>I expected each value to replace each value in place of those in the dictionary.</p>
|
<python>
|
2022-12-19 10:00:02
| 0
| 350
|
tesla john
|
74,848,707
| 6,849,045
|
How do I group by depending on the previous value in spark
|
<p>I have some data on a machine. When it runs, it creates at least one entry every 5 seconds, which contains a timestamp field. I want to know how long that machine is on. So I want to know the stretch between the first entry and the last entry.</p>
<p>I was thinking to order the data set by the timestamp, and then aggregate it(?) by taking the current value, and the previous value (or a zeroValue when there's no previous value) and then create two new columns 'timestamp_start' and 'timestamp_now' with the following idea:</p>
<p>If the distance between the 'timestamp' column is MORE than 5 seconds from the 'timestamp_now' of the previous entry then both 'timestamp_start' AND 'timestamp_now' will become 'timestamp' of the current value.</p>
<p>If the distance between the 'timestamp' column is LESS or equal than 5 seconds from the 'timestamp_now' of the previous entry then 'timestamp_start' will be copied from the previous value, and 'timestamp_now' will become 'timestamp' of the current value.</p>
<p>After that I would take the maximum of each 'timestamp_now' for each 'timestamp_start'. And then I would map those to a duration value. With this idea I should get a list of duration values which will indicate the running time of the machine each time it's turned on.</p>
<p>I feel like I would have to use a fold, agg, or reduce somewhere here, but I'm not sure which one and how. Another option I had in mind was using something like a sliding window and then doing a map, but I'm not sure if that's an option.</p>
<p>I'm using spark for the first time so bear with me please. But this is what I got:</p>
<pre class="lang-py prettyprint-override"><code>DataQuery.builder(spark).variables() \
.system('XXX') \
.nameLike('XXX%XXX%') \
.timeWindow('2021-10-10 00:00:00.000', '2022-11-28 00:00:00.000') \
.build() \
.orderBy('timestamp')
.agg('timestamp', # How do I get to the previous entry?)
</code></pre>
<p>EDIT:</p>
<p>I got a lot farther:</p>
<pre class="lang-py prettyprint-override"><code>df = DataQuery.builder(spark).variables() \
.system('XXX') \
.nameLike('XXX') \
.timeWindow('2021-08-10 00:00:00.000', '2022-11-28 00:00:00.000') \
.build()
timestamps = df.sort('timestamp') \
.select(psf.from_unixtime('nxcals_timestamp').alias('ts'))
# AT LEAST I HOPE THIS LINE IS RIGHT (?)
window = timestamps.groupBy(psf.session_window('ts', '10 minutes')) \
.agg(psf.min(timestamps.ts))
window_timestamps = window.select(window.session_window.start.cast("string").alias("start"), window.session_window.end.cast("string").alias('end'))
</code></pre>
<p>and then the <code>show()</code> function will return:</p>
<pre><code>+--------------------+--------------------+
| start| end|
+--------------------+--------------------+
|-290308-12-21 20:...|-290308-12-21 20:...|
|-290308-12-23 17:...|-290308-12-23 17:...|
|-290308-12-25 06:...|-290308-12-25 06:...|
|-290308-12-25 15:...|-290308-12-25 15:...|
|-290307-01-01 05:...|-290307-01-01 05:...|
|-290307-01-04 06:...|-290307-01-04 06:...|
|-290307-01-04 19:...|-290307-01-04 19:...|
|-290307-01-05 05:...|-290307-01-05 05:...|
|-290307-01-05 08:...|-290307-01-05 08:...|
|-290307-01-06 00:...|-290307-01-06 00:...|
|-290307-01-10 07:...|-290307-01-10 07:...|
|-290307-01-14 11:...|-290307-01-14 11:...|
|-290307-01-15 03:...|-290307-01-15 04:...|
|-290307-01-15 08:...|-290307-01-15 08:...|
|-290307-01-15 13:...|-290307-01-15 13:...|
|-290307-01-16 17:...|-290307-01-16 17:...|
|-290307-01-20 16:...|-290307-01-20 16:...|
|-290307-01-24 19:...|-290307-01-24 19:...|
|-290307-01-26 17:...|-290307-01-26 17:...|
|-290307-01-30 23:...|-290307-01-30 23:...|
+--------------------+--------------------+
</code></pre>
<p>There's just one line I'm not completely sure about, but it seems to return the right data.
Now I only need to get that data mapped to a single column with the time differences.
I'm currently trying</p>
<pre class="lang-py prettyprint-override"><code>diff = window_timestamps.rdd.map(lambda row: row.end.cast('long') - row.start.cast('long')).toDF(["diff_in_seconds"])
</code></pre>
<p>But that seems to hang</p>
<p>EDIT2: Nope, doesn't seem to work.</p>
|
<python><pyspark><time>
|
2022-12-19 09:43:40
| 1
| 1,072
|
Typhaon
|
74,848,562
| 11,178,936
|
How to change to datetime format date with dashes?
|
<p>I have a date in the format <code>2022-12-16T16-48-47</code> and I would like to change it to <code>datetime</code> using the function <code>pd.to_datetime</code>.</p>
<p>My first idea was to create split the string to have it in more readable way:</p>
<pre><code>string = "2022-12-16T16-48-47"
date, hour = string.split("T")
string = date + " " + hour
string
</code></pre>
<p>And now to use:</p>
<pre><code>import pandas as pd
pd.to_datetime(string, format = "%Y-%M-%D %h-%m-%S")
</code></pre>
<p>But I have error:</p>
<pre><code>ValueError: 'D' is a bad directive in format '%Y-%M-%D %h-%m-%S'
</code></pre>
<p>Do you know how it should be done properly?</p>
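<p>For reference, my current guess is that the directives themselves are wrong rather than the splitting, so something like the sketch below, but I'm not sure this is the proper way:</p>
<pre><code>import pandas as pd

pd.to_datetime("2022-12-16T16-48-47", format="%Y-%m-%dT%H-%M-%S")
</code></pre>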
|
<python><pandas><string><datetime>
|
2022-12-19 09:32:19
| 1
| 1,947
|
John
|
74,848,531
| 1,516,331
|
In pandas, how to groupby and apply/transform on each whole group (NOT aggregation)?
|
<p>I've looked into agg/apply/transform after groupby, but none of them seem to meet my need.
Here is an example DF:</p>
<pre><code>df_seq = pd.DataFrame({
'person':['Tom', 'Tom', 'Tom', 'Lucy', 'Lucy', 'Lucy'],
'day':[1,2,3,1,4,6],
'food':['beef', 'lamb', 'chicken', 'fish', 'pork', 'venison']
})
person,day,food
Tom,1,beef
Tom,2,lamb
Tom,3,chicken
Lucy,1,fish
Lucy,4,pork
Lucy,6,venison
</code></pre>
<p>The <code>day</code> column shows that, for each <code>person</code>, he/she consumes food in sequential orders.</p>
<p><strong>Now I would like to group by the <code>person</code> col, and create a DataFrame which contains food pairs for two neighboring days/time (as shown below)</strong>.</p>
<p><em><strong>Note the <code>day</code> column is only for example purpose here so the values of it should not be used</strong></em>. <em>It only means the <code>food</code> column is in sequential order. In my real data, it's a datetime column.</em></p>
<pre><code>person,day,food,food_next
Tom,1,beef,lamb
Tom,2,lamb,chicken
Lucy,1,fish,pork
Lucy,4,pork,venison
</code></pre>
<p>At the moment, I can only do this with a for-loop to iterate through all users. It's very slow.</p>
<p>Is it possible to use a groupby and apply/transform to achieve this, or any vectorized operations?</p>
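<p>For reference, the direction I've been considering is a grouped shift like the sketch below (assuming the rows are already sorted by time within each person), though I'm not sure this is the idiomatic vectorized way:</p>
<pre><code>df_seq['food_next'] = df_seq.groupby('person')['food'].shift(-1)
result = df_seq.dropna(subset=['food_next']).reset_index(drop=True)
</code></pre>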
|
<python><pandas>
|
2022-12-19 09:28:49
| 2
| 3,190
|
CyberPlayerOne
|
74,848,278
| 11,883,900
|
Calculate percentage of interview participants who had education background
|
<p>I am really sorry if this question was already asked. I have tried to search for different answers but haven't found one related to mine.</p>
<p>I have a large dataframe with data looking like this:</p>
<pre><code>import pandas as pd
# intialise data of lists.
data = {'interview_key':['00-60-62-69', '00-80-63-65', '00-81-80-59', '00-87-72-75'],
'any_education':['YES', 'YES', 'NO', 'NAN']}
# Create DataFrame
df = pd.DataFrame(data)
# Print the output.
df
</code></pre>
<p>This data represents a group of people who were interviewed: they either agreed they have any education, represented by <strong>YES</strong>, or didn't have education at all, represented by <strong>NO</strong>.</p>
<p>I want to do a simple task and that is to <em>find the percentage of people who had any form of education</em>, in simple terms those who said YES to having any education.</p>
<p>How can this be done?</p>
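<p>For reference, the simplest thing I could come up with is the sketch below (assuming the 'NAN' rows should stay in the denominator):</p>
<pre><code>pct_with_education = df['any_education'].eq('YES').mean() * 100
</code></pre>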
|
<python><pandas>
|
2022-12-19 09:06:44
| 2
| 1,098
|
LivingstoneM
|
74,848,203
| 1,415,826
|
python dict recursion returns empty list
|
<p>I have this dictionary that I am trying to iterate through recursively. When I hit a matching node <code>match</code> I want to return that node which is a <code>list</code>.
Currently with my code I keep on getting an empty <code>list</code>. I have stepped through the code and I see my check condition being hit, but the recursion still returns an empty value. What am I doing wrong here? Thanks.</p>
<p>dictionary data:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Deployment",
"metadata": {
"name": "cluster",
"namespace": "namespace",
},
"spec": {
"template": {
"metadata": {
"labels": {
"app": "flink",
"cluster": "repo_name-cluster",
"component": "jobmanager",
"track": "prod",
}
},
"spec": {
"containers": [
{
"name": "jobmanager",
"image": "IMAGE_TAG_",
"imagePullPolicy": "Always",
"args": ["jobmanager"],
"resources": {
"requests": {"cpu": "100.0", "memory": "100Gi"},
"limits": {"cpu": "100.0", "memory": "100Gi"},
},
"env": [
{
"name": "ADDRESS",
"value": "jobmanager-prod",
},
{"name": "HADOOP_USER_NAME", "value": "yarn"},
{"name": "JOB_MANAGER_MEMORY", "value": "1000m"},
{"name": "HADOOP_CONF_DIR", "value": "/etc/hadoop/conf"},
{
"name": "TRACK",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.labels['track']"
}
},
},
],
}
]
},
},
},
}
</code></pre>
<p>code:</p>
<pre><code>test = iterdict(data, "env")
print(test)
def iterdict(data, match):
output = []
if not isinstance(data, str):
for k, v in data.items():
print("key ", k)
if isinstance(v, dict):
iterdict(v, match)
elif isinstance(v, list):
if k.lower() == match.lower():
# print(v)
output += v
return output
else:
for i in v:
iterdict(i, match)
return output
</code></pre>
<p>expected return value:</p>
<pre><code>[{'name': 'JOB_MANAGER_RPC_ADDRESS', 'value': 'repo_name-cluster-jobmanager-prod'}, {'name': 'HADOOP_USER_NAME', 'value': 'yarn'}, {'name': 'JOB_MANAGER_MEMORY', 'value': '1000m'}, {'name': 'HADOOP_CONF_DIR', 'value': '/etc/hadoop/conf'}, {'name': 'TRACK', 'valueFrom': {...}}]
</code></pre>
|
<python><dictionary><recursion>
|
2022-12-19 09:00:04
| 2
| 945
|
iambdot
|
74,848,199
| 2,975,438
|
Getting "TypeError: type of out argument not recognized: <class 'str'>" when using class function with Pandera decorator
|
<p>I am trying to use decorators from the Python package "Pandera" and I am having trouble getting them to work with classes.</p>
<p>First I create schemas for Pandera:</p>
<pre><code>import pandera as pa
from pandera import Column, Check
import yaml
in_ = pa.DataFrameSchema(
{
"Name": Column(object, nullable=True),
"Height": Column(object, nullable=True),
})
with open("./in_.yml", "w") as file:
yaml.dump(in_, file)
out_ = pa.DataFrameSchema(
{
"Name": Column(object, nullable=True),
"Height": Column(object, nullable=True),
})
with open("./out_.yml", "w") as file:
yaml.dump(out_, file)
</code></pre>
<p>Next I create <code>test.py</code> file with class:</p>
<pre><code>from pandera import check_io
import pandas as pd
class TransformClass():
with open("./in_.yml", "r") as file:
in_ = file.read()
with open("./out_.yml", "r") as file:
out_ = file.read()
@staticmethod
@check_io(df=in_, out=out_)
def func(df: pd.DataFrame) -> pd.DataFrame:
return df
</code></pre>
<p>Finally I importing this class:</p>
<pre><code>import numpy as np
import pandas as pd

from test import TransformClass
data = {'Name': [np.nan, 'Princi', 'Gaurav', 'Anuj'],
'Height': [5.1, 6.2, 5.1, 5.2],
'Qualification': ['Msc', 'MA', 'Msc', 'Msc']}
df = pd.DataFrame(data)
TransformClass.func(df)
</code></pre>
<p>I am getting:</p>
<pre><code>File C:\Anaconda3\envs\py310\lib\site-packages\pandera\decorators.py:464, in check_io.<locals>._wrapper(fn, instance, args, kwargs)
462 out_schemas = []
463 else:
--> 464 raise TypeError(
465 f"type of out argument not recognized: {type(out)}"
466 )
468 wrapped_fn = fn
469 for input_getter, input_schema in inputs.items():
470 # pylint: disable=no-value-for-parameter
TypeError: type of out argument not recognized: <class 'str'>
</code></pre>
<p>Any help would be much appreciated.</p>
|
<python><pandas><pandera>
|
2022-12-19 08:59:53
| 2
| 1,298
|
illuminato
|
74,848,126
| 2,173,320
|
PyQt6 - custom widget - get default widget colors for text, background etc
|
<p>I'm developing custom components using PyQt6. I would now like to adapt my widgets' colors to the default palette colors so that they look like the default widgets of PyQt6. I think I found a way to get the default colors of the global palette:</p>
<pre><code>default_palette = self.palette()
self.textColor = default_palette.text().color()
self.backgroudColor = default_palette.window().color()
</code></pre>
<p>My question is how to use them following best practice. My goal is that the colors also change when I change the global stylesheets or use libraries like qt_materials.</p>
|
<python><user-interface><pyqt6>
|
2022-12-19 08:53:52
| 1
| 1,507
|
padmalcom
|
74,847,873
| 20,051,041
|
HDBSCAN dendrogram with Plotly, Python
|
<p>Creating a dendrogram using <code>plotly.figure_factory.create_dendrogram</code> <a href="https://stackoverflow.com/questions/38452379/plotting-a-dendrogram-using-plotly-python">has been discussed</a>.
I decided to use HDBSCAN as the clustering algorithm and would like to visualize the clusters with Plotly.</p>
<pre><code>clusterer = hdbscan.HDBSCAN(
algorithm ='best',
alpha = 1.0,
approx_min_span_tree = False,
gen_min_span_tree = True,
metric = 'hamming',
min_cluster_size = 2,
min_samples = 10,
allow_single_cluster = True,
p = None)
clusters = clusterer.fit_predict(df_matrix)
</code></pre>
<p>How can I extract a dendrogram out of the code above?
Thanks!</p>
|
<python><cluster-analysis><plotly-dash><hierarchical-clustering>
|
2022-12-19 08:29:17
| 1
| 580
|
Mr.Slow
|
74,847,696
| 9,939,634
|
The coordinates of the reconstructed 3D points are different after the virtual camera intrinsic K has also changed proportionally after image resize?
|
<p>As far as I know, after an image resize the corresponding intrinsic matrix K also changes proportionally, so why are the coordinates of the 3D reconstruction of the same point not the same?</p>
<p>The following python program is a simple experiment, the original image size is <img src="https://latex.codecogs.com/svg.image?1080%5Ctimes&space;1920" alt="aaa" />, after resize it becomes <img src="https://latex.codecogs.com/svg.image?480%5Ctimes&space;640" alt="aaa" />, the intrinsic parameter K1 corresponds to the original image, the intrinsic parameter K2 corresponds to the resize, RT1, RT2 are the extrinsic projection matrix of the camera (should remain unchanged?,[R,T],<img src="https://latex.codecogs.com/svg.image?3%5Ctimes&space;4" alt="aaa" /> size), without considering the effects of camera skew factor and distortions,why is there a difference in the reconstructed 3D points?</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
fx = 1040
fy = 1040
cx = 1920 / 2
cy = 1080 / 2
K1 = np.array([[fx, 0, cx],
[0, fy, cy],
[0, 0, 1]])
RT1 = np.array([[1, 0, 0, 4],
[0, 1, 0, 5],
[0, 0, 1, 6]]) # just random set
theta = np.pi / 6
RT2 = np.array([[np.cos(theta), -np.sin(theta), 0, 40],
[np.sin(theta), np.cos(theta), 0, 50],
[0, 0, 1, 60]]) # just random set
p1 = np.matmul(K1, RT1) # extrinsic projection matrix
p2 = np.matmul(K1, RT2) # extrinsic projection matrix
pt1 = np.array([100.0, 200.0])
pt2 = np.array([300.0, 400.0])
point3d1 = cv2.triangulatePoints(p1, p2, pt1, pt2)
# Remember to divide out the 4th row. Make it homogeneous
point3d1 = point3d1 / point3d1[3]
print(point3d1)
</code></pre>
<pre><code>[[-260.07160113]
[ -27.39546108]
[ 273.95189881]
[ 1. ]]
</code></pre>
<p>then resize image to test recontruct 3D point, see if it is numerical equal.</p>
<pre class="lang-py prettyprint-override"><code>rx = 640.0 / 1920.0
ry = 480.0 / 1080.0
fx = fx * rx
fy = fy * ry
cx = cx * rx
cy = cy * ry
K2 = np.array([[fx, 0, cx],
[0, fy, cy],
[0, 0, 1]])
p1 = np.matmul(K2, RT1)
p2 = np.matmul(K2, RT2)
pt1 = np.array([pt1[0] * rx, pt1[1] * ry])
pt2 = np.array([pt2[0] * rx, pt2[1] * ry])
point3d2 = cv2.triangulatePoints(p1, p2, pt1, pt2)
# Remember to divide out the 4th row. Make it homogeneous
point3d2 = point3d2 / point3d2[3]
print(point3d2)
</code></pre>
<pre><code>[[-193.03965985]
[ -26.72133393]
[ 189.12512305]
[ 1. ]]
</code></pre>
<p>As you can see, point3d1 and point3d2 are not the same. Why?</p>
|
<python><numpy><image-resizing><3d-reconstruction><camera-intrinsics>
|
2022-12-19 08:06:52
| 1
| 311
|
cui xingxing
|
74,847,642
| 15,724,084
|
need to re-write my code in less than O(n2) complexity
|
<p>I have an algorithm written which gives the result.
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.</p>
<p>You may assume that each input would have exactly one solution, and you may not use the same element twice.</p>
<p>You can return the answer in any order.</p>
<p>But it is too slow,</p>
<pre><code>class Solution:
def twoSum(self, nums: List[int], target: int) -> List[int]:
res=[]
for i in range(len(nums)):
first_val=nums[i]
second_val=target - nums[i]
for j in range(len(nums)):
if i!=j:
if nums[j]==second_val:
res.append(i)
res.append(j)
return res
return res
</code></pre>
<p>Could anyone assist me in rewriting this algorithm as per the follow-up: can you come up with an algorithm that is less than O(n²) time complexity?</p>
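<p>I suspect the hashmap tag hints at a single-pass lookup like the sketch below, but I'm not sure whether this is the idiomatic way to write it:</p>
<pre><code>class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        seen = {}  # value -> index
        for i, n in enumerate(nums):
            if target - n in seen:
                return [seen[target - n], i]
            seen[n] = i
        return []
</code></pre>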
|
<python><hashmap>
|
2022-12-19 08:00:02
| 2
| 741
|
xlmaster
|
74,847,472
| 1,934,212
|
pop rows from dataframe based on conditions
|
<p>From the dataframe</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'A':[1,1,1,1,2,2,2,2],'B':[1,2,3,4,5,6,7,8]})
print(df1)
A B
0 1 1
1 1 2
2 1 3
3 1 4
4 2 5
5 2 6
6 2 7
7 2 8
</code></pre>
<p>I want to pop 2 rows where 'A' == 2, preferably in a single statement like</p>
<blockquote>
<p>df2 = df1.somepopfunction(...)</p>
</blockquote>
<p>to generate the following result:</p>
<pre><code>print(df1)
A B
0 1 1
1 1 2
2 1 3
3 1 4
4 2 7
5 2 8
print(df2)
A B
0 2 5
1 2 6
</code></pre>
<p>The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer">pandas pop function</a> sounds promising, but only pops complete colums.</p>
<p>What statement can replace the pseudocode</p>
<blockquote>
<p>df2 = df1.somepopfunction(...)</p>
</blockquote>
<p>to generate the desired results?</p>
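<p>For reference, this is the multi-statement version I am trying to condense into a single "pop"-like call (a sketch):</p>
<pre><code>idx = df1.index[df1['A'] == 2][:2]            # first 2 rows where A == 2
df2 = df1.loc[idx].reset_index(drop=True)
df1 = df1.drop(idx).reset_index(drop=True)
</code></pre>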
|
<python><pandas>
|
2022-12-19 07:39:46
| 2
| 9,735
|
Oblomov
|
74,847,424
| 1,144,868
|
ModuleNotFoundError: No module named 'testing' While trying to execute unittest case in Python
|
<p>I have written a test case for my Python-airflow project, but while executing the command it is throwing <code>ModuleNotFoundError: No module named 'testing'</code></p>
<p>Complete error stack is -</p>
<pre><code>ERROR: testing (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: testing
Traceback (most recent call last):
File "/Users/myname/.pyenv/versions/3.10.0/lib/python3.10/unittest/loader.py", line 154, in loadTestsFromName
module = __import__(module_name)
ModuleNotFoundError: No module named 'testing'
</code></pre>
<p>To execute the test case I am using command - <code>python -m unittest testing.unit_tests.alembic_unittest</code></p>
<p>Here is my project structure -</p>
<pre><code>project directory
├── testing
├── __init__.py
├
└── unit_tests
├── __init__.py
└── alembic_unittest.py
</code></pre>
<p>Inside project_directory/testing/<code>__init__.py</code> I have added syspath like</p>
<pre><code>import sys
sys.path.insert(0, 'testing/')
</code></pre>
<p>But still I am getting this error. What could be the reason?</p>
<pre><code>PYTHONPATH=/Users/myname/.pyenv/versions/3.10.0/lib/python3.10:
</code></pre>
<p>I am using PyCharm IDE.</p>
|
<python><python-3.x><unit-testing><airflow><python-unittest>
|
2022-12-19 07:35:08
| 1
| 3,355
|
sandeep
|
74,847,418
| 8,874,837
|
python regex transpose capture group into another regex
|
<p>I have:</p>
<pre><code>regex1 = 'CCbl([a-z]{2})la-12([0-9])4'
inputstring = 'CCblabla-1234'
regex2_to_get_substituted = 'fof([a-z]{2})in(.)ha'
desired output = 'fofabin3ha'
</code></pre>
<p>Basically, I want to take the results of the capture groups from <code>regex1</code> and transpose them into the corresponding (nth) capture-group positions of <code>regex2</code> as a substitution.</p>
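<p>The only direction I have come up with so far is the sketch below, which assumes the fixed parts of <code>regex2</code> can be written out by hand as a template (the <code>template2</code> string is my own placeholder, it is not derived automatically from <code>regex2</code>):</p>
<pre><code>import re

regex1 = r'CCbl([a-z]{2})la-12([0-9])4'
inputstring = 'CCblabla-1234'

groups = re.match(regex1, inputstring).groups()   # ('ab', '3')
template2 = 'fof{}in{}ha'                         # hand-written counterpart of regex2
print(template2.format(*groups))                  # fofabin3ha
</code></pre>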
|
<python><regex><python-re>
|
2022-12-19 07:34:39
| 1
| 350
|
tooptoop4
|
74,847,225
| 3,004,472
|
how to check different version of same file using python
|
<p>I have two lists of files (A and B). I am comparing the file names in list A with those in list B. When doing the comparison, I want to know whether the latest version of each file is available in list B.</p>
<p>For example:</p>
<p>List A contains <strong>id_prot_number_f1_v1_p1</strong> and List B contains <strong>id_prot_number_f1_v2_p1</strong>. In this case I want to get a match instead of a non-match. Currently I am using Python's <code>in</code> operator to check the existence of a file name in the list.</p>
<p>Is there any way I can check whether the latest version is available in the other list?</p>
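<p>A rough sketch of the direction I have been considering, assuming the names always contain a <code>_v&lt;digits&gt;_</code> part (the helper name is my own placeholder):</p>
<pre><code>import re

def base_and_version(name):
    m = re.search(r'^(.*)_v(\d+)(_.*)?$', name)
    if not m:
        return name, 0
    return m.group(1) + (m.group(3) or ''), int(m.group(2))

list_a = ['id_prot_number_f1_v1_p1']
list_b = ['id_prot_number_f1_v2_p1']

latest_in_b = {}
for name in list_b:
    base, ver = base_and_version(name)
    latest_in_b[base] = max(ver, latest_in_b.get(base, 0))

for name in list_a:
    base, ver = base_and_version(name)
    if base in latest_in_b and latest_in_b[base] >= ver:
        print(name, '-> match (same or newer version exists in B)')
    else:
        print(name, '-> no match')
</code></pre>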
|
<python><regex><string>
|
2022-12-19 07:10:55
| 0
| 880
|
BigD
|
74,847,167
| 9,861,647
|
Get all File from Subfolder Boto3
|
<p>I have this code to download all the files from a buckets AWS S3</p>
<pre><code>import os
import boto3
#initiate s3 resource
s3 = boto3.resource('s3')
s3 = boto3.resource(
's3',
aws_access_key_id = '__________',
aws_secret_access_key = '________',
region_name = '______'
)
# select bucket
my_bucket = s3.Bucket('MainBucket')
# download file into current directory
for s3_object in my_bucket.objects.all():
# Need to split s3_object.key into path and file name, else it will give error file not found.
path, filename = os.path.split(s3_object.key)
my_bucket.download_file(s3_object.key, filename)
</code></pre>
<p>Inside that bucket, I have a folder called "pictures"</p>
<p>How can I get the files only in my folder?</p>
<p>My try:</p>
<pre><code>s3.Bucket('MainBucket/pictures')
</code></pre>
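<p>The closest working sketch I have is filtering by key prefix instead of putting the folder in the bucket name (assuming the folder really is a key prefix named <code>pictures/</code>, and reusing the <code>s3</code> and <code>os</code> objects above):</p>
<pre><code>my_bucket = s3.Bucket('MainBucket')
for s3_object in my_bucket.objects.filter(Prefix='pictures/'):
    path, filename = os.path.split(s3_object.key)
    if filename:                                   # skip the folder placeholder key itself
        my_bucket.download_file(s3_object.key, filename)
</code></pre>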
|
<python><amazon-s3><boto3>
|
2022-12-19 07:03:19
| 1
| 1,065
|
Simon GIS
|
74,847,154
| 7,947,316
|
How to using sum function with encrypted column in SQLAlchemy?
|
<p>I have used <code>EncryptedType</code> from <code>sqlalchemy_utils</code> to encrypt the data of a specific column, so that when inserting data into the table the column's data is stored encrypted, and selects through the ORM handle it transparently.</p>
<p>This is the ORM structure of my database which have encrypted in <code>value</code> column.</p>
<pre><code>from sqlalchemy_utils import EncryptedType
from sqlalchemy_utils.types.encrypted.encrypted_type import AesEngine
class Products(db.Model):
__tablename__ = 'products'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(400))
value = db.Column(EncryptedType(db.Integer, secret_key, AesEngine,'pkcs5'))
</code></pre>
<p>And this is the result when selecting data in psql: you are unable to see the data of the <code>value</code> column because it is encrypted.</p>
<pre><code> id | name | value
----+---------+----------------------------------------------------
1 | Macbook | \x6977764a59556346536e6b674d7a6439312f714c70413d3d
2 | IPhone | \x6a6b51757a48554739666756566863324662323962413d3d
3 | IPad | \x416d54504b787873462f724d347144617034523639673d3d
</code></pre>
<p>But when i select data by using ORM, it will decrypt the data automatically.</p>
<p>And this is my code.</p>
<pre><code>product_query = Products.query.order_by(Products.id.asc()).all()
for product in product_query:
print(product.id, ' ', product.name, ' ', product.value)
</code></pre>
<p>Result</p>
<pre><code>1 Macbook 222222
2 IPhone 40000
3 IPad 60000
</code></pre>
<p><strong>Problem</strong> <br>
When i try to select SUM with this code</p>
<pre><code>db.session.query(func.sum(Products.value)).all()
</code></pre>
<p>it got the error like this.</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) function sum(bytea) does not exist
LINE 1: SELECT sum(products.value) AS sum_1
</code></pre>
<p>As I understand the error, the problem is that the data I am trying to SUM is still in bytes (encrypted) format, so is there any way I can sum the values of an encrypted column?</p>
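<p>The only workaround I can think of so far is to let the ORM decrypt row by row and sum in Python (my assumption being that AES ciphertext simply cannot be summed inside Postgres):</p>
<pre><code>total = sum(product.value for product in Products.query.all())
print(total)   # 322222 for the sample rows above
</code></pre>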
|
<python><postgresql><sqlalchemy>
|
2022-12-19 07:01:54
| 1
| 563
|
Kaow
|
74,846,933
| 9,257,578
|
How to pick up data from json objects in python?
|
<p>I am trying to pick <code>Instances</code> out of the JSON data, which looks like this</p>
<p><code>[{'Groups': [], 'Instances': [{'AmiLaunchIndex': 0, 'ImageId': 'ami-0ceecbb0f30a902a6', 'InstanceId': 'i-xxxxx', 'InstanceType': 't2.micro', 'KeyName': 'xxxx', 'LaunchTime': {'$date': '2022-12-17T13:07:54Z'}, 'Monitoring': {'State': 'disabled'}, 'Placement': {'AvailabilityZone': 'us-west-2b', 'GroupName': '', 'Tenancy': 'default'}, 'PrivateDnsName': 'ip-zxxxxx.us-west-2.compute.internal', 'PrivateIpAddress': 'xxxxx', 'ProductCodes': [], 'PublicDnsName': 'ec2-xx-xxx-xxx.us-west-2.compute.amazonaws.com', 'PublicIpAddress': 'xxxxxx', 'State': {'Code': 16, 'Name': 'running'}, 'StateTransitionReason': '', 'SubnetId': 'subnet-xxxxx', 'VpcId': 'vpc-xxxxx', 'Architecture': 'x86_64', 'BlockDeviceMappings': [{'DeviceName': '/dev/xvda', 'Ebs': {'AttachTime': {'$date': '2022-12-17T13:07:55Z'}, 'DeleteOnTermination': True, 'Status': 'attached', 'VolumeId': 'vol-xxxx'}}], 'ClientToken': '529fc1ac-bf64-4804-b0b8-7c7778ace68c', 'EbsOptimized': False, 'EnaSupport': True, 'Hypervisor': 'xen', 'NetworkInterfaces': [{'Association': {'IpOwnerId': 'amazon', 'PublicDnsName': 'ec2-35-86-111-31.us-west-2.compute.amazonaws.com', 'PublicIp': 'xxxxx'}, 'Attachment': {'AttachTime': {'$date': '2022-12-17T13:07:54Z'}, 'AttachmentId': 'eni-attach-0cac7d4af20664b23', 'DeleteOnTermination': True, 'DeviceIndex': 0, 'Status': 'attached', 'NetworkCardIndex': 0}, 'Description': '', 'Groups': [{'GroupName': 'launch-wizard-5', 'GroupId': 'sg-xxxxx'}], 'Ipv6Addresses': [], 'MacAddress': 'xxxxx', 'NetworkInterfaceId': 'eni-xxxxx', 'OwnerId': 'xxxx', 'PrivateDnsName': 'ip-xxxxx.us-west-2.compute.internal', 'PrivateIpAddress': 'xxx.xxx.xxx', 'PrivateIpAddresses': [{'Association': {'IpOwnerId': 'amazon', 'PublicDnsName': 'ec2-xx-xx-xx-xxx.us-west-2.compute.amazonaws.com', 'PublicIp': 'xxx.xxx.xxx'}, 'Primary': True, 'PrivateDnsName': 'ip-172-31-20-187.us-west-2.compute.internal', 'PrivateIpAddress': 'xxx.xxx.xxx'}], 'SourceDestCheck': True, 'Status': 'in-use', 'SubnetId': 'subnet-xxxxxxx', 'VpcId': 'vpc-0b09cd4sedxxx', 'InterfaceType': 'interface'}], 'RootDeviceName': '/dev/xvda', 'RootDeviceType': 'ebs', 'SecurityGroups': [{'GroupName': 'launch-wizard-5', 'GroupId': 'sg-0a0d1c79d8076660e'}], 'SourceDestCheck': True, 'Tags': [{'Key': 'Name', 'Value': 'MainServers'}], 'VirtualizationType': 'hvm', 'CpuOptions': {'CoreCount': 1, 'ThreadsPerCore': 1}, 'CapacityReservationSpecification': {'CapacityReservationPreference': 'open'}, 'HibernationOptions': {'Configured': False}, 'MetadataOptions': {'State': 'applied', 'HttpTokens': 'optional', 'HttpPutResponseHopLimit': 1, 'HttpEndpoint': 'enabled', 'HttpProtocolIpv6': 'disabled', 'InstanceMetadataTags': 'disabled'}, 'EnclaveOptions': {'Enabled': False}, 'PlatformDetails': 'Linux/UNIX', 'UsageOperation': 'RunInstances', 'UsageOperationUpdateTime': {'$date': '2022-12-17T13:07:54Z'}, 'PrivateDnsNameOptions': {'HostnameType': 'ip-name', 'EnableResourceNameDnsARecord': True, 'EnableResourceNameDnsAAAARecord': False}, 'MaintenanceOptions': {'AutoRecovery': 'default'}}], 'OwnerId': '76979cfxdsss11', 'ReservationId': 'r-xxxxx'}]</code></p>
<p>I tried loading the data and doing</p>
<pre><code> resp = json.loads(jsonfile)
reqData= resp['Instances']
</code></pre>
<p>But getting error</p>
<p><code>TypeError: list indices must be integers or slices, not str</code></p>
<p>Is there any way I can fix this and get the data? Help will be extremely appreciated.</p>
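<p>What I tried next, based on my guess that the top level is a list rather than a dict, is the following (it seems closer, but I would like confirmation this is the right way):</p>
<pre><code>resp = json.loads(jsonfile)
for reservation in resp:                      # each element of the list is one dict
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance['InstanceType'])
</code></pre>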
|
<python><json>
|
2022-12-19 06:32:09
| 2
| 533
|
Neetesshhr
|
74,846,760
| 11,332,693
|
Python String Matching Using Loops and Iterations and Score Calculation using two dataframes
|
<p>df1</p>
<pre><code>Place Location
Delhi,Punjab,Jaipur Delhi,Punjab,Noida,Lucknow
Delhi,Punjab,Jaipur Delhi,Bhopal,Jaipur,Rajkot
Delhi,Punjab,Kerala Delhi,Jaipur,Madras
</code></pre>
<p>df2</p>
<pre><code>Target1 Target2 Strength
Jaipur Rajkot 0.94
Jaipur Punjab 0.84
Jaipur Noida 0.62
Jaipur Jodhpur 0.59
Punjab Amritsar 0.97
Punjab Delhi 0.85
Punjab Bhopal 0.91
Punjab Jodhpur 0.75
Kerala Varkala 0.85
Kerala Kochi 0.88
</code></pre>
<p>The task is to match each 'Place' value with the 'Location' values and assign a score of 1 in the case of a direct match; in the case of an indirect match, refer to df2 and assign the strength score from there. For example, in Row 1, Delhi and Punjab are direct matches, as both are present in 'Place' and 'Location', whereas Jaipur is in 'Place' but not in 'Location'. So Jaipur is looked up in df2's Target1, trying to find the corresponding Row 1 'Location' values in Target2. In df2, Jaipur is related to Punjab and Noida, which are in the Row 1 Location values. So, corresponding to Jaipur, Punjab's strength of 0.84 is allotted, since it is higher than Noida's 0.62. The final score is calculated as (1+1+0.84)/3, i.e. the sum of direct and indirect matches divided by the number of 'Place' items.</p>
<p>Expected output is :</p>
<pre><code>Place Location Avg. Score
Delhi,Punjab,Jaipur Delhi,Punjab,Noida,Lucknow (1+1+0.84)/3 = 0.95
Delhi,Punjab,Jaipur Delhi,Bhopal,Jaipur,Rajkot (1+0.91+1)/3 = 0.97
Delhi,Punjab,Kerala Delhi,Jaipur,Madras (1+0.85+0)/3 = 0.62
</code></pre>
<p>My try</p>
<pre><code>data1 = df1['Place'].to_list()
data2 = df1['Location'].to_list()
dict3 = {}
exac_match = []
for el in data1:
#print(el)
el=[x.strip() for x in el.split(',')]
for ell in data2:
ell=[x.strip() for x in ell.split(',')]
dict1 = {}
dict2 = {}
for elll in el:
if elll in ell:
#print("Exact match:::", elll)
dict1[elll]=1
dict2[elll]=elll
</code></pre>
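<p>For completeness, this is the fuller sketch I am working towards with pandas (it assumes df1 and df2 are DataFrames with exactly the column names shown above, and rounds the average to 2 decimals):</p>
<pre><code>strength = {(r.Target1, r.Target2): r.Strength for r in df2.itertuples(index=False)}

def avg_score(place, location):
    places = [p.strip() for p in place.split(',')]
    locs = {l.strip() for l in location.split(',')}
    total = 0.0
    for p in places:
        if p in locs:                                    # direct match
            total += 1.0
        else:                                            # best indirect match via df2
            total += max((s for (t1, t2), s in strength.items()
                          if t1 == p and t2 in locs), default=0.0)
    return round(total / len(places), 2)

df1['Avg. Score'] = [avg_score(p, l) for p, l in zip(df1['Place'], df1['Location'])]
</code></pre>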
|
<python><pandas><string><dataframe><string-matching>
|
2022-12-19 06:04:22
| 1
| 417
|
AB14
|
74,846,659
| 29,573
|
Python typing dict of dicts
|
<p>How do I correctly type hint this python structure:</p>
<pre class="lang-py prettyprint-override"><code>people={
"john" : {"likes": "apples", "dislikes": "organges"},
"aisha": {"likes": "kittens", "dislikes": "ravens"}
}
</code></pre>
<p>EDIT: There can be any keys specified - e.g. "mary","joseph", "carl"...
I understand that the value-dict can be typed as such</p>
<pre class="lang-py prettyprint-override"><code>class _Preferences(TypedDict):
likes: str
dislikes: str
</code></pre>
<p>But I am not sure how to type the <code>people</code> dict itself.</p>
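<p>The closest I have come is the annotation below, assuming Python 3.8+ for <code>TypedDict</code>; I would like to know if this is the idiomatic way:</p>
<pre><code>from typing import Dict, TypedDict

class _Preferences(TypedDict):
    likes: str
    dislikes: str

people: Dict[str, _Preferences] = {
    "john": {"likes": "apples", "dislikes": "organges"},
    "aisha": {"likes": "kittens", "dislikes": "ravens"},
}
</code></pre>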
|
<python><python-typing>
|
2022-12-19 05:45:49
| 2
| 2,306
|
Konrads
|
74,846,653
| 9,668,481
|
How to define a column as an Array of any type in POSTGRESQL FlaskSQLAlchemy?
|
<p>I am using <code>flask_sqlalchemy</code> and I want to define column <strong>prompts</strong> to be an array which accepts values of any data type, particularly <strong>string</strong> or <strong>json</strong>.</p>
<pre class="lang-py prettyprint-override"><code>from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app)
class PreannotationPrompts(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(100), unique = True)
type = db.Column(db.String(100))
'''
Example of values prompts should hold
[
{"key": "value", "key2": []},
'Hello',
{"key": "value", "key2": []},
'Hi'
]
'''
prompts = db.Column(db.String(1000)) # How to define ?
</code></pre>
<p>How can I define prompts column appropriately in this case?</p>
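<p>One direction I am considering (an assumption on my part, reusing the <code>db</code> object from above) is to store the whole mixed list as JSON instead of a string column:</p>
<pre><code>from sqlalchemy.dialects.postgresql import JSONB

class PreannotationPrompts(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), unique=True)
    type = db.Column(db.String(100))
    prompts = db.Column(JSONB)    # holds a list mixing strings and dicts

row = PreannotationPrompts(name="demo", type="test",
                           prompts=[{"key": "value", "key2": []}, "Hello"])
</code></pre>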
|
<python><postgresql><sqlalchemy>
|
2022-12-19 05:44:23
| 2
| 846
|
Saurav Pathak
|
74,846,487
| 1,251,099
|
running shell command in python under git-bash not working well
|
<p>I am using python3 under git-bash environment, and sometimes it does not run shell command well.</p>
<pre><code>#!/usr/bin/env python3
import subprocess as sp
print("hello")
print(sp.getoutput("ls -l")) # This works.
print(sp.getoutput("date")) # This hangs and cannot terminate with ctrl-c.
</code></pre>
<p>This does not happen when running under normal linux/bash environment.</p>
<p>Then I come across this one: <a href="https://stackoverflow.com/questions/32597209/python-not-working-in-the-command-line-of-git-bash">Python not working in the command line of git bash</a>.</p>
<p>I can run using "winpty python ...", however it still cannot terminate even with ctrl-c.</p>
<p>I take that back: <code>getoutput("date")</code> hangs but <code>check_output</code> works.</p>
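<p>The workaround I am testing now avoids the shell entirely (just a sketch; it does not explain the hang):</p>
<pre><code>import subprocess as sp
print(sp.run(["date"], capture_output=True, text=True).stdout)
</code></pre>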
|
<python><python-3.x><bash><git-bash>
|
2022-12-19 05:11:59
| 1
| 6,206
|
user180574
|
74,846,450
| 16,124,033
|
How to aggregate unique substrings in a column of strings in Python?
|
<p>I have a <code>.csv</code> file as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Alphabet</th>
<th>Sub alphabet</th>
<th>Value</th>
<th>Strings</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td>1</td>
<td>AA, AB</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>1</td>
<td>AA, AC</td>
</tr>
<tr>
<td>A</td>
<td>E</td>
<td>2</td>
<td>AB, AD</td>
</tr>
<tr>
<td>A</td>
<td>F</td>
<td>3</td>
<td>AA, AD, AB</td>
</tr>
<tr>
<td>D</td>
<td>B</td>
<td>1</td>
<td>AB, AC, AD</td>
</tr>
<tr>
<td>D</td>
<td>C</td>
<td>2</td>
<td>AA, AD</td>
</tr>
<tr>
<td>D</td>
<td>E</td>
<td>2</td>
<td>AC, AD</td>
</tr>
<tr>
<td>D</td>
<td>F</td>
<td>3</td>
<td>AD</td>
</tr>
</tbody>
</table>
</div>
<pre><code>Alphabet,Sub alphabet,Value,Strings
A,B,1,"AA, AB"
A,C,1,"AA, AC"
A,E,2,"AB, AD"
A,F,3,"AA, AD, AB"
D,B,1,"AB, AC, AD"
D,C,2,"AA, AD"
D,E,2,"AC, AD"
D,F,3,AD
</code></pre>
<p>I want it to return result like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Alphabet</th>
<th>Value</th>
<th>Frequency</th>
<th>%</th>
<th>Strings</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>2</td>
<td>50%</td>
<td>AA, AB, AC, AD</td>
</tr>
<tr>
<td>A</td>
<td>2</td>
<td>1</td>
<td>25%</td>
<td>AA, AB, AC, AD</td>
</tr>
<tr>
<td>A</td>
<td>3</td>
<td>1</td>
<td>25%</td>
<td>AA, AB, AC, AD</td>
</tr>
<tr>
<td>D</td>
<td>1</td>
<td>1</td>
<td>25%</td>
<td>AB, AC, AD, AA</td>
</tr>
<tr>
<td>D</td>
<td>2</td>
<td>2</td>
<td>50%</td>
<td>AB, AC, AD, AA</td>
</tr>
<tr>
<td>D</td>
<td>3</td>
<td>1</td>
<td>25%</td>
<td>AB, AC, AD, AA</td>
</tr>
</tbody>
</table>
</div>
<p>Hopefully the expected table above is self-explanatory. The percentage refers to the corresponding row's frequency divided by the total frequency. Strings refers to the combined unique strings of the corresponding Alphabet's rows.</p>
<p>My code:</p>
<pre><code>import pandas as pd
df = pd.read_csv("data.csv")
df = df.groupby(["Alphabet", "Value"], as_index=False).agg(Frequency=("Value", "count"))
df["%"] = df["Frequency"] / df.groupby("Alphabet")["Frequency"].transform("sum") * 100
df.to_csv("result.csv", index=None)
</code></pre>
<p>Feel free to leave a comment if you need more information.</p>
<p>How can I make such a .csv file? I would appreciate any help. Thank you in advance!</p>
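<p>Continuing from the code above, this is the extra step I have sketched for the Strings column (re-reading the raw file so the grouped <code>df</code> is not disturbed; unique values keep their first-seen order):</p>
<pre><code>raw = pd.read_csv("data.csv")
strings_per_alphabet = (
    raw.assign(Strings=raw["Strings"].str.split(", "))
       .explode("Strings")
       .groupby("Alphabet")["Strings"]
       .agg(lambda s: ", ".join(dict.fromkeys(s)))
)
df["Strings"] = df["Alphabet"].map(strings_per_alphabet)
</code></pre>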
|
<python><pandas><dataframe>
|
2022-12-19 05:03:32
| 1
| 4,650
|
My Car
|
74,846,281
| 4,479,864
|
Mocking Azure BlobServiceClient in Python
|
<p>I am trying to write a unit test that will test <code>azure.storage.blob.BlobServiceClient</code> class and its methods. Below is my code</p>
<p>A fixture in the <code>conftest.py</code></p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def mock_BlobServiceClient(mocker):
azure_ContainerClient = mocker.patch("azure.storage.blob.ContainerClient", mocker.MagicMock())
azure_BlobServiceClient= mocker.patch("azure_module.BlobServiceClient", mocker.MagicMock())
azure_BlobServiceClient.from_connection_string.return_value
azure_BlobServiceClient.get_container_client.return_value = azure_ContainerClient
azure_ContainerClient.list_blob_names.return_value = "test"
azure_ContainerClient.get_container_client.list_blobs.return_value = ["test"]
yield azure_BlobServiceClient
</code></pre>
<p>Contents of the test file</p>
<pre class="lang-py prettyprint-override"><code>from azure_module import AzureBlob
def test_AzureBlob(mock_BlobServiceClient):
azure_blob = AzureBlob()
# This assertion passes
mock_BlobServiceClient.from_connection_string.assert_called_once_with("testconnectionstring")
# This assertion fails
mock_BlobServiceClient.get_container_client.assert_called()
</code></pre>
<p>Contents of the <code>azure_module.py</code></p>
<pre class="lang-py prettyprint-override"><code>from azure.storage.blob import BlobServiceClient
import os
class AzureBlob:
def __init__(self) -> None:
"""Initialize the azure blob"""
self.azure_blob_obj = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
self.azure_container = self.azure_blob_obj.get_container_client(os.environ["AZURE_CONTAINER_NAME"])
</code></pre>
<p>My test fails when I execute it with below error message</p>
<pre><code>> mock_BlobServiceClient.get_container_client.assert_called()
E AssertionError: Expected 'get_container_client' to have been called.
</code></pre>
<p>I am not sure why it says that the <code>get_container_client</code> wasn't called when it was called during the <code>AzureBlob</code>'s initialization.</p>
<p>Any help is very much appreciated.</p>
<p><strong>Update 1</strong></p>
<p>I believe this is a bug in <code>unittest</code>'s <code>MagicMock</code> itself. As
Michael Delgado suggested, I dialed the code down to a bare minimum to test and identify the issue, and I concluded that <code>MagicMock</code> was causing the problem. Below are my findings:</p>
<p><strong>conftest.py</strong></p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def mock_Blob(mocker):
yield mocker.patch("module.BlobServiceClient")
</code></pre>
<p><strong>test_azureblob.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def test_AzureBlob(mock_Blob):
azure_blob = AzureBlob()
print(mock_Blob)
print(mock_Blob.mock_calls)
print(mock_Blob.from_connection_string.mock_calls)
print(mock_Blob.from_connection_string.get_container_client.mock_calls)
assert False # <- Intentional fail
</code></pre>
<p>After running the test, I got the following results.</p>
<pre class="lang-bash prettyprint-override"><code>$ pytest -vv
.
.
.
------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------
<MagicMock name='BlobServiceClient' id='140704187870944'>
[call.from_connection_string('AZURE_STORAGE_CONNECTION_STRING'),
call.from_connection_string().get_container_client('AZURE_CONTAINER_NAME')]
[call('AZURE_STORAGE_CONNECTION_STRING'),
call().get_container_client('AZURE_CONTAINER_NAME')]
[]
.
.
.
</code></pre>
<p>The prints clearly show that the <code>get_container_client</code> was seen being called, but the mocked method did not register it at its level. That led me to conclude that the <code>MagicMock</code> has a bug which I will report to the developers for further investigation.</p>
|
<python><unit-testing><pytest><pytest-mock>
|
2022-12-19 04:26:20
| 0
| 432
|
gh0st
|
74,846,253
| 11,124,121
|
Is there any function to check the location of the last date element in pandas dataframe?
|
<p>The sample data is like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Empty header</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">'2/01/2011'</td>
</tr>
<tr>
<td style="text-align: left;">'3/01/2011'</td>
</tr>
<tr>
<td style="text-align: left;">'4/01/2011'</td>
</tr>
<tr>
<td style="text-align: left;">'5.222'</td>
</tr>
<tr>
<td style="text-align: left;">'6.214'</td>
</tr>
<tr>
<td style="text-align: left;">'1.34266'</td>
</tr>
</tbody>
</table>
</div>
<p>The data above are all strings.</p>
<p>My expected outcome is</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Date</th>
<th style="text-align: left;">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2/01/2011</td>
<td style="text-align: left;">5.222</td>
</tr>
<tr>
<td style="text-align: left;">3/01/2011</td>
<td style="text-align: left;">6.214</td>
</tr>
<tr>
<td style="text-align: left;">4/01/2011</td>
<td style="text-align: left;">1.34266</td>
</tr>
</tbody>
</table>
</div>
<p>The 'Date' variable should be in date format, 'Value' variable is in float.</p>
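<p>The only sketch I have so far is below; it assumes the dates always come first, that the quotes in the preview are only display artefacts, and that the dates are day-first:</p>
<pre><code>import pandas as pd

s = pd.Series(["2/01/2011", "3/01/2011", "4/01/2011", "5.222", "6.214", "1.34266"])
s = s.str.strip("'")                                   # in case the quotes really are in the data
dates = pd.to_datetime(s, format="%d/%m/%Y", errors="coerce")
n = dates.notna().sum()                                # position where the dates stop
out = pd.DataFrame({
    "Date": dates.iloc[:n].reset_index(drop=True),
    "Value": s.iloc[n:].astype(float).reset_index(drop=True),
})
print(out)
</code></pre>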
|
<python><pandas><date>
|
2022-12-19 04:17:56
| 1
| 853
|
doraemon
|
74,846,241
| 1,528,840
|
Rename a column if there is duplicate based on value in another column
|
<p>I have a dataframe like the following:</p>
<pre><code>df = pd.DataFrame({"Col1": ["AA", "AB", "AA", "CC", "FF"],
"Col2": [18, 23, 13, 33, 48],
"Col3": [17, 27, 22, 37, 52]})
</code></pre>
<p>My goal is: if there are duplicated values in Col1, sort (only the duplicated rows) by the values in Col2 from smallest to largest, and rename the original "Value" in Col1 to "Value.A" (for the duplicate with the smallest value in Col2), "Value.B" (for the 2nd smallest), and so on. The values in Col3 stay untouched.</p>
<p>Using the example above, this is what I should end up with:</p>
<pre><code>pd.DataFrame({"Col1": ["AA.B", "AB", "AA.A", "CC", "FF"],
"Col2": [18, 23, 13, 33, 48],
"Col3": [17, 27, 22, 37, 52]})
</code></pre>
<p>Since 13 &lt; 18, the 2nd "AA" becomes "AA.A" and the first "AA" becomes "AA.B" (values in Col3 stay unchanged). Also, "AB", "CC", "FF" all need to remain unchanged. I could potentially have more than one set of duplicates in Col1.</p>
<p>I do not need to preserve the rows, so long as the values in each row stay the same except the renamed value in Col1. (i.e., I should still have "AA.B", 18, 17 for the 3 columns no matter where the 1st row in the output moves to).</p>
<p>I tried to use the <code>row['Col1'] == df['Col1'].shift()</code> as a lambda function but this gives me the following error:</p>
<blockquote>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
<p>I suspect this was due to the na value when I called shift() but using fillna() doesn't help since that will always create a duplicate at the beginning.</p>
<p>Any suggestions on how I can make it work?</p>
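<p>For reference, the closest sketch I have so far (letters assigned by the rank of Col2 within each duplicated Col1 value):</p>
<pre><code>dup = df["Col1"].duplicated(keep=False)
rank = df.loc[dup].groupby("Col1")["Col2"].rank(method="first").astype(int) - 1
df.loc[dup, "Col1"] = df.loc[dup, "Col1"] + "." + rank.map(lambda r: chr(ord("A") + r))
print(df)
</code></pre>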
|
<python><pandas><dataframe><valueerror><shift>
|
2022-12-19 04:15:43
| 2
| 1,342
|
AZhu
|
74,846,222
| 8,422,170
|
How can I implement python AutoML libraries (like Pycaret, auto-sklearn) etc, on pyspark dataframe?
|
<p>I am trying to implement AutoML over a PySpark DataFrame but didn't find any particular documentation or library specific to this. Can we implement PyCaret, MLJAR or any other AutoML library for PySpark dataframes using pandas_udfs?</p>
|
<python><pandas><dataframe><pyspark><automl>
|
2022-12-19 04:11:17
| 1
| 1,939
|
Mehul Gupta
|
74,846,182
| 2,091,585
|
Replace characters that repeats more than 20 times with 10 with a regex
|
<p>This is basically a more complicated version of this question.</p>
<p><a href="https://stackoverflow.com/questions/7172378/replace-repeating-characters-with-one-with-a-regex">Replace repeating characters with one with a regex</a></p>
<p>If there is a character that repeats more than 20 times, then replace with just ten repetition of that character.</p>
<p>That is, I want to replace 'adfajlkjl a sd=============================================== READFadfa' with 'adfajlkjl a sd========== READFadfa'</p>
<pre><code>import re
string1 = 'adfajlkjl a sd=============================================== READFadfa'
pattern1 = r'(.)\1{20,}'  # a character followed by at least 20 more copies, i.e. more than 20 in total
print(re.sub(pattern1, r'\1\1\1\1\1\1\1\1\1\1', string1))
</code></pre>
<p>Output:</p>
<pre><code>'adfajlkjl a sd========== READFadfa'
</code></pre>
<p>The above is a brute force solution using the answer in the above link.
Is there another way that does not repeat the backreference \1 ten times?</p>
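<p>One alternative I found while experimenting, using a callable replacement instead of writing <code>\1</code> ten times (sharing in case it helps frame the question):</p>
<pre><code>import re

string1 = 'adfajlkjl a sd=============================================== READFadfa'
print(re.sub(r'(.)\1{20,}', lambda m: m.group(1) * 10, string1))
# adfajlkjl a sd========== READFadfa
</code></pre>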
|
<python><regex><backreference>
|
2022-12-19 04:01:57
| 1
| 2,238
|
user67275
|
74,846,075
| 5,398,127
|
How do I predict the future closing price of stock after training and testing?
|
<p>I am trying to do multivariate time series forecasting using linear regression model.</p>
<p>In the below code I first split the data in 80-20 ratio for training and testing.</p>
<p>Then I train the model and use the model to predict using test and compute the relevant performance metrics of the model.</p>
<pre><code> # Split data into testing and training sets
X_train, X_test, y_train, y_test = train_test_split(df[['EMA_10']], df[['close']], test_size=.2)
# Create Regression Model
model = LinearRegression()
# Train the model
model.fit(X_train, y_train)
# Use model to make predictions
y_pred = model.predict(X_test)
# Printout relevant metrics
print("Model Coefficients:", model.coef_)
print("Mean Absolute Error:", mean_absolute_error(y_test, y_pred))
print("Coefficient of Determination:", r2_score(y_test, y_pred))
</code></pre>
<p>Now how do I predict the next i.e. future value?</p>
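<p>The naive next step I have sketched is feeding the latest row of features back into the fitted model (an assumption on my part that the last EMA_10 value is the right input for the next period):</p>
<pre><code>latest_features = df[['EMA_10']].iloc[[-1]]     # keep it 2-D, as the model expects
next_close = model.predict(latest_features)
print("Predicted next close:", next_close[0][0])
</code></pre>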
|
<python><scikit-learn><linear-regression>
|
2022-12-19 03:34:51
| 1
| 3,480
|
Stupid_Intern
|
74,846,048
| 1,779,091
|
Does list slicing create shallow or deep copy?
|
<pre><code>Lst1 = [1,2,3,4]
Lst2 = Lst1
Lst3 = Lst1[0:2]
Lst4 = Lst1[:]
</code></pre>
<p>So Lst1 and Lst2 point to the same list.</p>
<p>And Lst3 points to a slice of Lst1.</p>
<p>And Lst4 points to a slice of Lst1 (the whole of it).</p>
<p>Are Lst3 and Lst4 (the outputs of slicing) deep or shallow copies?</p>
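<p>A quick check I ran while trying to understand this (the nested lists show which objects end up shared):</p>
<pre><code>a = [[1, 2], [3, 4]]
b = a[:]                 # slicing builds a new outer list...
print(b is a)            # False: not the same list object
print(b[0] is a[0])      # True: the inner objects are shared
b[0].append(99)
print(a[0])              # [1, 2, 99]: mutation through the copy shows up in the original
</code></pre>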
|
<python>
|
2022-12-19 03:30:00
| 2
| 9,866
|
variable
|
74,846,020
| 4,755,567
|
How to check array contains string by using pyspark with this structure
|
<p>The curly brackets look odd. I have tried different approaches, but none of them works.</p>
<pre><code># root
# |-- L: array (nullable = true)
# | |-- element: struct (containsNull = true)
# | | |-- S: string (nullable = true)
# +------------------+
# | L|
# +------------------+
# |[{string1}]|
# |[{string2}]|
# +------------------+
</code></pre>
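<p>The closest sketch I have tried on a toy frame (assuming the dataframe is called <code>df</code>; dot access pulls the <code>S</code> field out of every struct in the array):</p>
<pre><code>from pyspark.sql import functions as F

df_flagged = df.withColumn("has_string1", F.array_contains(F.col("L.S"), "string1"))
df_flagged.show(truncate=False)

# or keep only the matching rows
df.filter(F.array_contains(F.col("L.S"), "string1")).show(truncate=False)
</code></pre>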
|
<python><apache-spark><pyspark>
|
2022-12-19 03:24:50
| 1
| 549
|
TommyQu
|
74,845,697
| 7,211,014
|
python selenium refuses to scroll down page, how to force / fix?
|
<p>I need selenium using the firefox driver to scroll down the page to the very bottom, but my code no longer works. It's like the page recognizes I am trying to scroll with selenium and forces the scroll back to the top of the screen...</p>
<pre><code>def scroll_to_bottom():
'''
Scroll down the page the whole way. This must be done to show all images on page to download.
'''
print("[+] Starting scroll down of page. This needs to be performed to view all images.")
count = 0
start_time = time.perf_counter()
try:
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
if count % 10 == 0:
elapsed_time = time.perf_counter() - start_time
elapsed_time_clean = time.strftime("%H:%M:%S", time.gmtime(elapsed_time))
print("[i] Still scrolling, its been " + str(elapsed_time_clean) + ", please continue to standby... C: " + str(count))
time.sleep(3)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(3)
# Check if hight has changed
new_height = driver.execute_script("return document.body.scrollHeight")
vprint("last_height: " + str(last_height) + ", new_height: " + str(new_height))
if new_height == last_height:
break
last_height = new_height
count += 1
except Exception as exception:
print('[!] Exception exception: '+str(exception))
pass
print("[i] Scroll down of whole page complete.")
return
</code></pre>
<p>It will go to the page, start to scroll once, then pop up to the top of the screen and no longer scroll. Then my code thinks it's at the bottom of the page because the page size did not change. This worked about 3 weeks ago but no longer works. I can't figure out why.</p>
<p>Is there a way to force scrolling?
BTW, I tried using "DOWN" and "Page DOWN" key presses; that does not work either. Anyone have any ideas?</p>
|
<python><selenium><web-scraping><scroll><height>
|
2022-12-19 01:55:41
| 0
| 1,338
|
Dave
|
74,845,330
| 3,277,133
|
How to display row as dictionary from pyspark dataframe?
|
<p>Very new to pyspark.</p>
<p>I have 2 datasets, <code>Events</code> & <code>Gadget</code>. They look like so:</p>
<pre><code>Events
</code></pre>
<p>(screenshot of the Events dataframe preview)</p>
<p><code>Gadgets</code></p>
<p>(screenshot of the Gadgets dataframe preview)</p>
<p>I can read and join the 2 dataframes like so, presenting only the needed columns in my last line:</p>
<pre><code>import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
from pyspark.sql.types import ArrayType, DoubleType, BooleanType
from pyspark.sql.functions import col,array_contains
spark = SparkSession.builder.appName('PySpark Read CSV').getOrCreate()
# Reading csv file
events = spark.read.option("header",True).csv("events.csv")
events.printSchema()
gadgets = spark.read.option("header",True).csv("gadgets.csv")
gadgets.printSchema()
enrich = events.join(gadgets, events.deviceId == gadgets.ID).select(events["*"],gadgets["User"])
</code></pre>
<p>My assignment is asking that I present the data like so in the dictionary object:</p>
<p>Enrichment Tasks:</p>
<ul>
<li>Enrich the event object with user data provided by the device.</li>
<li>Ensure the enriched event looks like the following:</li>
</ul>
<pre><code>{
sessionId: string
deviceId: string
timestamp: timestamp
type: emun(ADDED_TO_CART | APP_OPENED)
total_price: 50.00
user: string
}
</code></pre>
<p>I can handle the dtype changes and column name renaming that the assignment is asking for, however how do I deliver my results in the dictionary format above?</p>
<p>I am not sure how I can even show my results if I used this line:</p>
<pre><code>enrich.rdd.map(lambda row: row.asDict())
</code></pre>
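<p>The best sketch I have for actually displaying it is collecting the mapped rows back to the driver (fine for my small sample, presumably not for huge data):</p>
<pre><code>for row_dict in enrich.rdd.map(lambda row: row.asDict()).collect():
    print(row_dict)

# or without going through the RDD
print([row.asDict() for row in enrich.collect()])
</code></pre>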
|
<python><pyspark>
|
2022-12-19 00:27:41
| 1
| 3,707
|
RustyShackleford
|
74,845,263
| 2,374,964
|
'Internal space embedding need ...' error when attemping to use spaCy text_categorizer
|
<p>Hello sirs and madams,</p>
<p>I very much need your help to understand this error:</p>
<pre><code>Exception has occurred: NotImplementedError
internal spacy embeddings need to be derived from md/lg spacy models not from sm/trf models.
File "/Users/Rune/Sites/smartez-backend/analytics/pipeline.py", line 517, in categorize_new
result = nlp("new_sentence")._.cats
File "/Users/Rune/Sites/smartez-backend/components/feed.py", line 471, in p_get_posts
category = pipeline.categorize_new(post)
File "/Users/Rune/Sites/smartez-backend/queue_.py", line 22, in set_run
categories, lang = feed_source.p_get_posts()
File "/Users/Rune/Sites/smartez-backend/queue_.py", line 44, in <module>
set_run(wait)
</code></pre>
<p>I get the error when I attempt debugging the code in vs code on mac/ox:</p>
<pre><code>data = self.load_data(self.language['language'])
nlp = spacy.load('en_core_web_md')
nlp.add_pipe("text_categorizer",
config={
"data": data,
"model": "spacy"
}
)
</code></pre>
<p>The code is in the file: 'smartez-backend/analytics/pipeline.py'</p>
<p>When I try similar code in the file: 'smartez-backend/test.py' it works like a charm.</p>
<p>I simply cannot comprehend why this error occurs, and any help will be repaid with much gratitude.</p>
<p>Best,
Rune</p>
|
<python><spacy>
|
2022-12-19 00:11:59
| 0
| 409
|
BispensGipsGebis
|
74,845,219
| 587,021
|
SQLalchemy keyword argument equivalent of `Column('id', String, Enum(("foo", "bar")))`
|
<p>I'm looking at <a href="https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.Identity" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/type_basics.html#sqlalchemy.types.Enum</a> and <a href="https://docs.sqlalchemy.org/en/14/core/metadata.html#sqlalchemy.schema.Column.__init__" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/metadata.html#sqlalchemy.schema.Column.__init__</a>, but I can't see what the equivalent would be for the following <strong>attempt</strong>:</p>
<pre class="lang-py prettyprint-override"><code>Column(name='id', type_=String, server_default=Enum(("foo", "bar")))
</code></pre>
<p>I tried <code>default=server_default=Enum(("foo", "bar"))</code> but received the following error message:</p>
<pre><code>sqlalchemy.exc.ArgumentError: Argument 'arg' is expected to be one of type '<class 'str'>' or '<class 'sqlalchemy.sql.elements.ClauseElement'>' or '<class 'sqlalchemy.sql.elements.TextClause'>', got '<class 'sqlalchemy.sql.sqltypes.Enum'>'
</code></pre>
<p>Am I meant to use <a href="https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.DefaultClause" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.DefaultClause</a> or <code>sqlalchemy.schema.DefaultGenerator</code> or is it not meant to be a default/equivalent at all?</p>
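<p>For context, the keyword form I believe is intended when the Enum is the column type (my assumption: the Enum belongs in <code>type_</code>, and <code>server_default</code> takes one of its string values):</p>
<pre><code>from sqlalchemy import Column, Enum

col = Column(name='id', type_=Enum('foo', 'bar'), server_default='foo')
</code></pre>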
|
<python><sqlalchemy>
|
2022-12-19 00:04:35
| 1
| 13,982
|
A T
|
74,845,051
| 436,721
|
Every other celery task fails; it seems to be "unregistered"
|
<p>I'm using <code>celery</code> to run a task called <code>'filter'</code>.</p>
<p>I'm having a weird error whereby <strong>every other</strong> task call seems to result in a <code>celery.exceptions.NotRegistered</code> exception.</p>
<p>By <em>every other call</em>, I mean:</p>
<ul>
<li>I call it once, it works</li>
<li>I call it again, it fails (with the error below)</li>
<li>I call it for the third time, it works again.</li>
<li>I call it for a fourth time, it fails (same error) message</li>
</ul>
<p>I'm calling it using <code>send_task</code> like this, storing the <code>task_id</code> so I can poll it later.</p>
<pre><code>async_result1 = celery_app.send_task("filter", kwargs=data.dict())
output = {"task_id": async_result1.task_id}
</code></pre>
<p><strong>Then</strong> I poll the app every few seconds to get the status of the task like this:</p>
<pre><code>result=AsyncResult(task_id)
(if status is not success, keep polling until it's successful)
</code></pre>
<p>The weird thing is that the code works once but when I call again it fails.</p>
<p>I've ssh'ed into the worker container and when I run <code>celery result</code> passing a failed task Id like this <code>celery -A app.name result -t filter 880df2ee-0a25-455f-93fb-50af3d5980e5</code>, this is the output I get (the last line shows the error I mentioned above)</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/celery/__main__.py", line 16, in main
_main()
File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 495, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 305, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 487, in handle_argv
return self.execute(command, argv)
File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 419, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 309, in run_from_argv
sys.argv if argv is None else argv, command)
File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 393, in handle_argv
return self(*args, **options)
File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 253, in __call__
ret = self.run(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/celery/bin/result.py", line 41, in run
value = task_result.get()
File "/usr/local/lib/python3.7/site-packages/celery/result.py", line 228, in get
on_message=on_message,
File "/usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py", line 195, in wait_for_pending
return result.maybe_throw(callback=callback, propagate=propagate)
File "/usr/local/lib/python3.7/site-packages/celery/result.py", line 333, in maybe_throw
self.throw(value, self._to_remote_traceback(tb))
File "/usr/local/lib/python3.7/site-packages/celery/result.py", line 326, in throw
self.on_ready.throw(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/vine/promises.py", line 244, in throw
reraise(type(exc), exc, tb)
File "/usr/local/lib/python3.7/site-packages/vine/five.py", line 195, in reraise
raise value
celery.exceptions.NotRegistered: 'filter'
</code></pre>
<p><strong>Any idea of what could be happening here? I feel like there's something I'm missing, maybe I need to flush/restart the queue somehow?</strong></p>
<ul>
<li><p>I'm using <code>docker-compose</code> and this is how the worker container is being started:</p>
<p><code>celery worker -A app.name -P threads --loglevel=DEBUG</code></p>
</li>
<li><p>One curiosity here: I started off passing a <code>--queues</code> parameter to the <code>celery worker</code> command and <strong>no call whatsoever worked</strong>. When I removed the <code>--queues</code> parameter, it started behaving like it is now: every other call fails.</p>
</li>
<li><p>When I run <code>celery inspect</code> to view the worker state, I notice that only the tasks that end up successful are added to the queue</p>
<pre><code>root@604dce1e1dda:/workers# celery -A app.name inspect active
-> celery@76089a886b04: OK
- empty -
-> celery@604dce1e1dda: OK
* {'id': '4d35d0bf-e3b4-4a6b-9a2e-4b23e12c98b1', 'name': 'filter', 'args': [], 'kwargs': {'foo': 'bar'}, 'type': 'filter', 'hostname': 'celery@604dce1e1dda', 'time_start': 1671406221.0351112, 'acknowledged': True, 'delivery_info': {'exchange': '', 'routing_key': 'celery', 'priority': 0, 'redelivered': False}, 'worker_pid': 1}
</code></pre>
</li>
<li><p>The task is properly registered (but only the one that ends in <code>1dda</code>, it seems)</p>
<pre><code>root@604dce1e1dda:/workers# celery -A app.name inspect registered
-> celery@76089a886b04: OK
* model
-> celery@604dce1e1dda: OK
* filter
</code></pre>
</li>
</ul>
<p>PS. : I'm running <code>celery 4.4.0 (cliffs)</code> on <code>python 3.7.16</code></p>
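<p>In case it is relevant, the next experiment I am planning is to pin the task to a queue that only the worker registering <code>filter</code> consumes (the queue name <code>filter_queue</code> below is my own placeholder; that worker would be started with <code>-Q filter_queue</code>):</p>
<pre><code>async_result1 = celery_app.send_task("filter", kwargs=data.dict(), queue="filter_queue")
output = {"task_id": async_result1.task_id}
</code></pre>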
|
<python><queue><celery>
|
2022-12-18 23:15:40
| 1
| 11,937
|
Felipe
|
74,845,039
| 6,346,514
|
Python, KeyError: "There is no item named '{filename}' in the archive
|
<p>I am running code to try to extract the length of each file in a directory.
I cannot figure out why I am getting the error: <code>KeyError: "There is no item named 'Test1' in the archive"</code></p>
<p>Code:</p>
<pre><code>from io import BytesIO
from pathlib import Path
from zipfile import ZipFile
import pandas as pd
def process_files(files: list) -> pd.DataFrame:
file_mapping = {}
for file in files:
data_mapping = pd.read_excel(BytesIO(ZipFile(file).read(Path(file).stem)), sheet_name=None)
row_counts = []
for sheet in list(data_mapping.keys()):
row_counts.append(len(data_mapping.get(sheet)))
file_mapping.update({file: sum(row_counts)})
frame = pd.DataFrame([file_mapping]).transpose().reset_index()
frame.columns = ["file_name", "row_counts"]
return frame
path = r'//Stack/Over/Flow/testfiles/'
zip_files = (str(x) for x in Path(path).glob("*.zip"))
df = process_files(zip_files)
print(df)
</code></pre>
<p>my files:</p>
<p><a href="https://i.sstatic.net/lr2dw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lr2dw.png" alt="enter image description here" /></a></p>
<p>inside test1 zip:</p>
<p><a href="https://i.sstatic.net/voPu0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/voPu0.png" alt="enter image description here" /></a></p>
<p>Any help would be appreciated.</p>
<p>Edit - How would I apply the code to the zip files in subdirectories like so? So within 2022-05 there would be zip files, and within 2022-06 there would be zip files, etc.</p>
<p><a href="https://i.sstatic.net/R0U9K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R0U9K.png" alt="enter image description here" /></a></p>
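<p>For reference, the direction I am experimenting with now, reusing the imports above: iterate over the archive's own member names instead of guessing them from the zip file name, and use <code>rglob</code> so subfolders like 2022-05 and 2022-06 are covered (the helper name is my own placeholder):</p>
<pre><code>def count_rows(zip_path: str) -> int:
    total = 0
    with ZipFile(zip_path) as zf:
        for member in zf.namelist():                       # actual names inside the archive
            if member.lower().endswith((".xlsx", ".xls")):
                sheets = pd.read_excel(BytesIO(zf.read(member)), sheet_name=None)
                total += sum(len(frame) for frame in sheets.values())
    return total

path = r'//Stack/Over/Flow/testfiles/'
zip_files = (str(p) for p in Path(path).rglob("*.zip"))    # rglob also walks subdirectories
df = pd.DataFrame([(f, count_rows(f)) for f in zip_files],
                  columns=["file_name", "row_counts"])
print(df)
</code></pre>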
|
<python><pandas>
|
2022-12-18 23:12:15
| 1
| 577
|
Jonnyboi
|
74,844,847
| 1,245,262
|
Why would a Python call to an sqllite3 database crash with only 'Killed' as an output
|
<p>I'm trying to run some 3rd party code that attempts to read a database it has created. However, when I run this code, it crashes and only gives me the word <code>Killed</code> as output.</p>
<p>I inserted</p>
<pre><code>import pdb
pdb.set_trace()
</code></pre>
<p>Before the relevant section of the code:</p>
<pre><code>db_name = mod_path / 'data/ais.db'
with sqlite3.connect(str(db_name)) as conn:
c = conn.cursor()
c.execute("SELECT name FROM sqlite_master "+
"WHERE type='table' AND name='meta'")
</code></pre>
<p>Which produced this output:</p>
<pre><code>> /media/me/Fortress/bb/src/bb/data.py(316)download_data()
-> db_name = mod_path / 'data/ais.db'
(Pdb) n
> /media/me/Fortress/bb/src/bb/data.py(317)download_data()
-> with sqlite3.connect(str(db_name)) as conn:
(Pdb) n
> /media/me/Fortress/bb/src/bb/data.py(318)download_data()
-> c = conn.cursor()
(Pdb) conn
<sqlite3.Connection object at 0x7fa2f1e9d840>
(Pdb) n
> /media/me/Fortress/bb/src/bb/data.py(319)download_data()
-> c.execute("SELECT name FROM sqlite_master "+
(Pdb) c
Killed
</code></pre>
<p>So, I know the issue is with the <code>SELECT</code> stmt, but I don't know much about databases, so I don't know what the next debugging step should be. Is there a problem I should check for in the database? Or something wrong with the <code>SELECT</code> stmt? Or something else entirely?</p>
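<p>For what it is worth, the first check I plan to run before digging further (my working assumption is either a corrupt database file or the OS killing the process for memory):</p>
<pre><code>import sqlite3

with sqlite3.connect('data/ais.db') as conn:
    print(conn.execute('PRAGMA integrity_check').fetchone())
</code></pre>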
|
<python><python-3.x><sqlite>
|
2022-12-18 22:32:50
| 0
| 7,555
|
user1245262
|
74,844,790
| 2,272,824
|
What is the most efficient way to handle conversion from full to symmetric second order tensors using numpy?
|
<p>I am processing symmetric second order tensors (of stress) using numpy. In order to transform the tensors I have to generate a fully populated tensor, do the transformation and then recover the symmetric tensor in the rotated frame.</p>
<p>My input is a 2D numpy array of symmetric tensors (nx6). The code below works, but I'm pretty sure there must be a more efficient and/or elegant way to manipulate the arrays but I can't seem to figure it out.</p>
<p>If anyone can suggest an improvement I'd be very grateful. The sample input is just 2 symmetric tensors, but in use this could be millions of tensors, hence the concern with efficiency.</p>
<p>Thanks,</p>
<p>Doug</p>
<pre><code># Sample symmetric input (S11, S22, S33, S12, S23, S13)
sym_tens_in=np.array([[0,9], [1,10], [2,11], [3,12], [4,13], [5,14]])
# Expand to full tensor
tens_full=np.array([[sym_tens_in[0], sym_tens_in[3], sym_tens_in[4]],
[sym_tens_in[3], sym_tens_in[1], sym_tens_in[5]],
[sym_tens_in[4], sym_tens_in[5], sym_tens_in[2]]])
# Transpose and reshape to n x 3 x 3
tens_full=np.transpose(tens_full, axes=(2, 0, 1))
# This where the work on the full tensor will go....
# Reshape for extraction of the symmetric tensor
tens_full=np.reshape(tens_full, (2,9))
# Create an array for the test ouput symmetric tensor
sym_tens_out=np.empty((2,6), dtype=np.int32)
# Extract the symmetric components
sym_tens_out[:,0]=tens_full[:,0]
sym_tens_out[:,1]=tens_full[:,4]
sym_tens_out[:,2]=tens_full[:,8]
sym_tens_out[:,3]=tens_full[:,2]
sym_tens_out[:,4]=tens_full[:,3]
sym_tens_out[:,5]=tens_full[:,5]
# Transpose....
sym_tens_out=np.transpose(sym_tens_out)
</code></pre>
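<p>For reference, the indexing-based route I have been experimenting with (same data and component layout as the code above, so hopefully directly comparable):</p>
<pre><code>import numpy as np

sym_tens_in = np.array([[0, 9], [1, 10], [2, 11], [3, 12], [4, 13], [5, 14]])

# position of each of the six components inside the 3x3 tensor (matches the layout above)
idx = np.array([[0, 3, 4],
                [3, 1, 5],
                [4, 5, 2]])

tens_full = sym_tens_in[idx].transpose(2, 0, 1)      # n x 3 x 3

# ... transformation work on tens_full goes here ...

rows = [0, 1, 2, 0, 0, 1]                            # where the six unique components live
cols = [0, 1, 2, 1, 2, 2]
sym_tens_out = tens_full[:, rows, cols].T            # back to 6 x n, like the original output
</code></pre>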
|
<python><numpy><numpy-ndarray><array-broadcasting><numpy-slicing>
|
2022-12-18 22:21:57
| 2
| 391
|
scotsman60
|
74,844,554
| 8,359,217
|
Applying Black formatter on symlink directory
|
<p>I have the following folder structure:</p>
<pre><code>- etl
- raw
- raw.py
- etl (symlink)
- raw
- raw.py
- etl (symlink)
... (infinite paths)
- config.py
</code></pre>
<p>I've created a <a href="https://linuxize.com/post/how-to-create-symbolic-links-in-linux-using-the-ln-command/" rel="nofollow noreferrer">symlink</a> from the <code>etl</code> folder. I'm using symlink so that I can have absolute imports from <code>raw.py</code> into <code>config.py</code>. For example, in <code>raw.py</code> I have the following import: <code>from etl.config import MY_CONSTANT</code>. This absolute import is made possible due to the symlink.</p>
<p>However, when I try to run Black, as in <code>poetry run black ${INCLUDE_FILES}</code> with <code>INCLUDE_FILES = ./etl</code>, I run into an infinite loop, as Black tries to enter the symlinks and keeps going forever. This infinite loop doesn't happen with, for example, Pylint and Flake8.</p>
<p>I could try running Black on every single specific <code>.py</code> file. However, as I have many files, that'd be time consuming.</p>
<p>Is there a way to make Black ignore the symlinks? Or is there another workaround for my situation?</p>
|
<python><black-code-formatter>
|
2022-12-18 21:30:38
| 0
| 303
|
Matheus Schaly
|
74,844,481
| 1,976,597
|
How do I control how a Python class is printed (NOT a class instance)
|
<p>I have a Python class with <em>class attributes</em>, and I would like it to show the values of the attributes when printed.</p>
<p>But instead, it just shows the name of the class.</p>
<pre class="lang-py prettyprint-override"><code>class CONFIG:
ONE = 1
TWO = 2
print(CONFIG) # <class '__main__.CONFIG'>
</code></pre>
<p>Of course, if I wanted to instantiate the class, then I could <code>def __repr__()</code> and control how it prints.</p>
<p>The question is, how do I control how a class (NOT a class instance) prints? Is there an equivalent to <code>__repr__</code> for the class itself?</p>
<p>Workarounds:</p>
<ul>
<li>Write a function that uses <code>vars()</code> or <code>dir()</code> to iterate over the attributes</li>
<li>Instantiate the class, stop doing things a weird way :)</li>
<li>Change my code in some other way so I no longer have the requirement to print the class (use Enum, SimpleNamespace, etc.)</li>
</ul>
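<p>The closest thing I have found to an "equivalent of <code>__repr__</code> for the class itself" is giving the class a metaclass and defining <code>__repr__</code> there; a minimal sketch:</p>
<pre><code>class ShowAttrs(type):
    def __repr__(cls):
        attrs = {k: v for k, v in vars(cls).items() if not k.startswith("__")}
        return f"{cls.__name__}({attrs})"

class CONFIG(metaclass=ShowAttrs):
    ONE = 1
    TWO = 2

print(CONFIG)  # CONFIG({'ONE': 1, 'TWO': 2})
</code></pre>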
|
<python>
|
2022-12-18 21:17:22
| 0
| 4,914
|
David Gilbertson
|
74,844,432
| 587,021
|
SQLalchemy keyword argument equivalent of `Column('id', Integer, Identity())`
|
<p>Attempt: <code>Column(name='id', type_=Integer, default=Identity())</code></p>
<p>I'm looking at <a href="https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.Identity" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.Identity</a> and <a href="https://docs.sqlalchemy.org/en/14/core/metadata.html#sqlalchemy.schema.Column.__init__" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/metadata.html#sqlalchemy.schema.Column.__init__</a>, but I can't see what the equivalent would be. I tried <code>default=Identity()</code> but got a:</p>
<blockquote>
<p>sqlalchemy.exc.ArgumentError: ColumnDefault may not be a server-side default type.</p>
</blockquote>
<p>Am I meant to use <a href="https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.DefaultClause" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/defaults.html#sqlalchemy.schema.DefaultClause</a> or <code>sqlalchemy.schema.DefaultGenerator</code>? - Or is it not meant to be a <code>default=</code>/equivalent at all?</p>
|
<python><sql><sqlalchemy>
|
2022-12-18 21:08:47
| 1
| 13,982
|
A T
|
74,844,395
| 7,422,352
|
matplotlib's ion() does not make any difference
|
<p>To test the matplotlib's interactive mode, I used the following two code snippets:</p>
<p><strong>Snippet 1:</strong> Has <code>plt.ion()</code> with no call to <code>fig.canvas.draw()</code>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([])
y_1 = np.array([])
y_2 = np.array([])
plt.ion()
fig = plt.figure(figsize=(9,4))
ax1 = plt.subplot(1,2,1)
ax2 = plt.subplot(1,2,2)
fig.show()
for i in range(0, 100000):
x = np.append(x, i)
y_1 = np.append(y_1, i**2)
y_2 = np.append(y_2, i**3)
ax1.clear()
ax2.clear()
ax1.scatter(x, y_1)
ax2.scatter(x, y_2)
#fig.canvas.draw()
plt.pause(0.02)
</code></pre>
<p><strong>Snippet 2:</strong> Has <code>plt.ion()</code> with call to <code>fig.canvas.draw()</code>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([])
y_1 = np.array([])
y_2 = np.array([])
plt.ion()
fig = plt.figure(figsize=(9,4))
ax1 = plt.subplot(1,2,1)
ax2 = plt.subplot(1,2,2)
fig.show()
for i in range(0, 100000):
x = np.append(x, i)
y_1 = np.append(y_1, i**2)
y_2 = np.append(y_2, i**3)
ax1.clear()
ax2.clear()
ax1.scatter(x, y_1)
ax2.scatter(x, y_2)
fig.canvas.draw()
plt.pause(0.02)
</code></pre>
<p>I could not see any difference between the two plotted figures. Both the figures were updated/redrawn automatically.</p>
<p>Further, I set the interactive mode off using <code>plt.ioff()</code> with no call to <code>fig.canvas.draw()</code>:</p>
<p><strong>Snippet 3:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([])
y_1 = np.array([])
y_2 = np.array([])
plt.ioff()
fig = plt.figure(figsize=(9,4))
ax1 = plt.subplot(1,2,1)
ax2 = plt.subplot(1,2,2)
fig.show()
for i in range(0, 100000):
x = np.append(x, i)
y_1 = np.append(y_1, i**2)
y_2 = np.append(y_2, i**3)
ax1.clear()
ax2.clear()
ax1.scatter(x, y_1)
ax2.scatter(x, y_2)
#fig.canvas.draw()
plt.pause(0.02)
</code></pre>
<p>Same behaviour was observed across all the 3 snippets.</p>
<p>According to what is mentioned on <a href="https://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg21139.html" rel="nofollow noreferrer">https://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg21139.html</a> about <code>ion()</code>:</p>
<blockquote>
<p>PS: the documentation I was referring to reads: "The interactive property of
the pyplot interface controls whether a figure canvas is drawn on every
pyplot command. If interactive is False, then the figure state is updated on
every plot command, but will only be drawn on explicit calls to draw(). When
interactive is True, then every pyplot command triggers a draw."</p>
</blockquote>
<p>the expected behaviour of <strong>Snippet 3</strong> should be different than that of <strong>Snippet 1</strong> & <strong>Snippet 2.</strong> For <strong>Snippet 3</strong>, the figure should not get updated/redrawn automatically.</p>
<p>Is my understanding correct or am I missing something?</p>
|
<python><matplotlib><data-science><matplotlib-animation><matplotlib-ion>
|
2022-12-18 21:01:20
| 0
| 5,381
|
Deepak Tatyaji Ahire
|
74,844,309
| 1,243,255
|
Downloading all zip files from url
|
<p>I need to download all the zip files from the url: <a href="https://www.ercot.com" rel="nofollow noreferrer">https://www.ercot.com</a></p>
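<p>A rough sketch of what I have in mind, assuming the .zip links appear as plain anchor tags in the page HTML (which may not hold if the page is rendered by JavaScript):</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base = "https://www.ercot.com"
soup = BeautifulSoup(requests.get(base).text, "html.parser")

for a in soup.find_all("a", href=True):
    if a["href"].lower().endswith(".zip"):
        url = urljoin(base, a["href"])
        name = url.rsplit("/", 1)[-1]
        with open(name, "wb") as f:
            f.write(requests.get(url).content)
</code></pre>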
|
<python><web-scraping><beautifulsoup>
|
2022-12-18 20:49:28
| 1
| 4,837
|
Zanam
|
74,844,262
| 9,481,479
|
How can I solve error "module 'numpy' has no attribute 'float'" in Python?
|
<p>I am using NumPy 1.24.0.</p>
<p>On running this sample code line,</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
num = np.float(3)
</code></pre>
<p>I am getting this error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/.local/lib/python3.8/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'float'
</code></pre>
<p>How can I fix it?</p>
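<p>For context, this is what I switched to locally after reading that <code>np.float</code> was only an alias for the builtin <code>float</code> and was removed in 1.24; I would still like to understand the recommended fix:</p>
<pre><code>import numpy as np

num = float(3)         # plain Python float works wherever np.float was used as an alias
num64 = np.float64(3)  # or be explicit about the NumPy scalar type
</code></pre>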
|
<python><numpy>
|
2022-12-18 20:42:16
| 9
| 1,004
|
Nawin K Sharma
|
74,844,237
| 7,168,244
|
How to include both percent and N as bar labels in grouped bar chart
|
<p>I recently asked a question on how to include both % and N as bar labels and received assistance: <a href="https://stackoverflow.com/questions/74832746/include-both-and-n-as-bar-labels/74833282#74833282">Include both % and N as bar labels</a>. I am trying to use that example in a bar plot grouped over a variable, as per the example below:</p>
<pre><code>data = {
'id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50],
'baseline': [1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1],
'endline': [1, 0, np.nan, 1, 0, 0, 1, np.nan, 1, 0, 0, 1, 0, 0, 1, 0, np.nan, np.nan, 1, 0, 1, np.nan, 0, 1, 0, 1, 0, np.nan, 1, 0, np.nan, 0, 0, 0, np.nan, 1, np.nan, 1, np.nan, 0, np.nan, 1, 1, 0, 1, 1, 1, 0, 1, 1],
'gender': ['male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female']
}
df = pd.DataFrame(data)
sns.set_style('white')
ax = sns.barplot(data = df.melt(id_vars = ['id', 'gender'], value_vars = ['baseline', 'endline']),
x = 'variable', y = 'value',
estimator=lambda x: np.sum(x) / np.size(x) * 100, ci=None,
color='cornflowerblue', hue = 'gender')
N = df.melt(id_vars = ['id', 'gender'], value_vars = ['baseline', 'endline']).groupby(['gender', 'variable'], sort=False)['value'].count().to_numpy()
N_it = '$\it{N}$'
labels=[f'{np.round(perc,1)}% ({N_it} = {n})'
for perc, n in zip(ax.containers[0].datavalues, N)]
ax.bar_label(ax.containers[0], labels = labels, fontsize = 10)
ax.bar_label(ax.containers[1], labels = labels, fontsize = 10)
sns.despine(ax = ax, left = True)
ax.grid(True, axis = 'y')
ax.yaxis.set_major_formatter(PercentFormatter(100))
ax.set_xlabel('')
ax.set_ylabel('')
plt.tight_layout()
plt.show()
</code></pre>
<p>but it seems I am missing something in getting the right results.</p>
|
<python><pandas><matplotlib><seaborn><grouped-bar-chart>
|
2022-12-18 20:39:08
| 1
| 481
|
Stephen Okiya
|
74,844,201
| 6,205,382
|
Processing and validating JSON containing duplicate keys
|
<p>I'm trying to validate "json" files that we receive because the source code that generates these files has some issues that cannot be corrected without a major overhaul. There are many objects in the "json" that are invalid. An example below, which reuses keys for port naming.</p>
<p>example invalid json file</p>
<pre><code>[
{"TimeStamp": "2021-11-28", "Address": { "port": "eth2 present", "port": "eth0 present", "port": "eth1 present" }},
{"TimeStamp": "2021-11-29", "CamStatus": 1},
{"TimeStamp": "2021-11-30", "CamDone": 0}
]
</code></pre>
<p>What I am trying to do is first identify which of the rows are invalid. From there, I want to clean them up, if possible.</p>
<p>Using <code>json.load()</code>, I see an odd behavior where the invalid JSON is parsed but excludes two key/value pairs. I am curious if this is expected, because I would have expected a <code>ValueError</code>.</p>
<pre><code>with open(r"sample.json") as json_file:
content = json.load(json_file)
content
</code></pre>
<p>Result</p>
<pre><code>[{'TimeStamp': '2021-11-28', 'Address': {'port': 'eth1 present'}},
{'TimeStamp': '2021-11-29', 'CamStatus': 1},
{'TimeStamp': '2021-11-30', 'CamDone': 0}]
</code></pre>
<p>To identify corrupt rows, I wrote the below using <code>json.loads()</code>, but I am also getting unexpected behavior where the second object is being read as invalid.</p>
<pre><code>with open("sample.json") as json_file:
for line in json_file:
try:
a = json.loads(line)
print('valid JSON', line)
except:
print('invalid JSON', line)
</code></pre>
<p>Output</p>
<pre><code>invalid JSON [
invalid JSON {"TimeStamp": "2021-11-28", "Address": { "port": "eth2 present", "port": "eth0 present", "port": "eth1 present" }},
invalid JSON {"TimeStamp": "2021-11-29", "CamStatus": 1},
valid JSON {"TimeStamp": "2021-11-30", "CamDone": 0}
invalid JSON ]
</code></pre>
<p>What I am attempting to do is generate a structure like below:</p>
<pre><code>[{'TimeStamp': '2021-11-28', 'Address': {'port0': 'eth1 present', 'port1': 'eth2 present', 'port2': 'eth3 present'}},
{'TimeStamp': '2021-11-29', 'CamStatus': 1},
{'TimeStamp': '2021-11-30', 'CamDone': 0}]
</code></pre>
<p>Any thoughts, modules, sample code that could help me out?</p>
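<p>The most promising direction I have found so far is an <code>object_pairs_hook</code>, which sees the duplicate keys before they collapse; the renaming scheme below (port0, port1, ...) is just my own convention:</p>
<pre><code>import json
from collections import Counter

def dedupe_pairs(pairs):
    counts = Counter(key for key, _ in pairs)
    seen = Counter()
    out = {}
    for key, value in pairs:
        if counts[key] > 1:                    # a duplicated key: rename and keep every value
            out[f"{key}{seen[key]}"] = value
            seen[key] += 1
        else:
            out[key] = value
    return out

with open("sample.json") as json_file:
    content = json.load(json_file, object_pairs_hook=dedupe_pairs)
print(content)
</code></pre>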
|
<python><json>
|
2022-12-18 20:33:09
| 1
| 2,239
|
CandleWax
|