| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,862,962
| 4,826,074
|
Exception thrown when using Python in a for-loop in a C++ program
|
<p>I have a C++ program that I build in Visual Studio 2022 on Windows 10 and that embeds the Python interpreter. In the code I import numpy in a for-loop. The code executes fine on the first iteration, but on the second iteration an exception is thrown as soon as the code tries to execute the import statement. I have googled the error and searched for similar issues on Stack Overflow but have not found a solution. I have also isolated the error in a minimal example. Here is the code from the example:</p>
<pre><code>#include <C:\Program Files\Python39\include\Python.h>
#include <iostream>
using namespace std;
int main() {
    for (int i = 0; i < 5; i++) {
        cout << i << endl;
        Py_Initialize();
        PyRun_SimpleString("import numpy as np");
        PyRun_SimpleString("arr = np.array([1,2,3])");
        PyRun_SimpleString("print(arr)");
        Py_Finalize();
    }
}
</code></pre>
<p>This is the output from the console when running the code:</p>
<pre><code>0
[1 2 3]
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
</code></pre>
<p>Here is the error message in visual studio:</p>
<pre><code>Exception thrown at 0x00007FFCB9461319 (_multiarray_umath.cp39-win_amd64.pyd) in App4.exe: 0xC0000005: Access violation writing location 0x0000000000000008.
</code></pre>
<p>What could be going on here and how can I solve it?</p>
|
<python><c++><cpython>
|
2022-12-20 12:16:31
| 0
| 380
|
Johan hvn
|
74,862,947
| 3,461,321
|
Animating plot + image subplots synchronously in matplotlib
|
<p>I have a figure that contains both a curve plot and a corresponding image. I'd like for the two of them to shift in sync with one another -- so that the white region of the image follows the location in the curve where all three curves match up. (For the curious, this is intended as a simple simulation of a multiwavelength interferogram.)</p>
<p>The three curves in the plot are copied point-by-point into the R, G, and B channels of the image frame. As a result, I expected the two to be naturally in sync. However, in the animation one can see that the curves shift at a faster rate than the colors in the image.</p>
<p>After trying a number of adjustments (aspect ratio of the image, changing the image "extent", etc.) I have so far failed to locate the problem.</p>
<pre><code>from numpy import *
import matplotlib.pyplot as plt
from matplotlib import animation
npts = 501
wavelength_blue = 0.45
wavelength_green = 0.55
wavelength_red = 0.65
z = 1.5 * linspace(-0.75, 0.75, npts) ## distance in microns
nframes = 100
maxshift = 0.55 ## in microns of distance
img_height = 100
img = 255 * ones((img_height,npts,3), 'uint8')
(fig,axes) = plt.subplots(2, num='propagation_phase_shifted')
p1 = axes[0].plot([], [], 'b-')
p2 = axes[0].plot([], [], 'g-')
p3 = axes[0].plot([], [], 'r-')
axes[0].set_xlim((-0.8,0.8))
axes[0].set_ylim((-0.1,1.1))
p4 = axes[1].imshow(img, extent=[-0.8,0.8,-0.1,1.1], aspect=1/3)
axes[0].xaxis.set_label_position('top')
axes[0].xaxis.set_ticks_position('top')
axes[0].set_xlabel('z-distance (um)')
axes[0].set_ylabel('wave amplitude')
axes[0].tick_params(axis='x', direction='out')
plt.subplots_adjust(hspace=0.05)
def animate(n):
    shift = (n / (nframes - 1.0)) * maxshift
    phi_red = 2.0 * pi * (z-shift) / wavelength_red
    phi_green = 2.0 * pi * (z-shift) / wavelength_green
    phi_blue = 2.0 * pi * (z-shift) / wavelength_blue
    y_red = 0.5 * (1.0 + cos(phi_red))
    y_green = 0.5 * (1.0 + cos(phi_green))
    y_blue = 0.5 * (1.0 + cos(phi_blue))
    for x in range(img_height):
        img[x,:,0] = uint8(255 * y_red)
        img[x,:,1] = uint8(255 * y_green)
        img[x,:,2] = uint8(255 * y_blue)
    p1[0].set_data(z, y_red)
    p2[0].set_data(z, y_green)
    p3[0].set_data(z, y_blue)
    p4.set_data(img)
    return(p1[0], p2[0], p3[0], p4)
anim = animation.FuncAnimation(fig, animate, frames=nframes, interval=20, blit=True)
FFwriter = animation.FFMpegWriter(fps=30)
anim.save('result.mp4', writer=FFwriter)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/gBKNP.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gBKNP.gif" alt="enter image description here" /></a></p>
|
<python><matplotlib><animation>
|
2022-12-20 12:15:31
| 1
| 1,685
|
nzh
|
74,862,928
| 3,521,180
|
Why am I not able to convert a string-type column to date format in PySpark?
|
<p>I have a column which is in the "20130623" format. I am trying to convert it into dd-mm-YYYY. I have seen various posts online, including here, but I only found one solution, shown below:</p>
<pre><code>from datetime import datetime
df = df2.withColumn("col_name", datetime.utcfromtimestamp(int("col_name")).strftime('%d-%m-%y'))
</code></pre>
<p>However, it throws an error that the input should be <code>int type</code>, <code>not the string type</code>. I tried to convert with the help of the <code>int()</code> function, but even that doesn't seem to help.</p>
<p>below is the error that I see when converting</p>
<pre><code>invalid literal for int() with base 10: 'col_name'
</code></pre>
<p>I am not sure if it is taking <code>col_name</code> itself as a string, or its value as a string.
Please suggest how I can do this, or the best way to get the required output.</p>
<p>Note: I cannot use pandas in my environment.</p>
<p>Thank you.</p>
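<p>A minimal sketch using Spark's own date functions instead of Python's <code>datetime</code> (which here receives the literal string <code>"col_name"</code>, hence the <code>invalid literal for int()</code> error):</p>
<pre><code>from pyspark.sql import functions as F

# parse the yyyyMMdd string into a date, then render it as dd-MM-yyyy
df = df2.withColumn(
    "col_name",
    F.date_format(F.to_date(F.col("col_name"), "yyyyMMdd"), "dd-MM-yyyy"),
)
</code></pre>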
|
<python><python-3.x><pyspark>
|
2022-12-20 12:13:56
| 1
| 1,150
|
user3521180
|
74,862,859
| 13,227,420
|
How to efficiently reorder rows based on condition?
|
<p>My dataframe:</p>
<pre><code>df = pd.DataFrame({'col_1': [10, 20, 10, 20, 10, 10, 20, 20],
'col_2': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']})
col_1 col_2
0 10 a
1 20 b
2 10 c
3 20 d
4 10 e
5 10 f
6 20 g
7 20 h
</code></pre>
<p>I don't want consecutive rows with col_1 = 10; instead, a row below a repeating 10 should jump up by one (in this case, index 6 should become index 5 and vice versa), so the order is always 10, 20, 10, 20...</p>
<p>My current solution:</p>
<pre><code>for idx, row in df.iterrows():
    if row['col_1'] == 10 and df.iloc[idx + 1]['col_1'] != 20:
        df = df.rename({idx + 1: idx + 2, idx + 2: idx + 1})
df = df.sort_index()
df
</code></pre>
<p>gives me:</p>
<pre><code> col_1 col_2
0 10 a
1 20 b
2 10 c
3 20 d
4 10 e
5 20 g
6 10 f
7 20 h
</code></pre>
<p>which is what I want, but it is very slow (2.34s for a dataframe with just over 8000 rows).
Is there a way to avoid the loop here?
Thanks</p>
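<p>A vectorized sketch, assuming the 10s and 20s are meant to alternate strictly: number the rows within each <code>col_1</code> group and sort by that counter, which interleaves the two groups while preserving their original order:</p>
<pre><code># rank each row within its col_1 group (0, 1, 2, ... per group)
key = df.groupby('col_1').cumcount()

# sorting by (rank, col_1) yields 10, 20, 10, 20, ... in order of appearance
out = (df.assign(_key=key)
         .sort_values(['_key', 'col_1'], kind='stable')
         .drop(columns='_key')
         .reset_index(drop=True))
</code></pre>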
|
<python><pandas>
|
2022-12-20 12:07:05
| 1
| 394
|
sierra_papa
|
74,862,840
| 6,368,217
|
How to validate Keycloak webhook request?
|
<p>I'm implementing an endpoint to receive events from Keycloak using a webhook, but I don't know how to validate this request.</p>
<p>I see that the request contains a header "X-Keycloak-Signature". Also, I set a WEBHOOK_SECRET. It seems I somehow need to generate this signature from the request and the secret and then compare the two. So it looks like this:</p>
<pre><code>import os
import hashlib
from flask import abort, request

def validate_keycloak_signature(f):
    def wrapper(self, *args, **kwargs):
        secret = os.getenv("WEBHOOK_SECRET")
        method = request.method
        uri = request.url
        body = request.get_data(as_text=True)
        smub = secret + method + uri + body
        h = hashlib.sha256(smub.encode("utf-8")).hexdigest()
        signature = request.headers.get("X-Keycloak-Signature")
        if h != signature:
            return abort(403)
        return f(self, *args, **kwargs)
    return wrapper
</code></pre>
<p>However, I don't know the algorithm. Here, I tried this one:</p>
<pre><code>1. Create a string that concatenates together the following: Client secret + http method + URI + request body (if present)
2. Create a SHA-256 hash of the resulting string.
3. Compare the hash value to the signature. If they're equal then this request has passed validation.
</code></pre>
<p>But it doesn't work. Does anybody have any ideas?</p>
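<p>A sketch worth trying, under the assumption (the exact algorithm depends on the Keycloak webhook extension in use) that the signature is a plain HMAC-SHA256 of the raw request body keyed with the shared secret, a common webhook convention, rather than the concatenation scheme above:</p>
<pre><code>import hashlib
import hmac
import os

from flask import request

def is_valid_signature():
    secret = os.environ["WEBHOOK_SECRET"].encode("utf-8")
    body = request.get_data()  # raw bytes, exactly as received
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    received = request.headers.get("X-Keycloak-Signature", "")
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, received)
</code></pre>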
|
<python><flask><request><keycloak><webhooks>
|
2022-12-20 12:05:15
| 1
| 991
|
Alexander Shpindler
|
74,862,829
| 5,574,107
|
Sum with rows from two dataframes
|
<p>I have two dataframes. One has months 1-5 and a value for each month, which are the same for every ID; the other has an ID and a unique multiplier, e.g.:</p>
<pre><code>data = [['m', 10], ['a', 15], ['c', 14]]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns=['ID', 'Unique'])
data2=[[1,0.2],[2,0.3],[3,0.01],[4,0.5],[5,0.04]]
df2 = pd.DataFrame(data2, columns=['Month', 'Value'])
</code></pre>
<p>I want to compute sum( value / (1+unique)^(Month/12) ). E.g. for ID m, I want to compute value/(1+10)^(Month/12) for every row in df2 and sum the results. I wrote a for-loop to do this, but since my real table has 277,000 entries this takes too long!</p>
<pre><code>df['baseTotal'] = 0
for i in df.index.unique():
    for i in df2.Month.unique():
        df['base'] = df2['Value'] / pow(1 + df.loc[i, 'Unique'], df2['Month'] / 12.0)
        df['baseTotal'] = df['baseTotal'] + df['base']
</code></pre>
<p>Is there a more efficient way to do this?</p>
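<p>A vectorized sketch using NumPy broadcasting: build the full (ID x Month) grid of discounted values in one shot and sum along the month axis:</p>
<pre><code>import numpy as np

values = df2['Value'].to_numpy()           # shape (5,)
months = df2['Month'].to_numpy()           # shape (5,)
unique = df['Unique'].to_numpy()[:, None]  # shape (3, 1)

# broadcasting gives a (3, 5) grid: one row per ID, one column per month
df['baseTotal'] = (values / (1 + unique) ** (months / 12.0)).sum(axis=1)
</code></pre>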
|
<python><pandas><dataframe>
|
2022-12-20 12:03:59
| 1
| 453
|
user13948
|
74,862,575
| 8,708,364
|
`df.select_dtypes` works with `float` but not `int`
|
<p>I just came across this strange behaviour of <code>pd.DataFrame.select_dtypes</code>.</p>
<p>My <code>pd.DataFrame</code> is:</p>
<pre><code>df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': ['a', 'b', 'c', 'd'], 'c': [1.2, 3.4, 5.6, 7.8]})
</code></pre>
<p>Now if I want to select the numeric columns, I would do:</p>
<pre><code>df.select_dtypes([int, float])
</code></pre>
<p>But the output only contains the <code>float</code> column:</p>
<pre><code> c
0 1.2
1 3.4
2 5.6
3 7.8
</code></pre>
<p>Why is that? I listed both <code>float</code> and <code>int</code>, so why doesn't it list the integer column?</p>
<p>Here are the <code>dtypes</code>:</p>
<pre><code>>>> df.dtypes
a int64
b object
c float64
dtype: object
>>>
</code></pre>
<p>As you can see, they both end with <code>64</code>, but only <code>float</code> works.</p>
<p>More tests:</p>
<pre><code>>>> df.select_dtypes(int)
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3]
>>> df.select_dtypes(float)
c
0 1.2
1 3.4
2 5.6
3 7.8
>>>
</code></pre>
<p>Why does this happen?</p>
<hr />
<p>I know I could just do:</p>
<pre><code>df.select_dtypes(['int64', 'float64'])
</code></pre>
<p>But I want to know the reason for this behavior.</p>
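<p>A likely explanation, hedged because it depends on the platform: pandas translates plain <code>int</code> to NumPy's default integer, which is the C <code>long</code> (32-bit on Windows), so it does not match an <code>int64</code> column there, while <code>float</code> maps straight to <code>float64</code>. A platform-independent sketch:</p>
<pre><code>import numpy as np

df.select_dtypes('number')            # every numeric column
df.select_dtypes([np.int64, float])   # or name the exact integer width
</code></pre>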
|
<python><pandas><dataframe>
|
2022-12-20 11:41:25
| 1
| 71,788
|
U13-Forward
|
74,862,252
| 9,172,344
|
redis lrange prefix fetching way
|
<p>I have list-type data in Redis, and there are too many keys to fetch all at once. I tried to use the Python redis <code>lrange</code> function to fetch in batches, such as 1000 at a time, but it does not seem to work: it always returns empty. <code>LRANGE</code> treats <code>*</code> as a literal character, so how should I do this?</p>
<pre><code>conn.Lrange(f'test-{id}-*', 0, 1000)
</code></pre>
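<p>A sketch of one way around this: <code>LRANGE</code> takes a single key and does not expand patterns, so first enumerate the matching keys with <code>SCAN</code> and then read each list in slices (<code>process</code> below is a placeholder for whatever handles the items):</p>
<pre><code># iterate matching keys without blocking the server (SCAN under the hood)
for key in conn.scan_iter(match=f'test-{id}-*'):
    start = 0
    while True:
        chunk = conn.lrange(key, start, start + 999)  # 1000 items per batch
        if not chunk:
            break
        process(chunk)  # hypothetical handler
        start += 1000
</code></pre>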
|
<python><redis>
|
2022-12-20 11:14:24
| 1
| 1,037
|
Frank
|
74,862,222
| 4,169,571
|
matplotlib: labeling of curves
|
<p>When I create a plot with many curves, it would be convenient to be able to label each curve at the right, where it ends.</p>
<p>The result of <code>plt.legend</code> produces too many similar colors, and the legend overlaps the plot.</p>
<p>As one can see in the example below, the use of <code>plt.legend</code> is not very effective:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt

n = 10
x = np.linspace(0, 1, n)
for i in range(n):
    y = np.linspace(x[i], x[i], n)
    plt.plot(x, y, label=str(i))
plt.legend(loc='upper right')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Qc4cf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qc4cf.png" alt="enter image description here" /></a></p>
<p>If possible I would like to have something similar to this plot:</p>
<p><a href="https://i.sstatic.net/tteID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tteID.png" alt="enter image description here" /></a></p>
<p>or this:</p>
<p><a href="https://i.sstatic.net/GAoYa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GAoYa.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><legend>
|
2022-12-20 11:12:04
| 1
| 817
|
len
|
74,862,107
| 20,574,508
|
Is the CSRF token correctly implemented in Flask / WTF?
|
<p>I don't know a lot about CSRF but I'd like to know if it is correctly implemented.</p>
<p>I have a simple signin form using the following code:</p>
<p>The CSRF protection is activated:</p>
<pre><code>from flask_wtf.csrf import CSRFProtect
csrf = CSRFProtect()
csrf.init_app(app)
</code></pre>
<p>In forms.py:</p>
<pre><code>from flask_wtf import FlaskForm
from wtforms import RadioField,SubmitField, StringField,PasswordField, BooleanField
from wtforms.validators import Length, Email, DataRequired
class SignInForm(FlaskForm):
email = StringField('Email', validators=[DataRequired(), Length(1,50),Email()])
password = PasswordField('Password', validators=[DataRequired()])
remember_me = BooleanField('Keep me signed in')
submit = SubmitField('Sign in' )
</code></pre>
<p>In the html page:</p>
<pre><code><form action="" method="post" style="text-align:left">
{{wtf.quick_form(form)}}
</form>
</code></pre>
<p>However, once the app is running, it works well but I find the CSRF token like this in the inspection mode:</p>
<pre><code><form action="" method="post" style="text-align:left">
<input id="csrf_token" name="csrf_token" type="hidden" value="very_complex_key">
...
</form>
</code></pre>
<p>Shouldn't the CSRF token <code>very_complex_key</code> be totally hidden? Or is it a per-session CSRF token that Flask manages internally?</p>
|
<python><flask><csrf><flask-wtforms>
|
2022-12-20 11:04:00
| 1
| 351
|
Nicolas-Fractal
|
74,862,082
| 12,930,958
|
How to accept an ascii character with python re (regex)
|
<p>I have a regex that validates a password so that it contains an uppercase letter, a lowercase letter, a number, a special character and a minimum of 8 characters.</p>
<p>The regex is:</p>
<pre class="lang-py prettyprint-override"><code>regex_password = r"^(?=.*[a-z])(?=.*[A-Z])(?=.*[\W]).{8,}$"
</code></pre>
<p>I use it in this function:</p>
<pre class="lang-py prettyprint-override"><code>def password_validator(password):
#REGEX PASSWORD : minimum 8 characters, 1 lowercase, 1 uppercase, 1 special caracter
regex_password = r"^(?=.*[a-z])(?=.*[A-Z])(?=.*[\W]).{8,}$"
if not re.match(regex_password, password):
raise ValueError("""value is not a valid password""")
return password
</code></pre>
<p>However, the use of "Β²" raises me an error, however, this same regex with a Javascript front-end validation, or on different regex validation site,works.</p>
<p>The problem is possible the ascii, so how can i do for python accept the ascii character in regex ?</p>
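<p>A hedged explanation with a sketch: Python's <code>re</code> module is Unicode-aware by default, so <code>\w</code> also matches characters such as <code>²</code>, which means <code>\W</code> (the special-character check) does not, while JavaScript's <code>\W</code> is ASCII-based. Forcing ASCII semantics should reconcile the two:</p>
<pre><code>import re

# re.ASCII restricts \w, \W, \d, \s to ASCII, matching the JavaScript semantics
regex_password = r"^(?=.*[a-z])(?=.*[A-Z])(?=.*[\W]).{8,}$"

def password_validator(password):
    if not re.match(regex_password, password, re.ASCII):
        raise ValueError("value is not a valid password")
    return password

print(password_validator("Abcdefg²"))  # accepted: ² now counts as a special character
</code></pre>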
|
<python><python-3.x><regex><python-re>
|
2022-12-20 11:02:07
| 1
| 2,729
|
fchancel
|
74,861,953
| 4,671,162
|
extract part of a date and put them into a dataframe
|
<p>Given a dataframe with a column dedicated to the date, I would like to know how to extract each part of the date at once (instead of <em>df["_Date"].str.match(pattern)</em> for each part) and put the parts in a dataframe.
For example:</p>
<pre><code>import pandas as pd
# date: str mm/dd/yyyy
df = pd.DataFrame(data=["12/1/2010 8:26", "12/3/2010 8:28", "12/6/2010 8:28", "02/15/2011 8:34", "02/18/2011 8:34", "03/01/2011 8:34"], columns=["_Date"])
...
print(newDf)
days monthAndYear time
1 12/2010 8:26
3 12/2010 8:28
6 12/2010 8:28
...
</code></pre>
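<p>One sketch, continuing from the example above: parse the column once with <code>to_datetime</code> and derive every part from the <code>.dt</code> accessor (the exact output formats below are assumptions based on the sample, and the time comes out zero-padded, e.g. "08:26"):</p>
<pre><code>d = pd.to_datetime(df["_Date"], format="%m/%d/%Y %H:%M")

newDf = pd.DataFrame({
    "days": d.dt.day,
    "monthAndYear": d.dt.strftime("%m/%Y"),
    "time": d.dt.strftime("%H:%M"),
})
</code></pre>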
|
<python><pandas>
|
2022-12-20 10:53:31
| 1
| 891
|
problème0123
|
74,861,849
| 8,551,737
|
How to fix the batch size in keras subclassing model?
|
<p>In the tf.keras functional API, I can fix the batch size like below:</p>
<pre><code>import tensorflow as tf
inputs = tf.keras.Input(shape=(64, 64, 3), batch_size=1) # I can fix batch size like this
x = tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="relu")(inputs)
outputs = x
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="custom")
</code></pre>
<p>My question is: how can I fix the batch size when I use the Keras subclassing approach?</p>
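<p>A hedged sketch of one option: with a subclassed model you can pin the batch dimension by building the model with a fully specified shape (whether downstream code enforces it at call time is a separate question):</p>
<pre><code>import tensorflow as tf

class Custom(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.deconv = tf.keras.layers.Conv2DTranspose(
            3, 3, strides=2, padding="same", activation="relu")

    def call(self, inputs):
        return self.deconv(inputs)

model = Custom()
model.build(input_shape=(1, 64, 64, 3))  # batch dimension fixed to 1 here
model.summary()
</code></pre>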
|
<python><tensorflow><keras><tensorflow2.0>
|
2022-12-20 10:44:52
| 1
| 455
|
YeongHwa Jin
|
74,861,844
| 10,576,322
|
Dependencies packages and subpackages
|
<p>I am really new to Python packaging. It is already a confusing topic, with recommended approaches that only a minority seems to apply. But to make it worse, I stumbled over this problem.</p>
<p>I started with the intention of writing a rather small package with a really focussed purpose. My first solution included an import of pandas, but I was asked to remove that dependency. So I refactored the function, and unsurprisingly it's slower, to an extent that I can hardly accept.</p>
<p>So a solution would be to provide a package that uses pandas and a package that doesn't, so that people can use either, depending on project requirements. Now I am wondering what the best way is to provide that.</p>
<p>I could:</p>
<ol>
<li>Create two seperate projects with different package names. That would work, but I want to keep the code together and there are functions and code shared.</li>
<li>Do 1. but import the shared parts from the simple package.</li>
<li>Use subpackages in case that would result in removing dependency for the core subpackage.</li>
</ol>
<p>What is a good way to fulfill the different needs?</p>
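<p>A fourth option, sketched as a suggestion rather than a recommendation: keep one package, declare pandas as an optional extra (e.g. <code>pip install mypkg[pandas]</code>, where <code>mypkg</code> is a stand-in name), and dispatch at runtime:</p>
<pre><code># mypkg/core.py -- hypothetical module layout
try:
    import pandas as pd
    HAS_PANDAS = True
except ImportError:
    HAS_PANDAS = False

def compute(data):
    """Use the fast pandas path when available, else the pure-Python one."""
    if HAS_PANDAS:
        return _compute_with_pandas(data)   # placeholder implementations
    return _compute_pure_python(data)
</code></pre>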
|
<python><dependencies><python-packaging>
|
2022-12-20 10:44:25
| 1
| 426
|
FordPrefect
|
74,861,715
| 6,119,375
|
Error when using function of imported package
|
<p>I am running the following code:</p>
<pre class="lang-py prettyprint-override"><code>!pip install scikit-learn==0.24
import sklearn.metrics
mape = mean_absolute_percentage_error(target_test.values, p)
</code></pre>
<p>but then get an error. What is the problem with this code?</p>
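<p>Almost certainly the issue is that <code>import sklearn.metrics</code> does not put <code>mean_absolute_percentage_error</code> into the local namespace, so the bare name raises a <code>NameError</code>. A sketch of the fix (the function exists as of scikit-learn 0.24, which the snippet installs):</p>
<pre><code>from sklearn.metrics import mean_absolute_percentage_error

mape = mean_absolute_percentage_error(target_test.values, p)

# or, keeping the original import style:
import sklearn.metrics
mape = sklearn.metrics.mean_absolute_percentage_error(target_test.values, p)
</code></pre>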
|
<python><scikit-learn>
|
2022-12-20 10:34:11
| 1
| 1,890
|
Nneka
|
74,861,607
| 2,181,056
|
python - add mock for requests.get without need to explicitly check it
|
<p>I want to add a <code>requests</code> mock for Python, so that any call to <code>requests.get</code> in a class calls the unittest mock function instead.</p>
<p>I don't want to call <code>requests.get</code> in the unit test itself; it may happen in many places in the code.</p>
<p>With my current code, I get:</p>
<blockquote>
<p><code>requests.exceptions.ConnectionError: HTTPConnectionPool(host='myurl1', port=80): Max retries exceeded with url: /test.txt (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbc5da52e00>: Failed to establish a new connection: [Errno -2] Name does not resolve'))</code></p>
</blockquote>
<p>i.e., in the main class <code>my_class</code>:</p>
<pre><code>import requests
from urllib import request, parse
...
mymock = None

class MyClass():
    def myfunc1(self):
        ...
        # requests.get can be called randomly
        requests.get('http://myurl1/test.txt')

    def myfunc2(self):
        requests.get('http://myurl2/test.txt')
<p>and the unittest:</p>
<pre><code>from s3_uploader import *
import pytest
import mock
import requests
import requests_mock

class MyMockClass(MyClass):
    @requests_mock.mock()
    def mock_get_requests(self, m):
        m.get('http://myurl1/test.txt')
        # if I call the base class myfunc1 here - there is no exception,
        # but I don't want to implement it like this (calling the main
        # function whenever the mock is declared):

class TestClass(unittest.TestCase):
    def setUp(self):
        mymock.mock_get_requests()

    def test_myfunc1(self):
        super.myfunc1()  # this raises an exception - the mock for requests.get is not injected

if __name__ == '__main__':
    mymock = MyMockClass()
    unittest.main()
<p>I don't want to do something like the following (which works, but I would need to do it for a lot of functions, and HTTP <code>requests.get</code> is not the only mock I need).</p>
<pre><code>class MyMockClass(MyClass):
    def mock_get_requests(self, m):
        m.get('http://myurl1/test.txt')
        super.myfunc1()

class TestClass(unittest.TestCase):
    @requests_mock.mock(self)
    def test_myfunc1(self, m):
        mymock.mock_get_requests()

if __name__ == '__main__':
    mymock = MyMockClass()
    unittest.main()
<p>I also tried using <a href="https://docs.pytest.org/en/7.1.x/how-to/monkeypatch.html" rel="nofollow noreferrer">monkeypatch</a>, but this also uses the original <code>requests.get</code> and not the mocked one. Like this:</p>
<pre><code>class MockResponse:
    @staticmethod
    def json():
        return {"mock_key": "mock_response"}

# monkeypatched requests.get moved to a fixture
@pytest.fixture
def mock_response(monkeypatch):
    """Requests.get() mocked to return {'mock_key':'mock_response'}."""
    def mock_get(*args, **kwargs):
        return MockResponse()
    logging.info('**** test mock response ****')
    monkeypatch.setattr(requests, "get", mock_get)
<p>and in the test class:</p>
<pre><code>@pytest.mark.usefixtures("mock_response")
class TestClass(unittest.TestCase):
    ...
    def test_requests(self):
        requests.get('http://myurl/test.txt')  # this is not using the mock
        # mymock.myfunc1 doesn't work either.
<p>What may be the alternative solution for this problem?</p>
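<p>A sketch of one alternative, assuming the <code>requests_mock</code> library and the <code>MyClass</code> from above: start a <code>Mocker</code> in <code>setUp</code> so it stays active for the whole test, intercepting every <code>requests.get</code> made anywhere in the code under test, with no per-function wiring:</p>
<pre><code>import unittest
import requests_mock

class TestClass(unittest.TestCase):
    def setUp(self):
        self.m = requests_mock.Mocker()
        self.m.start()
        self.addCleanup(self.m.stop)              # always stop, even on failure
        self.m.get(requests_mock.ANY, text='ok')  # catch-all response

    def test_myfunc1(self):
        MyClass().myfunc1()  # its internal requests.get() hits the mock
</code></pre>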
<p>Thanks.</p>
|
<python><python-requests><mocking>
|
2022-12-20 10:26:44
| 1
| 1,436
|
Eitan
|
74,861,529
| 5,684,405
|
Remove overlapping tuple ranges from list, leaving only the longest range
|
<p>For a given list of range-tuples, I need to remove overlapping range tuples, keeping only the longest range among those that overlap (or both, if they have the same length).</p>
<p>E.g.:</p>
<pre><code>input = [ [(1, 7), (2, 3), (7, 8), (9, 20)], [(4, 7), (2, 3), (7, 10)], [(1, 7), (2, 3), (7, 8)]]
expected_output = [ [(1,7), (9,20)], [(4,7), (2, 3), (7,10)], [(1,7)] ]
</code></pre>
<p>so <strong>only the longest overlapping range-tuple should not be removed</strong>.</p>
<pre><code>def overlap(x: tuple, y: tuple) -> bool:
    return bool(len(range(max(x[0], y[0]), min(x[1], y[1]) + 1)))

def drop_overlaps(tuples: list):
    def other_tuples(elems: list, t: tuple) -> list:
        return [e for e in elems if e != t]

    return [t for t in tuples
            if not any(overlap(t, other_tuple)
                       for other_tuple in other_tuples(tuples, t))]
</code></pre>
<p>How do I remove the overlaps and keep the longest of them and those that are non-overlapping?</p>
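<p>A sketch building on the existing <code>overlap</code> helper: drop a tuple only when a <em>strictly longer</em> overlapping tuple exists, which keeps equal-length overlaps intact (this assumes no duplicate tuples within a list):</p>
<pre><code>def drop_overlaps(tuples: list) -> list:
    def length(t: tuple) -> int:
        return t[1] - t[0]

    return [t for t in tuples
            if not any(overlap(t, o) and length(o) > length(t)
                       for o in tuples if o != t)]

result = [drop_overlaps(group) for group in input]
# [[(1, 7), (9, 20)], [(4, 7), (2, 3), (7, 10)], [(1, 7)]]
</code></pre>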
|
<python>
|
2022-12-20 10:18:30
| 1
| 2,969
|
mCs
|
74,861,480
| 5,355,993
|
Getting error: coverage.exceptions.ConfigError: File pattern can't include '**/**' while generating coverage for python project
|
<p>I am trying to generate code coverage for a python project. I am running the command:</p>
<pre><code>pytest --cov-config=./coveragerc --cov-report html:target/coverage --cov=./
</code></pre>
<p>This command should generate an HTML-based coverage report, but I am getting the error:</p>
<pre><code>+ pytest --cov-config=./coveragerc --cov-report html:target/coverage --cov=./
Traceback (most recent call last):
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/config/__init__.py", line 190, in console_main
code = main()
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/config/__init__.py", line 148, in main
config = _prepareconfig(args, plugins)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/config/__init__.py", line 329, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/helpconfig.py", line 103, in pytest_cmdline_parse
config: Config = outcome.get_result()
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1058, in pytest_cmdline_parse
self.parse(args)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1346, in parse
self._preparse(args, addopts=addopts)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1248, in _preparse
self.hook.pytest_load_initial_conftests(
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pytest_cov/plugin.py", line 152, in pytest_load_initial_conftests
plugin = CovPlugin(options, early_config.pluginmanager)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pytest_cov/plugin.py", line 203, in __init__
self.start(engine.Central)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pytest_cov/plugin.py", line 225, in start
self.cov_controller.start()
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pytest_cov/engine.py", line 44, in ensure_topdir_wrapper
return meth(self, *args, **kwargs)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/pytest_cov/engine.py", line 234, in start
self.cov.start()
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/control.py", line 603, in start
self._init_for_start()
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/control.py", line 557, in _init_for_start
self._inorout.configure(self.config)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/inorout.py", line 267, in configure
self.omit_match = GlobMatcher(self.omit, "omit")
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/files.py", line 288, in __init__
self.re = globs_to_regex(self.pats, case_insensitive=env.WINDOWS)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/files.py", line 372, in globs_to_regex
rx = join_regex(map(_glob_to_regex, patterns))
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/misc.py", line 185, in join_regex
regexes = list(regexes)
File "/home/jenkins/agent/workspace/change_dx-airflow_cd-flow_PR-106/.pyenv-home-jenkins-.pyenv-shims-python/lib/python3.8/site-packages/coverage/files.py", line 345, in _glob_to_regex
raise ConfigError(f"File pattern can't include {m[0]!r}")
coverage.exceptions.ConfigError: File pattern can't include '**/**'
</code></pre>
<p>This is my <code>coveragerc</code> file:</p>
<pre><code>[run]
omit =
test/*
**/**site-packages**/**
**__init__.py
</code></pre>
<p>I am unable to figure out the reason for the failure. We have been using this coveragerc for a while now and everything used to work fine. Please help.</p>
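<p>A hedged pointer: newer coverage.py releases validate glob patterns more strictly, and <code>**</code> is only allowed as a whole path component, so patterns like <code>**/**</code> and <code>**site-packages**</code> are rejected. A sketch of an equivalent <code>coveragerc</code>, under that assumption:</p>
<pre><code>[run]
omit =
    test/*
    */site-packages/*
    */__init__.py
</code></pre>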
|
<python><jenkins><continuous-integration><pytest><coverage.py>
|
2022-12-20 10:14:52
| 1
| 690
|
Piyush Das
|
74,861,252
| 2,205,969
|
Downgrade poetry version
|
<p>I need to downgrade my version of <code>poetry</code> to version <code>1.2.1</code>.</p>
<p>Currently, it's <code>1.2.2</code>.</p>
<pre><code>>>> poetry --version
Poetry (version 1.2.2)
</code></pre>
<p>I use the following command:</p>
<pre><code>>>> curl -sSL https://install.python-poetry.org | POETRY_VERSION=1.2.1 python3 -
Retrieving Poetry metadata
The latest version (1.2.1) is already installed.
</code></pre>
<p>But I'm told that <code>1.2.1</code> is already installed. Yet the poetry version is still stuck on the original.</p>
<pre><code>>>> poetry --version
Poetry (version 1.2.2)
</code></pre>
<p>The answer given <a href="https://stackoverflow.com/questions/74153270/how-to-downgrade-poetry">here</a> doesn't work (<code>poetry self update@1.2.1</code>) => <code>The command "self" does not exist.</code></p>
<p>What am I doing wrong here?</p>
|
<python><python-poetry>
|
2022-12-20 09:55:13
| 4
| 3,968
|
Ian
|
74,861,186
| 4,295,389
|
Access violation in PyArray_SimpleNew
|
<p>I have a CMake-based C++ project (library) which is wrapped for Python using SWIG.
A method of the library returns a <code>std::vector<int64_t></code> which is copied to a numpy array with the <code>%extend</code> keyword of SWIG (see foo.i below).</p>
<p><strong>foo.i</strong></p>
<pre><code>%{
#define SWIG_FILE_WITH_INIT
%}
%include "numpy.i"
%init %{
import_array();
%}
%include "foo.hpp"
%{
#include <foo.hpp>
%}
%extend std::vector<int64_t> {
    PyObject* asNpArray() {
        size_t nRows = self->size();
        npy_intp dims[1] = { (npy_intp)nRows };
        PyArrayObject* vec_array = (PyArrayObject *) PyArray_SimpleNew(1, dims, NPY_INT64);
        int64_t *vec_array_pointer = (int64_t*) PyArray_DATA(vec_array);
        copy(self->begin(), self->end(), vec_array_pointer);
        return (PyObject*)vec_array;
    }
}
%template(vectorint64) std::vector<int64_t>;
</code></pre>
<p>The wrapping works fine up to numpy <em>1.21.6</em>, but as soon as I update numpy to <em>1.22.0</em> or newer, it crashes when calling the <code>asNpArray()</code> method in Python. (I used Python 3.8 and 3.11, both with the same result.)</p>
<p>When I start debugging the wrapping, I see the exception:</p>
<p><em>Exception thrown at 0x00007FFDC58DDA5D (python38.dll) in python.exe: 0xC0000005: Access violation reading location 0x00000000000000E8.</em></p>
<p>at the C++ code line <code>PyArrayObject* vec_array = (PyArrayObject *) PyArray_SimpleNew(1, dims, NPY_INT64);</code></p>
<p>This is shown when running with an attached debugger, and the script terminates afterwards.</p>
<p>One peculiarity: when I step through the C++ code and single-step over this line, no exception is thrown and the correct numpy array is returned to my Python script.</p>
<p>Does anyone know what is wrong with the <em>extension</em>? I wasn't able to find changes in the numpy changelog which address this topic.</p>
<p>Or maybe there is a better way to convert a std::vector to a numpy array?</p>
<p>Below I have attached a few source and Python files as a minimal project to reproduce the whole issue.</p>
<p><strong>foo.h</strong></p>
<pre><code>/*
* \file foo.hpp
*/
#ifndef FOO
#define FOO
#include <stdint.h>
#include <vector>
namespace foo {

class Foo {
public:
    Foo();
    ~Foo();
    std::vector<int64_t> getVector();
};

} /* namespace */
#endif
</code></pre>
<p><strong>foo.cpp</strong></p>
<pre><code>/*
* \file foo.cpp
*/
#include "foo.hpp"
namespace foo {

Foo::Foo() { }
Foo::~Foo() { }

std::vector<int64_t> Foo::getVector()
{
    std::vector<int64_t> data {-2, -1, 0, 1, 2};
    return data;
}

} /* namespace */
</code></pre>
<p><strong>CMakeLists.txt</strong></p>
<pre><code>cmake_minimum_required (VERSION 3.18)
cmake_policy(SET CMP0048 NEW) # set version string with project() command
cmake_policy(SET CMP0094 NEW) # use LOCATION for Python lookup strategy
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
project(Foo
VERSION 0.0.1
DESCRIPTION "PyArray_SimpleNew test"
)
set(CMAKE_C_VISIBILITY_PRESET hidden)
set(CMAKE_CXX_VISIBILITY_PRESET hidden)
set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
#Static Library
set(STATIC_TARGET ${PROJECT_NAME}Static)
add_library(${STATIC_TARGET} STATIC
foo.cpp
)
set_target_properties(${STATIC_TARGET} PROPERTIES
PUBLIC_HEADER "foo.hpp"
POSITION_INDEPENDENT_CODE ON
)
target_include_directories(${STATIC_TARGET} PUBLIC . )
#Python wrapping
add_compile_definitions(SWIG)
find_package(SWIG REQUIRED)
include(${SWIG_USE_FILE})
set_property(SOURCE foo.i PROPERTY USE_LIBRARY_INCLUDE_DIRECTORIES TRUE)
set_property(SOURCE foo.i PROPERTY CPLUSPLUS ON)
if (MSVC)
set(CMAKE_SWIG_FLAGS "-D_SWIG_WIN32")
# We don't have Python with debug information installed
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4127")
add_definitions(-DSWIG_PYTHON_INTERPRETER_NO_DEBUG)
endif()
set(Python3_FIND_STRATEGY LOCATION)
find_package(Python3 REQUIRED COMPONENTS Interpreter Development NumPy)
message("*****************************************")
message("Python3_ROOT " ${Python3_ROOT})
message("Python3_FOUND " ${Python3_FOUND})
message("Python3_Interpreter_FOUND " ${Python3_Interpreter_FOUND})
message("Python3_Development_FOUND " ${Python3_Development_FOUND})
message("Python3_LIBRARIES " ${Python3_LIBRARIES})
message("Python3_LIBRARY_DIRS " ${Python3_LIBRARY_DIRS})
message("Python3_INCLUDE_DIRS " ${Python3_INCLUDE_DIRS})
message("Python3_LINK_OPTIONS " ${Python3_LINK_OPTIONS})
message("Python3_EXECUTABLE " ${Python3_EXECUTABLE})
message("Python3_INTERPRETER_ID " ${Python3_INTERPRETER_ID})
message("Python3_VERSION " ${Python3_VERSION})
message("Python3_VERSION_MAJOR " ${Python3_VERSION_MAJOR})
message("Python3_VERSION_MINOR " ${Python3_VERSION_MINOR})
message("Python3_NumPy_FOUND " ${Python3_NumPy_FOUND})
message("Python3_NumPy_INCLUDE_DIRS " ${Python3_NumPy_INCLUDE_DIRS})
message("Python3_NumPy_VERSION " ${Python3_NumPy_VERSION})
message("Python3_SOABI " ${Python3_SOABI})
message("*****************************************")
set(PYTHON3_TARGET ${PROJECT_NAME}PY)
if (WIN32)
# Allow to debug under windows, if debug versions of Python are missing
string(REPLACE "_d" "" Python3_LIBRARIES "${Python3_LIBRARIES}")
endif()
# has to be before 'swig_add_library'
link_directories(${Python3_LIBRARY_DIRS})
####################### Target #######################
# Define target library and configure properties #
######################################################
swig_add_library(${PYTHON3_TARGET}
TYPE SHARED
LANGUAGE python
SOURCES foo.i
OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}
)
set_target_properties(${PYTHON3_TARGET} PROPERTIES
OUTPUT_NAME "${PYTHON3_TARGET}"
SUFFIX ".${Python3_SOABI}.pyd"
SWIG_USE_LIBRARY_INCLUDE_DIRECTORIES TRUE
)
target_link_libraries(${PYTHON3_TARGET} PUBLIC ${STATIC_TARGET} ${Python3_LIBRARIES} Python3::NumPy)
target_include_directories(${PYTHON3_TARGET} PRIVATE ${Python3_INCLUDE_DIRS})
if(WIN32)
set_property(TARGET ${PYTHON3_TARGET} PROPERTY SWIG_COMPILE_OPTIONS -threads -w362,509)
else()
set_property(TARGET ${PYTHON3_TARGET} PROPERTY SWIG_COMPILE_OPTIONS -threads -w362,509 -DSWIGWORDSIZE64)
endif()
if (WIN32)
# pyconfig.h is not autogenerated on Windows. To avoid warnings, we
# add a compiler directive
get_directory_property(DirDefs COMPILE_DEFINITIONS )
set_target_properties(${PYTHON3_TARGET} PROPERTIES
COMPILE_DEFINITIONS "${DirDefs};HAVE_ROUND"
)
endif()
</code></pre>
<p>Additionally, the project use <strong><a href="https://github.com/numpy/numpy/blob/main/tools/swig/numpy.i" rel="nofollow noreferrer">numpy.i</a></strong> from the official numpy release.</p>
<p><strong>test.py</strong></p>
<pre><code>##
# \file test.py
import sys
import numpy as np
libPath = "<path where FooPY is located>"
sys.path.insert(0, libPath)
import FooPY as foo
myFoo = foo.Foo()
data = myFoo.getVector()
print(data)
# the script crashes after the following line
data = myFoo.getVector().asNpArray()
print(data)
</code></pre>
<p><strong>Edit:</strong></p>
<p>I found that the issue is somehow related to the <em>-threads</em> compile option of SWIG. The implementation works without this option. Basically, the option releases the Python GIL when a library method is called (which is required by the base library). Is it possible that a call to <em>PyArray_SimpleNew</em> is only allowed while the Python interpreter holds the GIL? And is there any option to temporarily acquire the GIL in C++, or to exclude the <em>asNpArray</em> method from this <em>-threads</em> configuration in SWIG?</p>
|
<python><c++><arrays><numpy><swig>
|
2022-12-20 09:49:24
| 1
| 538
|
moudi
|
74,861,003
| 1,714,692
|
Rescaling image to get values between 0 and 255
|
<p>A piece of code taken <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/chapter09_part03_interpreting-what-convnets-learn.ipynb" rel="nofollow noreferrer">from here</a> is trying to plot the intermediate outputs of a convolutional neural network. The outputs are taken and rescaled in this way:</p>
<pre><code>if channel_image.sum() != 0:
    channel_image -= channel_image.mean()
    channel_image /= channel_image.std()
    channel_image *= 64
    channel_image += 128
channel_image = np.clip(channel_image, 0, 255).astype("uint8")
</code></pre>
<p>Everything in this transformation is pretty clear, except the multiplication by <code>64</code>: what is that for? Is that an empirical value?</p>
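<p>A plausible reading, offered as a sketch: after standardization the values have mean 0 and standard deviation 1, so multiplying by 64 and adding 128 maps roughly the plus/minus 2 standard deviation range onto [0, 255], centered at mid-gray; anything further out is clipped:</p>
<pre><code>import numpy as np

x = np.random.randn(10_000)         # standardized values: mean 0, std 1
y = np.clip(x * 64 + 128, 0, 255)   # +/- 2 std maps to 128 +/- 128

print(y.min(), y.max())  # ~0 and ~255: most of the distribution fills the range
</code></pre>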
|
<python><image><scale>
|
2022-12-20 09:34:01
| 2
| 9,606
|
roschach
|
74,860,947
| 12,242,085
|
How to find categorical data where one category (including NaN) represents at least 80% of all values of a variable in Python Pandas?
|
<p>I have Pandas DataFrame in Python like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>COL1</th>
<th>COL2</th>
<th>COL3</th>
</tr>
</thead>
<tbody>
<tr>
<td>ABC</td>
<td>11</td>
<td>NaN</td>
</tr>
<tr>
<td>NaN</td>
<td>10</td>
<td>NaN</td>
</tr>
<tr>
<td>ABC</td>
<td>11</td>
<td>NaN</td>
</tr>
<tr>
<td>ABC</td>
<td>11</td>
<td>NaN</td>
</tr>
<tr>
<td>DDD</td>
<td>12</td>
<td>NaN</td>
</tr>
<tr>
<td>ABC</td>
<td>NaN</td>
<td>GAME</td>
</tr>
</tbody>
</table>
</div>
<p>And I need to create a list of variables where one category represents >= 80% of all values of a given categorical variable. So I need to:</p>
<ol>
<li>Select only categorical variables</li>
<li>Use value_counts(dropna=False), because I need to include missing values as a category</li>
<li>Create list of variables from above DataFrame where one category represents >= 80% of all categories of a given categorical variable</li>
</ol>
<p>So, as a result I need something like: <code>my_list = ["COL3"]</code>, because --> (5xNaN) / 6 rows = 0.83</p>
<p>How can I do that in Python Pandas ?</p>
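<p>A sketch using normalized value counts: the first entry of <code>value_counts</code> is always the most frequent category, so it is enough to check that its share is at least 0.8:</p>
<pre><code>cat_cols = df.select_dtypes(include=["object", "category"]).columns

my_list = [
    col for col in cat_cols
    if df[col].value_counts(dropna=False, normalize=True).iloc[0] >= 0.8
]
print(my_list)  # ['COL3']
</code></pre>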
|
<python><pandas><nan><categorical-data>
|
2022-12-20 09:29:27
| 2
| 2,350
|
dingaro
|
74,860,885
| 12,883,179
|
Multiply different size nested array with scalar
|
<p>In my Python code, I have a nested list whose inner lists have different sizes, like this:</p>
<pre><code>arr = [
[1],
[2,3],
[4],
[5,6,7],
[8],
[9,10,11]
]
</code></pre>
<p>I want to multiply them by 10 so it will be like this</p>
<pre><code>arr = [
[10],
[20,30],
[40],
[50,60,70],
[80],
[90,100,110]
]
</code></pre>
<p>I have tried <code>arr = np.multiply(arr,10)</code> and <code>arr = np.array(arr)*10</code></p>
<p>It seems that neither works for nested lists of different sizes; when I tried nested lists of the same size, they worked just fine.</p>
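<p>Since the rows are ragged, NumPy cannot broadcast over them as a regular array; a plain list comprehension is the simplest sketch:</p>
<pre><code># multiply every element of a ragged nested list by 10
arr = [[v * 10 for v in row] for row in arr]
</code></pre>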
|
<python><arrays><python-3.x><list><nested>
|
2022-12-20 09:25:08
| 2
| 492
|
d_frEak
|
74,860,881
| 6,014,418
|
group columns based on pattern pySpark
|
<p>I have an input dataframe like this. I want to group the price and qty columns into a dictionary as shown below.</p>
<pre><code>-------------------------------------------------------------------------------------
| item_name | price_1 | qty_1 | price_2 | qty_2 | price_3 | qty_3 | url |
-------------------------------------------------------------------------------------
| Samsung Z | 10000 | 5 | 9000 | 10 | 7000 | 20 | amazon.com |
| Moto G4 | 12000 | 10 | 10000 | 20 | 6000 | 50 | ebay.com |
| Mi 4i | 15000 | 8 | 12000 | 20 | 10000 | 25 | deals.com |
| Moto G3 | 20000 | 5 | 18000 | 12 | 15000 | 30 | ebay.com |
--------------------------------------------------------------------------------------
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>---------------------------------------------------------------------------------------------------------------------------------------
| item_name | price_range | url |
---------------------------------------------------------------------------------------------------------------------------------------
| Samsung Z | [{price:10000,qty:5, comments:""},{price:9000,qty:10, comments:""},{price:7000,qty:20, comments:""}] | amazon.com |
| Moto G4 | [{price:12000,qty:10, comments:""},{price:10000,qty:20, comments:""},{price:6000,qty:50, comments:""}] | ebay.com |
| Mi 4i | [{price:15000,qty:8, comments:""},{price:12000,qty:20, comments:""},{price:10000,qty:25, comments:""}] | deals.com |
| Moto G3 | [{price:20000,qty:5, comments:""},{price:18000,qty:12, comments:""},{price:15000,qty:30, comments:""}] | ebay.com |
---------------------------------------------------------------------------------------------------------------------------------------
</code></pre>
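<p>One sketch with <code>struct</code> and <code>array</code>: build a struct per price/qty pair (with an empty <code>comments</code> field, as in the desired output) and collect the three structs into one column:</p>
<pre><code>from pyspark.sql import functions as F

df2 = df.select(
    "item_name",
    F.array(*[
        F.struct(
            F.col(f"price_{i}").alias("price"),
            F.col(f"qty_{i}").alias("qty"),
            F.lit("").alias("comments"),
        )
        for i in range(1, 4)
    ]).alias("price_range"),
    "url",
)
</code></pre>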
|
<python><pyspark><apache-spark-sql>
|
2022-12-20 09:24:37
| 1
| 3,986
|
Ramineni Ravi Teja
|
74,860,835
| 10,829,044
|
Pandas - compute previous custom quarter wise total revenue and reshape table
|
<p>I have a dataframe like the one below:</p>
<pre><code>df = pd.DataFrame(
{'stud_id' : [101, 101, 101, 101,
101, 102, 102, 102],
'sub_code' : ['CSE01', 'CSE01', 'CSE01',
'CSE01', 'CSE02', 'CSE02',
'CSE02', 'CSE02'],
'ques_date' : ['10/11/2022', '06/06/2022','09/04/2022', '27/03/2022',
'13/05/2010', '10/11/2021','11/1/2022', '27/02/2022'],
'revenue' : [77, 86, 55, 90,
65, 90, 80, 67]}
)
df['ques_date'] = pd.to_datetime(df['ques_date'])
</code></pre>
<p>I would like to do the below</p>
<p>a) Compute custom quarter based on our organization FY calendar. Meaning, Oct-Dec is Q1, Jan -Mar is Q2,Apr - Jun is Q3 and July to Sep is Q4.</p>
<p>b) Group by stud_id</p>
<p>c) Compute sum of revenue from previous two quarters (from a specific date = 20/12/2022). For example, if we are in <code>2023Q1</code>, I would like to get the sum of revenue for a customer from <code>2022Q4</code> and <code>2022Q3</code> seperately</p>
<p>So, I tried the below</p>
<pre><code>df['custom_qtr'] = pd.to_datetime(df['ques_date'], dayfirst=True).dt.to_period('Q-SEP')
date_1 = pd.to_datetime('20-12-2022')
df['date_based_qtr'] = date_1.to_period('Q-SEP')
pat = '(Q(\d+))'
df['custom_qtr_number'] = df['custom_qtr'].astype(str).str.extract(pat, expand=False)[1]
df['date_qtr_number'] = df['date_based_qtr'].astype(str).str.extract(pat, expand=False)[1]
</code></pre>
<p>But I am not sure how to reshape the dataframe to get an output like the one below. You can see that we are in <code>2023Q1</code>, and I would like to get the <code>sum of revenue from the previous two quarters separately</code>, meaning revenue from 2022Q4 and 2022Q3 respectively.</p>
<p><a href="https://i.sstatic.net/4Qmtj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Qmtj.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><group-by><aggregate-functions>
|
2022-12-20 09:20:46
| 1
| 7,793
|
The Great
|
74,860,787
| 8,324,092
|
Having a problem displaying the correct value in the edit workorder QComboBox
|
<p>I am trying to set the current value of the order_type_combo QComboBox to 29: 'Complaint: Damage caused by sewer blockages', but instead I'm getting the default value 2: 'Complaint: Sewerage Blockage'. These values are stored in a table t_wo_workorders, which uses another table wo_type as a dropdown source.</p>
<p>Here is my code:</p>
<pre><code>from PyQt5 import QtWidgets, QtGui
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QPushButton, QLineEdit, QVBoxLayout, QDateEdit, QComboBox, QStyleFactory, QTableView, QMessageBox, QSizePolicy, QTextEdit
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QFont, QStandardItemModel, QStandardItem, QPalette
import psycopg2
import datetime
from PyQt5.QtCore import QDate
from PyQt5.QtWidgets import QDateEdit
from PyQt5.QtCore import QTimer

current_date = QDate.currentDate()

# Connect to the database and create a cursor object
conn = psycopg2.connect(
    host="localhost",
    database="mms",
    user="postgres",
    password="postgres"
)
cur = conn.cursor()

# Retrieve the last work order from the t_wo_workorders table
cur.execute("SELECT * FROM t_wo_workorders ORDER BY order_id DESC LIMIT 1")
work_order = cur.fetchone()

# Extract the values for each field in the work order
work_order_id = work_order[0]
defined = work_order[1]
org = work_order[2]
order_type = work_order[3]
scheduled = work_order[4]
status = work_order[5]
request = work_order[6]
address = work_order[7]
customer = work_order[8]
tel_no = work_order[9]

class EditWorkOrderForm(QWidget):
    def populate_org_combo(self):
        self.cur.execute("SELECT org_unit, id FROM org_units")
        org_units = self.cur.fetchall()
        for org_unit in org_units:
            self.org_combo.addItem(str(org_unit[0]))
            self.org_units_dict[org_unit[1]] = org_unit[0]

    def populate_order_type_combo(self, order_type):
        self.order_type = order_type
        # Retrieve the wo_type records from the database
        self.cur.execute("SELECT type_description_en, id FROM wo_type")
        wo_type = self.cur.fetchall()
        # Create a dictionary with the id values as keys and the type_description_en values as values
        self.order_types_dict = {}
        for id, type_description_en in wo_type:
            self.order_types_dict[id] = type_description_en
            self.order_type_combo.addItem(str(type_description_en))
        # Check if the order_type value is present in the order_types_dict dictionary
        if self.order_type not in self.order_types_dict:
            print("Error: Invalid order type")
            return
        # Get the index of the order_type value in the order_types_dict keys and set the combo box's current index to that value
        default_index = list(self.order_types_dict.keys()).index(self.order_type)
        self.order_type_combo.setCurrentIndex(default_index)

    def populate_status_combo(self):
        self.cur.execute("SELECT status_descr_en, id FROM wo_status")
        wo_status = self.cur.fetchall()
        for status in wo_status:
            self.status_combo.addItem(str(status[0]))
            self.status_dict[status[1]] = status[0]

    def __init__(self, conn, cur, order_id, defined, org, order_type, scheduled, status, request, address, customer, tel_no):
        super().__init__()
        self.cur = cur  # Assign the cur parameter to the self.cur attribute
        self.order_types_dict = {}
        self.order_type_combo = QComboBox(self)
        self.populate_order_type_combo(order_type)
        print(order_type)
        self.conn = conn
        self.org_units_dict = {}
        self.status_dict = {}
        self.order_id = order_id
        self.org_combo = QComboBox(self)
        self.populate_org_combo()
        if str(org) in self.org_units_dict:
            org_value = self.org_units_dict[int(org)]
        else:
            org_value = ""
        try:
            org_value = self.org_units_dict[int(org)]
        except KeyError:
            pass
        org_index = self.org_combo.findText(org_value)
        order_type_value = [v for k, v in self.order_types_dict.items() if k == order_type]
        if order_type_value:
            order_type_value = order_type_value[0]
        else:
            order_type_value = ""
        order_type_index = self.order_type_combo.findText(order_type_value)
        self.order_type_combo.setCurrentIndex(order_type_index)
        self.status_combo = QComboBox(self)
        self.populate_status_combo()
        conn = psycopg2.connect(
            host="localhost",
            database="mms",
            user="postgres",
            password="postgres"
        )
        self.cur.execute(
            "SELECT * FROM t_wo_workorders WHERE order_id = %s",
            (self.order_id,)
        )
        work_order = self.cur.fetchone()
        self.defined_label = QLabel("Defined:")
        self.defined_edit = QDateEdit()
        self.defined_edit.setDate(defined)
        self.org_label = QLabel("Org. Unit:")
        self.org_combo.setGeometry(100, 100, 200, 25)
        self.org_combo.setCurrentIndex(int(org))
        self.org_combo.setCurrentIndex(org_index)
        self.order_type_label = QLabel("Order Type:")
        self.order_type_combo.setGeometry(100, 100, 200, 25)
        self.order_type_combo.setCurrentIndex(int(order_type))
        self.order_type_combo.setCurrentIndex(order_type_index)
        self.scheduled_label = QLabel("Scheduled:")
        self.scheduled_edit = QDateEdit()
        self.scheduled_edit.setDate(scheduled)
        self.status_label = QLabel("Status:")
        self.status_combo = QComboBox()
        self.order_type_combo.setCurrentIndex(int(status))
        status_value = self.status_dict.get(org, "")
        status_index = self.status_combo.findText(status_value)
        self.request_label = QLabel("Request:")
        self.request_edit = QLineEdit()
        self.request_edit.setText(request)
        self.address_label = QLabel("Address:")
        self.address_edit = QLineEdit()
        self.address_edit.setText(address)
        self.customer_label = QLabel("Customer:")
        self.customer_edit = QLineEdit()
        self.customer_edit.setText(customer)
        self.tel_no_label = QLabel("Tel. No:")
        self.tel_no_edit = QLineEdit()
        self.tel_no_edit.setText(tel_no)
        self.request_gps_label = QLabel("Request GPS:")
        layout = QVBoxLayout()
        layout.addWidget(self.defined_label)
        layout.addWidget(self.defined_edit)
        layout.addWidget(self.org_label)
        layout.addWidget(self.org_combo)
        layout.addWidget(self.order_type_label)
        layout.addWidget(self.order_type_combo)
        layout.addWidget(self.scheduled_label)
        layout.addWidget(self.scheduled_edit)
        layout.addWidget(self.status_label)
        layout.addWidget(self.status_combo)
        layout.addWidget(self.request_label)
        layout.addWidget(self.request_edit)
        layout.addWidget(self.address_label)
        layout.addWidget(self.address_edit)
        layout.addWidget(self.customer_label)
        layout.addWidget(self.customer_edit)
        layout.addWidget(self.tel_no_label)
        layout.addWidget(self.tel_no_edit)
        layout.addWidget(self.request_gps_label)
        self.setLayout(layout)
        self.show()

    def edit_work_order(self, work_order_id):
        form = EditWorkOrderForm(work_order_id)
        form.show()

def create_edit_work_order_form(work_order_id, defined, org, order_type, scheduled, status, request, address, customer, tel_no):
    form = EditWorkOrderForm(conn, cur, work_order_id, defined, org, order_type, scheduled, status, request, address, customer, tel_no)
    return form

app = QApplication(sys.argv)
edit_work_order_form = EditWorkOrderForm(conn, cur, work_order_id, defined, org, order_type, scheduled, status, request, address, customer, tel_no)
edit_work_order_form.show()
sys.exit(app.exec_())
</code></pre>
<p>and this is what it prints:</p>
<pre><code>Error: Invalid order type
3
[Finished in 5.3s]
</code></pre>
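<p>A sketch of a simpler mapping, assuming the schema above: store each row's database id as the combo item's <code>userData</code> and look it up with <code>findData</code>, instead of juggling dictionary positions and several competing <code>setCurrentIndex</code> calls:</p>
<pre><code>def populate_order_type_combo(self, order_type):
    self.cur.execute("SELECT type_description_en, id FROM wo_type")
    for description, id_ in self.cur.fetchall():
        # keep the database id attached to the item itself
        self.order_type_combo.addItem(str(description), userData=id_)
    index = self.order_type_combo.findData(order_type)
    if index == -1:
        print("Error: Invalid order type")
        return
    self.order_type_combo.setCurrentIndex(index)
</code></pre>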
|
<python><pyqt5>
|
2022-12-20 09:15:36
| 0
| 429
|
Gent Bytyqi
|
74,860,617
| 5,171,861
|
How to update read status of an email using MS Graph Api
|
<p>I am trying to use Python and the MS Graph API to read emails from Outlook.
My intention is to create a ticket object in our CRM application whenever an email comes in,
so my application monitors the mailbox to see if any mails are unread.
I am able to successfully log in and read all unseen emails from Outlook.
But the emails are not getting marked as "read" even though my program has read them;
they are still shown as unread, which causes duplicate tickets in my application.
Is there any way to mark a message as read once it has been read via an API call?
Please see the Python code I am using:</p>
<pre><code>read_url = 'https://graph.microsoft.com/v1.0/users/{}/messages'.format(
    self.userId)
# user id will be given
response = requests.get(
    read_url,
    headers={'Authorization': 'Bearer ' + self.result['access_token']})
</code></pre>
<p>(The access token is generated beforehand.)</p>
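<p>A sketch of the usual approach with Microsoft Graph: update the message's <code>isRead</code> property with a PATCH request after processing it (the message id is taken from the list response):</p>
<pre><code>import requests

def mark_as_read(user_id, message_id, access_token):
    url = 'https://graph.microsoft.com/v1.0/users/{}/messages/{}'.format(
        user_id, message_id)
    response = requests.patch(
        url,
        headers={'Authorization': 'Bearer ' + access_token,
                 'Content-Type': 'application/json'},
        json={'isRead': True})
    response.raise_for_status()
</code></pre>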
|
<python><microsoft-graph-api>
|
2022-12-20 08:59:59
| 1
| 419
|
RatheeshTS
|
74,860,523
| 13,987,643
|
Regex pattern to match flight number and aircraft registration ID
|
<p>My dataset has flight numbers and aircraft registrations of the form 'xx-yyy', i.e., two alphanumeric characters 'xx', followed by a hyphen '-', followed by 3 to 5 alphanumeric characters, and I want to capture them using regex in Python.</p>
<p>Examples:</p>
<pre><code>1. pk-bkf
2. id-6236
3. ew-43950
4. 8q-iak
5. q2-274
6. pk-gjr
7. id-12345
</code></pre>
<p>I tried using this pattern: <code>^[a-z0-9]{2}[-][a-z0-9]{3, 5}$</code> but it doesn't seem to match them.</p>
<p>Could someone help me write a pattern with this hyphen in between?</p>
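<p>Most likely the space inside <code>{3, 5}</code> is the problem: Python's <code>re</code> does not treat a brace expression containing a space as a quantifier, so it is matched literally. A sketch without the space:</p>
<pre><code>import re

pattern = re.compile(r'^[a-z0-9]{2}-[a-z0-9]{3,5}$')

for s in ['pk-bkf', 'id-6236', 'ew-43950', '8q-iak', 'q2-274', 'pk-gjr', 'id-12345']:
    print(s, bool(pattern.match(s)))  # all True
</code></pre>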
|
<python><regex><string>
|
2022-12-20 08:50:44
| 1
| 569
|
AnonymousMe
|
74,860,404
| 3,467,698
|
How do I collect values into a list in Python standard regex?
|
<p>I have a string with repeated parts:</p>
<pre><code>s = '[1][2][5] and [3][8]'
</code></pre>
<p>And I want to group the numbers into two lists using <code>re.match</code>. The expected result is:</p>
<pre><code>{'x': ['1', '2', '5'], 'y': ['3', '8']}
</code></pre>
<p>I tried this expression that gives a wrong result:</p>
<pre><code>re.match(r'^(?:\[(?P<x>\d+)\])+ and (?:\[(?P<y>\d+)\])+$', s).groupdict()
# {'x': '5', 'y': '8'}
</code></pre>
<p>It looks like <code>re.match</code> keeps the last match only. How do I collect all the parts into a list instead of the last one only?</p>
<p>Of course, I know that I could split the line on the <code>' and '</code> separator and use <code>re.findall</code> on the parts instead, but this approach is not general enough: it causes issues for more complex strings, so I would always need to think about correct splitting separately.</p>
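<p>A two-step sketch that keeps the validation: match the whole structure once, with each named group capturing the entire repeated run, then pull the numbers out of each group with <code>re.findall</code>. (Standard <code>re</code> only ever keeps a group's last repetition; the third-party <code>regex</code> module's <code>captures()</code> would be the one-step alternative.)</p>
<pre><code>import re

s = '[1][2][5] and [3][8]'

m = re.match(r'^(?P<x>(?:\[\d+\])+) and (?P<y>(?:\[\d+\])+)$', s)
result = {name: re.findall(r'\d+', part) for name, part in m.groupdict().items()}
print(result)  # {'x': ['1', '2', '5'], 'y': ['3', '8']}
</code></pre>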
|
<python><regex>
|
2022-12-20 08:41:08
| 2
| 9,971
|
Fomalhaut
|
74,860,397
| 10,623,444
|
Use three transformations (average, max, min) of pretrained embeddings to a single output layer in Pytorch
|
<p>I have developed a trivial feed-forward neural network with PyTorch.</p>
<p>The neural network uses GloVe pre-trained embeddings in a frozen <code>nn.Embeddings</code> layer.</p>
<p>Next, the embedding layer splits into three embeddings. Each split is a different transformation applied to the initial embedding layer. The embeddings then feed three <code>nn.Linear</code> layers, and finally I have a single output layer for a binary classification target.</p>
<p>The shape of the embedding tensor is [64,150,50]<br>
-> 64: sentences in the batch,<br>
-> 150: words per sentence,<br>
-> 50: vector-size of a single word (pre-trained GloVe vector)</p>
<p>So after the transformation, the embedding layer splits into three layers of shape [64,50], where each of the 50 values is either the <code>torch.mean()</code>, <code>torch.max()</code> or <code>torch.min()</code> over the 150 words per sentence.</p>
<p>My questions are:</p>
<ol>
<li><p>How could I feed the output layer from three different <code>nn.Linear</code> layers to predict a single target value [0,1].</p>
</li>
<li><p>Is this efficient and helpful to the total predictive power of the model? Or just selecting the average of the embeddings is sufficient and no improvement will be observed.</p>
</li>
</ol>
<p>The <code>forward()</code> method of my PyTorch model is:</p>
<pre class="lang-py prettyprint-override"><code> def forward(self, text):
embedded = self.embedding(text)
if self.use_pretrained_embeddings:
embedded_average = torch.mean(embedded, dim=1)
embedded_max = torch.max(embedded, dim=1)[0]
embedded_min = torch.min(embedded, dim=1)[0]
else:
embedded = self.flatten_layer(embedded)
input_layer = self.input_layer(embedded_average) #each Linear layer has the same value of hidden unit
input_layer = self.activation(input_layer)
input_layer_max = self.input_layer(embedded_max)
input_layer_max = self.activation(input_layer_max)
input_layer_min = self.input_layer(embedded_min)
input_layer_min = self.activation(input_layer_min)
#What should I do here? to exploit the weights of the 3 hidden layers
output_layer = self.output_layer(input_layer)
output_layer = self.activation_output(output_layer) #Sigmoid()
return output_layer
</code></pre>
<p>After the proposed answer the function is:</p>
<pre class="lang-py prettyprint-override"><code> def forward(self, text):
embedded = self.embedding(text)
if self.use_pretrained_embeddings:
embedded_average = torch.mean(embedded, dim=1)
embedded_max = torch.max(embedded, dim=1)[0]
embedded_min = torch.min(embedded, dim=1)[0]
#use of average embeddings transformation
input_layer_average = self.input_layer(embedded_average)
input_layer_average = self.activation(input_layer_average)
#use of max embeddings transformation
input_layer_max = self.input_layer(embedded_max)
input_layer_max = self.activation(input_layer_max)
#use of min embeddings transformation
input_layer_min = self.input_layer(embedded_min)
input_layer_min = self.activation(input_layer_min)
else:
embedded = self.flatten_layer(embedded)
input_layer = torch.concat([input_layer_average, input_layer_max, input_layer_min], dim=1)
input_layer = self.activation(input_layer)
print("3",input_layer.shape) #[192,1] vs [64,1] -> output layer
if self.n_layers !=0:
for layer in self.layers:
input_layer = layer(input_layer)
output_layer = self.output_layer(input_layer)
output_layer = self.activation_output(output_layer)
return output_layer
</code></pre>
<p>This generates the following error:</p>
<blockquote>
<p>ValueError: Using a target size (torch.Size([64, 1])) that is different to the input size (torch.Size([192, 1])) is deprecated. Please ensure they have the same size.</p>
</blockquote>
<p>This is an expected outcome, since the concatenated layer ends up 3x the size of the batch of sentences (64). Is there any fix that could resolve it?</p>
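<p>For what it's worth, a minimal standalone sketch (with a hypothetical hidden size <code>H</code>) of how I understand the concatenation should behave: concatenating on <code>dim=1</code> keeps the batch dimension at 64, so the output layer has to accept <code>3*H</code> input features:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn

H = 64  # hypothetical hidden size of self.input_layer
output_layer = nn.Linear(3 * H, 1)

# stand-ins for the three activated branches, each [batch, H]
avg_b, max_b, min_b = (torch.randn(64, H) for _ in range(3))
combined = torch.cat([avg_b, max_b, min_b], dim=1)  # [64, 3*H]
out = torch.sigmoid(output_layer(combined))         # [64, 1]
print(out.shape)  # torch.Size([64, 1])
</code></pre>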
|
<python><machine-learning><pytorch><neural-network><word-embedding>
|
2022-12-20 08:40:29
| 1
| 1,589
|
NikSp
|
74,860,260
| 1,254,632
|
Reading logs from remote server over ssh
|
<p>I am trying to ssh to a particular port on a server and read the logs, but the ssh connection to the server closes immediately after authentication when we try to execute it through a Python subprocess.</p>
<p>I am using following command to read the logs:</p>
<pre><code>sshpass -p xxxx ssh -tt -v root@xxx.xxx.xxx.xx -p 2200 | tee -i -a remoteserver.log
</code></pre>
<p>Trace log:</p>
<pre><code>debug1: Authentication succeeded (password).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: pledge: network
debug1: tty_make_modes: no fd or tio
debug1: Sending environment.
debug1: Sending env LANG = en_US
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 1 clearing O_NONBLOCK
debug1: fd 2 clearing O_NONBLOCK
Connection to xxx.xxx.xxx.x closed.
Transferred: sent 1808, received 1440 bytes, in 0.0 seconds
Bytes per second: sent 36560.7, received 29119.1
debug1: Exit status 0
</code></pre>
<p>PS: This command works fine when we execute it directly on terminal.</p>
<p>Has anyone faced a similar issue?</p>
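<p>For reference, a minimal sketch of the alternative I am considering with <code>paramiko</code> instead of shelling out (the remote log path here is hypothetical):</p>
<pre><code>import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('xxx.xxx.xxx.xx', port=2200, username='root', password='xxxx')

# run the remote command and stream its output line by line
stdin, stdout, stderr = client.exec_command('tail -f /var/log/remoteserver.log')
for line in stdout:
    print(line, end='')
</code></pre>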
|
<python><bash><ssh>
|
2022-12-20 08:26:09
| 0
| 8,820
|
mrutyunjay
|
74,860,186
| 5,363,621
|
replace all values in all columns based on condition
|
<p>I have a df as below</p>
<p><a href="https://i.sstatic.net/ce3aC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ce3aC.png" alt="enter image description here" /></a></p>
<p>I want to make this df binary as follows</p>
<p><a href="https://i.sstatic.net/V9oMw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V9oMw.png" alt="enter image description here" /></a></p>
<p>I tried</p>
<pre><code>df[:] = np.where(df > 0, 1, 0)
</code></pre>
<p>but with this I am losing my df index.
I could try this on all columns one by one or use a loop, but I think there should be some easy & quick way to do this.</p>
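<p>For comparison, a minimal sketch of the vectorised form I would expect to keep the index and columns intact (a boolean comparison followed by an integer cast):</p>
<pre><code>df = (df > 0).astype(int)  # preserves the original index and column names
</code></pre>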
|
<python><pandas>
|
2022-12-20 08:18:49
| 3
| 915
|
deega
|
74,860,172
| 3,099,733
|
What's the Python equivalent to JavaScript's Promise.resolve?
|
<h3><code>Promise</code> in Javascript</h3>
<p>As in MDN document:</p>
<blockquote>
<p>The Promise.resolve() method "resolves" a given value to a Promise. If the value is a promise, that promise is returned; if the value is a thenable, Promise.resolve() will call the then() method with two callbacks it prepared; otherwise the returned promise will be fulfilled with the value.</p>
</blockquote>
<p>It's useful when you need to handle a value whose type is either <code>T</code> or <code>Promise<T></code>: you can always call <code>const promiseValue = Promise.resolve(value)</code> and just treat it as a promise afterward.</p>
<h3>Example: What I want</h3>
<p>Suppose there is a value of type <code>Union[T, Future[T]]</code>, and I want to convert it to just the <code>Future</code> type. In JavaScript I can just do <code>value = Promise.resolve(value)</code>, but I don't know the suggested way to do it in Python.</p>
<h3>Questions</h3>
<p>Though I can always choose to build one myself, I am just wondering if Python has a built-in method to do the same thing for <code>Future</code>?</p>
<p>And also, what's the suggested way to handle such a situation without <code>Promise.resolve</code> in Python?</p>
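<p>For illustration, a minimal sketch of the kind of helper I could build myself (an assumption on my part, not a claimed built-in): pass futures through unchanged and wrap plain values in an already-resolved <code>Future</code>:</p>
<pre><code>import asyncio

def resolve(value):
    """Return value unchanged if it is already a Future, else wrap it."""
    if asyncio.isfuture(value):
        return value
    fut = asyncio.get_event_loop().create_future()
    fut.set_result(value)  # the future is already resolved with the value
    return fut
</code></pre>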
|
<python><asynchronous><concurrent.futures>
|
2022-12-20 08:17:44
| 1
| 1,959
|
link89
|
74,860,127
| 10,849,727
|
How to remove df rows if the timestamp is inside at least one of given time intervals?
|
<p>I'm given two (pandas) data frames:</p>
<ul>
<li><code>data</code>: with time-stamps columns <code>ts</code> and other data columns <code>val1</code>, and <code>val2</code>.</li>
<li><code>intervals</code>: with two columns, <code>start</code> and <code>end</code>, that describe a series of time intervals in which <code>data</code> is corrupted</li>
</ul>
<p>my goal - replace all row values in <code>data</code> with <code>NaN</code> if that row's <code>ts</code> is within some range described by <code>start</code>, <code>end</code>. (Namely, replace the <code>i</code>'th row in <code>data</code> with [<code>NaN</code>,...,<code>NaN</code>] if there exists some row <code>j</code> in <code>intervals</code> such that the <code>ts_i</code> is in [<code>start_j,end_j</code>])</p>
<p>My current (naive) solution, iterates the ranges using a <code>for</code> loop:</p>
<pre><code>def _remove_corruptions(corruption_filename: str, df_to_clean: pd.DataFrame):
corruption_timestamps = pd.read_csv(corruption_filename)
for j, row in tqdm(corruption_timestamps.iterrows()):
start, end = row['start'], row['end']
df_to_clean[(start <= df_to_clean['ts']) & (df_to_clean['ts'] <= end)] = np.nan
return df_to_clean
</code></pre>
<p>Can I vectorize this process somehow?</p>
<hr />
<p>an example is provided below:</p>
<pre><code>intervals = pd.DataFrame({'start': ['2019-06-23', '2020-01-10'], 'end': ['2019-06-24', '2020-01-12']})
start end
0 2019-06-23 2019-06-24
1 2020-01-10 2020-01-12
data = pd.DataFrame({'ts': ['2019-06-23', '2019-10-24', '2020-01-11'], 'val1': [1, 1, 1], 'val2': [2, 2, 2]})
ts val1 val2
0 2019-06-23 1 2
1 2019-10-24 1 2
2 2020-01-11 1 2
</code></pre>
<p>required out:</p>
<pre><code>out = pd.DataFrame({'ts': [np.nan, '2019-10-24', np.nan], 'val1': [np.nan, 1, np.nan], 'val2': [np.nan, 2, np.nan]})
ts val1 val2
0 NaN NaN NaN
1 2019-10-24 1.0 2.0
2 NaN NaN NaN
</code></pre>
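<p>For reference, a minimal sketch of the vectorised direction I have been experimenting with (assuming the <code>ts</code>/<code>start</code>/<code>end</code> columns are already converted to datetime, as in <code>_remove_corruptions</code>): NumPy broadcasting builds one boolean mask, trading the loop for an <code>n_rows x n_intervals</code> comparison, so memory becomes the constraint for huge inputs:</p>
<pre><code>import numpy as np

ts = data['ts'].to_numpy()
starts = intervals['start'].to_numpy()
ends = intervals['end'].to_numpy()

# True where a timestamp falls inside at least one interval
mask = ((ts[:, None] >= starts) & (ts[:, None] <= ends)).any(axis=1)
data[mask] = np.nan
</code></pre>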
|
<python><pandas>
|
2022-12-20 08:12:16
| 2
| 688
|
Hadar
|
74,859,975
| 10,829,044
|
elegant way to agg and transform together in pandas groupby
|
<p>I have a dataframe like as below</p>
<pre><code>df = pd.DataFrame(
{'stud_id' : [101, 101, 101, 101,
101, 101, 101, 101],
'sub_code' : ['CSE01', 'CSE01', 'CSE01',
'CSE01', 'CSE02', 'CSE02',
'CSE02', 'CSE02'],
'ques_date' : ['13/11/2020', '10/1/2018','11/11/2017', '27/03/2016',
'13/05/2010', '10/11/2008','11/1/2007', '27/02/2006'],
'marks' : [77, 86, 55, 90,
65, 90, 80, 67]}
)
df['ques_date'] = pd.to_datetime(df['ques_date'])
</code></pre>
<p>I would like to do the below</p>
<p>a) group the data by <code>stud_id</code> and <code>sub_code</code></p>
<p>b) Compute the average gap between <code>ques_date</code> values for each group</p>
<p>c) Compute the count of marks for each group</p>
<p>So, I tried the below and it works fine</p>
<pre><code>df['avg_ques_gap'] = (df.groupby(['stud_id','sub_code'])['ques_date']
.transform(lambda x: x.diff().dt.days.median()))
output = df.groupby(['stud_id','sub_code']).agg(last_ques_date=('ques_date','max'),
total_pos_transactions=('marks','count')).reset_index()
</code></pre>
<p>But as you can see, I write two statements: one for the transform and the other for the aggregate function.</p>
<p>Is there any way to write both <code>transform</code> and <code>aggregate</code> in a single statement?</p>
<p>I expect my output to be like below</p>
<p><a href="https://i.sstatic.net/5wqAp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5wqAp.png" alt="enter image description here" /></a></p>
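<p>For reference, the closest single-statement sketch I could come up with is a named aggregation with a lambda for the gap (whether this counts as more elegant is debatable):</p>
<pre><code>output = (df.groupby(['stud_id', 'sub_code'])
            .agg(last_ques_date=('ques_date', 'max'),
                 total_pos_transactions=('marks', 'count'),
                 avg_ques_gap=('ques_date',
                               lambda x: x.sort_values().diff().dt.days.median()))
            .reset_index())
</code></pre>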
|
<python><pandas><dataframe><group-by><aggregate-functions>
|
2022-12-20 07:56:25
| 1
| 7,793
|
The Great
|
74,859,950
| 20,054,635
|
Extract numerical values from the String type rows from a column
|
<p>My Requirement is I have a column which consists of below rows.</p>
<p><a href="https://i.sstatic.net/H777R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H777R.png" alt="enter image description here" /></a></p>
<p>I want to extract only the numerical values which start with $ (e.g. 1620.00 and 4,440.00) and also the values which end with % (e.g. 100, 25, 50), and store these values in a new column with the corresponding numerical values; if a row contains multiple numerical values then I want to create a unique column for each numerical value.</p>
<p>How can we achieve this using Pyspark , or Spark SQL ?</p>
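<p>For context, this is the direction I have been exploring so far with <code>regexp_extract_all</code> (available as a Spark SQL function since Spark 3.1; the column name <code>text_col</code> is a placeholder):</p>
<pre><code>from pyspark.sql import functions as F

df = (df
      .withColumn('dollar_values',
                  F.expr(r"regexp_extract_all(text_col, '\\$([0-9,.]+)', 1)"))
      .withColumn('percent_values',
                  F.expr(r"regexp_extract_all(text_col, '([0-9,.]+)%', 1)")))
</code></pre>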
<p>Kindly Help,
Thanks</p>
|
<python><mysql><pyspark><apache-spark-sql><azure-databricks>
|
2022-12-20 07:53:32
| 1
| 369
|
Anonymous
|
74,859,912
| 8,010,224
|
Using logical operator between mixed data (include out of range data) is there any rule for this?
|
<p>Recently I have been studying a lot and picking up improvements from LeetCode.</p>
<p>Usually, I prefer the <strong>& |</strong> operators over the <strong>and or</strong> operators, because they are shorter to type (not really a big difference though).</p>
<p>But some LeetCode solutions use the logical operators to solve questions in really surprising ways.</p>
<p>For example,</p>
<pre><code>s = []
print(s and s[0]) # it returns []
print(s[0] and s) # it returns IndexError: list index out of range of course..
</code></pre>
<p>also I tested somethings with integer and empty list like above,</p>
<pre><code>a = 1
print(s and a) # it returns []
print(a and s) # it returns [] without any errors
</code></pre>
<p>For this reason, I tend to write my conditions (i.e. in <code>if</code> statements) with bitwise operators rather than logical operators.</p>
<p>I want to know,</p>
<ol>
<li>Why does the above example work, and why does it give different results depending on the operand order (<code>a and b</code> vs <code>b and a</code>)?</li>
<li>Is it a problem to use bitwise operators instead of logical operators in conditions or when handling data (including booleans)?</li>
<li>If there are any rules or important things I must know, please let me know.</li>
</ol>
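<p>To make question 1 concrete, here is my current understanding in a small sketch: <code>and</code>/<code>or</code> short-circuit and return one of their operands, while <code>&</code>/<code>|</code> always evaluate both sides first:</p>
<pre><code>s = []
a = 1

# 'and' returns the first falsy operand without evaluating the rest,
# which is why 's and s[0]' never touches s[0]
print(s and a)  # [] -- s is falsy, so s itself is returned immediately
print(a and s)  # [] -- a is truthy, so the second operand is returned

# '&' evaluates both operands before calling __and__, so the line below
# would raise before '&' even runs
# print(s[0] & 1)  # IndexError: list index out of range
</code></pre>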
<p>Thanks for your help even with the basic knowledge of Python.</p>
|
<python><bitwise-operators><logical-operators>
|
2022-12-20 07:48:12
| 0
| 326
|
Gangil Seo
|
74,859,403
| 14,488,888
|
Using ``exec()`` in a comprehension list
|
<p>I have a script that can be run independently but sometimes will be externally invoked with parameters meant to override the ones defined in the script. I got it working using <code>exec()</code> (the safety of this approach is not the point here) but I don't understand why it works in a for loop and not in a comprehension list.</p>
<pre><code>foo = 1
bar = 2
externally_given = ['foo=10', 'bar=20']
for ext in externally_given:
exec(ext)
print('Exec in for loop ->', foo, bar)
externally_given = ['foo=30', 'bar=40']
[exec(ext) for ext in externally_given]
print('Exec in comprehension list ->', foo, bar)
</code></pre>
<p>Output:</p>
<pre><code>Exec in for loop -> 10 20
Exec in comprehension list -> 10 20
</code></pre>
<p>EDIT: Python version 3.10</p>
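<p>For reference, a minimal sketch of the workaround I found (passing an explicit namespace, since the comprehension body runs in its own function-like scope):</p>
<pre><code>externally_given = ['foo=30', 'bar=40']
[exec(ext, globals()) for ext in externally_given]
print('Exec with explicit globals ->', foo, bar)  # 30 40
</code></pre>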
|
<python>
|
2022-12-20 06:47:20
| 1
| 741
|
MartΓ
|
74,859,374
| 5,881,884
|
get values of unknown hierarchy of lists and dicts
|
<p>So lets say I have a bunch of data that is not really known how is structured except that it is a combination of lists, dictionaries and string values. And I would like to extract only the string values (so values of a list, and values in dict and plain string values) and store them in a list.</p>
<p>So it could be:</p>
<pre><code>d = {
'key1': {
'key2': {
            'key3': [
{'key4': 'val1', 'key5': 'val2'}, {'key6': 'val3', 'key7': 'val4'},
{'key8': 'val5', 'key9': 'val6'}, {'key10': 'val7', 'key11': 'val8'},
'val9',
]
},
'key12': 'val10'
}
}
</code></pre>
<p>Or even with another list under the lowest dict. I have the following help functions I find useful to flatten a nested list and to traverse a dict. Is there some nice way of accomplishing this? recursively perhaps?</p>
<pre><code>def traverse(value, key=None):
if isinstance(value, dict):
for k, v in value.items():
yield from traverse(v, k)
else:
yield key, value
def flatten(_2d_list):
flat_list = []
for element in _2d_list:
if type(element) is list:
for item in element:
flat_list.append(item)
else:
flat_list.append(element)
return flat_list
</code></pre>
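<p>For reference, the recursive shape I have in mind would look something like this sketch (a generator that descends into dicts and lists and yields everything else):</p>
<pre><code>def extract_values(obj):
    if isinstance(obj, dict):
        for v in obj.values():
            yield from extract_values(v)
    elif isinstance(obj, list):
        for item in obj:
            yield from extract_values(item)
    else:
        yield obj

print(list(extract_values(d)))
# ['val1', 'val2', 'val3', 'val4', 'val5', 'val6', 'val7', 'val8', 'val9', 'val10']
</code></pre>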
|
<python><recursive-datastructures>
|
2022-12-20 06:43:01
| 1
| 5,125
|
DevB2F
|
74,859,250
| 1,942,868
|
How to use function for each items of multiple array
|
<p>I have array such as <code>[['C'],['F','D'],['B']]</code></p>
<p>Now I want to apply a function to each item in the nested arrays.</p>
<p>The result I want is like this <code>[[myfunc('C')],[myfunc('F'),myfunc('D')],[myfunc('B')]]</code></p>
<p>At first I tried like this. However, it did not give me the output I was expecting:</p>
<pre><code>[myfunc(k) for k in [i for i in chordsFromGetSet]]
</code></pre>
<p>What is the best solution for this goal?</p>
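<p>For reference, a nested comprehension that keeps the sub-lists intact would be a sketch like:</p>
<pre><code>result = [[myfunc(k) for k in sub] for sub in chordsFromGetSet]
# e.g. [[myfunc('C')], [myfunc('F'), myfunc('D')], [myfunc('B')]]
</code></pre>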
|
<python>
|
2022-12-20 06:26:52
| 2
| 12,599
|
whitebear
|
74,859,216
| 9,468,092
|
ModuleNotFoundError: No module named 'psycopg2' [AWS Glue]
|
<p>I am using AWS Glue, where I'm trying to use psycopg2 in a PySpark script. Since Glue does not support psycopg2 in its execution environment, I am passing it in <code>--additional-python-modules</code>, which is a way of installing additional Python modules in AWS Glue.</p>
<p>After following the steps mentioned in the <a href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-libraries.html" rel="nofollow noreferrer">docs</a>. I am getting an error while running the job which says <code>ModuleNotFoundError: No module named 'psycopg2._psycopg'</code></p>
<p><a href="https://i.sstatic.net/1P26T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1P26T.png" alt="enter image description here" /></a></p>
<p>Here are the job parameters are being passed during the execution.</p>
<p><a href="https://i.sstatic.net/qBCXQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qBCXQ.png" alt="enter image description here" /></a></p>
<p>Solutions I have already tried:</p>
<ol>
<li>Using psycopg2-binary</li>
<li>Passing a zip/whl file in --additional-python-modules</li>
<li>Changing glue version to 2.0 and 3.0</li>
</ol>
|
<python><psycopg2><aws-glue>
|
2022-12-20 06:23:30
| 2
| 890
|
Dipanshu Chaubey
|
74,858,983
| 16,527,170
|
How to assign particular return variable in function on if/else statement in python
|
<p>I have two functions as below:</p>
<pre><code>def abc():
i = "False"
j = "100"
return i,j
def xyz():
if abc() == "False": #I want to compare "False" with variable "i"
print("Not Done")
else:
        abc() == "101" ##I want to compare "101" with variable "j"
print("something else:")
xyz()
</code></pre>
<p>Current Output:</p>
<pre><code>something else:
</code></pre>
<p>Expected Output:</p>
<pre><code>Not Done
</code></pre>
<p>I want to know how to check a particular <code>return</code> value in a particular if/else statement.</p>
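<p>For reference, my current understanding is that the returned tuple needs to be unpacked first; a minimal sketch:</p>
<pre><code>def xyz():
    i, j = abc()          # unpack both return values
    if i == "False":      # compare against i
        print("Not Done")
    elif j == "101":      # compare against j
        print("something else:")
</code></pre>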
|
<python><python-3.x><function><return>
|
2022-12-20 05:52:20
| 3
| 1,077
|
Divyank
|
74,858,910
| 10,062,025
|
How to prevent httpx timeout in python?
|
<p>I am trying to scrape Tokopedia here. When I hard-code the values, it works and the JSON is returned. However, when I build them from a variable, the request times out.
website: <a href="https://www.tokopedia.com/samudrasembako/regal-marie-roll-230-gram?extParam=ivf%3Dfalse&src=topads" rel="nofollow noreferrer">https://www.tokopedia.com/samudrasembako/regal-marie-roll-230-gram?extParam=ivf%3Dfalse&src=topads</a>
Here's the code</p>
<pre><code>!pip install fake_useragent
!pip install httpx
import requests
from fake_useragent import UserAgent
import httpx
tokopedia=['https://www.tokopedia.com/samudrasembako/regal-marie-roll-230-gram?extParam=ivf%3Dfalse&src=topads']
for url in tokopedia:
ua = UserAgent().random
product_key=url.split(".com")[1].split("/")[2].split("?")[0]
shopdomain=url.split(".com")[1].split("/")[1].split("?")[0]
payload={
"operationName":"PDPGetLayoutQuery",
"variables":
{"shopDomain":f"{shopdomain}",
"productKey":f"{product_key}",
"layoutID":"",
"apiVersion":1,
"userLocation":
{"cityID":"176",
"addressID":"0",
"districtID":"2274",
"postalCode":"",
"latlon":""},
"extParam":""},
"query":"fragment ProductVariant on pdpDataProductVariant {\n errorCode\n parentID\n defaultChild\n sizeChart\n totalStockFmt\n variants {\n productVariantID\n variantID\n name\n identifier\n option {\n picture {\n urlOriginal: url\n urlThumbnail: url100\n __typename\n }\n productVariantOptionID\n variantUnitValueID\n value\n hex\n stock\n __typename\n }\n __typename\n }\n children {\n productID\n price\n priceFmt\n optionID\n optionName\n productName\n productURL\n picture {\n urlOriginal: url\n urlThumbnail: url100\n __typename\n }\n stock {\n stock\n isBuyable\n stockWordingHTML\n minimumOrder\n maximumOrder\n __typename\n }\n isCOD\n isWishlist\n campaignInfo {\n campaignID\n campaignType\n campaignTypeName\n campaignIdentifier\n background\n discountPercentage\n originalPrice\n discountPrice\n stock\n stockSoldPercentage\n startDate\n endDate\n endDateUnix\n appLinks\n isAppsOnly\n isActive\n hideGimmick\n isCheckImei\n minOrder\n __typename\n }\n thematicCampaign {\n additionalInfo\n background\n campaignName\n icon\n __typename\n }\n __typename\n }\n __typename\n}\n\nfragment ProductMedia on pdpDataProductMedia {\n media {\n type\n urlOriginal: URLOriginal\n urlThumbnail: URLThumbnail\n urlMaxRes: URLMaxRes\n videoUrl: videoURLAndroid\n prefix\n suffix\n description\n variantOptionID\n __typename\n }\n videos {\n source\n url\n __typename\n }\n __typename\n}\n\nfragment ProductHighlight on pdpDataProductContent {\n name\n price {\n value\n currency\n __typename\n }\n campaign {\n campaignID\n campaignType\n campaignTypeName\n campaignIdentifier\n background\n percentageAmount\n originalPrice\n discountedPrice\n originalStock\n stock\n stockSoldPercentage\n threshold\n startDate\n endDate\n endDateUnix\n appLinks\n isAppsOnly\n isActive\n hideGimmick\n __typename\n }\n thematicCampaign {\n additionalInfo\n background\n campaignName\n icon\n __typename\n }\n stock {\n useStock\n value\n stockWording\n __typename\n }\n variant {\n isVariant\n parentID\n __typename\n }\n wholesale {\n minQty\n price {\n value\n currency\n __typename\n }\n __typename\n }\n isCashback {\n percentage\n __typename\n }\n isTradeIn\n isOS\n isPowerMerchant\n isWishlist\n isCOD\n isFreeOngkir {\n isActive\n __typename\n }\n preorder {\n duration\n timeUnit\n isActive\n preorderInDays\n __typename\n }\n __typename\n}\n\nfragment ProductCustomInfo on pdpDataCustomInfo {\n icon\n title\n isApplink\n applink\n separator\n description\n __typename\n}\n\nfragment ProductInfo on pdpDataProductInfo {\n row\n content {\n title\n subtitle\n applink\n __typename\n }\n __typename\n}\n\nfragment ProductDetail on pdpDataProductDetail {\n content {\n title\n subtitle\n applink\n showAtFront\n isAnnotation\n __typename\n }\n __typename\n}\n\nfragment ProductDataInfo on pdpDataInfo {\n icon\n title\n isApplink\n applink\n content {\n icon\n text\n __typename\n }\n __typename\n}\n\nfragment ProductSocial on pdpDataSocialProof {\n row\n content {\n icon\n title\n subtitle\n applink\n type\n rating\n __typename\n }\n __typename\n}\n\nquery PDPGetLayoutQuery($shopDomain: String, $productKey: String, $layoutID: String, $apiVersion: Float, $userLocation: pdpUserLocation, $extParam: String) {\n pdpGetLayout(shopDomain: $shopDomain, productKey: $productKey, layoutID: $layoutID, apiVersion: $apiVersion, userLocation: $userLocation, extParam: $extParam) {\n requestID\n name\n pdpSession\n basicInfo {\n alias\n createdAt\n isQA\n id: productID\n shopID\n shopName\n minOrder\n maxOrder\n weight\n weightUnit\n condition\n status\n url\n 
needPrescription\n catalogID\n isLeasing\n isBlacklisted\n menu {\n id\n name\n url\n __typename\n }\n category {\n id\n name\n title\n breadcrumbURL\n isAdult\n isKyc\n minAge\n detail {\n id\n name\n breadcrumbURL\n isAdult\n __typename\n }\n __typename\n }\n txStats {\n transactionSuccess\n transactionReject\n countSold\n paymentVerified\n itemSoldFmt\n __typename\n }\n stats {\n countView\n countReview\n countTalk\n rating\n __typename\n }\n __typename\n }\n components {\n name\n type\n position\n data {\n ...ProductMedia\n ...ProductHighlight\n ...ProductInfo\n ...ProductDetail\n ...ProductSocial\n ...ProductDataInfo\n ...ProductCustomInfo\n ...ProductVariant\n __typename\n }\n __typename\n }\n __typename\n }\n}\n"
}
headers={
'origin': 'https://www.tokopedia.com',
'referer': f'{url}',
'sec-ch-ua': '"Not?A_Brand";v="8", "Chromium";v="108", "Google Chrome";v="108"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': "Windows",
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-site',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
'x-device': 'desktop',
'x-source': 'tokopedia-lite',
'x-tkpd-akamai': 'pdpGetLayout',
'x-tkpd-lite-service': 'zeus',
'x-version': '53ac990'
}
client= httpx.Client()
resp=client.post("https://gql.tokopedia.com/graphql/PDPGetLayoutQuery",json=payload,headers=headers)
</code></pre>
<p>Can someone please help, so that the above code runs and returns the JSON?</p>
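<p>For reference, a minimal sketch of the timeout configuration I have been looking at (httpx applies a 5-second default timeout, which can be raised or disabled on the client):</p>
<pre><code>timeout = httpx.Timeout(30.0, connect=10.0)  # raise the default 5s limit
client = httpx.Client(timeout=timeout)
# or disable timeouts entirely:
# client = httpx.Client(timeout=None)
</code></pre>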
|
<python><httpx>
|
2022-12-20 05:41:21
| 1
| 333
|
Hal
|
74,858,843
| 8,124,392
|
Can't tweet from Python despite elevated access
|
<p>I have the following function:</p>
<pre><code>import tweepy
def tweet_message(text):
# Replace these with your own API key and secret
api_key = API_KEY
api_secret = API_KEY_SECRET
access_token = ACCESS_TOKEN
access_token_secret = ACCESS_TOKEN_SECRET
# Authenticate with Twitter API
auth = OAuthHandler(access_token, access_token_secret)
api = tweepy.API(auth)
# Tweet the text
api.update_status(status=text)
</code></pre>
<p>I am using Tweepy to post tweets via Python.</p>
<p>I have elevated access developer account and I have all of the following keys:</p>
<pre><code>API_KEY = '...'
API_KEY_SECRET = '...'
BEARER_TOKEN = '...'
CLIENT_ID = '...'
CLIENT_SECRET = '...'
ACCESS_TOKEN = '...'
ACCESS_TOKEN_SECRET = '...'
</code></pre>
<p>But when I run the function, I get this error:</p>
<pre><code>TweepError: [{'code': 220, 'message': 'Your credentials do not allow access to this resource.'}]
</code></pre>
<p>Where am I going wrong? My credentials should allow access to it.</p>
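<p>For reference, my understanding of the usual tweepy OAuth 1.0a setup is the sketch below (the consumer key/secret go into the handler and the access token is set separately), in case the handler arguments are the issue:</p>
<pre><code>auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
api.update_status(status=text)
</code></pre>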
|
<python><twitter><tweepy>
|
2022-12-20 05:31:03
| 1
| 3,203
|
mchd
|
74,858,669
| 13,016,994
|
How to import a function or variable from the sibling directory of Django project?
|
<p>I want to import some functions and constant variables from five levels above the current app of my Django application.</p>
<p>My problem is that the package I want to import from is outside of the Django application, so Django isn't aware of it.</p>
<p><strong>project structure</strong>:</p>
<pre><code>βββ lawcrawler
βΒ Β βββ get_all_amended_laws_urls.py # import a function from this file
β βββ lawcrawler
β βΒ Β βββ spiders
β βΒ Β βββ settings.py # import variables from this file
β β
βββ project
βΒ Β βββ apps
β β βββ laws
β β βββ management
β β βββ commands
β β βββ add_list_of_amended_laws.py # import function and variables to this file
βΒ Β βββ manage.py
</code></pre>
<p>As per <a href="https://docs.python.org/3/reference/import.html#package-relative-imports" rel="nofollow noreferrer">python docs</a>, two or more leading dots indicate a relative import to the parent(s) of the current package, one level per dot after the first. So, I <strong>tried relative import</strong> like this:</p>
<pre><code>from .......lawcrawler.lawcrawler.spiders.settings import SPARQL_ENDPOINT, AMENDED_URL_FILE_PATH, get_law_amendment_query
</code></pre>
<p>and this:</p>
<pre><code>from .....lawcrawler import get_all_amended_urls
</code></pre>
<p>but <strong>got this error</strong>:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>How can I import the functions and variables, or otherwise solve this issue?</p>
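<p>For reference, the workaround I am currently considering is extending <code>sys.path</code> at the top of the management command; the number of <code>parents</code> levels and the imported names below are my assumptions based on the tree above:</p>
<pre><code>import sys
from pathlib import Path

# climb from .../project/apps/laws/management/commands/ up to the repo root
sys.path.append(str(Path(__file__).resolve().parents[5]))

from lawcrawler import get_all_amended_laws_urls
from lawcrawler.lawcrawler.spiders.settings import SPARQL_ENDPOINT
</code></pre>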
|
<python><django><import><python-import>
|
2022-12-20 04:56:47
| 0
| 415
|
Mahdi Jafari
|
74,858,587
| 12,752,172
|
How to extract values from a list and add them into a new list in python?
|
<p>I have a data list like below. I need to extract all values after ":" and add those values into a new list. How can I do this?</p>
<p><strong>Sample data list</strong></p>
<pre><code>list1= ['Company Name: PATRY PLC', 'Contact Name: Jony Deff', 'Company ID: 234567', 'CS ID: 236789', 'MI/MC:', 'Road Code:']
</code></pre>
<p>Now I need to extract all the values after the colon (:) and create a new list like below, and also add 'null' values for entries with no value after the colon, like 'MI/MC:'.</p>
<p><strong>new list</strong></p>
<pre><code>list2 = ['PATRY PLC', 'Jony Deff', '234567', '236789', 'null', 'null']
</code></pre>
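<p>A minimal sketch of one way I imagine this could work: split each entry once on the colon and fall back to 'null' when the remainder is empty:</p>
<pre><code>list2 = [(item.split(':', 1)[1].strip() or 'null') for item in list1]
print(list2)
# ['PATRY PLC', 'Jony Deff', '234567', '236789', 'null', 'null']
</code></pre>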
|
<python><list>
|
2022-12-20 04:41:21
| 2
| 469
|
Sidath
|
74,858,585
| 9,124,950
|
PySpark error An error occurred while calling count caused by null pointer
|
<p>I'm getting <code>An error occurred while calling o19972810.count</code> while counting data that i get from JDBC:</p>
<pre><code>sql_query = "SELECT.... FROM.... "
df = spark.read.format("jdbc").option("url", con_mysql_source) \
    .option("driver", "com.mysql.jdbc.Driver").option("dbtable", f"({sql_query}) sdtable") \
.option("user", ...).option("password", ...).load()
df.createOrReplaceTempView("df_temp")
counts = df.count()
</code></pre>
<p>Here is the full log:</p>
<pre><code>aRise Py4JJavaError(\\\\npy4j.protocol.Py4JJavaError: An error occurred while calling o19972810.count. :
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:\\\\nExchange SinglePartition, ENSURE_REQUIREMENTS, [id=#38969584]
- *(1) HashAggregate(keys=[], functions=[partial_count(1)], output=[count#49033654L])
+- *(1) Scan JDBCRelation((SELECT updatedate,DATE_FORMAT(transaction_tgl, '%Y%m%d') as partdate, user, outlet transaction_tgl , id, status FROM Transactions WHERE updateDate \\u003e '2022-11-29 20:22:23' AND updateDate\\u003c='2022-11-29 20:27:28' AND transaction_tgl \\u003e '1970-01-01 00:00:00' ) sdtable) [numPartitions=1] [] PushedFilters: [], ReadSchema: struct\\u003c\\u003e\
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:163)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
at org.apache.spark.sql.execution.InputAdapter.inputRDD(WholeStageCodegenExec.scala:525)
at org.apache.spark.sql.execution.InputRDDCodegen.inputRDDs(WholeStageCodegenExec.scala:453)
at org.apache.spark.sql.execution.InputRDDCodegen.inputRDDs$(WholeStageCodegenExec.scala:452)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:496)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:141)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:746)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:321)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:387)
at org.apache.spark.sql.Dataset.$anonfun$count$1(Dataset.scala:3019)
at org.apache.spark.sql.Dataset.$anonfun$count$1$adapted(Dataset.scala:3018)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3700)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3698)
at org.apache.spark.sql.Dataset.count(Dataset.scala:3018)
at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.NullPointerException
</code></pre>
<p>Python Version : 3.7</p>
<p>Spark : 3.1.3</p>
<p>mysql driver : mysql-connector-java-8.0.29</p>
<p>The weird thing is that this error is solved by restarting the VM, and it rarely happens, like once a month or once every 2 weeks. We have no clue why it happens, since we've cross-checked our database and we are certain that the selected columns have no null values.</p>
<p>Because of that we're having difficulties reproducing it both in local & staging.</p>
<p>Any help / ideas are appreciated. Thanks in advance!</p>
<p>Update:</p>
<p>Later we found that before this error is produced, there are other errors stating <code>GC overhead limit exceeded</code> and <code>org.apache.spark.SparkException: Job 180601 cancelled because SparkContext was shut down</code>, and then Kibana shows the <code>count error</code>.</p>
<p>Is it related, and is it possible to produce such an error if we trigger our Spark jobs frequently right after the previous job has finished?</p>
|
<python><mysql><apache-spark><pyspark>
|
2022-12-20 04:41:11
| 0
| 619
|
Akbar Noto
|
74,858,565
| 5,562,041
|
Is there a chance that emails are sent in parallel and thus `mail.outbox.clear()` doesn't really clear outbox in my django tests?
|
<p>I have written django tests to check my outbox emails as shown below</p>
<pre><code>class TestX(TestCase):
def setUp(self):
# Clear outbox.
mail.outbox.clear()
super().setUp()
def tearDown(self):
# Clear outbox.
mail.outbox.clear()
super().tearDown()
</code></pre>
<p>However, performing assertions, e.g.
<code>self.assertEqual(len(mail.outbox), 1)</code>,
fails, with <code>len(mail.outbox)</code> showing a much larger number than the emails I've sent using send_mail. I know there are other apps also sending emails, so I'm wondering whether the emails are being sent in parallel (and thus my <code>clear</code> isn't effective), or what the issue might be?</p>
|
<python><django><unit-testing><python-unittest><django-unittest>
|
2022-12-20 04:36:19
| 0
| 2,249
|
E_K
|
74,858,554
| 13,326,361
|
ERROR: Could not install packages due to an OSError: [Errno 39] Directory not empty
|
<p>Dockerfile:</p>
<pre class="lang-docker prettyprint-override"><code>FROM python:3.10-slim
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY ./requirements.txt .
RUN pip install --trusted-host mirrors.aliyun.com --no-cache-dir --upgrade -r requirements.txt
</code></pre>
<p>Error during build:</p>
<pre class="lang-bash prettyprint-override"><code> Attempting uninstall: setuptools
Found existing installation: setuptools 65.5.0
Uninstalling setuptools-65.5.0:
ERROR: Could not install packages due to an OSError: [Errno 39] Directory not empty: '/usr/local/lib/python3.10/site-packages/_distutils_hack/'
</code></pre>
<p>It seems to be a permission error while deleting a folder. As root is the default user in Docker, I don't understand why permission would be lacking.</p>
|
<python><docker><pip>
|
2022-12-20 04:33:26
| 1
| 2,510
|
Yue JIN
|
74,858,392
| 18,125,194
|
Efficiently identify an event that occurs between a beginning and ending time stamp
|
<p>I have two data frames:</p>
<p>Data frame one has a timestamp, a factor(amount of power generated) and a location.</p>
<p>Data frame two has an event(amount of rain), a timestamp for the beginning time of the event, a time stamp for the ending time of the event and a location.</p>
<p>I want to include, in the first data frame, a column for the amount of rain falling when a certain amount of power was generated.</p>
<p>I was able to create a small dataframe and run a test with the following code:</p>
<pre><code>df1 =pd.DataFrame({'factor': ['2','3','4','5','6','7'],
'timestamp':['2022-12-01 10:00:00','2022-12-01 10:05:00',
'2022-12-01 10:15:00','2022-12-01 10:20:00',
'2022-12-15 13:00:00','2022-12-20 06:00:00'],
'location':['a','b','c','d','a','d']
})
df2 =pd.DataFrame({'event': ['2','3','4','5','6','7'],
'time_start':['2022-12-01 9:00:00','2022-12-02 10:05:00',
'2022-12-01 8:15:00','2022-12-01 9:20:00',
'2022-12-25 10:00:00','2022-12-20 05:00:00'],
'time_end':['2022-12-01 16:00:00','2022-12-02 10:15:00',
'2022-12-01 20:15:00','2022-12-01 20:20:00',
'2022-12-25 13:00:00','2022-12-20 06:30:00'],
'location':['a','b','c','d','b','c']
})
df1['timestamp'] = pd.to_datetime(df1['timestamp'])
df2['time_start'] = pd.to_datetime(df2['time_start'])
df2['time_end'] = pd.to_datetime(df2['time_end'])
df3 = df1.merge(df2, how='outer', on="location")
df3['quantity_rain'] = df3['event'].where(df3['timestamp'].between(df3['time_start'], df3['time_end']))
df3.replace(np. nan,0)
</code></pre>
<p>but when I run the code with my larger dataframe, the kernel restarts because I am using too much RAM.</p>
<p>This occurs when I try to merge the two dataframes with <code>df3 = df1.merge(df2, how='outer', on="location")</code></p>
<p>I was trying to find a way around this; I read that I should try to use SQL. I figured I could merge the dataframes in SQL, convert the merged dataframe back to pandas, then proceed as usual, but I am not sure how to do that (or even if that's the best way to go about things). When I run my code I get the error
<code>* sqlite://(sqlite3.OperationalError) no such table: df1</code></p>
<p>My code is below:</p>
<pre><code>%load_ext sql
%sql sqlite://
import sqlite3
conn = sqlite3.connect('test_database')
c = conn.cursor()
# Converting dataframes to SQL tables
df1.to_sql('df1_SQL', conn, if_exists='replace', index = False)
df2.to_sql('df1_SQL', conn, if_exists='replace', index = False)
# Merging tables
%sql SELECT * FROM df1 JOIN df2 USING (location)
</code></pre>
<p>Is there a way to do this with less RAM in Python? If not, is SQL the way to go, and how can I fix my code?</p>
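<p>For reference, the variant I am currently testing is below: write each dataframe to its own table name, then read the join back into pandas (I suspect my snippet above writing both frames to the same table name is part of the problem):</p>
<pre><code>df1.to_sql('df1', conn, if_exists='replace', index=False)
df2.to_sql('df2', conn, if_exists='replace', index=False)

merged = pd.read_sql_query('SELECT * FROM df1 JOIN df2 USING (location)', conn)
</code></pre>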
|
<python><pandas><dataframe><sqlite><merge>
|
2022-12-20 03:58:11
| 1
| 395
|
Rebecca James
|
74,858,359
| 10,062,025
|
How to scrape static page with multiple variants using python?
|
<p>I am trying to scrape a Shopee product display page. Currently I see that there are variants on the single product display page. I am unsure how to get all variant items and their respective prices. Please do help.</p>
<p>Here's an example of single page with variants
'https://shopee.co.id/ACMIC-Braided-Line-Kabel-Data-Fast-Charging-for-iPhone-1-M-2-M-3-M-i.27769962.18163430950?sp_atk=5c463b34-ab0b-40da-af85-05206b95f616&xptdk=5c463b34-ab0b-40da-af85-05206b95f616'</p>
<p>Currently my code is like so:</p>
<pre><code>!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
!pip install selenium-wire
# set options to be headless, ..
from seleniumwire import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
# imports the snippet below relies on but did not show
import pandas as pd
from time import sleep
from random import randint
from datetime import date

today = date.today()  # assumption: 'today' used in the item dict is today's date
options = webdriver.ChromeOptions()
options.set_capability(
"goog:loggingPrefs", {"performance": "ALL", "browser": "ALL"}
)
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
# open it, go to a website, and get results
driver = webdriver.Chrome('chromedriver',options=options)
shopee=['https://shopee.co.id/ACMIC-Braided-Line-Kabel-Data-Fast-Charging-for-iPhone-1-M-2-M-3-M-i.27769962.18163430950?sp_atk=5c463b34-ab0b-40da-af85-05206b95f616&xptdk=5c463b34-ab0b-40da-af85-05206b95f616']
shopeedf=pd.DataFrame()
for urls in shopee:
try:
driver.get(urls)
sleep(randint(3,5))
product_name=driver.find_element(By.CSS_SELECTOR, ".YPqix5").text
try:
normal_price=driver.find_element(By.CSS_SELECTOR, ".Kg2R-S").text
except:
normal_price=driver.find_element(By.CSS_SELECTOR, ".X0xUb5").text
normal_price=normal_price.replace('Rp',"").replace(".","")
try:
discount=driver.find_element(By.CSS_SELECTOR, ".+1IO+x").text
except:
discount="0"
compid=urls.split(".")[4].split("?")[0]
dat={
'product_name':product_name,
'normal_price':normal_price,
'discount':discount,
'competitor_id':compid,
'url':urls,
'date_key':today,
'web':'shopee'
}
dat=pd.DataFrame([dat])
shopeedf=shopeedf.append(dat)
except Exception as e:
print(f"{urls} error")
print(e)
</code></pre>
|
<python><selenium-chromedriver>
|
2022-12-20 03:50:07
| 2
| 333
|
Hal
|
74,858,280
| 1,033,591
|
Is it possible to sort queryset without hitting the db again?
|
<p>Is there any approach to avoid hitting db when the queryset needs to be
returned in a specific order?</p>
<p>If a queryset would be returned when a page is loaded</p>
<pre><code>qs = Student.objects.all()[start:end]
</code></pre>
<p>But it also provides a UI for users to view the results in ascending or descending order.</p>
<p>So, on the Django server, queries like these would be performed:</p>
<pre><code>qs = Student.objects.all()[start:end]
qs2 = Student.objects.filter(id__in=qs).order_by("-id")
</code></pre>
<p>To reduce the database load, is there any better approach to avoid frequent queries and db hits?</p>
<p>I wonder whether I should store the query result in the browser and return the results from there, but it looks so complex...</p>
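<p>For reference, the direction I am considering: evaluate the queryset once, cache the result in memory, and re-order it in Python without another query. A minimal sketch:</p>
<pre><code>students = list(Student.objects.all()[start:end])  # one DB hit, results cached
students_desc = sorted(students, key=lambda s: s.id, reverse=True)  # no DB hit
</code></pre>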
|
<python><django><database>
|
2022-12-20 03:29:08
| 1
| 2,147
|
Alston
|
74,858,198
| 14,488,888
|
Why does not isinstance() accept a set of types?
|
<p>It really bugs me that <code>isinstance(value, type_or_types)</code> won't accept a <code>set()</code> of types when it accepts a <code>tuple</code>.</p>
<p>Basically why:</p>
<pre><code>isinstance(1.23, (int, float, complex))
</code></pre>
<p>is valid, but:</p>
<pre><code>isinstance(1.23, {int, float, complex})
</code></pre>
<p>is not?</p>
<p>I've noticed that for the latter case, up to Python 3.9 it throws the error:</p>
<pre><code>TypeError: isinstance() arg 2 must be a type or tuple of types
</code></pre>
<p>while from 3.10 the error has changed to:</p>
<pre><code>TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union
</code></pre>
<p>After I saw <code>union</code> I felt something might have changed in the newer versions, but I don't get what. What is a union? I only know the method <code>set.union()</code> which in turn returns another <code>set</code>, so I'm basically stuck without being able to use it.</p>
<p>EDIT:
I suspect the reason is the mutability?</p>
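<p>EDIT 2: if the "union" in the newer message refers to PEP 604 union types (my assumption), then a sketch like this should be valid on Python 3.10+:</p>
<pre><code>isinstance(1.23, int | float | complex)  # True on Python 3.10+
</code></pre>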
|
<python>
|
2022-12-20 03:11:47
| 0
| 741
|
MartΓ
|
74,857,811
| 5,091,964
|
Python Plotly - How to change the distance between the cursor and the information (annotation) box?
|
<p>I am using Plotly to chart a Sine wave (see code below). I would like to increase the distance between the cursor and its information box. Any help regarding how to change the distance is appreciated.</p>
<pre><code>import plotly.graph_objs as go
import numpy as np
# Generate data for sine wave
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
# Create figure
fig = go.Figure(data=[go.Scatter(x=x, y=y)])
# Display the x and y values when moving the cursor.
fig.update_layout(hovermode = 'x unified')
#Show figure
fig.show()
</code></pre>
|
<python><plotly>
|
2022-12-20 01:57:35
| 1
| 307
|
Menachem
|
74,857,784
| 12,810,223
|
How to add python files to a new repository in github
|
<p>I am trying to learn and understand the basics of GitHub. For that purpose, I created some files.</p>
<ol>
<li>try_catch_basics.py</li>
<li>reading_from_files.py and countries.txt</li>
<li>writing_in_files.py and country.txt</li>
</ol>
<p>Now, I had created a repository earlier named try-catch-basics and included the first file in it. I want to include my second and third files in the new repository that I created, but I am not able to do that. Here are the steps that I followed -</p>
<p>View -> Command Palette -> Git: Add Remote ->https://github.com/SteelTitan247/File-handling-in-python -> Gave a name -> and pressed enter.</p>
<p>Then, I clicked source control and commited with a message. Now, an option is showing me for sync changes.</p>
<p><a href="https://i.sstatic.net/rnLr6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rnLr6.png" alt="enter image description here" /></a></p>
<p>When I am clicking on it, it is showing me this -</p>
<p><a href="https://i.sstatic.net/ZRy74.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZRy74.png" alt="enter image description here" /></a></p>
<p>which is the previously created repo for my file 1. How do I "shift" or change repo from this to my new one?</p>
<p>For reference, this is the older repo - <a href="https://github.com/SteelTitan247/try-catch-basics" rel="nofollow noreferrer">https://github.com/SteelTitan247/try-catch-basics</a></p>
<p>And this is the one in which I want to push my new files -
<a href="https://github.com/SteelTitan247/File-handling-in-python" rel="nofollow noreferrer">https://github.com/SteelTitan247/File-handling-in-python</a></p>
<p>Please correct my mistakes and guide me through it.</p>
|
<python><git><github><github-for-windows>
|
2022-12-20 01:51:26
| 3
| 1,874
|
Shreyansh Sharma
|
74,857,767
| 2,280,637
|
Fails to save model after running GridSearchCV with a scikit pipeline
|
<p>I have the following toy example to replicate the issue</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
X, y = make_regression(n_samples=30, n_features=5, noise=0.2)
reg = xgb.XGBRegressor(tree_method='hist', eval_metric='mae', n_jobs= 4)
steps = list()
steps.append(('reg', reg))
pipeline = Pipeline(steps=steps)
param_grid = {'reg__max_depth': [2, 4, 6],}
cv = 3
model = GridSearchCV(pipeline, param_grid, cv=cv, scoring='neg_mean_absolute_error')
best_model = model.fit(X = X, y = y)
</code></pre>
<p>Then the following four methods fail to save the fitted model:</p>
<pre class="lang-py prettyprint-override"><code>model.save_model('test_1.json')
# AttributeError: 'GridSearchCV' object has no attribute 'save_model'
</code></pre>
<pre class="lang-py prettyprint-override"><code>best_model.save_model('test2.json')
# AttributeError: 'GridSearchCV' object has no attribute 'save_model'
</code></pre>
<pre class="lang-py prettyprint-override"><code>best_model.best_estimator_.save_model('test3.json')
# AttributeError: 'Pipeline' object has no attribute 'save_model'
</code></pre>
<pre class="lang-py prettyprint-override"><code>model.best_estimator_.save_model('test4.json')
# AttributeError: 'Pipeline' object has no attribute 'save_model'
</code></pre>
<p>But these two methods work.</p>
<pre class="lang-py prettyprint-override"><code>import joblib
joblib.dump(model.best_estimator_, 'naive_model.joblib')
joblib.dump(best_model.best_estimator_, 'naive_best_model.joblib')
</code></pre>
<p>Can anyone tell me whether the way I constructed my pipeline mistakenly breaks the method for saving the best model?</p>
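<p>For reference, the one variant I have not shown above is reaching into the pipeline for the fitted XGBoost step itself; a sketch:</p>
<pre class="lang-py prettyprint-override"><code>best_model.best_estimator_.named_steps['reg'].save_model('test5.json')
</code></pre>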
|
<python><scikit-learn><gridsearchcv><scikit-learn-pipeline>
|
2022-12-20 01:48:17
| 1
| 1,255
|
Li-Pin Juan
|
74,857,745
| 8,869,570
|
How to use visual studio code to debug a Python & C++ program?
|
<p>I've looked up video tutorials on this, but they seem to focus just on how to debug a single file.</p>
<p>I'm running a simulation code that probably goes through 100s different python and C++ source files. There's a particular spot in a python function that I want to set a breakpoint for. I have set the breakpoint.</p>
<p>I usually run the simulation code with</p>
<pre><code>python3 path/to/sim/directory path/to/input
</code></pre>
<p>How can I use this command with the debugger in VSC?</p>
|
<python><visual-studio-code><visual-studio-debugging>
|
2022-12-20 01:45:19
| 1
| 2,328
|
24n8
|
74,857,743
| 6,488,953
|
SparkException: Exception thrown in awaitResult for EMR
|
<p>I tried running my Spark application from EMR, which right now is just the pi calculation in the tutorial doc: <a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-application.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-application.html</a></p>
<p>I uploaded that .py file into s3 and asked the EMR to add a step, with the .py file as the JAR.</p>
<p>It always ends up erroring with following message</p>
<pre><code>22/12/20 00:52:24 ERROR ApplicationMaster: Uncaught exception:
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301) ~[spark-core_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:514) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:278) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:929) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:928) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_352]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_352]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-client-api-3.2.1-amzn-8.jar:?]
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:928) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
Caused by: org.apache.spark.SparkUserAppException: User application exited with 1
at org.apache.spark.deploy.PythonRunner$.main(PythonRunner.scala:111) ~[spark-core_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at org.apache.spark.deploy.PythonRunner.main(PythonRunner.scala) ~[spark-core_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:742) ~[spark-yarn_2.12-3.3.0-amzn-0.jar:3.3.0-amzn-0]
22/12/20 00:52:24 INFO ApplicationMaster: Deleting staging directory hdfs://ip-10-0-7-133.us-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1671497445713_0001
22/12/20 00:52:25 INFO ShutdownHookManager: Shutdown hook called
</code></pre>
<p>I'm a total noob with EMR and have no idea what this error message indicates, since nothing in it points to anything in my code.</p>
<p>Could someone tell me how to look for what is actually wrong here?</p>
|
<python><apache-spark><pyspark><amazon-emr>
|
2022-12-20 01:44:51
| 1
| 621
|
Kei
|
74,857,614
| 9,668,218
|
How to write regular expression that covers small and capital letters of a specific word?
|
<p>I am trying to use regular expression to find a specific word (with small or capital letters) in a text.</p>
<p>Examples are:</p>
<ul>
<li>none</li>
<li>None</li>
<li>NONE</li>
</ul>
<p>However, the following code doesn't find the pattern in sample texts.</p>
<pre><code>import re
txt_list = ["None" , "none", "[none]", "(NONE", "Hi"]
pattern = "/\bnone\b/i"
for txt in txt_list:
if re.search(pattern, txt):
print(f'Found {txt}')
</code></pre>
<p>What is the cause of the above issue? Is the "pattern" incorrect?</p>
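<p>For comparison, my understanding of the Python-native form (without the JavaScript-style <code>/.../i</code> delimiters) is this sketch:</p>
<pre><code>pattern = r"\bnone\b"
for txt in txt_list:
    if re.search(pattern, txt, flags=re.IGNORECASE):
        print(f'Found {txt}')
</code></pre>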
|
<python><regex><string>
|
2022-12-20 01:16:10
| 1
| 1,033
|
Mohammad
|
74,857,446
| 10,443,817
|
How to specify Python version range in environment.yml file?
|
<p>Does it make sense to specify a range of allowed Python versions in an environment.yml file? I got this idea while reading <a href="https://cloud.google.com/python/docs/reference/bigquery/latest" rel="nofollow noreferrer">Google's BigQuery documentation</a></p>
<pre><code>Supported Python Versions
Python >= 3.7, < 3.11
</code></pre>
<p>If this makes sense then what is the right syntax to specify the range in the environment.yml file?</p>
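<p>For reference, this is the syntax I have been trying, based on my assumption that conda's match-specification format applies inside environment.yml:</p>
<pre><code>name: my-env
dependencies:
  - python>=3.7,<3.11
</code></pre>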
|
<python><anaconda><conda>
|
2022-12-20 00:40:48
| 1
| 4,125
|
exan
|
74,857,405
| 942,543
|
How to use diffusers with custom ckpt file
|
<p>Currently I have the following code, which runs a prompt on a model that it downloads from Hugging Face.</p>
<pre class="lang-py prettyprint-override"><code>from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "stabilityai/stable-diffusion-2"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
pipe(prompt).images[0]
</code></pre>
<p>I wanted to know how I can feed a custom ckpt file to this script instead of having it download the model from the stabilityai repo?</p>
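<p>For reference, the closest thing I have found so far is the sketch below, which assumes a diffusers version that exposes <code>from_single_file</code> (an assumption on my part; older releases instead ship a ckpt-to-diffusers conversion script), with a placeholder path:</p>
<pre class="lang-py prettyprint-override"><code>pipe = StableDiffusionPipeline.from_single_file("/path/to/custom-model.ckpt")
pipe = pipe.to("mps")
</code></pre>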
|
<python><huggingface><stable-diffusion>
|
2022-12-20 00:30:15
| 1
| 1,604
|
Mohammad Razeghi
|
74,857,394
| 3,247,006
|
How to run "SELECT FOR UPDATE" for the default "Delete selected" in Django Admin Actions?
|
<p>I have <strong><code>Person</code> model</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=30)
</code></pre>
<p>And, this is <strong><code>Person</code> admin</strong> below:</p>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
pass
</code></pre>
<p>Then, when clicking <strong>Go</strong> to go to delete the selected persons as shown below:</p>
<p><a href="https://i.sstatic.net/zVyk2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zVyk2.png" alt="enter image description here" /></a></p>
<p>Then, clicking <strong>Yes I'm sure</strong> to delete the selected persons:</p>
<p><a href="https://i.sstatic.net/w47oD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w47oD.png" alt="enter image description here" /></a></p>
<p>Only <strong><code>DELETE</code> query</strong> is run in transaction as shown below:</p>
<p><a href="https://i.sstatic.net/XN27z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XN27z.png" alt="enter image description here" /></a></p>
<p>Now, how can I run <code>SELECT FOR UPDATE</code> for <strong>the default "Delete selected"</strong> in <strong>Django Admin Actions</strong>?</p>
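<p>For reference, the approach I have been sketching is overriding <code>delete_queryset</code> (the hook the default action calls) so the rows are locked first; whether this is the right hook for my case is my assumption:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from django.db import transaction
from .models import Person

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    def delete_queryset(self, request, queryset):
        with transaction.atomic():
            # evaluating the locked queryset issues SELECT ... FOR UPDATE
            list(queryset.select_for_update())
            super().delete_queryset(request, queryset)
</code></pre>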
|
<python><django><django-admin><django-admin-actions><select-for-update>
|
2022-12-20 00:27:06
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,857,382
| 11,037,602
|
How to persist data into a One-to-Many SELF referential with SQLAlchemy?
|
<p>I'm trying to persist a One-To-Many <strong>self-referential</strong> relationship. My table looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Users(Base):
__tablename__ = "users"
id = Column(Integer, primary_key=True, unique=True)
connected_ids = Column(Integer, ForeignKey("users.id"))
connected_with = relationship("Users")
</code></pre>
<p>I arrived at this format following <a href="https://docs.sqlalchemy.org/en/14/orm/basic_relationships.html#one-to-many" rel="nofollow noreferrer">this page in the docs for one-to-many</a> and another <a href="https://docs.sqlalchemy.org/en/14/orm/self_referential.html#adjacency-list-relationships" rel="nofollow noreferrer">page describing how to declare self referential relationships</a>. I've also already tried with the following variations:</p>
<pre class="lang-py prettyprint-override"><code>connected_with = relationship("Users", backref="users")
connected_with = relationship("Users", backref="users", remote_side="users.c.id"")
</code></pre>
<p>I can insert rows, query, commit, etc., but when trying to set the relationship, it fails with the following:</p>
<p><strong>Example One:</strong></p>
<pre class="lang-py prettyprint-override"><code>u1 = session.get(Users, 1)
u2 = session.get(Users, 2)
u1.connected_ids = [u2.id]
</code></pre>
<p><strong>Will raise:</strong></p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column "connected_ids" is of type integer but expression is of type integer[]
LINE 1: ...users SET last_updated=now(), connected_ids=ARRAY[2911...
</code></pre>
<p><strong>Example Two (with <strong>connected_with</strong> attr):</strong></p>
<pre class="lang-py prettyprint-override"><code>u1.connected_with = [u2.id]
</code></pre>
<p><strong>Will Raise:</strong></p>
<pre><code>AttributeError: 'int' object has no attribute '_sa_instance_state'
</code></pre>
<p><strong>Example Three (with the object itself):</strong></p>
<pre class="lang-py prettyprint-override"><code>u1.connected_ids = [u2]
</code></pre>
<p><strong>Will raise:</strong></p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'Users'
</code></pre>
<p>At this point, my best guess is that the table is not defined the way I expect it to be, but I also don't know what is wrong with it.</p>
<p>Any pointers and help will be appreciated.</p>
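<p>For reference, my understanding of how a one-to-many collection is normally populated is the sketch below (appending mapped objects, not ids, to the relationship attribute):</p>
<pre class="lang-py prettyprint-override"><code>u1 = session.get(Users, 1)
u2 = session.get(Users, 2)
u1.connected_with.append(u2)  # sets u2.connected_ids = u1.id on flush
session.commit()
</code></pre>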
|
<python><python-3.x><sqlalchemy><psycopg2>
|
2022-12-20 00:24:28
| 2
| 2,081
|
Justcurious
|
74,857,217
| 7,209,826
|
Scrapy. Every time i yield request another function is triggered as well. Cant see why
|
<p>Here is my spider.
It is supposed to assign a list obtained from a Google Sheet to the global variable <code>denied</code>. In the code this function is called just once, but in the logs it is executed as many times as the POST request to the endpoint is executed (<code>send_to_endpoint()</code>). Where is the error?</p>
<pre><code>import scrapy
from scrapy import Request
from scrapy.linkextractors import LinkExtractor
import json
from datetime import datetime
import json
import logging
import requests
# from scrapy.utils.project import get_project_settings
class Code1Spider(scrapy.Spider):
name = 'c_cointelegraph'
allowed_domains = ['cointelegraph.com']
start_urls = ['https://cointelegraph.com/press-releases/']
id = int(str(datetime.now().timestamp()).split('.')[0])
denied=[]
gs_id = ''
endpoint_url = ''
def parse(self, response):
#Returns settings values as dict
settings=self.settings.copy_to_dict()
self.gs_id = settings.get('GS_ID')
self.endpoint_url = settings.get('ENDPOINT_URL')
#assigns a list of stop words from GS to global variable
self.denied = self.load_gsheet()
for i in response.xpath('//a[@class="post-card-inline__title-link"]/@href').getall():
yield Request(response.urljoin(i), callback = self.parsed)
def parsed(self, response):
#set deny_domains to current domain so we could get all external urls
denied_domains = self.allowed_domains[0]
links = LinkExtractor(deny_domains=denied_domains,restrict_xpaths=('//article[@class="post__article"]'))
links = links.extract_links(response)
links = [i.url for i in links]
#checks the list of external links agains the list of stop words
links = [i for i in links if not any(b in i for b in self.denied)]
company = response.xpath('//h2//text()').getall()
if company: company = [i.split('About ')[-1].strip() for i in company if 'About ' in i.strip()]
if company: company = company[0]
else: company = ''
d = {'heading' : response.xpath('//h1[@class="post__title"]/text()').get().strip(),
'url' : response.url,
'pubDate' : self.get_pub_date(response.xpath('//script[contains(text(),"datePublished")]/text()').get()),
'links' : links,
'company_name' : company,
'ScrapeID' : self.id,
}
# is used for debuging. just to see printed item.
yield d
#create post request to endpoint
req = self.send_to_endpoint(d)
#send request to endpoint
yield req
def get_pub_date(self, d):
d = json.loads(d)
pub_date = d['datePublished']
return pub_date
def load_gsheet(self):
#Loads a list of stop words from predefined google sheet
gs_id=self.gs_id
url = 'https://docs.google.com/spreadsheets/d/{}/export?format=csv'.format(gs_id)
r = requests.get(url)
denied = r.text.splitlines()[1:]
logging.info(denied)
return denied
def send_to_endpoint(self, d):
url = self.endpoint_url
r = scrapy.Request( url, method='POST',
body=json.dumps(d),
headers={'Content-Type':'application/json'},
dont_filter = True)
return r
</code></pre>
<p>Whenever I <code>yield req</code>, the <code>load_gsheet()</code> function runs as well, hitting Google Sheets. If I comment out <code>yield req</code>, <code>load_gsheet()</code> is called just once, as it is supposed to be.
Why does this happen? I have triple-checked the code line by line and added comments. I have no idea what I'm missing.</p>
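<p>My one hypothesis so far (untested): the POST <code>Request</code> has no <code>callback</code>, so Scrapy may be routing the endpoint's response to the default <code>parse()</code> method, which calls <code>load_gsheet()</code> again. A sketch of the change I'd try:</p>
<pre><code>def send_to_endpoint(self, d):
    return scrapy.Request(
        self.endpoint_url,
        method='POST',
        body=json.dumps(d),
        headers={'Content-Type': 'application/json'},
        dont_filter=True,
        callback=self.handle_endpoint_response,  # keep the response out of parse()
    )

def handle_endpoint_response(self, response):
    logging.info("endpoint replied with status %s", response.status)
</code></pre>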
|
<python><python-3.x><function><python-requests><scrapy>
|
2022-12-19 23:51:27
| 1
| 1,119
|
JBJ
|
74,857,162
| 13,597,979
|
Avoiding garbage collection for Tkinter PhotoImage (Python)
|
<p>I'm using MacOS v 12.6 and Python v 3.9.6. Why does the code below garbage collect the image unless the commented-out line is uncommented? Isn't using <code>self.img</code> supposed to be enough to avoid garbage collection?</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Label
from PIL import Image, ImageTk
import numpy as np
class Meter:
def __init__(self, root):
self.root = root
self.pos, self.img = (None,) * 2
k = np.empty((300,300,3), dtype=np.uint8)
k[:,:] = (10,) * 3
img = Image.fromarray(k, "RGB")
self.img = ImageTk.PhotoImage(img)
self.label = Label(self.root, image=self.img)
# self.label.foo = self.img
self.label.pack()
if __name__ == "__main__":
root = Tk()
Meter(root)
root.mainloop()
</code></pre>
<p>I've used <code>ImageTk</code> several times and for some reason am having problems avoiding garbage collection this time. I found <a href="https://stackoverflow.com/a/27193976/13597979">this answer</a>, which directed me to add the commented-out line, and it solved my problem. But like the answerer, I don't know why. The comments for this answer didn't clarify the issue, and the answer is quite old, so I am asking here.</p>
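<p>One more observation, in case it matters: I never keep a reference to the <code>Meter</code> instance itself, so perhaps the whole object (and with it <code>self.img</code>) is collected. A sketch of the variant I mean (untested):</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
    root = Tk()
    meter = Meter(root)  # keep the instance alive instead of discarding it
    root.mainloop()
</code></pre>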
|
<python><tkinter><tkinter-photoimage>
|
2022-12-19 23:42:41
| 1
| 550
|
TimH
|
74,857,133
| 15,239,717
|
How can I use Date Range with Sum and Order By in Django
|
<p>I am working on a project where I have an Income model with a description field, among others, as shown below. The description is a choice field, and I want to use a date range to sum by each description, i.e. I want to sum all amounts for each description and display their totals in an HTML template.</p>
<p>Below is what I have tried, but I am getting an error which says <code>ValueError: too many values to unpack (expected 2)</code></p>
<p>Models code:</p>
<pre><code>CATEGORY_INCOME = (
('Photocopy', 'Photocopy'),
('Type & Print', 'Type & Print'),
('Normal Print', 'Normal Print'),
('Color Print', 'Color Print'),
('Passport', 'Passport'),
('Graphic Design', 'Graphic Design'),
('Admission Check', 'Admission Check'),
('Lamination', 'Lamination'),
('Document Scan', 'Document Scan'),
('Email Creation', 'Email Creation'),
('Email Check', 'Email Check'),
('Online Application', 'Online Application'),
('Agreement Form', 'Agreement Form'),
('Envelope / Binding Film', 'Envelope / Binding Film'),
('Web Development ', 'Web Development'),
)
class Income(models.Model):
description = models.CharField(max_length=100, choices=CATEGORY_INCOME, null=True)
staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
amount = models.PositiveIntegerField(null=False)
date = models.DateField(auto_now_add=False, auto_now=False, null=False)
addedDate = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name_plural = 'Income Sources'
def __str__(self):
return self.description
</code></pre>
<p>Views Code</p>
<pre><code>def generate_reports(request):
searchForm = IncomeSearchForm(request.POST or None)
searchExpensesForm = ExpensesSearchForm(request.POST or None)
if request.method == "POST" and searchForm.is_valid() and searchExpensesForm.is_valid():
listIncome = Income.objects.filter(date__range=[searchForm['start_date'].value(),searchForm['end_date'].value()])
total_income_passport = listIncome.values('description').order_by('description').annotate(total = Sum('amount')).get('total') or 0
context = { 'total_income_passport':total_income_passport, }
return render(request, 'cashier/print_report.html', context)
else:
listIncome = Income.objects.all()
context = { 'listIncome ':listIncome , }
return render(request, 'cashier/gen_report.html', context)
</code></pre>
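<p>For clarity, this is the shape of result I'm after (a sketch of my understanding of <code>.values().annotate()</code>, untested):</p>
<pre><code>totals = (listIncome
          .values('description')
          .annotate(total=Sum('amount')))
# totals should be an iterable of dicts like {'description': 'Passport', 'total': 1500}
context = {row['description']: row['total'] for row in totals}
</code></pre>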
<p>Please note that I have no issue with the date range; my problem is with summing by each description in the model.
Any help would be much appreciated. Thanks.</p>
|
<python><django>
|
2022-12-19 23:38:25
| 1
| 323
|
apollos
|
74,857,105
| 1,311,704
|
"scalene --version" in a Makefile
|
<p>On Mac OS X, running <code>scalene --version</code> (a python package) on the command line works but running the same line in a Makefile gives an error.</p>
<pre><code>$ scalene --version
Scalene version 1.5.15 (2022.11.16)
</code></pre>
<p>Makefile:</p>
<pre><code>check-deps:
scalene --version
</code></pre>
<pre><code>$ make check-deps
Scalene version 1.5.15 (2022.11.16)
make: *** [check-deps] Error 255
</code></pre>
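<p>A workaround I'm considering (a sketch, untested): prefix the recipe line with <code>-</code>, which tells make to ignore a non-zero exit status:</p>
<pre><code>check-deps:
    -scalene --version
</code></pre>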
|
<python><makefile>
|
2022-12-19 23:35:12
| 1
| 1,087
|
offwhitelotus
|
74,857,095
| 1,551,027
|
How to hide the mouse pointer with pyAutoGUI?
|
<p>Is there a way to hide the mouse pointer with pyAutoGUI?</p>
<pre class="lang-py prettyprint-override"><code>import time
import pyautogui
pyautogui.hidePointer()
time.sleep(5)
pyautogui.showPointer()
</code></pre>
<p>If not, is there another way to hide the mouse pointer with another library or in plain python, perhaps with the <code>os</code> module?</p>
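<p>For context, the closest thing I've found so far is hiding the cursor only inside a window I own, e.g. with tkinter (a sketch, not a global solution):</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk

root = tk.Tk()
root.config(cursor="none")  # hides the pointer while it is over this window
root.mainloop()
</code></pre>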
|
<python><pyautogui>
|
2022-12-19 23:33:36
| 1
| 3,373
|
Dshiz
|
74,856,602
| 8,179,586
|
Passing a list as parameter for IN statement using named arguments
|
<p>How can I pass a list to an <strong>IN</strong> statement in a query using psycopg's named arguments?</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>cur.execute("""
SELECT name
FROM users
WHERE id IN (%(ids)s)
""",
{"ids": [1, 2, 3]})
</code></pre>
<p>When I do that, I get the following error message:</p>
<pre><code>psycopg.errors.UndefinedFunction: operator does not exist: integer = smallint[]
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
</code></pre>
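<p>For reference, the variant I'm about to try (a sketch, untested) replaces <code>IN</code> with <code>= ANY</code>, since the list seems to be adapted to a Postgres array:</p>
<pre class="lang-py prettyprint-override"><code>cur.execute("""
    SELECT name
    FROM users
    WHERE id = ANY(%(ids)s)
    """,
    {"ids": [1, 2, 3]})
</code></pre>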
|
<python><psycopg3>
|
2022-12-19 22:17:36
| 0
| 331
|
Leonardo Freua
|
74,856,584
| 12,242,085
|
How to fill NaN only in numeric variables if that variable is on a list in Python Pandas?
|
<p>I have Pandas DataFrame like below:</p>
<p>data types:</p>
<ul>
<li>COL1 - numeric</li>
<li>COL2 - object</li>
<li>COL3 - numeric</li>
</ul>
<p>TABLE 1</p>
<pre><code>COL1 | COL2 | COL3
-----|------|------
123 | AAA | 99
NaN | ABC | 1
111 | NaN | NaN
... | ... | ...
</code></pre>
<p>And I also have a list of variables like this: <code>my_list = ["COL1", "COL8", "COL15"]</code></p>
<p>And I need to fill NaN with 0 under the conditions below:</p>
<ul>
<li>if some column from TABLE 1 is numeric</li>
<li>if some column from TABLE 1 has NaN</li>
<li>if some column From TABLE 1 is on my_list</li>
</ul>
<p>So, I need something like below as an output, because only COL1 meets all the above requirements:</p>
<pre><code>COL1 | COL2 | COL3
-----|------|------
123  | AAA  | 99
0    | ABC  | 1
111  | NaN  | NaN
...  | ...  | ...
</code></pre>
<p>How can I do that in Python Pandas ?</p>
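<p>A sketch of the selection logic I have in mind (assuming the frame is called <code>df</code>; untested):</p>
<pre><code>import pandas as pd

cols = [c for c in df.columns
        if c in my_list
        and pd.api.types.is_numeric_dtype(df[c])
        and df[c].isna().any()]
df[cols] = df[cols].fillna(0)
</code></pre>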
|
<python><pandas><missing-data><numeric><fillna>
|
2022-12-19 22:15:23
| 1
| 2,350
|
dingaro
|
74,856,506
| 4,893,099
|
Extracting substring in cloud functions
|
<p>I am uploading csv files stored in google cloud storage into bigquery tables.</p>
<p>this is some part of my function:</p>
<pre><code>def upload_to_bq_from_gcs(event, context):
filename = event['name']
input_bucket = event['bucket']
output_bucket = "sales_2020"
</code></pre>
<p>file names in the bucket are in this format:</p>
<pre><code>confidential.SM4.B564.2022-01-07.CSV
public.SM4.B564.2022-01-07.CSV
confidential.test_result_1.B564.2022-01-07.CSV
public.test_result_1.B564.2022-01-07.CSV
</code></pre>
<p>First part: the bit before the first dot is confidential or public.
Second part: between the first and second dot is a combination of letters, numbers and underscores.</p>
<p>I need to upload the data into confidential and public datasets. Table names will be the second part of the filename (anything between the first and second dot).</p>
<p>I need to extract the first and second parts from the filename.</p>
<pre><code>second_part = filename.split('.')[1]
</code></pre>
<p>I am wondering how I can get the first part (from the first character until the first dot)?</p>
<p>And also, how can I replace "_" with "-" in the second part?</p>
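<p>This is the direction I'm leaning towards (a sketch on one of the sample names):</p>
<pre><code>filename = "confidential.test_result_1.B564.2022-01-07.CSV"
parts = filename.split('.')
first_part = parts[0]                     # 'confidential' or 'public'
second_part = parts[1].replace('_', '-')  # 'test-result-1'
</code></pre>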
|
<python><regex><google-cloud-functions>
|
2022-12-19 22:03:43
| 1
| 563
|
Sana
|
74,856,449
| 20,793,070
|
How to filter df by value list with Polars?
|
<p>I have a Polars df from a csv and I'm trying to filter it by a list of values:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
my_list = [1, 2, 4, 6, 48]
df = (
pl.read_csv("bm.dat", separator=';', new_columns=["cid1", "cid2", "cid3"])
.lazy()
.filter((pl.col("cid1") in my_list) & (pl.col("cid2") in my_list))
.collect()
)
</code></pre>
<p>I receive an error:</p>
<blockquote>
<p>ValueError: Since Expr are lazy, the truthiness of an Expr is ambiguous. Hint: use '&' or '|' to chain Expr together, not and/or.</p>
</blockquote>
<p>But when I comment out <code>#.lazy()</code> and <code>#.collect()</code>, I receive this error again.</p>
<p>I also tried only one filter, <code>.filter(pl.col("cid1") in my_list)</code>, and received the error again.</p>
<p>How to filter df by value list with Polars?</p>
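<p>The variant I'm going to try next (a sketch, untested) uses Polars' <code>is_in</code> expression instead of Python's <code>in</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = (
    pl.read_csv("bm.dat", separator=';', new_columns=["cid1", "cid2", "cid3"])
    .lazy()
    .filter(pl.col("cid1").is_in(my_list) & pl.col("cid2").is_in(my_list))
    .collect()
)
</code></pre>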
|
<python><dataframe><csv><python-polars>
|
2022-12-19 21:57:28
| 1
| 433
|
Jahspear
|
74,856,414
| 6,054,404
|
remove duplicate rows in a numpy array
|
<p>I have a numpy array:</p>
<pre><code> arr = array([[991.4, 267.3, 192.3],
[991.4, 267.4, 192.3],
[991.4, 267.4, 192.3],
...,
[993.5, 268. , 192.6],
[993.5, 268. , 192.6],
[993.5, 268.1, 192.6]])
</code></pre>
<p>you can see there are some duplicates in this.</p>
<p>I have tried <code>arr = np.unique(arr)</code> but that returns:</p>
<pre><code>array([192.3, 192.4, 192.5, 192.6, 266.6, 266.7, 266.8, 266.9, 267. ,
267.1, 267.2, 267.3, 267.4, 267.5, 267.6, 267.7, 267.8, 267.9,
268. , 268.1, 268.2, 268.3, 268.4, 268.5, 268.6, 268.7, 268.8,
991.4, 991.5, 991.6, 991.7, 991.8, 991.9, 992. , 992.1, 992.2,
992.3, 992.4, 992.5, 992.6, 992.7, 992.8, 992.9, 993. , 993.1,
993.2, 993.3, 993.4, 993.5])
</code></pre>
<p>I need to retain the nested nature of the array, so compare each row to the other rows and only then remove the duplicates, i.e.:</p>
<pre><code>[991.4, 267.3, 192.3],
[991.4, 267.4, 192.3],
[991.4, 267.4, 192.3],
</code></pre>
<p>In the above there are 2 unique rows, after filtering it should be:</p>
<pre><code>[991.4, 267.3, 192.3],
[991.4, 267.4, 192.3],
</code></pre>
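<p>The direction I'm exploring (a sketch): <code>np.unique</code> takes an <code>axis</code> argument, which should compare whole rows instead of flattening:</p>
<pre><code>unique_rows = np.unique(arr, axis=0)
</code></pre>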
|
<python><numpy><duplicates>
|
2022-12-19 21:52:50
| 2
| 1,993
|
Spatial Digger
|
74,856,325
| 7,385,923
|
rename files inside another directory in python
|
<p>I'm working with python and I need to rename the files that I have inside a directory for example:</p>
<pre><code>C:\Users\lenovo\Desktop\files\file1.txt
C:\Users\lenovo\Desktop\files\file2.txt
C:\Users\lenovo\Desktop\files\file3.txt
</code></pre>
<p>I have these 3 files inside the files folder, and I want to change the name of these, I have my script inside another folder: <code>C:\Users\lenovo\Desktop\app\rename.py</code></p>
<p>I don't know if this is the problem but this is what I tried and it didn't work for me:</p>
<pre><code>import os
directory = r'C:\Users\lenovo\Desktop\files'
count=0
for filename in os.listdir(directory):
count +=1
f = os.path.join(directory, filename)
if os.path.isfile(f):
os.rename(f, "new_file"+str(count))
</code></pre>
<p><strong>UPDATE</strong>
The code simply removes the files from the original folder and creates the renamed ones inside the folder where I have the Python script.</p>
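<p>What I think the fix might be (a sketch, untested): build the destination path inside <code>directory</code> instead of using a bare name, which is resolved relative to the current working directory:</p>
<pre><code>new_name = os.path.join(directory, "new_file" + str(count) + ".txt")
os.rename(f, new_name)
</code></pre>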
|
<python><file>
|
2022-12-19 21:42:40
| 3
| 1,161
|
FeRcHo
|
74,856,316
| 6,186,333
|
Add text into string between single quotes using regexp
|
<p>I am trying to use Python to add some escape characters into a string when I print to the terminal.</p>
<pre class="lang-py prettyprint-override"><code>import re
string1 = "I am a test string"
string2 = "I have some 'quoted text' to display."
string3 = "I have 'some quotes' plus some more text and 'some other quotes'.
pattern = ... # I do not know what kind of pattern to use here
</code></pre>
<p>I then want to add the console color escape (<code>\033[92m</code> for green and <code>\033[0m</code> to end the escape sequence) and end characters at the beginning and end of the quoted string using something like this:</p>
<pre class="lang-py prettyprint-override"><code>result1 = re.sub(...)
result2 = re.sub(...)
result3 = re.sub(...)
</code></pre>
<p>with the end result looking like:</p>
<pre class="lang-py prettyprint-override"><code>result1 = "I am a test string"
result2 = "I have some '\033[92mquoted text\033[0m' to display."
result3 = "I have '\033[92msome quotes\033[0m' plus some more text and '\033[92msome other quotes\033[0m'.
</code></pre>
<p>What kind of pattern should I use to do this, and is <code>re.sub</code> an appropriate method for this, or is there a better regex function?</p>
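<p>For reference, my best attempt so far (a sketch, assuming the quoted spans never contain escaped quotes):</p>
<pre class="lang-py prettyprint-override"><code>pattern = r"'([^']*)'"
result2 = re.sub(pattern, "'\033[92m\\1\033[0m'", string2)
</code></pre>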
|
<python><regex>
|
2022-12-19 21:40:27
| 1
| 2,914
|
SandPiper
|
74,856,309
| 17,696,880
|
Why does capturing the capture group identified with this regex search pattern fail?
|
<pre class="lang-py prettyprint-override"><code>import re
input_text_substring = "durante el transcurso del mes de diciembre de 2350" #example 1
#input_text_substring = "durante el transcurso del mes de diciembre del aΓ±o 2350" #example 2
#input_text_substring = "durante el transcurso del mes 12 2350" #example 3
##If it is NOT "del aΓ±o" + "(it doesn't matter how many digits)" or if it is NOT "(it doesn't matter what comes before it)" + "(year of 4 digits)"
if not re.search(r"(?:(?:del|de[\s|]*el|el)[\s|]*(?:aΓ±o|ano)[\s|]*\d*|.*\d{4}$)", input_text_substring):
input_text_substring += " de " + datetime.datetime.today().strftime('%Y') + " "
#For when no previous phrase indicative of context was indicated, for example "del aΓ±o" and the number of digits is not 4
some_text = r"(?:(?!\.\s*?\n)[^;])*" #a number of month or some other text without dots . or ;, or \n ((although it must also admit the possible case where there is nothing in the middle or only a whitespace)
#we need to capture the group in the position of the last \d*
m1 = re.search( r"(?:del[\s|]*mes|de[\s|]*el[\s|]*mes|de[\s|]*mes|\d{2})" + some_text + r"(?P<year>\d*)" , str(input_text_substring), re.IGNORECASE, )
#if m1: identified_year = str(m1.groups()["\g<year>"])
if m1: identified_year = str(m1.groups()[0])
input_text_substring = re.sub( r"(?:del[\s|]*mes|de[\s|]*el[\s|]*mes|de[\s|]*mes|\d{2})" + some_text + r"\d*", identified_year, input_text_substring )
print(repr(identified_year))
print(repr(input_text_substring))
</code></pre>
<p>This is the wrong output that I get with this code (tested in the example 1):</p>
<pre><code>''
'durante el transcurso '
</code></pre>
<p>And this is the correct output that I need:</p>
<pre><code>'2350' #in example 1, 2 and 3
'durante el transcurso del mes de diciembre 2350' #in example 1 and 2
'durante el transcurso del mes 12 2350' #in example 3
</code></pre>
<p>Why can't I capture the numeric value of the years <code>(?P<year>\d*)</code> using the capture group references with <code>m1.groups()["\g<year>"]</code> or <code>m1.groups()[0]</code> ?</p>
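<p>As a side check on named-group access, here is a minimal sketch I ran separately:</p>
<pre class="lang-py prettyprint-override"><code>m = re.search(r"(?P<year>\d{4})", "diciembre de 2350")
if m:
    print(m.group("year"))   # '2350'
    print(m.groupdict())     # {'year': '2350'}
</code></pre>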
|
<python><python-3.x><regex><replace><regex-group>
|
2022-12-19 21:39:50
| 1
| 875
|
Matt095
|
74,856,139
| 15,781,591
|
How to add independent titles or headers to dropdown menus created using ipywidgets?
|
<p>I am using the following code to produce a tool in a Jupyter Notebook that allows the user to print a statement describing which coloured fruit they would like to try:</p>
<pre><code>import ipywidgets as widgets
from ipywidgets import interactive
fruits = ['Banana', 'Apple','Lemon','Orange']
colors = ['Blue', 'Red', 'Yellow']
drop1 = widgets.Dropdown(options=fruits, value='Banana', description='Fruit:', disabled=False)
drop2 = widgets.Dropdown(options=colors, value='Blue', description='Color:', disabled=False)
def update_dropdown(fruit, color):
info = f"I would love to try a {color.lower()} {fruit.lower()}!"
display(info)
w = interactive(update_dropdown, fruit=drop1, color=drop2)
display(w)
</code></pre>
<p>The code comes from the answer to <a href="https://stackoverflow.com/questions/74828993/how-to-build-multiple-dropdown-prompt-function-using-ipywidget/74829153?noredirect=1#comment132104540_74829153">this Stack Overflow question</a>.</p>
<p>When the user chooses a fruit and/or a color from the dropdown menus, an associated print statement should also be printed, reading "I would love to try a {color} {fruit}!". The following image, produced by the code above, shows what the partial, expected output looks like in Jupyter Notebook:</p>
<p><a href="https://i.sstatic.net/T9J3j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T9J3j.png" alt="enter image description here" /></a></p>
<p>However, I am trying to display "Choose a fruit!" right above the fruit dropdown menu and display "Choose a color!" right above the color dropdown menu, as so:
<a href="https://i.sstatic.net/Lzn85.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lzn85.png" alt="enter image description here" /></a></p>
<p>When I try to insert these print statements using the following code:</p>
<pre><code>...
drop1 = ...
print("Hey")
drop2 = ...
print("Hello")
</code></pre>
<p>I see this, which is not what I want:</p>
<p><a href="https://i.sstatic.net/z8b8E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z8b8E.png" alt="enter image description here" /></a></p>
<p>How can I modify the <code>.interactive()</code> function line to insert the print statements I am trying to show?</p>
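<p>The direction I'm currently considering (a sketch, untested): wrap each dropdown together with a <code>Label</code> in a <code>VBox</code> and display those instead of <code>w</code> directly; as far as I can tell, <code>w.children[-1]</code> is the interactive output area:</p>
<pre><code>box1 = widgets.VBox([widgets.Label("Choose a fruit!"), drop1])
box2 = widgets.VBox([widgets.Label("Choose a color!"), drop2])
display(box1, box2, w.children[-1])
</code></pre>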
|
<python><jupyter-notebook><ipywidgets>
|
2022-12-19 21:21:57
| 1
| 641
|
LostinSpatialAnalysis
|
74,855,865
| 8,262,535
|
Python subprocess on Linux: no such file or directory
|
<p>I am trying to get the install location of conda. This works fine on Windows:</p>
<pre><code>conda_path = subprocess.check_output('where anaconda').decode("utf-8").strip()
</code></pre>
<p>In a linux shell <code>whereis conda</code> works. <code>os.system("whereis conda")</code> returns zero.</p>
<p>However,</p>
<pre><code>conda_path = subprocess.check_output('whereis conda').decode("utf-8").strip()
</code></pre>
<p>Fails with: <code>FileNotFoundError: [Errno 2] No such file or directory: 'whereis conda': 'whereis conda' </code></p>
<p>Any suggestions?</p>
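<p>For reference, the two variants I'd try next (a sketch, untested): split the command into a list, or pass <code>shell=True</code>:</p>
<pre><code>conda_path = subprocess.check_output(["whereis", "conda"]).decode("utf-8").strip()
# or
conda_path = subprocess.check_output("whereis conda", shell=True).decode("utf-8").strip()
</code></pre>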
|
<python><linux><subprocess>
|
2022-12-19 20:51:51
| 1
| 385
|
illan
|
74,855,810
| 1,245,262
|
How can I import an csv file into SQLite3 from within Python using subprocess
|
<p>I'm currently running legacy code that attempts to convert a csv file into a SQL database from within Python. The file is too big for Pandas, so the code is running sqlite3 in a Python subprocess like this:</p>
<pre><code> result = subprocess.run(["sqlite3", str(db_name), '-cmd', ".mode csv",
'.import '+str(csv).replace('\\','\\\\')
+'_nohead ships'], shell=True, capture_output=True)
</code></pre>
<p>However, things seem to hang when running this statement. I've checked its progress from the shell using:</p>
<pre><code>ps -ef| egrep "sqlite3|PID"
UID PID PPID C STIME TTY TIME CMD
smg 583476 583406 0 12:31 pts/3 00:00:00 /bin/sh -c sqlite3 /media/me/Fortress/ais.db -cmd .mode csv .import /media/me/Fortress/ais/AIS_2016_03_Zone10.csv_nohead ships
</code></pre>
<p>But it's been running almost 3 hours so far, which seems incorrect. Is there something wrong with the command created by this subprocess, or for a 3.3GB file, should I just expect it to take that long?</p>
<p>===== EDIT =========</p>
<p>I just tried with a csv file that I reduced to 10 lines, and it still hangs!</p>
<p>This is legacy code; this much of it should still work. What am I failing to understand here?</p>
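<p>My current suspicion (a sketch, untested): with <code>shell=True</code> plus a list, only <code>"sqlite3"</code> reaches the shell, so it starts with no arguments and blocks waiting on stdin. The variant I plan to test drops <code>shell=True</code>:</p>
<pre><code>result = subprocess.run(
    ["sqlite3", str(db_name), "-cmd", ".mode csv",
     ".import " + str(csv).replace('\\', '\\\\') + "_nohead ships"],
    capture_output=True,
)
</code></pre>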
|
<python><sqlite><csv>
|
2022-12-19 20:45:23
| 0
| 7,555
|
user1245262
|
74,855,789
| 4,670,369
|
How can I create a MARKET order with SL/TP in Bybit using CCXT?
|
<p>I need to make this code work for MARKET orders; I can't find out how to do it.</p>
<pre><code>exchange.create_order(
symbol='ADA/USDT:USDT',
type='market',
side='sell',
amount=60,
params={
'leverage': 1,
'stopLossPrice': SL_PRICE,
'takeProfitPrice': TP_PRICE,
},
)
</code></pre>
|
<python><trading><ccxt>
|
2022-12-19 20:42:22
| 0
| 381
|
Carlos Diaz
|
74,855,742
| 1,877,002
|
Is itertools combinations always sorted
|
<p>Suppose I have a <strong>sorted</strong> array from which I want to get all <code>itertools.combinations</code> of, say, 3 elements.</p>
<pre><code>from itertools import combinations
start = 5
end = start+4
some_pos_number = 3
inds = list(combinations(range(start,end),some_pos_number))
>>>inds
[(5, 6, 7), (5, 6, 8), (5, 7, 8), (6, 7, 8)]
</code></pre>
<p>Will all of these combinations always be sorted?</p>
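<p>A quick self-check I ran on the example (each emitted tuple compared against its sorted copy):</p>
<pre><code>assert all(list(t) == sorted(t) for t in combinations(range(start, end), some_pos_number))
</code></pre>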
|
<python><python-3.x>
|
2022-12-19 20:37:12
| 1
| 2,107
|
Benny K
|
74,855,740
| 10,969,942
|
How to assign subpool (m workers) of multiprocessing pool (n workers with m < n) to some task in python?
|
<p>I have a use case like the following: I have <code>20</code> multiprocessing workers in total. I can give all resources to <code>task1</code>. However, <code>task2</code> has lower priority and I can give it at most half of the total resources. How can I assign a subpool (m workers) of a multiprocessing pool (n workers, with m < n) to some task in Python? Or is there some design pattern to handle this use case?</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
pool = multiprocessing.get_context("fork").Pool(20)
def task1():
# want to use all 20 workers
pass
def task2():
# only want to use 10 workers
pass
</code></pre>
<p><strong>PS:</strong>
For example, I have <code>20</code> physical cores, and my tasks are all CPU-bound (every task can only use one core, since it's pure Python code without any C++ package), therefore I want at most <code>20</code> running workers at the same time.</p>
<p>Task1 and task2 are incoming async jobs with counts <code>n</code> and <code>m</code> (<code>n >> 20</code> and <code>m >> 20</code>). In our requirement, task2 cannot consume more than half of the total cores at the same time, while task1 can use all resources.</p>
|
<python><design-patterns><multiprocessing><python-multiprocessing>
|
2022-12-19 20:36:48
| 1
| 1,795
|
maplemaple
|
74,855,601
| 16,378,913
|
Creating balanced dataset for YOLO v5 with each image having multiple annotations
|
<p>Currently using the YOLO v5 code from <a href="https://github.com/ultralytics/yolov5" rel="nofollow noreferrer">https://github.com/ultralytics/yolov5</a>, in which each txt has one line per object of the form <code><object-class> <x> <y> <width> <height></code> (image below). Each image has multiple classes (in the annotation file below the image has an <code><object-class></code> of <code>0</code> and <code>27</code>). The goal is to select an equal number of each <code>object-class</code> from the entire dataset. Currently, the issue is that each image comes with the other labels/bounding boxes too. How can I filter so that only certain rows are read from the annotation file? I currently have thousands of annotation files.</p>
<p><a href="https://i.sstatic.net/7NdXz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7NdXz.png" alt="Multiple annotations image" /></a></p>
<p>image source: <a href="https://stackoverflow.com/questions/68398965/coco-json-annotation-to-yolo-txt-format">COCO json annotation to YOLO txt format</a></p>
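<p>A sketch of the per-file filtering I have in mind (class ids and file name are assumed for illustration):</p>
<pre><code>keep_classes = {"0", "27"}
with open("label.txt") as f:
    rows = [line for line in f if line.split()[0] in keep_classes]
</code></pre>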
|
<python><object-detection><yolo><yolov5>
|
2022-12-19 20:17:36
| 0
| 365
|
maximus
|
74,855,444
| 2,301,970
|
Sum data inside selection in bokeh image plot
|
<p>I am starting with bokeh and I wonder if anyone could point me in the right direction.</p>
<p>I have an image (2D array). Using the gallery example:</p>
<pre><code>import numpy as np
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, RangeTool
from bokeh.layouts import column
x = np.linspace(0, 10, 300)
y = np.linspace(0, 10, 300)
xx, yy = np.meshgrid(x, y)
d = np.sin(xx) * np.cos(yy)
# Figures creation
im_fig = figure(width=400, height=400)
# Plotting the data
im_fig.image(image=[d], x=0, y=0, dw=10, dh=10, palette="Sunset11", level="image")
im_fig.grid.grid_line_width = 0.5
show(im_fig)
</code></pre>
<p>Which results in:</p>
<p><a href="https://i.sstatic.net/l08x4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l08x4.png" alt="enter image description here" /></a></p>
<p>Now I would like to sum the data along a y selection. This seems to be the work of the <a href="https://docs.bokeh.org/en/3.0.3/docs/examples/interaction/tools/range_tool.html#index-0" rel="nofollow noreferrer">RangeTool</a>.</p>
<p>I create another figure to plot the summed data of the selection but I get an error while adding the initial range:</p>
<pre><code>import numpy as np
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, RangeTool
from bokeh.layouts import column
x = np.linspace(0, 10, 300)
y = np.linspace(0, 10, 300)
xx, yy = np.meshgrid(x, y)
d = np.sin(xx) * np.cos(yy)
# Figures creation
im_fig = figure(width=400, height=400)
sum_fig = figure(width=400, height=200)
# Plotting the data
im_fig.image(image=[d], x=0, y=0, dw=10, dh=10, palette="Sunset11", level="image")
im_fig.grid.grid_line_width = 0.5
# Adding the range tools
range_tool = RangeTool(y_range=im_fig.y_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
im_fig.add_tools(range_tool)
im_fig.toolbar.active_multi = range_tool
show(column(im_fig, sum_fig))
</code></pre>
<p>This is the error:</p>
<pre><code>failed to validate RangeTool(id='p1108', ...).y_range: expected either None or a value of type Instance(Range1d), got DataRange1d(id='p1003', ...)
</code></pre>
<p>My guess is that this happens because the RangeTool is not compatible with the Image glyph. I wonder if anyone could please point me in the right direction. Thank you.</p>
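<p>For reference, the fix I plan to try (a sketch, untested): the error says <code>RangeTool.y_range</code> rejects the figure's default <code>DataRange1d</code>, so I would create the figure with an explicit <code>Range1d</code>:</p>
<pre><code>from bokeh.models import Range1d

im_fig = figure(width=400, height=400, y_range=Range1d(0, 10))
range_tool = RangeTool(y_range=im_fig.y_range)  # now a Range1d, so it validates
</code></pre>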
|
<python><plot><hyperlink><bokeh><interactive>
|
2022-12-19 19:59:57
| 1
| 693
|
Delosari
|
74,855,305
| 7,747,759
|
How to input a vector of arguments in a function in python?
|
<p>I have a function which takes a variable number of inputs. For example, the function may look like <code>fn(a0,a1,...)</code>. Now, I have a vector <code>A=[a0,a1,...]</code>, and I want to call <code>fn</code>, but I'm not sure how to dynamically set the number of inputs when handling this outside of the function. I do not want to handle this inside the function.</p>
<p>How can I do this?</p>
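<p>For reference, what I seem to be after is argument unpacking (a minimal sketch):</p>
<pre><code>A = [1, 2, 3]
fn(*A)  # equivalent to fn(1, 2, 3)
</code></pre>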
|
<python><python-3.x>
|
2022-12-19 19:44:18
| 0
| 511
|
Ralff
|
74,855,153
| 3,641,140
|
How to add external libraries to python project in Visual Studio
|
<p>My Python project has dependencies on packages that exist on the local file system in folder X (i.e. not installed from the internet). I'd like to add these packages (source code) to the Python environment for my project. How can this be done?</p>
<p>I've added folder X to "Search Paths" in the Solution Explorer, but I still cannot import the package.</p>
|
<python><visual-studio>
|
2022-12-19 19:27:24
| 1
| 319
|
SuperUser01
|
74,854,903
| 8,968,801
|
Not Required in Pydantic's Base Models
|
<p>I'm trying to accept data from an API and then validate the response structure with a Pydantic base model. However, I have the case where sometimes some fields are not included in the response, while sometimes they are. The problem is that when I try to validate the structure, Pydantic starts complaining about those fields being "missing", even though they are allowed to be missing sometimes. I really don't understand how to define a field as optional. The docs mention how a field defined with just a name and a type is treated, but I haven't had any luck.</p>
<p>This is a simple example of what I'm trying to accomplish</p>
<pre class="lang-py prettyprint-override"><code># Response: {a: 1, b: "abc", c: ["a", "b", "c"]}
response: dict = json.loads(request_response)
# Pydantic Base Model
from typing import List
from pydantic import BaseModel
class Model(BaseModel):
a: int
b: str
c: List[str]
d: float
# Validating
Model(**response)
# Return: ValidationError - Missing "d" field
</code></pre>
<p>How do I make it so that "d" doesnt cause the validation to throw an error? I have tried to switch "d" to <code>d: Optional[float]</code> and <code>d: Optional[float] = 0.0</code>, but nothing works.</p>
<p>Thanks!</p>
|
<python><python-typing><pydantic>
|
2022-12-19 18:58:37
| 1
| 823
|
Eddysanoli
|
74,854,871
| 2,094,707
|
Python requests redirect to GET instead of POST
|
<p>I am trying to call the url of a REST API for which both GET and POST requests are possible. I want to send a POST request. If I run my request through the ThunderClient plugin everything works fine. I can send a POST request and get the correct data.</p>
<p>If I send my request in python like this:</p>
<pre class="lang-py prettyprint-override"><code> import requests
response = requests.post(
url,
data=payload,
verify=certificate,
)
pprint(response.request)
</code></pre>
<p>It will print <code><PreparedRequest [GET]></code>. The requests library redirects to send a GET and I will get the corresponding GET response.</p>
<p>If I set <code>allow_redirects=False</code>:</p>
<pre class="lang-py prettyprint-override"><code> import requests
response = requests.post(
url,
data=payload,
allow_redirects=False,
verify=certificate,
)
pprint(response.request)
</code></pre>
<p>It will print <code><PreparedRequest [POST]></code>, but I get an empty <code>response.text</code> and this header:</p>
<p><code>{'Cache-Control': 'no-cache', 'Content-length': '0', 'Location': '...url...', 'Connection': 'close'}</code></p>
<p>and status code 302.</p>
<p>I don't have this issue when I send the POST request through ThunderClient. I just get the expected data back.</p>
<p>What am I doing wrong here? How can I ensure that I send a POST request?</p>
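<p>For reference, my fallback idea (a sketch, untested): suppress the automatic redirect and re-issue the POST against the <code>Location</code> header myself:</p>
<pre class="lang-py prettyprint-override"><code>r1 = requests.post(url, data=payload, allow_redirects=False, verify=certificate)
if r1.status_code in (301, 302, 307, 308):
    r2 = requests.post(r1.headers["Location"], data=payload, verify=certificate)
</code></pre>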
|
<python><http><python-requests>
|
2022-12-19 18:56:20
| 1
| 3,271
|
Stein
|
74,854,825
| 2,278,511
|
Raspberry Pi 4: 2x UART device + display touch; I can't see TOUCH device
|
<p>I have a quite specific SW/HW problem, probably related to serial communication...</p>
<p>My project is based on Raspberry Pi 4 + 7" Touch screen + ESP32 microcontroller and i have problem with screen touch function.</p>
<p><strong>Project detailed architecture</strong>:</p>
<ol>
<li>on Raspberry Pi is running my application writen in Python 3 (+Kivy Framework) and reading data with pySerial library (source bellow)</li>
<li><a href="https://www.elecrow.com/7-inch-1024-600-hdmi-lcd-display-with-touch-screen.html" rel="nofollow noreferrer">touch screen</a> is connected with HDMI(video) + USB(touch),</li>
<li>to next USB is connected <a href="https://grobotronics.com/esp32-development-board-devkit-v1.html?sl=en" rel="nofollow noreferrer">development board with ESP32 microcontroler</a> and pushing data via UART to Raspberry Pi</li>
</ol>
<p><strong>Data reading from ESP32 UART:</strong></p>
<pre class="lang-py prettyprint-override"><code>import serial
uart = serial.Serial(port="/dev/ttyUSB0", bytesize=8, baudrate=9600,
stopbits=1, timeout=0.2, parity='N')
</code></pre>
<p><strong>Okey, and here is my problem:</strong></p>
<p>When only the USB <-> UART data cable is connected (touch screen isn't connected), everything works great:
I send data from the ESP32 microcontroller (over UART) and then read it on the Raspberry Pi with the pySerial library
(and vice versa: when the touch screen is connected and the USB <-> UART cable is not, touch works fine too).</p>
<p><strong>But when both the USB touch screen and the USB <-> UART cable are connected together, I can't see any data from the ESP32</strong> microcontroller, and of course the touch function doesn't work...</p>
<p>Based on the information above, the problem is probably in the serial communication, i.e. a conflict between the USB <-> UART and USB touch interfaces, but I don't have the experience to fix or rebuild it.</p>
<p>Does someone have experience with or knowledge of how to solve this?
(If I forgot to write some important information, I can of course add it here.)</p>
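<p>One thing I plan to try (a sketch; the exact by-id path is hypothetical): opening the adapter via its stable <code>/dev/serial/by-id/</code> path instead of <code>/dev/ttyUSB0</code>, since device numbering can shift when other USB devices enumerate:</p>
<pre class="lang-py prettyprint-override"><code>uart = serial.Serial(
    port="/dev/serial/by-id/usb-Silicon_Labs_CP2102_USB_to_UART_Bridge_Controller_0001-if00-port0",
    bytesize=8, baudrate=9600, stopbits=1, timeout=0.2, parity='N',
)
</code></pre>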
<h2>Update 12/22/2022:</h2>
<p><code>lsusb - plugged in 2x UART dev + 1x Touch USB</code></p>
<pre><code>1 Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
2 Bus 001 Device 006: ID 0bda:3036 Realtek Semiconductor Corp.
3 Bus 001 Device 005: ID 0bda:3036 Realtek Semiconductor Corp.
4 Bus 001 Device 004: ID 10c4:ea60 Cygnal Integrated Products, Inc. CP2102/CP2109 UART Bridge Controller [CP210x family] # UART 1
5 Bus 001 Device 003: ID 10c4:ea60 Cygnal Integrated Products, Inc. CP2102/CP2109 UART Bridge Controller [CP210x family] # UART 2
6 Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
7 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
</code></pre>
<p><strong>I can't see Display USB TOUCH</strong></p>
<p><code>lsusb - plugged in 1x UART dev + 1x Touch USB</code></p>
<pre><code>1 Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
2 Bus 001 Device 006: ID 0bda:3036 Realtek Semiconductor Corp.
3 Bus 001 Device 005: ID 0bda:3036 Realtek Semiconductor Corp.
4 -> Bus 001 Device 004: ID 0eef:0005 D-WAV Scientific Co., Ltd # Display USB TOUCH
5 Bus 001 Device 003: ID 10c4:ea60 Cygnal Integrated Products, Inc. CP2102/CP2109 UART Bridge Controller [CP210x family] # UART 1
6 Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
7 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
</code></pre>
<p>As for the power supply, I have an external source (5 V, 4 A) for the Raspberry Pi 4 + ESP32 + 7" LCD with touch, and I didn't see any stability problems (but maybe this is the issue).</p>
|
<python><esp32><pyserial><uart><raspberry-pi4>
|
2022-12-19 18:52:36
| 1
| 408
|
lukassliacky
|
74,854,786
| 12,060,672
|
Reload stylesheets in PyQT / PySide after change object name
|
<p>Good day, colleagues. I have styles like this:</p>
<pre><code>#button_1 {
background-color: green;
}
#button_2 {
background-color: red;
}
</code></pre>
<p>I have 3 objects:</p>
<pre><code>button_1 = QPushButton()
button_2 = QPushButton()
button_3 = QPushButton()
</code></pre>
<p>After a click on <code>button_1</code>, I want to apply the #button_1 style to that button, so I change the object name of the button:</p>
<pre><code>button_1.setObjectName('button_1')
</code></pre>
<p>but after that the button style does not change (the CSS itself loads correctly and works). So my question is: maybe I need to reload the button or do something with it so this style is applied?</p>
<p>I use python version 3.10 and pyside6 version 6.4.1</p>
<p>Full code of app:</p>
<pre><code>from PySide6.QtWidgets import QHBoxLayout, QWidget, QPushButton, QApplication
import sys
class App(QWidget):
def __init__(self) -> None:
super().__init__()
self.load_ui()
def load_ui(self):
self.setStyleSheet("""
#button_1 {
background-color: green;
}
#button_2 {
background-color: red;
}
""")
self.button_1 = QPushButton()
self.button_2 = QPushButton()
self.button_3 = QPushButton()
self.app_layout = QHBoxLayout()
self.app_layout.addWidget(self.button_1)
self.app_layout.addWidget(self.button_2)
self.app_layout.addWidget(self.button_3)
self.setLayout(self.app_layout)
self.button_1.clicked.connect(self.button_1_click)
def button_1_click(self):
print('clicked')
self.button_1.setObjectName('button_1')
if __name__ == "__main__":
app = QApplication(sys.argv)
main_window_widget = App()
main_window_widget.show()
    sys.exit(app.exec())
</code></pre>
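<p>For reference, the workaround I'm going to try (a sketch based on the usual Qt re-polish idiom, untested):</p>
<pre><code>def button_1_click(self):
    self.button_1.setObjectName('button_1')
    # force Qt to re-evaluate the stylesheet for this widget
    self.button_1.style().unpolish(self.button_1)
    self.button_1.style().polish(self.button_1)
</code></pre>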
|
<python><pyqt><pyside>
|
2022-12-19 18:49:34
| 2
| 321
|
antipups
|
74,854,728
| 1,315,621
|
Start multiple uvicorn apps in Python
|
<p>I need to create a Python application that handles both an API (<code>fastAPI</code>) and sockets (<code>socketio</code>). I can't find a way to start both uvicorn applications in the same Python script. Note that I can replace uvicorn with any other library that would allow me to fix this.
Code:</p>
<pre><code>import json
from fastapi import FastAPI
import socketio
import uvicorn
# create API app
app_api = FastAPI()
# create a Socket.IO server and wrap with a WSGI application
sio = socketio.Server(port=8001)
app_sio = socketio.WSGIApp(sio)
@app_api.get("/")
def read_root():
return {"Hello": "World"}
@sio.on('*', namespace="/aws-iam")
def catch_all(event, sid, data):
print(event, sid, data)
</code></pre>
<p>I am not sure how I can start both <code>app_api</code> and <code>app_sio</code>. I can't start both from the main thread because <code>uvicorn.run(...)</code> is blocking, so only the first call would run.
I tried to start them in two different threads, but I got errors:</p>
<pre><code>if __name__ == "__main__":
def start_api():
uvicorn.run("testing.mock_api:app_api", host='127.0.0.1', port=8000, reload=True, debug=True) #, workers=3)
def start_sio():
uvicorn.run("testing.mock_api:app_sio", host='127.0.0.1', port=8001, reload=True, debug=True) # , workers=3)
from threading import Thread
import time
threads = [
Thread(target=start_sio),
Thread(target=start_api),
]
for thread in threads:
thread.start()
time.sleep(999999999999)
</code></pre>
<p>With multiple threads I get the errors:</p>
<pre><code> File "/src/testing/mock_api.py", line 55, in start_api
uvicorn.run("testing.mock_api:app_api", host='127.0.0.1', port=8000, reload=True, debug=True) #, workers=3)
File "/usr/local/lib/python3.6/dist-packages/uvicorn/main.py", line 447, in run
ChangeReload(config, target=server.run, sockets=[sock]).run()
File "/usr/local/lib/python3.6/dist-packages/uvicorn/supervisors/basereload.py", line 43, in run
self.startup()
File "/usr/local/lib/python3.6/dist-packages/uvicorn/supervisors/basereload.py", line 62, in startup
signal.signal(sig, self.signal_handler)
File "/usr/lib/python3.6/signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
</code></pre>
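<p>For reference, a variant I'm considering (a sketch, untested): per the traceback, the reloader's signal handling needs the main thread, so I'd build <code>uvicorn.Server</code> objects without <code>reload</code> and run one per thread:</p>
<pre><code>import threading
import uvicorn

def serve(app, port):
    config = uvicorn.Config(app, host="127.0.0.1", port=port)
    uvicorn.Server(config).run()

threading.Thread(target=serve, args=(app_sio, 8001), daemon=True).start()
serve(app_api, 8000)  # keep one server on the main thread
</code></pre>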
|
<python><sockets><fastapi><uvicorn>
|
2022-12-19 18:43:59
| 1
| 3,412
|
user1315621
|
74,854,623
| 7,576,002
|
GSSAPI Docker Installation Issue - /bin/sh: 1: krb5-config: not found
|
<p>I successfully tried out GSSAPI to generate kerberos tickets in my Python app locally on my Mac. Now I am trying to package this as a Docker image.</p>
<p>When I try to build the image I keep getting this error:</p>
<pre><code>------
> [ 6/13] RUN pip install gssapi:
#10 1.307 Collecting gssapi
#10 1.383 Downloading gssapi-1.8.2.tar.gz (94 kB)
#10 1.400 ββββββββββββββββββββββββββββββββββββββββ 94.3/94.3 kB 6.3 MB/s eta 0:00:00
#10 1.427 Installing build dependencies: started
#10 4.102 Installing build dependencies: finished with status 'done'
#10 4.103 Getting requirements to build wheel: started
#10 4.548 Getting requirements to build wheel: finished with status 'error'
#10 4.552 error: subprocess-exited-with-error
#10 4.552
#10 4.552 Γ Getting requirements to build wheel did not run successfully.
#10 4.552 β exit code: 1
#10 4.552 β°β> [25 lines of output]
#10 4.552 /bin/sh: 1: krb5-config: not found
#10 4.552 Traceback (most recent call last):
#10 4.552 File "/usr/local/lib/python3.11/si
</code></pre>
<p>I am also running this before I call RUN pip install gssapi:</p>
<pre><code>> RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
</code></pre>
<p>The main issue I am trying to solve is this:</p>
<pre><code>/bin/sh: 1: krb5-config: not found
</code></pre>
<p>Docker file:</p>
<pre><code>FROM python:slim-buster

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

COPY requirements.txt .
COPY *.py .

RUN python -m pip install --upgrade pip
RUN apt-get -qq update && \
    apt-get -yqq install krb5-user libpam-krb5 && \
    apt-get -yqq clean
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN pip install gssapi
RUN python -m pip install -r requirements.txt
RUN apt update -y && apt install g++ -y && apt install build-essential -y && apt install unixodbc-dev -y
RUN apt install -y libgssapi-krb5-2 && apt install curl -y
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt update -y && ACCEPT_EULA=Y apt-get install -y msodbcsql18 && ACCEPT_EULA=Y apt-get install -y mssql-tools18
RUN echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bashrc

CMD ["python","main.py"]
</code></pre>
<p>What am I missing? I can't seem to find anything on this.</p>
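<p>One hypothesis I'm testing (a sketch, untested): on Debian-based images the <code>krb5-config</code> script ships in the <code>libkrb5-dev</code> package rather than in <code>krb5-config</code>, so the build may need:</p>
<pre><code>RUN apt-get update && apt-get install -y --no-install-recommends libkrb5-dev gcc
</code></pre>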
|
<python><kerberos><gssapi>
|
2022-12-19 18:35:42
| 2
| 1,129
|
KSS
|
74,854,464
| 9,220,463
|
efficient way to disconnect graphs while maximising edges weight
|
<p>Given a connected graph and a list of N assigned vertices, I want to find an efficient way to create N subgraphs, each containing one of the assigned vertices.
To achieve that, we can prune edges; however, we should prune as little edge weight as possible.</p>
<p>For example, let's start with the following graph. We want to obtain three subgraphs containing one of the three red vertexes <a href="https://i.sstatic.net/TuuTH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TuuTH.png" alt="enter image description here" /></a></p>
<p>The result should look like the following:
<a href="https://i.sstatic.net/ylUPO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ylUPO.png" alt="enter image description here" /></a></p>
<p>Right now, I'm using a heuristic, but it is not working well in some edge cases and has n^2 complexity in the number of vertices. The idea is to compute the shortest path between two vertices, remove the lightest edge on it, and repeat until the vertices are disconnected.
Here is my code:</p>
<pre><code>import pandas as pd
import igraph as ig
from collections import Counter
ucg_df = pd.DataFrame(
[
[0, 1, 100],
[0, 2, 110],
[2, 3, 70],
[3, 4, 100],
[3, 1, 90],
[0, 3, 85],
[5, 7, 90],
[0, 8, 100],
[3, 6, 10],
[2, 5, 60],
],
columns=["nodeA", "nodeB", "weight"],
)
ucg_graph = ig.Graph.DataFrame(ucg_df, directed=False)
ig.plot(
ucg_graph,
target='stack1.pdf',
edge_label=ucg_graph.es["weight"],
vertex_color=['red']*3 + ['green']*(len(ucg_df)-3),
vertex_label = ucg_graph.vs.indices
)
def generate_subgraphs_from_vertexes(g, vertex_list):
for i, vertex in enumerate(vertex_list):
for j in range(i + 1, len(vertex_list)):
while True:
path = g.get_shortest_paths(vertex_list[i], vertex_list[j], mode='ALL', output='epath',
weights='weight')[0]
if len(path) == 0:
break
edge_2_drop = min(g.es[path], key=lambda x: x['weight'])
edge_2_drop.delete()
return g
graph = generate_subgraphs_from_vertexes(ucg_graph, ucg_graph.vs[0,1,2])
ig.plot(
graph,
target='stack2.pdf',
edge_label=graph.es["weight"],
vertex_color=['red']*3 + ['green']*(len(ucg_df)-3),
vertex_label = graph.vs.indices
)
</code></pre>
<p>What kind of algorithm could I use to better solve this problem?</p>
|
<python><r><optimization><graph><igraph>
|
2022-12-19 18:20:04
| 2
| 621
|
riccardo nizzolo
|
74,854,302
| 12,288,003
|
AttributeError: 'Query' object has no attribute 'is_clause_element' when joining table with query
|
<p><strong>AttributeError: 'Query' object has no attribute 'is_clause_element' when joining table with query</strong></p>
<p>I have a query that counts the number of keywords a company has and then sorts companies by that count.</p>
<pre><code>query_company_ids = Session.query(enjordplatformCompanyToKeywords.company_id.label("company_id"),func.count(enjordplatformCompanyToKeywords.keyword_id)).group_by(enjordplatformCompanyToKeywords.company_id).order_by(desc(func.count(enjordplatformCompanyToKeywords.keyword_id))).limit(20)
</code></pre>
<p>I then want to get information about these companies, like image, title, info etc., and send it to the frontend (this is done later by looping through companies_query). <br>
However, I have trouble building the connection between the <em>query_company_ids</em> query and the <em>enjordplatformCompanies</em> table.</p>
<p>I have tried two ways of doing this:</p>
<ol>
<li>companies_query = Session.query(enjordplatformCompanies, query_company_ids).filter(enjordplatformCompanies.id == query_company_ids.company_id).all()</li>
<li>companies_query = Session.query(enjordplatformCompanies, query_company_ids).join( query_company_ids, query_company_ids.c.company_id == enjordplatformCompanies.id).all()</li>
</ol>
<p>But both of them result in the error: AttributeError: 'Query' object has no attribute 'is_clause_element'</p>
<p><strong>Question</strong><br>
How can I join the query_company_ids query and enjordplatformCompanies table? <br>
Thanks</p>
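<p>For reference, the variant I plan to try next (a sketch, untested): turn the first query into a subquery and join against its columns:</p>
<pre><code>subq = query_company_ids.subquery()
companies_query = (
    Session.query(enjordplatformCompanies)
    .join(subq, subq.c.company_id == enjordplatformCompanies.id)
    .all()
)
</code></pre>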
<p><strong>Here are the table definitions</strong></p>
<pre><code>class enjordplatformCompanies(Base):
__tablename__ = "enjordplatform_companies"
id = Column(Integer, primary_key=True, unique=True)
name = Column(String)
about = Column(String)
image = Column(String)
website = Column(String)
week_added = Column(Integer)
year_added = Column(Integer)
datetime_added = Column(DateTime)
created_by_userid = Column(Integer)
company_type = Column(String)
contact_email=Column(String)
adress=Column(String)
city_code=Column(String)
city=Column(String)
class enjordplatformCompanyToKeywords(Base):
__tablename__ = "enjordplatform_company_to_keywords"
id = Column(Integer, primary_key=True, unique=True)
company_id = Column(Integer,ForeignKey("enjordplatform_companies.id"))
keyword_id = Column(Integer,ForeignKey("enjordplatform_keywords.id"))
</code></pre>
|
<python><sqlalchemy>
|
2022-12-19 18:04:07
| 1
| 339
|
user12288003
|
74,853,917
| 9,783,831
|
repr function in swig for python
|
<p>I have a SWIG wrapper to use in Python.
For one of my classes, I have created a <code>repr</code> function as follows:</p>
<pre><code>%module myModule
%{
#include <string>
#include <sstream>
%}
%extend myClass {
std::string __repr__()
{
std::ostringstream ss;
ss << "MyClass(attr1=" << $self->attr1 << ", attr2=" << $self->attr1 << ")";
return ss.str();
}}
</code></pre>
<p>However when I compile the wrapper and use it in python, I get the following error
<code>__repr__ returned non-string (type SwigPyObject)</code></p>
<p>How can I fix this?</p>
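<p>One hypothesis I'm checking (a sketch, untested): without the <code>std_string.i</code> typemaps, SWIG wraps the returned <code>std::string</code> as an opaque <code>SwigPyObject</code> instead of converting it to a Python <code>str</code>:</p>
<pre><code>%module myModule
%include <std_string.i>
</code></pre>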
|
<python><swig>
|
2022-12-19 17:25:34
| 1
| 407
|
Thombou
|
74,853,835
| 3,849,761
|
How to do type casting for special datatypes in Python
|
<p>I'm trying to determine the type of a value in a dictionary which comes from outside of a function, something like this:</p>
<pre><code>def create_new_ds(sep_labels: dict, upper_limit: int, ds: bytearray, ref_lbl: str, rnd: bool) -> []:
desired_lbls = sep_labels.keys()
if rnd:
sep_labels[ref_lbl]
</code></pre>
<p>In this code I know that the value of the <code>sep_labels[ref_lbl]</code> dictionary entry is a pandas DataFrame. The question is how to let the PyCharm IDE understand this, so that I can get its attribute suggestions in the editor.</p>
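<p>For reference, the two annotation styles I'm experimenting with (a sketch):</p>
<pre><code>import pandas as pd
from typing import cast

df: pd.DataFrame = sep_labels[ref_lbl]         # variable annotation
df2 = cast(pd.DataFrame, sep_labels[ref_lbl])  # or an explicit cast for the IDE
</code></pre>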
|
<python><variables><casting>
|
2022-12-19 17:19:03
| 1
| 1,058
|
livan3li
|
74,853,634
| 4,534,466
|
Kedro, running inference on user input
|
<p>I have a pipeline with the model I want to use. Outside of the project, I have an <code>app.py</code> file where I'm going to create the UI/UX for my users to run my model. Right now I'm just using a sample string but later on, you can imagine that there will be a textbox for users to type.</p>
<p>How can I pass the user input as an input to the pipeline? I though I would be able to do so with the <code>kedro.framework.session.session.KedroSession</code> as seen in the code below, but doing so results in the error <code>ValueError: Pipeline input(s) {'user-input'} not found in the DataCatalog</code></p>
<pre><code>from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project
from kedro.io import MemoryDataSet
import os
bootstrap_project("<project path>")
user_input = "this is a sample text"
user_input = MemoryDataSet(user_input)
with KedroSession.create("project") as session:
output = session.run(
"nlp-pipeline",
from_inputs={
"user-input": user_input
}
)
print(output)
</code></pre>
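<p>For completeness, the fallback I'm considering (a rough sketch, untested, ignoring the rest of the catalog for brevity): bypass the session and call a runner directly with a hand-built catalog that contains the <code>user-input</code> dataset:</p>
<pre><code>from kedro.framework.project import pipelines
from kedro.io import DataCatalog, MemoryDataSet
from kedro.runner import SequentialRunner

catalog = DataCatalog({"user-input": MemoryDataSet("this is a sample text")})
output = SequentialRunner().run(pipelines["nlp-pipeline"], catalog)
</code></pre>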
|
<python><kedro><mlops>
|
2022-12-19 16:58:13
| 0
| 1,530
|
JoΓ£o Areias
|
74,853,453
| 11,978,973
|
Days till year end
|
<p>I need to find the number of days left from today till the end of the year. I know I can calculate this by simply subtracting today's date from December 31st of this year, i.e.:</p>
<pre><code>current_year = dt.datetime.now().year
days_left = dt.date(current_year, 12, 31) - dt.datetime.now().date()
</code></pre>
<p>Is there a smarter/faster way to do this?</p>
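<p>For comparison, one variant I considered (a sketch) uses the day-of-year counter instead of a date subtraction:</p>
<pre><code>import calendar
import datetime as dt

today = dt.date.today()
year_length = 366 if calendar.isleap(today.year) else 365
days_left = year_length - today.timetuple().tm_yday
</code></pre>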
|
<python><datetime><timedelta>
|
2022-12-19 16:44:01
| 1
| 364
|
Zephyrus
|
74,853,108
| 7,168,098
|
spaCy: generalize a language factory that gets a regular expression to create spans in a text
|
<p>Working with spaCy it is possible to define spans in a document that correspond to a regular expression matching on the text.
I would like to generalize this into a language factory.</p>
<p>The code to create a span could be like this:</p>
<pre><code>import re

import spacy
from spacy.tokens import Span

nlp = spacy.load("en_core_web_sm")
text = "this is pepa pig text comprising a brake and fig. 45. The house is white."
doc=nlp(text)
def _component(doc, name, regular_expression):
if name not in doc.spans:
doc.spans[name] = []
for i, match in enumerate(re.finditer(regular_expression, doc.text)):
label = name + "_" + str(i)
start, end = match.span()
span = doc.char_span(start, end, alignment_mode = "expand")
span_to_add = Span(doc, span.start, span.end, label=label)
doc.spans[name].append(span_to_add)
return doc
doc = _component(doc, 'pepapig', r"pepa\spig")
</code></pre>
<p>I would like to generalize this into a factory.
The factory would take a particular list of regular expressions with names like:</p>
<pre><code>[{'name':'pepapig','rex':r"pepa\spig"},{'name':'pepapig2','rex':r"george\spig"}]
</code></pre>
<p>The way I try to do this is as follows (code does not work)</p>
<pre><code>@Language.factory("myregexes6", default_config={})
def add_regex_match_as_span(nlp, name, regular_expressions):
for i,rex_d in enumerate(regular_expressions):
print(rex_d)
name = rex_d['name']
rex = rex_d['rex']
_component(doc, name=name, regular_expression=rex, DEBUG=False)
return doc
nlp.add_pipe(add_regex_match_as_span(nlp, "MC", regular_expressions=[{'name':'pepapig','rex':r"pepa\spig"},{'name':'pepapig2','rex':r"george\spig"}]))
</code></pre>
<p>I am looking for a solution to the above code.</p>
<p>The error I get is:</p>
<pre><code>[E966] `nlp.add_pipe` now takes the string name of the registered component factory, not a callable component. Expected string, but got this is pepa pig text comprising a brake and fig. 45. The house is white. (name: 'None').
- If you created your component with `nlp.create_pipe('name')`: remove nlp.create_pipe and call `nlp.add_pipe('name')` instead.
- If you passed in a component like `TextCategorizer()`: call `nlp.add_pipe` with the string name instead, e.g. `nlp.add_pipe('textcat')`.
- If you're using a custom component: Add the decorator `@Language.component` (for function components) or `@Language.factory` (for class components / factories) to your custom component and assign it a name, e.g. `@Language.component('your_name')`. You can then run `nlp.add_pipe('your_name')` to add it to the pipeline.
</code></pre>
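<p>For reference, this is roughly the shape I think the factory should take (a sketch, untested): a class-based component registered with the factory, with the regex list passed through <code>config</code>:</p>
<pre><code>from spacy.language import Language

class RegexComponent:
    def __init__(self, regular_expressions):
        self.regular_expressions = regular_expressions

    def __call__(self, doc):
        for rex_d in self.regular_expressions:
            doc = _component(doc, rex_d["name"], rex_d["rex"])
        return doc

@Language.factory("myregexes6", default_config={"regular_expressions": []})
def create_regex_component(nlp, name, regular_expressions):
    return RegexComponent(regular_expressions)

nlp.add_pipe("myregexes6", config={"regular_expressions": [
    {"name": "pepapig", "rex": r"pepa\spig"},
    {"name": "pepapig2", "rex": r"george\spig"},
]})
</code></pre>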
<h2>LAST EDIT</h2>
<p>How can the factory be saved into a .py file and re-used from another file?</p>
|
<python><spacy>
|
2022-12-19 16:11:05
| 1
| 3,553
|
JFerro
|
74,852,879
| 12,366,110
|
Finding the average of the x component of an array of coordinates, based on the y component
|
<p>I have the following example array of x-y coordinate pairs:</p>
<pre><code>A = np.array([[0.33703753, 3.],
[0.90115394, 5.],
[0.91172016, 5.],
[0.93230994, 3.],
[0.08084283, 3.],
[0.71531777, 2.],
[0.07880787, 3.],
[0.03501083, 4.],
[0.69253184, 4.],
[0.62214452, 3.],
[0.26953094, 1.],
[0.4617873 , 3.],
[0.6495549 , 0.],
[0.84531478, 4.],
[0.08493308, 5.]])
</code></pre>
<p>My goal is to reduce this to an array with six rows by taking the average of the x-values for each y-value, like so:</p>
<pre><code>array([[0.6495549 , 0. ],
[0.26953094, 1. ],
[0.71531777, 2. ],
[0.41882167, 3. ],
[0.52428582, 4. ],
[0.63260239, 5. ]])
</code></pre>
<p>Currently I am achieving this by converting to a pandas dataframe, performing the calculation, and converting back to a numpy array:</p>
<pre><code>>>> df = pd.DataFrame({'x':A[:, 0], 'y':A[:, 1]})
>>> df.groupby('y').mean().reset_index()
y x
0 0.0 0.649555
1 1.0 0.269531
2 2.0 0.715318
3 3.0 0.418822
4 4.0 0.524286
5 5.0 0.632602
</code></pre>
<p>Is there a way to perform this calculation using numpy, without having to resort to the pandas library?</p>
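<p>One pure-numpy route I'm aware of (a sketch): group with <code>np.unique(..., return_inverse=True)</code> and average via <code>np.bincount</code>:</p>
<pre><code>ys, inv = np.unique(A[:, 1], return_inverse=True)
means = np.bincount(inv, weights=A[:, 0]) / np.bincount(inv)
result = np.column_stack([means, ys])
</code></pre>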
|
<python><numpy><vectorization>
|
2022-12-19 15:53:15
| 4
| 14,636
|
CDJB
|