QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
74,947,206
| 7,472,392
|
How to filter longitude and latitude that are not within a area
|
<p>I have a boundary with polygon object that is consisted of longitudes and latitudes.</p>
<p>That looks like the below:</p>
<pre><code>POLYGON((35.82879102266901 128.7076346641539,35.82707300496909 128.7067790195541,35.82583849984275 128.70867790735832,35.824475124668446 128.70759751110955,35.823661363692246 128.7058871623279,35.82330557322746 128.70555866952947,35.82331389280524 128.70497241578087,35.82526251082479 128.70418422540567,35.82744337671204 128.7041755355518,35.827607540680496 128.70530771172315,35.82939029995311 128.70541225682194,35.83034001648074 128.70262188794626,35.83078056043854 128.69951074704943,35.83193366058302 128.69379210686643,35.828047660143625 128.693941879719,35.827966675973194 128.69201477525812,35.82756567015862 128.691054639399,35.82742778802447 128.68932551896413,35.82650612209906 128.6882215894989,35.82623275962891 128.6871535405911,35.82647878660091 128.68631779206828,35.82589756010733 128.6853538910315,35.82655549313656 128.68470389060354,35.82708326990368 128.6843388272404,35.82758369093474 128.68335351964126,35.82889524063444 128.68364679883342,35.82890639831895 128.68285031370743,35.828906097221974 128.67981280809806,35.82888304950931 128.67888281424953,35.82896109865986 128.67781109698095,35.82914226525198 128.6749987130506,35.829530458483134 128.66911993413487,35.82983133640095 128.66821884727412,35.83362553735145 128.67013528497822,35.834399059357544 128.67024003096986,35.840482024092296 128.6695707366228,35.84159664755783 128.66977117810666,35.842905155774766 128.67028558214335,35.84594503638666 128.67200953129984,35.84644751771214 128.67086900635903,35.84705457008395 128.6712801954765,35.8434163977168 128.67886262656154,35.84314479314578 128.67975890696695,35.84173465446248 128.6852185741404,35.84220901183875 128.68737568860865,35.842691108354394 128.68897961865747,35.84312006520884 128.69051603468242,35.84319021489896 128.6919341686762,35.843845826650806 128.69193149502144,35.84458126183942 128.69217954131344,35.8450944094251 128.6928545128021,35.845537407773186 128.6949889588373,35.84594458681003 128.69774246767017,35.84595753555761 128.69873885574475,35.845592497537034 128.69906310486897,35.8455682983552 128.69949976886403,35.8456150311943 128.70001818866905,35.845563157388696 128.70065901677296,35.84537908872975 128.70140769814486,35.84555437110258 128.70286963195795,35.84587322292093 128.70357926102878,35.84594642214659 128.70381878675977,35.846971713667784 128.70522976267594,35.847545984299884 128.70540254780536,35.84770419707187 128.70647677872364,35.846993881550205 128.70668291781934,35.84665150163529 128.70699655180104,35.84651270074719 128.70756911108555,35.84664594398947 128.708338426456,35.846370809343775 128.70867563064274,35.84552164493595 128.70882342843882,35.84555869274497 128.7100195496374,35.844390635774545 128.70946322074195,35.84409546436334 128.71057471572104,35.84279576302751 128.7119855691835,35.84233078727481 128.71173209213836,35.84162880715499 128.71356524195375,35.84170215924991 128.71473995285587,35.8416923687089 128.7160567492122,35.84125948431006 128.71671145609938,35.84060185802347 128.717328108808,35.83969799283258 128.71814971377495,35.840034447448154 128.71919727395152,35.83991953670813 128.71967067618365,35.84008465778431 128.72072561124943,35.841207493858356 128.72223287429418,35.8400990619929 128.72349268565964,35.84077048506402 128.72505661382988,35.84092818918439 128.72536990992046,35.84112033710699 128.7257946253903,35.84131958422449 128.7263523033404,35.84147458147238 128.72685368846388,35.83911513549549 128.7286285758717,35.839407843897376 128.72926574390837,35.833677521275426 128.73295917806223,35.82998596835267 128.73274610143017,35.82643232778233 128.73109204987105,35.82124747076703 128.73101263135672,35.81720051067774 128.73357454481362,35.81238647159309 128.73589823930334,35.808559429329115 128.73695438018242,35.80941040679152 128.73325029582915,35.81056065279409 128.72909350975036,35.81459078638608 128.72518698632229,35.81992126286518 128.71793355529508,35.82191863521933 128.71689227991428,35.82393401644843 128.71585134191892,35.82522489120707 128.71379891405198,35.82756633550083 128.71010918623398,35.82879102266901 128.7076346641539))
</code></pre>
<p>I also have a list of sets of longitudes and latitudes.</p>
<pre><code>df = pd.DataFrame({'name':['a', 'b', 'c', 'd'] ,'lat': [37.6647124, 37.90052166666667, 36.807616, 37.24141], 'long': [126.7686911, 127.203435, 127.107547, 127.059358]})
</code></pre>
<p>I want to filter out names that are not within the <code>polygon</code>.</p>
<p>I can't come up with an idea that works.</p>
<p>[Edited]
For the POLYGON, if you need to convert that to a list of pairs, then you can use the following code.</p>
<pre><code>def get_lat_lng_pair(boundary_str):
lat_lng_list = []
for lat_lng_str in boundary_str.split(','):
lat, lng = lat_lng_str.split()
lat_lng_list.append((float(lat), float(lng)))
return lat_lng_list
# Extracting All Polygon points
all_area_ids = [27]
all_polygon_pts = get_all_polygon_pts(area_df, all_area_ids)
all_polygon_pts
</code></pre>
|
<python><dictionary><coordinates><polygon>
|
2022-12-29 04:20:21
| 0
| 1,511
|
Yun Tae Hwang
|
74,947,071
| 10,970,202
|
Which version of python extension support python3.6 in vscode?
|
<p>I'm shifting from pycharm to vscode for python development and want to check if most of functionality from pycharm such as debugging, gitlab integration, etc... Are possible in VScode.</p>
<p>I've learned that you could install python extension so I did install python v.2022.20.1 extension. However this debugging functionality does not work when I use python 3.6(it says in python extension that it supports python 3.7 and above). I want to find a version that support python 3.6, how can I find it?</p>
<p>I can click drop down beside uninstall block and install another version however there are lots of versions and it is cumbersome to click each one to check...</p>
|
<python><visual-studio-code><python-3.6>
|
2022-12-29 03:55:30
| 1
| 5,008
|
haneulkim
|
74,946,845
| 19,616,641
|
AttributeError: module 'numpy' has no attribute 'int'
|
<p>I tried to run my code in another computer, while it successfully compiled in the original environment, this error can outta nowhere:</p>
<pre><code>File "c:\vision_hw\hw_3\cv2IP.py", line 91, in SECOND_ORDER_LOG
original = np.zeros((5,5),dtype=np.int)
File "C:\Users\brian2lee\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\numpy\__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'int'
</code></pre>
<p>I have tried reinstallin numpy but it did not work. Down below is my code:</p>
<pre><code>def SECOND_ORDER_LOG (self,img):
original = np.zeros((5,5),dtype=np.int)
original[2,2] = 1
kernel = np.array([[ 0, 0, -1, 0, 0],
[ 0, -1, -2, -1, 0],
[-1, -2, 16, -2, -1],
[ 0, -1, -2, -1, 0],
[ 0, 0, -1, 0, 0]])
result = original + 1 * kernel
sharpened = cv2.filter2D(img, -1, result)
return sharpened
</code></pre>
|
<python><numpy><attributeerror>
|
2022-12-29 03:01:26
| 3
| 421
|
brian2lee
|
74,946,711
| 10,062,025
|
Why is the response sometimes passing through and sometimes not when using requests httpx in python?
|
<p>I am trying to scrape this website items, however when I used httpx or even requests sometimes it passes and gets the response sometimes it doesn't. It seems random, that's why I tried doing a rerun of the failed items to get the results. However this does not seem to work 100% of all the time. Is there something that I am not doing right? Can somebody help?</p>
<p>Below is a sample of the web I am scraping, if it does not show any errors you can loop it to scrape the same thing twice.</p>
<p>Here's my current code:</p>
<pre><code>urllist=['https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-16912/_summary?defaultItemSku=RAM-70107-16912-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-04124/_summary?defaultItemSku=RAM-70107-04124-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-19598/_summary?defaultItemSku=RAM-70107-19598-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-01115/_summary?defaultItemSku=RAM-70107-01115-00001&pickupPointCode=PP-3206709&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-02620/_summary?defaultItemSku=RAM-70107-02620-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-17552/_summary?defaultItemSku=RAM-70107-17552-00001&pickupPointCode=PP-3023611&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-08377/_summary?defaultItemSku=RAM-70107-08377-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-03274/_summary?defaultItemSku=RAM-70107-03274-00001&pickupPointCode=PP-3069769&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-04345/_summary?defaultItemSku=RAM-70107-04345-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-01030/_summary?defaultItemSku=RAM-70107-01030-00001&pickupPointCode=PP-3239816&cnc=false',
'https://www.blibli.com/backend/product-detail/products/ps--RAM-70107-01534/_summary?defaultItemSku=RAM-70107-01534-00001&pickupPointCode=PP-3239816&cnc=false']
rancherrors=pd.DataFrame()
ranchdf=pd.DataFrame()
for url in urllist:
try:
ua = UserAgent()
USER_AGENT = ua.random
headers={
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"accept-encoding": "gzip, deflate, br",
"accept-language": "id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7",
"cache-control": "max-age=0",
"sec-ch-ua": 'Chromium";v="106", "Google Chrome";v="106", "Not;A=Brand";v="99',
"sec-ch-ua-platform": "Windows",
"sec-fetch-dest": "document",
"sec-fetch-mode": "navigate",
"sec-fetch-site": "none",
'X-Forwarded-For': f'{random_ip}',
"user-agent" : f"{USER_AGENT}",
'referer':'https://www.blibli.com/'
}
response =httpx.get(url,headers=headers)
try:
price=str(response.json()['data']['price']['listed']).replace(".0","")
discount=str(response.json()['data']['price']['totalDiscount'])
except:
price="0"
discount="0"
try:
unit=str(response.json()['data']['uniqueSellingPoint']).replace("β’ ","")
except:
unit=""
dat={
'product_name':response.json()['data']['name'],
'normal_price':price,
'discount':discount,
'competitor_id':response.json()['data']['ean'],
'url':input,
'unit':unit,
'astro_id':id,
'date_key':today,
'web':'ranch market'
}
dat=pd.DataFrame([dat])
ranchdf=ranchdf.append(dat)
sleep(randint(2,5))
except Exception as e:
datum={
'id':id,
'url':url
}
datum=pd.DataFrame([datum])
rancherrors=rancherrors.append(datum)
print(f'{url} error {e}')
</code></pre>
|
<python><httpx>
|
2022-12-29 02:27:38
| 0
| 333
|
Hal
|
74,946,568
| 6,509,883
|
How to estimate the extrinsic matrix of a chessboard image and project it to bird's eye view such it presents pixel size in meters?
|
<p>I want to generate an Occupancy Grid (OG) like image with a Bird's Eye View (BEV), i.e., each image pixel has a constant unit measure and everything on the final grid is floor (height=0).</p>
<p>I don't know what I'm missing, I'm newbie on the subject and I'm trying to follow a pragmatic step by step to get on the final results. I have spent a huge time on this and I'm still getting poor results. I'd appretiate any help. Thanks.</p>
<p>To get on my desired results, I follow the pipeline:</p>
<ol>
<li>Estimate the extrinsic matrix with <strong>cv2.solvePnP</strong> and a chessboard image.</li>
<li>Generate the OG grid XYZ world coordinates (X=right, Y=height, Z=forward).</li>
<li>Project the OG grid XYZ camera coordinates with the extrinsic matrix.</li>
<li>Match the uv image coordinates for the OG grid camera coordinates.</li>
<li>Populate the OG image with the uv pixels.</li>
</ol>
<p>I have the following intrinsic and distortion matrices that I previously estimated from another 10 chessboard images like the one bellow:</p>
<h2>1. Estimate the extrinsic matrix</h2>
<pre><code>import numpy as np
import cv2
import matplotlib.pyplot as plt
mtx = np.array([[2029, 0, 2029],
[ 0, 1904, 1485],
[ 0, 0, 1]]).astype(float)
dist = np.array([[-0.01564965, 0.03250585, 0.00142366, 0.00429703, -0.01636045]])
</code></pre>
<p><a href="https://i.sstatic.net/cckHA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cckHA.jpg" alt="enter image description here" /></a></p>
<pre><code>impath = '....'
img = cv2.imread(impath)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
CHECKERBOARD = (5, 8)
ret, corners = cv2.findChessboardCorners(gray, CHECKERBOARD, None)
corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
objp = np.concatenate(
np.meshgrid(np.arange(-4, 4, 1),
0,
np.arange(0, 5, 1),
)
).astype(float)
objp = np.moveaxis(objp, 0, 2).reshape(-1, 3)
square_size = 0.029
objp *= square_size
ret, rvec, tvec = cv2.solvePnP(objp, corners[::-1], mtx, dist)
print('rvec:', rvec.T)
print('tvec:', tvec.T)
# img_withaxes = cv2.drawFrameAxes(img.copy(), mtx, dist, rvec, tvec, square_size, 3)
# plt.imshow(cv2.resize(img_withaxes[..., ::-1], (800, 600)))
# rvec: [[ 0.15550242 -0.03452503 -0.028686 ]]
# tvec: [[0.03587237 0.44082329 0.62490573]]
</code></pre>
<pre><code>R = cv2.Rodrigues(rvec)[0]
RT = np.eye(4)
RT[:3, :3] = R
RT[:3, 3] = tvec.ravel()
RT.round(2)
# array([[-1. , 0.03, 0.04, 0.01],
# [ 0.03, 0.99, 0.15, -0.44],
# [-0.03, 0.16, -0.99, 0.62],
# [ 0. , 0. , 0. , 1. ]])
</code></pre>
<h2>2. Generate the OG grid XYZ world coordinates (X=right, Y=height, Z=forward).</h2>
<pre><code>uv_dims = img.shape[:2] # h, w
grid_dims = (500, 500) # h, w
og_grid = np.concatenate(
np.meshgrid(
np.arange(- grid_dims[0] // 2, (grid_dims[0] + 1) // 2, 1),
0, # I want only the floor information, such that height = 0
np.arange(grid_dims[1]),
1
)
)
og_grid = np.moveaxis(og_grid, 0, 2)
edge_size = .1
og_grid_3dcoords = og_grid * edge_size
print(og_grid_3dcoords.shape)
# (500, 500, 4, 1)
</code></pre>
<h2>3. Project the OG grid XYZ camera coordinates with the extrinsic matrix.</h2>
<pre><code>og_grid_camcoords = (RT @ og_grid_3dcoords.reshape(-1, 4).T)
og_grid_camcoords = og_grid_camcoords.T.reshape(grid_dims + (4,))
og_grid_camcoords /= og_grid_camcoords[..., [2]]
og_grid_camcoords = og_grid_camcoords[..., :3]
# Print for debugging issues
for i in range(og_grid_camcoords.shape[-1]):
print(np.quantile(og_grid_camcoords[..., i].clip(-10, 10), np.linspace(0, 1, 11)).round(1))
# [-10. -1.3 -0.7 -0.4 -0.2 -0. 0.2 0.4 0.6 1.2 10. ]
# [-10. -0.2 -0.2 -0.2 -0.2 -0.2 -0.1 -0.1 -0.1 -0.1 10. ]
# [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
</code></pre>
<h2>4. Match the uv image coordinates for the OG grid coordinates.</h2>
<pre><code>og_grid_uvcoords = (mtx @ og_grid_camcoords.reshape(-1, 3).T)
og_grid_uvcoords = og_grid_uvcoords.T.reshape(grid_dims + (3,))
og_grid_uvcoords = og_grid_uvcoords.clip(0, max(uv_dims)).round().astype(int)
og_grid_uvcoords = og_grid_uvcoords[..., :2]
# Print for debugging issues
for i in range(og_grid_uvcoords.shape[-1]):
print(np.quantile(og_grid_uvcoords[..., i], np.linspace(0, 1, 11)).round(1))
# [ 0. 0. 665. 1134. 1553. 1966. 2374. 2777. 3232. 4000. 4000.]
# [ 0. 1134. 1161. 1171. 1181. 1191. 1201. 1212. 1225. 1262. 4000.]
</code></pre>
<p>Clip to uv values to the image boundaries.</p>
<pre><code>mask_clip_height = (og_grid_uvcoords[..., 1] >= uv_dims[0])
og_grid_uvcoords[mask_clip_height, 1] = uv_dims[0] - 1
mask_clip_width = (og_grid_uvcoords[..., 0] >= uv_dims[1])
og_grid_uvcoords[mask_clip_width, 0] = uv_dims[1] - 1
</code></pre>
<h2>5. Populate the OG image with the uv pixels.</h2>
<pre><code>og = np.zeros(grid_dims + (3,)).astype(int)
for i, (u, v) in enumerate(og_grid_uvcoords.reshape(-1, 2)):
og[i % grid_dims[1], i // grid_dims[1], :] = img[v, u]
plt.imshow(og)
</code></pre>
<p><a href="https://i.sstatic.net/QYUsX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QYUsX.jpg" alt="enter image description here" /></a></p>
<p>I was expecting a top-down view of the test image.</p>
|
<python><opencv><computer-vision><homography>
|
2022-12-29 01:53:11
| 1
| 1,064
|
Rafael Toledo
|
74,946,529
| 1,513,168
|
Convert function arguments to str
|
<p>I wrote a function and call it as below:</p>
<pre><code>from lib import base_frequency
base_frequency("AB610939-AB610950.gb", "genbank")
#This calls the function that uses a BioPython code.
</code></pre>
<p>How could I pass the function arguments as below?</p>
<pre><code>base_frequency(AB610939-AB610950.gb, genbank)
</code></pre>
<p>Note that quotes are missing. Should I do this? Is there a recommended nomenclature in Python when function argument is sting?</p>
<p>I thought this required me to convert filename and record format to a string inside the function. That is:</p>
<pre><code>AB610939-AB610950.gb to "AB610939-AB610950.gb"
genbank to "genbank"
</code></pre>
<p>I have tried <code>str(AB610939-AB610950.gb)</code> inside the function but it did not do the job.</p>
|
<python><biopython>
|
2022-12-29 01:44:06
| 2
| 810
|
Supertech
|
74,946,433
| 9,273,406
|
How to add custom poetry environment to Locust base Docker image?
|
<p>How do you run Locust (load testing tool) in a stable Docker container with extra poetry dependencies installed? From the docs it's known that <a href="https://docs.locust.io/en/stable/running-in-docker.html" rel="nofollow noreferrer">running Locust in Docker</a> is easily possible through their base image.</p>
<pre><code>docker run -p 8089:8089 -v $PWD:/mnt/locust locustio/locust -f /mnt/locust/locustfile.py
</code></pre>
<p>But if a load testing Python project requires extra libraries which are managed through poetry, the locust command must be run through <code>poetry run locust</code>. The locust docs only give the following example, but with <code>pip</code>:</p>
<pre><code>FROM locustio/locust
RUN pip3 install some-python-package
</code></pre>
<p>It get's more tricky if you want to bind mount a directory to the container, as Poetry environments are linked to the working directory they're created in.</p>
|
<python><docker><python-poetry><locust>
|
2022-12-29 01:24:01
| 1
| 4,370
|
azizbro
|
74,946,296
| 1,015,595
|
pyparsing infix notation with non-grouping parentheses
|
<p>I'm working on a billing application where users can enter arbitrary mathematical expressions. For example, a line item might be defined as <code>(a + b) * c</code>.</p>
<p>pyparsing handles the order of operations well enough in this case:</p>
<pre class="lang-py prettyprint-override"><code>import pyparsing as pp
operand = pp.Word(pp.alphanums)
boolean_operator = pp.one_of(['||', '&&'])
comparison_operator = pp.one_of(['<', '<=', '>', '>=', '==', '!='])
infix_pattern = pp.infix_notation(
operand,
[
('^', 2, pp.OpAssoc.LEFT),
('*', 2, pp.OpAssoc.LEFT),
('/', 2, pp.OpAssoc.LEFT),
('+', 2, pp.OpAssoc.LEFT),
('-', 2, pp.OpAssoc.LEFT),
(comparison_operator, 2, pp.OpAssoc.LEFT),
(boolean_operator, 2, pp.OpAssoc.LEFT),
]
)
print(infix_pattern.parse_string('(a + b) * c'))
# [[['a', '+', 'b'], '*', 'c']]
</code></pre>
<p>Users can also enter a handful of well-known functions, including the calling parentheses and arguments. For example, if a line item is defined as <code>a(b) == c(d)</code>, pyparsing gets confused:</p>
<pre><code>print(infix_pattern.parse_string('a(b) == c(d)'))
# ['a']
</code></pre>
<p>What I would like to see in this case is <code>['a(b)', '==', 'c(d)']</code>. What do I need to do differently in order to keep the first example working as-is and get the desired behavior from the second example?</p>
|
<python><pyparsing><lexical-analysis><infix-notation>
|
2022-12-29 00:49:14
| 1
| 16,285
|
Big McLargeHuge
|
74,946,209
| 3,795,219
|
Python map emptied after using tuple or iteration
|
<p>The following code does not print anything from the <code>for</code> loop; it only prints the result of the <code>tuple</code>. This is very unintuitive because I haven't explicitly modified the <code>cheapest</code> map, yet it behaves as if it is empty after calling <code>tuple</code>.</p>
<p>So, what's going on?</p>
<pre><code>store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
print(tuple(cheapest))
for v in cheapest:
print(v)
</code></pre>
<p>Expected output:</p>
<pre><code>(9.0, 11.0, 12.34, 2.01)
9.0
11.0
12.34
2.01
</code></pre>
<p>Actual output:</p>
<pre><code>(9.0, 11.0, 12.34, 2.01)
</code></pre>
<p>If I run the <code>for</code> loop first, then the behavior is reversed: the <code>for</code> loop prints the values from the <code>cheapest</code> map and the <code>print(tuple(cheapest))</code> result is empty as in:</p>
<p>Actual output (order reversed):</p>
<pre><code>9.0
11.0
12.34
2.01
()
</code></pre>
|
<python><tuples>
|
2022-12-29 00:31:56
| 1
| 8,645
|
Austin
|
74,946,020
| 14,403,635
|
Python Selenium: Inputting Email Address and Password and click Continue
|
<p>I am using the following code to input Email address and password, by clicking the continue element.</p>
<p>However I have no idea should I enter driver.switch_to_frame and then webdriver can enter the email and password and click?</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options=Options()
options.chrome_executable=path="/Users/janice/Desktop/Algo/Selenium/chromedriver.exe"
driver=webdriver.Chrome(options=options)
driver.get("https://system.arthuronline.co.uk/genieliew1/dashboards/index")
</code></pre>
|
<python><selenium><selenium-webdriver><css-selectors><webdriverwait>
|
2022-12-28 23:53:53
| 2
| 333
|
janicewww
|
74,945,937
| 13,142,245
|
How to compute percentile for an external element with given array?
|
<p>I'm looking for a percentile function that accepts an array and an element where it would return closest percentile of the element.</p>
<p>Some examples</p>
<pre><code>percentile([1,2,3,4,5], 2) => 40%
percentile([1,2,3,4,5], 2.5) => 40%
percentile([1,2,3,4,5], 6) => 100%
</code></pre>
<p>Does anything like this or similar exist within python or numpy?</p>
<p>Numpy does this <code>np.percentile(a=[1,2,3,4,5], q=3) => 1.12</code> which is not desired.</p>
|
<python><arrays><numpy><percentile>
|
2022-12-28 23:35:43
| 1
| 1,238
|
jbuddy_13
|
74,945,932
| 1,190,077
|
plotly: Is it possible to adjust the camera field of view?
|
<p>For a 3D rendering in <code>plotly</code>, the <a href="https://plotly.com/python/3d-camera-controls/" rel="nofollow noreferrer">documentation</a> explains the camera parameters <code>eye</code>, <code>center</code>, and <code>up</code>.</p>
<p>Some websites suggest that one can adjust the "zoom" by reducing the magnitude of <code>eye</code> --- but this just brings the viewpoint closer to the center.</p>
<p>I can't seem to find a way to adjust the field of view of the camera.</p>
<p>Essentially, is it possible to reproduce the <a href="https://en.wikipedia.org/wiki/Dolly_zoom" rel="nofollow noreferrer">dolly-zoom effect</a>?</p>
|
<python><3d><plotly>
|
2022-12-28 23:34:34
| 1
| 3,196
|
Hugues
|
74,945,819
| 18,758,062
|
Draw shapes on top of Networkx graph
|
<p>Given an existing <code>networkx</code> graph</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import numpy as np
np.random.seed(123)
graph = nx.erdos_renyi_graph(5, 0.3, seed=123, directed=True)
nx.draw_networkx(graph)
</code></pre>
<p><a href="https://i.sstatic.net/AAEql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AAEql.png" alt="enter image description here" /></a></p>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
G = nx.path_graph(4)
nx.spring_layout(G)
nx.draw_networkx(G)
</code></pre>
<p><a href="https://i.sstatic.net/d3QST.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d3QST.png" alt="enter image description here" /></a></p>
<p>how can you draw a red circle on top of (in the same position as) one of the nodes, like the node labeled <code>1</code>?</p>
|
<python><matplotlib><graph><networkx>
|
2022-12-28 23:12:22
| 1
| 1,623
|
gameveloster
|
74,945,783
| 169,947
|
Python testing - multiple `tox` environments with parallel runs and coverage reporting
|
<p>I'm having a hard time figuring out a configuration for Python unit testing that combines:</p>
<ol>
<li>Multiple environments (specifying e.g. versions of python or dependency packages) given as <code>tox</code> environments</li>
<li>Parallel running of the <code>tox</code> environments, for speed (some things being tested take a while to run)</li>
<li>Reporting of test coverage</li>
</ol>
<p>I've tried using a <code>tox.ini</code> file like the following (and then running it with <code>python -m tox --parallel=auto</code>), but I can't figure out how to do the <code>combine</code> step at the end, rather than in each environment:</p>
<pre><code>[tox]
envlist = py{38,39}-more_itertools{812,813,814}
[testenv]
deps = pytest
coverage
more_itertools812: more_itertools>=8.12,<8.13
more_itertools813: more_itertools>=8.13,<8.14
more_itertools814: more_itertools>=8.14,<8.15
commands = python -m coverage run --parallel -m pytest
python -m coverage combine
python -m coverage xml
python -m coverage html
</code></pre>
<p>If I don't use <code>--parallel</code> inside the <code>commands</code> section, all the environments try to write to the same <code>.coverage</code> file, and they hit various race conditions, causing death. (I opened <a href="https://github.com/nedbat/coveragepy/issues/1514" rel="nofollow noreferrer">a ticket for this</a>.)</p>
<p>I'm trying to avoid a solution that requires post-processing after running <code>tox</code>, if possible. Not sure if that's possible.</p>
<p>I need to support both <code>pytest</code> and <code>unittest</code> based test suites, FWIW. Not sure if that rules out using <a href="https://pytest-cov.readthedocs.io/" rel="nofollow noreferrer"><code>pytest-cov</code></a> or not, but I've also seen the above race condition happening if I use <code>commands = python -m pytest --cov=.</code> in my <code>tox.ini</code>.</p>
<p>Thanks.</p>
|
<python><unit-testing><parallel-processing><coverage.py>
|
2022-12-28 23:05:55
| 1
| 24,277
|
Ken Williams
|
74,945,737
| 14,141,126
|
Convert Varying Column Length to Rows in Pandas
|
<p>I'm trying to create a graph with Seaborn that shows all of the Windows events in the Domain Controller that have taken place in a given time range, which means you have, say, five events now, but when you run the program again in 10 minutes, you might get 25 events.</p>
<p>With that said, I've been able to parse these events (labeled Actions) from a mumbo-jumbo of other data in the log file and then create a DataFrame in Pandas. The script outputs the data as a dictionary. After creating the DataFrame, this is what the output looks like:</p>
<pre><code> logged-out kerberos-authentication-ticket-requested logged-in created-process exited-process
1 1 5 2 1 1
</code></pre>
<p><em>Note: The values you see above are the number of times the process took place within that time frame.</em></p>
<p>That would be good enough for me, but only if a table was all I needed. When I try to put this DataFrame into Seaborn, I get an error because I don't know what to name the <code>x</code> and <code>y</code> axes because, well, they are always changing. So, my solution was to use the df.melt() function in order to convert those columns into rows, and then label the only two columns needed <code>('Actions','Count')</code>. But that's where I fumbled multiple times. I can't figure out how to use the df.melt() functions correctly.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
#Ever-changing data
actions = {'logged-out': 2, 'kerberos-authentication-ticket-requested': 5, 'logged-in': 2,
'created-process': 1, 'exited-process': 1, 'logged-out': 1}
#Create DataFrame
data = actions
index = 1 * pd.RangeIndex(start=1, stop=2) #add automatic index
df = pd.DataFrame(data,index=index,columns=['Action','Count'])
print(df)
#Convert Columns to Rows and Add
df.melt(id_vars=["Action", "Count"],
var_name="Action",
value_name="Count")
#Create graph
sns.barplot(data=df,x='Action',y='Count',
palette=['#476a6f','#e63946'],
dodge=False,saturation=0.65)
plt.savefig('fig.png')
plt.show()
</code></pre>
<p>Any help is appreciated.</p>
|
<python><pandas><seaborn>
|
2022-12-28 22:57:42
| 1
| 959
|
Robin Sage
|
74,945,707
| 11,574,636
|
tkinter change color of bg and fg while programme is running
|
<p>I want to programme my own notepad. As a function, I want to be able to toggle between white mode and dark mode.</p>
<p>This is my code so far. I know that the problem is that the variables color_fg and color_bg are not transferred to the top, but I just can't find a solution for it.</p>
<pre><code>import tkinter as tk
from tkinter import filedialog
color_fg = "#dbdbdb"
color_bg = "#282b33"
root = tk.Tk()
root.title("Improved Notepad)")
root.geometry("800x600")
text_widget = tk.Text(root, bg=color_bg, fg=color_fg)
text_widget.pack(fill=tk.BOTH, expand=True)
menu_bar = tk.Menu(root, bg=color_bg, fg=color_fg)
dark_mode = tk.BooleanVar(value=False)
# File menu
file_menu = tk.Menu(menu_bar, tearoff=0, bg=color_bg, fg=color_fg)
def open_text():
filepath = filedialog.askopenfilename()
with open(filepath, 'r') as f:
text = f.read()
text_widget.delete(1.0, tk.END)
text_widget.insert(1.0, text)
def save_text():
filepath = filedialog.asksaveasfilename(defaultextension=".txt")
text = text_widget.get(1.0, tk.END)
with open(filepath, 'w') as f:
f.write(text)
file_menu.add_command(label="Open", command=open_text)
file_menu.add_command(label="Save", command=save_text)
file_menu.add_command(label="Exit", command=root.quit)
menu_bar.add_cascade(label="File", menu=file_menu)
# Settings menu
def toggle_dark_mode():
if dark_mode.get():
color_fg = "#dbdbdb"
color_bg = "#282b33"
dark_mode.set(False)
else:
color_fg = "#000000"
color_bg = "#FFFFFF"
dark_mode.set(True)
settings_menu = tk.Menu(menu_bar, tearoff=0, bg=color_bg, fg=color_fg)
dark_mode_toggle = tk.Checkbutton(settings_menu, text="Dark Mode", variable=dark_mode, onvalue=True, offvalue=False)
settings_menu.add_checkbutton(label="Dark Mode", command=toggle_dark_mode)
menu_bar.add_cascade(label="Settings", menu=settings_menu)
root.config(menu=menu_bar)
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2022-12-28 22:53:32
| 2
| 326
|
Fabian
|
74,945,681
| 1,354,930
|
Is there an easy way to create a CLI "shortcut" arg that implements other args using Python's `argparse`?
|
<p>Is there an <em>easy</em> way to make a CLI arg "shortcut" (for lack of a better term) using <code>argparse</code>? I also can't think of a better term to try and search for implementations...</p>
<p>Basically I'm trying to make something similar to rsync's <code>--archive</code> option:</p>
<p><a href="https://i.sstatic.net/k9gEo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k9gEo.png" alt="rsync's --archive option" /></a></p>
<h3>Example</h3>
<p>Let's assume I have a python3 program that uses argparse for CLI parsing:</p>
<pre class="lang-py prettyprint-override"><code>parser = argparse.ArgumentParser()
parser.add_argument("-x", action="store_true")
parser.add_argument("-y", action=argparse.BooleanOptionalAction)
parser.add_argument("--foobar")
args = parser.parse_args(sys.argv[1:])
</code></pre>
<p>I'd like to add a <code>--shortcut</code> arg that is equivalent to <code>-x -y --foobar BAZ</code>. These two would result in the same functionality:</p>
<pre class="lang-bash prettyprint-override"><code>python foo.py -x -y --foobar BAZ
python foo.py --shortcut
</code></pre>
<p>Right now what I'm doing is basically:</p>
<pre class="lang-py prettyprint-override"><code># ... all the parser.add_argument() calls ...
args = parse.parse_args(sys.argv[1:])
if args.shortcut:
args.x = True
args.y = True
args.foobar = "BAZ"
</code></pre>
<p>The above works decently well, but (a) it is hard to maintain because I have to update docstrings and this <code>if args.shortcut</code> separately and (b) the precedence logic gets very complicated when dealing with overrides.</p>
<p>The requirement is:</p>
<ul>
<li><code>--shortcut --foobar FOO</code> parses as <code>x=True, y=True, foobar=FOO</code></li>
<li><code>--foobar FOO --shortcut</code> parses as <code>x=True, y=True, foobar=BAR</code></li>
<li><code>--foobar FOO --shortcut --foobar FOO</code> parses as <code>x=True, y=True, foobar=FOO</code></li>
</ul>
<p><code>argparse</code> already handles order precedence for me, but not with the <code>--shortcut</code> arg.</p>
|
<python><arguments><command-line-arguments><argparse>
|
2022-12-28 22:50:18
| 1
| 1,917
|
dthor
|
74,945,584
| 2,659,499
|
Changing order of decimal numbers causing differnet results
|
<p>I'm learning Python. Maybe this is a silly mistake and not related to Python at all but appreciate some clarity.</p>
<p>I have the following code.</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal
expense = Decimal('10.00')
v = Decimal('93.93863013698630136986301369')
n = Decimal('46.90520547945205479452054796')
yearly_expense = Decimal('0')
vn = v + n
r1 = expense - yearly_expense + v + n
r2 = expense - yearly_expense + vn
print(r1)
print(r2)
</code></pre>
<p>When I execute I get</p>
<pre class="lang-bash prettyprint-override"><code>150.8438356164383561643835617
150.8438356164383561643835616
</code></pre>
<p>Note the last digits. Why the 2 numbers are different?</p>
|
<python>
|
2022-12-28 22:35:41
| 0
| 1,975
|
Vahid
|
74,945,549
| 1,627,106
|
Locust - AttributeError when accessing locust web interface
|
<p>I'm trying to run a very basic locust load testing which did work previously.</p>
<pre class="lang-py prettyprint-override"><code>from locust import HttpUser, between, task
class QuickstartUser(HttpUser):
wait_time = between(1, 5)
@task
def get_status(self):
self.client.get("/status/")
</code></pre>
<p>Running the following command: <code>locust -f <package-name>/tests/load_tests.py -r 20 -u 400 -H http://localhost:8000</code> yields the following error message when trying to access the web interface:</p>
<pre><code>[2022-12-28 23:23:30,962] MacBook-Pro.fritz.box/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2022-12-28 23:23:30,968] MacBook-Pro.fritz.box/INFO/locust.main: Starting Locust 2.14.0
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/baseserver.py", line 34, in _handle_and_close_when_done
return handle(*args_tuple)
File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/pywsgi.py", line 1577, in handle
handler.handle()
File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/pywsgi.py", line 464, in handle
result = self.handle_one_request()
File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/pywsgi.py", line 656, in handle_one_request
if self.rfile.CLOSED:
AttributeError: '_io.BufferedReader' object has no attribute 'CLOSED'
2022-12-28T22:23:35Z <Greenlet at 0x106fbc3a0: _handle_and_close_when_done(<bound method WSGIServer.handle of <WSGIServer at , <bound method StreamServer.do_close of <WSGIServer, (<gevent._socket3.socket [closed] at 0x106fcb460 o)> failed with AttributeError
</code></pre>
<p>The following versions are being used:</p>
<pre><code>$ poetry show locust --tree
locust 2.14.0 Developer friendly load testing framework
βββ configargparse >=1.0
βββ flask >=2.0.0
β βββ click >=8.0
β β βββ colorama *
β βββ itsdangerous >=2.0
β βββ jinja2 >=3.0
β β βββ markupsafe >=2.0
β βββ werkzeug >=2.2.2
β βββ markupsafe >=2.1.1 (circular dependency aborted here)
βββ flask-basicauth >=0.2.0
β βββ flask *
β βββ click >=8.0
β β βββ colorama *
β βββ itsdangerous >=2.0
β βββ jinja2 >=3.0
β β βββ markupsafe >=2.0
β βββ werkzeug >=2.2.2
β βββ markupsafe >=2.1.1 (circular dependency aborted here)
βββ flask-cors >=3.0.10
β βββ flask >=0.9
β β βββ click >=8.0
β β β βββ colorama *
β β βββ itsdangerous >=2.0
β β βββ jinja2 >=3.0
β β β βββ markupsafe >=2.0
β β βββ werkzeug >=2.2.2
β β βββ markupsafe >=2.1.1 (circular dependency aborted here)
β βββ six *
βββ gevent >=20.12.1
β βββ cffi >=1.12.2
β β βββ pycparser *
β βββ greenlet >=2.0.0
β βββ setuptools *
β βββ zope-event *
β β βββ setuptools * (circular dependency aborted here)
β βββ zope-interface *
β βββ setuptools * (circular dependency aborted here)
βββ geventhttpclient >=2.0.2
β βββ brotli *
β βββ certifi *
β βββ gevent >=0.13
β β βββ cffi >=1.12.2
β β β βββ pycparser *
β β βββ greenlet >=2.0.0
β β βββ setuptools *
β β βββ zope-event *
β β β βββ setuptools * (circular dependency aborted here)
β β βββ zope-interface *
β β βββ setuptools * (circular dependency aborted here)
β βββ six *
βββ msgpack >=0.6.2
βββ psutil >=5.6.7
βββ pywin32 *
βββ pyzmq >=22.2.1,<23.0.0 || >23.0.0
β βββ cffi *
β β βββ pycparser *
β βββ py *
βββ requests >=2.23.0
β βββ certifi >=2017.4.17
β βββ charset-normalizer >=2,<3
β βββ idna >=2.5,<4
β βββ urllib3 >=1.21.1,<1.27
βββ roundrobin >=0.0.2
βββ typing-extensions >=3.7.4.3
βββ werkzeug >=2.0.0
βββ markupsafe >=2.1.1
</code></pre>
|
<python><locust>
|
2022-12-28 22:28:50
| 1
| 1,712
|
Daniel
|
74,945,448
| 12,242,085
|
What is an interpretation of parts of statsmodels summary of Logistic Regression in Python?
|
<p>I have results of Logistic Regression from statsmodels in Python like below:</p>
<p><a href="https://i.sstatic.net/tpPuc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tpPuc.png" alt="enter image description here" /></a></p>
<p>My question is: What is interpretation of marked on yellow columns: <code>coef, std err, z, [0.025, 0.975]</code> ?</p>
|
<python><logistic-regression><statsmodels><coefficients>
|
2022-12-28 22:12:23
| 0
| 2,350
|
dingaro
|
74,945,378
| 9,820,561
|
PyLance raises errors due to a misunderstanding of simple logic
|
<p>I'm coding Python in VScode and started using PyLance to improve my code. However, It isn't pleasing when it does not understand simple logic like the following:</p>
<pre><code>if a == 1:
test = (1, 2, 4)
else:
test = [1, 2, 3]
if a == 1:
print(test)
else:
test.append(1)
</code></pre>
<p>In this example, I got an error that said:</p>
<pre><code>Cannot access member "append" for type "tuple[Literal[1], Literal[2], Literal[4]]"
Β Β Member "append" is unknown
</code></pre>
<p>Is there anything to do in such cases?</p>
|
<python><python-typing><pyright>
|
2022-12-28 22:00:39
| 0
| 362
|
Mr.O
|
74,945,313
| 7,077,532
|
Python: cannot import librosa library --> OSError: cannot load library 'libsndfile.so' :
|
<p>I am running my python code on the cloud using paperspace gradient. My code utilizes the librosa library. It was all working fine just a couple weeks ago meaning the entire codebase runs without error.</p>
<p>But today it seems like there's something wrong with librosa. When I try to do "import librosa" I get the following error:</p>
<p><strong>OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory</strong></p>
<p>I don't know what this means or how to rectify it. I have tried updating both my pip and my wheel. I have also tried pip installing soundfile as well. But the error remains.</p>
|
<python><installation><pip><librosa>
|
2022-12-28 21:49:56
| 1
| 5,244
|
PineNuts0
|
74,945,288
| 2,021,144
|
PySpark least or greatest with dynamic columns
|
<p>I need to give a dynamic parameter to greatest in pyspark but it seems it doesn't accept that,</p>
<pre><code>column_names=['c_'+str(x) for x in range(800)]
train_df=spark.read.parquet(train_path).select(column_names).withColumn('max_values',least(col_names))
</code></pre>
<p>So what is the way to do this without using RDDs and just dataframes</p>
|
<python><pyspark>
|
2022-12-28 21:46:49
| 1
| 827
|
Mahdi
|
74,945,282
| 3,788,557
|
make python interactive popup in a seperate window in vscode
|
<p>I run my python code in vscode so I can see output as I move along. However, my work monitors in the office aren't the largest and so I don't have a ton of real estate to operate on.</p>
<p>Is it possible so that the output from my .py file appears in a physically separate window? For example, my main vscode is on monitor #1 giving me plenty of real estate to type my code? And when I hit <code>SHIFT+ENTER </code> on something like <code>5+7</code> that I see this output in physically separate window on monitor B?</p>
|
<python><visual-studio-code>
|
2022-12-28 21:46:10
| 1
| 6,665
|
runningbirds
|
74,945,194
| 7,394,787
|
How to avoid variable after statement `for` in python?
|
<p>When I write programs with python, something offen happens like:</p>
<pre><code>for i in range(5):
print("before",i)
for j in range(5):
for i in range(5):
pass
print("after",i)
</code></pre>
<p>which output:</p>
<pre><code>before 0
after 4
before 1
after 4
before 2
after 4
before 3
after 4
before 4
after 4
</code></pre>
<p>I often use variables with the same name in the outer loop and multiple inner loops, which will cause bugs but are hard to find. What is the best practice to avoid this situation?</p>
<p>What I mean is that when I focus on writing business code, it is difficult for me to detect or often ignore the problem of duplicate variable names, which wastes a lot of debugging time. In fact, I have been writing code in a C-like style. I'm trying to find a way to stop getting this type of error</p>
|
<python>
|
2022-12-28 21:33:06
| 5
| 305
|
Z.Lun
|
74,945,130
| 20,176,161
|
Save each slice of a dataframe into a specific excel sheet
|
<p>I have a dataframe which looks like this:</p>
<pre><code>print(df)
city Year ville Asset_Type Titre psqm
0 Agadir 2010.0 Agadir Appart 3225 6276.923077
1 Agadir 2010.0 Agadir Maison_Dar 37 8571.428571
2 Agadir 2010.0 Agadir Villa 107 6279.469027
3 Agadir 2011.0 Agadir Appart 2931 6516.853933
4 Agadir 2011.0 Agadir Maison_Dar 33 9000.000000
... ... ... ... ... ... ...
669 Tanger 2020.0 Tanger Maison_Dar 134 13382.653061
670 Tanger 2021.0 BeniMakada Appart 67 5555.555556
671 Tanger 2021.0 BeniMakada Maison_Dar 4 14533.492823
672 Tanger 2021.0 Tanger Appart 160 6148.338940
673 Tanger 2021.0 Tanger Maison_Dar 12 13461.538462
</code></pre>
<p>Saving the dataframe into an excel sheet is straightforward</p>
<pre><code>df.to_excel(path_to_output+'df.xlsx')
</code></pre>
<p>I would like to save the output of each city (column <code>city</code>) in a different sheet in the same excel file. How can I do that please? I am not sure if I need to create the sheets manually in advance and then loop over each one or create them on the fly (via python)?</p>
<p>Thanks for your help</p>
|
<python><excel><dataframe><xlsxwriter>
|
2022-12-28 21:22:55
| 3
| 419
|
bravopapa
|
74,945,044
| 6,619,692
|
Python shebang with conda
|
<p>Following <a href="https://stackoverflow.com/a/19305076/6619692">best practice</a> I have started an executable Python script with a shebang <em>specific</em> to Python 3:</p>
<pre><code>#!/usr/bin/env python3
import spotipy
# ...some fancy code
</code></pre>
<p>One <code>chmod +x</code> later, I'm executing this script as <code>./quick_example.py</code> but alas the executable found (running locally on my Mac) is not the Python executable in my active Conda environment, as running <code>which python3</code> in my Bash shell throws up <code>/usr/bin/python3</code> instead of the desired <code>/Users/anilkeshwani/opt/miniconda3/envs/spot/bin/python</code> given by <code>which python</code>.</p>
<p>Apparently using <code>#!/usr/bin/env python</code> is bad practice in general, so my question is: can I keep the Python 3-specific shebang and somehow have Conda alias <code>python3</code> calls when an env is active?</p>
|
<python><conda><shebang>
|
2022-12-28 21:11:54
| 2
| 1,459
|
Anil
|
74,945,042
| 7,731,571
|
Static variables being shared between parent and child classes
|
<p>I encountered a bug in my code that turned me crazy for a bit. Essentially I have a parent class that instantiates a static variable, and a Child class that instantiates the same static variable, differently. This should be fine, because <code>Parent.variable</code> should be != to <code>Child.variable</code>. Well depends in what order you instantiate them.</p>
<pre><code>i = 0
class Parent():
_value = None
@classmethod
def get_value(cls):
if cls._value is None:
global i
i += 1
cls._value = i
return cls._value
class Child(Parent):
pass
</code></pre>
<p>Run:</p>
<pre><code>print("Child: ", Child.get_value())
print("Parent: ", Parent.get_value())
</code></pre>
<p>Output:</p>
<blockquote>
<p>Child: 1</p>
<p>Parent: 2</p>
</blockquote>
<p>Run:</p>
<pre><code>print("Parent: ", Parent.get_value())
print("Child: ", Child.get_value())
</code></pre>
<p>Output:</p>
<blockquote>
<p>Parent: 1</p>
<p>Child: 1</p>
</blockquote>
<p>See below:</p>
<p><a href="https://i.sstatic.net/W1FJR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W1FJR.png" alt="enter image description here" /></a></p>
<p>Is this wanted behavior?</p>
|
<python><python-3.x>
|
2022-12-28 21:11:30
| 1
| 531
|
Alexis
|
74,945,010
| 16,179,502
|
KeyError: WeakRef when using flask-sqlalchemy version 3
|
<p>I have seen similar answers to this but have not come across any concrete answer. Since upgrading my <code>Flask-SQLAlchemy</code> version from 2.5.1 to 3.0.2, I am encountering the following error anytime I try to query or do any database operations: <code>KeyError: <weakref at 0x10ed004a0; to 'Flask' at 0x10cb5d880></code></p>
<p>My application setup is nearly identical to the one outlined <a href="https://flask-sqlalchemy.palletsprojects.com/en/3.0.x/quickstart/#configure-the-extension" rel="nofollow noreferrer">here</a></p>
<pre><code>application = Flask(
__name__, instance_path=instance_path, instance_relative_config=True
)
if configurations.PRODUCTION:
application.config.from_pyfile("production.py")
else:
application.config.from_pyfile("local.py")
db = SQLAlchemy()
db.init_app(application)
@application.route("/injury")
def get_injury_by_id():
return db.session.query(Injury).filter_by(id=1).first()
if __name__ == "__main__":
application.run()
</code></pre>
<p>Anytime I try and access this <code>db</code> object to try and run queries on its session, the error stated previously arises. I have tried playing around with the ordering of my initialization, such as by just doing</p>
<pre><code>db = SQLAlchemy(app=application)
</code></pre>
<p>But everything seems to be throwing this error. I reverted my version back to 2.5.1 and can confirm this was working up until 3.0.2.</p>
<p>Any suggestions?</p>
<p><strong>Edit</strong> Full stack trace after doing a query like <code>db.session.query(Injury).filter_by(id=1).first()</code> shown in the API route above</p>
<pre><code>[2022-12-28 16:19:35,676...application...ERROR]: The following error occurred 'Traceback (most recent call last):
File "/Desktop/project/.venv/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/Desktop/project/.venv/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Desktop/project/backend/application.py", line 47, in get_injury_by_name
return session.query(Injury).filter_by(id=1).first()
File "/Desktop/project/.venv/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2819, in first
return self.limit(1)._iter().first()
File "/Desktop/project/.venv/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2903, in _iter
result = self.session.execute(
File "/Desktop/project/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1700, in execute
bind = self.get_bind(**bind_arguments)
File "/Desktop/project/.venv/lib/python3.8/site-packages/flask_sqlalchemy/session.py", line 61, in get_bind
engines = self._db.engines
File "/Desktop/project/.venv/lib/python3.8/site-packages/flask_sqlalchemy/extension.py", line 629, in engines
return self._app_engines[app]
File "/usr/local/Cellar/python@3.8/3.8.11/Frameworks/Python.framework/Versions/3.8/lib/python3.8/weakref.py", line 383, in __getitem__
return self.data[ref(key)]
KeyError: <weakref at 0x10f400540; to 'Flask' at 0x10d31c880>
'
</code></pre>
<p>Another thing to note is if I make a dummy file for testing purposes that looks like such</p>
<pre><code>with application.app_context():
db.session.query(Injury).filter_by(**kwargs).first()
</code></pre>
<p>The query executes properly.</p>
|
<python><flask><sqlalchemy>
|
2022-12-28 21:07:41
| 0
| 349
|
Malice
|
74,944,999
| 6,361,813
|
Cartopy: set extent for perfectly square map
|
<p>Assume I need to plot a map with Cartopy and the resulting figure should have square dimensions. In my (simplistic) understanding, this should be easy using the <a href="https://en.wikipedia.org/wiki/Equirectangular_projection" rel="nofollow noreferrer">plate carrΓ©e projection</a>, since longitude and latitude are directly used as planar coordinates. This means I expect a square map if the extent covered by latitude and longitude are the same.</p>
<p>However, while the following code works if the extent starts at the equator (e.g. <code>[-120, -60, 0, 60]</code>), it fails for an extent apart from the equator. How does the extent have to be computed that the resulting map is a square? (I found <a href="https://stackoverflow.com/questions/51340015/plot-square-cartopy-map">this post</a>, but it did not really help since the coordinate transformation only returned the given coordinates).</p>
<pre><code>import cartopy.crs as ccrs
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=[5, 5], subplot_kw={'projection': ccrs.PlateCarree()})
fig.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.0)
ax.set_extent([-120, -60, 15, 75])
ax.stock_img()
fig.savefig('Test.pdf')
</code></pre>
<p>The following image shows the output, which unfortunately contains white borders at the vertical edges:<br />
<img src="https://i.sstatic.net/s1ojN.jpg" width="250" height="250"></p>
|
<python><matplotlib><geospatial><cartopy>
|
2022-12-28 21:06:18
| 1
| 407
|
Pontis
|
74,944,887
| 7,999,308
|
How can the default list of email addresses be rendered from a variable in Airflow?
|
<p>I have a lot of Airflow DAGs and I'd like to automatically send email notifications to the same list of recipients (stored in an Airflow variable) on any task failure, so I'm using the following default operator configuration defined at the DAG level:</p>
<pre><code>dag = DAG(
...
default_args = {
...
"email": "{{ ','.join( var.json.get("my_email_list", []) ) }}",
"email_on_failure": True,
...
},
...
)
</code></pre>
<p>Unfortunately, looks like the <code>email</code> argument doesn't support templating and it simply gets passed to the email back-end as-is without rendering, so my approach isn't working.</p>
<p>Could anybody suggest a decent workaround for my particular case, please? I don't really want to hard-code the list of email addresses in the source code, because storing them in an Airflow variable gives much more flexibility.</p>
|
<python><airflow><jinja2>
|
2022-12-28 20:51:49
| 1
| 524
|
Gevorg Davoian
|
74,944,834
| 2,088,886
|
wfastcgi 500 error in flask app when trying to plot
|
<p>Edit: Originally thought this was IIS issue, but looks like it's FastCgiModule causing the issue.</p>
<p>To be clear, the WebApp itself works, it only fails when I choose "Image Test" from the dropdown (see code below) menu on the site. Specifically, it fails on the <code>df.plot</code> line, and I can't figure out why.</p>
<pre><code>from flask import Flask, request
import pandas as pd
app = Flask(__name__)
html = '''
<h1>Discovery Capital</h1>
<h2>Report Generator</h2>
<p>
<form action="/submitted" method="post">
<label for="reports">Choose a Report:</label>
<select id="reports" name="reports">
<option value="img_test">Image Test</option>
</select>
<input type="submit">
</form>
'''
@app.route("/")
def index():
return html
@app.route("/submitted", methods=['POST'])
def show():
select = request.form.get("reports")
if select == 'img_test':
df = pd.DataFrame([1,2,3,4,5])
df.plot()
plt.close()
result = "Success"
else:
result = "Not Available"
return result
if __name__ == "__main__":
app.run()
</code></pre>
<p>Appreciate the help!</p>
<p>Edit: Here's the log from failed request tracing:
<a href="https://i.sstatic.net/tVLj6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tVLj6.png" alt="enter image description here" /></a></p>
|
<python><flask><fastcgi><wfastcgi>
|
2022-12-28 20:45:13
| 0
| 2,161
|
David Yang
|
74,944,818
| 512,480
|
Python debugging, language server problem
|
<p>I'm having very repeatable problems running the Python debugger under VS code 1.74.2 on Mac OS 11.6. I decided to follow some advice in the repository's "reporting issues" article and turn off all extensions. Make that, all extensions except the ones obviously needed for the job. I turned off all extensions except the core Python extension, turning off PyLance and everything to do with Jupyter. Doesn't exactly help, my problem arises in a somewhat different form.</p>
<ol>
<li><p>Properly, what is the minimal set of extensions needed to do Python debugging? Is it more than the core Python extension?</p>
</li>
<li><p>What exactly is a language server in this context? Could someone point me to documentation that will "take it from the top" and explain what this concept is all about?</p>
</li>
<li><p>I would appreciate some help interpreting what I'm seeing in log files and how to work with it. Looks like I'm seeing an actual bug in Python extension code, but I'd like another pair of eyes.</p>
</li>
</ol>
<p><strong>A representative sample from 1-Python.log:</strong></p>
<pre class="lang-none prettyprint-override"><code>[ERROR 2022-11-28 15:16:37.609]: [
'Failed to start language server, Class name = h, completed in 1956ms, has a falsy return value, Arg 1: <Uri:/Users/ken/Shoshin/UI>, Arg 2: {"id":"/usr/local/bin/python3","sysPrefix":"/Library/Frameworks/Python.framework/Versions/3.9","envType":"Global","envName":"","envPath":"","path":"/usr/local/bin/python3","architecture":3,"sysVersion":"3.9.7 (v3.9.7:1016ef3790, Aug 30 2021, 16:39:15) \\n[Clang 6.0 (clang-600.0.57)]","version":{"raw":"3.9.7","major":3,"minor":9,"patch":7,"build":[],"prerelease":["final","0"]},"displayName":"Python 3.9.7 64-bit","detailedDisplayName":"Python 3.9.7 64-bit"}, Return Value: undefined',
[Error: Launching Jedi language server using python failed, see output.
at l.start (/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/out/client/extension.js:2:19756)]
]
[ERROR 2022-11-28 15:16:37.609]: [
'Failed to activate a workspace, Class name = f, completed in 2036ms, has a falsy return value, Arg 1: <Uri:/Users/ken/Shoshin/UI/try/HelloWorld.py>, Return Value: undefined',
[Error: Launching Jedi language server using python failed, see output.
at l.start (/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/out/client/extension.js:2:19756)]
]
[ERROR 2022-11-28 15:16:37.609]: Failure during activation. [Error: Launching Jedi language server using python failed, see output.
at l.start (/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/out/client/extension.js:2:19756)]
[ERROR 2022-11-28 15:16:37.609]: sendStartupTelemetry() failed. [Error: Launching Jedi language server using python failed, see output.
at l.start (/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/out/client/extension.js:2:19756)]
DAP Server launched with command: /usr/local/bin/python3 /Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/pythonFiles/lib/python/debugpy/adapter
DAP Server launched with command: /usr/local/bin/python3 /Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/pythonFiles/lib/python/debugpy/adapter
</code></pre>
<p><strong>Corresponding material from 3-Python language server.log:</strong></p>
<pre class="lang-none prettyprint-override"><code>
Traceback (most recent call last):
File "/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/pythonFiles/run-jedi-language-server.py", line 11, in <module>
sys.exit(cli())
File "/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/pythonFiles/lib/jedilsp/jedi_language_server/cli.py", line 125, in cli
SERVER.start_io()
File "/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/pythonFiles/lib/jedilsp/pygls/server.py", line 225, in start_io
self.loop.run_until_complete(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/ken/.vscode/extensions/ms-python.python-2022.21.13491005/pythonFiles/lib/jedilsp/pygls/server.py", line 56, in aio_readline
header = await loop.run_in_executor(executor, rfile.readline)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 814, in run_in_executor
executor.submit(func, *args), loop=self)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 170, in submit
self._adjust_thread_count()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 193, in _adjust_thread_count
t.start()
TypeError: start() missing 1 required positional argument: 'sessionID'
[Error - 3:16:39 PM] The Python Jedi server crashed 5 times in the last 3 minutes. The server will not be restarted. See the output for more information.
[Error - 3:16:39 PM] Server initialization failed.
Message: Pending response rejected since connection got disposed
Code: -32097
[Error - 3:16:39 PM] Python Jedi client: couldn't create connection to server.
Message: Pending response rejected since connection got disposed
Code: -32097
[Error - 3:16:39 PM] Restarting server failed
Message: Pending response rejected since connection got disposed
Code: -32097
</code></pre>
<p><strong>After switching to PyLance language server:</strong></p>
<pre><code>[Info - 7:36:39 AM] (39856) Pylance language server 2022.12.20 (pyright 621d886b) starting
[Info - 7:36:39 AM] (39856) Server root directory: /Users/ken/.vscode/extensions/ms-python.vscode-pylance-2022.12.20/dist
[Info - 7:36:39 AM] (39856) Starting service instance "UI"
[Info - 7:36:39 AM] (39856) Notebook support: Legacy
[Info - 7:36:39 AM] (39856) Interactive window support: Legacy
[Info - 7:36:39 AM] (39856) Auto-indent enabled
[Info - 7:36:39 AM] (39856) No configuration file found.
[Info - 7:36:39 AM] (39856) No pyproject.toml file found.
[Info - 7:36:39 AM] (39856) Setting pythonPath for service "UI": "/usr/local/bin/python3"
[Warn - 7:36:39 AM] (39856) stubPath /Users/ken/Shoshin/UI/typings is not a valid directory.
[Info - 7:36:39 AM] (39856) Assuming Python version 3.9
[Info - 7:36:39 AM] (39856) Assuming Python platform Darwin
[Info - 7:36:39 AM] (39856) Searching for source files
[Info - 7:36:39 AM] (39856) Found 61 source files
[Info - 7:36:39 AM] (39856) Background analysis(1) root directory: /Users/ken/.vscode/extensions/ms-python.vscode-pylance-2022.12.20/dist
[Info - 7:36:39 AM] (39856) Background analysis(1) started
[Info - 7:44:45 AM] (39856) Indexer background runner(2) root directory: /Users/ken/.vscode/extensions/ms-python.vscode-pylance-2022.12.20/dist (index)
[Info - 7:44:45 AM] (39856) Indexing(2) started
[Info - 7:44:45 AM] (39856) scanned(2) 238 files over 1 exec env
[Info - 7:44:48 AM] (39856) [IDX(2)] Long operation: index execution environment /Users/ken/Shoshin/UI (2240ms)
[Info - 7:44:48 AM] (39856) [IDX(2)] Long operation: index packages /Users/ken/Shoshin/UI (2243ms)
[Info - 7:44:48 AM] (39856) indexed(2) 154 files over 1 exec env
[Info - 7:44:48 AM] (39856) Indexing finished(2).
</code></pre>
|
<python><visual-studio-code><debugging><python-language-server>
|
2022-12-28 20:42:20
| 1
| 1,624
|
Joymaker
|
74,944,700
| 9,182,743
|
Pandas: convert UTC to Local time using timezone, then drop timezone
|
<p>I have a dataframe with columns:</p>
<ul>
<li><strong>time</strong>: time in UTC format</li>
<li><strong>timezone</strong>: the corresponding timezone.</li>
</ul>
<pre><code>
time timezone
0 2022-12-28T20:16:31.373Z Europe/Athens
1 2022-07-28T20:16:31.373Z Europe/Athens
2 2022-11-01T21:35:35.865Z Europe/Dublin
3 2022-08-03T19:44:07.611Z America/Los_Angeles
4 2022-08-02T12:44:44.360Z Europe/Minsk
</code></pre>
<p>I want to:</p>
<ol>
<li>Convert UTC time to Local time (using timezone)</li>
<li>Remove the Timezone and just keep the datetime</li>
</ol>
<p>It seems to me that this solution works, but want to make sure that I am not missing something (eg. doesn't deal with dailight saving or something)</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# example dataframe
df = pd.DataFrame({
'time' : ['2022-12-28T20:16:31.373Z', '2022-07-28T20:16:31.373Z', '2022-11-01T21:35:35.865Z', '2022-08-03T19:44:07.611Z', '2022-08-02T12:44:44.360Z'],
'timezone': ['Europe/Athens', 'Europe/Athens', 'Europe/Dublin', 'America/Los_Angeles', 'Europe/Minsk']
})
# function
def get_local_time (timestamp: pd.Timestamp, timezone: str) -> pd.Timestamp:
timestamp = pd.to_datetime(timestamp).tz_convert(timezone).replace(tzinfo=None)
return timestamp
df['local_time'] = df.apply(lambda row: get_local_time(row['time'], row['timezone']), axis = 1).dt.round(freq='S')
print (df)
---
OUT:
time timezone local_time
0 2022-12-28T20:16:31.373Z Europe/Athens 2022-12-28 22:16:31
1 2022-07-28T20:16:31.373Z Europe/Athens 2022-07-28 23:16:31
2 2022-11-01T21:35:35.865Z Europe/Dublin 2022-11-01 21:35:36
3 2022-08-03T19:44:07.611Z America/Los_Angeles 2022-08-03 12:44:08
4 2022-08-02T12:44:44.360Z Europe/Minsk 2022-08-02 15:44:44
</code></pre>
|
<python><pandas><datetime><timezone>
|
2022-12-28 20:24:49
| 0
| 1,168
|
Leo
|
74,944,673
| 3,215,940
|
Saving deque to file minding performance and portability
|
<p>I have a while loop that collects data from a microphone (replaced here for <code>np.random.random()</code> to make it more reproducible). I do some operations, let's say I take the <code>abs().mean()</code> here because my output will be a one dimensional array.</p>
<p>This loop is going to run for a LONG time (e.g., once a second for a week) and I am wondering my options to save this. My main concerns are saving the data with acceptable performance and having the result being portable (e.g, .csv beats .npy).</p>
<ol>
<li>The simple way: just append things into a <code>.txt</code> file. Could be replaced by <code>csv.gz</code> maybe? Maybe using <code>np.savetxt()</code>? Would it be worth it?</li>
<li>The hdf5 way: this should be a nicer way, but reading the whole dataset to append to it doesn't seem like good practice or better performing than dumping into a text file. Is there another way to append to hdf5 files?</li>
<li>The npy way (code not shown): I could save this into a <code>.npy</code> file but I would rather make it portable using a format that could be read from any program.</li>
</ol>
<pre><code>from collections import deque
import numpy as np
import h5py
amplitudes = deque(maxlen=save_interval_sec)
# Read from the microphone in a continuous stream
while True:
data = np.random.random(100)
amplitude = np.abs(data).mean()
print(amplitude, end="\r")
amplitudes.append(amplitude)
# Save the amplitudes to a file every n iterations
if len(amplitudes) == save_interval:
with open("amplitudes.txt", "a") as f:
for amp in amplitudes:
f.write(str(amp) + "\n")
amplitudes.clear()
# Save the amplitudes to an HDF5 file every n iterations
if len(amplitudes) == save_interval:
# Convert the deque to a Numpy array
amplitudes_array = np.array(amplitudes)
# Open an HDF5 file
with h5py.File("amplitudes.h5", "a") as f:
# Get the existing dataset or create a new one if it doesn't exist
dset = f.get("amplitudes")
if dset is None:
dset = f.create_dataset("amplitudes", data=amplitudes_array, dtype=np.float32,
maxshape=(None,), chunks=True, compression="gzip")
else:
# Get the current size of the dataset
current_size = dset.shape[0]
# Resize the dataset to make room for the new data
dset.resize((current_size + save_interval,))
# Write the new data to the dataset
dset[current_size:] = amplitudes_array
# Clear the deque
amplitudes.clear()
# For debug only
if len(amplitudes)>3:
break
</code></pre>
<h2>Update</h2>
<p>I get that the answer might depend a bit on the sampling frequency (once a second might be too slow) and the data dimensions (single column might be too little). I guess I asked because <em>anything</em> can work, but I always just dump to text. I am not sure where the breaking points are that tip the decision into one or the other method.</p>
|
<python><performance><hdf5>
|
2022-12-28 20:21:45
| 0
| 4,270
|
Matias Andina
|
74,944,539
| 12,299,993
|
Pyspark type mismatch in nested JSON structure stops schema validation
|
<p>I'm reading a json file with PySpark and validating the schema against a predefined data schema.</p>
<p>the data schema:</p>
<pre><code>testschema = StructType([
StructField("_corrupt_record", StringType(), True),
StructField("a", StringType(), True),
StructField("b", StringType(), True),
StructField("c", StringType(), True),
StructField("d", StringType(), True),
StructField("e", StructType([
StructField("e_1", IntegerType(), True)
]), True)
])
</code></pre>
<p>The JSON file:</p>
<pre><code>{
"e": {
"e_1": "2"
},
"a": "a",
"b": "b",
"c": "c",
"d": "d"
}
</code></pre>
<p>after reading the JSON file and validating the structure against the <code>testschema</code> definition using the following code:</p>
<pre><code>peopleDF = spark.read.json(json_input, schema=testschema, multiLine=True)
peopleDF.createOrReplaceTempView("people")
teenagerNamesDF = spark.sql("SELECT * FROM people")
display(teenagerNamesDF)
</code></pre>
<p>Given the JSON file, i expect a null value for e_1 due to the type mismatch (e.g. checking for integer but given string). However, this also causes all other values to be null also.</p>
<p>result:
[![enter image description here][1]][1]</p>
<p>However, when I move this nested 'e' block down in the structure, it appears that is able to populate the fields before the block where the type mismatch occurs. E.g.</p>
<pre><code>{
"a": "a",
"b": "b",
"c": "c",
"d": "d",
"e": {
"e_1": "2"
}
}
</code></pre>
<p>result:
[![enter image description here][2]][2]</p>
<p>It seems that type mismatching in nested JSON structures stop/halts schema validation. How can this be solved?</p>
|
<python><pyspark>
|
2022-12-28 20:03:20
| 0
| 304
|
toaster_fan
|
74,944,386
| 8,461,786
|
Can I make classmethod work with __getitem__?
|
<p>In the code below, can I somehow keep <code>__getitem__</code> method (which cannot be a classmethod) on <code>Permissions</code> class and at the same time turn <code>as_list</code> into a class method?</p>
<p>Technically <code>as_list</code> IS a class method now, but I still need to do <code>cls()[user_type]</code> in it instead of just <code>cls[user_type]</code>, because it makes use of <code>__getitem__</code> method. I would also prefer an API like <code>Permissions.as_list()</code> over <code>Permissions().as_list()</code>.</p>
<pre><code>from enum import Enum
class Permission(Enum):
CAN_DELETE = 'can_delete'
CAN_EDIT = 'can_edit'
class UserType(Enum):
ADMIN = 'admin'
GUEST = 'guest'
PERMISSIONS = {
UserType.ADMIN: [Permission.CAN_DELETE, Permission.CAN_EDIT],
}
class UnknownUserTypeError(Exception):
pass
class Permissions:
permissions = PERMISSIONS
def __getitem__(self, user_type: UserType):
try:
return self.permissions[user_type]
except KeyError:
raise UnknownUserTypeError
@classmethod
def as_list(cls, user_type: UserType):
return [p.value for p in cls()[user_type]]
perms = Permissions.as_list(UserType.GUEST) # should raise UnknownUserTypeError
print(perms)
</code></pre>
|
<python>
|
2022-12-28 19:43:45
| 1
| 3,843
|
barciewicz
|
74,944,286
| 4,418,481
|
Adding file name column to Dask DataFrame
|
<p>I have a data set of around 400 CSV files containing a time series of multiple variables (my CSV has a time column and then multiple columns of other variables).</p>
<p>My final goal is the choose some variables and plot those 400 time series in a graph.</p>
<p>In order to do so, I tried to use Dask to read the 400 files and then plot them.</p>
<p>However, from my understanding, In order to actually draw 400 time series and not a single appended data frame, I should groupby the data by the file name it came from.</p>
<p>Is there any Dask efficient way to add a column to each CSV so I could later groupby my results?</p>
<p>A parquet files is also an option.</p>
<p>For example, I tried to do something like this:</p>
<pre><code>import dask.dataframe as dd
import os
filenames = ['part0.parquet', 'part1.parquet', 'part2.parquet']
df = dd.read_parquet(filenames, engine='pyarrow')
df = df.assign(file=lambda x: filenames[x.index])
df_grouped = df.groupby('file')
</code></pre>
<p>I understand that I can use from_delayed() but then I lose al the parallel computation.</p>
<p>Thank you</p>
|
<python><csv><dask><parquet><dask-dataframe>
|
2022-12-28 19:32:10
| 1
| 1,859
|
Ben
|
74,944,041
| 10,732,434
|
How to ignore TqdmExperimentalWarning?
|
<p>Everytime a <code>tqdm_gui</code> object is created, a warning is printed. I tried to suppress it with <code>contextlib</code> but to no avail.</p>
<pre class="lang-py prettyprint-override"><code>import contextlib as ctx
from tqdm import TqdmExperimentalWarning
from tqdm.gui import tqdm_gui
total = 0
with ctx.suppress(TqdmExperimentalWarning):
for i in tqdm_gui(range(100_000_000)):
total += i
print(total)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>main.py:8: TqdmExperimentalWarning: GUI is experimental/alpha
for i in tqdm_gui(range(100_000_000))
</code></pre>
<p>How do I get rid of the warning?</p>
|
<python><python-3.x><suppress-warnings><tqdm>
|
2022-12-28 19:01:05
| 1
| 2,197
|
sanitizedUser
|
74,943,863
| 7,800,760
|
Python redis library not working unlike command line
|
<p>I have a remote Linux Redis 7.0 server which I reach through a spiped tunnel.</p>
<p>Launch spiped locally:</p>
<pre><code>spiped -F -e -s 127.0.0.1:6379 -t REMOTESERVER:6379 -k /etc/spiped/redis.key
</code></pre>
<p>and as expected I can connect to the remote Redis server via the command line utility:</p>
<pre><code>(redis_test) bob@Roberts-Mac-mini redis_test % redis-cli
127.0.0.1:6379> AUTH myverysecretpassphrase
OK
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> keys *
1) "test"
127.0.0.1:6379> exit
</code></pre>
<p>So all as expected. So tried the very simple python program:</p>
<pre><code>import redis
r = redis.Redis(host='MYSERVERIP', port=6379, password='myverysecretpassphrase')
r.set('foo', 'bar')
value = r.get('foo')
print(value)
</code></pre>
<p>but get a long error trace:</p>
<pre><code>(redis_test) bob@Roberts-Mac-mini redis_test % python test1.py
Traceback (most recent call last):
File "/Users/bob/Documents/work/redis_test/test1.py", line 8, in <module>
r.set('foo', 'bar')
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/commands/core.py", line 2238, in set
return self.execute_command("SET", *pieces, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/client.py", line 1255, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 1389, in get_connection
connection.connect()
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 610, in connect
self.on_connect()
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 701, in on_connect
auth_response = self.read_response()
^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 812, in read_response
response = self._parser.read_response(disable_decoding=disable_decoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 318, in read_response
raw = self._buffer.readline()
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 249, in readline
self._read_from_socket()
File "/Users/bob/opt/miniconda3/envs/redis_test/lib/python3.11/site-packages/redis/connection.py", line 195, in _read_from_socket
raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
redis.exceptions.ConnectionError: Connection closed by server.
</code></pre>
<p>Here is my environment:</p>
<pre><code># packages in environment at /Users/bob/opt/miniconda3/envs/redis_test:
redis-py 4.4.0 pyhd8ed1ab_0 conda-forge
(redis_test) bob@Roberts-Mac-mini redis_test % python --version
Python 3.11.0
</code></pre>
|
<python><redis>
|
2022-12-28 18:39:44
| 1
| 1,231
|
Robert Alexander
|
74,943,798
| 12,242,085
|
Equivalent for predict_proba in statsmodels Logistic Regression in Python?
|
<p>I created Logisitc Regression Model in statsmodels in Python like below:</p>
<pre><code>model = sm.Logit(y_train, X_train[selected_features]).fit(disp = 0)
</code></pre>
<p>and then I need to calculate probabilities, (NOT prediction model.predict) but probabilities, same like <code>predict_proba</code> in sklearn. But whne I try to use <code>predict_proba</code> in statsmodels like below:</p>
<pre><code>y_proba_1 = model.predict_proba(X_test)[:,1]
</code></pre>
<p>I have error: <code>AttributeError: 'LogitResults' object has no attribute 'predict_proba'</code></p>
<p>Is there some equivalent for <code>predict_proba</code> from sklearn in statsmodels in Python ?</p>
<p>I need to hae equivalent for below lines which works in sklearn but does not wokrs in statsmodels:</p>
<pre><code>y_proba_0 = model.predict_proba(X_test)[:,0]
y_proba_1 = model.predict_proba(X_test)[:,1]
</code></pre>
|
<python><scikit-learn><statsmodels>
|
2022-12-28 18:30:17
| 0
| 2,350
|
dingaro
|
74,943,730
| 12,960,701
|
How do I get the float value of a 1x1 EagerTensor in Tensorflow?
|
<p>I have a 1x1 EagerTensor object that I'm trying to convert to a single float value. i.e.</p>
<p>tf.Tensor([[-0.04473801]], shape=(1, 1), dtype=float32) -> -0.04473801</p>
<p>There seemed to be a simple answer that I've used on other tensors in the past - just use the item() method <a href="https://stackoverflow.com/questions/57727372/how-do-i-get-the-value-of-a-tensor-in-pytorch">a la this answer.</a> However, when I try to use the .item() method on an EagerTensor object, I get the following error:</p>
<pre><code>AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'item'
</code></pre>
<p>Why is this happening? Do EagerTensors not have the item() method? One workaround I've found is using <code>float(tensor_variable.numpy()[0])</code>, but it seems like there should be a better way.</p>
|
<python><tensorflow>
|
2022-12-28 18:23:04
| 1
| 382
|
jgholder
|
74,943,713
| 3,472,722
|
Telegram Bot InlineKeyboardButton, How to know which dynamically button is pressed?
|
<p>I'm using <a href="https://github.com/python-telegram-bot/python-telegram-bot" rel="nofollow noreferrer">Python Telegram Bot</a> and I'm struggling with handling callback query handlers.</p>
<h1>The problem</h1>
<p>I've created a list of inlinekeyboard buttons (see image below), and they all trigger the same callback query. However, I'm unable to extract which inline keyboard button has been pressed and triggered the callback query.</p>
<h2>How it looks in Telegram</h2>
<p><a href="https://i.sstatic.net/cj5jg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cj5jg.png" alt="inlinekeyboard_in_telegram" /></a></p>
<h1>Background info</h1>
<p>In my code I've created this inline keyboard which grows depending on the number of items in the List 'drawdates' (see the def below).</p>
<pre><code>def drawdate_keyboard(drawdates):
keyboard = []
for idx, drawdate in enumerate(drawdates):
keyboard.append([InlineKeyboardButton(drawdate, callback_data=str(SET_DRAWDATE))])
keyboard.append([InlineKeyboardButton('Cancel', callback_data=str(CANCEL))])
return InlineKeyboardMarkup(keyboard)
</code></pre>
<p>I'm using a conversation handlers which looks like this:</p>
<pre><code>conv_handler = ConversationHandler(
entry_points=[CommandHandler("start", start)],
states={
CALLBACKS_ROUTES: [
CallbackQueryHandler(add_ticket, pattern=str(ADD_TICKET)),
CallbackQueryHandler(get_drawdates, pattern=str(GET_DRAWDATES)),
CallbackQueryHandler(check_tickets, pattern="^" + str(CHECK_TICKETS) + "$"),
CallbackQueryHandler(clear_tickets, pattern="^" + str(CLEAR_TICKETS) + "$"),
],
TICKET: [
MessageHandler(Filters.text & ~Filters.command, ticket),
CallbackQueryHandler(start_over, pattern="^" + str(CANCEL) + "$"),
],
DRAWDATE: [
CallbackQueryHandler(set_drawdate, pattern=str(SET_DRAWDATE)),
CallbackQueryHandler(start_over, pattern="^" + str(CANCEL) + "$"),
],
},
fallbacks=[CommandHandler("start", start)],
#per_message=True,
)
updater.dispatcher.add_handler(conv_handler)
</code></pre>
<p>and the callback is catched by this:</p>
<pre><code>def set_drawdate(update: Update, context) -> int:
query = update.callback_query
query.answer()
</code></pre>
|
<python><python-telegram-bot>
|
2022-12-28 18:21:02
| 1
| 411
|
Michael ten Den
|
74,943,530
| 6,372,859
|
Mix histogram and line plots in plotly together
|
<p>Hi all and Happy Holidays!</p>
<p>I am trying to do a combined plot, using plotly, of an histogram and a line plot. I have a large pandas dataframe (shown only the first ten lines):</p>
<pre><code> Year Selling_Price
0 8.0 4950.0
1 8.0 4070.0
2 16.0 1738.0
3 12.0 2475.0
4 15.0 1430.0
5 5.0 4840.0
6 15.0 1056.0
7 21.0 495.0
8 11.0 3850.0
9 9.0 2200.0
10 8.0 5500.0
11 17.0 1012.0
12 13.0 3080.0
14 13.0 1980.0
15 6.0 4400.0
16 6.0 8558.0
17 10.0 5500.0
18 20.0 1650.0
19 6.0 7480.0
20 11.0 1914.0
</code></pre>
<p>First I have plotted the distribution of the variable Year as follows:</p>
<pre><code>import plotly.express as px
fig_1 = px.histogram(data, x="Year", marginal="box")
fig_1.show()
</code></pre>
<p><a href="https://i.sstatic.net/1K5j0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1K5j0.png" alt="enter image description here" /></a></p>
<p>Then, I have plotted the Average Selling Price by year, using the following lines of code:</p>
<pre><code>df_1 = pd.DataFrame(data.groupby('Year')['Selling_Price'].mean().sort_index(ascending=True))
df_1.rename(columns={'Selling_Price':'Average Selling Price'},inplace=True)
df_1['Year'] = df_1.index
df_1.reset_index(drop=True,inplace=True)
fig_1 = px.line(df_1, x="Year", y="Average Selling Price")
fig_1.show()
</code></pre>
<p><a href="https://i.sstatic.net/scaBG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/scaBG.png" alt="enter image description here" /></a></p>
<p>However, I want to have them combined into one column and two rows graph. I wrote the function:</p>
<pre><code>def Plot_Continuous(df,column_group,target):
# Copies the dataframe
df = df.copy()
df_1 = pd.DataFrame(df.groupby(column_group)[target].mean().sort_index(ascending=True))
df_1.rename(columns={target:'Average Selling Price'},inplace=True)
fig = make_subplots(rows=2, cols=1)
fig.append_trace(px.histogram(df, x=column_group, marginal="box", hover_data=df.columns), row=1, col=1)
fig.append_trace(px.line(df_1, x="Year", y="Average Selling Price"), row=2, col=1)
fig.update_layout(height=800, width=1000)
fig.show()
</code></pre>
<p>but when I call it: <code>Plot_Continuous(dataframe,'Year','Selling_Price')</code>, I get the error message:</p>
<pre><code>ValueError:
Invalid element(s) received for the 'data' property of
Invalid elements include: [Figure({
'data': [{'alignmentgroup': 'True',
'bingroup': 'x',
'hovertemplate': 'Year=%{x}<br>count=%{y}<extra></extra>',
'legendgroup': '',
'marker': {'color': '#636efa', 'pattern': {'shape': ''}},
'name': '',
'offsetgroup': '',
'orientation': 'v',
'showlegend': False,
'type': 'histogram',
'x': array([ 8., 8., 16., ..., 9., 15., 13.]),
'xaxis': 'x',
'yaxis': 'y'},.................
</code></pre>
<p>If I use go.Histogram and go.Line, then I get an error message with the <code>marginal="box"</code> parameter, which I would like to keep inside the plot, but not working if I leave it there.</p>
<p>How could I properly combine both plots as they are shown in the images above, one below the other?</p>
<p>Thanks in advance and Happy Holidays!</p>
|
<python><pandas><plotly><visualization>
|
2022-12-28 18:01:18
| 1
| 583
|
Ernesto Lopez Fune
|
74,943,492
| 243,031
|
Disable admin application for django
|
<p>I am trying to run the django development server and its failed with below error.</p>
<pre><code>Exception in thread django-main-thread:
Traceback (most recent call last):
File "/my/virtual/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/my/virtual/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/my/virtual/environment/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/my/virtual/environment/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 118, in inner_run
self.check(display_num_errors=True)
File "/my/virtual/environment/lib/python3.8/site-packages/django/core/management/base.py", line 469, in check
raise SystemCheckError(msg)
django.core.management.base.SystemCheckError: SystemCheckError: System check identified some issues:
ERRORS:
?: (admin.E403) A 'django.template.backends.django.DjangoTemplates' instance must be configured in TEMPLATES in order to use the admin application.
</code></pre>
<p>this is <code>django-rest-framework</code> application and it has no admin application.</p>
<p>How to say <code>django runserver</code> to not run admin application?</p>
|
<python><django>
|
2022-12-28 17:56:16
| 1
| 21,411
|
NPatel
|
74,943,480
| 3,330,979
|
Class not found when using dynamic import
|
<p>I am trying to dynamically import a module using the following:</p>
<pre><code>def import_plugin(plugin_name):
src = f'plugins.{plugin_name}.plugin'
module = __import__(src)
return getattr(module, 'Plugin')
import_plugin('core')
</code></pre>
<p>This is the project structure:</p>
<ul>
<li>main.py</li>
<li>plugins
<ul>
<li>core
<ul>
<li>plugin.py - This file contains a class named <code>Plugin</code></li>
</ul>
</li>
</ul>
</li>
</ul>
<p>I'm getting the error: <code>module 'plugins' has no attribute 'Plugin'</code></p>
|
<python>
|
2022-12-28 17:55:14
| 0
| 1,927
|
chrispytoes
|
74,943,412
| 17,696,880
|
Capture substring and send it to a function that modifies it and can replace it in this string
|
<pre class="lang-py prettyprint-override"><code>import re
def one_day_or_another_day_relative_to_a_date_func(input_text):
#print(repr(input_text)) #print what you have captured, and you should replace
return "aaaaaaaa"
def identify(input_text):
some_text = r"(?:(?!\.\s*?\n)[^;])*"
date_capture_pattern = r"([12]\d{3}-[01]\d-[0-3]\d)(\D*?)"
previous_days = r"(\d+)\s*(?:dias|dia)\s*(?:antes|previos|previo|antes|atrΓ‘s|atras)\s*"
after_days = r"(\d+)\s*(?:dias|dia)\s*(?:despuΓ©s|despues|luego)\s*"
n_patterns = [
previous_days + r"(?:del|de\s*el|de|al|a)\s*" + some_text + date_capture_pattern + some_text + r"\s*(?:,\s*o|o)\s*" + previous_days,
after_days + r"(?:del|de\s*el|de|al|a)\s*" + some_text + date_capture_pattern + some_text + r"\s*(?:,\s*o|o)\s*" + previous_days,
previous_days + r"(?:del|de\s*el|de|al|a)\s*" + some_text + date_capture_pattern + some_text + r"\s*(?:,\s*o|o)\s*" + after_days,
after_days + r"(?:del|de\s*el|de|al|a)\s*" + some_text + date_capture_pattern + some_text + r"\s*(?:,\s*o|o)\s*" + after_days]
#Itero la lista de patrones de bΓΊsqueda para que el programa intente con uno por uno
for n_pattern in n_patterns:
#Este es mi intento de realizar el reemplazo, aunque tiene problemas con modificadores non-greedy
input_text = re.sub(n_pattern, one_day_or_another_day_relative_to_a_date_func , input_text, re.IGNORECASE)
input_texts = ["8 dias antes o 9 dias antes del 2022-12-22",
"2 dias despues o 1 dia antes del 2022-12-22, dia en donde ocurrio",
"a tan solo 2 dias despues de 2022-12-22 o a caso eran 6 dias despues, mmm no recuerdo bien",
]
#Testing...
for input_text in input_texts:
#print(input_text)
print(one_day_or_another_day_relative_to_a_date_func(input_text))
</code></pre>
<p>Incorrect output that I am getting, because if I incorrectly capture the substrings, the replacements will also be incorrect</p>
<pre><code>"aaaaaaaa"
"aaaaaaaa"
"aaaaaaaa"
</code></pre>
<p>Having well-defined limits, I don't understand why this capture pattern try to capture beyond them?</p>
<p>And the output that I need is that:</p>
<pre><code>"aaaaaaaa"
"aaaaaaaa, dia en donde ocurrio"
"a tan solo aaaaaaaa, mmm no recuerdo bien"
</code></pre>
|
<python><python-3.x><regex><replace><regex-group>
|
2022-12-28 17:45:45
| 1
| 875
|
Matt095
|
74,942,963
| 192,204
|
spacy tokenizer is not recognizing period as suffix consistently
|
<p>I have been working on a custom NER model to extract products that have strange identifiers that I can't control.</p>
<p>You can see from this example that in some cases it isn't picking up the period as a suffix. I added a custom tokenizer to handle products with hyphens (below). What do I need to add to handle this case and not jeopardize the other existing tokenization? Any input would be appreciated.</p>
<pre><code>issue_text = "I really like stereo receivers, I want to buy the new ASX8E11F."
print(nlp_custom_ner.tokenizer.explain(issue_text))
issue_text = "I really like stereo receivers, I want to buy the new RK8BX."
print(nlp_custom_ner.tokenizer.explain(issue_text))
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[('TOKEN', 'I'), ('TOKEN', 'really'), ('TOKEN', 'like'), ('TOKEN', 'stereo'), ('TOKEN', 'receivers'), ('SUFFIX', ','), ('TOKEN', 'I'), ('TOKEN', 'want'), ('TOKEN', 'to'), ('TOKEN', 'buy'), ('TOKEN', 'the'), ('TOKEN', 'new'), ('TOKEN', 'ASX8E11F.')]
[('TOKEN', 'I'), ('TOKEN', 'really'), ('TOKEN', 'like'), ('TOKEN', 'stereo'), ('TOKEN', 'receivers'), ('SUFFIX', ','), ('TOKEN', 'I'), ('TOKEN', 'want'), ('TOKEN', 'to'), ('TOKEN', 'buy'), ('TOKEN', 'the'), ('TOKEN', 'new'), ('TOKEN', 'RK8BX'), ('SUFFIX', '.')]
</code></pre>
<p>I added a custom infix tokenizer to handle products with hyphens that is working.</p>
<pre><code>import spacy
from spacy.lang.char_classes import ALPHA, ALPHA_LOWER, ALPHA_UPPER
from spacy.lang.char_classes import CONCAT_QUOTES, LIST_ELLIPSES, LIST_ICONS
from spacy.util import compile_infix_regex
# Default tokenizer
nlp = spacy.load("en_core_web_sm")
doc = nlp("AXDR-PXXT-001")
print([t.text for t in doc])
# Modify tokenizer infix patterns
infixes = (
LIST_ELLIPSES
+ LIST_ICONS
+ [
r"(?<=[0-9])[+\-\*^](?=[0-9-])",
r"(?<=[{al}{q}])\.(?=[{au}{q}])".format(
al=ALPHA_LOWER, au=ALPHA_UPPER, q=CONCAT_QUOTES
),
r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
# β
Commented out regex that splits on hyphens between letters:
# r"(?<=[{a}])(?:{h})(?=[{a}])".format(a=ALPHA, h=HYPHENS),
r"(?<=[{a}0-9])[:<>=/](?=[{a}])".format(a=ALPHA),
]
)
infix_re = compile_infix_regex(infixes)
nlp.tokenizer.infix_finditer = infix_re.finditer
doc = nlp("AXDR-PXXT-001")
print([t.text for t in doc])
</code></pre>
<p><strong>Output</strong></p>
<pre><code>['AXDR', '-', 'PXXT-001']
['AXDR-PXXT-001']
</code></pre>
|
<python><nlp><spacy>
|
2022-12-28 16:58:05
| 1
| 9,234
|
scarpacci
|
74,942,500
| 339,167
|
Split and type cast columns values using Pandas
|
<p>How do i add an extra column in a dataframe, so it could split and convert to integer types but np.nan for string types</p>
<pre><code>Col1
1|2|3
"string"
</code></pre>
<p>so</p>
<pre><code>Col1 ExtraCol
1|2|3 [1,2,3]
"string" nan
</code></pre>
<p>I tried long contorted way but failed</p>
<pre><code>df['extracol'] = df["col1"].str.strip().str.split("|").str[0].apply(lambda x: x.astype(np.float) if x.isnumeric() else np.nan).astype("Int32")
</code></pre>
|
<python><pandas>
|
2022-12-28 16:12:12
| 2
| 6,500
|
Rohit Sharma
|
74,942,497
| 1,737,830
|
Different behavior of a loop in a function when else clause is added
|
<p>There's a simple function:</p>
<pre class="lang-py prettyprint-override"><code>users = [
(['bmiller', 'Miller, Brett'], 'Brett Miller'),
(['Gianni Tina', 'tgiann', 'tgianni'], 'Tina Gianni'),
(['mplavis', 'marcusp', 'Plavis, Marcus'], 'Marcus Plavis')
]
def replace_user_login_with_name(login):
name = ''
for u in users:
if login in u[0]:
name = str(login).replace(login, u[1])
else: # without these lines
name = str(login) # it works fine
return name
</code></pre>
<p>It was working just fine, replacing logins with full names of users when executed without the <code>else</code> clause (marked with the comments in the markup above):</p>
<pre class="lang-py prettyprint-override"><code>full_name = replace_user_login_with_name('bmiller')
print(full_name)
# 'Brett Miller'
</code></pre>
<p>But after adding the <code>else</code> clause to just return user's login when given user is not present in the <code>users</code> array, all entries are returned as logins, so:</p>
<pre class="lang-py prettyprint-override"><code>full_name = replace_user_login_with_name('bmiller')
print(full_name)
# 'bmiller'
</code></pre>
<p>Why the <code>else</code> clause gets executed even if logins are matched?</p>
|
<python>
|
2022-12-28 16:12:07
| 1
| 2,368
|
AbreQueVoy
|
74,942,416
| 13,162,807
|
vscode attached to Docker container - module not found
|
<p>I'm running <code>vscode</code> debugger inside <code>Docker</code> container but it started showing an error:
<a href="https://i.sstatic.net/VkA6c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VkA6c.png" alt="enter image description here" /></a></p>
<p>That is very strange because when I open python shell in the same <code>vscode</code> window and import the same module import works just fine. So I need to find the reason why debugger doesn't see the modules</p>
<p>The full error code:</p>
<pre><code>root@854c8a51d1f6:/opt/HonkioServer# python3 entrypoints/api2/docker_entry
Traceback (most recent call last):
File "entrypoints/api2/docker_entry", line 3, in <module>
from index import app
File "/opt/HonkioServer/entrypoints/api2/index.py", line 9, in <module>
from honkio.db.Application import ApplicationModel
ModuleNotFoundError: No module named 'honkio'
root@854c8a51d1f6:/opt/HonkioServer# cd /opt/HonkioServer ; /usr/bin/env /bin/python3 /root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 39239 -- /opt/HonkioServer/entrypoints/api2/docker_entry
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/root/.vscode-server/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/opt/HonkioServer/entrypoints/api2/docker_entry", line 3, in <module>
from index import app
File "/opt/HonkioServer/entrypoints/api2/index.py", line 9, in <module>
from honkio.db.Application import ApplicationModel
ModuleNotFoundError: No module named 'honkio'
</code></pre>
<p>My debugger config file <code>launch.json</code></p>
<pre><code> {
"name": "Python: File",
"type": "python",
"request": "launch",
"program": "/opt/HonkioServer/entrypoints/api2/docker_entry",
"justMyCode": true
}
</code></pre>
|
<python><docker><visual-studio-code><vscode-debugger>
|
2022-12-28 16:05:02
| 1
| 305
|
Alexander P
|
74,942,299
| 20,615,590
|
Difference between pandas functions: df.assign() vs df.reset_index()
|
<p>So lets say I have a DataFrame:</p>
<pre><code> stuff temp
id
1 3 20.0
1 6 20.1
1 7 21.4
2 1 30.2
2 3 0.0
2 2 34.0
3 7 0.0
3 6 0.0
3 2 14.4
</code></pre>
<p>And I want to drop the index; what method is better to use?</p>
<ul>
<li><p>There is <code>df.reset_index(drop=True)</code>, which is what I will usually use</p>
</li>
<li><p>But there is also <code>df.assign(Index=range(len(df))).set_index('Index')</code>, which I don't usually use</p>
</li>
<li><p>And, is there any other methods?</p>
</li>
</ul>
<p>Well, I want to find the most efficient/best way to drop the index of a <code>pd.DataFrame</code>. Can you give me a clear explanation. I'm doing a efficient code-writing project and I want to know the best options. Thanks.</p>
|
<python><python-3.x><pandas><dataframe><coding-efficiency>
|
2022-12-28 15:52:45
| 0
| 423
|
Pythoneer
|
74,942,140
| 11,239,740
|
Manually trace function in Pyodide without locking main thread
|
<p>I am trying to build a browser-based debugging tool for Python but I am having trouble combining the Python inner workings with user interactions. My setup looks like <a href="https://jsfiddle.net/ms2t8hjc/37/" rel="nofollow noreferrer">this</a> (jsfiddle link). Essentially I am using <code>sys.settrace</code> to inspect a function one opcode at a time. My goal is for the user to be able to manually step forward each instruction by pressing the <code>Step</code> button.</p>
<p>The problems I am having is that the tracer function cannot be asynchronous because Python does not like that, and if it is synchronous there is no real way to stop and wait for input/interaction. If I make just an infinite while loop it freezes the browsers main thread so the page becomes unresponsive.</p>
<p>Does anyone have advice on how I can structure this to allow interaction between the UI and the tracer function?</p>
|
<python><python-3.x><pyodide>
|
2022-12-28 15:36:24
| 1
| 408
|
Blupper
|
74,942,103
| 1,936,966
|
How can I map multiple Boolean properties into single SQLAlchemy column of binary type?
|
<p>How can I map multiple Boolean properties into single SQLAlchemy column?</p>
<p>I have a class kind of <code>flags</code> :</p>
<pre><code>class Thing(db.Model):
id = db.Column(db.Integer, primary_key=True)
isEnabled = db.Column(db.Boolean, nullable=False, default=False)
isAdministrator = db.Column(db.Boolean, nullable=False, default=False)
isPasswordExpired = db.Column(db.Boolean, nullable=False, default=False)
isEmailConfirmed = db.Column(db.Boolean, nullable=False, default=False)
isPhoneConfirmed = db.Column(db.Boolean, nullable=False, default=False)
# ... many more booleans #
</code></pre>
<p>I'd like to store them into single <code>db.Column(db.Integer)</code> or something like that small field.</p>
<p>Could you help me with example of how should I do this?</p>
<hr>
<p>Please don't try to get logic of those fields or help me to store password expiration mark better - this is an example.</p>
|
<python><sqlalchemy><flask-sqlalchemy>
|
2022-12-28 15:34:01
| 1
| 4,764
|
filimonic
|
74,942,021
| 4,594,063
|
SQLAlchemy: NotImplementedError when joining with subquery
|
<p>I'm trying to join a query object with a subquery. I have verified that the two queries work independently.</p>
<p>This is the code:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy import text
from project import models
SQLALCHEMY_DATABASE_URL = "XXX"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionDB = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
session = SessionDB()
query = (
session.query(
models.Homes,
models.RentalData,
models.Descriptions,
)
.filter(
models.Homes.id == models.RentalData.home_id,
)
.filter(
models.Homes.id == models.Descriptions.home_id,
)
)
query_2 = (
session.query(models.Homes)
.from_statement(
text(
"""
SELECT
id,
(SELECT
ARRAY(
SELECT image_url
FROM listings.images
WHERE listings.images.home_id = listings.homes.id))
AS image_url
FROM listings.homes"""
)
)
.subquery()
)
query = query.join(query_2, models.Homes.id == query_2.c.id)
</code></pre>
<p>The traceback looks like this:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/app/merge_example.py", line 52, in <module>
query = query.join(query_2, models.Listings.id == query_2.c.id)
^^^^^^^^^
File "/home/user/app/env/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 1113, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
^^^^^^^^^^^^^^
File "/home/user/app/env/lib/python3.11/site-packages/sqlalchemy/sql/selectable.py", line 737, in columns
self._populate_column_collection()
File "/home/user/app/env/lib/python3.11/site-packages/sqlalchemy/sql/selectable.py", line 1643, in _populate_column_collection
self.element._generate_fromclause_column_proxies(self)
File "/home/user/app/env/lib/python3.11/site-packages/sqlalchemy/sql/selectable.py", line 3044, in _generate_fromclause_column_proxies
raise NotImplementedError()
NotImplementedError
</code></pre>
<p>Why does this join not work? I'm using Postgres 11, Python 3.11, SQLAlchemy 1.4.</p>
|
<python><postgresql><sqlalchemy>
|
2022-12-28 15:24:32
| 1
| 1,832
|
Wessi
|
74,941,944
| 11,092,636
|
Strange grey line appearing tkinter when adding icon with root.iconbitmap
|
<p>Here is a MRE:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
# Window
tkWindow: tk.Tk = tk.Tk()
tkWindow.resizable(False, False)
tkWindow.iconbitmap("icon.ico")
# Label Welcome
text: str = f"Wedfghhhhhhhhjkllllll hojjjjjjjjj hjjjjjjjjj V1."
label_welcome: tk.Text = tk.Text(
tkWindow,
height=1,
width=len(text),
background='SystemButtonFace',
borderwidth=0
)
label_welcome.insert("current", text)
label_welcome.grid(row=0, column=1, columnspan=2, pady=15)
tkWindow.mainloop()
</code></pre>
<p>Running this will get you this:</p>
<p><a href="https://i.sstatic.net/FWbWx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FWbWx.png" alt="enter image description here" /></a></p>
<p>And if you look closely there is a strange grey line on the top right of the image.</p>
<p>It disappears if I comment out the line <code>tkWindow.iconbitmap("icon.ico")</code>:</p>
<p><a href="https://i.sstatic.net/5fCyj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5fCyj.png" alt="enter image description here" /></a></p>
<p>Here is the file I'm using (<code>icon.ico</code>) if it's important to reproduce the MRE.
<a href="https://wetransfer.com/downloads/26750bfab596b8bb74798c06a6c84e8820221228151614/3e2fe2" rel="nofollow noreferrer">https://wetransfer.com/downloads/26750bfab596b8bb74798c06a6c84e8820221228151614/3e2fe2</a></p>
<p>I'm using <code>Python 3.11.1</code> and <code>Windows 11</code></p>
|
<python><tkinter><winapi><tk-toolkit>
|
2022-12-28 15:16:52
| 2
| 720
|
FluidMechanics Potential Flows
|
74,941,746
| 2,581,199
|
Trying to solve the n-parenthesis problem - but failing
|
<p>I am trying to implement a solution to the 'n-parenthesis problem'</p>
<pre><code>def gen_paren_pairs(n):
def gen_pairs(left_count, right_count, build_str, build_list=[]):
print(f'left count is:{left_count}, right count is:{right_count}, build string is:{build_str}')
if left_count == 0 and right_count == 0:
build_list.append(build_str)
print(build_list)
return build_list
if left_count > 0:
build_str += "("
gen_pairs(left_count - 1, right_count, build_str, build_list)
if left_count < right_count:
build_str += ")"
#print(f'left count is:{left_count}, right count is:{right_count}, build string is:{build_str}')
gen_pairs(left_count, right_count - 1, build_str, build_list)
in_str = ""
gen_pairs(n,n,in_str)
gen_paren_pairs(2)
</code></pre>
<p>It almost works but isn't quite there.
The code is supposed to generate a list of correctly nested brackets whose count matches the input 'n'</p>
<p>Here is the final contents of a list. Note that the last string starts with an unwanted left bracket.</p>
<p>['(())', '(()()']</p>
<p>Please advise.</p>
|
<python>
|
2022-12-28 14:56:07
| 1
| 391
|
Kevin
|
74,941,717
| 9,430,509
|
What would a python list (nested) parser look like in pyparsing?
|
<p>I would like to understand how to use pyparsing to parse something like a nested Python list.
This is a question to understand pyparsing. Solutions that circumvent the problem because the list of the example might look like JSON or Python itself should not prevent the usage of pyparsing.</p>
<p>So before people start throwing json and literal_eval at me let's consider a string and result that looks like this:</p>
<pre><code>Input:
{1,2,3,{4,5}}
Expected Output (Python list):
[1,2,3,[4,5]]
</code></pre>
<p>I currently have this code but the output does not parse the nested list</p>
<pre class="lang-py prettyprint-override"><code>import pyparsing
print(
pyparsing.delimited_list(
pyparsing.Word(pyparsing.nums) | pyparsing.nested_expr("{", "}")
)
.parse_string("{1,2,3,{4,5}}")
.as_list()
)
# [['1,2,3,', ['4,5']]]
</code></pre>
<p>There is pretty much the same question here already but this one was circumvented by using json parsing: <a href="https://stackoverflow.com/questions/65891710/python-parse-comma-seperated-nested-brackets-using-pyparsing">Python parse comma seperated nested brackets using Pyparsing</a></p>
|
<python><recursion><nested><pyparsing>
|
2022-12-28 14:53:22
| 2
| 935
|
Sergej Herbert
|
74,941,714
| 10,291,435
|
ImportError: cannot import name 'LegacyVersion' from 'packaging.version'
|
<p>I am using python 3.10.6, and I installed pipenv, version 2022.12.19, I was planning to run a project using runway, so for this I created a folder, did the command <code>pipenv --python 3.10</code>, then updated in the pipfile to include runway, pip file is as follows:</p>
<pre><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
runway = "== 2.6.3"
[dev-packages]
[requires]
python_version = "3.10"
python_full_version = "3.10.6"
</code></pre>
<p>then I ran the command <code>pipenv install</code>, I was expecting runway to be there, but each time I try to run a command using runway I get this error:</p>
<p><code>ImportError: cannot import name 'LegacyVersion' from 'packaging.version'</code> any idea?</p>
|
<python><pipenv><pipenv-install>
|
2022-12-28 14:53:13
| 3
| 1,699
|
Mee
|
74,941,669
| 5,900,271
|
How to interpret the output of statsmodels model.summary() for multivariate linear regression?
|
<p>I'm using the <code>statsmodels</code> library to check for the impact of confounding variables on a dependent variable by performing multivariate linear regression:</p>
<pre><code>model = ols(f'{metric}_diff ~ {" + ".join(confounding_variable_names)}', data=df).fit()
</code></pre>
<p>This is how my data looks like (pasted only 2 rows):</p>
<pre><code> Age Sex Experience using a gamepad (1-4) Experience using a VR headset (1-4) Experience using hand tracking (1-3) Experience using controllers in VR (1-3) Glasses ID_1 ID_2 Method_1 Method_2 ID_controller ID_handTracking CorrectGestureCounter_controller CorrectGestureCounter_handTracking IncorrectGestureCounter_controller IncorrectGestureCounter_handTracking
IDs
ID_K_1_3 25 Female 4 3 1 2 Yes K_1 K_3 controller handTracking K_1 K_3 21 34 5 2
ID_K_4_5 19 Male 4 2 1 2 Yes K_4 K_5 controller handTracking K_4 K_5 21 36 14 17
</code></pre>
<p>When I execute <code>model.summary()</code> I get output like this:</p>
<pre><code> OLS Regression Results
======================================================================================
Dep. Variable: CorrectGestureCounter_diff R-squared: 0.477
Model: OLS Adj. R-squared: 0.249
Method: Least Squares F-statistic: 2.088
Date: Wed, 28 Dec 2022 Prob (F-statistic): 0.105
Time: 15:29:41 Log-Likelihood: -73.565
No. Observations: 24 AIC: 163.1
Df Residuals: 16 BIC: 172.6
Df Model: 7
Covariance Type: nonrobust
==========================================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------------------------------
Intercept -24.6404 9.326 -2.642 0.018 -44.410 -4.871
Sex[T.Male] -7.3225 3.170 -2.310 0.035 -14.043 -0.602
Glasses[T.Yes] -2.4210 2.995 -0.808 0.431 -8.771 3.929
Age 0.2957 0.183 1.613 0.126 -0.093 0.684
Experience_using_a_gamepad_1_4 1.8810 1.853 1.015 0.325 -2.047 5.809
Experience_using_a_VR_headset_1_4 0.9559 3.213 0.297 0.770 -5.856 7.768
Experience_using_hand_tracking_1_3 -2.4689 3.633 -0.680 0.506 -10.170 5.232
Experience_using_controllers_in_VR_1_3 2.3592 4.840 0.487 0.633 -7.902 12.620
==============================================================================
Omnibus: 0.621 Durbin-Watson: 2.566
Prob(Omnibus): 0.733 Jarque-Bera (JB): 0.702
Skew: -0.277 Prob(JB): 0.704
Kurtosis: 2.371 Cond. No. 205.
==============================================================================
</code></pre>
<p>What do the <code>[T.Male]</code> or <code>[T.Yes]</code> next to <code>Sex</code> and <code>Glasses</code> mean? How should I interpret this? Also why is <code>Intercept</code> added next to my variables? Should I care about it in the context of confounding variables?</p>
|
<python><statistics><linear-regression><statsmodels>
|
2022-12-28 14:49:32
| 2
| 2,127
|
Wojtek Wencel
|
74,941,514
| 7,320,594
|
Read CSV file with quotechar-comma combination in string - Python
|
<p>I have got multiple csv files which look like this:</p>
<pre><code>ID,Text,Value
1,"I play football",10
2,"I am hungry",12
3,"Unfortunately",I get an error",15
</code></pre>
<p>I am currently importing the data using the pandas read_csv() function.</p>
<pre><code>df = pd.read_csv(filename, sep = ',', quotechar='"')
</code></pre>
<p>This works for the first two rows in my csv file, unfortunately I get an error in row 3. The reason is that within the 'Text' column there is a quotechar character-comma combination <strong>before</strong> the end of the column.</p>
<pre><code>ParserError: Error tokenizing data. C error: Expected 3 fields in line 4, saw 4
</code></pre>
<p>Is there a way to solve this issue?</p>
<p>Expected output:</p>
<pre><code>ID Text Value
1 I play football 10
2 I am hungry 12
3 Unfortunately, I get an error 15
</code></pre>
|
<python><pandas><csv>
|
2022-12-28 14:34:49
| 3
| 1,490
|
Koot6133
|
74,941,471
| 4,792,229
|
How to I install protobufs C++/Python extension?
|
<p>I'm looking to upgrade to the new protobuf version 4.21.0. I am sharing messages between python and c++, on the <a href="https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates" rel="nofollow noreferrer">release page</a> they mention that this capability breaks, unless I set <code>PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp</code>, and ensure that the Python/C++ extension is installed.
How do I ensure that the Python/C++ extension is installed? Where do I find it? What is meant by this Python/C++ extension?
Also does the env var need to be set when generating *_pb2.py out of *.proto files or also when running python/c++ programs which are using protobuf?</p>
|
<python><c++><protocol-buffers>
|
2022-12-28 14:30:45
| 1
| 3,002
|
Hakaishin
|
74,941,247
| 12,284,585
|
MyPy using other Pyhton version as in venv? (Positional-only parameters are only supported in Python 3.8 and greater)
|
<p>MyPy thinks it has to check for Python <3.8 when instead it should use 3.10</p>
<p>As you can see, Python3.10 is active</p>
<pre><code>(myvenv) gitpod /workspace/myfolder (mybranch) $ python --version
Python 3.10.7
</code></pre>
<p>however mypy think its <3.8?</p>
<pre><code>(myvenv) gitpod /workspace/myfolder (mybranch) $ mypy -p my_folder_with_code
</code></pre>
<pre><code>/workspace/.pyenv_mirror/poetry/virtualenvs/myenv/lib/python3.10/site-packages/numpy/__init__.pyi:641:
error: Positional-only parameters are only supported in Python 3.8 and greater
Found 1 error in 1 file (errors prevented further checking)
</code></pre>
<p>even <code>mypy --python-version 3.10 -p my_folder_with_code</code> produces the same error</p>
<p>This happens only in this platform (gitpod). On other devices it runs fine (so no error in code)</p>
<p>I googled around but did found what i'm looking for... can somebody help?</p>
|
<python><version><mypy><python-poetry><gitpod>
|
2022-12-28 14:07:54
| 1
| 1,333
|
tturbo
|
74,941,203
| 339,167
|
Compare and match two data frames with multiple criteria
|
<p>Given a dataframe</p>
<pre><code>multival date string others
23|34|45 12/05/1991 name1 xyz
31|46|25 16/02/1990 name2 abc
</code></pre>
<p>how can I create an index and lookup in above data series using values from another data series such that it match on</p>
<ul>
<li>value in multival</li>
<li>date</li>
<li>string</li>
</ul>
<p>Example: lookup on 23, 12/05/1991 and name1 should yield first data series</p>
|
<python><pandas><dataframe>
|
2022-12-28 14:03:17
| 2
| 6,500
|
Rohit Sharma
|
74,941,179
| 4,865,723
|
Refactor a if-elif-block into a pythonic dictionary
|
<p>I have a big <code>if-elif-else</code> block in my code like this</p>
<pre><code>if counted == 2:
score = None
elif counted == 1:
score = (sum(values) - 3) / 6 * 100
elif counted == 0:
score = (sum(values) - 4) / 8 * 100
else:
raise Exception('Should not be reached!')
</code></pre>
<p>With python I assume there is a way to solve that with a <code>dict</code> using the <code>counted</code> values as keys. But what comes as items into that dict? The new <code>case</code> expression is not an option here. IMHO this wouldn't be pythonic no matter that it's now part of Python standard.</p>
<h1>Approach A</h1>
<pre><code># Approach A
mydict = {
0: ((sum(values) - 4) / 8 * 100),
1: ((sum(values) - 3) / 6 * 100),
2: None
}
print(mydict)
print(mydict[counted])
</code></pre>
<p>The problem here is that all items are <em>executed</em>.</p>
<pre><code>{0: 4625.0, 1: 6183.333333333334, 2: None}
</code></pre>
<h1>Approach B</h1>
<pre><code>mydict = {
0: (lambda _: (sum(values) - 4) / 8 * 100),
1: (lambda _: (sum(values) - 3) / 6 * 100),
2: (lambda _: None)
}
print(mydict)
print(mydict[counted](None))
</code></pre>
<p>Here I have the <em>fake argument</em> I don't need. But it seems to be mandatory when creating a <code>lambda</code>.</p>
<pre><code>{0: <function <lambda> at 0x7ff8f3dce040>, 1: <function <lambda> at 0x7ff8f3b6a670>, 2: <function <lambda> at 0x7ff8f3b6a700>}
</code></pre>
<p>Is there a another way?</p>
<h1>Full MWE</h1>
<pre><code>#!/usr/bin/env python3
import random
values = random.choices(range(100), k=10)
counted = random.choice(range(3))
print(f'counted={counted}')
if counted == 2:
score = None
elif counted == 1:
score = (sum(values) - 3) / 6 * 100
elif counted == 0:
score = (sum(values) - 4) / 8 * 100
else:
raise Exception('Should not be reached!')
print(f'score={score}')
# Approach A
mydict = {
0: ((sum(values) - 4) / 8 * 100),
1: ((sum(values) - 3) / 6 * 100),
2: None
}
print(mydict)
print(mydict[counted])
# Approach B
mydict = {
0: (lambda _: (sum(values) - 4) / 8 * 100),
1: (lambda _: (sum(values) - 3) / 6 * 100),
2: (lambda _: None)
}
print(mydict)
print(mydict[counted](None))
</code></pre>
|
<python><lambda>
|
2022-12-28 14:00:40
| 4
| 12,450
|
buhtz
|
74,941,065
| 14,224,948
|
How to create a zip with multiple files from one folder and send it to the browser in modern Django
|
<p>I was struggling with sending the downloadable Django zip with many files to browser, because many tutorials in the web are obsolete, like the one which was my inspiration to come up with what I will share with you:</p>
<p><a href="https://stackoverflow.com/questions/12881294/django-create-a-zip-of-multiple-files-and-make-it-downloadable">Django - Create A Zip of Multiple Files and Make It Downloadable</a></p>
|
<python><python-3.x><django><django-4.0>
|
2022-12-28 13:50:25
| 1
| 1,086
|
Swantewit
|
74,940,964
| 8,981,425
|
How to extend SQLalchemy Base class with a static method
|
<p>I have multiple classes similar to the following:</p>
<pre><code>class Weather(Base):
__tablename__ = "Weather"
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
temperature = Column(Integer)
humidity = Column(Integer)
wind_speed = Column(Float)
wind_direction = Column(String)
</code></pre>
<p>I want to add a method <code>df()</code> that returns me the Pandas dataframe of that table. I know I can write it like this:</p>
<pre><code>class Weather(Base):
__tablename__ = "Weather"
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
temperature = Column(Integer)
humidity = Column(Integer)
wind_speed = Column(Float)
wind_direction = Column(String)
@staticmethod
def df():
with engine.connect() as conn:
return pd.read_sql_table(Weather.__tablename__ , conn)
</code></pre>
<p>But I want to implement this for every table. I guess if I can extend the Base class with this method I should be able to implement it once and use it in every class. Everything I have tried has failed because I do not have access to <code>__tablename__ </code> attribute.</p>
<p><strong>SOLUTION</strong>
I ended up with a mix of both answers. I have used the first method proposed by <a href="https://stackoverflow.com/users/5320906/snakecharmerb">@snakecharmerb</a> (it allows to introduce the change without modifying the rest of the code) with the <code>@classmethod</code> proposed by <a href="https://stackoverflow.com/users/3185459/romanperekhrest">@RomanPerekhrest</a> (which is the bit I was missing).</p>
<pre><code>class MyBase:
__tablename__ = None
@classmethod
def df(cls):
with engine.connect() as conn:
return pd.read_sql_table(cls.__tablename__ , conn)
Base = declarative_base(cls=MyBase)
</code></pre>
|
<python><pandas><sqlalchemy>
|
2022-12-28 13:40:10
| 2
| 367
|
edoelas
|
74,940,732
| 3,324,314
|
How to generate unique name in Faker based on a seed?
|
<p>I need to generate a fake username based on unique user attribute. For simplicity, let's assume each user in my system has an ID, and I want to generate fake names for them based on this attribute. I can't understand if it's possible.</p>
<p>In other words, I want something like this:</p>
<pre class="lang-py prettyprint-override"><code>f = Faker()
# pseudocode:
f.name(1) # -> "George Cook"
f.name(2) # -> "Sarah Johnson"
f.name(1) # -> "George Cook"
</code></pre>
<p>I tried to use <code>Faker.seed(value)</code>, but it still generates me random value each consequent invokation. I'm not sure if it's possible to achieve what I want though.</p>
|
<python><faker>
|
2022-12-28 13:15:45
| 1
| 920
|
fbjorn
|
74,940,661
| 6,737,387
|
tensorboard not showing projector data on colab
|
<p>I'm trying to visualize the embeddings in tensorboard but the projector tab isn't showing anything on colab.</p>
<p>When I downloaded the logs folder to my pc and than ran it locally, it worked perfectly fine. Does anybody have any idea why it isn't working in google colab ?</p>
<p>The command i'm using to show the tensorboard:</p>
<p><code>%tensorboard --logdir tmp/</code></p>
<p><strong>Output:</strong></p>
<p><a href="https://i.sstatic.net/uTUzk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uTUzk.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><jupyter-notebook><google-colaboratory><tensorboard>
|
2022-12-28 13:08:52
| 1
| 2,683
|
Hisan
|
74,940,611
| 2,577,122
|
With pandas cut function how to list the missing bin values
|
<p>I have a requirement where I need to group report execution counts. The input data looks like below.</p>
<pre><code>REPORT,TIME_SEC
Report1,1
Report1,5
Report3,4
Report2,158
Report2,20
Report3,131
</code></pre>
<p>I need to group the reports and show the count of execution for each of the following time ranges '<code>0-10sec</code>','<code>10-30sec</code>','<code>30-60sec</code>','<code>1-2min','>2min'</code>.</p>
<p>My code is like below.</p>
<pre><code>import pandas as pd
infile = "/Users/user1/data1.csv"
time_column = "TIME_SEC"
range_column = "TIME_RANGE"
groupby_column = "REPORT"
df = pd.read_csv(infile)
bin1 = [0,10,30,60,120,7200]
label1 = ['0-10sec','10-30sec','30-60sec','1-2min','>2min']
# print(f"df=\n{df}")
df[range_column] = pd.cut(df[time_column], bins=bin1, labels=label1, include_lowest=True)
print(f"df=\n{df}")
df_final = pd.crosstab(df[groupby_column], df[range_column])
df_final.columns = df_final.columns.astype(str)
df_final.reset_index(inplace=True)
print(f"df_final=\n{df_final}")
</code></pre>
<p>The output is like below.</p>
<pre><code>df=
REPORT TIME_SEC TIME_RANGE
0 Report1 1 0-10sec
1 Report1 5 0-10sec
2 Report3 4 0-10sec
3 Report2 158 >2min
4 Report2 20 10-30sec
5 Report3 131 >2min
df_final=
TIME_RANGE REPORT 0-10sec 10-30sec >2min
0 Report1 2 0 0
1 Report2 0 1 1
2 Report3 1 0 1
bins = [ 0 10 30 60 120 7200]
</code></pre>
<p>Now while I do <code>pd.cut</code> it doesn't list the ranges for which there's no data. Hence for the above example it doesn't list the ranges <code>'30-60sec'</code>,<code>'1-2min'</code>, as there's no value for these ranges.</p>
<p>My requirement is to print the ranges <code>'30-60sec'</code>,<code>'1-2min'</code> (for which no values are present) also in the final output. How can I get the <code>'30-60sec','1-2min'</code> as part of <code>pd.cut</code> OR any another way so that when I do <code>pd.crosstab</code> then value <code>'0'</code> is printed for the ranges <code>'30-60sec'</code>,<code>'1-2min'</code>.</p>
<p>I'm expecting the df_final to be like below.</p>
<pre><code>TIME_RANGE REPORT 0-10sec 10-30sec 30-60sec 1-2min >2min
0 Report1 2 0 0 0 0
1 Report2 0 1 0 0 1
2 Report3 1 0 0 0 1
</code></pre>
|
<python><python-3.x><pandas><group-by>
|
2022-12-28 13:03:56
| 1
| 307
|
Swap
|
74,940,550
| 1,835,818
|
Django- How to replace list of items of a related set
|
<p>I have table Group and UserConfig</p>
<ul>
<li>A group has many users, each user in a group has a config item</li>
<li>UserConfig: unique (group_id, user_id)</li>
</ul>
<p>Example:</p>
<pre><code>class Group(models.Model):
id = models.BigAutoField(primary_key=True)
name = models.CharField(max_length=255, unique=True)
class UserConfig(models.Model):
id = models.BigAutoField(primary_key=True)
group = models.ForeignKey(Group, on_delete=models.CASCADE, related_name='user_configs')
user_id = models.IntegerField()
config = models.JSONField()
</code></pre>
<p>I want to replace all UserConfig instances of a group (update existing rows, add new rows and remove which rows are not in the new list)</p>
<pre><code># list_replace_configs: List[UserConfig]
group.user_configs.set(list_replace_configs, bulk=False, clear=False)
</code></pre>
<p>This method not working because it uses method <code>remove()</code></p>
<ul>
<li>remove(*objs, bulk=True): only exists if ForeignKey field has null=True</li>
<li>if ForeignKey field has null=True: it does not removed UserConfig object, but just set user_config.group = None</li>
</ul>
<p>I don't understand why django designs this method.<br>
<strong>How to replace all UserConfig instances of a group ?</strong></p>
|
<python><django>
|
2022-12-28 12:57:56
| 1
| 473
|
kietheros
|
74,940,403
| 5,833,865
|
FASTAPI Delete Operation giving Internal server error
|
<p>I have this code for delete operation on a Postgresql DB:</p>
<pre><code>@app.delete("/posts/{id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_post(id: int):
print("ID IS ",id)
cursor.execute("""DELETE FROM public."Posts" WHERE id = %s""", (str(id),))
deleted_post = cursor.fetchone() <--- Showing error for this line
conn.commit()
if deleted_post is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND,
detail=f"Post with {id} not found")
return Response(status_code=status.HTTP_204_NO_CONTENT)
</code></pre>
<p>The create and read operations work fine. If I pass an existing or a non-exsiting id to delete, I get a 500 Internal Server error. The row does get deleted from the table though.</p>
<p>If I comment this line <code>deleted_post = cursor.fetchone()</code>, it works okay.</p>
<p>Here is the error traceback:</p>
<pre><code>File "D:\Python Projects\FASTAPI\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\Python Projects\FASTAPI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\Python Projects\FASTAPI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\Python Projects\FASTAPI\.\app\main.py", line 80, in delete_post
deleted_post = cursor.fetchone()
File "D:\Python Projects\FASTAPI\venv\lib\site-packages\psycopg2\extras.py", line 86, in fetchone
res = super().fetchone()
psycopg2.ProgrammingError: no results to fetch
</code></pre>
<p>What really is happening here??</p>
|
<python><postgresql><crud><fastapi>
|
2022-12-28 12:44:04
| 2
| 770
|
Devang Sanghani
|
74,940,265
| 12,858,691
|
apply() custom function on all columns increase efficiency
|
<p>I apply this function</p>
<pre><code>def calculate_recency_for_one_column(column: pd.Series) -> int:
"""Returns the inverse position of the last non-zero value in a pd.Series of numerics.
If the last value is non-zero, returns 1. If all values are non-zero, returns 0."""
non_zero_values_of_col = column[column.astype(bool)]
if non_zero_values_of_col.empty:
return 0
return len(column) - non_zero_values_of_col.index[-1]
</code></pre>
<p>to all columns of this example dataframe</p>
<pre><code>df = pd.DataFrame(np.random.binomial(n=1, p=0.001, size=[1000000]).reshape((1000,1000)))
</code></pre>
<p><a href="https://i.sstatic.net/KB0b7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KB0b7.png" alt="df" /></a></p>
<p>by using</p>
<pre><code>df.apply(lambda column: calculate_recency_for_one_column(column),axis=0)
</code></pre>
<p>The result is:</p>
<pre><code>0 436
1 0
2 624
3 0
...
996 155
997 715
998 442
999 163
Length: 1000, dtype: int64
</code></pre>
<p>Everything works fine, but my programm has to do this operation often, so I need a more efficient alternative. Does anybody have an idea how to make this faster? I think <code>calculate_recency_for_one_column()</code> is efficient enough and the <code>df.apply()</code> has the most potential for improvement. Here a as benchmark (100 reps):</p>
<pre><code>>> timeit.timeit(lambda: df.apply(lambda column: calculate_recency_for_one_column(column),axis=0), number=100)
14.700050864834338
</code></pre>
<p><strong>Update</strong></p>
<p>Mustafa's answer:</p>
<pre><code>>> timeit.timeit(lambda: pd.Series(np.where(df.eq(0).all(), 0, len(df) - df[::-1].idxmax())), number=100)
0.8847485752776265
</code></pre>
<p>padu's answer:</p>
<pre><code>>> timeit.timeit(lambda: df.apply(calculate_recency_for_one_column_numpy, raw=True, axis=0), number=100)
0.8892530500888824
</code></pre>
|
<python><pandas><performance>
|
2022-12-28 12:29:17
| 2
| 611
|
Viktor
|
74,940,133
| 4,002,633
|
Django query to get frequency-per-day of events with a DateTimeField
|
<p>I have a model with events that each have a DateTimeField and can readily write a query that gets me the count of events per day:</p>
<pre class="lang-py prettyprint-override"><code>MyEvents.objects.annotate(day=TruncDate('date_time')).values('day').annotate(count=Count('id'))
</code></pre>
<p>which produces (Postgresql) SQL like:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT ("MyEvents"."date_time")::date AS "day",
COUNT("MyEvents"."id") AS "count"
FROM "Doors_visit"
GROUP BY ("MyEvents"."date_time")::date
</code></pre>
<p>which produces a QuerySet that returns dicts containing 'date' and 'count'.</p>
<p>What I would like is to produce a histogram of such counts, by which I mean plotting the 'count' as a category along the x-axis and the number of times we've seen that 'count' on the y-axis.</p>
<p>In short, I want to aggregate the counts, or count the counts if you prefer.</p>
<p>Alas, Django complains vehemently when I try:</p>
<pre class="lang-py prettyprint-override"><code>MyEvents.objects.annotate(day=TruncDate('date_time')).values('day').annotate(count=Count('id')).annotate(count_count=Count('count'))
</code></pre>
<p>produces:</p>
<pre><code>django.core.exceptions.FieldError: Cannot compute Count('count'): 'count' is an aggregate
</code></pre>
<p>Fair enough. Which raises the question, can this be done in one Django Query without resorting to Raw SQL? I mean in SQL it's easy enough to query the results of a query (subquerying). Django seems to lack the smarts for this (I mean, rather than complain it could just wrap the query to date as a subquery and build a query from there, but it elects not to).</p>
<p>I mean, one work around is to extract the count of counts in Python from the result of the first query above. But it can be done in SQL, and arguably should be done in SQL, and I'm left wondering if in the forward march of Django's evolution and my learning there isn't an overlooked feature of Django or some sly trick that makes this possible which I have simply not uncovered.</p>
|
<python><django><postgresql><django-queryset><django-aggregation>
|
2022-12-28 12:14:49
| 0
| 2,192
|
Bernd Wechner
|
74,940,013
| 9,668,481
|
How to check if user defined functions is missing on the file from which it was imported using pylance?
|
<p>I'm refactoring a large codebase. And I have migrated many function here and there.</p>
<p>I want to filter pylint output to show me error like <code>my_function</code> doesn't exist in <code>helper.py</code> function anymore.</p>
<pre class="lang-py prettyprint-override"><code># ------ Before refactor ------
# File helper.py
def my_function():
pass
# File controller.py
from my_module.helper import my_function
</code></pre>
<p>After refactor, let's say <code>my_function()</code> has been moved to a new file <code>utils.py</code> instead.
How can I observe such errors using pylint ?</p>
<p>I'm using <code>pylint $(git ls-files '*.py')</code> command to run linting on project.</p>
|
<python><pylint>
|
2022-12-28 12:01:46
| 1
| 846
|
Saurav Pathak
|
74,939,968
| 2,573,075
|
Dockerfile installs newer python insteam of FROM source
|
<p>I'm trying to make a new image for python dockerfile, but keeps installing Python 3.10 instead of Python 3.8.</p>
<p>My Dockerfile looks like this:</p>
<pre><code>FROM python:3.8.16
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
## follow installation from xtb-python github:
ENV CONDA_DIR /opt/conda
RUN apt-get update && apt-get install -y \
wget \
&& rm -rf /var/lib/apt/lists/*
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh \
&& /bin/bash ~/miniconda.sh -b -p /opt/conda
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda config --add channels conda-forge \
&& conda install -y -c conda-forge ...
</code></pre>
<p>I don't know much about conda (but have to use it).
It is Conda messing with the my Python or I did it something wrong?</p>
|
<python><docker><dockerfile>
|
2022-12-28 11:56:26
| 0
| 633
|
Claudiu
|
74,939,758
| 12,985,993
|
Camelot: DeprecationError: PdfFileReader is deprecated
|
<p>I have been using camelot for our project, but since 2 days I got following errorMessage. When trying to run following code snippet:</p>
<pre><code>import camelot
tables = camelot.read_pdf('C:\\Users\\user\\Downloads\\foo.pdf', pages='1')
</code></pre>
<p>I get this error:</p>
<pre><code>DeprecationError: PdfFileReader is deprecated and was removed in PyPDF2 3.0.0. Use PdfReader instead.
</code></pre>
<p>I checked this file and it does use pdfFileReader: c:\ProgramData\Anaconda3\lib\site-packages\camelot\handlers.py</p>
<p>I thought that I can specify the version of PyPDF2, but it will be installed automatically(because the library is used by camelot) when I install camelot. Do you think there is any solution to specify the version of PyPDF2 manually?</p>
|
<python><pypdf><python-camelot>
|
2022-12-28 11:35:44
| 9
| 498
|
Said Akyuz
|
74,939,689
| 9,781,350
|
Python Selenium with Proxy is unpredictable
|
<p>I have been working on a web scraper, it works fine without proxy, but on some website I need a proxy to avoid getting an Access Denied.
The problem is that the proxy is far away from me and the page gets loaded very slowly, it takes around from 80 to 100 seconds to fully load and most of the time it just fails, like if the page is getting renderer partially or just wrong.
I'm not sure whats the problem but it seems pretty unpredictable because sometime an item is being scrapped and sometime not..</p>
<p>Can the slowness of the page loading cause this or there is another problem which I'm not aware of? Without proxy everything is fine and the code is the same.</p>
<p>PS: I'm using the Chrome Driver</p>
|
<python><selenium><proxy>
|
2022-12-28 11:28:57
| 0
| 325
|
Ivan Lo Greco
|
74,939,407
| 159,072
|
Anynchronously update a web page using Flask
|
<p>I want to write a Flask+JavaScript based program that has two files <code>app.py</code> and <code>index.html</code> where we will run a lengthy task that takes one minue to finish and when the task finishes it will automatically update the <code>index.html</code> page asynchronously.</p>
<p>So I wrote:</p>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask, render_template
from threading import Thread
import time
app = Flask(__name__)
task_complete = False
def run_lengthy_task():
# Perform the lengthy task here
time.sleep(60) # simulate a task that takes 1 minute to complete
global task_complete
task_complete = True
def fetch_data():
# Fetch the updated data here and return it
return "Updated data"
@app.route('/')
def index():
return render_template('index.html')
if __name__ == '__main__':
app.run()
</code></pre>
<p><strong>index.html</strong></p>
<pre><code><html>
<head>
<script>
function startTask() {
// Start the lengthy task in a separate thread
const thread = new Thread(runLengthyTask);
thread.start();
}
function updatePage() {
// Check the status of the task
if (taskComplete) {
// The task has completed, so fetch the updated data and update the page
document.getElementById("data").innerHTML = "Updated data";
} else {
// The task is still running, so set a timer to check again in a few seconds
setTimeout(updatePage, 1000);
}
}
</script>
</head>
<body>
<button onclick="startTask()">Start Task</button>
<div id="data">
<!-- The data will be displayed here once the task finishes -->
</div>
</body>
</html>
</code></pre>
<p>Unfortunately, this doesn't do what I want.</p>
<p>How can I fix the source code?</p>
|
<javascript><python><flask><asynchronous>
|
2022-12-28 11:01:31
| 1
| 17,446
|
user366312
|
74,939,380
| 8,681,229
|
VS Code Terminal: How to use pip version of activated venv=
|
<p>how can I instruct VS Code to use the <code>pip</code> version linked to the activated venv in VS Code Terminal?</p>
<p><strong>What I tried</strong></p>
<p>After activating the same <code>conda virtual env</code> in both Mac Terminal and VS Code Terminal,
when I execute <code>which python</code> both return the same, correct value:</p>
<ul>
<li>Both: <code>/Users/myself/miniconda3/envs/vs-code-3.10/bin/python</code></li>
</ul>
<p>However, when I execute <code>which pip</code> only Mac Terminal returns the correct value and this causes packages installation via VS Code Terminal:</p>
<ul>
<li>Mac Terminal (Correct): <code>/Users/myself/miniconda3/envs/vs-code-3.10/bin/pip</code></li>
<li>VS Code Terminal (Wrong): <code>/usr/local/bin/pip</code></li>
</ul>
<p>Important note: in VS Code I also manually selected the correct Python Interpreter (see screenshot).
<a href="https://i.sstatic.net/wNg7A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wNg7A.png" alt="enter image description here" /></a></p>
<p><strong>Versions</strong></p>
<ul>
<li>macOS 13.1</li>
<li>VS Code 1.74.1</li>
<li>conda 22.9.0</li>
</ul>
|
<python><visual-studio-code><terminal><pip><conda>
|
2022-12-28 10:58:09
| 1
| 3,294
|
Seymour
|
74,939,367
| 5,398,127
|
How do I download all the libraries and dependencies?
|
<p>Suppose if there is a python script which uses libraries like numpy, pandas, keras, tensorflow etc.</p>
<p>Now how do I download all the libraries locally so that script is not breaking due to any updates in libraries.</p>
<p>What I mean is what if certain library is discontinued in future and I want to use it to a different machine where it wasn't installed before. What can be done here?</p>
|
<python><dependencies>
|
2022-12-28 10:56:33
| 2
| 3,480
|
Stupid_Intern
|
74,939,314
| 3,650,983
|
How to comment/uncomment a single line in vim
|
<p>I'm new with vim and working on Mac with python, and want to know how to comment/uncomment a single line with vim. Most of the IDEs that I've worked with use Cmd+/ it there something similar? I tried to find the answer but everything I found was about multiple lines!</p>
|
<python><macos><vim>
|
2022-12-28 10:51:04
| 2
| 4,119
|
ChaosPredictor
|
74,939,274
| 20,878,436
|
How to print mid popular summary of random post of certain topic from Wikipedia?
|
<p>I want to print mid popular summary of random post of certain topic from Wikipedia using Wikipedia API but I am getting completely random topic.</p>
<p>I have tried this:</p>
<pre><code>import requests
def get_wikipedia(topic):
response = requests.get(f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}")
resp = response.json()
if response.status_code == 200:
summary = resp['extract']
image = resp['thumbnail']['source']
print(summary)
print(image)
else:
print("An error occurred while fetching the summary and image.")
BASE_URL = 'https://en.wikipedia.org/w/api.php'
params = {
'format': 'json',
'action': 'query',
'list': 'random',
'rnnamespace': 0,
'rnlimit': 1,
'rnprop': 'title',
'rnfilterredir': 'nonredirects',
'formatversion': 2,
'utf8': 1,
'utf8mb4': 1,
'titles': 'Quantum Mechanics'
}
response = requests.get(BASE_URL, params=params)
title = response.json()['query']['random'][0]['title']
get_wikipedia(title)
</code></pre>
<p>And also I want to print Mid popular (the articles with page views less than 10k per month) topic. How to do it?</p>
|
<python><mediawiki><wikipedia><wikipedia-api>
|
2022-12-28 10:46:45
| 1
| 368
|
Shounak Das
|
74,939,137
| 17,718,587
|
Reading .csv file with Python and DictReader missing columns
|
<p>I have the following <code>.csv</code> file I'm trying to read as a whole:</p>
<pre><code>---------------------------------------------------------------
#INFO | | | | | |
---------------------------------------------------------------
#Study name | g1 | | | | |
---------------------------------------------------------------
#Respondent Name | name | | | | |
---------------------------------------------------------------
#Respondent Age | 20 | | | | |
---------------------------------------------------------------
#DATA | | | | | |
---------------------------------------------------------------
Row | Timestamp | Source | Event | Sample | Anger|
---------------------------------------------------------------
1 | 133 | Face | 1 | 3 | 0.44 |
---------------------------------------------------------------
2 | 240 | Face | 1 | 4 | 0.20 |
---------------------------------------------------------------
3 | 12 | Face | 1 | 5 | 0.13 |
---------------------------------------------------------------
4 | 133 | Face | 1 | 6 | 0.75 |
---------------------------------------------------------------
5 | 87 | Face | 1 | 7 | 0.25 |
---------------------------------------------------------------
</code></pre>
<p><a href="https://i.sstatic.net/L7KpT.png" rel="nofollow noreferrer">Image of the file</a></p>
<p>This is the code I am using to open, read the file, and print to the terminal:</p>
<pre class="lang-py prettyprint-override"><code>import csv
def read_csv():
with open("in/2.csv", encoding="utf-8-sig") as f:
reader = csv.DictReader(f)
print(list(reader))
read_csv()
</code></pre>
<p>The output I am getting includes only the first two columns of my <code>.csv</code>:</p>
<pre><code>[{'#INFO': '#Study name', '': ''}, {'#INFO': '#Respondent Name', '': ''}, {'#INFO': '#Respondent Age', '': ''}, {'#INFO': '#DATA', '': ''}, {'#INFO': 'Row', '': 'Anger'}, {'#INFO': '1', '': '4.40E-01'}, {'#INFO': '2', '': '2.00E-01'}, {'#INFO': '3', '': '1.30E-01'}, {'#INFO': '4', '': '7.50E-01'}, {'#INFO': '5', '': '2.50E-01'}]
</code></pre>
<p>Why are the other columns missing from my output? I want the other columns starting from <code>Row</code> to be included as well. Any direction would be appreciated. Thanks!</p>
|
<python><csv>
|
2022-12-28 10:33:34
| 2
| 2,772
|
ChenBr
|
74,938,992
| 9,960,645
|
How to send command from bot to discrord chanel
|
<p>For example channel have /some_command[prompt]</p>
<p>How to call with command and prompt instead text ?</p>
<pre><code>async def on_ready():
await client.wait_until_ready()
channel = client.get_channel(int(chanel_id))
await channel.send("/some_command[test]")
</code></pre>
|
<python><discord>
|
2022-12-28 10:19:23
| 1
| 696
|
Π ΡΡΠ»Π°Π½ ΠΠΈΡΠΎΠ²
|
74,938,977
| 19,100,318
|
How to exclude objects when importing a python module
|
<p>Suppose this package which contains only one module:</p>
<pre><code>mypackage/
__init__.py
mymodule.py
</code></pre>
<p>The <code>__init__.py</code> file is empty. The module <code>mymodule.py</code> is as follows:</p>
<pre><code>from math import pi
def two_pi():
return 2 * pi
</code></pre>
<p>This is the content of <code>mymodule</code>:</p>
<pre><code>>>> from mypackage import mymodule
>>> dir(mymodule)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'pi', 'two_pi']
</code></pre>
<p>The object <code>pi</code> is present when I import the module, but I don't want it to be there.</p>
<p>How can I avoid <code>pi</code> from being present in <code>mymodule</code>?</p>
<p>I have tried defining <code>__all__ = ['tow_pi']</code>, however this only works when importing with <code>from mypackage.mymodule import *</code>.</p>
|
<python><python-3.x><python-import>
|
2022-12-28 10:17:21
| 3
| 450
|
cruzlorite
|
74,938,915
| 10,437,727
|
VSCode Test Explorer not finding module despite PYTHONPATH defined
|
<p><strong>EDIT</strong></p>
<p>Thanks the comment below, I've managed to get the command to work from the CLI. However, I'm still struggling to make it work using VSCode's test explorer.
Here's my current settings.json:</p>
<pre class="lang-json prettyprint-override"><code>{
...
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true,
"python.testing.cwd": "./data-quality/tests"
}
</code></pre>
<p>Thru the Python output on VSCode, I'm getting <code>modulenotfound</code> errors, as if the <code>PYTHONPATH</code> wasn't taken in account.</p>
<pre class="lang-bash prettyprint-override"><code>> /usr/local/bin/python3 ~/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/testing_tools/unittest_discovery.py . *test*.py
cwd: ./data-quality/tests
[ERROR 2022-11-28 11:31:16.278]: Error discovering unittest tests:
Failed to import test module: test_helpers
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/Users/b242pn/Documents/SOC/data-quality/data-quality/tests/test_helpers.py", line 1, in <module>
from helpers import group_columns_by_vendor_product
ModuleNotFoundError: No module named 'helpers'
</code></pre>
<p>=====================================================================</p>
<p>I've got a project with the following structure:</p>
<pre><code>βββ data-quality
β βββ __pycache__
| βββ helpers.py
β βββ tests
β βββ __pycache__
| βββ test_helpers.py
βββ include
βββ share
</code></pre>
<p>I'm currently in <code>tests</code>, trying to use a module in its parent folder, <code>data-quality</code>. I've heard that to use them in a testing context, it is important to use absolute imports, so I tried doing so:</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.path.append('../')
from helpers import group_columns_by_vendor_product
import unittest
...
</code></pre>
<p>However, whenever I use python to discover these tests, I get a modulenotfound error:</p>
<pre class="lang-bash prettyprint-override"><code> python3 -m unittest discover -v -s data-quality/tests -p 'test_*.py'
======================================================================
ERROR: test_helpers (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: test_helpers
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "data-quality/tests/test_helpers.py", line 4, in <module>
from helpers import group_columns_by_vendor_product
ModuleNotFoundError: No module named 'helpers'
</code></pre>
<p>Am I missing something?</p>
<p>TIA</p>
<p>P.S: I noticed that the tests run when I run the command locally, and inside of the data-quality folder, but not above:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m unittest discover -v -s . -p 'test_*.py'
test_group_columns_by_vendor_product_normal_behavior (test_helpers.TestGroupColumnsByVendorProduct) ... FAIL
======================================================================
FAIL: test_group_columns_by_vendor_product_normal_behavior (test_helpers.TestGroupColumnsByVendorProduct)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/b242pn/Documents/SOC/data-quality/data-quality/tests/test_helpers.py", line 54, in test_group_columns_by_vendor_product_normal_behavior
self.assertEqual([("Microsoft", "Microsoft Windows")], result)
AssertionError: Lists differ: [('Microsoft', 'Microsoft Windows')] != []
First list contains 1 additional elements.
First extra element 0:
('Microsoft', 'Microsoft Windows')
- [('Microsoft', 'Microsoft Windows')]
+ []
----------------------------------------------------------------------
Ran 1 test in 0.001s
python3 -m unittest discover -v -s ./tests -p 'test_*.py'
test_group_columns_by_vendor_product_normal_behavior (test_helpers.TestGroupColumnsByVendorProduct) ... FAIL
======================================================================
FAIL: test_group_columns_by_vendor_product_normal_behavior (test_helpers.TestGroupColumnsByVendorProduct)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/b242pn/Documents/SOC/data-quality/data-quality/tests/test_helpers.py", line 54, in test_group_columns_by_vendor_product_normal_behavior
self.assertEqual([("Microsoft", "Microsoft Windows")], result)
AssertionError: Lists differ: [('Microsoft', 'Microsoft Windows')] != []
First list contains 1 additional elements.
First extra element 0:
('Microsoft', 'Microsoft Windows')
- [('Microsoft', 'Microsoft Windows')]
+ []
----------------------------------------------------------------------
Ran 1 test in 0.001s
</code></pre>
|
<python><python-3.x><visual-studio-code><python-unittest>
|
2022-12-28 10:10:47
| 0
| 1,760
|
Fares
|
74,938,891
| 5,006,606
|
Is it possible to provide a scoring metric to sklearn RFE?
|
<p>I need to get subsets of top 1, top 2, top 3, etc. features, and performance of my model on each of these subsets. Something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Number of features</th>
<th>Features</th>
<th>Performance</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>A</td>
<td>0.7</td>
</tr>
<tr>
<td>2</td>
<td>A, D</td>
<td>0.72</td>
</tr>
<tr>
<td>3</td>
<td>A, D, B</td>
<td>0.75</td>
</tr>
</tbody>
</table>
</div>
<p>I wanted to use RFE as a possible improvement over simply using feature importances from models.</p>
<p>In sklearn, the RFECV object has a <code>ranking_</code> attribute, which would let me create the feature subsets. The problem is that all features below the number of features that RFECV found to be optimal are equal to 1, so the first <code>k</code> features are not ordered by importance.</p>
<p>I thought of using a simple RFE instead, but it doesn't accept the <code>scoring</code> parameter, and the default <code>accuracy</code> is not appropriate in my case where classes are very unbalanced.</p>
<p>Is there a way to either somehow provide a scoring metric to sklearn RFE, or to force RFECV (that does accept a <code>scoring</code> parameter) to evaluate ranking below the 'optimal' number of features?</p>
<p>I also considered using SFS but I have about 500 features and it takes days to run.</p>
|
<python><machine-learning><scikit-learn><feature-selection><rfe>
|
2022-12-28 10:09:28
| 0
| 315
|
Charlie
|
74,938,816
| 1,866,576
|
Reuse environment on Tox 4
|
<p>This is my tox.ini file:</p>
<pre><code># Tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
#
# See also https://tox.readthedocs.io/en/latest/config.html for more
# configuration options.
[tox]
# Choose your Python versions. They have to be available
# on the system the tests are run on.
# skipsdist=True
ignore_basepython_conflict=false
[testenv:{setup,lint,codestyle,docstyle,tests,doc-linux,doc-darwin,doc-win32}]
basepython=python3.9
envdir = {toxworkdir}/py39
setenv =
PROJECT_NAME = project_name
passenv =
WINDIR
install_command=
pip install \
--find-links=pkg \
--trusted-host=pypi.python.org \
--trusted-host=pypi.org \
--trusted-host=files.pythonhosted.org \
{opts} {packages}
platform = doc-linux: linux
doc-darwin: darwin
doc-win32: win32
deps =
-r{toxinidir}/requirements-dev.txt
-r{toxinidir}/requirements.txt
commands =
setup: python -c "print('All SetUp')"
# Mind the gap, use a backslash :)
lint: pylint -f parseable -r n --disable duplicate-code \
lint: --extension-pkg-whitelist=PyQt5,numpy,torch,cv2,boto3 \
lint: --ignored-modules=PyQt5,numpy,torch,cv2,boto3 \
lint: --ignored-classes=PyQt5,numpy,torch,cv2,boto3 \
lint: project_name \
lint: {toxinidir}/script
lint: pylint -f parseable -r n --disable duplicate-code \
lint: demo/demo_file.py
codestyle: pycodestyle --max-line-length=100 \
codestyle: --exclude=project_name/third_party/* \
codestyle: project_name demo script
docstyle: pydocstyle \
docstyle: --match-dir='^((?!(third_party|deprecated)).)*' \
docstyle: project_name demo script
doc-linux: make -C {toxinidir}/doc html
doc-darwin: make -C {toxinidir}/doc html
doc-win32: {toxinidir}/doc/make.bat html
tests: python -m pytest -v -s --cov-report xml --durations=10 \
tests: --cov=project_name --cov=script \
tests: {toxinidir}/test
tests: coverage report -m --fail-under 100
</code></pre>
<p>On tox<4.0 it was very convinient to run <code>tox -e lint</code> to fix linting stuff or <code>tox -e codestyle</code> tox fix codestyle stuff, etc. But now, with version tox>4.0 each time I run one of these commands I get this message (for instance):</p>
<pre><code>codestyle: recreate env because env type changed from {'name': 'lint', 'type': 'VirtualEnvRunner'} to {'name': 'codestyle', 'type': 'VirtualEnvRunner'}
codestyle: remove tox env folder .tox/py39
</code></pre>
<p>And it takes forever to run these commands since the evironments are recreated each time ...</p>
<p>I also use these structure for running tests on jenkins so I can map each of these commands to a jenkins stage.</p>
<p>How can I reuse the environment? I have read that it is possible to do it using plugins, but no idea how this can be done, or how to install/use plugins.</p>
<p>I have tried this:
<a href="https://stackoverflow.com/questions/57222212/tox-multiple-tests-re-using-tox-environment">tox multiple tests, re-using tox environment</a></p>
<p>But it does not work in my case.</p>
<p>I spect to reuse the environment for each of the environments defined in the tox file.</p>
|
<python><tox>
|
2022-12-28 10:01:37
| 2
| 331
|
Marcus
|
74,938,477
| 13,596,591
|
Reducing Item by -10 in Class Based View
|
<p>I have a button when clicked should reduce an Item by 10, it is possible to achieve using a function not as_view, please help on reproducing the same effect on a ListView</p>
<p>Views.py:</p>
<pre><code> def subs(request, pk):
sw = get_object_or_404(Swimmers,id=pk)
c_s = sw.sessions - 10
sw.sessions = c_s
sw.save()
return render(request, 'accounts/modals/swimming/_vw_table.html', {'sw': sw})
</code></pre>
<p>URL:</p>
<pre><code>path('subt/<int:pk>/', views.subs, name='subt'),
</code></pre>
<p>Template:</p>
<pre><code> <form id="post-form">
<div class="col float-right">
<button type="submit" class="float-right btn btn-primary" id="saveBtn" name="button">REDUCE SESSION</a>
</div>
</form>
</code></pre>
|
<python><django><django-rest-framework><django-views><django-forms>
|
2022-12-28 09:27:52
| 0
| 431
|
d3vkidhash
|
74,938,342
| 10,847,096
|
Pyspark DataFrame Filter column based on a column in another DataFrame without join
|
<p>I have a pyspark dataframe called <code>df1</code> that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID1</th>
<th style="text-align: center;">ID2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">aaaa</td>
<td style="text-align: center;">a1</td>
</tr>
<tr>
<td style="text-align: center;">bbbb</td>
<td style="text-align: center;">a2</td>
</tr>
<tr>
<td style="text-align: center;">aaaa</td>
<td style="text-align: center;">a3</td>
</tr>
<tr>
<td style="text-align: center;">bbbb</td>
<td style="text-align: center;">a4</td>
</tr>
<tr>
<td style="text-align: center;">cccc</td>
<td style="text-align: center;">a2</td>
</tr>
</tbody>
</table>
</div>
<p>And I have another dataframe called <code>df2</code> that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID2_1</th>
<th style="text-align: center;">ID2_2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">a2</td>
<td style="text-align: center;">a1</td>
</tr>
<tr>
<td style="text-align: center;">a3</td>
<td style="text-align: center;">a2</td>
</tr>
<tr>
<td style="text-align: center;">a2</td>
<td style="text-align: center;">a3</td>
</tr>
<tr>
<td style="text-align: center;">a2</td>
<td style="text-align: center;">a1</td>
</tr>
</tbody>
</table>
</div>
<p>where the values of the ID2 in the first dataframe matches to the values in columns ID2_1, ID2_2 in the second dataframe.</p>
<p>So the resultant dataframe will look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID1</th>
<th style="text-align: center;">ID2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">aaaa</td>
<td style="text-align: center;">a1</td>
</tr>
<tr>
<td style="text-align: center;">bbbb</td>
<td style="text-align: center;">a2</td>
</tr>
<tr>
<td style="text-align: center;">aaaa</td>
<td style="text-align: center;">a3</td>
</tr>
<tr>
<td style="text-align: center;">cccc</td>
<td style="text-align: center;">a2</td>
</tr>
</tbody>
</table>
</div>
<p>(fourth line was filtered out)</p>
<p>I want to filter the column ID2 to contain only values that appear in one of the columns ID2_1 or ID2_2.
I tried doing</p>
<pre class="lang-py prettyprint-override"><code>filter= df1.filter((f.col("ID2").isin(df2.ID2_1)))|
(f.col("ID2").isin(df2.ID2_2)))
</code></pre>
<p>But this doesn't seem to work.
I have seen other suggestions to use a <code>join</code> between the two columns but this operation is way too heavy and I'm trying to avoid such actions. Any suggestions as to how to do this task?</p>
|
<python><dataframe><pyspark>
|
2022-12-28 09:13:20
| 1
| 993
|
Ofek Glick
|
74,938,100
| 10,480,181
|
How to set up imports correctly in Python?
|
<p>I have the following directory structure for my python project.</p>
<pre><code>.
βββ AUTHORS.rst
βββ CONTRIBUTING.rst
βββ HISTORY.rst
βββ MANIFEST.in
βββ Makefile
βββ README.rst
βββ __pycache__
β βββ test.cpython-310.pyc
βββ docs
βββ logs
βββ python_server_copy
β βββ __init__.py
β βββ cli.py
β βββ copy.py
β βββ python_server_copy.py
β βββ test.py
β βββ utils.py
βββ requirements_dev.txt
βββ setup.cfg
βββ setup.py
βββ tests
βββ tox.ini
</code></pre>
<p>This is <code>click</code> based command line application. I have imports defined in the files with the package name. For example:</p>
<p>In <code>copy.py</code>:</p>
<pre><code>from python_server_copy.utils import get_project_root
</code></pre>
<p>However these imports don't work when I am testing. I get <code>ModuleNotFound</code> Error when running the files as a script directly. I have to remove the <code>python_server_copy</code> package name every time I am testing. Everything works fine if it is run as command line click application.</p>
<p>I need suggestions on how to test it properly and handle imports. TIA</p>
<p>I am using python 3.7.11 if that is useful.</p>
|
<python><python-3.x><import><python-import><modulenotfounderror>
|
2022-12-28 08:46:51
| 1
| 883
|
Vandit Goel
|
74,938,028
| 947,506
|
python dict pop could not return pytorch tensor
|
<p>My code is as follows:</p>
<pre><code>import torch
mydict = {"state0": torch.tensor(12345.6), "state1":torch.tensor(23456.7)}
print(f"mydict: {mydict}")
val = mydict.pop("state0")
print(f"val={val}")
</code></pre>
<p>The output is:</p>
<pre><code>mydict: {'state0': tensor(12345.5996), 'state1': tensor(23456.6992)}
val=12345.599609375
</code></pre>
<p>Here why the pop() does not return tensor(12345.5996)? I need it to be a tensor instead of a scalar.</p>
|
<python><pytorch>
|
2022-12-28 08:39:24
| 1
| 377
|
silence_lamb
|
74,937,949
| 3,375,378
|
Pandas changing value of columns before saving to mysql
|
<p>I've got a mysql table defined as follows:</p>
<pre><code>CREATE TABLE series_info (
series_id INT NOT NULL PRIMARY KEY,
s_area VARCHAR(40) NOT NULL,
s_code VARCHAR(10) NOT NULL,
dataset_version VARCHAR(20) NOT NULL
);
</code></pre>
<p>In my Python module, I'm trying to save a Pandas DataFrame into the table with the following code:</p>
<pre><code>.
. insert 3 records in df_series_info (see output)
.
conn = sqlalchemy.create_engine( \
f"mysql+mysqlconnector://{DATABASER_USER}:{DATABASER_PASSWORD}@{DATABASE_HOST}/{DATABASE}")
print(df_series_info.dtypes)
print(df_series_info.head())
df_series_info.to_sql(name='series_info', con=conn, if_exists='append', index=False)
</code></pre>
<p>But I get the following output:</p>
<pre><code>series_id int64
s_area object
s_code object
dataset_version object
dtype: object
series_id s_area s_code dataset_version
0 7179385449897108 AFRG 170202 2022-12-29-SYN-01
0 6756567457160340 AFRG 170203 2022-12-29-SYN-01
0 7692362580057236 AFRG 160305 2022-12-29-SYN-01
Traceback (most recent call last):
File "D:\Projects\AIGreenFleet\dev\aigreenfleet-series-data-generator\.venv\lib\site-packages\mysql\connector\connection_cext.py", line 565, in cmd_query
self._cmysql.query(
_mysql_connector.MySQLInterfaceError: Duplicate entry '2147483647' for key 'PRIMARY'
.
.
.
The above exception was the direct cause of the following exception:
.
.
.
sqlalchemy.exc.IntegrityError: (mysql.connector.errors.IntegrityError) 1062 (23000): Duplicate entry '2147483647' for key 'PRIMARY'
[SQL: INSERT INTO series_info (series_id, s_area, s_code, dataset_version) VALUES (%(series_id)s, %(s_area)s, %(s_code)s, %(dataset_version)s)]
[parameters: ({'series_id': '2952743774805140', 's_area': 'AFRG', 's_code': 170202, 'dataset_version': '2022-12-29-SYN-01'}, {'series_id': '9080550407298196', 's_area': 'AFRG', 's_code': 170203, 'dataset_version': '2022-12-29-SYN-01'}, {'series_id': '2972479528656020', 's_area': 'AFRG', 's_code': 160305, 'dataset_version': '2022-12-29-SYN-01'})]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
</code></pre>
<p>Now, as you see from the output of <code>print(df_series_info.head())</code>, the value in the series_id field is definetly not 2147483647.</p>
<p>If I remove the <code>PRIMARY KEY</code> clause from the table creation statement, the code works (but puts '2147483647' as value for series_id in all rows).</p>
<p>It seems that pandas or the mysql connector are converting the value of the series_id column prior to saving it to the database.</p>
<p>Any idea what I might be doing wrong?</p>
|
<python><mysql><pandas>
|
2022-12-28 08:28:53
| 1
| 2,598
|
chrx
|
74,937,926
| 979,974
|
Python plot heatmap from csv pixel file with panda
|
<p>I would like to plot a heatmap from a csv file which contains pixels position. This csv file has this shape:</p>
<pre><code>0 0 8.400000e+01
1 0 8.500000e+01
2 0 8.700000e+01
3 0 8.500000e+01
4 0 9.400000e+01
5 0 7.700000e+01
6 0 8.000000e+01
7 0 8.300000e+01
8 0 8.900000e+01
9 0 8.500000e+01
10 0 8.300000e+01
</code></pre>
<p>I try to write some lines in Python, but it returns me an error. I guess it is the format of column 3 which contains string. Is there any way to plot this kind of file?</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
path_to_csv= "/run/media/test.txt"
df= pd.read_csv(path_to_csv ,sep='\t')
plt.imshow(df,cmap='hot',interpolation='nearest')
plt.show(df)
</code></pre>
<p>I tried also seaborn but with no success.
Here the error returned:</p>
<pre><code>TypeError: Image data of dtype object cannot be converted to float
</code></pre>
|
<python><csv><heatmap><pixel>
|
2022-12-28 08:26:48
| 1
| 953
|
user979974
|
74,937,869
| 7,317,408
|
ValueError: cannot reindex on an axis with duplicate labels when adding EMAs via group_by
|
<p>Sorry for the noob question.</p>
<p>I have a bunch of minute data like so:</p>
<pre><code> timestamp open high low close volume trade_count vwap symbol volume_10_day
0 2021-06-28 11:15:00+00:00 1.8899 1.89 1.8899 1.8900 1200 2 1.889942 AADI 102027.6
1 2021-06-28 13:15:00+00:00 1.8200 1.82 1.8100 1.8100 710 8 1.818282 AADI 102027.6
2 2021-06-28 13:30:00+00:00 1.8600 1.87 1.8500 1.8600 13845 52 1.858409 AADI 102027.6
3 2021-06-28 13:35:00+00:00 1.8659 1.87 1.8626 1.8700 860 5 1.867670 AADI 102027.6
4 2021-06-28 13:40:00+00:00 1.8700 1.87 1.8477 1.8477 7386 33 1.859019 AADI 102027.6
... ... ... ... ... ... ... ... ... ... ...
423069 2021-01-13 00:35:00+00:00 1.2600 1.27 1.2500 1.2600 242391 140 1.260218 ZOM 122065.8
423070 2021-01-13 00:40:00+00:00 1.2600 1.26 1.2500 1.2600 129074 108 1.256255 ZOM 122065.8
423071 2021-01-13 00:45:00+00:00 1.2500 1.26 1.2500 1.2500 297198 151 1.253695 ZOM 122065.8
423072 2021-01-13 00:50:00+00:00 1.2600 1.26 1.2500 1.2600 223822 121 1.256325 ZOM 122065.8
423073 2021-01-13 00:55:00+00:00 1.2600 1.26 1.2500 1.2600 378248 222 1.255110 ZOM 122065.8
[423074 rows x 10 columns]
timestamp open high low close volume trade_count vwap symbol volume_10_day
2021-07-26 13:00:00+00:00 2.7799 2.78 2.68 2.7 182448 285 2.712106 SOS 6552.7
2021-07-26 13:05:00+00:00 2.7 2.71 2.68 2.69 46906 154 2.692083 SOS 6552.7
</code></pre>
<p>I am then grouping them by their symbols, and adding EMAs to them:</p>
<pre><code>ema72 = lambda x: ta.ema(df.loc[x.index, "close"], 72)
ema89 = lambda x: ta.ema(df.loc[x.index, "close"], 89)
ema216 = lambda x: ta.ema(df.loc[x.index, "close"], 216)
ema267 = lambda x: ta.ema(df.loc[x.index, "close"], 267)
ema200 = lambda x: ta.ema(df.loc[x.index, "close"], 200)
df["EMA72"] = df.groupby(['symbol']).apply(ema72).reset_index(0,drop=True)
df["EMA89"] = df.groupby(['symbol']).apply(ema89).reset_index(0,drop=True)
df["EMA216"] = df.groupby(['symbol']).apply(ema216).reset_index(0,drop=True)
df["EMA267"] = df.groupby(['symbol']).apply(ema267).reset_index(0,drop=True)
df["EMA200"] = df.groupby(['symbol']).apply(ema200).reset_index(0,drop=True)
</code></pre>
<p>Which gives the error:</p>
<pre><code> raise ValueError("cannot reindex on an axis with duplicate labels")
ValueError: cannot reindex on an axis with duplicate labels
</code></pre>
<p>What am I doing wrong here?</p>
<p><strong>UPDATE</strong></p>
<p>I can add the 72 and 89 ema fine by doing:</p>
<pre><code>df["EMA72"] = df.groupby(['symbol']).apply(ema72).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
df["EMA89"] = df.groupby(['symbol']).apply(ema89).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
</code></pre>
<p>Will result in:</p>
<pre><code> timestamp open high low close volume trade_count vwap symbol volume_10_day EMA72 EMA89
0 2021-07-06 13:30:00+00:00 1.7200 1.7300 1.71 1.730 23342 54 1.723628 AADI 102027.6 NaN NaN
1 2021-07-06 13:35:00+00:00 1.7300 1.7300 1.72 1.720 13225 26 1.729698 AADI 102027.6 NaN NaN
2 2021-07-06 13:40:00+00:00 1.7199 1.7199 1.71 1.715 1740 12 1.712175 AADI 102027.6 NaN NaN
3 2021-07-06 13:45:00+00:00 1.7150 1.7177 1.71 1.710 1223 9 1.714411 AADI 102027.6 NaN NaN
4 2021-07-06 13:50:00+00:00 1.7100 1.7100 1.71 1.710 900 1 1.710000 AADI 102027.6 NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ...
247074 2021-01-13 00:35:00+00:00 1.2600 1.2700 1.25 1.260 242391 140 1.260218 ZOM 122065.8 1.264493 1.258082
247075 2021-01-13 00:40:00+00:00 1.2600 1.2600 1.25 1.260 129074 108 1.256255 ZOM 122065.8 1.264370 1.258125
247076 2021-01-13 00:45:00+00:00 1.2500 1.2600 1.25 1.250 297198 151 1.253695 ZOM 122065.8 1.263976 1.257944
247077 2021-01-13 00:50:00+00:00 1.2600 1.2600 1.25 1.260 223822 121 1.256325 ZOM 122065.8 1.263867 1.257990
247078 2021-01-13 00:55:00+00:00 1.2600 1.2600 1.25 1.260 378248 222 1.255110 ZOM 122065.8 1.263761 1.258035
[247079 rows x 12 columns]
</code></pre>
<p>But doing this:</p>
<pre><code>df["EMA72"] = df.groupby(['symbol']).apply(ema72).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
df["EMA89"] = df.groupby(['symbol']).apply(ema89).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
df["EMA216"] = df.groupby(['symbol']).apply(ema216).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
df["EMA267"] = df.groupby(['symbol']).apply(ema267).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
df["EMA200"] = df.groupby(['symbol']).apply(ema200).reset_index(0,drop=True)
df = df[~df.index.duplicated()]
</code></pre>
<p>Gives the error:</p>
<pre><code> raise ValueError("cannot reindex on an axis with duplicate labels")
ValueError: cannot reindex on an axis with duplicate labels
</code></pre>
<p>I have tried removing the duplicated timestamps for each symbol:</p>
<pre><code>df = df.groupby(['symbol', 'timestamp'])
df = df.last().sort_index().reset_index()
247079 // before
233228 // after
</code></pre>
<p>But it still gives the same error</p>
<p>I thought it may be something to do with the functions but when I do this:</p>
<pre><code>df["EMA200"] = df.groupby('symbol')['close'].rolling(200).mean()
</code></pre>
<p>The error persists</p>
|
<python><pandas><numpy><stock><pandas-ta>
|
2022-12-28 08:21:22
| 0
| 3,436
|
a7dc
|
74,937,551
| 6,684,751
|
Flask Socket application deployment using nginx and gunicorn in Ubuntu 22.04 is not working. Getting 404 error from nginx
|
<p>When I run the below code from a local Ubuntu machine, the socket connection is successfully established; however, when I deploy it to an Ubuntu server, it does not work.</p>
<p>This is the error message I received from Nginx.</p>
<pre><code>- [28/Dec/2022:07:29:10 +0000] "GET /media HTTP/1.1" 404 153 "-" "Boost.Beast/266" "3.235.111.201"
</code></pre>
<p>Linux is running and I check the IP it is 200 and the application also is running.</p>
<p>The code is in the file <code>aap.py</code></p>
<pre><code>import json
import logging
from flask import Flask, request, jsonify
from flask_sockets import Sockets
from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler
import os
import logging
from logging.handlers import RotatingFileHandler
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
sockets = Sockets(app)
@app.route("/hello", methods=['GET', 'POST'])
def hello():
print("testing")
return jsonify({"message": "Success"})
@sockets.route('/test')
def test(ws):
print(f"*****Test****{ws}")
@sockets.route('/media')
def echo(ws):
print(f"Media WS: {ws}")
while True:
message = ws.receive()
packet = json.loads(message)
print(packet)
if __name__ == '__main__':
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
file_handler = RotatingFileHandler('log_data.log', maxBytes=10000, backupCount=2)
file_handler.setFormatter(formatter)
logging.basicConfig(handlers=[file_handler], level=logging.DEBUG)
logger = logging.getLogger('log_data.log')
app.logger.setLevel(logging.DEBUG)
server = pywsgi.WSGIServer(('127.0.0.1', 5000), app, handler_class=WebSocketHandler)
#server = pywsgi.WSGIServer(('', 5000), app, handler_class=WebSocketHandler)
server.serve_forever()
</code></pre>
<p>from the localhost, I run the command <code>python3 app.py</code>
Here is the screenshot of the terminal and socket connection test result.</p>
<p><a href="https://i.sstatic.net/oHAMo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oHAMo.png" alt="localhost python running" /></a></p>
<p><a href="https://i.sstatic.net/jkyPP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jkyPP.png" alt="client socket connection success" /></a></p>
<p>My nginx config is in file <code>/etc/nginx/sites-available/test-socket</code> and the content is:</p>
<pre><code>server {
listen 80;
server_name _;
access_log /var/log/nginx/access.log;
location / {
include proxy_params;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:5000;
}
}
</code></pre>
<p>I am running one service file <code>/etc/systemd/system/demo.service </code></p>
<pre><code>[Unit]
Description=Gunicorn instance to serve testApp
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/root/test-socket
Environment="PATH=/root/test-socket/myenv/bin"
ExecStart=/root/test-socket/myenv/bin/gunicorn --worker-class eventlet -w1 --workers 3 --bind 127.0.0.1:5000 app:app
[Install]
WantedBy=multi-user.target
</code></pre>
<p><a href="https://i.sstatic.net/Tcldp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tcldp.png" alt="Application is running" /></a></p>
|
<python><sockets><nginx><flask>
|
2022-12-28 07:45:35
| 0
| 906
|
arun n a
|
74,937,536
| 719,001
|
read variable value from bash executable
|
<p>I have a script:</p>
<p>x.sh:</p>
<pre><code>#!/bin/bash
echo nir
</code></pre>
<p>config.sh:</p>
<pre><code>#!/bin/bash
x=$(/home/nir/x.sh)
</code></pre>
<p>script.py:</p>
<pre><code>#!/usr/bin/env python3
#execute config.sh and read x into y
print(y)
</code></pre>
<p>terminal:</p>
<pre><code>$ ] ./x.sh
nir
$ ] . ./config.sh
$ ] echo $x
nir
$ ] ./script.py
nir
</code></pre>
<p>I want to use <code>config.sh</code> in a python script to execute it and then read the value in <code>x</code>. Not sure how to do it.</p>
|
<python><python-3.x><bash>
|
2022-12-28 07:44:08
| 1
| 2,677
|
Nir
|
74,937,459
| 11,693,768
|
Converting a column of strings with different formats that contains year from 2 digits (YY) to 4 digits (YYYY) in python pandas
|
<p>I have a dataframe with the following column. Each row contains different format strings.</p>
<pre><code>col |
----------------------
GRA/B
TPP
BBMY
...
SOCBBA 0 MAX
CMBD 0 MAX
EPR 5.75 MAX
...
PMUST 5.57643 02/15/34
LEO 0 12/30/2099
RGB 3.125 09/15/14
RGB 3.375 04/15/20
</code></pre>
<p>I want to convert all the dates to a format that shows the full year.</p>
<p>Is there a way to regex this so that it looks like this.</p>
<pre><code>col |
----------------------
GRA/B
TPP
BBMY
...
SOCBBA 0 MAX
CMBD 0 MAX
EPR 5.75 MAX
...
PMUST 5.57643 02/15/2034
LEO 0 12/30/2099
RGB 3.125 09/15/2014
RGB 3.375 04/15/2020
</code></pre>
<p>Right now the only thing I can think of doing is doing,</p>
<pre><code>df['col'] = df['col'].str.replace('/14', '/2014')
</code></pre>
<p>for each year, but theres many years, also it will replace the days and months as well.</p>
<p>How can I achieve this properly, should I be using regex?</p>
|
<python><pandas><string><date><replace>
|
2022-12-28 07:35:40
| 1
| 5,234
|
anarchy
|
74,937,236
| 5,810,015
|
Python unable to process heavy files
|
<p>I have a zipped (.gz) log file logfile.20221227.gz. I am writing a python script to process it. I did a test run with file which had 100 lines and the script worked fine. When i ran the same script on actual log file which is almost 5GB the script gets broken. Note that i was able to process log files upto 2GB. Unfortunately the only log file heavier than this is 5GB+ or 7GB+ and the script fails with both of them. My code is as below.</p>
<pre><code>count = 0
toomany = 0
maxhits = 5000
logfile = '/foo/bar/logfile.20221228.gz'
with gzip.open(logfile, 'rt', encoding='utf-8') as page:
for line in page:
count += 1
print("\nFor loop count is: ",count)
string = line.split(' ', 5)
if len(string) < 5:
continue
level = string[3]
shortline = line[0:499]
if level == 'FATAL':
log_lines.append(shortline)
total_fatal += 1
elif level == 'ERROR':
log_lines.append(shortline)
total_error += 1
elif level == 'WARN':
log_lines.append(shortline)
total_warn += 1
if not toomany and (total_fatal + total_error + total_warn) > max_hits:
toomany = 1
if len(log_lines) > 0:
send_report(total_fatal, total_error, total_warn, toomany, log_lines, max_hits)
</code></pre>
<p>Output:</p>
<pre><code>For loop count is: 1
.
.
For loop count is: 192227123
Killed
</code></pre>
<p>What does the <code>Killed</code> means here? It does not offer much to investigate just with this one keyword. Also is there a limit on file size and is there a way to bypass it.</p>
<p>Thank you.</p>
|
<python><out-of-memory><limit><sigkill>
|
2022-12-28 07:06:25
| 1
| 417
|
data-bite
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.