| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,879,039
| 913,494
|
Python function not writing or overwriting when threading
|
<p>I have a script that takes a group of images on the local machine and sends them to removeBG using threading. If a request is successful, it gets the resulting file, uploads it to S3 and grabs the S3 URL. We then pass this URL to BannerBear to generate a composite, which returns another URL that we print to the screen, and then we fetch that file so we can write it locally.</p>
<p>While all the text of each step is printed to the screen properly, somewhere along the way the local write of the final image gets skipped, or gets overwritten by the other file that is being processed. If I go one image at a time it works.</p>
<p><strong>Code:</strong></p>
<pre><code># Check the result of the RemoveBG API request if ok write the file locally and upload to s3
if response.status_code == requests.codes.ok:
    with open(
        os.path.join(OUTPUT_DIR, os.path.splitext(file)[0] + ".png"), "wb"
    ) as out:
        out.write(response.content)
    # Open a file-like object using io.BytesIO
    image_data = io.BytesIO(response.content)
    # Upload the image data to S3
    s3.upload_fileobj(image_data, bucket_name, object_key)
    # Get the URL for the uploaded image
    image_url = f"https://{bucket_name}.s3.amazonaws.com/{object_key}"
    print(f"Image uploaded to S3: {image_url}")
    # pass the s3 url to BannerBear
    bannerbear(image_url)

# start BannerBear Func
def bannerbear(photo_url):
    # Send a POST request to the endpoint URL
    responseBB = requests.post(endpoint_url, headers=headers, json=data)
    if responseBB.status_code == 200:
        # Print the URL of the generated image
        print(responseBB.json()["image_url_jpg"])
        # Write the image data to a file
        filedata = requests.get(responseBB.json()["image_url_jpg"])
        with open(
            os.path.join(OUTPUT_DIR, "bb", os.path.splitext(file)[0] + ".jpg"), "wb"
        ) as f:
            f.write(filedata.content)
        print("BannerBear File Written")
    else:
        # Something went wrong
        print(f"An error occurred: {responseBB.status_code}")
        print(response.json()["message"])

# Create a thread pool with a maximum of 4 threads
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    # Iterate through all the files in the image directory
    for file in os.listdir(IMAGE_DIR):
        # Check if the file is an image
        if file.endswith(".jpg") or file.endswith(".png"):
            # Submit the task to the thread pool
            executor.submit(process_image, file)
</code></pre>
<p>This is what I see in the console, so it's going through all the steps:
<a href="https://i.sstatic.net/qLFTq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qLFTq.png" alt="enter image description here" /></a></p>
|
<python>
|
2022-12-21 16:30:34
| 0
| 535
|
Matt Winer
|
74,879,020
| 19,155,645
|
pandas value_counts(): directly compare two instances
|
<p>I have done <code>.value_counts()</code> on two DataFrames (similar column) and would like to compare the two.<br>
I also tried converting the resulting Series to dataframes (<code>.to_frame('counts')</code> as suggested in this <a href="https://stackoverflow.com/questions/71488764/how-to-compare-value-counts-of-two-dataframes">thread</a>), but it doesn't help.</p>
<pre><code>first = df1['company'].value_counts()
second = df2['company'].value_counts()
</code></pre>
<p>I tried to merge, but I think the main problem is that I don't have the company name as a column; it's the index. Is there a way to resolve that, or a different way to get the comparison? <br></p>
<p><b> GOAL: </b>The end goal is to be able to see which companies occur more in df2 than in df1, and the value_counts() themselves (or the difference between them).</p>
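<p>A minimal sketch of one way to line the two counts up, assuming the goal is a single frame indexed by company name:</p>
<pre><code>import pandas as pd

first = df1['company'].value_counts()
second = df2['company'].value_counts()

# align the two counts on the company name (the shared index)
counts = pd.concat([first.rename('df1'), second.rename('df2')], axis=1).fillna(0)

# a positive 'diff' means the company occurs more often in df2 than in df1
counts['diff'] = counts['df2'] - counts['df1']
print(counts.sort_values('diff', ascending=False))
</code></pre>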
|
<python><pandas>
|
2022-12-21 16:29:01
| 2
| 512
|
ArieAI
|
74,879,012
| 4,490,376
|
How to preserve Flask app context across Celery and SQLAlchemy
|
<p>I'm trying to learn Flask by building a proof-of-concept Flask app that takes a JSON payload and uses SQLAlchemy to write it to a DB. I'm using Celery to manage the write tasks.</p>
<p>The app is structured as follows:</p>
<pre><code>|-app.py
|-project
  |-__init__.py
  |-celery_utils.py
  |-config.py
  |-users
    |-__init__.py
    |-models.py
    |-tasks.py
</code></pre>
<p><code>app.py</code> builds the flask app and celery instance.</p>
<p><em>app.py</em></p>
<pre><code>from project import create_app, ext_celery

app = create_app()
celery = ext_celery.celery


@app.route("/")
def alive():
    return "alive"
</code></pre>
<p><code> /project/__init__.py</code> is the application factory for the flask app. It instantiates the extensions, links everything together, and registers the blueprints.</p>
<p><em>/project/__init__.py</em></p>
<pre><code>import os

from flask import Flask
from flask_celeryext import FlaskCeleryExt
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy

from project.celery_utils import make_celery
from project.config import config

# instantiate extensions
db = SQLAlchemy()
migrate = Migrate()
ext_celery = FlaskCeleryExt(create_celery_app=make_celery)


def create_app(config_name=None):
    if config_name is None:
        config_name = os.environ.get("FLASK_CONFIG", "development")

    # instantiate the app
    app = Flask(__name__)

    # set config
    app.config.from_object(config[config_name])

    # set up extensions
    db.init_app(app)
    migrate.init_app(app, db)
    ext_celery.init_app(app)

    # register blueprints
    from project.users import users_blueprint
    app.register_blueprint(users_blueprint)

    # shell context for flask cli
    @app.shell_context_processor
    def ctx():
        return {"app": app, "db": db}

    return app
</code></pre>
<p><code>/project/celery_utils.py</code> manages the creation of the celery instances.</p>
<p><em>/project/celery_utils.py</em></p>
<pre><code>from celery import current_app as current_celery_app

def make_celery(app):
    celery = current_celery_app
    celery.config_from_object(app.config, namespace="CELERY")
    return celery
</code></pre>
<p>In the users dir, I'm trying to manage the creation of a basic user with celery task management.</p>
<p><code>/project/users/__init__.py</code> is where I create the blueprints and routes.</p>
<p><em>/project/users/__init__.py</em></p>
<pre><code>from flask import Blueprint, request, jsonify
from .tasks import divide, post_to_db

users_blueprint = Blueprint("users", __name__, url_prefix="/users", template_folder="templates")

from . import models, tasks


@users_blueprint.route('/users', methods=['POST'])
def users():
    request_data = request.get_json()
    task = post_to_db.delay(request_data)
    response = {"id": task.task_id,
                "status": task.status,
                }
    return jsonify(response)


@users_blueprint.route('/responses', methods=['GET'])
def responses():
    request_data = request.get_json()
    result = AsyncResult(id=request_data['id'])
    response = result.get()
    return jsonify(response)
</code></pre>
<p><code>/project/users/models.py</code> is a simple User model - however, it does manage to successfully remain in the context of the flask app if created from the flask app cli.</p>
<p><em>/project/users/models.py</em></p>
<pre><code>from project import db

class User(db.Model):
    """model for the user object"""

    __tablename__ = "users"

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    username = db.Column(db.String(128), unique=True, nullable=False)
    email = db.Column(db.String(128), unique=True, nullable=False)

    def __init__(self, username, email, *args, **kwargs):
        self.username = username
        self.email = email
</code></pre>
<p>Finally, <code>/project/users/tasks.py</code> is where I handle the celery tasks for this dir.</p>
<p><em>/project/users/tasks.py</em></p>
<pre><code>from celery import shared_task
from .models import User
from project import db


@shared_task()
def post_to_db(payload):
    print("made it here")
    user = User(**payload)
    db.session.add(user)
    db.session.commit()
    db.session.close()
    return True
</code></pre>
<p>The modules work, but as soon as I wire it all up and hit the endpoint with a JSON payload, I get the error message:</p>
<pre><code>RuntimeError: No application found. Either work inside a view function or push an application context. ...
</code></pre>
<p>I have tried to preserve the app context in tasks.py by:</p>
<pre><code>...
from project import db, ext_celery
@ext_celery.shared_task()
def post_to_db(payload):
...
</code></pre>
<pre><code>...
from project import db, ext_celery
@ext_celery.task()
def post_to_db(payload):
...
</code></pre>
<p>These error with: <code>TypeError: exceptions must derive from BaseException</code></p>
<p>I've tried pushing the app context</p>
<pre><code>...
from project import db
from app import app
@shared_task()
def post_to_db(payload):
    with app.app_context():
        ...
</code></pre>
<p>This also errors with: <code>TypeError: exceptions must derive from BaseException</code></p>
<p>I've tried importing celery from the app itself</p>
<pre><code>...
from project import db
from app import celery
@celery.task()
def post_to_db(payload):
...
</code></pre>
<p>This also errors with: <code>TypeError: exceptions must derive from BaseException</code></p>
<p>Any suggestions gratefully received. There's a final piece of the puzzle I'm missing, and it's very frustrating.</p>
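<p>For reference, a sketch of the pattern from the Flask documentation for making every Celery task run inside an application context; whether it plays well with FlaskCeleryExt's <code>make_celery</code> hook is an assumption:</p>
<pre><code>from celery import current_app as current_celery_app


def make_celery(app):
    celery = current_celery_app
    celery.config_from_object(app.config, namespace="CELERY")

    class ContextTask(celery.Task):
        # wrap every task invocation in the Flask app context
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery
</code></pre>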
|
<python><flask><sqlalchemy><celery>
|
2022-12-21 16:28:17
| 1
| 551
|
741852963
|
74,878,991
| 13,983,136
|
Group by on the columns of a data frame and on the values of a dictionary
|
<p>I have a data frame, <code>df</code>, like:</p>
<pre><code>Year Month Country Organizer Participation
2020 1 China FAO True
2020 1 Japan FAO True
2020 1 France EU False
2020 2 France FAO False
2020 2 Japan FAO True
2020 2 Germany EU True
2020 2 Finland EU False
2020 2 India FAO True
2020 3 Senegal FAO True
</code></pre>
<p>and a dictionary, <code>codes</code>:</p>
<pre><code>codes = {
    'Asia': ['China', 'Japan', 'India'],
    'Europe': ['France', 'Germany', 'Finland'],
    'Africa': ['Senegal']
}
</code></pre>
<p>My goal is to perform a group by considering the fields <code>'Year'</code>, <code>'Month'</code>, <code>'Organizer'</code> and the continent (in the <code>df</code> I have <code>'Country'</code>), counting the number of rows. <code>'Participation'</code> is not important.</p>
<p>To be more precise my final output should be:</p>
<pre><code>Year Month Organizer Continent Number
2020 1 FAO Asia 2
2020 1 EU Europe 1
2020 2 FAO Europe 1
2020 2 FAO Asia 2
2020 2 EU Europe 2
2020 3 FAO Africa 1
</code></pre>
<p>Is there a compact way to do this? The alternative would be to add a column <code>'Continent'</code> and also use this column in the group by, but I'm afraid that is an inefficient and inelegant solution.</p>
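<p>A minimal sketch of one compact way, assuming it is acceptable to map countries to continents on the fly inside the groupby (no permanent <code>'Continent'</code> column is added):</p>
<pre><code># invert codes into country -> continent, then group on the mapped Series
country_to_continent = {country: continent
                        for continent, countries in codes.items()
                        for country in countries}

out = (df.groupby(['Year', 'Month', 'Organizer',
                   df['Country'].map(country_to_continent).rename('Continent')])
         .size()
         .reset_index(name='Number'))
print(out)
</code></pre>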
|
<python><pandas>
|
2022-12-21 16:26:25
| 2
| 787
|
LJG
|
74,878,950
| 4,034,593
|
Apache Beam - combine input with DoFn output
|
<p>I have a <code>DoFn</code> class with <code>process</code> method, which takes a string and enhance it:</p>
<pre><code>class LolString(apache_beam.DoFn):
    def process(self, element: str) -> str:
        return element + "_lol"
</code></pre>
<p>I want to have a step in my Beam pipeline that gives me a tuple, for example:</p>
<p><code>"Stack" -> ("Stack", "Stack_lol")</code></p>
<p>This is my pipeline step (<code>strings</code> is a <code>PCollection[str]</code>):</p>
<pre><code>strings | "Lol string" >> apache_beam.ParDo(LolString())
</code></pre>
<p>However, this gives me the following output:</p>
<p><code>"Stack_lol"</code></p>
<p>but I want the mentioned tuple.</p>
<p>How can I achieve the desired output WITHOUT modifying the <code>process</code> method?</p>
|
<python><apache-beam>
|
2022-12-21 16:23:22
| 1
| 599
|
Rafaó
|
74,878,923
| 3,393,192
|
Open3D registration with ICP shows error of 0 and returns the input transformation
|
<p>I try to use the ICP algorithm of open3d to find a transformation that minimizes the distance between 2 point clouds and loosely followed their tutorial page: <a href="http://www.open3d.org/docs/latest/tutorial/pipelines/icp_registration.html" rel="nofollow noreferrer">http://www.open3d.org/docs/latest/tutorial/pipelines/icp_registration.html</a>
(I use Ubuntu 20.04)</p>
<p>I tried to use point clouds from my ouster128, but it didn't work and therefore I decided to use 2 'dummy' point clouds that I create with numpy. The icp registration method gets a transformation as input and, <strong>in my case, always returns the input transformation</strong> (it basically does nothing, probably because the errors are 0). Here's the code (should be ready to use when copy pasted):</p>
<pre><code>import numpy as np
import copy
import open3d as o3d


def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.206, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    print("Transformation: " + str(transformation))
    source_temp.transform(transformation)
    coord_frame = o3d.geometry.TriangleMesh.create_coordinate_frame()
    o3d.visualization.draw_geometries([source_temp, target_temp, coord_frame],
                                      zoom=0.5,
                                      front=[0.9288, -0.2951, -0.2242],
                                      lookat=[0, 1, 1],
                                      up=[0, 0, 1])


src_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [2.0, 1.0, 0.0],
    [0.0, 2.0, 0.0],
    [1.0, 2.0, 0.0],
    [2.0, 2.0, 0.0],
    [0.0, 3.0, 0.0],
    [1.0, 3.0, 0.0],
    [2.0, 3.0, 0.0],
])

tgt_points = np.array([
    [0.0, 0.0, 0.1],  # Due to the 0.1 the clouds do not match perfectly
    [1.0, 0.0, 0.1],
    [2.0, 0.0, 0.1],
    [0.0, 1.0, 0.1],
    [1.0, 1.0, 0.1],
    [2.0, 1.0, 0.1],
    [0.0, 2.0, 0.1],
    [1.0, 2.0, 0.1],
    [2.0, 2.0, 0.1],
    [0.0, 3.0, 0.1],
    [1.0, 3.0, 0.1],
    [2.0, 3.0, 0.1],
])

o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)

source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(src_points)
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(tgt_points)

trans_init = np.asarray([[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])

threshold = 0.02
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("Post Registration")
print("Inlier Fitness: ", reg_p2p.fitness)
print("Inlier RMSE: ", reg_p2p.inlier_rmse)

draw_registration_result(source, target, reg_p2p.transformation)
</code></pre>
<p><code>source</code> and <code>target</code> are the same point clouds. Only difference is, that <code>target</code> is translated by 0.1 in z direction. The initial transformation is the identity matrix. The matrix that I would expect as output is the same as I, just with <code>I[2][4]=0.1</code>. Now, the <code>fitness</code> and <code>inlier_rmse</code> are 0. Which makes no sense (unless I completely misunderstood something) as that would mean that the clouds match perfectly, which they obviously don't do. Sometimes the fitness is not zero (for example, when <code>source</code> and <code>target</code> are the same clouds, except that 3 points of <code>target</code> are translated by 0.1).</p>
<p>What I tried before posting this thread:</p>
<ol>
<li>2 different versions of open3d (0.15.2 and 0.16.0).</li>
<li>different point clouds</li>
<li>different initial transformations</li>
<li>some thresholds, 2e-10, 2e-6, 0.2</li>
</ol>
<p>(The visualization window is white and the camera has to be rotated to view the clouds)
So, what am I doing wrong here? Thanks in advance.</p>
|
<python><registration><point-clouds><open3d>
|
2022-12-21 16:21:17
| 1
| 497
|
Sheradil
|
74,878,914
| 3,876,599
|
multiprocessing.Pool inexplicably slow when passing around Pandas data frames
|
<p>I am trying to speed up a code path that creates Pandas data frames of some 100 megabytes.</p>
<p>The obvious idea is to use <code>multiprocessing.Pool</code>. While the actual processing is parallelized as expected, it seems as if the IPC overhead makes things way slower than not doing any multiprocessing at all.</p>
<p>What really puzzles me is that I can work around the problem by pickling the frames to disk and loading them back once processing is finished. I expect that passing around things in-memory should be faster than using a hard disk.</p>
<p>Below is a script that tests the two cases and, for comparison, plain bytes.</p>
<p>Manually dumping and loading is always way faster.</p>
<h3>Measurements</h3>
<p>Python 3.10 on Mac OS Ventura</p>
<pre><code>Pandas frame: 56.739380359998904
Bytes: 100.31256382900756
Pandas hard disk: 2.5154516759794205
</code></pre>
<p>Python 3.10 on the same Mac but in a Docker container</p>
<pre><code>Pandas frame: 30.639809517015237
Bytes: 17.284717564994935
Pandas hard disk: 4.386503675952554
</code></pre>
<h3>Test script</h3>
<p>My POC that simply dumps the frames to disk runs in <strong>4 seconds</strong>.</p>
<pre><code>import multiprocessing
import pandas as pd
import numpy as np
import time


def creator(id_):
    # ~230 Mb in size
    return pd.DataFrame(np.ones((3000, 10000)))


def creator_bytes(id_):
    return bytearray(3000 * 10000 * 8)


def creator_pkl(id_):
    filename = f"pickled_{id_}.pkl"
    pd.DataFrame(np.ones((3000, 10000))).to_pickle(filename)
    return filename


if __name__ == "__main__":
    timestamp = time.perf_counter()
    fake_input = list(range(10))
    with multiprocessing.Pool(1) as p:
        results = p.map(creator, fake_input)
    print(f"Pandas frame: {time.perf_counter() - timestamp}")

    ###
    timestamp = time.perf_counter()
    with multiprocessing.Pool(1) as p:
        results = p.map(creator_bytes, fake_input)
    print(f"Byte array: {time.perf_counter() - timestamp}")

    ###
    timestamp = time.perf_counter()
    with multiprocessing.Pool(1) as p:
        results = p.map(creator_pkl, fake_input)
    print(f"Pandas hard disk: {time.perf_counter() - timestamp}")
</code></pre>
<p>Does anyone have an explanation of the overhead?</p>
|
<python><pandas><multiprocessing><pickle>
|
2022-12-21 16:20:22
| 0
| 699
|
Yourstruly
|
74,878,785
| 5,195,209
|
How can I use pango (HTML subset) with the ImageMagick Python library wand?
|
<p>My goal is to take a picture and add a centered text to its center. I want to use italics and bold for this text, specified with the HTML-like pango.</p>
<p>I currently have this code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from wand.image import Image
from wand.drawing import Drawing
from wand.color import Color

with Image(filename='testimg.png') as img:
    with Drawing() as draw:
        draw.font = 'Arial'
        draw.font_size = 36
        text = 'pango:<b>Formatted</b> text'
        (width, height) = draw.get_font_metrics(img, text).size()
        print(width, height)
        x = int((img.width - width) / 2)
        y = int((img.height - height) / 2)
        draw.fill_color = Color('black')
        draw.text(x, y, text)
        draw(img)
        img.save(filename='output.jpg')
</code></pre>
<p>However, the text does not get formatted currently, but is simply "pango:<b>Formatted</b> text", and it is very hard to find any documentation.
(Before this approach I tried using pillow, but that does not seem to support anything HTML-like at all)</p>
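<p>A sketch of one possible direction, assuming ImageMagick was built with pango support (untested assumption about the wand API): render the markup through the <code>pango:</code> pseudo-format as its own image, then composite it onto the photo.</p>
<pre class="lang-py prettyprint-override"><code>from wand.image import Image

with Image(filename='testimg.png') as img:
    # render the pango markup as a separate image
    with Image(filename='pango:<b>Formatted</b> text') as txt:
        left = int((img.width - txt.width) / 2)
        top = int((img.height - txt.height) / 2)
        # place the rendered text in the centre of the photo
        img.composite(txt, left=left, top=top)
    img.save(filename='output.jpg')
</code></pre>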
|
<python><wand>
|
2022-12-21 16:09:39
| 1
| 587
|
Pux
|
74,878,760
| 4,019,495
|
Is there a pandas function to duplicate each row of a dataframe n times, assigning each of n categories to each row?
|
<p>What is the easiest way to go from:</p>
<pre><code>df = pd.DataFrame({'col1': [1,1,2,3], 'col2': [2,4,3,5]})
group_l = ['a', 'b']
df
col1 col2
0 1 2
1 1 4
2 2 3
3 3 5
</code></pre>
<p>to</p>
<pre><code> col1 col2 group
0 1 2 a
1 1 4 a
2 2 3 a
3 3 5 a
0 1 2 b
1 1 4 b
2 2 3 b
3 3 5 b
</code></pre>
<p>I've thought of a few solutions but none seem great.</p>
<ul>
<li>Use pd.MultiIndex.from_product, then reset_index. This would work fine if the initial DataFrame only had one column.</li>
<li>Add a new column <code>group</code> where each element is <code>['a', 'b']</code>. Use pd.DataFrame.explode. Feels inefficient.</li>
</ul>
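<p>A minimal sketch of a third option: repeat the frame once per group label with <code>concat</code> and tag each copy with <code>assign</code>.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col1': [1, 1, 2, 3], 'col2': [2, 4, 3, 5]})
group_l = ['a', 'b']

# one copy of df per group label, each copy tagged with its label
out = pd.concat([df.assign(group=g) for g in group_l])
print(out)
</code></pre>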
|
<python><pandas>
|
2022-12-21 16:07:14
| 5
| 835
|
extremeaxe5
|
74,878,671
| 3,368,667
|
Pandas - create new column where rows match
|
<p>I'd like to create a new column in pandas that matches row values based on the same condition. I'm trying to turn a dependent variable into an independent variable.</p>
<p>In the sample dataframe below (taken from the actual dataset), rows are distinguished by the value of <code>test_name</code> and by time period, 'tp'. I want to create a new column <code>intensity_value</code> where the values for 'tp' are equal, placing the new column on the rows where <code>test_name</code> is 'PANAS'.</p>
<p>For instance, in the dataframe, the third row has a 'PANAS' value of 35 and an intensity value of 50 in the 7th row. Both rows have the same id (73) and time period, w3s1. The ideal dataframe would have a new column in the 3rd row, <code>intensity_value</code> of 50.</p>
<p>Code is below to create a dataframe from a dictionary. I haven't been able to find any examples online that create new columns by similar row values.</p>
<p>Thanks for any help and please let me know if I can make this question easier to answer.</p>
<pre><code>df = {'trial_id': {73: 79, 300: 79, 515: 79, 715: 79, 2541: 79, 2673: 79, 2810: 79, 2960: 79}, 'tp': {73: 'w1s1', 300: 'w2s1', 515: 'w3s1', 715: 'w4s1', 2541: 'w1s1', 2673: 'w2s1', 2810: 'w3s1', 2960: 'w4s1'}, 'test_name': {73: 'PANAS', 300: 'PANAS', 515: 'PANAS', 715: 'PANAS', 2541: 'intensity', 2673: 'intensity', 2810: 'intensity', 2960: 'intensity'}, 'value': {73: 25.0, 300: 30.0, 515: 35.0, 715: 34.0, 2541: 0.0, 2673: 0.0, 2810: 50.0, 2960: 0.0}}
pd.DataFrame.from_dict(df)
</code></pre>
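<p>A sketch of one way to line the rows up, assuming a plain merge on the shared keys is acceptable (names taken from the sample dict):</p>
<pre><code>import pandas as pd

frame = pd.DataFrame.from_dict(df)

panas = frame[frame['test_name'] == 'PANAS']
intensity = frame.loc[frame['test_name'] == 'intensity', ['trial_id', 'tp', 'value']]

# attach the matching intensity value to each PANAS row via trial_id + tp
out = panas.merge(intensity.rename(columns={'value': 'intensity_value'}),
                  on=['trial_id', 'tp'], how='left')
print(out)
</code></pre>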
|
<python><pandas>
|
2022-12-21 16:01:03
| 1
| 1,077
|
tom
|
74,878,668
| 18,018,869
|
What is the difference between passing the statement Chem.MolFromSmiles directly or via a variable?
|
<p>If I do not store the <code>rdkit.Chem.rdchem.Mol</code> object in a variable but pass the statement <code>Chem.MolFromSmiles("<your-smile>")</code> directly into another function it gives a different result than storing it in a variable before!</p>
<p>Why is that?</p>
<pre><code>>>> from rdkit.Chem import Descriptors
>>> from rdkit import Chem
>>> # direct approach
>>> print(Descriptors.TPSA(Chem.MolFromSmiles('OC(=O)P(=O)(O)O')))
94.83
>>> print(Descriptors.TPSA(Chem.MolFromSmiles('OC(=O)P(=O)(O)O'), includeSandP=True))
104.64000000000001
>>> # mol as variable approach
>>> mol = Chem.MolFromSmiles('OC(=O)P(=O)(O)O')
>>> print(Descriptors.TPSA(mol))
94.83
>>> print(Descriptors.TPSA(mol, includeSandP=True))
94.83
</code></pre>
<p>In my mind the last <code>print</code> statement should also give a result of ~104.64.</p>
<p>This links you to the example that I am using: <a href="https://rdkit.org/docs/RDKit_Book.html?highlight=tpsa#implementation-of-the-tpsa-descriptor" rel="nofollow noreferrer">TPSA</a></p>
|
<python><rdkit>
|
2022-12-21 16:00:35
| 1
| 1,976
|
Tarquinius
|
74,878,422
| 7,298,643
|
How to merge dataframes together with matching columns side by side?
|
<p>I have two dataframes with matching keys. I would like to merge them together based on their keys and have the corresponding columns line up side by side. I am not sure how to achieve this as the <code>pd.merge</code> displays all columns for the first dataframe and then all columns for the second data frame:</p>
<pre><code>df1 = pd.DataFrame(data={'key': ['a', 'b'], 'col1': [1, 2], 'col2': [3, 4]})
df2 = pd.DataFrame(data={'key': ['a', 'b'], 'col1': [5, 6], 'col2': [7, 8]})
print(pd.merge(df1, df2, on=['key']))
key col1_x col2_x col1_y col2_y
0 a 1 3 5 7
1 b 2 4 6 8
</code></pre>
<p>I am looking for a way to do the same merge and have the columns displayed side by side, like this:</p>
<pre><code> key col1_x col1_y col2_x col2_y
0 a 1 5 3 7
1 b 2 6 4 8
</code></pre>
<p>Any help achieving this would be greatly appreciated!</p>
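<p>A minimal sketch of one way, assuming sorting the suffixed column names alphabetically gives the desired order:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame(data={'key': ['a', 'b'], 'col1': [1, 2], 'col2': [3, 4]})
df2 = pd.DataFrame(data={'key': ['a', 'b'], 'col1': [5, 6], 'col2': [7, 8]})

# merge as before, then sort the columns so col1_x/col1_y sit next to each other
merged = (pd.merge(df1, df2, on=['key'])
            .set_index('key')
            .sort_index(axis=1)
            .reset_index())
print(merged)
</code></pre>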
|
<python><pandas><dataframe><merge>
|
2022-12-21 15:41:58
| 1
| 490
|
Jengels
|
74,878,409
| 8,145,400
|
ModuleNotFoundError: No module named 'keras.objectives'
|
<p>I am trying to run a file that imports a package: <code>from keras.objectives import categorical_crossentropy</code>.
It fails with <em>ModuleNotFoundError: No module named 'keras.objectives'</em>.</p>
<p>I found a similar question <a href="https://github.com/theislab/dca/issues/48" rel="nofollow noreferrer">here</a> but does not seems to be working.</p>
<p>How can I resolve this?</p>
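<p>A minimal sketch of the usual workaround, assuming TensorFlow 2.x is installed: <code>keras.objectives</code> was removed in recent releases, and the same loss lives in the losses module.</p>
<pre><code># keras.objectives no longer exists; the loss is available here instead
from tensorflow.keras.losses import categorical_crossentropy
</code></pre>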
|
<python><tensorflow><keras><tf.keras>
|
2022-12-21 15:40:57
| 1
| 634
|
Tanmay Bairagi
|
74,878,262
| 5,568,409
|
How could it be that Python module `arviz` has no attribute `plots`?
|
<p>I tried to run this program:</p>
<pre><code>import pymc3 as pm
import theano.tensor as tt
import scipy
from scipy import optimize
</code></pre>
<p>but I got back an error I don't understand:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_38164/4136381866.py in <module>
----> 1 import pymc3 as pm
2 import theano.tensor as tt
3 import scipy
4 from scipy import optimize
~\anaconda3\lib\site-packages\pymc3\__init__.py in <module>
70 from pymc3.model import *
71 from pymc3.model_graph import model_to_graphviz
---> 72 from pymc3.plots import *
73 from pymc3.sampling import *
74 from pymc3.smc import *
~\anaconda3\lib\site-packages\pymc3\plots\__init__.py in <module>
26
27 # Makes this module as identical to arviz.plots as possible
---> 28 for attr in az.plots.__all__:
29 obj = getattr(az.plots, attr)
30 if not attr.startswith("__"):
AttributeError: module 'arviz' has no attribute 'plots'
</code></pre>
<p><strong>Note that</strong>:</p>
<p><code>arviz</code> was downloaded: <code>conda install -c conda-forge arviz</code></p>
<p><code>pymc3</code> was downloaded: <code>conda install -c conda-forge pymc3</code></p>
<p><code>theano-pymc</code> was downloaded: <code>conda install -c conda-forge theano-pymc</code></p>
<p>So, please, how could I provide this attribute <code>plots</code> to the module <code>arviz</code>? I really don't understand how to solve this <code>AttributeError</code>.</p>
|
<python><arviz>
|
2022-12-21 15:26:49
| 0
| 1,216
|
Andrew
|
74,878,253
| 8,586,803
|
How to filter 3D array with a 2D mask
|
<p>I have a <code>(m,n,3)</code> array <code>data</code> and I want to filter its values with a <code>(m,n)</code> mask to receive a <code>(x,3)</code> <code>output</code> array.</p>
<p>The code below works, but how can I replace the for loop with a more efficient alternative?</p>
<pre><code>import numpy as np

data = np.array([
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 13], [24, 25, 26], [27, 28, 29]],
    [[31, 32, 33], [34, 35, 36], [37, 38, 39]],
])

mask = np.array([
    [False, False, True],
    [False, True, False],
    [True, True, False],
])

output = []
for i in range(len(mask)):
    for j in range(len(mask[i])):
        if mask[i][j] == True:
            output.append(data[i][j])

output = np.array(output)
</code></pre>
<p>The expected output is</p>
<pre><code>np.array([[17, 18, 19], [24, 25, 26], [31, 32, 33], [34, 35, 36]])
</code></pre>
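<p>A minimal sketch of the vectorized version: boolean indexing with the <code>(m,n)</code> mask selects along the first two axes and returns the matching <code>(x,3)</code> rows directly.</p>
<pre><code>import numpy as np

output = data[mask]
# array([[17, 18, 19], [24, 25, 26], [31, 32, 33], [34, 35, 36]])
</code></pre>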
|
<python><arrays><numpy><masking>
|
2022-12-21 15:25:47
| 1
| 6,178
|
Florian Ludewig
|
74,878,241
| 18,301,789
|
Function not working when hardcoding instead of using input()
|
<p>I'm working with Python on my PC, sending serial commands to an arduino which controls a certain number of stepper motors.<br />
However, in this function:</p>
<pre class="lang-py prettyprint-override"><code># takes array of commands to send to motors (in order) and sends commmand arcodinlgy
# each element of commands is an absolute angle (rad) to give to one motor
def send_command(commands):
if not len(commands) > 0:
return
# make command string to send serial
# (with command separator and line termination)
command_string = "";
for i in range(len(commands) - 1):
command_string += f"{commands[i]:.4f}{COMMAND_SEPARATOR}"
command_string += f"{commands[-1]:.4f}{LINE_TERMINATOR}"
# make command string into bytes UTF-8
# print(command_string)
command_string = bytes(command_string, "utf-8")
# send command string serial
print(f"Sending command: " + str(command_string))
port.write(command_string)
# wait for arduino's return_code on serial
while True:
if port.inWaiting() > 0:
return_code = int(port.readline())
return return_code
# driving code
while True:
commands = [0, 0]
commands[0] = float(input("command 1: "))
commands[1] = float(input("command 2: "))
return_code = send_command(commands)
print(f"return code: {return_code}")
</code></pre>
<p>This code works correctly, but if I don't want user input for the commands :</p>
<pre class="lang-py prettyprint-override"><code>while True:
    commands = [float(random.random()*pi), float(random.random()*pi)]
    # or even the following doesn't work, for example:
    # commands = [3.1415, 0.5555]
    return_code = send_command(commands)
    print(f"return code: {return_code}")
    sleep(1)
</code></pre>
<p>This code doesn't. I don't get why, because in the first case the line <code>print(f"Sending command: " + str(command_string))</code> prints exactly the same thing as in the second case:</p>
<pre><code>Sending command: b'3.1415:0.5555\n'
</code></pre>
<p>(when input those angles)<br />
But in the second case, no return code is received, and the function doesn't work.</p>
<p>I tried totally hardcoding the values given.<br />
I made sure the commands are always formatted the same way, and always converted with <code>float()</code> (before the <code>print("Sending command...</code> line).<br />
So I expected the same behavior for the two versions, and I don't see what changes between them, since the <code>print("Sending command ...</code> line prints the same thing and is the last line before the command is sent over serial.</p>
<p>Thanks !</p>
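<p>One thing worth checking (an assumption, not something visible in the code shown): most Arduinos reset when the serial port is opened, and with <code>input()</code> the time spent typing happens to give the board a chance to finish booting before the first command arrives. A minimal sketch of waiting explicitly after opening the port (device name and baud rate are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>import serial
from time import sleep

port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # hypothetical device/baud rate
sleep(2)  # give the Arduino time to finish its reset before sending anything

while True:
    commands = [3.1415, 0.5555]
    return_code = send_command(commands)
    print(f"return code: {return_code}")
    sleep(1)
</code></pre>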
|
<python><arduino>
|
2022-12-21 15:24:43
| 2
| 360
|
Stijn B
|
74,878,171
| 10,071,473
|
Private class in python package
|
<p>I'm writing a python library to interface to a rest server.
I would like to be able to add classes inside the library, which I would like to be hidden once the library is installed via pip and imported into any project.</p>
<p>Example of the library structure:</p>
<pre><code>.
├── my_library
├── __init__.py
├── controller
| ├──OtherClass.py
| └──Client.py
└── models
├── __init__.py
└── MyModel.py
</code></pre>
<p>Content of __init__.py in root folder.</p>
<pre><code>from .controller.Client import Client
</code></pre>
<p>setup.py:</p>
<pre><code>from setuptools import find_packages, setup
setup(
    name='my_library',
    packages=find_packages(include=['my_library']),
    version='0.1.0',
    description='My library',
    author='Pasini Matteo',
    setup_requires=['pytest-runner'],
    license='MIT',
    tests_require=['pytest==4.4.1'],
    test_suite='tests',
)
</code></pre>
<p>What I would like is for those who use the library to not be able to do this:</p>
<pre><code>from my_library.controller.OtherClass import OtherClass
</code></pre>
<p><strong>python version >= 3.6</strong></p>
|
<python><python-3.x><package>
|
2022-12-21 15:19:45
| 0
| 2,022
|
Matteo Pasini
|
74,878,128
| 8,248,194
|
Cannot import local files in python debugger (vs code)
|
<p>I am doing the following:</p>
<ol>
<li><code>mkdir folder_structure</code></li>
<li><code>mkdir folder_structure/utils</code></li>
<li><code>touch folder_structure/utils/tools.py</code></li>
<li><code>touch folder_structure/main.py</code></li>
<li>Write in main.py:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from folder_structure.utils.tools import dummy
if __name__ == '__main__':
    dummy()
</code></pre>
<ol start="6">
<li>Write in tools.py:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def dummy():
print("Dummy")
</code></pre>
<ol start="7">
<li><p><code>touch folder_structure/__init__.py</code></p>
</li>
<li><p><code>touch folder_structure/utils/__init__.py</code></p>
</li>
<li><p>Run the vs code debugger</p>
</li>
</ol>
<p>I'm getting:</p>
<pre><code>Exception has occurred: ModuleNotFoundError
No module named 'folder_structure'
File "/Users/davidmasip/Documents/Others/python-scripts/folder_structure/main.py", line 1, in <module>
from folder_structure.utils.tools import dummy
</code></pre>
<p>I have the following in my launch.json:</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"env": {
"PYTHONPATH": "${workspaceRoot}"
}
}
]
}
</code></pre>
<p>How can I import a local module when using the debugger?</p>
<p>This works for me:</p>
<pre class="lang-bash prettyprint-override"><code>python -m folder_structure.main
</code></pre>
|
<python><visual-studio-code><debugging>
|
2022-12-21 15:16:57
| 2
| 2,581
|
David Masip
|
74,878,109
| 7,317,408
|
How to fill in NaN when using np.append witb a condition?
|
<p>Sorry for another noob question!</p>
<p>I have a function which is taking the opening price of a bar, and increasing it by 100%, to return my target entry price:</p>
<pre><code>def prices(open, index):
    gap_amount = 100
    prices_array = np.array([])
    index = index.vbt.to_ns()

    day = 0
    target_price = 0
    first_bar_of_day = 0

    for i in range(open.shape[0]):
        first_bar_of_day = 0
        day_changed = vbt.utils.datetime_nb.day_changed_nb(index[i - 1], index[i])

        # if we have a new day
        if (day_changed):
            day = day + 1
            # print(f'day {day}')
            first_bar_of_day = i
            fist_open_price_of_day = open[first_bar_of_day]
            target_price = increaseByPercentage(fist_open_price_of_day, gap_amount)
            prices_array = np.append(prices_array, target_price)

    return prices_array
</code></pre>
<p>Used like so:</p>
<pre><code>prices_array = prices(df['open'], df.index)
print(prices_array, 'prices_array')
</code></pre>
<p>It returns:</p>
<pre><code>[ 1.953 12.14 2.4 2.36 6.04 6.6 2.62 2.8 3.94
2. 5.16 3.28 5.74 3.6 2.48 4.2 4.02 2.72
5.52 3.34 3.84 2.02 2.58 4.76 2.28 3.54 2.54
3.7 3.38 3.4 6.68 2.48 7.2 4.5 5.66 4.48
5.92 5.26 4.06 3.96 4. 4.42 2.62 1.76 3.66
5.5 3.82 1.8002 3.02 7.78 2.32 4.6 3.34 0.899
1.52 5.28 5.1 2.88 ] prices_array
</code></pre>
<p>But how can I append the numpy array, to fill in NaN for when the condition hasn't been met? So when my condition isn't met, there should be NaN for each bar / index of the loop.</p>
<p>The data is simply OHLC data of different days, in chronological order.</p>
<p>I hope this makes sense, please let me know if I can improve the question!</p>
<p>I've tried inserting at the specific index using <code>insert</code>:</p>
<pre><code>def prices(close, open, index):
    gap_amount = 100
    prices_array = np.array([])
    index = index.vbt.to_ns()

    day = 0
    target_price = 10000
    first_bar_of_day = 0

    for i in range(open.shape[0]):
        first_bar_of_day = 0
        day_changed = vbt.utils.datetime_nb.day_changed_nb(index[i - 1], index[i])

        # if we have a new day
        if (day_changed):
            day = day + 1
            # print(f'day {day}')
            first_bar_of_day = i
            fist_open_price_of_day = open[first_bar_of_day]
            target_price = increaseByPercentage(fist_open_price_of_day, gap_amount)

        # if close goes above or equal to target price
        if (close[i] >= target_price):
            prices_array = np.insert(prices_array, i, target_price)
        else:
            prices_array = np.insert(prices_array, i, np.nan)

    return prices_array
</code></pre>
<p>Which gives:</p>
<pre><code>[nan nan nan ... nan nan nan] prices_array
</code></pre>
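<p>A minimal sketch of one way to get exactly one entry per bar, keeping the helpers from the question (<code>vbt</code>, <code>increaseByPercentage</code>): preallocate a NaN array of the full length and assign into it, instead of growing it with <code>append</code>/<code>insert</code> (the parameter is renamed <code>open_</code> only to avoid shadowing the builtin).</p>
<pre><code>import numpy as np

def prices(close, open_, index):
    gap_amount = 100
    index = index.vbt.to_ns()
    # one slot per bar, NaN by default
    prices_array = np.full(open_.shape[0], np.nan)
    target_price = np.nan

    for i in range(open_.shape[0]):
        if vbt.utils.datetime_nb.day_changed_nb(index[i - 1], index[i]):
            target_price = increaseByPercentage(open_[i], gap_amount)
        # fill the slot only when the close reaches the day's target
        if close[i] >= target_price:
            prices_array[i] = target_price

    return prices_array
</code></pre>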
|
<python><pandas><numpy><trading><vectorbt>
|
2022-12-21 15:14:04
| 1
| 3,436
|
a7dc
|
74,878,086
| 667,355
|
Creating a custom color map for heatmap
|
<p>I have the following heatmap and I want to make a custom color map for it. For the color map I would like 0, 1, and -3 correspond to red, pink and blue, respectively, so that from 0 to 1 the red color gets lighter and from 0 to -3 the red color gradually turns to blue. I tried to find a solution among the questions that have already been asked in StackOverFlow but couldn't find anything close to my case.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
test_data = {"a":{"a":1 , "b":0.5, "c":-0.2, "d":-2.7} , "b":{"a":0.2 , "b":0, "c":-1.3, "d":-2}, "c":{"a":0 , "b":1, "c":-2.2, "d":-0.005}, "d":{"a":-3 , "b":0.9, "c":0.01, "d":-1.15}}
test_data_df = pd.DataFrame.from_dict(test_data)
fig, ax = plt.subplots(figsize=(11,9))
_ = sns.heatmap(test_data_df, annot=True)
</code></pre>
<p><a href="https://i.sstatic.net/7UXpZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7UXpZ.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><seaborn><heatmap><colormap>
|
2022-12-21 15:12:29
| 1
| 3,491
|
amiref
|
74,877,992
| 7,032,878
|
Speed up document update from list of dicts with Pymongo
|
<p>I'm trying to create some Mongo documents starting from a list of dictionaries, like this one:</p>
<pre><code>dictlist = [ dict1, dict2, dict3 ]
</code></pre>
<p>Where the generic dict is something like:</p>
<pre><code>dict1 = { 'Id': 1, 'key1': 'value1', 'key2': 'value2' }
</code></pre>
<p>Each dict in the list has the same keys.</p>
<p>Then I'm using this for loop in order to put each dict on Mongo.</p>
<pre><code>import datetime
from pymongo import MongoClient

database = MongoClient(mongo_connection_string)
coll = database.collection

for idx, record in enumerate(dictlist):
    time_now = datetime.datetime.utcnow().isoformat()[:-3]
    document_matches = coll.count_documents(record)
    id = record['Id']
    if ( document_matches == 0 ): # Document is not already present
        coll.update_one({'Id': id}, {'$set': record}, upsert=False)
        coll.update_one({'Id': id}, {'$set': {'DocumentUpdatedAt': time_now}}, upsert=False)
</code></pre>
<p>This works, but it is slow. I'm wondering if there is a different approach that can speed up the process.</p>
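<p>A sketch of one common approach, assuming it is acceptable to always update by <code>Id</code> (which also refreshes the timestamp even when nothing changed, unlike the <code>count_documents</code> check): build one <code>UpdateOne</code> per record and send them in a single <code>bulk_write</code> round trip.</p>
<pre><code>import datetime
from pymongo import MongoClient, UpdateOne

database = MongoClient(mongo_connection_string)
coll = database.collection

time_now = datetime.datetime.utcnow().isoformat()[:-3]

# one operation per record, all sent to the server in one batch
operations = [
    UpdateOne(
        {'Id': record['Id']},
        {'$set': {**record, 'DocumentUpdatedAt': time_now}},
        upsert=False,
    )
    for record in dictlist
]

if operations:
    result = coll.bulk_write(operations, ordered=False)
    print(result.bulk_api_result)
</code></pre>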
|
<python><pymongo>
|
2022-12-21 15:03:58
| 1
| 627
|
espogian
|
74,877,840
| 6,930,340
|
Convert zeros and ones to bool while preserving pd.NA in a multi column index dataframe
|
<p>I have a multi column index dataframe. Some column headers might have <code>pd.NA</code> values.</p>
<p>The actual values in the dataframe might be zero, one, or <code>pd.NA</code>.</p>
<p>How can I transform all zeros and ones into <code>bool</code> while preserving the <code>pd.NA</code> values?</p>
<pre><code>import pandas as pd
idx_l1 = ("a", "b")
idx_l2 = (pd.NA, pd.NA)
idx_l3 = ("c", "c")
df = pd.DataFrame(
data=[
[1, pd.NA, 0, pd.NA, 0, 1, pd.NA, pd.NA],
[pd.NA, 0, 1, pd.NA, pd.NA, pd.NA, 0, 0],
[0, 1, 1, 1, 0, pd.NA, pd.NA, 0],
],
columns=pd.MultiIndex.from_product([idx_l1, idx_l2, idx_l3]),
)
df = df.rename_axis(["level1", "level2", "level3"], axis=1)
print(df)
level1 a b
level2 NaN NaN
level3 c c c c c c c c
0 1 <NA> 0 <NA> 0 1 <NA> <NA>
1 <NA> 0 1 <NA> <NA> <NA> 0 0
2 0 1 1 1 0 <NA> <NA> 0
</code></pre>
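<p>A minimal sketch, assuming pandas' nullable <code>BooleanDtype</code> is acceptable: it keeps <code>pd.NA</code> while mapping 0/1 to <code>False</code>/<code>True</code>.</p>
<pre><code>df_bool = df.astype("boolean")
print(df_bool)
</code></pre>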
|
<python><pandas><nan>
|
2022-12-21 14:51:51
| 1
| 5,167
|
Andi
|
74,877,603
| 11,693,768
|
How to read csv file into pandas, skipping rows until a certain string, then selecting first row after as header and delimiter as |
|
<p>Below is an example of a bunch of csv files / datasets I have. They follow the format below.</p>
<pre><code>FILE-START
COL=yes
DELIMITER=|
.....
.....
.....
START-DATA
header1 | header2 | header3 | header4 | header5
data1 | data2 | data3 | data4 | data5
......
......
</code></pre>
<p>I need to skip headers until the string <code>START-DATA</code> because each file has a different number of rows before the data starts.</p>
<p>To load this file I can do <code>pd.read_csv(filename, skiprows=50, delimiter = '|')</code>, but if I want to do a bulk load of all the files this won't work as the data starts at a different row each time.</p>
<p>How can I make this code dynamic so that it starts when the <code>START-DATA</code> string appears and takes the following row as a header and uses <code>|</code> as a delimiter.</p>
<p>I have tried the following code so far,</p>
<pre><code>df = pd.read_csv(downloaded_file, header=None)
df_2 = df.iloc[(df.loc[df[0]=='START-OF-DATA'].index[0]+1):, :].reset_index(drop = True)
</code></pre>
<p>I start with a dataframe that looks like this,</p>
<pre><code> 0
0 header1 | header2 | header3 | header4 | header5
1 data1 | data2 | data3 | data4 | data5
</code></pre>
<p>Where all the data is squeezed into one column. How do I use <code>|</code> as the delimiter next?</p>
<p>What's the correct and most efficient way to do this dynamically for each file?</p>
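<p>A minimal sketch of one dynamic approach: scan the file once to find the line number of the marker, then let <code>read_csv</code> do the rest with that <code>skiprows</code> and <code>|</code> as the delimiter.</p>
<pre><code>import pandas as pd

def read_data(filename):
    # find the line number of the START-DATA marker
    with open(filename) as f:
        for lineno, line in enumerate(f):
            if line.strip().startswith("START-DATA"):
                break
    # the row right after the marker becomes the header
    return pd.read_csv(filename, skiprows=lineno + 1, delimiter="|")

df = read_data(downloaded_file)
</code></pre>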
|
<python><pandas><csv>
|
2022-12-21 14:33:38
| 3
| 5,234
|
anarchy
|
74,877,596
| 15,531,842
|
how to abort a job using ipywidgets
|
<p>I am creating a button that runs a job when clicked using ipywidgets all inside of a Jupyter Notebook. This job can take some long amount of time, so I would like to also give the user the ability to stop the job.</p>
<p>I've created a minimally reproducible example that runs for only 10 seconds. All of the following is run from a Jupyter Notebook cell:</p>
<pre><code>import ipywidgets as widgets
from IPython.display import display
from time import sleep

button = widgets.Button(description='run job')
output = widgets.Output()

def abort(event):
    with output:
        print('abort!')

def run_job(event):
    with output:
        print('running job')
        button.description = 'abort'
        button.on_click(run_job, remove=True)
        button.on_click(abort)
        sleep(10)
        print('job complete!')

button.on_click(run_job)
display(button, output)
</code></pre>
<p>If the user clicks 'run job', waits 2 seconds, then clicks 'abort', the behavior is:</p>
<pre><code>running job
job complete!
abort!
</code></pre>
<p>Which implies the 'abort' event fires after the job is already complete. I would like the print sequence to be:</p>
<pre><code>running job
abort!
job complete!
</code></pre>
<p>If the abort can fire immediately when clicked, then I have a way to actually stop the job inside of my class objects.</p>
<p>How can I get the abort event to run immediately when clicked from a Jupyter Notebook?</p>
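<p>A sketch of one common workaround, assuming it is acceptable to run the job in a background thread so the kernel stays free to process the abort click (the 10-second sleep stands in for the real work, checked in small chunks):</p>
<pre><code>import threading
import ipywidgets as widgets
from IPython.display import display
from time import sleep

button = widgets.Button(description='run job')
output = widgets.Output()
stop_event = threading.Event()

def job():
    with output:
        print('running job')
        for _ in range(10):
            if stop_event.is_set():
                print('abort!')
                return
            sleep(1)
        print('job complete!')

def abort(event):
    stop_event.set()

def run_job(event):
    stop_event.clear()
    button.description = 'abort'
    button.on_click(run_job, remove=True)
    button.on_click(abort)
    threading.Thread(target=job, daemon=True).start()

button.on_click(run_job)
display(button, output)
</code></pre>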
|
<python><jupyter-notebook><ipywidgets>
|
2022-12-21 14:33:25
| 1
| 886
|
lane
|
74,877,580
| 202,645
|
Discover missing module using command-line ("DLL load failed" error)
|
<p>On Windows, when we try to import a <code>.pyd</code> file, and a DLL that the <code>.pyd</code> depends on cannot be found, we get this traceback:</p>
<pre><code>Traceback (most recent call last):
...
ImportError: DLL load failed: The specified module could not be found.
</code></pre>
<p>When this happens, often one has to resort to a graphical tool like <a href="https://github.com/lucasg/Dependencies" rel="nofollow noreferrer">Dependencies</a> to figure out what is the name of the missing module.</p>
<p>How can I obtain the missing module name <strong>via the command-line</strong>?</p>
<p>Context: often we get this error in CI, and it would be easier to login via SSH to find out the missing module name, rather than having to log via GUI.</p>
|
<python><windows><dll><pyd>
|
2022-12-21 14:31:59
| 2
| 15,705
|
Bruno Oliveira
|
74,877,555
| 12,292,032
|
Pandas special pivot dataframe
|
<p>Let's take a sample dataframe :</p>
<pre><code>df = pd.DataFrame({"Name": ["Alan","Alan","Kate","Kate","Brian"],
"Shop" :["A","B","C","A","B"],
"Amount":[4,2,1,3,5]})
Name Shop Amount
0 Alan A 4
1 Alan B 2
2 Kate C 1
3 Kate A 3
4 Brian B 5
</code></pre>
<p><strong>First expected output :</strong></p>
<p>I would like to create a new dataframe from df having :</p>
<ul>
<li>as columns, all the possible values in the column <code>Shop</code>, plus the column <code>Name</code></li>
<li>as index, all the possible values in the column <code>Shop</code>, repeated for each value in column <code>Name</code></li>
<li>as values, the value in the column <code>Amount</code> matching the columns <code>Name</code> and <code>Shop</code></li>
</ul>
<p>Expected output :</p>
<pre><code> A B C Name
A 4 2 0 Alan
B 4 2 0 Alan
C 4 2 0 Alan
A 3 0 1 Kate
B 3 0 1 Kate
C 3 0 1 Kate
A 0 5 0 Brian
B 0 5 0 Brian
C 0 5 0 Brian
</code></pre>
<p><strong>Second expected output :</strong></p>
<p>It's almost the same as the first expected output. The only difference is that the value is the one that match with the index (and not column <code>Name</code>) and the column <code>Shop</code>.</p>
<p>Expected output :</p>
<pre><code> A B C Name
A 4 4 4 Alan
B 2 2 2 Alan
C 0 0 0 Alan
A 3 3 3 Kate
B 0 0 0 Kate
C 1 1 1 Kate
A 0 0 0 Brian
B 5 5 5 Brian
C 0 0 0 Brian
</code></pre>
<p>Thanks to <a href="https://stackoverflow.com/questions/26255671/pandas-column-values-to-columns">this post</a>, I tried several scripts using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html" rel="nofollow noreferrer">pivot_table</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">pivot</a>, but I didn't reach my expected outputs. Could you please tell me how to do this?</p>
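<p>A sketch for the first expected output, assuming pandas ≥ 1.2 for <code>how='cross'</code>: pivot once per <code>Name</code>, then cross-join with the list of shops so each <code>Name</code> block is repeated for every shop in the index.</p>
<pre><code>import pandas as pd

wide = (df.pivot_table(index="Name", columns="Shop", values="Amount", fill_value=0)
          .reset_index())
shops = df["Shop"].drop_duplicates().to_frame(name="idx")

# every Name row paired with every shop, the shop becoming the index
out1 = (wide.merge(shops, how="cross")
            .set_index("idx")[["A", "B", "C", "Name"]])
print(out1)
</code></pre>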
|
<python><pandas><dataframe><pivot>
|
2022-12-21 14:30:19
| 1
| 945
|
Ewdlam
|
74,877,408
| 10,535,123
|
How can I validate that a PySpark Dataframe schema follows a particular structure?
|
<p>I want to create a unit test that validates the Dataframe schema by comparing it to a particular schema structure I created. How can I do that?</p>
<p>For example, I have a <code>df</code> and schema -</p>
<pre><code>schema = StructType([
StructField("id",StringType(),True),
StructField("name",StringType(),True)
])
</code></pre>
<p>I want to validate that <code>df.schema == schema</code></p>
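<p>A minimal sketch of such a unit test, comparing the DataFrame's schema against the expected <code>StructType</code> (the <code>build_df()</code> helper is a placeholder for however the DataFrame under test is created):</p>
<pre><code>import unittest
from pyspark.sql.types import StructType, StructField, StringType

EXPECTED_SCHEMA = StructType([
    StructField("id", StringType(), True),
    StructField("name", StringType(), True),
])

class SchemaTest(unittest.TestCase):
    def test_schema_matches(self):
        df = build_df()  # placeholder: build or load the DataFrame under test
        self.assertEqual(df.schema, EXPECTED_SCHEMA)
</code></pre>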
|
<python><apache-spark><pyspark>
|
2022-12-21 14:18:46
| 0
| 829
|
nirkov
|
74,877,324
| 10,759,785
|
How to turn on all axes boundaries of a 3D scatterplot?
|
<p>How can I turn on all axes lines of a 3D scatterplot? More specifically, how can I make a scatterplot like this:</p>
<p><a href="https://i.sstatic.net/mUiuM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mUiuM.png" alt="enter image description here" /></a></p>
<p>Stay like this:</p>
<p><a href="https://i.sstatic.net/WEPpH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WEPpH.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><scatter-plot><axis><scatterplot3d>
|
2022-12-21 14:12:15
| 0
| 331
|
Lucas Oliveira
|
74,877,304
| 51,816
|
How to match number followed by . and a space character?
|
<p>Basically I have some text like this:</p>
<blockquote>
<ol>
<li>first line</li>
<li>second</li>
<li>more lines</li>
<li>bullet points</li>
</ol>
</blockquote>
<p>I separate these line by line so I can process them, but I want to be able to see if a line actually starts with a number then a . and then a space character.</p>
<p>So I can use this to split the line into 2 and process each part separately. The number part with the . and space will be treated differently than the rest.</p>
<p>What's the best way to do this in Python? I didn't want to do a simple number check as characters because the numbers can be anything but likely less than 100.</p>
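<p>A minimal sketch using a regular expression: it only matches when the line starts with digits followed by a dot and a space, and it captures the two parts separately.</p>
<pre><code>import re

line = "12. bullet points"

match = re.match(r"(\d+)\. (.*)", line)
if match:
    number, rest = match.groups()   # number == "12", rest == "bullet points"
</code></pre>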
|
<python><string><string-matching>
|
2022-12-21 14:10:55
| 5
| 333,709
|
Joan Venge
|
74,877,082
| 7,760,910
|
tox unable to find boto3 even though it is installed
|
<p>I have a <code>Python</code> <code>tox</code> project where I run tox to execute the test cases. A few hours back I came across an error that I have been unable to resolve since. My module uses the boto3 library, which is installed using both of these commands:</p>
<pre><code>pip3 install boto3
pip install boto3 //for venv environments
</code></pre>
<p>When I try to install it again it gives me the below stack trace:</p>
<pre><code>Requirement already satisfied: boto3 in ./venv/lib/python3.8/site-packages (1.26.34)
Requirement already satisfied: botocore<1.30.0,>=1.29.34 in ./venv/lib/python3.8/site-packages (from boto3) (1.29.34)
Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in ./venv/lib/python3.8/site-packages (from boto3) (1.0.1)
Requirement already satisfied: s3transfer<0.7.0,>=0.6.0 in ./venv/lib/python3.8/site-packages (from boto3) (0.6.0)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in ./venv/lib/python3.8/site-packages (from botocore<1.30.0,>=1.29.34->boto3) (1.26.13)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in ./venv/lib/python3.8/site-packages (from botocore<1.30.0,>=1.29.34->boto3) (2.8.2)
Requirement already satisfied: six>=1.5 in ./venv/lib/python3.8/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.30.0,>=1.29.34->boto3) (1.16.0)
</code></pre>
<p>But when I run tox it gives me the below error:</p>
<pre><code> File "/Users/tony/IdeaProjects/abc/provisioner/.tox/py38/lib/python3.8/site-packages/api/lambda_handler.py", line 1, in <module>
import boto3
ModuleNotFoundError: No module named 'boto3'
</code></pre>
<p>Is there some path issue? I am using <code>Python 3.8.10</code>. I tried uninstalling and installing the packages but nothing changed.</p>
<p>Any help is much appreciated.</p>
|
<python><boto3><tox>
|
2022-12-21 13:51:30
| 1
| 2,177
|
whatsinthename
|
74,877,011
| 19,155,645
|
pandas: add values which were calculated after grouping to a column in the original dataframe
|
<p>I have a pandas dataframe and want to add a value to a new column ('new') for all rows of each <code>.groupby()</code> group, based on another column ('A'). <br></p>
<p>At the moment I am doing it in several steps by:<br>
1- looping through all unique column A values<br>
2- calculate the value to add (run function on a different column, e.g. 'B')<br>
3- store the value I would like to add to 'new' in a separate list (just one instance in that group!) <br>
4- zip the list of unique groups (<code>.groupby('A').unique()</code>) <br>
5- looping again through the zipped values to store them in the dataframe.</p>
<p>This is a very inefficient way, and takes a long time to run. <br>
Is there a native pandas way to do it in fewer steps that will run faster?</p>
<p>Example code:</p>
<pre><code>mylist = []
df_groups = df.groupby('A')
groups = df['A'].unique()

for group in groups:
    g = df_groups.get_group(group)
    idxmin = g.index.min()
    example = g.loc[idxmin]
    mylist.append(myfunction(example['B']))

zipped = zip(groups, mylist)
df['new'] = np.nan
for group, val in zipped:
    df.loc[df['A'] == group, 'new'] = val
</code></pre>
<p>A better way to do that would be highly appreciated.</p>
<hr />
<p>EDIT 1: <br>
I could just run myfunction on all rows of the dataframe, but since it's a heavy function, it would also take very long, so I would prefer to run it as little as possible (that is, once per group).</p>
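<p>A minimal sketch of the grouped approach, still calling <code>myfunction</code> only once per group (on the row with the smallest index, as in the loop version), then broadcasting the result back with <code>map</code>:</p>
<pre><code>per_group = df.groupby('A')['B'].agg(lambda s: myfunction(s.loc[s.index.min()]))
df['new'] = df['A'].map(per_group)
</code></pre>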
|
<python><pandas><dataframe><loops><optimization>
|
2022-12-21 13:46:18
| 1
| 512
|
ArieAI
|
74,876,955
| 558,639
|
vectorizing a "leaky integrator" in numpy
|
<p>I need a leaky integrator -- an IIR filter -- that implements:</p>
<pre><code>y[i] = x[i] + y[i-1] * leakiness
</code></pre>
<p>The following code works. However, my x vectors are long and this is in an inner loop. So my questions:</p>
<ul>
<li>For efficiency, is there a way to vectorize this in numpy?</li>
<li>If not numpy, would it be advantageous to use one of the scipy.signal filter algorithms?</li>
</ul>
<p>The iterative code follows. <code>state</code> is simply the value of the previous y[i-1] that gets carried forward over successive calls:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def leaky_integrator(x, state, leakiness):
    y = np.zeros(len(x), dtype=np.float32)
    for i in range(len(x)):
        if i == 0:
            y[i] = x[i] + state * leakiness
        else:
            y[i] = x[i] + y[i-1] * leakiness
    return y, y[-1]
>>> leakiness = 0.5
>>> a1 = [1, 0, 0, 0]
>>> state = 0
>>> print("a1=", a1, "state=", state)
a1= [1, 0, 0, 0] state= 0
>>> a2, state = leaky_integrator(a1, state, leakiness)
>>> print("a2=", a2, "state=", state)
a2= [1. 0.5 0.25 0.125] state= 0.125
>>> a3, state = leaky_integrator(a2, state, leakiness)
>>> print("a3=", a3, "state=", state)
a3= [1.0625 1.03125 0.765625 0.5078125] state= 0.5078125
</code></pre>
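<p>A sketch of the vectorized version with <code>scipy.signal.lfilter</code>, which implements exactly this first-order IIR recursion; <code>lfiltic</code> turns the previous output sample into the filter's initial conditions, so the result should match the loop version above.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.signal import lfilter, lfiltic

def leaky_integrator_vec(x, state, leakiness):
    b = [1.0]                        # y[i] = x[i] + leakiness * y[i-1]
    a = [1.0, -leakiness]
    zi = lfiltic(b, a, y=[state])    # initial conditions from the previous output
    y, _ = lfilter(b, a, np.asarray(x, dtype=np.float32), zi=zi)
    return y, y[-1]
</code></pre>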
|
<python><numpy>
|
2022-12-21 13:41:17
| 2
| 35,607
|
fearless_fool
|
74,876,907
| 8,735,352
|
Python bluez dbus: Custom GATT server how to notify int16 Value changed
|
<p>I'm building a custom BLE GATT Server with Python. I took the original <a href="https://github.com/bluez/bluez/blob/master/test/example-gatt-server" rel="nofollow noreferrer">bluez example server</a> and added a Temperature (0x2a6e) characteristic.</p>
<p>From the documentation, it should be a single field 'Temperature' sint16 (2 bytes)</p>
<p>I was able to add a <code>ReadValue</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>def ReadValue(self, options):
    return dbus.Int16(self.value).to_bytes(2, byteorder="little")
</code></pre>
<p>And it appears correctly in nRF Connect app</p>
<p>Now for the notifications, I tried many things, but it never sends the data to the client (btmon has no activity on the Server side). The main approach is this one:</p>
<pre class="lang-py prettyprint-override"><code>self.PropertiesChanged(
    GATT_CHRC_IFACE,
    dbus.Dictionary(
        {
            "Value": dbus.Int16(self.value),
        },
        signature="sv",
    ),
    [],
)
</code></pre>
<p>This leads to the following in dbus (captured with <code>dbus-monitor --system</code>):</p>
<pre><code>signal time=1659004882.858019 sender=:1.129 -> destination=(null destination) serial=26 path=/org/bluez/example/service0/char0; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
string "org.bluez.GattCharacteristic1"
array [
dict entry(
string "Value"
variant int16 156
)
]
array [
]
</code></pre>
<p>But it does not arrive at the mobile app.</p>
<p>I tried changing 'Value' to 'Temperature', adding 'variant_level=1' to Int16, ...</p>
<p>Sending raw bytes could work but I'm not sure how to assemble the payload.</p>
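<p>A sketch of one thing to try (an assumption based on the characteristic's <code>Value</code> property being a byte array on D-Bus rather than an integer): pack the sint16 into two little-endian bytes and emit those in <code>PropertiesChanged</code>.</p>
<pre class="lang-py prettyprint-override"><code>value_bytes = int(self.value).to_bytes(2, byteorder="little", signed=True)

self.PropertiesChanged(
    GATT_CHRC_IFACE,
    dbus.Dictionary(
        {
            # "ay" (array of bytes) instead of a bare int16
            "Value": dbus.Array([dbus.Byte(b) for b in value_bytes], signature="y"),
        },
        signature="sv",
    ),
    [],
)
</code></pre>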
|
<python><bluetooth><dbus><bluez><bluetooth-gatt>
|
2022-12-21 13:37:38
| 1
| 1,081
|
Danilo Fuchs
|
74,876,857
| 524,743
|
How to convert a list of dictionaries to a Bunch object
|
<p>I have a list of the following dictionaries:</p>
<pre><code>[{'Key': 'tag-key-0', 'Value': 'value-0'},
{'Key': 'tag-key-1', 'Value': 'value-1'},
{'Key': 'tag-key-2', 'Value': 'value-2'},
{'Key': 'tag-key-3', 'Value': 'value-3'},
{'Key': 'tag-key-4', 'Value': 'value-4'}]
</code></pre>
<p>Is there an elegant way to convert it to a Bunch object?</p>
<pre><code>Bunch(tag-key-0='value-0', tag-key-1='value-1', tag-key-2='value-2', tag-key-3='value-3', tag-key-4='value-4')
</code></pre>
<p>My current solution is:</p>
<pre><code>from bunch import Bunch

tags_dict = {}
for tag in actual_tags:
    tags_dict[tag['Key']] = tag['Value']

Bunch.fromDict(tags_dict)
</code></pre>
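<p>A slightly more compact version of the same idea, building the intermediate dict with a comprehension and keeping <code>Bunch.fromDict</code>:</p>
<pre><code>from bunch import Bunch

tags = Bunch.fromDict({tag['Key']: tag['Value'] for tag in actual_tags})
</code></pre>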
|
<python><bunch>
|
2022-12-21 13:33:45
| 3
| 3,816
|
Samuel
|
74,876,732
| 7,317,408
|
'numpy.ndarray' object has no attribute 'append'
|
<p>Sorry for the noob question but I am very new to Python.</p>
<p>I have a function which creates a numpy array and adds to it:</p>
<pre><code>def prices(open, index):
    gap_amount = 100
    prices_array = np.array([])
    index = index.vbt.to_ns()

    day = 0
    target_price = 10000
    first_bar_of_day = 0

    for i in range(open.shape[0]):
        first_bar_of_day = 0
        day_changed = vbt.utils.datetime_nb.day_changed_nb(index[i - 1], index[i])

        # if we have a new day
        if (day_changed):
            first_bar_of_day = i
            fist_open_price_of_day = open[first_bar_of_day]
            target_price = increaseByPercentage(fist_open_price_of_day, gap_amount)
            prices_array.append(target_price)

    return prices_array
</code></pre>
<p>And when I append <code>prices_array.append(target_price)</code> I get the following error:</p>
<pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'append'
</code></pre>
<p>What am I doing wrong here?</p>
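<p>For reference, unlike Python lists, NumPy arrays have no <code>append</code> method; <code>np.append</code> is a module-level function that returns a new array, so its result has to be assigned back (or a plain list can be built and converted once at the end):</p>
<pre><code># either reassign the result of np.append ...
prices_array = np.append(prices_array, target_price)

# ... or collect into a list and convert once, avoiding repeated copies
prices_list = []
prices_list.append(target_price)
prices_array = np.array(prices_list)
</code></pre>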
|
<python><pandas><numpy><vectorbt>
|
2022-12-21 13:23:19
| 2
| 3,436
|
a7dc
|
74,876,518
| 12,486,121
|
Autoscaling y axis in interactive matplotlib plot (with slider)
|
<p>I have the following code, which lets me interactively change a function in jupyter notebook:</p>
<pre><code>import numpy as np
from matplotlib.widgets import Slider, Button
import matplotlib.pyplot as plt
%matplotlib notebook
# Define stuff
my_func = lambda x, c: x - c
# Make plot
x = np.arange(0, 50, 1)
c_init = 10
fig, ax = plt.subplots()
line, = ax.plot(x, my_func(x, c_init), lw=2)
# adjust the main plot to make room for the sliders
fig.subplots_adjust(left=0.25, bottom=0.25)
# Make a vertically oriented slider to control the stock volume
ax_c = fig.add_axes([0.1, 0.25, 0.0225, 0.63])
c_slider = Slider(
    ax=ax_c,
    label="My function",
    valmin=-100,
    valmax=100,
    valinit=c_init,
    orientation="vertical"
)
# The function to be called anytime a slider's value changes
def update(val):
    line.set_ydata(my_func(x, c_slider.val))
    fig.canvas.draw_idle()
# register the update function with each slider
c_slider.on_changed(update)
# Create a `matplotlib.widgets.Button` to reset the sliders to initial values.
resetax = fig.add_axes([0.8, 0.025, 0.1, 0.04])
button = Button(resetax, 'Reset', hovercolor='0.975')
def reset(event):
    c_slider.reset()
button.on_clicked(reset)
plt.show()
</code></pre>
<p>While it works nicely, the y axis is not scaled properly as I am changing the constant. I tried to fix this using <code>ax.autoscale()</code>, <code>fig.autoscale()</code> and a bunch of other stuff within the update function, but it didn't work. Does anyone know how to do that?</p>
<p>Thanks for your help.</p>
|
<python><matplotlib><jupyter-notebook><slider><jupyter>
|
2022-12-21 13:06:21
| 1
| 1,076
|
spadel
|
74,876,432
| 14,333,315
|
Two interactive tables with Flask/Python/bootstrap
|
<p>I have an app (Flask/Python and Bootstrap) to show details on the web. Tables are created dynamically based on specific filters from a POST request.
Full information can be displayed in two tables: let's say "Rents" and "Payments". The first column of each table is the id.</p>
<p>Would it be possible to create an HTML page with interactive tables where I can select a row in the "Rents" table and the "Payments" table is updated according to the selected id, showing only the data of that specific rent?</p>
<p>I don't want to send POST/GET requests every time to update the data. I would prefer to have it change dynamically on the client side (of course, if that is possible).</p>
<p><a href="https://i.sstatic.net/aAEJl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aAEJl.png" alt="enter image description here" /></a></p>
|
<python><flask><html-table><bootstrap-5>
|
2022-12-21 12:59:45
| 1
| 470
|
OcMaRUS
|
74,876,384
| 143,091
|
Setuptools doesn't find a package that is installed
|
<p>I am trying to install a package "editable" for developing. I have already installed all dependencies into my virtualenv, but the installation of the package is failing:</p>
<pre class="lang-none prettyprint-override"><code>$ . ./venv/bin/activate
$ pip install -e ../../grader_service
Obtaining file:///Users/jason/src/Grader-Service/grader_service
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... error
error: subprocess-exited-with-error
× Getting requirements to build editable did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
...
File "/private/var/folders/0w/khw84_v56_n_h398ssh0b6l00000gn/T/pip-build-env-50lklzpz/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 12, in <module>
File "/Users/jason/src/Grader-Service/grader_service/grader_service/main.py", line 18, in <module>
import tornado
ModuleNotFoundError: No module named 'tornado'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
</code></pre>
<p>It cannot find <code>tornado</code> although it is clearly installed:</p>
<pre class="lang-none prettyprint-override"><code>$ which python
/Users/jason/src/Grader-Service/examples/dev_environment/venv/bin/python
$ python -c "import tornado"
</code></pre>
<p>I am using Python 3.10.7 from Homebrew on macOS Ventura 13.0.1 (M1). I think setuptools is running python here in a subprocess, and picking up the wrong interpreter, so it can't find the dependency from the virtualenv. Is there any way to fix this?</p>
<p>I've added some debugging prints to my code, and it seems setuptools is importing my main module with a wrong PYTHONPATH (the venv is not included, but some PEP517 stuff is):</p>
<pre class="lang-none prettyprint-override"><code>sys.executable: /Users/jason/src/Grader-Service/examples/dev_environment/venv/bin/python
sys.argv: ['setup.py', 'egg_info']
sys.path: ['/Users/jason/src/Grader-Service/grader_service', '/Users/jason/src/Grader-Service/examples/dev_environment/venv/lib/python3.10/site-packages/pip/_vendor/pep517/in_process', '/private/var/folders/0w/khw84_v56_n_h398ssh0b6l00000gn/T/pip-build-env-ab7ntspc/site', '/opt/homebrew/Cellar/python@3.10/3.10.7/Frameworks/Python.framework/Versions/3.10/lib/python310.zip', '/opt/homebrew/Cellar/python@3.10/3.10.7/Frameworks/Python.framework/Versions/3.10/lib/python3.10', '/opt/homebrew/Cellar/python@3.10/3.10.7/Frameworks/Python.framework/Versions/3.10/lib/python3.10/lib-dynload', '/private/var/folders/0w/khw84_v56_n_h398ssh0b6l00000gn/T/pip-build-env-ab7ntspc/overlay/lib/python3.10/site-packages', '/private/var/folders/0w/khw84_v56_n_h398ssh0b6l00000gn/T/pip-build-env-ab7ntspc/normal/lib/python3.10/site-packages']
</code></pre>
|
<python><virtualenv><setuptools><python-3.10>
|
2022-12-21 12:56:24
| 0
| 10,310
|
jdm
|
74,876,340
| 10,181,236
|
Compute the mean of the counting of row each 10 minutes with pandas
|
<p>I have a dataframe with a timestamp column. I'm able to group the rows of this dataframe by timestamps in the range of 1 minute (or more), as you can see from the code below:</p>
<pre><code>minutes = '1T'
grouped_df=df.loc[df['id_area'] == 3].groupby(pd.to_datetime(df["timestamp"]).dt.floor(minutes))["x"].count()
</code></pre>
<p>When I print the dataframe I get this</p>
<pre><code> timestamp
2022-11-09 14:14:00 3
2022-11-09 14:17:00 2
2022-11-09 14:28:00 1
2022-11-09 15:10:00 1
2022-11-09 15:35:00 1
2022-11-09 16:12:00 1
2022-11-09 16:14:00 1
Name: x, dtype: int64
</code></pre>
<p>I need to group by the timestamp in 10-minute buckets, then count the rows in each ten-minute range and compute the mean.</p>
<p>So, for example, I have 5 in total as the sum in the ten-minute range between 14:10 and 14:20; I need to divide 5 by the number of rows in this range, which is two, and save the nearest integer.</p>
<p>Expected output</p>
<pre><code>timestamp
2022-11-09 14:10:00 3
2022-11-09 14:20:00 1
2022-11-09 15:10:00 1
2022-11-09 15:30:00 1
2022-11-09 16:10:00 1
Name: x, dtype: int64
</code></pre>
|
<python><pandas>
|
2022-12-21 12:52:31
| 1
| 512
|
JayJona
|
74,876,162
| 1,668,622
|
Is there an way to reconfigure `_sysconfigdata.py` or to let it contain dynamic values?
|
<p>Maybe this is a typical "you're doing it wrong" situation - in this case please let me know.</p>
<p>Currently I'm re-using a pre-built Python among different installations located in different directories, and to make this work I need to modify the generated <code>_sysconfigdata.py</code> to contain a certain installation path, which is also available as an environment variable (here <code>SOME_ENV</code>).</p>
<p>This can be done by search/replace a <code>__PLACEHOLDER__</code> (configured with <code>--prefix</code>) inside <code>_sysconfigdata.py</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>'INCLUDEPY': '__PLACEHOLDER__/include/python3.10',
</code></pre>
<p>.. with something dynamic like this:</p>
<pre class="lang-py prettyprint-override"><code>'INCLUDEPY': f"{os.environ['SOME_ENV']}/include/python3.10",
</code></pre>
<p>Direct references of environment variables like</p>
<pre class="lang-py prettyprint-override"><code>'INCLUDEPY': "${SOME_ENV}/include/python3.10",
</code></pre>
<p>seem to not work.</p>
<p>This of course feels like an ugly workaround for a missing built-in way to say "please take this pre-built Python and re-configure it with the <code>prefix</code> I've provided to <code>configure</code>".</p>
<p>Is there a way to "reconfigure" a readily built Python to be located in a different path?</p>
<p>(Background: I need those values in order to be able to build Python packages from source inside a delivered Python installation)</p>
|
<python><environment-variables><configure>
|
2022-12-21 12:37:11
| 0
| 9,958
|
frans
|
74,876,060
| 19,826,650
|
Error parser python read csv data using numpy
|
<p>I have this kind of error in a Jupyter notebook. Is there something that needs to be done?</p>
<p><a href="https://i.sstatic.net/8wvlS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8wvlS.png" alt="error" /></a></p>
<hr />
<pre class="lang-none prettyprint-override"><code>ParserError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_15164\57077545.py in
----> 1 pd.read_csv('datawisata.csv', skip_blank_lines=False)
</code></pre>
<p>Apparently I have blank data in my travel.csv.</p>
<p><a href="https://i.sstatic.net/1p4sa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1p4sa.png" alt="this is view of the blank" /></a></p>
<p>ParserError: Error tokenizing data. C error: Expected 1 fields in line 54, saw 2</p>
<p>Any suggestions on how to parse it? Or maybe it can't be done, so I need to change the blanks into the word 'blank' first?</p>
|
<python><csv>
|
2022-12-21 12:27:26
| 1
| 377
|
Jessen Jie
|
74,875,995
| 3,759,591
|
Ansible - Unhandled error in Python interpreter discovery for
|
<p>I have a problem trying to ping two machines using Ansible; one is Fedora 35, the second is Ubuntu 21.
When I run</p>
<pre><code>ansible all -i inventory -m ping -u salam -k
</code></pre>
<p>I get the following warnings</p>
<blockquote>
<p>[WARNING]: Unhandled error in Python interpreter discovery for host
myubuntuIP: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [myubuntuIP]. Use
ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer
mechanism failed on [myubuntuIP]. Use ANSIBLE_DEBUG=1 to see detailed
information myubuntuIP | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong" }
[WARNING]: Platform unknown on host myfedoraIP is using the discovered Python interpreter at /usr/bin/python, but future
installation of another Python interpreter could change the meaning of
that path. See
<a href="https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html</a>
for more information. myfedoraIP | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"</p>
</blockquote>
<p>when I do</p>
<pre><code>which python3
</code></pre>
<p>on both machines, I get 2 different paths as follows</p>
<blockquote>
<p>/usr/bin/python3 for fedora box /bin/python3 for ubuntu box</p>
</blockquote>
<p>I understand from a thread here that we should indicate the path of Python in the ansible.cfg file. Can I indicate 2 different paths in ansible.cfg? If yes, how? And why is Ansible <strong>not able</strong> to find the Python path?</p>
|
<python><ansible><ansible-inventory>
|
2022-12-21 12:20:51
| 1
| 439
|
eliassal
|
74,875,889
| 1,843,329
|
ResourceWarning about unclosed socket from PySpark toPandas() in unit tests
|
<p>I'm getting a ResourceWarning in every unit test I run on Spark like this:</p>
<pre><code> /opt/conda/lib/python3.9/socket.py:775: ResourceWarning: unclosed <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('127.0.0.1', 37512), raddr=('127.0.0.1', 38975)>
self._sock = None
ResourceWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>I tracked it down to <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.toPandas.html" rel="nofollow noreferrer"><code>DataFrame.toPandas()</code></a>. Example:</p>
<pre><code>import unittest
from pyspark.sql import SparkSession
class PySparkTestCase(unittest.TestCase):
def test_convert_to_pandas_df(self):
spark = SparkSession.builder.master("local[2]").getOrCreate()
rawData = spark.range(10)
print("XXX 1")
pdfData = rawData.toPandas()
print("XXX 2")
print(pdfData)
if __name__ == '__main__':
unittest.main(verbosity=2)
</code></pre>
<p>You'll see the 2 ResourceWarnings just before the <code>XXX 2</code> output line.</p>
<p>However, if you run the same code outside unittest, you <em>won't</em> get the resource warning!</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[2]").getOrCreate()
rawData = spark.range(10)
print("XXX 1")
pdfData = rawData.toPandas()
print("XXX 2")
print(pdfData)
</code></pre>
<p>So, is unittest doing something to cause this resource warning in <code>toPandas()</code>? I appreciate I could hide the resource warning (e.g., see <a href="https://stackoverflow.com/a/26620811/1843329">here</a> or <a href="https://stackoverflow.com/a/65275151/1843329">here</a>), but I'd rather not get the resource warning in the first place!</p>
|
<python><apache-spark><pyspark><python-unittest>
|
2022-12-21 12:11:09
| 1
| 2,937
|
snark
|
74,875,431
| 14,425,501
|
model training starts all over again after unfreezing weights in tensorflow
|
<p>I am training an image classifier using Large EfficientNet:</p>
<pre><code>base_model = EfficientNetV2L(input_shape = (300, 500, 3),
include_top = False,
weights = 'imagenet',
include_preprocessing = True)
model = tf.keras.Sequential([base_model,
layers.GlobalAveragePooling2D(),
layers.Dropout(0.2),
layers.Dense(128, activation = 'relu'),
layers.Dropout(0.3),
layers.Dense(6, activation = 'softmax')])
base_model.trainable = False
model.compile(optimizer = optimizers.Adam(learning_rate = 0.001),
loss = losses.SparseCategoricalCrossentropy(),
metrics = ['accuracy'])
callback = [callbacks.EarlyStopping(monitor = 'val_loss', patience = 2)]
history = model.fit(ds_train, batch_size = 28, validation_data = ds_val, epochs = 20, verbose = 1, callbacks = callback)
</code></pre>
<p>it is working properly.</p>
<p>model summary:</p>
<pre><code>Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
efficientnetv2-l (Functiona (None, 10, 16, 1280) 117746848
l)
global_average_pooling2d (G (None, 1280) 0
lobalAveragePooling2D)
dropout (Dropout) (None, 1280) 0
dense (Dense) (None, 128) 163968
dropout_1 (Dropout) (None, 128) 0
dense_1 (Dense) (None, 6) 774
=================================================================
Total params: 117,911,590
Trainable params: 164,742
Non-trainable params: 117,746,848
_________________________________________________________________
</code></pre>
<p>output:</p>
<pre><code>Epoch 4/20
179/179 [==============================] - 203s 1s/step - loss: 0.1559 - accuracy: 0.9474 - val_loss: 0.1732 - val_accuracy: 0.9428
</code></pre>
<p>But, while fine-tuning it, I am unfreezing some weights:</p>
<pre><code>base_model.trainable = True
fine_tune_at = 900
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
model.compile(optimizer = optimizers.Adam(learning_rate = 0.0001),
loss = losses.SparseCategoricalCrossentropy(),
metrics = ['accuracy'])
history = model.fit(ds_train, batch_size = 28, validation_data = ds_val, epochs = 20, verbose = 1, callbacks = callback)
</code></pre>
<p>model summary:</p>
<pre><code>Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
efficientnetv2-l (Functiona (None, 10, 16, 1280) 117746848
l)
global_average_pooling2d (G (None, 1280) 0
lobalAveragePooling2D)
dropout (Dropout) (None, 1280) 0
dense (Dense) (None, 128) 163968
dropout_1 (Dropout) (None, 128) 0
dense_1 (Dense) (None, 6) 774
=================================================================
Total params: 117,911,590
Trainable params: 44,592,230
Non-trainable params: 73,319,360
_________________________________________________________________
</code></pre>
<p>And it is starting the training all over again. The first time, when I trained it with frozen weights, the loss decreased to 0.1559; after unfreezing the weights, the model started training again from loss = 0.444. Why is this happening? I think fine-tuning shouldn't reset the weights.</p>
|
<python><tensorflow><machine-learning><deep-learning>
|
2022-12-21 11:32:15
| 1
| 1,933
|
Adarsh Wase
|
74,875,416
| 10,181,236
|
Pandas fill the counting of rows on missing datetime
|
<p>I have a dataframe with a timestamp column. I'm able to group the rows of this dataframe by timestamps in the range of 10 minutes, as you can see from the code below:</p>
<pre><code>minutes = '10T'
grouped_df=df.loc[df['id_area'] == 3].groupby(pd.to_datetime(df["timestamp"]).dt.floor(minutes))["x"].count()
</code></pre>
<p>When I print the dataframe I get this</p>
<pre><code>timestamp
2022-11-09 14:10:00 2
2022-11-09 14:20:00 1
2022-11-09 15:10:00 1
2022-11-09 15:30:00 1
2022-11-09 16:10:00 2
Name: x, dtype: int64
</code></pre>
<p>So, as you can see, for example between 14:20 and 15:10 there are no values. I need to fill these steps with 0. How can I do it?</p>
|
<python><pandas>
|
2022-12-21 11:31:10
| 2
| 512
|
JayJona
|
74,875,265
| 2,020,745
|
"The command line is too long" with subprocess.run
|
<p>I'm trying to execute the <code>subprocess.run</code> command.</p>
<p>I have a parameter that's very large - it's basically a SQL statement more than 10000 characters long.</p>
<p>Executing</p>
<pre><code>subprocess.run(["cmd", param1, param2, param3, param4, param5, param6, param7, param8, param9], shell=True)
</code></pre>
<p>returns an error <code>The command line is too long.</code></p>
<p>It seems the limit for total length of parameters is around 8000 chars.</p>
<p>Running python on Windows:</p>
<pre><code>Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] on win32
</code></pre>
<p>Is there a way to pass such a parameter?</p>
|
<python><windows><subprocess>
|
2022-12-21 11:18:59
| 1
| 812
|
saso
|
74,875,189
| 14,256,643
|
selenium python can't scrape video url
|
<p>I am trying to scrape the video URL from this <a href="https://grabagun.com/firearms/handguns/semi-automatic-handguns/glock-19-gen-5-polished-nickel-9mm-4-02-inch-barrel-15-rounds-exclusive.html" rel="nofollow noreferrer">page</a>; see the picture below.</p>
<p><a href="https://i.sstatic.net/Wpa7D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wpa7D.png" alt="enter image description here" /></a></p>
<p>But the video element div is not showing in the Selenium Chrome browser, and I also tried Firefox.</p>
<p>here is my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service as ChromiumService
from webdriver_manager.chrome import ChromeDriverManager
from webdriver_manager.core.utils import ChromeType
from selenium_stealth import stealth
import requests, csv, time
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options,service=ChromiumService(ChromeDriverManager(chrome_type=ChromeType.CHROMIUM).install()))
url = "https://grabagun.com/firearms/handguns/semi-automatic-handguns/glock-19-gen-5-polished-nickel-9mm-4-02-inch-barrel-15-rounds-exclusive.html"
driver.get(url)
</code></pre>
<p>I also tried the Chrome default profile, but it didn't work.</p>
<pre><code>options.add_argument("user-data-dir=C:\\Users\\JHONE\\AppData\\Local\\Google\\Chrome\\User Data")
driver = webdriver.Chrome(options=options,executable_path="C:\\Users\\chromedriver.exe")
</code></pre>
<p>Why is the product video section not showing? What is preventing the video section from being shown when using Selenium?</p>
|
<python><python-3.x><selenium><selenium-webdriver><selenium-chromedriver>
|
2022-12-21 11:13:12
| 0
| 1,647
|
boyenec
|
74,875,143
| 6,368,217
|
How to skip certificate verification in poetry?
|
<p>I'm trying to add a new package using <code>poetry add</code>, but it always comes with this error:</p>
<p><code>HTTPSConnectionPool(host='10.140.240.64', port=443): Max retries exceeded with url: /api/v4/projects/118/packages/pypi/files/47f05b39ebe470235b70724fb049985ea75fad6c1a5007ad3462f3d430da338b/tg_client-0.1.10-py3-none-any.whl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1129)')))</code></p>
<p>Does anyone know how to skip this verification?</p>
<p><strong>Updated:</strong></p>
<p>I am trying to add a package from a private repository:</p>
<pre><code>[[tool.poetry.source]]
name = "my_package"
url = "https://..."
secondary = true
</code></pre>
<p>Maybe that is why the solution <code>poetry config certificates.my_package.cert false</code> doesn't work.</p>
|
<python><ssl><python-poetry>
|
2022-12-21 11:08:12
| 4
| 991
|
Alexander Shpindler
|
74,875,135
| 7,415,524
|
python regex BESTMATCH behaviour
|
<p>I am seeing a behaviour of the python regex library for fuzzy matching that I do not quite understand.</p>
<p>The string I am searching into is:</p>
<pre><code>s="TACGGTTTTAACTTTGCAAGCTTCAGAAGGGATTACTAGCAGTAAAAATGCGGAAATTTCTCTTTATGATGGCGCCACGCTCAATTTGGCTTCAAACAGCGTTAAATTAATGGGTAATGTCAAG"
</code></pre>
<p>and the patterns I am searching for are (5 mismatches allowed):</p>
<pre><code>for match in regex.finditer("(CAAGCTTCAGAAGGGATCACTAGCGATAAA|GGCTTCAAGCAGCGTTAAATTAATGGGTAATGT|AATTTCTCTTTATGAT){s<=5}", s):
print(match)
</code></pre>
<p>My three patterns are found, as expected, using the command above:</p>
<pre><code><regex.Match object; span=(16, 46), match='CAAGCTTCAGAAGGGATTACTAGCAGTAAA', fuzzy_counts=(3, 0, 0)>
<regex.Match object; span=(54, 70), match='AATTTCTCTTTATGAT'>
<regex.Match object; span=(87, 120), match='GGCTTCAAACAGCGTTAAATTAATGGGTAATGT', fuzzy_counts=(1, 0, 0)>
</code></pre>
<p>However if I ask for the best match, the first pattern (starting CAAGCTT) is not found anymore:</p>
<pre><code>for match in regex.finditer("(?b)(CAAGCTTCAGAAGGGATCACTAGCGATAAA|GGCTTCAAGCAGCGTTAAATTAATGGGTAATGT|AATTTCTCTTTATGAT){s<=5}", s):
print(match)
<regex.Match object; span=(54, 70), match='AATTTCTCTTTATGAT'>
<regex.Match object; span=(87, 120), match='GGCTTCAAACAGCGTTAAATTAATGGGTAATGT', fuzzy_counts=(1, 0, 0)>
</code></pre>
<p>How can I explain this? My patterns are not overlapping and, if I search for them separately using bestmatch, they are indeed found!</p>
<p>EDIT</p>
<p>If I modify my string so that the first pattern only has 1 mismatch left (doesn't work with 2), and remove the third pattern from my search, my first pattern will be found along with the second:</p>
<pre><code>s="TACGGTTTTAACTTTGCAAGCTTCAGAAGGGATCACTAGCGGTAAAAATGCGGAAATTTCTCTTTATGATGGCGCCACGCTCAATTTGGCTTCAAACAGCGTTAAATTAATGGGTAATGTCAAG"
for match in regex.finditer("(?b)(CAAGCTTCAGAAGGGATCACTAGCGATAAA|GGCTTCAAGCAGCGTTAAATTAATGGGTAATGT){s<=5}", s):
print(match)
<regex.Match object; span=(16, 46), match='CAAGCTTCAGAAGGGATCACTAGCGGTAAA', fuzzy_counts=(1, 0, 0)>
<regex.Match object; span=(87, 120), match='GGCTTCAAACAGCGTTAAATTAATGGGTAATGT', fuzzy_counts=(1, 0, 0)>
</code></pre>
<p>Is there an unwritten rule that regex provides no more than two best matches, with no more than 1 mismatch?</p>
|
<python><regex><fuzzy-search>
|
2022-12-21 11:07:35
| 1
| 313
|
Agathe
|
74,875,092
| 12,131,472
|
replace the last 2 dates in one column by "2nd day" and "1st day" in a dataframe to make the code dynamic
|
<p>I have this dataframe where the date column contains the 2 latest working days (so they change every day when I run my code) in datetime format:</p>
<pre><code> shortCode date ... value TCE value
shortCode ...
A6TCE 4858 A6TCE 2022-12-19 ... NaN 89857.0
4859 A6TCE 2022-12-20 ... NaN 80632.0
S2TCE 4370 S2TCE 2022-12-19 ... NaN 103858.0
4371 S2TCE 2022-12-20 ... NaN 94453.0
TD1 242 TD1 2022-12-19 ... 56.44 27654.0
243 TD1 2022-12-20 ... 54.89 24594.0
</code></pre>
<p>I wish to write code which dynamically calculates the day-to-day value change for the columns value and TCE value.</p>
<p>What is the easiest way? For the moment I have thought of pivoting the df and calculating the difference into new columns, but then I wish to replace the dates first (let me know if you think this is not necessary, because the aim is to append the day-to-day value change later).</p>
<p>desired look</p>
<pre><code> shortCode date ... value TCE value
shortCode ...
A6TCE 4858 A6TCE 1st day ... NaN 89857.0
4859 A6TCE 2nd day ... NaN 80632.0
S2TCE 4370 S2TCE 1st day ... NaN 103858.0
4371 S2TCE 2nd day ... NaN 94453.0
TD1 242 TD1 1st day ... 56.44 27654.0
243 TD1 2nd day ... 54.89 24594.0
</code></pre>
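<p>For the day-to-day change itself, this is roughly the kind of calculation I have in mind, based on the standard pandas <code>groupby().diff()</code> (rough sketch, not tested on the real frame):</p>
<pre><code># tentative: per-shortCode difference between the 2nd day and the 1st day
# assumes 'shortCode' is a plain column (reset the index first if it is only an index level)
flat = df.reset_index(drop=True)
flat['value change'] = flat.groupby('shortCode')['value'].diff()
flat['TCE change'] = flat.groupby('shortCode')['TCE value'].diff()
</code></pre>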
|
<python><pandas><dataframe>
|
2022-12-21 11:03:24
| 2
| 447
|
neutralname
|
74,874,949
| 20,266,647
|
Issue during import, MLRunNotFoundError
|
<p>I installed the Python package MLRun correctly, but I get this error in Jupyter:</p>
<pre><code>---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
/opt/conda/lib/python3.8/site-packages/mlrun/errors.py in raise_for_status(response, message)
75 try:
---> 76 response.raise_for_status()
77 except requests.HTTPError as exc:
/opt/conda/lib/python3.8/site-packages/requests/models.py in raise_for_status(self)
942 if http_error_msg:
--> 943 raise HTTPError(http_error_msg, response=self)
944
HTTPError: 404 Client Error: Not Found for url: https://mlrun-api.default-tenant.app.iguazio.prod//api/v1/client-spec
...
/opt/conda/lib/python3.8/site-packages/mlrun/errors.py in raise_for_status(response, message)
80 error_message = f"{str(exc)}: {message}"
81 try:
---> 82 raise STATUS_ERRORS[response.status_code](
83 error_message, response=response
84 ) from exc
MLRunNotFoundError: 404 Client Error: Not Found for url: https://mlrun-api.default-tenant.app.iguazio.prod//api/v1/client-spec: details: Not Found
</code></pre>
<p>based on this source code in python:</p>
<pre><code>%pip install mlrun scikit-learn~=1.0
import mlrun
...
</code></pre>
<p>Do you see issues?</p>
<p>BTW: I installed the relevant MLRun version (the same as on the server side). I modified the file <code>mlrun.env</code> and added values for these variables: <code>MLRUN_DBPATH</code>, <code>V3IO_USERNAME</code>, <code>V3IO_ACCESS_KEY</code>.</p>
|
<python><import><jupyter-notebook><mlops><mlrun>
|
2022-12-21 10:52:16
| 1
| 1,390
|
JIST
|
74,874,877
| 13,557,319
|
Grid and Frames in Tkinter
|
<p>Can we place a grid of buttons inside a Frame, or can we use both a frame and pack inside one class?</p>
<p>I wanted to make a simple GUI based calculator in python like this.</p>
<p><a href="https://i.sstatic.net/HI9IT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HI9IT.png" alt="required output" /></a></p>
<p>P.S. I'm new to Python GUI programming, so assume my basics are not clear.</p>
<p>This is what I got by using only grids:</p>
<p><a href="https://i.sstatic.net/93jdL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/93jdL.png" alt="Grid output" /></a></p>
<p><strong>Code</strong></p>
<pre><code>from tkinter import *
root = Tk()
root.title("Calculator")
Entry(root).grid(row=0, column=0)
one = Button(
root,
text="1",
width=5,
height=1
).grid(row=1, column=1)
two = Button(
root,
text="2",
width=5,
height=1
).grid(row=1, column=2)
three = Button(
root,
text="3",
width=5,
height=1
).grid(row=1, column=3)
four = Button(
root,
text="4",
width=5,
height=1
).grid(row=1, column=4)
five = Button(
root,
text="5",
width=5,
height=1
).grid(row=2, column=1)
six = Button(
root,
text="6",
width=5,
height=1
).grid(row=2, column=2)
seven = Button(
root,
text="7",
width=5,
height=1
).grid(row=2, column=3)
eight = Button(
root,
text="8",
width=5,
height=1
).grid(row=2, column=4)
nine = Button(
root,
text="9",
width=5,
height=1
).grid(row=3, column=1)
zero = Button(
root,
text="0",
width=5,
height=1
).grid(row=3, column=2)
sin = Button(
root,
text="sin",
width=5,
height=1
).grid(row=3, column=3)
cos = Button(
root,
text="cos",
width=5,
height=1
).grid(row=3, column=4)
tan = Button(
root,
text="Tan",
width=5,
height=1
).grid(row=4, column=1)
sqrt = Button(
root,
text="Sqt",
width=5,
height=1
).grid(row=4, column=2)
reset = Button(
root,
text="C",
width=5,
height=1
).grid(row=4, column=3)
result = Button(
root,
text="=",
width=5,
height=1
).grid(row=4, column=4)
add = Button(
root,
text="+",
width=5,
height=1
).grid(row=1, column=0)
subtract = Button(
root,
text="-",
width=5,
height=1
).grid(row=2, column=0)
Multiply = Button(
root,
text="*",
width=5,
height=1
).grid(row=3, column=0)
Divide = Button(
root,
text="/",
width=5,
height=1
).grid(row=4, column=0)
root.mainloop()
</code></pre>
<p>I also tried to use a frame for the input field but got this error:</p>
<blockquote>
<p>self.tk.call(
_tkinter.TclError: cannot use geometry manager grid inside . which already has slaves managed by pack</p>
</blockquote>
<p><strong>Code</strong></p>
<pre><code>from tkinter import *
root = Tk()
root.title("Calculator")
# input frame
input_frame = Frame(root)
input_frame.pack()
a = Entry(input_frame, width=100)
a.pack()
one = Button(
root,
text="1",
width=5,
height=1
).grid(row=1, column=1)
two = Button(
root,
text="2",
width=5,
height=1
).grid(row=1, column=2)
root.mainloop()
</code></pre>
<p>Is there any way to use both frames and grids to achieve <a href="https://i.sstatic.net/HI9IT.png" rel="nofollow noreferrer">this</a>?</p>
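<p>For example, is something along these lines the intended pattern, i.e. packing frames into the root and using grid only inside one frame (rough sketch, untested)?</p>
<pre><code>from tkinter import *

root = Tk()
root.title("Calculator")

# the entry lives in its own packed frame
input_frame = Frame(root)
input_frame.pack()
Entry(input_frame, width=40).pack()

# the buttons live in a second packed frame and use grid *inside* that frame
button_frame = Frame(root)
button_frame.pack()
Button(button_frame, text="1", width=5, height=1).grid(row=0, column=0)
Button(button_frame, text="2", width=5, height=1).grid(row=0, column=1)

root.mainloop()
</code></pre>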
|
<python><tkinter><grid><frame>
|
2022-12-21 10:47:30
| 1
| 342
|
Linear Data Structure
|
74,874,854
| 19,155,645
|
pandas: how to check that a certain value in a column repeats maximum once in each group (after groupby)
|
<p>I have a pandas DataFrame which I want to group by column A, and check that a certain value ('test') in column B does not repeat more than once in each group.</p>
<p>Is there a pandas-native way to do the following: <br>
1 - find the groups where 'test' appears in column B more than once? <br>
2 - delete the additional occurrences (keep the one with the min value in column C).</p>
<p>example:</p>
<pre><code> A B C
0 1 test 342
1 1 t 4556
2 1 te 222
3 1 test 56456
4 2 t 234525
5 2 te 123
6 2 test 23434
7 3 test 777
8 3 tes 665
</code></pre>
<p>If I groupby 'A', I see that 'test' appears twice for A==1, which is the case I would like to deal with.</p>
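<p>For part 1, I imagine something along these lines might flag the relevant groups, although I'm not sure it is the idiomatic way (untested sketch):</p>
<pre><code># groups of column A where 'test' shows up in column B more than once
test_counts = df.loc[df['B'] == 'test'].groupby('A').size()
bad_groups = test_counts[test_counts > 1].index.tolist()
</code></pre>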
|
<python><pandas><group-by>
|
2022-12-21 10:45:00
| 1
| 512
|
ArieAI
|
74,874,626
| 14,488,888
|
Finding the first and last occurrences in a numpy array
|
<p>I have a (large) numpy array of boolean values, where the <code>True</code> values indicate the zero-crossing condition of another previously processed signal. In my case, I am only interested in the indexes of the first and last <code>True</code> values in the array, and therefore I'd rather not use <code>np.where</code> or <code>np.argwhere</code>, in order to avoid going through the entire array unnecessarily.</p>
<p>I'd prefer not to convert my array into a <code>list</code> and use the <code>index()</code> method. Or should I?</p>
<p>So far, the best solution I have come up with is the following:</p>
<pre><code># Alias for compactness
enum = enumerate
# Example array, desired output = (3, -5)
a = np.array([False, False, False, True, ..., True, False, False, False, False])
out = next(idx for idx in zip((i for i, x in enum(a) if x), (-i-1 for i, x in enum(a[::-1]) if x)))
print(out)
# (3, -5)
</code></pre>
<p>But I still find it a bit bulky and wonder whether Numpy already implements such functionality with another syntax. Does it?</p>
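<p>A possibly related idea is <code>argmax</code> on the boolean array, although I haven't checked whether that actually avoids a full scan (sketch):</p>
<pre><code># first True scanning from the front, last True scanning from the back
first = int(np.argmax(a))                    # 3 for the example above
last = len(a) - 1 - int(np.argmax(a[::-1]))  # same position as index -5 above
</code></pre>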
|
<python><numpy>
|
2022-12-21 10:27:52
| 3
| 741
|
Martí
|
74,874,622
| 8,113,126
|
Add a vertical line in XLXSWriter graph
|
<p>I need to add a vertical line to the Excel chart, similar to the image below. The chart is developed using XlsxWriter==1.2.9.</p>
<p><a href="https://i.sstatic.net/rYgeS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rYgeS.png" alt="enter image description here" /></a></p>
<p>I am trying to pass a single value (a single-cell range) to the values field, and I removed the categories entry.</p>
<pre><code>chart.add_series({'name': 'Vertical Line',
'values':f"{sheet_name,2,5,2,5}",
'line': {'width': 2.5, 'dash_type': 'dash', 'color': '#9ED8D2'}})
</code></pre>
<p>Please advise.</p>
<p>Picture reference:
<a href="https://www.officetooltips.com/excel_2016/tips/how_to_add_a_vertical_line_to_the_chart.html" rel="nofollow noreferrer">https://www.officetooltips.com/excel_2016/tips/how_to_add_a_vertical_line_to_the_chart.html</a></p>
|
<python><python-3.x><xlsxwriter>
|
2022-12-21 10:27:32
| 1
| 340
|
Jai Simha Ramanujapura
|
74,874,518
| 995,431
|
Python Celery nested chord
|
<p>I'm trying to use nested chord's in Celery, but can't get it to work.</p>
<p>The use-case I have is: first run a single task, the output of that then feeds into a group of multiple tasks, and the output of that group is then intended to feed into another single task.</p>
<p>To debug I started with a minimal application inspired by this old issue: <a href="https://github.com/celery/celery/issues/4161" rel="nofollow noreferrer">https://github.com/celery/celery/issues/4161</a></p>
<p>My test code is</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
from celery import Celery
app = Celery('canvastest', backend='redis://', broker='redis://')
@app.task
def i(x):
return x
</code></pre>
<p>to run it I do:</p>
<ul>
<li><code>celery -A canvastest shell</code></li>
<li><code>celery -A canvastest worker</code></li>
<li><code>docker run -p 6379:6379 redis</code></li>
</ul>
<p>Inside the interactive shell I can the reproduce the example from the issue linked above as follows</p>
<pre><code>Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> chord([i.s(1), i.s(2)])(group(i.s(), i.s())).get(timeout=5)
[[1, 2], [1, 2]]
>>>
</code></pre>
<p>transforming that to match the first half of what I'm trying to do also works</p>
<pre><code>>>> chord(i.s(1))(group(i.s(), i.s())).get(timeout=5)
[[1], [1]]
>>>
</code></pre>
<p>Now, trying to expand that so the output is sent into a single task at the end is where it falls apart:</p>
<pre><code>>>> chord(chord(i.s(1))(group(i.s(), i.s())))(i.s()).get(timeout=5)
Traceback (most recent call last):
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/bin/shell.py", line 71, in _invoke_default_shell
import IPython # noqa
ModuleNotFoundError: No module named 'IPython'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/bin/shell.py", line 74, in _invoke_default_shell
import bpython # noqa
ModuleNotFoundError: No module named 'bpython'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 1377, in __call__
return self.apply_async((), {'body': body} if body else {}, **options)
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 1442, in apply_async
return self.run(tasks, body, args, task_id=task_id, **merged_options)
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 1506, in run
header_result_args = header._freeze_group_tasks(group_id=group_id, chord=body, root_id=root_id)
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 1257, in _freeze_group_tasks
results = list(self._freeze_unroll(
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 1299, in _freeze_unroll
yield task.freeze(group_id=group_id,
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 304, in freeze
return self.AsyncResult(tid)
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/kombu/utils/objects.py", line 30, in __get__
return super().__get__(instance, owner)
File "/usr/lib/python3.10/functools.py", line 981, in __get__
val = self.func(instance)
File "/home/jerkern/abbackend/etl/.venv/lib/python3.10/site-packages/celery/canvas.py", line 471, in AsyncResult
return self.type.AsyncResult
AttributeError: 'AsyncResult' object has no attribute 'AsyncResult'
</code></pre>
<p>Am I just writing the code in the wrong way, or am I hitting some bug/limitation where it's not possible to feed the output from a chord into another chord?</p>
<p>Using celery 5.2.7 and redis 7.0.7</p>
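<p>For what it's worth, I also wondered whether the same flow could be written as a chain instead of nesting chords by hand, but I'm not sure whether that sidesteps the error (untested sketch):</p>
<pre><code>from celery import chain, group

# intended flow: single task -> group of tasks -> single task
workflow = chain(i.s(1), group(i.s(), i.s()), i.s())
result = workflow.apply_async()
</code></pre>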
|
<python><celery><chord>
|
2022-12-21 10:19:48
| 1
| 325
|
ajn
|
74,874,359
| 17,378,883
|
How to sum values in columns in group, keeping row number same?
|
<p>My dataset looks like this:</p>
<pre><code>val1 val2 val3 val4
a1 b 1 c
a1 d 1 k
a2 b 3 c
a2 d 4 k
</code></pre>
<p>I want to sum values in column "val3" grouped by "val1" while keeping all other values same. the number of rows in dataframe shouldn't change. so desired result is:</p>
<pre><code>val1 val2 val3 val4
a1 b 2 c
a1 d 2 k
a2 b 7 c
a2 d 7 k
</code></pre>
<p>How could I do that?</p>
|
<python><python-3.x><dataframe><function><group-by>
|
2022-12-21 10:08:48
| 1
| 397
|
gh1222
|
74,874,310
| 8,708,364
|
Why can't I replace Ellipsis using `pd.DataFrame.replace`?
|
<p>I have this following <code>pd.DataFrame</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': ['a', 'b', 'c', 'd'], 'c': [1.2, 3.4, 5.6, 7.8], 'd': [..., ..., ..., ...]})
>>> df
a b c d
0 1 a 1.2 Ellipsis
1 2 b 3.4 Ellipsis
2 3 c 5.6 Ellipsis
3 4 d 7.8 Ellipsis
>>>
</code></pre>
<p>I am trying to replace the Ellipsis with <code>1</code>.</p>
<p>I know I could do something like:</p>
<pre class="lang-py prettyprint-override"><code>>>> df.mask(df == Ellipsis, 1)
a b c d
0 1 a 1.2 1
1 2 b 3.4 1
2 3 c 5.6 1
3 4 d 7.8 1
>>>
</code></pre>
<p>But, for some reason. If I do:</p>
<pre><code>df.replace(..., 1)
</code></pre>
<p>Or:</p>
<pre><code>df.replace(Ellipsis, 1)
</code></pre>
<p>I get the following error:</p>
<pre><code>TypeError: Expecting 'to_replace' to be either a scalar, array-like, dict or None, got invalid type 'ellipsis'
</code></pre>
<p>Why doesn't <code>replace</code> allow me to replace <code>Ellipsis</code>?</p>
<p>I know how to fix it, I want to know why this happens.</p>
<hr />
<p>The strange thing here is that, I actually can replace numbers with Ellipsis, but not vice versa.</p>
<p>Example:</p>
<pre><code>>>> df.replace(1, ...)
a b c d
0 Ellipsis a 1.2 Ellipsis
1 2 b 3.4 Ellipsis
2 3 c 5.6 Ellipsis
3 4 d 7.8 Ellipsis
>>>
</code></pre>
<hr />
<p>The even stranger thing here mentioned by @jezrael and @phœnix is that:</p>
<pre><code>df.replace({Ellipsis: 1})
</code></pre>
<p>Also:</p>
<pre><code>df.replace({...: 1})
</code></pre>
<p>As well as:</p>
<pre><code>df.replace([Ellipsis], 1)
</code></pre>
<p>And:</p>
<pre><code>df.replace([...], 1)
</code></pre>
<p>Work as expected!</p>
<p>It gives:</p>
<pre><code> a b c d
0 1 a 1.2 1
1 2 b 3.4 1
2 3 c 5.6 1
3 4 d 7.8 1
</code></pre>
|
<python><pandas><dataframe><replace><ellipsis>
|
2022-12-21 10:04:45
| 2
| 71,788
|
U13-Forward
|
74,874,309
| 9,274,940
|
Seaborn to a image object without saving it
|
<p>I have a seaborn figure, and I want to convert the figure to a bytes object.</p>
<p>With plotly I would do something like this:</p>
<pre><code>import plotly.express as px
import plotly.io as pio
wide_df = px.data.medals_wide()
fig = px.bar(wide_df, x="nation", y=["gold", "silver", "bronze"], title="Wide-Form Input, relabelled",
labels={"value": "count", "variable": "medal"})
# Convert the figure to a bytes object
img_bytes = pio.to_image(fig, format='png')
# Pass the bytes object to the other function
other_function(img_bytes)
</code></pre>
<p>With seaborn, I'm using the following, but the figure is being saved to disk instead:</p>
<pre><code>cf_matrix = confusion_matrix([0,1,1], [0,1,0])
ax = sns.heatmap(
cf_matrix, annot=labels, fmt="", vmin=0, vmax=100
)
sns_figure = ax.get_figure()
cm_image = sns_figure.savefig('test_image.png')
</code></pre>
<p>I've also tried the following (it works with a plotly fig), but not with seaborn:</p>
<pre><code>import plotly.io as pio
img_bytes = pio.to_image(fig_sns, format='png')
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>ValueError: The fig parameter must be a dict or Figure. Received value of type <class 'matplotlib.figure.Figure'>: Figure(640x480)
</code></pre>
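<p>For context, the kind of in-memory approach I'm hoping for would look roughly like this (sketch, not verified):</p>
<pre><code>import io

# write the figure into a buffer instead of a file on disk
buf = io.BytesIO()
sns_figure.savefig(buf, format='png')
img_bytes = buf.getvalue()
other_function(img_bytes)
</code></pre>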
|
<python><image><seaborn>
|
2022-12-21 10:04:43
| 1
| 551
|
Tonino Fernandez
|
74,874,262
| 7,760,910
|
unittest in python not working as expected
|
<p>We are deploying the code in <code>CI/CD</code> fashion via <code>Terraform</code>. So, we have a <code>lambda</code> function in which I have written code for retrieving specific <code>secret</code> creds.</p>
<p>Below is my lambda code:</p>
<pre><code>logger = get_provisioner_logger()
session = boto3.session.Session()
client = session.client(
service_name="secretsmanager",
region_name=""
)
def lambda_handler(event, context):
logger.info("lambda invoked with event: " + str(event))
domain_name = "db-creds"
secret_data = getCredentials(domain_name)
acc = secret_data["acc"]
user = secret_data["user"]
password = secret_data["pass"]
#....
#here we are invoking other methods where we are passing the above creds
#....
return handler.handle(event)
def getCredentials(domain_name):
try:
response = client.get_secret_value(SecretId=domain_name)
result = json.loads(response['SecretString'])
print(result)
logger.info("Got value for secret %s.", domain_name)
return result
except UnableToRetrieveDetails as e:
logger.error("unable to retrieve secret details due to ", str(e))
raise e
</code></pre>
<p>Now, I have written a test case where I am trying to mock the client and trying to fake the return value of the response but am unable to do so. Below is my code:</p>
<pre><code>from unittest import TestCase
from unittest.mock import patch
from api.lambda_handler import getCredentials
import json
@patch('api.lambda_handler.client')
class TestSecretManagerMethod(TestCase):
def test_get_secret_creds(self, sm):
sm.response.return_value = {"secret":"gotsomecreds"}
actual = getCredentials("db-creds")
self.assertEqual(actual, "hey")
</code></pre>
<p>It gives me below error:</p>
<pre><code> raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not MagicMock.
</code></pre>
<p>What exactly am I missing here?</p>
|
<python><amazon-web-services><unit-testing><python-unittest><python-unittest.mock>
|
2022-12-21 10:00:55
| 1
| 2,177
|
whatsinthename
|
74,874,235
| 10,490,375
|
Confidence interval in normal Q-Q plot using `statsmodels`
|
<p>I would like to draw a 95% confidence interval in <code>python</code> using <code>statsmodels</code>, but <code>qqplot()</code> doesn't have this option.</p>
<p>If it's not possible, a math equation is also acceptable, so that I can simply add it to the plot on my own.</p>
<pre><code>import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
data = np.random.normal(5, 10, 100)
fig = sm.ProbPlot(data, fit=True).qqplot()
plt.show()
</code></pre>
<p>There is an implementation in <code>R</code> that I found on the internet, but I am not familiar with <code>R</code>...</p>
<p><a href="https://i.sstatic.net/6U7N7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6U7N7.png" alt="enter image description here" /></a></p>
|
<python><statsmodels>
|
2022-12-21 09:59:33
| 1
| 376
|
nochenon
|
74,874,185
| 5,810,015
|
TypeError: a bytes-like object is required, not 'str' when splitting lines from .gz file
|
<p>I have a gz file <strong>sample.gz</strong>.</p>
<pre class="lang-none prettyprint-override"><code>This is first line of sample gz file.
This is second line of sample gz file.
</code></pre>
<p>I read this .gz file and then split it line by line. Once I have individual lines, I further split each line into parts with whitespace as the separator.</p>
<pre><code>import gzip
logfile = "sample.gz"
with gzip.open(logfile) as page:
for line in page:
string = line.split(" ")
print(*string, sep = ',')
</code></pre>
<p>I am expecting output like</p>
<pre class="lang-none prettyprint-override"><code>This,is,first,line,of,sample,gz,file.
This,is,second,line,of,sample,gz,file.
</code></pre>
<p>But instead of the above result, I am receiving a TypeError:</p>
<blockquote>
<p>TypeError: a bytes-like object is required, not 'str'</p>
</blockquote>
<p>Why is the split function not working as it is supposed to?</p>
|
<python><string><split><typeerror>
|
2022-12-21 09:55:56
| 2
| 417
|
data-bite
|
74,873,907
| 10,434,932
|
Match two regex patterns multiple times
|
<p>I have this string "Energy (kWh/m²)" and I want to get "Energy__KWh_m__", meaning, replacing all non word characters and sub/superscript characters with an underscore.</p>
<p>I have the regex for replacing the non word characters -> <code>re.sub("[\W]", "_", column_name)</code> and the regex for replacing the superscript numbers -> <code>re.sub("[²³¹⁰ⁱ⁴⁵⁶⁷⁸⁹⁺⁻⁼⁽⁾ⁿ]", "", column_name)</code></p>
<p>I have tried combining this into one single regex but I have had no luck. Every time I try I only get partial replacements like "Energy (KWh_m__" - with a regex like <strong>([²³¹⁰ⁱ⁴⁵⁶⁷⁸⁹⁺⁻⁼⁽⁾ⁿ]).*(\W)</strong></p>
<p>Any help? Thanks!</p>
|
<python><regex><string>
|
2022-12-21 09:33:33
| 1
| 549
|
J.Doe
|
74,873,778
| 12,468,438
|
Avoid pandas FutureWarning object-dtype columns with all-bool values when possible values are bool and NaN
|
<p>I have ~1 million pandas dataframes containing 0-10,000 rows and 160 columns. In a dataframe 5-10 columns may have values [False, True, np.nan] and are 'object' or 'bool' dtype. Some 'object' dtype columns contain only True or False. I handle all these columns as if they could contain [False, True, np.nan], so no <code>df.loc[df['col']]</code> but <code>df.loc[df['col'] == True]</code>, etc.</p>
<p>When I do a concat on a collection of these frames, occasionally I get <em>In a future version, object-dtype columns with all-bool values will not be included in reductions with bool_only=True. Explicitly cast to bool dtype instead.</em></p>
<p>Both concats below trigger the warning because df2 has a bool-only column with dtype object:</p>
<pre><code>df1 = pd.DataFrame({'foo': np.zeros((2, ), dtype='bool')}, index=[0,1])
df2 = pd.DataFrame({'foo': np.ones((2, ), dtype='bool').astype('object')}, index=[2,3])
df3 = pd.DataFrame({'foo': np.array([np.nan, np.nan])}, index=[6,7])
df_ = pd.concat([df1, df2])
df_ = pd.concat([df2, df3])
</code></pre>
<p>I have two questions:</p>
<ol>
<li><p>Is <code>df = df.infer_objects()</code> the appropriate way to handle this, or would it be better to convert the columns to categorical? Two of my object columns are image thumbnails, but I assume the amount of data in a column has no impact on speed.</p>
</li>
<li><p>Why do I get this warning when concatenating?
In the <a href="https://pandas.pydata.org/docs/whatsnew/v1.5.0.html" rel="noreferrer">pandas release notes for 1.5.0</a> the change is described as <em>Deprecated treating all-bool object-dtype columns as bool-like in DataFrame.any() and DataFrame.all() with bool_only=True, explicitly cast to bool instead (GH46188)</em>. How does concat use any()/all()?</p>
</li>
</ol>
<p>Pandas 1.5.2, python 3.8.15</p>
<p>Slightly related <a href="https://stackoverflow.com/questions/57693544/bool-and-missing-values-in-pandas">Bool and missing values in pandas</a></p>
|
<python><pandas>
|
2022-12-21 09:22:20
| 0
| 314
|
Frank_Coumans
|
74,873,546
| 7,932,866
|
How to handle very big OSMdata with Pyrosm
|
<p>I use Pyrosm for parsing <code>*.osm.pbf</code> files.<br />
On their website it says, "When should I use Pyrosm? However, pyrosm is better suited for situations where you want to fetch data for whole city or larger regions (even whole country)."</p>
<p>However, when I try to parse very big <code>.osm.pbf</code> files, I get memory problems.
Is there a solution for that, e.g. something like chunking in pandas?
Or do I need to split up the file, and if yes, how?</p>
|
<python><parsing><bigdata><openstreetmap>
|
2022-12-21 09:01:33
| 1
| 907
|
Lupos
|
74,873,479
| 15,383,310
|
Google artifact registry list_tags in python - contains invalid argument
|
<p>I'm a code newbie so please be kind :) I've got my Google credentials set and I've checked the documentation for the Python library for Artifact Registry (v1), but I'm clearly doing something daft, as I'm receiving this response back - <code>google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument.</code></p>
<p>Here's my code:</p>
<pre><code>from google.cloud import artifactregistry_v1
# Using Application Default Credentails is easiest
# export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
client = artifactregistry_v1.ArtifactRegistryClient()
# Best obtained from the environment
project = "**REDACTED**"
location = "europe-west2"
repository = "test"
parent = f"projects/{project}/locations/{location}/repositories/{repository}"
request = artifactregistry_v1.ListTagsRequest(parent=parent)
page_result = client.list_tags(request=request)
for response in page_result:
print(response)
</code></pre>
<p>Here's the official docs, I can't work out what I've done wrong: <a href="https://cloud.google.com/python/docs/reference/artifactregistry/latest/google.cloud.artifactregistry_v1.services.artifact_registry.ArtifactRegistryClient#google_cloud_artifactregistry_v1_services_artifact_registry_ArtifactRegistryClient_list_tags" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/artifactregistry/latest/google.cloud.artifactregistry_v1.services.artifact_registry.ArtifactRegistryClient#google_cloud_artifactregistry_v1_services_artifact_registry_ArtifactRegistryClient_list_tags</a></p>
<p><strong>EDIT</strong></p>
<p>I've just seen in the Google docs for the class <code>ListTagsRequest</code> (<a href="https://cloud.google.com/python/docs/reference/artifactregistry/latest/google.cloud.artifactregistry_v1.types.ListTagsRequest" rel="nofollow noreferrer">here</a>) that <code>parent</code> expects a string (which is what I've done), but I noticed that in PyCharm it's highlighted and telling me <code>expected type 'Dict', got 'str' instead</code>....</p>
|
<python><google-cloud-platform><google-artifact-registry>
|
2022-12-21 08:55:27
| 1
| 585
|
sc-leeds
|
74,873,454
| 7,162,781
|
Ray doesn't work in a docker container (linux)
|
<p>I have Python code that uses Ray. It works locally on my Mac, but once I try to run it inside a local Docker container I get the following:</p>
<p>A warning:</p>
<pre><code>WARNING services.py:1922 -- WARNING: The object store is using /tmp instead of /dev/shm
because /dev/shm has only 67108864 bytes available. This will harm performance! You may
be able to free up space by deleting files in /dev/shm. If you are inside a Docker
container, you can increase /dev/shm size by passing '--shm-size=2.39gb' to 'docker run'
(or add it to the run_options list in a Ray cluster config). Make sure to set this to
more than 30% of available RAM.
</code></pre>
<p>after the warning it says: <code>INFO worker.py:1528 -- Started a local Ray instance.</code></p>
<p>and a few seconds later I get this error:</p>
<pre><code>core_worker.cc:179: Failed to register worker
01000000ffffffffffffffffffffffffffffffffffffffffffffffff to Raylet. IOError:
[RayletClient] Unable to register worker with raylet. No such file or directory
</code></pre>
<p>I already tried:</p>
<ol>
<li>increasing the /dev/shm as explained</li>
<li>limit the number of cpus in the ray.init() command (as mentioned <a href="https://stackoverflow.com/questions/72464756/ray-on-slurm-problems-with-initialization">here</a>)</li>
<li>use ray.init(_plasma_directory = 'dev/shm/')</li>
</ol>
<p>I use version 2.1.0 of ray.</p>
<p>The first line of my dockerfile: <code>FROM --platform=linux/amd64 python:3.10.9-slim-bullseye</code>
(without the --platform I can't pip install ray)</p>
<p>Any ideas what I can do to solve it?
Thanks for your help.</p>
|
<python><docker><dockerfile><ray>
|
2022-12-21 08:54:05
| 0
| 393
|
HagaiA
|
74,873,366
| 13,440,165
|
Class init in Python with custom list of parameters, set default if parameters is not explicitly passed
|
<p>I want to define in Python 3.9 a class which gets as a parameter a dictionary or a list of arguments and sets the class attributes in the following manner:</p>
<ol>
<li>If the key name is passed in the argument it uses it to set the corresponding class attribute.</li>
<li>If the key name is not passed, it sets the class attribute with a default value.</li>
</ol>
<p>One way to do this is:</p>
<pre><code>class example():
def __init__(self, x=None, y=None):
if x is None:
self.x = "default_x"
else:
self.x = x
if y is None:
self.y = "default_y"
else:
self.y = y
</code></pre>
<p>or, more concisely (thanks to matszwecja):</p>
<pre><code>class example():
def __init__(self, x="default_x", y="default_y"):
self.x = x
self.y = y
</code></pre>
<p>Is there a more Pythonic way to do so without hard-coding an <code>if</code> for each parameter? I want some flexibility so that if I add another attribute in the future, I won't need to hard-code another <code>if</code> for it.</p>
<p>I want to declare a class with</p>
<pre><code>class example():
def __init__(kwargs):
?
</code></pre>
<p>So that if I pass <code>example(y="some", z=1, w=2)</code> it will generate</p>
<pre><code>self.x = "default_x"
self.y = "some"
self.z = 1
self.w = 2
</code></pre>
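<p>Something in this direction is roughly what I'm imagining, although I'm not sure it counts as Pythonic (rough sketch):</p>
<pre><code>class example():
    DEFAULTS = {"x": "default_x", "y": "default_y"}

    def __init__(self, **kwargs):
        # defaults first, then whatever was actually passed wins
        for name, value in {**self.DEFAULTS, **kwargs}.items():
            setattr(self, name, value)
</code></pre>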
|
<python><python-3.x><class><parameter-passing>
|
2022-12-21 08:46:48
| 2
| 883
|
Triceratops
|
74,873,183
| 5,554,839
|
(python)Getting coordinates of points for dot plot
|
<p>My data is given as 0.1, 0.2, 0.2, 0.2, 0.4, and 0.3.
My goal is to draw a dot plot using a matplotlib scatter plot by plotting these points, which is the solution: (0.1,1) (0.2,1) (0.2,2) (0.2,3) (0.4,1) (0.3,1).
Is there a neat way to obtain these points above? ※The 0.2 appears three times in my data, therefore (0.2,1) (0.2,2) and (0.2,3) are in the solution.
I've tried the Counter class as below; because it counts occurrences, it shows only (0.1,1) (0.2,3) (0.4,1) (0.3,1), which is a subset of the solution.</p>
<pre><code>x = pd.DataFrame({"X":[0.1, 0.2, 0.2, 0.2, 0.4, 0.3]}).X
counter=Counter(x) # How to get {0.1: 1,0.2: 1, 0.2: 2, 0.2: 3, 0.4: 1, 0.3: 1} instead of {0.1: 1, 0.2: 3, 0.4: 1, 0.3: 1}
df=pd.DataFrame.from_dict(counter, orient='index').reset_index().rename(columns={'index':'event', 0:'count'})
plt.scatter(x=df['event'], y=df['count'],alpha=0.3) # This dot plot is not complete. Because there are no points (0.2,1) and (0.2,2)
</code></pre>
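<p>I was also wondering whether pandas itself can produce the repeat index, something like the following (untested):</p>
<pre><code># y-coordinate = how many times this value has appeared so far (1-based)
y = x.groupby(x).cumcount() + 1
plt.scatter(x=x, y=y, alpha=0.3)
</code></pre>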
|
<python><matplotlib><scatter-plot>
|
2022-12-21 08:28:46
| 1
| 501
|
Soon
|
74,873,163
| 12,752,172
|
How to read text inside a div tag in selenium python?
|
<p>I'm trying to load data for a unique id set that loads from the database into a list. When entering an id into the search textbox and clicking on the search button, it will generate an HTML data table. But some ids do not create the data table; instead it shows the message "No resuts to display." Then I need to continue the for loop with the next id to generate the table. How can I check for "No resuts to display." inside the div tag with an if statement?</p>
<p><strong>HTML Code for the table</strong></p>
<pre><code><div id="r1:0:pc1:tx1::db" class="x17e">
<table role="presentation" summary="" class="x17f x184">
<colgroup span="10">
<col style="width:143px;">
<col style="width:105px;">
<col style="width:105px;">
<col style="width:145px;">
<col style="width: 471px;">
<col style="width:145px;">
<col style="width:105px;">
<col style="width:105px;">
<col style="width:105px;">
<col style="width:105px;">
</colgroup>
</table>
No resuts to display.
</div>
</code></pre>
<p>I tried it with the following code, but I can't catch the text.</p>
<pre><code>ele = driver.find_elements("xpath", "//*[@id='r1:0:pc1:tx1::db']/text()")
print(ele)
table = WebDriverWait(driver, 30).until(
EC.visibility_of_element_located((By.XPATH, "//*[@id='r1:0:pc1:tx1::db']/text()")))
</code></pre>
<p>This is my full code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver import Keys
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import time
import mysql.connector
options = Options()
options.add_experimental_option("detach", True)
webdriver_service = Service('/WebDriver/chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 30)
url = "https://"
try:
connection = mysql.connector.connect(host='localhost', database='test_db', user='root', password='456879')
mySql_select_query = """SELECT usdot_id FROM company_usdot WHERE flag=0 """
cursor = connection.cursor()
cursor.execute(mySql_select_query)
usdot_ids = [i[0] for i in cursor.fetchall()]
print(usdot_ids)
except mysql.connector.Error as error:
print("Failed to select record in MySQL table {}".format(error))
finally:
if connection.is_connected():
cursor.close()
connection.close()
print("MySQL connection is closed")
driver.get(url)
for ids in usdot_ids:
time.sleep(5)
wait.until(EC.element_to_be_clickable((By.ID, "r1:0:oitx23::content"))).clear()
wait.until(EC.element_to_be_clickable((By.ID, "r1:0:oitx23::content"))).send_keys(ids, Keys.RETURN)
wait.until(EC.element_to_be_clickable((By.ID, "r1:0:cb2"))).click()
ele = driver.find_elements("xpath", "//*[@id='r1:0:pc1:tx1::db']/text()")
print(ele)
table = WebDriverWait(driver, 30).until(
EC.visibility_of_element_located((By.XPATH, "//*[@id='r1:0:pc1:tx1::db']/text()")))
    if table.text == "No resuts to display.":
        continue  # need to continue the loop to the next id
    else:
        pass  # get the details from the data table here
</code></pre>
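<p><strong>Edit:</strong> one approach I am considering (untested sketch): Selenium cannot return a <code>/text()</code> node as an element, so locate the div itself and check its <code>.text</code> inside the loop:</p>
<pre><code># Untested sketch: placed inside the "for ids in usdot_ids:" loop above.
container = wait.until(
    EC.visibility_of_element_located((By.ID, "r1:0:pc1:tx1::db")))
if "No resuts to display." in container.text:
    continue  # skip this id and move on to the next one
# otherwise read the rows of the generated data table here
rows = container.find_elements(By.TAG_NAME, "tr")
</code></pre>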
|
<python><selenium><web-scraping><xpath><webdriver>
|
2022-12-21 08:27:28
| 1
| 469
|
Sidath
|
74,873,029
| 9,457,809
|
Python environment different between terminal and VSCode
|
<p>I am having issues using my conda environment from within VS Code. This is strange because it usually worked in the past, but recently it has stopped. I read through some posts looking for a solution but was unable to solve it.</p>
<p>I am trying to use a conda environment called <code>jobapp</code>.</p>
<p>In terminal:</p>
<pre><code>(base)
User in JobApp $ conda activate jobapp
(jobapp)
User in JobApp $ which python
/Users/User/opt/anaconda3/envs/jobapp/bin/python
</code></pre>
<p>In VSCode:</p>
<pre><code>(base)
User in jobapp $ conda activate jobapp
(jobapp)
User in jobapp $ which python
/usr/bin/python
</code></pre>
<p>So even though the <code>(jobapp)</code> indicator is making it look like the environment is active, the python path is still wrong.</p>
<p>How can I make it so it works the same as the terminal?</p>
|
<python><visual-studio-code><anaconda><conda>
|
2022-12-21 08:13:37
| 1
| 333
|
Philipp K
|
74,873,026
| 3,971,855
|
How to do a category level fuzzy match in opensearch
|
<p>I have an index (table) in OpenSearch named cat_master_list. A sample is shown below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">code</th>
<th style="text-align: center;">category</th>
<th style="text-align: right;">name</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Specialty</td>
<td style="text-align: right;">Accu</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">Specialty</td>
<td style="text-align: right;">Punture</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Specialty</td>
<td style="text-align: right;">Allergy</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">Specialty</td>
<td style="text-align: right;">Allergist</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">Procedure</td>
<td style="text-align: right;">Pediatric</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: center;">Procedure</td>
<td style="text-align: right;">Surgery</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">Procedure</td>
<td style="text-align: right;">Immunology</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: center;">Procedure</td>
<td style="text-align: right;">Specialist</td>
</tr>
</tbody>
</table>
</div>
<p>Now I know how to do a fuzzy match search on specific columns.
Here is a sample query for that:</p>
<pre><code> query = {
'size': 3,
'query': {
'multi_match': {
'query': 'Accu',
'fields': ['name' ,'category'],
"fuzziness":"AUTO"
}
}
}
</code></pre>
<p>The above query does the fuzzy match on all the rows for the two columns.
Now I want to write a query that does a fuzzy match only on specific rows.
Basically, I want to run the above query where <strong>category='Specialty'</strong>.
I know that I can create a separate index, but I want to do a category-level search on the same index.
Is that possible in OpenSearch?</p>
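<p><strong>Edit:</strong> one approach I am considering (untested sketch): wrap the fuzzy <code>multi_match</code> in a <code>bool</code> query and add a filter on category, assuming <code>category</code> has a keyword sub-field:</p>
<pre><code>query = {
    'size': 3,
    'query': {
        'bool': {
            'must': {
                'multi_match': {
                    'query': 'Accu',
                    'fields': ['name'],
                    'fuzziness': 'AUTO'
                }
            },
            'filter': {
                'term': {'category.keyword': 'Specialty'}
            }
        }
    }
}
</code></pre>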
|
<python><elasticsearch><opensearch>
|
2022-12-21 08:13:30
| 1
| 309
|
BrownBatman
|
74,872,925
| 1,723,626
|
Python allowing @ in alphanumeric regex
|
<p>I only want to allow alphanumeric characters and a few others (<code>+</code>, <code>-</code>, <code>_</code>, whitespace) in my string. But Python seems to allow <code>@</code> and <code>.</code> (dot) along with these. How do I exclude <code>@</code>?</p>
<pre><code>import re
def isInvalid(name):
is_valid_char = bool(re.match('[a-zA-Z0-9+-_\s]+$', name))
if not is_valid_char:
print('Name has invalid characters')
else:
print('Name is valid')
isInvalid('abc @')
</code></pre>
<p>This outputs 'Name is valid'; how do I exclude <code>@</code>? I have tried searching for this, but couldn't find any use case related to <code>@</code>. You can try this snippet here - <a href="https://www.programiz.com/python-programming/online-compiler/" rel="nofollow noreferrer">https://www.programiz.com/python-programming/online-compiler/</a></p>
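<p><strong>Edit:</strong> the likely cause, as far as I can tell: inside <code>[...]</code> the sequence <code>+-_</code> forms a character <em>range</em> from <code>+</code> (43) to <code>_</code> (95), which happens to include <code>@</code> and <code>.</code>. An untested sketch with the hyphen escaped so it stays literal:</p>
<pre><code>import re

def is_valid(name):
    # \- is a literal hyphen, so only alphanumerics, +, -, _ and whitespace pass
    return bool(re.fullmatch(r'[a-zA-Z0-9+\-_\s]+', name))

print(is_valid('abc @'))    # False
print(is_valid('abc-_ 1'))  # True
</code></pre>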
|
<python><regex><python-re>
|
2022-12-21 08:04:42
| 2
| 6,947
|
Optimus Prime
|
74,872,799
| 6,204,063
|
Inherit a function (incl. docstring) as a class method
|
<p>Imagine, there is a function like this</p>
<pre><code>def bar(a: str, b: str):
"""
    Do something with a and b.
"""
print(a, b)
</code></pre>
<p>Then there is a class where I would like to adapt this function but keep the docstring (and also the dynamic autocomplete and help) of the original function.</p>
<pre><code>class Foo:
def foo(self, **kwargs):
bar(a=kwargs.get("a"), b=kwargs.get("b"))
print("This was called fro the class method")
</code></pre>
<p>How do I dynamically link the original function to the class, including docstrings?
And are kwargs the best way to pass the arguments, without repeating them in the definition of the class method?</p>
<p>So when I type <code>Foo.foo()</code> in my IDE or Jupyter notebook, I would like to see the docstring of the original function.</p>
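<p><strong>Edit:</strong> one idea I am considering (untested sketch): <code>functools.wraps(bar)</code> copies the docstring (and signature metadata via <code>__wrapped__</code>) onto the method, so <code>help(Foo.foo)</code> shows the documentation of <code>bar</code>:</p>
<pre><code>import functools

def bar(a: str, b: str):
    """
    Do something with a and b.
    """
    print(a, b)

class Foo:
    @functools.wraps(bar)  # copies __doc__ (and also __name__) from bar
    def foo(self, *args, **kwargs):
        bar(*args, **kwargs)
        print("This was called from the class method")

help(Foo.foo)  # shows bar's docstring and signature
</code></pre>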
|
<python><python-3.x><class><docstring>
|
2022-12-21 07:52:29
| 0
| 410
|
F. Win
|
74,872,742
| 5,684,405
|
SpaCy3 ValueError without message
|
<p>I'm new to spaCy but want to train a simple NER with new labels using <code>en_core_web_trf</code>. So I've written the code below, but I keep getting a <code>ValueError</code> with no message.</p>
<p>How can I fix this?</p>
<pre><code>import random
import spacy
from spacy.training import Example
from spacy.util import minibatch, compounding
def train_spacy_model(data, model='en_core_web_trf', n_iter=30):
if model is not None:
nlp = spacy.load(model) # load existing spaCy model
print("Loaded model '%s'" % model)
else:
nlp = spacy.blank("en") # create blank Language class
print("Created blank 'en' model")
print("ner" in nlp.pipe_names)
if "ner" not in nlp.pipe_names:
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner, last=True)
else:
ner = nlp.get_pipe("ner")
TRAIN_DATA = data
examples = []
for text, annotations in TRAIN_DATA:
examples.append(Example.from_dict(nlp.make_doc(text), annotations))
nlp.initialize(lambda: examples)
pipe_exceptions = ["ner"]
other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]
with nlp.disable_pipes(*other_pipes): # only train NER
optimizer = nlp.create_optimizer()
for itn in range(n_iter):
print ("Starting iteration " + str(itn))
random.shuffle(examples)
losses = {}
batches = minibatch(examples, size=2)
for batch in batches:
nlp.update(
batch,
drop=0.20,
sgd=optimizer,
losses=losses
)
print("Losses", losses)
return nlp
TRAIN_DATA = [
("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
("I like London.", {"entities": [(7, 13, "LOC")]}),
]
nlp = train_spacy_model(data=no_verlaps_dataset, n_iter=30)
</code></pre>
<p>the error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[22], line 51
47 print("Losses", losses)
49 return nlp
---> 51 nlp = train_spacy_model(data=no_verlaps_dataset, n_iter=30)
Cell In[22], line 40, in train_spacy_model(data, model, n_iter)
36 batches = minibatch(examples, size=2)#compounding(4.0, 64.0, 1.2))
37 for batch in batches:
38 # print(batch)
39 # texts, annotations = zip(*batch)
---> 40 nlp.update(
41 batch,
42 drop=0.20,
43 sgd=optimizer,
44 losses=losses
45 )
47 print("Losses", losses)
49 return nlp
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/spacy/language.py:1164, in Language.update(self, examples, _, drop, sgd, losses, component_cfg, exclude, annotates)
1161 for name, proc in self.pipeline:
1162 # ignore statements are used here because mypy ignores hasattr
1163 if name not in exclude and hasattr(proc, "update"):
-> 1164 proc.update(examples, sgd=None, losses=losses, **component_cfg[name]) # type: ignore
1165 if sgd not in (None, False):
1166 if (
1167 name not in exclude
1168 and isinstance(proc, ty.TrainableComponent)
1169 and proc.is_trainable
1170 and proc.model not in (True, False, None)
1171 ):
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/spacy/pipeline/transition_parser.pyx:398, in spacy.pipeline.transition_parser.Parser.update()
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/thinc/model.py:309, in Model.begin_update(self, X)
302 def begin_update(self, X: InT) -> Tuple[OutT, Callable[[InT], OutT]]:
303 """Run the model over a batch of data, returning the output and a
304 callback to complete the backward pass. A tuple (Y, finish_update),
305 where Y is a batch of output data, and finish_update is a callback that
306 takes the gradient with respect to the output and an optimizer function,
307 and returns the gradient with respect to the input.
308 """
--> 309 return self._func(self, X, is_train=True)
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/spacy/ml/tb_framework.py:33, in forward(model, X, is_train)
32 def forward(model, X, is_train):
---> 33 step_model = ParserStepModel(
34 X,
35 model.layers,
36 unseen_classes=model.attrs["unseen_classes"],
37 train=is_train,
38 has_upper=model.attrs["has_upper"],
39 )
41 return step_model, step_model.finish_steps
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/spacy/ml/parser_model.pyx:217, in spacy.ml.parser_model.ParserStepModel.__init__()
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/thinc/model.py:291, in Model.__call__(self, X, is_train)
288 def __call__(self, X: InT, is_train: bool) -> Tuple[OutT, Callable]:
289 """Call the model's `forward` function, returning the output and a
290 callback to compute the gradients via backpropagation."""
--> 291 return self._func(self, X, is_train=is_train)
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/thinc/layers/chain.py:54, in forward(model, X, is_train)
52 callbacks = []
53 for layer in model.layers:
---> 54 Y, inc_layer_grad = layer(X, is_train=is_train)
55 callbacks.append(inc_layer_grad)
56 X = Y
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/thinc/model.py:291, in Model.__call__(self, X, is_train)
288 def __call__(self, X: InT, is_train: bool) -> Tuple[OutT, Callable]:
289 """Call the model's `forward` function, returning the output and a
290 callback to compute the gradients via backpropagation."""
--> 291 return self._func(self, X, is_train=is_train)
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/thinc/layers/chain.py:54, in forward(model, X, is_train)
52 callbacks = []
53 for layer in model.layers:
---> 54 Y, inc_layer_grad = layer(X, is_train=is_train)
55 callbacks.append(inc_layer_grad)
56 X = Y
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/thinc/model.py:291, in Model.__call__(self, X, is_train)
288 def __call__(self, X: InT, is_train: bool) -> Tuple[OutT, Callable]:
289 """Call the model's `forward` function, returning the output and a
290 callback to compute the gradients via backpropagation."""
--> 291 return self._func(self, X, is_train=is_train)
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/spacy_transformers/layers/listener.py:58, in forward(model, docs, is_train)
56 def forward(model: TransformerListener, docs, is_train):
57 if is_train:
---> 58 model.verify_inputs(docs)
59 return model._outputs, model.backprop_and_clear
60 else:
File ~/miniconda3/envs/tvman_ENV/lib/python3.9/site-packages/spacy_transformers/layers/listener.py:47, in TransformerListener.verify_inputs(self, inputs)
45 def verify_inputs(self, inputs):
46 if self._batch_id is None and self._outputs is None:
---> 47 raise ValueError
48 else:
49 batch_id = self.get_batch_id(inputs)
ValueError:
</code></pre>
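<p><strong>Edit:</strong> one thing I am going to try (untested sketch): the <code>ner</code> component of <code>en_core_web_trf</code> listens to the <code>transformer</code> component, so if the transformer is disabled during <code>nlp.update()</code> the listener never receives a batch and raises this bare <code>ValueError</code>. Keeping the transformer enabled might avoid it:</p>
<pre><code># Untested sketch: also keep "transformer" enabled while training "ner"
pipe_exceptions = ["ner", "transformer"]
other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]
with nlp.disable_pipes(*other_pipes):  # ner still receives transformer outputs
    ...
</code></pre>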
|
<python><spacy><spacy-3><spacy-transformers>
|
2022-12-21 07:45:34
| 0
| 2,969
|
mCs
|
74,872,652
| 2,606,766
|
How to overwrite default pytest parameters in pycharm?
|
<p>Whenever I launch a test inside pycharm it is executed w/ the following parameters:</p>
<pre><code>Launching pytest with arguments tests/unittests/routers/test_data.py::TestDataRouter::test_foo --no-header --no-summary -q in ...
</code></pre>
<p>How do I get rid of the <code>--no-header --no-summary -q</code> arguments?</p>
<p>I know I can add arguments in the run configuration, but I cannot find the above arguments anywhere so that I can remove them. I also checked the configuration templates; nothing in there either.</p>
|
<python><pycharm><pytest>
|
2022-12-21 07:35:25
| 1
| 1,945
|
HeyMan
|
74,872,551
| 6,843,103
|
Pyspark Dataframe - how to add multiple columns in dataframe, based on data in 2 columns
|
<p>I have a PySpark dataframe requirement which I need inputs on:</p>
<p>Here is the scenario :</p>
<pre><code>df1 schema:
root
|-- applianceName: string (nullable = true)
|-- customer: string (nullable = true)
|-- daysAgo: integer (nullable = true)
|-- countAnomaliesByDay: long (nullable = true)
Sample Data:
applianceName | customer | daysAgo| countAnomaliesByDay
app1 cust1 0 100
app1 cust1 1 200
app1 cust1 2 300
app1 cust1 3 400
app1 cust1 4 500
app1 cust1 5 600
app1 cust1 6 700
In df1 schema, I need to add columns - day0,day1,day2,day3,day4,day5,day6 as shown below :
applianceName | customer | day0 | day1| day2 | day3 | day4 | day5| day6
app1 cust1 100 200 300 400 500 600 700
i.e. column day0 - will have countAnomaliesByDay when daysAgo =0, column day1 - will have countAnomaliesByDay when daysAgo =1 and so on.
</code></pre>
<p>How do I achieve this?</p>
<p>Thanks in advance!</p>
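<p><strong>Edit:</strong> one approach I am considering (untested sketch): <code>groupBy</code> + <code>pivot</code> on <code>daysAgo</code>, then rename the pivoted columns to <code>day0..day6</code>:</p>
<pre><code>from pyspark.sql import functions as F

# Untested sketch, assuming df1 is the dataframe described above
out = (df1.groupBy("applianceName", "customer")
          .pivot("daysAgo", [0, 1, 2, 3, 4, 5, 6])
          .agg(F.first("countAnomaliesByDay")))
out = out.select("applianceName", "customer",
                 *[F.col(str(d)).alias(f"day{d}") for d in range(7)])
out.show()
</code></pre>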
|
<python><dataframe><apache-spark><join><pyspark>
|
2022-12-21 07:23:08
| 2
| 1,101
|
Karan Alang
|
74,872,517
| 1,539,757
|
How to read and execute python script stored in memory from node js app
|
<p>I have a Node.js application where I store a Python file on the filesystem, execute that script, and log its output using:</p>
<pre><code>const scriptExecution = spawn('python', ['-u', pythonfilePath]);
scriptExecution.stdout.on('data', (data) => {
    console.log(data.toString());  // log the script output
});
</code></pre>
<p>But my pythonfilePath is a path on the filesystem.</p>
<p>So is there any way I can read the Python file content from the DB, keep the file in memory using <strong>memfs</strong>,
and execute the Python script? In other words, can I use memory instead of the filesystem?</p>
|
<python><node.js><memory><filesystems>
|
2022-12-21 07:19:51
| 1
| 2,936
|
pbhle
|
74,872,407
| 6,753,182
|
Why does numpy handle overflows inconsistently?
|
<p>How come the following test works fine on Windows, but fails on Linux:</p>
<pre><code>import numpy as np
print(f"Numpy Version: {np.__version__}")
# version A
versionA = np.eye(4)
versionA[:3, 3] = 2**32 + 1
versionA = versionA.astype(np.uint32)
# version B
versionB = np.eye(4, dtype=np.uint32)
versionB[:3, 3] = np.asarray(2**32 + 1)
# # version C
# # (raises OverflowError)
# versionC = np.eye(4, dtype=np.uint32)
# versionC[:3, 3] = 2**32 + 1
np.testing.assert_array_equal(versionA, versionB)
</code></pre>
<p>I tested this on Windows and Linux with numpy versions:
<code>1.23.4</code>, <code>1.21.5</code>, <code>1.24.0</code>. On Windows, the assignment overflows to <code>1</code> in both versions and the assertion compares as equal. On Linux, on the other hand, <code>versionB</code> overflows to <code>1</code>, but <code>versionA</code> results in assigning <code>0</code>. As a result, I get the following failure:</p>
<pre><code>AssertionError:
Arrays are not equal
Mismatched elements: 3 / 16 (18.8%)
Max absolute difference: 4294967295
Max relative difference: 4.2949673e+09
x: array([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]], dtype=uint32)
y: array([[1, 0, 0, 1],
[0, 1, 0, 1],
[0, 0, 1, 1],
[0, 0, 0, 1]], dtype=uint32)
</code></pre>
<p>Can someone explain why numpy behaves this way?</p>
|
<python><numpy><overflow>
|
2022-12-21 07:08:26
| 1
| 3,290
|
FirefoxMetzger
|
74,872,356
| 17,889,840
|
How to remove list of elements from a Tensorflow tensor
|
<p>For the following tensor:</p>
<pre><code> <tf.Tensor: shape=(2, 10, 6), dtype=int64, numpy=
array([[[ 3, 16, 43, 10, 7, 431],
[ 3, 2, 6, 5, 7, 2],
[ 3, 37, 5, 7, 2, 12],
[ 3, 2, 11, 5, 7, 2],
[ 3, 2, 6, 18, 14, 195],
[ 3, 2, 6, 5, 7, 195],
[ 3, 2, 6, 5, 7, 9],
[ 3, 2, 11, 7, 2, 12],
[ 3, 16, 52, 92, 177, 923],
[ 3, 9, 43, 10, 7, 9]],
[[ 3, 2, 22, 495, 230, 4],
[ 3, 2, 22, 5, 102, 122],
[ 3, 2, 22, 5, 102, 230],
[ 3, 2, 22, 5, 70, 908],
[ 3, 2, 22, 5, 70, 450],
[ 3, 2, 22, 5, 70, 122],
[ 3, 2, 22, 5, 70, 122],
[ 3, 2, 22, 5, 70, 230],
[ 3, 2, 22, 70, 34, 470],
[ 3, 2, 22, 855, 450, 4]]], dtype=int64)>)
</code></pre>
<p>I want to remove the last list <code>[ 3, 2, 22, 855, 450, 4]</code> from the tensor. I tried <code>tf.unstack</code>, but it didn't work.</p>
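<p><strong>Edit:</strong> one approach I am considering (untested sketch): tensors are immutable, so "removing" a row means building a new tensor, and dropping only the last inner row leaves the two batches with different lengths, so the result would have to be a <code>RaggedTensor</code> (or stay flattened):</p>
<pre><code>import tensorflow as tf

t = tf.random.uniform((2, 10, 6), maxval=1000, dtype=tf.int64)  # stand-in for the tensor above

flat = tf.reshape(t, (-1, 6))   # shape (20, 6)
trimmed = flat[:-1]             # drop the last inner list
ragged = tf.RaggedTensor.from_row_lengths(trimmed, row_lengths=[10, 9])
print(ragged.shape)             # (2, None, 6)
</code></pre>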
|
<python><list><tensorflow>
|
2022-12-21 07:04:19
| 3
| 472
|
A_B_Y
|
74,872,315
| 14,553,366
|
Why getting g++ not recognized as a command while running subprocess in python?
|
<p>I have an online IDE which takes code and a language from the user, and upon submission the server has to execute the file. I have g++ installed on my system, yet upon execution I get the following error from the subprocess module:</p>
<pre><code>'g++' is not recognized as an internal or external command,
operable program or batch file.
'.' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>The function for file execution is :</p>
<pre><code>def execute_file(file_name,language):
if(language=="cpp"):
#g++ xyz.cpp
subprocess.call(["g++","code/" + file_name],shell=True) #this only compiles the code
subprocess.call(["./a.out"],shell=True) #this executes the compiled file.
</code></pre>
<p>my code file is there in the /code directory.</p>
<p>The directory structure is :</p>
<p><a href="https://i.sstatic.net/BBYai.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BBYai.png" alt="enter image description here" /></a></p>
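<p><strong>Edit:</strong> one thing I plan to try (untested sketch): drop <code>shell=True</code> so the argument list goes to g++ directly (g++ still has to be on the PATH of the process running the server), give the binary an explicit name, and run it via its real path instead of <code>./a.out</code>:</p>
<pre><code>import os
import subprocess

def execute_file(file_name, language):
    if language == "cpp":
        exe = os.path.join("code", "a.exe" if os.name == "nt" else "a.out")
        # compile, then run the produced binary; check=True raises on failure
        subprocess.run(["g++", os.path.join("code", file_name), "-o", exe], check=True)
        subprocess.run([exe], check=True)
</code></pre>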
|
<python><django><django-rest-framework><subprocess><g++>
|
2022-12-21 07:00:10
| 1
| 430
|
Vatsal A Mehta
|
74,872,261
| 1,489,503
|
Python Web Scraping - Wep Page Resources
|
<p>I'm trying to scrape a specific website, but the data is loaded dynamically. I found that the data is in <a href="https://www.fincaraiz.com.co/_next/data/build/proyecto-de-vivienda/altos-del-eden/el-eden/barranquilla/7109201.json?title=altos-del-eden&location1=el-eden&location2=barranquilla&code=7109201" rel="nofollow noreferrer">json files</a>, but I can't get the list of all the elements on the website, and I need all the pages.</p>
<ul>
<li>How can I get the list of all the similar json starting by number?</li>
<li>How can I go trough all the pages with this logic?</li>
</ul>
<p>I'm not sure what to use. I have tried Scrapy, but it gets too complicated waiting for the page to load; I wanted to know if BeautifulSoup or something else would respond faster.</p>
<p><strong>Edit</strong>: Adding Scrapy Code</p>
<ul>
<li>I wrote this code in Scrapy, but I don't know how to get all the JSON files
from the page dynamically</li>
</ul>
<pre class="lang-py prettyprint-override"><code># https://www.fincaraiz.com.co/_next/data/build/proyecto-de-vivienda/altos-del-eden/el-eden/barranquilla/7109201.json?title=altos-del-eden&location1=el-eden&location2=barranquilla&code=7109201
import logging
import scrapy
from scrapy_playwright.page import PageMethod
import json
# scrapy crawl fincaraiz-home -O output-home.json
class PwspiderSpider(scrapy.Spider):
name = "fincaraiz-home"
base_url = "https://www.fincaraiz.com.co"
build_url = "https://www.fincaraiz.com.co/_next/data/build"
def start_requests(self):
yield scrapy.Request(
"https://www.fincaraiz.com.co/finca-raiz/venta/antioquia",
meta=dict(
playwright=True,
playwright_include_page=True,
playwright_page_methods=[
#PageMethod("wait_for_selector", 'div[id="listingContainer"]')
PageMethod("wait_for_selector", 'button:has-text("1")')
],
),
errback=self.errback,
)
async def parse(self, response):
for anuncio in response.xpath("//div[@id='listingContainer']/div"):
# if anuncio.xpath("article/a/@href").extract():
# yield scrapy.Request(
# self.build_url + anuncio.xpath("article/a/@href").extract()[0]+".json",
# callback=self.parse_json,
# # meta=dict(
# # callback=self.parse_json,
# # # playwright=True,
# # # playwright_include_page=True,
# # # playwright_page_methods=[
# # # PageMethod("wait_for_selector", 'button:has-text("1")')
# # # ],
# # ),
# errback=self.errback,
# )
yield {
"link": anuncio.xpath("article/a/@href").extract(),
"tipo_anuncio": anuncio.xpath("article/a/ul/li[1]/div/span/text()").extract(),
"tipo_vendedor": anuncio.xpath("article/a/ul/li[2]/div/span/text()").extract(),
"valor": anuncio.xpath("article/a/div/section/div[1]/span[1]/b/text()").extract(),
"area": anuncio.xpath("article/a/div/section/div[2]/span[1]/text()").extract(),
"habitaciones": anuncio.xpath("article/a/div/section/div[2]/span[3]/text()").extract(),
"banos": anuncio.xpath("article/a/div/section/div[2]/span[5]/text()").extract(),
"parqueadero": anuncio.xpath("article/a/div/section/div[2]/span[7]/text()").extract(),
"ubicacion": anuncio.xpath("article/a/div/section/div[3]/div/span/text()").extract(),
"imagen": anuncio.xpath("article/a/figure/img/@src").extract(),
"tipo_inmueble": anuncio.xpath("article/a/div/footer/div/span/b/text()").extract(),
"inmobiliaria": anuncio.xpath("article/a/div/footer/div/div/div").extract(),
}
# async def parse_json(self, response):
# yield json.loads(response.text)
def errback(self, failure):
logging.info(
"Handling failure in errback, request=%r, exception=%r", failure.request, failure.value
)
</code></pre>
|
<python><html><web-scraping><beautifulsoup><scrapy>
|
2022-12-21 06:53:24
| 1
| 968
|
Rednaxel
|
74,872,012
| 8,832,997
|
Python LEGB lookup
|
<p>I think I have misunderstood Python's LEGB lookup principle. So first, here is my understanding.</p>
<p>If python interpreter encounters a variable name it at first looks for it in <strong>L</strong>ocal scope, if not found then in <strong>E</strong>nclosing scope, if still not found then in <strong>G</strong>lobal and if not found then at last in <strong>B</strong>uiltin scope.</p>
<p>It behaves as such for the following code:</p>
<pre><code>x = 10
def myfunc():
print(x)
</code></pre>
<p>Running <code>myfunc</code> function will print out 10. Python finds the variable x in Global scope.</p>
<p>But problem arises for the following code:</p>
<pre><code>x = 10
def myfunc():
x = x * 2
print(x)
</code></pre>
<p>Now running the <code>myfunc</code> function causes an <code>UnboundLocalError</code>.
Why can't the Python interpreter use the LEGB lookup here and print <code>20</code>?
I know that in assignment statements Python evaluates the right-hand expression first. So here Python should search for x, find it in the global scope, evaluate the expression <code>x * 2</code> and attach the name <code>x</code> to the value <code>20</code>.
I am surely missing some simple point here.</p>
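<p><strong>Edit:</strong> a sketch of my current understanding (please correct me if wrong): because <code>x</code> is assigned somewhere in the body, the compiler marks <code>x</code> as local for the <em>whole</em> function, so the read on the right-hand side never falls back to the global scope. Declaring it global (or binding a different name) behaves as expected:</p>
<pre><code>x = 10

def myfunc():
    global x       # tell the compiler x refers to the module-level name
    x = x * 2
    print(x)

myfunc()  # prints 20
</code></pre>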
|
<python><function><scope>
|
2022-12-21 06:20:59
| 1
| 1,357
|
Sabbir Ahmed
|
74,872,002
| 4,894,051
|
How can I group and sum a pandas dataframe?
|
<p>I've had a good hunt for some time and can't find a solution so asking here.</p>
<p>I have data like so:</p>
<pre><code>Plan | Quantity_y
Starter | 1
Intermediate | 1
Intermediate | 1
Intermediate | 2
Intermediate | 1
Intermediate | 14
Intermediate | 1
Advanced | 1
Advanced | 1
Advanced | 2
Advanced | 1
Incredible | 1
Incredible | 1
Incredible | 1
Incredible | 1
Incredible | 1
Incredible | 2
Incredible | 2
</code></pre>
<p>and I'd like it to group AND sum the individual numbers like so:</p>
<pre><code>Plan | Quantity_y
Starter | 1
Intermediate | 20
Advanced | 5
Incredible | 9
</code></pre>
<p>I've tried so many options but am having no luck at all.</p>
<p>I've tried doing an <code>iterrows()</code> as well as trying to assign a value by counting and returning the count via a <code>.apply()</code> function call. I'm just not getting how I can group AND take the sum of the numbers in each group.</p>
<p>Any advice would be appreciated.</p>
<p>Thank you</p>
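<p><strong>Edit:</strong> one approach I am considering (untested sketch): <code>groupby("Plan")["Quantity_y"].sum()</code>, with <code>sort=False</code> to keep the original plan order:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "Plan": ["Starter"] + ["Intermediate"] * 6 + ["Advanced"] * 4 + ["Incredible"] * 7,
    "Quantity_y": [1, 1, 1, 2, 1, 14, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2],
})
out = df.groupby("Plan", sort=False, as_index=False)["Quantity_y"].sum()
print(out)
#            Plan  Quantity_y
# 0       Starter           1
# 1  Intermediate          20
# 2      Advanced           5
# 3    Incredible           9
</code></pre>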
|
<python><pandas><dataframe><group-by><sum>
|
2022-12-21 06:19:56
| 1
| 656
|
robster
|
74,871,911
| 8,481,155
|
Dataflow Job to start based on PubSub Notification - Python
|
<p>I am writing a Dataflow job which reads from BigQuery and does a few transformations.</p>
<pre><code>data = (
pipeline
| beam.io.ReadFromBigQuery(query='''
SELECT * FROM `bigquery-public-data.chicago_crime.crime` LIMIT 100
''', use_standard_sql=True)
| beam.Map(print)
)
</code></pre>
<p>But my requirement is to read from BigQuery only after receiving a notification from a Pub/Sub topic. The above Dataflow job should start reading data from BigQuery only if the below message is received. If it is a different job ID or a different status, then no action should be taken.</p>
<pre><code>PubSub Message : {'job_id':101, 'status': 'Success'}
</code></pre>
<p>Any help on this part?</p>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam><google-cloud-pubsub>
|
2022-12-21 06:05:54
| 2
| 701
|
Ashok KS
|
74,871,717
| 4,505,884
|
Install github organization private package using docker build
|
<p>I am part of an organization X. Here, we have a Python package which is added to requirements.txt. I have access to this repository.</p>
<p>When I do pip install <a href="https://github.com/X/repo.git" rel="nofollow noreferrer">https://github.com/X/repo.git</a>, it works fine, because it uses my git identity present on the host, i.e. my local machine.</p>
<p>However, when I do pip install with docker as follows</p>
<pre><code>FROM python:3.8
COPY ./app ./app
COPY ./requirements.txt ./requirements.txt
# Install git
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p -m 0600 ~/.ssh
RUN ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh pip install git+ssh://git@github.com/X/repo_name.git@setup#egg=repo_name
# Install Dependencies
RUN pip install -r ./requirements.txt
# Configure server
ENV HOST="0.0.0.0"
ENV PORT=5000
# CMD
ENTRYPOINT uvicorn app:app --host ${HOST} --port ${PORT}
# Remove SSH Key
RUN rm ~/.ssh/id_rsa
</code></pre>
<p>it is throwing the following error
<a href="https://i.sstatic.net/vbyet.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vbyet.png" alt="enter image description here" /></a></p>
<p>I have set the ssh key in github as well using following <a href="https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent" rel="nofollow noreferrer">approach</a></p>
<p>However, when I do <code>ssh -T username@github.com</code> it throws <code>Permission denied</code>, even though I have owner rights on the repository, which is under an organization.</p>
<p>Not sure how to resolve this issue!</p>
|
<python><git><docker><github><ssh>
|
2022-12-21 05:35:57
| 1
| 502
|
vam
|
74,871,659
| 972,399
|
Is it possible to change an object in an array while looping through the array?
|
<p>I am trying to change the structure of an object inside of an array that I am looping through with a for loop. Nothing seems to happen to the object unless I append it to a new array.</p>
<p>I just can't help but think there is a better way to do this...</p>
<p>I have replaced all the real data and naming with placeholder data just for structure.</p>
<p>Essentially though I just want to restructure <code>obj</code> in the <code>original_array</code> rather than appending to a new one.</p>
<pre><code> rebuilt_array = []
for obj in original_array:
rebuilt_obj = {
"inner_obj_1": {
"item_1": obj["old_item_1"],
"item_2": obj["old_item_2"],
"item_3": obj["old_item_3"],
},
"another_item": obj["old_another_item"],
"inner_obj_2": {
"item_1": obj["old_item_1"],
"item_2": obj["old_item_2"],
"item_3": obj["old_item_3"],
},
}
rebuilt_array.append(rebuilt_obj)
</code></pre>
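<p><strong>Edit:</strong> one approach I am considering (untested sketch): use <code>enumerate</code> to get the index and write the rebuilt dict back into the same slot of <code>original_array</code>:</p>
<pre><code>for i, obj in enumerate(original_array):
    original_array[i] = {
        "inner_obj_1": {
            "item_1": obj["old_item_1"],
            "item_2": obj["old_item_2"],
            "item_3": obj["old_item_3"],
        },
        "another_item": obj["old_another_item"],
        "inner_obj_2": {
            "item_1": obj["old_item_1"],
            "item_2": obj["old_item_2"],
            "item_3": obj["old_item_3"],
        },
    }
</code></pre>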
|
<python>
|
2022-12-21 05:26:56
| 1
| 446
|
alphadmon
|
74,871,654
| 6,077,239
|
Calculate the sum of absolute difference of a column over two adjacent days on id
|
<p>I have a dataframe like this</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
dict(
day=[1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 5],
id=[1, 2, 3, 2, 3, 4, 1, 2, 2, 3, 1, 3, 4],
value=[1, 2, 2, 3, 5, 2, 1, 2, 7, 3, 5, 3, 4],
)
)
</code></pre>
<p>I want to calculate the sum of absolute differences of the 'value' column between every two consecutive days (ordered from smallest to largest), matched by id, treating null/None/unmatched as 0. To be more specific, the result for days 1 and 2 can be calculated as:</p>
<pre><code>Note id: 1 id: 2 id: 3 id: 4 -- difference for each id (treat non-existent as 0)
(1 - 0) + (3 - 2) + (5 - 2) + (2 - 0) = 7
</code></pre>
<p>And, the final result for my example should be:</p>
<pre><code>day res
1-2 7
2-3 9
3-4 9
4-5 16
</code></pre>
<p>How can I achieve the result I want with idiomatic pandas code?</p>
<p>Is it possible to achieve the goal via groupby and some shift operations? One challenge I have with shift is that non-overlapping ids between two days cannot be handled.</p>
<p>Thanks so much for your help!</p>
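<p><strong>Edit:</strong> one idea I am considering (untested sketch): pivot to a day &times; id grid, fill missing combinations with 0, then take absolute row-to-row differences and sum across ids:</p>
<pre><code>grid = df.pivot(index="day", columns="id", values="value").fillna(0).sort_index()
res = grid.diff().abs().sum(axis=1).iloc[1:]
res.index = [f"{a}-{b}" for a, b in zip(grid.index[:-1], grid.index[1:])]
print(res)
# 1-2     7.0
# 2-3     9.0
# 3-4     9.0
# 4-5    16.0
</code></pre>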
|
<python><pandas>
|
2022-12-21 05:26:14
| 1
| 1,153
|
lebesgue
|
74,871,607
| 8,708,364
|
Reverse values in pd.Series by their value type
|
<p>I have this <code>pd.Series</code>:</p>
<pre><code>s = pd.Series([1, 'a', 1.4, 'b', 4, 98, 6.7, 'hello', 98.9])
</code></pre>
<p>My goal is to reverse the order of the values within each value type.</p>
<p>My desired output is:</p>
<pre><code>>>> s = pd.Series([98, 'hello', 98.9, 'b', 4, 1, 6.7, 'a', 1.4])
>>> s
0 98
1 hello
2 98.9
3 b
4 4
5 1
6 6.7
7 a
8 1.4
dtype: object
>>>
</code></pre>
<p>As you can see, the different value types are still in mixed order, but they are reversed by the other same type values.</p>
<ul>
<li><p>The integer order was <code>1, 4, 98</code> and it's now <code>98, 4, 1</code>.</p>
</li>
<li><p>The float order was <code>1.4, 6.7, 98.9</code> and it's now <code>98.9, 6.7, 1.4</code>.</p>
</li>
<li><p>The string order was <code>'a', 'b', 'hello'</code> and it's now <code>'hello', 'b', 'a'</code></p>
</li>
</ul>
<p>What I have tried so far is:</p>
<pre><code>>>> s.to_frame().groupby([[*map(type, s)]], sort=False).apply(lambda x: x.iloc[::-1]).reset_index(drop=True)
0
0 98
1 4
2 1
3 hello
4 b
5 a
6 98.9
7 6.7
8 1.4
>>>
</code></pre>
<p>And yes, they do get reversed in order. But since I'm using <code>groupby</code>, the values are collected into separate groups; they're no longer mixed together in their original positions.</p>
<p>How would I fix this?</p>
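<p><strong>Edit:</strong> one idea I am considering (untested sketch): for each value type, reverse only the values of that type and write them back into the positions that type occupies, so the type layout of the Series is preserved:</p>
<pre><code>types = s.map(type)
out = s.copy()
for t in types.unique():
    mask = types == t
    out[mask] = s[mask].values[::-1]
print(out)
</code></pre>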
|
<python><pandas><group-by><series>
|
2022-12-21 05:20:10
| 2
| 71,788
|
U13-Forward
|
74,871,587
| 20,174,226
|
Improve itertools.pairwise() function
|
<p>I am trying to create a custom function that improves the itertools <a href="https://docs.python.org/3/library/itertools.html#itertools.pairwise" rel="nofollow noreferrer">pairwise</a> function.</p>
<p>Unlike the <strong>pairwise</strong> function, which returns <strong>pairs</strong> of items (<strong>n=2</strong>), I would like to be able to specify the grouping length (<strong>n=x</strong>), so that I can return groups of items of a specified length. Is this possible?</p>
<p>So when <code>n=2</code>, the groupwise function should be equivalent to the <strong>itertools pairwise</strong> function. But when <code>n=3</code>, I would expect it to return groups of 3.</p>
<p>This is what I have so far - which is essentially the same as the pairwise function. ie. <strong>works for n=2</strong>.</p>
<h3>Code:</h3>
<pre><code>from itertools import tee
my_string = 'abcdef'
def groupwise(iterable, n):
a = tee(iterable, n)
next(a[n-1], None)
return zip(a[0], a[1])
for item in groupwise(my_string, 2):
print("".join(item))
</code></pre>
<h3>Output:</h3>
<pre><code>ab
bc
cd
de
ef
</code></pre>
<p>I am trying to modify the code above to accept <code>n=3, n=4, n=5, ... n=x</code>, but I cannot think of a way to provide the undetermined number of components to the zip function considering that I would like to specify any value of <code>n <= len(my_string)</code></p>
<br>
<br>
<p><strong>When n=3 the output should be:</strong></p>
<pre><code>abc
bcd
cde
def
</code></pre>
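<p><strong>Edit:</strong> one idea I am considering (untested sketch): make <code>n</code> independent copies with <code>tee</code>, advance the i-th copy by i positions, and pass them all to <code>zip</code> with unpacking:</p>
<pre><code>from itertools import tee

def groupwise(iterable, n):
    iterators = tee(iterable, n)
    for i, it in enumerate(iterators):
        for _ in range(i):      # advance the i-th iterator by i positions
            next(it, None)
    return zip(*iterators)

for item in groupwise('abcdef', 3):
    print("".join(item))   # abc, bcd, cde, def
</code></pre>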
|
<python><algorithm><python-itertools>
|
2022-12-21 05:16:03
| 2
| 4,125
|
ScottC
|
74,871,496
| 11,332,693
|
AttributeError: 'float' object has no attribute 'split' while splitting the rows with nan in python
|
<p>df</p>
<pre><code>Id Place
Id1 New York,London,Russia
Id2 Argentina
Id3
</code></pre>
<p>I am iterating row-wise and splitting the values. For the third row I am getting the error 'AttributeError: 'float' object has no attribute 'split''. How do I handle NaN cases?</p>
<p>My try</p>
<pre><code>L1 = []
for index, row in df.iterrows():
x = df['Place'].iloc[0].split(",")
L1.append(x)
</code></pre>
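<p><strong>Edit:</strong> one approach I am considering (untested sketch): skip non-string values (NaN is a float) with <code>isinstance</code>, or use <code>df['Place'].str.split(",")</code>, which leaves NaN rows as NaN:</p>
<pre><code>L1 = [p.split(",") if isinstance(p, str) else [] for p in df['Place']]
print(L1)  # [['New York', 'London', 'Russia'], ['Argentina'], []]
</code></pre>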
|
<python><pandas><attributeerror>
|
2022-12-21 04:58:49
| 1
| 417
|
AB14
|
74,871,472
| 122,466
|
pandas - dealing with multilayered columns
|
<p>I am trying to read stock quotes from Yahoo finance using a library.
The returned data seems to have columns stacked over two levels:</p>
<p><a href="https://i.sstatic.net/8Kdcm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Kdcm.png" alt="enter image description here" /></a></p>
<p>I want to get rid of "Adj Close" and have a simple dataframe with just the ticker columns. How do I do this?</p>
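<p><strong>Edit:</strong> one approach I am considering (untested sketch): the columns appear to be a two-level MultiIndex <code>('Adj Close', ticker)</code>; selecting the outer level (or dropping it) should leave only the ticker columns:</p>
<pre><code>df_flat = df["Adj Close"]            # select the outer level
# or, in place:
# df.columns = df.columns.droplevel(0)
print(df_flat.columns.tolist())
</code></pre>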
|
<python><pandas>
|
2022-12-21 04:52:45
| 1
| 10,387
|
Aadith Ramia
|
74,871,374
| 9,668,218
|
How to apply" Initcap" only on records whose values are not all capital letters in a PySpark DataFrame?
|
<p>I have a PySpark DataFrame and I want to apply "Initcap" on a specific column. However, I want this transformation only on records whose value is not all capitals. For example, in the sample dataset below, I don't want to apply "Initcap" on USA:</p>
<pre><code># Prepare Data
data = [(1, "Italy"), \
(2, "italy"), \
(3, "USA"), \
(4, "China"), \
(5, "china")
]
# Create DataFrame
columns= ["ID", "Country"]
df = spark.createDataFrame(data = data, schema = columns)
df.show(truncate=False)
</code></pre>
<p><a href="https://i.sstatic.net/pt400.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pt400.png" alt="enter image description here" /></a></p>
<p>The expected output will be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Country</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>'Italy'</td>
</tr>
<tr>
<td>2</td>
<td>'Italy'</td>
</tr>
<tr>
<td>3</td>
<td>'USA'</td>
</tr>
<tr>
<td>4</td>
<td>'China'</td>
</tr>
<tr>
<td>5</td>
<td>'China'</td>
</tr>
</tbody>
</table>
</div>
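<p><strong>Edit:</strong> one approach I am considering (untested sketch): apply <code>initcap</code> only when the value is not already fully upper-case, otherwise keep it unchanged:</p>
<pre><code>from pyspark.sql import functions as F

df = df.withColumn(
    "Country",
    F.when(F.col("Country") == F.upper(F.col("Country")), F.col("Country"))
     .otherwise(F.initcap(F.col("Country")))
)
df.show(truncate=False)
</code></pre>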
|
<python><dataframe><pyspark><capitalize>
|
2022-12-21 04:36:45
| 3
| 1,033
|
Mohammad
|
74,871,282
| 1,953,475
|
Efficient alternative to for loop with many if statements
|
<p>I have a list where the index represents the year, e.g. y1, y2, y3, etc., and the values represent the company that person worked for.</p>
<p>In the following example, p1[0]=c152 represents p1 in year 0 worked for company c152.</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
p1 c152 c3 c1 c8 c8 c8 c2 c1 c1 c1 c2 c2 c3 c3 c0 c8 c9 c9 c9 c8
p2 c170 c3 c8 c8 c8 c4 c1 c1 c2 c2 c2 c3 c3 c0 c0 c7 c5 c5 c8 c8
^ ^ ^ ^ ^ ^
</code></pre>
<p>I am trying to identify a type of directed relationship - "followership" - defined as: a person changes jobs to a company where a colleague they used to work with has already been for at least a year.</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
p1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
p2 0 0 0 1 0 0 0 1 0 0 1 0 1 0 1 0 0 0 0 1
</code></pre>
<p>I have a python solution that is:</p>
<pre><code>p1 = ['c152', 'c3', 'c1', 'c8', 'c8', 'c8', 'c2', 'c1', 'c1', 'c1', 'c2', 'c2', 'c3', 'c3', 'c0', 'c8', 'c9', 'c9', 'c9', 'c8']
p2 = ['c170', 'c3', 'c8', 'c8', 'c8', 'c4', 'c1', 'c1', 'c2', 'c2', 'c2', 'c3', 'c3', 'c0', 'c0', 'c7', 'c5', 'c5', 'c8', 'c8']
cowk = 0
res = []
for i in range(len(p1)):
if cowk == 0 and p1[i] != p2[i]:
res.append((0,0))
continue
if cowk == 0 and p1[i] == p2[i]:
cowk = 1
res.append((0,0))
continue
if cowk == 1 and p1[i] == p2[i]:
if p1[i-1] == p1[i] and p2[i-1] == p2[i]:
res.append((0,0))
continue
elif p1[i-1] == p1[i]:
res.append((1,0))
continue
# print(f'p1 {i}')
elif p2[i-1] == p2[i]:
res.append((0,1))
continue
# print(f'p2 {i}')
else:
continue
res.append((0,0))
5.79 µs ± 189 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
<p>In retrospect, I feel the real logic is: after a period of co-working, if <strong>there is a hole on the bottom left or top left in a 2x2 rolling window</strong>, add 1 point to the cell diagonal to the hole.</p>
<p>Is there a more efficient way to store the data and efficiently calculate the output? (I feel there is a smarter way like matrix manipulation / vectorization / bit manipulation)</p>
<p><a href="https://i.sstatic.net/xtrQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xtrQv.png" alt="enter image description here" /></a></p>
|
<python><bit-manipulation>
|
2022-12-21 04:21:21
| 2
| 19,728
|
B.Mr.W.
|
74,871,215
| 14,122,835
|
transpose multiple rows to columns with pandas
|
<p>I have this Excel table read into a Jupyter notebook with pandas. I want to melt the upper rows of the table into columns. The table looks as follows:</p>
<pre><code> ori code cgk cgk clg clg
ori city jakarta NaN cilegon NaN
ori prop jakarta NaN banten NaN
ori area jawa NaN jawa NaN
code city district island type a days type b days
001 jakarta jakarta jawa 12000 2 13000 3
002 surabaya surabaya jawa 13000 3 14000 4
</code></pre>
<p>I realize that df.melt should work to transpose the upper rows, but the <code>type</code> and <code>days</code> columns, and also the 4 header rows and the NaN values in them, leave me confused about how to do this correctly.</p>
<p>The desire clean dataframe I need is as follow:</p>
<pre><code>code city district island type price_type days ori_code ori_city ori_prop ori_area
001 jakarta jakarta jawa type a 12000 2 cgk jakarta jakarta jawa
001 jakarta jakarta jawa type b 13000 3 clg cilegon banten jawa
002 surabaya surabaya jawa type a 13000 3 cgk jakarta jakarta jawa
002 surabaya surabaya jawa type b 14000 4 clg cilegon banten jawa
</code></pre>
<p>The <code>ori_code, ori_city, ori_prop, ori_area</code> would become column names.</p>
<p>So far, what I have done is set a fixed index consisting of code, city, district and island.</p>
<pre><code>df = df.set_index(['code','city','district','island'])
</code></pre>
<p>Can anyone help me solve this problem? Any help would be much appreciated. Thank you in advance.</p>
|
<python><pandas><transpose><pandas-melt>
|
2022-12-21 04:07:57
| 1
| 531
|
yangyang
|
74,871,172
| 1,769,197
|
Python: how to speed up this function and make it more scalable?
|
<p>I have the following function, which accepts an indicator matrix of shape (20,000 x 20,000), and I have to run the function 20,000 x 20,000 = 400,000,000 times. Note that <code>indicator_Matrix</code> has to be in the form of a pandas DataFrame when passed as a parameter into the function, as my actual problem's DataFrame has a time index and integer columns, but I have simplified this a bit for the sake of understanding the problem.</p>
<h1>Pandas Implementation</h1>
<pre><code>indicator_Matrix = pd.DataFrame(np.random.randint(0,2,[20000,20000]))
def operations(indicator_Matrix):
s = indicator_Matrix.sum(axis=1)
d = indicator_Matrix.div(s,axis=0)
res = d[d>0].mean(axis=0)
return res.iloc[-1]
</code></pre>
<p>I tried to improve it by using <code>numpy</code>, but it is still taking ages to run. I also tried <code>concurrent.futures.ThreadPoolExecutor</code>, but it still takes a long time to run, with not much improvement over the list comprehension.</p>
<h1>Numpy Implementation</h1>
<pre><code>indicator_Matrix = pd.DataFrame(np.random.randint(0,2,[20000,20000]))
def operations(indicator_Matrix):
s = indicator_Matrix.to_numpy().sum(axis=1)
d = (indicator_Matrix.to_numpy().T / s).T
d = pd.DataFrame(d, index = indicator_Matrix.index, columns = indicator_Matrix.columns)
res = d[d>0].mean(axis=0)
return res.iloc[-1]
output = [operations(indicator_Matrix) for i in range(0,20000**2)]
</code></pre>
<p>Note that the reason I convert <code>d</code> to a dataframe again is because I need to obtain the column means and retain only the last column mean using <code>.iloc[-1]</code>. <code>d[d>0].mean(axis=0)</code> returns column means, i.e.</p>
<pre><code>2478 1.0
0 1.0
</code></pre>
<p><strong>Update:</strong> I am still stuck on this problem. I wonder if using GPU packages like <code>cudf</code> and <code>CuPy</code> on my local desktop would make any difference.</p>
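<p><strong>Update 2:</strong> one idea I am considering (untested sketch): since only <code>.iloc[-1]</code> is used, only the last column's mean is ever needed, so the full division and the DataFrame round-trip could be skipped (assuming every row sum is positive):</p>
<pre><code>import numpy as np

def operations_np(mat):
    a = np.asarray(mat, dtype=float)
    s = a.sum(axis=1)
    col = a[:, -1] / s          # last column of d only
    positive = col > 0
    return col[positive].mean() if positive.any() else np.nan
</code></pre>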
|
<python><pandas><numpy><performance><matrix>
|
2022-12-21 03:57:18
| 3
| 2,253
|
user1769197
|
74,870,994
| 8,739,330
|
Python regex not capturing groups properly
|
<p>I have the following regex <code>(?:RE:\w+|Reference:)\s*((Mr|Mrs|Ms|Miss)?\s+([\w-]+)\s(\w+))</code>.</p>
<p>Input text examples:</p>
<ol>
<li>RE:11567 Miss Jane Doe 12345678</li>
<li>Reference: Miss Jane Doe 12345678</li>
<li>RE:J123 Miss Jane Doe 12345678</li>
<li>RE:J123 Miss Jane Doe 12345678 Reference: Test Company</li>
</ol>
<p><strong>Sample Code:</strong></p>
<pre><code>import re
pattern = re.compile('(?:RE:\w+|Reference:)\s*((Mr|Mrs|Ms|Miss)?\s+([\w-]+)\s(\w+))')
result = pattern.findall('RE:11693 Miss Jane Doe 12345678')
</code></pre>
<p>For all 4 I expect the output <code>('Miss Jane Doe', 'Miss', 'Jane', 'Doe')</code>. However, in the 4th text example I get <code>[('Miss Jane Doe', 'Miss', 'Jane', 'Doe'), (' Test Company', '', 'Test', 'Company')]</code></p>
<p>How can I get the correct output?</p>
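<p><strong>Edit:</strong> one thing I noticed (untested sketch): the title group is optional, so "Reference: Test Company" also matches. Making the title mandatory (assuming it is always present) drops the extra tuple:</p>
<pre><code>import re

pattern = re.compile(r'(?:RE:\w+|Reference:)\s*((Mr|Mrs|Ms|Miss)\s+([\w-]+)\s(\w+))')
text = 'RE:J123 Miss Jane Doe 12345678 Reference: Test Company'
print(pattern.findall(text))  # [('Miss Jane Doe', 'Miss', 'Jane', 'Doe')]
</code></pre>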
|
<python><regex><invoice2data>
|
2022-12-21 03:17:16
| 1
| 2,619
|
West
|
74,870,978
| 399,508
|
how do I pass the name of a dictionary as an argument to my function
|
<pre><code>data = {'employer':'Apppe, Inc', 'title': 'Senior Software Engineer', 'manager': 'Steve Jobs', 'city': 'Cupertino'}
def my_function(**data):
employername = data['employer']
title = data['title']
manager = data['manager']
city = data['city']
print(city)
my_function(**data)
username = input("Enter dictionary name:")
my_function(**username)
</code></pre>
<p>If I enter the text <code>data</code> into the input, I get this strange error.</p>
<pre><code>Enter dictionary name:data
Traceback (most recent call last):
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 31, in <module>
start(fakepyfile,mainpyfile)
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 30, in start
exec(open(mainpyfile).read(), __main__.__dict__)
File "<string>", line 14, in <module>
TypeError: __main__.my_function() argument after ** must be a mapping, not str
[Program finished]
</code></pre>
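<p><strong>Edit:</strong> my current understanding (untested sketch): <code>input()</code> returns a string, not the dictionary object that string names, so <code>**username</code> fails. Keeping the candidate dictionaries in a registry and looking the name up there seems safer than resolving names dynamically:</p>
<pre><code>dictionaries = {"data": data}

username = input("Enter dictionary name:")
my_function(**dictionaries[username])  # KeyError if the name is unknown
</code></pre>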
|
<python><dictionary>
|
2022-12-21 03:12:39
| 1
| 7,655
|
GettingStarted
|
74,870,866
| 9,613,633
|
How do I remove a comment outside of the root element of an XML document using python lxml
|
<p>How do you remove comments above or below the root node of an xml document using python's <code>lxml</code> module? I want to remove only one comment above the root node, NOT all comments in the entire document. For instance, given the following xml document</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- This comment needs to be removed -->
<root>
<!-- This comment needs to STAY -->
<a/>
</root>
</code></pre>
<p>I want to output</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<root>
<!-- This comment needs to STAY -->
<a/>
</root>
</code></pre>
<p>The usual way to remove an element would be to do <code>element.getparent().remove(element)</code>, but this doesn't work for the root element since <code>getparent</code> returns <code>None</code>. I also tried the suggestions from <a href="https://stackoverflow.com/questions/61071155/how-do-i-remove-a-comment-outside-of-the-root-element-of-an-xml-document-using-l">this stackoverflow answer</a>, but the first answer (using a parser that remove comments) removes all comments from the document including the ones I want to keep, and the second answer (adding a dummy opening and closing tag around the document) doesn't work if the document has a directive above the root element.</p>
<p>I can get access to the comment above the root element using the following code, but how do I remove it from the document?</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree as ET
tree = ET.parse("./sample_file.xml")
root = tree.getroot()
comment = root.getprevious()
# What do I do with comment now??
</code></pre>
<p>I've tried doing the following, but none of them worked:</p>
<ol>
<li><code>comment.getparent().remove(comment)</code> says <code>AttributeError: 'NoneType' object has no attribute 'remove'</code></li>
<li><code>del comment</code> does nothing</li>
<li><code>comment.clear()</code> does nothing</li>
<li><code>comment.text = ""</code> renders an empty comment <code><!----></code></li>
<li><code>root.remove(comment)</code> says <code>ValueError: Element is not a child of this node.</code></li>
<li><code>tree.remove(comment)</code> says <code>AttributeError: 'lxml.etree._ElementTree' object has no attribute 'remove'</code></li>
<li><code>tree[:] = [root]</code> says <code>TypeError: 'lxml.etree._ElementTree' object does not support item assignment</code></li>
<li>Initialize a new tree with <code>tree = ET.ElementTree(root)</code>. Serializing this new tree still has the comments somehow.</li>
</ol>
|
<python><xml><lxml>
|
2022-12-21 02:49:53
| 2
| 372
|
Cnoor0171
|
74,870,856
| 5,307,040
|
Using global variables in pytest
|
<p>I am trying to write a test as follows and am ending up with the following error:</p>
<pre><code>def test_retry():
hits = 0
def f():
global hits
hits += 1
1 / 0
with pytest.raises(ZeroDivisionError):
f()
</code></pre>
<p>and get the following error:</p>
<pre><code>> hits += 1
E NameError: name 'hits' is not defined
</code></pre>
<p>But I am curious why this code doesn't work. Does pytest somehow alter the global variables?</p>
<p>I know this can be solved using a list like <code>hits = [0]</code>, but I'm trying to understand why the code doesn't work.</p>
<p>I've also tried using <code>pytest_configure</code>, and that works too.</p>
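<p><strong>Edit:</strong> my current guess (untested sketch): <code>hits</code> is a local variable of <code>test_retry</code>, not a module-level global, so <code>global hits</code> points at a name that does not exist at module level; <code>nonlocal</code> binds to the enclosing function's variable instead, and pytest is not actually involved:</p>
<pre><code>import pytest

def test_retry():
    hits = 0

    def f():
        nonlocal hits   # bind to test_retry's local variable
        hits += 1
        1 / 0

    with pytest.raises(ZeroDivisionError):
        f()
    assert hits == 1
</code></pre>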
|
<python><python-3.x><pytest>
|
2022-12-21 02:48:38
| 1
| 772
|
rakshith91
|