QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, ⌀ allowed) |
|---|---|---|---|---|---|---|---|---|
74,965,506
| 19,214,678
|
How to specify url in sqlalchemy?
|
<p>I am trying to link a database table to a python class using <code>sqlalchemy</code>, and when creating the engine I am getting an error on the URL:</p>
<pre><code># Creating the declarative base class
Base = declarative_base()

class State(Base):
    """Class that links to the states table of the hbtn_0e_6_usa DB"""
    __tablename__ = 'states'
    id = Column(Integer, primary_key=True, autoincrement=True, nullable=False)
    name = Column(String(128), nullable=False)

# Create the engine for the connection
Engine = create_engine(f"mysql+mysqldb://{user}:{passwd}@{host}:{port}/{db}")
Base.metadata.create_all(Engine)
</code></pre>
<p>I am getting an error:</p>
<pre><code>Traceback (most recent call last):
...
MySQLdb.OperationalError: (2005, "Unknown MySQL server host '{part of my password}@{host}' (-2)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
...
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (2005, "Unknown MySQL server host '{part of my password}@{host}' (-2)")
(Background on this error at: https://sqlalche.me/e/14/e3q8)
</code></pre>
<p><strong>Note</strong>: My password has a '@' character, and I think that is why this is happening.</p>
<p>How can I solve this issue?</p>
<p><strong>EDIT</strong>: I have tried to use <code>sqlalchemy.engine.URL.create()</code>, but I get another error:</p>
<pre><code>Engine = sqlalchemy.engine.URL.create(
    drivername="mysql+mysqldb",
    username="root",
    password="la@1993#",
    host="localhost",
    port=3306,
    database="hbtn_0e_6_usa"
)
</code></pre>
<p>I get the error:</p>
<pre><code>AttributeError: 'URL' object has no attribute '_run_ddl_visitor'
</code></pre>
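<p>For context, my understanding (possibly wrong) is that <code>URL.create()</code> only builds a URL object, and that this object then has to be passed to <code>create_engine()</code>; a minimal sketch of what I mean:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.engine import URL

# Build the URL object; this should take care of special characters
# like the '@' in the password
url = URL.create(
    drivername="mysql+mysqldb",
    username="root",
    password="la@1993#",
    host="localhost",
    port=3306,
    database="hbtn_0e_6_usa",
)

# The URL itself is not an Engine, so pass it to create_engine
Engine = create_engine(url)
Base.metadata.create_all(Engine)
</code></pre>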
|
<python><sql><sqlalchemy>
|
2022-12-30 20:18:47
| 1
| 349
|
Leuel Asfaw
|
74,965,445
| 4,796,942
|
Conditionally format cell to end with one of 2 strings using pygsheets
|
<p>I am using pygsheets and have a column that I'd like to conditionally format so that it's either a string ending in "@gmail.com" or a string ending in "@yahoo.com".</p>
<p>It seems to work for one condition, but I am not sure how to extend this to two.</p>
<p>I went through these resources and could not figure out how this would be done.</p>
<ul>
<li><a href="https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/other#ConditionType" rel="nofollow noreferrer">ConditionType</a></li>
<li><a href="https://readthedocs.org/projects/pygsheets/downloads/pdf/stable/" rel="nofollow noreferrer">pygsheets</a></li>
<li><a href="https://pygsheets.readthedocs.io/en/stable/worksheet.html#pygsheets.Worksheet.add_conditional_formatting" rel="nofollow noreferrer">worksheet.add_conditional_formatting</a></li>
</ul>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>email (double conditional, gmail or yahoo)</th>
</tr>
</thead>
<tbody>
<tr>
<td> </td>
</tr>
<tr>
<td> </td>
</tr>
<tr>
<td> </td>
</tr>
<tr>
<td> </td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
</div>
<pre><code>spread_sheet_id = "...insert...spreadsheet...id"
spreadsheet_name = "...spreadsheet_name..."
wks_name_or_pos = "...worksheet_name..."
spreadsheet = pygsheets.Spreadsheet(client=service,id=spread_sheet_id)
wksheet = spreadsheet.worksheet('title',wks_name_or_pos)
wksheet.cell('A1').set_text_format('bold', True).value = 'email (double conditional)'
wksheet.add_conditional_formatting(start='A2', end='A6',
condition_type='TEXT_ENDS_WITH',
format={'backgroundColor':{'red':0.6, 'green':0.9, 'blue':0.6, 'alpha':0}},
condition_values=['@yahoo.com','@gmail.com'])
</code></pre>
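<p>One direction I'm considering (an untested sketch) is simply registering two separate rules, one per suffix, since <code>TEXT_ENDS_WITH</code> seems to take a single condition value:</p>
<pre><code># Hypothetical sketch: one conditional-format rule per suffix
for suffix in ['@gmail.com', '@yahoo.com']:
    wksheet.add_conditional_formatting(start='A2', end='A6',
                                       condition_type='TEXT_ENDS_WITH',
                                       format={'backgroundColor': {'red': 0.6, 'green': 0.9, 'blue': 0.6}},
                                       condition_values=[suffix])
</code></pre>
<p>But I don't know whether two overlapping rules are the idiomatic way to express an "or" here.</p>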
|
<python><google-sheets><google-sheets-api><conditional-formatting><pygsheets>
|
2022-12-30 20:07:42
| 1
| 1,587
|
user4933
|
74,965,372
| 8,025,793
|
Pyspark map function behaving unexpectedly with given method
|
<p>I have written a class that sums values from a 2D list, grouped by an ID column.</p>
<p>The class works just fine when called normally, and the merge function works as intended. However, when handed to the map function of a PySpark RDD pipeline, the result errors whenever I call collect or try to apply the reduce function to it.</p>
<p>As far as I can tell from the error, it looks like it is attempting to reuse values from the class's 'diction' variable, but I can't work out why.</p>
<pre><code># To be used in the MapReduce
class CountAccumilator:
    def __init__(self, target, id_position, sum_position):
        self.id_position = id_position
        self.sum_position = sum_position
        self.diction = {}
        # Sums given group
        self.count(target)

    def count(self, iterable):
        for x in iterable:
            # Either adds value to current key or adds new key with base value
            if x[self.id_position] not in self.diction.keys():
                self.diction[x[self.id_position]] = x[self.sum_position]
            else:
                self.diction[x[self.id_position]] += x[self.sum_position]

    def merge(self, countAccumilator):
        # Same as count. Takes self instead
        for x in countAccumilator.diction.keys():
            if x not in self.diction.keys():
                self.diction[x] = countAccumilator.diction[x]
            else:
                self.diction[x] += countAccumilator.diction[x]


def accumilate_mapper(data):
    return CountAccumilator(data, 0, 1)


def accumilate_reducer(data1, data2):
    data1.merge(data2)
    return data1

testValues = [ ['John', 8],
['Sarah', 3],
['Mike', 7],
['Emily', 2],
['David', 1],
['Jessica', 6],
['Robert', 9],
['Ashley', 4],
['Chris', 5],
['Megan', 2],
['Laura', 8],
['James', 3],
['Emily', 7],
['William', 1],
['Ashley', 6],
['Michael', 9],
['Samantha', 4],
['Christopher', 5],
['Laura', 2],
['David', 8],
['Sarah', 3],
['Christopher', 7],
['Megan', 1],
['John', 6],
['Jessica', 9],
['Robert', 4],
['James', 5],
['Emily', 2],
['Michael', 8],
['William', 3],
['Ashley', 7],
['Chris', 1],
['Samantha', 6],
['Megan', 9],
['Laura', 4],
['David', 5],
['Sarah', 2],
['Christopher', 8],
['John', 3],
['Jessica', 7],
['Robert', 1],
['James', 6],
['Emily', 9],
['Michael', 4],
['William', 5],
['Ashley', 2],
['Chris', 8],
['Samantha', 3],
['Megan', 7],
['Laura', 1],
['David', 6],
['Sarah', 9]
]
test_data = sc.parallelize(testValues)
mapped_test = test_data.map(accumilate_mapper)
totals = mapped_test.reduce(accumilate_reducer)
print(totals.diction)
</code></pre>
|
<python><dictionary><pyspark><mapreduce>
|
2022-12-30 19:58:16
| 0
| 388
|
Parzavil
|
74,965,310
| 209,920
|
What are best practices for adding initial configuration data to a Python Django instance?
|
<p>I have a Django application that has a <code>Setting</code> model. I've manually added the configuration data to the database through the Django Admin UI/Django Console but I want to package up this data and have it automatically created when an individual creates/upgrades an instance of this app. What are some of the best ways to accomplish this?</p>
<p>I have already looked at:</p>
<ol>
<li><a href="https://docs.djangoproject.com/en/4.1/topics/migrations/" rel="nofollow noreferrer">Django Migrations Topics Documentation</a> which includes a section on <a href="https://docs.djangoproject.com/en/4.1/topics/migrations/#data-migrations" rel="nofollow noreferrer">Data Migrations</a>
<ul>
<li>shows a data migration but it involves data already existing in a database being updated, not new data being added.</li>
</ul>
</li>
<li><a href="https://docs.djangoproject.com/en/4.1/ref/migration-operations/" rel="nofollow noreferrer">Django Migration Operations Reference</a>
<ul>
<li>shows examples using <code>RunSQL</code> and <code>RunPython</code> but both only when adding a minimal amount of data.</li>
</ul>
</li>
<li><a href="https://docs.djangoproject.com/en/4.1/howto/writing-migrations/" rel="nofollow noreferrer">How to create database migrations</a>
<ul>
<li>looks at moving data between apps.</li>
</ul>
</li>
<li><a href="https://docs.djangoproject.com/en/4.1/howto/initial-data/" rel="nofollow noreferrer">How to provide initial data for models</a>
<ul>
<li>mentions data migrations covered in the previous docs and providing data with fixtures.</li>
</ul>
</li>
</ol>
<p>Still I haven't found one that seems to line up well with the use case I'm describing.</p>
<p>I also looked at several StackOverflow Q&As on the topic, but all of these are quite old and may not cover any improvements that occurred in the last 8+ years:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/887627/programmatically-using-djangos-loaddata">Programmatically using Django's loaddata</a></li>
<li><a href="https://stackoverflow.com/questions/18677431/providing-initial-data-to-django-model">How can I provide the initial data to a Django model?</a></li>
<li><a href="https://stackoverflow.com/questions/25960850/loading-initial-data-with-django-1-7-and-data-migrations">Loading initial data with Django 1.7+ and data migrations</a></li>
</ul>
<p>There are a decent number of settings in this app, and the above approaches seem pretty laborious as the amount of data increases.</p>
<p>I have exported the data (to JSON) using fixtures (e.g. <code>manage.py dumpdata</code>) but I don't want folks to have to run fixtures to insert data when installing the app.</p>
<p>It feels like there should be a really simple way to say, "hey, grab this csv, json, yaml, etc. file and populate the models."</p>
<p><strong>Current Thoughts</strong></p>
<p>Barring better recommendations from everyone here, my thought is to load the JSON within a data migration and iterate through it, inserting the data with <code>RunSQL</code> or <code>RunPython</code>, roughly as sketched below. Any other suggestions?</p>
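<p>For concreteness, this is the shape of the data migration I have in mind (a sketch; the app label <code>myapp</code> and the <code>settings.json</code> file sitting next to the migration are placeholders):</p>
<pre><code>import json
from pathlib import Path
from django.db import migrations

def load_settings(apps, schema_editor):
    # Use the historical model, not a direct import of Setting
    Setting = apps.get_model("myapp", "Setting")
    data = json.loads((Path(__file__).parent / "settings.json").read_text())
    for row in data:
        Setting.objects.update_or_create(name=row["name"], defaults=row)

class Migration(migrations.Migration):
    dependencies = [("myapp", "0001_initial")]
    operations = [migrations.RunPython(load_settings, migrations.RunPython.noop)]
</code></pre>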
|
<python><json><django><migration><data-migration>
|
2022-12-30 19:50:53
| 3
| 4,452
|
Dave Mackey
|
74,965,265
| 17,975,829
|
How to create a Gunicorn custom application with Click commands?
|
<p>I am trying to build a custom application using the <code>gunicorn</code> server with the <code>flask</code> framework, utilizing <code>click</code> commands.
I used the class <code>App</code> to create an application that has the command <code>say_hello</code>, which outputs <code>Hello</code> to the terminal:</p>
<pre><code>import click
from flask import Flask
from gunicorn.app.base import BaseApplication


class App(BaseApplication):
    def __init__(self, options=None):
        self.options = options or {}
        self.application = Flask(__name__)
        super(App, self).__init__()

    def load_config(self):
        config = dict([(key, value) for key, value in self.options.items()
                       if key in self.cfg.settings and value is not None])
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application

    @click.command()
    def say_hello(self):
        print('Hello')


options = {
    'bind': '%s:%s' % ('127.0.0.1', '5000'),
    'workers': 2
}

app = App(options)
</code></pre>
<p>When I try to run this app using the command <code>gunicorn test:app</code>, I get this error:</p>
<pre><code>[2022-12-30 14:39:20 -0500] [302113] [INFO] Starting gunicorn 20.1.0
[2022-12-30 14:39:20 -0500] [302113] [INFO] Listening at: http://127.0.0.1:8000 (302113)
[2022-12-30 14:39:20 -0500] [302113] [INFO] Using worker: sync
[2022-12-30 14:39:20 -0500] [302114] [INFO] Booting worker with pid: 302114
Application object must be callable.
[2022-12-30 14:39:20 -0500] [302114] [INFO] Worker exiting (pid: 302114)
[2022-12-30 14:39:20 -0500] [302113] [INFO] Shutting down: Master
[2022-12-30 14:39:20 -0500] [302113] [INFO] Reason: App failed to load.
</code></pre>
<p>If I change <code>app = App(options)</code> to <code>app = App(options).application</code>, the application starts, but with only one worker, even though the workers option is set to 2 in the source code.</p>
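<p>(For reference, the pattern I based the class on runs the script directly instead of going through the <code>gunicorn</code> CLI; a minimal sketch:)</p>
<pre><code># Run with `python test.py` rather than `gunicorn test:app`,
# so that load_config() and the options dict are actually used
if __name__ == "__main__":
    App(options).run()
</code></pre>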
<p>How to create a custom application that has Click commands?</p>
|
<python><gunicorn><python-click>
|
2022-12-30 19:43:25
| 1
| 360
|
loaded_dypper
|
74,965,228
| 12,323,468
|
Calculate all possible combinations of column totals using pyspark.pandas
|
<p>I have the following code, which takes the columns in my pandas df and calculates all combinations of column totals, skipping duplicates:</p>
<pre><code>import itertools as it
import pandas as pd

df = pd.DataFrame({'a': [3,4,5,6,3], 'b': [5,7,1,0,5], 'c':[3,4,2,1,3], 'd':[2,0,1,5,9]})
orig_cols = df.columns

for r in range(2, df.shape[1] + 1):
    for cols in it.combinations(orig_cols, r):
        df["_".join(cols)] = df.loc[:, cols].sum(axis=1)
df
</code></pre>
<p>Which generates the desired df:</p>
<p><a href="https://i.sstatic.net/c04ZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c04ZF.png" alt="enter image description here" /></a></p>
<p>To take advantage of distributed computing, I want to run the same code using pyspark.pandas. I convert the df to Spark and apply the same code:</p>
<pre><code>import itertools as it
import pandas as pd
import pyspark.pandas as ps

df = pd.DataFrame({'a': [3,4,5,6,3], 'b': [5,7,1,0,5], 'c':[3,4,2,1,3], 'd':[2,0,1,5,9]})
dfs = ps.from_pandas(df)  # convert from pandas to pyspark
orig_cols = dfs.columns

for r in range(2, dfs.shape[1] + 1):
    for cols in it.combinations(orig_cols, r):
        dfs["_".join(cols)] = dfs.loc[:, cols].sum(axis=1)
dfs
</code></pre>
<p>but I am getting an error message:</p>
<blockquote>
<p>IndexError: tuple index out of range</p>
</blockquote>
<p>Why is the code not working? What change do I need to make so it can work in pyspark?</p>
|
<python><pandas><pyspark>
|
2022-12-30 19:37:06
| 1
| 329
|
jack homareau
|
74,964,957
| 12,860,924
|
How to use SVM classifier with keras CNN model?
|
<p>I am working on predicting seizure epilepsy using a CNN. I have 2 classes in my dataset. I want to use the CNN model as a feature extractor, then use an SVM as the classifier, but I don't know how to do this with my model.</p>
<p>I am using <code>model.fit_generator</code> in my model and I don't have x and y arrays because I use a generator for my data. How can I use a traditional SVM with my model?</p>
<p><strong>My model</strong></p>
<pre><code>input_shape = (33, 3840, 1)
model = Sequential()
# C1
model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
                 input_shape=input_shape))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(BatchNormalization())

model.compile(loss=categorical_focal_loss(), optimizer=opt_adam, metrics=['accuracy'])
history = model.fit_generator(generate_arrays_for_training(indexPat, train_data, start=0, end=100),
                              validation_data=generate_arrays_for_training(indexPat, test_data, start=0, end=100),
                              steps_per_epoch=int((len(train_data)/2)),
                              validation_steps=int((len(test_data)/2)),
                              verbose=2, epochs=65, max_queue_size=2, shuffle=True)
</code></pre>
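<p>My rough plan (an unverified sketch, assuming one-hot labels from the generator) is to reuse the trained conv stack as the feature extractor, collect features for one pass over the data, and fit a scikit-learn SVM on them:</p>
<pre><code>import itertools
import numpy as np
from keras.models import Model
from sklearn.svm import SVC

# Feature extractor: reuse the trained layers (the layer choice is illustrative)
feature_model = Model(inputs=model.input, outputs=model.layers[-1].output)

gen = generate_arrays_for_training(indexPat, train_data, start=0, end=100)
X, y = [], []
# islice bounds the loop because the generator never ends
for batch_x, batch_y in itertools.islice(gen, int(len(train_data) / 2)):
    X.append(feature_model.predict(batch_x))
    y.append(np.argmax(batch_y, axis=1))  # assumes one-hot labels

X = np.concatenate(X)
X = X.reshape(len(X), -1)  # flatten conv features for the SVM
svm = SVC(kernel='rbf')
svm.fit(X, np.concatenate(y))
</code></pre>
<p>But I'm not sure this is the right way to combine the two.</p>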
<p>I want to make SVM classifier as my final classifier in this model so how can I do that?</p>
<p>Any help would be appreciated.
Thanks in advance.</p>
|
<python><tensorflow><machine-learning><keras><svm>
|
2022-12-30 19:01:17
| 0
| 685
|
Eda
|
74,964,902
| 6,843,153
|
How to make Python argparse accept undeclared arguments
|
<p>I have a script that receives command-line arguments using <code>argparse</code>, like this:</p>
<pre><code>parser = argparse.ArgumentParser()
parser.add_argument(
"--local", action="store_true", help=("Write outputs to local disk.")
)
parser.add_argument(
"--swe", action="store_true", help=("Indicates 'staging' environment.")
)
args = parser.parse_args()
</code></pre>
<p>I want <code>argparse</code> to handle undeclared arguments and simply add them to the arguments dictionary, but it raises an error when I try <code>python runner.py --local true --swe true --undeclared_argument this_is_a_test</code>:</p>
<pre><code>runner.py: error: unrecognized arguments: --undeclared_argument this_is_a_test
</code></pre>
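<p>The closest I've found so far is <code>parse_known_args()</code>, which at least doesn't raise; a sketch of folding the leftovers into a dict (assuming extras always come as <code>--name value</code> pairs):</p>
<pre><code>args, unknown = parser.parse_known_args()
extra = {}
# Fold "--key value" pairs from the unknown list into a dict
for key, value in zip(unknown[::2], unknown[1::2]):
    extra[key.lstrip("-")] = value
options = {**vars(args), **extra}
</code></pre>
<p>Is there a cleaner, built-in way?</p>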
|
<python><argparse>
|
2022-12-30 18:53:42
| 0
| 5,505
|
HuLu ViCa
|
74,964,835
| 2,084,503
|
Can I see library versions for a google cloud function?
|
<p>I've got a cloud function I deployed a while ago. It's running fine, but some of its dependent libraries were updated, and I didn't specify <code>==</code> in the <code>requirements.txt</code>, so now when I try to deploy again <code>pip</code> can't resolve dependencies. I'd like to know which specific versions my working, deployed version is using, but I can't just do a <code>pip freeze</code> of the environment as far as I know.</p>
<p>Is there a way to see which versions of libraries the function's environment is using?</p>
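<p>One idea I've had (untested) is to temporarily deploy a variant of the function with a helper endpoint that reports versions from inside the running environment, e.g.:</p>
<pre><code>import importlib.metadata

def list_versions(request):
    # Report installed distributions from inside the deployed environment
    lines = sorted(f"{d.metadata['Name']}=={d.version}"
                   for d in importlib.metadata.distributions())
    return "\n".join(lines)
</code></pre>
<p>But redeploying may rebuild the environment and defeat the purpose, so I'm hoping there's a way to inspect the existing deployment directly.</p>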
|
<python><google-cloud-platform><google-cloud-functions>
|
2022-12-30 18:43:56
| 2
| 1,266
|
Pavel Komarov
|
74,964,779
| 1,547,004
|
How do I read extended annotations from Annotated?
|
<p><a href="https://peps.python.org/pep-0593/" rel="nofollow noreferrer">PEP 593</a> added extended annotations via <code>Annotated</code>.</p>
<p>But neither the PEP nor the documentation for <code>Annotated</code> describes how we're supposed to access the underlying annotations. How are we supposed to read the extra annotations stored in the <code>Annotated</code> object?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, get_type_hints

class Foo:
    bar: Annotated[int, "save"] = 5

hints = get_type_hints(Foo, include_extras=True)
# {'bar': typing.Annotated[int, 'save']}

# Get list of data in Annotated object ???
hints["bar"].get_list_of_annotations()
</code></pre>
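<p>For what it's worth, the closest I've found by poking around is the <code>__metadata__</code> attribute and <code>typing.get_args</code>, though I'm not sure which is the intended public interface:</p>
<pre><code>from typing import get_args

print(get_args(hints["bar"]))     # (<class 'int'>, 'save')
print(hints["bar"].__metadata__)  # ('save',)
</code></pre>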
|
<python><python-typing>
|
2022-12-30 18:37:29
| 1
| 37,968
|
Brendan Abel
|
74,964,769
| 10,128,864
|
How to Capture The OAuth Redirect to POST A Response
|
<p>So my colleague and I have an application whereby we need to capture the OAuth redirect from Google's OAuth server response; the reason being is we need to send a payload to renew our pickle token, and we need to do it without human intervention. The code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import pickle
import os.path
import pandas as pd
import requests
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
import base64
from datetime import datetime, timedelta
from urllib.parse import unquote
from bs4 import BeautifulSoup

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']


def search_message(service, user_id, search_string):
    """
    Search the inbox for emails using standard gmail search parameters
    and return a list of email IDs for each result

    PARAMS:
        service: the google api service object already instantiated
        user_id: user id for google api service ('me' works here if
            already authenticated)
        search_string: search operators you can use with Gmail
            (see https://support.google.com/mail/answer/7190?hl=en for a list)

    RETURNS:
        List containing email IDs of search query
    """
    try:
        # initiate the list for returning
        list_ids = []
        # get the id of all messages that are in the search string
        search_ids = service.users().messages().list(userId=user_id, q=search_string).execute()
        # if there were no results, print warning and return empty string
        try:
            ids = search_ids['messages']
        except KeyError:
            print("WARNING: the search queried returned 0 results")
            print("returning an empty string")
            return ""
        if len(ids) > 1:
            for msg_id in ids:
                list_ids.append(msg_id['id'])
            return (list_ids)
        else:
            list_ids.append(ids['id'])
            return list_ids
    except:
        print("An error occured: %s")


def get_message(service, user_id, msg_id):
    """
    Search the inbox for specific message by ID and return it back as a
    clean string. String may contain Python escape characters for newline
    and return line.

    PARAMS
        service: the google api service object already instantiated
        user_id: user id for google api service ('me' works here if
            already authenticated)
        msg_id: the unique id of the email you need

    RETURNS
        A string of encoded text containing the message body
    """
    try:
        final_list = []
        message = service.users().messages().get(userId=user_id, id=msg_id).execute()  # fetch the message using API
        payld = message['payload']  # get payload of the message
        report_link = ""
        mssg_parts = payld['parts']  # fetching the message parts
        part_one = mssg_parts[1]  # fetching first element of the part
        #part_onee = part_one['parts'][1]
        #print(part_one)
        part_body = part_one['body']  # fetching body of the message
        part_data = part_body['data']  # fetching data from the body
        clean_one = part_data.replace("-", "+")  # decoding from Base64 to UTF-8
        clean_one = clean_one.replace("_", "/")  # decoding from Base64 to UTF-8
        clean_one = clean_one.replace("amp;", "")  # cleaned amp; in links
        clean_two = base64.b64decode(clean_one)  # decoding from Base64 to UTF-8
        soup = BeautifulSoup(clean_two, features="html.parser")
        for link in soup.findAll('a'):
            if "talentReportRedirect?export" in link.get('href'):
                report_link = link.get('href')
                break
        final_list.append(report_link)  # This will create a dictonary item in the final list
    except Exception:
        print("An error occured: %s")
    return final_list


def get_service():
    """
    Authenticate the google api client and return the service object
    to make further calls

    PARAMS
        None

    RETURNS
        service api object from gmail for making calls
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)
    auth_link = build('gmail', 'v1', credentials=creds)
    parsed_url = unquote(auth_link).split('redirect')[-1]
    return parsed_url


def get_report(link_array):
    for link in link_array:
        df = requests.get(link[0], allow_redirects=True)
        # df.encoding
        # dt = pd.DataFrame(data=df)
        print(link)
        # upload_to_database(df) -- Richard Barret please update this function
        print(df.text)
        ## dt.to_csv(r'C:\Users\user\Desktop\api_gmail.csv', sep='\t', header=True)


if __name__ == "__main__":
    link_list = []
    monday = datetime(2022,12,5)  # datetime.now() - timedelta(days=datetime.now().weekday())
    thursday = datetime(2022,12,8)  # datetime.now() - timedelta(days=datetime.now().weekday() - 3)
    query = 'from:messages-noreply@linkedin.com ' + 'after:' + monday.strftime('%Y/%m/%d') + ' before:' + thursday.strftime('%Y/%m/%d')
    service = get_service()
    mssg_list = search_message(service, user_id='me', search_string=query)
    for msg in mssg_list:
        link_list.append(get_message(service, user_id='me', msg_id=msg))
    get_report(link_list)
</code></pre>
<p>It is assumed that you have a directory structure like this:</p>
<pre><code>├── credentials.json
├── gmail_api_linkedin.py
└── requirements.txt
</code></pre>
<p>Obviously, you won't have the <code>credentials.json</code> file, but in essence, the code works and redirects us to a login page to retrieve the new pickle:</p>
<p><a href="https://i.sstatic.net/D6FTZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D6FTZ.png" alt="The Output Generated in the CLI" /></a>
<a href="https://i.sstatic.net/N9DbG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N9DbG.png" alt="The Redirect to OAuth Server" /></a></p>
<p>The main thing is we can't interact with that in an autonomous fashion. As such, how can we capture the URL from the server, which prints out the following information that is different every single time?</p>
<pre><code>Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=212663976989-96o952s9ujadjgfdp6fm0p462p37opml.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A58605%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fgmail.readonly&state=ztJir0haFQlvTP79BRthhmEHlSsqIj&access_type=offline
</code></pre>
<p>More succinctly, how can we capture the URL in a pythonic manner to send POST and PUT requests to that redirect?</p>
|
<python><python-3.x><google-oauth><gmail-api><google-api-python-client>
|
2022-12-30 18:36:20
| 1
| 869
|
R. Barrett
|
74,964,740
| 7,446,564
|
List of lists: how to delete all lists that contain certain values?
|
<p>There's a list of lists (or it could be tuple of tuples). For example:</p>
<pre class="lang-py prettyprint-override"><code>my_list = [
['A', 7462],
['B', 8361],
['C', 3713],
]
</code></pre>
<p>What would be the most efficient way to filter out all lists that have a value <code>'B'</code> in them, considering that the number (or other values) might change?</p>
<p>The only way I have come up with so far is a for/in loop (or rather a list comprehension), but it seems inefficient in this case, so I'd like to know if it's possible to avoid an explicit loop.</p>
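<p>For reference, the comprehension I'm currently using looks like this:</p>
<pre><code>filtered = [row for row in my_list if 'B' not in row]
# [['A', 7462], ['C', 3713]]
</code></pre>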
|
<python><list>
|
2022-12-30 18:32:41
| 2
| 447
|
Lev Slinsen
|
74,964,724
| 386,861
|
How to solve error in VS Code - exited with code=127
|
<p>Baffled by this. I've been using VSCode for a few weeks and have python installed.</p>
<pre><code>def print_menu():
    print("Let's play a game of Wordle!")
    print("Please type in a 5-letter word")

print_menu()
print_menu()
</code></pre>
<p>So far so simple, but when I run it I get this</p>
<pre><code>[Running] python -u "/Users/davidelks/Dropbox/Personal/worldle.py" /bin/sh: python: command not found
[Done] exited with code=127 in 0.006 seconds
</code></pre>
<p>What does this mean? I'm guessing it failed but why? This appears to be trivial.</p>
<p>Tried:</p>
<pre><code>def print_menu():
    print("Let's play a game of Wordle!")
    print("Please type in a 5-letter word")

print_menu()
</code></pre>
<p>Although I get an error when running the script this way, I can still get an interpreter by running python3.</p>
|
<python><visual-studio-code>
|
2022-12-30 18:30:14
| 2
| 7,882
|
elksie5000
|
74,964,698
| 21,115
|
Print SQL generated by SQLObject
|
<pre class="lang-py prettyprint-override"><code>from sqlobject import *

class Data(SQLObject):
    ts = TimeCol()
    val = FloatCol()

Data.select().count()
</code></pre>
<p>Fails with:</p>
<pre><code>AttributeError: No connection has been defined for this thread or process
</code></pre>
<p>How do I get the SQL which would be generated, without declaring a connection?</p>
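<p>(Worst case, I suppose I could point it at a throwaway in-memory SQLite connection just to coax the SQL out, something like the sketch below, but I'd prefer not to need a connection at all.)</p>
<pre><code>from sqlobject import sqlhub, connectionForURI

# Throwaway in-memory connection with debug logging, so that every
# generated statement is echoed to the console
sqlhub.processConnection = connectionForURI('sqlite:/:memory:?debug=1')
Data.createTable()
Data.select().count()  # the generated SELECT COUNT(*) should be printed
</code></pre>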
|
<python><sqlobject>
|
2022-12-30 18:25:33
| 1
| 18,140
|
davetapley
|
74,964,655
| 773,389
|
pandas performance warning that I can't get rid of
|
<p>I have a pandas dataframe df with a column named 'C'.
I am adding 280 duplicates of that column to the same dataframe, with names 1 ... 280, as follows:</p>
<pre><code>for l in range(1, 281):
    df[str(l)] = df['C']
</code></pre>
<p>I haven't figured out how to do this more efficiently. The operation works as expected, but I get the following performance warning:</p>
<pre><code>PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
df_base[str(d)]=col_vals
</code></pre>
<p>I've tried to suppress this warning with</p>
<pre><code>import warnings
warnings.simplefilter(action='ignore', category=pd.errors.PerformanceWarning)
</code></pre>
<p>The warning suppression works when running on 1 core; however, I'm running this code with joblib on 30 cores.</p>
<p>When running the operation with joblib, the warning suppression doesn't work!</p>
<p>How can I get rid of this warning message with either of these 2 methods?</p>
<ol>
<li>how to suppress the warning when using joblib? or</li>
<li>how to create the duplicate columns in a more efficient way with no warnings? (see the sketch below)</li>
</ol>
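<p>For the second option, the direction I have in mind (a sketch) is building all the copies in one go instead of inserting column by column:</p>
<pre><code>import pandas as pd

# Build the 280 copies as one frame, then join once
dupes = pd.concat([df['C']] * 280, axis=1)
dupes.columns = [str(l) for l in range(1, 281)]
df = pd.concat([df, dupes], axis=1)
</code></pre>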
|
<python><pandas><dataframe>
|
2022-12-30 18:20:22
| 1
| 1,843
|
afshin
|
74,964,649
| 8,372,455
|
print time every n seconds using datetime and % operator
|
<p>How do I print the time every 10 seconds based off of using the % operator and the datetime package? This only prints once...</p>
<pre><code>from datetime import datetime, timezone, timedelta
import time

now_utc = datetime.now(timezone.utc)

while True:
    if (now_utc - datetime.now(timezone.utc)).total_seconds() % 10 == 0:
        print(time.ctime())
</code></pre>
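<p>A variant I've been experimenting with (recomputing the current time inside the loop and remembering the last second I printed, so each multiple of 10 fires once):</p>
<pre><code>from datetime import datetime, timezone
import time

start = datetime.now(timezone.utc)
last_printed = None

while True:
    elapsed = int((datetime.now(timezone.utc) - start).total_seconds())
    if elapsed % 10 == 0 and elapsed != last_printed:
        print(time.ctime())
        last_printed = elapsed
    time.sleep(0.1)  # avoid a busy loop
</code></pre>
<p>But I'd still like to understand why the original version with <code>%</code> only prints once.</p>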
|
<python><datetime>
|
2022-12-30 18:19:34
| 1
| 3,564
|
bbartling
|
74,964,490
| 16,009,435
|
display byte format jpeg in img
|
<p>I have a jpeg in byte form that I get from Python, which looks like this: <code>b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\...'</code>. How can I display it inside an HTML <code>img</code> tag? Any help is appreciated. Thanks in advance.</p>
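<p>What I'm picturing (a sketch, with <code>jpeg_bytes</code> standing in for my value) is a base64 data URI:</p>
<pre><code>import base64

b64 = base64.b64encode(jpeg_bytes).decode('ascii')
html = f'<img src="data:image/jpeg;base64,{b64}">'
</code></pre>
<p>Is that the right approach?</p>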
|
<python><html>
|
2022-12-30 18:01:22
| 2
| 1,387
|
seriously
|
74,964,423
| 2,653,179
|
Python: Method in subclasses that has different assignments
|
<p>I have a method <code>run()</code> in subclasses, which has an API POST request in each subclass, gets data from the POST request, and assigns an ID from this data to <code>self._id</code>. Now I would like to get a description too. However, description is returned only in the API request <code>self._api_obj.trigger(...)</code> in <code>SubClassB.run</code>, not in the API request <code>self._api_obj.trigger_run(...)</code> in <code>SubClassA.run</code>. For <code>SubclassA</code> I need a separate API request to get the description.</p>
<p>I tried the following, but I don't think it's a good idea to assign 2 attributes in <code>SubClassB.run</code> but only 1 attribute in <code>SubClassA.run</code>. Right? From my understanding, the same method across subclasses should have the same behavior (just a different implementation).</p>
<pre><code>class SuperClass:
    def __init__(self):
        self._id = None  # Assigned in run()
        self._description = None

    @property
    def description(self):
        raise NotImplementedError

    def run(self, *args):
        raise NotImplementedError

    @property
    def id(self):
        return self._id


class SubClassA(SuperClass):
    def __init__(self):
        super().__init__()
        self._api_obj = ApiObj1()

    @property
    def description(self):
        if not self._description:
            _result = self._api_obj.get_data()
            self._description = _result["description"]
        return self._description

    def run(self, *args):
        _result = self._api_obj.trigger_run(foo="foo")
        self._id = _result["RunId"]


class SubClassB(SuperClass):
    def __init__(self):
        super().__init__()
        self._api_obj = ApiObj2()

    @property
    def description(self):
        return self._description

    def run(self, *args):
        _result = self._api_obj.trigger(foo="foo", bar="bar", arg1="arg1", arg2="arg2")
        self._id = _result["data"]["id"]
        self._description = _result["data"]["description"]
</code></pre>
<p>Is there a better way to add assignment to <code>self._description</code>? Or some other solution to include description?</p>
|
<python>
|
2022-12-30 17:53:24
| 1
| 423
|
user2653179
|
74,964,404
| 7,028,973
|
Is it possible to mock a module and all it's submodules in pytest at once?
|
<p>I am trying to mock a module in pytest and have a lot of functions from that module imported like so:</p>
<pre><code>from common.util import funct1, funct2, funct3
from common.shared.utils import some_funct
from common.public.utils import some_other_funct
</code></pre>
<p>So far I have been using</p>
<pre><code>sys.modules['common.util'] = MagicMock()
sys.modules['common.shared.utils'] = MagicMock()
sys.modules['common.public.utils'] = MagicMock()
</code></pre>
<p>Is there a way to combine all these imports into one line?</p>
|
<python><mocking><pytest>
|
2022-12-30 17:51:05
| 0
| 334
|
Jayleen
|
74,964,388
| 7,317,408
|
groupby and mean returning NaN
|
<p>I am trying to use groupby to group by symbol and return the average of prior high volume days using pandas.</p>
<p>I create my data:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
"date": ['2022-01-01', '2022-01-02', '2022-01-03', '2022-01-04', '2022-01-05', '2022-01-06'],
"symbol": ['ABC', 'ABC', 'ABC', 'AAA', 'AAA', 'AAA'],
"change": [20, 1, 2, 3, 50, 100],
"volume": [20000000, 100, 3000, 500, 40000000, 60000000],
})
</code></pre>
<p>Filter by high volume and change:</p>
<pre><code>high_volume_days = df[(df['volume'] >= 20000000) & (df['change'] >= 20)]
</code></pre>
<p>Then I get the last days volume (this works):</p>
<pre><code>high_volume_days['previous_high_volume_day'] = high_volume_days.groupby('symbol')['volume'].shift(1)
</code></pre>
<p>But when I try to calculate the average of all the days per symbol:</p>
<pre><code>high_volume_days['avg_volume_prior_days'] = df.groupby('symbol')['volume'].mean()
</code></pre>
<p>I am getting NaNs:</p>
<pre><code> date symbol change volume previous_high_volume_day avg_volume_prior_days
0 2022-01-01 ABC 20 20000000 NaN NaN
4 2022-01-05 AAA 50 40000000 NaN NaN
5 2022-01-06 AAA 100 60000000 40000000.0 NaN
</code></pre>
<p>What am I missing here?</p>
<p>Desired output:</p>
<pre><code> date symbol change volume previous_high_volume_day avg_volume_prior_days
0 2022-01-01 ABC 20 20000000 NaN 20000000
4 2022-01-05 AAA 50 40000000 NaN 40000000
5 2022-01-06 AAA 100 60000000 40000000.0 50000000
</code></pre>
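<p>For what it's worth, the value I'm after seems to be a per-group expanding mean; this sketch matches my desired output, but I don't understand why the plain <code>groupby().mean()</code> assignment gives NaN:</p>
<pre><code>high_volume_days['avg_volume_prior_days'] = (
    high_volume_days.groupby('symbol')['volume']
    .expanding().mean()
    .reset_index(level=0, drop=True)
)
</code></pre>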
|
<python><pandas><numpy>
|
2022-12-30 17:48:33
| 3
| 3,436
|
a7dc
|
74,964,361
| 7,984,318
|
pandas df.to_sql: if column value exists, replace or update row
|
<p>I'm using pandas df.to_sql to inserting rows to postgresql database.</p>
<pre><code>df.to_sql('example_table', engine, if_exists='replace',index=False)
</code></pre>
<p>example_table has 3 columns :'id' ,'name' ,'datetime'</p>
<p>I want to add checking logic before inserting, so that if the datetime already exists, the existing row is replaced or updated.</p>
<p>Is there something like:</p>
<pre><code>df.to_sql('example_table', engine, if_ datetime_exists='replace',index=False)
</code></pre>
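<p>If nothing like that exists, what I'm imagining instead (a sketch, assuming a unique constraint on <code>datetime</code>) is staging the frame and upserting in SQL:</p>
<pre><code>df.to_sql('example_table_staging', engine, if_exists='replace', index=False)
with engine.begin() as conn:
    conn.exec_driver_sql("""
        INSERT INTO example_table (id, name, datetime)
        SELECT id, name, datetime FROM example_table_staging
        ON CONFLICT (datetime) DO UPDATE
        SET id = EXCLUDED.id, name = EXCLUDED.name;
    """)
</code></pre>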
|
<python><pandas><postgresql><pandas-to-sql>
|
2022-12-30 17:44:58
| 2
| 4,094
|
William
|
74,964,334
| 8,534,089
|
Google Analytics API authentication failure using JWT
|
<p>I want to connect to the Google Analytics Reporting API to fetch data, for which I have been provided with the details of a service account in JSON format, containing fields like type, project_id, client_email, client_id, private_key, client_x509_cert_url, and auth_provider_x509_cert_url.</p>
<p>Using this JSON file I am able to connect and fetch required report data via Python code <a href="https://developers.google.com/analytics/devguides/reporting/core/v4/quickstart/service-py" rel="nofollow noreferrer">Reporting API quickstart</a>. However, due to business requirements, I have to establish the connection using HTTP Rest API.</p>
<p>So, based on the details of the same service account, I am trying to authenticate using a JWT by following the Google doc <a href="https://developers.google.com/identity/protocols/oauth2/service-account#jwt-auth" rel="nofollow noreferrer">Addendum: Service account authorization without OAuth</a>, so that once I have the signed token I can use it as a static token and make the next API call to get report data (<code>https://analyticsreporting.googleapis.com/v4/reports:batchGet</code>, passing the request body).</p>
<p>But while testing in <a href="https://jwt.io/" rel="nofollow noreferrer">https://jwt.io/</a> & also in <a href="https://irrte.ch/jwt-js-decode/index.html" rel="nofollow noreferrer">https://irrte.ch/jwt-js-decode/index.html</a> signature creation is failing with an Invalid Signature error.</p>
<p>Can someone please help to understand what am I missing here?</p>
<pre><code>Header
{
  "alg": "RS256",
  "typ": "JWT",
  "kid": "<service account's private key ID>"
}

Payload
{
  "iss": "<service account's email address i.e. client_email>",
  "sub": "<service account's email address i.e. client_email>",
  "aud": "https://oauth2.googleapis.com/token",
  "iat": <UTC Unix Epoch Start Time>,
  "exp": <UTC Unix Epoch End Time>
}
</code></pre>
<p>I then pasted the X.509 public certificate generated for the service account, and in the private key field pasted the private key, again from the JSON file of the service account:</p>
<pre><code>-----BEGIN CERTIFICATE-----\nMIrrhherhehrgssreheheherhehehehrhrdsfsfzcsvsvsvscssvsrjnfvdsfsscscscshgeferttyyuuuuuuuuuuudDxE/Q\n-----END CERTIFICATE-----\n
-----BEGIN PRIVATE KEY-----\nLDSVSVSCSYUNJKOLPDFSSSFSFSFSSFSFSFSFSFSFSSFSFSFSFSFSFSFSSSSSSSSSGGGGGGGGGGGGGGGGGGGGGGUUUUUUUUUUUUUUUUUM=\n-----END PRIVATE KEY-----\n
</code></pre>
<p>PS: I have tried testing with and without the \n characters, as well as with and without the BEGIN/END prefixes, and still get the same result.</p>
|
<python><oauth><jwt><google-analytics-api><service-accounts>
|
2022-12-30 17:40:32
| 0
| 907
|
Vikas J
|
74,963,990
| 18,694,400
|
different behavior for dict and list in match/case (check if it is empty or not)
|
<p>I know there are other ways to check whether a <code>dict</code> is empty using <code>match/case</code> (for example, <code>dict(data) if len(data) == 0</code>), but I can't understand why Python gives different answers for the <code>list</code> and <code>dict</code> types when checking for emptiness:</p>
<pre><code>data = [1, 2]

match data:
    case []:
        print("empty")
    case [1, 2]:
        print("1, 2")
    case _:
        print("other")

# 1, 2
</code></pre>
<pre><code>data = {1: 1, 2: 2}

match data:
    case {}:
        print("empty")
    case {1: 1, 2: 2}:
        print("1, 2")
    case _:
        print("other")

# empty
</code></pre>
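<p>My current guess is that mapping patterns do partial matching (extra keys are ignored), so <code>case {}</code> matches <em>any</em> dict, while sequence patterns like <code>case []</code> match the length exactly. A tiny check that seems to confirm this:</p>
<pre><code>match {1: 1, 2: 2}:
    case {1: 1}:
        print("matches: extra keys are ignored")  # this prints
</code></pre>
<p>Is that the whole story?</p>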
|
<python><pattern-matching>
|
2022-12-30 16:59:42
| 2
| 2,404
|
MoRe
|
74,963,987
| 6,552,666
|
How to add inputs to a redirect after form?
|
<p>I have a form that, upon submitting, I want to process, and then redirect the user to a validation page.</p>
<pre><code>if request.method == 'POST':
    form_validation_list = []
    for key in request.form:
        processed_field = process_somehow(request, key)
        form_validation_list.append(processed_field)
    return render_template('foo.validate_form',
                           form_validation_list=form_validation_list)
</code></pre>
<p>In similar cases, I use <code>redirect(url_for('foo.validate_form', variableA=something, variableB=something_else))</code>, but I don't want <code>form_validation_list</code> to show up as GET variables. In the current case where I'm using <code>render_template</code>, I get a <code>TemplateNotFound</code> exception, but there is certainly a file at <code>templates/foo/validate_form.html</code>. I'm not sure if it's clear what I'm trying to do. If it is, is it clear what's causing the problem? I will add more information as needed.</p>
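<p>One direction I'm considering (a sketch, assuming <code>app.secret_key</code> is configured so the session works) is stashing the list in the session and redirecting:</p>
<pre><code>from flask import redirect, session, url_for

session['form_validation_list'] = form_validation_list
return redirect(url_for('foo.validate_form'))
</code></pre>
<p>(I also wonder whether my <code>TemplateNotFound</code> comes from passing the endpoint-style name <code>'foo.validate_form'</code> to <code>render_template</code> instead of the template path <code>'foo/validate_form.html'</code>.)</p>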
|
<python><flask><jinja2>
|
2022-12-30 16:58:55
| 1
| 673
|
Frank Harris
|
74,963,767
| 14,901,271
|
How to automate my Selenium web scraper?
|
<p>I was able to write a scraping script using Selenium. Now I'm trying to automate it so that it runs periodically on a server, so I don't have to run it from my local machine. I did a lot of googling but got no clue how to do that. Can anyone simplify things for me?</p>
|
<python><selenium><automation>
|
2022-12-30 16:33:44
| 3
| 322
|
Omar Hossam
|
74,963,750
| 5,568,409
|
Seaborn pairplot warning not understandable
|
<p>I give a small example to make the question clearer:</p>
<pre><code>import pandas as pd
import seaborn as sns
df = pd.DataFrame(
[
[28, 72, 12],
[28, 17, 25],
[54, 15, 45],
[30, 57, 28]
],
columns = ["x", "y", "z"]
)
cols = list(df.columns)
sns.pairplot(df)
</code></pre>
<p>The warning generated is as follows:</p>
<pre><code>C:\Users\thaly\anaconda3\lib\site-packages\seaborn\axisgrid.py:1444:
MatplotlibDeprecationWarning: The join function was deprecated in Matplotlib 3.6
and will be removed two minor releases later.
group.join(ax, diag_axes[0])
</code></pre>
<p>I totally don't understand the information given: <code>group.join(ax, diag_axes[0])</code>...</p>
<p>What do you think I should write instead of my simple line <code>sns.pairplot(df)</code>, which is apparently where the problem comes from?</p>
|
<python><seaborn><warnings><pairplot>
|
2022-12-30 16:31:53
| 1
| 1,216
|
Andrew
|
74,963,650
| 205,147
|
Christofides TSP; let start and end node be those that are the farthest apart
|
<p>I'm using Christofides algorithm to calculate a solution for a Traveling Salesman Problem. The implementation is the one integrated in <code>networkx</code> library for Python.</p>
<p>The algorithm accepts an undirected networkx graph and returns a list of nodes in the order of the TSP solution. I'm not sure if I understand the algorithm correctly yet, so I don't really know yet how it determines the starting node for the calculated solution.</p>
<p>So, my assumption is: the solution is considered circular so that the Salesman returns to his starting node once he visited all nodes. <code>end</code> is now considered the node the Salesman visits last before returning to the <code>start</code> node. The <code>start</code> node of the returned solution is random.</p>
<p>Hence, I understand (correct me if I'm wrong) that for each TSP solution (order of list of nodes) with N nodes that is considered circular like that, there are N <em>actual</em> solutions where each node could be the starting node with the following route left unchanged.</p>
<p>A-B-C-D-E-F-G-H->A could also be D-E-F-G-H-A-B-C->D and would still be a valid route and basically the same solution only with a different starting node.</p>
<p>I need to find that one particular solution of all possible starting nodes of the returned order that has the greatest distance between end and start - assuming that that isn't already guaranteed to be the solution that <code>networkx.algorithms.approximation.christofides</code> returns.</p>
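<p>My current plan (a sketch, assuming the returned list is an open ordering without the start node repeated at the end) is to rotate the cycle so that the heaviest edge becomes the implicit return leg:</p>
<pre><code>def rotate_tour(tour, G):
    n = len(tour)
    # weight of each edge, including the wrap-around back to the start
    w = [G[tour[i]][tour[(i + 1) % n]]["weight"] for i in range(n)]
    cut = max(range(n), key=w.__getitem__)
    # start just after the heaviest edge, so it becomes the return leg
    return tour[cut + 1:] + tour[:cut + 1]
</code></pre>
<p>What I don't know is whether <code>christofides</code> already makes any such guarantee about its starting node.</p>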
|
<python><networkx><traveling-salesman>
|
2022-12-30 16:20:04
| 1
| 2,229
|
Hendrik Wiese
|
74,963,539
| 8,372,455
|
Modbus client script to read a sensor value
|
<p>I have a Modbus server set up on a LAN with IP address 192.168.0.111, and the Modbus map is the snip below, where I am trying to read the sensor highlighted in yellow:</p>
<p><a href="https://i.sstatic.net/8Axqh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Axqh.png" alt="enter image description here" /></a></p>
<p>Can someone give me a tip on how to run a Modbus client script and read the sensor value?</p>
<pre><code>from pymodbus.client import ModbusTcpClient
client = ModbusTcpClient('192.168.0.111')
result = client.read_coils(30500,1)
print(result.bits[0])
client.close()
</code></pre>
<p>This will error out:</p>
<pre><code>print(result.bits[0])
AttributeError: 'ExceptionResponse' object has no attribute 'bits'
</code></pre>
<p>Experimenting a bit and changing the print to <code>print(result)</code>, this returns without raising:</p>
<pre><code>Exception Response(129, 1, IllegalFunction)
</code></pre>
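<p>From my reading so far, 3xxxx addresses are input registers, so presumably I need <code>read_input_registers</code> with a zero-based offset rather than <code>read_coils</code>; a sketch of my next attempt (the exact offset convention may be off by one):</p>
<pre><code>result = client.read_input_registers(499, 1)  # 30500 -> offset 499?
print(result.registers[0])
</code></pre>
<p>Is that the right direction?</p>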
|
<python><modbus><pymodbus>
|
2022-12-30 16:06:20
| 1
| 3,564
|
bbartling
|
74,963,435
| 8,885,812
|
Draw a grid of cells using Matplotlib
|
<p>I'm trying to draw a grid of cells using Matplotlib where each border (top, right, bottom, left) of a cell can have a different width (random number between 1 and 5). I should note also that the width and height of the inner area of a cell (white part) can vary between 15 and 20.</p>
<p><a href="https://i.sstatic.net/de6ES.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/de6ES.png" alt="enter image description here" /></a></p>
<p>I want to know how to get the coordinates of each cell in order to avoid any extra space between the cells.</p>
<p>I tried several ideas; however, I did not get the right coordinates.</p>
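<p>The bookkeeping I keep getting wrong is the running offsets. A sketch of how I think the coordinates should accumulate (simplified to one border width per row, just to show the idea; per-side widths would extend the same bookkeeping):</p>
<pre><code>import random
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
y = 0
for _ in range(3):                  # rows
    inner_h = random.randint(15, 20)
    b = random.randint(1, 5)        # simplified: one border width per row
    x = 0
    for _ in range(3):              # columns
        inner_w = random.randint(15, 20)
        ax.add_patch(Rectangle((x, y), inner_w + 2 * b, inner_h + 2 * b, facecolor='black'))
        ax.add_patch(Rectangle((x + b, y + b), inner_w, inner_h, facecolor='white'))
        x += inner_w + 2 * b        # next cell starts exactly where this one ends
    y += inner_h + 2 * b
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
ax.set_aspect('equal')
plt.show()
</code></pre>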
|
<python><matplotlib>
|
2022-12-30 15:54:55
| 1
| 353
|
mrkasri
|
74,963,326
| 3,247,006
|
How to retry when exception occurs in Django Admin?
|
<p>I want to retry if exception occurs in <strong>Django Admin</strong>:</p>
<ol>
<li>When trying to add data:</li>
</ol>
<p><a href="https://i.sstatic.net/rh12e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rh12e.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>When trying to change data:</li>
</ol>
<p><a href="https://i.sstatic.net/C0dkf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C0dkf.png" alt="enter image description here" /></a></p>
<ol start="3">
<li>When clicking on <strong>Delete</strong> on <strong>"Change" page</strong>:</li>
</ol>
<p><a href="https://i.sstatic.net/lBDaa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lBDaa.png" alt="enter image description here" /></a></p>
<p>Then clicking on <strong>Yes, I'm sure</strong> to try deleting data:</p>
<p><a href="https://i.sstatic.net/EoIRd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EoIRd.png" alt="enter image description here" /></a></p>
<ol start="4">
<li>When clicking on <strong>Go</strong> on <strong>"Select to change" page</strong>:</li>
</ol>
<p><a href="https://i.sstatic.net/exnop.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/exnop.png" alt="enter image description here" /></a></p>
<p>Then clicking on <strong>Yes, I'm sure</strong> to try deleting data:</p>
<p><a href="https://i.sstatic.net/lTCq9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lTCq9.png" alt="enter image description here" /></a></p>
<p>I have <strong><code>Person</code> model</strong> below:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=30)
</code></pre>
<p>And, <strong><code>Person</code> admin</strong> below:</p>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from django.contrib import admin
from .models import Person

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    pass
</code></pre>
<p>So, how can I retry when exception occurs in <strong>Django Admin</strong>?</p>
|
<python><django><exception><django-admin><django-admin-actions>
|
2022-12-30 15:43:50
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,963,246
| 13,100,938
|
Using serial ports over bluetooth with micro bit
|
<p>I have correctly got a microbit working with serial communication via COM port USB.</p>
<p>My aim is to use COM over bluetooth to do the same.</p>
<p><strong>Steps I have taken:</strong></p>
<ol>
<li><em>(on windows 10)</em> bluetooth settings -> more bluetooth settings -> COM ports -> add -> incoming</li>
<li>in device manager changed the baud rate to match that of the microbit (115,200)</li>
<li>paired and connected to the microbit</li>
<li>tried to write to both the serial and uart bluetooth connection from the microbit to the PC (using a flashed python script)</li>
<li>using Tera Term, setup -> serial port... -> COM(number - in my case 4), with all necessary values (including 115,200 baud rate)</li>
</ol>
<p>After doing all of these, I see no incoming message on Tera Term. Have I missed anything?</p>
|
<python><bluetooth><serial-port><bbc-microbit><spp>
|
2022-12-30 15:34:28
| 2
| 2,023
|
Joe Moore
|
74,963,218
| 4,495,790
|
Create Pandas date column from fixed starting date and offset days as integer column
|
<p>I have the following one-column Pandas data frame:</p>
<pre><code>num_days
--------
9236
9601
9636
10454
</code></pre>
<p>Here the integers are number of days counted from a constant predefined date:</p>
<pre><code>START_DATE = pd.to_datetime('1980-01-01')
</code></pre>
<p>Now I want to have a column with dates (calculated as <code>START_DATE</code> + the respective <code>num_days</code>) like this:</p>
<pre><code>num_days, date
--------------
9236 '2005-04-15'
9601 '2006-04-15'
9636 '2006-05-20'
10454 '2008-08-15'
</code></pre>
<p>I have tried this:</p>
<pre><code>df['date'] = pd.to_datetime('1980-01-01') + timedelta(df.num_days)
</code></pre>
<p>but no success:</p>
<pre><code>TypeError: unsupported type for timedelta days component: Series
</code></pre>
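<p>I suspect I need a vectorized conversion rather than a plain <code>timedelta</code>; something like this sketch is what I'm after:</p>
<pre><code>df['date'] = START_DATE + pd.to_timedelta(df['num_days'], unit='D')
</code></pre>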
|
<python><pandas><datetime>
|
2022-12-30 15:31:20
| 1
| 459
|
Fredrik
|
74,963,200
| 1,625,545
|
Add a legend that covers the lines of the twinx axis
|
<p>I have this python code.</p>
<ol>
<li>it twinx the axis <code>ax</code> and plots some function on both axis</li>
<li>I plot the legend on <code>ax1</code></li>
</ol>
<p>The problem is that the legend placement does not take the curves of <code>ax2</code> into account.</p>
<p>Is it possible to <strong>automatically position</strong> the legend on <code>ax</code> while also avoiding the lines of <code>ax2</code>?</p>
<p>Note that in <code>fig.legend</code> the option <code>loc="best"</code> is not available, and I need the automatic positioning inside the area of the plot.</p>
<p>Thanks</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Set the x values for the sine and cosine functions
x = np.linspace(0, 2*np.pi, 100)
# Create the figure and an axis
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the sine and cosine functions on the axis
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.cos(x), label='Cosine')
ax2.plot(x, np.cos(x+1), label='Cosine 2', color="red")
ax2.plot(x, x, label='Cosine 2', color="green")
# Add a title and labels to the axis
ax.set_title('Sine and Cosine Functions')
ax.set_xlabel('X')
ax.set_ylabel('Y')
# Get the line legends from the axis
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
# Add a legend to the figure
ax.legend(lines + lines2, labels + labels2, framealpha=1.0)
ax.get_legend().set_zorder(10)
# Display the plot
plt.show()
</code></pre>
<p>Below is the output of the code:</p>
<p><a href="https://i.sstatic.net/ZMyRb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZMyRb.png" alt="The output of the code above:" /></a></p>
|
<python><matplotlib><legend><twinx>
|
2022-12-30 15:29:47
| 3
| 720
|
Giggi
|
74,963,146
| 9,064,356
|
PySpark `monotonically_increasing_id()` returns 0 for each row
|
<p>This code</p>
<pre><code>def foobar(df):
    return (
        df.withColumn("id", monotonically_increasing_id())
        .withColumn("foo", lit("bar"))
        .withColumn("bar", lit("foo"))
    )

somedf = foobar(somedf)
somedf.show()  # <-- each `id` has value 0
</code></pre>
<p>creates and prints a data frame where each id has value 0.</p>
<p>I am really confused as this is <code>monotonically_increasing_id</code> method description from <a href="https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>The generated ID is guaranteed to be <strong>monotonically increasing and unique</strong>, but not consecutive. The current implementation puts the partition ID in the upper 31 bits, and the record number within each partition in the lower 33 bits. The assumption is that the data frame has less than 1 billion partitions, and each partition has less than 8 billion records.</p>
</blockquote>
<p>It clearly says that each row will have a unique value, and it also points out that each id will be unique within each partition, which means it should be safe to use this method in a distributed environment, as each row will have a unique id across all of the nodes.</p>
<blockquote>
<p>it puts partition ID in the upper 31bits and record number within each
partition in the lower 33 bits</p>
</blockquote>
<p>What is even more confusing is that in a single-instance environment (on my local machine) the above code works flawlessly (each row has a unique id), but when I deploy the same code to AWS and run it on EMR, I get only 0s in the id column.</p>
|
<python><amazon-web-services><apache-spark><pyspark><amazon-emr>
|
2022-12-30 15:24:48
| 1
| 961
|
hdw3
|
74,963,097
| 9,138,097
|
How to get the key/value from python dict into requests for get method
|
<p>I'm trying to get the specific values from dict and use in the requests module to make the get requests.</p>
<pre><code>clusters = {
'cluster_1':'https://cluster_1.something.com/api',
'cluster_2':'https://cluster_2.something.com/api'
}
</code></pre>
<p>Code:</p>
<pre><code>def getCsv():
    for i in clusters:
        r = requests.get(i.values(), headers=headers, params=params)
        with open("input.csv", "w") as f:
            f.writelines(r.text.splitlines(True))
        df = pd.read_csv("input.csv")
        return df

getCsv()
</code></pre>
<p>Am I doing it right?</p>
<p>Also, the final step is to write the cluster's key into the output csv using the code below.</p>
<pre><code>with open(rb'output.csv', 'w', newline='') as out_file:
    timestamp = datetime.now()
    df = getCsv()
    if 'Name' in df.columns:
        df.rename(columns={"Name": "team", "Total": "cost"}, inplace=True)
        df.insert(0, 'date', timestamp)
        df.insert(1, 'resource_type', "pod")
        df.insert(2, 'resource_name', "kubernetes")
        df.insert(3, 'cluster_name', i.keys)
        df.drop(["CPU", "GPU", "RAM", "PV", "Network", "LoadBalancer", "External", "Shared", "Efficiency"], axis=1, inplace=True)
        df['team'] = df['team'].map(squads).fillna(df['team'])
        df.groupby(["date","resource_type","resource_name","cluster_name","team"]).agg({"cost": sum}).reset_index().to_csv(out_file, index=False)
</code></pre>
<p>But it doesn't seem to work; any guidance will be appreciated.</p>
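<p>Part of my own doubt is the loop: I suspect it should iterate over key/value pairs rather than calling <code>i.values()</code> on each key, roughly like this sketch:</p>
<pre><code>for name, url in clusters.items():
    r = requests.get(url, headers=headers, params=params)
    # ... write r.text, read it back, and tag the rows with `name`
</code></pre>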
|
<python><python-3.x><pandas><csv><request>
|
2022-12-30 15:19:41
| 1
| 811
|
Aziz Zoaib
|
74,962,935
| 1,112,283
|
Menu font too large in PySide6 app on Windows
|
<p>The font size of menus and menu entries in a PySide6 app on Windows is way too large when scaling is greater than 100%. I have it set to 150% (on a 4K monitor) and it looks like this:</p>
<p><a href="https://i.sstatic.net/sMpp4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sMpp4.png" alt="enter image description here" /></a></p>
<p>Notice that text in the main window ("Test HiDPI scaling") is correctly sized.</p>
<p>Here is a minimal example to reproduce the issue:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QAction
from PySide6.QtWidgets import QApplication, QLabel, QMainWindow
app = QApplication(sys.argv)
win = QMainWindow()
menubar = win.menuBar()
file_menu = menubar.addMenu("File")
file_menu.addAction(QAction("New", win))
file_menu.addAction(QAction("Open", win))
file_menu.addAction(QAction("Quit", win))
edit_menu = menubar.addMenu("Edit")
edit_menu.addAction(QAction("Copy", win))
edit_menu.addAction(QAction("Paste", win))
edit_menu.addAction(QAction("Cut", win))
view_menu = menubar.addMenu("View")
view_menu.addAction(QAction("Zoom in", win))
view_menu.addAction(QAction("Zoom out", win))
view_menu.addAction(QAction("Reset", win))
help_menu = menubar.addMenu("Help")
help_menu.addAction(QAction("Show help", win))
label = QLabel("Test HiDPI scaling")
label.setAlignment(Qt.AlignHCenter | Qt.AlignVCenter)
win.setCentralWidget(label)
win.show()
sys.exit(app.exec())
</code></pre>
<p>To run this example,</p>
<ul>
<li>save it as e.g. <code>main.py</code>,</li>
<li>install dependencies with <code>pip install PySide6</code>,</li>
<li>and run it with <code>python main.py</code>.</li>
</ul>
|
<python><windows><qt><pyside6>
|
2022-12-30 15:04:17
| 1
| 1,821
|
cbrnr
|
74,962,856
| 859,227
|
Join data frames and reset a column
|
<p>For the data frames like</p>
<pre><code> ID Name Time
0 0 A 100
1 1 B 70
ID Name Time
0 0 C 40
1 1 D 90
</code></pre>
<p>I want to join them by rows and reset the <code>ID</code> numbers. So the final data frame should be</p>
<pre><code> ID Name Time
0 0 A 100
1 1 B 70
2 2 C 40
3 3 D 90
</code></pre>
<p>The code is</p>
<pre><code>big_df = pd.DataFrame()

for i in range(1, 3):
    fname = 'test_' + str(i) + '.csv'
    small_df = pd.read_csv(fname, skiprows=[1])
    print(small_df)
    frames = [big_df, small_df]
    big_df = pd.concat(frames)
    i += 1

big_df.set_index('ID', inplace=True)
print(big_df)
</code></pre>
</code></pre>
<p>But the output is</p>
<pre><code> Name Time
ID
0 A 100
1 B 70
0 C 40
1 D 90
</code></pre>
<p>I want to copy the index values to <code>ID</code> column, but I know the <code>set_index</code> will make the column as an index. How can I fix the code for that purpose?</p>
<p><strong>UPDATE</strong></p>
<p>I found that <code>big_df['ID'] = big_df.index</code> will copy the index values to <code>ID</code> column.</p>
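<p>A sketch of what I think the cleaner version looks like (renumbering during the concat instead of fixing it afterwards):</p>
<pre><code>big_df = pd.concat(frames, ignore_index=True)
big_df['ID'] = big_df.index  # overwrite the per-file IDs with the new row numbers
</code></pre>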
|
<python><pandas><dataframe>
|
2022-12-30 14:55:59
| 4
| 25,175
|
mahmood
|
74,962,733
| 17,795,398
|
Python: tkinter doubles tkinter.Entry.insert
|
<p>I have this 'simple' code</p>
<pre><code>import tkinter as tk
from tkinter import ttk
import datetime


class Page(tk.Frame):
    def __init__(self, root):
        # Set up
        super().__init__(root)
        # Labels
        self.labels = {}
        # Buttons
        self.buttons = {}
        # Entries
        self.entries = {}
        # Year label
        self.labels["year"] = ttk.Label(self, text="Year")
        self.labels["year"].grid(row=0, column=0)
        # Month label
        self.labels["month"] = ttk.Label(self, text="Month")
        self.labels["month"].grid(row=0, column=1)
        # Day label
        self.labels["day"] = ttk.Label(self, text="Day")
        self.labels["day"].grid(row=0, column=2)
        # Year entry
        self.year = tk.IntVar()
        self.entries["year"] = ttk.Entry(self, textvariable=self.year)
        self.entries["year"].delete(0, "end")
        self.entries["year"].grid(row=1, column=0)
        # Month entry
        self.month = tk.IntVar()
        self.entries["month"] = ttk.Entry(self, textvariable=self.month)
        self.entries["month"].delete(0, "end")
        self.entries["month"].grid(row=1, column=1)
        # Day entry
        self.day = tk.IntVar()
        self.entries["day"] = ttk.Entry(self, textvariable=self.day)
        self.entries["day"].delete(0, "end")
        self.entries["day"].grid(row=1, column=2)
        # Today button
        self.buttons["today"] = ttk.Button(self, text="Today",
                                           command=lambda: self._f_buttonToday())
        self.buttons["today"].grid(row=2, column=0, columnspan=3, sticky="WE")

    def _f_buttonToday(self):
        year, month, day = map(int, datetime.date.today().isoformat().split("-"))
        self.year.set(year)
        self.month.set(month)
        self.day.set(day)
        #self.entries["year"].delete(0, "end")
        self.entries["year"].insert(0, self.year.get())
        self.entries["month"].insert(0, self.month.get())
        self.entries["day"].insert(0, self.day.get())


root = tk.Tk()
page = Page(root)
page.pack()
root.mainloop()
</code></pre>
</code></pre>
<p>When you press the button, the text in the entries is updated, but it is doubled:</p>
<p><a href="https://i.sstatic.net/WwG4P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WwG4P.png" alt="picture" /></a></p>
<p>I've already checked and the <code>self.year</code>, <code>self.month</code> and <code>self.day</code> variables are right and the button command is executed only once.</p>
<p>How can I solve this? I don't know if it might be related or not, but if I uncomment <code>self.entries["year"].delete(0, "end")</code>, I get the error:</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 592, in get
return self._tk.getint(value)
^^^^^^^^^^^^^^^^^^^^^^
_tkinter.TclError: expected integer but got ""
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "D:\Sync1\Code\Python3\SportsPy\here\test.py", line 45, in <lambda>
command=lambda:self._f_buttonToday())
^^^^^^^^^^^^^^^^^^^^^
File "D:\Sync1\Code\Python3\SportsPy\here\test.py", line 56, in _f_buttonToday
self.entries["year"].insert(0, self.year.get())
^^^^^^^^^^^^^^^
File "C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 594, in get
return int(self._tk.getdouble(value))
^^^^^^^^^^^^^^^^^^^^^^^^^
_tkinter.TclError: expected floating-point number but got ""
</code></pre>
<p>Thanks!</p>
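<p>For reference, a minimal sketch of the callback relying only on the <code>textvariable</code> binding (my assumption is that the doubling comes from setting the <code>IntVar</code>, which already updates the entry, and then inserting the same value again on top of it):</p>
<pre><code>def _f_buttonToday(self):
    year, month, day = map(int, datetime.date.today().isoformat().split("-"))
    # Setting the variables is enough: each Entry is bound via textvariable,
    # so no explicit insert() or delete() calls are needed.
    self.year.set(year)
    self.month.set(month)
    self.day.set(day)
</code></pre>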
|
<python><tkinter>
|
2022-12-30 14:42:38
| 1
| 472
|
Abel Gutiérrez
|
74,962,688
| 2,482,149
|
Python - Reset BytesIO So Next File Isn't Appended
|
<p>I'm having a problem with the <code>BytesIO</code> class in Python. I want to convert a PDF file that I have retrieved from an S3 bucket into a dataframe, using a custom function <code>convert_bytes_to_df</code>. The first PDF file converts to a CSV fine; however, subsequent CSVs look as if they have been appended to each other. I have tried to reset the IO object with <code>seek</code> and <code>truncate</code>, but it doesn't seem to work. What am I doing wrong?</p>
<pre><code>
import json
import logging
import boto3
from io import BytesIO, StringIO
LOGGER = logging.getLogger(__name__)
logging.basicConfig(level=logging.ERROR)
logging.getLogger(__name__).setLevel(logging.DEBUG)
session = boto3.Session()
s3 = session.resource('s3')
src_bucket = s3.Bucket('input-bucket')
dest_bucket = s3.Bucket('output-bucket')
csv_buffer = StringIO()
def lambda_handler(event,context):
msg = event['Records'][0]['Sns']['Message']
pdf_files = json.loads(msg)['pdf_files']
location = json.loads(msg)['location']
total_files= len(pdf_files)
LOGGER.info('Processing: {}'.format(json.dumps(pdf_files)))
for pdf_file in pdf_files:
file_name = pdf_file['key']
obj = s3.Object(src_bucket.name,file_name)
fs = BytesIO(obj.get()['Body'].read())
df = convert_bytes_to_df(fs)
df.to_csv(csv_buffer,index=False)
s3.Object(dest_bucket.name, location +"/"+file_name.split('.')[0]+".csv").put(Body=csv_buffer.getvalue())
fs.seek(0)
fs.truncate(0)
LOGGER.info('Processed: {} in {}'.format(file_name,location))
LOGGER.info('Converted {} files: {}'.format(total_files,json.dumps(pdf_files)))
src_bucket.objects.all().delete()
LOGGER.info('Deleted all files from {}'.format(src_bucket.name))
</code></pre>
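<p>For reference, a minimal sketch of the loop with a fresh buffer per file (my assumption is that the appending comes from the shared module-level <code>csv_buffer</code>, which is never reset, rather than from the <code>BytesIO</code> object):</p>
<pre><code>for pdf_file in pdf_files:
    file_name = pdf_file['key']
    obj = s3.Object(src_bucket.name, file_name)
    fs = BytesIO(obj.get()['Body'].read())
    df = convert_bytes_to_df(fs)

    csv_buffer = StringIO()  # new buffer for every file
    df.to_csv(csv_buffer, index=False)
    s3.Object(dest_bucket.name,
              location + "/" + file_name.split('.')[0] + ".csv"
              ).put(Body=csv_buffer.getvalue())
</code></pre>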
|
<python><amazon-s3><boto3><bytesio><pdfplumber>
|
2022-12-30 14:38:18
| 1
| 1,226
|
clattenburg cake
|
74,962,306
| 12,275,675
|
Removing index which is not common between 2 dataframes
|
<p>I have the following code:</p>
<pre><code>import pandas.util.testing as testing
df = testing.makeDataFrame()
df
</code></pre>
<p>From this I have created 2 dataframes, with one dataframe having 2 fewer lines than the original one.</p>
<p>This is <code>df</code> - Original</p>
<pre><code>A B C D
OdhGFPa5Kw -0.686378 -1.210838 1.160708 0.903309
gelZFj4BG5 1.603112 1.852592 -0.065482 0.684566
mp3Aq5ueGD 0.254211 -0.788877 -0.626789 0.109116
pBtz9DHxUZ -0.970632 0.982661 -0.463984 -0.123727
K28pzbdYcX -1.311220 -2.121306 1.209484 -1.695901
71ZFgWaeDE 1.887420 0.337702 -0.176539 0.149089
alWOjkQ2eZ 1.997701 -0.354276 1.997802 -0.086803
</code></pre>
<p>This is <code>df1</code> - with 2 less lines</p>
<pre><code>A B C D
OdhGFPa5Kw -0.686378 -1.210838 1.160708 0.903309
gelZFj4BG5 1.603112 1.852592 -0.065482 0.684566
mp3Aq5ueGD 0.254211 -0.788877 -0.626789 0.109116
pBtz9DHxUZ -0.970632 0.982661 -0.463984 -0.123727
K28pzbdYcX -1.311220 -2.121306 1.209484 -1.695901
</code></pre>
<p>What I am trying to do is to remove all the rows which are not common between the two dataframes. To do this, we find the index values shared by both dataframes.</p>
<pre><code>duplicates = set(df.index).intersection(df1.index)
</code></pre>
<p>Could you please advise how I can remove the rows whose index is not in <code>duplicates</code>?</p>
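<p>For reference, a minimal sketch of one way to do this (assuming the goal is to keep only the rows whose index appears in <code>duplicates</code>):</p>
<pre><code>duplicates = set(df.index).intersection(df1.index)

# keep only the rows whose index is in the common set
df = df[df.index.isin(duplicates)]
df1 = df1[df1.index.isin(duplicates)]
</code></pre>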
|
<python><python-3.x><pandas><dataframe>
|
2022-12-30 13:54:15
| 1
| 1,220
|
Slartibartfast
|
74,962,204
| 5,134,203
|
Print text of regex'd/replaced element
|
<p>I am using beautifulsoup and python and I am trying to get the text of a traversed td like below:</p>
<pre><code><tr>
<td class="labelplain">&nbsp;Long Title</td>
<td><table width="100%" border="0" cellspacing="2" cellpadding="0">
<tr>
<td class="labelplain">RESOLUTION CALLING FOR AN INVESTIGATION, IN AID OF LEGISLATION, ON THE FREQUENT CHANGES IN THE PHILIPPINE
BANK NOTES AND COINS INITIATED BY THE BANGKO SENTRAL NG PILIPINAS (BSP)</td>
</tr>
</table></td>
<td>&nbsp;</td>
</tr>
</code></pre>
<p>The thing is, I am able to traverse and get this: "RESOLUTION CALLING FOR AN INVESTIGATION, IN AID OF LEGISLATION, ON THE FREQUENT CHANGES IN THE PHILIPPINE BANK NOTES AND COINS INITIATED BY THE BANGKO SENTRAL NG PILIPINAS (BSP)" with the following:</p>
<pre><code>long_title = soup.find('td', text = re.compile('Long Title'), attrs={'class': 'labelplain'}).find_next('td')
long_title_text = long_title.text.strip()
</code></pre>
<p>But the problem is that there are \r\n in some of the tds, like the one above. Notice "PHILIPPINE BANK NOTES": PHILIPPINE is on the first line, then a newline, then BANK NOTES AND COINS.
I am able to replace the newlines by adding <code>string = str(long_title).replace('\r', '').replace('\n', ' ')</code>:</p>
<pre><code> long_title = soup.find('td', text = re.compile('Long Title'), attrs={'class': 'labelplain'}).find_next('td')
string = str(long_title).replace('\r', '').replace('\n', ' ')
long_title_text = string.text.strip()
</code></pre>
<p>But the problem is I am getting error below when I print or echo long_title_text by using text.strip():</p>
<pre><code>AttributeError: 'str' object has no attribute 'text'
</code></pre>
<p>What would be the correct way to address this? I've tried using only <code>.text</code>, but to no avail.
Thanks in advance!</p>
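<p>For reference, a minimal sketch that extracts the text first and then does the replacements on the resulting plain string (<code>str(...)</code> returns a <code>str</code>, which has no <code>.text</code> attribute, hence the AttributeError):</p>
<pre><code>long_title = soup.find('td', text=re.compile('Long Title'),
                       attrs={'class': 'labelplain'}).find_next('td')
# get_text() returns a plain str, so the replacements can be chained on it;
# str(long_title) would include the HTML markup, not just the text
long_title_text = long_title.get_text().replace('\r', '').replace('\n', ' ').strip()
</code></pre>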
|
<python><beautifulsoup>
|
2022-12-30 13:42:47
| 1
| 435
|
schnydszch
|
74,961,896
| 4,752,223
|
Python Jupyter/Notebook: How to display a variable as text on a cell without copy/paste
|
<p>When reading/reviewing code, it becomes easier for me if I can see the 'look' of the variable a function is processing.</p>
<p>For that, I'd like to display a 'static' version of an instance of that variable (as a visual aid).</p>
<p>That variable may not be there on another run of the notebook; that's why it has to be text, not output.</p>
<p>This is also useful when creating documentation within the notebook.</p>
<p><a href="https://i.sstatic.net/b06N1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b06N1.png" alt="enter image description here" /></a></p>
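<p>For reference, a minimal sketch of one direction using IPython's <code>set_next_input</code>, which creates a new cell containing the given text, so the value survives as static cell content (the helper name <code>freeze</code> and the variable are my own placeholders):</p>
<pre><code>from IPython import get_ipython

def freeze(value):
    """Create a new cell whose source is the repr of `value`."""
    get_ipython().set_next_input(repr(value))

freeze(my_variable)  # my_variable is whatever you want to snapshot as text
</code></pre>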
|
<python><jupyter-notebook><jupyter><jupyter-lab>
|
2022-12-30 13:07:28
| 1
| 2,928
|
Rub
|
74,961,882
| 648,138
|
Multithreading in place of multiprocessing with celery
|
<p>I am a little frustrated with <a href="https://docs.celeryq.dev/en/stable/index.html" rel="nofollow noreferrer">celery documentation</a>. I understand that the command</p>
<pre><code>celery -A my-module worker -l INFO -n w1
</code></pre>
<p>starts a worker instance named <code>w1</code>. This means that the worker instance starts some default number of "<strong>processes</strong>" in the OS.</p>
<p>But can celery also start threads instead of processes? For example what does the following command do?</p>
<pre><code>celery -A my-module worker --pool threads -l INFO -n w1
</code></pre>
<p>I tried reading through the documentation but could not find anything that would give an answer to the question "What does multi threading mean when it comes to celery? Can celery support multi threading in place of multi processing?"</p>
|
<python><multithreading><multiprocessing><celery><distributed-computing>
|
2022-12-30 13:04:59
| 0
| 23,414
|
Suhail Gupta
|
74,961,814
| 4,418,481
|
Create multilevel Dask dataframe from multiple parquet files
|
<p>I have a folder with around 500 CSV files. Each file was created by saving a test result that looks as follows (a time series of multiple variables, where each file might contain around 50,000 rows).</p>
<p><a href="https://i.sstatic.net/dlZHC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dlZHC.png" alt="enter image description here" /></a></p>
<p>My final goal is to plot a specific parameter versus time from all the files (I can't know ahead of time which parameter). For example: read 500 files and plot reg vs t so that it looks like this:</p>
<p><a href="https://i.sstatic.net/598bB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/598bB.png" alt="enter image description here" /></a></p>
<p>In order to make sure that each line in the plot will be separate, I used a multilevel Pandas DataFrame. I used the following code to do so:</p>
<pre><code>folder = os.listdir(os.path.splitext(os.getcwd()+'/results')[0])
comp = {i.split('.')[0]: pd.read_csv('results/'+i, index_col='t') for i in folder}
combined = pd.concat(comp, axis=0)
plt.plot(combined.unstack(level=0)['reg'])
plt.show()
</code></pre>
<p>Since my data is pretty large and it becomes pretty slow to load and plot, I would like to use Dask in order to read the files in a parallel way. Also, I would like to switch to parquet files instead of CSV.</p>
<p>Is there any way to read 500 parquet files (say I saved the CSV files in a parquet format) and store them as a multilevel Dask Dataframe so I will achieve the same result but using parallel computation?</p>
<p>Thank you</p>
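<p>For reference, a minimal sketch of one way to do this with <code>dask.delayed</code>, tagging each file's rows with its name instead of using a second index level (a long format that Dask handles more naturally than a MultiIndex; it assumes <code>t</code> and <code>reg</code> are plain columns in the parquet files):</p>
<pre><code>from pathlib import Path

import dask
import dask.dataframe as dd
import pandas as pd

def load(path):
    df = pd.read_parquet(path)
    df['source'] = path.stem  # file name plays the role of the outer level
    return df

paths = sorted(Path('results').glob('*.parquet'))
ddf = dd.from_delayed([dask.delayed(load)(p) for p in paths])

# pull only what is needed for the plot, then pivot in pandas
pdf = ddf[['source', 't', 'reg']].compute()
pdf.pivot(index='t', columns='source', values='reg').plot()
</code></pre>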
|
<python><pandas><dask>
|
2022-12-30 12:57:41
| 0
| 1,859
|
Ben
|
74,961,706
| 3,578,468
|
Label Studio: Access file specified in task record
|
<p>I am trying to implement a pre-labeler for Label Studio. The documentation (e.g. <a href="https://labelstud.io/guide/ml_create.html#Example-inference-call" rel="nofollow noreferrer">https://labelstud.io/guide/ml_create.html#Example-inference-call</a>) is not very helpful here. I have imported images locally and need to read them from a task record to pass them to my model. When I print a task record, it looks like this:</p>
<pre><code>{
"id": 7,
"data": {
"image": "/data/upload/1/cfcf4486-0cdd1413486f7d923e6eff475c43809f.jpeg"
},
"meta": {},
"created_at": "2022-12-29T00:49:34.141715Z",
"updated_at": "2022-12-29T00:49:34.141720Z",
"is_labeled": false,
"overlap": 1,
"inner_id": 7,
"total_annotations": 0,
"cancelled_annotations": 0,
"total_predictions": 0,
"comment_count": 0,
"unresolved_comment_count": 0,
"last_comment_updated_at": null,
"project": 1,
"updated_by": null,
"file_upload": 7,
"comment_authors": [],
"annotations": [],
"predictions": []
}
</code></pre>
<p>My labeler implementation at the moment is this:</p>
<pre><code>class MyModel(LabelStudioMLBase):
def __init__(self, **kwargs):
super(MyModel, self).__init__(**kwargs)
self.model = ...
self.query_fn = ...
def make_result_record(self, path: Path):
# Reference: https://labelstud.io/tutorials/sklearn-text-classifier.html
mask = self.query_fn(path)
image = Image.open(path)
result = [
{
"original_width": image.width,
"original_height": image.height,
"image_rotation": 0,
"value": {"format": "rle", "rle": [mask], "brushlabels": ["crack"]},
"id": uuid(),
"from_name": "tag",
"to_name": "image",
"type": "brushlabels",
"origin": "inference",
}
]
return {"result": result, "score": 1.0}
def predict(self, tasks, **kwargs):
predictions = []
for task in tasks:
logger.info("task:")
logger.info("\n" + json.dumps(task, indent=2))
result = self.make_result_record(Path(task["data"]["image"]))
predictions.append(result)
return predictions
</code></pre>
<p>So where is <code>/data/upload/1/cfcf4486-0cdd1413486f7d923e6eff475c43809f.jpeg</code>? It is inside some storage that Label Studio spins up, I suppose. How do I access it? (And why does the documentation not talk about this...)</p>
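<p>For reference, a minimal sketch of a workaround worth considering: fetching the file over HTTP from the Label Studio server itself, since the <code>/data/upload/...</code> path is served by it (the host and the <code>LABEL_STUDIO_API_KEY</code> environment variable names here are my own assumptions, not something from the docs):</p>
<pre><code>import os
from io import BytesIO

import requests
from PIL import Image

LS_HOST = os.environ.get("LABEL_STUDIO_HOST", "http://localhost:8080")
LS_TOKEN = os.environ["LABEL_STUDIO_API_KEY"]

def fetch_image(upload_path):
    # upload_path is e.g. /data/upload/1/cfcf...jpeg from the task record
    r = requests.get(LS_HOST + upload_path,
                     headers={"Authorization": f"Token {LS_TOKEN}"})
    r.raise_for_status()
    return Image.open(BytesIO(r.content))
</code></pre>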
|
<python><machine-learning><data-annotations><label-studio>
|
2022-12-30 12:44:03
| 2
| 3,954
|
lo tolmencre
|
74,961,668
| 11,558,143
|
How to monitor subprocess's stdout in real time
|
<p>I have a subprocess that generates images. A main program will consume the images.</p>
<p>My plan is to launch the subprocess and monitor it. Once there are several images available <em>(i.e. the subprocess printed '2' or '3')</em>, I will start the main program.</p>
<p>However, I fail to get 'real-time' output from the subprocess. Every time, the subprocess does not return anything via PIPE until it has generated all 20 images, except in debug mode.</p>
<p>in <code>sub_process.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import time
for i in range(20):
time.sleep(1)
print(i)
# generate one image each second
</code></pre>
<p>main process: ready to go on processing images if there is at least one image.</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import PIPE
from subprocess import Popen
p = Popen(['python', "./sub_process.py"], stdout=PIPE, stderr=PIPE)
while True:
if '2' in p.stdout.readline().decode("utf-8"):
print('enough image')
break
print('go on processing images generated by subprocess.py')
</code></pre>
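<p>For reference, a minimal sketch of the parent with child buffering disabled (my assumption is that the child's stdout is block-buffered when it is a pipe, which is why everything arrives at once; <code>-u</code> forces unbuffered output in the child):</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import PIPE, Popen

# -u disables Python's output buffering in the child process
p = Popen(['python', '-u', './sub_process.py'], stdout=PIPE, stderr=PIPE)
for raw in iter(p.stdout.readline, b''):
    line = raw.decode('utf-8').strip()
    print('child printed:', line)
    if line == '2':
        print('enough images')
        break
</code></pre>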
|
<python><subprocess><stdout>
|
2022-12-30 12:40:53
| 1
| 1,462
|
zheyuanWang
|
74,961,598
| 3,991,164
|
Why is VSCode Python debugger slowing down execution by a factor of 5
|
<p>I run a Python 3.10 script that analyzes some 350 SQL files using the <code>sqlparse</code> module. When I run this via "Run Python File" in VSCode 1.73.1 it takes about 4 seconds. When I run it via "Debug Python File", it takes roughly 5 times as long.</p>
<p>The final analysis will run on a set of 1,500 SQL files, and debugging something that sits on top of the analysis result (a heavy dependencies network of objects) will probably take more than a minute in each iteration. This will slow down the development cycle considerably, somewhat like in the old times when we had a compile/bind-cycle before we can run our latest change.</p>
<p>I will probably survive, but I am curious: Where does this excessive runtime come from?</p>
|
<python><python-3.x><visual-studio-code><vscode-debugger>
|
2022-12-30 12:34:03
| 0
| 4,235
|
flaschbier
|
74,961,567
| 20,646,427
|
How to get username and user email in my form Django
|
<p>I have an "order creation" form which needs an email and username to complete a purchase.
I want to push the request user's username and email into this form (if the user is authenticated).</p>
<p>forms.py</p>
<pre><code>class OrderCreateForm(forms.ModelForm):
username = forms.CharField(label='Имя пользователя', widget=forms.TextInput(attrs={'class': 'form-control'}))
email = forms.EmailField(label='E-mail', widget=forms.EmailInput(attrs={'class': 'form-control'}))
vk_or_telegram = forms.CharField(label='Введите ссылку на vk или telegram для связи с админом',
widget=forms.TextInput(attrs={'class': 'form-control'}))
captcha = ReCaptchaField()
class Meta:
model = Order
fields = ('username', 'email', 'vk_or_telegram')
</code></pre>
<p>views.py</p>
<pre><code>def order_create(request):
cart = Cart(request)
if request.method == 'POST':
form = OrderCreateForm(request.POST)
if form.is_valid():
order = form.save()
for item in cart:
OrderItem.objects.create(order=order, product=item['post'], price=item['price'])
cart.clear()
return render(request, 'store/orders/created.html', {'order': order})
else:
form = OrderCreateForm()
return render(request, 'store/orders/create.html', {'cart': cart, 'form': form})
</code></pre>
<p><a href="https://i.sstatic.net/6WvU3.png" rel="nofollow noreferrer">template of form</a></p>
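<p>For reference, a minimal sketch of one way to pre-fill the form, passing the request user's data as <code>initial</code> values (assuming <code>request.user.username</code> and <code>request.user.email</code> are the fields wanted):</p>
<pre><code>def order_create(request):
    cart = Cart(request)
    if request.method == 'POST':
        form = OrderCreateForm(request.POST)
        # ... validation and saving as above ...
    else:
        initial = {}
        if request.user.is_authenticated:
            initial = {'username': request.user.username,
                       'email': request.user.email}
        form = OrderCreateForm(initial=initial)
    return render(request, 'store/orders/create.html', {'cart': cart, 'form': form})
</code></pre>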
|
<python><django>
|
2022-12-30 12:29:59
| 1
| 524
|
Zesshi
|
74,961,555
| 12,368,148
|
How to recieve and parse data coming from a Python socket server in dart?
|
<p>I have a socket server running in Python that sends a byte stream through the socket, in which the first 8 bytes contain the size of the remaining message body. The code looks something like this:</p>
<pre><code> data_to_send = struct.pack("Q", len(pickle_dump))+pickle_dump
client_socket.sendall(data_to_send)
</code></pre>
<p>The client endpoint of this, that is written in Python looks like this:</p>
<pre><code>data = b''
PAYLOAD_SIZE = struct.calcsize('Q')
while len(data) < PAYLOAD_SIZE:
packet = client_socket.recv(1024)
if not packet:
break
data += packet
print(len(data), PAYLOAD_SIZE)
packed_msg_size = struct.unpack('Q', data[:PAYLOAD_SIZE])[0]
data = data[PAYLOAD_SIZE:]
while len(data) < packed_msg_size:
data += client_socket.recv(1024)
print(type(data))
frame_data = data[:packed_msg_size]
data = data[packed_msg_size:]
</code></pre>
<p>Now, I am kind of new to <em><strong>Dart</strong></em> and <em><strong>Flutter</strong></em>, so I tried receiving the packets through Dart's sockets (whose implementation I am not sure about) like this:</p>
<pre><code>Socket.connect("10.0.2.2", 7878).then((socket) {
SOCKET = socket;
setState(() {
print('Connected to: '
'${socket.remoteAddress.address}:${socket.remotePort}');
log = '${socket.remoteAddress.address}:${socket.remotePort}';
});
var FULL_CLUSTER = ByteData(0);
// MAIN STUFF STARTS HERE
socket.listen((data) {
Uint8List size_bytesdata = data.sublist(0, 8);
print(size_bytesdata.buffer.asByteData().getUint64(0)); // Prints -6552722064661282816
print(size_bytesdata);
socket.destroy();
});
//
}).onError((error, stackTrace) {
setState(() {
log = '$error\n\n$stackTrace';
});
});
</code></pre>
<p>Now, first of all, I figured out that Dart's sockets expose bytes as a Uint8List, so I tried doing what I did in the Python client script, but the values do not match.</p>
<p>The <code>len(pickle_dump)</code> has the value <em>921765</em> but the Dart code (as mentioned in the snippet) prints <em>-6552722064661282816</em>. Now I believe the type interpretation on the Dart side is what's causing the trouble, as I have packed it as <code>'Q'</code> (using struct) in the server script, which stands for <em>'unsigned long long'</em>, but in Dart I parsed it as an <em>'unsigned int'</em>. How do I solve this issue? Also, a brief discussion of what exactly is going on would help a lot.</p>
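<p>For reference, a minimal sketch of the server side with an explicit byte order (my understanding is that <code>'Q'</code> uses the machine's native order, typically little-endian, while Dart's <code>getUint64(0)</code> defaults to big-endian; pinning one order on both sides should make the values agree):</p>
<pre><code>import struct

body = pickle_dump  # the payload bytes
# '&lt;Q' pins little-endian; read it in Dart with getUint64(0, Endian.little).
# Alternatively pack '&gt;Q' (big-endian) and keep Dart's big-endian default.
data_to_send = struct.pack("<Q", len(body)) + body
client_socket.sendall(data_to_send)
</code></pre>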
|
<python><flutter><dart><sockets><bytestream>
|
2022-12-30 12:28:55
| 1
| 532
|
Broteen Das
|
74,961,435
| 1,127,683
|
Python3 idiomatic way to dispatch method call on abstract base classes
|
<p>I'm using <code>Python 3.10.5</code>.</p>
<p>I've got the following class which is designed to:</p>
<ul>
<li>receive an object</li>
<li>lookup which class should handle the request</li>
<li>invoke a method on that class that has been looked up</li>
</ul>
<p>What I have so far:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC

class BaseHandler(ABC):
    def handle(self):
        pass

class HandlerOne(BaseHandler):
    def handle(self):
        ...  # some stuff

class HandlerTwo(BaseHandler):
    def handle(self):
        ...  # some other stuff

class CommandOne():
    def __init__(self):
        ...  # some attributes

class CommandTwo():
    def __init__(self):
        ...  # some attributes

class CommandDispatcher():
    def __init__(self):
        self._commands = {
            CommandOne.__name__: HandlerOne,
            CommandTwo.__name__: HandlerTwo,
        }

    def get_handler(self, command: object):
        return self._commands.get(type(command).__name__)

    def dispatch(self, command: object):
        handler = self.get_handler(command)
        handler.handle()  # <-- this doesn't work because the interpreter doesn't know that each handler is of type `BaseHandler`
</code></pre>
<p>I'm new to python, but have experience with Java and C#. I've been looking at this for a while but think I've been so used to working with statically typed languages that I'm approaching this incorrectly.</p>
<p>What is the idiomatic way to design this with Python, so that from the client's perspective I can achieve the following:</p>
<pre class="lang-py prettyprint-override"><code>dispatcher = CommandDispatcher()
command_one = CommandOne()
command_two = CommandTwo()
dispatcher.dispatch(command_one)
dispatcher.dispatch(command_two)
</code></pre>
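<p>For reference, a minimal sketch of one idiomatic shape (it assumes handlers are stateless, so instances can be created up front and the type itself can be the dictionary key):</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod

class BaseHandler(ABC):
    @abstractmethod
    def handle(self) -> None: ...

class HandlerOne(BaseHandler):
    def handle(self) -> None:
        print("one")

class HandlerTwo(BaseHandler):
    def handle(self) -> None:
        print("two")

class CommandOne: ...
class CommandTwo: ...

class CommandDispatcher:
    def __init__(self) -> None:
        # map the command *type* to a handler *instance*
        self._handlers: dict[type, BaseHandler] = {
            CommandOne: HandlerOne(),
            CommandTwo: HandlerTwo(),
        }

    def dispatch(self, command: object) -> None:
        handler = self._handlers[type(command)]
        handler.handle()

dispatcher = CommandDispatcher()
dispatcher.dispatch(CommandOne())  # prints "one"
dispatcher.dispatch(CommandTwo())  # prints "two"
</code></pre>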
|
<python>
|
2022-12-30 12:16:04
| 1
| 3,662
|
GreenyMcDuff
|
74,961,262
| 11,648,332
|
How can I scale an image with PIL without ruining its appearence?
|
<p>I noticed that simply multiplying and dividing an array, even by coefficients equivalent to 1, will deform the image.</p>
<p>I need to rescale the image pixels because I need to feed them to a ML model, but I noticed there seems to be a huge loss of information in the process.</p>
<p>This is the original image (an example):</p>
<pre><code>Image.fromarray((np.array(out_img.resize((224, 224)))),'L')
</code></pre>
<p><a href="https://i.sstatic.net/Zixm0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zixm0.png" alt="Original image" /></a></p>
<p>If I divide it by 255, it somehow ends up like this:</p>
<pre><code>Image.fromarray((np.array(out_img.resize((224, 224)))/255),'L')
</code></pre>
<p><a href="https://i.sstatic.net/n2FdH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n2FdH.png" alt="enter image description here" /></a></p>
<p>A lot of information seems lost, and apparently I can't revert back to the original:</p>
<pre><code>(np.array(out_img.resize((224, 224)))/255*255==np.array(out_img.resize((224, 224)))).all()
Image.fromarray((np.array(out_img.resize((224, 224)))/255*255),'L')
</code></pre>
<p><a href="https://i.sstatic.net/vsD4a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vsD4a.png" alt="enter image description here" /></a>
As you can see, I checked that multiplying and dividing by 255 gives us back the same array, but the images look different.</p>
<p>The same happens even if I naively divide and multiply by 1:</p>
<pre><code>Image.fromarray((np.array(out_img.resize((224, 224)))*(1/1)),'L')
</code></pre>
<p><a href="https://i.sstatic.net/jcO9O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jcO9O.png" alt="enter image description here" /></a></p>
<p>Is there an explanation for this behaviour or a way to prevent the information loss?</p>
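<p>For reference, a minimal sketch of the split I would expect to need (my assumption: <code>Image.fromarray</code> with mode <code>'L'</code> reinterprets the float64 buffer as raw bytes instead of converting it, so the scaled float copy for the model and a uint8 copy for display have to be kept separate):</p>
<pre><code>import numpy as np
from PIL import Image

arr = np.asarray(out_img.resize((224, 224)), dtype=np.float32)
model_input = arr / 255.0                     # floats in [0, 1] for the model

display = (model_input * 255.0).round().astype(np.uint8)
Image.fromarray(display, 'L')                 # renders like the original
</code></pre>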
|
<python><python-imaging-library>
|
2022-12-30 11:58:03
| 1
| 447
|
9879ypxkj
|
74,961,235
| 2,425,753
|
How to use StartingToken with DynamoDB pagination scan
|
<p>I have a DynamoDB table and I want to output items from it to a client using pagination. I thought I'd use <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Paginator.Scan" rel="nofollow noreferrer">DynamoDB.Paginator.Scan</a> and supply <code>StartingToken</code>; however, I don't see <code>NextToken</code> in the output of either <code>page</code> or the <code>iterator</code> itself. So how do I get it?</p>
<p>My goal is a REST API where the client requests the next X items from a table, supplying a StartingToken to iterate. Originally there's no token, but with each response the server returns a <code>NextToken</code>, which the client supplies as the <code>StartingToken</code> to get the next X items.</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import json
table="TableName"
client = boto3.client("dynamodb")
paginator = client.get_paginator("query")
token = None
size=1
for i in range(1,10):
client.put_item(TableName=table, Item={"PK":{"S":str(i)},"SK":{"S":str(i)}})
it = paginator.paginate(
TableName=table,
ProjectionExpression="PK,SK",
PaginationConfig={"MaxItems": 100, "PageSize": size, "StartingToken": token}
)
for page in it:
print(json.dumps(page, indent=2))
break
</code></pre>
<p>As a side note: how do I get one page from the paginator without using break/for? I tried using <code>next(it)</code>, but it does not work.</p>
<p>Here's the <code>it</code> object:</p>
<pre><code>{
'_input_token': ['ExclusiveStartKey'],
'_limit_key': 'Limit',
'_max_items': 100,
'_method': <bound method ClientCreator._create_api_method.<locals>._api_call of <botocore.client.DynamoDB object at 0x000001CBA5806AA0>>,
'_more_results': None,
'_non_aggregate_key_exprs': [{'type': 'field', 'children': [], 'value': 'ConsumedCapacity'}],
'_non_aggregate_part': {'ConsumedCapacity': None},
'_op_kwargs': {'Limit': 1,
'ProjectionExpression': 'PK,SK',
'TableName': 'TableName'},
'_output_token': [{'type': 'field', 'children': [], 'value': 'LastEvaluatedKey'}],
'_page_size': 1,
'_result_keys': [{'type': 'field', 'children': [], 'value': 'Items'},
{'type': 'field', 'children': [], 'value': 'Count'},
{'type': 'field', 'children': [], 'value': 'ScannedCount'}],
'_resume_token': None,
'_starting_token': None,
'_token_decoder': <botocore.paginate.TokenDecoder object at 0x000001CBA5D81960>,
'_token_encoder': <botocore.paginate.TokenEncoder object at 0x000001CBA5D82290>
}
</code></pre>
<p>And the page:</p>
<pre><code>{
"Items": [
{
"PK": {
"S": "2"
},
"SK": {
"S": "2"
}
}
],
"Count": 1,
"ScannedCount": 1,
"LastEvaluatedKey": {
"PK": {
"S": "2"
},
"SK": {
"S": "2"
}
},
"ResponseMetadata": {
"RequestId": "DBE4ON8SI0GOTS2RRO2OG43QJVVV4KQNSO5AEMVJF66Q9ASUAAJG",
"HTTPStatusCode": 200,
"HTTPHeaders": {
"server": "Server",
"date": "Fri, 30 Dec 2022 11:37:52 GMT",
"content-type": "application/x-amz-json-1.0",
"content-length": "121",
"connection": "keep-alive",
"x-amzn-requestid": "DBE4ON8SI0GOTS2RRO2OG43QJVVV4KQNSO5AEMVJF66Q9ASUAAJG",
"x-amz-crc32": "973385738"
},
"RetryAttempts": 0
}
}
</code></pre>
<p>I thought I could use <code>LastEvaluatedKey</code>, but that throws an error; I also tried to get the token like this, but it did not work:</p>
<pre class="lang-py prettyprint-override"><code>it._token_encoder.encode(page["LastEvaluatedKey"])
</code></pre>
<p>I also thought about using just <code>scan</code> without iterator, but I'm actually outputting a very filtered result-set. I need to set <code>Limit</code> to a very large value to get results and I don't want too many results at the same time. Is there a way to scan up to 1000 items but stop as soon as 10 items are found?</p>
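<p>For reference, a minimal sketch of the token round-trip that seems closest to what the paginator is designed for: let botocore aggregate one page and hand back its own opaque token via <code>build_full_result()</code> (a documented paginator method), rather than encoding <code>LastEvaluatedKey</code> by hand:</p>
<pre class="lang-py prettyprint-override"><code>def get_page(token=None, size=10):
    it = paginator.paginate(
        TableName=table,
        ProjectionExpression="PK,SK",
        # PageSize can exceed MaxItems: scan big pages server-side,
        # but stop aggregating after `size` matching items
        PaginationConfig={"MaxItems": size, "PageSize": 1000,
                          "StartingToken": token},
    )
    result = it.build_full_result()   # aggregates up to MaxItems items
    # 'NextToken' is present only when there are more items to fetch
    return result["Items"], result.get("NextToken")

items, token = get_page()
while token:
    items, token = get_page(token)
</code></pre>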
|
<python><amazon-web-services><pagination><amazon-dynamodb><boto3>
|
2022-12-30 11:54:56
| 1
| 1,636
|
rfg
|
74,961,155
| 7,209,826
|
AttributeError: 'Spider' object has no attribute 'crawler' in scrapy spider
|
<p>In order to access settings from <code>__init__</code>, I had to add a <code>from_crawler</code> <code>@classmethod</code>. Now it appears that some functionality of the Scrapy framework was lost: I am getting <code>AttributeError: 'Code1Spider' object has no attribute 'crawler'</code> when a URL fails and the spider tries to retry the request.
Scrapy version is 2.0.1. The spider is running on Zyte cloud.</p>
<p>What did I do wrong, and how do I fix it?</p>
<p>Here is my spider code:</p>
<pre><code>class Code1Spider(scrapy.Spider):
name = 'cointelegraph_pr'
allowed_domains = ['cointelegraph.com']
start_urls = ['https://cointelegraph.com/press-releases']
def __init__(self, settings):
#Returns settings values as dict
settings=settings.copy_to_dict()
self.id = int(str(datetime.now().timestamp()).split('.')[0])
self.gs_id = settings.get('GS_ID')
self.endpoint_url = settings.get('ENDPOINT_URL')
self.zyte_api_key = settings.get('ZYTE_API_KEY')
self.zyte_project_id = settings.get('ZYTE_PROJECT_ID')
self.zyte_collection_name = self.name
#Loads a list of stop words from predefined google sheet
self.denied = load_gsheet(self.gs_id)
#Loads all scraped urls from previous runs from zyte collections
self.scraped_urls = load_from_collection(self.zyte_project_id, self.zyte_collection_name, self.zyte_api_key)
logging.info("###############################")
logging.info("Number of previously scraped URLs = {}.".format(len(self.scraped_urls)))
logging.info("")
# We need this to pass settings into init. Otherwise settings will be accessible only after init function.
# As per https://docs.scrapy.org/en/1.8/topics/settings.html#how-to-access-settings
@classmethod
def from_crawler(cls, crawler):
settings = crawler.settings
return cls(settings)
</code></pre>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 42, in process_request
defer.returnValue((yield download_func(request=request, spider=spider)))
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1362, in returnValue
raise _DefGen_Return(val)
twisted.internet.defer._DefGen_Return: <504 https://cointelegraph.com/press-releases/the-launch-of-santa-browser-to-bring-in-the-next-200m-users-onto-web30>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 51, in process_response
response = yield deferred_from_coro(method(request=request, response=response, spider=spider))
File "/usr/local/lib/python3.8/site-packages/scrapy/downloadermiddlewares/retry.py", line 53, in process_response
return self._retry(request, reason, spider) or response
File "/usr/local/lib/python3.8/site-packages/scrapy/downloadermiddlewares/retry.py", line 69, in _retry
stats = spider.crawler.stats
AttributeError: 'Code1Spider' object has no attribute 'crawler'
</code></pre>
<p>Everything else is the default Scrapy spider; there are no modifications to settings or middleware.
What did I do wrong, and how do I fix it?</p>
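<p>For reference, a minimal sketch of the fix that mirrors what Scrapy's base <code>from_crawler</code> does (my assumption: the retry middleware needs <code>spider.crawler</code>, which the base implementation attaches via the <code>_set_crawler</code> helper; since that helper is underscore-private, relying on it is itself an assumption about stability):</p>
<pre><code>@classmethod
def from_crawler(cls, crawler):
    spider = cls(crawler.settings)
    spider._set_crawler(crawler)  # attaches .crawler and hooks signals
    return spider
</code></pre>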
|
<python><scrapy>
|
2022-12-30 11:46:58
| 1
| 1,119
|
JBJ
|
74,961,061
| 9,267,178
|
Fibonacci Sequence based question slightly changed
|
<p>I was recently given this question in one of my interviews, which I, of course, failed.</p>
<p><a href="https://i.sstatic.net/gRUx6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gRUx6.png" alt="Question" /></a></p>
<p>I just want to know how this will be solved. <strong>The code I tried to use was this:</strong></p>
<pre><code>def gibonacci(n, x, y):
# Base case
if n == 0:
return x
elif n == 1:
return y
else:
sequence = [x, y]
        for i in range(2, n + 1):  # include index n so sequence[-1] is the n-th term
next_term = sequence[i-1] - sequence[i-2]
sequence.append(next_term)
return sequence[-1]
</code></pre>
<p>Any help, even just with the logic, will be enough. Thanks!</p>
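<p>For reference, a minimal sketch of a closed form (assuming the recurrence in my code, g(i) = g(i-1) - g(i-2): writing out the terms shows the sequence repeats every 6 steps, so no loop is needed):</p>
<pre><code>def gibonacci(n, x, y):
    # x, y, y-x, -x, -y, x-y, and then the pattern repeats
    cycle = [x, y, y - x, -x, -y, x - y]
    return cycle[n % 6]
</code></pre>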
|
<python><python-3.x><math><fibonacci>
|
2022-12-30 11:34:14
| 2
| 420
|
Avi Thour
|
74,961,041
| 353,337
|
Strip tox path from pytest-cov output
|
<p>When using pytest and <a href="https://pypi.org/project/pytest-cov/" rel="nofollow noreferrer">pytest-cov</a> with <a href="https://tox.wiki/en/latest/" rel="nofollow noreferrer">tox</a>. My <code>tox.ini</code> is</p>
<pre class="lang-ini prettyprint-override"><code>[tox]
envlist = py3
[testenv]
deps =
pytest
pytest-codeblocks
pytest-cov
commands =
pytest {tty:--color=yes} {posargs} --cov=mypackagename
</code></pre>
<p>The output will contain the full tox path of the files, e.g.:</p>
<pre><code>---------- coverage: platform linux, python 3.11.1-final-0 -----------
Name Stmts Miss Cover
------------------------------------------------------------------------------------------------------------
.tox/py3/lib/python3.11/site-packages/mypackagename/__init__.py 1 0 100%
------------------------------------------------------------------------------------------------------------
TOTAL 678 168 75%
</code></pre>
<p>I'm not interested in the prefix <code>.tox/py3/lib/python3.11/site-packages/</code>. Is there any way to remove it from the output?</p>
<pre><code>mypackagename/__init__.py 1 0 100%
</code></pre>
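<p>For reference, one direction worth checking (unverified by me for this exact setup) is coverage.py's <code>[paths]</code> aliasing, which maps the site-packages location back to the source tree when results are combined, e.g. in <code>.coveragerc</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[paths]
source =
    mypackagename
    .tox/*/lib/python*/site-packages/mypackagename
</code></pre>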
|
<python><pytest><tox><pytest-cov>
|
2022-12-30 11:30:59
| 1
| 59,565
|
Nico Schlömer
|
74,960,844
| 9,138,097
|
How to merge the duplicates rows in pandas dataframe with python3
|
<p>I'm trying to merge the duplicate rows in the resulting .csv but am unable to get the desired result.</p>
<p>The code below otherwise works just fine.</p>
<pre><code>def inputCsv():
r = requests.get(endpoint, headers=headers, params=params)
with open("input.csv", "w") as f:
f.writelines(r.text.splitlines(True))
df = pd.read_csv("input.csv")
return df
def outputCsv():
with open("secret.json", "w") as file:
auth = ssm.get_parameter(Name="/something/something/creds", WithDecryption=True)
file.write(str(auth['Parameter']['Value']))
os.environ["creds"] = "secret.json"
rows = []
with open(rb'output.csv', 'w', newline='') as out_file:
timestamp = datetime.now()
        df = inputCsv()
if 'Name' in df.columns:
df.rename(columns = {"Name": "team", "Total": "cost"}, inplace = True)
df.insert(0, 'date',timestamp)
df.insert(1, 'resource_type', "pod")
df.insert(2, 'resource_name', "kubernetes")
df.insert(3, 'cluster_name', "eks-cluster")
df.drop(["CPU", "GPU", "RAM", "PV", "Network", "LoadBalancer", "External", "Shared", "Efficiency"], axis=1, inplace=True)
df['team'] = df['team'].map(squads).fillna(df['team'])
df.to_csv(out_file, index=False)
</code></pre>
<p>But the resulting output looks like the below, because the <code>df['team'].map</code> call maps several original names onto the same team, producing duplicate rows.</p>
<pre><code>date,resource_type,resource_name,cluster_name,team,cost
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,billing,0.201
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,sre-infra-and-release,0.238
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,sre-infra-and-release,0.008
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,sre-infra-and-release,0.836
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,growth,0.513
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,sre-observability,3.633
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,order-platform,1.963
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,menu,0.46
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,ncr,3.291
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,order-platform,4.846
2022-12-30 14:56:08.383080,pod,kubernetes,eks-cluster,grocery-affordability,0.171
</code></pre>
<p>I want the duplicated <code>sre-infra-and-release</code> rows to be merged into a single row, with their costs summed together.</p>
<p>I tried the line below, but somehow it is not working; the resulting file still has the duplicated rows.</p>
<pre><code>df.groupby(['team']).sum()
</code></pre>
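<p>For reference, a minimal sketch of the likely missing piece: <code>groupby(...).sum()</code> returns a new frame rather than modifying <code>df</code> in place, so the result has to be assigned back (with the other columns kept as grouping keys so they survive):</p>
<pre><code>df = (df.groupby(['date', 'resource_type', 'resource_name',
                  'cluster_name', 'team'], as_index=False)['cost']
        .sum())
df.to_csv(out_file, index=False)
</code></pre>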
|
<python><python-3.x><pandas><dataframe><lambda>
|
2022-12-30 11:08:50
| 3
| 811
|
Aziz Zoaib
|
74,960,828
| 6,689,249
|
how to compare field value with previous one in pydantic validator?
|
<p>What I tried so far:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, validator
class Foo(BaseModel):
a: int
b: int
c: int
class Config:
validate_assignment = True
@validator("b", always=True)
def validate_b(cls, v, values, field):
# field - doesn't have current value
# values - has values of other fields, but not for 'b'
if values.get("b") == 0: # imaginary logic with prev value
return values.get("b") - 1
return v
f = Foo(a=1, b=0, c=2)
f.b = 3
assert f.b == -1 # fails
</code></pre>
<p>Also looked up property setters but apparently they don't work with pydantic.</p>
<p>This looks like a bug to me, so I made an issue on GitHub: <a href="https://github.com/pydantic/pydantic/issues/4888" rel="nofollow noreferrer">https://github.com/pydantic/pydantic/issues/4888</a></p>
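<p>For reference, a minimal sketch of a workaround worth trying: intercepting assignment on the model itself, since the old value is still readable there (the assumption being that overriding <code>__setattr__</code> and delegating to <code>super()</code> is safe with pydantic v1 and <code>validate_assignment</code>):</p>
<pre class="lang-py prettyprint-override"><code>class Foo(BaseModel):
    a: int
    b: int
    c: int

    class Config:
        validate_assignment = True

    def __setattr__(self, name, value):
        # the previous value of `b` is still available via self.b here
        if name == "b" and getattr(self, "b", None) == 0:
            value = self.b - 1  # imaginary logic with the previous value
        super().__setattr__(name, value)

f = Foo(a=1, b=0, c=2)
f.b = 3
assert f.b == -1
</code></pre>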
|
<python><pydantic>
|
2022-12-30 11:07:04
| 1
| 4,362
|
aiven
|
74,960,725
| 6,025,866
|
Unable to install tensorflow-text and unable to import keras_nlp
|
<p>I am trying out the Keras-NLP library by using one of the examples provided on the Keras website. I have installed Keras-NLP with the command <code>pip install keras-nlp</code> and TensorFlow (version 2.9.2).</p>
<p>When I run the following,</p>
<pre><code>import keras_nlp
import tensorflow as tf
from tensorflow import keras
</code></pre>
<p>I get the error as follows</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
Input In [5], in <cell line: 2>()
1 get_ipython().system('pip install -q --upgrade keras-nlp tensorflow')
----> 2 import keras_nlp
3 import tensorflow as tf
4 from tensorflow import keras
File /opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/keras_nlp/__init__.py:17, in <module>
15 from keras_nlp import layers
16 from keras_nlp import metrics
---> 17 from keras_nlp import models
18 from keras_nlp import tokenizers
19 from keras_nlp import utils
File /opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/keras_nlp/models/__init__.py:30, in <module>
26 from keras_nlp.models.distil_bert.distil_bert_tokenizer import (
27 DistilBertTokenizer,
28 )
29 from keras_nlp.models.roberta.roberta_backbone import RobertaBackbone
---> 30 from keras_nlp.models.roberta.roberta_classifier import RobertaClassifier
31 from keras_nlp.models.roberta.roberta_preprocessor import RobertaPreprocessor
32 from keras_nlp.models.roberta.roberta_tokenizer import RobertaTokenizer
File /opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/keras_nlp/models/roberta/roberta_classifier.py:22, in <module>
20 from keras_nlp.models.roberta.roberta_backbone import RobertaBackbone
21 from keras_nlp.models.roberta.roberta_backbone import roberta_kernel_initializer
---> 22 from keras_nlp.models.roberta.roberta_preprocessor import RobertaPreprocessor
23 from keras_nlp.models.roberta.roberta_presets import backbone_presets
24 from keras_nlp.utils.pipeline_model import PipelineModel
File /opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/keras_nlp/models/roberta/roberta_preprocessor.py:20, in <module>
17 import copy
19 import tensorflow as tf
---> 20 import tensorflow_text as tf_text
21 from tensorflow import keras
23 from keras_nlp.models.roberta.roberta_presets import backbone_presets
ModuleNotFoundError: No module named 'tensorflow_text'
</code></pre>
<p>I also tried to install tensorflow-text using the commands <code>pip install -U tensorflow-text==2.9.2</code> and <code>pip install -U tensorflow-text</code>. Both of these commands give the following error:</p>
<p><strong>Error from first command</strong></p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow-text==2.9.2 (from versions: none)
ERROR: No matching distribution found for tensorflow-text==2.9.2
</code></pre>
<p><strong>Error from second command</strong></p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow-text (from versions: none)
ERROR: No matching distribution found for tensorflow-text
</code></pre>
<p>The system I am using is as follows:</p>
<p>MacOS Monterey
Apple M1 Pro chip</p>
<p>Is there any other prerequisite library I should install to make this work? Thank You in advance!</p>
|
<python><tensorflow><keras><deep-learning><nlp>
|
2022-12-30 10:53:59
| 0
| 441
|
adhok
|
74,960,707
| 11,358,740
|
Poetry stuck in infinite install/update
|
<p>My issue is that when I execute <code>poetry install</code>, <code>poetry update</code> or <code>poetry lock</code> the process keeps running indefinitely.</p>
<p>I tried using the <code>-vvv</code> flag to get output of what's happening and it looks like it gets stuck forever in the first install.</p>
<p>My connection is good and all packages that I tried installing exist.</p>
<p>I use version 1.2.1 but I cannot upgrade to newer versions because the format of the <code>.lock</code> file is different and our pipeline fails.</p>
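<p>For reference, the first thing worth ruling out (a common suggestion, not something I have confirmed for 1.2.1) is a corrupted package cache; clearing it and retrying with verbose output would show whether the hang moves:</p>
<pre><code>poetry cache clear pypi --all
poetry install -vvv
</code></pre>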
|
<python><infinite><python-poetry>
|
2022-12-30 10:52:15
| 6
| 824
|
A-fandino
|
74,960,656
| 1,473,517
|
How to make tqdm work with a simple while loop
|
<p>I have this simple code (do <code>pip install primesieve</code> first):</p>
<pre><code>import primesieve

def prob(n):
it = primesieve.Iterator()
p = 1
prime = it.next_prime()
while prime <= n:
p = p * (1-1/prime)
prime = it.next_prime()
return p
</code></pre>
<p>How can I use tqdm to get a progress bar for the while loop?</p>
<hr />
<pre><code>import primesieve
from tqdm import tqdm

def prob(n):
it = primesieve.Iterator()
p = 1
prime = it.next_prime()
pbar = tqdm(total=n)
while prime <= n:
p = p * (1-1/prime)
prime = it.next_prime()
pbar.update(prime)
return p
</code></pre>
<p>gives an incrementing clock, but it doesn't give any indication of how complete the while loop is. That is, there is no progress bar.</p>
<p>You can test this with <code>prob(2**32)</code>.</p>
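<p>For reference, a minimal sketch that advances the bar by the gap between consecutive primes instead of by the absolute prime value (my reading is that <code>update()</code> takes an increment, so passing <code>prime</code> overshoots the total almost immediately, leaving tqdm with nothing sensible to draw):</p>
<pre><code>import primesieve
from tqdm import tqdm

def prob(n):
    it = primesieve.Iterator()
    p = 1.0
    last = 0
    prime = it.next_prime()
    with tqdm(total=n) as pbar:
        while prime <= n:
            p *= 1 - 1 / prime
            pbar.update(prime - last)  # advance by the distance covered
            last = prime
            prime = it.next_prime()
        pbar.update(n - last)          # finish the bar
    return p
</code></pre>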
|
<python><tqdm>
|
2022-12-30 10:46:58
| 2
| 21,513
|
Simd
|
74,960,653
| 979,974
|
Python AttributeError: 'numpy.ndarray' object has no attribute 'append'
|
<p>I am trying to parse a folder which contains CSV files (these CSV files hold pixel image positions) and store them in a NumPy array.
When I try to perform this action, I get the error <code>AttributeError: 'numpy.ndarray' object has no attribute 'append'</code>.
I understand that NumPy arrays do not have an <code>append()</code> method.</p>
<p>However, in my code I call the method on a list: <code>images.append(img)</code></p>
<p>Could you tell me what I am doing wrong?</p>
<p>Here my code:</p>
<pre><code># Create an empty list to store the images
images = []
# Iterate over the CSV files in the img_test folder
for file in os.listdir("img_test"):
if file.endswith(".txt"):
# Read the CSV file into a dataframe
df = pd.read_csv(os.path.join("img_test", file), delim_whitespace=True, header=None, dtype=float)
# Convert the dataframe to a NumPy array
image = df.to_numpy()
# Extract the row and column indices and the values
rows, cols, values = image[:, 0], image[:, 1], image[:, 2]
# Convert the row and column indices to integers
rows = rows.astype(int)
cols = cols.astype(int)
# Create a 2D array of the correct shape filled with zeros
img = np.zeros((1024, 1024))
# Assign the values to the correct positions in the array
img[rows, cols] = values
# Resize the image to 28x28
img = cv2.resize(img, (28, 28))
# Reshape the array to a 3D array with a single channel
img = img.reshape(28, 28, 1)
# Append the image to the list
images.append(img)
# Convert the list of images to a NumPy array
images = np.concatenate(images, axis=0)
</code></pre>
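<p>For reference, a minimal sketch of the two changes worth checking (assumptions on my part: the cell was re-run after <code>images</code> had already been replaced by an ndarray, and the intended output shape is <code>(N, 28, 28, 1)</code>, which needs <code>np.stack</code> rather than <code>np.concatenate</code>):</p>
<pre><code>images = []  # re-initialize right before the loop, so a previous run's
             # ndarray (which has no append) can't shadow the list

for file in os.listdir("img_test"):
    # ... existing loop body from above ...
    images.append(img)

images = np.stack(images, axis=0)  # shape (N, 28, 28, 1); concatenate on
                                   # axis=0 would give (N*28, 28, 1) instead
</code></pre>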
|
<python><image><numpy><image-resizing>
|
2022-12-30 10:46:42
| 3
| 953
|
user979974
|
74,959,824
| 12,242,625
|
How to remove white borders on left/right side?
|
<p>I have plots, containing technical measurements which are looking like this:</p>
<p><a href="https://i.sstatic.net/AkoDy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AkoDy.png" alt="enter image description here" /></a></p>
<p>Now I want to remove everything but the content and I managed to get to this point you can see in the following image. But I'm still having the borders on the right and left side. (I changed the background color to yellow for better visibility and highlighted the borders I want to get rid of with red. Actually, they are white).</p>
<p><a href="https://i.sstatic.net/F58Qo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F58Qo.png" alt="enter image description here" /></a></p>
<p>How can I remove them (the bottom and top don't need to be removed), so that the plot begins exactly where the line starts/ends?</p>
<p>Target image should still be 480x480px, even without the borders.</p>
<p><a href="https://i.sstatic.net/5y6n0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5y6n0.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.DataFrame({"x":[1,2,3,4,5,6,7,8,9,10],
"y1":[4,3,2,6,6,2,1,5,7,3],
"y2":[1,0,6,0,9,2,3,5,4,7]})
size=[480,480]
fig, ax = plt.subplots(figsize=(size[0]/100, size[1]/100), dpi=100, facecolor="yellow")
p2 = sns.lineplot(data=df, x="x", y="y1", ax=ax)
p2.set(xlabel=None, xticklabels=[],
ylabel=None, yticklabels=[])
p3 = sns.lineplot(data=df, x="x", y="y2", ax=ax)
p3.set(xlabel=None, xticklabels=[],
ylabel=None, yticklabels=[])
ax.set(ylim=(0, 10))
plt.box(False)
plt.tick_params(left=False, labelleft=False, bottom=False)
plt.tight_layout()
plt.show()
</code></pre>
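<p>For reference, a minimal sketch of the two knobs that seem relevant (assumptions on my part): <code>ax.margins(x=0)</code> removes the padding between the data and the axes edge, and making the axes fill the figure removes the figure-level border while keeping the 480x480 size:</p>
<pre><code>ax.margins(x=0)                # line starts/ends exactly at the axes edge
ax.set_position([0, 0, 1, 1])  # axes cover the whole 480x480 figure
# note: skip plt.tight_layout(), it would re-add padding around the axes
plt.show()
</code></pre>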
|
<python><matplotlib><seaborn>
|
2022-12-30 09:03:50
| 1
| 3,304
|
Marco_CH
|
74,959,760
| 1,826,912
|
Can I use the Scipy simulated annealing library to solve the traveling salesman problem?
|
<p>To solve the traveling salesman problem with simulated annealing, we need to start with an initial solution, a random permutation of the cities. This is the order in which the salesman is supposed to visit them. Then, we switch to a "neighboring solution" by swapping two cities. And then there are details around when to switch to the neighboring solution, etc.</p>
<p>I could implement all of this from scratch. But I wanted to see if I could use the inbuilt SciPy annealing solver. Looking at the documentation, it looks like the <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.anneal.html" rel="nofollow noreferrer">old anneal method</a> has been deprecated. The new method that is supposed to have replaced it is <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.basinhopping.html#scipy.optimize.basinhopping" rel="nofollow noreferrer">basinhopping</a>. But looking through the documentation and source code, it seems to be geared more towards optimizing a function of some array where any float values of that array are permissible (and then there are local and global optima). That's what all the examples in the documentation and code comments are geared towards. I can't imagine how I would use any of these inbuilt routines to solve the famous traveling salesman problem itself, since the array there is a permutation array. If you just perturb its values by some floating point numbers, you won't get a valid solution.</p>
<p>So, the conclusion seems to be that those standard routines are inapplicable on a combinatorial optimization problem like the traveling salesman? Or am I missing something?</p>
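<p>For reference, a minimal sketch of how far <code>basinhopping</code> can be bent toward a permutation space: a custom <code>take_step</code> does the city swap, and a do-nothing custom minimizer (SciPy's <code>minimize</code> accepts a callable as <code>method</code>) stops the local search from producing non-integer "cities". Whether this still counts as meaningful annealing is exactly the open question:</p>
<pre><code>import numpy as np
from scipy.optimize import OptimizeResult, basinhopping

rng = np.random.default_rng(0)
dist = rng.random((10, 10))          # hypothetical distance matrix

def tour_length(x):
    p = x.astype(int)
    return dist[p, np.roll(p, -1)].sum()

def swap_step(x):
    x = x.copy()
    i, j = rng.choice(len(x), size=2, replace=False)
    x[i], x[j] = x[j], x[i]          # neighboring solution: swap two cities
    return x

def no_local_search(fun, x0, *args, **kwargs):
    # keep the permutation intact: "local minimization" just evaluates x0
    return OptimizeResult(x=x0, fun=fun(x0), success=True, nfev=1)

res = basinhopping(tour_length, np.arange(10, dtype=float),
                   take_step=swap_step,
                   minimizer_kwargs={"method": no_local_search},
                   niter=2000, T=0.5)
print(res.x.astype(int), res.fun)
</code></pre>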
|
<python><optimization><scipy>
|
2022-12-30 08:56:03
| 0
| 2,711
|
Rohit Pandey
|
74,959,601
| 3,381,215
|
pip install from private repo but requirements from PyPi only installs private package
|
<p>Unlike <a href="https://stackoverflow.com/questions/62282049/pip-install-from-private-repo-but-requirements-from-pypi">pip install from private repo but requirements from PyPi</a>, I am able to install my package (daisy) from our private artifactory instance with:</p>
<pre class="lang-bash prettyprint-override"><code>pip3 install -i https://our-artifactory/pypi/simple daisy
</code></pre>
<p>The ouput is:</p>
<pre class="lang-bash prettyprint-override"><code>Looking in indexes: https://our-artifactory/api/pypi/simple
Collecting daisy
Downloading https://our-artifactory/artifactory/api/pypi/my-repo/daisy/0.0.2/daisy-0.0.2-py3-none-any.whl (4.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.8/4.8 MB 12.1 MB/s eta 0:00:00
ERROR: Could not find a version that satisfies the requirement pandas<2.0.0,>=1.5.2 (from daisy) (from versions: none)
ERROR: No matching distribution found for pandas<2.0.0,>=1.5.2
</code></pre>
<p>When I then try to install pandas by itself it works:</p>
<pre class="lang-bash prettyprint-override"><code>pip3 install pandas
Collecting pandas
Downloading pandas-1.5.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 16.0 MB/s eta 0:00:00
Collecting python-dateutil>=2.8.1
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting pytz>=2020.1
Downloading pytz-2022.7-py2.py3-none-any.whl (499 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 499.4/499.4 kB 4.1 MB/s eta 0:00:00
Collecting numpy>=1.20.3
Downloading numpy-1.24.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 18.9 MB/s eta 0:00:00
Collecting six>=1.5
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Installing collected packages: pytz, six, numpy, python-dateutil, pandas
Successfully installed numpy-1.24.1 pandas-1.5.2 python-dateutil-2.8.2 pytz-2022.7 six-1.16.0
</code></pre>
<p>It is even the right version. Somehow I think in the first command it tries to get all dependencies from our private repo as well. Is there a way to get the package from the private repo and the dependencies from PyPI?</p>
<p>Btw, I'm working from a conda (miniforge) Python3.9 environment.</p>
<p>Edit: I got a bit further using:</p>
<pre class="lang-bash prettyprint-override"><code>pip3 install -i https://our-artifactory/artifactory/api/pypi/dl-innersource-pypi/simple daisy --extra-index-url https://pypi.org/simple
</code></pre>
<p>However, it installs daisy from PyPI; I guess it's unfortunate that I picked an already existing name...</p>
<p>Edit: I can get it to work by specifying my daisy version, like this:</p>
<pre class="lang-bash prettyprint-override"><code>pip install --index-url https://my-artifactory/artifactory/api/pypi/dl-common-pypi/simple daisy==0.0.2
</code></pre>
<p>However, leaving out the version number reverts to fetching the PyPI version of daisy. Should this be considered a bug since I explicitly tell pip to look at my-artifactory first and then at public PyPI?</p>
|
<python><pip><python-poetry>
|
2022-12-30 08:33:28
| 1
| 1,199
|
Freek
|
74,959,578
| 2,998,077
|
Python to count number of occurrence of each line, in huge text file
|
<p>I have a big text file with millions of lines, and I want to count the number of occurrences of each line (how many times the line appears in the file).</p>
<p><a href="https://i.sstatic.net/7OJe9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7OJe9.png" alt="enter image description here" /></a></p>
<p>The current solution I am using is below. It works, but it is very slow.</p>
<p>What is a better way to do this?</p>
<pre><code>from collections import Counter
crimefile = open("C:\\temp\\large text_file.txt", 'r', encoding = 'utf-8')
yourResult = [line.strip().split('\n') for line in crimefile.readlines()]
yourResult = sum(yourResult, [])
result = dict((i, yourResult.count(i)) for i in yourResult)
output = sorted((value,key) for (key,value) in result.items())
print (Counter(yourResult))
</code></pre>
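<p>For reference, a minimal sketch of a single-pass version: <code>Counter</code> consumes the file lazily, which avoids both reading everything into memory and the quadratic <code>list.count</code> loop:</p>
<pre><code>from collections import Counter

with open(r"C:\temp\large text_file.txt", encoding="utf-8") as f:
    counts = Counter(line.rstrip("\n") for line in f)  # one streaming pass

for line, n in counts.most_common(10):
    print(n, line)
</code></pre>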
|
<python><count>
|
2022-12-30 08:30:51
| 3
| 9,496
|
Mark K
|
74,959,446
| 2,651,073
|
Pandas: Select top n groups
|
<p>Suppose I have a dataframe like</p>
<pre><code>a b
i1 t1
i1 t2
i2 t3
i2 t1
i2 t3
i3 t2
</code></pre>
<p>I want to group df by "a" and then select the top 2 largest groups (by number of rows), keeping all of their rows:</p>
<pre><code>a b
i2 t3
i2 t1
i2 t3
i1 t1
i1 t2
</code></pre>
<p>I tried:</p>
<pre><code>df.groupby("a").head(2)
</code></pre>
<p>But it seems to select the first two rows of each group.</p>
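<p>For reference, a minimal sketch of one approach (assuming the desired output is all rows of the 2 biggest groups, largest group first):</p>
<pre><code>top = df['a'].value_counts().nlargest(2).index  # labels of the 2 biggest groups
out = df[df['a'].isin(top)]
# optional: order the rows so the largest group comes first
out = out.set_index('a').loc[top].reset_index()
</code></pre>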
|
<python><pandas>
|
2022-12-30 08:11:32
| 1
| 9,816
|
Ahmad
|
74,959,372
| 1,611,067
|
How to concatenate Boto3 filter expression?
|
<p>I would like to dynamically create a Boto3 filter expression. My goal is to make a tool to easily fetch data from DynamoDB with the most commonly used filters. I'm trying to store filter expressions in a list and then combine them into one expression.</p>
<p>So, if I have this list:</p>
<pre><code>list_of_expressions = [
Attr("ProcessingStatus").eq("status1"),
Attr("TransportType").eq("transport_type1"),
]
</code></pre>
<p>How can I concatenate it with '&' to get this expression?</p>
<pre><code>filter = Attr("ProcessingStatus").eq("status1") & Attr("TransportType").eq("transport_type1")
</code></pre>
<p>So that I could pass it to table.scan like this:</p>
<pre><code>self.table.scan(FilterExpression=filter)
</code></pre>
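<p>For reference, a minimal sketch of the fold this calls for: since each condition supports the <code>&amp;</code> operator, <code>functools.reduce</code> can chain an arbitrary number of them:</p>
<pre><code>from functools import reduce
from operator import and_

filter_expression = reduce(and_, list_of_expressions)
self.table.scan(FilterExpression=filter_expression)
</code></pre>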
|
<python><boto3>
|
2022-12-30 08:01:22
| 2
| 359
|
RamiR
|
74,959,245
| 9,915,864
|
python beautifulsoup - parsing an HTML table row
|
<p>I'm using BeautifulSoup to parse a bunch of combined tables' rows, row by row, column by column in order to import it into Pandas. I can't use <code>to_html()</code> because one of the columns has a list of tag links in each cell. The data structure is the same in all the tables.</p>
<p>I can't figure out the correct method to skip a <code>td.div</code> tag containing the attribute <code>{'class': ['stars']}</code>. My following code works, but it doesn't seem correct. I can't just do an <code>if col.div: continue</code> because some of the required columns have extra <code><div></code> tags I need later.</p>
<pre><code> def rebuild_row(self, row):
new_row = []
for col in row.find_all('td'):
if col.img:
continue
if col.div and 'star' in str(col.div.attrs):
continue
if col.a:
new_row.append(self.handle_links(col))
else:
if not col.text or not col.text.strip():
new_row.append(['NaN'])
else:
new_text = self.clean_tag_text(col)
new_row.append(new_text)
return new_row
</code></pre>
<p>I first tried <code>if 'stars' in col.div['class']:</code> but it choked with <code>KeyError: 'class'</code>. So then I tried to find the error:</p>
<pre><code> if col.div:
if not hasattr(col.div, 'class'):
continue
else:
print(f"{col.div['class']}")
</code></pre>
<p>but I get the output and error below, which I don't understand; shouldn't the <code>not hasattr()</code> check catch it?</p>
<pre><code>['stars']
['stars']
Exception has occurred: KeyError
'class'
</code></pre>
<p>HTML row example:</p>
<pre><code><tr id="groupBook3144889">
<td width="5%"><a href="/book/show/40851529-the-7th-of-victorica"><img alt="The 7th of Victorica by Beau Schemery" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1531785241l/40851529._SY75_.jpg" title="The 7th of Victorica by Beau Schemery"/></a></td>
<td width="30%">
<a href="/book/show/40851529-the-7th-of-victorica">The 7th of Victorica (Gadgets and Shadows, #2)</a>
</td>
<td width="10%">
<a href="/author/show/6594115.Beau_Schemery">Schemery, Beau</a>
<span title="Goodreads Author!">*</span>
</td>
<td width="1%">
<div class="stars" data-rating="0" data-resource-id="40851529" data-restore-rating="null" data-submit-url="/review/rate/40851529?stars_click=false" data-user-id="0"><a class="star off" href="#" ref="" title="did not like it">1 of 5 stars</a><a class="star off" href="#" ref="" title="it was ok">2 of 5 stars</a><a class="star off" href="#" ref="" title="liked it">3 of 5 stars</a><a class="star off" href="#" ref="" title="really liked it">4 of 5 stars</a><a class="star off" href="#" ref="" title="it was amazing">5 of 5 stars</a></div>
</td>
<td width="1%">
<a class="actionLinkLite" href="/group/bookshelf/64285?shelf=read">read</a>,
<a class="actionLinkLite" href="/group/bookshelf/64285?shelf=genre-action-adventure">genre-action-adve...</a>,
<a class="actionLinkLite" href="/group/bookshelf/64285?shelf=genre-steampunk-dieselpunk">genre-steampunk-d...</a>,
<a class="actionLinkLite" href="/group/bookshelf/64285?shelf=genre-young-adult">genre-young-adult</a>
</td>
<td width="1%">
</td>
<td width="1%">
</td>
<td width="1%">
<a href="/user/show/4872508-meghan"> Meghan</a>
</td>
<td width="1%">2022/12/25</td>
<td class="view" width="1%">
<a class="actionLink" href="/group/show_book/64285?group_book_id=3144889" style="white-space: nowrap">view activity »</a>
</td>
</tr>
</code></pre>
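<p>For reference, a minimal sketch of the check using dict-style access (Tag attributes are dict-like, so <code>.get()</code> avoids the KeyError when a div has no class; <code>hasattr</code> doesn't help because attribute access on a Tag searches for child tags, so it is effectively always true):</p>
<pre><code>for col in row.find_all('td'):
    if col.img:
        continue
    # .get('class', []) returns the class list, or [] when the div
    # carries no class attribute at all
    if col.div and 'stars' in col.div.get('class', []):
        continue
    # ... rest of the row handling ...
</code></pre>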
|
<python><beautifulsoup>
|
2022-12-30 07:43:45
| 1
| 341
|
Meghan M.
|
74,958,898
| 643,208
|
Google authentication issue: Authorized user info was not in the expected format, missing fields refresh_token
|
<p>I am trying to use the Python Google API, and encounter an issue when using Google's quickstart example from <a href="https://developers.google.com/people/quickstart/python#configure_the_sample" rel="nofollow noreferrer">https://developers.google.com/people/quickstart/python#configure_the_sample</a>, as-is. When I run this example the first time, it opens up the Web browser, allows me to authenticate, and it works perfectly fine. But then, if I run the script again, instead of authenticating, it tries to use the token that was saved in <code>token.json</code> and this fails with:</p>
<pre><code>Traceback (most recent call last):
File "/home/thomas/test.py", line 58, in <module>
main()
File "/home/thomas/test.py", line 24, in main
creds = Credentials.from_authorized_user_file('token.json', SCOPES)
File "/home/thomas/.local/lib/python3.10/site-packages/google/oauth2/credentials.py", line 440, in from_authorized_user_file
return cls.from_authorized_user_info(data, scopes)
File "/home/thomas/.local/lib/python3.10/site-packages/google/oauth2/credentials.py", line 390, in from_authorized_user_info
raise ValueError(
ValueError: Authorized user info was not in the expected format, missing fields refresh_token.
</code></pre>
<p>What am I missing?</p>
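<p>For reference, a sketch of the auth step, assuming the quickstart's <code>google_auth_oauthlib</code> flow: Google only returns a <code>refresh_token</code> on the first consent, or when offline access is explicitly requested, so forcing both should make the saved <code>token.json</code> parse on later runs. Untested:</p>
<pre><code>from google_auth_oauthlib.flow import InstalledAppFlow

flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
# extra keyword arguments are forwarded to the authorization URL
creds = flow.run_local_server(port=0, access_type='offline', prompt='consent')
with open('token.json', 'w') as f:
    f.write(creds.to_json())
</code></pre>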
|
<python><google-oauth>
|
2022-12-30 06:54:03
| 2
| 5,986
|
Thomas Petazzoni
|
74,958,699
| 5,132,860
|
How to create multilingual web pages with fastapi-babel?
|
<p>I'm thinking of creating a multilingual web page with <a href="https://github.com/Anbarryprojects/fastapi-babel" rel="nofollow noreferrer">fastapi-babel</a>.</p>
<p>I have configured according to the <a href="https://github.com/Anbarryprojects/fastapi-babel" rel="nofollow noreferrer">documentation</a>.
The translation from English to French was successful.
However, I created a .po file for another language, translated it, compiled it, but the translated text does not apply.</p>
<pre><code>from fastapi_babel import _
from fastapi_babel.middleware import InternationalizationMiddleware as I18nMiddleware
from fastapi_babel import Babel
from fastapi_babel import BabelConfigs
configs = BabelConfigs(
ROOT_DIR=__file__,
BABEL_DEFAULT_LOCALE="en",
BABEL_TRANSLATION_DIRECTORY="lang",
)
logger.info(f"configs: {configs.__dict__}")
babel = Babel(configs)
babel.install_jinja(templates)
app.add_middleware(I18nMiddleware, babel=babel)
@app.get("/items/{id}", response_class=HTMLResponse)
async def read_item(request: Request, id: str):
babel.locale = "en"
logger.info(_("Hello World"))
babel. locale = "fa"
logger.info(_("Hello World"))
babel.locale = "ja"
logger.info(_("Hello World"))
return templates.TemplateResponse('item.html', {'request': request, 'id': id})
</code></pre>
<p>Above, the result will be:</p>
<pre><code>INFO: Hello World
INFO: Bonjour le monde
INFO: Hello World
</code></pre>
<p>How can the translation be applied to languages other than French?</p>
<p><a href="https://i.sstatic.net/QJDkC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QJDkC.png" alt="enter image description here" /></a></p>
|
<python><fastapi><gettext>
|
2022-12-30 06:18:22
| 1
| 3,104
|
Nori
|
74,958,587
| 19,475,994
|
Copyable scrolling text in tkinter
|
<p>I wanted to create a scrollable, copyable text in tkinter. This text should be immutable <em>to the user</em>, but I can change it.</p>
<p>Although this seems simple, there are several ways to approach it, but each approach has something unsatisfactory about it.</p>
<ol>
<li>Use a <code>state=disabled</code> <code>Text</code> widget.</li>
</ol>
<p>However, this also prevents <em>my program</em> from changing it, which means I have to temporarily enable the widget, which can result in the user adding a character or two if they spam a key in the textbox. (Yes, it does matter; I want it to be <em>absolutely immutable</em>.)</p>
<ol start="2">
<li>Use create_text on a <code>Canvas</code>, which is scrollable.</li>
</ol>
<p>However, AFAIK, I cannot copy such text.</p>
<ol start="3">
<li><code>pack</code> Labels into a <code>Canvas</code>, which is scrollable.</li>
</ol>
<p>However, AFAIK, they don't scroll with the canvas.</p>
<p>One thing that could possibly work is <code>Canvas.create_window</code>, but I can't even find documentation for it, and the <code>help</code> text says nothing useful (see the sketch after the help output below).</p>
<pre><code>help> tkinter.Canvas.create_window
Help on function create_window in tkinter.Canvas:
tkinter.Canvas.create_window = create_window(self, *args, **kw)
Create window with coordinates x1,y1,x2,y2.
help>
</code></pre>
<p>[sic]</p>
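<p>A minimal sketch of <code>create_window</code>, in case it helps frame the question: it embeds a real widget at canvas coordinates, so the widget scrolls with the canvas (though a plain <code>Label</code> placed this way still isn't selectable for copying):</p>
<pre><code>import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root)
scrollbar = tk.Scrollbar(root, command=canvas.yview)
canvas.configure(yscrollcommand=scrollbar.set)
scrollbar.pack(side='right', fill='y')
canvas.pack(side='left', fill='both', expand=True)

# Embed a widget at (0, 0); it moves together with the canvas view
inner = tk.Label(canvas, text='line\n' * 100, justify='left')
canvas.create_window(0, 0, window=inner, anchor='nw')

inner.update_idletasks()
canvas.configure(scrollregion=canvas.bbox('all'))
root.mainloop()
</code></pre>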
|
<python><python-3.x><tkinter><scroll><tkinter-canvas>
|
2022-12-30 05:55:05
| 3
| 303
|
00001H
|
74,958,256
| 17,696,880
|
How do I put a parameter to a function whose return is inside a re.sub()?
|
<pre class="lang-py prettyprint-override"><code>import re
def one_day_or_another_day_relative_to_a_date_func(m, type_of_date_unit_used):
input_text = m.group()
print(repr(input_text))
if (type_of_date_unit_used == "year"): a = "YYYY-mm-dd"
elif (type_of_date_unit_used == "month"): a = "yyyy-MM-dd"
elif (type_of_date_unit_used == "day"): a = "yyyy-mm-DD"
return a
def identify_one_day_or_another_day_relative_to_a_date(input_text, type_of_date_unit_used = "day"):
date_capture_pattern = r"([123456789]\d*-[01]\d-[0-3]\d)(\D*?)"
input_text = re.sub(date_capture_pattern, one_day_or_another_day_relative_to_a_date_func, input_text, re.IGNORECASE) #This is the line
return input_text
input_text = "En 2012-06-12 empezo y termino en algun dia del 2023"
print(identify_one_day_or_another_day_relative_to_a_date(input_text, "month"))
</code></pre>
<p>I need to pass the parameter <code>"month"</code> when calling the <code>one_day_or_another_day_relative_to_a_date_func</code> function in the <code>re.sub()</code></p>
<p>The output that I need with this parameter</p>
<pre><code>"En yyyy-MM-dd empezo y termino en algun dia del 2023"
</code></pre>
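<p>A sketch of one way to bind the extra argument, using <code>functools.partial</code> so the callback still receives the match object first. Note that <code>re.sub</code>'s fourth positional parameter is <code>count</code>, not <code>flags</code>, so <code>re.IGNORECASE</code> should be passed by keyword:</p>
<pre><code>import re
from functools import partial

input_text = re.sub(
    date_capture_pattern,
    partial(one_day_or_another_day_relative_to_a_date_func,
            type_of_date_unit_used=type_of_date_unit_used),
    input_text,
    flags=re.IGNORECASE,
)
</code></pre>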
|
<python><python-3.x><regex><parameters><regex-group>
|
2022-12-30 04:57:27
| 1
| 875
|
Matt095
|
74,958,225
| 9,510,800
|
how to apply shift to a group when there is a change in value pandas?
|
<p>I am currently having a dataset as below:</p>
<pre><code>id name date_and_hour
1 SS 1/1/2019 00:12
1 SS 1/1/2019 00:13
1 SS 1/1/2019 00:14
1 SB 1/1/2019 00:15
1 SS 1/1/2019 00:16
2 SE 1/1/2019 01:15
2 SR 1/1/2019 01:16
2 SS 1/1/2019 01:17
2 SR 1/1/2019 01:18
</code></pre>
<p>I want the next name with the changed value only based on group ID. Output looks as below</p>
<pre><code>id name date_and_hour next_name
1 SS 1/1/2019 00:12 SB
1 SS 1/1/2019 00:13 SB
1 SS 1/1/2019 00:14 SB
1 SB 1/1/2019 00:15 SS
1 SS 1/1/2019 00:16 null
2 SE 1/1/2019 01:15 SR
2 SR 1/1/2019 01:16 SS
2 SS 1/1/2019 01:17 SR
2 SR 1/1/2019 01:18 null
</code></pre>
<p>Please advise.</p>
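<p>A sketch of one possible approach, assuming the frame is called <code>df</code>: take each row's next name within its group, keep it only where it differs from the current name, then backfill within the group.</p>
<pre><code>nxt = df.groupby('id')['name'].shift(-1)
df['next_name'] = (
    nxt.where(nxt != df['name'])   # keep only actual changes
       .groupby(df['id'])
       .bfill()                    # propagate each change backwards
)
</code></pre>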
|
<python><pandas><numpy>
|
2022-12-30 04:50:13
| 3
| 874
|
python_interest
|
74,958,039
| 1,769,197
|
Pandas: astype datetime minutes failing
|
<p>I am using pandas 1.5.2.</p>
<p>I have a column of timestamp stored in a pandas dataframe.</p>
<p><code>df['Timestamp'].astype('datetime64[Y]')</code></p>
<pre><code>0 2017-07-11 09:33:14.819
1 2017-07-11 09:33:14.819
2 2017-07-11 09:33:14.819
3 2017-07-11 09:33:14.819
4 2017-07-11 09:33:14.820
5 2017-07-11 16:20:52.463
6 2017-07-11 16:20:52.463
7 2017-07-11 16:20:52.463
8 2017-07-11 16:20:52.464
9 2017-07-11 16:20:54.984
</code></pre>
<p>I used to be able to convert the timestamp to '%Y-%m-%d %H:%M' with <code>df['Timestamp'].astype('datetime64[m]')</code>, which produced the output below. But now this call no longer truncates and simply returns the original column.</p>
<pre><code>0 2017-07-11 09:33
1 2017-07-11 09:33
2 2017-07-11 09:33
3 2017-07-11 09:33
4 2017-07-11 09:33
5 2017-07-11 16:20
6 2017-07-11 16:20
7 2017-07-11 16:20
8 2017-07-11 16:20
9 2017-07-11 16:20
</code></pre>
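<p>A sketch of two current alternatives for minute resolution, assuming either truncated datetimes or formatted strings are acceptable:</p>
<pre><code>df['Timestamp'].dt.floor('min')                # datetime64[ns] with seconds zeroed
df['Timestamp'].dt.strftime('%Y-%m-%d %H:%M')  # plain strings in the old format
</code></pre>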
|
<python><pandas><datetime>
|
2022-12-30 04:09:47
| 1
| 2,253
|
user1769197
|
74,957,780
| 2,882,380
|
Sum of values from related rows in Python dataframe
|
<p>I have a data frame where some rows have one ID and one related ID. In the example below, <code>a1</code> and <code>a2</code> are related (say to the same person) while <code>b</code> and <code>c</code> don't have any related rows.</p>
<pre><code>import pandas as pd
test = pd.DataFrame(
[['a1', 1, 'a2'],
['a1', 2, 'a2'],
['a1', 3, 'a2'],
['a2', 4, 'a1'],
['a2', 5, 'a1'],
['b', 6, ],
['c', 7, ]],
columns=['ID1', 'Value', 'ID2']
)
test
ID1 Value ID2
0 a1 1 a2
1 a1 2 a2
2 a1 3 a2
3 a2 4 a1
4 a2 5 a1
5 b 6 None
6 c 7 None
</code></pre>
<p>What I need to achieve is to add a column containing the sum of all values for related rows. In this case, the desired output should be like below. Is there a way to get this, please?</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID1</th>
<th>Value</th>
<th>ID2</th>
<th>Group by ID1 and ID2</th>
</tr>
</thead>
<tbody>
<tr>
<td>a1</td>
<td>1</td>
<td>a2</td>
<td>15</td>
</tr>
<tr>
<td>a1</td>
<td>2</td>
<td>a2</td>
<td>15</td>
</tr>
<tr>
<td>a1</td>
<td>3</td>
<td>a2</td>
<td>15</td>
</tr>
<tr>
<td>a2</td>
<td>4</td>
<td>a1</td>
<td>15</td>
</tr>
<tr>
<td>a2</td>
<td>5</td>
<td>a1</td>
<td>15</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
<td></td>
<td>6</td>
</tr>
<tr>
<td>c</td>
<td>7</td>
<td></td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>Note that I learnt to use <code>group by</code> to get sum for <code>ID1</code> (from <a href="https://stackoverflow.com/questions/74956866/value-not-updated-in-for-loop-python/74957091#74957091">this question</a>); but not for 'ID1' and 'ID2' together.</p>
<pre><code>test['Group by ID1'] = test.groupby("ID1")["Value"].transform("sum")
test
ID1 Value ID2 Group by ID1
0 a1 1 a2 6
1 a1 2 a2 6
2 a1 3 a2 6
3 a2 4 a1 9
4 a2 5 a1 9
5 b 6 None 6
6 c 7 None 7
</code></pre>
<p><strong>Update</strong></p>
<p>I think I can still use a <code>for</code> loop to get this done, as below, but I am wondering if there is a non-loop way. Thanks.</p>
<pre><code>bottle = pd.DataFrame().reindex_like(test)
bottle['ID1'] = test['ID1']
bottle['ID2'] = test['ID2']
for index, row in bottle.iterrows():
bottle.loc[index, "Value"] = test[test['ID1'] == row['ID1']]['Value'].sum() + \
test[test['ID1'] == row['ID2']]['Value'].sum()
print(bottle)
ID1 Value ID2
0 a1 15.0 a2
1 a1 15.0 a2
2 a1 15.0 a2
3 a2 15.0 a1
4 a2 15.0 a1
5 b 6.0 None
6 c 7.0 None
</code></pre>
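<p>A sketch of a loop-free idea: compute the per-ID sums once, then map them onto both ID columns and add (a missing <code>ID2</code> contributes 0).</p>
<pre><code>sums = test.groupby('ID1')['Value'].sum()
test['Group by ID1 and ID2'] = (
    test['ID1'].map(sums) + test['ID2'].map(sums).fillna(0)
)
</code></pre>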
|
<python><pandas><dataframe>
|
2022-12-30 03:00:39
| 1
| 1,231
|
LaTeXFan
|
74,957,756
| 6,498,757
|
Jupyter notebook code cell to code block convert
|
<p>I have a notebook with executable code cells, and I want to batch-convert them into markdown code blocks. How can I do that with an <code>.ipynb</code> file?</p>
<p>I assumed I would need to write some complex regex to do the conversion.</p>
<p>Suppose I know the code cell has type <code>scheme</code></p>
<p>Before:</p>
<p><a href="https://i.sstatic.net/2xBJQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2xBJQ.png" alt="enter image description here" /></a></p>
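<p>A sketch of a regex-free route, assuming <code>nbformat</code> is available: an <code>.ipynb</code> file is JSON, so it can be read, each code cell rewritten as a fenced markdown cell, and saved back.</p>
<pre><code>import nbformat

nb = nbformat.read('my_notebook.ipynb', as_version=4)
for i, cell in enumerate(nb.cells):
    if cell.cell_type == 'code':
        nb.cells[i] = nbformat.v4.new_markdown_cell(
            '```scheme\n' + cell.source + '\n```'
        )
nbformat.write(nb, 'my_notebook_converted.ipynb')
</code></pre>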
|
<python><json><jupyter-notebook>
|
2022-12-30 02:54:22
| 1
| 351
|
Yiffany
|
74,957,673
| 5,032,387
|
Extract site id and value from dropdown on webpage
|
<p>I'm trying to get the names that appear in the dropdown and the associated value (as it appears in the html code).</p>
<p>Here's what I have so far. The result is empty so I know I'm not searching for the section properly.</p>
<pre><code>from bs4 import BeautifulSoup
from requests import get
url = 'http://clima.edu.ar/InformePorPeriodo.aspx'
page = get(url)
soup = BeautifulSoup(page.text)
test = soup.find_all('p', {'class': 'CaptionForm'})
</code></pre>
<p>The correct section starts here:
<a href="https://i.sstatic.net/lddQ5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lddQ5.png" alt="enter image description here" /></a></p>
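<p>A sketch of what I imagine the lookup should be, assuming the dropdown is a standard <code>&lt;select&gt;</code> element (the name <code>IdEstacion</code> here is a guess; the real name should be checked in the page source):</p>
<pre><code>select = soup.find('select', {'name': 'IdEstacion'})
if select is not None:
    for opt in select.find_all('option'):
        print(opt.get('value'), opt.get_text(strip=True))
</code></pre>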
|
<python><beautifulsoup>
|
2022-12-30 02:31:01
| 1
| 3,080
|
matsuo_basho
|
74,957,524
| 2,474,581
|
Tkinter: 2 buttons next to each other resize in width with window. How to?
|
<p>The two buttons should each take half of the window, one on the left, one on the right. The height is fixed at all times. With neither <code>.grid()</code> nor <code>.place()</code> could I achieve that result. The red bar is the color of the frame the buttons are placed on. The buttons should resize in width with the window, but keep their constant height.</p>
<p>How to?</p>
<p><a href="https://i.sstatic.net/ER9Gs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ER9Gs.png" alt="enter image description here" /></a></p>
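<p>A minimal sketch of the intended layout, in case it clarifies the question: both grid columns get equal weight so the buttons share the width 50/50 and stretch with the window, while their height stays fixed.</p>
<pre><code>import tkinter as tk

root = tk.Tk()
frame = tk.Frame(root, bg='red')
frame.pack(fill='both', expand=True)
frame.columnconfigure(0, weight=1, uniform='half')
frame.columnconfigure(1, weight=1, uniform='half')
tk.Button(frame, text='Left').grid(row=0, column=0, sticky='ew')
tk.Button(frame, text='Right').grid(row=0, column=1, sticky='ew')
root.mainloop()
</code></pre>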
|
<python><tkinter><tkinter-button>
|
2022-12-30 01:57:19
| 3
| 1,346
|
Kustekjé Meklootn
|
74,957,429
| 3,277,133
|
How to chunk dataframe into list by sum of value in column not exceeding X?
|
<p>I have a df that looks like this with many rows:</p>
<pre><code>col1 count
a 80
b 100
c 20
</code></pre>
<p>I need to chunk this dataframe so that the sum of the count column does not exceed 100 per chunk. The chunking code should create chunks of the df in a list like so, where each chunk is determined by the sum of the count column not exceeding 100. In my case the index does not matter as long as the column values are the same:</p>
<pre><code>lst_df = [chunk1, chunk2]
Chunk1 =
col1 count
a 80
c 20
Chunk2 =
col1 count
b 100
</code></pre>
<p>I can chunk by row count, but I am not sure how to do it by the sum of values in a column. (A possible approach is sketched after the snippet below.)</p>
<pre><code>n = 25 #chunk row size
list_df = [df[i:i+n] for i in range(0,df.shape[0],n)]
</code></pre>
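<p>A sketch of a first-fit packing approach; since the example output pairs non-adjacent rows (<code>a</code> with <code>c</code>), a plain cumulative-sum split would not be enough:</p>
<pre><code>def chunk_by_sum(df, limit=100):
    bins = []  # each bin: [remaining_capacity, list_of_row_labels]
    for idx, cnt in df['count'].items():
        for b in bins:
            if cnt <= b[0]:          # fits in an existing chunk
                b[0] -= cnt
                b[1].append(idx)
                break
        else:                        # no chunk has room; open a new one
            bins.append([limit - cnt, [idx]])
    return [df.loc[rows] for _, rows in bins]

lst_df = chunk_by_sum(df)
</code></pre>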
|
<python><pandas>
|
2022-12-30 01:32:05
| 1
| 3,707
|
RustyShackleford
|
74,957,360
| 8,528,141
|
Is current time zone detected by default if `USE_TZ=True` in Django?
|
<p>It is mentioned in docs that the current time zone should be activated.</p>
<blockquote>
<p>You should set the current time zone to the end user’s actual time zone with <code>activate()</code>. Otherwise, the default time zone is used</p>
</blockquote>
<p>However, I see that the values are parsed in my local timezone in the forms correctly without activating it from my side, and <code>TIME_ZONE</code> is <code>UTC</code> which is different from my local timezone.</p>
|
<python><django><timezone><utc><django-timezone>
|
2022-12-30 01:11:19
| 1
| 1,480
|
Yasser Mohsen
|
74,957,311
| 1,229,531
|
conda error on install for RAPIDS fails due to incompatible glib
|
<p>OS: Linux 4.18.0-193.28.1.el8_2.x86_64
anaconda: anaconda3/2022.10</p>
<p>Trying to install RAPIDS, I get:</p>
<pre><code>$ conda install -c rapidsai rapids
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.28=0
- feature:|@/linux-64::__glibc==2.28=0
- rapids -> cucim=22.12 -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
Your installed version is: 2.28
$
</code></pre>
<p>As has been asked by others (but, as far as I can tell, not answered), why is "__glibc" version 2.28 not between 2.17 & 3.0?</p>
|
<python><conda><rapids>
|
2022-12-30 01:00:09
| 2
| 599
|
Mark Bower
|
74,957,250
| 3,591,044
|
Extracting part of string according to tags
|
<p>I have a big string and would like to extract parts of the string. You can see an example below. I would extract the text corresponding to the <code>text</code> tags. So we have three text tags, namely <code>"text": "{{{VAR_2}}}"</code>, <code>"text": "Tell me. How old are you?"</code>, <code>"text": "{{{VAR_0}}}"</code>.</p>
<p>This should result in the following list of strings: <code>["What is your age?", "Tell me. How old are you?", "How old are you?"]</code>. The number of text tags varies. If no text tag is present, it should result in None.</p>
<p>How can this be solved in an elegant way?</p>
<pre><code><VAR_0>
How old are you?
</VAR_0>
<VAR_2>
What is your age?
</VAR_2>
(set:$ageask to 'true')
(set:$storybeat_01 to 1)
(set:$inputAge to -1)
<audio stopall>
<audio src="sound.wav" autoplay loop volume="100%">
<onSuccess($user_age==0,$inputAge:=age)>[[age selector]]
<onSuccess(5>age OR 100<age,$inputAge:=age)>[[impossible age]]
<onSuccess(90<age,$inputAge:=age)>[[very old]]
<interaction type="AgeNumberQuestionInteraction">
{
"BaseClassConfig QuestionInteractionInterface": {
"QuestionInteractionInterfaceConfig": {
"maxQuestionRepetitionCount": 1
},
"QuestionSpeakingInteraction": {
"SpeakingInteractionConfig": {
"speech": [
{
"emotion": "HAPPY",
"text": "{{{VAR_2}}}"
},
{
"emotion": "HAPPY",
"text": "Tell me. How old are you?"
},
{
"emotion": "HAPPY",
"text": "{{{VAR_0}}}"
}
]
}
}
}
}
</interaction>
</code></pre>
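<p>A sketch of a two-regex idea: collect the <code>&lt;VAR_n&gt;</code> definitions first, then substitute them into whatever the <code>"text"</code> entries reference.</p>
<pre><code>import re

def extract_texts(raw):
    variables = dict(re.findall(r'<(VAR_\d+)>\s*(.*?)\s*</\1>', raw, re.S))
    texts = re.findall(r'"text":\s*"([^"]*)"', raw)
    if not texts:
        return None
    # '{{{VAR_2}}}'.strip('{}') -> 'VAR_2'; plain texts pass through
    return [variables.get(t.strip('{}'), t) for t in texts]
</code></pre>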
|
<python><python-3.x><string><split>
|
2022-12-30 00:46:49
| 2
| 891
|
BlackHawk
|
74,957,244
| 7,475,303
|
Boto3 S3 Paginator not returning filtered results
|
<p>I'm trying to list the items in my S3 bucket from the last few months. I'm able to get results from the normal paginator (<code>page_iterator</code>), but the <code>filtered_iterator</code> isn't yielding anything when I iterate over it.</p>
<p>I've been referencing the documentation <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html" rel="nofollow noreferrer">here</a>. I've double checked my filter string both using <a href="https://jmespath.org" rel="nofollow noreferrer">JmesPath site</a> and the AWS CLI, and it works in both places. I'm at a loss at this point on what I need to do.</p>
<p>Current Code:</p>
<pre class="lang-py prettyprint-override"><code>client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects_v2')
operation_parameters = {'Bucket': self.bucket_name,
'Prefix': file_path_prefix}
page_iterator = paginator.paginate(**operation_parameters)
filtered_iterator = page_iterator.search("Contents[?LastModified>='2022-10-31'][]")
for key_data in filtered_iterator:
print('page2', key_data)
</code></pre>
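<p>For reference, a sketch of the workaround I have seen suggested: boto3 parses <code>LastModified</code> into a <code>datetime</code>, so the JMESPath comparison needs <code>to_string()</code> and a JSON-quoted date (note the embedded double quotes).</p>
<pre><code>filtered_iterator = page_iterator.search(
    "Contents[?to_string(LastModified) >= '\"2022-10-31\"'][]"
)
</code></pre>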
|
<python><amazon-web-services><amazon-s3><boto3>
|
2022-12-30 00:45:27
| 0
| 453
|
Logan Waite
|
74,957,143
| 3,520,363
|
Bad output with python html parsing
|
<p>I have an html file in C:\temp.
I want to extract this text</p>
<pre><code>Death - Individual Thought Patterns (1993), Progressive Death Metal
http://xxxxxxx.bb/1196198
http://yyyyyyyyyyyy.com/files/153576607/d-xxx_xxx_xxx-xxxxx-xxxxx.rar
Alfadog (1994), Black Metal
</code></pre>
<p>from this block of code</p>
<pre><code><td width='99%' style='word-wrap:break-word;'><div><img src='style_images/1/nav_m.gif' border='0' alt='&gt;' width='8' height='8' />&nbsp;<b>Death - Individual Thought Patterns (1993)</b>, Progressive Death Metal</div></td>
<!--HideBegin--><div class='hidetop'>Hidden text</div><div class='hidemain'><!--HideEBegin--><!--coloro:#FF0000--><span style="color:#FF0000"><!--/coloro--><b>Download:</b><!--colorc--></span><!--/colorc--><br />Download from <a href="http://xxxxxxx.bb/1196198" target="_blank">ifolder.ru <i>*Death - Individual Thought Patterns (1993)* <b>by Dissident God</b></i></a><br /><!--HideEnd--></div><!--HideEEnd--><br /><!--HideBegin--><div class='hidetop'>Hidden text</div><div class='hidemain'><!--HideEBegin--><!--coloro:#ff0000--><span style="color:#ff0000"><!--/coloro--><b>Download (mp3@VBR230kbps) (67 MB):</b><!--colorc--></span><!--/colorc--><br />Download from <a href="http://yyyyyyyyyyyy.com/files/153576607/d-xxx_xxx_xxx-xxxxx-xxxxx.rar" target="_blank">rapidshare.com <i>*Death - Individual Thought Patterns (Remastered) (2008)* <b>by smashter</b></i></a><!--HideEnd--></div><!--HideEEnd-->
<td width='99%' style='word-wrap:break-word;'><div><img src='style_images/1/nav_m.gif' border='0' alt='&gt;' width='8' height='8' />&nbsp;<b>Alfadog (1994)</b>, Black Metal</div></td>
</code></pre>
<p>The extracted text must be saved in a file called links.txt</p>
<p>Despite my changes, my script only ever extracts this text:</p>
<pre><code>http://xxxxxxx.bb/1196198
http://yyyyyyyyyyyy.com/files/153576607/d-xxx_xxx_xxx-xxxxx-xxxxx.rar
</code></pre>
<p>But I want it to extract the text like this:</p>
<pre><code>Death - Individual Thought Patterns (1993), Progressive Death Metal
http://xxxxxxx.bb/1196198
http://yyyyyyyyyyyy.com/files/153576607/d-xxx_xxx_xxx-xxxxx-xxxxx.rar
Alfadog (1994), Black Metal
</code></pre>
<p>This is the script</p>
<pre><code>import requests
from bs4 import BeautifulSoup
# Open the HTML file in read mode
with open("C:/temp/pagina.html", "r") as f:
html = f.read()
# Create a Beautiful Soup object from HTML code
soup = BeautifulSoup(html, "html.parser")
# Initialize a list to contain the extracted text
extracted_text = []
# Find all td's with style "word-wrap:break-word"
tds = soup.find_all("td", style="word-wrap:break-word")
# For each td found, look for the div tag and the b tag inside
# and extract the text contained in these tags
for td in tds:
div_tag = td.find("div")
b_tag = div_tag.find("b")
if b_tag:
text = b_tag.text
# Also add the text after the b tag
text += td.text[td.text.index(b_tag.text) + len(b_tag.text):]
extracted_text.append(text)
# Find all divs with class "hidemain"
divs = soup.find_all("div", class_="hidemain")
# For each div found, look for the a tag inside
# and extract the link text contained in this tag
for div in divs:
a_tag = div.find("a")
if a_tag:
link = a_tag.get("href")
extracted_text.append(link)
# Save the extracted text to a text file
with open("links.txt", "w") as f:
for line in extracted_text:
f.write(line + "\n")
</code></pre>
<p>I can't understand why it doesn't return the text I'm asking for.</p>
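<p>A sketch of what I suspect is going wrong and how it might be fixed: the tds carry <code>style='word-wrap:break-word;'</code> (with a trailing semicolon), so an exact <code>style=</code> match finds nothing; a callable filter matches by substring, and walking both tag kinds in a single <code>find_all</code> keeps titles and links in document order.</p>
<pre><code>def wanted(tag):
    if tag.name == 'td' and 'word-wrap:break-word' in tag.get('style', ''):
        return True
    return tag.name == 'div' and 'hidemain' in tag.get('class', [])

extracted_text = []
for tag in soup.find_all(wanted):   # find_all yields tags in document order
    if tag.name == 'td':
        extracted_text.append(tag.get_text(strip=True))
    else:
        a_tag = tag.find('a')
        if a_tag:
            extracted_text.append(a_tag.get('href'))
</code></pre>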
|
<python>
|
2022-12-30 00:20:15
| 1
| 380
|
user3520363
|
74,957,083
| 8,272,788
|
Outgoing messages are being dropped for client x
|
<p>I have a client (ESP32) running umqtt.simple, and a broker (Pi) running Paho.MQTT. The Broker sends messages to the client. The client simply subscribes - and calls some functions as part of <code>client.call_back()</code>.</p>
<p>I am using <code>client.ping()</code> to keep the client connection alive.</p>
<p>The client works, but I notice that after some time the client stops receiving messages. Looking at <code>/var/log/mosquitto/mosquitto.log</code> on the broker I see the message <code>Outgoing messages are being dropped for client [CLIENT NAME]</code>.</p>
<p>There is no disconnect shown.</p>
<p>I am trying to work out why outgoing messages are being dropped. The only question I can see describing this is here: <a href="https://stackoverflow.com/questions/49130641/mosquitto-outgoing-messages-are-being-dropped">Mosquitto: Outgoing messages are being dropped</a>. They suggest the solution is to set <code>max_queued_messages 0</code> in <code>mosquitto.conf</code>. The explanation for this is that this parameter controls the 'maximum number of QoS 1 or 2 messages to hold in the queue (per client)'.</p>
<p>The issue is I am publishing my messages from the broker with the default <code>qos = 0</code>, which is 'fire and forget'. Nothing should be being held.</p>
<p>Here is a simplified account of my code on the broker - which is dropping messages for the client.</p>
<pre><code>import paho.mqtt.client as paho
BROKER = [broker_ip]
uname = [uname]
pwd = [password]
def on_pub(client, userdata, result):
print('Published', result)
pass
client = paho.Client('Pi')
client.on_publish = on_pub
client.username_pw_set(password = pwd, username = uname)
while True:
[when some condition True]: #This condition isn't met very repeatedly very fast. Its a motion sensor that only sends a message if there hasn't been motion for a while, or if the motion reset counter has expired
client.publish(topic = [topic], payload = [payload])
</code></pre>
<p>A simplified account of the micropython code on the ESP is as follows:</p>
<pre><code>from umqtt.simple import MQTTClient
import time
[Call function to connect to the WiFi function]
BROKER_IP = [broker_IP]
CLIENT_NAME = [client_name]
USER = [user]
PASSWORD = [password]
TOPIC = [topic]
x = [some_list]
def sub_cb(topic, msg):
global x
[clean the message string so its a list]
[pass the cleaned list and x to a function]
[reassign some variables so that the cleaned list becomes x for the time the message is recieved]
def connect_and_subscribe():
global CLIENT_NAME, BROKER_IP, USER, PASSWORD, TOPIC
client = MQTTClient(client_id=CLIENT_NAME,
server=BROKER_IP,
user=USER,
password=PASSWORD,
keepalive=60)
client.set_callback(sub_cb)
client.connect()
client.subscribe(TOPIC)
print('Connected to MQTT broker at: %s, subscribed to topic: %s' % (BROKER_IP, TOPIC))
return(client)
client = connect_and_subscribe()
now = time.time()
while True:
    try: #In case the broker has had to reboot and check_msg fails, wait 60 secs and try again
client.check_msg()
except:
time.sleep(60)
client = connect_and_subscribe()
time.sleep(0.1)
    if time.time() - now > 80: #Ping every 80 secs to keep the client alive (the broker drops the client after 90 secs = 1.5 * keepalive)
client.ping()
now = time.time()
</code></pre>
<p>The other thing I should note is that the function called in <code>sub_cb</code> can take 5 secs to run. I saw on the golang MQTT page that if the callback function has a long run time, a separate goroutine should be written for it. I saw nothing of the sort in the Python documentation.</p>
<p>Why am I getting this error and what can I do to get rid of it? I don't want to just set <code>max_queued_messages 0</code> as I assume this will just store whatever is being stored until the system runs out memory and fails silently - but perhaps I have this wrong.</p>
|
<python><mqtt><paho>
|
2022-12-30 00:05:32
| 0
| 1,261
|
MorrisseyJ
|
74,957,025
| 866,082
|
Can I sort the items in an Apache beam PCollection using python?
|
<p>Can I sort the items in an Apache beam PCollection using python?</p>
<p>I need to perform an operation (transformation) that relies on the items being sorted. But so far, I cannot find any trace of a "sorting" mechanism in Apache Beam.</p>
<p>My use case is not for live streams. I understand that it is pointless to talk about sorting when the data is live and/or infinite. This is an operation on an offline dataset.</p>
<p>Is this possible?</p>
|
<python><sorting><apache-beam>
|
2022-12-29 23:51:47
| 1
| 17,161
|
Mehran
|
74,957,022
| 7,776,559
|
How to install Python 3.11 in Jupyter Notebook?
|
<p>I've installed Jupyter Notebook from <a href="https://www.anaconda.com/products/distribution" rel="nofollow noreferrer">https://www.anaconda.com/products/distribution</a></p>
<p>Unfortunately, its default Python version is 3.9, while the latest version is 3.11.</p>
<p>How can I install the latest version, or other versions, for Jupyter Notebook?</p>
<p>I'm using PC Windows 10.</p>
|
<python><installation><jupyter-notebook><anaconda><version>
|
2022-12-29 23:50:26
| 2
| 906
|
MichaelRSF
|
74,957,011
| 5,032,387
|
Correctly refer to a button to scrape aspx webpage using requests
|
<p>I'm trying to extract data from an aspx site. Trying to find the name of the submit button to include in the post request, but it doesn't appear when I inspect the page.</p>
<pre><code>import csv
import requests
form_data = {
'CalendarioDesde': '01/11/2022',
'CalendarioHasta': '28/12/2022',
'IdEstacion': 'Aeropuerto San Luis (REM)',
'': 'Consultar' # trying to get the button name
}
response = requests.post('http://clima.edu.ar/InformePorPeriodo.aspx', data=form_data)
reader = csv.reader(response.text.splitlines())
</code></pre>
<p>Based on the posts I've seen, it's standard to use Beautiful Soup for scraping aspx pages, but I wasn't able to find a tutorial on the complex parameters (i.e. VIEWSTATE). So I'm starting with this basic requests version.</p>
<p>If this isn't the correct method, please suggest a post / resource that I can use as a template to extract data.</p>
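<p>A sketch of the usual ASP.NET pattern, in case it is the right direction: GET the page first, copy every hidden input (<code>__VIEWSTATE</code>, <code>__EVENTVALIDATION</code>, ...) into the POST body, then add the visible form fields. The field names below are assumptions; the real input and button names should be taken from the page source.</p>
<pre><code>import requests
from bs4 import BeautifulSoup

url = 'http://clima.edu.ar/InformePorPeriodo.aspx'
with requests.Session() as s:
    soup = BeautifulSoup(s.get(url).text, 'html.parser')
    # carry over all hidden state fields the server expects back
    data = {i['name']: i.get('value', '')
            for i in soup.find_all('input') if i.get('name')}
    data.update({
        'CalendarioDesde': '01/11/2022',
        'CalendarioHasta': '28/12/2022',
    })
    response = s.post(url, data=data)
</code></pre>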
|
<python><web-scraping><beautifulsoup><python-requests>
|
2022-12-29 23:48:49
| 1
| 3,080
|
matsuo_basho
|
74,956,916
| 6,248,559
|
Sequentially cascading radiobutton forms with tkinter
|
<p>I'm working on creating a simple GUI for one of my codes, so that I can enter parameters without looking for the files or the lines they're defined. However, the definition is a sequential one. Up to now, I have this:</p>
<pre><code>import tkinter as tk
def initiation():
analysis_type = radiobutton_initial.get()
label_dummy = tk.Label(gui, bg="#6D7BB4", fg='white',text=" ",font=('Courier', 14)).grid(row=2)
if analysis_type==1:
label2 = tk.Label(gui, bg="#6D7BB4", fg='white',
text="Please select the group you want to analyze:",
font=('Courier', 14))
label2.grid(row=3, columnspan=2, sticky="w")
if analysis_type ==2:
label2 = tk.Label(gui, bg="#6D7BB4", fg='white',
text="Please select the submethod you want to apply:",
font=('Courier', 14))
label2.grid(row=3, columnspan=2, sticky="w")
return analysis_type
gui = tk.Tk()
gui.geometry("800x500")
gui.title("Analysis Parameters")
gui.configure(bg='#6D7BB4')
label = tk.Label(gui, bg="#6D7BB4", fg='white', text="Please select the analysis you want to conduct:", font=('Courier', 14))
label.grid(row=0, sticky="nw", columnspan=2)
# initial_text = tk.Text(gui, bg="#A0AEE1", height=3, font=("Arial", 16))
# initial_text.pack()
radiobutton_initial = tk.IntVar()
tk.Radiobutton(gui, text="Group Analysis\t\t", bg="#6D7BB4", fg='white', font=('Courier', 14), variable = radiobutton_initial, value=1, command=initiation).grid(row =1, column=0, sticky="w")
tk.Radiobutton(gui, text="Individual analysis", bg="#6D7BB4", fg='white', font=('Courier', 14), variable = radiobutton_initial, value = 2, command=initiation).grid(row=1, column=1, sticky="w")
gui.mainloop()
</code></pre>
<p>If one of the radiobuttons is selected, I want to ask other questions but the questions depend on the selected value. Then, given the new answer, another radiobutton set appears. It will be 5-6 sequential questions with some if-else conditions.</p>
<p>I also want to disable the option of changing previous answers.</p>
<p>I was not able to make <code>analysis_type</code> a global value, let alone build the cascading form. How should I proceed?</p>
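<p>A sketch of the locking idea, assuming references to the buttons are kept when they are created (<code>first_buttons</code> and <code>ask_next_question</code> are placeholders, not working code):</p>
<pre><code>def initiation():
    analysis_type = radiobutton_initial.get()
    for rb in first_buttons:            # freeze the answered question
        rb.config(state='disabled')
    ask_next_question(analysis_type)    # build the next radiobutton set
</code></pre>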
|
<python><tkinter>
|
2022-12-29 23:27:48
| 1
| 301
|
A Doe
|
74,956,877
| 1,358,829
|
Hydra config: using another config as an input to multiple keys within a config
|
<p>Suppose I have the following directory structure for my hydra configs:</p>
<pre><code>config
|_config.yml
operations
|_subconfig.yml
</code></pre>
<p><code>subconfig.yml</code> is</p>
<pre class="lang-yaml prettyprint-override"><code>param_1: foo
param_2: bar
</code></pre>
<p>and <code>config.yml</code> is:</p>
<pre class="lang-yaml prettyprint-override"><code>operations:
signals:
signal_a:
param_1: foo
param_2: bar
signal_b:
param_1: foo
param_2: bar
timestamps:
ts_1:
param_1: foo
param_2: bar
</code></pre>
<p>Instead of repeating <code>param_1</code> and <code>param_2</code> in my main config, I want to use subconfig to fill these parameters as a default. I know that I could do it if I used the defaults list and just replaced param_1 and param_2 at the top level of my main config. But I`m struggling to figure it out how to replace the values of subkeys within the main config by the values within the subconfig file. Is there any way to do this?</p>
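<p>For reference, a sketch of one direction I considered, using Hydra's defaults-list package overrides so each line loads <code>operations/subconfig.yml</code> into a different subtree (syntax per Hydra 1.1+; untested here):</p>
<pre class="lang-yaml prettyprint-override"><code># config.yml
defaults:
  - operations@operations.signals.signal_a: subconfig
  - operations@operations.signals.signal_b: subconfig
  - operations@operations.timestamps.ts_1: subconfig
  - _self_
</code></pre>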
|
<python><config><fb-hydra>
|
2022-12-29 23:20:27
| 1
| 1,232
|
Alb
|
74,956,866
| 2,882,380
|
Value not updated in for loop Python
|
<p>I am testing the following simple example (see comments in the coding below for background). I have two questions. Thanks.</p>
<ul>
<li>How come <code>b</code> in <code>bottle</code> is not updated even though the for loop did calculate the right value?</li>
<li>Is there an easier way to do this without using a for loop? I heard that using a loop can take a lot of time when the data is bigger than this simple example. (A possible approach is sketched after the code below.)</li>
</ul>
<blockquote>
<pre><code>test = pd.DataFrame(
[[1, 5],
[1, 8],
[1, 9],
[2, 1],
[3, 1],
[4, 1]],
columns=['a', 'b']
) # Original df
bottle = pd.DataFrame().reindex_like(test) # a blank df with the same shape
bottle['a'] = test['a'] # set 'a' in bottle to be the same in test
print(bottle)
a b
0 1 NaN
1 1 NaN
2 1 NaN
3 2 NaN
4 3 NaN
5 4 NaN
for index, row in bottle.iterrows():
row['b'] = test[test['a'] == row['a']]['b'].sum()
print(row['a'], row['b'])
1.0 22.0
1.0 22.0
1.0 22.0
2.0 1.0
3.0 1.0
4.0 1.0 # I can see for loop is doing what I need.
bottle
a b
0 1 NaN
1 1 NaN
2 1 NaN
3 2 NaN
4 3 NaN
5 4 NaN # However, 'b' in bottle is not updated by the for loop. Why? And how to fix that?
test['c'] = bottle['b'] # This is the end output I want to get, but not working due to the above. Also is there a way to achieve this without using for loop?
</code></pre>
</blockquote>
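<p>A sketch of the non-loop idea, for reference: <code>iterrows()</code> yields copies, so writing into <code>row</code> never modifies the frame itself, and a groupby transform avoids the loop entirely.</p>
<pre><code>test['c'] = test.groupby('a')['b'].transform('sum')
</code></pre>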
|
<python><pandas><dataframe>
|
2022-12-29 23:17:51
| 2
| 1,231
|
LaTeXFan
|
74,956,805
| 2,365,595
|
Spotify API curl example
|
<p>As I'm following some tutorials, I don't see curl examples (in the Authorization guide, for example); I think this is because the Spotify API guide changed. Is it possible to find the old guide?</p>
<p>Thank you</p>
|
<python><python-requests><spotify><webapi><spotipy>
|
2022-12-29 23:08:00
| 1
| 575
|
Aidoru
|
74,956,684
| 3,572,505
|
regex python single step, match in a text only brackets that contains more than two separated numbers
|
<p>This is to get combined references (e.g. [23,24,28-30]; this bracket contains four numbers) in scientific papers with Vancouver notation format. I need to match brackets with more than two numbers (not to be confused with digits). Minimal reproducible example:</p>
<p>Input raw text</p>
<pre><code>blabla [23,24] bleble [23,24,28-30] blibli [40,45-48] bloblo [113]
</code></pre>
<p>The regex I am looking for should yield only</p>
<pre><code>>>> ['[23,24,28-30]', '[40,45-48]']
</code></pre>
<p>My attempt was <code>r"\[[,\-(?:\d+)]{3,}\]"</code>, but it failed. I am looking for a single-step regex expression.</p>
<p>Many thanks for your experience.</p>
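<p>A sketch of the kind of pattern meant here: a number followed by at least two more separator-plus-number pairs (commas or hyphens) inside brackets.</p>
<pre><code>import re

text = "blabla [23,24] bleble [23,24,28-30] blibli [40,45-48] bloblo [113]"
print(re.findall(r"\[\d+(?:[,-]\d+){2,}\]", text))
# ['[23,24,28-30]', '[40,45-48]']
</code></pre>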
|
<python><regex><brackets>
|
2022-12-29 22:44:00
| 1
| 903
|
José Crespo Barrios
|
74,956,660
| 5,462,551
|
pytest-xdist - running parametrized fixtures for parametrized tests once without blocking
|
<p>Consider the following example (which is a simple template of my real issue, of course):</p>
<pre class="lang-py prettyprint-override"><code>import time
import pytest
@pytest.fixture(scope="session", params=[1, 2, 3, 4, 5])
def heavy_computation(request):
print(f"heavy_computation - param is {request.param}")
time.sleep(10)
return request.param
@pytest.mark.parametrize("param", ["A", "B", "C", "D", "E"])
def test_heavy_computation(param, heavy_computation):
print(f"running {param} with {heavy_computation}")
</code></pre>
<p>We have parametrized tests (with 5 parameters) dependent on parameterized fixtures (with 5 different parameters), giving a total of 25 tests.
As you can guess by its name, the fixture does some heavy computation that takes a while.</p>
<h4>TL;DR - how can I use pytest-xdist such that each worker will run one <code>heavy_computation</code> and its dependent tests right afterward (without separating this file into 5 files)?</h4>
<p>Now the full details:</p>
<p>In order to speed up the testing process, I'm using <a href="https://github.com/pytest-dev/pytest-xdist" rel="nofollow noreferrer">pytest-xdist</a>. A known issue of pytest-xdist, is that it does not support running fixtures once, which means, for example, that if we have 5 workers that will grab tests 1-A, 1-B, ..., 1-E (to be clear: x-y is a combination the fixture and test parameters), <strong>all</strong> 5 workers will run the heavy computation, which yields the same result - we don't want that.</p>
<p>In the official docs of the package, there's a <a href="https://pytest-xdist.readthedocs.io/en/latest/how-to.html#making-session-scoped-fixtures-execute-only-once" rel="nofollow noreferrer">proposed solution</a> that suggests using a file lock. The problem with this approach, as far as I understand, is that all tests waiting for a fixture to be ready will hang until the process that started first will finish, instead of waiting inside that process, leaving the other workers to start computing the rest of the fixtures.</p>
<p><strong>My goal</strong> is to gather a fixture and its dependent tests to run as a group inside a single worker, without blocking other workers. Is there an <strong>elegant</strong>* way to do this?</p>
<ul>
<li>yes, of course, one solution is separating this file into 5 files with hard-coded fixtures - I don't want to do that if there's a nicer solution.</li>
</ul>
<p>Hope that it all makes sense. Thanks!</p>
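<p>For reference, a sketch of the closest thing I have found, assuming pytest-xdist's <code>--dist loadgroup</code> mode (which sends all tests sharing an <code>xdist_group</code> mark to one worker) and lifting the fixture params into the test via indirect parametrization; untested:</p>
<pre class="lang-py prettyprint-override"><code>import time

import pytest


@pytest.fixture(scope="session")
def heavy_computation(request):
    print(f"heavy_computation - param is {request.param}")
    time.sleep(10)
    return request.param


# One xdist group per fixture parameter, so a single worker computes
# each fixture once and runs all of its dependent tests itself.
@pytest.mark.parametrize(
    "heavy_computation",
    [pytest.param(i, marks=pytest.mark.xdist_group(f"heavy-{i}"))
     for i in [1, 2, 3, 4, 5]],
    indirect=True,
)
@pytest.mark.parametrize("param", ["A", "B", "C", "D", "E"])
def test_heavy_computation(param, heavy_computation):
    print(f"running {param} with {heavy_computation}")
</code></pre>
<p>Run with <code>pytest -n 5 --dist loadgroup</code>.</p>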
|
<python><testing><pytest><pytest-xdist>
|
2022-12-29 22:40:25
| 1
| 4,161
|
noamgot
|
74,956,616
| 14,403,635
|
Python Selenium: How to select dropdown menu
|
<p><a href="https://i.sstatic.net/tiPH4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tiPH4.jpg" alt="enter image description here" /></a>I need to select the item "Property owner statement" in a dropdown menu. The screenshot shows there is no unique name I can select on the ul class, so the code doesn't work.</p>
<pre><code>[![from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
url = 'https://system.arthuronline.co.uk/genieliew1/dashboards/index'
driver.get(url)
wait = WebDriverWait(driver, 20)
titleTags=driver.find_elements(By.CLASS_NAME, "select2-result")
select.selectByVisibleText("Property Owner Statement")
</code></pre>
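<p>A sketch of the direction I have been trying, assuming a standard select2 widget (the usual <code>Select</code> helper only works on native <code>&lt;select&gt;</code> elements; the CSS selectors below are guesses): open the rendered box, then click the result item by its visible text.</p>
<pre><code>wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, ".select2-choice, .select2-selection"))).click()
wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//li[contains(@class,'select2-result')]"
               "[normalize-space()='Property Owner Statement']"))).click()
</code></pre>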
|
<python><selenium><selenium-webdriver><xpath><webdriverwait>
|
2022-12-29 22:33:03
| 2
| 333
|
janicewww
|
74,956,538
| 3,277,133
|
How to write pandas df to snowflake from Azure databricks?
|
<p>I am having a hard time writing to snowflake from Azure databricks.</p>
<p>I have the following code that I am trying to use to write a pandas df to a snowflake table but I keep getting that my username or password is incorrect.</p>
<p>I have tried the username and password to log in in my browser and they work without a problem. Furthermore, I have also tried using the email address associated with this account and get the same error. What am I doing wrong?</p>
<pre><code>import pandas as pd
import sqlalchemy as sa
from sqlalchemy import *
import snowflake.connector
from snowflake.connector.pandas_tools import pd_writer
def savedfToSnowflake(df):
engine = snowflake.connector.connect(
user='xxxx',
password='xxxx',
role='xxxx',
account='xxxx',
warehouse='xxxx',
database='xxxx',
schema='xxxx',
autocommit=xxxx
)
try:
connection = engine.connect()
print("Connected to Snowflake ")
        df.to_sql('table_name', con=connection, index=False,
if_exists='append') # make sure index is False, Snowflake doesnt accept indexes
print("Successfully saved data to snowflake")
except Exception as ex:
print("Exception occurred {}".format(ex))
# finally:
# # close connection
# if connection:
# connection.close()
# if engine:
# engine.dispose()
def main():
savedfToSnowflake(df3)
if __name__ == '__main__':
main()
</code></pre>
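<p>For context, a sketch of the pattern I understood <code>to_sql</code> to require: an SQLAlchemy engine (via <code>snowflake-sqlalchemy</code>) rather than a raw connector connection. I have also read that the <code>account</code> value must be the Snowflake account identifier/locator (e.g. <code>xy12345.us-east-1</code>), not an email, and that a wrong identifier can also surface as a credentials error. Untested:</p>
<pre><code>from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
from snowflake.connector.pandas_tools import pd_writer

engine = create_engine(URL(
    account='xxxx', user='xxxx', password='xxxx',
    database='xxxx', schema='xxxx', warehouse='xxxx', role='xxxx',
))
df.to_sql('table_name', con=engine, index=False,
          if_exists='append', method=pd_writer)
</code></pre>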
|
<python><pandas><snowflake-cloud-data-platform><azure-databricks>
|
2022-12-29 22:22:13
| 1
| 3,707
|
RustyShackleford
|
74,956,404
| 18,840,965
|
How to stop a PostgreSQL query if it runs longer than the some time limit with psycopg?
|
<p>In Python is library <code>psycopg</code> and with that I can do queries.</p>
<p>I have an array of texts, and I iterate over them, running a query to find each text in Postgres. But sometimes a query takes too long to execute, and at that moment I want to stop/terminate the query and go on to the next. For example, if a query takes 10 seconds or longer, I need to stop it and go to the next.</p>
<p>Is it possible with psycopg? Or maybe it is possible with something else?</p>
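<p>For reference, a sketch of one idea I came across: a server-side <code>statement_timeout</code> makes Postgres itself cancel any statement running longer than the limit (table and column names below are placeholders).</p>
<pre><code>import psycopg

with psycopg.connect("dbname=test", options="-c statement_timeout=10000") as conn:
    for text in texts:
        try:
            rows = conn.execute(
                "SELECT * FROM docs WHERE body LIKE %s", (f"%{text}%",)
            ).fetchall()
        except psycopg.errors.QueryCanceled:
            continue  # query exceeded 10 s; move on to the next text
</code></pre>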
|
<python><postgresql><psycopg3>
|
2022-12-29 21:58:31
| 2
| 397
|
Dmiich
|
74,956,320
| 3,247,006
|
How to remove the default "Successfully deleted ..." message for "Delete Selected" in Django Admin Action?
|
<p>In the overridden <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset" rel="nofollow noreferrer">delete_queryset()</a>, I deleted loaded messages and added <strong>the new message "Deleted successfully!!"</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from .models import Person
from django.contrib import admin, messages
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
def delete_queryset(self, request, queryset):
msgs = messages.get_messages(request)
# Delete loaded messages
for i in range(len(msgs)):
del msgs[i]
# Add a new message
self.message_user(request, "Deleted successfully!!", messages.SUCCESS)
queryset.delete()
</code></pre>
<p>Then, when clicking <strong>Go</strong> to go to delete the selected persons as shown below:</p>
<p><a href="https://i.sstatic.net/zVyk2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zVyk2.png" alt="enter image description here" /></a></p>
<p>Then, clicking <strong>Yes I'm sure</strong> to delete the selected persons:</p>
<p><a href="https://i.sstatic.net/w47oD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w47oD.png" alt="enter image description here" /></a></p>
<p>But, <strong>the default message "Successfully deleted 2 persons."</strong> is not removed as shown below:</p>
<p><a href="https://i.sstatic.net/IbOAk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IbOAk.png" alt="enter image description here" /></a></p>
<p>So, how can I remove the default message for <code>Delete Selected</code> in Django Admin Action?</p>
|
<python><django><django-admin><django-messages><django-admin-actions>
|
2022-12-29 21:45:48
| 0
| 42,516
|
Super Kai - Kazuya Ito
|
74,956,140
| 13,067,435
|
How to import a python class/module in a jupyter notebook?
|
<p>Example contrived for this question; apologies for the basic question. I have the following code/directory structure:</p>
<pre><code>src
- main.py
my_notebook.ipynb
</code></pre>
<p>main.py</p>
<pre><code>class DoSomething():
pass
</code></pre>
<p>I have seen various questions on Stack Overflow which suggest this should work in my notebook:</p>
<p>my_notebook.ipynb</p>
<pre><code>from src.main import DoSomething
...
</code></pre>
<p>When I run this notebook, I get ModuleNotFoundError: No module named 'src.main'; 'src' is not a package. Although I suspected it wouldn't work, I also moved main.py to the root of my project and tried</p>
<pre><code>from main import DoSomething
</code></pre>
<p>I get a similar error: ModuleNotFoundError: No module named 'DoSomething'.</p>
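<p>For completeness, a sketch of the workarounds I have been trying, based on the usual advice (the "'src' is not a package" wording supposedly hints at a stray <code>src.py</code> or another <code>src</code> module shadowing the directory):</p>
<pre><code>import os
import sys

# make sure the kernel sees the project root, and give src/ an __init__.py
sys.path.insert(0, os.path.abspath('.'))
from src.main import DoSomething
</code></pre>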
|
<python><jupyter-notebook>
|
2022-12-29 21:19:02
| 1
| 841
|
arve
|
74,956,132
| 4,119,822
|
Using pypa's build on a python project leads to a generic "none-any.whl" wheel, but the package has OS-specific binaries (cython)
|
<p>I am trying to build a package for distribution which has cython code that I would like to compile into binaries before uploading to PyPI. To do this I am using pypa's <code>build</code>,</p>
<p><code>python -m build</code></p>
<p>in the project's root directory. This cythonizes the code and generates the binaries for my system then creates the sdist and wheel in the <code>dist</code> directory. However, the wheel is named "--py3-none-any.whl". When I unzip the <code>.whl</code> I do find the appropriate binaries stored,
(e.g., <code>cycode.cp39-win_amd64.pyd</code>). The problem is I plan to run this in a GitHub workflow where binaries are built for multiple python versions and operating systems. That workflow works fine but overwrites (or causes a duplicate version error) when uploading to PyPI since all of the wheels from the various OS share the same name. Then if I install from PyPI on another OS I get "module can't be found" errors since the binaries for that OS are not there and, since it was a wheel, the installation did not re-compile the cython files.</p>
<p>I am working with 64-bit Windows, MacOS, and Ubuntu. Python versions 3.8-3.10. And a small set of other packages which are listed below.</p>
<p>Does anyone see what I am doing wrong here? Thanks!</p>
<p><em>Simplified Package</em></p>
<pre><code>Tests\
Project\
__init__.py
pycode.py
cymod\
__init__.py
_cycode.pyx
_build.py
pyproject.toml
</code></pre>
<p><em>pyproject.toml</em></p>
<pre class="lang-ini prettyprint-override"><code>[project]
name='Project'
version = '0.1.0'
description = 'My Project'
authors = ...
requires-python = ...
dependencies = ...
[build-system]
requires = [
'setuptools>=64.0.0',
'numpy>=1.22',
'cython>=0.29.30',
'wheel>=0.38'
]
build-backend = "setuptools.build_meta"
[tool.setuptools]
py-modules = ["_build"]
include-package-data = true
packages = ["Project",
"Project.cymod"]
[tool.setuptools.cmdclass]
build_py = "_build._build_cy"
</code></pre>
<p><em>_build.py</em></p>
<pre class="lang-py prettyprint-override"><code>import os
from setuptools.extension import Extension
from setuptools.command.build_py import build_py as _build_py
class _build_cy(_build_py):
def run(self):
self.run_command("build_ext")
return super().run()
def initialize_options(self):
super().initialize_options()
import numpy as np
from Cython.Build import cythonize
print('!-- Cythonizing')
if self.distribution.ext_modules == None:
self.distribution.ext_modules = []
# Add to ext_modules list
self.distribution.ext_modules.append(
Extension(
'Project.cymod.cycode',
sources=[os.path.join('Project', 'cymod', '_cycode.pyx')],
include_dirs=[os.path.join('Project', 'cymod'), np.get_include()]
)
)
# Add cythonize ext_modules
self.distribution.ext_modules = cythonize(
self.distribution.ext_modules,
compiler_directives={'language_level': "3"},
include_path=['.', np.get_include()]
)
print('!-- Finished Cythonizing')
</code></pre>
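<p>For reference, a sketch of the alternative I am considering: defining the extensions in a plain <code>setup.py</code> (kept alongside <code>pyproject.toml</code>) so that <code>bdist_wheel</code> sees <code>ext_modules</code> up front and tags the wheel as platform-specific. Untested:</p>
<pre class="lang-py prettyprint-override"><code># setup.py
import os

import numpy as np
from Cython.Build import cythonize
from setuptools import setup
from setuptools.extension import Extension

setup(
    ext_modules=cythonize(
        [Extension(
            'Project.cymod.cycode',
            sources=[os.path.join('Project', 'cymod', '_cycode.pyx')],
            include_dirs=[os.path.join('Project', 'cymod'), np.get_include()],
        )],
        compiler_directives={'language_level': '3'},
    ),
)
</code></pre>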
|
<python><cython><setuptools><pypi><python-packaging>
|
2022-12-29 21:17:13
| 1
| 329
|
Oniow
|