CreationDate | Users Score | Tags | AnswerCount | A_Id | Title | Q_Id | is_accepted | ViewCount | Question | Score | Q_Score | Available Count | Answer |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2023-03-17 19:20:41 | 0 | python,yolov5 | 1 | 75,771,554 | Python: Return tuple value from function and calling a specific value. TypeError: 'NoneType' object is not subscriptable | 75,771,425 | false | 51 | Hello, this is my first post ever, so if I can do anything to improve, please tell me.
So I am returning two variables x, y from a function.
When I output the Tuple value in my main function with print(coords()) it works perfectly fine.
It just returns 2 int values (500, 400).
When I try print(coords()[0]) I should get my X value, but I do get a:
TypeError: 'NoneType' object is not subscriptable
The best part is it worked yesterday just like that, when I reopened my Project today it just doesn't work anymore.
Running this code without any functions works perfectly fine as well.
Also interesting is that sometimes print(coords()[0]) works for a second and shows the right results, but then crashes with the TypeError.
I am using Yolov5, but Yolo is working perfectly fine, and it's only trying to get the coordinates of a single object.
def coords():
    rl = detection.xyxy[0].tolist()
    # calculates x and y pos from detected bounding box pixel location
    if len(rl) > 0:
        if rl[0][4] > 0.7:
            if rl[0][5] == 0:
                width = float(rl[0][2]) - float(rl[0][0])
                height = float(rl[0][3]) - float(rl[0][1])
                x = float(rl[0][2]) - (width / 2)
                y = float(rl[0][3]) - (height / 2) + 15
                return int(x), int(y)
                # yesterday my code looked like this
                # coords = x, y
                # return list(coords)  # but this results in the same type error
def main():
    while True:
        print(coords())     # does work
        print(coords()[0])  # does not work, but did yesterday | 0 | 1 | 1 | The answer from John Gordon is right:
If any of the top three if conditions are false, the function will never make it to the return int(x), int(y) line, and ends up returning None by default.
I filtered out the None values and now it's working. |
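A minimal sketch of that filtering (using a stub in place of the question's coords(), which returns None whenever no confident detection exists):

```python
def coords_stub():
    # stands in for coords() when nothing is detected:
    # the function falls through all the if-checks and returns None
    return None

result = coords_stub()
if result is not None:
    x, y = result          # safe to unpack / index now
else:
    print("no detection")  # skip this frame instead of crashing
```

Calling the function once, storing the result, and checking it also avoids running the detection twice, as the question's `print(coords())` followed by `print(coords()[0])` would.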
2023-03-17 20:20:58 | 0 | python,tkinter,canvas,radio-button,ttk | 1 | 75,772,254 | (PYTHON, TKINTER) How can I get radiobuttons to delete with my canvases? (School Project Combat Game) | 75,771,912 | false | 33 | I am trying to figure out how to delete radiobuttons along with my canvas when it switches to the next one.
my code:
(P.S., I am trying to make it so that after the user's turn ends, another canvas with the same properties (but with a fresh set of radiobuttons) opens up, allowing the user to play their next turn. Open to any suggestions/improvements!)
from tkinter import *
import random

root = Tk()

def game_canvas():
    global c, var
    c.destroy()
    c = Canvas(root, height = 720, width = 1280, bg = 'black')
    c.pack()
    # creates the background of the scene
    c.create_image(0, 0, image = arena, anchor = NW)
    # adds the user and griffin's images
    c.create_image(300, 95, image = user_geralt_image2, anchor = NW)
    c.create_image(875, 300, image = griffin_image)
    # creates and places the button to go to the next canvas
    confirm_button = Button(c, text = 'Confirm', command = action, font = 'Calibri 16', borderwidth = 5)
    create_confirm_button = c.create_window(1156, 685, window = confirm_button)
    # creates and places the button to quit the game
    quit_button = Button(c, text = 'Quit', command = exit_confirmation, font = 'Calibri 16', borderwidth = 5)
    create_quit_button = c.create_window(1241, 685, window = quit_button)
    var = IntVar()
    r1 = Radiobutton(root, text = 'Attack', font = 'Calibri 16', variable = var, value = 1)
    r1.pack(anchor = W)
    r2 = Radiobutton(root, text = 'Parry', font = 'Calibri 16', variable = var, value = 2)
    r2.pack(anchor = W)
    r3 = Radiobutton(root, text = 'Dodge', font = 'Calibri 16', variable = var, value = 3)
    r3.pack(anchor = W)

root.mainloop()
I've done a few Google searches to find a way to attach the radiobuttons to the canvas, but to no avail.
Thanks!! | 0 | 1 | 1 | There are at least two simple solutions, based on the same principle.
First, you can make the radiobuttons a child of the canvas. When you delete the canvas, all children will automatically be deleted.
Since you didn't do that, I'm guessing you want the radiobuttons to be outside of the canvas. In that case, the simplest thing is to make both the radiobuttons and the canvas children of a frame. When you destroy the frame, all of its widgets will be destroyed too. |
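A sketch of that second suggestion (illustrative names, not the asker's full game): parent everything to a Frame, then one frame.destroy() clears the canvas and the radiobuttons together.

```python
import tkinter as tk

def build_turn(root):
    """Build one turn's widgets inside a single Frame, so they can be
    destroyed together with frame.destroy() when the turn ends."""
    frame = tk.Frame(root)
    frame.pack()
    canvas = tk.Canvas(frame, height=720, width=1280, bg='black')
    canvas.pack()
    var = tk.IntVar()
    for value, text in enumerate(['Attack', 'Parry', 'Dodge'], start=1):
        tk.Radiobutton(frame, text=text, variable=var, value=value).pack(anchor=tk.W)
    return frame, var
```

In the game loop, calling `frame, var = build_turn(root)` for each turn and `frame.destroy()` at its end replaces the separate `c.destroy()` plus orphaned radiobuttons.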
2023-03-17 22:20:57 | 0 | python,oracle,ms-access | 2 | 75,772,691 | "ORA-01036 illegal variable name/number" at INSERT INTO query | 75,772,652 | false | 173 | I'm creating a Python code that fetches data from an MS Access table and inserts it into an Oracle SQL table, but when I run the code, I get the error ORA-01036 illegal variable name/number.
This error is occurring in the INSERT INTO statement. I don't know what it could be.
import pyodbc
import cx_Oracle
# Set up the Microsoft Access connection
access_conn_str = (
r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
r'DBQ=C:\MyFolder\ACC_BASE.MDB;'
)
access_conn = pyodbc.connect(access_conn_str)
# Define the Oracle SQL connection string
oracle_conn_str = cx_Oracle.makedsn("MyConnection", "MyPort", "MySID")
# Create a connection to the Oracle SQL database
oracle_conn = cx_Oracle.connect(user="MyUser", password="MyPassword", dsn=oracle_conn_str)
# Create a cursor for each connection
access_cursor = access_conn.cursor()
oracle_cursor = oracle_conn.cursor()
# Execute the select statement to extract data from the Access table
access_cursor.execute('SELECT * FROM ACC_TABLE')
# Loop through the rows of the Access table and insert them into the Oracle SQL table
for row in access_cursor.fetchall():
    oracle_cursor.execute(
        'INSERT INTO ORACLE_TABLE (COD, LEV, AZET, HUES) VALUES (?, ?, ?, ?)',
        [row[0], row[1], row[2], row[3]]
    )
# Commit the changes to the Oracle SQL table
oracle_conn.commit()
# Close the cursors and connections
access_cursor.close()
access_conn.close()
oracle_cursor.close()
oracle_conn.close() | 0 | 1 | 1 | The message
ORA-01036 illegal variable name/number
indicates that there might be a problem with the variables you are trying to bind into the database. Try checking the names of the columns in ORACLE_TABLE and verify that they match what is in your INSERT command. |
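One more detail worth checking, not mentioned in the original answer: cx_Oracle does not accept the `?` placeholders that pyodbc uses; it expects numbered (`:1`) or named (`:name`) bind variables, and passing `?` is a common trigger for ORA-01036. A small runnable sketch of building the statement in the Oracle style (the helper function is my own, for illustration):

```python
def oracle_insert_sql(table, columns):
    """Build an INSERT statement with cx_Oracle-style numbered bind variables."""
    binds = ", ".join(":%d" % i for i in range(1, len(columns) + 1))
    return "INSERT INTO %s (%s) VALUES (%s)" % (table, ", ".join(columns), binds)

sql = oracle_insert_sql("ORACLE_TABLE", ["COD", "LEV", "AZET", "HUES"])
print(sql)  # INSERT INTO ORACLE_TABLE (COD, LEV, AZET, HUES) VALUES (:1, :2, :3, :4)
```

The resulting string can be passed to `oracle_cursor.execute(sql, [row[0], row[1], row[2], row[3]])` in place of the `?`-based statement.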
2023-03-18 06:42:28 | 1 | python-3.x,docker-compose,celery,fastapi,flower | 1 | 75,774,507 | How to map flower port 5556 to an endpoint in FastAPI? | 75,774,245 | true | 67 | I have a simple FastAPI app. I am using Celery for async task processing and a Flower dashboard for monitoring tasks.
My main application is running on port 80
My flower dashboard for task monitoring is running on port 5556
Now I want to map the port to the app endpoint, something like - http://localhost/flower-dashboard
Here is my docker-compose.yml file:
version: '3.8'

services:
  web:
    build: ./project
    ports:
      - 80:80
    command: uvicorn main:app --host 0.0.0.0 --reload
    volumes:
      - ./project:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://:password@redis:6379/0
      - CELERY_RESULT_BACKEND=redis://:password@redis:6379/0
    depends_on:
      - redis
  worker:
    build: ./project
    command: celery worker --app=worker.celery --loglevel=info --logfile=logs/celery.log
    volumes:
      - ./project:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://:password@redis:6379/0
      - CELERY_RESULT_BACKEND=redis://:password@redis:6379/0
    depends_on:
      - web
      - redis
  redis:
    image: public.ecr.aws/ubuntu/redis:5.0-20.04_edge
    restart: always
    command: /bin/sh -c "redis-server --requirepass $$REDIS_HOST_PASSWORD"
    env_file:
      - redis.env
  dashboard:
    build: ./project
    command: flower --app=worker.celery --port=5555 --broker=redis://:password@redis:6379/0
    ports:
      - 5556:5555
    environment:
      - CELERY_BROKER_URL=redis://:password@redis:6379/0
      - CELERY_RESULT_BACKEND=redis://:password@redis:6379/0
    depends_on:
      - web
      - redis
Any help would be highly appreciated, thanks! | 1.2 | 2 | 1 | This may not be an easy thing to do. To map localhost:5556 to localhost/flower-dashboard you'd need to use a proxy. You could add an Nginx or Apache service to your docker-compose configuration and make it route localhost/flower-dashboard requests to the dashboard service and all other requests (localhost/*) to the web service. This implies that you no longer map the web port 80 to the host like you do now, and map the proxy's port 80 instead. |
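A minimal sketch of that proxy idea (hypothetical Nginx config, not from the original answer; the upstream names come from the compose file, and Flower would typically also need its url_prefix option set to flower-dashboard so its internal links work under the prefix):

```nginx
server {
    listen 80;

    # Flower dashboard under /flower-dashboard
    location /flower-dashboard/ {
        proxy_pass http://dashboard:5555/;
    }

    # everything else goes to the FastAPI app
    location / {
        proxy_pass http://web:80/;
    }
}
```

This server block would live in an nginx service added to docker-compose, with that service (not web) publishing port 80 on the host.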
2023-03-18 07:02:56 | 0 | python-3.x,linux,cron,debian,filenotfounderror | 1 | 75,780,076 | File not found while creating file in Python3 in a cron job. (FileNotFoundError: [Errno 2]) | 75,774,322 | false | 78 | After I've updated my system, a weird error appeared:
Traceback (most recent call last):
File "/home/username/bin/my_application.py", line 116, in <module>
with open(lock_file, 'x'):
FileNotFoundError: [Errno 2] No such file or directory: '/run/user/1000/running_application.lock'
Line 116+117 look like this:
with open(lock_file, 'x'):
print("Lockfile " + lock_file + " created")
The path of the lockfile is defined like this:
lock_file = '/run/user/' + str(os.getuid()) + '/running_application.lock'
The uid is always 1000 (if triggered by cron or by hand).
I am trying to create a file simply to prevent this script from running several times at the same time. It is triggered by a cron job (crontab -e) on a Debian 11 bullseye server. Before this line, a routine waits until the file no longer exists, or exits if x minutes have passed.
It always worked, and if I trigger this script by hand, it still works. But out of nowhere it stopped working when triggered by the cron job.
And I don't understand the error at all. How can Python expect a file, if the file doesn't exist and have to be created?
I already tried it with the 'w' option, but the same error appears.
And I don't know what to do anymore, because this script works perfectly if it's triggered by hand.
Thanks a lot for your help :) | 0 | 1 | 1 | You can try specifying the environment variable inside your cronjob
in your cronjob you could add something like */5 * * * * export XDG_RUNTIME_DIR=/run/user/1000 && /usr/bin/python3 /home/username/bin/my_application.py
you can also use an absolut path for your lock file lock_file = '/home/username/locks/running_application.lock'
Also check the permissions, maybe the user running the cron dont have the right permission to create the lock file ? |
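A sketch combining both suggestions (the /tmp fallback is my assumption, not from the original answer): prefer the session runtime directory when it exists, otherwise fall back to a directory that is always present.

```python
import os

uid = os.getuid()
# honor XDG_RUNTIME_DIR if set, else the conventional per-user path
runtime_dir = os.environ.get("XDG_RUNTIME_DIR", "/run/user/%d" % uid)
if not os.path.isdir(runtime_dir):
    runtime_dir = "/tmp"  # no login session (e.g. running under cron)

lock_file = os.path.join(runtime_dir, "running_application.lock")
```

With this, the `with open(lock_file, 'x'):` line from the question works both interactively and from cron, because the target directory is guaranteed to exist.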
2023-03-18 12:46:34 | 0 | python,machine-learning,librosa | 2 | 76,103,973 | TypeError in librosa, MFCC | 75,775,979 | false | 969 | I have the code below, which takes an data set(GTZAN) and turns it into an MFCC in dictionary:
import os
import math
import json
import librosa

DATASET_PATH = '/content/drive/MyDrive/ColabNotebooksNew/PROJECT/ProjectMusic/Data/genres_original'
JSON_PATH = "data_10.json"
SAMPLE_RATE = 22050  # each song is 30s long, with a 22,050 Hz sample rate
TRACK_DURATION = 30  # measured in seconds
SAMPLES_PER_TRACK = SAMPLE_RATE * TRACK_DURATION  # = 661,500

def save_mfcc(dataset_path, json_path, num_mfcc=13, n_fft=2048, hop_length=512, num_segments=5):
    # dictionary to store mapping, labels, and MFCCs
    data = {
        "mapping": [],  # label names. size - (10,)
        "labels": [],   # stores the 'real' song type (value from 0-9). size - (5992,)
        "mfcc": []      # stores the MFCCs. size - (5992, 216, 13)
    }
    samples_per_segment = int(SAMPLES_PER_TRACK / num_segments)  # = 110250
    num_mfcc_vectors_per_segment = math.ceil(samples_per_segment / hop_length)  # = 216 (math.ceil of 215.332)

    # loop through all genre sub-folders
    for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dataset_path)):
        # ensure we're processing a genre sub-folder level
        if dirpath is not dataset_path:
            # save genre label (i.e., sub-folder name) in the mapping
            semantic_label = dirpath.split("/")[-1]
            data["mapping"].append(semantic_label)
            print("\nProcessing: {}".format(semantic_label))
            # process all audio files in genre sub-dir
            for f in filenames:
                # load audio file
                file_path = os.path.join(dirpath, f)
                if file_path != '/content/drive/MyDrive/ColabNotebooksNew/PROJECT/ProjectMusic/Data/genres_original/jazz/jazz.00054.wav':
                    """fileError: Error opening '/content/drive/MyDrive/ColabNotebooksNew/PROJECT/ProjectMusic/Data/genres_original/jazz/jazz.00054.wav': File contains data in an unknown format."""
                    signal, sample_rate = librosa.load(file_path, sr=SAMPLE_RATE)  # signal = the audio samples, sample_rate = 22050
                    # process all segments of audio file
                    for d in range(num_segments):
                        # calculate start and finish sample for current segment
                        start = samples_per_segment * d
                        finish = start + samples_per_segment
                        # extract mfcc (time frames x coefficients; 13 because num_mfcc=13)
                        mfcc = librosa.feature.mfcc(signal[start:finish], sample_rate, n_mfcc=num_mfcc, n_fft=n_fft, hop_length=hop_length)
                        mfcc = mfcc.T  # [216, 13]
                        # store only mfcc feature with expected number of vectors
                        if len(mfcc) == num_mfcc_vectors_per_segment:  # == 216
                            data["mfcc"].append(mfcc.tolist())
                            data["labels"].append(i - 1)
                            print("{}, segment:{}".format(file_path, d + 1))

    # save MFCCs to json file
    with open(json_path, "w") as fp:
        json.dump(data, fp, indent=4)  # puts everything in the JSON file

# Runs Data Processing
save_mfcc(DATASET_PATH, JSON_PATH, num_segments=6)
I have been using this code for a long while and it has worked great, until today, when I got the error below:
TypeError Traceback (most recent call last)
<ipython-input-10-4a9371926618> in <module>
1 # Runs Data Processing
----> 2 save_mfcc(DATASET_PATH, JSON_PATH, num_segments=6)
<ipython-input-9-8ba1c6e78747> in save_mfcc(dataset_path, json_path, num_mfcc, n_fft, hop_length, num_segments)
56
57 # extract mfcc
---> 58 mfcc = librosa.feature.mfcc(signal[start:finish], sample_rate, n_mfcc=num_mfcc, n_fft=n_fft, hop_length=hop_length) #mfcc - time and Coef(13 because num_mfcc=13),
59 mfcc = mfcc.T #[216,13]
60 # store only mfcc feature with expected number of vectors
TypeError: mfcc() takes 0 positional arguments but 2 positional arguments (and 1 keyword-only argument) were given
About the save_mfcc function:
Extracts MFCCs from music dataset and saves them into a json file along with genre labels.
:param dataset_path (str): Path to dataset
:param json_path (str): Path to json file used to save MFCCs
:param num_mfcc (int): Number of coefficients to extract
:param n_fft (int): Interval we consider to apply FFT. Measured in # of samples
:param hop_length (int): Sliding window for FFT. Measured in # of samples
:param: num_segments (int): Number of segments we want to divide sample tracks into
:return:
I don't understand why the problem just appeared today, and how to fix it.
How can I solve the error? | 0 | 2 | 1 | I also met this problem while doing my graduation project. You can uninstall your librosa 0.9.0 version, install version 0.8.0, and try again. The Tsinghua mirror source can't be used to install librosa 0.8.0, so you'd better download version 0.8.0 from the official index (you may need a VPN for that). |
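An alternative to downgrading: newer librosa releases made the audio arguments of the feature functions keyword-only, which is exactly what the "mfcc() takes 0 positional arguments" message means. Calling it as `librosa.feature.mfcc(y=signal[start:finish], sr=sample_rate, ...)` should work on the new version too. The error mechanism, reproduced in miniature without librosa (the toy `mfcc` below is mine, only mimicking the keyword-only signature):

```python
def mfcc(*, y=None, sr=22050, n_mfcc=20):
    """Toy stand-in for librosa.feature.mfcc: all parameters keyword-only."""
    return (len(y), sr, n_mfcc)

try:
    mfcc([0.0] * 4, 22050)  # positional call, like the question's old code
    raised = False
except TypeError:
    raised = True            # "mfcc() takes 0 positional arguments ..."

print(raised)                        # True
print(mfcc(y=[0.0] * 4, sr=22050))  # (4, 22050, 20) - keyword call works
```

So the two viable fixes are: pin the older librosa, or switch the call site to keyword arguments.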
2023-03-18 13:38:35 | 1 | python,sockets,bluetooth,pybluez | 1 | 75,778,065 | Using python sockets to connect bluetooth device | 75,776,240 | false | 258 | I'm trying to connect to custom bluetooth device with sockets using python. I use pybluez to find device and get it's address. Current code looks like this:
import bluetooth, subprocess
import socket

class BCI(object):
    """
    Bluetooth Connection to BCI
    """
    socket_ = None
    bluetooth_address = None
    connected = False
    port = 0x005
    dev_name = None
    cmds = {}
    status = {
        0: 'ok',
        1: 'communication timeout',
        3: 'checksum error',
        4: 'unknown command',
        5: 'invalid access level',
        8: 'hardware error',
        10: 'device not ready',
    }

    def __init__(self, *args, **kwargs):
        self.bluetooth_address = kwargs.get("bluetooth_address", None)
        if self.bluetooth_address is None:
            self.find()

    def connect(self):
        try:
            self.socket_ = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
            # self.socket_ = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_L2CAP)
            self.socket_.connect((self.bluetooth_address, self.port))
            self.connected = True
        except:
            self.socket_.close()
            self.connected = False
            raise ConnectionError

    def find(self):
        dev = self.__class__.__name__
        print('Searching for ' + dev)
        nearby_devices = bluetooth.discover_devices(duration=8, lookup_names=True, flush_cache=True)
        for index, val in enumerate(nearby_devices):
            addr, name = val
            if dev in name.upper():
                self.dev_name = name
                self.bluetooth_address = addr
                print('Found BCI: ' + name + ' @', self.bluetooth_address)
                return

    def find_bluetooth_services(self):
        services = bluetooth.find_service(address=self.bluetooth_address)
        if len(services) > 0:
            print("found %d services on %s" % (len(services), self.bluetooth_address))
            print(services)
        else:
            print("no services found")

    def close(self):
        self.socket_.close()

if __name__ == "__main__":
    try:
        device = BCI()
    except:
        print('No BCI devices found')
    try:
        print("Trying to connect with " + device.__class__.__name__)
        device.connect()
    except ConnectionError:
        print('Can\'t connect with ' + device.__class__.__name__)
    '''
    try:
        device.find_bluetooth_services()
    except:
        print("Some problem occurred")
    '''
    if device.connected:
        print('Connected {}'.format(device.dev_name) + device.__class__.__name__ + ' @', device.bluetooth_address)
It finds the device and tries to connect (terminal output):
Searching for BCI
Found BCI: BCI-2016-SN005 @ 00:80:E1:BE:EE:F1
Trying to connect with BCI
Then a notification from system settings appears, asking to approve the connection with a number code. After I approve the connection, this comes out (terminal output):
Can't connect with BCI
What am I doing wrong?
I tried to use connect_ex() instead of connect() and got error code 10065, but couldn't find out what it means in the context of Bluetooth | 0.197375 | 1 | 1 | I just used the wrong port number; it should have been port = 1 |
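Rather than hard-coding the RFCOMM channel, the channel can also be read from the SDP records that pybluez's find_service() returns (a list of dicts with "protocol" and "port" keys). A sketch of the selection logic, run here against a hand-made record list so it works without Bluetooth hardware:

```python
def rfcomm_channel(services):
    """Return the RFCOMM channel from a list of SDP service records, or None."""
    for svc in services:
        if svc.get("protocol") == "RFCOMM":
            return svc["port"]
    return None

# stand-in for bluetooth.find_service(address=...) output
fake_services = [{"protocol": "L2CAP", "port": 0x1001},
                 {"protocol": "RFCOMM", "port": 1}]
print(rfcomm_channel(fake_services))  # 1
```

In the question's class, `self.port = rfcomm_channel(bluetooth.find_service(address=self.bluetooth_address))` would have avoided the wrong-port guess.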
2023-03-18 14:46:11 | 2 | python,machine-learning,nlp,text-classification,naivebayes | 1 | 75,782,684 | Is there anything incorrect about my implementation of my Naive Bayes Classifier? | 75,776,587 | false | 45 | I'm building a text classifier using Naive Bayes for a school project. My accuracy on the testing set that my professor provided us is 89.4% which is reasonably high but my professor said that if I correctly implemented Naive Bayes that it should be around a point higher. Is there anything incorrect with my implementation of Naive Bayes or if there's maybe any normalization technique that I should try applying to the documents? Below are the two main functions for my classifier. There are other functions and without them the program won't run but they're mostly related to tuning the model. Please let me know if you need me to clarify anything else or if you want the full version of the code.
Also the training and testing set consists of two columns with the first column being the file path to the document and the second one the class that that document belongs to.
def getStatistics(trainSet):
    # here we're going to calculate the probability of a document being in a certain
    # category by first looking at the fraction of documents in the training set that
    # are in that category
    probCateg = trainSet["Category"].value_counts()
    probCateg = probCateg / probCateg.sum()
    # we create a dictionary that maps a category to the probability of a document
    # being in that category
    probCateg = probCateg.to_dict()
    # to calculate p(t|c) we need to find the probability of a term occurring in a category.
    # So to quickly get the probability of a word appearing in a document of a certain
    # category, I used a dictionary which maps the word to another dictionary which in
    # turn maps the category to "C(t|c)". This might not be as efficient as having the
    # dictionary map to a list of prob values (lookup is still O(1), but hashing the key
    # is probably more computationally expensive than retrieving an element by index),
    # but in my opinion it makes the data structure work very well with pandas, so that
    # training my classifier only takes a few lines of code.
    wordFreq = {}
    lemmatizer = WordNetLemmatizer()
    # for each data point in our training set
    for index, row in trainSet.iterrows():
        # we read in the document
        with open(row["Filename"], 'r') as f:
            text = f.read()
        # and split the document into tokens,
        # then we're going to reduce all of the words in the document to their base form
        # and make all of the letters lower case.
        # This helps make our assumption that the probabilities of each individual word
        # appearing in the document are independent of one another more valid, as if you
        # see the word robbery in a document then you're also likely to see words like
        # robbed, robbing, and rob in the document. Additionally, this will make training
        # and classification of our model much faster, as there are significantly fewer
        # words that it has to calculate probabilities for.
        # I also removed stop words and punctuation from the documents, since they only
        # seemed to hurt the model by adding noise, and it makes training and
        # classification faster as well.
        tokens = nltk.word_tokenize(text)
        # I found that for the third corpus, counting the number of times the word appears
        # in a document instead of just whether it occurs in the document significantly
        # increases the accuracy by around 4%
        tokens = list(set(tokens))
        tokens = [token.lower() for token in tokens]
        tokens = [token for token in tokens if (token not in punctuation) and (token not in stopwords)]
        tokens = [lemmatizer.lemmatize(word) for word in tokens]
        # we then convert the tokens to a set and back to a list so that we only
        # have the unique words in the document
        tokens = list(set(tokens))
        # for each word
        for word in tokens:
            # if the word isn't yet in the dictionary,
            # initialize an entry that maps the word to a dictionary mapping the
            # category names to 0s, since so far no category has had that word
            if not (word in wordFreq):
                wordFreq[word] = dict(zip(probCateg.keys(), [0] * len(probCateg.keys())))
            # then increment the # of occurrences of that term in this category by 1
            wordFreq[word][row["Category"]] += 1
    return wordFreq, probCateg
# in this function, we classify the document.
# the function takes in the name of the file the document is in
# and a tunable parameter that's used to account for words that don't
# appear in the training set but are present in the testing set.
# The function returns the category that the document is most likely to be.
def classifyDoc(wordFreq, probCateg, filename, eps, trainSize):
    # open and read in the file
    with open(filename, 'r') as f:
        text = f.read()
    # and we do the same text processing that we did for the documents in the training set
    lemmatizer = WordNetLemmatizer()
    tokens = nltk.word_tokenize(text)
    tokens = list(set(tokens))
    tokens = [token.lower() for token in tokens]
    tokens = [token for token in tokens if (token not in punctuation) and (token not in stopwords)]
    tokens = [lemmatizer.lemmatize(word) for word in tokens]
    # we then convert the tokens to a set and back to a list so that we only
    # have the unique words in the document
    tokens = list(set(tokens))
    # since we're going to multiply several very small numbers together, we run
    # the risk of the value rounding to 0. To avoid this, we take the log probability instead.
    # we're going to calculate the log probability for each category, so we hold
    # these values in a dictionary that maps the categories to their log probabilities
    logProb = dict(zip(probCateg.keys(), [math.log(probCateg[key]) for key in probCateg.keys()]))
    secCorp = "O" in probCateg.keys()
    # for each category
    for categ in probCateg.keys():
        # to get p(t|c) we divide by the total number of documents in that category
        denom = trainSize * probCateg[categ]
        # for the second corpus I found that the full form of Laplace smoothing gives better results
        if secCorp:
            denom += len(wordFreq) * eps
        # for each word in the document
        for word in tokens:
            # if we encountered the word before,
            # we get C(t|c) from the dictionary and add the log of it (plus the
            # smoothing parameter) to the corresponding log probability
            if word in wordFreq:
                logProb[categ] += math.log(wordFreq[word][categ] + eps)
            # However, if the word isn't in the training set at all, that would mean
            # p(t|c) = 0, which would mean p(c|d) = 0, which is unreasonable. Instead,
            # we add a small constant, eps, to all term frequencies for all categories.
            else:
                logProb[categ] += math.log(eps)
            # and then here we divide by denom to get p(t|c), which for log prob is
            # the same as subtracting log(denom)
            logProb[categ] -= math.log(denom)
    # after we have calculated the log probabilities we return the category with
    # the highest probability
    return max(logProb, key=logProb.get) | 0.379949 | 1 | 1 | So I think I figured out why my classifier wasn't performing as well as I expected it to, and it's actually pretty funny and also confusing. When I didn't perform lemmatization on the documents, it actually increased my accuracy to 90.3%, which feels counterintuitive to me. I always thought that lemmatization was just one of those things you always do when you're processing text. I think the reason for this jump in performance is that word form can be used to distinguish categories in this corpus. The categories in this corpus are the categories of news articles: crime, discovery, politics, struggle, and other. In politics, we talk about statistics and trends, whereas the crime category mostly covers individual things that have happened. So the crime section would be more likely to have singular words like robbery and murder, whereas the politics section would have the plural forms of those words: robberies and murders. |
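A toy illustration of that explanation (the hand-made lemma table stands in for WordNetLemmatizer so the snippet runs without nltk; the real lemmatizer behaves analogously for these words): once plural and singular forms collapse to one token, the plural/singular signal separating the two categories disappears.

```python
lemma = {"robberies": "robbery", "murders": "murder"}

doc_politics = ["robberies", "murders"]  # plural forms: statistics and trends
doc_crime = ["robbery", "murder"]        # singular forms: individual events

def normalize(doc):
    # lemmatization step: map each token to its base form
    return [lemma.get(t, t) for t in doc]

print(normalize(doc_politics) == normalize(doc_crime))  # True: the signal is gone
```

After normalization the two documents are indistinguishable, which is why skipping lemmatization can help on a corpus where inflection itself is discriminative.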
2023-03-18 15:28:41 | 1 | python,web-applications,architecture | 1 | 75,777,017 | Best way to build application triggering reminder at certain time | 75,776,827 | true | 50 | I want to build a python app sending reminder at certain time. It would have several subscribers. Each subscriber sets specific moment when they want to be notified. For example, John wants to have a reminder every days whereas Cassandra wants it every months.
I see several ways to do it :
Use a script running 24/7 with a while loop that checks whether it's time to send a reminder.
Use a cron tab that runs the script every minute to check if there are reminders to send.
Create a simple API (in Flask for example). It checks every minute or so whether there is a reminder to send to subscribers, or subscribers could even make requests to the API.
What is the best way to build such an application in Python for a few subscribers (10) and for a larger amount (1000)? | 1.2 | 1 | 1 | For a small number of subscribers I would write a script that runs all the time: it continuously checks the time and sends reminders as needed. You can use the time and datetime modules for that.
For a large number of subscribers it would be an API that can handle multiple requests simultaneously; you can use Flask, Django, or FastAPI for that.
With the API, subscribers would make a POST request with all the notification information (date, time, and the message they want to receive, for example), and the server would store that information in a database. A task queue (Celery seems the best fit if you are familiar with Python, but there is Airflow too) would then schedule each reminder to be executed at a certain time.
When the reminder is executed, it just retrieves the information from the database and sends it, via email for example. |
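A minimal sketch of the small-scale option from the answer: a long-running loop that checks for due reminders once per minute (the reminder records and their fields are illustrative, not from the question).

```python
import time
from datetime import datetime

# in a real app this would come from subscriber settings / a database
reminders = [{"user": "John", "due": datetime(2023, 3, 19, 9, 0)}]

while reminders:
    now = datetime.now()
    for r in [r for r in reminders if r["due"] <= now]:
        print("reminder for %s" % r["user"])  # send email/notification here
        reminders.remove(r)
    if reminders:
        time.sleep(60)  # re-check once per minute
```

For recurring reminders (every day, every month), the send step would compute and re-append the next due datetime instead of just removing the entry.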
2023-03-18 16:17:15 | 1 | python,django,redis,python-huey | 2 | 75,777,441 | Django and Huey task issue, record in DB doesn't exist when the task is ran | 75,777,110 | false | 138 | I am testing Huey with Django and found one issue, tasks can't find the record in DB.
Postgres 14
Redis 7.0.5
Django 4
use with docker
Here is the code:
# settings.py
USE_HUEY = env.bool("USE_HUEY", False)
HUEY = {
    "huey_class": "huey.RedisHuey",
    "name": "huey",
    "immediate": not USE_HUEY,
    "connection": {"url": REDIS_URL},
}

# app/signals.py
@receiver(post_save, sender=Post)
def post_signal(sender, instance, **kwargs):
    from app.tasks import create_or_update_related_objects
    create_or_update_related_objects(instance.pk)

# app/tasks.py
@db_task()
def create_or_update_related_objects(object_pk):
    post = Post.objects.get(pk=object_pk)
    ...
This is running an async task but I am getting the error:
app.models.Post.DoesNotExist: Post matching query does not exist.
This is not correct, there is a post, and this task is running on a post_save signal.
What is weird, is if I do something like this, it works fine:
@db_task()
def create_or_update_related_objects(object_pk):
    import time
    time.sleep(3)
    post = Post.objects.get(pk=object_pk)
    ...
What am I doing wrong here? | 0.099668 | 1 | 1 | I'm not sure, but most likely your task runs before the database transaction is committed. It seems that enqueueing the task with a small delay can give the database time to commit before the task looks up the row. |
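In Django, a common way to fix exactly this race (my suggestion, not part of the original answer) is to enqueue the task from transaction.on_commit(lambda: create_or_update_related_objects(instance.pk)) inside the signal handler, so the task only runs once the row is visible. The mechanism, modeled in miniature without Django:

```python
class FakeTransaction:
    """Toy model: callbacks registered with on_commit run only after commit."""
    def __init__(self):
        self.callbacks = []
        self.committed = set()
        self.pending = None

    def save(self, pk):
        self.pending = pk          # row written, but not yet visible to others

    def on_commit(self, fn):
        self.callbacks.append(fn)  # deferred until commit succeeds

    def commit(self):
        self.committed.add(self.pending)
        for fn in self.callbacks:
            fn()

tx = FakeTransaction()
seen = []
tx.save(42)
tx.on_commit(lambda: seen.append(42 in tx.committed))
tx.commit()
print(seen)  # [True]: the "task" only ran once the row was committed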
2023-03-18 16:20:00 | 0 | python | 4 | 75,777,159 | Print letters with a delay all on the same line | 75,777,130 | false | 54 | I can't get the characters to all be printed on one line.
list = ["W", "e", "l", "c", "o", "m", "e", " ", "t", "o", " ", "w", "o", "r", "d", "l", "e"]
for item in list:
    sleep(0.25)
    print(item)
I tried using a list and imported the sleep function. | 0 | 2 | 1 | In your print, add end='' after item as a second parameter. By default, every time you print, the default line ending \n is added. |
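The answer applied to the question's loop; flush=True is also worth adding, since stdout is often line-buffered and the characters would otherwise only appear after the loop finishes rather than one by one.

```python
from time import sleep

out = []
for ch in "Welcome to wordle":
    print(ch, end="", flush=True)  # no newline; flush so each char shows at once
    out.append(ch)
    sleep(0.25)
print()  # final newline after the whole message
```

Iterating over the string directly also avoids hand-writing the character list (where the question accidentally used empty strings instead of spaces).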
2023-03-18 22:20:41 | 2 | python | 2 | 75,779,046 | Why can't I import config? | 75,779,002 | false | 166 | I have a config.yml file with api keys in it, further, I have a .py file which greps for it.
I should theoretically be able to import config, and I can do so when I move the config.py file from the main directory into the correct subdirectory, but that doesn't quite work efficiently.
Forgive the poor attempt at file structure formatting.
in directory:
servers/
    backend/
        file1.py
        file2.py
        file_i_need_config.py
config.py
I am trying:
from config import Config
print(Config.get_property("api_key_for_thing").unwrap())
and receiving the error:
ModuleNotFoundError: No module named 'config'
Any advice would be appreciated. | 0.197375 | 2 | 1 | With the kind souls above, I figured it out.
I was executing within the directory of user:backend % python -m file_i_need_config from there.
Needed to execute from the parent directory via user:directory % python -m servers.backend.file_i_need_config |
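A related, working-directory-independent trick (a sketch, not the asker's code): resolve paths relative to the importing file itself, so the config is found no matter where the script is launched from. In real code the starting point would be Path(__file__); a literal path stands in here so the snippet runs anywhere.

```python
from pathlib import Path

# pretend this is __file__ inside servers/backend/file_i_need_config.py
fake_file = Path("/home/user/directory/servers/backend/file_i_need_config.py")

# parents[0] = backend, parents[1] = servers, parents[2] = project root
config_path = fake_file.parents[2] / "config.py"
print(config_path)  # /home/user/directory/config.py
```

This helps with data files like config.yml; for the import itself, running with python -m from the project root (as in the answer) remains the clean fix.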
2023-03-19 03:14:27 | 0 | python,numpy,performance,benchmarking,apple-m1 | 1 | 75,779,994 | Varying performance of numpy axpy | 75,779,886 | false | 68 | I was trying to test the performance of numpy using this very simple script:
import numpy as np
import argparse
from timeit import default_timer as timer

p = argparse.ArgumentParser()
p.add_argument("--N", default=1000, type=int, help="size of matrix A")
p.add_argument(
    "--repeat", default=1000, type=int, help="perform computation x = A*b repeat times"
)
args = p.parse_args()

np.random.seed(0)
A = np.random.rand(args.N, args.N)
b = np.random.rand(args.N)
x = np.zeros(args.N)

ts = timer()
for i in range(args.repeat):
    x[:] = A.dot(b)
te = timer()

gbytes = 8.0 * (args.N**2 + args.N * 2) * 1e-9
print("bandwidth: %.2f GB/s" % (gbytes * args.repeat / (te - ts)))
What it does is it creates a random dense matrix, performs matrix-vector multiplication repeat times, and computes the averaged bandwidth of such operation, which I believe includes memory read, computation and memory write. However when I run this script on my laptop, the results vary quite significantly for each run:
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 93.64 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 99.15 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 95.08 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 77.28 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 56.90 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 63.87 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 85.43 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 95.69 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 93.91 GB/s
~/toys/python ❯ python numpy_performance.py --N 8000 --repeat 100
bandwidth: 101.99 GB/s
Is this behavior expected? If so, how can it be explained? Thanks! | 0 | 1 | 1 | There can be multiple reasons for the unstable results: the CPU frequency may not be stable because the machine is not configured to dedicate it to your process alone, other processes can interfere with your runs, and thermal throttling can slow runs down until the chip cools between them.
One thing you could do is make multiple runs and then average the results.
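As a sketch of that averaging approach (assuming nothing beyond NumPy and the standard library; N is shrunk so it runs quickly), timeit.repeat does the repetition and the spread between runs makes the noise visible:

```python
import numpy as np
from timeit import repeat

N = 500          # small so the sketch runs fast; the question used 8000
inner = 50       # dot products per timed run

np.random.seed(0)
A = np.random.rand(N, N)
b = np.random.rand(N)

# Time the same kernel several times; min/mean summarize the run-to-run noise.
times = repeat(lambda: A.dot(b), number=inner, repeat=5)
gbytes = 8.0 * (N**2 + 2 * N) * 1e-9 * inner
best = gbytes / min(times)
mean = gbytes / (sum(times) / len(times))
print("best: %.2f GB/s, mean of 5 runs: %.2f GB/s" % (best, mean))
```

Reporting the best run alongside the mean also helps separate steady-state throughput from interference and throttling.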
2023-03-19 12:06:34 | 2 | python,sqlalchemy,alembic | 1 | 75,787,851 | Running alembic command cause ImportError: cannot import name '_BindParamClause' from 'sqlalchemy.sql.expression' | 75,781,862 | true | 1,211 | This happens whenever I ran any alembic command. I am using sqlalchemy version 2.0.3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/__init__.py", line 8, in <module>
from . import op # noqa
File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/op.py", line 1, in <module>
from .operations.base import Operations
File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/operations/__init__.py", line 1, in <module>
from .base import Operations, BatchOperations
File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/operations/base.py", line 3, in <module>
from .. import util
File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/util/__init__.py", line 9, in <module>
from .sqla_compat import ( # noqa
File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/util/sqla_compat.py", line 8, in <module>
from sqlalchemy.sql.expression import _BindParamClause
ImportError: cannot import name '_BindParamClause' from 'sqlalchemy.sql.expression' (/home/***/anaconda3/lib/python3.8/site-packages/sqlalchemy/sql/expression.py) | 1.2 | 2 | 1 | Solved after uninstalling alembic and reinstalling it afresh. (The traceback shows an old alembic importing _BindParamClause, a name that no longer exists in modern SQLAlchemy, so the reinstall simply pulled an up-to-date, compatible alembic.)
I ran:
pip3 uninstall alembic
pip3 install alembic |
2023-03-19 13:20:41 | 4 | python | 1 | 75,782,359 | Single quotes around types in Python variable declarations | 75,782,276 | true | 44 | Python accepts this statement:
stop:bool = False
I have been told this is better:
stop:'bool' = False
Does it matter? Is there a kind of beautifier that will make my code use a single consistent style? | 1.2 | 1 | 1 | It's not that the quoted style is better, but that it's necessary if you need to use a type that hasn't been defined yet.
Type hints have to be valid expressions at runtime, which means any identifier has to be defined before it can be used as a hint. If that's not possible you can supply a string that contains the name of the type.
One option for a consistent style that doesn't require everything to be quoted is to use from __future__ import annotations, which treats all type hints as strings implicitly. With that, you needn't quote anything. |
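A small sketch of the forward-reference case the answer describes; the class name is used inside its own body, which is exactly where the quoted style becomes necessary:

```python
class Node:
    # Node is not fully defined while the class body executes, so the
    # hint must be the string 'Node' here (a forward reference).
    # With `from __future__ import annotations` at the top of the file,
    # the quotes could be dropped, since every hint becomes a string.
    def child(self) -> 'Node':
        return Node()

stop: bool = False  # bool already exists, so no quotes are ever needed

print(Node.child.__annotations__["return"])  # → Node
```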
2023-03-19 13:44:36 | 0 | python,discord,discord.py | 1 | 75,782,487 | How do i get the name of the author who is using a modal? | 75,782,400 | false | 47 | I'm making a password protected notes command for my bot where the user can enter their password and a note and it gets stored in a txt file
I'm trying to get the display_name of the user who was in the modal.
Despite looking everywhere I've been unsuccessful.
Any help would be appreciated.
This is my Code
@Client.tree.command(name="add", description="Adds a new note")
async def add(interaction: discord.Interaction):
file_name = f"{interaction.user.name}.txt"
if not os.path.isfile(file_name):
await interaction.response.send_message("You do not have a personal Notes file.")
return
class Add(ui.Modal, title='Add Note'):
Password = ui.TextInput(label='Enter password')
Note = ui.TextInput(label='Enter your Note', style=discord.TextStyle.paragraph)
#####I want the name of the user in the curly brackets in the next line#####
with open(f'{}.txt','r') as file:
pwd = file.readline()
print(ui.Member.display_name)
async def on_submit(self, interaction: discord.Interaction):
if Password != pwd:
await interaction.response.send_message(f'You entered a wrong password', ephemeral=True)
else:
with open(f'{file_name}','a') as file:
file.write(Note)
await interaction.response.send_message(f'Note Added', ephemeral=True)
I tried ui.interaction.display_name and several other methods I thought would work to no avail | 0 | 1 | 1 | You can get the name from the Interaction instance, as you've figured out in the code above. If you want to access it in the Modal, but before submission (e.g., in the __init__), you add it as an __init__ argument & pass it in. A modal is just a class, so basic OOP principles work just fine. |
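A minimal sketch of that constructor-injection pattern in plain Python; no Discord calls here, so the class and attribute names are only illustrative stand-ins for the Add modal:

```python
class AddNote:
    """Stand-in for the discord.ui.Modal subclass from the question."""

    def __init__(self, display_name: str):
        self.display_name = display_name        # who opened the modal
        self.pwd_file = f"{display_name}.txt"   # their password file

    def on_submit(self, entered: str, stored: str) -> str:
        if entered != stored:
            return "You entered a wrong password"
        return f"Note added for {self.display_name}"

# In the real command you would construct it from the interaction, e.g.
# Add(interaction.user.display_name), and read self.display_name inside.
modal = AddNote("some_user")
print(modal.on_submit("secret", "secret"))  # → Note added for some_user
```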
2023-03-19 15:18:56 | 0 | python,regex,expression,leading-zero | 2 | 75,783,037 | Regex: Match decimal numerals with no digits before decimal point | 75,783,003 | false | 72 | I am trying to match decimal numerals with no digits before the decimal point in a string using regex. For example, I have the strings:
The dimensions of the object-1 is: 1.4 meters wide, .5 meters long, 5.6 meters high
The dimensions of the object-2 is: .8 meters wide, .11 meters long, 0.6 meters high
I want to capture only the decimal numbers without integer digits and prefix leading zeros to them. So my final desired output will be:
The dimensions of the object-1 is: 1.4 meters wide, 0.5 meters long, 5.6 meters high
The dimensions of the object-2 is: 0.8 meters wide, 0.11 meters long, 0.6 meters high
This is what I have tried so far:
(\d+)?\.(\d+)
This expression is capturing all the decimal numbers such as: 1.4, .5, 5.6, .8, .11, 0.6.
But I need to capture only decimal numbers without integer digits: .5, .8, .11. | 0 | 1 | 1 | Use a negative lookbehind: (?<!\d)(\.\d+) |
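Applied with re.sub and a 0\1 replacement, the lookbehind produces exactly the desired output (sample string taken from the question):

```python
import re

text = ("The dimensions of the object-2 is: "
        ".8 meters wide, .11 meters long, 0.6 meters high")

# (?<!\d) fails whenever a digit sits right before the dot, so 0.6 is
# left alone while the bare .8 and .11 get a leading zero prefixed.
fixed = re.sub(r"(?<!\d)(\.\d+)", r"0\1", text)
print(fixed)
# → The dimensions of the object-2 is: 0.8 meters wide, 0.11 meters long, 0.6 meters high
```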
2023-03-19 21:19:22 | 0 | python-3.x,opencv,cv2,haar-classifier | 3 | 75,872,265 | opencv error conv_winograd_f63.cpp ubuntu 22 python3.10 cv2 although works on 2nd machine | 75,784,986 | false | 183 | I have a development machine working with ubuntu 22, python 3.10 and cv2; when I try to replicate on another machine then I get runtime error:
from the section calling age prediction from haarclassifier and age_net.caffemodel:
line 118 return age_net-forward()
cv2.error: OpenCV(4.7.0-dev)
(note: the same result occurs after building the files from source for cv2 4.7.0 or 4.6.0)
/home/art/opencv_build/opencv/modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.cpp:401: error: (-215:Assertion failed) CONV_WINO_IBLOCK == 3 && CONV_WINO_ATOM_F32 == 4 in function 'winofunc_BtXB_8x8_f32'
I have tried various permutations of installing opencv-python or opencv-contrib-python or building and compiling the opencv files from source, always with the same result. Works on the original machine but always throws this error on the second machine when running the same python code.
I have searched online generally and in stackoverflow and I don't see anyone noting this error.
Anyone know?
Tried to duplicate the machine where it is working and various permutations of installing opencv, either directly:
pip3 install opencv-python
or
pip3 install opencv-contrib-python
or build the opencv files from source,
which is generally to build the dependencies:
sudo apt install build-essential cmake git pkg-config libgtk-3-dev \ libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \ libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \ gfortran openexr libatlas-base-dev python3-dev python3-numpy \ libtbb2 libtbb-dev libdc1394-dev
clone the repositories:
$ mkdir ~/opencv_build && cd ~/opencv_build
$ git clone https://github.com/opencv/opencv.git
$ git clone https://github.com/opencv/opencv_contrib.git
make:
Sudo ~/opencv_build/opencv/cmake -D CMAKE_BUILD_TYPE=RELEASE \ -D WITH_VTK=OFF -D BUILD_opencv_viz=OFF
-D CMAKE_INSTALL_PREFIX=/usr/local \ -D INSTALL_C_EXAMPLES=ON \ -D INSTALL_PYTHON_EXAMPLES=ON \ -D OPENCV_GENERATE_PKGCONFIG=ON \ -D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \ -D BUILD_EXAMPLES=ON ..
and then install
I check all the python code, file folder setup etc is the same, both running ubuntu 22, amd 64 bit.
Works on the original machine, always throws the error on the second. The Python code correctly captures an image, recognizes the face, crops the image and saves it before encountering the error, so the error is specific to the Haar classifier and age prediction step.
I can't find any documentation or comments on the subject. | 0 | 1 | 2 | Yes, I reverted to 4.6.0.66 by calling out the specific version:
'sudo pip install opencv-contrib-python==4.6.0.66'
This version works, whereas 4.7 does not, so I'm sure they will figure out whatever bug exists in their newest release.
2023-03-19 21:19:22 | 0 | python-3.x,opencv,cv2,haar-classifier | 3 | 75,814,634 | opencv error conv_winograd_f63.cpp ubuntu 22 python3.10 cv2 although works on 2nd machine | 75,784,986 | false | 183 | I have a development machine working with ubuntu 22, python 3.10 and cv2; when I try to replicate on another machine then I get runtime error:
from the section calling age prediction from haarclassifier and age_net.caffemodel:
line 118 return age_net-forward()
cv2.error: OpenCV(4.7.0-dev)
(note: the same result occurs after building the files from source for cv2 4.7.0 or 4.6.0)
/home/art/opencv_build/opencv/modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.cpp:401: error: (-215:Assertion failed) CONV_WINO_IBLOCK == 3 && CONV_WINO_ATOM_F32 == 4 in function 'winofunc_BtXB_8x8_f32'
I have tried various permutations of installing opencv-python or opencv-contrib-python or building and compiling the opencv files from source, always with the same result. Works on the original machine but always throws this error on the second machine when running the same python code.
I have searched online generally and in stackoverflow and I don't see anyone noting this error.
Anyone know?
Tried to duplicate the machine where it is working and various permutations of installing opencv, either directly:
pip3 install opencv-python
or
pip3 install opencv-contrib-python
or build the opencv files from source,
which is generally to build the dependencies:
sudo apt install build-essential cmake git pkg-config libgtk-3-dev \ libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \ libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \ gfortran openexr libatlas-base-dev python3-dev python3-numpy \ libtbb2 libtbb-dev libdc1394-dev
clone the repositories:
$ mkdir ~/opencv_build && cd ~/opencv_build
$ git clone https://github.com/opencv/opencv.git
$ git clone https://github.com/opencv/opencv_contrib.git
make:
Sudo ~/opencv_build/opencv/cmake -D CMAKE_BUILD_TYPE=RELEASE \ -D WITH_VTK=OFF -D BUILD_opencv_viz=OFF
-D CMAKE_INSTALL_PREFIX=/usr/local \ -D INSTALL_C_EXAMPLES=ON \ -D INSTALL_PYTHON_EXAMPLES=ON \ -D OPENCV_GENERATE_PKGCONFIG=ON \ -D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \ -D BUILD_EXAMPLES=ON ..
and then install
I check all the python code, file folder setup etc is the same, both running ubuntu 22, amd 64 bit.
Works on the original machine, always throws the error on the second. The Python code correctly captures an image, recognizes the face, crops the image and saves it before encountering the error, so the error is specific to the Haar classifier and age prediction step.
I can't find any documentation or comments on the subject. | 0 | 1 | 2 | Today I tried out OpenCV 4.5.5.
Works for me now. |
2023-03-20 05:41:56 | 1 | python,arduino,pyserial | 1 | 75,788,740 | pySerial sending data from Python to arduino | 75,786,876 | true | 59 | I have a question about sending data using the pySerial Python library.
I'm new to this library, and sorry about my bad English.
How can I send data from Python to Arduino continuously? For example, I want to send a string that contains 1 digit (0 or 1); can I put it in a while loop in Python?
For example:
import serial
import time
import cv2
import mediapipe as mp
from cvzone.HandTrackingModule import HandDetector
ser = serial.Serial('COM5', 9600)
#initiate webcam
cap = cv2.VideoCapture(0)
cap.set(3, 1280)
cap.set(4, 720)
x3 = 0
y3 = 0
xgrab = 0
ygrab = 0
grab = 0
# hand detector
detector = HandDetector(detectionCon=0.8, maxHands=1)
while True:
success, image = cap.read()
hands = detector.findHands(image, draw=False)
if hands:
lmList = hands[0]['lmList']
xgrab, ygrab = lmList[4][:2]
x3, y3 = lmList[3][:2]
if xgrab < x3:
grab = 1
else:
grab = 0
else:
grab = 0
grab = str(grab)
ser.write(grab.encode())
cv2.imshow('Control window', cv2.flip(image,1))
if cv2.waitKey(5) & 0xFF == 27:
break
cap.release()
This code detects your hand gesture and sends 1 or 0 to the Arduino. I put the ser.write command in the while loop to send continuously, but somehow it doesn't work as I expected: my Arduino doesn't turn the LED on/off.
Here is my Arduino code:
String Recstring;
void setup() {
Serial.begin(9600);
pinMode(LED_BUILTIN, OUTPUT);
}
void loop() {
if (Serial.available()){
Recstring = Serial.readStringUntil('\n');
if (Recstring == "1"){
digitalWrite(LED_BUILTIN, HIGH);
}
else{
digitalWrite(LED_BUILTIN, LOW);
}
}
}
I don't know if the Arduino doesn't read the data fast enough to turn the LED on/off. I've tried putting in some time.sleep calls to delay the data, but it's still like that. | 1.2 | 1 | 1 | So I needed to add '\n' (grab = str(grab) + '\n') in my Python code, and then it works, because the Arduino side reads with Serial.readStringUntil('\n') and waits for that terminator.
Thanks to Mark Setchell.
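A sketch of just the string handling on the Python side (ser stands for the serial.Serial('COM5', 9600) object from the question, so the write is left commented out):

```python
grab = 1
# Append the terminator that Serial.readStringUntil('\n') waits for.
payload = (str(grab) + "\n").encode()
print(payload)  # → b'1\n'
# ser.write(payload)
```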
2023-03-20 11:56:49 | 0 | python,json,api,rest,python-requests | 2 | 75,790,033 | Select specific values from JSON output | 75,789,968 | false | 56 | I am querying a REST API and I need to select 2 fields from the adapter output below.
I basically need to make variables from OUT_Detailed Description and OUT_Vendor Ticket Number:
Code:
headers = {'content-type': 'application/json', 'Authentication-Token': authToken}
response = requests.post('http://dev.startschools.local:2031/baocdp/rest/process/:ITSM_Interface:IncidentManagement:QueryIncident/execute', headers=headers, data=json.dumps(get_query_inc_json()))
print(response.text)
json_format = json.loads(response)
Description = (json_format['OUT_Detailed Decription'])
Ref_Number = (json_format['OUT_Vendor Ticket Number'])
response.text printed Output:
[{"name":"OUT_HPD_CI","value":"001"},{"name":"OUT_Detailed Description","value":"Student needs a new card issued"},{"name":"OUT_Vendor Ticket Number","value":"INC0000019"}]
Error:
in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not Response
PS C:\Program Files\SB_Python\AD Extract>
I tried several methods to get just the OUT_Detailed Description and OUT_Vendor Ticket Number values from the output, but it's all failing. | 0 | 1 | 1 | Have you tried doing json_format = json.loads(response.content.decode('utf-8')) to translate your response into a string first? (requests can also decode and parse in one step via response.json().)
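Note that the decoded payload is a list of {"name": ..., "value": ...} pairs, not a flat object, so indexing it by field name needs one more step. A sketch using the sample output from the question (response.json() or json.loads(response.text) would yield the same list):

```python
import json

payload = ('[{"name":"OUT_HPD_CI","value":"001"},'
           '{"name":"OUT_Detailed Description","value":"Student needs a new card issued"},'
           '{"name":"OUT_Vendor Ticket Number","value":"INC0000019"}]')

# Fold the name/value pairs into a plain dict, then index by field name.
fields = {item["name"]: item["value"] for item in json.loads(payload)}
description = fields["OUT_Detailed Description"]
ref_number = fields["OUT_Vendor Ticket Number"]
print(description)  # → Student needs a new card issued
print(ref_number)   # → INC0000019
```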
2023-03-20 14:58:09 | 2 | python,import,pycharm,pyc | 1 | 75,792,046 | PyCharm cannot see my newly compiled .pyc see on import | 75,791,899 | true | 39 | I'm using PyCharm and I have file foo.py. foo.py has a username and base64 representation of a password inside of it. I'm compiling foo.py so it's no longer readable by human eyes.
I'm running a command via the terminal
python -m py_compile foo.py
I can see the compiled .pyc file in the __pycache__ folder, named foo.cpython-39.pyc.
I remove foo.py.
When I add the line import foo, PyCharm indicates it cannot find the module.
If foo.py is retained, then of course the import line is fine.
If I copy and rename the .pyc file from the pycache folder to the root folder, calling it foo.pyc, PyCharm still indicates that it cannot find the module.
I have done this before so I know it's possible, but there is obviously a step I'm missing. Does anyone have any idea what that might be? | 1.2 | 2 | 1 | It is a cache issue, or maybe a configuration issue.
You could try clearing the cache: go to File, then Invalidate Caches / Restart..., then Invalidate and Restart.
You can also check that the .pyc files are in the __pycache__ directory.
2023-03-20 15:32:55 | 0 | python,docker,docker-registry | 1 | 75,863,016 | unable to pull python from official docker registry | 75,792,278 | true | 95 | When I use docker pull python:3 it outputs:
3: Pulling from library/python
32fb02163b6b: Retrying in 1 second
167c7feebee8: Retrying in 1 second
d6dfff1f6f3d: Retrying in 1 second
e9cdcd4942eb: Waiting
ca3bce705f6c: Waiting
5e1c6c4f8bbf: Waiting
2da42ff3382c: Waiting
86f9457966ab: Waiting
896264e2a03c: Waiting
error pulling image configuration: download failed after attempts=6: x509: certificate signed by unknown authority
How is that possible if by default it pulls from the official Docker registry? | 1.2 | 1 | 1 | I had company software installed that was messing with my SSL certificates.
2023-03-20 17:47:39 | 0 | python,pandas,string | 3 | 75,799,319 | How to convert a string list to (object) list in Pandas? | 75,793,632 | true | 79 | I have the dictionary below in which the values are string type:
data = {'object_1':"['abc']",
'object_2':"['def']",
"object_3": "['xyz']",
"object_4": "['abc']"}
I want to convert the values of the dictionary from a string type to a list type. I tried to use literal_eval and eval() but without any success to get pure python lists:
Desired output:
data = {'object_1':['abc'],
'object_2':['def'],
"object_3": ['xyz'],
"object_4": ['abc']}
Thanks for any advice. | 1.2 | 1 | 1 | Saving the pandas file with a different separator saved the day.
2023-03-20 19:09:08 | 0 | python,xlwings | 1 | 75,860,692 | Unable to save excel workbook on mac using xlwings | 75,794,357 | false | 73 | I'm running the same Xlwings code with jupyter notebook on both MAC and Windows to save a new Excel workbook in a folder.
import xlwings as xw
import os
wb = xw.Book()
wb.save(os.path.join(os.getcwd(),r'pro/fi/g.xlsx'))
wb.close()
It runs on Windows fine but gives the following error on MAC;
CommandError: Command failed:
OSERROR: -50
MESSAGE: Parameter error.
COMMAND: app (pid=71190). workbooks [ 'Book2'].save_workbook_as (filename='Macintosh HD: Users: mohit: Desktop: pro:fi:g.xlsx', overwrite=True, file_format=k.Excel_XML_file_format, timeout=-1, password=None) | 0 | 1 | 1 | I've experienced the same issue, and I noticed it was because I chose a OneDrive folder to save the file.
When I chose a folder outside OneDrive the problem disappeared.
Well, on the first save Excel asks me to grant permission to save a file in the folder, but it saves without a problem on subsequent saves.
2023-03-20 21:48:32 | 2 | python,sockets,server,tcp,keep-alive | 1 | 75,795,757 | Is it ok to keep TCP connections alive throughout the serving time? | 75,795,630 | true | 37 | I'm making a socket TCP server with python that handles multiple connections simultaneously.
So I defined a dictionary to hold the clients' sockets, so I could access them and route information between the clients easily. But I wonder if there is a better way to connect them together without holding, for example, a hundred connections open at the same time, which is really close to keep-alive in HTTP connections, something I believe we shouldn't use excessively and throughout the connection time. So do you guys have any ideas?
That's how the code looks:
def run(self, server, port):
while True:
server.listen(5)
conn, addr = server.accept()
self.connectionMap[str(conn.getpeername())] = conn
print('New connection:', conn.getpeername())
Thread_(target=self.manageIndividual, args=([conn, addr, port])) | 1.2 | 1 | 1 | There's nothing wrong with keeping 100 connections open, especially if they are mostly idle.
In the past, having 100 threads open used to be a problem, since (on many operating systems) each thread reserves 1MB of memory, so it was desirable to handle many connections per thread. But memory is plentiful now, so that is no longer a problem.
In the past, having 1000 connections open on one thread used to be a problem as well, since there weren't good ways to do that, but now there is epoll (Linux) and I/O Completion Ports (Windows).
But those are historical problems. Nowadays you can get away with thousands of threads, and there is also no problem handling tens of thousands of connections on one thread.
This answer is necessarily shallow since it's hard to prove a universal negative. In order to really show why it's not a bad idea I'd have to know why you think it is, and then disprove that. |
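For reference, Python exposes those readiness APIs (epoll, kqueue, etc.) through the standard selectors module; a hedged sketch of watching sockets on one thread, with a socketpair standing in for an accepted client connection:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # picks epoll/kqueue/... per platform

server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"hello")

received = b""
for key, events in sel.select(timeout=1.0):
    # One select() call reports every ready socket; thousands of
    # registered connections would still be served by this one loop.
    received = key.fileobj.recv(1024)

print(received)  # → b'hello'
sel.close()
server_side.close()
client_side.close()
```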
2023-03-21 10:43:35 | 1 | python,fastapi,uvicorn,asgi | 1 | 75,800,663 | How can my python program do work and serve results via FastAPI at the same time? | 75,800,159 | false | 83 | I have a Python program that reads financial market data to do some analysis. I want to offer the results of the analysis via FastAPI. The problem is that when I start the uvicorn server for FastAPI, the rest of my Python program, especially the main loop, is not executed properly.
It is unclear to me how I can use FastAPI in a Python program that also has to do other work. This is what my main.py looks like (left out imports for simplicity...):
# Add fastAPI.
app = FastAPI()
# Start market scanner.
ms1 = MarketScanner(name="ms1", sample_size=10)
# Get the root.
@app.get("/")
def read_root():
return {"Welcome": "to analyst."}
# For test purpose, get length of the current market data cache.
@app.get("/mdata/")
def read_market_data_len():
mdata = ms1.get_market_data_len()
#print(f"mdata: {mdata}")
return {"Market data length": mdata}
if __name__ == "__main__":
ms1.start()
while True:
ms1.get_new_market_data()
print(f"market data length: {ms1.get_market_data_len()}")
time.sleep(3)
I start the uvicorn server from a command line:
uvicorn main:app --reload
When the uvicorn server is running, I start the Python program with
python main.py
I can see the market data length from the print in the while-loop increase, so my MarketScanner is fetching data from the financial data provider, but when I navigate to 127.0.0.1:8000/mdata in my browser, I always see "Market data length: 0", it just never increases.
I am a bit lost here: how can I build a Python app with a FastAPI, but also a lot of other features that it handles in the background? | 0.197375 | 1 | 1 | I gather that you are running two different processes, one serving the web app and the other running this module alone. Those two programs are completely separate from one another. Each process has its own MarketScanner instance that is completely independent of the other.
One option is to move your MarketScanner loop into a function and run that function in a thread at module import time (or in a server startup hook), so the scanner and the API handlers share one process.
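A minimal sketch of the single-process shape (class and method names taken from the question; the fetch body is a placeholder): run the scan loop in a daemon thread so API handlers can read the same instance. In FastAPI you would start the thread from a startup hook rather than at the bottom of the module.

```python
import threading
import time

class MarketScanner:
    def __init__(self):
        self.market_data = []

    def get_new_market_data(self):
        self.market_data.append(object())   # placeholder for a provider fetch

    def get_market_data_len(self):
        return len(self.market_data)

ms1 = MarketScanner()

def scan_loop():
    while True:
        ms1.get_new_market_data()
        time.sleep(0.01)

# Daemon thread: dies with the process, so it never blocks shutdown.
threading.Thread(target=scan_loop, daemon=True).start()

time.sleep(0.1)                       # stand-in for the server's lifetime
print(ms1.get_market_data_len() > 0)  # handlers now see live data
```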
2023-03-21 11:32:06 | 0 | python,api,google-ads-api | 2 | 75,903,477 | Google Ads API "RefreshError: ('unauthorized_client: Unauthorized', {'error': 'unauthorized_client', 'error_description': 'Unauthorized'})" | 75,800,636 | false | 245 | I am trying to make a call to Google Ads API to access campaign data in python. I have completed the following steps:
Created a web app and enabled Google-Ads API
Saved json file that contains client id and secret information
Generated refresh token
Updated google-ads yaml file based on the client id, client secret, refresh token, developer token and customer id
However when I try this:
from google.ads.googleads.client import GoogleAdsClient
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
I get the error: RefreshError: ('unauthorized_client')
I have rechecked for any typos or white-spaces. I have the "Standard Access" on my email id: on Google-Ads account under the Access and Security section. Any help would be really great. | 0 | 1 | 1 | You need to generate your refresh token again.
Is your app set to testing or public? If it's in testing, are you added as an authorised test user?
2023-03-21 11:35:55 | 2 | python,python-packaging | 1 | 75,800,745 | Accessing model when creating new python package | 75,800,672 | false | 33 | I'm making a python package where the main functionality is using a model that I've trained. I want it to be as easy as possible to access the model for end users so they can just use the main functions without needing to actively go find and download the model separately.
I initially thought I would just include the model file in the package but it is on the large side (~95MB) and Github warns me about it so my feeling is I should try find an alternative way.
I've tried compressing the model by zipping it but that doesn't make much of a dent in the file size. The model itself is fixed so I can't reduce the size by training an alternative version with different hyperparameters.
My solution at the moment is to not include the model and instead download it from s3 when the relevant class is used (this is an internal package, so everybody using it would hypothetically have access to the s3 bucket). However, I don't really like this solution because it requires users to have things set up to access AWS with a specific role, and even if I include examples / documentation / hints in error messaging, I can imagine the experience not being ideal. They also have the option of passing a file path if they have the model saved somewhere, but again that requires some initial setup.
I've tried researching ways to access / package models but haven't come up with much.
So my questions are:
Is there a way to include this model in the package?
Are there other ways to access the model that I haven't thought about? | 0.379949 | 1 | 1 | Is there a way to include this model in the package?
Yes, you can include any file—no matter the size—in a Python package.
Are there other ways to access the model that I haven't thought about?
If the user is going to have to download it anyway, why not in the package?
One reason: if the model rarely changes, but the code around it does, the users will have to repeatedly download a large package.
Could you have two Python packages? One that is just the model and the other that is the code? That way the user will only need to download the model again if a new version is available. |
2023-03-21 13:52:52 | 0 | python,class,oop,contextmanager | 2 | 75,802,242 | Catch except in __enter__() | 75,802,104 | false | 55 | I wanted to try using the context manager style. People say it's better: it can also catch exceptions, and it's more readable.
with my_class() as some:
some.do()
The examples I found all look like this:
class Exm:
def __init__(self):
some_data_get
connection = connection_to_resource_good # for example 1
def __enter__(self):
connection = connection_to_resource_good # for example 2
return connection
def __exit__(self, exc_type, exc_val, exc_tb):
connection.close()
return some_True # for continue code if some error in some.do()
But in the examples I've seen, no one catches an exception when accessing a resource. If there is an error in the connection, then this will break the continuation of the code
class My:
def __init__(self):
print("INIT")
def __enter__(self):
connection = 1 / 0 # error bad connection
return connection
def __exit__(self, exc_type, exc_val, exc_tb): # not call - error in init or enter
connection.close()
return some_True # for continue code if some error in some.do()
Here is a more specific example:
class DB:
def __init__(self, host, user, password, db_name):
self.host = host
self.user = user
self.password = password
self.db_name = db_name
def __enter__(self):
self.connection = name_some_module_for_sql.connection(self.host, self.user, self.password, self.db_name)
self.cursor = self.connection.cursor()
return self.cursor
def __exit__(self, exc_type, exc_val, exc_tb):
self.cursor.close()
self.connection.close()
return True # for continue code
for an example in the code, you need to insert in three places
with DB(1, 1, 1, 1) as cursor: # into db_1
cursor.insert_to_db_word("Hello")
cursor.insert_to_db_word("My")
cursor.insert_to_db_word("Dear")
cursor.insert_to_db_word("Friend")
with DB(2, 2, 2, 2) as cursor: # into db_2
cursor.insert_to_db_word("Hello")
cursor.insert_to_db_word("My")
cursor.insert_to_db_word("Dear")
cursor.insert_to_db_word("Friend")
with DB(3, 3, 3, 3) as cursor: # into db_3
cursor.insert_to_db_word("Hello")
cursor.insert_to_db_word("My")
cursor.insert_to_db_word("Dear")
cursor.insert_to_db_word("Friend")
How can I handle exceptions in the init or enter functions so that the cursor.insert_to_db_word functions are not called and the program code continues to be executed if there is a problem connecting to the first database?
Only for class
P.S. Sorry, this was run through Google Translate.
Before that, I implemented it with decorators. I wanted to use a class for beautiful code and readability | 0 | 1 | 1 | __exit__ is not there for "suppressing" exceptions. Its main idea is to ensure that there is a piece of code which is always going to be executed, even if an exception occurs "inside" that block of code. Since an exception might occur, __exit__ gets the chance to inspect it and possibly suppress it by returning True in some situations.
With that being said, I think it doesn't make sense for __enter__ to also have this ability to suppress exceptions. You need a try-except block for that, around the with statement or inside __enter__ (like anywhere else in the code).
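One way to get the "skip this database and keep going" behaviour, sketched with a simplified stand-in for the DB class (the connection failure is simulated by raising in __enter__):

```python
class DB:
    def __init__(self, host):
        self.host = host

    def __enter__(self):
        if self.host == "bad_host":
            raise ConnectionError(f"cannot connect to {self.host}")
        return f"cursor-for-{self.host}"

    def __exit__(self, exc_type, exc_val, exc_tb):
        return True  # only swallows errors raised inside the with block

results = []
for host in ("db_1", "bad_host", "db_3"):
    try:
        with DB(host) as cursor:
            results.append(cursor)   # stands in for the insert calls
    except ConnectionError as err:
        print(f"skipping {host}: {err}")

print(results)  # → ['cursor-for-db_1', 'cursor-for-db_3']
```

Because __enter__ raises before the block starts, the body never runs for the failing host, and the surrounding try-except lets the loop continue.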
2023-03-21 15:04:42 | 2 | python,django | 1 | 75,802,999 | Django form field uploading error bad filename | 75,802,908 | true | 40 | In that code I try to get the first page to make preview image. I use PyMuPDF library
def form_valid(self, form):
text_book = form.save(commit=False)
text_book.owner_id = self.request.user.pk
# i'm trying to get the first page to make preview image
my_file = self.request.FILES['book_file'].file
pdf_file = fitz.open(my_file)
page = pdf_file.load_page(0)
pix = page.get_pixmap()
preview_image = Image.frombytes('RGB', [pix.width, pix.height], pix.samples)
preview_image.thumbnail((200, 200))
image_io = BytesIO()
preview_image.save(image_io, format='JPEG')
image_data = image_io.getvalue()
content_file = ContentFile(image_data)
text_book.preview_image.save(f'text_book_preview_image{Path(text_book.book_file.name).stem}_preview.jpg',
content_file,
save=False)
text_book.save()
return HttpResponseRedirect(self.success_url)
Everything works. But when I put that in settings:
DATA_UPLOAD_MAX_MEMORY_SIZE = 1024 * 1024 * 500
FILE_UPLOAD_MAX_MEMORY_SIZE = DATA_UPLOAD_MAX_MEMORY_SIZE
I get "bad filename".
When code works without MAX_MEMORY_SIZE settings, my_file is
<tempfile._TemporaryFileWrapper object at 0x7f6491f1ae20>
But when I put that settings my_file is
<_io.BytesIO object at 0x7f69b6945950>
What is wrong? | 1.2 | 2 | 1 | Your code is trying to open a file using a BytesIO object instead of a file path, which is likely what causes the "bad filename" error. The difference between tempfile._TemporaryFileWrapper and _io.BytesIO is that the former represents a temporary file on disk, while the latter represents a file-like object in memory.
When you set DATA_UPLOAD_MAX_MEMORY_SIZE and FILE_UPLOAD_MAX_MEMORY_SIZE, you are limiting the size of files that can be uploaded to your server in memory rather than on disk. This means that larger files will not be saved to a temporary file on disk but instead will be held in memory as a BytesIO object.
To fix the "bad filename" error, you may need to modify the code that opens the PDF file to accept a BytesIO object instead of a file path. One way to do this is to pass the my_file object directly to the fitz.open method, like this:
pdf_file = fitz.open(stream=my_file.read())
This should allow your code to work even when DATA_UPLOAD_MAX_MEMORY_SIZE and FILE_UPLOAD_MAX_MEMORY_SIZE are set. However, be aware that if you allow large files to be uploaded and held in memory, you may run into performance issues or memory errors on your server. |
2023-03-21 15:29:32 | 1 | python,multiprocessing,pybind11 | 1 | 75,995,774 | multiprocessing on pybind11 c++ function hangs on pool.join() | 75,803,215 | false | 83 | I'm trying to run a c++ function in parallel on python. I've wrapped the c++ function with pybind11, and then again define a wrapper function in python, that I call using multiprocessing.pool.map_async().
Here is the basic code.
displayNameLst is just a list of strings in python
displayNameLensLst is a list of ints, just gives displayNameLensLst[j] = len(displayNameLst[j])
cutoffLst is a list of ints that gives the cutoff to Levenshtein distance where the inner c++ Levenstein distance function can just return False.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <vector>
#include <string>
#include <iostream>
#include <string.h>
#include <sstream>
using namespace std;
bool LevenshteinDistanceCutoff(const char* str1, const char* str2, const uint_fast16_t l1, const uint_fast16_t l2, const uint_fast16_t cutoff, uint_fast16_t** d)
{
// skipped for brevity
}
vector<vector<uint_fast32_t>> CheckPairwiseLevensteinDistOfList(vector<string> &str_list, vector<uint_fast16_t> &strLens, const uint_fast16_t cutoff)
{
uint_fast32_t size = str_list.size();
cout << "length of str_list = " << size << endl;
uint_fast16_t maxLen = strLens[size-1];
uint_fast16_t penultimateLen = strLens[size-2];
cout << "maxLen " << maxLen << endl;
cout << "penultimateLen " << penultimateLen << endl;
vector<vector<uint_fast32_t>> indPotMatches(size);
cout << "current size indPotMatches " << indPotMatches.size() << endl;
cout << "max size indPotMatches " << indPotMatches.max_size() << endl;
uint_fast16_t** d = new uint_fast16_t*[penultimateLen+1];
for (uint_fast16_t i=0; i<penultimateLen+1; ++i)
{
d[i] = new uint_fast16_t[maxLen+1];
}
for (uint_fast32_t j=0; j<size-1; ++j)
{
for (uint_fast32_t k=j+1; k<size; ++k)
{
if (LevenshteinDistanceCutoff(str_list[j].c_str(), str_list[k].c_str(), strLens[j], strLens[k], cutoff, d))
{
indPotMatches[j].push_back(k);
}
}
}
for (uint_fast16_t i=0; i<penultimateLen+1; ++i)
{
delete[] d[i];
}
delete[] d;
return indPotMatches;
}
PYBIND11_MODULE(CHelpers, m) {
m.def("CheckPairwiseLevensteinDistOfList", &CheckPairwiseLevensteinDistOfList, pybind11::return_value_policy::take_ownership);
}
and the python code:
import sys
import multiprocessing as mp
sys.path.append(DIR of C++.so)
import CHelpers
sys.path.remove(DIR of C++.so)
NBR_PROC = 12
# python wrapper around C++ function
def CheckPairwiseLEvensteinDistOFListWrapper(arg):
locStartValRange = arg[0]
locEndValRange = arg[1]
locDisplayNameLst = arg[2]
locDisplayNameLensLst = arg[3]
locCutoff = arg[4]
tmpInd = CHelpers.CheckPairwiseLevensteinDistOfList(locDisplayNameLst, locDisplayNameLensLst, locCutoff)
return locStartValRange, locEndValRange, locCutoff, tmpInd
# some other minor stuff like global variables
if __name__ == '__main__':
# A bunch of other stuff to prepare the args list to be sent to the multiprocessing pool
lstArgsForMap = [(startValRangeLst[j], endValRangeLst[j], displayNameLst[startValRangeLst[j]:endValRangeLst[j]].copy(), displayNameLensLst[startValRangeLst[j]:endValRangeLst[j]].copy(), cutoffLst[j]) for j in range(len(startValRangeLst))]
mp.set_start_method('spawn')
pool = mp.Pool(processes = NBR_PROC, maxtasksperchild=1)
res = pool.map_async( CheckPairwiseLEvensteinDistOFListWrapper, lstArgsForMap)
pool.close()
pool.join()
res = res.get()
# other post-processing of res
Now the weird thing is that this executes without any issues on my laptop, but when I try this on my remote server, it always hangs on the pool.join(). Eventually I see that there is 1 process left, taking 0% of CPU and ~20% of memory. After a very long time (12+ hours) the process eventually gets killed, and I get the error:
“/usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d 'w”
Both the server and laptop run ubuntu 20.04. python3.8 and python3 both point to python 3.8.10.
The only difference between the server and my laptop is the size of the list of strings that I'm trying to compare. On my laptop it's roughly 1M, while on the server it's about 250M. (This does take up a lot of memory, but the server has ~500GB of RAM, and it can easily hold a pandas dataframe with all the strings + a bunch of other columns of lists.)
I changed the start method from "fork" to "spawn" after reading about issues that fork causes with these kinds of memory leaks.
I also added maxtasksperchild to create a new process for each task.
Added pybind11::return_value_policy::take_ownership to make sure python knows it's supposed to handle the deletion of vector of vectors that each task returns (maybe this is not working and that's the issue)
I've done a lot of googling about this, but the issue seems very murky. It's the first time I use pybind11 and multiprocessing together and I am basically a total noob with both. I'm not sure where to even start to continue troubleshooting this. Any help would be much appreciated, since I am hoping to use this combination of parallel python + inner c++ loops more in the future. Worst case I could rewrite the entire thing in c++ using threads. | 0.197375 | 1 | 1 | It turns out the problem was mostly that my return values of vectors of vectors of ints were too large in some cases. I am doing a pairwise comparison of strings, and for some inputs, there were just way too many matches, which meant I ended up essentially with a non-sparse 1M x 1M matrix. I don't know if I should delete the question since it's not an issue with multiprocessing at all.
The memory management of multiprocessing, with forkserver duplicating memory for every process and then tripling that memory usage when pickling (that is my understanding of how it works), didn't help of course, and is very relevant when you're operating close to memory limits.
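For what it's worth, the blow-up is easy to reproduce with a toy measurement (n and the match patterns here are made up): pickling a dense pairwise-match result is orders of magnitude bigger than a sparse one, and every worker result must pass through the pool's pickled result queue.

```python
import pickle

n = 2000
# Dense: every string "matches" every later one -> ~n^2/2 indices in total
dense = [list(range(j + 1, n)) for j in range(n)]
# Sparse: at most one match per string
sparse = [[j + 1] if j + 1 < n else [] for j in range(n)]

dense_bytes = len(pickle.dumps(dense, protocol=pickle.HIGHEST_PROTOCOL))
sparse_bytes = len(pickle.dumps(sparse, protocol=pickle.HIGHEST_PROTOCOL))

print(dense_bytes // sparse_bytes)  # dense is hundreds of times larger
```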
2023-03-21 18:50:42 | 0 | python,bots,telegram,emulation,pyautogui | 2 | 75,805,384 | Telegram bot that can play in emulator (BizHawk) | 75,805,306 | false | 73 | I'm trying to make a bot that can read all messages sent in a group, and play the game. Something close to Twitch Plays Pokemon but for every game.
Right now I have this code; the bot works fine in Telegram, but once it gets the window focused and should move, it doesn't...
import logging
import pyautogui # Import the pyautogui library
import pygetwindow as gw
from telegram import Update
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext
# Configure logging
logging.basicConfig(level=logging.INFO)
TOKEN = "TOKEN"
MOVEMENTS = {
"up": "w",
"Up": "W",
"down": "s",
"left": "a",
"right": "d"
}
def focus_pokemon_window(window_title):
try:
window = gw.getWindowsWithTitle(window_title)[0]
window.restore()
window.activate()
except IndexError:
print(f"No window with the title '{window_title}' was found.")
def move_character(direction):
focus_pokemon_window("Pokemon - FireRed Version (USA) [Gameboy Advance] - BizHawk")
pyautogui.hotkey(MOVEMENTS[direction])
def start(update: Update, context: CallbackContext):
pass
def handle_text(update: Update, context: CallbackContext):
bot_username = context.bot.username
text = update.message.text.lower()
if f"@{bot_username.lower()}" not in text:
return
# Remove the bot mention from the text
text = text.replace(f"@{bot_username.lower()}", "").strip()
if text in MOVEMENTS:
move_character(text)
else:
update.message.reply_text("Unknown command. Please send a valid direction (up, down, left, right).")
def main():
updater = Updater(TOKEN)
dispatcher = updater.dispatcher
# Add command and message handlers
dispatcher.add_handler(CommandHandler("start", start))
dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, handle_text))
# Start the bot
updater.start_polling()
updater.idle()
if __name__ == "__main__":
main()
the game will be streamed by OBS privately.
I tried all kinds of solutions, but it seems that when the window pops up it doesn't make any movement. | 0 | 1 | 1 | In my experience, anything that involves pulling up a specific window by name using something like pygetwindow or pywin32 doesn't really work consistently, sometimes works and sometimes doesn't. I blame Microsoft for changing the Windows API constantly without much documentation. You could try using Pyautogui to get the window. To click an image in Pyautogui, just use pyautogui.click(x = image_name). Make sure that your screencap is accurate, if you're on Windows hit the Windows button, then type "snip". Drag over the part you want to click then click save. If you can't crop the image small enough, download GIMP and crop it with that.
2023-03-21 22:01:35 | 1 | python,regex | 3 | 75,806,778 | Capture strings between groups with groups aren't fixed (RegEx) | 75,806,710 | true | 51 | I'm trying to capture a group of strings between two groups of strings. The RegEx is mostly working, but it doesn't capture all when there is a change in pattern.
The string is:
2023-03-20 / 10:56:58 4737 Security-Enabled Global Group Modified 73 high SRVDC2 john.smi.admin 10.7.3.252 1
After the date and time is a four-digit number, which sometimes is not present, so the log shows N/A instead. That's when I'm having trouble. The RegEx must be able to capture both the four-digit number and the N/A message.
Here's what I've tried:
import re
string = '2023-03-20 / 10:56:58 4737 Security-Enabled Global Group Modified 73 high SRVDC2 john.smi.admin 10.7.3.252 1'
pattern = '(?<=\d{4} )(.*?)(?=\s\d{2}\s)'
res = re.findall(pattern,string,re.MULTILINE)
print(res) | 1.2 | 2 | 1 | Use '(?<=\d{4} | N/A )(.*?)(?=\s\d{2}\s)' instead.
It looks behind to 4 digits and space OR N/A between spaces. This is needed to keep lookbehind group fixed length. |
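A quick check of the suggested pattern against both log shapes (the second line is just the first with the event ID replaced by N/A):

```python
import re

pattern = r'(?<=\d{4} | N/A )(.*?)(?=\s\d{2}\s)'

with_id = ('2023-03-20 / 10:56:58 4737 Security-Enabled Global Group '
           'Modified 73 high SRVDC2 john.smi.admin 10.7.3.252 1')
without_id = with_id.replace(' 4737 ', ' N/A ')

print(re.findall(pattern, with_id))
print(re.findall(pattern, without_id))
# Both print: ['Security-Enabled Global Group Modified']
```

Note that each lookbehind branch is exactly five characters long, which is what satisfies Python's fixed-width lookbehind requirement.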
2023-03-22 10:09:07 | 0 | python,windows,pjsip,pjsua2 | 1 | 75,850,914 | How to create PJSUA2 Python 3 package? | 75,810,866 | false | 112 | I have a very hard time creating a PJSUA2 package for Python 3.x on Windows 10. I downloaded the source code from the pjsip site and I'm able to compile the C++ code without problems, but I cannot build the PJSUA2 module for Python. The docs mention using SWIG, but I had no luck so far.
When I run make from ./pjsip-apps/src/swig I get the following:
Makefile:1: ../../../build.mak: No such file or directory
make: *** No rule to make target '../../../build.mak'. Stop.
When I run make from ./pjsip-apps/src/swig/python I get:
sed: -e expression #1, char 8: unterminated `s' command
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1250'>
OSError: [Errno 22] Invalid argument
python3 setup.py build --compiler=mingw32
helper.mak:2: /build/common.mak: No such file or directory
make[1]: *** No rule to make target '/build/common.mak'. Stop.
helper.mak:2: /build/common.mak: No such file or directory
make[1]: *** No rule to make target '/build/common.mak'. Stop.
helper.mak:2: /build/common.mak: No such file or directory
make[1]: *** No rule to make target '/build/common.mak'. Stop.
helper.mak:2: /build/common.mak: No such file or directory
make[1]: *** No rule to make target '/build/common.mak'. Stop.
running build
running build_py
running build_ext
building '_pjsua2' extension
x86_64-w64-mingw32-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -O2 -pipe -fno-ident -I/c/mingw-builds/ucrt64-seh-posix/x86_64-1220-posix-seh-ucrt-rt_v10-rev2/mingw64/opt/include -I/c/mingw-builds/ucrt64-seh-posix/prerequisites/x86_64-zlib-static/include -I/c/mingw-builds/ucrt64-seh-posix/prerequisites/x86_64-w64-mingw32-static/include -D__USE_MINGW_ANSI_STDIO=1 -IC:/ProgramData/chocolatey/lib/mingw/tools/install/mingw64/opt/include/python3.9 -c pjsua2_wrap.cpp -o build/temp.mingw_x86_64_ucrt-3.9/pjsua2_wrap.o
pjsua2_wrap.cpp:3841:10: fatal error: pjsua2.hpp: No such file or directory
3841 | #include "pjsua2.hpp"
| ^~~~~~~~~~~~
compilation terminated.
error: command 'C:\\ProgramData\\chocolatey\\bin/x86_64-w64-mingw32-gcc.exe' failed with exit code 1
make: *** [Makefile:37: _pjsua2.so] Error 1
I don't understand what could be wrong, as I literally only downloaded the source code and did not touch anything. | 0 | 1 | 1 | When I first tried to install the PJSIP/PJSUA2 package on Windows, I ran into constant build errors that would never go away. Instead, try switching to a Linux-based operating system. When that's not possible, use the Windows Subsystem for Linux (WSL2), which gave me great success.
(Note that the use of audio is not possible inside WSL2, kind of defeating the purpose depending on what your project is about)
I'm hoping to have helped you out! |
2023-03-14 04:09:50 | 0 | python,flask,sqlite | 2 | 75,811,087 | Value interpolation issue with Python SQLite3 library | 75,810,940 | true | 24 | I have an odd problem with an SQL query and I'm finding it difficult to debug unfortunately.
I have a query that inserts a user into a db, if I run this debug statement I get the values returned from the form.
app.logger.debug(f'''{request.form.get("firstName").capitalize()}, {request.form.get("lastName").capitalize()}, {request.form.get("userName").lower()}, {request.form.get("inputEmail").lower()}, {generate_password_hash(request.form.get("inputPassword"), method="sha256")}''')
which are as follows
DEBUG: David, Taylor, taylord, dave@imagetaylor.com, sha256$xXfFJg6EKm9V7O5D$f8eadba22c3ad99c4133ac9ae4d2d25dfe9665b70df04d51417d1caec708ab4c
if I then plug those into the SQL query everythings works
dbexe("INSERT INTO users(firstname,lastname,username,email,hash) VALUES('David','Taylor','taylord','dave@imagetaylor.com','sha256$04meQSacFSYL12aq$b17d53e8a3de54e0dcf66c11cf36858f551b9b72a771b18c9b8f3bf24599f8b2')")
However when I use safe interpolation through the sqlite3 library it fauls and I'm having trouble finding a way to see the issue.
dbexe("INSERT INTO users(firstname,lastname,username,email,hash) VALUES('?','?','?','?','?')", request.form.get("firstName").capitalize(), request.form.get("lastName").capitalize(), request.form.get("userName").lower(), request.form.get("inputEmail").lower(), generate_password_hash(request.form.get("inputPassword"), method="sha256"))
The dbexec function is as follows.
def dbexe(query):
try:
retval = []
db = sqlite3.connect("main.db")
db.row_factory = dict_factory
for row in db.execute(query):
retval.append(row)
db.commit()
db.close()
except:
return "Failed"
return retval | 1.2 | 1 | 1 | Use VALUES(?,?,?,?,?) instead of VALUES('?','?','?','?','?').
There's no safe interpolation. The code is using a query with parameters, not string interpolation of any kind. ? is a parameter, not a format placeholder, and shouldn't be quoted. The parameter values are not embedded in the query text itself. They're sent as separate, strongly-typed values to the database alongside the query. The database then compiles the query into a parameterized execution plan which gets executed using those values.
This way you can send numbers, dates, or strings containing anything to the server without worrying about formats or injection risks. |
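A self-contained sketch of the corrected call against an in-memory database (note that the question's dbexe helper would also need to accept the values and forward them to db.execute):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (firstname TEXT, lastname TEXT, "
           "username TEXT, email TEXT, hash TEXT)")

row = ("David", "Taylor", "taylord", "dave@imagetaylor.com", "sha256$...")
# Unquoted ? placeholders; the values travel separately from the SQL text
db.execute("INSERT INTO users(firstname, lastname, username, email, hash) "
           "VALUES(?, ?, ?, ?, ?)", row)
db.commit()

stored = db.execute("SELECT firstname, username FROM users").fetchone()
print(stored)  # ('David', 'taylord')
```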
2023-03-22 12:19:22 | 1 | python,python-3.x,time-complexity,binary-search-tree,inorder | 1 | 75,812,622 | Time Complexity of Binary Search Tree Inorder Traversal | 75,812,184 | true | 41 | I found the following python code for inorder traversal of a binary search tree, with an explanation that its time complexity is O(n), where n is the number of nodes in the tree. I understand that the code visits every node in the tree, so its time complexity is linear in the number of nodes, but at each level of recursion, isn't it also performing an addition operation on lists that should take O(n) time? Shouldn't the time complexity be O(n^2)?
def inorder(r):
return inorder(r.left) + [r.val] + inorder(r.right) if r else [] | 1.2 | 1 | 1 | Yes. List concatenation in Python seems to be an O(m + n) operation, as values are copied each time. Therefore, the work per node is not constant but depends on the length of its children. So you are correct in your thinking and this code is not O(n). However, it could easily be implemented in such a way that it would be O(n).
The worst-case complexity of this code would indeed be O(n^2), as you claim. It is easiest to think of the extreme case if the tree is so unbalanced that it is just a 1D linked list. Then, you would obtain a complexity of O(1 + 2 + ... + n) = O(n^2). |
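For reference, an O(n) variant that appends into one shared list instead of concatenating (Node here is a minimal stand-in for whatever tree class the question uses):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(r):
    out = []
    def walk(node):
        if node:
            walk(node.left)
            out.append(node.val)   # O(1) amortized, so O(n) overall
            walk(node.right)
    walk(r)
    return out

print(inorder(Node(2, Node(1), Node(3))))  # [1, 2, 3]
```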
2023-03-22 14:33:16 | 0 | python,librosa | 1 | 75,815,404 | Python, working with sound: librosa and pyrubberband conflict | 75,813,603 | false | 221 | I have the following script that I was using to manipulate an mp3:
import librosa
import soundfile as sf
from playsound import playsound
from direct.showbase.ShowBase import ShowBase
#import pyrubberband as pyrb
filename = "music.mp3"
y, sr = librosa.load(filename)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
bars = []
bar_dif=[]
initial_beat = beat_frames[0]
for i in range(0,len(beat_frames)):
if i != 0:
bar_dif.append(beat_frames[i]-beat_frames[i-1])
if i % 16 == 0:
bars.append(beat_frames[i])
for i in range(0,len(bars)):
if i != len(bars) - 1:
start = bars[i]
end = bars[i+1]+1
start_sample = librosa.frames_to_samples(start)
end_sample = librosa.frames_to_samples(end)
y_cut = y[start_sample:end_sample]
sf.write(f"bar_{i}.wav", y_cut, sr)
base = ShowBase()
#Section A
#sound = base.loader.loadSfx("bar_27.wav")
#sound.setLoop(True)
#sound.setPlayRate(1.5)
#sound.play()
#base.run()
#Section B
#stretched_samples, _ = sf.read("bar_27.wav")
#tempo_factor = 1.5
#stretched_samples = pyrb.time_stretch(stretched_samples, sr, tempo_factor)
#stretched_sound = base.loader.loadSfxData(stretched_samples.tobytes())
#stretched_sound.setLoop(True)
#stretched_sound.setLoop(True)
#stretched_sound.play()
#base.run()
Originally I just had section A up and running with no pyrubberband. The script at this stage split the song bar bars of 4. It would then play a loop of a specific bar. It would then speed the loop up. The issue with this is that it pitched my sample up, which I did not want.
I then decided to install pyrubberband to try to pitch shift the sample. Now I receive errors on the line y, sr = librosa.load(filename) (see below). I then decided to uninstall/delete pyrubberband / section B, but the problem still persists. I then uninstalled and reinstalled librosa and re-downloaded the mp3, but the problem still persists.
Have I broken my computer?
Traceback (most recent call last):
File "C:\Users\Charlie\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\librosa\core\audio.py", line 176, in load
y, sr_native = __soundfile_load(path, offset, duration, dtype)
File "C:\Users\Charlie\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\librosa\core\audio.py", line 209, in __soundfile_load
context = sf.SoundFile(path)
File "C:\Users\Charlie\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\soundfile.py", line 740, in __init__
self._file = self._open(file, mode_int, closefd)
File "C:\Users\Charlie\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\soundfile.py", line 1264, in _open
_error_check(_snd.sf_error(file_ptr),
File "C:\Users\Charlie\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\soundfile.py", line 1455, in _error_check
raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening 'music.mp3': File contains data in an unknown format.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\Charlie\Desktop\BeatMaker\beatmaker.py", line 12, in <module>
y, sr = librosa.load(filename)
File "C:\Users\Charlie\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\librosa\core\audio.py", line 178, in load
except sf.SoundFileRuntimeError as exc:
AttributeError: module 'soundfile' has no attribute 'SoundFileRuntimeError' | 0 | 1 | 1 | I ended up fixing my script by reinstalling Python! |
2023-03-22 15:52:04 | 2 | python,logistic-regression,statsmodels,stderr | 1 | 75,815,931 | Statsmodels Clustered Logit Model With Robust Standard Errors | 75,814,525 | true | 144 | I have the following dataframe:
df.head()
id case volfluid map rr o2 fluid
1044 36 3 3.0 1.0 3.0 2.0 0.0
1045 37 3 2.0 3.0 1.0 2.0 1.0
1046 38 3 3.0 2.0 2.0 1.0 0.0
1047 36 4 2.0 3.0 1.0 3.0 1.0
1048 37 4 1.0 1.0 3.0 3.0 1.0
.
.
.
I want to run a logistic regression model clustered on id and with robust standard errors. Here is what I have for the equation
smf.logit('''fluid ~ C(volfluid) + C(map, Treatment(reference = 3.0)) +
C(o2, Treatment(reference = 3.0)) + C(rr) +
C(case, Treatment(reference = 4))''',
data = df).fit(cov_type='cluster', cov_kwds={'groups': df['id']})
I'm not sure if this accomplishes both the clustering, and the robust std. errors. I understand that setting cov_type = 'hc0' provides robust std. errors, but if I do that can I still cluster on id? And do I need to do that, or are clustered standard errors inherently robust?
Thank you! | 1.2 | 2 | 1 | Cluster robust standard errors are also heteroscedasticity robust (HC). The HC cov_types, however, do not take any correlation into account.
Related aside: Using GEE with independence correlation has the same underlying model as Logit but has the option of bias-reduced cluster robust standard errors (similar to CR3, the HC3 analogue for cluster correlations)
2023-03-22 17:10:21 | 0 | python-3.x,matplotlib,conda | 1 | 75,823,139 | Type error in matplotlib renderer when saving pcolormesh figure | 75,815,295 | false | 54 | I've recently changed one of my plotting scripts to use pcolormesh instead of contourf. It's part of a script that loops over several netCDF datasets to create a couple of maps at different timesteps and, in the problematic part, cross-sections of some variables at a specific location over time. The code line used to be:
im = ax.contourf(X,Y,np.transpose(data),*args,**kwargs)
where X (time, converted to float64 seconds) and Y (height, also float64) are a meshgrid and data is the corresponding slice of the netcdf dataset (float32). I replaced it with
im = ax.pcolormesh(X,Y,np.transpose(data),shading='gouraud',*args,**kwargs)
as it happened that the plot style works better for the data. The kwargs are a colorbar and the colorbar limits, nothing out of the ordinary.
It worked fine on my small test case that I had on my local (Windows) machine using spyder, however when I try to use it on the (Linux) Server from the command line, it suddenly gives me the following error message when I'm trying to save the plot:
Traceback (most recent call last):
File "(redacted)/main.py", line 37, in <module>
dataset.make_plots()
File "(redacted)/plotDataset.py", line 75, in make_plots
self._crossecPlot(data, variable, crossec)
File "(redacted)/plotDataset.py", line 196, in _crossecPlot
plt.savefig(self.settings['output_path']+'/'+filename+'.png', bbox_inches='tight')
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/pyplot.py", line 1023, in savefig
res = fig.savefig(*args, **kwargs)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/figure.py", line 3343, in savefig
self.canvas.print_figure(fname, **kwargs)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/backends/backend_qtagg.py", line 75, in print_figure
super().print_figure(*args, **kwargs)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/backend_bases.py", line 2366, in print_figure
result = print_method(
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/backend_bases.py", line 2232, in <lambda>
print_method = functools.wraps(meth)(lambda *args, **kwargs: meth(
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/backends/backend_agg.py", line 509, in print_png
self._print_pil(filename_or_obj, "png", pil_kwargs, metadata)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/backends/backend_agg.py", line 457, in _print_pil
FigureCanvasAgg.draw(self)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/backends/backend_agg.py", line 400, in draw
self.figure.draw(self.renderer)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/artist.py", line 95, in draw_wrapper
result = draw(artist, renderer, *args, **kwargs)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/artist.py", line 72, in draw_wrapper
return draw(artist, renderer)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/figure.py", line 3140, in draw
mimage._draw_list_compositing_images(
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
a.draw(renderer)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/artist.py", line 72, in draw_wrapper
return draw(artist, renderer)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/axes/_base.py", line 3064, in draw
mimage._draw_list_compositing_images(
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
a.draw(renderer)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/artist.py", line 72, in draw_wrapper
return draw(artist, renderer)
File "(redacted)/miniconda3/lib/python3.10/site-packages/matplotlib/collections.py", line 2095, in draw
renderer.draw_gouraud_triangles(
TypeError: Cannot cast array data from dtype('float128') to dtype('float64') according to the rule 'safe'
I've already tried using the default (nearest) shading type instead of Gouraud triangles, which gave me the same error except that it's in renderer.draw_quad_mesh, and I've used the shared anaconda installation before installing miniconda with the hope that it was something that got fixed in an update.
What is surprising to me, when I use im = ax.pcolormesh(lons,lats,values,transform=ccrs.PlateCarree(),*args, **kwargs) in a different plot type (which uses a different slice of the same data and cartopy), the error doesn't occur. All in all though, I have no idea where the float128 type is coming from, and why it wouldn't convert to a float64 (since it's only for plotting, I would think that the lost precision shouldn't be much of an issue). | 0 | 1 | 1 | It seems to be as Jody Klomak said - if I explicitly cast X, Y, and the data to float64 by adding .astype('float64'), it works. I still didn't figure out why it happened in the first place, but I'm happy that it works now.
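A sketch of the fix (the array names are stand-ins for the meshgrid and netCDF slice); casting everything to float64 before calling pcolormesh keeps extended-precision floats away from the renderer:

```python
import numpy as np

X = np.linspace(0.0, 1.0, 4, dtype=np.longdouble)  # extended precision
data = np.arange(4, dtype=np.longdouble)

X64 = np.asarray(X).astype('float64')
data64 = np.asarray(data).astype('float64')

print(X64.dtype, data64.dtype)  # float64 float64
```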
2023-03-22 17:33:12 | 1 | python,pandas,dataframe | 3 | 75,815,559 | Get the name of columns from rows distinct from zero python | 75,815,522 | true | 41 | I have this dataframe:
df0 = pd.DataFrame({'points': [0, 0, -3, 16, 0, 5, -3, 14],
'assists': [0, 0, 2, 0, 1, -7, 0, 6],
'numbers': [0, 0, 1, 6, 10, 5, 8, 7]})
and my desired dataset looks like this:
points assists numbers colX
0 0 0 0
0 0 0 0
-3 2 1 'points-assists-numbers'
16 0 6 'points-numbers'
0 1 10 'assists-numbers'
5 -7 5 'points-assists-numbers'
-3 0 8 'points-numbers'
14 6 7 'points-assists-numbers'
A function that creates a string from the names of the columns that have values distinct from zero.
Any help? | 1.2 | 2 | 1 | This kind of operation is well suited to a lambda expression.
Something like this should work:
df0['colX'] = df0.apply(lambda x: '-'.join(c for c in df0.columns if x[c] != 0), axis=1).replace('', 0)
first it gets a list of the columns that are not 0
joins the names of those columns with a "-"
after that, fills blank names with a 0 |
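Put together on the question's df0 (freezing the column list first avoids colX feeding back into itself if the line is re-run):

```python
import pandas as pd

df0 = pd.DataFrame({'points': [0, 0, -3, 16, 0, 5, -3, 14],
                    'assists': [0, 0, 2, 0, 1, -7, 0, 6],
                    'numbers': [0, 0, 1, 6, 10, 5, 8, 7]})

cols = list(df0.columns)  # freeze before assigning the new column
df0['colX'] = df0.apply(
    lambda x: '-'.join(c for c in cols if x[c] != 0), axis=1).replace('', 0)

print(df0['colX'].tolist())
```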
2023-03-23 03:24:33 | 3 | python | 2 | 75,820,277 | python keras.preprocessing.sequence has no attribute pad_sequences | 75,819,004 | false | 1,238 | import:from keras.preprocessing import sequence
but:
AttributeError: module 'keras.preprocessing.sequence' has no attribute 'pad_sequences'
Why?
How can I edit this? | 0.291313 | 2 | 1 | Seems like your keras version is greater than 2.8; that's why you're getting the error, since
from keras.preprocessing import sequence
works only for earlier versions. Instead, replace it with the code below:
from keras.utils.data_utils import pad_sequences
You can also use:
from tensorflow.keras.preprocessing.sequence import pad_sequences
They both worked for me. |
2023-03-23 04:36:41 | 1 | python,tensorflow,machine-learning,keras | 2 | 75,851,122 | Keras: time per step increases with a filter on the number of samples, epoch time continues the same | 75,819,291 | true | 122 | I'm implementing a simple sanity check model on Keras for some data I have. My training dataset is comprised of about 550 files, and each contributes to about 150 samples. Each training sample has the following signature:
({'input_a': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None),
'input_b': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None)},
TensorSpec(shape=(None, 1), dtype=tf.int64, name=None)
)
Essentially, each training sample is made up of two inputs with shape (900, 1), and the target is a single (binary) label. The first step of my model is a concatenation of inputs into a (900, 2) Tensor.
The total number of training samples is about 70000.
As input to the model, I'm creating a tf.data.Dataset, and applying a few preparation steps:
tf.Dataset.filter: to filter some samples with invalid labels
tf.Dataset.shuffle
tf.Dataset.filter: to undersample my training dataset
tf.Dataset.batch
Step 3 is the most important in my question. To undersample my dataset I apply a simple function:
def undersampling(dataset: tf.data.Dataset, drop_proba: Iterable[float]) -> tf.data.Dataset:
def undersample_function(x, y):
drop_prob_ = tf.constant(drop_proba)
idx = y[0]
p = drop_prob_[idx]
v = tf.random.uniform(shape=(), dtype=tf.float32)
return tf.math.greater_equal(v, p)
return dataset.filter(undersample_function)
Essentially, the function accepts a vector of probabilities drop_prob such that drop_prob[l] is the probability of dropping a sample with label l (the function is a bit convoluted, but it's the way I found to implement it as Dataset.filter). Using equal probabilities, say drop_prob=[0.9, 0.9], I'll be dropping about 90% of my samples.
Now, the thing is, I've been experimenting with different undersamplings for my dataset, in order to find a sweet spot between performance and training time, but when I undersample, the epoch duration is the same, with time/step increasing instead.
Keeping my batch_size fixed at 20000, for the complete dataset I have a total of 4 batches, and the following time for an average epoch:
Epoch 4/1000
1/4 [======>.......................] - ETA: 9s
2/4 [==============>...............] - ETA: 5s
3/4 [=====================>........] - ETA: 2s
4/4 [==============================] - ETA: 0s
4/4 [==============================] - 21s 6s/step
While if I undersample my dataset with a drop_prob = [0.9, 0.9] (That is, I'm getting rid of about 90% of the dataset), and keeping the same batch_size of 20000, I have 1 batch, and the following time for an average epoch:
Epoch 4/1000
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 22s 22s/step
Notice that while the number of batches is only 1, the epoch time is the same! It just takes longer to process the batch.
Now, as a sanity check, I tried a different way of undersampling, by filtering the files instead. So I selected about 55 of the training files (10%), to have a similar number of samples in a single batch, and removed the undersampling from the tf.Dataset. The epoch time decreases as expected:
Epoch 4/1000
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 2s 2s/step
Note that the original dataset has 70014 training samples, while the undersampled dataset by means of tf.Dataset.filter had 6995 samples and the undersampled dataset by means of file filtering had 7018 samples, thus the numbers are consistent.
Much faster. In fact, it takes about 10% of the time the epoch takes with the full dataset. So there is an issue with the way I'm performing undersampling (by using tf.data.Dataset.filter) when creating the tf.Dataset; I would like to ask for help figuring out what the issue is. Thanks. | 1.2 | 2 | 1 | It seems that most of the time is spent on the dataset operations rather than the network itself. From examining the evidence, my theory would be that if this is executed on GPU (dataset operations are executed on the CPU regardless) then the GPU has to wait for the dataset between batches.
Since the dataset operation always takes the same time, the progress bar makes it seem as if the batches take longer.
If executed on a GPU, the right way to verify whether this theory is correct is to observe the GPU utilization (you can use watch -n 0.5 nvidia-smi as it runs, or better yet use nvtop or any other GPU monitoring tool). If there are times where the utilization (not memory! but utilization) is not close to 100%, then that would be an indicator that this is indeed the problem. Notice it should never drop below 90%, not even for half a second.
To solve this, you should use the Dataset.prefetch as the last dataset operation in your code, this will cause the CPU to over-fetch batches so it has batches available for the network to use so it won't wait. |
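As a rough sketch of that last suggestion (the dataset below is a toy stand-in for the real one; the label mapping and drop probabilities are illustrative), prefetch goes after the filter and batch steps:

```python
import tensorflow as tf

# toy stand-in for the real dataset: features 0..99, binary labels
ds = tf.data.Dataset.range(100).map(lambda x: (x, x % 2))

drop_proba = tf.constant([0.9, 0.9])  # per-label drop probabilities

def undersample(x, y):
    p = drop_proba[y]
    return tf.random.uniform(shape=(), dtype=tf.float32) >= p

ds = (ds.filter(undersample)
        .batch(20)
        # overlap the CPU-side input pipeline with accelerator training steps
        .prefetch(tf.data.AUTOTUNE))

n_batches = sum(1 for _ in ds)
```

With prefetching, the next batch is filtered and assembled while the current one trains, so the per-step time should drop back toward what a file-filtered dataset gives you.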
2023-03-23 09:50:52 | 1 | python,svg,rendering | 1 | 75,883,170 | Python render svg image with the python only modules | 75,821,466 | true | 192 | The question is simple, but I have googled a lot of methods, and there no such solution as:
import svg-render-library
figure = svg-render-library.open('test.svg')
figure.render()
Is there any simple methods to display an SVG image using only python libraries?
I am asking about rendering the SVG image without any conversion to other formats, and rendering using pure Python, without any 3rd-party software. From what I have tried, this seems impossible for now.
By built-in Python I mean only Python packages available through pip, so it is not necessary to install/compile anything else. And by render I mean showing it inside a window that is part of the Python program, not the browser or any external software. | 1.2 | 6 | 1 | Currently, there is no method to render natively cross-platform with just the standard library (i.e. some Python distributions for OSX do not include tkinter by default). Ergo, there is no good way to do this.
AFAIK, there are no other ways to do this maintaining your described API without writing your own code or reaching out to non-standard library modules.
If you still are 100% set on doing it with pure python and the standard library, you have tkinter, and don't care about writing your own implementation, then proceed.
If you are talking about rendering in the context of displaying an SVG in a window, then your best bet would be to utilize the tkinter and xml modules.
SVG is just an XML file, so xml.dom.minidom should be able to parse an SVG file. You can then extract the height and width from the DOM, draw each element onto a tkinter.Canvas widget, and then have tkinter render the window. You will need to either pre-calculate transparencies or handle that while managing the layering.
Another option is to use the turtle package, which wraps around tkinter. If the SVG is just a path, then this should be able to draw that path fairly straightforwardly.
If you are willing to reach out beyond the standard library, then cairosvg or svglib both can easily handle this task. cairosvg is a bit of a bear if you aren't used to installing system libraries, but svglib is a pure python implementation. |
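As a sketch of the XML-parsing half of that approach (the SVG content here is made up, and the drawing step onto a tkinter.Canvas is only indicated in the comments):

```python
from xml.dom.minidom import parseString

svg_doc = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="80">
  <rect x="10" y="10" width="50" height="30" fill="#ff0000"/>
  <circle cx="90" cy="40" r="20" fill="blue"/>
</svg>"""

dom = parseString(svg_doc)
root = dom.documentElement
width, height = root.getAttribute("width"), root.getAttribute("height")

shapes = []
for node in root.childNodes:
    if node.nodeType == node.ELEMENT_NODE:
        # collect (tag name, attribute dict) for every shape element
        shapes.append((node.tagName, dict(node.attributes.items())))

# each (tag, attrs) pair could then be drawn on a tkinter.Canvas, e.g.
# canvas.create_rectangle(...) for "rect" and canvas.create_oval(...) for "circle"
```

From there, a tkinter.Canvas sized to width x height and a dispatch table from tag name to the matching create_* call gets you a basic renderer for simple shapes; paths and gradients are where the real work starts.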
2023-03-23 10:10:42 | 0 | python,pandas,dataframe | 2 | 75,822,061 | Does DataFrame.index.empty imply DataFrame.empty? | 75,821,671 | false | 35 | If I have a DataFrame, df, for which df.index.empty is True, will this ALWAYS imply that df.empty is also True?
My intent is to test only df.index.empty when I need to test both conditions (lazy programming style). | 0 | 1 | 1 | Yes: if the index of a DataFrame is empty, the DataFrame will always satisfy DataFrame.empty.
For example, DataFrame.empty is True whenever either axis is empty, i.e. it covers an empty index as well as empty columns.
Hence, if you want to check whether any of these are empty, you can go with
DataFrame.empty
else, if you need to be specific about the index, use
DataFrame.index.empty
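A quick check with two hypothetical frames illustrates both directions, including the caveat that a frame of all-NaN rows is not considered empty:

```python
import pandas as pd

# columns defined, zero rows: index is empty, so the frame is empty
df = pd.DataFrame(columns=["a", "b"])

# one all-NaN row: the index is non-empty, and so is the frame
df2 = pd.DataFrame({"a": [float("nan")]})

print(df.index.empty, df.empty)
print(df2.index.empty, df2.empty)
```

So an empty index does imply an empty frame (df.empty is True whenever either axis has length 0), which makes the lazy single check safe.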
2023-03-23 10:43:43 | 0 | python,pandas,uuid | 2 | 75,823,215 | Select row in dataframe containing a specific UUID | 75,821,999 | false | 78 | I have a dataframe, one column of which holds a UUID:
import numpy as np
import pandas as pd
import uuid
df = pd.DataFrame(
data=[[1, 2, 3], [4, 5, 6]],
columns=['a', 'b', 'c']
)
df['d'] = np.NaN
df['d'] = df['d'].apply(
lambda x: uuid.uuid4()
)
Preview:
df
   a  b  c                                     d
0  1  2  3  31abc2af-117d-4fe8-b43f-e68fa429187f
1  4  5  6  f63b36c8-bb4e-4148-ace9-a89fa117e15c
I now want to select rows based on a UUID. But the following returns an empty set of rows:
df.loc[
df['d'] == '31abc2af-117d-4fe8-b43f-e68fa429187f'
]
How do I select rows using UUID as the match criteria? | 0 | 1 | 1 | I see you found a solution to you problem. Another solution would be to convert the column values to strings:
df.loc[df['d'].astype(str) == '31abc2af-117d-4fe8-b43f-e68fa429187f'] |
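Another alternative, keeping the column as UUID objects: compare against a uuid.UUID instance directly, since the stored values are UUID objects rather than strings. A small self-contained sketch:

```python
import uuid
import pandas as pd

df = pd.DataFrame({"a": [1, 4], "b": [2, 5], "c": [3, 6]})
df["d"] = [uuid.uuid4() for _ in range(len(df))]

# pretend this string arrived from elsewhere (a URL, a log line, ...)
target = df["d"].iloc[0]
match = df.loc[df["d"] == uuid.UUID(str(target))]
```

This avoids converting the whole column to strings on every lookup, which matters once the frame gets large.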
2023-03-23 12:47:02 | 0 | python,scikit-learn,missing-data | 1 | 75,825,064 | Scikit learn Iterative Imputer : change and scaled tolerance | 75,823,177 | false | 138 | As part of a school project, I have to explore and perform data analysis and machine learning methods on a given database. The point is that my database is pretty big (12 651 lines for 810 columns) and contains a lot of missing values.
I wanted to impute these values with the Iterative Imputer of Scikit-learn, and this is what I get:
imp = IterativeImputer(estimator = LinearRegression(),
max_iter=10,
random_state=0,
verbose=2,
n_nearest_features=10,
initial_strategy="most_frequent")
imp.fit(data)
Results:
[IterativeImputer] Completing matrix with shape (12651, 810)
[IterativeImputer] Ending imputation round 1/10, elapsed time 8.38
[IterativeImputer] Change: 101844979.96577276, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 2/10, elapsed time 18.76
[IterativeImputer] Change: 633298988.0233588, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 3/10, elapsed time 28.81
[IterativeImputer] Change: 591554347.9059296, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 4/10, elapsed time 37.43
[IterativeImputer] Change: 1289773197.9995384, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 5/10, elapsed time 46.58
[IterativeImputer] Change: 1291562921.1247401, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 6/10, elapsed time 54.32
[IterativeImputer] Change: 32943821.50762498, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 7/10, elapsed time 64.13
[IterativeImputer] Change: 58342050.73579848, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 8/10, elapsed time 73.44
[IterativeImputer] Change: 1559818227.7418892, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 9/10, elapsed time 81.46
[IterativeImputer] Change: 164792431487.71582, scaled tolerance: 208867.02000000002
[IterativeImputer] Ending imputation round 10/10, elapsed time 90.07
[IterativeImputer] Change: 13045775634991.55, scaled tolerance: 208867.02000000002
/usr/local/lib/python3.9/dist-packages/sklearn/impute/_iterative.py:785: ConvergenceWarning: [IterativeImputer] Early stopping criterion not reached.
warnings.warn(
I don't know if I can do anything to make it converge?
Thanks!
P.S.: One thing I did not mention is that many of the columns actually represent categorical variables (but pandas cast these columns as float64 because of the NaN values).
What I tried: changing the estimator from BayesianRidge to LinearRegressor, setting n_nearest_features = 10 and 20, setting initial_strategy="most_frequent" instead of "mean". It did not seem to work either :/ | 0 | 1 | 1 | There are a few different things you can try to get better results with IterativeImputer. The first thing I would do is try some regularization by using either Ridge or ElasticNet as your estimator, and increasing max_iter to ~50 to see if your results ever stabilize.
In my personal experience I also have better success if I scale the data before imputing and choose initial_strategy="median" rather than 'most_frequent'.
If you have a mix of continuous and categorical features, you will likely need a nonlinear estimator and will need to postprocess the results by re-quantizing the categorical features. |
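A minimal sketch combining those suggestions (Ridge regularization, scaling before imputing, median initialization; the data and parameter values are illustrative, not tuned):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.2] = np.nan  # knock out ~20% of the values

scaler = StandardScaler()              # ignores NaNs when fitting
X_scaled = scaler.fit_transform(X)

imp = IterativeImputer(estimator=Ridge(alpha=1.0),
                       max_iter=50,
                       initial_strategy="median",
                       random_state=0)
X_imputed = scaler.inverse_transform(imp.fit_transform(X_scaled))
```

Note the enable_iterative_imputer import: IterativeImputer is still experimental and must be enabled explicitly before it can be imported.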
2023-03-23 13:03:59 | 0 | python,deep-learning,pytorch,linear-algebra | 1 | 75,823,802 | Sparse matrix multiplication in pytorch | 75,823,367 | true | 197 | I want to implement the following formula in pytorch in a batch manner:
x^T A x
where x has shape: [BATCH, DIM1]
and A has shape: [BATCH, DIM1, DIM1]
I managed to implement it for the dense matrix A as follows:
torch.bmm(torch.bmm(x.unsqueeze(1), A), x.unsqueeze(2)).squeeze().
However, now I need to implement it for a SPARSE matrix A and I am failing to implement it.
The error that I am getting is {RuntimeError}bmm_sparse: Tensor 'mat2' must be dense, which comes from the torch.bmm(x.unsqueeze(1), A) part of the code.
In order to reproduce my work you could run this:
import torch
sparse = True # switch to dense to see the working version
batch_size = 10
dim1 = 5
x = torch.rand(batch_size, dim1)
A = torch.rand(batch_size, dim1, dim1)
if sparse:
A = A.to_sparse_coo()
xTAx = torch.bmm(torch.bmm(x.unsqueeze(1), A), x.unsqueeze(2)).squeeze()
My pytorch version is 1.12.1+cu116 | 1.2 | 1 | 1 | The solution is as simple as changing the order of multiplications from
(xT A) x to xT (Ax). |
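In code, with the same shapes as the question (torch.bmm accepts a sparse mat1 with a dense mat2, which is why computing Ax first works):

```python
import torch

batch_size, dim1 = 10, 5
x = torch.rand(batch_size, dim1)
A_dense = torch.rand(batch_size, dim1, dim1)
A = A_dense.to_sparse_coo()

# A x first: sparse mat1 @ dense mat2 is supported
Ax = torch.bmm(A, x.unsqueeze(2))               # [B, D, 1]
xTAx = torch.bmm(x.unsqueeze(1), Ax).squeeze()  # [B]
```

The second bmm is then dense-by-dense, so no sparse restriction applies; the result matches the fully dense computation.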
2023-03-23 13:30:15 | 1 | python,algorithm,optimization,path-finding | 1 | 75,824,515 | Python Pathfinding Optimization | 75,823,652 | true | 51 | So the task is to find the safest path for the player within a given maze r, where the amount of encountered monsters is as low as possible.
Every maze is represented as an n by n array, where '.' represents a floor tile, '#' a wall tile and '@' a monster tile.
Start is at the upper left corner and exit at the bottom right corner. This is an example maze:
r = ["....@",
"@##.#",
".##@#",
".@..#",
"###@."]
Player can only move right and down.
I'm able to create a working algorithm, but it slows down significantly as the maze size increases (e.g. a 20 by 20 maze). I'd appreciate any kind of help with optimizing this algorithm:
def count(r):
n = len(r)
visited = [[False for j in range(n)] for i in range(n)]
paths = []
def pathfind(i, j, path, monster_count):
if i == n - 1 and j == n - 1:
paths.append((path, monster_count))
return
visited[i][j] = True
if i + 1 < n and not visited[i + 1][j] and r[i + 1][j] != '#':
if r[i + 1][j] == '@':
pathfind(i + 1, j, path + [(i + 1, j)], monster_count + 1)
else:
pathfind(i + 1, j, path + [(i + 1, j)], monster_count)
if j + 1 < n and not visited[i][j + 1] and r[i][j + 1] != '#':
if r[i][j + 1] == '@':
pathfind(i, j + 1, path + [(i, j + 1)], monster_count + 1)
else:
pathfind(i, j + 1, path + [(i, j + 1)], monster_count)
visited[i][j] = False
if r[0][0] == '@':
pathfind(0, 0, [(0, 0)], 1)
elif r[0][0] == '#':
return -1
else:
pathfind(0, 0, [(0, 0)], 0)
if len(paths) == 0:
return -1
return paths | 1.2 | 1 | 1 | The reason why you're slow is that you're exploring paths going through (i, j) every time you find a path to that spopt, regardless of whether some previous path got to there and figured out what can be done.
Instead of visited, construct 2 data structures. came_from and monsters. In came_from you record what path you found that gets here. In monsters you record how many monsters there were on that path. If you get to a spot with a came_from you only continue exploring if you got there with fewer monsters than it already had.
And now when you're done you'll be able to start at the bottom right corner and follow came_from back to the start like a cookie trail.
There are improvements on this idea, but this should still find a path in reasonable time even with a 200 by 200 maze. |
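Since moves are restricted to right and down, the cells can be relaxed in a single row-major pass; a sketch of the monsters/came_from idea described above (the function name is mine):

```python
def safest_path(r):
    n = len(r)
    INF = float("inf")
    if r[0][0] == '#' or r[n - 1][n - 1] == '#':
        return -1
    monsters = [[INF] * n for _ in range(n)]    # best monster count to reach each cell
    came_from = [[None] * n for _ in range(n)]  # predecessor on that best path
    monsters[0][0] = 1 if r[0][0] == '@' else 0
    for i in range(n):
        for j in range(n):
            if r[i][j] == '#' or monsters[i][j] == INF:
                continue
            for ni, nj in ((i + 1, j), (i, j + 1)):  # down, right
                if ni < n and nj < n and r[ni][nj] != '#':
                    cost = monsters[i][j] + (1 if r[ni][nj] == '@' else 0)
                    if cost < monsters[ni][nj]:
                        monsters[ni][nj] = cost
                        came_from[ni][nj] = (i, j)
    if monsters[n - 1][n - 1] == INF:
        return -1
    # follow the breadcrumb trail back to the start
    path, cell = [], (n - 1, n - 1)
    while cell is not None:
        path.append(cell)
        cell = came_from[cell[0]][cell[1]]
    path.reverse()
    return path, monsters[n - 1][n - 1]
```

This visits each cell once, so it is O(n^2) instead of exploring every path, and it returns the single safest path rather than enumerating all of them.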
2023-03-23 21:04:00 | 0 | python,encoding,character-encoding,openai-api,gpt-3 | 1 | 76,237,797 | how do i stop this encoding error in the openai python module? | 75,827,960 | false | 288 | i'm trying to make a chat completion bot using opeAI's GPT engine that takes voice input and outputs a text to speech file, however, i keep getting an encoding error that i dont understand
import os
import speech_recognition as sr
import openai
from dotenv import load_dotenv
from os import path
from playsound import playsound
from gtts import gTTS
import simpleaudio as sa
load_dotenv()
language = 'en'
openai.api_key = os.getenv("OPENAI_API_KEY")
while True:
# this recognizes your voice input
recog = sr.Recognizer()
with sr.Microphone() as source:
audio = recog.listen(source)
#this transcribes the voice to text
with open("microphone-results.wav", "wb") as f:
f.write(audio.get_wav_data())
AUDIO_FILE = path.join(path.dirname(path.realpath(__file__)), "microphone-results.wav")
my_question = recog.recognize_sphinx(audio)
#this generates a response
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a chatbot named jarvis"},
{"role": "user", "content": str(my_question)},
]
)
reply = ''.join(choice.message.content for choice in response.choices)
tts = gTTS(reply)
tts_file = "temp.wav"
tts.save(tts_file)
wave_obj = sa.WaveObject.from_wave_file(tts_file)
play_obj = wave_obj.play()
play_obj.wait_done()
os.remove(tts_file)
i tried formatting it, thinking it would output the tts result instead, it said this:
File "c:\Users\tonda\python\SSPS_Projects\PortfolioApps\assistant\functions\ChatGPT\Chat.py", line 28, in <module>
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\openai\api_requestor.py", line 216, in request
result = self.request_raw(
^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\openai\api_requestor.py", line 516, in request_raw
result = _thread_context.session.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\urllib3\connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\urllib3\connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\http\client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\http\client.py", line 1323, in _send_request
self.putheader(hdr, value)
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\site-packages\urllib3\connection.py", line 224, in putheader
_HTTPConnection.putheader(self, header, *values)
File "C:\Users\tonda\scoop\apps\python\3.11.0\Lib\http\client.py", line 1255, in putheader
values[i] = one_value.encode('latin-1')
^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'latin-1' codec can't encode character '\u201c' in position 7: ordinal not in range(256)
It is saying something about an unknown character? I really don't understand this, partially because I'm new to coding. | 0 | 3 | 1 | I had the same error when I was trying to publish an app in Kubernetes using the API for OpenAI. The problem in my case was that the OpenAI key had somehow gotten encoded wrongly, so it no longer contained only latin-1 characters. Make sure that the key you are using has not been encoded somehow.
I had my keys in a .yaml file and had to run:
echo "whatever_my_key_was" | base64
and put those values into the .yaml file instead |
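A small helper (my own sketch, not part of the openai package) can locate the offending characters; the '\u201c' in the traceback is a curly opening quote, which often sneaks in when a key is copy-pasted from a rich-text source:

```python
def find_bad_header_chars(value: str):
    """Return (index, char) pairs that cannot be sent in an HTTP header."""
    bad = []
    for i, ch in enumerate(value):
        try:
            ch.encode("latin-1")  # HTTP headers are encoded as latin-1
        except UnicodeEncodeError:
            bad.append((i, ch))
    return bad

# e.g. a key pasted with smart quotes around it
print(find_bad_header_chars('\u201csk-abc123\u201d'))  # [(0, '“'), (10, '”')]
```

Running this on the value of OPENAI_API_KEY quickly shows whether the key itself is what's tripping the latin-1 encoding in the request headers.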
2023-03-24 00:08:32 | 0 | python,linux,anaconda,conda,spacy | 1 | 75,882,460 | Conda reports that a package is installed, but it appears not to be | 75,829,009 | true | 39 | I'm trying to install spaCy using conda but I still get the error
ModuleNotFoundError: No module named 'spacy'
Things i already tried:
re installing conda.
delete and create a new environment.
OBS:
the environment is active.
running Linux.
if I run !conda list I can see the spaCy package.
I think the problem is related to the installation directory, but I'm not sure how to solve it.
some output:
import sys
print(sys.prefix)
output: /usr
active environment : Texto
active env location : /home/arancium/.conda/envs/Texto
shell level : 1
user config file : /home/arancium/.condarc
populated config files : /etc/conda/condarc
/home/arancium/miniforge3/.condarc
/home/arancium/.condarc
conda version : 23.1.0
conda-build version : not installed
python version : 3.10.9.final.0
virtual packages : __archspec=1=x86_64
__glibc=2.37=0
__linux=6.2.8=0
__unix=0=0
base environment : /home/arancium/miniforge3 (writable)
conda av data dir : /home/arancium/miniforge3/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/arancium/.conda/pkgs
envs directories : /home/arancium/.conda/envs
/home/arancium/miniforge3/envs
platform : linux-64
user-agent : conda/23.1.0 requests/2.28.2 CPython/3.10.9 Linux/6.2.8-zen1-1-zen arch/rolling glibc/2.37
UID:GID : 1000:1000
netrc file : None
offline mode : False
# conda environments:
#
Texto * /home/arancium/.conda/envs/Texto
base /home/arancium/miniforge3
sys.version: 3.10.9 | packaged by conda-forge | (main...
sys.prefix: /home/arancium/miniforge3
sys.executable: /home/arancium/miniforge3/bin/python
conda location: /home/arancium/miniforge3/lib/python3.10/site-packages/conda
conda-build: None
conda-env: /home/arancium/miniforge3/bin/conda-env
user site dirs: ~/.local/lib/python3.10
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: Texto
CONDA_EXE: /home/arancium/miniforge3/bin/conda
CONDA_PREFIX: /home/arancium/.conda/envs/Texto
CONDA_PROMPT_MODIFIER: (Texto)
CONDA_PYTHON_EXE: /home/arancium/miniforge3/bin/python
CONDA_ROOT: /home/arancium/miniforge3
CONDA_SHLVL: 1
CUDA_PATH: /opt/cuda
CURL_CA_BUNDLE: <not set>
LD_PRELOAD: <not set>
PATH: /home/arancium/.conda/envs/Texto/bin:/home/arancium/miniforge3/condabin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/var/lib/flatpak/exports/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
REQUESTS_CA_BUNDLE: <not set>
SSL_CERT_FILE: <not set>
XDG_SEAT_PATH: /org/freedesktop/DisplayManager/Seat0
XDG_SESSION_PATH: /org/freedesktop/DisplayManager/Session1 | 1.2 | 1 | 1 | The Python interpreter you are using is not the one from your active conda environment.
Let's ensure that you are using the correct Python interpreter from your active conda environment.
First activate your conda env with conda activate Texto, then check your Python interpreter with which python; it should print something like /home/arancium/.conda/envs/Texto/bin/python. If it doesn't, make sure your PATH environment variable is set correctly. You can update it like this: export PATH="/home/arancium/.conda/envs/Texto/bin:$PATH". Now run your script with the correct interpreter: python your_script.py
If that still doesn't work, use the full path, like this: /home/arancium/.conda/envs/Texto/bin/python your_script.py
2023-03-24 00:15:56 | 1 | python,discord,voice | 1 | 75,958,662 | Python: Discord bot voice chat no longer functioning | 75,829,048 | false | 234 | My Discord voice chat bot was working for 2 years. As of earlier this week the bot connects to discord fine, however it doesn't play any audio.
The purpose of the bot is to look for a "$Hello" message in the Discord text chat channels, then connect to voice chat and play the Hello.mp3 file over voice chat.
I run this bot on two different servers and both stopped working at the same time.
I've used a few print commands and I can see it hangs on this line of code without any errors, but it also doesn't print "test Hello 2"
vc = await voice_channel.connect()
Full code:
import os
import discord
from discord.ext.commands import Bot
from discord.ext import commands
from datetime import datetime as dt
from time import strftime
from discord.ext import tasks
import asyncio
TOKEN = 'xxxxxxxxxxxxxx' ### This is the discord unique token for the bot to work ###
client = discord.Client()
voiceChannelId = 57575757575 ###This is the voice channel ID the bot connects to in order to play MP3s ###
#bot = commands.Bot('.')
#DISCORD_TOKEN = os.getenv("DISCORD_TOKEN")
###### When the code executes, the bot is in the on_ready state and goes to playSounds() #####
##### This code is not used in this part of the code but left it in just in case ####
@client.event
async def on_ready():
await asyncio.sleep(1)
print('We have logged in as {0.user}'.format(client))
playSounds.start()
##### Voice chat connection ####
@client.event
async def join(ctx):
channel = ctx.author.voice.channel
await channel.connect()
###### looks for Messages anywhere in discord that contain "$Hello" #######
@client.event
async def on_message(message):
if (message.content == "$Hello"):
print('starting Hello()')
voice_channel = client.get_channel(voiceChannelId)
try:
print('test Hello 1')
global vc
vc = await voice_channel.connect()
print('test Hello 2') ##### This is not printing, suspect line above is hanging ###
vc.play(discord.FFmpegPCMAudio(source='/root/Desktop/Discord/Code/mp3/Hello.mp3'))
vc.source = discord.PCMVolumeTransformer(vc.source)
vc.source.volume = 1
while vc.is_playing():
await asyncio.sleep(1)
print('test Hello 3')
except:
print('test Hello 4')
if (vc.is_connected() == True):
vc.play(discord.FFmpegPCMAudio(source='/root/Desktop/Discord/Code/mp3/Hello.mp3'))
vc.source = discord.PCMVolumeTransformer(vc.source)
vc.source.volume = 1
while vc.is_playing():
await asyncio.sleep(1)
print('Test Hello 5')
await message.channel.send("testing audio.")
Tried updating:
python3 -m pip install -U discord.py.
python3 -m pip install -U discord.py[voice]
and updating python as well.
EDIT: I even tried getting ChatGPT to get this working with 10 different variations of my code.. no help there. AI suggests there might be a recent change in Discord API or something, but can't tell me any more.
My server and bots have correct permissions, nothing has changed there, but I still triple checked..
EDIT 2:
I used pip install -U discord and it worked!!!! For a single day..
Now we're back to hanging on vc = await voice_channel.connect() again..
I'm truly at a loss. | 0.197375 | 2 | 1 | Make sure your library is up to date; Discord made a change recently. It worked for me.
Note that I'm updating discord, not discord.py:
pip install -U discord |
2023-03-24 05:22:40 | 0 | python,matplotlib,filenotfounderror | 2 | 76,129,774 | matplotlib filenotfounderror site-packages | 75,830,268 | false | 288 | matplotlib has completely broken my python env.
When I run:
import matplotlib as plt
I received:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\SamCurtis.AzureAD\AppData\Roaming\Python\Python38\site-packages\matplotlib.libs\.load-order-matplotlib-3.7.1'
I receive the same error if I try to pip install OR pip uninstall matplotlib
In fact, all my pip functionality is broken (I cannot pip freeze, uninstall, or install anything). | 0 | 1 | 1 | I bumped into a similar problem just now after attempting to downgrade back to my old matplotlib version from 3.7.1. pip was throwing up this matplotlib.libs error even when I wasn't trying to do anything involving matplotlib.
The solution was to delete the matplotlib and mpl_toolkits directories from site-packages. Then I was able to reinstall my old matplotlib version and use pip as usual. |
2023-03-24 10:35:43 | 0 | python,networkx,pagerank | 1 | 75,832,951 | Initial pagerank precomputed values with networkx | 75,832,557 | true | 23 | I'm trying to run an experiment where I have PageRank values and a directed graph built. I have a graph in the shape of a star (many surrounding nodes that point to a central node).
All those surrounding nodes already have a precomputed PageRank value, and I want to check how the central node's PageRank value is affected by the surrounding ones.
Is there a way to perform this with networkx? I've tried building the graph with weights (using the weights to store the precomputed PageRank values) but in the end, it does not look like the central node value changes much. | 1.2 | 1 | 1 | I will answer my own question. In the PageRank method for NetworkX you have the parameter nstart, which is the starting PageRank value for each node.
nstart : dictionary, optional
Starting value of PageRank iteration for each node.
Still, I'm afraid the graph structure is the limiting factor when doing the random walk and obtaining a relevant result. |
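As an illustrative sketch (node names and values are made up), passing the precomputed values through nstart looks like this; note that nstart only seeds the power iteration, so a convergent run still ends at the values determined by the graph structure:

```python
import networkx as nx

G = nx.DiGraph()
leaves = [f"leaf{i}" for i in range(5)]
for leaf in leaves:
    G.add_edge(leaf, "hub")  # star: every leaf points at the centre

# hypothetical precomputed PageRank values used as the starting point
nstart = {leaf: 0.5 for leaf in leaves}
nstart["hub"] = 0.1

pr = nx.pagerank(G, nstart=nstart)
```

Since every leaf points at the hub, the hub ends up with the highest rank regardless of the starting values, which is exactly the limitation mentioned above.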
2023-03-24 12:50:59 | 1 | python-3.x,machine-learning,audio,cluster-analysis,diarization | 1 | 75,915,906 | Segmentation instead of diarization for speaker count estimation | 75,833,879 | false | 473 | I'm using diarization of pyannote to determine the number of speakers in an audio file, where the number of speakers cannot be predetermined. Here is the code to determine speaker count by diarization:
from pyannote.audio import Pipeline
MY_TOKEN = "" # huggingface_auth_token
audio_file = "my_audio.wav"
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization@2.1", use_auth_token=MY_TOKEN)
output = pipeline(audio_file, min_speakers=2, max_speakers=10)
results = []
for turn, _, speaker in list(output.itertracks(yield_label=True)):
results.append(speaker)
num_speakers = len(set(results))
print(num_speakers)
Using diarization for speaker count estimation seems like overkill and is slow. So I was trying to segment the audio into chunks, embed the audio segments and do some clustering on the embeddings to determine the ideal number of clusters as the possible number of speakers. In the backend, pyannote might also be doing something similar to estimate the number of speakers. Here is what I tried in code:
from sklearn.cluster import SpectralClustering, KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score
from spectralcluster import SpectralClusterer
from resemblyzer import VoiceEncoder, preprocess_wav
from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding
from pyannote.audio import Model
from pyannote.audio import Audio
from pyannote.core import Segment
from pyannote.audio.pipelines import VoiceActivityDetection
import numpy as np
audio_file = "my_audio.wav"
MY_TOKEN = "" # huggingface_token
embedding_model = PretrainedSpeakerEmbedding("speechbrain/spkrec-ecapa-voxceleb")
encoder = VoiceEncoder()
model = Model.from_pretrained("pyannote/segmentation",
use_auth_token=MY_TOKEN)
pipeline = VoiceActivityDetection(segmentation=model)
HYPER_PARAMETERS = {
# onset/offset activation thresholds
"onset": 0.5, "offset": 0.5,
# remove speech regions shorter than that many seconds.
"min_duration_on": 0.0,
# fill non-speech regions shorter than that many seconds.
"min_duration_off": 0.0
}
pipeline.instantiate(HYPER_PARAMETERS)
vad = pipeline(audio_file)
audio_model = Audio()
segments = list(vad.itertracks(yield_label=True))
embeddings = np.zeros(shape=(len(segments), 192))
#embeddings = np.zeros(shape=(len(segments), 256))
for i, diaz in enumerate(segments):
print(i, diaz)
waveform, sample_rate = audio_model.crop(audio_file, diaz[0])
embed = embedding_model(waveform[None])
#wav = preprocess_wav(waveform[None].flatten().numpy())
#embed = encoder.embed_utterance(wav)
embeddings[i] = embed
embeddings = np.nan_to_num(embeddings)
max_clusters = 10
silhouette_scores = []
# clustering = SpectralClusterer(min_clusters=2, max_clusters=max_clusters, custom_dist="cosine")
# labels = clustering.predict(embeddings)
# print(labels)
for n_clusters in range(2, max_clusters+1):
# clustering = SpectralClustering(n_clusters=n_clusters, affinity='nearest_neighbors').fit(embeddings)
# clustering = KMeans(n_clusters=n_clusters).fit(embeddings)
clustering = AgglomerativeClustering(n_clusters).fit(embeddings)
labels = clustering.labels_
score = silhouette_score(embeddings, labels)
print(n_clusters, score)
silhouette_scores.append(score)
# Choose the number of clusters that maximizes the silhouette score
number_of_speakers = np.argmax(silhouette_scores) + 2 # add 2 to account for starting at n_clusters=2
print(number_of_speakers)
But the problem is that I'm not getting the same results as the results from pyannote diarization, especially when the number of speakers is greater than 2. Pyannote diarization seems to return a more realistic number. How can I get the same results as pyannote diarization, but using a faster process like segmentation? | 0.197375 | 1 | 1 | It is not surprising that the two methods are giving different results. Speaker diarization and speaker clustering are two different approaches to the same problem of speaker counting, and they make different assumptions about the data and the problem.
Speaker diarization relies on techniques like speaker change detection and speaker embedding to segment the audio into regions that correspond to different speakers, and then assigns each segment to a unique speaker label. This approach is robust to various sources of variation in the audio, such as overlapping speech, background noise, and speaker characteristics, but it can be computationally expensive.
Speaker clustering, on the other hand, assumes that the audio can be divided into a fixed number of non-overlapping segments, and attempts to group them into clusters that correspond to different speakers based on some similarity metric. This approach is faster than diarization but may not be as accurate, especially when the number of speakers is not known a priori.
To improve the accuracy of your speaker clustering approach, you may want to consider incorporating some of the techniques used in diarization, such as voice activity detection and speaker embedding. For example, you could use a VAD algorithm to segment the audio into regions of speech and non-speech, and then apply clustering to the speech regions only. You could also use a pre-trained speaker embedding model to extract features from the speech regions and use them as input to your clustering algorithm.
Overall, it is unlikely that you will be able to achieve the same level of accuracy as diarization using clustering alone, but you may be able to get close by combining the two approaches. |
2023-03-24 13:43:29 | 1 | python,amazon-sagemaker,openai-whisper | 2 | 76,018,052 | ModuleNotFoundError: No module named 'whisper' when trying install in sagemaker | 75,834,417 | false | 1,841 | I'm trying to install openai-whisper on AWS Sagemaker. I've tried creating virtual env, upgrading to python 3.9 and found out the installation get "Killed" before it even finishes. I need help solving this, been struggling for a couple for days. Thanks in advance.
pip install openai-whisper
Traceback:
Keyring is skipped due to an exception: 'keyring.backends'
Collecting openai-whisper
Using cached openai-whisper-20230306.tar.gz (1.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting torch
Killed
import whisper
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
/tmp/ipykernel_13713/3212043240.py in <module>
----> 1 import whisper
ModuleNotFoundError: No module named 'whisper' | 0.099668 | 2 | 2 | If you have a look at the last line of the installation logs, this is happening because the installation gets killed due to running out of memory.
Use the following for your installation when you encounter the issue:
pip install openai-whisper --no-cache-dir |
2023-03-24 13:43:29 | 0 | python,amazon-sagemaker,openai-whisper | 2 | 75,891,959 | ModuleNotFoundError: No module named 'whisper' when trying install in sagemaker | 75,834,417 | false | 1,841 | I'm trying to install openai-whisper on AWS Sagemaker. I've tried creating virtual env, upgrading to python 3.9 and found out the installation get "Killed" before it even finishes. I need help solving this, been struggling for a couple for days. Thanks in advance.
pip install openai-whisper
Traceback:
Keyring is skipped due to an exception: 'keyring.backends'
Collecting openai-whisper
Using cached openai-whisper-20230306.tar.gz (1.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting torch
Killed
import whisper
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
/tmp/ipykernel_13713/3212043240.py in <module>
----> 1 import whisper
ModuleNotFoundError: No module named 'whisper' | 0 | 2 | 2 | Can you try to implement this with no virtual environment and let us know what happens? Are you doing this in Studio or Classic Notebook Instances? |
2023-03-24 16:47:47 | 0 | python,selenium-webdriver | 1 | 75,837,033 | Cannot access page on this server (python and selenium) | 75,836,300 | false | 28 | I'm trying to access a product page from this site, but it shows that I cannot access it on this server, I added an user agent and used a vpn but those didn't work... Any help is appreciated
from selenium.webdriver import Chrome, ChromeOptions
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import chromedriver_autoinstaller
chromedriver_autoinstaller.install()
def get_product(driver: Chrome, id: int):
driver.get(MAIN_URL.format(id))
link = driver.find_elements(By.CSS_SELECTOR, '[data-clicktype="product_tile_click"]')[0].get_attribute('href')
driver.get(link) # Cannot access this link
soup = BeautifulSoup(driver.page_source, 'lxml')
# ...
MAIN_URL = 'https://www.lowes.com/search?searchTerm={}&refinement=1'
options = ChromeOptions()
options.add_argument("--incognito")
options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36")
driver = Chrome(options=options)
products = []
product_ids = [333346]
for product_id in product_ids:
products.append(get_product(driver, product_id))
print(products[0]) | 0 | 1 | 1 | There are multiple things you can try: use a different browser, use a proxy or Tor, or add a delay with time.sleep() in case you are accessing the page too quickly.
2023-03-24 19:28:22 | 1 | python,python-3.x,windows,sqlalchemy | 1 | 75,891,380 | import ibm_db_sa Fails with: AttributeError: type object 'String' has no attribute 'RETURNS_CONDITIONAL' | 75,837,491 | true | 229 | OS: Win 10 22H2
Python: 3.9.13
Install location: C:\Program Files\Python39
ibm-db
Version: 3.1.4
ibm-db-sa
Version: 0.3.9
SQLAlchemy
Version: 2.0.7
When running python -i and import ibm_db_sa I get the below. Any help appreciated. I searched for dependencies but did not find anything useful.
PS C:\windows\system32> python -i
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ibm_db
>>> import ibm_db_sa
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python39\lib\site-packages\ibm_db_sa\__init__.py", line 22, in <module>
from . import ibm_db, pyodbc, base
File "C:\Program Files\Python39\lib\site-packages\ibm_db_sa\ibm_db.py", line 20, in <module>
from .base import DB2ExecutionContext, DB2Dialect
File "C:\Program Files\Python39\lib\site-packages\ibm_db_sa\base.py", line 662, in <module>
class DB2Dialect(default.DefaultDialect):
File "C:\Program Files\Python39\lib\site-packages\ibm_db_sa\base.py", line 675, in DB2Dialect
returns_unicode_strings = sa_types.String.RETURNS_CONDITIONAL
AttributeError: type object 'String' has no attribute 'RETURNS_CONDITIONAL'
>>> | 1.2 | 1 | 1 | I just ran into this issue and discovered the solution after a contact at IBM gave me a lead.
It looks like the ibm_db_sa package was just updated earlier in the month. If you look at the prerequisites on the product description page you will see it requires "SQLAlchemy version between 0.7.3 - 1.4.x". The latest version of SQLAlchemy is 2.0, which you have installed like I did.
I uninstalled SQLAlchemy, and reinstalled a down-leveled version.
pip3 uninstall sqlalchemy
pip3 install sqlalchemy==1.4.47
Once I did this, I was able to run the import cleanly.
Update:
I did discover backleveling SQLAlchemy caused a dependency issue with ipython-sql. and had to run the additional commands.
pip3 uninstall ipython-sql
pip3 install ipython-sql==0.4.1 |
2023-03-25 03:53:10 | 0 | python,github-actions,mongodb-atlas | 1 | 75,846,432 | Why can't I connect to MongoDB Atlas with GitHub Actions? | 75,839,566 | false | 196 | Note: I'm new to MongoDB and GitHub Actions.
I'm working on a personal project where I want to create a workflow that automatically scrapes reviews of an app from Google Play Store at a scheduled time every day and stores it in my collection in MongoDB Atlas. So first, I created a Python script called "scraping_daily.py" that will scrape 5,000 new reviews and filter out any that were previously collected. When I tested it and ran it manually, the script worked perfectly fine. Here's what the script looks like:
# Import libraries
import numpy as np
import pandas as pd
from google_play_scraper import Sort, reviews, reviews_all, app
from pymongo import MongoClient
# Create a connection to MongoDB
client = MongoClient("mongodb+srv://<MY_USERNAME>:<MY_PASSWORD>@project1.lpu4kvx.mongodb.net/?retryWrites=true&w=majority")
db = client["vidio"]
collection = db["google_play_store_reviews"]
# Load the data from MongoDB
df = pd.DataFrame(list(collection.find()))
df = df.drop("_id", axis=1)
df = df.sort_values("at", ascending=False)
# Collect 5000 new reviews
result = reviews(
"com.vidio.android",
lang="id",
country="id",
sort=Sort.NEWEST,
count=5000
)
new_reviews = pd.DataFrame(result[0])
new_reviews = new_reviews.fillna("empty")
# Filter the scraped reviews to exclude any that were previously collected
common = new_reviews.merge(df, on=["reviewId", "userName"])
new_reviews_sliced = new_reviews[(~new_reviews.reviewId.isin(common.reviewId)) & (~new_reviews.userName.isin(common.userName))]
# Update MongoDB with any new reviews that were not previously scraped
if len(new_reviews_sliced) > 0:
new_reviews_sliced_dict = new_reviews_sliced.to_dict("records")
batch_size = 1_000
num_records = len(new_reviews_sliced_dict)
num_batches = num_records // batch_size
if num_records % batch_size != 0:
num_batches += 1
for i in range(num_batches):
start_idx = i * batch_size
end_idx = min(start_idx + batch_size, num_records)
batch = new_reviews_sliced_dict[start_idx:end_idx]
if batch:
collection.insert_many(batch)
Next, I want to schedule my script using GitHub Actions. Just like what I followed from YouTube tutorials, I created an actions.yml file in the .github/workflows folder. Here's what the YML file looks like:
name: Scraping Google Play Reviews
on:
schedule:
- cron: 50 16 * * * # At 16:50 every day
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: check out the repository content
uses: actions/checkout@v2
- name: set up python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: install requirements
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: execute the script
run: python -m scraping_daily.py
However, it always throws an error when it executes my script. The error message is:
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/home/runner/work/vidio_google_play_store_reviews/vidio_google_play_store_reviews/scraping_daily.py", line 16, in <module>
df = pd.DataFrame(list(collection.find()))
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/cursor.py", line 1248, in next
if len(self.__data) or self._refresh():
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/cursor.py", line 1139, in _refresh
self.__session = self.__collection.database.client._ensure_session()
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1740, in _ensure_session
return self.__start_session(True, causal_consistency=False)
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1685, in __start_session
self._topology._check_implicit_session_support()
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
self._check_session_support()
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/topology.py", line 554, in _check_session_support
self._select_servers_loop(
File "/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pymongo/topology.py", line 238, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: ac-dc8axn9-shard-00-01.lpu4kvx.mongodb.net:27017: connection closed,ac-dc8axn9-shard-00-02.lpu4kvx.mongodb.net:27017: connection closed,ac-dc8axn9-shard-00-00.lpu4kvx.mongodb.net:27017: connection closed, Timeout: 300.0s, Topology Description: <TopologyDescription id: 641dd5b78e0efba394e00ffc, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-dc8axn9-shard-00-00.lpu4kvx.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-dc8axn9-shard-00-00.lpu4kvx.mongodb.net:27017: connection closed')>, <ServerDescription ('ac-dc8axn9-shard-00-01.lpu4kvx.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-dc8axn9-shard-00-01.lpu4kvx.mongodb.net:27017: connection closed')>, <ServerDescription ('ac-dc8axn9-shard-00-02.lpu4kvx.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-dc8axn9-shard-00-02.lpu4kvx.mongodb.net:27017: connection closed')>]>
Error: Process completed with exit code 1.
I tried to increase the timeout setting by adding serverSelectionTimeoutMS=300000 inside MongoClient(), but it still gave me the same error. Can anyone tell me how to solve this? Thanks in advance!
By the way, I'm using a Windows machine (I'm not sure if it's useful information though). | 0 | 1 | 1 | Good news! I've managed to resolve the issue. It turns out that to enable access to your MongoDB database from GitHub Actions, you need to adjust the Network Access settings in MongoDB Atlas: since GitHub Actions runners don't have a fixed IP address, the practical option is to select Allow Access from Anywhere (0.0.0.0/0). Thanks to everyone who commented on my question!
2023-03-25 04:05:42 | 1 | python,python-3.x,windows,speech-recognition,pyaudio | 1 | 75,843,822 | Speech recognition error can't find Microphone as source | 75,839,593 | true | 212 | I'm working on the listening part of a speech recognition project. Currently, I'm stuck with this error message when I run the code.
import speech_recognition as sr
robot_ear = sr.Recognizer()
with sr.Microphone(device_index=1) as mic:
#robot_ear.adjust_for_ambient_noise(mic)
print("Robot: I'm listening")
audio = robot_ear.listen(mic)
try:
you = robot_ear.recognize_google(audio)
except:
you = ""
print(you)
This is the error:
Robot: I'm listening
Traceback (most recent call last):
File "C:\Users\trung\Documents\Python 3\nghe.py", line 7, in <module>
audio = robot_ear.listen(mic)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\trung\AppData\Local\Programs\Python\Python311\Lib\site-packages\speech_recognition\__init__.py", line 465, in listen
assert source.stream is not None, "Audio source must be entered before listening, see documentation for ``AudioSource``; are you using ``source`` outside of a ``with`` statement?"
^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Audio source must be entered before listening, see documentation for ``AudioSource``; are you using ``source`` outside of a ``with`` statement?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\trung\Documents\Python 3\nghe.py", line 4, in <module>
with sr.Microphone(device_index=1) as mic:
File "C:\Users\trung\AppData\Local\Programs\Python\Python311\Lib\site-packages\speech_recognition\__init__.py", line 189, in __exit__
self.stream.close()
^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'close'
I tried setting the Microphone manually by looking for its index, which appears to be index 1, but doesn't seem to fix the problem. | 1.2 | 1 | 1 | It appears that my microphone access setting in Windows 11 is off. I just needed to go to Start > Settings > Privacy & security > Microphone and make sure Microphone access is turned on, then it's working now. |
2023-03-25 04:39:33 | 0 | python,python-3.x,performance,dictionary,interpreter | 1 | 75,839,850 | Does the attribute name length of Python objects has effect on performance? | 75,839,668 | false | 31 | Python objects keep attributes in the __dict__ dictionaries (unless you are using __slots__), and you can assign an arbitrary string as an attribute of an object at runtime using setattr() for example. This means that every time an attribute is accessed or assigned, the interpreter must perform the string hashing of the attribute name to get it's value from the dictionary, which is more expensive the longer is the string. Could this affect performance in small programs where the cost of attribute name's hashing is comparable to everything else the program is doing?
Also, if I understand correctly, this doesn't apply to regular variable names since in the bytecode they are converted into some kind of integer indices, right? | 0 | 1 | 1 | So the answer seems to be that CPython interpreter doesn't actually compute the attribute name's hash on every attribute access. Instead, it applies string interning, meaning that it keeps a single copy of most compile-time string constants (including variable/function/attribute/etc names, and even dictionary keys), and compares their references whenever they appear in code, which are simple integer comparisons. This probably doesn't apply to identifiers created at runtime, such as using setattr().
Thanks to @jasonharper for the pointer. |
2023-03-25 08:33:14 | 0 | python,pandas | 2 | 75,840,651 | Pandas Groupby multiple columns with cumcount | 75,840,510 | false | 57 | I am new to python
I have a dataset where the same customer can apply for a product multiple times in a day and have fields for cust_number and date
when I apply
df['g']=dfc.groupby('CustNo','DATE').cumcount()
python errors
ValueError: No axis named DATE for object type DataFrame
is there an easy solution? I think an assignment of axis'?
help please | 0 | 1 | 1 | Check the column names in your DataFrame to verify that the "DATE" column actually exists. If it does exist, you first need to convert it to a datetime format; follow the step below.
import pandas as pd
# use the below syntax to convert your date column into datetime format
df['DATE'] = pd.to_datetime(df['DATE'])
2023-03-25 09:56:08 | 3 | python,numpy | 1 | 75,840,951 | np.logical_or() in Numpy - Python | 75,840,893 | true | 45 | Please anyone can explain me this resulting code
x = np.array([9, 5])
y = np.array([16, 12])
np.logical_or(x < 5, y > 15)
Result -> array([ True, False])
Since:
np.logical_or(x < 5, y > 15)
x < 5 = [False, False], y > 15 = [True, False]
I think it should be:
[False or False, False or True] = [False, True]
In fact Python giving the result like this:
[False or True, False or False] = [True, False]
Why is the result [True, False] and not [False, True]?
It doesn't seem to match the order of the operands in np.logical_or(x < 5, y > 15).
Even after reading a ChatGPT explanation, I still don't have a clear understanding of this concept.
Please, I would appreciate it if anyone could explain the background of this Python process in detail, step by step. | 1.2 | 1 | 1 | You are interpreting the result wrong. np.logical_or takes two boolean arrays and combines them element-wise with an or operator. In your case, the input arguments are x < 5 and y > 15.
The first argument x < 5 would lead to: [False, False]
The second argument y > 15 would lead to: [True, False]
The final result = [False or True, False or False] = [True, False] |
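You can check the element-by-element pairing yourself by printing the intermediate boolean arrays:

```python
import numpy as np

x = np.array([9, 5])
y = np.array([16, 12])

print(x < 5)   # [False False]
print(y > 15)  # [ True False]

# logical_or pairs the two arrays positionally:
# index 0: (9 < 5) or (16 > 15) -> False or True  -> True
# index 1: (5 < 5) or (12 > 15) -> False or False -> False
print(np.logical_or(x < 5, y > 15))  # [ True False]
```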
2023-03-25 10:38:03 | 2 | python,algorithm,fft | 1 | 75,841,689 | Issue with implementing inverse FFT for polynoms | 75,841,115 | true | 59 | I am studying the FFT algorithm for fast polynomial multiplication. We went over the algorithm and I decided to try and implement it in Python.
from typing import List
import numpy as np
def fft(p: List[int]) -> List[int]:
n = len(p)
if n == 1:
return p
unity_root = np.exp(2j * np.pi / n)
p_even = p[::2]
p_odd = p[1::2]
y_even = fft(p_even)
y_odd = fft(p_odd)
y = [0] * n
for j in range(n // 2):
omega = np.power(unity_root, j)
y[j] = y_even[j] + omega * y_odd[j]
y[n // 2 + j] = y_even[j] - omega * y_odd[j]
return y
def ifft(p: List[int]) -> List[int]:
n = len(p)
if n == 1:
return p
unity_root = (1 / n) * np.exp(-2j * np.pi / n)
p_even = p[::2]
p_odd = p[1::2]
y_even = ifft(p_even)
y_odd = ifft(p_odd)
y = [0] * n
for j in range(n // 2):
omega = np.power(unity_root, j)
y[j] = y_even[j] + omega * y_odd[j]
y[n // 2 + j] = y_even[j] - omega * y_odd[j]
return y
I tried running the following code to make sure it works
print(ifft(fft([1, 2, 3, 4])))
I expected the output to be the original list I started with as that list represents the coefficients, yet I am getting (ignoring precision issues with floating point arithmetic):
[(4+0j), (11-0j), (12+0j), (13+0j)]
My question is:
Shouldn't I be getting the original list? If I should be getting the original list, where is the mistake in the code? as I went over the code several times and I am having issues finding it. If I shouldn't be getting the original list back and my code is correct, what am I actually getting? | 1.2 | 3 | 1 | The problem is that, in ifft, you're dividing the root of unity by n. You need to divide the final result instead. |
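As a sketch of what that change could look like (keeping your fft as-is and only touching ifft; the inner helper name rec is my own choice):

```python
import numpy as np

def ifft(p):
    """Inverse FFT: run the unscaled inverse butterfly, then divide by n once."""
    def rec(p):
        n = len(p)
        if n == 1:
            return p
        unity_root = np.exp(-2j * np.pi / n)  # note: no 1/n factor here
        y_even = rec(p[::2])
        y_odd = rec(p[1::2])
        y = [0] * n
        for j in range(n // 2):
            omega = unity_root ** j
            y[j] = y_even[j] + omega * y_odd[j]
            y[n // 2 + j] = y_even[j] - omega * y_odd[j]
        return y
    n = len(p)
    return [value / n for value in rec(p)]
```

Scaling the root itself by 1/n changes the twiddle factors at every level of the recursion, which is why the intermediate sums came out wrong; the 1/n normalization belongs outside the recursion, applied exactly once to the final result.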
2023-03-25 10:40:16 | 1 | python,tensorflow,tensorflow2.0,tensor | 2 | 75,841,308 | How does argmax works for Three Dimensional array? | 75,841,131 | false | 49 | x=tf.constant ([
[[1,2,3],
[4,5,6]],
[[7,8,9],
[10,11,12]],
[[13,14,15],
[16,17,18]]])
#print(x)
print(tf.math.argmax(x,axis=0))
Result:
tf.Tensor(
[[2 2 2]
[2 2 2]], shape=(2, 3), dtype=int64)
How does argmax() works for 3D-Arrays??
please someone help!!! | 0.099668 | 1 | 1 | The result at (0,0) gives you the index (along axis=0) with the highest number.
So here you compare 1, 7 and 13. Since 13 is the largest number, the result at (0,0) = 2
For the result at (0,1) you compare 2, 8 and 14 and so on... |
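If it helps to experiment, NumPy's argmax follows the same axis semantics as tf.math.argmax, so you can reproduce the behavior without TensorFlow:

```python
import numpy as np

x = np.array([[[1, 2, 3],
               [4, 5, 6]],
              [[7, 8, 9],
               [10, 11, 12]],
              [[13, 14, 15],
               [16, 17, 18]]])  # shape (3, 2, 3)

# axis=0 removes the first dimension: for each of the 2x3 positions,
# compare the three stacked values and return the index of the largest.
print(np.argmax(x, axis=0))  # [[2 2 2]
                             #  [2 2 2]]

# axis=2 compares within each length-3 row instead -> result shape (3, 2)
print(np.argmax(x, axis=2))
```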
2023-03-25 11:01:00 | 1 | python,pandas,dataframe,display | 3 | 75,841,465 | Display a dataframe larger in python | 75,841,241 | false | 53 | I create a dataframe by a csv file.
df.to_csv('C:\\file.csv', index = False, header = True, encoding='latin1', sep = ';')
My dataframe is like :
Id | Text
1 | Pedro-Sanchez /n Gomez
It displays the first row across two rows: when a long text contains a space, the dataframe wraps it onto the next line.
I want to display my dataframe like :
Id | Text
1 | Pedro-Sanchez Gomez
I use pd.set_option('display.max_colwidth',None) but it doesn't work.
Can you please help me? | 0.066568 | 1 | 2 | you can use replace to remove this
df['Text'] = df['Text'].str.replace('\n', '') |
2023-03-25 11:01:00 | 0 | python,pandas,dataframe,display | 3 | 75,841,357 | Display a dataframe larger in python | 75,841,241 | false | 53 | I create a dataframe by a csv file.
df.to_csv('C:\\file.csv', index = False, header = True, encoding='latin1', sep = ';')
My dataframe is like :
Id | Text
1 | Pedro-Sanchez /n Gomez
It displays the first row across two rows: when a long text contains a space, the dataframe wraps it onto the next line.
I want to display my dataframe like :
Id | Text
1 | Pedro-Sanchez Gomez
I use pd.set_option('display.max_colwidth',None) but it doesn't work.
Can you please help me? | 0 | 1 | 2 | If I don't misunderstand your problem, you are trying to delete the /ns, you should use the regex for this.For example, you can customize it like this
res = []
for sub in test_list:
res.append(sub.replace("\n", "")) |
2023-03-25 16:55:11 | 1 | python,multiprocessing,python-multiprocessing | 3 | 75,843,413 | Python Multiprocessing Pool (Concurent Futures ProcessPoolExecutor) slow down with increasing number of workers | 75,843,201 | false | 349 | Problem description
Hi, I've got a computationally heavy function which I am running in parallel.
I've noticed that when using the concurrent futures ProcessPoolExecutor (or the multiprocessing Pool) the processes slows down when using more workers (or adding more tasks). I.e. when I run my code on 2 workers, the average execution time of a process is around 0.7s, however when using 16 workers (the cpu_count of my processor) the average execution time of the same process is of 6.7s.
A similar thing happens when running the same calculation on more tasks.
See the test code and results below.
Test code
import concurrent.futures
import os
from time import perf_counter
import numpy as np
def func(foo):
start_time = perf_counter()
long_calculation = np.random.random(size=100000000).std()
stop_time = perf_counter()
execution_time = stop_time - start_time
return execution_time
cpu_count = os.cpu_count()
assert cpu_count is not None
print(f"CPU Count: {cpu_count}")
# ======= Increasing Tasks ============
# max_workers = 10
# for tasks in [1, 2, 5, 10, 20, 50, 100, 1000]:
# ======= Increasing Max Workers ============
tasks = 50
for max_workers in range(1, cpu_count + 1):
with concurrent.futures.ProcessPoolExecutor(max_workers) as pool:
total_exec_time: float = 0.0
processes = pool.map(func, range(tasks))
for process_result in processes:
total_exec_time += process_result
print(
f"{tasks} tasks on {max_workers} workers - average process execution time: {round(total_exec_time / tasks, 2)}s"
)
The results
Increasing the number of max_workers
PU Count: 16
50 tasks on 1 workers - average process execution time: 0.65s
50 tasks on 2 workers - average process execution time: 0.76s
50 tasks on 3 workers - average process execution time: 0.97s
50 tasks on 4 workers - average process execution time: 1.16s
50 tasks on 5 workers - average process execution time: 1.6s
50 tasks on 6 workers - average process execution time: 1.9s
50 tasks on 7 workers - average process execution time: 2.29s
50 tasks on 8 workers - average process execution time: 2.67s
50 tasks on 9 workers - average process execution time: 3.02s
50 tasks on 10 workers - average process execution time: 3.5s
50 tasks on 11 workers - average process execution time: 4.0s
50 tasks on 12 workers - average process execution time: 4.73s
50 tasks on 13 workers - average process execution time: 5.37s
50 tasks on 14 workers - average process execution time: 5.66s
50 tasks on 15 workers - average process execution time: 6.07s
50 tasks on 16 workers - average process execution time: 6.71s
Increasing the number of tasks
10 workers with 1 tasks - average process execution time: 0.67s
10 workers with 2 tasks - average process execution time: 0.84s
10 workers with 5 tasks - average process execution time: 1.91s
10 workers with 10 tasks - average process execution time: 3.82s
10 workers with 20 tasks - average process execution time: 3.52s
10 workers with 50 tasks - average process execution time: 3.88s
10 workers with 100 tasks - average process execution time: 4.0s
10 workers with 1000 tasks - average process execution time: 3.92s
Results
Overall, the whole program runs faster, however it is slowing down progressively. I would like it to keep the same efficiency, no matter how many workers I use.
NOTE
The CPU usage is increasing linearly alongside the max_workers parameter.
The CPU usage is at 100% when using 15 or 16 max_workers (the idle CPU usage is around 7%) and my RAM has always plenty of free space.
Using multiprocessing
Side note: I get the exact same results using the multiprocessing Pool.
from multiprocessing import Pool
...
with Pool(processes=max_workers) as pool:
... | 0.066568 | 1 | 1 | Overall, the whole program runs faster, however it is slowing down progressively. I would like it to keep the same efficiency, no matter how many workers I use.
In short, you can't always do that. Each process is assigned a core with only around 5 MB of cache; if your process needs more memory than that, it has to go through RAM, which has limited bandwidth and is the reason for the bottleneck here. Python alone takes much more than 5 MB for its internal calls and will almost always be contending for RAM bandwidth. There are also other bottlenecks, such as the OS scheduler and context switching.
Lastly, there is thermal throttling, which happens as the CPU gets hotter, so you would need a second CPU to spread out the heat generation.
If you want the same per-worker speed as you increase the number of workers, you have to increase the number of RAM chips and CPUs, which ultimately means using cluster computing.
There are also faster methods, like not storing the random numbers in memory and consuming them as they are constructed, which can eliminate this contention.
Using a language that runs on bare metal, like C++, will also reduce the memory (and power) consumption, which reduces this contention (if you are not storing millions of numbers in memory).
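As an illustration of the "consume on construction" idea: generate the samples in cache-friendly chunks and accumulate running sums, so the full 100-million-element array never has to travel through RAM at once (a sketch; streaming_std and its chunk size are my own choices, not part of the original code):

```python
import numpy as np

def streaming_std(n_samples, chunk=262_144, seed=0):
    """Std of n_samples uniform random numbers, holding at most
    `chunk` of them (a few MB) in memory at any time."""
    rng = np.random.default_rng(seed)
    count, total, total_sq = 0, 0.0, 0.0
    remaining = n_samples
    while remaining > 0:
        block = rng.random(min(chunk, remaining))
        count += block.size
        total += block.sum()
        total_sq += np.square(block).sum()
        remaining -= block.size
    mean = total / count
    return np.sqrt(total_sq / count - mean * mean)

# theoretical std of uniform[0, 1) is 1/sqrt(12) ~ 0.2887
print(streaming_std(10_000_000))
```

Because each chunk fits in cache, the accumulation is far less sensitive to RAM bandwidth contention than building one giant array per worker.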
2023-03-26 01:28:22 | 3 | python-3.x,logging | 1 | 75,845,551 | logging basicConfig not having effect in __main__ | 75,845,421 | false | 21 | I am writing a package that has a __main__.py with something like:
def main():
logging.basicConfig(level=logging.INFO)
print(logging.getLogger(__name__))
if __name__ == "__main__":
main()
And I am running it like so:
python3 -m my_package
And seeing:
<Logger __main__ (ERROR)>
instead of INFO. And so none of my log messages are showing up on the screen.
Why is basicConfig not having effect? | 0.53705 | 2 | 1 | Another package I was importing was setting logging upfront, and my basicConfig settings were not overriding. I added force=True to my basicConfig call and it works now. |
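A minimal reproduction of the fix (the force= parameter requires Python 3.8+):

```python
import logging

# Simulate an imported package that configures the root logger first.
logging.basicConfig(level=logging.ERROR)

def main():
    # Without force=True this second call is a silent no-op, because the
    # root logger already has a handler; force=True removes the existing
    # handlers and applies the new configuration.
    logging.basicConfig(level=logging.INFO, force=True)
    logging.getLogger(__name__).info("now visible")  # effective level is INFO

if __name__ == "__main__":
    main()
```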
2023-03-26 14:54:27 | 0 | python,vpython | 1 | 75,869,167 | Location of VPython library | 75,848,452 | false | 31 | Where is vpython installed?
When I enter the command pip install vpython in the terminal, some output appears in the log, but in the end I don't see any files of this library in the root folder of the project; at the same time, I can import this library in my code | 0 | 1 | 1 | Like most Python modules, you'll find it in Lib/site-packages/vpython within your Python installation.
2023-03-26 17:56:39 | 2 | python,tkinter | 1 | 75,849,762 | In Tkinter is there any way to Zoom in/out on the whole of the GUI content? | 75,849,560 | true | 55 | Looking for similar functionality that a browser has? (ctrl:shift:+ for example)
I tried:
root.tk.call('tk', 'scaling', new_scale) but it didn't change anything.
Any suggestions greatly welcomed... | 1.2 | 1 | 1 | In Tkinter is there any way to Zoom in/out on the whole of the GUI content?
No, there is no way to do what you want. There is a way to scale items on a canvas, and you can scale fonts (and thus, widgets that depend on the size of a font) but you can't the UI as a whole. |
2023-03-26 23:19:47 | 0 | python,python-3.x,loops,while-loop | 3 | 75,851,172 | Close a While True Loop from another While True | 75,851,108 | false | 79 | Ok, I've been trying to mess with the code, and that is what I get. I've been for a long time trying to make an advanced calculator for (no particular reason, just practice). Until then OK. Also tried to make an exception zone, mainly for the ValueError for the coma instead of period on input.
Ok still ok 'till here. Also, after the error happens I added a loop, why? So after the exception, you can automatically start again at the calculator and the "app" does not just close. And it works! But now starts the issue with one feature.
If you see, inside the code there is an "exit" feature. So if you type exit, you can close the app. The issue is that the loop (While True) I made to keep the app running after an exception, is what now denies closing the app. So if you type "exit", the app restarts and go back to start the loop.
I've tried many things, most of them actually stupid, with dead-ends... So not worth describing.
I am really green not only at Python, but coding in general (this is my first code). So I don't know if I don't have knowledge enough to do what I want to achieve or if I am just missing something.
Thank you in advance!
from rich import print
from rich.table import Table
# With this loop, if an exception occurs, the "continue" makes the loop start again so the calculator doesn't shut down.
while True:
try:
# Set up the table used as a help legend
leyenda = Table("Comando", "Operador")
leyenda.add_row("+", "Suma")
leyenda.add_row("-", "Resta")
leyenda.add_row("*", "Multiplicación")
leyenda.add_row("/", "División decimal")
leyenda.add_row("help", "Imprime leyenda")
leyenda.add_row("exit", "Cerrar la app")
print("Calculadora de dos valores\n")
print(leyenda)
# Calculator loop so the app stays open
while True:
# Calculator input
dig1 = input("Introduce el primer número a calcular: ")
if dig1 == "exit":
print("¡Hasta pronto!")
break
elif dig1 == "help":
print(leyenda)
continue # the continue forces the loop to start again
operator = input("Introduce el operador: ")
if operator == "exit":
print("¡Hasta pronto!")
break
elif operator == "help":
print(leyenda)
continue
dig2 = input("Introduce el segundo número a calcular: ")
if dig2 == "exit":
print("¡Hasta pronto!")
break
elif dig2 == "help":
print(leyenda)
continue
# Convert the string (input) values to float
num1 = float(dig1)
num2 = float(dig2)
# Calculation zone (the calculator's engine)
if operator == "+":
print(f"{dig1} más {dig2} es igual a {num1 + num2}.\n")
if operator == "-":
print(f"{dig1} menos {dig2} es igual a {num1 - num2}.\n")
if operator == "*":
print(f"{dig1} multiplicado por {dig2} es igual a {num1 * num2}.\n")
if operator == "/":
print(f"{dig1} dividido entre {dig2} es igual a {num1 / num2}.\n")
except TypeError as error_tipo:
print("Error de tipo.\nDetalles del error:", error_tipo,".\n")
continue
except ValueError as error_valor:
print("Error de valor.\nDetalles del error:", error_valor)
print("Posible solución: Si para los decimales has usado la coma (,), usa el punto(.).\n")
continue
except SyntaxError as error_sintaxis:
print("Error de sintáxis.\nDetalles del error:", error_sintaxis,".\n")
continue
except:
print("Error general.\n")
continue
finally:
print("Reiniciando aplicación.\n") | 0 | 1 | 1 | You could avoid having to do this in the first place by changing up the design a bit to avoid a nested while loop. For example, your initialization doesn't have to happen more than once. The outer-most while loop is therefore unnecessary and exception handling can be moved to the inner loop.
Of course, you can use a flag to break the outer loop from within the inner loop as Pablo described, or you could simply shut down the program from inside the inner loop, e.g. using sys.exit, but it's best to instead try to simplify the design and flatten out the structure so you don't have to deal with this issue in the first place.
2023-03-27 00:34:26 | 0 | python,ubuntu,installation,pip | 1 | 75,851,359 | pip install: ModuleNotFoundError: No module named 'compileall' | 75,851,322 | false | 109 | I'm running Ubuntu 22.04 LTS ARM on a UTM VM using Apple Virtualization. When I run python -m pip install, it errors out as follows:
$ python -m pip install hg-git
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/lib/python3/dist-packages/pip/__main__.py", line 31, in <module>
sys.exit(_main())
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 68, in main
command = create_command(cmd_name, isolated=("--isolated" in cmd_args))
File "/usr/lib/python3/dist-packages/pip/_internal/commands/__init__.py", line 109, in create_command
module = importlib.import_module(module_path)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 14, in <module>
from pip._internal.cli.req_command import (
File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 21, in <module>
from pip._internal.index.package_finder import PackageFinder
File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 32, in <module>
from pip._internal.req import InstallRequirement
File "/usr/lib/python3/dist-packages/pip/_internal/req/__init__.py", line 8, in <module>
from .req_install import InstallRequirement
File "/usr/lib/python3/dist-packages/pip/_internal/req/req_install.py", line 39, in <module>
from pip._internal.operations.install.wheel import install_wheel
File "/usr/lib/python3/dist-packages/pip/_internal/operations/install/wheel.py", line 5, in <module>
import compileall
ModuleNotFoundError: No module named 'compileall'
How do I fix this error? I am particularly puzzled because another VM with the same settings does not have this issue. Thanks in advance!
Solutions I found online seem to suggest fixing the problem by running pip install compileall, but that results in the same error:
$ pip install compileall
Traceback (most recent call last):
File "/usr/bin/pip", line 8, in <module>
sys.exit(main())
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 68, in main
command = create_command(cmd_name, isolated=("--isolated" in cmd_args))
File "/usr/lib/python3/dist-packages/pip/_internal/commands/__init__.py", line 109, in create_command
module = importlib.import_module(module_path)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 14, in <module>
from pip._internal.cli.req_command import (
File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 21, in <module>
from pip._internal.index.package_finder import PackageFinder
File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 32, in <module>
from pip._internal.req import InstallRequirement
File "/usr/lib/python3/dist-packages/pip/_internal/req/__init__.py", line 8, in <module>
from .req_install import InstallRequirement
File "/usr/lib/python3/dist-packages/pip/_internal/req/req_install.py", line 39, in <module>
from pip._internal.operations.install.wheel import install_wheel
File "/usr/lib/python3/dist-packages/pip/_internal/operations/install/wheel.py", line 5, in <module>
import compileall
ModuleNotFoundError: No module named 'compileall' | 0 | 1 | 1 | You can try this command:
python -m pip install compileall2 |
2023-03-27 00:47:47 | 0 | python,python-3.x,python-3.9 | 2 | 75,851,383 | Python - Is there a built-in publisher/consumer pattern? | 75,851,364 | false | 58 | I was looking for a very simple, inline, publisher/consumer, or an event pattern, builtin in Python, is there such thing?
For example:
db/user.py
def create(**kwargs):
user = db.put('User', **kwargs)
publish('user.created', user)
admin/listeners.py
@consume('user.created')
def send_email_on_signup(user):
send_admin_mail(f'New user signup {user.name}') | 0 | 1 | 2 | For in-process event queue, the asyncio event loop can act as a very simple in-process event handling pattern.
Many web frameworks like Django and FastAPI come with their own event handling patterns, though. Django has signals and FastAPI has background tasks. But it's also pretty common to use something like Celery with both of these.
2023-03-27 00:47:47 | 0 | python,python-3.x,python-3.9 | 2 | 75,851,389 | Python - Is there a built-in publisher/consumer pattern? | 75,851,364 | false | 58 | I was looking for a very simple, inline, publisher/consumer, or an event pattern, builtin in Python, is there such thing?
For example:
db/user.py
def create(**kwargs):
user = db.put('User', **kwargs)
publish('user.created', user)
admin/listeners.py
@consume('user.created')
def send_email_on_signup(user):
send_admin_mail(f'New user signup {user.name}') | 0 | 1 | 2 | Python has a built-in module called queue which provides a thread-safe way to implement the producer-consumer pattern.
The queue module provides the Queue class, which implements a thread-safe FIFO queue. The Queue class has methods for adding items to the queue (put()), removing items from the queue (get()), and checking whether the queue is empty or full (empty(), full()).
In the producer-consumer pattern, producers add items to the queue, and consumers remove items from the queue. The queue module provides a convenient way to synchronize access to the queue so that producers and consumers don't interfere with each other. |
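As a self-contained sketch of that pattern, assuming a trivial doubling task stands in for the real "work" (everything here is standard library):

```python
import queue
import threading

q = queue.Queue(maxsize=5)   # bounded: put() blocks when 5 items are waiting
results = []

def producer():
    for i in range(10):
        q.put(i)             # blocks if the queue is full
    q.put(None)              # sentinel telling the consumer to stop

def consumer():
    while True:
        item = q.get()       # blocks until an item is available
        if item is None:
            break
        results.append(item * 2)   # placeholder "work"

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()

print(results)  # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Because the queue is bounded, the producer can never run far ahead of the consumer, which keeps memory usage flat.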
2023-03-27 09:12:35 | 0 | python,azure,azure-web-app-service | 1 | 75,960,742 | Azure app service - app not in root directory | 75,853,973 | true | 452 | I have a mono repo with more than one application in it. The application I'm trying to deploy is in the directory rest_api. The deploy as seen in github actions is successful, but start-up fails.
This is my start-up command gunicorn -w 1 -k uvicorn.workers.UvicornWorker main:app
This is what the github actions file look like:
name: Deploy rest_api (dev) to Azure
env:
AZURE_WEBAPP_NAME: 'xxx-rest-api'
PYTHON_VERSION: '3.11'
on:
push:
branches: [ "dev" ]
workflow_dispatch:
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python version
uses: actions/setup-python@v3.0.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip'
- name: Create and start virtual environment
working-directory: rest_api
run: |
python -m venv venv
source venv/bin/activate
- name: Install dependencies
working-directory: rest_api
run: |
pip install --upgrade pip
pip install -r requirements/base.txt
- name: Upload artifact for deployment jobs
uses: actions/upload-artifact@v3
with:
name: python-app
path: |
rest_api
!venv/
!rest_api/venv/
deploy:
permissions:
contents: none
runs-on: ubuntu-latest
needs: build
environment:
name: 'Production'
url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v3
with:
name: python-app
path: rest_api
- name: 'Deploy to Azure Web App'
id: deploy-to-webapp
uses: azure/webapps-deploy@v2
with:
app-name: ${{ env.AZURE_WEBAPP_NAME }}
publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_A79D58476BA645D1BF1A6116201A6E5F }}
package: rest_api
This is the error:
2023-03-27T08:52:28.865320556Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'gunicorn -w 1 -k uvicorn.workers.UvicornWorker main:app'
2023-03-27T08:52:28.925294508Z Cound not find build manifest file at '/home/site/wwwroot/oryx-manifest.toml'
2023-03-27T08:52:28.925870805Z Could not find operation ID in manifest. Generating an operation id...
2023-03-27T08:52:28.925880705Z Build Operation ID: d2e6df02-e71b-45fd-b84c-a0d9d8114831
2023-03-27T08:52:29.307010231Z Oryx Version: 0.2.20230103.1, Commit: df89ea1db9625a86ba583272ce002847c18f94fe, ReleaseTagName: 20230103.1
2023-03-27T08:52:29.354907333Z Writing output script to '/opt/startup/startup.sh'
2023-03-27T08:52:29.472048349Z WARNING: Could not find virtual environment directory /home/site/wwwroot/antenv.
2023-03-27T08:52:29.491003271Z WARNING: Could not find package directory /home/site/wwwroot/__oryx_packages__.
2023-03-27T08:52:30.408188883Z
2023-03-27T08:52:30.408226683Z Error: class uri 'uvicorn.workers.UvicornWorker' invalid or not found:
2023-03-27T08:52:30.408231383Z
2023-03-27T08:52:30.408234383Z [Traceback (most recent call last):
2023-03-27T08:52:30.408237183Z File "/opt/python/3.11.1/lib/python3.11/site-packages/gunicorn/util.py", line 99, in load_class
2023-03-27T08:52:30.408240383Z mod = importlib.import_module('.'.join(components))
2023-03-27T08:52:30.408243083Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-03-27T08:52:30.408245783Z File "/opt/python/3.11.1/lib/python3.11/importlib/__init__.py", line 126, in import_module
2023-03-27T08:52:30.408248583Z return _bootstrap._gcd_import(name[level:], package, level)
2023-03-27T08:52:30.408251383Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-03-27T08:52:30.408262283Z File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
2023-03-27T08:52:30.408265483Z File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
2023-03-27T08:52:30.408268383Z File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
2023-03-27T08:52:30.408271083Z File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
2023-03-27T08:52:30.408273883Z File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
2023-03-27T08:52:30.408276583Z File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
2023-03-27T08:52:30.408279383Z File "<frozen importlib._bootstrap>", line 1142, in _find_and_load_unlocked
2023-03-27T08:52:30.408282083Z ModuleNotFoundError: No module named 'uvicorn'
2023-03-27T08:52:30.408284783Z ]
My hunch is that the error relates to the app and the virtualenv not being in root. | 1.2 | 5 | 1 | I found out that the problem was that the requirements were not in rest_api/requirements.txt but in rest_api/requirements/requirements.txt.
2023-03-27 11:24:42 | 0 | multithreading,memory,memory-management,out-of-memory,python-itertools | 1 | 76,191,712 | How to optimize memory usage of itertools.combinations? | 75,855,173 | true | 51 | I want to generate all possible combinations for my missing letters of password.
I chose itertools.combinations over itertools.product because it produces about 4% of the latter and it's fast, but after running, my code gets killed because it runs out of memory.
def genb(lst, word_length):
with concurrent.futures.ThreadPoolExecutor(max_workers=multiprocessing.cpu_count()) as executor:
comb = ("".join(combination) for combination in itertools.combinations(lst, word_length))
for future in concurrent.futures.as_completed(executor.submit(combine, comb)):
result = future.result()
if result:
return result
return False
Is there a way to reduce memory usage of the above code?
I have 256GB of RAM and 64 threads running. | 1.2 | 1 | 1 | This memory issue is not related to the combinations() call which uses only a tiny, fixed amount of memory.
Instead, the issue is caused by having too many instances of concurrent futures.
A possible solution is to create only a handful of futures (each running a separate core) and to submit batches of password attempts to evaluate. |
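A rough sketch of that batching idea follows; note that check_batch and TARGET below are made-up stand-ins for the real password test, and a thread pool is used so the snippet stays self-contained (CPU-bound checking would normally use a ProcessPoolExecutor instead):

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

TARGET = "ace"  # made-up "password" for the demo

def check_batch(batch):
    # Stand-in for the real check; returns the match or None.
    for candidate in batch:
        if candidate == TARGET:
            return candidate
    return None

def crack(letters, length, batch_size=1000, workers=4):
    combos = ("".join(c) for c in itertools.combinations(letters, length))
    with ThreadPoolExecutor(max_workers=workers) as ex:
        while True:
            # Only `workers` batches exist at any moment, so memory stays
            # bounded no matter how many combinations the generator yields.
            batches = [list(itertools.islice(combos, batch_size))
                       for _ in range(workers)]
            batches = [b for b in batches if b]
            if not batches:
                return None   # generator exhausted, no match
            for found in ex.map(check_batch, batches):
                if found is not None:
                    return found

result = crack("abcdef", 3)
print(result)  # → ace
```

Because itertools.islice only pulls batch_size candidates at a time, the generator is consumed lazily and only a handful of futures and argument lists ever exist at once.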
2023-03-27 12:01:57 | 0 | python,import,jupyter-notebook,argparse | 2 | 75,855,670 | SystemExit: 2 : error when calling parse_args() | 75,855,527 | false | 416 | I am getting the following error. As I am new to python I don't understand how to solve this error. Thank you in advance.
" usage: ipykernel_launcher.py [-h] -i INPUT [-f F]
ipykernel_launcher.py: error: the following arguments are required: -i/--input
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2"
This is my code.
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input", required = True,
help = 'path to the input data')
parser.add_argument("-f", required=False)
args = vars(parser.parse_args()) | 0 | 1 | 1 | -i/--input is a required argument, but you don't pass any arguments when calling
args = vars(parser.parse_args())
Try passing the arguments explicitly as a list, e.g. with the folder of the required input data:
args = vars(parser.parse_args(["-i", "/folder"]))
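For example, here is a runnable sketch that works inside a notebook (the path /folder is just a placeholder; parse_args() takes a list of strings, not one space-separated string):

```python
import argparse

# Same parser as in the question.
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input", required=True,
                    help="path to the input data")
parser.add_argument("-f", required=False)

# In Jupyter, sys.argv belongs to the kernel, so pass the args explicitly.
args = vars(parser.parse_args(["-i", "/folder"]))
print(args["input"])  # → /folder
```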
2023-03-27 17:31:01 | 2 | python,geopandas,shapely | 1 | 75,858,819 | what does shapely_object.bounds do? | 75,858,702 | false | 34 | I am working with folium and shapely, and just came across a keyword named as bounds. Can anyone explain what does it exactly do.
I found an explanation but I don't understand the point behind it: circle_geom.bounds is the bounding box of the circle geometry, which is a tuple of (minx, miny, maxx, maxy) coordinates. | 0.379949 | 1 | 1 | One application could be if you wanted to place a smaller object at the exact center of a larger object.
For example, if you know the max and min x and y of your larger shape, and the same for another shape you wish to place in the direct center of your larger shape, simple calculations can place the smaller object within the larger object in that perfect center.
Does that help? |
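To make that concrete, here is a small sketch using plain (minx, miny, maxx, maxy) tuples in place of real shapely .bounds values (the numbers are invented for the example):

```python
big = (0.0, 0.0, 10.0, 10.0)   # e.g. big_shape.bounds in shapely
small = (0.0, 0.0, 2.0, 4.0)   # e.g. small_shape.bounds

def center(bounds):
    minx, miny, maxx, maxy = bounds
    return ((minx + maxx) / 2, (miny + maxy) / 2)

# Shift `small` so its center coincides with `big`'s center.
dx = center(big)[0] - center(small)[0]
dy = center(big)[1] - center(small)[1]
placed = (small[0] + dx, small[1] + dy, small[2] + dx, small[3] + dy)

print(placed)  # → (4.0, 3.0, 6.0, 7.0)
```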
2023-03-27 19:21:51 | 0 | python,discord.py | 1 | 75,868,288 | Slash command gives incorrect value for member.status | 75,859,619 | true | 48 | The following hybrid command gives varying results depending on if it is invoked as a commands.command or as a slash command:
@commands.hybrid_command()
async def test(self, ctx: commands.Context):
await ctx.send(f"Your status: {ctx.author.status}")
When invoked as !test, it correctly gives the member.status. However, when the command is invoked as a slash command, the result is always "offline".
Any slash command that I create cannot correctly determine member.status, and any commands.command() I create works correctly.
I have my intents set to discord.Intents.all(). I've tried resetting my permissions through the Discord Developer Portal, but I am mostly baffled as to why a normal command works perfectly and a slash command fails in the same scenario.
Edit
Using the object that returns from ctx.guild.get_member(ctx.author) the member.status value works as intended from within a slash command. For whatever reason, it just seems that the member object passed from ctx.author in a slash command does not have the same functionality as it normally does. | 1.2 | 2 | 1 | It seems this may just be a bug, as there is nothing inherently wrong with the code (according to the docs). A work-around would be to access the desired member through using guild.get_member() which always returns a functioning discord.Member object. |
2023-03-27 20:22:31 | 0 | azure-devops,azure-machine-learning-service,azureml-python-sdk | 1 | 75,860,302 | AzureML Cannot create a deployment in endpoint p2b-sample-endpoint because it is in Creating provisioning state | 75,860,055 | false | 104 | online_endpoint_name = "p2b-sample-endpoint"
# create an online endpoint
endpoint = ManagedOnlineEndpoint(
name=online_endpoint_name,
description="this is a sample online endpoint",
auth_mode="key",
tags={"foo": "bar"},
)
blue_deployment = ManagedOnlineDeployment(
name="blue",
endpoint_name=online_endpoint_name,
model=model,
environment=env,
code_configuration=CodeConfiguration(
code="./", scoring_script="score.py"
),
instance_type="Standard_DS2_v2",
instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(blue_deployment)
HttpResponseError: (UserError) Cannot create a deployment in endpoint p2b-sample-endpoint because it is in Creating provisioning state.
Code: UserError
Message: Cannot create a deployment in endpoint p2b-sample-endpoint because it is in Creating provisioning state.
Additional Information:Type: ComponentName
Info: {
"value": "managementfrontend"
}Type: Correlation
Info: {
"value": {
"operation": "295e1dadc1e11a2db8a470788ec6494f",
"request": "adb9c5f973b580b9"
}
}Type: Environment
Info: {
"value": "westeurope"
}Type: Location
Info: {
"value": "westeurope"
}Type: Time
Info: {
"value": "2023-03-27T20:16:26.6786058+00:00"
}Type: InnerError
Info: {
"value": {
"code": "BadArgument",
"innerError": {
"code": "EndpointNotReady",
"innerError": null
}
}
}Type: MessageFormat
Info: {
"value": "Cannot create a deployment in endpoint {endpointName} because it is in {state} provisioning state."
}Type: MessageParameters
Info: {
"value": {
"endpointName": "p2b-sample-endpoint",
"state": "Creating"
}
}
Also, no deployment logs found. folder is empty | 0 | 1 | 1 | Somewhere burried deepdown in the logs was gunicorn not found, I just had to make sure gunicorn was part part of requirements.txt and it worked |
2023-03-27 21:47:46 | 0 | python,peft | 1 | 76,131,301 | big_modeling.py not finding the offload_dir | 75,860,641 | false | 566 | I'm trying to load a large model on my local machine and trying to offload some of the compute to my CPU since my GPU isn't great (Macbook Air M2). Here's my code:
from peft import PeftModel
from transformers import AutoTokenizer, GPTJForCausalLM, GenerationConfig
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
offload_folder="/Users/matthewberman/Desktop/offload"
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B",
device_map="auto",
offload_folder=offload_folder,
quantization_config=quantization_config
)
model = PeftModel.from_pretrained(model, "samwit/dolly-lora", offload_dir=offload_folder)
However, I get this error:
ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded: base_model.model.transformer.h.10, base_model.model.transformer.h.11, base_model.model.transformer.h.12, base_model.model.transformer.h.13, base_model.model.transformer.h.14, base_model.model.transformer.h.15, base_model.model.transformer.h.16, base_model.model.transformer.h.17, base_model.model.transformer.h.18, base_model.model.transformer.h.19, base_model.model.transformer.h.20, base_model.model.transformer.h.21, base_model.model.transformer.h.22, base_model.model.transformer.h.23, base_model.model.transformer.h.24, base_model.model.transformer.h.25, base_model.model.transformer.h.26, base_model.model.transformer.h.27, base_model.model.transformer.ln_f, base_model.model.lm_head.
I am definitely pointing to a valid offload directory as the previous method uses offload_folder successfully (I see things being put in there).
What am I doing wrong? | 0 | 1 | 1 | Try adding " offload_folder='./', " to your peftModel.from_pretrained(...) argument. |
2023-03-28 00:33:57 | 1 | python,openai-api,gpt-4 | 6 | 75,886,590 | An error occurred: module 'openai' has no attribute 'ChatCompletion' | 75,861,442 | false | 2,891 | I'm trying to build a discord bot that uses the GPT-4 API to function as a chatbot on discord. I have the most recent version of the OpenAI library but when I run my code it tells me "An error occurred: module 'openai' has no attribute 'ChatCompletion'"
I tried uninstalling and reinstalling the OpenAI library, I tried using the completions endpoint and got the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
This is the snippet of code that's giving me issues:
async def get_gpt_response(prompt, history):
history_strings = [f"{message['role']}: {message['content']}" for message in history] # update history format
chat_prompt = '\n'.join(history_strings + [f"user: {prompt}"])
completions = openai.ChatCompletion.create(
engine=config["model"],
prompt=chat_prompt,
max_tokens=config["max_tokens"],
n=1,
temperature=config["temperature"],
)
return completions.choices[0].text.strip().split('assistant:', 1)[-1].strip() | 0.033321 | 2 | 5 | Make sure you have the latest OpenAI library. I had the same issue and resolved it by upgrading openai from version 26.5 to 27.2.
2023-03-28 00:33:57 | 3 | python,openai-api,gpt-4 | 6 | 76,084,452 | An error occurred: module 'openai' has no attribute 'ChatCompletion' | 75,861,442 | false | 2,891 | I'm trying to build a discord bot that uses the GPT-4 API to function as a chatbot on discord. I have the most recent version of the OpenAI library but when I run my code it tells me "An error occurred: module 'openai' has no attribute 'ChatCompletion'"
I tried uninstalling and reinstalling the OpenAI library, I tried using the completions endpoint and got the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
This is the snippet of code that's giving me issues:
async def get_gpt_response(prompt, history):
history_strings = [f"{message['role']}: {message['content']}" for message in history] # update history format
chat_prompt = '\n'.join(history_strings + [f"user: {prompt}"])
completions = openai.ChatCompletion.create(
engine=config["model"],
prompt=chat_prompt,
max_tokens=config["max_tokens"],
n=1,
temperature=config["temperature"],
)
return completions.choices[0].text.strip().split('assistant:', 1)[-1].strip() | 0.099668 | 2 | 5 | Make sure you don’t have a file called “openai.py” |
2023-03-28 00:33:57 | 2 | python,openai-api,gpt-4 | 6 | 76,226,451 | An error occurred: module 'openai' has no attribute 'ChatCompletion' | 75,861,442 | false | 2,891 | I'm trying to build a discord bot that uses the GPT-4 API to function as a chatbot on discord. I have the most recent version of the OpenAI library but when I run my code it tells me "An error occurred: module 'openai' has no attribute 'ChatCompletion'"
I tried uninstalling and reinstalling the OpenAI library, I tried using the completions endpoint and got the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
This is the snippet of code that's giving me issues:
async def get_gpt_response(prompt, history):
history_strings = [f"{message['role']}: {message['content']}" for message in history] # update history format
chat_prompt = '\n'.join(history_strings + [f"user: {prompt}"])
completions = openai.ChatCompletion.create(
engine=config["model"],
prompt=chat_prompt,
max_tokens=config["max_tokens"],
n=1,
temperature=config["temperature"],
)
return completions.choices[0].text.strip().split('assistant:', 1)[-1].strip() | 0.066568 | 2 | 5 | Yes, I had a file named openai.py; after changing the name, it runs.
2023-03-28 00:33:57 | 0 | python,openai-api,gpt-4 | 6 | 76,442,933 | An error occurred: module 'openai' has no attribute 'ChatCompletion' | 75,861,442 | false | 2,891 | I'm trying to build a discord bot that uses the GPT-4 API to function as a chatbot on discord. I have the most recent version of the OpenAI library but when I run my code it tells me "An error occurred: module 'openai' has no attribute 'ChatCompletion'"
I tried uninstalling and reinstalling the OpenAI library, I tried using the completions endpoint and got the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
This is the snippet of code that's giving me issues:
async def get_gpt_response(prompt, history):
history_strings = [f"{message['role']}: {message['content']}" for message in history] # update history format
chat_prompt = '\n'.join(history_strings + [f"user: {prompt}"])
completions = openai.ChatCompletion.create(
engine=config["model"],
prompt=chat_prompt,
max_tokens=config["max_tokens"],
n=1,
temperature=config["temperature"],
)
return completions.choices[0].text.strip().split('assistant:', 1)[-1].strip() | 0 | 2 | 5 | Be sure your Python version is 3.8 or 3.9. I used 3.6 and had the same issue; it was not until I upgraded that it worked correctly.
2023-03-28 00:33:57 | 2 | python,openai-api,gpt-4 | 6 | 76,136,773 | An error occurred: module 'openai' has no attribute 'ChatCompletion' | 75,861,442 | false | 2,891 | I'm trying to build a discord bot that uses the GPT-4 API to function as a chatbot on discord. I have the most recent version of the OpenAI library but when I run my code it tells me "An error occurred: module 'openai' has no attribute 'ChatCompletion'"
I tried uninstalling and reinstalling the OpenAI library, I tried using the completions endpoint and got the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
This is the snippet of code that's giving me issues:
async def get_gpt_response(prompt, history):
history_strings = [f"{message['role']}: {message['content']}" for message in history] # update history format
chat_prompt = '\n'.join(history_strings + [f"user: {prompt}"])
completions = openai.ChatCompletion.create(
engine=config["model"],
prompt=chat_prompt,
max_tokens=config["max_tokens"],
n=1,
temperature=config["temperature"],
)
return completions.choices[0].text.strip().split('assistant:', 1)[-1].strip() | 0.066568 | 2 | 5 | I experienced exactly the same error, even after a fresh installation of the OpenAI library. I ran the command below:
pip install --upgrade openai
which gave me the latest version, with ChatCompletion as a method.
2023-03-28 02:14:33 | 0 | python,reinforcement-learning,sumo,traffic-simulation | 1 | 75,869,684 | Flow vehicle problems: collision avoidance behavior | 75,861,834 | false | 31 | In main_UrbanRoadway.py, I generate 3 types of vehicle flows with the help of vehicle.add() defined in flow.core.params.py
Requirements
My requirements for vehicle flows are shown as follows:
Follow the specified acceleration and speed range even under emergency situations which may lead to collisions;
Allow for possible collisions to take place.
Problems occurred
If I set the speed mode as "aggressive" (e.g. SumoCarFollowingParams(speed_mode='aggressive')), will the generated flow follow the acceleration & speed limits I set in additional_params?
According to the docs, the "obey_safe_speed" mode prevents vehicles from colliding longitudinally, but can fail in cases where vehicles are allowed to lane change. If I set the speed mode as "obey_safe_speed", will the generated flow go beyond the limit to avoid possible longitudinal collisions?
One of my vehicle types is shown below:
vehicles = VehicleParams()
vehicles.add(veh_id="hdv",
lane_change_params=SumoLaneChangeParams('only_strategic_safe'),
car_following_params=SumoCarFollowingParams(speed_mode='aggressive', min_gap=5, tau=1, max_speed=MAX_HDV_SPEED),
acceleration_controller=(IDMController, {})
) | 0 | 1 | 1 | "aggressive" will not make SUMO limit accelerations, but the speed limit will still be respected.
Yes. |
2023-03-28 08:49:27 | 1 | python,machine-learning,keras,image-processing | 1 | 75,867,100 | Error calling ImageDataGenerator.flow in Keras | 75,864,119 | false | 52 | I am working on image augmentation with images having numeric target value for regression problem, and got the error "ValueError: Unsupported channel number: 150". The code and full stack trace is below.
x_dataset=[]
for file in glob.glob(img):
a=cv2.imread(file, 0)
a=cv2.resize(a, (150, 150))
a=a/255.
x_dataset.append(a)
datagen = ImageDataGenerator(
rotation_range=90, # rotate images up to 20 degrees
horizontal_flip=True, # flip images horizontally
vertical_flip=True # flip images vertically
)
X.shape
(1, 30, 150, 150)
y.shape
(1, 30)
i=0
for batch in datagen.flow (X, y, batch_size=32,
save_to_dir='aug',
save_format='png'):
i +=1
if i> 20:
break
ValueError Traceback (most recent call last)
<ipython-input-57-7ca22a93f1b2> in <module>
1 i=0
----> 2 for batch in datagen.flow (X, y, batch_size=32,
3 save_to_dir='aug',
4 save_format='png'):
5 i +=1
3 frames
/usr/local/lib/python3.9/dist-packages/keras/preprocessing/image.py in __next__(self, *args, **kwargs)
154
155 def __next__(self, *args, **kwargs):
--> 156 return self.next(*args, **kwargs)
157
158 def next(self):
/usr/local/lib/python3.9/dist-packages/keras/preprocessing/image.py in next(self)
166 # The transformation of images is not under thread lock
167 # so it can be done in parallel
--> 168 return self._get_batches_of_transformed_samples(index_array)
169
170 def _get_batches_of_transformed_samples(self, index_array):
/usr/local/lib/python3.9/dist-packages/keras/preprocessing/image.py in _get_batches_of_transformed_samples(self, index_array)
807 if self.save_to_dir:
808 for i, j in enumerate(index_array):
--> 809 img = image_utils.array_to_img(
810 batch_x[i], self.data_format, scale=True
811 )
/usr/local/lib/python3.9/dist-packages/keras/utils/image_utils.py in array_to_img(x, data_format, scale, dtype)
277 return pil_image.fromarray(x[:, :, 0].astype("uint8"), "L")
278 else:
--> 279 raise ValueError(f"Unsupported channel number: {x.shape[2]}")
280
281
ValueError: Unsupported channel number: 150
Please help me resolve this error. | 0.197375 | 1 | 1 | I see, the problem is in your shapes:
In X.shape you have (1, 30, 150, 150), but Keras expects (30, 150, 150, 1), which is (batch_size, height, width, channels).
I guess your images are grayscale, so they have only 1 channel.
And for the y shape you just need (30,).
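A quick sketch of the fix with NumPy, using zero arrays in place of the real images:

```python
import numpy as np

X = np.zeros((1, 30, 150, 150))   # shapes from the question
y = np.zeros((1, 30))

# (1, 30, 150, 150) -> (30, 150, 150, 1): drop the leading axis and
# add a trailing channel axis (1 channel for grayscale images).
X = X.reshape(30, 150, 150, 1)
y = y.reshape(30)

print(X.shape, y.shape)  # → (30, 150, 150, 1) (30,)
```

With these shapes, datagen.flow(X, y, ...) receives one grayscale channel per image, which is what the PNG writer in save_to_dir can handle.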
2023-03-28 13:58:17 | 0 | python,django | 3 | 75,867,465 | How to add another dictionary entry in a nested python dictionary | 75,867,263 | false | 53 | I would like to make a dictionary in the dictionary.
I have this code
dictionary = {}
for g in genre:
    total = 0
    products = Product.objects.filter(genre=g)
    for product in products:
        total += product.popularity
    dictionary[g.category] = {g.name: total}
I would like it to look like this, for example
{'book': {'Horror': 0, 'Comedy': 0}, 'cd': {'Disco': 0, 'Rap': 0}} | 0 | 1 | 1 | for g in genre:
    total = 0
    products = Product.objects.filter(genre=g)
    for product in products:
        total += product.popularity
    if g.category not in dictionary:
        dictionary[g.category] = {}
    dictionary[g.category][g.name] = total
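An equivalent way to avoid the membership check is collections.defaultdict. Since the Django models aren't available here, plain tuples stand in for the (g.category, g.name, total) values — a sketch, not the exact queryset code:

```python
from collections import defaultdict

dictionary = defaultdict(dict)

# stand-ins for (g.category, g.name, summed popularity)
totals = [("book", "Horror", 0), ("book", "Comedy", 0),
          ("cd", "Disco", 0), ("cd", "Rap", 0)]

for category, name, total in totals:
    # missing categories get an empty inner dict automatically
    dictionary[category][name] = total

print(dict(dictionary))
```

This prints {'book': {'Horror': 0, 'Comedy': 0}, 'cd': {'Disco': 0, 'Rap': 0}}, matching the desired shape.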
2023-03-28 15:43:21 | 3 | python,jupyter-notebook,jupyter,conda,environment | 2 | 75,868,937 | Deleted conda environments still appears in jupyter-lab. How can I remove them? | 75,868,353 | false | 111 | Formerly, I had a conda installation of Python in ~/opt/anaconda3 with 3 environments: DEV (python 3.9), PYT (python 3.11) and PROD (python 3.10)
I removed it (all of ~/opt deleted), and now I have a miniconda installation in ~/miniconda3, with one environment: PROD (python 3.10)
When I start jupyter-lab, it still offers to create notebooks with DEV, PYT and PROD. It evidently fails if I try to use one of the first two, saying that ~/opt/anaconda3/envs/DEV does not exist.
How can I get rid of the two nonexistent environments? Where is that information remembered?
Computer is Mac under OSX 13.2.1. jupyter-lab is 3.6.3.
Thanks,
Olivier | 0.291313 | 2 | 2 | Did you look at your ~/.bashrc file, which might still export the path of those environments in your python path?
Using the command jupyter kernelspec list, you can look at the different envs you created. If you see unwanted kernels, you can do jupyter kernelspec uninstall unwanted-kernel. |
2023-03-28 15:43:21 | 0 | python,jupyter-notebook,jupyter,conda,environment | 2 | 75,874,402 | Deleted conda environments still appears in jupyter-lab. How can I remove them? | 75,868,353 | false | 111 | Formerly, I had a conda installation of Python in ~/opt/anaconda3 with 3 environments: DEV (python 3.9), PYT (python 3.11) and PROD (python 3.10)
I removed it (all of ~/opt deleted), and now I have a miniconda installation in ~/miniconda3, with one environment: PROD (python 3.10)
When I start jupyter-lab, it still offers to create notebooks with DEV, PYT and PROD. It evidently fails if I try to use one of the first two, saying that ~/opt/anaconda3/envs/DEV does not exist.
How can I get rid of the two nonexistent environments? Where is that information remembered?
Computer is Mac under OSX 13.2.1. jupyter-lab is 3.6.3.
Thanks,
Olivier | 0 | 2 | 2 | I found the envs in ~/Library/Jupyter/kernels
@CamB04 : I found my solution without kernelspec. But yes, I see now that jupyter kernelspec ... is the right tool. Thanks :-)
2023-03-28 15:59:07 | 1 | python,nicegui | 1 | 75,869,613 | Why is NiceGUI displaying a notification before the function with the notification is called? | 75,868,505 | false | 220 | When I run this code, I'm expecting the "running action" notification to not be shown until the action function is called, similar to how the "handling upload" notification is not shown until the handle_upload function is called. I'm not sure why it works as expected with handle_upload, but not with action. When I upload a file, I see both notifications, "running action" and "handling upload". When I click the "Action the Data" button, I don't see any notification. Appreciate any advice/suggestions for getting the expected behavior (and better understanding how NiceGUI works).
@ui.page("/submit")
def submit():
    def action_it(data: bytes) -> None:
        ui.notify("running action")

    def handle_upload(file: events.UploadEventArguments) -> None:
        ui.notify("handling upload")
        data = file.content.read()
        lines = [line for line in data.splitlines() if line.strip()]
        output.clear()
        with output:
            ui.label('Lines uploaded:')
            for line in lines:
                ui.label(line.decode())
        ui.button("Action the Data", on_click=action_it(data))
        ui.button("Cancel")

    ui.upload(on_upload=handle_upload, auto_upload=True)
output = ui.column() | 0.197375 | 1 | 1 | @markalex pointed out that I wasn't passing a function delegate to on_click but was actually passing it the result of calling the function, and that is why I wasn't getting the behaviour I expected.
To get the behavior I wanted, I just needed to use a lambda function to call my function and pass it the argument:
ui.button("Action the Data", on_click=lambda: action_it(data)) |
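The same eager-versus-deferred difference can be shown without NiceGUI at all — a minimal sketch with a plain function standing in for the event handler:

```python
calls = []

def action_it(data):
    calls.append(data)

# eager: action_it runs immediately; the "handler" is its return value (None)
handler_wrong = action_it("data1")

# deferred: nothing runs yet; the handler is a callable invoked later
handler_right = lambda: action_it("data2")

print(handler_wrong)   # None - nothing useful to call later
handler_right()        # only now does the second call happen
print(calls)           # ['data1', 'data2']
```

This is why the "running action" notification fired at page build time (when on_click=action_it(data) was evaluated) and never on the actual click.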
2023-03-28 18:27:31 | 0 | python-3.x,tkinter | 1 | 75,883,863 | Python tkinter form display a few pixels off | 75,869,812 | false | 64 | I'm attempting to use tkinter to display a window in a specific point in the screen (0.0 = top/left, 1.0 = bottom/right). Unfortunately, the display is often approximately 8 pixels off from where it should be. As I want to be able to center the form (not its top left point), I've written a function to calculate the x and y positions with the form's dimensions. It uses win32api.GetMonitorInfo and win32api.MonitorFromPoint to calculate the monitor and taskbar's size so it does not generate the form under the taskbar.)
Since tkinter allows positioning using the geometry("form_width x form_height + x_offset + y_offset"), whenever I attempt to get it to generate at an offset of (0,0) it generates slightly too far to the right - a position of (-8,0) is needed to load the window at the top left of the screen with no gap.
Loading it at the bottom left (0,1) generates it slightly too high (there is a gap between the form and taskbar) and too far to the right. Similar problems exist for top right and bottom right as well. In short, it always generates about 8 pixels too far to the right, and if it's attempting to generate at the bottom of the workspace of the monitor, it generates 8 too high. (Generating at the top does not put it too high.)
When the positioning function is called after all elements have been added to the form, it calls .update() on the form to recalculate its required space, then uses .winfo_reqwidth() and .winfo_reqheight() to perform all necessary calculations, after which the result is passed to the window's geometry().
Why is the pixel offset problem caused? Is it due to some padding of the form's elements that update() does not account for, or is it a problem with my computer's display? (I don't think it is with the screen - all other applications function properly, but I could be wrong.)
The function to calculate the position of the form is below. Parameters: window - the form/toplevel to calculate the position, pos_xy - a list/tuple or enum (containing a tuple) for its position in the screen (top left 0,0; bottom right 1,1).
def new_calc_position(self, window, pos_xy):
    # if size of desktop minus taskbar has not been calculated, calculate it
    if self.work_area is None:
        monitor_info = GetMonitorInfo(MonitorFromPoint((0, 0)))
        workspace = monitor_info.get("Work")
        self.work_area = [workspace[2], workspace[3]]  # [width, height]
        print(f"Work area: {workspace[2]}, {workspace[3]}")
        monitor_area = monitor_info.get("Monitor")
        self.taskbar_height = monitor_area[3] - workspace[3]
        print(f"Monitor area: {monitor_area[2]}, {monitor_area[3]}")
        print(f"Taskbar height: {self.taskbar_height}")
    # test that pos_xy is enum Position or list containing two floats 0.0 <= x <= 1.0
    if type(pos_xy) is Position:
        pos_xy = pos_xy.value  # convert enum to list for use
    elif isinstance(pos_xy, (list, tuple)) and len(pos_xy) == 2 and all(isinstance(el, float) and
                                                                        0.0 <= el <= 1.0 for el in pos_xy):
        pass  # is list/tuple, length of 2, and all numbers are floats 0.0 <= x <= 1.0
    else:
        raise TypeError("pos_xy must be of type Position or a list containing two numbers between 0.0 and 1.0")
    window.withdraw()  # stop window from showing
    window.update()  # force it to update its required size; would flash to screen if not withdrawn
    window.deiconify()  # allow window to show
    # calculate targeted position
    target_width = self.work_area[0] * pos_xy[0]
    target_height = self.work_area[1] * pos_xy[1]
    print("Monitor width: " + str(self.work_area[0]))
    print("Monitor height: " + str(self.work_area[1]))
    print("Width percent: " + str(pos_xy[0]))
    print("Height percent: " + str(pos_xy[1]))
    print("Width target: " + str(target_width))
    print("Height target: " + str(target_height))
    # calculate required width and height
    width = window.winfo_reqwidth()
    height = window.winfo_reqheight()
    print("Form width: " + str(width))
    print("Form height: " + str(height))
    # TODO create a lock parameter to allow a certain generation point
    x_offset = int(target_width - (width / 2))
    y_offset = int(target_height - (height / 2))
    print("Initial xoffset: " + str(x_offset))
    print("Initial yoffset: " + str(y_offset))
    # bounce window to display entirely in screen; assume will not overlap both sides in one dimension
    if x_offset < 0:  # too far to left
        x_offset = 0
    elif x_offset + width > self.work_area[0]:  # too far to right
        x_offset = self.work_area[0] - width
    if y_offset < 0:
        y_offset = 0
    elif y_offset + height > self.work_area[1]:
        y_offset = self.work_area[1] - self.taskbar_height - height
    print("Ending xoffset: " + str(x_offset))
    print("Ending yoffset: " + str(y_offset))
    print(f"{width}x{height}+{x_offset}+{y_offset}")
    return f"{width}x{height}+{x_offset}+{y_offset}"
I had never noticed this before, but on my Windows 10 machine, an x offset of 0 and y offset of 0 will place the root window correctly at the top of the screen. But it sits about 7 pixels (in my case) to the right, leaving a small gap at the left hand side of the screen. Setting the x offset to -7 will correctly position the root window at the left side of the screen with no gap.
Note also that you can grab and move the root window to the left side of the screen with the mouse. So whether you use a negative offset, or move the window manually, it has no problem sitting correctly at the left side of the screen when forced to do so.
Other Windows applications do sit correctly at the left hand side of the screen when maximized, so this narrows it down to either a quirk of tkinter, or an interaction between Windows and tkinter.
If this behavior is the same on Linux and Mac, then it is a quirk of tkinter. If not, then it is a Windows + tkinter issue.
Most importantly, your code is not the problem. |
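If you want to compensate in code rather than by eye, one workaround is to subtract the measured border width when building the geometry string. The 7 px value below is an empirical guess from this answer, not a tkinter constant — measure it on your own machine:

```python
BORDER_X = 7  # assumption: invisible left border measured on this Windows 10 machine

def geometry_string(width, height, x, y, border_x=BORDER_X):
    # shift left by the border so the visible frame lands at x
    return f"{width}x{height}+{x - border_x}+{y}"

print(geometry_string(300, 200, 0, 0))  # 300x200+-7+0
```

A "+-7" x offset is valid geometry syntax and places the window 7 pixels past the left screen edge, hiding the phantom gap.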
2023-03-29 02:38:49 | 1 | python,geopandas,epsg | 1 | 76,112,522 | geopandas to_crs() return ProjError: x, y, z, and time must be same size | 75,872,652 | false | 789 | How can I convert this geodataframe with polygons in it from epsg:32748 to epsg:4326?
I have a geodataframe called poi_shp_utm_buffered which looks like this
| | PlaceID | Official_N | Primary__1 | Full_Stree | Overall_Sc | X | Y | geometry |
|--:|------------------------------------------:|-----------:|-----------:|-----------:|-----------:|--------------:|-------------:|--------------------------------------------------:|
| 0 | 360qqu5b-409576faa57505ae78aa1cd551661af6 | a | 1 | 1 | 2 | 709469.120296 | 9.299854e+06 | POLYGON ((709479.120 9299853.811, 709479.072 9... |
| 1 | 360qqu5b-5ec43f6ad613e60c15e3c4779f9c003d | b | 1 | 1 | 2 | 709369.905462 | 9.299615e+06 | POLYGON ((709379.905 9299615.157, 709379.857 9... |
| 2 | 360qqu5b-7c11918bf9754eb6841c838f3337b783 | c | 2 | 2 | 1 | 707546.465569 | 9.300030e+06 | POLYGON ((707556.466 9300030.011, 707556.417 9... |
When I extract the crs, it gives me
>>> poi_shp_utm_buffered.crs
<Projected CRS: EPSG:32748>
Name: WGS 84 / UTM zone 48S
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: World - S hemisphere - 102°E to 108°E - by country
- bounds: (102.0, -80.0, 108.0, 0.0)
Coordinate Operation:
- name: UTM zone 48S
- method: Transverse Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
The error:
poi_shp_utm_buffered.to_crs(4326)
ProjError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_15024\2103948270.py in
2 # __ = poi_shp_utm_buffered
3
----> 4 __ = poi_shp_utm_buffered.drop(columns = ['X','Y']).to_crs(4326)
c:\Users\arasyidi\Anaconda3\envs\python_ds_gis\lib\site-packages\geopandas\geodataframe.py in to_crs(self, crs, epsg, inplace)
1362 else:
1363 df = self.copy()
-> 1364 geom = df.geometry.to_crs(crs=crs, epsg=epsg)
1365 df.geometry = geom
1366 if not inplace:
c:\Users\arasyidi\Anaconda3\envs\python_ds_gis\lib\site-packages\geopandas\geoseries.py in to_crs(self, crs, epsg)
1122 """
1123 return GeoSeries(
-> 1124 self.values.to_crs(crs=crs, epsg=epsg), index=self.index, name=self.name
1125 )
1126
c:\Users\arasyidi\Anaconda3\envs\python_ds_gis\lib\site-packages\geopandas\array.py in to_crs(self, crs, epsg)
777 transformer = Transformer.from_crs(self.crs, crs, always_xy=True)
778
--> 779 new_data = vectorized.transform(self.data, transformer.transform)
...
432 iny,
pyproj/_transformer.pyx in pyproj._transformer._Transformer._transform()
ProjError: x, y, z, and time must be same size
This is my geopandas version
>>> gpd.show_versions()
SYSTEM INFO
-----------
python : 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)]
executable : c:\Users\arasyidi\Anaconda3\envs\python_ds_gis\python.exe
machine : Windows-10-10.0.22000-SP0
GEOS, GDAL, PROJ INFO
---------------------
GEOS : None
GEOS lib : None
GDAL : 3.6.2
GDAL data dir: c:\Users\arasyidi\Anaconda3\envs\python_ds_gis\lib\site-packages\pyogrio\gdal_data\
PROJ : 6.2.1
PROJ data dir: C:\Users\arasyidi\Anaconda3\envs\python_ds_gis\Library\share\proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.2
numpy : 1.21.5
pandas : 1.4.3
pyproj : 2.6.1.post1
shapely : 1.8.4
fiona : None
geoalchemy2: None
geopy : None
matplotlib : 3.5.2
mapclassify: 2.5.0
pygeos : 0.9
pyogrio : 0.5.1
psycopg2 : None
pyarrow : 11.0.0
rtree : 0.9.7
P.S. If I check using conda list in my environment, I actually have fiona installed, but somehow gpd.show_versions() can't detect it. | 0.197375 | 4 | 1 | Updating PROJ and pyproj to the latest versions, as suggested in the GitHub error report, fixed this issue.
2023-03-29 04:51:11 | 2 | python,opencv,camera,distortion,camera-intrinsics | 1 | 75,873,422 | How does camera distortion coefficients and camera intrinsic parameters change after image crop or resize? | 75,873,241 | false | 191 | I am trying to make some changes to an image (crop, resize, undistort) and I want to know how the distortion coefficients and camera intrinsic parameters change after that.
Origin Image shape = [848, 480]
camera matrix = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
distortion coefficients = [k1, k2, p1, p2]
crop
[848, 480] -> [582, 326]
fx, fy : no changes
cx, cy : cx -133, cy - 77
distortion coefficients -> ??
resize
[582, 326] -> [848, 480]
fx, cx -> 1.457fx, 1.457cx
fy, cy -> 1.472fy, 1.472cy
[k1, k2, p1, p2] -> [k1, k2, p1, p2] same
undistort
fx, fy, cx, cy -> same
[k1, k2, p1, p2] -> [0, 0, 0, 0]
Does anyone know the answer?
I tried calibrating with my camera and got the results below, but I don't know the exact equations.
origin
fx = 402.242923
fy = 403.471056
cx = 426.716067
cy = 229.689399
k1 = 0.068666
k2 = -0.039624
p1 = -0.000182
p2 = -0.001510
crop
fx = 408.235312 -> almost no change
fy = 409.653612 -> almost no change
cx = 297.611639 -> cx - 133
cy = 153.667098 -> cy - 77
k1 = 0.048520 -> I don't know
k2 = -0.010035 -> I don't know
p1 = 0.000943 -> I don't know
p2 = -0.000870 -> I don't know
crop_resize
fx = 598.110106 -> almost * 1.457
fy = 608.949995 -> almost * 1.472
cx = 430.389861 -> almost * 1.457
cy = 226.585804 -> almost * 1.472
k1 = 0.054762 -> I don't know
k2 = -0.025597 -> I don't know
p1 = 0.002752 -> I don't know
p2 = -0.001316 -> I don't know
undistort
fx = 404.312916 -> almost same
fy = 405.544033 -> almost same
cx = 427.986926 -> almost same
cy = 229.213162 -> almost same
k1 = -0.000838 -> almost 0
k2 = 0.001244 -> almost 0
p1 = -0.000108 -> almost 0
p2 = 0.000769 -> almost 0 | 0.379949 | 1 | 1 | All the parts you marked "I don't know" will be "same (not changed)".
This is because cropping and resizing are representable by changes to (fx, fy, cx, cy) alone: the distortion coefficients are defined in normalized (pre-pixel) coordinates, so these operations on the pixel grid leave (k1, k2, p1, p2) untouched.
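The bookkeeping can be written out directly. This is a sketch of the standard pinhole-model update (numbers taken from the question; the distortion vector is passed through untouched, matching the answer):

```python
import numpy as np

def crop_intrinsics(K, dist, x0, y0):
    """Crop that removes x0 pixels from the left and y0 from the top."""
    K2 = K.copy()
    K2[0, 2] -= x0  # cx shifts by the crop offset
    K2[1, 2] -= y0  # cy shifts by the crop offset
    return K2, dist  # k1, k2, p1, p2 unchanged

def resize_intrinsics(K, dist, sx, sy):
    """Resize by scale factors sx (width) and sy (height)."""
    K2 = K.copy()
    K2[0, 0] *= sx; K2[0, 2] *= sx  # fx, cx scale with width
    K2[1, 1] *= sy; K2[1, 2] *= sy  # fy, cy scale with height
    return K2, dist  # distortion coefficients unchanged

K = np.array([[402.24, 0.0, 426.72],
              [0.0, 403.47, 229.69],
              [0.0, 0.0, 1.0]])
dist = np.array([0.068666, -0.039624, -0.000182, -0.001510])

K_crop, dist_crop = crop_intrinsics(K, dist, 133, 77)
K_resized, dist_resized = resize_intrinsics(K_crop, dist_crop, 848 / 582, 480 / 326)
print(K_resized.round(2))
```

The scale factors 848/582 ≈ 1.457 and 480/326 ≈ 1.472 reproduce the multipliers observed in the recalibration.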
2023-03-29 11:50:08 | 0 | python,html,css,flask,web-hosting | 1 | 75,881,240 | Hosting a flask web app with a domain I already own | 75,876,820 | true | 115 | I needed some information from you, since I haven't found much on the internet.
I recently bought a domain on godaddy.com and I wanted to use it to host my web app developed in Flask.
Given the price spent to buy the domain, I wanted to avoid spending money on hosting as well (if it could be avoided).
I tried with Netlify, but it doesn't allow you to host apps developed in Flask (it gives me a 404 error every time), a pity because it would have been very convenient, since I could very easily configure the web app with my domain.
Can anyone tell me a possible free solution? I repeat, I already have the domain, I just need a hosting service.
Thanks in advance to those who will reply. | 1.2 | 1 | 1 | If you're looking for free options, your best bets are Render.com or pythonanywhere, as has already been mentioned.
Personally I'd recommend Render as I have, like you, been looking for places to host my flask apps. I used to use Heroku which was perfect but once they removed the free tier I moved away.
It depends on the complexity of your flask app as to what will work with Render, but one I'm hosting uses their PostgreSQL without any issues. It's really simple to set up too if you have your repository on GitHub with their integrations.
I have one however that doesn't properly work but I think that's more to do with limitations on the free tier and how I've coded it to work. I also tried Netlify but it doesn't seem to like Flask or Python very much. |
2023-03-29 16:04:16 | 0 | python,scikit-learn,logistic-regression | 1 | 75,882,253 | Why does adding duplicated features improve Logistic Regression accuracy? | 75,879,613 | true | 27 | from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import numpy as np
X, y = load_iris(return_X_y=True)
for i in range(5):
    X_redundant = np.c_[X, X[:, :i]]  # repeating redundant features
    print(X_redundant.shape)
    clf = LogisticRegression(random_state=0, max_iter=1000).fit(X_redundant, y)
    print(clf.score(X_redundant, y))
Output
(150, 4)
0.9733333333333334
(150, 5)
0.98
(150, 6)
0.98
(150, 7)
0.9866666666666667
(150, 8)
0.9866666666666667
Question: Why is the score (default being Accuracy) increasing as more redundant features are added for Logistic Regression?
I expect the score to remain the same, by drawing analogies from LinearRegression's behaviour.
If it were LinearRegression, the score (default R2) would not change as more columns are added, because LinearRegression evenly distributes the coefficient between the redundant copies
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression
import numpy as np
X, y = load_iris(return_X_y=True)
X, y = X[:,:-1],X[:,-1]
for i in range(4):
    X_redundant = np.c_[X, X[:, :i]]  # repeating redundant features
    print(X_redundant.shape)
    clf = LinearRegression().fit(X_redundant, y)
    print(clf.score(X_redundant, y))
    print(clf.coef_)
Output
(150, 3)
0.9378502736046809
[-0.20726607 0.22282854 0.52408311]
(150, 4)
0.9378502736046809
[-0.10363304 0.22282854 0.52408311 -0.10363304]
(150, 5)
0.9378502736046809
[-0.10363304 0.11141427 0.52408311 -0.10363304 0.11141427]
(150, 6)
0.9378502736046809
[-0.10363304 0.11141427 0.26204156 -0.10363304 0.11141427 0.26204156] | 1.2 | 1 | 1 | This is because LogisticRegression applies regularization by default. Set penalty="none" or penalty=None (depending on your version of sklearn) and you should see the behavior you expected. |
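One way to see the mechanism behind this answer: duplicating a feature lets the model split the weight across the copies, which keeps the decision function identical while lowering the L2 penalty — so the default regularization is effectively weakened and the fit can move closer to the unregularized one. A small arithmetic sketch (not sklearn-specific):

```python
w = 1.0  # weight a model would put on a single feature

# L2 penalty with one feature carrying the whole weight
penalty_single = w ** 2            # 1.0

# duplicate the feature and split the weight evenly: the contribution
# w*x == (w/2)*x + (w/2)*x is unchanged, but the penalty halves
penalty_split = 2 * (w / 2) ** 2   # 0.5

assert penalty_split == penalty_single / 2
```

With k copies the penalty drops to w**2 / k, so each added duplicate relaxes the constraint a little more — consistent with the small accuracy gains seen in the question.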
2023-03-29 16:21:04 | 1 | python,pandas,drop-duplicates | 3 | 75,879,955 | Dropping duplicate rows in a Pandas DataFrame based on multiple column values | 75,879,794 | false | 77 | In a dataframe I need to drop/filter out duplicate rows based on the combined columns A and B. In the example DataFrame
A B C D
0 1 1 3 9
1 1 2 4 8
2 1 3 5 7
3 1 3 4 6
4 1 4 5 5
5 1 4 6 4
rows 2 and 3, and rows 4 and 5 (by index), are duplicates, and from each pair one should be dropped, keeping the row with the lowest value of
2 * C + 3 * D
To do this, I created a new temporary score column, S
df['S'] = 2 * df['C'] + 3 * df['D']
and finally to return the index of the minimum value for S
df.loc[df.groupby(['A', 'B'])['S'].idxmin()]
del df['S']
The result is
A B C D
0 1 1 3 9
1 1 2 4 8
3 1 3 4 6
5 1 4 6 4
However, is there a more efficient way of doing this, without having to add (and later drop) a new column? | 0.066568 | 1 | 1 | df.drop_duplicates(subset=['A','B'],inplace=True)
By default, drop_duplicates drops rows (axis=0) and keeps the first occurrence of each (A, B) pair.
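One way to get the min-score row without ever attaching a temporary column is to build the score as a standalone Series and group it by the key columns — a sketch using the question's example data:

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 1, 1, 1, 1, 1],
    "B": [1, 2, 3, 3, 4, 4],
    "C": [3, 4, 5, 4, 5, 6],
    "D": [9, 8, 7, 6, 5, 4],
})

score = 2 * df["C"] + 3 * df["D"]              # never added to df
keep = score.groupby([df["A"], df["B"]]).idxmin()
result = df.loc[keep]
print(result)
```

This keeps rows 0, 1, 3 and 5, matching the expected output, while drop_duplicates alone would keep the first row of each pair regardless of score.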