Question
stringlengths
114
20.6k
A_Id
int64
75.3M
76.6M
Title
stringlengths
16
149
is_accepted
bool
2 classes
Available Count
int64
1
5
AnswerCount
int64
1
12
Tags
stringlengths
6
76
ViewCount
int64
13
82.6k
Q_Id
int64
75.3M
76.2M
Answer
stringlengths
30
9.2k
Users Score
int64
-3
17
Score
float64
-0.38
1.2
CreationDate
stringlengths
19
19
Q_Score
int64
0
46
In this piece of code, I could write a simple except clause without writing Exception in front of it. I mean the last line could be like this : except: print('Hit an exception other than KeyError or NameError!') What is the point of writing Exception in front of an except clause ? try: discounted_price(instrument, discount) except KeyError: print("There is a keyerror in your code") except NameError: print('There is a TypeError in your code') except Exception: print('an exception occured') I tried writing an except clause without Exception keyword and it worked the same. Thank you guys for your answers . I know the point of catching specific errors. If I want to ask more clearly , what is the difference between two clauses : except Exception: print('an exception occured') except : print('an exception occured')
75,304,591
Difference between bare except and specifying a specific exception
false
1
5
python,exception,try-catch
130
75,304,491
The purpose of writing "Exception" in front of an except clause is to catch all possible exceptions that can occur in the code. By specifying "Exception", you are telling the interpreter to handle any type of exception that might be raised. The more specific the exception specified in the except clause, the more targeted the handling of the exception can be. For example, if you only want to handle "KeyError" exceptions, you can specify that explicitly in the except clause, as in the first example.
0
0
2023-02-01 00:25:39
3
`I'm using in edge impulse FOMO I know that object detection fps is 1/inference time my model's time per inference is 2ms so object detection is 500fps but my model run vscode fps is 9.5 what is the difference between object detection fps and video fps ?
75,305,707
what is the difference between object detection fps and video fps?
true
1
1
python,deep-learning,frame-rate
20
75,305,535
If I understand correctly, your object detection fps indicates the number of frames (or images) that your model, given your system, can process in a second. A video fps in your input source's frames per second. For example, if your video has an fps (also referred to as framerate) of 100, then your model would be able to detect objects in all of the frames in 100ms (or 1/10 of a second). In your case, your video input source seems to have 9.5 frames in a second. This means that your model, given your system, will process 1-second wort of a video in about ~20ms.
0
1.2
2023-02-01 04:05:23
0
Recently I have install python 3.9.9 in my windows 10.it want show the path I have typed cmd promt "Wchich Python" it want show
75,305,674
How to identify python in windows 10
false
3
4
python
36
75,305,542
In Command Prompt, either which python or where python will print the path to your python executable. If which python or where python does not show the path to your Python executable it is likely that it is not in your PATH variable. To add your executable to the PATH variable you, search for Environment Variables in the Settings application. This will open the Advanced tab in System Properties. Click the Environment Variables button towards the bottom. You can then edit the PATH variable to include the path to your Python executable. Once you have applied the changes and restarted Command Prompt you can then run which python or where python to confirm your changes have taken effect.
0
0
2023-02-01 04:06:10
0
Recently I have install python 3.9.9 in my windows 10.it want show the path I have typed cmd promt "Wchich Python" it want show
75,305,623
How to identify python in windows 10
false
3
4
python
36
75,305,542
Just type python or python3 in cmd
0
0
2023-02-01 04:06:10
0
Recently I have install python 3.9.9 in my windows 10.it want show the path I have typed cmd promt "Wchich Python" it want show
75,305,641
How to identify python in windows 10
false
3
4
python
36
75,305,542
You can use in your cmd where python It will show you the path of all installed python in your device
0
0
2023-02-01 04:06:10
0
How does one create a python file from pycharm terminal.In VS Code they use "code {name}" so I want something similar to that but in pycharm. I am getting an error "zsh:command not found:code"
75,308,329
creating python files from pycharm terminal
false
1
2
python,terminal,pycharm
30
75,308,086
Setting->Keymap Search "new" Under "Pyhon Community Edition" there will be an option for "Python File" Add a new shortcut to this option (SHIFT+N is usually unassigned)
0
0
2023-02-01 09:30:54
0
I'm trying to get data from csv and output it to the console (ie, command line). I have 30 columns, but I can only output 5 to 6 columns. df = pd.read_csv(csv_raw) print(df.head()) date level mark source 0 2022-01-01 A 1 facebook 1 2022-01-01 B 2 facebook 2 2022-01-01 C 12 facebook 3 2022-01-01 D 53 facebook 4 2022-01-01 T 22 facebook If I display all 30 columns it turns out like this: print(df.head(30)) date ... source 0 2022-01-01 ... facebook 1 2022-01-01 ... facebook 2 2022-01-01 ... facebook 3 2022-01-01 ... facebook 4 2022-01-01 ... facebook 5 2022-01-01 ... facebook when i try pd.options.display.max_columns = 50 it returns me like that: date level clicks \ 0 2022-01-01 A 1 1 2022-01-01 B 2 2 2022-01-01 C 12 3 2022-01-01 D 53 4 2022-01-01 T 22 5 2022-01-01 Free trial, upgrade to basic at https://www.wi... 1 source 0 facebook 1 facebook 2 facebook 3 facebook 4 facebook 5 facebook Is it possible somehow to display more than 5 columns as in the first case?
75,310,017
How to print up to 40 rows in DataFrame
true
1
1
python,pandas,dataframe
61
75,309,424
There are 3 dataframe settings to be set to display the desired output (1) Set the overall width (number of characters) pd.options.display.width = 500 pd.options.display.width = None #for unlimited (2) Set the maximum columns count (number of columns) pd.options.display.max_columns = 50 pd.options.display.max_columns = None #for unlimited (3) Set the maximum width of each column (number of characters) pd.options.display.max_colwidth = 30 pd.options.display.max_colwidth = None #for unlimited There is a row (index 5) having the value Free trial, upgrade to basic at https://www.wi... which is making a mess of the columns. To delete this row, use: df.drop(5, inplace=True)
1
1.2
2023-02-01 11:20:50
2
Hej, I have the following code snippet and I don't understand the output: a = "foo" b = "foo" c = "bar" foo_list = ["foo", "bar"] print(a == b in foo_list) # True print(a == c in foo_list) # False --- Output: True False The first output is True. I dont understand it because either a == b is executed first which results in True and then the in operation should return False as True is not in foo_list. The other way around, if b in foo_list is executed first, it will return True but then a == True should return False. I tried setting brackets around either of the two operations, but both times I get False as output: print((a == b) in foo_list) # False print(a == (b in foo_list)) # False --- Output: False False Can somebody help me out? Cheers!
75,309,861
Explaining the Output of Comparison Expressions Involving Strings and Lists in Python
false
1
1
python,order-of-execution,in-operator
36
75,309,537
Ah, thanks @Ture Pålsson. The answer is chaining comparisons. a == b in foo_listis equivalent to a == b and b in foo_list where a==b is True and b in foo_list is True. If you set brackets, there will be no chaining.
0
0
2023-02-01 11:30:13
1
I have stored a class object as a pickle in an SQLite DB. Below is code for the file pickle.py sqlite3.register_converter("pickle", pickle.loads) sqlite3.register_adapter(list, pickle.dumps) sqlite3.register_adapter(set, pickle.dumps) class F: a = None b = None def __init__(self) -> None: pass df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]}) f = F() f.a = df f.b = df.columns data = pickle.dumps(f, protocol=pickle.HIGHEST_PROTOCOL) sqliteConnection = sqlite3.connect('SQLite_Python.db') cursor = sqliteConnection.cursor() print("Successfully Connected to SQLite") DATA = sqlite3.Binary(data) sqlite_insert_query = f"""INSERT INTO PICKLES1 (INTEGRATION_NAME, DATA) VALUES ('James',?)""" resp = cursor.execute(sqlite_insert_query,(DATA,)) sqliteConnection.commit() After that, I am trying to fetch the pickle from the DB. The pickle is stored in a pickle datatype column which I had registered earlier on SQLite in file retrieve_pickle.py. cur = conn.cursor() cur.execute("SELECT DATA FROM PICKLES1 where INTEGRATION_NAME='James'") df = None rows = cur.fetchall() for r in rows[0]: print(type(r)) #prints <class 'bytes'> df = pickle.loads(r) But it gives me an error File "/Users/ETC/Work/pickle_work/picklertry.py", line 34, in select_all_tasks df = pickle.loads(r) AttributeError: Can't get attribute 'F' on <module '__main__' from '/Users/rusab1/Work/pickle_work/picklertry.py'> I was trying to store a class object in a pickle column in sqlite after registering pickle.loads as a pickle datatype. I kept the object successfully and was able to retrieve it from DB but when I try to load it back so that I can access the thing and attributes it gives me an error.
75,311,151
Error when loading pickle object from SQLite
true
1
2
python,pickle
67
75,309,871
Pickling requires you to import the actual module which you have in the pickle. I had to import F into the 2nd file where I was loading the pickle.
2
1.2
2023-02-01 12:00:25
1
I have cloned a GitHub repository that contains the code for a Python package to my local computer (it's actually on a high performance cluster). I have also installed the package with pip install 'package_name'. If I now run a script that uses the package, it of course uses the installed package and not the cloned repository, so if I want to make changes to the code, I cannot run those. Is there a way to do this, potentially with pip install -e (but I read that was deprecated) or a fork? How could I then get novel updates in the package to my local version, as it is frequently updated?
75,310,593
How can I edit a GitHub repository (for a Python package) locally and run the package with my changes?
false
2
2
python,github,pip
32
75,310,545
If you run an IDE like PyCharm, you can mark a folder in your project as Sources Root. It will then import any packages from that folder instead of the standard environment packages.
0
0
2023-02-01 12:59:45
0
I have cloned a GitHub repository that contains the code for a Python package to my local computer (it's actually on a high performance cluster). I have also installed the package with pip install 'package_name'. If I now run a script that uses the package, it of course uses the installed package and not the cloned repository, so if I want to make changes to the code, I cannot run those. Is there a way to do this, potentially with pip install -e (but I read that was deprecated) or a fork? How could I then get novel updates in the package to my local version, as it is frequently updated?
75,324,284
How can I edit a GitHub repository (for a Python package) locally and run the package with my changes?
false
2
2
python,github,pip
32
75,310,545
In the end I indeed did use pip install -e, and it is working for now. I will figure it out once the owner of the package releases another update!
0
0
2023-02-01 12:59:45
0
why does 'from sklearn.impute import SimpleImputer as si' works but ' import sklearn.impute.SimpleImputer as si' do not work I want to know, why this won't work. I am new to python.
75,313,500
import vs from import in sklearn
false
2
2
python,scikit-learn
46
75,313,450
You can only use import with modules. with from ... import ... you can import variables so submodules, functions, classes, and everything else. As SimpleImputer is not a module only the second option is availiable. Writen a bit differently import only works in general with files, from ... import works with variables declared in the script.
1
0.099668
2023-02-01 16:43:56
1
why does 'from sklearn.impute import SimpleImputer as si' works but ' import sklearn.impute.SimpleImputer as si' do not work I want to know, why this won't work. I am new to python.
75,313,487
import vs from import in sklearn
true
2
2
python,scikit-learn
46
75,313,450
The reason for this is the way the Python import statement works. The first import statement imports the SimpleImputer class from the sklearn.impute module and then names it si. The second import statement tries to import a module named SimpleImputer from a module named sklearn.impute. This does not work because in Python, the import statement only allows you to import modules, not submodules.
2
1.2
2023-02-01 16:43:56
1
I'd like to display an embed with a picture using HTML, but I couldnt find anything online using python to do it. Is that even possible? if it is I would love an explanation. I tried searching, couldnt find anything about it.
75,576,248
How can I display a HTML page in discord using a bot with python?
false
1
1
python,html,discord
24
75,313,745
Makes sense, sounded like a ChatGPT generated answer... AFAIK you can't get HTML embeds on Discord (which is a pain in the arse as it could make the experience much more enjoyable for bot users). One way you could tackle this though is by generating a picture of the content you want to be sent stored on a server you have access to and have the bot share said picture. Lots of cons here, but that's the best I figured out. Good luck.
1
0.197375
2023-02-01 17:05:46
0
There are several Python packages that implement the datetime.tzinfo interface, including pytz and dateutil. If someone hands me a timezone object and wants me to apply it to a datetime, the procedure is different depending on what kind of timezone object it is: def apply_tz_to_datetime(dt: datetime.datetime, tz: datetime.tzinfo, ambiguous, nonexistent): if isinstance(tz, dateutil.tz._common._tzinfo): # do dt.replace(tz, fold=...) elif isinstance(tz, pytz.tzinfo.BaseTzInfo): # do tz.localize(dt, is_dst=...) # other cases here (The dateutil.tz case is a lot more complicated than I've shown, because there are a lot of cases to consider for non-existent or ambiguous datetimes, but the gist is always to either call dt.replace(tz, fold=...) or raise an exception.) Checking dateutil.tz._common._tzinfo seems like a no-no, though, is there a better way?
75,339,617
Check whether timezone is dateutil.tz instance
false
1
2
python,timezone,pytz,python-dateutil
77
75,314,041
It appears from the ratio of comments to answers (currently 9/0 = ∞), there is no available answer to the surface-level question (how to determine whether something is a dateutil.tz-style timezone object). I'll open a feature request ticket with the maintainers of the library.
0
0
2023-02-01 17:35:07
1
I am trying to store data retrieved from a website into MySQL database via a pandas data frame. However, when I make the function call df.to_sql(), the compiler give me an error message saying: AttributeError: 'Connection' object has no attribute 'connect'. I tested it couple times and I am sure that there is neither connection issue nor table existence issue involved. Is there anything wrong with the code itself? The code I am using is the following: from sqlalchemy import create_engine, text import pandas as pd import mysql.connector config = configparser.ConfigParser() config.read('db_init.INI') password = config.get("section_a", "Password") host = config.get("section_a", "Port") database = config.get("section_a", "Database") engine = create_engine('mysql+mysqlconnector://root:{0}@{1}/{2}'. format(password, host, database), pool_recycle=1, pool_timeout=57600, future=True) conn = engine.connect() df.to_sql("tableName", conn, if_exists='append', index = False) The full stack trace looks like this: Traceback (most recent call last): File "/Users/chent/Desktop/PFSDataParser/src/FetchPFS.py", line 304, in <module> main() File "/Users/chent/Desktop/PFSDataParser/src/FetchPFS.py", line 287, in main insert_to_db(experimentDataSet, expName) File "/Users/chent/Desktop/PFSDataParser/src/FetchPFS.py", line 89, in insert_to_db df.to_sql(tableName, conn, if_exists='append', index = False) File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/core/generic.py", line 2951, in to_sql return sql.to_sql( File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/io/sql.py", line 698, in to_sql return pandas_sql.to_sql( File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/io/sql.py", line 1754, in to_sql self.check_case_sensitive(name=name, schema=schema) File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/io/sql.py", line 1647, in check_case_sensitive with self.connectable.connect() as conn: AttributeError: 'Connection' object has no attribute 'connect' The version of pandas I am using is 1.4.4, sqlalchemy is 2.0 I tried to make a several execution of sql query, for example, CREATE TABLE xxx IF NOT EXISTS or SELECT * FROM, all of which have given me the result I wish to see.
76,357,663
AttributeError: 'Connection' object has no attribute 'connect' when use df.to_sql()
false
1
2
python,pandas,sqlalchemy,mysql-connector
4,100
75,315,117
I have faced the same problem and it got solved (As @nacho suggested above in a comment to the question) when I replace connection object with sqlalchemy engine in DataFrame.to_sql() arguments.
0
0
2023-02-01 19:15:21
9
I need to change the value of two random variables out of four to '—'. How do I do it with maximum effectiveness and readability? Code below is crap just for reference. from random import choice a = 10 b = 18 c = 15 d = 92 choice(a, b, c, d) = '—' choice(a, b, c, d) = '—' print(a, b, c, d) >>> 12 — — 92 >>> — 19 — 92 >>> 10 18 — — I've tried choice(a, b, c, d) = '—' but ofc it didn't work. There's probably a solution using list functions and methods but it's complicated and almost impossible to read, so I'm searching for an easier solution.
75,315,942
How do I change 2 random variables out of 4?
false
1
6
python,variables,random,replace
91
75,315,891
Variable names are not available when you run your code, so you cannot change a "random variable". Instead, I recommend that you use a list or a dictionary. Then you can choose a random element from the list or a random key from the dictionary.
1
0.033321
2023-02-01 20:32:37
1
I'm running into an issue when trying to have a Python script running on an EC2 instance assume a role to perform S3 tasks. Here's what I have done. Created a IAM role with AmazonS3FullAccess permissions and got the following ARN: arn:aws:iam::<account_number>:role/<role_name> The trust policy is set so the principal is a the EC2 service. I interpret this as allowing any EC2 instance within the account being allowed to assume the role. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } I launched an EC2 instance and attached the above IAM role. I attempt to call assume_role() using Boto3 session = boto3.Session() sts = session.client("sts") response = sts.assume_role( RoleArn="arn:aws:iam::<account_number>:role/<role_name>", RoleSessionName="role_session_name" ) But it throws the following error: botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::<account_number>:assumed-role/<role_name>/i-<instance_id> is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::<account_number>:role/<role_name> All other StackOverflow questions about this talk about the Role's trust policy but mine is set to allow EC2. So either I'm misinterpreting what the policy should be or there is some other error I can't figure out.
75,317,468
AccessDenied when calling Boto3 assume_role from EC2 even with service principal
true
1
1
python,amazon-web-services,amazon-ec2,boto3
141
75,317,209
You do not have to explicitly call sts.assume_role. If the role is attached to the EC2 instance, boto3 will use in the background seamlessly.You use boto3 as you would normally do, and it will take care of using the IAM role for you. No action required from you.
0
1.2
2023-02-01 23:21:25
1
I'm building a windows service with Python 3.6 in an anaconda virtual environment. I make a post request using python requests: requests.post(url, files=files, data=data, headers=headers) After creating the service, on my windows machine (the one that has the source code that created the service) this works right off the bat. When I install this service on another windows machine, I keep getting SSL: CERTIFICATE_VERIFY_FAILED. I installed it on a third windows machine and that works fine (but isn't the machine we need it to work on sadly). Things I've tried: Installed python-certifi-win32 with conda in my virtual environment before creating the service. Specified a path to a .pem file with the chain of certificates for the url and added it with the verify parameter. So my request is as such: requests.post(url, files=files, data=data, headers=headers, verify='path\to\pemfile'). This works on my machine but not on the other one. I printed out requests.certs.where() on both computers and they both say C:\Windows\TEMP\_MEXXXX\certifi\cacert.pem. How can I get my service to run the same on all computers? UPDATE: Reproducible example: # debugFile.py import servicemanager import socket import win32event import win32service import win32serviceutil import traceback import sys, getopt import requests class SCPWorker: def __init__(self): self.running = True def test_function(self): data = {} token = 'auth token for url' response = requests.post(custom_url, data=data, headers={'Authorization': "Token " + token}) class StoreScp(win32serviceutil.ServiceFramework): _svc_name_ = "Service" _svc_display_name_ = "Debug Service" _svc_description_ = "description" def __init__(self, args): self.worker = SCPWorker() win32serviceutil.ServiceFramework.__init__(self, args) self.hWaitStop = win32event.CreateEvent(None, 0, 0, None) socket.setdefaulttimeout(60) def SvcStop(self): try: self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.hWaitStop) self.worker.stop() self.running = False except: servicemanager.LogErrorMsg(traceback.format_exc()) def SvcDoRun(self): try: self.worker.test_function() while rc != win32event.WAIT_OBJECT_0 and rc != win32event.WAIT_FAILED and rc != win32event.WAIT_TIMEOUT and rc != win32event.WAIT_ABANDONED: rc = win32event.WaitForSingleObject(self.hWaitStop, 5000) if rc == win32event.WAIT_OBJECT_0: servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, ('Service stopped', '')) else: servicemanager.LogMsg(servicemanager.EVENTLOG_ERROR_TYPE, servicemanager.PYS_SERVICE_STOPPED, ('Service quit unexpectedly with status %d' % rc, '')) except: servicemanager.LogErrorMsg(traceback.format_exc()) if __name__ == '__main__': if len(sys.argv) == 1: servicemanager.Initialize() servicemanager.PrepareToHostSingle(StoreScp) servicemanager.StartServiceCtrlDispatcher() else: win32serviceutil.HandleCommandLine(StoreScp) And then run pyinstaller -F --hidden-import=win32timezone DebugFile.py to create the exe. And then install the exe on a machine.
75,477,824
Python requests causes SSL Verification error on one Windows computer but not another
true
1
1
python-3.x,windows,python-requests,anaconda,ssl-certificate
156
75,317,498
I never figured out why it didn't work on the other computer but I did manage to make a workaround work. First I ensured that my pyinstaller was at least 4.10 and then I installed pip-system-certs. Finally I added import pip_system_certs.wrapt_requests at the top of my python file. This library meant I had to install everything using pip and not conda.
1
1.2
2023-02-02 00:12:49
1
Ok so to preface this, I am very new to jupyter notebook and anaconda. Anyways I need to download opencv to use in my notebook but every time I download I keep getting a NameError saying that ‘cv2’ is not defined. I have uninstalled and installed opencv many times and in many different ways and I keep getting the same error. I saw on another post that open cv is not in my python path or something like that… How do I fix this issue and put open cv in the path? (I use Mac btw) Please help :( Thank you!
75,318,938
Anaconda Jupyter Notebook Opencv not working
false
1
1
opencv,anaconda,jupyter,nameerror,pythonpath
27
75,318,885
Try the following: Install OpenCV using Anaconda Navigator or via terminal by running: conda install -c conda-forge opencv Now you should check if its installed by running this in terminal: conda list Import OpenCV in Jupyter Notebook: In your Jupyter Notebook, run import cv2 and see if it works. If the above steps are not working, you should add OpenCV to your Python PATH by writing the following code to your Jupyter NB: import sys sys.path.append('/anaconda3/lib/python3.7/site-packages') This should work.
0
0
2023-02-02 05:07:55
0
So far, I am using detect.py with appropiate arguments for the object detection tasks, using a custom trained model. How can I call the detect method with the parameters(weights, source, conf, and img_size) from a python program, instead of using CLI script? I am unable to do so.
75,324,664
How to call yolov7 detect method from a python program
false
1
1
python,object-detection,yolo,yolov5,yolov7
22
75,319,737
you can create a main.py file where you call all these methods from. Please make sure you import these methods at the top of main.py, e.g. from detect.py import detect (or whatever you want to call from this file). Hard to give more precise advice without more input from you. and then you just run your main file. Alternatively maybe consider using a jupyter notebook - not the 'nicest' way, but it makes everything more convenient for testing etc.
0
0
2023-02-02 07:09:17
0
Is there a way to save Polars DataFrame into a database, MS SQL for example? ConnectorX library doesn’t seem to have that option.
76,234,129
Polars DataFrame save to sql
false
2
2
python-polars,rust-polars
1,076
75,320,233
Polars exposes the write_database method on the DataFrame class.
3
0.291313
2023-02-02 08:10:34
3
Is there a way to save Polars DataFrame into a database, MS SQL for example? ConnectorX library doesn’t seem to have that option.
75,396,733
Polars DataFrame save to sql
true
2
2
python-polars,rust-polars
1,076
75,320,233
Polars doesen't support direct writing to a database. You can proceed in two ways: Export the DataFrame in an intermediate format (such as .csv using .write_csv()), then import it into the database. Process it in memory: you can convert the DataFrame in a simpler data structure using .to_dicts(). The result will be a list of dictionaries, each of them containing a row in key/value format. At this point is easy to insert them into a database using SqlAlchemy or any specific library for your database of choice.
2
1.2
2023-02-02 08:10:34
3
All of the sudden, my terminal stopped recognizing the 'conda'. Also the VS Code stopped seeing my environments. All the folders, with my precious environments are there (/opt/anaconda3), but when I type conda I get: conda zsh: command not found: conda I tried install conda again (from .pkg) but it fails at the end of installation (no log provided). How can I clean it without losing my envs? I use Apple M1 MacBookPro with Monterey.
75,333,369
conda disappeared, command not found - corrupted .zshrc
false
2
2
python,macos,conda
631
75,320,243
For some reason my .zshrc file was corrupted after some operations. This prevented terminal to call conda init and in general, to have 'conda' call understandable. What is more - this prevented installing any condas, minicondas, minoforge. Both from .pkg and .sh - annoyingly - without any log, information - just crash and goodbye. I cleared both .zshrc' and .bash_profile` and then it helped - I managed to install minigorge and have my 'conda' accessible from terminal. Unfortunately, in the process I removed all my previous 'envs'.
1
0.099668
2023-02-02 08:11:22
2
All of the sudden, my terminal stopped recognizing the 'conda'. Also the VS Code stopped seeing my environments. All the folders, with my precious environments are there (/opt/anaconda3), but when I type conda I get: conda zsh: command not found: conda I tried install conda again (from .pkg) but it fails at the end of installation (no log provided). How can I clean it without losing my envs? I use Apple M1 MacBookPro with Monterey.
75,320,362
conda disappeared, command not found - corrupted .zshrc
false
2
2
python,macos,conda
631
75,320,243
To recover conda if it has disappeared and you're getting a "command not found" error, follow these steps: Check if conda is installed on your system by running the command: which conda If the above command doesn't return anything, you may need to add the path to your conda installation to your PATH environment variable. To find the path, run the following command: find / -name conda 2>/dev/null Add the path to your .bashrc or .bash_profile file: export PATH="/bin:$PATH" Restart your terminal or run the following command to reload your environment variables: source ~/.bashrc Try running conda again to see if it's working. If conda is still not working, it may have been uninstalled or moved. In that case, you can reinstall conda from the Anaconda website or from the Miniconda website.
1
0.099668
2023-02-02 08:11:22
2
On SikulixIDE, the library webbrowser always open the default browser, even when i use the get method, i tried my code on regular python, it does work. Anyone know why it is reacting like that ? webbrowser.get('C:/Program Files/Google/Chrome/Application/chrome.exe %s').open(myurl)
75,433,915
webbrowser library is not working as intended on SikulixIDE
true
1
1
python,jython,sikuli,sikuli-ide,sikuli-x
47
75,321,910
Fixed by automating it using a python file and running it trough cmd with base python.exe.
0
1.2
2023-02-02 10:39:04
1
In Numpy, Transposing of a column vector makes the the array an embedded array. For example, transposing [[1.],[2.],[3.]] gives [[1., 2., 3.]] and the dimension of the outermost array is 1. And this produces many errors in my code. Is there a way to produce [1., 2., 3.] directly?
75,322,147
Python NumPy, remove unnecessary brackets
false
1
2
python,numpy
43
75,322,105
Try .flatten(), .ravel(), .reshape(-1), .squeeze().
1
0.099668
2023-02-02 10:53:54
1
While installing flair using pip install flair in python 3.10 virtual environment on mac-os Ventura, I get the following error: ERROR: Failed building wheel for sentencepiece Seperately installing sentencepeice using pip install sentenpeice did not work. Upgrading pip did not work.
75,806,128
ERROR: Failed building wheel for sentencepiece while installing flair on python 3.10
false
1
1
python,python-3.x,flair
14
75,322,177
Try downgrading Python. I was having this same issue, also with an intel mac, everytime I tried to use the transformers library, went through a lot of possible solutions without success, even with multiple chatGPT advices. I uninstalled python 3.11 and got back to the 3.9.13 version and the issue was gone! It seems there's some issue with wheels for latest python versions
1
0.197375
2023-02-02 11:00:08
0
I am a student and my profesor needs me to install Django on PyCharm. I made a big folder called PyCharmProjects and it includes like everything I have done in Python. The problem is that I made a new folder inside this PyCharmProjects called Elementar, and I need to have the Django folders in there but it's not downloading. I type in the PyCharm terminal django-admin manage.py startproject taskmanager1 (this is how my profesor needs me to name it) After I run the code it says: No Django settings specified. Unknown command: 'manage.py' Type 'django-admin help' for usage. I also tried to install it through the MacOS terminal but I don't even have acces the folder named Elementar (cd: no such file or directory: Elementar) although it is created and it is seen in the PyCharm.
75,326,283
Manage.py unknown command
false
1
2
python,django,pycharm
52
75,322,300
First of all, you can't create a project using manage.py because the manage.py file doesn't exist yet. It will be created automatically in the folder taskmanager1 if you run the command below. You can create a project with the command django-admin startproject taskmanager1 After that you can change the directory to the taskmanager1 folder with the cd taskmanager/ command. When you changed the directory you can use the python manage.py commando for example if you want to run your migrations or creating an app. python manage.py migrate
0
0
2023-02-02 11:10:50
0
I have 2 directories containing tests: project/ | |-- test/ | | | |-- __init__.py | |-- test_1.py | |-- my_submodule/ | |-- test/ | |-- __init__.py |-- test_2.py How can I run all tests? python -m unittest discover . only runs test_1.py and obviously python -m unittest discover my_submodule only runs test_2.py
75,324,957
How to run unittest tests from multiple directories
true
1
2
python,unit-testing,python-unittest
95
75,322,357
unittest currently sees project/my_submodule as an arbitrary directory to ignore, not a package to import. Just add project/my_submodule/__init__.py to change that.
4
1.2
2023-02-02 11:16:43
2
I'm trying to find out if Pandas.read_json performs some level of autodetection. For example, I have the following data: data_records = [ { "device": "rtr1", "dc": "London", "vendor": "Cisco", }, { "device": "rtr2", "dc": "London", "vendor": "Cisco", }, { "device": "rtr3", "dc": "London", "vendor": "Cisco", }, ] data_index = { "rtr1": {"dc": "London", "vendor": "Cisco"}, "rtr2": {"dc": "London", "vendor": "Cisco"}, "rtr3": {"dc": "London", "vendor": "Cisco"}, } If I do the following: import pandas as pd import json pd.read_json(json.dumps(data_records)) --- device dc vendor 0 rtr1 London Cisco 1 rtr2 London Cisco 2 rtr3 London Cisco though I get the output that I desired, the data is record based. Being that the default orient is columns, I would have not thought this would have worked. Therefore is there some level of autodetection going on? With index based inputs the behaviour seems more inline. As this shows appears to have parsed the data based on a column orient by default. pd.read_json(json.dumps(data_index)) rtr1 rtr2 rtr3 dc London London London vendor Cisco Cisco Cisco pd.read_json(json.dumps(data_index), orient="index") dc vendor rtr1 London Cisco rtr2 London Cisco rtr3 London Cisco
75,399,595
Pandas JSON Orient Autodetection
false
1
4
python,json,pandas
636
75,324,072
No, Pandas does not perform any autodetection when using the read_json function. It is entirely determined by the orient parameter, which specifies the format of the input json data. In your first example, you passed the data_records list to the json.dumps function, which is then converted it to a json-string. After passing the resulting json string to pd.read_json, it is seen as a record orientation. In your second example, you passed the data_index to json.dumps which is thenseen as a "column" orientation In both cases, the behavior of the read_json function is entirely based on the value of the orient parameter and not by an automatic detection by Pandas.
0
0
2023-02-02 13:45:13
6
I installed cdk on wsl2 and I try to use it but I get this error: (manifest,filePath,ASSETS_SCHEMA,Manifest.patchStackTagsOnRead)}static loadAssetManifest(filePath){return this.loadManifest(filePath,ASSETS_SCHEMA)}static saveIntegManifest(manifest,filePath){Manifest.saveManifest(manifest,filePath,INTEG_SCHEMA)}static loadIntegManifest(filePath){return this.loadManifest(filePath,INTEG_SCHEMA)}static version(){return SCHEMA_VERSION}static save(manifest,filePath){return this.saveAssemblyManifest(manifest,filePath)}static load(filePath){return this.loadAssemblyManifest(filePath)}static validate(manifest,schema4,options){function parseVersion(version){const ver=semver.valid(version);if(!ver){throw new Error(`Invalid semver string: "${version}"`)}return ver}const maxSupported=parseVersion(Manifest.version());const actual=parseVersion(manifest.version);if(semver.gt(actual,maxSupported)&&!(options==null?void 0:options.skipVersionCheck)){throw new Error(`${VERSION_MISMATCH}: Maximum schema version supported is ${maxSupported}, but found ${actual}`)}const validator=new jsonschema.Validator;const result=validator.validate(manifest,schema4,{nestedErrors:true,allowUnknownAttributes:false});let errors=result.errors;if(options==null?void 0:options.skipEnumCheck){errors=stripEnumErrors(errors)}if(errors.length>0){throw new Error(`Invalid assembly manifest: SyntaxError: Unexpected token '?' at wrapSafe (internal/modules/cjs/loader.js:915:16) at Module._compile (internal/modules/cjs/loader.js:963:27) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10) at Module.load (internal/modules/cjs/loader.js:863:32) at Function.Module._load (internal/modules/cjs/loader.js:708:14) at Module.require (internal/modules/cjs/loader.js:887:19) at require (internal/modules/cjs/helpers.js:74:18) at Object.<anonymous> (/usr/local/lib/node_modules/aws-cdk/bin/cdk.js:3:15) at Module._compile (internal/modules/cjs/loader.js:999:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10) I've tried reinstalling it, updating it, but I didn't succeed. I also searched on stack overflow but I didn't find anything to help me.
75,334,906
Why can't I use cdk on wsl2?
false
1
1
python-3.x,amazon-web-services,aws-cdk
101
75,326,322
This was a problem in Node v12. Upgrading the version to v14 or higher should solve the problem.
0
0
2023-02-02 16:42:23
1
I have a URL that I am having difficulty reading. It is uncommon in the sense that it is data that I have self-generated or in other words have created using my own inputs. I have tried with other queries to use something like this and it works fine but not in this case: bst = pd.read_csv('https://psl.noaa.gov/data/correlation/censo.data', skiprows=1, skipfooter=2,index_col=[0], header=None, engine='python', # c engine doesn't have skipfooter delim_whitespace=True) Here is the code + URL that is providing the challenge: zwnd = pd.read_csv('https://psl.noaa.gov/cgi-bin/data/timeseries/timeseries.pl? ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None, engine='python', # c engine doesn't have skipfooter delim_whitespace=True) Thank you for any help that you can provide. Here is the full error message: pd.read_csv('https://psl.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None, engine='python', # c engine doesn't have skipfooter delim_whitespace=True) Traceback (most recent call last): Cell In[240], line 1 pd.read_csv('https://psl.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None, File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\util\_decorators.py:211 in wrapper return func(*args, **kwargs) File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\util\_decorators.py:331 in wrapper return func(*args, **kwargs) File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\readers.py:950 in read_csv return _read(filepath_or_buffer, kwds) File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\readers.py:611 in _read return parser.read(nrows) File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\readers.py:1778 in read ) = self._engine.read( # type: ignore[attr-defined] File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\python_parser.py:282 in read alldata = self._rows_to_cols(content) File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\python_parser.py:1045 in _rows_to_cols self._alert_malformed(msg, row_num + 1) File ~\Anaconda3\envs\Stats\lib\site-packages\pandas\io\parsers\python_parser.py:765 in _alert_malformed raise ParserError(msg) ParserError: Expected 2 fields in line 133, saw 3. Error could possibly be due to quotes being ignored when a multi-char delimiter is used.
75,328,332
Reading Data from URL into a Pandas Dataframe
false
1
2
python,pandas,csv,url
163
75,327,185
Its because the first one directly points to a dataset from storage in .data format but the second url points to a website (which is made up of html, css, json, etc files). You can only use pd.read_csv if you are parsing in a .csv file, and i guess a .data file too since it worked for you. If you can find a link to the actual .data or .csv file on that website you will be able to parse it no problem. Since its a gov website, they probably will have a good file format. If you cannot, and still need this data you will have to do some webscraping from that website (like using selenium), then you will need to store them as dataframes, and maybe preprocess it so it gets added like expected.
0
0
2023-02-02 18:00:07
1
I have a mat file with sparse data for around 7000 images with 512x512 dimensions stored in a flattened format (so rows of 262144) and I’m using scipy’s loadmat method to turn this sparse information into a Compressed Sparse Column format. The data inside of these images is a smaller image that’s usually around 25x25 pixels somewhere inside of the 512x512 region , though the actual size of the smaller image is not consitant and changes for each image. I want to get the sparse information from this format and turn it into a numpy array with only the data in the smaller image; so if I have an image that’s 512x512 but there’s a circle in a 20x20 area in the center I want to just get the 20x20 area with the circle and not get the rest of the 512x512 image. I know that I can use .A to turn the image into a non-sparse format and get a 512x512 numpy array, but this option isn’t ideal for my RAM. Is there a way to extract the smaller images stored in a sparse format without turning the sparse data into dense data? I tried to turn the sparse data into dense data, reshape it into a 512x512 image, and then I wrote a program to find the top, bottom, left, and right edges of the image by checking for the first occurrence of data from the top, bottom, left, and right but this whole processes seemed horribly inefficient.
75,342,851
Numpy Extract Data from Compressed Sparse Column Format
false
1
1
python,numpy,scipy,sparse-matrix
29
75,328,837
Sorry about the little amount of information I provided; I ended up figuring it out.Scipy's loadmat function when used to extract sparse data from a mat file returns a csc_matrix, which I then converted to numpy's compressed sparse column format. Numpy's format has a method .nonzero() that will return the index of every non_zero element in that matrix. I then reshaped the numpy csc matrix into 512x512, and then used .nonzero() to get the non-zero elements in 2D, then used used those indexes to figure out the max height and width of my image I was interested in. Then I created a numpy matrix of zeros the size of the image I wanted, and set the elements in that numpy matrix to the elements to the pixels I wanted by indexing into my numpy csc matrix (after I called .tocsr() on it)
0
0
2023-02-02 20:54:58
0
I cannot find an example in the Simics documentation on how the clock object is obtained so that we can use it as an argument in the post() method. I suspect that either an attribute can be used to get the clock or in the ConfObject class scope we get the clock using SIM_object_clock() I created a new module using bin\project-setup --py-device event-py I have defined two methods in the ConfObject class scope called clock_set and clock_get. I wanted to use these methods so that I can set/get the clock object to use in the post method. The post() method fails when reading the device registers in the vacuum machine. import pyobj # Tie code to specific API, simplifying upgrade to new major version import simics_6_api as simics class event_py(pyobj.ConfObject): """This is the long-winded documentation for this Simics class. It can be as long as you want.""" _class_desc = "one-line doc for the class" _do_not_init = object() def _initialize(self): super()._initialize() def _info(self): return [] def _status(self): return [("Registers", [("value", self.value.val)])] def getter(self): return self # In my mind, clock_set is supposed to set the clock object. That way we can use # it in post() def clock_set(self): self.clock = simics.SIM_object_clock(self) def clock_get(self): return self.clock(self): class value(pyobj.SimpleAttribute(0, 'i')): """The <i>value</i> register.""" class ev1(pyobj.Event): def callback(self, data): return 'ev1 with %s' % data class regs(pyobj.Port): class io_memory(pyobj.Interface): def operation(self, mop, info): offset = (simics.SIM_get_mem_op_physical_address(mop) + info.start - info.base) size = simics.SIM_get_mem_op_size(mop) if offset == 0x00 and size == 1: if simics.SIM_mem_op_is_read(mop): val = self._up._up.value.val simics.SIM_set_mem_op_value_le(mop, val) # POST HERE AS TEST self._up._up.ev1.post(clock, val, seconds = 1) else: val = simics.SIM_get_mem_op_value_le(mop) self._up._up.value.val = val return simics.Sim_PE_No_Exception else: return simics.Sim_PE_IO_Error
75,449,884
How to get the clock argument of event.post(clock, data, duration) in a python device?
false
1
2
python,post,events,clock,simics
41
75,329,701
You mention using the vacuum example machine and within its script you see that sdp->queue will point to timer. So SIM_object_clock(sdp) would return timer. Simics is using queue attribute in all conf-objects to reference their clock individually, though other implementations are used too. BR Simon #IAmIntel
1
0.099668
2023-02-02 22:46:29
2
I need to start a python program when the system boots. It must run in the background (forever) such that opening a terminal session and closing it does not affect the program. I have demonstrated that by using tmux this can be done manually from a terminal session. Can the equivalent be done from a script that is run at bootup? Then where done one put that script so that it will be run on bootup.
75,346,392
ubuntu run python program in background on startup
false
1
2
python,background,boot
26
75,330,853
It appears that in addition to putting a script that starts the program in /etc/init.d, one also has to put a link in /etc/rc2.d with sudo ln -s /etc/init.d/scriptname.sh sudo mv scriptname.sh S01scriptname.sh The S01 was just copied from all the other files in /etc/rc2.d
0
0
2023-02-03 02:15:38
0
I have a use case where we have text file like key value format . The file is not any of the fixed format but created like key value . We need to create JSON out of that file . I am able to create JSON but when text format has array like structure it creates just Key value json not the array json structure . This is my Input . [DOCUMENT] Headline=This is Headline MainLanguage=EN DocType.MxpCode=1000 Subject[0].MxpCode=BUSNES Subject[1].MxpCode=CONS Subject[2].MxpCode=ECOF Author[0].MxpCode=6VL6 Industry[0].CtbCode=53 Industry[1].CtbCode=5340 Industry[2].CtbCode=534030 Industry[3].CtbCode=53403050 Symbol[0].Name=EXPE.OQ Symbol[1].Name=ABNB.OQ WorldReg[0].CtbCode=G4 Country[0].CtbCode=G26 Country[1].CtbCode=G2V [ENDOFFILE] Exiting code to create json is below with open("file1.csv") as f: lines = f.readlines() data = {} for line in lines: parts = line.split('=') if len(parts) == 2: data[parts[0].strip()] = parts[1].strip() print(json.dumps(data, indent=' ')) The current output is below { "Headline": "This is Headline", "MainLanguage": "EN", "DocType.MxpCode": "1000", "Subject[0].MxpCode": "BUSNES", "Subject[1].MxpCode": "CONS", "Subject[2].MxpCode": "ECOF", "Author[0].MxpCode": "6VL6", "Industry[0].CtbCode": "53", "Industry[1].CtbCode": "5340", "Industry[2].CtbCode": "534030", "Industry[3].CtbCode": "53403050", "Symbol[0].Name": "EXPE.OQ", "Symbol[1].Name": "ABNB.OQ", "WorldReg[0].CtbCode": "G4", "Country[0].CtbCode": "G26", "Country[1].CtbCode": "G2V" } Expected out is is something like below For the Subject key and like wise for others also { "subject": [ { "mxcode": 123 }, { "mxcode": 123 }, { "mxcode": 123 } ] } Like wise for Industry and Symbol and Country. so the idea is when we have position in the text file it should be treated as array in the json output .
75,331,962
How to convert key value text to json arrya format python
false
1
3
json,python-3.x
283
75,331,933
Use one more loop as it is nested. Use for loop from where subject starts. try it that way.
0
0
2023-02-03 05:45:06
1
Whenever launching telethon from an existing session I receive two error messages: Server sent a very new message with ID xxxxxxxxxxxxxxxxxxx, ignoring Server sent a very new message with ID xxxxxxxxxxxxxxxxxxx, ignoring And thereafter it gets clogged , preventing any execution. The answer I got from another post is "in Windows time settings, enable automatic setting of time and time zone". But I am using a Linux system, and the system is set to the Asia/Shanghai time zone. How can I fix this problem?
75,333,806
Error messages clogging Telethon resulting : Server sent a very new message xxxxx was ignored
false
1
1
python,telethon
45
75,332,067
I think I found the reason. The time difference between the local environment and the Telegram server is too large. After manually adjusting the time to correct the delay, the problem was fixed.
1
0.197375
2023-02-03 06:05:17
0
I have scheduled a task arp -a which runs once per hour, that scans my wi-fi network to save all the info about currently connected devices into a scan.txt file. After the scan, a python script reads the scan.txt and saves the data into a database. This is what my wifiscan.sh script looks like: cd /home/pi/python/wifiscan/ arp -a > /home/pi/python/wifiscan/scan.txt python wifiscan.py This is my crontab task: #wifiscan 59 * * * * sh /home/pi/launcher/wifiscan.sh If I run the wifiscan.sh file manually, all the process works perfectly; when it is run by the crontab, the scan.txt file is generated empty and the rest of the process works, but with no data, so I'm assuming that the problem lies in the arp -a command. How is it possible that arp -a does not produce any output when it is run by crontab? Is there any mistakes I'm making?
75,334,177
Raspberry Pi - Crontab task not running properly
true
1
1
python,cron,raspberry-pi,arp
42
75,333,089
As @Mark Setchell commented, I solved my problem by launching the command with its entire path (in this case, /usr/sbin/arp)
2
1.2
2023-02-03 08:18:17
2
I am trying to test the data being written to RDS, but I can't seem to be able to mock the DB. The idea is to mock a DB, then run my code and retrieve the data for testing. Could anyone help, please? import unittest import boto3 import mock from moto import mock_s3, mock_rds from sqlalchemy import create_engine @mock_s3 @mock_rds class TestData(unittest.TestCase): def setUp(self): """Initial setup.""" # Setup db test_instances = db_conn.create_db_instance( DBName='test_db', AllocatedStorage=10, StorageType='standard', DBInstanceIdentifier='instance', DBInstanceClass='db.t2.micro', Engine='postgres', MasterUsername='postgres_user', MasterUserPassword='p$ssw$rd', AvailabilityZone='us-east-1', PubliclyAccessible=True, DBSecurityGroups=["my_sg"], VpcSecurityGroupIds=["sg-123456"], Port=5432 ) db_instance = test_instances["DBInstance"] user_name = db_instance['MasterUsername'] host = db_instance['Endpoint']['Address'] port = db_instance['Endpoint']['Port'] db_name = db_instance['DBName'] conn_str = f'postgresql://{user_name}:p$ssw$rd@{host}:{port}/{db_name}' print(conn_str) engine_con = create_engine(conn_str) engine_con.connect() Error: > conn = _connect(dsn, connection_factory=connection_factory, **kwasync) E sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "instance.aaaaaaaaaa.eu-west-1.rds.amazonaws.com" to address: nodename nor servname provided, or not known E E (Background on this error at: https://sqlalche.me/e/14/e3q8)
75,373,965
How to test data from RDS using mock_rds
false
1
1
python,testing,mocking,amazon-rds,moto
282
75,335,575
So, instead of testing the data from my DB, I replicated the execution of the code I had in my lambda on my test, accessing the results locally. So those same tests are working fine on Github now.
1
0.197375
2023-02-03 12:12:04
1
I have somehow managed to mess up my pip indexes for a local virtual env. pip config list returns the following :env:.index-url='https://***/private-pypi/simple/' global.index-url='https://pypi.python.org/simple' This makes pip to always default to searching the private pypi index first. Any idea how I can remove the env specific index? It does not appear in the pip.conf file and running pip config unset env.index-url does not work either or I can't get the right syntax. Thanks!
75,337,096
Remove private PyPi index from local virtual env
true
1
1
python,pip,pypi
17
75,336,808
You can remove the environment-specific index by directly editing the environment's pip.ini file or pip.conf file. The file should be located in the environment's lib/pythonX.X/site-packages/pip/ directory. Simply delete the line with the "index-url" value and the default global index will be used.
0
1.2
2023-02-03 14:05:07
0
I got an error when creating virtualenv with Python 3.11 interpreter. I typed this in my terminal python3.11 -m venv env It returned this: Error: Command '['/home/bambang/env/bin/python3.11', '-m', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. What's possibly missing?
75,346,003
Creating Virtual Environment with Python 3.11 Returns an Error
true
1
2
python-venv,python-3.11
2,184
75,338,314
I tried to add --without-pip flag. It doesn't return an error so far
5
1.2
2023-02-03 16:11:39
4
What does it mean when I keep getting these warnings WARNING: The script jupyter-trust is installed in '/Users/josephchoi/Library/Python/3.9/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. I am on MacOS and zsh. I tried researching but the texts were too complicated. As you can tell, I am a complete beginner.
75,342,255
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location
false
1
1
python,python-3.x,terminal,pip,zsh
30
75,338,776
This normally will happen when you've installed a pip package that contains an executable, and it shouldn't be a problem. If you don't like the warning, you can add the folder to your PATH variable by adding the line export PATH=$PATH:/Users/josephchoi/Library/Python/3.9/bin to your .zshrc file in your home directory and it will stop shouting at you.
0
0
2023-02-03 16:51:01
1
I have features extracted from 4 images. These images are video frames. And i want to combine them into one vector of shape (1 ,768) or (1, 512) Is AvgPooling the best way to do it? import torch input = torch.rand([1, 4, 768]) sumpool = torch.nn.AdaptiveAvgPool2d((1, 512)) sumpool(input).shape #torch.Size([1, 1, 512]) Also i tried MeanPooling: result = torch.sum(visual_output, dim=1) / 4 #(1, 768) But seems like i wrong somewhere. After using these combined features results are worse. Is everything correct?
75,341,659
Concatenate video frames using AvgPooling
false
1
1
python,machine-learning,pytorch,computer-vision,data-science
42
75,340,196
Adaptive average pool adjusts sizes for pooling regions whereas mean pooling is similar to AvgPool2d, it solves by dividing the input feature map into several non-overlapping regions and computing the average of each region, assuming your input size is always different from output size created we get irregular results. Basic Pooling had this problem that is why Adaptive pooling came into being.
0
0
2023-02-03 19:25:23
2
I find implementing a multi-threaded binary tree search algorithm in Python can be challenging because it requires proper synchronization and management of multiple threads accessing shared data structures. One approach, I think is to achieve this would be to use a thread-safe queue data structure to distribute search tasks to worker threads, and use locks or semaphores to ensure that each node in the tree is accessed by only one thread at a time. How can you implement a multi-threaded binary tree search algorithm in Python that takes advantage of multiple cores, while maintaining thread safety and avoiding race conditions?
75,341,042
Multi-Thread Binary Tree Search Algorithm
false
1
2
python,multithreading,binary
44
75,340,879
How can you implement a multi-threaded binary tree search algorithm in Python that takes advantage of multiple cores, while maintaining thread safety and avoiding race conditions? You can write a multi-threaded binary tree search in Python that is thread-safe and has no race conditions. Another answer makes some good suggestions about that. But if you're writing it in pure Python then you cannot make effective use of multiple cores to improve the performance of your search, at least not with CPython, because the Global Interpreter Lock prevents any concurrent execution within the Python interpreter. Multithreading can give you a performance improvement if your threads spend a significant fraction of their time in native code or blocked, but tree searching does not have any characteristics that would make room for an improvement from multithreading in a CPython environment.
1
0.099668
2023-02-03 20:44:20
0
I wrote a Python3 script to solve a picoCTF challenge. I received the encrypted flag which is: cvpbPGS{c33xno00_1_f33_h_qrnqorrs} From its pattern, I thought it is encoded using caesar cipher. So I wrote this script: alpha_lower = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u','v', 'w', 'x', 'y', 'z'] alpha_upper = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'] text = 'cvpbPGSc33xno00_1_f33_h_qrnqorrs ' for iterator in range(len(alpha_lower)): temp = '' for char in text: if char.islower(): ind = alpha_lower.index(char) this = ind + iterator while this > len(alpha_lower): this -= len(alpha_lower) temp += alpha_lower[this] elif char.isupper(): ind = alpha_upper.index(char) that = ind + iterator while that > len(alpha_upper): that -= len(alpha_upper) temp += alpha_upper[that] print(temp) I understand what the error means. I can't understand where the flaw is to fix. Thanks in advance. Sorrym here is the error: Desktop>python this.py cvpbPGScxnofhqrnqorrs dwqcQHTdyopgirsorpsst exrdRIUezpqhjstpsqttu Traceback (most recent call last): File "C:\Users\user\Desktop\this.py", line 18, in <module> temp += alpha_lower[this] IndexError: list index out of range
75,342,083
Error, index out of range. What is wrong?
false
1
2
python,python-3.x,algorithm
63
75,341,796
Why that break is simple : If this==len(alpha_lower) then we won't enter your loop: while this > len(alpha_lower): And thus when trying temp += alpha_lower[this] it will return an error. An index must be strictly inferior to the size of the array. Your condition should have been while this >= len(alpha_lower):. As pointed out, a better method here is to use a modulus.
1
0.099668
2023-02-03 22:59:34
1
Using pd.Grouper with a datetime key in conjunction with another key creates a set of groups, but this does not seem to encompass all of the groups that need to be created, in my opinion. >>> test = pd.DataFrame({"id":["a","b"]*3, "b":pd.date_range("2000-01-01","2000-01-03", freq="9H")}) >>> test id b 0 a 2000-01-01 00:00:00 1 b 2000-01-01 09:00:00 2 a 2000-01-01 18:00:00 3 b 2000-01-02 03:00:00 4 a 2000-01-02 12:00:00 5 b 2000-01-02 21:00:00 When I tried to create groups based on the date and id values: >>> g = test.groupby([pd.Grouper(key='b', freq="D"), 'id']) >>> g.groups {(2000-01-01 00:00:00, 'a'): [0], (2000-01-02 00:00:00, 'b'): [1]} g.groups shows only 2 groups when I expected 4 groups: both "a" and "b" for each day. However, when I created another column based off of "b": >>> test['date'] = test.b.dt.date >>> g = test.groupby(['date', 'id']) >>> g.groups {(2000-01-01, 'a'): [0, 2], (2000-01-01, 'b'): [1], (2000-01-02, 'a'): [4], (2000-01-02, 'b'): [3, 5]} The outcome was exactly what I expected. I don't know how to make sense of these different outcomes. Please enlighten me.
75,342,499
pd.Grouper with datetime key in conjunction with another grouping key seemingly creates the wrong number of groups
false
1
2
python,pandas,datetime,group-by
92
75,342,439
I believe it is because of the difference between 'pd.Grouper' and the 'dt.date' method in pandas. 'pd.Grouper' groups by a range of values (e.g., daily, hourly, etc.) while 'dt.date' returns just the date part of a datetime object, effectively creating a categorical variable. When you use 'pd.Grouper' with a frequency of "D", it will group by full days, so each day is represented by only one group. But in your case, each id has multiple records for a given day. So, 'pd.Grouper' is not able to capture all of the groups that you expect. On the other hand, when you use the 'dt.date' method to extract the date part of the datetime, it creates a categorical variable that represents each date independently. so when you group by this new date column along with the id column, each group will correspond to a unique combination of date and id, giving you the expected outcome. In summary, pd.Grouper is useful when you want to group by a range of values (e.g., daily, hourly), while using a separate column for the exact values (e.g., a column for dates only) is useful when you want to group by specific values.
0
0
2023-02-04 01:39:32
2
So I manually imported a certificate and key pair issued by a third party to certmanage in AWS and I am trying to programaticly export to a webserver and I get this error: botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ExportCertificate operation: Certificate ARN: arn:aws:acm:us-east-1:x:certificatexxxxxxxx is not a private certificate Can I export a third party cert and private key from AWS certmanager? python -V Python 3.10.0 I am trying to export a AWS managed certificate from certmanager and its failing. I've tried googleing the error code but come up with nothing.
75,345,246
Exporting Certificates from AWS Certmanager Boto3 Python310
false
1
1
python-3.x,amazon-web-services,boto3
15
75,342,534
AWS Certificate Manager (ACM) has two types of certificates. Public and Private. You can't export any certificate when it is public. Even if you imported it. You can associate your ACM certificate with ALB, for example, and put this ALB in front of your EC2 instance. But you can't export. As you imported the certificate, it means you have the public and private parts of the certificate. You can just use it on your instance. Only ACM privates ones can be exported.
1
0.197375
2023-02-04 02:16:07
0
I have a custom indicator that I use on Tradingview. The values for the mst indicator in python do not match the values for mst indicator in Tradingview. How do I fix this so the values are exactly the same? The pinescript code is as follows: //Calculate MST RSI = ta.rsi(close, 14) rsidelta = ta.mom(RSI, 9) rsisma = ta.sma(ta.rsi(close, 3), 3) mst = rsidelta+rsisma plot(mst, title="MST", color=#BB2BFA, linewidth = 2) I am trying to replicate the exact values for MST in a python script. The python code for RSI that I am using is as follows: def rsi(df: pd.DataFrame, period: int = 14, source: str = 'close') -> np.array: rsi = ta.rsi(df[source], period) if rsi is not None: return rsi.values This is code in my configuration file: [scans.7] # MST rsi_source = 'close' rsi_period = 14 rsi_delta_period = 9 rsi_sma_period = 3 mst_threshold = [20, 80] This is code in scanner.py file # Scan 7 if '7' in self.scans: scan = self.scans['7'] rsi = indicators.rsi(df=df, period=scan['rsi_period'], source=scan['rsi_source']) rsi_delta = rsi[-1] - rsi[-scan['rsi_delta_period']] rsi_sma = pd.Series(indicators.rsi(df=df, period=scan['rsi_sma_period'], source=scan['rsi_source'])).rolling(scan['rsi_sma_period']).mean() mst = rsi_delta + rsi_sma
76,095,739
RSI values in Python (lib is Pandas) don't match RSI values in Tradingview-Pinescript
false
1
1
python,pandas,tradingview-api,rsi
194
75,343,111
I have encountered the same issue on the EMA indicator for me the issue was that I took the last 500 candles from Binance and tried to calculate the EMA and see if the values matched Trading View's values, then I realized that the EMA + RSI indicators are recursive(meaning they relay on past result values to generate a result.) with this said it might be that the reason for this inaccuracy in results is simply the fact that my indicator calculation started at a different point than that of Trading view's resulting in slight inaccuracies between the results.
0
0
2023-02-04 05:11:53
1
I got python code that has no GUI and works in terminal. Can I convert it to apk and run on android? I'm just curious if it's possible.
75,344,967
Is it possible to run code without gui on android?
false
1
1
python,android
28
75,344,601
No, you cannot directly run a Python script in the terminal as an Android app. Python scripts are typically run on a computer using the Python interpreter, and Android devices use the Android operating system which is different from the typical computer operating systems. However, you can use a tool such as Kivy, which is a Python library for creating mobile apps, to create an Android app from your Python script. Kivy provides a way to package your Python code into an Android app, so you can run it on an Android device. I am sure there are other tools providing this option as well. These tools essentially bundle the Python interpreter and your script into a single executable file, so the user doesn't need to have Python installed on their device to run your app. I believe there are tutorials on youtube as well so as to how to use Kivy to run your python code. I hope this helps :)
0
0
2023-02-04 10:57:23
0
I was trying learning about logging in python for the first time today. i discovered when i tried running my code from VS Code, i received this error message /bin/sh: 1: python: not found however when i run the code directly from my terminal, i get the expected result. I need help to figure out the reason for the error message when i run the code directly from vscode I've tried checking the internet for a suitable solution, no fix yet. i will appreciate your responses.
75,357,343
Configuring Python execution from VS Code
false
1
1
python,python-3.x,visual-studio-code,logging,error-log
24
75,344,761
The error message you are receiving indicates that the "python" executable is not found in the PATH environment variable of the terminal you are using from within Visual Studio Code. Add the location of the Python executable to the PATH environment variable in your terminal. Specify the full path to the Python executable in your Visual Studio Code terminal. You can find the full path to the Python executable by running the command "which python" in your terminal.
-1
-0.197375
2023-02-04 11:26:42
2
I am aware that io.BytesIO() returns a binary stream object which uses in-memory buffer. but also provides getbuffer() which provides a readable and writable view (memoryview obj) over the contents of the buffer without copying them. obj = io.BytesIO(b'abcdefgh') buf = obj.getbuffer() Now, we know buf points to underlying data and when sliced(buf[:3]) returns a memoryview object again without making a copy. So I want to know, if we do obj.read(3) does it also uses in-memory buffer or makes a copy ?. if it does uses in-memeory buffer, what is the difference between obj.read and buf and which one to prefer to effectively read the data in chunks for considerably very long byte objects ?
75,345,687
does read method of io.BytesIO returns copy of underlying bytes data?
false
1
1
python,buffer,bytesio,memoryview
761
75,345,565
Simply put, BytesIO.read reads data from the in-memory buffer. The method reads the data and returns as bytes objects and gives you a copy of the read data. buf however, is a memory view object that views the underlying buffer and doesn't make a copy of the data. The difference between BytesIO.read and buf is that, subsequent data retrieves will not be affected when io.BytesIO.read is used as you will get a copy of the data of the buffer, but if you change data bufyou also will change the data in the buffer as well. In terms of performance, using obj.read would be a better choice if you want to read the data in chunks, because it provides a clear separation between the data and the buffer, and makes it easier to manage the buffer. On the other hand, if you want to modify the data in the buffer, using buf would be a better choice because it provides direct access to the underlying data.
1
0.197375
2023-02-04 13:55:50
2
As an example, I can cross validation when I do hyperparameter tuning (GridSearchCV). I can select the best estimator from there and do RFECV. and I can perform cross validate again. But this is a time-consuming task. I'm new to data science and still learning things. Can an expert help me lean how to use these things properly in machine learning model building? I have time series data. I'm trying to do hyperparameter tuning and cross validation in a prediction model. But it is taking a long time run. I need to learn the most efficient way to do these things during the model building process.
75,348,200
How to do the cross validation properly?
false
1
1
python,machine-learning,cross-validation,hyperparameters
27
75,345,615
Cross-validation is a tool in order to evaluate model performance. Specifically avoid over-fitting. When we put all the data in training side, your Model will get over-fitting by ignoring generalisation of the data. The concept of turning parameter should not based on cross-validation because hyper-parameter should be changed based on model performance, for example the depth of tree in a tree algorithm…. When you do a 10-fold cv, you will be similar to training 10 model, of cause it will have time cost. You could tune the hyper-parameter based on the cv result as cv-> model is a result of the model. However it does not make sense when putting the tuning and do cv to check again because the parameter already optimised based on the first model result. P.s. if you are new to data science, you could learn something call regularization/dimension reduction to lower the dimension of your data in order to reduce time cost.
0
0
2023-02-04 14:05:25
0
import re input_text = "((NOUN) ) ) de el auto rojizo, algo) ) )\n Luego ((PL_ADVB)dentro ((NOUN)de baúl ))abajo.) )." input_text = input_text.replace(" )", ") ") print(repr(input_text)) Simply using the .replace(" )", ") ") function I get this bad output, as it doesn't consider the conditional replacements that a function using regex patterns could, for example using re.sub( , ,input_text, flags = re.IGNORECASE) '((NOUN)) ) de el auto rojizo, algo)) ) \n Luego ((PL_ADVB)dentro ((NOUN)de baúl) )abajo.)) .' The goal is to get this output where closing parentheses are stripped of leading whitespace's and a single whitespace is added after as long as the closing parenthesis ) is not in front of a dot . , a newline \n or the end of line $ '((NOUN))) de el auto rojizo, algo)))\n Luego ((PL_ADVB)dentro ((NOUN)de baúl))abajo.)).'
75,347,416
Set a regex pattern to condition placing or removing spaces before or after a ) according to the characters that are before or after
false
1
2
python,regex
44
75,347,375
Try this pattern it should solve it /(\s*)())(\s*)(?=[^\s])/g This pattern will match a ')' that is followed by a non-whitespace character and remove any spaces before or after the ')'. If you want to add spaces around a ')' instead of removing them, you can modify the pattern like this: /(\s*)())(\s*)(?=[^\s])/g
2
0.197375
2023-02-04 18:27:15
1
I want the user to be able to input more than one character they want to remove. It works but only if one character is entered. string = input("Please enter a sentence: ") removing_chars = input("Please enter the characters you would like to remove: ") replacements = [(removing_chars, "")] for char, replacement in replacements: if char in string: string = string.replace(char, replacement) print(string)
75,347,482
Multiple replacements
false
1
4
python
60
75,347,437
when you loop over replacements, char takes removing_chars as value. Then, when you check if char in string, Python checks if removing_chars is a substring of string. To actually remove the characters separately, you have to loop over removing_chars in order to get the individual characters.
0
0
2023-02-04 18:38:05
1
I have a zip file with this structure: Report │ └───folder1 │ │ │ └───subfolder1 | | │ │file 1 2022.txt │ └───folder2 │ file2.txt And their relative file paths are as follows: Report/folder1 / subfolder1 / file 1 2022.txt and Report/folder2/file2.txt I tried to extract the zip file to another destination using the following code: with ZipFile(attachment_filepath, 'r') as z: z.extractall('Destination') However, it gives me a FileNotFoundError: [Winerror 3] The system cannot find the path specified: 'C:\\Users\\myname\\Desktop\\Report\\folder1 \\ subfolder1 ' I can extract just file2.txt without any problems but trying to extract file 1 2022.txt gives me that error,presumably due to all the extra whitespaces
75,347,641
FileNotFoundError with filepath that has whitespaces using ZipFile extract
true
1
1
python,path,python-zipfile
66
75,347,596
"folder1 " (note the space) isn't the same as "folder1" (no space). When passing a path, it has to be the exact path. You can't add whitespace between path separators because the file system will assume you want a path name with spaces. Whatever put those spaces into the path is the problem.
1
1.2
2023-02-04 19:00:01
1
My src directory's layout is the following: Learning innit.py settings.py urls.py wsgi.py pages innit.py admin.py apps.py models.py tests.py views.py Views.py has this code from django.shortcuts import render from django.http import HttpResponse def home_view(*args,**kwargs): return HttpResponse("<h1>Hello World, (again)!</h1>") urls.py has this code from django.contrib import admin from django.urls import path from pages.views import home_view urlpatterns = [ path("", home_view, name = "home"), path('admin/', admin.site.urls), ] The part where it says 'pages.views' in 'from pages.views import home_view' has a yellow/orange squiggle underneath it meaning that it is having problems importing the file and it just doesn't see the package/application called 'pages' and doesn't let me import it even though the package has a folder called 'innit.py'. Even worse is the fact that the tutorial I am currently following receives no such error and I can't see anyone else who has encountered this error. As you probably expect I am a beginner so I don't have experience and this is my first time editing views.html in Django so I may have made an obvious mistake if so, just point it out. I tried doing from ..pages.views import home_view However it failed and gave me an error I have also tried changing the project root however this now causes issues with the imports in 'views.py'.
75,348,049
Issue importing application in Django in urls.html
true
1
1
python,django,django-views
38
75,347,911
The part where it says 'pages.views' in 'from pages.views import home_view' has a yellow/orange squiggle underneath it meaning that it is having problems importing the file and it just doesn't see. You need to mark the correct "source root". This is for Django the project directory, which is the directory that contains the apps. For example in PyCharm you click right on that directory, and use Mark Directory as… ⟩ Sources Root.
2
1.2
2023-02-04 19:47:03
1
Is there any difference between the infinities returned by the math module and cmath module? Does the complex infinity have an imaginary component of 0?
75,349,428
Is there a difference between math.inf and cmath.inf in Python?
false
1
1
python,python-3.x,complex-numbers,infinity,python-cmath
46
75,349,427
Any difference? No, there is no difference. According to the docs, both math.inf and cmath.inf are equivalent to float('inf'), or floating-point infinity. If you want a truly complex infinity that has a real component of infinity and an imaginary component of 0, you have to build it yourself: complex(math.inf, 0) There is, however, cmath.infj, if you want 0 as a real value and infinity as the imaginary component. Constructing imaginary infinity As others have pointed out math.inf + 0j is a bit faster than complex(math.inf, 0). We're talking on the order of nanoseconds though.
2
0.379949
2023-02-05 00:54:45
1
I have a script that modifies a pandas dataframe with several concurrent functions (asyncio coroutines). Each function adds rows to the dataframe and it's important that the functions all share the same list. However, when I add a row with pd.concat a new copy of the dataframe is created. I can tell because each dataframe now has a different memory location as given by id(). As a result the functions are no longer share the same object. How can I keep all functions pointed at a common dataframe object? Note that this issue doesn't arise when I use the append method, but that is being deprecated.
75,349,863
Pandas dataframe sharing between functions isn't working
false
1
1
pandas,dataframe,python-asyncio
13
75,349,540
pandas dataframes are efficient because they use contiguous memory blocks, frequently of fundamental types like int and float. You can't just add a row because the dataframe doesn't own the next bit of memory it would have to expand into. Concatenation usually requires that new memory is allocated and data is copied. Once that happens, referrers to the original dataframe If you know the final size you want, you can preallocate and fill. Otherwise, you are better off keeping a list of new dataframes and concatenating them all at once. Since these are parallel procedures, they aren't dependent on each others output, so this may be a feasable option.
0
0
2023-02-05 01:23:06
0
This is what I'm trying to do. Scan the csv using Polars lazy dataframe Format the phone number using a function Remove nulls and duplicates Write the csv in a new file Here is my code import sys import json import polars as pl import phonenumbers #define the variable and parse the encoded json args = json.loads(sys.argv[1]) #format phone number as E164 def parse_phone_number(phone_number): try: return phonenumbers.format_number(phonenumbers.parse(phone_number, "US"), phonenumbers.PhoneNumberFormat.E164) except phonenumbers.NumberParseException: pass return None #scan the csv file do some filter and modify the data and then write the output to a new csv file pl.scan_csv(args['path'], sep=args['delimiter']).select( [args['column']] ).with_columns( #convert the int phne number as string and apply the parse_phone_number function [pl.col(args['column']).cast(pl.Utf8).apply(parse_phone_number).alias(args['column']), #add another column list_id with value 100 pl.lit(args['list_id']).alias("list_id") ] ).filter( #filter nulls pl.col(args['column']).is_not_null() ).unique(keep="last").collect().write_csv(args['saved_path'], sep=",") I tested a file with 800k rows and 23 columns (150mb) and it takes around 20 seconds and more than 500mb ram then it completes the task. Is this normal? Can I optimize the performance (the memory usage at least)? I'm really new with Polars and I work with PHP and I'm very noob at python too, so sorry if my code looks bit dumb haha.
75,351,869
Python Polars consuming high memory and taking longer time
true
1
2
python,pandas,python-polars
625
75,349,550
You are using an apply, which means you are effectively writing a python for loop. This often is 10-100x slower than using expressions. Try to avoid apply. And if you do use apply, don't expect it to be fast. P.S. you can reduce memory usage by not casting the whole column to Utf8, but instead cast inside your apply function. Though I don't think using 500MB is that high. Ideally polars uses as much RAM as available without going OOM. Unused RAM might be wasted potential.
4
1.2
2023-02-05 01:26:50
1
I'm trying to use PySpark to read from Avro file into dataframe, do some transformations and write the dataframe out to HDFS as hive tables using the code below. The file format for the hive tables is parquet. df.write.mode("overwrite").format("hive").insertInto("mytable") #this write a partition every day. When re-run, it would overwrite that run day's partition The problem is, when the source data has a schema change, like added a column, it will fail with an error saying: source file structure not match with existing table schema. How should I handle this case programmatically? Many thanks for your help. Edited :I want the new schema changes to be reflected in target table. I'm looking for a programmatic way to do this.
75,379,697
PySpark- How to handle source data schema change
false
1
3
python,dataframe,apache-spark,pyspark,hive
533
75,351,823
You should be able to query off the system tables. You can run a comparison on these to see what changes have occurred since your last run.
0
0
2023-02-05 11:17:41
1
For example, I have a post and I want to update it with tags and some custom field, like 'rating' or 'mood' (not using any plugin, only WP built-in options for custom fields and REST API). r = requests.post(WP_url, params = {'tags': tags, 'rating': rating}, auth = wp_auth) Something like this. It works great for updating existing post parameters and fields, but I cannot find a way to create a custom field using Python API requests only.
75,352,704
How do I make a Python request for WordPress REST API to create a custom field?
false
1
1
python,wordpress,rest,python-requests
112
75,352,647
I don't think it is possible to create a new field from request. Its depend on you WP Rest API Server, how it handles the excess argument you, if you API create a new field for any excess provided then only it will be possible to create new field.
0
0
2023-02-05 13:34:19
1
I created a Pixel class for image processing (and learn how to build a class). A full image is then a 2D numpy.array of Pixel but when I added a __getattr__ method , it stopped to work, because numpy wants an __array_struct__ attribute. I tried to add this in __getattr__: if name == '__array_struct__': return object.__array_struct__ Now it works but I get '''DeprecationWarning: An exception was ignored while fetching the attribute __array__ from an object of type 'Pixel'. With the exception of AttributeError NumPy will always raise this exception in the future. Raise this deprecation warning to see the original exception. (Warning added NumPy 1.21) I = np.array([Pixel()],dtype = Pixel)''' a part of the class: class Pixel: def __init__(self,*args): #things to dertermine RGB self.R,self.G,self.B = RGB #R,G,B are float between 0 and 255 ... def __getattr__(self,name): if name == '__array_struct__': return object.__array_struct__ if name[0] in 'iI': inted = True name = name[1:] else: inted = False if len(name)==1: n = name[0] if n in 'rgba': value = min(1,self.__getattribute__(n.upper())/255) elif n in 'RGBA': value = min(255,self.__getattribute__(n)) assert 0<=value else: h,s,v = rgb_hsv(self.rgb) if n in 'h': value = h elif n == 's': value = s elif n == 'v': value = v elif n == 'S': value = s*100 elif n == 'V': value = v*100 elif n == 'H': value = int(h) if inted: return int(value) else: return value else: value = [] for n in name: try: v = self.__getattribute__(n) except AttributeError: v = self.__getattr__(n) if inted: value.append(int(v)) else: value.append(v) return value
75,354,910
How do I store objects I created in np.array if a __getattr__ exists?
true
1
2
python,numpy-ndarray
82
75,354,472
Your class should either implement __array__ or raise an AttributeError when numpy tries to get it. The warning message says you raised some other error and that numpy will not accept that in the future. I haven't figured out your code well enough to know, but it could be that calling self.__getattr__(n) inside of __getattr__ hits a maximum recursion error. object.__array_struct__ doesn't exist and so just by luck its AttributeError exception is what numpy was looking for. A better strategy is to raise AttributeError for anything that doesn't meet the selection criteria for your automatically generated attributes. Then you can take out the special case for __array_struct__ that doesn't work properly anyway.
1
1.2
2023-02-05 18:15:55
2
When I do pip install dotenv it says this - `Collecting dotenv Using cached dotenv-0.0.5.tar.gz (2.4 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [72 lines of output] C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( error: subprocess-exited-with-error python setup.py egg_info did not run successfully. exit code: 1 [17 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 14, in <module> File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\__init__.py", line 2, in <module> from setuptools.extension import Extension, Library File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\extension.py", line 5, in <module> from setuptools.dist import _get_unpatched File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\dist.py", line 7, in <module> from setuptools.command.install import install File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\command\__init__.py", line 8, in <module> from setuptools.command import install_scripts File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\command\install_scripts.py", line 3, in <module> from pkg_resources import Distribution, PathMetadata, ensure_directory File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\pkg_resources.py", line 1518, in <module> register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. Traceback (most recent call last): File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\installer.py", line 82, in fetch_build_egg subprocess.check_call(cmd) File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 413, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\ANJUTI~1\\AppData\\Local\\Temp\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-install-j7w9rs9u\dotenv_0f4daa500bef4242bb24b3d9366608eb\setup.py", line 13, in <module> setup(name='dotenv', File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\__init__.py", line 86, in setup _install_setup_requires(attrs) File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\__init__.py", line 80, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\dist.py", line 875, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\pkg_resources\__init__.py", line 789, in resolve dist = best[req.key] = env.best_match( ^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\pkg_resources\__init__.py", line 1075, in best_match return self.obtain(req, installer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\pkg_resources\__init__.py", line 1087, in obtain return installer(requirement) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\dist.py", line 945, in fetch_build_egg return fetch_build_egg(self, req) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\installer.py", line 84, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\ANJUTI~1\\AppData\\Local\\Temp\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details.` I tried doing pip install dotenv but then that error come shown above. I also tried doing pip install -U dotenv but it didn't work and the same error came. Can someone please help me fix this?
75,354,709
Pip install dotenv, Error 1 Windows 10 Pro
true
1
1
python,error-handling,pip,download,dotenv
1,365
75,354,617
pip install python-dotenv worked for me.
7
1.2
2023-02-05 18:37:23
3
def mean(x): return(sum(x)/len(x)) def variance(x): x_mean = mean(x) return sum((x-x_mean)**2)/(len(x)-1) def standard_deviation(x): return math.sqrt(variance(x)) The functions above build on each other. They depend on the previous function. What is a good way to implement this in Python? Should I use a class which has these functions? Are there other options?
75,356,009
Functions depending on other functions in Python
true
1
1
python
47
75,355,949
Because they are widely applicable, keep them as they are Many parts of a program may need to calculate these statistics, and it will save wordiness to not have to get them out of a class. Moreover, the functions actually don't need any class-stored data: they would simply be static methods of a class. (Which in the old days, we would have simply called "functions"!) If they needed to store internal information to work correctly, that is a good reason to put them into a class The advantage in that case is that it is more obvious to the programmer what information is being shared. Moreover, you might want to create two or more instances that had different sets of shared data. That is not the case here.
3
1.2
2023-02-05 22:31:29
1
I need a product's unit of stock(quantity). Is it possible with SP API, if possible how can I get it? Note: I can get it with SKU like the following code but the product is not listed by my sellers. from sp_api.api import Inventories quantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{ "details": False, "marketplaceIds": ["A13V1IB3VIYZZH"], "sellerSkus": ["MY_SKU_1" , "MY_SKU_2"] }) print(quantity)
75,561,704
How can I get quantity with SP API Python
false
1
1
python,amazon-selling-partner-api
304
75,356,060
from sp_api.api import Inventories quantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{ "details": False, "marketplaceIds": ["A13V1IB3VIYZZH"], "sellerSkus": ["MY_SKU_1" , "MY_SKU_2"] }) print(quantity)
0
0
2023-02-05 22:54:14
1
I'm training a VAE with TensorFlow Keras backend and I'm using Adam as the optimizer. the code I used is attached below. def compile(self, learning_rate=0.0001): optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) self.model.compile(optimizer=optimizer, loss=self._calculate_combined_loss, metrics=[_calculate_reconstruction_loss, calculate_kl_loss(self)]) The TensorFlow version I'm using is 2.11.0. The error I'm getting is AttributeError: 'Adam' object has no attribute 'get_updates' I'm suspecting the issues arise because of the version mismatch. Can someone please help me to sort out the issue? Thanks in advance.
76,288,587
AttributeError: 'Adam' object has no attribute 'get_updates'
false
2
3
python,tensorflow
3,566
75,356,826
Of late, I had to use the tensorflow2.5 and I replaced all "import keras" by "import tensorflow.keras". Now I use tensorflow2.12 and I met this error and when I returned those replacements; this error was removed. thank you!
1
0.066568
2023-02-06 02:20:41
1
I'm training a VAE with TensorFlow Keras backend and I'm using Adam as the optimizer. the code I used is attached below. def compile(self, learning_rate=0.0001): optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) self.model.compile(optimizer=optimizer, loss=self._calculate_combined_loss, metrics=[_calculate_reconstruction_loss, calculate_kl_loss(self)]) The TensorFlow version I'm using is 2.11.0. The error I'm getting is AttributeError: 'Adam' object has no attribute 'get_updates' I'm suspecting the issues arise because of the version mismatch. Can someone please help me to sort out the issue? Thanks in advance.
76,295,165
AttributeError: 'Adam' object has no attribute 'get_updates'
false
2
3
python,tensorflow
3,566
75,356,826
Two ways worked for me, By using tf.keras.optimizers.legacy.SGD - instead of tf.keras.optimizers.SGD Importing statement is changed from import tensorflow.keras as keras to 'import keras'
0
0
2023-02-06 02:20:41
1
I have a column that has name variations that I'd like to clean up. I'm having trouble with the regex expression to remove everything after the first word following a comma. d = {'names':['smith,john s','smith, john', 'brown, bob s', 'brown, bob']} x = pd.DataFrame(d) Tried: x['names'] = [re.sub(r'/.\s+[^\s,]+/','', str(x)) for x in x['names']] Desired Output: ['smith,john','smith, john', 'brown, bob', 'brown, bob'] Not sure why my regex isn't working, but any help would be appreciated.
75,356,969
Regex - removing everything after first word following a comma
false
1
2
python,regex
65
75,356,848
Try re.sub(r'/(,\s*\w+).*$/','$1', str(x))... Put the triggered pattern into capture group 1 and then restore it in what gets replaced.
0
0
2023-02-06 02:27:11
2
I have training data with 2 dimension. (200 results of 4 features) I proved 100 different applications with 10 repetition resulting 1000 csv files. I want to stack each csv results for machine learning. But I don't know how. each of my csv files look like below. test1.csv to numpy array data [[0 'crc32_pclmul' 445 0] [0 'crc32_pclmul' 270 4096] [0 'crc32_pclmul' 234 8192] ... [249 'intel_pmt' 272 4096] [249 'intel_pmt' 224 8192] [249 'intel_pmt' 268 12288]] I tried below python code. path = os.getcwd() csv_files = glob.glob(os.path.join(path, "*.csv")) cnt=0 for f in csv_files: cnt +=1 seperator = '_' app = os.path.basename(f).split(seperator, 1)[0] if cnt==1: a = np.array(preprocess(f)) b = np.array(app) else: a = np.vstack((a, np.array(preprocess(f)))) b = np.append(b,app) print(a) print(b) preprocess function returns df.to_numpy results for each csv files. My expectation was like below. a(1000, 200, 4) [[[0 'crc32_pclmul' 445 0] [0 'crc32_pclmul' 270 4096] [0 'crc32_pclmul' 234 8192] ... [249 'intel_pmt' 272 4096] [249 'intel_pmt' 224 8192] [249 'intel_pmt' 268 12288]], [[0 'crc32_pclmul' 445 0] [0 'crc32_pclmul' 270 4096] [0 'crc32_pclmul' 234 8192] ... [249 'intel_pmt' 272 4096] [249 'intel_pmt' 224 8192] [249 'intel_pmt' 268 12288]], ... [[0 'crc32_pclmul' 445 0] [0 'crc32_pclmul' 270 4096] [0 'crc32_pclmul' 234 8192] ... [249 'intel_pmt' 272 4096] [249 'intel_pmt' 224 8192] [249 'intel_pmt' 268 12288]]] However, I'm getting this. a(200000, 4) [[0 'crc32_pclmul' 445 0] [0 'crc32_pclmul' 270 4096] [0 'crc32_pclmul' 234 8192] ... [249 'intel_pmt' 272 4096] [249 'intel_pmt' 224 8192] [249 'intel_pmt' 268 12288]] I want to access each csv results using a[0] to a[1000] each sub-array looks like (200,4) How can I solve the problem? I'm quite lost
75,357,911
make 3d numpy array using for loop in python
false
1
3
python,arrays,numpy,3d,2d
76
75,357,819
Make a new list (outside of the loop) and append each item to that new list after reading.
0
0
2023-02-06 06:02:38
1
I am new to docker and using apptainer for that. the def file is: firstApp.def: `Bootstrap: docker From: ubuntu:22.04 %environment export LC_ALL=C ` then I built it as follows and I want it to be writable (I hope I am not so naive), so I can install some packages later: `apptainer build --sandbox --fakeroot firstApp.sif firstApp.def ` now I do not know how to install Python3 (preferably, 3.8 or later). I tried to add the following command lines to the def file: `%post apt-get -y install update apt-get -y install python3.8 ` it raises these errors as well even without "apt-get -y install python3.8": Reading package lists... Done Building dependency tree... Done Reading state information... Done E: Unable to locate package update FATAL: While performing build: while running engine: exit status 100
75,740,197
How to install Python or R in an apptainer?
false
1
1
python,docker,apptainer
104
75,360,485
This work for me %post apt-get update && apt-get install -y netcat python3.8
0
0
2023-02-06 11:03:48
1
I defined a function which returns a third order polynomial function for either a value, a list or a np.array: def two_d_third_order(x, a, b, c, d): return a + np.multiply(b, x) + np.multiply(c, np.multiply(x, x)) + np.multiply(d, np.multiply(x, np.multiply(x, x))) The issue I noticed is, however, when I use "two_d_third_order" on the following two inputs: 1500 1500.0 With (a, b, c, d) = (1.20740028e+00, -2.93682465e-03, 2.29938078e-06, -5.09134552e-10), I get two different results: 2.4441 0.2574 , respectively. I don't know how this is possible, and any help would be appreciated. I tried several inputs, and somehow the inclusion of a floating point on certain values (despite representing the same numerical value) changes the end result.
75,362,712
Python code yielding different result for same numerical value, depending on inclusion of precision point
true
1
2
python-3.x,numpy,scipy
45
75,360,628
Python uses implicit data type conversions. When you use only integers (like 1500), there is a loss of precision in all subsequent operations. Whereas when you pass it a float or double (like 1500.0), subsequent operations are performed with the associated datatype, i.e in this case with higher precision. This is not a "bug" so to speak, but generally how Python operates without the explicit declaration of data types. Languages like C and C++ require explicit data type declarations and explicit data type casting to ensure operations are performed in the prescribed precision formats. Can be a boon or a bane depending on usage.
0
1.2
2023-02-06 11:19:44
1
I try to use an assembly for .NET framework 4.8 via Pythonnet. I am using version 3.0.1 with Python 3.10. The documentation of Pythonnet is stating: You must set Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable starting with version 3.0, otherwise you will receive BadPythonDllException (internal, derived from MissingMethodException) upon calling Initialize. Typical values are python38.dll (Windows), libpython3.8.dylib (Mac), libpython3.8.so (most other Unix-like operating systems). However, the documentation unfortunately is not stating how the property is set and I do not understand how to do this. When I try: import clr from pythonnet import load load('netfx') clr.AddReference(r'path\to\my.dll') unsurprisingly the following error is coming up Failed to initialize pythonnet: System.InvalidOperationException: This property must be set before runtime is initialized bei Python.Runtime.Runtime.set_PythonDLL(String value) bei Python.Runtime.Loader.Initialize(IntPtr data, Int32 size) bei Python.Runtime.Runtime.set_PythonDLL(String value) bei Python.Runtime.Loader.Initialize(IntPtr data, Int32 size) [...] in load raise RuntimeError("Failed to initialize Python.Runtime.dll") RuntimeError: Failed to initialize Python.Runtime.dll The question now is, where and how the Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable is set Thanks, Jens
75,368,080
Trouble shooting using Pythonnet and setting Runtime.PythonDLL property
false
1
2
python,.net,clr,python.net
1,574
75,362,126
I believe this is because import clr internally calls pythonnet.load, and in the version of pythonnet you are using this situation does not print any warning. E.g. the right way is to call load before you call import clr for the first time.
0
0
2023-02-06 13:46:28
2
I have a virtual environment where I am developing a Python package. The folder tree is the following: working-folder |-setup.py |-src |-my_package |-__init__.py |-my_subpackage |-__init__.py |-main.py main.py contains a function my_main that ideally, I would want to run as a bash command. I am using setuptools and the setup function contains the following line of code setup( ... entry_point={ "console_scripts": [ "my-command = src.my_package.my_subpackage.main:my_main", ] }, ... ) When I run pip install . the package gets correctly installed in the virtual environment. However, when running my-command on the shell, the command does not exist. Am I missing some configuration to correctly generate the entry point?
75,386,087
Python entry_point in virtual environment not working
true
1
1
python,package,virtualenv,setuptools,entry-point
27
75,362,342
I simply mistyped the argument entry_point, which actually is entry_points. Unfortunately, I was not getting any output errors.
0
1.2
2023-02-06 14:04:40
1
I have a figure with different plots on several axes. Some of those axes do not play well with some of the navigation toolbar actions. In particular, the shortcuts to go back to the home view and the ones to go to the previous and next views. Is there a way to disable those shortcuts only for those axes? For example, in one of the two in the figure from the example below. import matplotlib.pyplot as plt # Example data for two plots x1 = [1, 2, 3, 4] y1 = [10, 20, 25, 30] x2 = [2, 3, 4, 5] y2 = [5, 15, 20, 25] # Create figure and axes objects fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5)) # Plot data on the first axis ax1.plot(x1, y1) ax1.set_title("First Plot") # Plot data on the second axis ax2.plot(x2, y2) ax2.set_title("Second Plot") # Show plot plt.show() Edit 1: The following method will successfully disable the pan and zoom tools from the GUI toolbox in the target axis. ax2.set_navigate(False) However, the home, forward, and back buttons remain active. Is there a trick to disable also those buttons in the target axis?
75,447,405
How to disable the Matplotlib navigation toolbar in a particular axis?
false
1
3
python,matplotlib,user-interface,widget,interactive
274
75,362,809
You can try to use ax2.get_xaxis().set_visible(False)
0
0
2023-02-06 14:45:06
2
I am trying to automate the process of liking pages on Facebook. I've got a list of each page's link and I want to open and like them one by one. I think the Like button doesn't have any id or name, but it is in a span class. <span class="x1lliihq x6ikm8r x10wlt62 x1n2onr6 xlyipyv xuxw1ft">Like</span> I used this code to find and click on the "Like" button. def likePages(links, driver): for link in links: driver.get(link) time.sleep(3) driver.find_element(By.LINK_TEXT, 'Like').click() And I get the following error when I run the function: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element
75,363,222
How to find and click the "Like" button on Facebook page using Selenium
false
1
2
python,selenium,selenium-webdriver,xpath,nosuchelementexception
362
75,363,011
You cannot use Link_Text locator as Like is not a hyperlink. Use XPath instead, see below: XPath : //span[contains(text(),"Like")] driver.find_element(By.XPATH, '//span[contains(text(),"Like")]').click()
0
0
2023-02-06 15:03:27
1
i have a package and in it i use pyproject.toml and for proper typing i need stubs generated, although its kinda annoying to generate them manually every time, so, is there a way to do it automatically using it ? i just want it to run stubgen and thats it, just so mypy sees the stubs and its annoying seeing linters throw warnings and you keep having to # type: ignore heres what i have as of now, i rarely do this so its probably not that good : [build-system] requires = ["setuptools", "setuptools-scm"] build-backend = "setuptools.build_meta" [project] name = "<...>" authors = [ {name = "<...>", email = "<...>"}, ] description = "<...>" readme = "README" requires-python = ">=3.10" keywords = ["<...>"] license = {text = "GNU General Public License v3 or later (GPLv3+)"} classifiers = [ "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)", "Programming Language :: Python :: 3", ] dependencies = [ "<...>", ] dynamic = ["version"] [tool.setuptools] include-package-data = true [tool.setuptools.package-data] <...> = ["*.pyi"] [tool.pyright] pythonVersion = "3.10" exclude = [ "venv", "**/node_modules", "**/__pycache__", ".git" ] include = ["src", "scripts"] venv = "venv" stubPath = "src/stubs" typeCheckingMode = "strict" useLibraryCodeForTypes = true reportMissingTypeStubs = true [tool.mypy] exclude = [ "^venv/.*", "^node_modules/.*", "^__pycache__/.*", ] thanks for the answers in advance
75,371,297
how to automatically generate mypy stubs using pyproject.toml
false
1
1
python,python-3.x,mypy,pyproject.toml
266
75,367,685
just make a shellscript and add it to pyproject.toml as a script :+1:
0
0
2023-02-06 23:46:22
1
I made an .exe file using pyinstaller, but when I run the file it opens a PowerShell window as well. I was wondering if there is anyway I can get it to not open so I just have the python program open. I haven't really tried anything as I don't really know what I'm doing.
75,368,754
.exe file opening Powershell Window
false
2
2
python,powershell,pyinstaller,exe
43
75,368,407
if you run it from terminal, you can use this command: start /min "" "path\file_name.exe"
0
0
2023-02-07 02:27:36
1
I made an .exe file using pyinstaller, but when I run the file it opens a PowerShell window as well. I was wondering if there is anyway I can get it to not open so I just have the python program open. I haven't really tried anything as I don't really know what I'm doing.
75,368,529
.exe file opening Powershell Window
true
2
2
python,powershell,pyinstaller,exe
43
75,368,407
When running pyinstaller be sure to use the --windowed argument. For example: pyinstaller –-onefile myFile.py –-windowed
0
1.2
2023-02-07 02:27:36
1
this is my data X_train prepared for LSTM of shape (7000, 2, 200) [[[0.500858 0. 0.5074856 ... 1. 0.4911533 0. ] [0.4897923 0. 0.48860878 ... 0. 0.49446714 1. ]] [[0.52411383 0. 0.52482396 ... 0. 0.48860878 1. ] [0.4899698 0. 0.48819458 ... 1. 0.4968341 1. ]] ... [[0.6124623 1. 0.6118705 ... 1. 0.6328777 0. ] [0.6320492 0. 0.63512635 ... 1. 0.6960175 0. ]] [[0.6118113 1. 0.6126989 ... 0. 0.63512635 1. ] [0.63530385 1. 0.63595474 ... 1. 0.69808865 0. ]]] I create my sequential model model = Sequential() model.add(LSTM(units = 50, activation = 'relu', input_shape = (X_train.shape[1], 200))) model.add(Dropout(0.2)) model.add(Dense(1, activation = 'linear')) model.compile(loss = 'mean_squared_error', optimizer = 'adam') Then I fit my model: history = model.fit( X_train, Y_train, epochs = 20, batch_size = 200, validation_data = (X_test, Y_test), verbose = 1, shuffle = False, ) model.summary() And at the end I can see something like this: Layer (type) Output Shape Param # ================================================================= lstm_16 (LSTM) (None, 2, 50) 50200 dropout_10 (Dropout) (None, 2, 50) 0 dense_10 (Dense) (None, 2, 1) 51 Why does it say that output shape have a None value as a first element? Is it a problem? Or it should be like this? What does it change and how can I change it? I will appreciate any help, thanks!
75,368,566
Keras LSTM None value output shape
true
1
1
python,tensorflow,keras,lstm
112
75,368,490
The first value in TensorFlow is always reserved for the batch-size. Your model doesn't know in advance what is your batch-size so it makes it None. If we go into more details let's suppose your dataset is 1000 samples and your batch-size is 32. So, 1000/32 will become 31.25, if we just take the floor value which is 31. So, there would be 31 batches in a total of size 32. But if you look here the total sample size of your dataset is 1000 but you have 31 batches of size 32, which is 32 * 31 = 992, where 1000 - 992 = 8, it means there would be one more batch of size 8. But the model doesn't know in advance so, what does it do? it reserves a space in the memory where it doesn't define a specific shape for it, in other words, the memory is dynamic for the batch-size. Therefore, you are seeing it None there. So, the model doesn't know in advance what would be the shape of my batch-size so it makes it None so it should know it later when it computes the first epoch meaning computes all of the batches. The None value can't be changed because it is Dynamic in Tensorflow, the model knows it and fix it when your model completes its first epoch. So, always set the shapes which are after it like in your case it is (2, 200). The 7000 is your model's total number of samples so the model doesn't know in advance what would be your batch-size and the other big issue is most of the time your batch-size is not evenly divisible by your total number of samples in dataset therefore, it is necessary for the model to make it None to know it later when it computes all the batches in the very first epoch.
1
1.2
2023-02-07 02:45:45
1
I have docker file like below: FROM continuumio/miniconda3 RUN conda update -n base -c defaults conda RUN conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service COPY ./src /app WORKDIR /app CMD ["conda", "run", "-n", "pymc3_env", "python", "ma.py"] I get the following error: ------ > [3/5] RUN conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service: #0 0.400 Collecting package metadata (current_repodata.json): ...working... done #0 9.148 Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source. #0 9.149 Collecting package metadata (repodata.json): ...working... done #0 45.81 Solving environment: ...working... failed #0 45.82 #0 45.82 PackagesNotFoundError: The following packages are not available from current channels: #0 45.82 #0 45.82 - mkl-service #0 45.82 - mkl #0 45.82 #0 45.82 Current channels: #0 45.82 #0 45.82 - https://conda.anaconda.org/conda-forge/linux-aarch64 #0 45.82 - https://conda.anaconda.org/conda-forge/noarch #0 45.82 - https://repo.anaconda.com/pkgs/main/linux-aarch64 #0 45.82 - https://repo.anaconda.com/pkgs/main/noarch #0 45.82 - https://repo.anaconda.com/pkgs/r/linux-aarch64 #0 45.82 - https://repo.anaconda.com/pkgs/r/noarch #0 45.82 #0 45.82 To search for alternate channels that may provide the conda package you're #0 45.82 looking for, navigate to #0 45.82 #0 45.82 https://anaconda.org #0 45.82 #0 45.82 and use the search bar at the top of the page. #0 45.82 #0 45.82 ------ failed to solve: executor failed running [/bin/sh -c conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service]: exit code: 1 Can anybody help me to understand why conda could not find mkl and mkl-service in conda-forge channel and what do I need to resolve this? I am using macos as a host, if it is any concern. Thanks in advance for any help.
75,375,632
unable to install mkl mkl-service using conda in docker
true
1
1
python,linux,docker,anaconda,conda
123
75,368,928
MKL only works for x86_64, that is the Docker image must use the platform linux/amd64. So, either specify --platform=linux/amd64 in the build command line or in the FROM.
1
1.2
2023-02-07 04:20:39
1
I am trying to get the last message that user 476686545034674176 sent in channel 1049386904065409054 and when I try to debug it, I either get a weird output or an error that says it is a Nonetype after I got an output that should trigger if it got a message. I tried: @client.event async def on_ready(): print('Logged in as') print(client.user.name) print(client.user.id) print('------') await tree.sync(guild=discord.Object(id=1049253865112997888)) aviv_venting_about_his_shitass_brothers = client.get_channel(1049386904065409054) global last_message async for message in aviv_venting_about_his_shitass_brothers.history(limit=1000): if message.author.id == 476686545034674176: last_message = message if last_message is None: print('no messages found') elif last_message.content == None: print('invalid message') else: print(f'found message {last_message.content}') break There is a line later in the code: await interaction.response.send_message(f'aviv last vented at {datetime.datetime.fromtimestamp(last_message.created_at).strftime("%Y-%m-%d %H:%M:%S")} <@{interaction.user.id}>') and it gives me this error: discord.app_commands.errors.CommandInvokeError: Command 'last_vent' raised an exception: TypeError: 'datetime.datetime' object cannot be interpreted as an integer I expected to get an output when the bot starts up and I either get no output or 'found message'
75,370,912
How do I get the last message sent by a certain user in a certain channel with discord.py?
true
1
1
python,discord.py
56
75,370,722
Your problem is not that the bot doesn't find a matching message, its problem lies within the execution of the send_message command. Read the error message. You're trying to pass an invalid type for an argument. I am not familiar with the intricacies of discord.py, but if I could hazard a guess, last_message.created_at already is a datetime object.
0
1.2
2023-02-07 08:32:02
1
The subject contains the whole idea. I came accross code sample where it shows something like: async for item in getItems(): await item.process() And others where the code is: for item in await getItems(): await item.process() Is there a notable difference in these two approaches?
75,373,144
In Python, what is the difference between `async for x in async_iterator` and `for x in await async_iterator`?
false
1
2
python,python-3.x,asynchronous,python-asyncio
157
75,372,032
Those are completely different. This for item in await getItems() won't work (will throw an error) if getItems() is an asynchronous iterator or asynchronous generator, it may be used only if getItems is a coroutine which, in your case, is expected to return a sequence object (simple iterable). async for is a conventional (and pythonic) way for asynchronous iterations over async iterator/generator.
0
0
2023-02-07 10:26:26
4
I'm trying to use TA-lib for a hobby project. I found some code-snippets as reference telling me to do the following; import talib as ta ta.add_all_ta_features("some parameters here") i get the following error when running the code: ta.add_all_ta_features( AttributeError: module 'talib' has no attribute 'add_all_ta_features' It looks like i need to manualy add all the features i want as i cant find the attribute .add_all_ta_features in the talib folder. i've installed TA-Lib and made it a 64-bit library using Visual studio and managed to run TA-Lib in other projects before but have never used the .add_all_ta_features-attribute. Does anybody know how i can fix this? Google seems to not return any usefull results when searched for this. The documentation i'm following also does not mention anything about this attribute. i tried using pandas_ta and tried using the Google colab space, but both return the same error.
75,382,873
TA-LIB module has no attribute 'add_all_ta_features'
false
1
1
python,ta-lib
326
75,372,851
Found the problem. I was trying to use TA-Lib as TA, but nowhere was it specified that we need a seperate library, not findable through the python package mangager simply called TA. Thanks!
1
0.197375
2023-02-07 11:40:36
1
I am trying to find all observations that are located within 100 meters of a set of coordinates. I have two dataframes, Dataframe1 has 400 rows with coordinates, and for each row, I need to find all the observations from Dataframe2 that are located within 100 meters of that location, and count them. Ideally, Both the dataframes are formatted like this: | Y | X | observations_within100m | |:----:|:----:|:-------------------------:| |100 |100 | 22 | |110 |105 | 25 | |110 |102 | 11 | I am looking for the most efficient way to do this computation, as dataframe2 has over a 200 000 dwelling locations. I know it can be done with applying a distance function with something as a for loop but I was wondering what the best method is here.
75,375,261
Most resource-efficient way to calculate distance between coordinates
false
1
2
python,pandas
58
75,374,930
If there's a small area you're working on, you could make a grid of all known locations, then for each point precompute a list of which entries in df1 which are withing 100m from that point. Step 2 would be to go thru the 200k lines df2 and increase the count for the df1 entries found at the point correspondingly. Otherwise, this problem is similar to collision detection, for which there might be smart implementations. e.g. pygame has one, no idea though how efficient. Depending on how sparse the area is there might be gains thru dividing it into cells, so you'd only have to detect collision/distance for the entries in that cell, decreasing from 400 objects you'd have to check against for each of the 200k. Hope the answer was helpful and good luck!
0
0
2023-02-07 14:42:37
1
My team is using AWS Glue endpoints to locally develop using VS code notebooks, this morning for some reason - our endpoints get the error below. Its 3 machines (Mac, Linux and Windows) that did not update anything and just suddenly got this error when trying to use the Glue endpoint. Anyone else getting this error? Whats even stranger is that the fourth developer, who does not have a different setup can still use the endpoint. If I create a notebook using jupyter notebook and use the glue pyspark kernel there, it will work. Any attempt at updating or redownloading Python / the packages has no effect. When I add a print to this library I can see the Data object is empty. If I comment this line out I am unable to see outputs from my notebook. Anyone else getting this error? The error: Trying to create a Glue session for the kernel. Worker Type: G.1X Number of Workers: 2 Session ID: 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 Applying the following default arguments: --glue_kernel_version 0.35 --enable-glue-datacatalog true --additional-python-modules great-expectations==0.15.17 --conf spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED --conf spark.sql.legacy.parquet.int96RebaseModeInRead=CORRECTED --conf spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED --enable-job-insights true Waiting for session 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 to get into ready status... Session 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 has been created Exception encountered while running statement: 'TextPlain' Traceback (most recent call last): File "/home/user/.local/lib/python3.10/site-packages/aws_glue_interactive_sessions_kernel/glue_pyspark/GlueKernel.py", line 163, in do_execute self._send_output(statement_output["Data"]["TextPlain"]) KeyError: 'TextPlain'
75,389,505
Exception encountered while running statement: 'TextPlain' for Glue session
true
1
1
python,aws-glue
284
75,375,998
I had the same issue but I managed to fix it by downgrading to python3.9 from python3.10, updated aws-glue-sessions to 0.37.0 from 0.35.0 and downgrading psutil to 5.9.1. There could potentially be other issues but those should be apparent in the "Output" tab in VSCode.
1
1.2
2023-02-07 16:02:56
1
Can mypy check that a NumPy array of floats is passed as a function argument? For the code below mypy is silent when an array of integers or booleans is passed. import numpy as np import numpy.typing as npt def half(x: npt.NDArray[np.cfloat]): return x/2 print(half(np.full(4,2.1))) print(half(np.full(4,6))) # want mypy to complain about this print(half(np.full(4,True))) # want mypy to complain about this
75,378,152
How to use mypy to ensure that a NumPy array of floats is passed as function argument?
true
1
1
python,numpy,numpy-ndarray,mypy
145
75,378,061
Mypy can check the type of values passed as function arguments, but it currently has limited support for NumPy arrays. You can use the numpy.typing.NDArray type hint, as in your code, to specify that the half function takes a NumPy array of complex floats as an argument. However, mypy will not raise an error if an array of integers or booleans is passed, as it currently cannot perform type-checking on the elements of the array. To ensure that only arrays of complex floats are passed to the half function, you will need to write additional runtime checks within the function to validate the input.
1
1.2
2023-02-07 19:23:42
1
I have two relatively large dataframes (less than 5MB), which I receive from my front-end as files via my API Gateway. I am able to receive the files and can print the dataframes in my receiver Lambda function. From my Lambda function, I am trying to invoke my state machine (which just cleans up the dataframes and does some processing). However, when passing my dataframe to my step function, I receive the following error: ClientError: An error occurred (413) when calling the StartExecution operation: HTTP content length exceeded 1049600 bytes My Receiver Lambda function: dict = {} dict['username'] = arr[0] dict['region'] = arr[1] dict['country'] = arr[2] dict['grid'] = arr[3] dict['physicalServers'] = arr[4] #this is one dataframe in json format dict['servers'] = arr[5] #this is my second dataframe in json format client = boto3.client('stepfunctions') response = client.start_execution( stateMachineArn='arn:aws:states:us-west-2:##:stateMachine:MyStateMachineTest', name='testStateMachine', input= json.dumps(dict) ) print(response) Is there something I can do to pass in my dataframes to my step function? The dataframes contain sensitive customer data which I would rather not store in my S3. I realize I can store the files into S3 (directly from my front-end via pre-signed URLs) and then read the files from my step function but this is one of my least preferred approaches.
75,378,554
Passing in a dataframe to a stateMachine from Lambda
false
1
1
python,pandas,amazon-web-services,aws-lambda,aws-step-functions
152
75,378,081
Passing them as direct input via input= json.dumps(dict) isn't going to work, as you are finding. You are running up against the size limit of the request. You need to save the dataframes to a file, somewhere the step functions can access it, and then just pass the file paths as input to the step function. The way I would solve this is to write the data frames to files in the Lambda file system, with some random ID, perhaps the Lambda invocation ID, in the filename. Then have the Lambda function copy those files to an S3 bucket. Finally invoke the step function with the S3 paths as part of the input. Over on the Step Functions side, have your state machine expect S3 paths for the physicalServers and servers input values, and use those paths to download the files from S3 during state machine execution. Finally, I would configure an S3 lifecycle policy on the bucket, to remove any objects more than a few days old (or whatever time makes sense for your application) so that the bucket doesn't get large and run up your AWS bill. An alternative to using S3 would be to use an EFS volume mount in both this Lambda function, and in the Lambda function or (or EC2 or ECS) that your step function is executing. With EFS your code could write and read from it just like a local file system, which would eliminate the steps of copying to/from S3, but you would have to add some code at the end of your step function to clean up the files after you are done since EFS won't do that for you.
1
0.197375
2023-02-07 19:26:13
1
I am trying to insert data into my database using psycopg2 and I get this weird error. I tried some things but nothing works. This is my code: def insert_transaction(): global username now = datetime.now() date_checkout = datetime.today().strftime('%d-%m-%Y') time_checkout = now.strftime("%H:%M:%S") username = "Peter1" connection_string = "host='localhost' dbname='Los Pollos Hermanos' user='postgres' password='******'" conn = psycopg2.connect(connection_string) cursor = conn.cursor() try: query_check_1 = """(SELECT employeeid FROM employee WHERE username = %s);""" cursor.execute(query_check_1, (username,)) employeeid = cursor.fetchone()[0] conn.commit() except: print("Employee error") try: query_check_2 = """SELECT MAX(transactionnumber) FROM Transaction""" cursor.execute(query_check_2) transactionnumber = cursor.fetchone()[0] + 1 conn.commit() except: transactionnumber = 1 """"---------INSERT INTO TRANSACTION------------""" query_insert_transaction = """INSERT INTO transactie (transactionnumber, date, time, employeeemployeeid) VALUES (%s, %s, %s, %s);""" data = (transactionnumber, date_checkout, time_checkout, employeeid) cursor.execute(query_insert_transaction, data) conn.commit() conn.close() this is the error: ", line 140, in insert_transaction cursor.execute(query_insert_transaction, data) psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block
76,561,514
psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block, dont know how to fix it
false
1
2
python,sql,postgresql,psycopg2
859
75,380,280
Executing the conn.rollback() function after checking for errors and executing the code again should help!
0
0
2023-02-08 00:18:28
1
We are developing a prediction model using deepchem's GCNModel. Model learning and performance verification proceeded without problems, but it was confirmed that a lot of time was spent on prediction. We are trying to predict a total of 1 million data, and the parameters used are as follows. model = GCNModel(n_tasks=1, mode='regression', number_atom_features=32, learning_rate=0.0001, dropout=0.2, batch_size=32, device=device, model_dir=model_path) I changed the batch size to improve the performance, and it was confirmed that the time was faster when the value was decreased than when the value was increased. All models had the same GPU memory usage. From common sense I know, it is estimated that the larger the batch size, the faster it will be. But can you tell me why it works in reverse? We would be grateful if you could also let us know how we can further improve the prediction time.
75,381,683
In deep learning, can the prediction speed increase as the batch size decreases?
false
1
2
python,deep-learning,batchsize
204
75,381,096
There are two components regarding the speed: Your batch size and model size Your CPU/GPU power in spawning and processing batches And two of them need to be balanced. For example, if your model finishes prediction of this batch, but the next batch is not yet spawned, you will notice a drop in GPU utilization for a brief moment. Sadly there is no inner metrics that directly tell you this balance - try using time.time() to benchmark your model's prediction as well as the dataloader speed. However, I don't think that's worth the effort, so you can keep decreasing the batch size up to the point there is no improvement - that's where to stop.
0
0
2023-02-08 03:21:32
1
I have python script to copy data from excel to CSV file. I have created Execute Process Task package in SSIS and deployed to SSISDB. This works fine when i execute in SSIS and in SSISDB manually.However,if i schedule or execute through SQL server agent it fails. I am using proxy account to schedule package. Other "non python SSIS package" runs fine in sql server agent. Error - Execute PY Script:Error: In Executing C:\Program Files\Python311\python.exe" "\\org\data\project\test.py" at "\\org\data\project", The process exit code was "1" while the expected was "0". Python Script - print('Start CSV File Conversion') import pandas as pd from pandas import DataFrame, read_csv file = r'\\\org\data\project\test.xlsm' dframe = pd.read_excel(file, sheet_name='data') export_csv = dframe.to_csv( R'\\\org\data\project\test.csv', index=None, header=True, sep='~') print(dframe) print('...Completed') All Files are saved in \\org\data\project I am learning pyhton. Any inputs will be helpful. Thank you.
75,396,800
SSIS package fails in SQL server Agent
false
1
1
python,sql-server,ssis
113
75,381,830
that doesn't look like ssis related error but python error. Check your code, may be create VS project where you can test it to escape complexity of running through SSIS.
0
0
2023-02-08 05:53:21
1
I dont know why this error occurs. pd.read_excel('data/A.xlsx', usecols=["B", "C"]) Then I get this error: "Value must be either numerical or a string containing a wild card" So i change my code use nrows all data pd.read_excel('data/A.xlsx', usecols=["B","C"], nrows=172033) Then there is no error and a dataframe is created. my excel file has 172034 rows, 1st is column name.
75,764,831
python pandas read_excel error "Value must be either numerical or a string containing a wild card"
false
1
1
python,excel,pandas
3,562
75,382,340
If you deselect all your filters the read_excel function should work.
6
1
2023-02-08 07:02:00
1
I need one help regarding killing application in linux As manual process I can use command -- ps -ef | grep "app_name" | awk '{print $2}' It will give me jobids and then I will kill using command " kill -9 jobid". I want to have python script which can do this task. I have written code as import os os.system("ps -ef | grep app_name | awk '{print $2}'") this collects jobids. But it is in "int" type. so I am not able to kill the application. Can you please here? Thank you
75,385,024
Kill application in linux using python
false
1
2
python,linux
76
75,384,904
To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill. To get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In both cases, call the .readline() method, and convert the return value of that to an integer with int(...).
0
0
2023-02-08 11:08:54
2
We have a poetry project with a pyproject.toml file like this: [tool.poetry] name = "daisy" version = "0.0.2" description = "" authors = [""] [tool.poetry.dependencies] python = "^3.9" pandas = "^1.5.2" DateTime = "^4.9" names = "^0.3.0" uuid = "^1.30" pyyaml = "^6.0" psycopg2-binary = "^2.9.5" sqlalchemy = "^2.0.1" pytest = "^7.2.0" [tool.poetry.dev-dependencies] jupyterlab = "^3.5.2" line_profiler = "^4.0.2" matplotlib = "^3.6.2" seaborn = "^0.12.1" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" When I change the file to use Python 3.11 and run poetry update we get the following error: Current Python version (3.9.7) is not allowed by the project (^3.11). Please change python executable via the "env use" command. I only have one env: > poetry env list daisy-Z0c0FuMJ-py3.9 (Activated) Strangely this issue does not occur on my Macbook, only on our Linux machine.
75,394,642
Current Python version (3.9.7) is not allowed by the project (^3.11)
false
1
1
python,python-poetry
789
75,384,957
Poetry cannot update the Python version of an existing venv. Remove the existing one and run poetry install again.
1
0.197375
2023-02-08 11:13:14
1
When I try to read a xlsx file using pandas, I receive the error "numpy has no float attribute", but I'm not using numpy in my code, I get this error when using the code below info = pd.read_excel(path_info) The xlsx file I'm using has just some letters inside of it for test purpouses, there's no numbers or floats. What I want to know is how can I solve that bug or error. I tried to create different files, change my info type to specify a pd.dataframe too Python Version 3.11 Pandas Version 1.5.3
75,415,344
Numpy has no float attribute error when using Read_Excel
false
1
2
python,excel,pandas,numpy
783
75,386,792
Had the same problem. Fixed it by updating openpyxl to latest version.
0
0
2023-02-08 13:54:05
1
I have a dataframe 'qbPast' which contains nfl player data for a season. P Player Week Team Opp Opp Rank Points Def TD Def INT Def Yds/att Year 2 QB Kyler Murray 2 ARI MIN 14 38.10 1.8125 1.0000 6.9 2021 3 QB Lamar Jackson 2 BAL KC 6 37.26 1.6875 0.9375 7 2021 5 QB Tom Brady 2 TB ATL 28 30.64 1.9375 0.7500 6.8 2021 I am attempting to create a new rolling average based on the "Points" column for each individual player for each 3 week period, for the first two weeks it should just return the points for that week and after that it should return the average for the 3 week moving period e,g Player A scores 20,30,40,30,40 the average should return 20,30,30,33.3 etc. My attempt # qbPast['Avg'] = qbPast.groupby('Player')['Points'].rolling(3).mean().reset_index(drop=True) The problem is it is only returning the 3 week average for all players I need it to filter by player so that it returns the rolling average for each player, the other players should not affect the rolling average.
75,387,668
Rolling average Pandas for 3 week period for specific column values
false
1
3
python,pandas,dataframe
49
75,387,489
You have to change the .reset_index(drop=True) into .reset_index(0, drop=True) so it is not mixing the players indices together.
1
0.066568
2023-02-08 14:49:43
1
I can read an Excel file from pandas as usual: df = pd.read_excel(join("./data", file_name) , sheet_name="Sheet1") I got the following error: ValueError: Value must be either numerical or a string containing a wildcard What I'm doing wrong? I'm using: Pandas 1.5.3 + python 3.11.0 + xlrd 2.0.1
76,631,500
Unable to read an Excel file using Pandas
false
2
3
pandas,openpyxl,xlrd,python-3.11
3,917
75,387,600
For people like me who are wondering what sort and filter is, it is an option in your Excel viewer. If you are using Microsoft Excel, you can go to the tab "Home" and then to the right side of the tab, you can find Sort & Filter, from there select Clear.
0
0
2023-02-08 14:57:59
9
I can read an Excel file from pandas as usual: df = pd.read_excel(join("./data", file_name) , sheet_name="Sheet1") I got the following error: ValueError: Value must be either numerical or a string containing a wildcard What I'm doing wrong? I'm using: Pandas 1.5.3 + python 3.11.0 + xlrd 2.0.1
75,404,407
Unable to read an Excel file using Pandas
true
2
3
pandas,openpyxl,xlrd,python-3.11
3,917
75,387,600
I got the same issue and then realized that the sheet I was reading is in "filtering" mode. Once I deselect "sort&filter", the read_excel function works.
14
1.2
2023-02-08 14:57:59
9
I'm trying to show a list of elements from a data set in a tkinter window. I want to able to manipulate the elements, by highlighting, deleting etc. I have this code: from tkinter import * window = Tk() window.geometry("100x100") #data from API data_list = [ ["1", "Lorem"], ["2", "Lorem"], ["3", "Lorem"], ["4", "Lorem"] ] #create selectable rectangles from data_list with delete buttons rectangles = {} delete_buttons = {} def CreateRectangles(): i = 0 for data in data_list: rectangles[i] = Canvas(window, bg="#BFBFBF", height=15, width=80) rectangles[i].place(x=19, y=20.0 + (i * 19)) rectangles[i].create_text(5.0, 1.0, anchor="nw", text=str(f'#{data[0]}:{data[1]}')) delete_buttons[i] = Label(window, text="X ", bg="#D9D9D9") delete_buttons[i].place(x=6, y=20.0 + (i * 19)) i += 1 CreateRectangles() #highlight clicked rectangle def RectangleClick(e, arg): #reset how all rectangles look for i in rectangles: rectangles[i].config(bg="#BFBFBF") #highlight the one clicked rectangles[arg].config(bg="#999999") for key in rectangles: rectangles[key].bind("<ButtonPress-1>", lambda event, arg=key: RectangleClick(event, arg)) #delete button action def DeleteClick(e, arg): # delete all rectangles and buttons from window for rectangle in rectangles: rectangles[rectangle].place_forget() for delete in delete_buttons: delete_buttons[delete].destroy() # delete all rectangles and buttons from dictionary rectangles.clear() delete_buttons.clear() # delete the specific data from de data_list data_list.pop(arg) # re do everything but now the data list has one less item CreateRectangles() for num in delete_buttons: delete_buttons[num].bind("<ButtonPress-1>", lambda event, arg=num: DeleteClick(event, arg)) window.mainloop() It only works the first time. For example, if I delete an item, it doesn't do anything else. What's wrong?
75,387,728
Python dictionary, list and for-loop bug
true
1
1
python,function,dictionary,for-loop,tkinter
46
75,387,699
Move all the code that binds event handlers inside the CreateRectangles method. Since all the previous rectangles are destroyed, the event handlers need to be attached again.
1
1.2
2023-02-08 15:04:55
1
Brief explanation of my program (or what it's meant to do): I have created a simulation program that models amoeba populations in Pygame. The program uses two classes - Main and Amoeba. The Main class runs the simulation and displays the results on a Pygame window and a Matplotlib plot. The Amoeba class models the properties and behavior of each amoeba in the population, including its maturing speed, age, speed, and movement direction. The simulation runs in a loop until the "q" key is pressed or the simulation is stopped. The GUI is created using the Tkinter library, which allows the user to interact with the simulation by starting and stopping it. The simulation updates the amoeba population and displays their movements on the Pygame window and updates the Matplotlib plot every 100 steps. The plot displays the average maturing speed and the reproduction rate of the amoeba population. My issue is that whilst the stop button in the GUI works fine, the start button does not. It registers being pressed and actually outputs the variable it is meant to change to the terminal (the running variable which you can see more of in the code). So the issue is not in the button itself, but rather the way in which the program is restarted. I have tried to do this via if statements and run flags but it has failed. There are no error messages, the program just remains paused. Here is the code to run the simulation from my Main.py file (other initialisation code before this): def run_simulation(): global step_counter global num_collisions global run_flag while run_flag: if globalvars.running: #main code here else: run_flag = False gc.root = tk.Tk() app = gc.GUI(gc.root) app.root.after(100, run_simulation) gc.root.mainloop() This is the code from my GUI class: import tkinter as tk import globalvars class GUI: def __init__(self,root): self.root = root self.root.title("Graphical User Interface") self.root.geometry("200x200") self.startbutton = tk.Button(root, bg="green", text="Start", command=self.start) self.startbutton.pack() self.stopbutton = tk.Button(root, bg="red", text="Stop", command=self.stop) self.stopbutton.pack() def start(self): globalvars.running = True print(globalvars.running) def stop(self): globalvars.running = False print(globalvars.running) Also in a globalvars.py file I store global variables which includes the running var. Would you mind explaining the issue please?
75,394,947
Tkinter GUI start button registering input but not restarting program
true
1
1
python,tkinter
48
75,388,233
There's a logic error in the application: when stop() is called it sets globalvars.running = False. This means, in run_simulation() the else branch is executed which turns run_flag = False. This variable is never reset to True! So the while loop is left and never entered again and #main code here not executed. In addition to setting run_flag = True, function run_simulation() needs to be called from start(). Turned my earlier comment into an answer so it can be accepted and the question resolved.
1
1.2
2023-02-08 15:43:34
1

SO dataset of pythontag data Question filters:

  • images
  • links
  • code blocks
  • Q_Score > 0
  • Answer_count > 0
  • CreationDate > 2023-02-01

Answers filters:

  • images
  • links
  • code blocks

Scores are tanh applied to scaled with AbsMaxScaler to IQR range of Original SO Answers' scores

Downloads last month
11
Edit dataset card