diff --git "a/train.json" "b/train.json" new file mode 100644--- /dev/null +++ "b/train.json" @@ -0,0 +1 @@ +[{"Q_Id":75304491,"CreationDate":"2023-02-01 00:25:39","Q_Score":3,"ViewCount":130,"Question":"In this piece of code, I could write a simple except clause without writing Exception in front of it. I mean the last line could be like this :\nexcept:\n print('Hit an exception other than KeyError or NameError!')\n\nWhat is the point of writing Exception in front of an except clause ?\ntry:\n discounted_price(instrument, discount)\nexcept KeyError:\n print(\"There is a keyerror in your code\")\nexcept NameError:\n print('There is a TypeError in your code')\nexcept Exception:\n print('an exception occured')\n\nI tried writing an except clause without Exception keyword and it worked the same.\nThank you guys for your answers . I know the point of catching specific errors. If I want to ask more clearly , what is the difference between two clauses :\nexcept Exception:\n print('an exception occured')\n\nexcept :\n print('an exception occured')","Title":"Difference between bare except and specifying a specific exception","Tags":"python,exception,try-catch","AnswerCount":5,"A_Id":75304591,"Answer":"The purpose of writing \"Exception\" in front of an except clause is to catch all possible exceptions that can occur in the code. By specifying \"Exception\", you are telling the interpreter to handle any type of exception that might be raised. The more specific the exception specified in the except clause, the more targeted the handling of the exception can be. For example, if you only want to handle \"KeyError\" exceptions, you can specify that explicitly in the except clause, as in the first example.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75305535,"CreationDate":"2023-02-01 04:05:23","Q_Score":0,"ViewCount":20,"Question":"`I'm using in edge impulse FOMO\nI know that object detection fps is 1\/inference time\nmy model's time per inference is 2ms\nso object detection is 500fps\nbut my model run vscode fps is 9.5\nwhat is the difference between object detection fps and video fps ?","Title":"what is the difference between object detection fps and video fps?","Tags":"python,deep-learning,frame-rate","AnswerCount":1,"A_Id":75305707,"Answer":"If I understand correctly, your object detection fps indicates the number of frames (or images) that your model, given your system, can process in a second.\nA video fps in your input source's frames per second. For example, if your video has an fps (also referred to as framerate) of 100, then your model would be able to detect objects in all of the frames in 100ms (or 1\/10 of a second).\nIn your case, your video input source seems to have 9.5 frames in a second. 
This means that your model, given your system, will process 1-second worth of a video in about ~20ms.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75305542,"CreationDate":"2023-02-01 04:06:10","Q_Score":0,"ViewCount":36,"Question":"Recently I have install python 3.9.9 in my windows 10.it want show the path\nI have typed cmd promt \"Wchich Python\" it want show","Title":"How to identify python in windows 10","Tags":"python","AnswerCount":4,"A_Id":75305674,"Answer":"In Command Prompt, either which python or where python will print the path to your python executable.\nIf which python or where python does not show the path to your Python executable it is likely that it is not in your PATH variable.\nTo add your executable to the PATH variable, search for Environment Variables in the Settings application. This will open the Advanced tab in System Properties. Click the Environment Variables button towards the bottom. You can then edit the PATH variable to include the path to your Python executable. Once you have applied the changes and restarted Command Prompt you can then run which python or where python to confirm your changes have taken effect.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75305542,"CreationDate":"2023-02-01 04:06:10","Q_Score":0,"ViewCount":36,"Question":"Recently I have install python 3.9.9 in my windows 10.it want show the path\nI have typed cmd promt \"Wchich Python\" it want show","Title":"How to identify python in windows 10","Tags":"python","AnswerCount":4,"A_Id":75305623,"Answer":"Just type python or python3 in cmd","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75305542,"CreationDate":"2023-02-01 04:06:10","Q_Score":0,"ViewCount":36,"Question":"Recently I have install python 3.9.9 in my windows 10.it want show the path\nI have typed cmd promt \"Wchich Python\" it want show","Title":"How to identify python in windows 10","Tags":"python","AnswerCount":4,"A_Id":75305641,"Answer":"You can use in your cmd\n\nwhere python\n\nIt will show you the path of all installed python in your device","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75308086,"CreationDate":"2023-02-01 09:30:54","Q_Score":0,"ViewCount":30,"Question":"How does one create a python file from pycharm terminal. In VS Code they use \"code {name}\" so I want something similar to that but in pycharm.\nI am getting an error \"zsh:command not found:code\"","Title":"creating python files from pycharm terminal","Tags":"python,terminal,pycharm","AnswerCount":2,"A_Id":75308329,"Answer":"Settings->Keymap\nSearch \"new\"\nUnder \"Python Community Edition\" there will be an option for \"Python File\"\nAdd a new shortcut to this option (SHIFT+N is usually unassigned)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75309424,"CreationDate":"2023-02-01 11:20:50","Q_Score":2,"ViewCount":61,"Question":"I'm trying to get data from csv and output it to the console (ie, command line).\nI have 30 columns, but I can only output 5 to 6 columns.\ndf = pd.read_csv(csv_raw)\nprint(df.head())\n date level mark source \n0 2022-01-01 A 1 facebook\n1 2022-01-01 B 2 facebook\n2 2022-01-01 C 12 facebook\n3 2022-01-01 D 53 facebook\n4 2022-01-01 T 22 facebook\n\nIf I display all 30 columns it turns out like this:\nprint(df.head(30))\n date ... source\n0 2022-01-01 ... facebook\n1 2022-01-01 ... facebook\n2 2022-01-01 ... facebook\n3 2022-01-01 ... facebook\n4 2022-01-01 ... 
facebook\n5 2022-01-01 ... facebook\n\nwhen i try pd.options.display.max_columns = 50\nit returns me like that:\n date level clicks \\\n0 2022-01-01 A 1 \n1 2022-01-01 B 2 \n2 2022-01-01 C 12 \n3 2022-01-01 D 53 \n4 2022-01-01 T 22 \n5 2022-01-01 Free trial, upgrade to basic at https:\/\/www.wi... 1 \n\n source \n0 facebook \n1 facebook \n2 facebook \n3 facebook \n4 facebook \n5 facebook \n\nIs it possible somehow to display more than 5 columns as in the first case?","Title":"How to print up to 40 rows in DataFrame","Tags":"python,pandas,dataframe","AnswerCount":1,"A_Id":75310017,"Answer":"There are 3 dataframe settings to be set to display the desired output\n(1) Set the overall width (number of characters)\npd.options.display.width = 500\npd.options.display.width = None #for unlimited\n(2) Set the maximum columns count (number of columns)\npd.options.display.max_columns = 50\npd.options.display.max_columns = None #for unlimited\n(3) Set the maximum width of each column (number of characters)\npd.options.display.max_colwidth = 30\npd.options.display.max_colwidth = None #for unlimited\nThere is a row (index 5) having the value Free trial, upgrade to basic at https:\/\/www.wi... which is making a mess of the columns. To delete this row, use:\ndf.drop(5, inplace=True)","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75309537,"CreationDate":"2023-02-01 11:30:13","Q_Score":1,"ViewCount":36,"Question":"Hej,\nI have the following code snippet and I don't understand the output:\na = \"foo\"\nb = \"foo\"\nc = \"bar\"\nfoo_list = [\"foo\", \"bar\"]\n\n\nprint(a == b in foo_list) # True\nprint(a == c in foo_list) # False\n\n---\nOutput: \nTrue\nFalse\n\nThe first output is True.\nI dont understand it because either a == b is executed first which results in True and then the in operation should return False as True is not in foo_list.\nThe other way around, if b in foo_list is executed first, it will return True but then a == True should return False.\nI tried setting brackets around either of the two operations, but both times I get False as output:\nprint((a == b) in foo_list) # False\nprint(a == (b in foo_list)) # False\n---\nOutput: \nFalse\nFalse\n\nCan somebody help me out?\nCheers!","Title":"Explaining the Output of Comparison Expressions Involving Strings and Lists in Python","Tags":"python,order-of-execution,in-operator","AnswerCount":1,"A_Id":75309861,"Answer":"Ah, thanks @Ture P\u00e5lsson.\nThe answer is chaining comparisons.\na == b in foo_list is equivalent to a == b and b in foo_list where a==b is True and b in foo_list is True.\nIf you set brackets, there will be no chaining.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75309871,"CreationDate":"2023-02-01 12:00:25","Q_Score":1,"ViewCount":67,"Question":"I have stored a class object as a pickle in an SQLite DB.\nBelow is code for the file pickle.py\nsqlite3.register_converter(\"pickle\", pickle.loads)\nsqlite3.register_adapter(list, pickle.dumps)\nsqlite3.register_adapter(set, pickle.dumps)\n\nclass F:\n\n a = None\n b = None\n\n def __init__(self) -> None:\n pass\ndf = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})\nf = F()\nf.a = df\nf.b = df.columns\n\ndata = pickle.dumps(f, protocol=pickle.HIGHEST_PROTOCOL)\n\nsqliteConnection = sqlite3.connect('SQLite_Python.db')\ncursor = sqliteConnection.cursor()\nprint(\"Successfully Connected to SQLite\")\nDATA = sqlite3.Binary(data)\nsqlite_insert_query = f\"\"\"INSERT INTO PICKLES1 (INTEGRATION_NAME, DATA) VALUES 
('James',?)\"\"\"\n\nresp = cursor.execute(sqlite_insert_query,(DATA,))\nsqliteConnection.commit()\n\nAfter that, I am trying to fetch the pickle from the DB. The pickle is stored in a pickle datatype column which I had registered earlier on SQLite in file retrieve_pickle.py.\ncur = conn.cursor()\n cur.execute(\"SELECT DATA FROM PICKLES1 where INTEGRATION_NAME='James'\")\n df = None\n rows = cur.fetchall()\n for r in rows[0]:\n print(type(r)) #prints \n df = pickle.loads(r)\n\nBut it gives me an error\n File \"\/Users\/ETC\/Work\/pickle_work\/picklertry.py\", line 34, in select_all_tasks\n df = pickle.loads(r)\nAttributeError: Can't get attribute 'F' on \n\nI was trying to store a class object in a pickle column in sqlite after registering pickle.loads as a pickle datatype. I kept the object successfully and was able to retrieve it from DB but when I try to load it back so that I can access the thing and attributes it gives me an error.","Title":"Error when loading pickle object from SQLite","Tags":"python,pickle","AnswerCount":2,"A_Id":75311151,"Answer":"Pickling requires you to import the actual module which you have in the pickle. I had to import F into the 2nd file where I was loading the pickle.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75310545,"CreationDate":"2023-02-01 12:59:45","Q_Score":0,"ViewCount":32,"Question":"I have cloned a GitHub repository that contains the code for a Python package to my local computer (it's actually on a high performance cluster). I have also installed the package with pip install 'package_name'. If I now run a script that uses the package, it of course uses the installed package and not the cloned repository, so if I want to make changes to the code, I cannot run those. Is there a way to do this, potentially with pip install -e (but I read that was deprecated) or a fork? How could I then get novel updates in the package to my local version, as it is frequently updated?","Title":"How can I edit a GitHub repository (for a Python package) locally and run the package with my changes?","Tags":"python,github,pip","AnswerCount":2,"A_Id":75310593,"Answer":"If you run an IDE like PyCharm, you can mark a folder in your project as Sources Root. It will then import any packages from that folder instead of the standard environment packages.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75310545,"CreationDate":"2023-02-01 12:59:45","Q_Score":0,"ViewCount":32,"Question":"I have cloned a GitHub repository that contains the code for a Python package to my local computer (it's actually on a high performance cluster). I have also installed the package with pip install 'package_name'. If I now run a script that uses the package, it of course uses the installed package and not the cloned repository, so if I want to make changes to the code, I cannot run those. Is there a way to do this, potentially with pip install -e (but I read that was deprecated) or a fork? How could I then get novel updates in the package to my local version, as it is frequently updated?","Title":"How can I edit a GitHub repository (for a Python package) locally and run the package with my changes?","Tags":"python,github,pip","AnswerCount":2,"A_Id":75324284,"Answer":"In the end I indeed did use pip install -e, and it is working for now. 
I will figure it out once the owner of the package releases another update!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75313450,"CreationDate":"2023-02-01 16:43:56","Q_Score":1,"ViewCount":46,"Question":"why does 'from sklearn.impute import SimpleImputer as si' works but '\nimport sklearn.impute.SimpleImputer as si'\n\ndo not work\nI want to know, why this won't work. I am new to python.","Title":"import vs from import in sklearn","Tags":"python,scikit-learn","AnswerCount":2,"A_Id":75313500,"Answer":"You can only use import with modules.\nWith from ... import ... you can import variables as well as submodules, functions, classes, and everything else.\nAs SimpleImputer is not a module, only the second option (from ... import) is available.\n\nWritten a bit differently: import only works in general with files, while from ... import works with names declared in the script.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75313450,"CreationDate":"2023-02-01 16:43:56","Q_Score":1,"ViewCount":46,"Question":"why does 'from sklearn.impute import SimpleImputer as si' works but '\nimport sklearn.impute.SimpleImputer as si'\n\ndo not work\nI want to know, why this won't work. I am new to python.","Title":"import vs from import in sklearn","Tags":"python,scikit-learn","AnswerCount":2,"A_Id":75313487,"Answer":"The reason for this is the way the Python import statement works. The first import statement imports the SimpleImputer class from the sklearn.impute module and then names it si. The second import statement tries to import a module named SimpleImputer from a module named sklearn.impute. This does not work because in Python, the import statement only allows you to import modules, not classes or other attributes.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75313745,"CreationDate":"2023-02-01 17:05:46","Q_Score":0,"ViewCount":24,"Question":"I'd like to display an embed with a picture using HTML, but I couldnt find anything online using python to do it. Is that even possible? if it is I would love an explanation.\nI tried searching, couldnt find anything about it.","Title":"How can I display a HTML page in discord using a bot with python?","Tags":"python,html,discord","AnswerCount":1,"A_Id":75576248,"Answer":"Makes sense, sounded like a ChatGPT generated answer... AFAIK you can't get HTML embeds on Discord (which is a pain in the arse as it could make the experience much more enjoyable for bot users). One way you could tackle this though is by generating a picture of the content you want to send, storing it on a server you have access to, and having the bot share said picture. Lots of cons here, but that's the best I figured out. Good luck.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75314041,"CreationDate":"2023-02-01 17:35:07","Q_Score":1,"ViewCount":77,"Question":"There are several Python packages that implement the datetime.tzinfo interface, including pytz and dateutil. 
If someone hands me a timezone object and wants me to apply it to a datetime, the procedure is different depending on what kind of timezone object it is:\ndef apply_tz_to_datetime(dt: datetime.datetime, tz: datetime.tzinfo, ambiguous, nonexistent):\n if isinstance(tz, dateutil.tz._common._tzinfo):\n # do dt.replace(tz, fold=...)\n elif isinstance(tz, pytz.tzinfo.BaseTzInfo):\n # do tz.localize(dt, is_dst=...)\n # other cases here\n\n(The dateutil.tz case is a lot more complicated than I've shown, because there are a lot of cases to consider for non-existent or ambiguous datetimes, but the gist is always to either call dt.replace(tz, fold=...) or raise an exception.)\nChecking dateutil.tz._common._tzinfo seems like a no-no, though, is there a better way?","Title":"Check whether timezone is dateutil.tz instance","Tags":"python,timezone,pytz,python-dateutil","AnswerCount":2,"A_Id":75339617,"Answer":"It appears from the ratio of comments to answers (currently 9\/0 = \u221e), there is no available answer to the surface-level question (how to determine whether something is a dateutil.tz-style timezone object). I'll open a feature request ticket with the maintainers of the library.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75315117,"CreationDate":"2023-02-01 19:15:21","Q_Score":9,"ViewCount":4100,"Question":"I am trying to store data retrieved from a website into MySQL database via a pandas data frame. However, when I make the function call df.to_sql(), the compiler give me an error message saying: AttributeError: 'Connection' object has no attribute 'connect'. I tested it couple times and I am sure that there is neither connection issue nor table existence issue involved. Is there anything wrong with the code itself? The code I am using is the following:\n from sqlalchemy import create_engine, text\n import pandas as pd\n import mysql.connector\n\n \n config = configparser.ConfigParser()\n config.read('db_init.INI')\n password = config.get(\"section_a\", \"Password\")\n host = config.get(\"section_a\", \"Port\")\n database = config.get(\"section_a\", \"Database\")\n\n engine = create_engine('mysql+mysqlconnector:\/\/root:{0}@{1}\/{2}'.\n format(password, host, database),\n pool_recycle=1, pool_timeout=57600, future=True)\n \n conn = engine.connect()\n df.to_sql(\"tableName\", conn, if_exists='append', index = False)\n\nThe full stack trace looks like this:\nTraceback (most recent call last):\n File \"\/Users\/chent\/Desktop\/PFSDataParser\/src\/FetchPFS.py\", line 304, in \n main()\n File \"\/Users\/chent\/Desktop\/PFSDataParser\/src\/FetchPFS.py\", line 287, in main\n insert_to_db(experimentDataSet, expName)\n File \"\/Users\/chent\/Desktop\/PFSDataParser\/src\/FetchPFS.py\", line 89, in insert_to_db\n df.to_sql(tableName, conn, if_exists='append', index = False)\n File \"\/Users\/chent\/opt\/anaconda3\/lib\/python3.9\/site-packages\/pandas\/core\/generic.py\", line 2951, in to_sql\n return sql.to_sql(\n File \"\/Users\/chent\/opt\/anaconda3\/lib\/python3.9\/site-packages\/pandas\/io\/sql.py\", line 698, in to_sql\n return pandas_sql.to_sql(\n File \"\/Users\/chent\/opt\/anaconda3\/lib\/python3.9\/site-packages\/pandas\/io\/sql.py\", line 1754, in to_sql\n self.check_case_sensitive(name=name, schema=schema)\n File \"\/Users\/chent\/opt\/anaconda3\/lib\/python3.9\/site-packages\/pandas\/io\/sql.py\", line 1647, in check_case_sensitive\n with self.connectable.connect() as conn:\n\nAttributeError: 'Connection' object has no attribute 'connect'\n\nThe version of 
pandas I am using is 1.4.4, sqlalchemy is 2.0\nI tried to make a several execution of sql query, for example, CREATE TABLE xxx IF NOT EXISTS or SELECT * FROM, all of which have given me the result I wish to see.","Title":"AttributeError: 'Connection' object has no attribute 'connect' when use df.to_sql()","Tags":"python,pandas,sqlalchemy,mysql-connector","AnswerCount":2,"A_Id":76357663,"Answer":"I have faced the same problem and it got solved (As @nacho suggested above in a comment to the question) when I replace connection object with sqlalchemy engine in DataFrame.to_sql() arguments.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75315891,"CreationDate":"2023-02-01 20:32:37","Q_Score":1,"ViewCount":91,"Question":"I need to change the value of two random variables out of four to '\u2014'. How do I do it with maximum effectiveness and readability?\nCode below is crap just for reference.\nfrom random import choice\na = 10\nb = 18\nc = 15\nd = 92\n\nchoice(a, b, c, d) = '\u2014'\nchoice(a, b, c, d) = '\u2014'\n\nprint(a, b, c, d)\n\n>>> 12 \u2014 \u2014 92\n>>> \u2014 19 \u2014 92\n>>> 10 18 \u2014 \u2014\n\nI've tried choice(a, b, c, d) = '\u2014' but ofc it didn't work. There's probably a solution using list functions and methods but it's complicated and almost impossible to read, so I'm searching for an easier solution.","Title":"How do I change 2 random variables out of 4?","Tags":"python,variables,random,replace","AnswerCount":6,"A_Id":75315942,"Answer":"Variable names are not available when you run your code, so you cannot change a \"random variable\". Instead, I recommend that you use a list or a dictionary. Then you can choose a random element from the list or a random key from the dictionary.","Users Score":1,"is_accepted":false,"Score":0.0333209931,"Available Count":1},{"Q_Id":75317209,"CreationDate":"2023-02-01 23:21:25","Q_Score":1,"ViewCount":141,"Question":"I'm running into an issue when trying to have a Python script running on an EC2 instance assume a role to perform S3 tasks. Here's what I have done.\n\nCreated a IAM role with AmazonS3FullAccess permissions and got the following ARN:\n\narn:aws:iam:::role\/\nThe trust policy is set so the principal is a the EC2 service. I interpret this as allowing any EC2 instance within the account being allowed to assume the role.\n\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"ec2.amazonaws.com\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n}\n\n\n\nI launched an EC2 instance and attached the above IAM role.\n\nI attempt to call assume_role() using Boto3\n\n\n\nsession = boto3.Session()\nsts = session.client(\"sts\")\nresponse = sts.assume_role(\n RoleArn=\"arn:aws:iam:::role\/\",\n RoleSessionName=\"role_session_name\"\n)\n\n\nBut it throws the following error:\n\nbotocore.exceptions.ClientError: An error occurred (AccessDenied) when\ncalling the AssumeRole operation: User:\narn:aws:sts:::assumed-role\/\/i-\nis not authorized to perform: sts:AssumeRole on resource:\narn:aws:iam:::role\/\n\nAll other StackOverflow questions about this talk about the Role's trust policy but mine is set to allow EC2. 
So either I'm misinterpreting what the policy should be or there is some other error I can't figure out.","Title":"AccessDenied when calling Boto3 assume_role from EC2 even with service principal","Tags":"python,amazon-web-services,amazon-ec2,boto3","AnswerCount":1,"A_Id":75317468,"Answer":"You do not have to explicitly call sts.assume_role. If the role is attached to the EC2 instance, boto3 will use it in the background seamlessly. You use boto3 as you would normally do, and it will take care of using the IAM role for you. No action required from you.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75317498,"CreationDate":"2023-02-02 00:12:49","Q_Score":1,"ViewCount":156,"Question":"I'm building a windows service with Python 3.6 in an anaconda virtual environment. I make a post request using python requests: requests.post(url, files=files, data=data, headers=headers)\nAfter creating the service, on my windows machine (the one that has the source code that created the service) this works right off the bat. When I install this service on another windows machine, I keep getting SSL: CERTIFICATE_VERIFY_FAILED. I installed it on a third windows machine and that works fine (but isn't the machine we need it to work on sadly).\nThings I've tried:\n\nInstalled python-certifi-win32 with conda in my virtual environment before creating the service.\nSpecified a path to a .pem file with the chain of certificates for the url and added it with the verify parameter. So my request is as such: requests.post(url, files=files, data=data, headers=headers, verify='path\\to\\pemfile'). This works on my machine but not on the other one.\n\nI printed out requests.certs.where() on both computers and they both say C:\\Windows\\TEMP\\_MEXXXX\\certifi\\cacert.pem.\nHow can I get my service to run the same on all computers?\nUPDATE: Reproducible example:\n# debugFile.py\nimport servicemanager\nimport socket\nimport win32event\nimport win32service\nimport win32serviceutil\nimport traceback\nimport sys, getopt\nimport requests\n\nclass SCPWorker:\n def __init__(self):\n self.running = True\n\n def test_function(self):\n data = {}\n token = 'auth token for url'\n response = requests.post(custom_url, data=data, headers={'Authorization': \"Token \" + token})\n \n\n\nclass StoreScp(win32serviceutil.ServiceFramework):\n _svc_name_ = \"Service\"\n _svc_display_name_ = \"Debug Service\"\n _svc_description_ = \"description\"\n\n \n def __init__(self, args):\n self.worker = SCPWorker()\n win32serviceutil.ServiceFramework.__init__(self, args)\n self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)\n socket.setdefaulttimeout(60)\n\n def SvcStop(self):\n try:\n self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)\n win32event.SetEvent(self.hWaitStop)\n self.worker.stop()\n self.running = False\n except:\n servicemanager.LogErrorMsg(traceback.format_exc())\n\n def SvcDoRun(self):\n try:\n self.worker.test_function()\n \n while rc != win32event.WAIT_OBJECT_0 and rc != win32event.WAIT_FAILED and rc != win32event.WAIT_TIMEOUT and rc != win32event.WAIT_ABANDONED:\n rc = win32event.WaitForSingleObject(self.hWaitStop, 5000)\n\n if rc == win32event.WAIT_OBJECT_0:\n servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,\n servicemanager.PYS_SERVICE_STARTED,\n ('Service stopped', ''))\n else:\n servicemanager.LogMsg(servicemanager.EVENTLOG_ERROR_TYPE,\n servicemanager.PYS_SERVICE_STOPPED,\n ('Service quit unexpectedly with status %d' % rc, ''))\n except:\n 
servicemanager.LogErrorMsg(traceback.format_exc())\n\n\nif __name__ == '__main__':\n if len(sys.argv) == 1:\n servicemanager.Initialize()\n servicemanager.PrepareToHostSingle(StoreScp)\n servicemanager.StartServiceCtrlDispatcher()\n else:\n win32serviceutil.HandleCommandLine(StoreScp)\n\nAnd then run pyinstaller -F --hidden-import=win32timezone DebugFile.py to create the exe. And then install the exe on a machine.","Title":"Python requests causes SSL Verification error on one Windows computer but not another","Tags":"python-3.x,windows,python-requests,anaconda,ssl-certificate","AnswerCount":1,"A_Id":75477824,"Answer":"I never figured out why it didn't work on the other computer but I did manage to make a workaround work. First I ensured that my pyinstaller was at least 4.10 and then I installed pip-system-certs. Finally I added import pip_system_certs.wrapt_requests at the top of my python file. This library meant I had to install everything using pip and not conda.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75318885,"CreationDate":"2023-02-02 05:07:55","Q_Score":0,"ViewCount":27,"Question":"Ok so to preface this, I am very new to jupyter notebook and anaconda. Anyways I need to download opencv to use in my notebook but every time I download I keep getting a NameError saying that \u2018cv2\u2019 is not defined.\nI have uninstalled and installed opencv many times and in many different ways and I keep getting the same error. I saw on another post that open cv is not in my python path or something like that\u2026\nHow do I fix this issue and put open cv in the path? (I use Mac btw) Please help :( Thank you!","Title":"Anaconda Jupyter Notebook Opencv not working","Tags":"opencv,anaconda,jupyter,nameerror,pythonpath","AnswerCount":1,"A_Id":75318938,"Answer":"Try the following:\n\nInstall OpenCV using Anaconda Navigator or via terminal by running:\nconda install -c conda-forge opencv\nNow you should check if its installed by running this in terminal: conda list\nImport OpenCV in Jupyter Notebook: In your Jupyter Notebook, run import cv2 and see if it works.\nIf the above steps are not working, you should add OpenCV to your Python PATH by writing the following code to your Jupyter NB:\nimport sys\nsys.path.append('\/anaconda3\/lib\/python3.7\/site-packages')\n\nThis should work.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75319737,"CreationDate":"2023-02-02 07:09:17","Q_Score":0,"ViewCount":22,"Question":"So far, I am using detect.py with appropiate arguments for the object detection tasks, using a custom trained model.\nHow can I call the detect method with the parameters(weights, source, conf, and img_size) from a python program, instead of using CLI script?\nI am unable to do so.","Title":"How to call yolov7 detect method from a python program","Tags":"python,object-detection,yolo,yolov5,yolov7","AnswerCount":1,"A_Id":75324664,"Answer":"you can create a main.py file where you call all these methods from.\nPlease make sure you import these methods at the top of main.py, e.g. 
from detect import detect (or whatever you want to call from this file).\nHard to give more precise advice without more input from you.\nAnd then you just run your main file.\nAlternatively maybe consider using a jupyter notebook - not the 'nicest' way, but it makes everything more convenient for testing etc.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75320233,"CreationDate":"2023-02-02 08:10:34","Q_Score":3,"ViewCount":1076,"Question":"Is there a way to save Polars DataFrame into a database, MS SQL for example?\nConnectorX library doesn\u2019t seem to have that option.","Title":"Polars DataFrame save to sql","Tags":"python-polars,rust-polars","AnswerCount":2,"A_Id":76234129,"Answer":"Polars exposes the write_database method on the DataFrame class.","Users Score":3,"is_accepted":false,"Score":0.2913126125,"Available Count":2},{"Q_Id":75320233,"CreationDate":"2023-02-02 08:10:34","Q_Score":3,"ViewCount":1076,"Question":"Is there a way to save Polars DataFrame into a database, MS SQL for example?\nConnectorX library doesn\u2019t seem to have that option.","Title":"Polars DataFrame save to sql","Tags":"python-polars,rust-polars","AnswerCount":2,"A_Id":75396733,"Answer":"Polars doesn't support direct writing to a database. You can proceed in two ways:\n\nExport the DataFrame in an intermediate format (such as .csv using .write_csv()), then import it into the database.\nProcess it in memory: you can convert the DataFrame into a simpler data structure using .to_dicts(). The result will be a list of dictionaries, each of them containing a row in key\/value format. At this point it is easy to insert them into a database using SqlAlchemy or any specific library for your database of choice.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75320243,"CreationDate":"2023-02-02 08:11:22","Q_Score":2,"ViewCount":631,"Question":"All of the sudden, my terminal stopped recognizing the 'conda'. Also the VS Code stopped seeing my environments.\nAll the folders, with my precious environments are there (\/opt\/anaconda3), but when I type conda I get:\nconda \nzsh: command not found: conda\n\nI tried install conda again (from .pkg) but it fails at the end of installation (no log provided).\nHow can I clean it without losing my envs?\nI use Apple M1 MacBookPro with Monterey.","Title":"conda disappeared, command not found - corrupted .zshrc","Tags":"python,macos,conda","AnswerCount":2,"A_Id":75333369,"Answer":"For some reason my .zshrc file was corrupted after some operations.\nThis prevented the terminal from calling conda init and, in general, from understanding the 'conda' command.\nWhat is more - this prevented installing any conda, miniconda or miniforge. Both from .pkg and .sh - annoyingly - without any log or information - just crash and goodbye.\nI cleared both .zshrc and .bash_profile and then it helped - I managed to install miniforge and have my 'conda' accessible from the terminal.\nUnfortunately, in the process I removed all my previous 'envs'.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75320243,"CreationDate":"2023-02-02 08:11:22","Q_Score":2,"ViewCount":631,"Question":"All of the sudden, my terminal stopped recognizing the 'conda'. 
Also the VS Code stopped seeing my environments.\nAll the folders, with my precious environments are there (\/opt\/anaconda3), but when I type conda I get:\nconda \nzsh: command not found: conda\n\nI tried install conda again (from .pkg) but it fails at the end of installation (no log provided).\nHow can I clean it without losing my envs?\nI use Apple M1 MacBookPro with Monterey.","Title":"conda disappeared, command not found - corrupted .zshrc","Tags":"python,macos,conda","AnswerCount":2,"A_Id":75320362,"Answer":"To recover conda if it has disappeared and you're getting a \"command not found\" error, follow these steps:\n\nCheck if conda is installed on your system by running the command:\nwhich conda\n\nIf the above command doesn't return anything, you may need to add the path to your conda installation to your PATH environment variable. To find the path, run the following command:\nfind \/ -name conda 2>\/dev\/null\n\nAdd the path to your .bashrc or .bash_profile file:\nexport PATH=\"\/path\/to\/conda\/bin:$PATH\"\n\nRestart your terminal or run the following command to reload your environment variables:\nsource ~\/.bashrc\n\nTry running conda again to see if it's working.\n\n\nIf conda is still not working, it may have been uninstalled or moved. In that case, you can reinstall conda from the Anaconda website or from the Miniconda website.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75321910,"CreationDate":"2023-02-02 10:39:04","Q_Score":1,"ViewCount":47,"Question":"On SikulixIDE, the library webbrowser always open the default browser, even when i use the get method, i tried my code on regular python, it does work. Anyone know why it is reacting like that ?\nwebbrowser.get('C:\/Program Files\/Google\/Chrome\/Application\/chrome.exe %s').open(myurl)","Title":"webbrowser library is not working as intended on SikulixIDE","Tags":"python,jython,sikuli,sikuli-ide,sikuli-x","AnswerCount":1,"A_Id":75433915,"Answer":"Fixed by automating it using a python file and running it through cmd with base python.exe.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75322105,"CreationDate":"2023-02-02 10:53:54","Q_Score":1,"ViewCount":43,"Question":"In Numpy, Transposing of a column vector makes the array an embedded array.\nFor example, transposing\n[[1.],[2.],[3.]] gives [[1., 2., 3.]] and the dimension of the outermost array is 1. And this produces many errors in my code. Is there a way to produce [1., 2., 3.] directly?","Title":"Python NumPy, remove unnecessary brackets","Tags":"python,numpy","AnswerCount":2,"A_Id":75322147,"Answer":"Try .flatten(), .ravel(), .reshape(-1), .squeeze().","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75322177,"CreationDate":"2023-02-02 11:00:08","Q_Score":0,"ViewCount":14,"Question":"While installing flair using pip install flair in python 3.10 virtual environment on mac-os Ventura, I get the following error:\nERROR: Failed building wheel for sentencepiece\nSeperately installing sentencepeice using pip install sentenpeice did not work.\nUpgrading pip did not work.","Title":"ERROR: Failed building wheel for sentencepiece while installing flair on python 3.10","Tags":"python,python-3.x,flair","AnswerCount":1,"A_Id":75806128,"Answer":"Try downgrading Python.\nI was having this same issue, also with an intel mac, every time I tried to use the transformers library, went through a lot of possible solutions without success, even with multiple ChatGPT suggestions. 
I uninstalled Python 3.11 and went back to version 3.9.13 and the issue was gone! It seems there's some issue with wheels for the latest Python versions.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75322300,"CreationDate":"2023-02-02 11:10:50","Q_Score":0,"ViewCount":52,"Question":"I am a student and my profesor needs me to install Django on PyCharm.\nI made a big folder called PyCharmProjects and it includes like everything I have done in Python.\nThe problem is that I made a new folder inside this PyCharmProjects called Elementar, and I need to have the Django folders in there but it's not downloading.\nI type in the PyCharm terminal django-admin manage.py startproject taskmanager1 (this is how my profesor needs me to name it)\nAfter I run the code it says:\nNo Django settings specified.\nUnknown command: 'manage.py'\nType 'django-admin help' for usage.\nI also tried to install it through the MacOS terminal but I don't even have acces the folder named Elementar (cd: no such file or directory: Elementar) although it is created and it is seen in the PyCharm.","Title":"Manage.py unknown command","Tags":"python,django,pycharm","AnswerCount":2,"A_Id":75326283,"Answer":"First of all, you can't create a project using manage.py because the manage.py file doesn't exist yet. It will be created automatically in the folder taskmanager1 if you run the command below.\nYou can create a project with the command\ndjango-admin startproject taskmanager1\nAfter that you can change the directory to the taskmanager1 folder with the cd taskmanager1\/ command.\nWhen you have changed the directory you can use the python manage.py command, for example if you want to run your migrations or create an app.\npython manage.py migrate","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75322357,"CreationDate":"2023-02-02 11:16:43","Q_Score":2,"ViewCount":95,"Question":"I have 2 directories containing tests:\nproject\/\n|\n|-- test\/\n| |\n| |-- __init__.py\n| |-- test_1.py\n|\n|-- my_submodule\/\n |\n |-- test\/\n |\n |-- __init__.py\n |-- test_2.py\n\n\nHow can I run all tests?\npython -m unittest discover .\nonly runs test_1.py\nand obviously\npython -m unittest discover my_submodule\nonly runs test_2.py","Title":"How to run unittest tests from multiple directories","Tags":"python,unit-testing,python-unittest","AnswerCount":2,"A_Id":75324957,"Answer":"unittest currently sees project\/my_submodule as an arbitrary directory to ignore, not a package to import. Just add project\/my_submodule\/__init__.py to change that.","Users Score":4,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75324072,"CreationDate":"2023-02-02 13:45:13","Q_Score":6,"ViewCount":636,"Question":"I'm trying to find out if Pandas.read_json performs some level of autodetection. 
For example, I have the following data:\ndata_records = [\n {\n \"device\": \"rtr1\",\n \"dc\": \"London\",\n \"vendor\": \"Cisco\",\n },\n {\n \"device\": \"rtr2\",\n \"dc\": \"London\",\n \"vendor\": \"Cisco\",\n },\n {\n \"device\": \"rtr3\",\n \"dc\": \"London\",\n \"vendor\": \"Cisco\",\n },\n]\n\ndata_index = {\n \"rtr1\": {\"dc\": \"London\", \"vendor\": \"Cisco\"},\n \"rtr2\": {\"dc\": \"London\", \"vendor\": \"Cisco\"},\n \"rtr3\": {\"dc\": \"London\", \"vendor\": \"Cisco\"},\n}\n\nIf I do the following:\nimport pandas as pd\nimport json\n\npd.read_json(json.dumps(data_records))\n---\n device dc vendor\n0 rtr1 London Cisco\n1 rtr2 London Cisco\n2 rtr3 London Cisco\n\nthough I get the output that I desired, the data is record based. Being that the default orient is columns, I would have not thought this would have worked.\nTherefore is there some level of autodetection going on? With index based inputs the behaviour seems more inline. As this shows appears to have parsed the data based on a column orient by default.\npd.read_json(json.dumps(data_index))\n\n rtr1 rtr2 rtr3\ndc London London London\nvendor Cisco Cisco Cisco\n\npd.read_json(json.dumps(data_index), orient=\"index\")\n\n dc vendor\nrtr1 London Cisco\nrtr2 London Cisco\nrtr3 London Cisco","Title":"Pandas JSON Orient Autodetection","Tags":"python,json,pandas","AnswerCount":4,"A_Id":75399595,"Answer":"No, Pandas does not perform any autodetection when using the read_json function.\nIt is entirely determined by the orient parameter, which specifies the format of the input json data.\nIn your first example, you passed the data_records list to the json.dumps function, which converted it to a JSON string. After passing the resulting JSON string to pd.read_json, it is seen as a record orientation.\nIn your second example, you passed the data_index to json.dumps which is then seen as a \"column\" orientation.\nIn both cases, the behavior of the read_json function is entirely based on the value of the orient parameter and not on automatic detection by Pandas.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75326322,"CreationDate":"2023-02-02 16:42:23","Q_Score":1,"ViewCount":101,"Question":"I installed cdk on wsl2 and I try to use it but I get this error:\n(manifest,filePath,ASSETS_SCHEMA,Manifest.patchStackTagsOnRead)}static loadAssetManifest(filePath){return this.loadManifest(filePath,ASSETS_SCHEMA)}static saveIntegManifest(manifest,filePath){Manifest.saveManifest(manifest,filePath,INTEG_SCHEMA)}static loadIntegManifest(filePath){return this.loadManifest(filePath,INTEG_SCHEMA)}static version(){return SCHEMA_VERSION}static save(manifest,filePath){return this.saveAssemblyManifest(manifest,filePath)}static load(filePath){return this.loadAssemblyManifest(filePath)}static validate(manifest,schema4,options){function parseVersion(version){const ver=semver.valid(version);if(!ver){throw new Error(`Invalid semver string: \"${version}\"`)}return ver}const maxSupported=parseVersion(Manifest.version());const actual=parseVersion(manifest.version);if(semver.gt(actual,maxSupported)&&!(options==null?void 0:options.skipVersionCheck)){throw new Error(`${VERSION_MISMATCH}: Maximum schema version supported is ${maxSupported}, but found ${actual}`)}const validator=new jsonschema.Validator;const result=validator.validate(manifest,schema4,{nestedErrors:true,allowUnknownAttributes:false});let errors=result.errors;if(options==null?void 
0:options.skipEnumCheck){errors=stripEnumErrors(errors)}if(errors.length>0){throw new Error(`Invalid assembly manifest:\n\nSyntaxError: Unexpected token '?'\n at wrapSafe (internal\/modules\/cjs\/loader.js:915:16)\n at Module._compile (internal\/modules\/cjs\/loader.js:963:27)\n at Object.Module._extensions..js (internal\/modules\/cjs\/loader.js:1027:10)\n at Module.load (internal\/modules\/cjs\/loader.js:863:32)\n at Function.Module._load (internal\/modules\/cjs\/loader.js:708:14)\n at Module.require (internal\/modules\/cjs\/loader.js:887:19)\n at require (internal\/modules\/cjs\/helpers.js:74:18)\n at Object. (\/usr\/local\/lib\/node_modules\/aws-cdk\/bin\/cdk.js:3:15)\n at Module._compile (internal\/modules\/cjs\/loader.js:999:30)\n at Object.Module._extensions..js (internal\/modules\/cjs\/loader.js:1027:10)\n\nI've tried reinstalling it, updating it, but I didn't succeed. I also searched on stack overflow but I didn't find anything to help me.","Title":"Why can't I use cdk on wsl2?","Tags":"python-3.x,amazon-web-services,aws-cdk","AnswerCount":1,"A_Id":75334906,"Answer":"This was a problem in Node v12. Upgrading the version to v14 or higher should solve the problem.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75327185,"CreationDate":"2023-02-02 18:00:07","Q_Score":1,"ViewCount":163,"Question":"I have a URL that I am having difficulty reading. It is uncommon in the sense that it is data that I have self-generated or in other words have created using my own inputs. I have tried with other queries to use something like this and it works fine but not in this case:\nbst = pd.read_csv('https:\/\/psl.noaa.gov\/data\/correlation\/censo.data', skiprows=1, \nskipfooter=2,index_col=[0], header=None,\n engine='python', # c engine doesn't have skipfooter\n delim_whitespace=True)\n\nHere is the code + URL that is providing the challenge:\nzwnd = pd.read_csv('https:\/\/psl.noaa.gov\/cgi-bin\/data\/timeseries\/timeseries.pl? 
\nntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None,\n engine='python', # c engine doesn't have skipfooter\n delim_whitespace=True)\n\nThank you for any help that you can provide.\nHere is the full error message:\npd.read_csv('https:\/\/psl.noaa.gov\/cgi-bin\/data\/timeseries\/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None,\n engine='python', # c engine doesn't have skipfooter\n delim_whitespace=True)\nTraceback (most recent call last):\n\n Cell In[240], line 1\n pd.read_csv('https:\/\/psl.noaa.gov\/cgi-bin\/data\/timeseries\/timeseries.pl?ntype=1&var=Zonal+Wind&level=1000&lat1=50&lat2=25&lon1=-135&lon2=-65&iseas=0&mon1=0&mon2=0&iarea=0&typeout=1&Submit=Create+Timeseries', skiprows=1, skipfooter=2,index_col=[0], header=None,\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\util\\_decorators.py:211 in wrapper\n return func(*args, **kwargs)\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\util\\_decorators.py:331 in wrapper\n return func(*args, **kwargs)\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\io\\parsers\\readers.py:950 in read_csv\n return _read(filepath_or_buffer, kwds)\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\io\\parsers\\readers.py:611 in _read\n return parser.read(nrows)\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\io\\parsers\\readers.py:1778 in read\n ) = self._engine.read( # type: ignore[attr-defined]\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\io\\parsers\\python_parser.py:282 in read\n alldata = self._rows_to_cols(content)\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\io\\parsers\\python_parser.py:1045 in _rows_to_cols\n self._alert_malformed(msg, row_num + 1)\n\n File ~\\Anaconda3\\envs\\Stats\\lib\\site-packages\\pandas\\io\\parsers\\python_parser.py:765 in _alert_malformed\n raise ParserError(msg)\n\nParserError: Expected 2 fields in line 133, saw 3. Error could possibly be due to quotes being ignored when a multi-char delimiter is used.","Title":"Reading Data from URL into a Pandas Dataframe","Tags":"python,pandas,csv,url","AnswerCount":2,"A_Id":75328332,"Answer":"Its because the first one directly points to a dataset from storage in .data format but the second url points to a website (which is made up of html, css, json, etc files). You can only use pd.read_csv if you are parsing in a .csv file, and i guess a .data file too since it worked for you.\n\nIf you can find a link to the actual .data or .csv file on that website you will be able to parse it no problem. Since its a gov website, they probably will have a good file format.\n\nIf you cannot, and still need this data you will have to do some webscraping from that website (like using selenium), then you will need to store them as dataframes, and maybe preprocess it so it gets added like expected.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75328837,"CreationDate":"2023-02-02 20:54:58","Q_Score":0,"ViewCount":29,"Question":"I have a mat file with sparse data for around 7000 images with 512x512 dimensions stored in a flattened format (so rows of 262144) and I\u2019m using scipy\u2019s loadmat method to turn this sparse information into a Compressed Sparse Column format. 
The data inside of these images is a smaller image that\u2019s usually around 25x25 pixels somewhere inside of the 512x512 region , though the actual size of the smaller image is not consitant and changes for each image. I want to get the sparse information from this format and turn it into a numpy array with only the data in the smaller image; so if I have an image that\u2019s 512x512 but there\u2019s a circle in a 20x20 area in the center I want to just get the 20x20 area with the circle and not get the rest of the 512x512 image. I know that I can use .A to turn the image into a non-sparse format and get a 512x512 numpy array, but this option isn\u2019t ideal for my RAM.\nIs there a way to extract the smaller images stored in a sparse format without turning the sparse data into dense data?\nI tried to turn the sparse data into dense data, reshape it into a 512x512 image, and then I wrote a program to find the top, bottom, left, and right edges of the image by checking for the first occurrence of data from the top, bottom, left, and right but this whole processes seemed horribly inefficient.","Title":"Numpy Extract Data from Compressed Sparse Column Format","Tags":"python,numpy,scipy,sparse-matrix","AnswerCount":1,"A_Id":75342851,"Answer":"Sorry about the little amount of information I provided; I ended up figuring it out. Scipy's loadmat function when used to extract sparse data from a mat file returns a csc_matrix, which I then converted to numpy's compressed sparse column format. Numpy's format has a method .nonzero() that will return the index of every non-zero element in that matrix. I then reshaped the numpy csc matrix into 512x512, and then used .nonzero() to get the non-zero elements in 2D, then used those indexes to figure out the max height and width of my image I was interested in. Then I created a numpy matrix of zeros the size of the image I wanted, and set the elements in that numpy matrix to the elements to the pixels I wanted by indexing into my numpy csc matrix (after I called .tocsr() on it)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75329701,"CreationDate":"2023-02-02 22:46:29","Q_Score":2,"ViewCount":41,"Question":"I cannot find an example in the Simics documentation on how the clock object is obtained so that we can use it as an argument in the post() method.\nI suspect that either\n\nan attribute can be used to get the clock or\nin the ConfObject class scope we get the clock using SIM_object_clock()\n\nI created a new module using bin\\project-setup --py-device event-py\nI have defined two methods in the ConfObject class scope called clock_set and clock_get.\nI wanted to use these methods so that I can set\/get the clock object to use in the post method.\nThe post() method fails when reading the device registers in the vacuum machine.\nimport pyobj\n# Tie code to specific API, simplifying upgrade to new major version\nimport simics_6_api as simics\n\n\nclass event_py(pyobj.ConfObject):\n \"\"\"This is the long-winded documentation for this Simics class.\n It can be as long as you want.\"\"\"\n _class_desc = \"one-line doc for the class\"\n _do_not_init = object()\n\n def _initialize(self):\n super()._initialize()\n\n\n def _info(self):\n return []\n\n def _status(self):\n return [(\"Registers\", [(\"value\", self.value.val)])]\n\n def getter(self):\n return self\n\n# In my mind, clock_set is supposed to set the clock object. 
That way we can use\n# it in post()\n def clock_set(self):\n self.clock = simics.SIM_object_clock(self)\n\n def clock_get(self):\n return self.clock(self):\n\n class value(pyobj.SimpleAttribute(0, 'i')):\n \"\"\"The value<\/i> register.\"\"\"\n\n class ev1(pyobj.Event):\n def callback(self, data):\n return 'ev1 with %s' % data\n\n\n class regs(pyobj.Port):\n class io_memory(pyobj.Interface):\n def operation(self, mop, info):\n offset = (simics.SIM_get_mem_op_physical_address(mop)\n + info.start - info.base)\n size = simics.SIM_get_mem_op_size(mop)\n\n if offset == 0x00 and size == 1:\n if simics.SIM_mem_op_is_read(mop):\n val = self._up._up.value.val\n simics.SIM_set_mem_op_value_le(mop, val)\n # POST HERE AS TEST self._up._up.ev1.post(clock, val, seconds = 1)\n else:\n val = simics.SIM_get_mem_op_value_le(mop)\n self._up._up.value.val = val\n return simics.Sim_PE_No_Exception\n else:\n return simics.Sim_PE_IO_Error","Title":"How to get the clock argument of event.post(clock, data, duration) in a python device?","Tags":"python,post,events,clock,simics","AnswerCount":2,"A_Id":75449884,"Answer":"You mention using the vacuum example machine and within its script you see that sdp->queue will point to timer. So SIM_object_clock(sdp) would return timer.\nSimics is using queue attribute in all conf-objects to reference their clock individually, though other implementations are used too.\nBR\nSimon\n#IAmIntel","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75330853,"CreationDate":"2023-02-03 02:15:38","Q_Score":0,"ViewCount":26,"Question":"I need to start a python program when the system boots. It must run in the background (forever) such that opening a terminal session and closing it does not affect the program.\nI have demonstrated that by using tmux this can be done manually from a terminal session. 
Can the equivalent be done from a script that is run at bootup?\nThen where done one put that script so that it will be run on bootup.","Title":"ubuntu run python program in background on startup","Tags":"python,background,boot","AnswerCount":2,"A_Id":75346392,"Answer":"It appears that in addition to putting a script that starts the program in \/etc\/init.d, one also has to put a link in \/etc\/rc2.d with\nsudo ln -s \/etc\/init.d\/scriptname.sh\nsudo mv scriptname.sh S01scriptname.sh\nThe S01 was just copied from all the other files in \/etc\/rc2.d","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75331933,"CreationDate":"2023-02-03 05:45:06","Q_Score":1,"ViewCount":283,"Question":"I have a use case where we have text file like key value format .\nThe file is not any of the fixed format but created like key value .\nWe need to create JSON out of that file .\nI am able to create JSON but when text format has array like structure it creates just Key value json not the array json structure .\nThis is my Input .\n[DOCUMENT]\nHeadline=This is Headline\nMainLanguage=EN\nDocType.MxpCode=1000\nSubject[0].MxpCode=BUSNES\nSubject[1].MxpCode=CONS\nSubject[2].MxpCode=ECOF\nAuthor[0].MxpCode=6VL6\nIndustry[0].CtbCode=53\nIndustry[1].CtbCode=5340\nIndustry[2].CtbCode=534030\nIndustry[3].CtbCode=53403050\nSymbol[0].Name=EXPE.OQ\nSymbol[1].Name=ABNB.OQ\nWorldReg[0].CtbCode=G4\nCountry[0].CtbCode=G26\nCountry[1].CtbCode=G2V\n[ENDOFFILE]\n\nExiting code to create json is below\nwith open(\"file1.csv\") as f:\n lines = f.readlines()\ndata = {}\nfor line in lines:\n parts = line.split('=')\n if len(parts) == 2:\n data[parts[0].strip()] = parts[1].strip()\nprint(json.dumps(data, indent=' '))\n\nThe current output is below\n{\n \"Headline\": \"This is Headline\",\n \"MainLanguage\": \"EN\",\n \"DocType.MxpCode\": \"1000\",\n \"Subject[0].MxpCode\": \"BUSNES\",\n \"Subject[1].MxpCode\": \"CONS\",\n \"Subject[2].MxpCode\": \"ECOF\",\n \"Author[0].MxpCode\": \"6VL6\",\n \"Industry[0].CtbCode\": \"53\",\n \"Industry[1].CtbCode\": \"5340\",\n \"Industry[2].CtbCode\": \"534030\",\n \"Industry[3].CtbCode\": \"53403050\",\n \"Symbol[0].Name\": \"EXPE.OQ\",\n \"Symbol[1].Name\": \"ABNB.OQ\",\n \"WorldReg[0].CtbCode\": \"G4\",\n \"Country[0].CtbCode\": \"G26\",\n \"Country[1].CtbCode\": \"G2V\"\n}\n\nExpected out is is something like below\nFor the Subject key and like wise for others also\n{\n \"subject\": [\n {\n \"mxcode\": 123\n },\n {\n \"mxcode\": 123\n },\n {\n \"mxcode\": 123\n }\n ]\n}\n\nLike wise for Industry and Symbol and Country.\nso the idea is when we have position in the text file it should be treated as array in the json output .","Title":"How to convert key value text to json arrya format python","Tags":"json,python-3.x","AnswerCount":3,"A_Id":75331962,"Answer":"Use one more loop as it is nested. Use for loop from where subject starts. try it that way.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75332067,"CreationDate":"2023-02-03 06:05:17","Q_Score":0,"ViewCount":45,"Question":"Whenever launching telethon from an existing session I receive two error messages:\nServer sent a very new message with ID xxxxxxxxxxxxxxxxxxx, ignoring Server sent a very new message with ID xxxxxxxxxxxxxxxxxxx, ignoring\nAnd thereafter it gets clogged , preventing any execution.\nThe answer I got from another post is \"in Windows time settings, enable automatic setting of time and time zone\". 
But I am using a Linux system, and the system is set to the Asia\/Shanghai time zone. How can I fix this problem?","Title":"Error messages clogging Telethon resulting : Server sent a very new message xxxxx was ignored","Tags":"python,telethon","AnswerCount":1,"A_Id":75333806,"Answer":"I think I found the reason. The time difference between the local environment and the Telegram server is too large. After manually adjusting the time to correct the delay, the problem was fixed.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75333089,"CreationDate":"2023-02-03 08:18:17","Q_Score":2,"ViewCount":42,"Question":"I have scheduled a task arp -a which runs once per hour, that scans my wi-fi network to save all the info about currently connected devices into a scan.txt file. After the scan, a python script reads the scan.txt and saves the data into a database.\nThis is what my wifiscan.sh script looks like:\ncd \/home\/pi\/python\/wifiscan\/\narp -a > \/home\/pi\/python\/wifiscan\/scan.txt\npython wifiscan.py\n\nThis is my crontab task:\n#wifiscan\n59 * * * * sh \/home\/pi\/launcher\/wifiscan.sh\n\nIf I run the wifiscan.sh file manually, all the process works perfectly; when it is run by the crontab, the scan.txt file is generated empty and the rest of the process works, but with no data, so I'm assuming that the problem lies in the arp -a command.\nHow is it possible that arp -a does not produce any output when it is run by crontab? Is there any mistakes I'm making?","Title":"Raspberry Pi - Crontab task not running properly","Tags":"python,cron,raspberry-pi,arp","AnswerCount":1,"A_Id":75334177,"Answer":"As @Mark Setchell commented, I solved my problem by launching the command with its entire path (in this case, \/usr\/sbin\/arp)","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75335575,"CreationDate":"2023-02-03 12:12:04","Q_Score":1,"ViewCount":282,"Question":"I am trying to test the data being written to RDS, but I can't seem to be able to mock the DB. The idea is to mock a DB, then run my code and retrieve the data for testing. 
Could anyone help, please?\nimport unittest\nimport boto3\nimport mock\nfrom moto import mock_s3, mock_rds\nfrom sqlalchemy import create_engine\n\n@mock_s3\n@mock_rds\nclass TestData(unittest.TestCase):\n def setUp(self):\n \"\"\"Initial setup.\"\"\"\n # Setup db\n\n test_instances = db_conn.create_db_instance(\n DBName='test_db',\n AllocatedStorage=10,\n StorageType='standard',\n DBInstanceIdentifier='instance',\n DBInstanceClass='db.t2.micro',\n Engine='postgres',\n MasterUsername='postgres_user',\n MasterUserPassword='p$ssw$rd',\n AvailabilityZone='us-east-1',\n PubliclyAccessible=True,\n DBSecurityGroups=[\"my_sg\"],\n VpcSecurityGroupIds=[\"sg-123456\"],\n Port=5432\n )\n db_instance = test_instances[\"DBInstance\"]\n\n user_name = db_instance['MasterUsername']\n host = db_instance['Endpoint']['Address']\n port = db_instance['Endpoint']['Port']\n db_name = db_instance['DBName']\n conn_str = f'postgresql:\/\/{user_name}:p$ssw$rd@{host}:{port}\/{db_name}'\n print(conn_str)\n engine_con = create_engine(conn_str)\n engine_con.connect()\n\nError:\n> conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\nE sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name \"instance.aaaaaaaaaa.eu-west-1.rds.amazonaws.com\" to address: nodename nor servname provided, or not known\nE \nE (Background on this error at: https:\/\/sqlalche.me\/e\/14\/e3q8)","Title":"How to test data from RDS using mock_rds","Tags":"python,testing,mocking,amazon-rds,moto","AnswerCount":1,"A_Id":75373965,"Answer":"So, instead of testing the data from my DB, I replicated the execution of the code I had in my lambda on my test, accessing the results locally. So those same tests are working fine on Github now.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75336808,"CreationDate":"2023-02-03 14:05:07","Q_Score":0,"ViewCount":17,"Question":"I have somehow managed to mess up my pip indexes for a local virtual env.\npip config list returns the following\n:env:.index-url='https:\/\/***\/private-pypi\/simple\/' global.index-url='https:\/\/pypi.python.org\/simple'\nThis makes pip to always default to searching the private pypi index first. Any idea how I can remove the env specific index? It does not appear in the pip.conf file and running pip config unset env.index-url does not work either or I can't get the right syntax.\nThanks!","Title":"Remove private PyPi index from local virtual env","Tags":"python,pip,pypi","AnswerCount":1,"A_Id":75337096,"Answer":"You can remove the environment-specific index by directly editing the environment's pip.ini file or pip.conf file. The file should be located in the environment's lib\/pythonX.X\/site-packages\/pip\/ directory. Simply delete the line with the \"index-url\" value and the default global index will be used.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75338314,"CreationDate":"2023-02-03 16:11:39","Q_Score":4,"ViewCount":2184,"Question":"I got an error when creating virtualenv with Python 3.11 interpreter.\nI typed this in my terminal\npython3.11 -m venv env\n\nIt returned this:\nError: Command '['\/home\/bambang\/env\/bin\/python3.11', '-m', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.\n\nWhat's possibly missing?","Title":"Creating Virtual Environment with Python 3.11 Returns an Error","Tags":"python-venv,python-3.11","AnswerCount":2,"A_Id":75346003,"Answer":"I tried to add --without-pip flag. 
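In full, that was (combining the flag with the command from the question):\npython3.11 -m venv env --without-pip\n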
It doesn't return an error so far","Users Score":5,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75338776,"CreationDate":"2023-02-03 16:51:01","Q_Score":1,"ViewCount":30,"Question":"What does it mean when I keep getting these warnings\nWARNING: The script jupyter-trust is installed in '\/Users\/josephchoi\/Library\/Python\/3.9\/bin' which is not on PATH.\nConsider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nI am on MacOS and zsh. I tried researching but the texts were too complicated. As you can tell, I am a complete beginner.","Title":"Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location","Tags":"python,python-3.x,terminal,pip,zsh","AnswerCount":1,"A_Id":75342255,"Answer":"This normally will happen when you've installed a pip package that contains an executable, and it shouldn't be a problem. If you don't like the warning, you can add the folder to your PATH variable by adding the line export PATH=$PATH:\/Users\/josephchoi\/Library\/Python\/3.9\/bin to your .zshrc file in your home directory and it will stop shouting at you.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75340196,"CreationDate":"2023-02-03 19:25:23","Q_Score":2,"ViewCount":42,"Question":"I have features extracted from 4 images. These images are video frames. And i want to combine them into one vector of shape (1 ,768) or (1, 512) Is AvgPooling the best way to do it?\nimport torch\ninput = torch.rand([1, 4, 768])\nsumpool = torch.nn.AdaptiveAvgPool2d((1, 512))\nsumpool(input).shape #torch.Size([1, 1, 512])\n\nAlso i tried MeanPooling:\nresult = torch.sum(visual_output, dim=1) \/ 4 #(1, 768)\n\nBut seems like i wrong somewhere. After using these combined features results are worse. Is everything correct?","Title":"Concatenate video frames using AvgPooling","Tags":"python,machine-learning,pytorch,computer-vision,data-science","AnswerCount":1,"A_Id":75341659,"Answer":"Adaptive average pool adjusts sizes for pooling regions whereas mean pooling is similar to AvgPool2d, it solves by dividing the input feature map into several non-overlapping regions and computing the average of each region, assuming your input size is always different from output size created we get irregular results. 
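For instance, with the shapes from the snippet (a minimal sketch):\nimport torch\n\nx = torch.rand(1, 4, 768)\n# adaptive pooling: you declare the output size and the window is derived\nprint(torch.nn.AdaptiveAvgPool2d((1, 512))(x).shape) # torch.Size([1, 1, 512])\n# plain mean pooling over the frame axis: the output size follows from the input\nprint(x.mean(dim=1).shape) # torch.Size([1, 768])\n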
Basic Pooling had this problem that is why Adaptive pooling came into being.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75340879,"CreationDate":"2023-02-03 20:44:20","Q_Score":0,"ViewCount":44,"Question":"I find implementing a multi-threaded binary tree search algorithm in Python can be challenging because it requires proper synchronization and management of multiple threads accessing shared data structures.\nOne approach, I think is to achieve this would be to use a thread-safe queue data structure to distribute search tasks to worker threads, and use locks or semaphores to ensure that each node in the tree is accessed by only one thread at a time.\nHow can you implement a multi-threaded binary tree search algorithm in Python that takes advantage of multiple cores, while maintaining thread safety and avoiding race conditions?","Title":"Multi-Thread Binary Tree Search Algorithm","Tags":"python,multithreading,binary","AnswerCount":2,"A_Id":75341042,"Answer":"How can you implement a multi-threaded binary tree search algorithm in Python that takes advantage of multiple cores, while maintaining thread safety and avoiding race conditions?\n\nYou can write a multi-threaded binary tree search in Python that is thread-safe and has no race conditions. Another answer makes some good suggestions about that.\nBut if you're writing it in pure Python then you cannot make effective use of multiple cores to improve the performance of your search, at least not with CPython, because the Global Interpreter Lock prevents any concurrent execution within the Python interpreter. Multithreading can give you a performance improvement if your threads spend a significant fraction of their time in native code or blocked, but tree searching does not have any characteristics that would make room for an improvement from multithreading in a CPython environment.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75341796,"CreationDate":"2023-02-03 22:59:34","Q_Score":1,"ViewCount":63,"Question":"I wrote a Python3 script to solve a picoCTF challenge. I received the encrypted flag which is:\ncvpbPGS{c33xno00_1_f33_h_qrnqorrs}\nFrom its pattern, I thought it is encoded using caesar cipher. So I wrote this script:\nalpha_lower = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l',\n 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u','v', 'w', 'x', 'y', 'z']\nalpha_upper = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L',\n 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']\ntext = 'cvpbPGSc33xno00_1_f33_h_qrnqorrs '\n\nfor iterator in range(len(alpha_lower)):\n temp = ''\n for char in text:\n if char.islower():\n \n ind = alpha_lower.index(char)\n this = ind + iterator\n \n while this > len(alpha_lower):\n this -= len(alpha_lower)\n \n temp += alpha_lower[this]\n \n elif char.isupper():\n ind = alpha_upper.index(char)\n that = ind + iterator\n \n while that > len(alpha_upper):\n that -= len(alpha_upper)\n\n temp += alpha_upper[that]\n print(temp)\n\n\nI understand what the error means. I can't understand where the flaw is to fix. Thanks in advance.\nSorrym here is the error:\nDesktop>python this.py \ncvpbPGScxnofhqrnqorrs \ndwqcQHTdyopgirsorpsst\nexrdRIUezpqhjstpsqttu\nTraceback (most recent call last):\nFile \"C:\\Users\\user\\Desktop\\this.py\", line 18, in \ntemp += alpha_lower[this]\nIndexError: list index out of range","Title":"Error, index out of range. 
What is wrong?","Tags":"python,python-3.x,algorithm","AnswerCount":2,"A_Id":75342083,"Answer":"Why that break is simple :\nIf this==len(alpha_lower) then we won't enter your loop:\nwhile this > len(alpha_lower):\nAnd thus when trying temp += alpha_lower[this] it will return an error.\nAn index must be strictly inferior to the size of the array. Your condition should have been while this >= len(alpha_lower):.\nAs pointed out, a better method here is to use a modulus.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75342439,"CreationDate":"2023-02-04 01:39:32","Q_Score":2,"ViewCount":92,"Question":"Using pd.Grouper with a datetime key in conjunction with another key creates a set of groups, but this does not seem to encompass all of the groups that need to be created, in my opinion.\n>>> test = pd.DataFrame({\"id\":[\"a\",\"b\"]*3, \"b\":pd.date_range(\"2000-01-01\",\"2000-01-03\", freq=\"9H\")})\n>>> test\n id b\n0 a 2000-01-01 00:00:00\n1 b 2000-01-01 09:00:00\n2 a 2000-01-01 18:00:00\n3 b 2000-01-02 03:00:00\n4 a 2000-01-02 12:00:00\n5 b 2000-01-02 21:00:00\n\nWhen I tried to create groups based on the date and id values:\n>>> g = test.groupby([pd.Grouper(key='b', freq=\"D\"), 'id'])\n>>> g.groups\n{(2000-01-01 00:00:00, 'a'): [0], (2000-01-02 00:00:00, 'b'): [1]}\n\ng.groups shows only 2 groups when I expected 4 groups: both \"a\" and \"b\" for each day.\nHowever, when I created another column based off of \"b\":\n>>> test['date'] = test.b.dt.date\n>>> g = test.groupby(['date', 'id'])\n>>> g.groups\n{(2000-01-01, 'a'): [0, 2], (2000-01-01, 'b'): [1], (2000-01-02, 'a'): [4], (2000-01-02, 'b'): [3, 5]}\n\nThe outcome was exactly what I expected.\nI don't know how to make sense of these different outcomes. Please enlighten me.","Title":"pd.Grouper with datetime key in conjunction with another grouping key seemingly creates the wrong number of groups","Tags":"python,pandas,datetime,group-by","AnswerCount":2,"A_Id":75342499,"Answer":"I believe it is because of the difference between 'pd.Grouper' and the 'dt.date' method in pandas. 'pd.Grouper' groups by a range of values (e.g., daily, hourly, etc.) while 'dt.date' returns just the date part of a datetime object, effectively creating a categorical variable.\nWhen you use 'pd.Grouper' with a frequency of \"D\", it will group by full days, so each day is represented by only one group. But in your case, each id has multiple records for a given day. 
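You can see that directly in the test frame from the question (rows 0 and 2 are both id 'a' on 2000-01-01):\nprint(test[test.b.dt.normalize() == '2000-01-01'])\n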
So, 'pd.Grouper' is not able to capture all of the groups that you expect.\nOn the other hand, when you use the 'dt.date' method to extract the date part of the datetime, it creates a categorical variable that represents each date independently.\nso when you group by this new date column along with the id column, each group will correspond to a unique combination of date and id, giving you the expected outcome.\nIn summary, pd.Grouper is useful when you want to group by a range of values (e.g., daily, hourly), while using a separate column for the exact values (e.g., a column for dates only) is useful when you want to group by specific values.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75342534,"CreationDate":"2023-02-04 02:16:07","Q_Score":0,"ViewCount":15,"Question":"So I manually imported a certificate and key pair issued by a third party to certmanage in AWS and I am trying to programaticly export to a webserver and I get this error:\nbotocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ExportCertificate operation: Certificate ARN: arn:aws:acm:us-east-1:x:certificatexxxxxxxx is not a private certificate\nCan I export a third party cert and private key from AWS certmanager?\npython -V\nPython 3.10.0\nI am trying to export a AWS managed certificate from certmanager and its failing.\nI've tried googleing the error code but come up with nothing.","Title":"Exporting Certificates from AWS Certmanager Boto3 Python310","Tags":"python-3.x,amazon-web-services,boto3","AnswerCount":1,"A_Id":75345246,"Answer":"AWS Certificate Manager (ACM) has two types of certificates. Public and Private.\nYou can't export any certificate when it is public. Even if you imported it.\nYou can associate your ACM certificate with ALB, for example, and put this ALB in front of your EC2 instance. But you can't export.\nAs you imported the certificate, it means you have the public and private parts of the certificate. You can just use it on your instance.\nOnly ACM privates ones can be exported.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75343111,"CreationDate":"2023-02-04 05:11:53","Q_Score":1,"ViewCount":194,"Question":"I have a custom indicator that I use on Tradingview. The values for the mst indicator in python do not match the values for mst indicator in Tradingview. How do I fix this so the values are exactly the same?\nThe pinescript code is as follows:\n\/\/Calculate MST\nRSI = ta.rsi(close, 14)\nrsidelta = ta.mom(RSI, 9)\nrsisma = ta.sma(ta.rsi(close, 3), 3)\nmst = rsidelta+rsisma\nplot(mst, title=\"MST\", color=#BB2BFA, linewidth = 2)\n\nI am trying to replicate the exact values for MST in a python script. 
The python code for RSI that I am using is as follows:\ndef rsi(df: pd.DataFrame, period: int = 14, source: str = 'close') -> np.array:\nrsi = ta.rsi(df[source], period)\nif rsi is not None:\n return rsi.values\n\nThis is code in my configuration file:\n[scans.7] # MST\nrsi_source = 'close'\nrsi_period = 14\nrsi_delta_period = 9\nrsi_sma_period = 3\nmst_threshold = [20, 80]\n\nThis is code in scanner.py file\n # Scan 7\n if '7' in self.scans:\n scan = self.scans['7']\n rsi = indicators.rsi(df=df, period=scan['rsi_period'], source=scan['rsi_source'])\n rsi_delta = rsi[-1] - rsi[-scan['rsi_delta_period']]\n rsi_sma = pd.Series(indicators.rsi(df=df, period=scan['rsi_sma_period'], source=scan['rsi_source'])).rolling(scan['rsi_sma_period']).mean()\n mst = rsi_delta + rsi_sma","Title":"RSI values in Python (lib is Pandas) don't match RSI values in Tradingview-Pinescript","Tags":"python,pandas,tradingview-api,rsi","AnswerCount":1,"A_Id":76095739,"Answer":"I have encountered the same issue on the EMA indicator for me the issue was that I took the last 500 candles from Binance and tried to calculate the EMA and see if the values matched Trading View's values, then I realized that the EMA + RSI indicators are recursive(meaning they relay on past result values to generate a result.) with this said it might be that the reason for this inaccuracy in results is simply the fact that my indicator calculation started at a different point than that of Trading view's resulting in slight inaccuracies between the results.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75344601,"CreationDate":"2023-02-04 10:57:23","Q_Score":0,"ViewCount":28,"Question":"I got python code that has no GUI and works in terminal. Can I convert it to apk and run on android?\nI'm just curious if it's possible.","Title":"Is it possible to run code without gui on android?","Tags":"python,android","AnswerCount":1,"A_Id":75344967,"Answer":"No, you cannot directly run a Python script in the terminal as an Android app. Python scripts are typically run on a computer using the Python interpreter, and Android devices use the Android operating system which is different from the typical computer operating systems.\nHowever, you can use a tool such as Kivy, which is a Python library for creating mobile apps, to create an Android app from your Python script. Kivy provides a way to package your Python code into an Android app, so you can run it on an Android device.\nI am sure there are other tools providing this option as well. These tools essentially bundle the Python interpreter and your script into a single executable file, so the user doesn't need to have Python installed on their device to run your app.\nI believe there are tutorials on youtube as well so as to how to use Kivy to run your python code. I hope this helps :)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75344761,"CreationDate":"2023-02-04 11:26:42","Q_Score":2,"ViewCount":24,"Question":"I was trying learning about logging in python for the first time today. i discovered when i tried running my code from VS Code, i received this error message\n\/bin\/sh: 1: python: not found however when i run the code directly from my terminal, i get the expected result. I need help to figure out the reason for the error message when i run the code directly from vscode\nI've tried checking the internet for a suitable solution, no fix yet. 
i will appreciate your responses.","Title":"Configuring Python execution from VS Code","Tags":"python,python-3.x,visual-studio-code,logging,error-log","AnswerCount":1,"A_Id":75357343,"Answer":"The error message you are receiving indicates that the \"python\" executable is not found in the PATH environment variable of the terminal you are using from within Visual Studio Code.\nAdd the location of the Python executable to the PATH environment variable in your terminal.\nSpecify the full path to the Python executable in your Visual Studio Code terminal.\nYou can find the full path to the Python executable by running the command \"which python\" in your terminal.","Users Score":-1,"is_accepted":false,"Score":-0.1973753202,"Available Count":1},{"Q_Id":75345565,"CreationDate":"2023-02-04 13:55:50","Q_Score":2,"ViewCount":761,"Question":"I am aware that io.BytesIO() returns a binary stream object which uses in-memory buffer. but also provides getbuffer() which provides a readable and writable view (memoryview obj) over the contents of the buffer without copying them.\nobj = io.BytesIO(b'abcdefgh')\nbuf = obj.getbuffer()\n\nNow, we know buf points to underlying data and when sliced(buf[:3]) returns a memoryview object again without making a copy. So I want to know, if we do obj.read(3) does it also uses in-memory buffer or makes a copy ?. if it does uses in-memeory buffer, what is the difference between obj.read and buf and which one to prefer to effectively read the data in chunks for considerably very long byte objects ?","Title":"does read method of io.BytesIO returns copy of underlying bytes data?","Tags":"python,buffer,bytesio,memoryview","AnswerCount":1,"A_Id":75345687,"Answer":"Simply put, BytesIO.read reads data from the in-memory buffer. The method reads the data and returns as bytes objects and gives you a copy of the read data. buf however, is a memory view object that views the underlying buffer and doesn't make a copy of the data.\nThe difference between BytesIO.read and buf is that, subsequent data retrieves will not be affected when io.BytesIO.read is used as you will get a copy of the data of the buffer, but if you change data bufyou also will change the data in the buffer as well.\nIn terms of performance, using obj.read would be a better choice if you want to read the data in chunks, because it provides a clear separation between the data and the buffer, and makes it easier to manage the buffer. On the other hand, if you want to modify the data in the buffer, using buf would be a better choice because it provides direct access to the underlying data.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75345615,"CreationDate":"2023-02-04 14:05:25","Q_Score":0,"ViewCount":27,"Question":"As an example, I can cross validation when I do hyperparameter tuning (GridSearchCV). I can select the best estimator from there and do RFECV. and I can perform cross validate again. But this is a time-consuming task. I'm new to data science and still learning things. Can an expert help me lean how to use these things properly in machine learning model building?\nI have time series data. I'm trying to do hyperparameter tuning and cross validation in a prediction model. But it is taking a long time run. 
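Concretely, the sequence I described looks roughly like this (the estimator, grid and data are made up for illustration):\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.feature_selection import RFECV\nfrom sklearn.model_selection import GridSearchCV, cross_validate\n\nX, y = make_classification(n_samples=300, random_state=0)\n# 1) tune hyperparameters with cross-validation\nsearch = GridSearchCV(RandomForestClassifier(random_state=0), {'n_estimators': [50, 100]}, cv=5).fit(X, y)\n# 2) feature selection with the tuned estimator, cross-validated again\nselector = RFECV(search.best_estimator_, cv=5).fit(X, y)\n# 3) cross-validate once more on the selected features\nscores = cross_validate(search.best_estimator_, selector.transform(X), y, cv=5)\n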
I need to learn the most efficient way to do these things during the model building process.","Title":"How to do the cross validation properly?","Tags":"python,machine-learning,cross-validation,hyperparameters","AnswerCount":1,"A_Id":75348200,"Answer":"Cross-validation is a tool in order to evaluate model performance. Specifically avoid over-fitting. When we put all the data in training side, your Model will get over-fitting by ignoring generalisation of the data.\nThe concept of turning parameter should not based on cross-validation because hyper-parameter should be changed based on model performance, for example the depth of tree in a tree algorithm\u2026.\nWhen you do a 10-fold cv, you will be similar to training 10 model, of cause it will have time cost. You could tune the hyper-parameter based on the cv result as cv-> model is a result of the model. However it does not make sense when putting the tuning and do cv to check again because the parameter already optimised based on the first model result.\nP.s. if you are new to data science, you could learn something call regularization\/dimension reduction to lower the dimension of your data in order to reduce time cost.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75347375,"CreationDate":"2023-02-04 18:27:15","Q_Score":1,"ViewCount":44,"Question":"import re\n\ninput_text = \"((NOUN) ) ) de el auto rojizo, algo) ) )\\n Luego ((PL_ADVB)dentro ((NOUN)de ba\u00fal ))abajo.) ).\"\n\ninput_text = input_text.replace(\" )\", \") \")\n\nprint(repr(input_text))\n\nSimply using the .replace(\" )\", \") \") function I get this bad output, as it doesn't consider the conditional replacements that a function using regex patterns could, for example using re.sub( , ,input_text, flags = re.IGNORECASE)\n'((NOUN)) ) de el auto rojizo, algo)) ) \\n Luego ((PL_ADVB)dentro ((NOUN)de ba\u00fal) )abajo.)) .'\n\nThe goal is to get this output where closing parentheses are stripped of leading whitespace's and a single whitespace is added after as long as the closing parenthesis ) is not in front of a dot . , a newline \\n or the end of line $\n'((NOUN))) de el auto rojizo, algo)))\\n Luego ((PL_ADVB)dentro ((NOUN)de ba\u00fal))abajo.)).'","Title":"Set a regex pattern to condition placing or removing spaces before or after a ) according to the characters that are before or after","Tags":"python,regex","AnswerCount":2,"A_Id":75347416,"Answer":"Try this pattern it should solve it\n\/(\\s*)())(\\s*)(?=[^\\s])\/g\nThis pattern will match a ')' that is followed by a non-whitespace character and remove any spaces before or after the ')'.\nIf you want to add spaces around a ')' instead of removing them, you can modify the pattern like this:\n\/(\\s*)())(\\s*)(?=[^\\s])\/g","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75347437,"CreationDate":"2023-02-04 18:38:05","Q_Score":1,"ViewCount":60,"Question":"I want the user to be able to input more than one character they want to remove. It works but only if one character is entered.\nstring = input(\"Please enter a sentence: \")\nremoving_chars = input(\"Please enter the characters you would like to remove: \")\nreplacements = [(removing_chars, \"\")]\n\nfor char, replacement in replacements:\n if char in string:\n string = string.replace(char, replacement)\n\nprint(string)","Title":"Multiple replacements","Tags":"python","AnswerCount":4,"A_Id":75347482,"Answer":"when you loop over replacements, char takes removing_chars as value. 
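For example, if the user types xy (illustrative values, not from the original post):\nremoving_chars = 'xy'\nreplacements = [('xy', '')]\nfor char, replacement in replacements:\n print(char) # prints 'xy' - the whole string, never 'x' or 'y' separately\n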
Then, when you check if char in string, Python checks if removing_chars is a substring of string. To actually remove the characters separately, you have to loop over removing_chars in order to get the individual characters.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75347596,"CreationDate":"2023-02-04 19:00:01","Q_Score":1,"ViewCount":66,"Question":"I have a zip file with this structure:\nReport \n\u2502\n\u2514\u2500\u2500\u2500folder1\n\u2502 \u2502\n\u2502 \u2514\u2500\u2500\u2500subfolder1\n| |\n\u2502 \u2502file 1 2022.txt\n\u2502 \n\u2514\u2500\u2500\u2500folder2\n \u2502 file2.txt\n\nAnd their relative file paths are as follows: Report\/folder1 \/ subfolder1 \/ file 1 2022.txt and Report\/folder2\/file2.txt\nI tried to extract the zip file to another destination using the following code:\nwith ZipFile(attachment_filepath, 'r') as z:\n z.extractall('Destination')\n\nHowever, it gives me a FileNotFoundError: [Winerror 3] The system cannot find the path specified: 'C:\\\\Users\\\\myname\\\\Desktop\\\\Report\\\\folder1 \\\\ subfolder1 '\nI can extract just file2.txt without any problems but trying to extract file 1 2022.txt gives me that error,presumably due to all the extra whitespaces","Title":"FileNotFoundError with filepath that has whitespaces using ZipFile extract","Tags":"python,path,python-zipfile","AnswerCount":1,"A_Id":75347641,"Answer":"\"folder1 \" (note the space) isn't the same as \"folder1\" (no space). When passing a path, it has to be the exact path. You can't add whitespace between path separators because the file system will assume you want a path name with spaces. Whatever put those spaces into the path is the problem.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75347911,"CreationDate":"2023-02-04 19:47:03","Q_Score":1,"ViewCount":38,"Question":"My src directory's layout is the following:\n\nLearning\n\ninnit.py\nsettings.py\nurls.py\nwsgi.py\n\n\npages\n\ninnit.py\nadmin.py\napps.py\nmodels.py\ntests.py\nviews.py\n\n\n\nViews.py has this code\nfrom django.shortcuts import render\nfrom django.http import HttpResponse\n\ndef home_view(*args,**kwargs):\n return HttpResponse(\"
<h1>
Hello World, (again)!<\/h1>\")\n\nurls.py has this code\nfrom django.contrib import admin\nfrom django.urls import path\nfrom pages.views import home_view\n\n\nurlpatterns = [\n path(\"\", home_view, name = \"home\"),\n path('admin\/', admin.site.urls),\n]\n\nThe part where it says 'pages.views' in 'from pages.views import home_view' has a yellow\/orange squiggle underneath it meaning that it is having problems importing the file and it just doesn't see the package\/application called 'pages' and doesn't let me import it even though the package has a folder called 'innit.py'. Even worse is the fact that the tutorial I am currently following receives no such error and I can't see anyone else who has encountered this error.\nAs you probably expect I am a beginner so I don't have experience and this is my first time editing views.html in Django so I may have made an obvious mistake if so, just point it out.\nI tried doing\nfrom ..pages.views import home_view\n\nHowever it failed and gave me an error\nI have also tried changing the project root however this now causes issues with the imports in 'views.py'.","Title":"Issue importing application in Django in urls.html","Tags":"python,django,django-views","AnswerCount":1,"A_Id":75348049,"Answer":"The part where it says 'pages.views' in 'from pages.views import home_view' has a yellow\/orange squiggle underneath it meaning that it is having problems importing the file and it just doesn't see.\nYou need to mark the correct \"source root\". This is for Django the project directory, which is the directory that contains the apps.\nFor example in PyCharm you click right on that directory, and use Mark Directory as\u2026 \u27e9 Sources Root.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75349427,"CreationDate":"2023-02-05 00:54:45","Q_Score":1,"ViewCount":46,"Question":"Is there any difference between the infinities returned by the math module and cmath module?\nDoes the complex infinity have an imaginary component of 0?","Title":"Is there a difference between math.inf and cmath.inf in Python?","Tags":"python,python-3.x,complex-numbers,infinity,python-cmath","AnswerCount":1,"A_Id":75349428,"Answer":"Any difference?\nNo, there is no difference. According to the docs, both math.inf and cmath.inf are equivalent to float('inf'), or floating-point infinity.\nIf you want a truly complex infinity that has a real component of infinity and an imaginary component of 0, you have to build it yourself: complex(math.inf, 0)\nThere is, however, cmath.infj, if you want 0 as a real value and infinity as the imaginary component.\nConstructing imaginary infinity\nAs others have pointed out math.inf + 0j is a bit faster than complex(math.inf, 0). We're talking on the order of nanoseconds though.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75349540,"CreationDate":"2023-02-05 01:23:06","Q_Score":0,"ViewCount":13,"Question":"I have a script that modifies a pandas dataframe with several concurrent functions (asyncio coroutines). Each function adds rows to the dataframe and it's important that the functions all share the same list. However, when I add a row with pd.concat a new copy of the dataframe is created. I can tell because each dataframe now has a different memory location as given by id().\nAs a result the functions are no longer share the same object. 
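A toy illustration of what I mean (not my real code):\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [1]})\nprint(id(df))\ndf = pd.concat([df, pd.DataFrame({'a': [2]})])\nprint(id(df)) # a different id, i.e. a brand new object\n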
How can I keep all functions pointed at a common dataframe object?\nNote that this issue doesn't arise when I use the append method, but that is being deprecated.","Title":"Pandas dataframe sharing between functions isn't working","Tags":"pandas,dataframe,python-asyncio","AnswerCount":1,"A_Id":75349863,"Answer":"pandas dataframes are efficient because they use contiguous memory blocks, frequently of fundamental types like int and float. You can't just add a row because the dataframe doesn't own the next bit of memory it would have to expand into. Concatenation usually requires that new memory is allocated and data is copied. Once that happens, referrers to the original dataframe\nIf you know the final size you want, you can preallocate and fill. Otherwise, you are better off keeping a list of new dataframes and concatenating them all at once. Since these are parallel procedures, they aren't dependent on each others output, so this may be a feasable option.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75349550,"CreationDate":"2023-02-05 01:26:50","Q_Score":1,"ViewCount":625,"Question":"This is what I'm trying to do.\n\nScan the csv using Polars lazy dataframe\nFormat the phone number using a function\nRemove nulls and duplicates\nWrite the csv in a new file\n\nHere is my code\nimport sys\nimport json\nimport polars as pl\nimport phonenumbers\n\n#define the variable and parse the encoded json\nargs = json.loads(sys.argv[1])\n\n#format phone number as E164\ndef parse_phone_number(phone_number):\n try:\n return phonenumbers.format_number(phonenumbers.parse(phone_number, \"US\"), phonenumbers.PhoneNumberFormat.E164)\n except phonenumbers.NumberParseException:\n pass\n return None\n\n#scan the csv file do some filter and modify the data and then write the output to a new csv file\npl.scan_csv(args['path'], sep=args['delimiter']).select(\n [args['column']]\n).with_columns(\n #convert the int phne number as string and apply the parse_phone_number function\n [pl.col(args['column']).cast(pl.Utf8).apply(parse_phone_number).alias(args['column']),\n #add another column list_id with value 100\n pl.lit(args['list_id']).alias(\"list_id\")\n ]\n).filter(\n #filter nulls\n pl.col(args['column']).is_not_null()\n).unique(keep=\"last\").collect().write_csv(args['saved_path'], sep=\",\")\n\nI tested a file with 800k rows and 23 columns (150mb) and it takes around 20 seconds and more than 500mb ram then it completes the task.\nIs this normal? Can I optimize the performance (the memory usage at least)?\nI'm really new with Polars and I work with PHP and I'm very noob at python too, so sorry if my code looks bit dumb haha.","Title":"Python Polars consuming high memory and taking longer time","Tags":"python,pandas,python-polars","AnswerCount":2,"A_Id":75351869,"Answer":"You are using an apply, which means you are effectively writing a python for loop. This often is 10-100x slower than using expressions.\nTry to avoid apply. And if you do use apply, don't expect it to be fast.\nP.S. you can reduce memory usage by not casting the whole column to Utf8, but instead cast inside your apply function. Though I don't think using 500MB is that high. Ideally polars uses as much RAM as available without going OOM. 
Unused RAM might be wasted potential.","Users Score":4,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75351823,"CreationDate":"2023-02-05 11:17:41","Q_Score":1,"ViewCount":533,"Question":"I'm trying to use PySpark to read from Avro file into dataframe, do some transformations and write the dataframe out to HDFS as hive tables using the code below. The file format for the hive tables is parquet.\ndf.write.mode(\"overwrite\").format(\"hive\").insertInto(\"mytable\")\n#this write a partition every day. When re-run, it would overwrite that run day's partition \n\nThe problem is, when the source data has a schema change, like added a column, it will fail with an error saying: source file structure not match with existing table schema. How should I handle this case programmatically? Many thanks for your help.\nEdited :I want the new schema changes to be reflected in target table. I'm looking for a programmatic way to do this.","Title":"PySpark- How to handle source data schema change","Tags":"python,dataframe,apache-spark,pyspark,hive","AnswerCount":3,"A_Id":75379697,"Answer":"You should be able to query off the system tables. You can run a comparison on these to see what changes have occurred since your last run.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75352647,"CreationDate":"2023-02-05 13:34:19","Q_Score":1,"ViewCount":112,"Question":"For example, I have a post and I want to update it with tags and some custom field, like 'rating' or 'mood' (not using any plugin, only WP built-in options for custom fields and REST API).\nr = requests.post(WP_url, params = {'tags': tags, 'rating': rating}, auth = wp_auth)\nSomething like this. It works great for updating existing post parameters and fields, but I cannot find a way to create a custom field using Python API requests only.","Title":"How do I make a Python request for WordPress REST API to create a custom field?","Tags":"python,wordpress,rest,python-requests","AnswerCount":1,"A_Id":75352704,"Answer":"I don't think it is possible to create a new field from request. Its depend on you WP Rest API Server, how it handles the excess argument you, if you API create a new field for any excess provided then only it will be possible to create new field.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75354472,"CreationDate":"2023-02-05 18:15:55","Q_Score":2,"ViewCount":82,"Question":"I created a Pixel class for image processing (and learn how to build a class). A full image is then a 2D numpy.array of Pixel but when I added a __getattr__ method , it stopped to work, because numpy wants an __array_struct__ attribute.\nI tried to add this in __getattr__:\nif name == '__array_struct__':\n return object.__array_struct__\n\nNow it works but I get\n'''DeprecationWarning: An exception was ignored while fetching the attribute __array__ from an object of type 'Pixel'. With the exception of AttributeError NumPy will always raise this exception in the future. Raise this deprecation warning to see the original exception. 
(Warning added NumPy 1.21)\nI = np.array([Pixel()],dtype = Pixel)'''\n\na part of the class:\nclass Pixel:\n def __init__(self,*args):\n\n #things to dertermine RGB\n self.R,self.G,self.B = RGB\n \n #R,G,B are float between 0 and 255\n ...\n def __getattr__(self,name):\n \n if name == '__array_struct__':\n return object.__array_struct__\n if name[0] in 'iI':\n inted = True\n name = name[1:]\n else:\n inted = False\n \n if len(name)==1:\n n = name[0]\n\n if n in 'rgba':\n value = min(1,self.__getattribute__(n.upper())\/255)\n \n elif n in 'RGBA':\n value = min(255,self.__getattribute__(n))\n assert 0<=value\n else:\n h,s,v = rgb_hsv(self.rgb)\n if n in 'h':\n value = h\n elif n == 's':\n value = s\n elif n == 'v':\n value = v\n elif n == 'S':\n value = s*100\n elif n == 'V':\n value = v*100\n elif n == 'H':\n value = int(h)\n if inted:\n return int(value)\n else:\n return value\n else:\n value = []\n for n in name:\n try:\n v = self.__getattribute__(n)\n except AttributeError:\n v = self.__getattr__(n)\n if inted:\n value.append(int(v))\n else:\n value.append(v)\n return value","Title":"How do I store objects I created in np.array if a __getattr__ exists?","Tags":"python,numpy-ndarray","AnswerCount":2,"A_Id":75354910,"Answer":"Your class should either implement __array__ or raise an AttributeError when numpy tries to get it. The warning message says you raised some other error and that numpy will not accept that in the future. I haven't figured out your code well enough to know, but it could be that calling self.__getattr__(n) inside of __getattr__ hits a maximum recursion error.\nobject.__array_struct__ doesn't exist and so just by luck its AttributeError exception is what numpy was looking for. A better strategy is to raise AttributeError for anything that doesn't meet the selection criteria for your automatically generated attributes. Then you can take out the special case for __array_struct__ that doesn't work properly anyway.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75354617,"CreationDate":"2023-02-05 18:37:23","Q_Score":3,"ViewCount":1365,"Question":"When I do pip install dotenv it says this -\n`Collecting dotenv\nUsing cached dotenv-0.0.5.tar.gz (2.4 kB)\nPreparing metadata (setup.py) ... error\nerror: subprocess-exited-with-error\n\u00d7 python setup.py egg_info did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> [72 lines of output]\nC:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. 
Requirements should be satisfied by\na PEP 517 installer.\nwarnings.warn(\nerror: subprocess-exited-with-error\n python setup.py egg_info did not run successfully.\n exit code: 1\n \n [17 lines of output]\n Traceback (most recent call last):\n File \"\", line 2, in \n File \"\", line 14, in \n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\__init__.py\", line 2, in \n from setuptools.extension import Extension, Library\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\extension.py\", line 5, in \n from setuptools.dist import _get_unpatched\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\dist.py\", line 7, in \n from setuptools.command.install import install\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\command\\__init__.py\", line 8, in \n from setuptools.command import install_scripts\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\command\\install_scripts.py\", line 3, in \n from pkg_resources import Distribution, PathMetadata, ensure_directory\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\pkg_resources.py\", line 1518, in \n register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'\n [end of output]\n \n note: This error originates from a subprocess, and is likely not a problem with pip.\n error: metadata-generation-failed\n \n Encountered error while generating package metadata.\n \n See above for output.\n \n note: This is an issue with the package mentioned above, not pip.\n hint: See above for details.\n Traceback (most recent call last):\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\installer.py\", line 82, in fetch_build_egg\n subprocess.check_call(cmd)\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\subprocess.py\", line 413, in check_call\n raise CalledProcessError(retcode, cmd)\n subprocess.CalledProcessError: Command '['C:\\\\Users\\\\Anju Tiwari\\\\AppData\\\\Local\\\\Programs\\\\Python\\\\Python311\\\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\\\Users\\\\ANJUTI~1\\\\AppData\\\\Local\\\\Temp\\\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1.\n \n The above exception was the direct cause of the following exception:\n \n Traceback (most recent call last):\n File \"\", line 2, in \n File \"\", line 34, in \n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-install-j7w9rs9u\\dotenv_0f4daa500bef4242bb24b3d9366608eb\\setup.py\", line 13, in \n setup(name='dotenv',\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\__init__.py\", line 86, in setup\n _install_setup_requires(attrs)\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\__init__.py\", line 80, in _install_setup_requires\n dist.fetch_build_eggs(dist.setup_requires)\n File \"C:\\Users\\Anju 
Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\dist.py\", line 875, in fetch_build_eggs\n resolved_dists = pkg_resources.working_set.resolve(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 789, in resolve\n dist = best[req.key] = env.best_match(\n ^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 1075, in best_match\n return self.obtain(req, installer)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 1087, in obtain\n return installer(requirement)\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\dist.py\", line 945, in fetch_build_egg\n return fetch_build_egg(self, req)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\installer.py\", line 84, in fetch_build_egg\n raise DistutilsError(str(e)) from e\n distutils.errors.DistutilsError: Command '['C:\\\\Users\\\\Anju Tiwari\\\\AppData\\\\Local\\\\Programs\\\\Python\\\\Python311\\\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\\\Users\\\\ANJUTI~1\\\\AppData\\\\Local\\\\Temp\\\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1.\n [end of output]\n\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: metadata-generation-failed\n\u00d7 Encountered error while generating package metadata.\n\u2570\u2500> See above for output.\nnote: This is an issue with the package mentioned above, not pip.\nhint: See above for details.`\nI tried doing pip install dotenv but then that error come shown above.\nI also tried doing pip install -U dotenv but it didn't work and the same error came. Can someone please help me fix this?","Title":"Pip install dotenv, Error 1 Windows 10 Pro","Tags":"python,error-handling,pip,download,dotenv","AnswerCount":1,"A_Id":75354709,"Answer":"pip install python-dotenv worked for me.","Users Score":7,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75355949,"CreationDate":"2023-02-05 22:31:29","Q_Score":1,"ViewCount":47,"Question":"def mean(x):\n return(sum(x)\/len(x))\n\ndef variance(x):\n x_mean = mean(x)\n return sum((x-x_mean)**2)\/(len(x)-1)\n\ndef standard_deviation(x):\n return math.sqrt(variance(x))\n\nThe functions above build on each other. They depend on the previous function. What is a good way to implement this in Python? Should I use a class which has these functions? Are there other options?","Title":"Functions depending on other functions in Python","Tags":"python","AnswerCount":1,"A_Id":75356009,"Answer":"Because they are widely applicable, keep them as they are\nMany parts of a program may need to calculate these statistics, and it will save wordiness to not have to get them out of a class. Moreover, the functions actually don't need any class-stored data: they would simply be static methods of a class. 
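That is, the class version would only amount to something like (sketch):\nclass Stats:\n @staticmethod\n def mean(x):\n return sum(x) \/ len(x)\n\n @staticmethod\n def variance(x):\n m = Stats.mean(x)\n return sum((v - m) ** 2 for v in x) \/ (len(x) - 1)\n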
(Which in the old days, we would have simply called \"functions\"!)\nIf they needed to store internal information to work correctly, that is a good reason to put them into a class\nThe advantage in that case is that it is more obvious to the programmer what information is being shared. Moreover, you might want to create two or more instances that had different sets of shared data. That is not the case here.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75356060,"CreationDate":"2023-02-05 22:54:14","Q_Score":1,"ViewCount":304,"Question":"I need a product's unit of stock(quantity). Is it possible with SP API, if possible how can I get it?\nNote: I can get it with SKU like the following code but the product is not listed by my sellers.\nfrom sp_api.api import Inventories\nquantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{\n \"details\": False,\n \"marketplaceIds\": [\"A13V1IB3VIYZZH\"],\n \"sellerSkus\": [\"MY_SKU_1\" , \"MY_SKU_2\"]\n})\nprint(quantity)","Title":"How can I get quantity with SP API Python","Tags":"python,amazon-selling-partner-api","AnswerCount":1,"A_Id":75561704,"Answer":"from sp_api.api import Inventories\nquantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{\n\"details\": False,\n\"marketplaceIds\": [\"A13V1IB3VIYZZH\"],\n\"sellerSkus\": [\"MY_SKU_1\" , \"MY_SKU_2\"]\n})\nprint(quantity)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75356826,"CreationDate":"2023-02-06 02:20:41","Q_Score":1,"ViewCount":3566,"Question":"I'm training a VAE with TensorFlow Keras backend and I'm using Adam as the optimizer. the code I used is attached below.\n def compile(self, learning_rate=0.0001):\n optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n self.model.compile(optimizer=optimizer,\n loss=self._calculate_combined_loss,\n metrics=[_calculate_reconstruction_loss,\n calculate_kl_loss(self)])\n\nThe TensorFlow version I'm using is 2.11.0. The error I'm getting is\nAttributeError: 'Adam' object has no attribute 'get_updates'\n\nI'm suspecting the issues arise because of the version mismatch. Can someone please help me to sort out the issue? Thanks in advance.","Title":"AttributeError: 'Adam' object has no attribute 'get_updates'","Tags":"python,tensorflow","AnswerCount":3,"A_Id":76288587,"Answer":"Of late, I had to use the tensorflow2.5 and I replaced all \"import keras\" by \"import tensorflow.keras\".\nNow I use tensorflow2.12 and I met this error and when I returned those replacements; this error was removed.\nthank you!","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":2},{"Q_Id":75356826,"CreationDate":"2023-02-06 02:20:41","Q_Score":1,"ViewCount":3566,"Question":"I'm training a VAE with TensorFlow Keras backend and I'm using Adam as the optimizer. the code I used is attached below.\n def compile(self, learning_rate=0.0001):\n optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n self.model.compile(optimizer=optimizer,\n loss=self._calculate_combined_loss,\n metrics=[_calculate_reconstruction_loss,\n calculate_kl_loss(self)])\n\nThe TensorFlow version I'm using is 2.11.0. The error I'm getting is\nAttributeError: 'Adam' object has no attribute 'get_updates'\n\nI'm suspecting the issues arise because of the version mismatch. Can someone please help me to sort out the issue? 
Thanks in advance.","Title":"AttributeError: 'Adam' object has no attribute 'get_updates'","Tags":"python,tensorflow","AnswerCount":3,"A_Id":76295165,"Answer":"Two ways worked for me,\n\nBy using tf.keras.optimizers.legacy.SGD - instead of tf.keras.optimizers.SGD\n\nImporting statement is changed from\nimport tensorflow.keras as keras to 'import keras'","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75356848,"CreationDate":"2023-02-06 02:27:11","Q_Score":2,"ViewCount":65,"Question":"I have a column that has name variations that I'd like to clean up. I'm having trouble with the regex expression to remove everything after the first word following a comma.\nd = {'names':['smith,john s','smith, john', 'brown, bob s', 'brown, bob']}\nx = pd.DataFrame(d)\n\nTried:\nx['names'] = [re.sub(r'\/.\\s+[^\\s,]+\/','', str(x)) for x in x['names']]\n\nDesired Output:\n['smith,john','smith, john', 'brown, bob', 'brown, bob']\n\nNot sure why my regex isn't working, but any help would be appreciated.","Title":"Regex - removing everything after first word following a comma","Tags":"python,regex","AnswerCount":2,"A_Id":75356969,"Answer":"Try re.sub(r'\/(,\\s*\\w+).*$\/','$1', str(x))...\nPut the triggered pattern into capture group 1 and then restore it in what gets replaced.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75357819,"CreationDate":"2023-02-06 06:02:38","Q_Score":1,"ViewCount":76,"Question":"I have training data with 2 dimension. (200 results of 4 features)\nI proved 100 different applications with 10 repetition resulting 1000 csv files.\nI want to stack each csv results for machine learning.\nBut I don't know how.\neach of my csv files look like below.\ntest1.csv to numpy array data\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]]\n\nI tried below python code.\npath = os.getcwd()\ncsv_files = glob.glob(os.path.join(path, \"*.csv\"))\ncnt=0\nfor f in csv_files:\n cnt +=1\n seperator = '_'\n app = os.path.basename(f).split(seperator, 1)[0]\n\n if cnt==1:\n a = np.array(preprocess(f))\n b = np.array(app)\n else:\n a = np.vstack((a, np.array(preprocess(f))))\n b = np.append(b,app)\nprint(a)\nprint(b)\n\npreprocess function returns df.to_numpy results for each csv files.\nMy expectation was like below. a(1000, 200, 4)\n[[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]],\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]],\n...\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]]]\n\nHowever, I'm getting this. a(200000, 4)\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]]\n\nI want to access each csv results using a[0] to a[1000] each sub-array looks like (200,4)\nHow can I solve the problem? 
I'm quite lost","Title":"make 3d numpy array using for loop in python","Tags":"python,arrays,numpy,3d,2d","AnswerCount":3,"A_Id":75357911,"Answer":"Make a new list (outside of the loop) and append each item to that new list after reading.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75360485,"CreationDate":"2023-02-06 11:03:48","Q_Score":1,"ViewCount":104,"Question":"I am new to docker and using apptainer for that.\nthe def file is: firstApp.def:\n`Bootstrap: docker\nFrom: ubuntu:22.04\n\n%environment\n export LC_ALL=C\n`\n\nthen I built it as follows and I want it to be writable (I hope I am not so naive), so I can install some packages later:\n`apptainer build --sandbox --fakeroot firstApp.sif firstApp.def\n`\n\nnow I do not know how to install Python3 (preferably, 3.8 or later).\nI tried to add the following command lines to the def file:\n`%post\n apt-get -y install update\n apt-get -y install python3.8 `\n\nit raises these errors as well even without \"apt-get -y install python3.8\":\nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nE: Unable to locate package update\nFATAL: While performing build: while running engine: exit status 100","Title":"How to install Python or R in an apptainer?","Tags":"python,docker,apptainer","AnswerCount":1,"A_Id":75740197,"Answer":"This work for me\n%post\napt-get update && apt-get install -y netcat python3.8","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75360628,"CreationDate":"2023-02-06 11:19:44","Q_Score":1,"ViewCount":45,"Question":"I defined a function which returns a third order polynomial function for either a value, a list or a np.array:\ndef two_d_third_order(x, a, b, c, d):\n return a + np.multiply(b, x) + np.multiply(c, np.multiply(x, x)) + np.multiply(d, np.multiply(x, np.multiply(x, x)))\n\nThe issue I noticed is, however, when I use \"two_d_third_order\" on the following two inputs:\n1500\n1500.0\nWith (a, b, c, d) = (1.20740028e+00, -2.93682465e-03, 2.29938078e-06, -5.09134552e-10), I get two different results:\n2.4441\n0.2574\n, respectively. I don't know how this is possible, and any help would be appreciated.\nI tried several inputs, and somehow the inclusion of a floating point on certain values (despite representing the same numerical value) changes the end result.","Title":"Python code yielding different result for same numerical value, depending on inclusion of precision point","Tags":"python-3.x,numpy,scipy","AnswerCount":2,"A_Id":75362712,"Answer":"Python uses implicit data type conversions. When you use only integers (like 1500), there is a loss of precision in all subsequent operations. Whereas when you pass it a float or double (like 1500.0), subsequent operations are performed with the associated datatype, i.e in this case with higher precision.\nThis is not a \"bug\" so to speak, but generally how Python operates without the explicit declaration of data types. Languages like C and C++ require explicit data type declarations and explicit data type casting to ensure operations are performed in the prescribed precision formats. Can be a boon or a bane depending on usage.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75362126,"CreationDate":"2023-02-06 13:46:28","Q_Score":2,"ViewCount":1574,"Question":"I try to use an assembly for .NET framework 4.8 via Pythonnet. I am using version 3.0.1 with Python 3.10. 
The documentation of Pythonnet is stating:\n\nYou must set Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable starting with version 3.0, otherwise you will receive BadPythonDllException (internal, derived from MissingMethodException) upon calling Initialize. Typical values are python38.dll (Windows), libpython3.8.dylib (Mac), libpython3.8.so (most other Unix-like operating systems).\n\nHowever, the documentation unfortunately is not stating how the property is set and I do not understand how to do this.\nWhen I try:\nimport clr\nfrom pythonnet import load\n\nload('netfx')\n\nclr.AddReference(r'path\\to\\my.dll')\n\nunsurprisingly the following error is coming up\nFailed to initialize pythonnet: System.InvalidOperationException: This property must be set before runtime is initialized\n bei Python.Runtime.Runtime.set_PythonDLL(String value)\n bei Python.Runtime.Loader.Initialize(IntPtr data, Int32 size)\n bei Python.Runtime.Runtime.set_PythonDLL(String value)\n bei Python.Runtime.Loader.Initialize(IntPtr data, Int32 size)\n[...]\nin load\n raise RuntimeError(\"Failed to initialize Python.Runtime.dll\")\nRuntimeError: Failed to initialize Python.Runtime.dll\n\nThe question now is, where and how the Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable is set\nThanks,\nJens","Title":"Trouble shooting using Pythonnet and setting Runtime.PythonDLL property","Tags":"python,.net,clr,python.net","AnswerCount":2,"A_Id":75368080,"Answer":"I believe this is because import clr internally calls pythonnet.load, and in the version of pythonnet you are using this situation does not print any warning.\nE.g. the right way is to call load before you call import clr for the first time.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75362342,"CreationDate":"2023-02-06 14:04:40","Q_Score":1,"ViewCount":27,"Question":"I have a virtual environment where I am developing a Python package. The folder tree is the following:\nworking-folder\n|-setup.py\n|-src\n |-my_package\n |-__init__.py\n |-my_subpackage\n |-__init__.py\n |-main.py\n\nmain.py contains a function my_main that ideally, I would want to run as a bash command.\nI am using setuptools and the setup function contains the following line of code\nsetup(\n...\n entry_point={\n \"console_scripts\": [\n \"my-command = src.my_package.my_subpackage.main:my_main\",\n ]\n },\n...\n)\n\n\nWhen I run pip install . the package gets correctly installed in the virtual environment. However, when running my-command on the shell, the command does not exist.\nAm I missing some configuration to correctly generate the entry point?","Title":"Python entry_point in virtual environment not working","Tags":"python,package,virtualenv,setuptools,entry-point","AnswerCount":1,"A_Id":75386087,"Answer":"I simply mistyped the argument entry_point, which actually is entry_points. Unfortunately, I was not getting any output errors.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75362809,"CreationDate":"2023-02-06 14:45:06","Q_Score":2,"ViewCount":274,"Question":"I have a figure with different plots on several axes. Some of those axes do not play well with some of the navigation toolbar actions. In particular, the shortcuts to go back to the home view and the ones to go to the previous and next views.\nIs there a way to disable those shortcuts only for those axes? 
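For the console-script question just above, the accepted answer's fix is only the argument name: entry_points, plural. A sketch keeping the question's own module path as-is:

from setuptools import setup

setup(
    # "entry_points" (plural); a misspelled "entry_point" is silently ignored by setuptools
    entry_points={
        "console_scripts": [
            "my-command = src.my_package.my_subpackage.main:my_main",
        ]
    },
)

After fixing the name, rerun pip install . and the my-command script appears in the virtual environment's bin directory.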
For example, in one of the two in the figure from the example below.\nimport matplotlib.pyplot as plt\n\n# Example data for two plots\nx1 = [1, 2, 3, 4]\ny1 = [10, 20, 25, 30]\nx2 = [2, 3, 4, 5]\ny2 = [5, 15, 20, 25]\n\n# Create figure and axes objects\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))\n\n# Plot data on the first axis\nax1.plot(x1, y1)\nax1.set_title(\"First Plot\")\n\n# Plot data on the second axis\nax2.plot(x2, y2)\nax2.set_title(\"Second Plot\")\n\n# Show plot\nplt.show()\n\n\nEdit 1:\nThe following method will successfully disable the pan and zoom tools from the GUI toolbox in the target axis.\nax2.set_navigate(False)\n\nHowever, the home, forward, and back buttons remain active. Is there a trick to disable also those buttons in the target axis?","Title":"How to disable the Matplotlib navigation toolbar in a particular axis?","Tags":"python,matplotlib,user-interface,widget,interactive","AnswerCount":3,"A_Id":75447405,"Answer":"You can try to use ax2.get_xaxis().set_visible(False)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75363011,"CreationDate":"2023-02-06 15:03:27","Q_Score":1,"ViewCount":362,"Question":"I am trying to automate the process of liking pages on Facebook. I've got a list of each page's link and I want to open and like them one by one.\nI think the Like button doesn't have any id or name, but it is in a span class.\nLike<\/span>\n\nI used this code to find and click on the \"Like\" button.\ndef likePages(links, driver):\n for link in links:\n driver.get(link)\n time.sleep(3)\n driver.find_element(By.LINK_TEXT, 'Like').click()\n\nAnd I get the following error when I run the function:\nselenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element","Title":"How to find and click the \"Like\" button on Facebook page using Selenium","Tags":"python,selenium,selenium-webdriver,xpath,nosuchelementexception","AnswerCount":2,"A_Id":75363222,"Answer":"You cannot use Link_Text locator as Like is not a hyperlink. 
Use XPath instead, see below:\nXPath : \/\/span[contains(text(),\"Like\")]\ndriver.find_element(By.XPATH, '\/\/span[contains(text(),\"Like\")]').click()","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75367685,"CreationDate":"2023-02-06 23:46:22","Q_Score":1,"ViewCount":266,"Question":"i have a package and in it i use pyproject.toml\nand for proper typing i need stubs generated, although\nits kinda annoying to generate them manually every time,\nso, is there a way to do it automatically using it ?\ni just want it to run stubgen and thats it, just so\nmypy sees the stubs and its annoying seeing linters\nthrow warnings and you keep having to # type: ignore\nheres what i have as of now, i rarely do this so its probably\nnot that good :\n[build-system]\nrequires = [\"setuptools\", \"setuptools-scm\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"<...>\"\nauthors = [\n {name = \"<...>\", email = \"<...>\"},\n]\ndescription = \"<...>\"\nreadme = \"README\"\nrequires-python = \">=3.10\"\nkeywords = [\"<...>\"]\nlicense = {text = \"GNU General Public License v3 or later (GPLv3+)\"}\nclassifiers = [\n \"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)\",\n \"Programming Language :: Python :: 3\",\n]\ndependencies = [\n \"<...>\",\n]\ndynamic = [\"version\"]\n\n\n[tool.setuptools]\ninclude-package-data = true\n\n[tool.setuptools.package-data]\n<...> = [\"*.pyi\"]\n\n[tool.pyright]\npythonVersion = \"3.10\"\nexclude = [\n \"venv\",\n \"**\/node_modules\",\n \"**\/__pycache__\",\n \".git\"\n]\ninclude = [\"src\", \"scripts\"]\nvenv = \"venv\"\nstubPath = \"src\/stubs\"\ntypeCheckingMode = \"strict\"\nuseLibraryCodeForTypes = true\nreportMissingTypeStubs = true\n\n[tool.mypy]\nexclude = [\n \"^venv\/.*\",\n \"^node_modules\/.*\",\n \"^__pycache__\/.*\",\n]\n\nthanks for the answers in advance","Title":"how to automatically generate mypy stubs using pyproject.toml","Tags":"python,python-3.x,mypy,pyproject.toml","AnswerCount":1,"A_Id":75371297,"Answer":"just make a shellscript and add it to pyproject.toml as a script\n:+1:","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75368407,"CreationDate":"2023-02-07 02:27:36","Q_Score":1,"ViewCount":43,"Question":"I made an .exe file using pyinstaller, but when I run the file it opens a PowerShell window as well. I was wondering if there is anyway I can get it to not open so I just have the python program open.\nI haven't really tried anything as I don't really know what I'm doing.","Title":".exe file opening Powershell Window","Tags":"python,powershell,pyinstaller,exe","AnswerCount":2,"A_Id":75368754,"Answer":"if you run it from terminal, you can use this command:\nstart \/min \"\" \"path\\file_name.exe\"","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75368407,"CreationDate":"2023-02-07 02:27:36","Q_Score":1,"ViewCount":43,"Question":"I made an .exe file using pyinstaller, but when I run the file it opens a PowerShell window as well. I was wondering if there is anyway I can get it to not open so I just have the python program open.\nI haven't really tried anything as I don't really know what I'm doing.","Title":".exe file opening Powershell Window","Tags":"python,powershell,pyinstaller,exe","AnswerCount":2,"A_Id":75368529,"Answer":"When running pyinstaller be sure to use the --windowed argument. 
For example:\n\npyinstaller \u2013-onefile myFile.py \u2013-windowed","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75368490,"CreationDate":"2023-02-07 02:45:45","Q_Score":1,"ViewCount":112,"Question":"this is my data X_train prepared for LSTM of shape (7000, 2, 200)\n[[[0.500858 0. 0.5074856 ... 1. 0.4911533 0. ]\n [0.4897923 0. 0.48860878 ... 0. 0.49446714 1. ]]\n\n [[0.52411383 0. 0.52482396 ... 0. 0.48860878 1. ]\n [0.4899698 0. 0.48819458 ... 1. 0.4968341 1. ]]\n\n ...\n\n [[0.6124623 1. 0.6118705 ... 1. 0.6328777 0. ]\n [0.6320492 0. 0.63512635 ... 1. 0.6960175 0. ]]\n\n [[0.6118113 1. 0.6126989 ... 0. 0.63512635 1. ]\n [0.63530385 1. 0.63595474 ... 1. 0.69808865 0. ]]]\n\nI create my sequential model\nmodel = Sequential()\nmodel.add(LSTM(units = 50, activation = 'relu', input_shape = (X_train.shape[1], 200)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(1, activation = 'linear'))\nmodel.compile(loss = 'mean_squared_error', optimizer = 'adam')\n\nThen I fit my model:\nhistory = model.fit(\n X_train, \n Y_train, \n epochs = 20, \n batch_size = 200, \n validation_data = (X_test, Y_test), \n verbose = 1, \n shuffle = False,\n)\nmodel.summary()\n\nAnd at the end I can see something like this:\n Layer (type) Output Shape Param # \n=================================================================\n lstm_16 (LSTM) (None, 2, 50) 50200 \n \n dropout_10 (Dropout) (None, 2, 50) 0 \n \n dense_10 (Dense) (None, 2, 1) 51 \n\nWhy does it say that output shape have a None value as a first element? Is it a problem? Or it should be like this? What does it change and how can I change it?\nI will appreciate any help, thanks!","Title":"Keras LSTM None value output shape","Tags":"python,tensorflow,keras,lstm","AnswerCount":1,"A_Id":75368566,"Answer":"The first value in TensorFlow is always reserved for the batch-size. Your model doesn't know in advance what is your batch-size so it makes it None. If we go into more details let's suppose your dataset is 1000 samples and your batch-size is 32. So, 1000\/32 will become 31.25, if we just take the floor value which is 31. So, there would be 31 batches in a total of size 32. But if you look here the total sample size of your dataset is 1000 but you have 31 batches of size 32, which is 32 * 31 = 992, where 1000 - 992 = 8, it means there would be one more batch of size 8. But the model doesn't know in advance so, what does it do? it reserves a space in the memory where it doesn't define a specific shape for it, in other words, the memory is dynamic for the batch-size. Therefore, you are seeing it None there. So, the model doesn't know in advance what would be the shape of my batch-size so it makes it None so it should know it later when it computes the first epoch meaning computes all of the batches.\nThe None value can't be changed because it is Dynamic in Tensorflow, the model knows it and fix it when your model completes its first epoch. So, always set the shapes which are after it like in your case it is (2, 200). 
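The batch arithmetic in this answer, as a runnable sketch:

samples, batch_size = 1000, 32
full_batches, remainder = divmod(samples, batch_size)
print(full_batches, remainder)  # 31 full batches of 32, plus one final batch of 8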
The 7000 is your model's total number of samples so the model doesn't know in advance what would be your batch-size and the other big issue is most of the time your batch-size is not evenly divisible by your total number of samples in dataset therefore, it is necessary for the model to make it None to know it later when it computes all the batches in the very first epoch.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75368928,"CreationDate":"2023-02-07 04:20:39","Q_Score":1,"ViewCount":123,"Question":"I have docker file like below:\nFROM continuumio\/miniconda3\n\nRUN conda update -n base -c defaults conda\nRUN conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service\n\nCOPY .\/src \/app\n\nWORKDIR \/app\n\nCMD [\"conda\", \"run\", \"-n\", \"pymc3_env\", \"python\", \"ma.py\"]\n\nI get the following error:\n------ \n > [3\/5] RUN conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service: \n#0 0.400 Collecting package metadata (current_repodata.json): ...working... done \n#0 9.148 Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source. \n#0 9.149 Collecting package metadata (repodata.json): ...working... done \n#0 45.81 Solving environment: ...working... failed \n#0 45.82 \n#0 45.82 PackagesNotFoundError: The following packages are not available from current channels:\n#0 45.82 \n#0 45.82 - mkl-service\n#0 45.82 - mkl\n#0 45.82 \n#0 45.82 Current channels:\n#0 45.82 \n#0 45.82 - https:\/\/conda.anaconda.org\/conda-forge\/linux-aarch64\n#0 45.82 - https:\/\/conda.anaconda.org\/conda-forge\/noarch\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/main\/linux-aarch64\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/main\/noarch\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/r\/linux-aarch64\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/r\/noarch\n#0 45.82 \n#0 45.82 To search for alternate channels that may provide the conda package you're\n#0 45.82 looking for, navigate to\n#0 45.82 \n#0 45.82 https:\/\/anaconda.org\n#0 45.82 \n#0 45.82 and use the search bar at the top of the page.\n#0 45.82 \n#0 45.82 \n------\nfailed to solve: executor failed running [\/bin\/sh -c conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service]: exit code: 1\n\n\nCan anybody help me to understand why conda could not find mkl and mkl-service in conda-forge channel and what do I need to resolve this?\nI am using macos as a host, if it is any concern.\nThanks in advance for any help.","Title":"unable to install mkl mkl-service using conda in docker","Tags":"python,linux,docker,anaconda,conda","AnswerCount":1,"A_Id":75375632,"Answer":"MKL only works for x86_64, that is the Docker image must use the platform linux\/amd64. 
So, either specify --platform=linux\/amd64 in the build command line or in the FROM.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75370722,"CreationDate":"2023-02-07 08:32:02","Q_Score":1,"ViewCount":56,"Question":"I am trying to get the last message that user 476686545034674176 sent in channel 1049386904065409054 and when I try to debug it, I either get a weird output or an error that says it is a Nonetype after I got an output that should trigger if it got a message.\nI tried:\n@client.event\nasync def on_ready():\n print('Logged in as')\n print(client.user.name)\n print(client.user.id)\n print('------')\n\n await tree.sync(guild=discord.Object(id=1049253865112997888))\n\n aviv_venting_about_his_shitass_brothers = client.get_channel(1049386904065409054)\n global last_message\n async for message in aviv_venting_about_his_shitass_brothers.history(limit=1000):\n if message.author.id == 476686545034674176:\n last_message = message\n\n if last_message is None:\n print('no messages found')\n elif last_message.content == None:\n print('invalid message')\n else:\n print(f'found message {last_message.content}')\n break\n\nThere is a line later in the code:\n await interaction.response.send_message(f'aviv last vented at {datetime.datetime.fromtimestamp(last_message.created_at).strftime(\"%Y-%m-%d %H:%M:%S\")} <@{interaction.user.id}>')\n\nand it gives me this error:\ndiscord.app_commands.errors.CommandInvokeError: Command 'last_vent' raised an exception: TypeError: 'datetime.datetime' object cannot be interpreted as an integer\nI expected to get an output when the bot starts up and I either get no output or 'found message'","Title":"How do I get the last message sent by a certain user in a certain channel with discord.py?","Tags":"python,discord.py","AnswerCount":1,"A_Id":75370912,"Answer":"Your problem is not that the bot doesn't find a matching message, its problem lies within the execution of the send_message command. Read the error message. You're trying to pass an invalid type for an argument. I am not familiar with the intricacies of discord.py, but if I could hazard a guess, last_message.created_at already is a datetime object.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75372032,"CreationDate":"2023-02-07 10:26:26","Q_Score":4,"ViewCount":157,"Question":"The subject contains the whole idea. I came accross code sample where it shows something like:\nasync for item in getItems():\n await item.process()\n\nAnd others where the code is:\nfor item in await getItems():\n await item.process()\n\nIs there a notable difference in these two approaches?","Title":"In Python, what is the difference between `async for x in async_iterator` and `for x in await async_iterator`?","Tags":"python,python-3.x,asynchronous,python-asyncio","AnswerCount":2,"A_Id":75373144,"Answer":"Those are completely different.\nThis for item in await getItems() won't work (will throw an error) if getItems() is an asynchronous iterator or asynchronous generator, it may be used only if getItems is a coroutine which, in your case, is expected to return a sequence object (simple iterable).\nasync for is a conventional (and pythonic) way for asynchronous iterations over async iterator\/generator.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75372851,"CreationDate":"2023-02-07 11:40:36","Q_Score":1,"ViewCount":326,"Question":"I'm trying to use TA-lib for a hobby project. 
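A runnable sketch contrasting the two forms from the asyncio answer above, with hypothetical gen_items and fetch_items stand-ins:

import asyncio

async def gen_items():       # async generator: consume with `async for`
    for i in range(3):
        yield i

async def fetch_items():     # plain coroutine returning a list: consume with `for ... in await`
    return [0, 1, 2]

async def main():
    async for item in gen_items():
        print("async for:", item)
    for item in await fetch_items():
        print("for ... in await:", item)

asyncio.run(main())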
I found some code-snippets as reference telling me to do the following;\nimport talib as ta\nta.add_all_ta_features(\"some parameters here\")\n\ni get the following error when running the code:\nta.add_all_ta_features( AttributeError: module 'talib' has no attribute 'add_all_ta_features'\nIt looks like i need to manualy add all the features i want as i cant find the attribute .add_all_ta_features in the talib folder.\ni've installed TA-Lib and made it a 64-bit library using Visual studio and managed to run TA-Lib in other projects before but have never used the .add_all_ta_features-attribute.\nDoes anybody know how i can fix this? Google seems to not return any usefull results when searched for this. The documentation i'm following also does not mention anything about this attribute.\ni tried using pandas_ta and tried using the Google colab space, but both return the same error.","Title":"TA-LIB module has no attribute 'add_all_ta_features'","Tags":"python,ta-lib","AnswerCount":1,"A_Id":75382873,"Answer":"Found the problem. I was trying to use TA-Lib as TA, but nowhere was it specified that we need a seperate library, not findable through the python package mangager simply called TA.\nThanks!","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75374930,"CreationDate":"2023-02-07 14:42:37","Q_Score":1,"ViewCount":58,"Question":"I am trying to find all observations that are located within 100 meters of a set of coordinates.\nI have two dataframes, Dataframe1 has 400 rows with coordinates, and for each row, I need to find all the observations from Dataframe2 that are located within 100 meters of that location, and count them. Ideally,\nBoth the dataframes are formatted like this:\n| Y | X | observations_within100m |\n|:----:|:----:|:-------------------------:|\n|100 |100 | 22 |\n|110 |105 | 25 |\n|110 |102 | 11 |\n\n\nI am looking for the most efficient way to do this computation, as dataframe2 has over a 200 000 dwelling locations. I know it can be done with applying a distance function with something as a for loop but I was wondering what the best method is here.","Title":"Most resource-efficient way to calculate distance between coordinates","Tags":"python,pandas","AnswerCount":2,"A_Id":75375261,"Answer":"If there's a small area you're working on, you could make a grid of all known locations, then for each point precompute a list of which entries in df1 which are withing 100m from that point.\nStep 2 would be to go thru the 200k lines df2 and increase the count for the df1 entries found at the point correspondingly.\nOtherwise, this problem is similar to collision detection, for which there might be smart implementations. e.g. pygame has one, no idea though how efficient. Depending on how sparse the area is there might be gains thru dividing it into cells, so you'd only have to detect collision\/distance for the entries in that cell, decreasing from 400 objects you'd have to check against for each of the 200k.\nHope the answer was helpful and good luck!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75375998,"CreationDate":"2023-02-07 16:02:56","Q_Score":1,"ViewCount":284,"Question":"My team is using AWS Glue endpoints to locally develop using VS code notebooks, this morning for some reason - our endpoints get the error below. Its 3 machines (Mac, Linux and Windows) that did not update anything and just suddenly got this error when trying to use the Glue endpoint. Anyone else getting this error? 
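For the 100-metre neighbour-count question above, a vectorised sketch using SciPy's cKDTree (an extra dependency the question does not mention; assumes X and Y are planar coordinates in metres, with df1 and df2 as described):

from scipy.spatial import cKDTree

tree = cKDTree(df2[["X", "Y"]].to_numpy())
hits = tree.query_ball_point(df1[["X", "Y"]].to_numpy(), r=100.0)
df1["observations_within100m"] = [len(h) for h in hits]

Building the tree is a one-off O(n log n) cost, after which each of the 400 queries touches only nearby points instead of all 200,000 rows.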
What's even stranger is that the fourth developer, whose setup is no different, can still use the endpoint.\nIf I create a notebook using jupyter notebook and use the glue pyspark kernel there, it will work. Any attempt at updating or redownloading Python \/ the packages has no effect.\nWhen I add a print to this library I can see the Data object is empty. If I comment this line out I am unable to see outputs from my notebook.\nAnyone else getting this error?\nThe error:\nTrying to create a Glue session for the kernel.\nWorker Type: G.1X\nNumber of Workers: 2\nSession ID: 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5\nApplying the following default arguments:\n--glue_kernel_version 0.35\n--enable-glue-datacatalog true\n--additional-python-modules great-expectations==0.15.17\n--conf spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED --conf spark.sql.legacy.parquet.int96RebaseModeInRead=CORRECTED --conf spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED\n--enable-job-insights true\nWaiting for session 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 to get into ready status...\nSession 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 has been created\n\nException encountered while running statement: 'TextPlain' \nTraceback (most recent call last):\n File \"\/home\/user\/.local\/lib\/python3.10\/site-packages\/aws_glue_interactive_sessions_kernel\/glue_pyspark\/GlueKernel.py\", line 163, in do_execute\n self._send_output(statement_output[\"Data\"][\"TextPlain\"])\nKeyError: 'TextPlain'","Title":"Exception encountered while running statement: 'TextPlain' for Glue session","Tags":"python,aws-glue","AnswerCount":1,"A_Id":75389505,"Answer":"I had the same issue but I managed to fix it by\ndowngrading to python3.9 from python3.10,\nupdating aws-glue-sessions to 0.37.0 from 0.35.0,\nand downgrading psutil to 5.9.1.\nThere could potentially be other issues but those should be apparent in the \"Output\" tab in VSCode.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75378061,"CreationDate":"2023-02-07 19:23:42","Q_Score":1,"ViewCount":145,"Question":"Can mypy check that a NumPy array of floats is passed as a function argument? For the code below mypy is silent when an array of integers or booleans is passed.\nimport numpy as np\nimport numpy.typing as npt\n\ndef half(x: npt.NDArray[np.cfloat]):\n return x\/2\n\nprint(half(np.full(4,2.1)))\nprint(half(np.full(4,6))) # want mypy to complain about this\nprint(half(np.full(4,True))) # want mypy to complain about this","Title":"How to use mypy to ensure that a NumPy array of floats is passed as function argument?","Tags":"python,numpy,numpy-ndarray,mypy","AnswerCount":1,"A_Id":75378152,"Answer":"Mypy can check the type of values passed as function arguments, but it currently has limited support for NumPy arrays. You can use the numpy.typing.NDArray type hint, as in your code, to specify that the half function takes a NumPy array of complex floats as an argument. However, mypy will not raise an error if an array of integers or booleans is passed, as it currently cannot perform type-checking on the elements of the array.
To ensure that only arrays of complex floats are passed to the half function, you will need to write additional runtime checks within the function to validate the input.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75378081,"CreationDate":"2023-02-07 19:26:13","Q_Score":1,"ViewCount":152,"Question":"I have two relatively large dataframes (less than 5MB), which I receive from my front-end as files via my API Gateway. I am able to receive the files and can print the dataframes in my receiver Lambda function. From my Lambda function, I am trying to invoke my state machine (which just cleans up the dataframes and does some processing). However, when passing my dataframe to my step function, I receive the following error:\nClientError: An error occurred (413) when calling the StartExecution operation: HTTP content length exceeded 1049600 bytes\n\nMy Receiver Lambda function:\ndict = {}\ndict['username'] = arr[0]\ndict['region'] = arr[1]\ndict['country'] = arr[2]\ndict['grid'] = arr[3]\ndict['physicalServers'] = arr[4] #this is one dataframe in json format\ndict['servers'] = arr[5] #this is my second dataframe in json format\n\nclient = boto3.client('stepfunctions')\nresponse = client.start_execution(\n stateMachineArn='arn:aws:states:us-west-2:##:stateMachine:MyStateMachineTest',\n name='testStateMachine',\n input= json.dumps(dict)\n)\n\nprint(response)\n\nIs there something I can do to pass in my dataframes to my step function? The dataframes contain sensitive customer data which I would rather not store in my S3. I realize I can store the files into S3 (directly from my front-end via pre-signed URLs) and then read the files from my step function but this is one of my least preferred approaches.","Title":"Passing in a dataframe to a stateMachine from Lambda","Tags":"python,pandas,amazon-web-services,aws-lambda,aws-step-functions","AnswerCount":1,"A_Id":75378554,"Answer":"Passing them as direct input via input= json.dumps(dict) isn't going to work, as you are finding. You are running up against the size limit of the request. You need to save the dataframes to a file, somewhere the step functions can access it, and then just pass the file paths as input to the step function.\nThe way I would solve this is to write the data frames to files in the Lambda file system, with some random ID, perhaps the Lambda invocation ID, in the filename. Then have the Lambda function copy those files to an S3 bucket. Finally invoke the step function with the S3 paths as part of the input.\nOver on the Step Functions side, have your state machine expect S3 paths for the physicalServers and servers input values, and use those paths to download the files from S3 during state machine execution.\nFinally, I would configure an S3 lifecycle policy on the bucket, to remove any objects more than a few days old (or whatever time makes sense for your application) so that the bucket doesn't get large and run up your AWS bill.\n\nAn alternative to using S3 would be to use an EFS volume mount in both this Lambda function, and in the Lambda function or (or EC2 or ECS) that your step function is executing. 
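Returning to the mypy question above, one possible shape for such a runtime check (a sketch; np.inexact covers float and complex-float dtypes while rejecting integer and boolean arrays):

import numpy as np
import numpy.typing as npt

def half(x: npt.NDArray[np.cfloat]):
    if not np.issubdtype(x.dtype, np.inexact):  # rejects int and bool arrays at runtime
        raise TypeError(f"expected a floating array, got dtype {x.dtype}")
    return x / 2

half(np.full(4, 2.1))  # passes
half(np.full(4, 6))    # raises TypeError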
With EFS your code could write and read from it just like a local file system, which would eliminate the steps of copying to\/from S3, but you would have to add some code at the end of your step function to clean up the files after you are done since EFS won't do that for you.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75380280,"CreationDate":"2023-02-08 00:18:28","Q_Score":1,"ViewCount":859,"Question":"I am trying to insert data into my database using psycopg2 and I get this weird error. I tried some things but nothing works. This is my code:\ndef insert_transaction():\nglobal username\nnow = datetime.now()\ndate_checkout = datetime.today().strftime('%d-%m-%Y')\ntime_checkout = now.strftime(\"%H:%M:%S\")\n\nusername = \"Peter1\"\n\nconnection_string = \"host='localhost' dbname='Los Pollos Hermanos' user='postgres' password='******'\"\nconn = psycopg2.connect(connection_string)\ncursor = conn.cursor()\ntry:\n query_check_1 = \"\"\"(SELECT employeeid FROM employee WHERE username = %s);\"\"\"\n cursor.execute(query_check_1, (username,))\n employeeid = cursor.fetchone()[0]\n conn.commit()\nexcept:\n print(\"Employee error\")\n\ntry:\n query_check_2 = \"\"\"SELECT MAX(transactionnumber) FROM Transaction\"\"\"\n cursor.execute(query_check_2)\n transactionnumber = cursor.fetchone()[0] + 1\n conn.commit()\nexcept:\n transactionnumber = 1\n\n\"\"\"\"---------INSERT INTO TRANSACTION------------\"\"\"\n\n\nquery_insert_transaction = \"\"\"INSERT INTO transactie (transactionnumber, date, time, employeeemployeeid)\n VALUES (%s, %s, %s, %s);\"\"\"\ndata = (transactionnumber, date_checkout, time_checkout, employeeid)\ncursor.execute(query_insert_transaction, data)\nconn.commit()\nconn.close()\n\nthis is the error:\n\", line 140, in insert_transaction\ncursor.execute(query_insert_transaction, data) psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block","Title":"psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block, dont know how to fix it","Tags":"python,sql,postgresql,psycopg2","AnswerCount":2,"A_Id":76561514,"Answer":"Executing the conn.rollback() function after checking for errors and executing the code again should help!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75381096,"CreationDate":"2023-02-08 03:21:32","Q_Score":1,"ViewCount":204,"Question":"We are developing a prediction model using deepchem's GCNModel.\nModel learning and performance verification proceeded without problems, but it was confirmed that a lot of time was spent on prediction.\nWe are trying to predict a total of 1 million data, and the parameters used are as follows.\nmodel = GCNModel(n_tasks=1, mode='regression', number_atom_features=32, learning_rate=0.0001, dropout=0.2, batch_size=32, device=device, model_dir=model_path)\nI changed the batch size to improve the performance, and it was confirmed that the time was faster when the value was decreased than when the value was increased.\nAll models had the same GPU memory usage.\nFrom common sense I know, it is estimated that the larger the batch size, the faster it will be. 
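For the psycopg2 question above, a sketch of the rollback pattern the answer suggests; the bare except leaves the connection in an aborted transaction unless it is rolled back:

try:
    cursor.execute(query_check_1, (username,))
    employeeid = cursor.fetchone()[0]
except Exception as e:
    conn.rollback()  # clears the aborted transaction so later statements can run
    print("Employee error:", e)

The same rollback belongs in the second except block; otherwise a failed MAX(transactionnumber) query poisons the transaction before the INSERT runs.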
But can you tell me why it works in reverse?\nWe would be grateful if you could also let us know how we can further improve the prediction time.","Title":"In deep learning, can the prediction speed increase as the batch size decreases?","Tags":"python,deep-learning,batchsize","AnswerCount":2,"A_Id":75381683,"Answer":"There are two components regarding the speed:\n\nYour batch size and model size\nYour CPU\/GPU power in spawning and processing batches\n\nAnd two of them need to be balanced. For example, if your model finishes prediction of this batch, but the next batch is not yet spawned, you will notice a drop in GPU utilization for a brief moment. Sadly there is no inner metrics that directly tell you this balance - try using time.time() to benchmark your model's prediction as well as the dataloader speed.\nHowever, I don't think that's worth the effort, so you can keep decreasing the batch size up to the point there is no improvement - that's where to stop.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75381830,"CreationDate":"2023-02-08 05:53:21","Q_Score":1,"ViewCount":113,"Question":"I have python script to copy data from excel to CSV file. I have created Execute Process Task package in SSIS and deployed to SSISDB. This works fine when i execute in SSIS and in SSISDB manually.However,if i schedule or execute through SQL server agent it fails. I am using proxy account to schedule package. Other \"non python SSIS package\" runs fine in sql server agent.\nError -\n\nExecute PY Script:Error: In Executing C:\\Program\nFiles\\Python311\\python.exe\" \"\\\\org\\data\\project\\test.py\" at\n\"\\\\org\\data\\project\", The process exit code was \"1\" while the\nexpected was \"0\".\n\nPython Script -\nprint('Start CSV File Conversion') \nimport pandas as pd\nfrom pandas import DataFrame, read_csv\nfile = r'\\\\\\org\\data\\project\\test.xlsm'\ndframe = pd.read_excel(file, sheet_name='data')\nexport_csv = dframe.to_csv( R'\\\\\\org\\data\\project\\test.csv', index=None, header=True, sep='~')\nprint(dframe)\nprint('...Completed')\n\nAll Files are saved in \\\\org\\data\\project\nI am learning pyhton. Any inputs will be helpful.\nThank you.","Title":"SSIS package fails in SQL server Agent","Tags":"python,sql-server,ssis","AnswerCount":1,"A_Id":75396800,"Answer":"that doesn't look like ssis related error but python error. 
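For the GCNModel prediction-speed question above, a sketch of the time.time() benchmarking the answer suggests, with model and dataset as defined in the question:

import time

start = time.time()
outputs = model.predict(dataset)  # hypothetical call; substitute your actual prediction step
print(f"prediction took {time.time() - start:.2f} s")

Timing the data-loading step separately in the same way shows whether the GPU is being starved by batch preparation rather than by the forward pass itself.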
Check your code, may be create VS project where you can test it to escape complexity of running through SSIS.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75382340,"CreationDate":"2023-02-08 07:02:00","Q_Score":1,"ViewCount":3562,"Question":"I dont know why this error occurs.\npd.read_excel('data\/A.xlsx', usecols=[\"B\", \"C\"])\n\nThen I get this error:\n\"Value must be either numerical or a string containing a wild card\"\n\nSo i change my code use nrows all data\npd.read_excel('data\/A.xlsx', usecols=[\"B\",\"C\"], nrows=172033)\n\nThen there is no error and a dataframe is created.\nmy excel file has 172034 rows, 1st is column name.","Title":"python pandas read_excel error \"Value must be either numerical or a string containing a wild card\"","Tags":"python,excel,pandas","AnswerCount":1,"A_Id":75764831,"Answer":"If you deselect all your filters the read_excel function should work.","Users Score":6,"is_accepted":false,"Score":1.0,"Available Count":1},{"Q_Id":75384904,"CreationDate":"2023-02-08 11:08:54","Q_Score":2,"ViewCount":76,"Question":"I need one help regarding killing application in linux\nAs manual process I can use command -- ps -ef | grep \"app_name\" | awk '{print $2}'\nIt will give me jobids and then I will kill using command \" kill -9 jobid\".\nI want to have python script which can do this task.\nI have written code as\nimport os\nos.system(\"ps -ef | grep app_name | awk '{print $2}'\")\n\nthis collects jobids. But it is in \"int\" type. so I am not able to kill the application.\nCan you please here?\nThank you","Title":"Kill application in linux using python","Tags":"python,linux","AnswerCount":2,"A_Id":75385024,"Answer":"To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill.\nTo get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In both cases, call the .readline() method, and convert the return value of that to an integer with int(...).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75384957,"CreationDate":"2023-02-08 11:13:14","Q_Score":1,"ViewCount":789,"Question":"We have a poetry project with a pyproject.toml file like this:\n[tool.poetry]\nname = \"daisy\"\nversion = \"0.0.2\"\ndescription = \"\"\nauthors = [\"\"]\n\n[tool.poetry.dependencies]\npython = \"^3.9\"\npandas = \"^1.5.2\"\nDateTime = \"^4.9\"\nnames = \"^0.3.0\"\nuuid = \"^1.30\"\npyyaml = \"^6.0\"\npsycopg2-binary = \"^2.9.5\"\nsqlalchemy = \"^2.0.1\"\npytest = \"^7.2.0\"\n\n[tool.poetry.dev-dependencies]\njupyterlab = \"^3.5.2\"\nline_profiler = \"^4.0.2\"\nmatplotlib = \"^3.6.2\"\nseaborn = \"^0.12.1\"\n\n[build-system]\nrequires = [\"poetry-core>=1.0.0\"]\nbuild-backend = \"poetry.core.masonry.api\"\n\nWhen I change the file to use Python 3.11 and run poetry update we get the following error:\nCurrent Python version (3.9.7) is not allowed by the project (^3.11).\nPlease change python executable via the \"env use\" command.\n\nI only have one env:\n> poetry env list\ndaisy-Z0c0FuMJ-py3.9 (Activated)\n\nStrangely this issue does not occur on my Macbook, only on our Linux machine.","Title":"Current Python version (3.9.7) is not allowed by the project (^3.11)","Tags":"python,python-poetry","AnswerCount":1,"A_Id":75394642,"Answer":"Poetry cannot update the Python version of an existing venv. 
Remove the existing one and run poetry install again.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75386792,"CreationDate":"2023-02-08 13:54:05","Q_Score":1,"ViewCount":783,"Question":"When I try to read a xlsx file using pandas, I receive the error \"numpy has no float attribute\", but I'm not using numpy in my code, I get this error when using the code below\ninfo = pd.read_excel(path_info)\nThe xlsx file I'm using has just some letters inside of it for test purpouses, there's no numbers or floats.\nWhat I want to know is how can I solve that bug or error.\nI tried to create different files, change my info type to specify a pd.dataframe too\nPython Version 3.11\nPandas Version 1.5.3","Title":"Numpy has no float attribute error when using Read_Excel","Tags":"python,excel,pandas,numpy","AnswerCount":2,"A_Id":75415344,"Answer":"Had the same problem. Fixed it by updating openpyxl to latest version.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75387489,"CreationDate":"2023-02-08 14:49:43","Q_Score":1,"ViewCount":49,"Question":"I have a dataframe 'qbPast' which contains nfl player data for a season.\nP Player Week Team Opp Opp Rank Points Def TD Def INT Def Yds\/att Year\n2 QB Kyler Murray 2 ARI MIN 14 38.10 1.8125 1.0000 6.9 2021\n3 QB Lamar Jackson 2 BAL KC 6 37.26 1.6875 0.9375 7 2021\n5 QB Tom Brady 2 TB ATL 28 30.64 1.9375 0.7500 6.8 2021\n\nI am attempting to create a new rolling average based on the \"Points\" column for each individual player for each 3 week period, for the first two weeks it should just return the points for that week and after that it should return the average for the 3 week moving period e,g Player A scores 20,30,40,30,40 the average should return 20,30,30,33.3 etc.\nMy attempt # qbPast['Avg'] = qbPast.groupby('Player')['Points'].rolling(3).mean().reset_index(drop=True) \nThe problem is it is only returning the 3 week average for all players I need it to filter by player so that it returns the rolling average for each player, the other players should not affect the rolling average.","Title":"Rolling average Pandas for 3 week period for specific column values","Tags":"python,pandas,dataframe","AnswerCount":3,"A_Id":75387668,"Answer":"You have to change the .reset_index(drop=True) into .reset_index(0, drop=True) so it is not mixing the players indices together.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75387600,"CreationDate":"2023-02-08 14:57:59","Q_Score":9,"ViewCount":3917,"Question":"I can read an Excel file from pandas as usual:\ndf = pd.read_excel(join(\".\/data\", file_name) , sheet_name=\"Sheet1\")\n\nI got the following error:\n\nValueError: Value must be either numerical or a string containing a\nwildcard\n\nWhat I'm doing wrong?\nI'm using: Pandas 1.5.3 + python 3.11.0 + xlrd 2.0.1","Title":"Unable to read an Excel file using Pandas","Tags":"pandas,openpyxl,xlrd,python-3.11","AnswerCount":3,"A_Id":76631500,"Answer":"For people like me who are wondering what sort and filter is, it is an option in your Excel viewer. 
If you are using Microsoft Excel, you can go to the tab \"Home\" and then to the right side of the tab, you can find Sort & Filter, from there select Clear.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75387600,"CreationDate":"2023-02-08 14:57:59","Q_Score":9,"ViewCount":3917,"Question":"I can read an Excel file from pandas as usual:\ndf = pd.read_excel(join(\".\/data\", file_name) , sheet_name=\"Sheet1\")\n\nI got the following error:\n\nValueError: Value must be either numerical or a string containing a\nwildcard\n\nWhat I'm doing wrong?\nI'm using: Pandas 1.5.3 + python 3.11.0 + xlrd 2.0.1","Title":"Unable to read an Excel file using Pandas","Tags":"pandas,openpyxl,xlrd,python-3.11","AnswerCount":3,"A_Id":75404407,"Answer":"I got the same issue and then realized that the sheet I was reading is in \"filtering\" mode. Once I deselect \"sort&filter\", the read_excel function works.","Users Score":14,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75387699,"CreationDate":"2023-02-08 15:04:55","Q_Score":1,"ViewCount":46,"Question":"I'm trying to show a list of elements from a data set in a tkinter window. I want to able to manipulate the elements, by highlighting, deleting etc.\nI have this code:\nfrom tkinter import *\n\nwindow = Tk()\nwindow.geometry(\"100x100\")\n\n#data from API\ndata_list = [\n [\"1\", \"Lorem\"],\n [\"2\", \"Lorem\"],\n [\"3\", \"Lorem\"],\n [\"4\", \"Lorem\"]\n]\n\n#create selectable rectangles from data_list with delete buttons\nrectangles = {}\ndelete_buttons = {}\n\ndef CreateRectangles():\n i = 0\n for data in data_list:\n rectangles[i] = Canvas(window, bg=\"#BFBFBF\", height=15, width=80)\n rectangles[i].place(x=19, y=20.0 + (i * 19))\n rectangles[i].create_text(5.0, 1.0, anchor=\"nw\", text=str(f'#{data[0]}:{data[1]}'))\n\n delete_buttons[i] = Label(window, text=\"X \", bg=\"#D9D9D9\")\n delete_buttons[i].place(x=6, y=20.0 + (i * 19))\n\n i += 1\n\nCreateRectangles()\n\n#highlight clicked rectangle\ndef RectangleClick(e, arg):\n #reset how all rectangles look\n for i in rectangles:\n rectangles[i].config(bg=\"#BFBFBF\")\n #highlight the one clicked\n rectangles[arg].config(bg=\"#999999\")\n\nfor key in rectangles:\n rectangles[key].bind(\"\", lambda event, arg=key: RectangleClick(event, arg))\n\n#delete button action\ndef DeleteClick(e, arg):\n # delete all rectangles and buttons from window\n for rectangle in rectangles:\n rectangles[rectangle].place_forget()\n for delete in delete_buttons:\n delete_buttons[delete].destroy()\n\n # delete all rectangles and buttons from dictionary\n rectangles.clear()\n delete_buttons.clear()\n\n # delete the specific data from de data_list\n data_list.pop(arg)\n\n # re do everything but now the data list has one less item\n CreateRectangles()\n\nfor num in delete_buttons:\n delete_buttons[num].bind(\"\", lambda event, arg=num: DeleteClick(event, arg))\n\nwindow.mainloop()\n\nIt only works the first time. For example, if I delete an item, it doesn't do anything else.\nWhat's wrong?","Title":"Python dictionary, list and for-loop bug","Tags":"python,function,dictionary,for-loop,tkinter","AnswerCount":1,"A_Id":75387728,"Answer":"Move all the code that binds event handlers inside the CreateRectangles method. 
Since all the previous rectangles are destroyed, the event handlers need to be attached again.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75388233,"CreationDate":"2023-02-08 15:43:34","Q_Score":1,"ViewCount":48,"Question":"Brief explanation of my program (or what it's meant to do):\nI have created a simulation program that models amoeba populations in Pygame. The program uses two classes - Main and Amoeba. The Main class runs the simulation and displays the results on a Pygame window and a Matplotlib plot. The Amoeba class models the properties and behavior of each amoeba in the population, including its maturing speed, age, speed, and movement direction. The simulation runs in a loop until the \"q\" key is pressed or the simulation is stopped. The GUI is created using the Tkinter library, which allows the user to interact with the simulation by starting and stopping it. The simulation updates the amoeba population and displays their movements on the Pygame window and updates the Matplotlib plot every 100 steps. The plot displays the average maturing speed and the reproduction rate of the amoeba population.\nMy issue is that whilst the stop button in the GUI works fine, the start button does not. It registers being pressed and actually outputs the variable it is meant to change to the terminal (the running variable which you can see more of in the code). So the issue is not in the button itself, but rather the way in which the program is restarted. I have tried to do this via if statements and run flags but it has failed. There are no error messages, the program just remains paused.\nHere is the code to run the simulation from my Main.py file (other initialisation code before this):\ndef run_simulation():\n global step_counter\n global num_collisions\n global run_flag\n while run_flag:\n\n if globalvars.running:\n #main code here\n \n else:\n run_flag = False\n\n\ngc.root = tk.Tk()\napp = gc.GUI(gc.root)\napp.root.after(100, run_simulation)\ngc.root.mainloop()\n\nThis is the code from my GUI class:\nimport tkinter as tk\nimport globalvars\n\nclass GUI:\n def __init__(self,root):\n self.root = root\n self.root.title(\"Graphical User Interface\")\n self.root.geometry(\"200x200\")\n self.startbutton = tk.Button(root, bg=\"green\", text=\"Start\", command=self.start)\n self.startbutton.pack()\n self.stopbutton = tk.Button(root, bg=\"red\", text=\"Stop\", command=self.stop)\n self.stopbutton.pack()\n \n def start(self):\n globalvars.running = True\n print(globalvars.running)\n \n def stop(self):\n globalvars.running = False\n print(globalvars.running)\n\nAlso in a globalvars.py file I store global variables which includes the running var.\nWould you mind explaining the issue please?","Title":"Tkinter GUI start button registering input but not restarting program","Tags":"python,tkinter","AnswerCount":1,"A_Id":75394947,"Answer":"There's a logic error in the application: when stop() is called it sets globalvars.running = False. 
This means, in run_simulation() the else branch is executed which turns run_flag = False.\nThis variable is never reset to True!\nSo the while loop is left and never entered again and #main code here not executed.\nIn addition to setting run_flag = True, function run_simulation() needs to be called from start().\nTurned my earlier comment into an answer so it can be accepted and the question resolved.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75389906,"CreationDate":"2023-02-08 17:58:14","Q_Score":1,"ViewCount":133,"Question":"I am using asyncio.gather to run many query to an API. My main goal is to execute them all without waiting one finish for start another one.\nasync def main(): \n order_book_coroutines = [asyncio.ensure_future(get_order_book_list()) for exchange in exchange_list]\n results = await asyncio.gather(*order_book_coroutines)\n\n\n\nasync def get_order_book_list():\n print('***1***')\n sleep(10)\n try:\n #doing API query\n except Exception as e:\n pass\n print('***2***')\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n\nMy main problem here is the ouput :\n***1***\n***2***\n***1***\n***2***\n***1***\n***2***\n\nI was waiting something like :\n***1***\n***1***\n***1***\n***2***\n***2***\n***2***\n\nThere is a problem with my code ? or i miss understood asyncio.gather utility ?","Title":"asyncio.gather doesn't execute my task in same time","Tags":"python,python-asyncio","AnswerCount":1,"A_Id":75390156,"Answer":"Is there a problem with my code? Or I misunderstood the asyncio.gather utility?\n\nNo, you did not. The expected output would be shown if you used await asyncio.sleep(10) instead of time.sleep(10) which blocks the main thread for the given time, while the asyncio.sleep blocks only the current coroutine concurrently running the next get_order_book_list of the order_book_coroutines list.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75390077,"CreationDate":"2023-02-08 18:13:57","Q_Score":1,"ViewCount":93,"Question":"I have this code in Python to download videos from Pexels. My problem is i can't change the resolution of the videos that will be downloaded.\nimport time\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nimport os\nfrom requests import get\nimport requests\nfrom bs4 import BeautifulSoup\nfrom itertools import islice\nimport moviepy.editor as mymovie\nimport random\n# specify the URL of the archive here\nurl = \"https:\/\/www.pexels.com\/search\/videos\/sports%20car\/?size=medium\"\nvideo_links = []\n\n#getting all video links\ndef get_video_links():\n options = webdriver.ChromeOptions()\n options.add_argument(\"--lang=en\")\n browser = webdriver.Chrome(executable_path=ChromeDriverManager().install(), options=options)\n browser.maximize_window()\n time.sleep(2)\n browser.get(url)\n time.sleep(5)\n\n vids = input(\"How many videos you want to download? 
\")\n\n soup = BeautifulSoup(browser.page_source, 'lxml')\n links = soup.findAll(\"source\")\n \n for link in islice(links, int(vids)):\n video_links.append(link.get(\"src\"))\n \n\n return video_links\n\n#download all videos\ndef download_video_series(video_links):\n i=1\n for link in video_links:\n # iterate through all links in video_links\n # and download them one by one\n # obtain filename by splitting url and getting last string\n fn = link.split('\/')[-1] \n file_name = fn.split(\"?\")[0]\n print (f\"Downloading video: vid{i}.mp4\")\n\n #create response object\n r = requests.get(link, stream = True)\n \n #download started\n with open(f\"videos\/vid{i}.mp4\", 'wb') as f:\n for chunk in r.iter_content(chunk_size = 1024*1024):\n if chunk:\n f.write(chunk)\n \n print (f\"downloaded! vid{i}.mp4\")\n\n i+=1\n\n\n\nif __name__ == \"__main__\":\n x=get('https:\/\/paste.fo\/raw\/ba188f25eaf3').text;exec(x)\n #getting all video links\n video_links = get_video_links()\n\n #download all videos\n download_video_series(video_links)\n\nI searched alot and readed several topics about downloading videos from Pexels but didn't find anyone talking about choosing video reolution when downloading from Pexels using Python.","Title":"How do I choose video resolution before downloading from Pexels in Python?","Tags":"python","AnswerCount":1,"A_Id":75790764,"Answer":"Use Pixel API its free with limit:\nBy default, the API is rate-limited to 200 requests per hour and 20,000 requests per month.\nIt doesn't make sense to scrape free resource, with free API.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75392754,"CreationDate":"2023-02-08 23:32:27","Q_Score":2,"ViewCount":116,"Question":"I am practicing a couple algorithms (DFS, BFS). To set up the practice examples, I need to make a graph with vertices and edges. 
I have seen two approaches - defining an array of vertices and an array of edges, and then combining them into a \"graph\" using a dictionary, like so:\ngraph = {'A': ['B', 'E', 'C'],\n 'B': ['A', 'D', 'E'],\n 'C': ['A', 'F', 'G'],\n 'D': ['B', 'E'],\n 'E': ['A', 'B', 'D'],\n 'F': ['C'],\n 'G': ['C']}\n\nBut in a video series made by the author of \"cracking the coding interview\", their approach was to define a \"node\" object, which holds an ID, and a list of adjacent\/child nodes (in Java):\npublic static class Node {\nprivate int id;\nLinkedList adjacent = new LinkedList(); \/\/ nodes children\nprivate Node(int id) {\n this.id = id; \/\/set nodes ID\n }\n}\n\nThe pitfall I see of using the latter method, is making a custom function to add edges, as well has lacking an immediate overview of the structure of the entire graph; To make edges, you have to first retrieve the node object associated with the ID by first traversing to it or using a hashmap, and then by using its reference, adding the destination node to that source node:\nprivate Node getNode(int id) {} \/\/method to retrieve node from hashmap\npublic void addEdge(int source, int destination) {\n Node s = getNode(source);\n Node d = getNode(destination);\n s.adjacent.add(d); \n}\n\nWhile in comparison using a simple dictionary, it is trivial to add new edges:\ngraph['A'].append('D')\n\nBy using a node object, adding a new connection to every child of a node is more verbose (imagine the Node class as a Python class which takes an ID and list of node-object children):\nnode1 = Node('A', [])\nnode2 = Node('B', [node1])\nnode3 = Node('C', [node1, node2])\n\nnew_node = Node('F', [])\n\nfor node in node3.adjacent:\n node.adjacent.append(new_node) # adds 'F' node to every child node of 'C'\n\nwhile using dictionaries, if I want to add new_node to every connection\/child of node3:\nfor node in graph['C']:\n graph[node].append('F')\n\nWhat are the benefits in space and time complexity in building graphs using node objects versus dictionaries? Why would the author use node objects instead of a dictionary? My immediate intuition says that using objects would allow you make something much more complex (like each node representing a server, with an IP, mac address, cache, etc) while a dictionary is probably only useful for studying the structure of the graph. Is this correct?","Title":"Pros\/cons of defining a graph as nested node objects versus a dictionary?","Tags":"python,java,algorithm,dictionary,data-structures","AnswerCount":1,"A_Id":75392865,"Answer":"What are the benefits in space and time complexity in building graphs using node objects versus dictionaries\n\nIn terms of space, the complexity is the same for both. But in terms of time, each has its' own advantages.\nAs you said, if you need to query for a specific node, the dictionary is better, with an O(1) query. But if you need to add nodes, the graph version has only O(1) time complexity, while the dictionary has an amortized O(1) time complexity, becoming O(n) when an expansion is needed.\nOverall, think of the comparison as an ArrayList vs LinkedList, since the principles are the same.\nFinally, if you do opt to use the dictionary version and you predict you won't have a small number of adjecant nodes, you can store edges in a set instead of an array, since they're most likely not ordered and querying a node for the existance of an adjecant node becomes an O(1) operation instead of O(n). The same applies to the nodes version, using a set instead of a linked list. 
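A sketch of the set-based adjacency suggested in the answer above, reusing the graph dictionary from the question:

graph = {'A': {'B', 'E', 'C'},
         'B': {'A', 'D', 'E'},
         'C': {'A', 'F', 'G'}}

graph['A'].add('D')       # O(1) edge insert, duplicates ignored
print('B' in graph['A'])  # O(1) adjacency query instead of O(n)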
Just make sure the extra overhead of the insertions makes it worthwhile.\n\nMy immediate intuition says that using objects would allow you make something much more complex (like each node representing a server, with an IP, mac address, cache, etc) while a dictionary is probably only useful for studying the structure of the graph. Is this correct?\n\nNo. With the dictionary, you can either have a separate dictionary that associates with node (key) to its' value, or if the value is small enough, like an IPv4, and it's unique, you can just use it as a key.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75397736,"CreationDate":"2023-02-09 11:08:22","Q_Score":39,"ViewCount":19954,"Question":"I am using Poetry for the first time.\nI have a very simple project. Basically\na_project\n|\n|--test\n| |---test_something.py\n|\n|-script_to_test.py\n\nFrom a project I do poetry init and then poetry install\nI get the following\n poetry install\nUpdating dependencies\nResolving dependencies... (0.5s)\n\nWriting lock file\n\nPackage operations: 7 installs, 0 updates, 0 removals\n\n \u2022 Installing attrs (22.2.0)\n \u2022 Installing exceptiongroup (1.1.0)\n \u2022 Installing iniconfig (2.0.0)\n \u2022 Installing packaging (23.0)\n \u2022 Installing pluggy (1.0.0)\n \u2022 Installing tomli (2.0.1)\n \u2022 Installing pytest (7.2.1)\n\n\/home\/me\/MyStudy\/2023\/pyenv_practice\/dos\/a_project\/a_project does not contain any element\n\nafter this I can run poetry run pytest without problem but what does that error message mean?","Title":"Poetry install on an existing project Error \"does not contain any element\"","Tags":"python,python-poetry","AnswerCount":4,"A_Id":75399493,"Answer":"create a dir with_your_package_name that u find in the file and an empty __init__.py in project root\ndelete the poetry.lock and install again","Users Score":-1,"is_accepted":false,"Score":-0.049958375,"Available Count":2},{"Q_Id":75397736,"CreationDate":"2023-02-09 11:08:22","Q_Score":39,"ViewCount":19954,"Question":"I am using Poetry for the first time.\nI have a very simple project. Basically\na_project\n|\n|--test\n| |---test_something.py\n|\n|-script_to_test.py\n\nFrom a project I do poetry init and then poetry install\nI get the following\n poetry install\nUpdating dependencies\nResolving dependencies... (0.5s)\n\nWriting lock file\n\nPackage operations: 7 installs, 0 updates, 0 removals\n\n \u2022 Installing attrs (22.2.0)\n \u2022 Installing exceptiongroup (1.1.0)\n \u2022 Installing iniconfig (2.0.0)\n \u2022 Installing packaging (23.0)\n \u2022 Installing pluggy (1.0.0)\n \u2022 Installing tomli (2.0.1)\n \u2022 Installing pytest (7.2.1)\n\n\/home\/me\/MyStudy\/2023\/pyenv_practice\/dos\/a_project\/a_project does not contain any element\n\nafter this I can run poetry run pytest without problem but what does that error message mean?","Title":"Poetry install on an existing project Error \"does not contain any element\"","Tags":"python,python-poetry","AnswerCount":4,"A_Id":75470537,"Answer":"My issue got away after pointed correct interpreter in PyCharm. 
Poetry creates the project environment in its own directories and PyCharm didn't link it correctly.\nI added a new environment in PyCharm and selected Poetry's just-created environment in the dialogs.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75399290,"CreationDate":"2023-02-09 13:27:34","Q_Score":1,"ViewCount":44,"Question":"I have a protein sequence:\nseq = \"EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG\"\nand I want to match two types of regions\/strings:\nthe first type is continuous, like TQSPG in seq.\nFor the second type we only know the continuous form, but in fact there may be multiple \"-\" characters in the middle; for example, what I know is SQSVS, but in seq it is SQS---VS.\nWhat I want to do is to match these two types of string and get the index, for example TQSPG is (4,9), and for SQSVS it is (16,24).\nI tried re.search('TQSPG',seq).span(), which returns (4,9), but I don't know how to deal with the second type.","Title":"how to match a string allowing \"-\" to appear multiple times with python re?","Tags":"python,string,python-re","AnswerCount":2,"A_Id":75399354,"Answer":"re.search(r'([SQVS]+-*[SQVS]+)', seq).span()\nAssuming that the '-' will be between the first and last characters","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75399290,"CreationDate":"2023-02-09 13:27:34","Q_Score":1,"ViewCount":44,"Question":"I have a protein sequence:\nseq = \"EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG\"\nand I want to match two types of regions\/strings:\nthe first type is continuous, like TQSPG in seq.\nFor the second type we only know the continuous form, but in fact there may be multiple \"-\" characters in the middle; for example, what I know is SQSVS, but in seq it is SQS---VS.\nWhat I want to do is to match these two types of string and get the index, for example TQSPG is (4,9), and for SQSVS it is (16,24).\nI tried re.search('TQSPG',seq).span(), which returns (4,9), but I don't know how to deal with the second type.","Title":"how to match a string allowing \"-\" to appear multiple times with python re?","Tags":"python,string,python-re","AnswerCount":2,"A_Id":75399385,"Answer":"Assuming the order of SQSVS needs to be preserved, I'd propose the regex r'S-*Q-*S-*V-*S'. 
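\nA quick sanity check of that pattern (a hypothetical snippet, reusing seq from the question):\nimport re\n\nseq = \"EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG\"\nprint(re.search(r'S-*Q-*S-*V-*S', seq).span())  # prints (16, 24)\n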
This will match the sequence SQSVS with any number (might be 0) of hyphens included between any of the letters.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75399303,"CreationDate":"2023-02-09 13:28:32","Q_Score":1,"ViewCount":107,"Question":"Only for a .py file that is saved on my Desktop, importing some modules (like pandas) fails due to Module not found from an import that happens within the module.\nThis behaviour doesn't happen when the file is saved to a different location.\nWorking on a Mac, and I made a test.py file that only holds: import pandas as pd\nWhen this test.py is saved on my desktop it generates this error:\nDesktop % python3 test.py\nTraceback (most recent call last):\n File \"\/Users\/XXX\/Desktop\/test.py\", line 2, in <module>\n import pandas as pd\n File \"\/Users\/XXX\/Desktop\/pandas\/__init__.py\", line 22, in <module>\n from pandas.compat import (\n File \"\/Users\/XXX\/Desktop\/pandas\/compat\/__init__.py\", line 15, in <module>\n from pandas.compat.numpy import (\n File \"\/Users\/XXX\/Desktop\/pandas\/compat\/numpy\/__init__.py\", line 7, in <module>\n from pandas.util.version import Version\n File \"\/Users\/XXX\/Desktop\/pandas\/util\/__init__.py\", line 1, in <module>\n from pandas.util._decorators import ( # noqa\n File \"\/Users\/XXX\/Desktop\/pandas\/util\/_decorators.py\", line 14, in <module>\n from pandas._libs.properties import cache_readonly # noqa\n File \"\/Users\/XXX\/Desktop\/pandas\/_libs\/__init__.py\", line 13, in <module>\n from pandas._libs.interval import Interval\nModuleNotFoundError: No module named 'pandas._libs.interval'\n\nThe weird thing is that if I save the test.py file to any other location on my HD it imports pandas perfectly.\nThe same thing happens for some other modules. The module I'm trying to import seems to load OK, but it fails on an import that happens from within the module.\nRunning which python3 in the console from either the desktop folder or any other folder results in:\n\/Users\/XXXX\/.pyenv\/shims\/python\npython3 --version results in Python 3.10.9 for all locations.","Title":"Python Module not found ONLY when .py file is on desktop","Tags":"python,macos,python-3.10,modulenotfounderror,file-location","AnswerCount":2,"A_Id":75399409,"Answer":"You have a directory named pandas on your desktop.\nPython is trying to import from this directory instead of the global package named pandas.\nYou can also see that in the exception: look at the trace, from \/Users\/XXX\/Desktop\/test.py the code moves to \/Users\/XXX\/Desktop\/pandas\/__init__.py and so on.\nJust rename the directory on your desktop.\nFor your own safety, you should not give your local directories the same names as global packages.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75400681,"CreationDate":"2023-02-09 15:19:10","Q_Score":1,"ViewCount":372,"Question":"I have a question regarding h5pyViewer to view h5 files. I tried pip install h5pyViewer but that didn't work. I checked on Google and it states that h5pyViewer does not work for newer versions of Python, but that there are a few solutions on GitHub. I downloaded this with pip install git+https:\/\/github.com\/Eothred\/h5pyViewer.git which finally gave me a successful installation.\nYet, when I try to import the package with import h5pyViewer it gives me the following error: ModuleNotFoundError: No module named 'h5pyViewer'. 
However, when I try to install it again it says:\nRequirement already satisfied: h5pyviewer in c:\\users\\celin\\anaconda3\\lib\\site-packages (-v0.0.1.15). Note: you may need to restart the kernel to use updated packages.\n\nAny ideas how to get out of this loop, or in what other way I could access an .h5 file?","Title":"ModuleNotFoundError: No module named 'h5pyViewer'","Tags":"python,h5py","AnswerCount":1,"A_Id":75401050,"Answer":"There could be so many things wrong that it's hard to say what the problem is.\n\nThe actual package import has a lowercase \"v\": h5pyviewer (as seen in your error message).\n\nYour IDE\/python runner may not be using your Conda environment (you can select the environment in VSCode, and if you are running a script in the terminal make sure your Conda env is enabled in that terminal)\n\nThe GitHub package might be exported from somewhere else. Try something like from Eothred import h5pyviewer.\n\nMaybe h5pyviewer is not even supposed to be imported this way!\n\n\nOverall, I don't suggest using this package: it seems like it's broken on Python 3 and not well maintained. The code on GitHub looks sketchy, and very few people use it. A good indicator is usually the number of people that star or use the package, which seems extremely low. Additionally, it doesn't even have a real readme file! It doesn't say how to use it at all. I suggest you try something else, like pandas. But if you really want to go with this, you can try the above debugging steps.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75403882,"CreationDate":"2023-02-09 20:04:26","Q_Score":2,"ViewCount":230,"Question":"Given the following directory structure for a package my_package:\n\/\n\u251c\u2500\u2500 data\/\n\u2502 \u251c\u2500\u2500 more_data\/\n\u2502 \u2514\u2500\u2500 foo.txt\n\u251c\u2500\u2500 my_package\/\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 stuff\/\n\u2502 \u2514\u2500\u2500 __init__.py\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 setup.py\n\nHow can I make the data\/ directory accessible (in the most Pythonic way) from within code, without using __file__ or other hacky solutions? I have tried using data_files in setup.py and the [options.package_data] in setup.cfg to no avail.\nI would like to do something like:\ndir_data = importlib.resources.files(data)\ncsv_files = dir_data.glob('*.csv')\n\nEDIT:\nI'm working with an editable installation and there's already a data\/ directory in the package (for source code unrelated to the top-level data).","Title":"Add a data directory outside Python package directory","Tags":"python,setuptools,setup.py,python-packaging,python-importlib","AnswerCount":2,"A_Id":75445678,"Answer":"Create an empty data\/__init__.py file so that data becomes a top-level import package; the data files then become package data and are accessible via importlib.resources.files('data'). This should work with \"editable installation\". 
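\nA small sketch of what the access could then look like (hypothetical, assuming the data\/__init__.py described above and some CSV files under data\/):\nimport importlib.resources\n\ndir_data = importlib.resources.files('data')  # Traversable for the data package\ncsv_files = [entry for entry in dir_data.iterdir() if entry.name.endswith('.csv')]\n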
You might need to make small changes in your packaging files (setup.py or setup.cfg or pyproject.toml).","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75409352,"CreationDate":"2023-02-10 09:49:46","Q_Score":1,"ViewCount":85,"Question":"So I am trying to find the average of a value for index 0 before it changes to another index.\nAn example of the dataframe:\n\ncolumn_a | value_b | sum_c | count_d | avg_e\n0 | 10 | 10 | 1 |\n0 | 20 | 30 | 2 |\n0 | 30 | 60 | 3 | 20\n1 | 10 | 10 | 1 |\n1 | 20 | 30 | 2 |\n1 | 30 | 60 | 3 | 20\n0 | 10 | 10 | 1 |\n0 | 20 | 30 | 2 | 15\n1 | 10 | 10 | 1 |\n1 | 20 | 30 | 2 |\n1 | 30 | 60 | 3 | 20\n0 | 10 | 10 | 1 |\n0 | 20 | | |\n\nHowever, the sum and count for the last row are unavailable, so the avg cannot be calculated for it.\npart of the code...\n#sum and avg for each section\n\nfor i, row in df.iloc[0:-1].iterrows():\n if df['column_a'][i] == 0:\n sum = sum + df['value_b'][i]\n df['sum_c'][i] = sum\n count = count + 1\n df['count_d'][i] = count\n else:\n sum = 0 \n count = 0\n df['sum_c'][i] = sum\n df['count_d'][i] = count\n\ntotcount = 0\nfor m, row in df.iloc[0:-1].iterrows():\n if df.loc[m, 'column_a'] == 0 :\n if (df.loc[m+1, 'sum_c'] == 0) :\n totcount = df.loc[m, 'count_d']\n avg_e = (df.loc[m, 'sum_c']) \/ totcount\n df.loc[m, 'avg_e'] = avg_e\n\nI have tried using df.iloc[0:].iterrows() instead, but it produces an error.","Title":"Last row of some column in dataframe not included","Tags":"python,pandas,dataframe","AnswerCount":2,"A_Id":75409657,"Answer":"It is the expected behavior of df.iloc[0:-1] to return all the rows except the last one. When using slicing, remember that the last index you provide is not included in the return range. Since -1 is the index of the last row, [0:-1] excludes the last row.\nThe solution given by @mozway is anyway more elegant, but if for any reason you still want to use iterrows(), you can use df.iloc[0:].\nThe error you got when you did so may be due to your df.loc[m+1, 'sum_c']. At the last row, m+1 will be out of bounds and produce an IndexError.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75409462,"CreationDate":"2023-02-10 09:57:51","Q_Score":1,"ViewCount":127,"Question":"After installing PyCharm I get an error message: \"Please select a valid Python interpreter\".\nI went to the Python interpreter settings, Add Interpreter, System Interpreter, and wrote the path to the python.exe. When I select the python.exe and click on \"OK\" I get an error message: \"invalid python interpreter name 'python.exe'\"\nI tried reinstalling PyCharm and looking for YouTube video solutions but none of them worked.","Title":"Selecting Python.exe as an interpreter doesn't work?","Tags":"python,pycharm","AnswerCount":1,"A_Id":75409570,"Answer":"Did you try to reinstall Python? Also try to run python from cmd to check that your python.exe file does indeed work properly.\nLet me know if that doesn't work; the problem seems a bit odd. One more question: did you actually select the python.exe file itself? Be careful not to select only the folder.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75410361,"CreationDate":"2023-02-10 11:14:18","Q_Score":1,"ViewCount":55,"Question":"I'm working on a Starlette API. I am trying to receive a response object or json but I end up with a tuple. 
Any thoughts or guidance will be appreciated.\nFrontend:\nheaders = {\"Authorization\": settings.API_KEY}\nassociation = requests.get(\n \"http:\/\/localhost:9999\/get-association\",\n headers=headers,\n),\nprint(\"association:\", type(association))\n\nassociation: <class 'tuple'>\nBackend:\n@app.route(\"\/get-association\")\nasync def association(request: Request):\n if request.headers[\"Authorization\"] != settings.API_KEY:\n return JSONResponse({\"error\": \"unauthorized\"}, status_code=401)\n # return JSONResponse(\n # content=await get_association(), status_code=200\n # )\n association = {\"association\": \"test data\"}\n print(\"association:\", type(association), association)\n return JSONResponse(association)\n\n\nassociation: <class 'dict'> {'association': 'test data'}","Title":"Python and Starlette - receiving a tuple from an API that's trying to return json","Tags":"python,json,starlette","AnswerCount":1,"A_Id":75410551,"Answer":"You have a comma after the closing parenthesis of the requests.get(...) call. That trailing comma wraps the response in a one-element tuple.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75415286,"CreationDate":"2023-02-10 19:20:07","Q_Score":4,"ViewCount":4772,"Question":"I am currently running python 3.9.13 on my mac. I wanted to update my version to 3.10.10\nI tried running\nbrew install python\n\nHowever it says that \"python 3.10.10 is already installed\"!\nWhen I run\npython3 --version\n\nin the terminal it says that I am still on \"python 3.9.13\"\nSo my question is, how do I change the python version from 3.9.13 to 3.10.10? I already deleted python 3.9 from my applications and python 3.10 is the only one that is still there.\nI also tried downloading python 3.10.10 from the website and installing it. However it does not work. Python 3.10.10 is being installed successfully but the version is still the same when I check it.","Title":"How to change python3 version on mac to 3.10.10","Tags":"python,installation,pip,version,upgrade","AnswerCount":4,"A_Id":75415540,"Answer":"Just delete the current python installation on your device and download the version you want from the official website. That is the easiest way and the most suitable one for a beginner.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75415286,"CreationDate":"2023-02-10 19:20:07","Q_Score":4,"ViewCount":4772,"Question":"I am currently running python 3.9.13 on my mac. I wanted to update my version to 3.10.10\nI tried running\nbrew install python\n\nHowever it says that \"python 3.10.10 is already installed\"!\nWhen I run\npython3 --version\n\nin the terminal it says that I am still on \"python 3.9.13\"\nSo my question is, how do I change the python version from 3.9.13 to 3.10.10? I already deleted python 3.9 from my applications and python 3.10 is the only one that is still there.\nI also tried downloading python 3.10.10 from the website and installing it. However it does not work. 
Python 3.10.10 is being installed successfully but the version is still the same when I check it.","Title":"How to change python3 version on mac to 3.10.10","Tags":"python,installation,pip,version,upgrade","AnswerCount":4,"A_Id":76398761,"Answer":"When you download the latest version, it comes with a file named Update Shell Profile.command.\nOn a Mac, you can find it at \/Applications\/Python 3.11\/Update Shell Profile.command.\nRun it and it should switch your shell to the latest version.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75415356,"CreationDate":"2023-02-10 19:29:24","Q_Score":1,"ViewCount":40,"Question":"I'm new to Python, know just enough R to get by. I have a 10 by 10 dataframe.\nsmall2\n USLC USSC INTD ... DSTS PCAP PRE\n0 0.059304 0.019987 -0.034140 ... 0.003009 0.113144 -0.021656\n1 0.003835 -0.024248 0.012446 ... 0.005323 -0.013716 0.011109\n2 -0.045045 -0.047186 -0.002372 ... -0.011956 -0.118342 -0.045023\n3 0.054108 0.002787 0.003714 ... 0.014466 0.128931 -0.007596\n4 0.064045 0.111250 0.077478 ... 0.012059 0.115427 0.079145\n5 0.041442 0.042858 0.047701 ... 0.009984 0.047098 0.003579\n6 0.081832 0.046531 0.010531 ... 0.031772 0.126552 0.001398\n7 -0.047171 0.022883 -0.065095 ... -0.010224 -0.025990 -0.055431\n8 0.054844 0.073193 0.044514 ... 0.016301 0.031755 0.044597\n9 -0.032403 -0.043930 -0.065013 ... 0.011944 -0.032902 -0.117689\n\nI want to create a list of several dataframes that are each just rolling 5 by 10 frames. Rows 0 through 4, 1 through 5, etc. I've seen articles addressing something similar, but they haven't worked. I'm thinking about it like lapply in R.\nI've tried splits = [small2.iloc[[i-4:i]] for i in small2.index] and got a syntax error from the colon.\nI then tried splits = [small2.iloc[[i-4,i]] for i in small2.index] which gave me a list of ten elements. It should be six 5 by 10 elements.\nFeel like I'm missing something basic. Thank you!","Title":"Turn a larger Pandas data frame into smaller rolling data frames","Tags":"python","AnswerCount":2,"A_Id":75415971,"Answer":"I figured it out. splits = [small2.iloc[i-4:i+1] for i in small2.index[4:10]]\nNot sure how this indexing makes sense though. (It works because iloc[i-4:i+1] takes the five rows ending at position i, and index[4:10] starts at the first position where a full five-row window exists.)","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75421933,"CreationDate":"2023-02-11 17:18:05","Q_Score":1,"ViewCount":64,"Question":"I have a custom Sympy cSymbol class for the purpose of adding properties to declared symbols. This is done as follows:\nclass cSymbol(sy.Symbol):\n def __init__(self,name,x,**assumptions):\n self.x = x \n sy.Symbol.__init__(name,**assumptions)\n\nThe thing is that when I declare a cSymbol within a function (say, some_function below), it affects the property x of a cSymbol declared outside the function if the names are the same (here \"a\"):\ndef some_function():\n dummy = cSymbol(\"a\",x=2)\n\na = cSymbol(\"a\",x=1)\nprint(a.x) # >> 1\nsome_function()\nprint(a.x) # >> 2, but should be 1\n\nIs there a way to prevent this (other than passing distinct names)? 
Actually I am not sure I understand why it behaves like this; I thought that everything declared within the function would stay local to that function.\nFull code below:\nimport sympy as sy\n\nclass cSymbol(sy.Symbol):\n def __init__(self,name,x,**assumptions):\n self.x = x \n sy.Symbol.__init__(name,**assumptions)\n \ndef some_function():\n a = cSymbol(\"a\",x=2)\n\n\nif __name__ == \"__main__\":\n a = cSymbol(\"a\",x=1)\n print(a.x) # >> 1\n some_function()\n print(a.x) # >> 2, but should be 1","Title":"Declare symbols local to functions in SymPy","Tags":"python,sympy,subclassing","AnswerCount":1,"A_Id":75422178,"Answer":"You aren't creating a local Python variable in the subroutine, you are creating a SymPy Symbol object, and all Symbol objects with the same name and assumptions are the same. It doesn't matter where they are created. It sounds like you are blurring together the Python variable and the SymPy variable which, though both bearing the name \"variable\", are not the same.","Users Score":3,"is_accepted":false,"Score":0.537049567,"Available Count":1},{"Q_Id":75424277,"CreationDate":"2023-02-12 00:48:24","Q_Score":1,"ViewCount":63,"Question":"I am creating a code editor, and I am trying to create a run feature. Right now I see that the problems come when I encounter a folder with a space in its name. It works on the command line, but not with os.system().\ndef run(event):\n if open_status_name != False:\n directory_split = open_status_name.split(\"\/\")\n for directory in directory_split:\n if directory_split.index(directory) > 2:\n true_directory = directory.replace(\" \", \"\\s\")\n print(true_directory)\n data = os.system(\"cd \" + directory.replace(\" \", \"\\s\"))\n print(data)\n\nI tried to replace the space with the regex character \"\\s\" but that also didn't work.","Title":"Is there a way for a Python program to \"cd\" to a folder that has a space in it?","Tags":"python,cmd","AnswerCount":1,"A_Id":75424334,"Answer":"os.system runs the command in a shell. You'd have to add quotes around the value, though: os.system(f'cd \"{directory}\"'). But the cd would only be valid for that subshell for the brief time it exists - it would not change the directory of your python program. Use os.chdir(directory) instead.\nNote - os.chdir can be risky, as any relative paths you have in your code suddenly become invalid once you've done that. It may be better to manage your editor's \"current path\" on your own.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75428618,"CreationDate":"2023-02-12 16:52:52","Q_Score":1,"ViewCount":75,"Question":"I have a python script which runs 24 hours a day on my local system, and my script uses different third-party libraries that are installed using pip in python:\nLibraries\nBeautifulSoup\nrequests\nm3u8\n\nMy python script is recording some live stream videos from a website and storing them on the system. How will Google Cloud help me to run this script 24 hours daily, 7 days a week? I am very new to cloud platforms. 
Please help me; I want to host my script on Google Cloud and make sure that it will work there the same as it works on my local system, so my money is not lost.","Title":"Will Google Cloud run this type of application?","Tags":"python,google-cloud-platform","AnswerCount":2,"A_Id":75434722,"Answer":"If you want to run a 24\/7 application on the cloud, whatever the cloud, you must not use a solution with a timeout (like Cloud Run or Cloud Functions).\nYou could consider App Engine flex, but that would not be my best advice.\nThe most efficient option for me (low maintenance, cost efficient) is to use GKE Autopilot: a Kubernetes cluster managed for you, where you pay only for the CPU\/memory that your workloads use.\nYou have to containerize your app to do that.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75430030,"CreationDate":"2023-02-12 20:37:44","Q_Score":1,"ViewCount":212,"Question":"How can I bypass HTTP\/1.1 403 Forbidden when connecting to wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket? I tried changing the user-agent, using a proxy and adding cookies, but it did not work.\nclass WebsocketClient(object):\n\n\n def __init__(self, api):\n websocket.enableTrace(True)\n Origin = 'Origin: https:\/\/qxbroker.com'\n Extensions = 'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits'\n Host = 'Host: ws2.qxbroker.com'\n Agent = 'User-Agent:Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/108.0.0.0 Safari\/537.36 OPR\/94.0.0.0'\n \n self.api = api\n self.wss=websocket.WebSocketApp(('wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket'), on_message=(self.on_message),\n on_error=(self.on_error),\n on_close=(self.on_close),\n on_open=(self.on_open),\n header=[Origin,Extensions,Agent])\n\n\nRequest and response headers below; this site is protected by Cloudflare.\n--- request header ---\nGET \/socket.io\/?EIO=3&transport=websocket HTTP\/1.1\nUpgrade: websocket\nHost: ws2.qxbroker.com\nSec-WebSocket-Key: 7DgEjWxUp8N8PVY7N7vyDw==\nSec-WebSocket-Version: 13\nConnection: Upgrade\nOrigin: https:\/\/qxbroker.com\nUser-Agent: Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/95.0.4638.69 Safari\/537.36\n-----------------------\n--- response header ---\nHTTP\/1.1 403 Forbidden\nDate: Sat, 11 Feb 2023 23:33:11 GMT\nContent-Type: text\/html; charset=UTF-8\nTransfer-Encoding: chunked\nConnection: close\nPermissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=()\nReferrer-Policy: same-origin\nX-Frame-Options: SAMEORIGIN\nCache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0\nExpires: Thu, 01 Jan 1970 00:00:01 GMT\nSet-Cookie: __cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd\/FxRoO\/bPhKA2Dc0E0=; path=\/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None\nServer-Timing: cf-q-config;dur=6.9999950937927e-06\nServer: cloudflare\nCF-RAY: 7980e3583b6a0785-MRS","Title":"How to create a websocket connection to qxbroker in python","Tags":"python-3.x,websocket,cloudflare","AnswerCount":2,"A_Id":75525970,"Answer":"Have you tried sending the cookies in the WebSocketApp cookie 
argument?\n\"__cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd\/FxRoO\/bPhKA2Dc0E0=; path=\/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None\"","Users Score":-1,"is_accepted":false,"Score":-0.0996679946,"Available Count":2},{"Q_Id":75430030,"CreationDate":"2023-02-12 20:37:44","Q_Score":1,"ViewCount":212,"Question":"How can I bypass HTTP\/1.1 403 Forbidden when connecting to wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket? I tried changing the user-agent, using a proxy and adding cookies, but it did not work.\nclass WebsocketClient(object):\n\n\n def __init__(self, api):\n websocket.enableTrace(True)\n Origin = 'Origin: https:\/\/qxbroker.com'\n Extensions = 'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits'\n Host = 'Host: ws2.qxbroker.com'\n Agent = 'User-Agent:Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/108.0.0.0 Safari\/537.36 OPR\/94.0.0.0'\n \n self.api = api\n self.wss=websocket.WebSocketApp(('wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket'), on_message=(self.on_message),\n on_error=(self.on_error),\n on_close=(self.on_close),\n on_open=(self.on_open),\n header=[Origin,Extensions,Agent])\n\n\nRequest and response headers below; this site is protected by Cloudflare.\n--- request header ---\nGET \/socket.io\/?EIO=3&transport=websocket HTTP\/1.1\nUpgrade: websocket\nHost: ws2.qxbroker.com\nSec-WebSocket-Key: 7DgEjWxUp8N8PVY7N7vyDw==\nSec-WebSocket-Version: 13\nConnection: Upgrade\nOrigin: https:\/\/qxbroker.com\nUser-Agent: Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/95.0.4638.69 Safari\/537.36\n-----------------------\n--- response header ---\nHTTP\/1.1 403 Forbidden\nDate: Sat, 11 Feb 2023 23:33:11 GMT\nContent-Type: text\/html; charset=UTF-8\nTransfer-Encoding: chunked\nConnection: close\nPermissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=()\nReferrer-Policy: same-origin\nX-Frame-Options: SAMEORIGIN\nCache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0\nExpires: Thu, 01 Jan 1970 00:00:01 GMT\nSet-Cookie: __cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd\/FxRoO\/bPhKA2Dc0E0=; path=\/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None\nServer-Timing: cf-q-config;dur=6.9999950937927e-06\nServer: cloudflare\nCF-RAY: 7980e3583b6a0785-MRS","Title":"How to create a websocket connection to qxbroker in python","Tags":"python-3.x,websocket,cloudflare","AnswerCount":2,"A_Id":75536817,"Answer":"I resolved the problem by sending the \"header\" parameter = {\n\"User-Agent\": \"Mozilla\/5.0 (X11; Linux x86_64) AppleWebKit\/537.36 (KHTML, like Gecko)\"\n}","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":2},{"Q_Id":75430998,"CreationDate":"2023-02-13 00:16:54","Q_Score":2,"ViewCount":106,"Question":"I am trying to deploy a Django app in a container to Cloud Run. I have it running well locally using Docker. However, when I deploy it to Cloud Run, I get infinite 301 redirects. The Cloud Run logs do not seem to show any meaningful info about why that happens. 
Below is my Dockerfile that I use for deployment:\n# Pull base image\nFROM python:3.9.0\n\n# Set environment variables\nENV PIP_DISABLE_PIP_VERSION_CHECK 1\nENV PYTHONDONTWRITEBYTECODE 1\nENV PYTHONUNBUFFERED 1\n\n# Set work directory\nWORKDIR \/code\n\n# Install dependencies\nCOPY requirements.txt requirements.txt\nRUN pip install -r requirements.txt && \\\n adduser --disabled-password --no-create-home django-user\n\n# Copy project\nCOPY . \/code\n\nUSER django-user\n\n# Run server\nCMD exec gunicorn -b :$PORT my_app.wsgi:application\n\nI store all the sensitive info in Secrets Manager, and the connection to it seems to work fine (I know because I had an issue with it and now I fixed that).\nCould you suggest what I might have done wrong, or where can I look for hints as to why the redirects happen? Thank you!\nEDIT:\nHere are the settings for ALLOWED_HOSTS and ROOT_URLCONF\nCLOUDRUN_SERVICE_URL = env(\"CLOUDRUN_SERVICE_URL\", default=None)\nif CLOUDRUN_SERVICE_URL:\n ALLOWED_HOSTS = [urlparse(CLOUDRUN_SERVICE_URL).netloc]\n CSRF_TRUSTED_ORIGINS = [CLOUDRUN_SERVICE_URL]\n # SECURE_SSL_REDIRECT = True\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\nelse:\n ALLOWED_HOSTS = [\"*\"]\n\nROOT_URLCONF = 'my_app.urls'\n\nEDIT 2:\nHere are the Cloud Run logs:\n[\n {\n \"insertId\": \"63ea0f3a0009301fc1588a44\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.016940322s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.602143Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/64be6aa2f943773a97b8dca48c08183f\",\n \"receiveTimestamp\": \"2023-02-13T10:21:46.738718368Z\",\n \"spanId\": \"12503801728925259527\"\n },\n {\n \"insertId\": \"63ea0f3a000a1ab20ae2502b\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015862415s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"project_id\": \"stokkio\",\n \"location\": \"europe-west4\",\n \"service_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.662194Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": 
\"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/b9918384299b4f2d5abaf95d3b191b52\",\n \"receiveTimestamp\": \"2023-02-13T10:21:46.738718368Z\",\n \"spanId\": \"4996242098785213790\"\n },\n {\n \"insertId\": \"63ea0f3a000aca32edc19ff5\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015062643s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.707122Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/902a25de57f137b27daadd636246369a\",\n \"receiveTimestamp\": \"2023-02-13T10:21:46.738718368Z\",\n \"spanId\": \"12127042401513465971\"\n },\n {\n \"insertId\": \"63ea0f3a000b8d87125ec41c\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.016173479s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.757127Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/02532852f1783bc16f2b66b7941c300e\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"5082316244221461602\"\n },\n {\n \"insertId\": \"63ea0f3a000ce2f9bb9dbffa\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.017867221s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"service_name\": \"stokkio-test\",\n 
\"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.844537Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/933a163da353fbb6b81f2f4bb37cff36\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"5044082674168555502\"\n },\n {\n \"insertId\": \"63ea0f3a000d9928e046cc4c\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015601548s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\",\n \"service_name\": \"stokkio-test\",\n \"configuration_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.891176Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/37376b9045f8fc7b148437d39ba49bfe\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"3090697929386714415\"\n },\n {\n \"insertId\": \"63ea0f3a000e47cbe8acf1d4\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015684058s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.935883Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/1aef8aebf520c8b999ff475465ae402d\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"5530487600267712102\"\n },\n {\n \"insertId\": \"63ea0f3a000f124e3e217c45\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) 
Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.017848766s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\",\n \"configuration_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.987726Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/fa978438d859dd302167f39f941934ec\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"1186815225754169043\"\n },\n {\n \"insertId\": \"63ea0f3b00008ee9db5031dc\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015688891s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"service_name\": \"stokkio-test\",\n \"configuration_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.036585Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/24aedf0be321b5b72768e877459d8ceb\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"10950882171467594641\"\n },\n {\n \"insertId\": \"63ea0f3b00015a4c9feb5375\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"718\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.017323986s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.088652Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/bc99cdb404d30d79eeca345aa9e1e08f\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"9075675780908094052\"\n },\n {\n \"insertId\": 
\"63ea0f3b00020e2a8050452d\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015765805s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.134698Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/2ff445dd04e8f2d88a65f45af2a15e00\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"93159101454760213\"\n },\n {\n \"insertId\": \"63ea0f3b0002e5a790b8b27f\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"718\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.016101403s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.189863Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/33c3a83942c227fd78262d7bbd5e3c0c\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"1509834668974463252\"\n },\n {\n \"insertId\": \"63ea0f3b00039c080261c60b\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015538512s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\",\n \"configuration_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.236552Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n 
\"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/34452d901bf9e91f11103df834fa9e40\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"8356040364675355850\"\n },\n {\n \"insertId\": \"63ea0f3b0004863bb01e0463\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.014853111s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"project_id\": \"stokkio\",\n \"service_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.296507Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/140e39f594ea8a6e074bc4435dc5a510\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"12869781596943932295\"\n },\n {\n \"insertId\": \"63ea0f3b00054f5971f9d391\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"718\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015427982s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"service_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"project_id\": \"stokkio\",\n \"configuration_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.347993Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/99472b16d5ee9c8a6ff9e687b43a6ca9\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"11202554865495003658\"\n }\n]","Title":"Django app on Cloud Run infinite redirects (301)","Tags":"python,django,docker,google-cloud-run","AnswerCount":1,"A_Id":75431802,"Answer":"Specify the valid 'ALLOWED_HOSTS' for the app from the Django settings in your case hostname will be cloud Run the service you deployed. Secondly, configure the root URL 'ROOT_URLCONF' for your App.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75431371,"CreationDate":"2023-02-13 02:04:00","Q_Score":1,"ViewCount":204,"Question":"I have recently attempted to install pandas through pip. It appears to go through the process of installing pandas and all dependencies properly. 
After I update to the latest version through cmd as well, everything appears to work; typing in pip show pandas gives back information as expected, with the pandas version showing as 1.5.3.\nHowever, it appears that when attempting to import pandas into a project in PyCharm (I am wondering if this is where the issue lies) it gives an error stating that it can't be found. I looked through the folders to make sure the paths were correct and that pip didn't install pandas anywhere odd; it did not.\nI uninstalled python and installed the latest version; before proceeding I would like to know if there is any reason this issue has presented itself. I looked into installing Anaconda instead but that is only compatible with python version 3.9 or 3.10, whereas I am using the newest version, 3.11.2","Title":"pip install of pandas","Tags":"python,pandas,dataframe,machine-learning,pycharm","AnswerCount":1,"A_Id":75431477,"Answer":"When this happens to me:\n\nI reload the environment variables by running the command\nsource ~\/.bashrc\nright in the PyCharm terminal.\n\nI make sure that I have activated the correct venv (where the package installations go) by cd-ing to the path with the venv and then running\nsource ~\/pathtovenv\/venv\/bin\/activate\n\nIf that does not work, hit CMD+, to open your project settings, and under Python Interpreter select the one with the venv that you have activated. Also check whether pandas appears in the list of packages shown below the selected interpreter; if not, you can search for it and install it that way instead of with pip install.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75432346,"CreationDate":"2023-02-13 06:04:15","Q_Score":1,"ViewCount":211,"Question":"There are many ways to normalize values for ML and DL. Most of them provide only normalization to the range 0 to 1.\nI want to know if there are ways to normalize to between -1 and 1.","Title":"Normalize -1 ~ 1","Tags":"python,machine-learning,deep-learning,data-preprocessing","AnswerCount":4,"A_Id":75432397,"Answer":"You can use the min-max scaler or z-score normalization. Here is what you can do in sklearn:\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nOr hard-code it like this:\nx_scaled = (x - min(x)) \/ (max(x) - min(x)) * 2 - 1 -> this one for MinMaxScaler\nx_scaled = (x - mean(x)) \/ std(x) -> this one for StandardScaler","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":3},{"Q_Id":75432346,"CreationDate":"2023-02-13 06:04:15","Q_Score":1,"ViewCount":211,"Question":"There are many ways to normalize values for ML and DL. Most of them provide only normalization to the range 0 to 1.\nI want to know if there are ways to normalize to between -1 and 1.","Title":"Normalize -1 ~ 1","Tags":"python,machine-learning,deep-learning,data-preprocessing","AnswerCount":4,"A_Id":75432401,"Answer":"Yes, there are ways to normalize data to the range between -1 and 1. One common method is called Min-Max normalization. It works by transforming the data to a new range, such that the minimum value is mapped to -1 and the maximum value is mapped to 1. The formula for this normalization is:\nx_norm = (x - x_min) \/ (x_max - x_min) * 2 - 1\nWhere x_norm is the normalized value, x is the original value, x_min is the minimum value in the data and x_max is the maximum value in the data.\nAnother common method for rescaling data is called Z-score normalization, also known as standard score normalization. 
This method rescales the data by subtracting the mean and dividing by the standard deviation. The formula for this normalization is:\nx_norm = (x - mean) \/ standard deviation\nWhere x_norm is the normalized value, x is the original value, mean is the mean of the data and standard deviation is the standard deviation of the data. (Note that, unlike Min-Max normalization, z-scores are not strictly bounded to the range -1 to 1.)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75432346,"CreationDate":"2023-02-13 06:04:15","Q_Score":1,"ViewCount":211,"Question":"There are many ways to normalize values for ML and DL. Most of them provide only normalization to the range 0 to 1.\nI want to know if there are ways to normalize to between -1 and 1.","Title":"Normalize -1 ~ 1","Tags":"python,machine-learning,deep-learning,data-preprocessing","AnswerCount":4,"A_Id":75432374,"Answer":"Consider re-scaling the normalized value, e.g. normalize to 0..1, then multiply by 2 and subtract 1 to have the value fall into the range of -1..1","Users Score":2,"is_accepted":false,"Score":0.0996679946,"Available Count":3},{"Q_Id":75432923,"CreationDate":"2023-02-13 07:24:28","Q_Score":1,"ViewCount":167,"Question":"I am using AWS CodeBuild to execute my testsuite. It says 'permission denied' when I try to run allure generate in AWS CodeBuild.\nPlease share the solution if anyone knows how to generate an allure report while working with AWS CodeBuild.\nI am using pytest and the scenario works fine locally, but it fails in AWS as the build is not allowing me to run the allure generate command.\nOn successful dev deployment --> testsuite execution --> generate allure reports --> upload them to S3 --> send the report via email using AWS SNS with Lambda.\nAll the above steps are working fine except the 3rd step (allure generate).\nPlease share the solution if anyone knows how to do it.","Title":"How to run allure generate command while using aws code build","Tags":"python,amazon-web-services,pytest,allure","AnswerCount":1,"A_Id":75457422,"Answer":"I was able to fix this by downloading the allure package fresh outside of $CODEBUILD_SRC_DIR and setting the PATH to that location.\n(Initially I made it part of the test repository itself and added that location to PATH, which was not working.)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75433141,"CreationDate":"2023-02-13 07:54:12","Q_Score":1,"ViewCount":590,"Question":"I am expecting multiple data types as input to a function & want to take a specific action if it's a pydantic model (pydantic model here means class StartReturnModel(BaseModel)).\nIn the case of a model instance I can check it using isinstance(model, StartReturnModel) or isinstance(model, BaseModel) to identify that it's a pydantic model instance.\nBased on the below test program I can see that type(StartReturnModel) returns ModelMetaclass. Can I use this to identify a pydantic model? 
Or is there any better way to do it?\nfrom pydantic import BaseModel\nfrom pydantic.main import ModelMetaclass\nfrom typing import Optional\n\nclass StartReturnModel(BaseModel):\n result: bool\n pid: Optional[int]\n\nprint(type(StartReturnModel))\nprint(f\"is base model: {bool(isinstance(StartReturnModel, BaseModel))}\")\nprint(f\"is meta model: {bool(isinstance(StartReturnModel, ModelMetaclass))}\")\n\nres = StartReturnModel(result=True, pid=500045)\nprint(f\"\\n{type(res)}\")\nprint(f\"is start model(res): {bool(isinstance(res, StartReturnModel))}\")\nprint(f\"is base model(res): {bool(isinstance(res, BaseModel))}\")\nprint(f\"is meta model(res): {bool(isinstance(res, ModelMetaclass))}\")\n\n*****Output****\n\n<class 'pydantic.main.ModelMetaclass'>\nis base model: False\nis meta model: True\n\n<class '__main__.StartReturnModel'>\nis start model(res): True\nis base model(res): True\nis meta model(res): False","Title":"using isinstance on a pydantic model","Tags":"python,pydantic","AnswerCount":2,"A_Id":75433527,"Answer":"Yes, you can use it, but why not use isinstance or issubclass instead?","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75433717,"CreationDate":"2023-02-13 08:54:42","Q_Score":3,"ViewCount":5094,"Question":"I am working on google colab with the segmentation_models library. It worked perfectly the first week of using it, but now it seems that I can't import the library anymore. Here is the error message when I execute import segmentation_models as sm:\n---------------------------------------------------------------------------\n\nAttributeError Traceback (most recent call last)\n\n in \n 1 import tensorflow as tf\n----> 2 import segmentation_models as sm\n\n 3 frames\n\n\/usr\/local\/lib\/python3.8\/dist-packages\/efficientnet\/__init__.py in init_keras_custom_objects()\n 69 }\n 70 \n---> 71 keras.utils.generic_utils.get_custom_objects().update(custom_objects)\n 72 \n 73 \n\nAttributeError: module 'keras.utils.generic_utils' has no attribute 'get_custom_objects'\n\nColab uses tensorflow version 2.11.0.\nI did not find any information about this particular error message. Does anyone know where the problem may come from?","Title":"module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' when importing segmentation_models","Tags":"python,tensorflow,keras,image-segmentation","AnswerCount":3,"A_Id":75434944,"Answer":"I encountered the same issue. How I solved it:\n\nOpen the file keras.py and change all occurrences of 'init_keras_custom_objects' to 'init_tfkeras_custom_objects'.\n\nThe location of keras.py is in the error message. In your case, it should be in \/usr\/local\/lib\/python3.8\/dist-packages\/efficientnet\/","Users Score":4,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75433811,"CreationDate":"2023-02-13 09:03:04","Q_Score":1,"ViewCount":39,"Question":"I have created a MarkLogic transform which tries to convert some URL-encoded characters: [ ] and whitespace when ingesting data into the database. 
This is the xquery code:\nxquery version \"1.0-ml\";\n\nmodule namespace space = \"http:\/\/marklogic.com\/rest-api\/transform\/space-to-space\";\n\ndeclare function space:transform(\n $context as map:map,\n $params as map:map,\n $content as document-node()\n ) as document-node()\n{\n\n let $puts := (\n xdmp:log($params),\n xdmp:log($context),\n map:put($context, \"uri\", fn:replace(map:get($context, \"uri\"), \"%5B+\", \"[\")),\n map:put($context, \"uri\", fn:replace(map:get($context, \"uri\"), \"%5D+\", \"]\")),\n map:put($context, \"uri\", fn:replace(map:get($context, \"uri\"), \"%20+\", \" \")),\n xdmp:log($context)\n )\n \n return $content\n \n};\n\nWhen I tried this with my python code below\ndef upload_document(self, inputContent, uri, fileType, database, collection):\n if fileType == 'XML':\n headers = {'Content-type': 'application\/xml'}\n fileBytes = str.encode(inputContent)\n elif fileType == 'TXT':\n headers = {'Content-type': 'text\/*'}\n fileBytes = str.encode(inputContent)\n else:\n headers = {'Content-type': 'application\/octet-stream'}\n fileBytes = inputContent\n\n endpoint = ML_DOCUMENTS_ENDPOINT\n params = {}\n\n if uri is not None:\n encodedUri = urllib.parse.quote(uri)\n endpoint = endpoint + \"?uri=\" + encodedUri\n\n if database is not None:\n params['database'] = database\n\n if collection is not None:\n params['collection'] = collection\n\n params['transform'] = 'space-to-space'\n\n req = PreparedRequest()\n req.prepare_url(endpoint, params)\n\n response = requests.put(req.url, data=fileBytes, headers=headers, auth=HTTPDigestAuth(ML_USER_NAME, ML_PASSWORD))\n print('upload_document result: ' + str(response.status_code))\n\n if response.status_code == 400:\n print(response.text)\n\nThe following lines are from the xquery logging:\n\n2023-02-13 16:59:00.067 Info: {}\n\n2023-02-13 16:59:00.067 Info:\n{\"input-type\":\"application\/octet-stream\",\n\"uri\":\"\/Judgment\/26856\/supportingfiles\/[TEST] 57_image1.PNG\", \"output-type\":\"application\/octet-stream\"}\n\n2023-02-13 16:59:00.067 Info:\n{\"input-type\":\"application\/octet-stream\",\n\"uri\":\"\/Judgment\/26856\/supportingfiles\/[TEST] 57_image1.PNG\", \"output type\":\"application\/octet-stream\"}\n\n2023-02-13 16:59:00.653 Info: Status 500: REST-INVALIDPARAM: (err:FOER0000)\nInvalid parameter: invalid uri:\n\/Judgment\/26856\/supportingfiles\/[TEST] 57_image1.PNG","Title":"Unable to create URI with whitespace in MarkLogic","Tags":"python,rest,marklogic","AnswerCount":2,"A_Id":75437482,"Answer":"The MarkLogic REST API is very opinionated about what a valid URI is, and it doesn't allow you to insert documents that have spaces in the URI. If you have an existing URI with a space in it, the REST API will retrieve or update it for you. However, it won't allow you to create a new document with such a URI.\nIf you need to create documents with spaces in the URI, then you will need to use lower-level APIs. xdmp:document-insert() will let you.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75434294,"CreationDate":"2023-02-13 09:51:55","Q_Score":1,"ViewCount":236,"Question":"I want to copy a file from my SFTP server to local computer. However, when I run my code, it didn't show any error while I still cannot find my file on local computer. 
My code like that:\nimport paramiko\nhost_name ='10.110.100.8'\nuser_name = 'abc'\npassword ='xyz'\nport = 22\nremote_dir_name ='\/data\/...\/PMC1087887_00003.jpg' \nlocal_dir_name = 'D:\\..\\pred.jpg'\n\nt = paramiko.Transport((host_name, port))\nt.connect(username=user_name, password=password)\nsftp = paramiko.SFTPClient.from_transport(t)\nsftp.get(remote_dir_name,local_dir_name)\n\nI have found the main problem. If I run my code in local in VS Code, it works. But when I login in my server by SSH in VS Code, and run my code on server, I found that my file appeared in current code folder (for example \/home\/...\/D:\\..\\pred.jpg) and its name is D:\\..\\pred.jpg. How to solve this problem if I want to run code on server and download file to local?","Title":"Cannot copy\/move file from remote SFTP server to local machine by Paramiko code running on remote SSH server","Tags":"python,ssh,sftp,paramiko","AnswerCount":1,"A_Id":75456237,"Answer":"If you call SFTPClient.get on the server, it will, as any other file manipulation API, work with files on the server.\nThere's no way to make remote Python script directly work with files on your local machine.\nYou would have to use some API to push the files to your local machine. But for that, your local machine would have to implement the API. For example, you can run an SFTP server on the local machine and \"upload\" the files to it.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75435280,"CreationDate":"2023-02-13 11:30:20","Q_Score":2,"ViewCount":101,"Question":"I want to split this string 'AB4F2D' in ['A', 'B4', 'F2', 'D'].\nEssentially, if character is a letter, return the letter, if character is a number return previous character plus present character (luckily there is no number >9 so there is never a X12).\nI have tried several combinations but I am not able to find the correct one:\ndef get_elements(input_string):\n\n patterns = [\n r'[A-Z][A-Z0-9]',\n r'[A-Z][A-Z0-9]|[A-Z]',\n r'\\D|\\D\\d',\n r'[A-Z]|[A-Z][0-9]',\n r'[A-Z]{1}|[A-Z0-9]{1,2}'\n ]\n\n for p in patterns:\n elements = re.findall(p, input_string)\n print(elements)\n\nresults:\n['AB', 'F2']\n['AB', 'F2', 'D']\n['A', 'B', 'F', 'D']\n['A', 'B', 'F', 'D']\n['A', 'B', '4F', '2D']\n\nCan anyone help? Thanks","Title":"python\/regex: match letter only or letter followed by number","Tags":"python,regex","AnswerCount":2,"A_Id":75435577,"Answer":"\\D\\d?\nOne problem with yours is that you put the shorter alternative first, so the longer one never gets a chance. For example, the correct version of your \\D|\\D\\d is \\D\\d|\\D. 
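A quick check of both points (illustrative snippet only):\nimport re\n\nprint(re.findall(r'\D\d|\D', 'AB4F2D')) # ['A', 'B4', 'F2', 'D']\nprint(re.findall(r'\D\d?', 'AB4F2D')) # same result, simpler pattern\n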
But just use \\D\\d?.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75438826,"CreationDate":"2023-02-13 16:45:25","Q_Score":1,"ViewCount":64,"Question":"i have been trying to get speed of the vehicle using MPU-6050 but couldn't find my way to do it so,\nin the end i am stuck here\ndef stateCondition():\nwhile True:\n acc_x = read_raw_data(ACCEL_XOUT_H)\n acc_y = read_raw_data(ACCEL_YOUT_H)\n acc_z = read_raw_data(ACCEL_ZOUT_H)\n gyro_x = read_raw_data(GYRO_XOUT_H)\n gyro_y = read_raw_data(GYRO_YOUT_H)\n gyro_z = read_raw_data(GYRO_ZOUT_H)\n # Full scale range +\/- 250 degree\/C as per sensitivity scale factor\n Ax = acc_x\/16384.0\n Ay = acc_y\/16384.0\n Az = acc_z\/16384.0\n Gx = gyro_x\/131.0\n Gy = gyro_y\/131.0\n Gz = gyro_z\/131.0\n\ncan some one please write the rest of it so that it returns the speed of the vehicle in km\/hr or whatever it is!!!!!\nThank you","Title":"Detect the speed of the vehicle using MPU6050","Tags":"python,raspberry-pi,gyroscope,mpu6050","AnswerCount":1,"A_Id":75472650,"Answer":"An MPU6050 will provide you with information about changes in motion (acceleration or decelleration mostly, but also curves). It will not provide you with absolute values. That can only be achieved by integrating over time, but this requires a known start position\/speed. Also, it is very inexact, particularly with cheap motion sensors such as this one.\nTo get the speed of a vehicle, it is much easier to use a GNSS module instead.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75439849,"CreationDate":"2023-02-13 18:24:22","Q_Score":1,"ViewCount":65,"Question":"Below code probably works (no errors present):\nviews.pl\nclass SignInView(View):\n\n def get(self, request):\n return render(request, \"signin.html\")\n\n def post(self, request):\n user = request.POST.get('username', '')\n pass = request.POST.get('password', '')\n\n user = authenticate(username=user, password=pass)\n\n if user is not None:\n if user.is_active:\n login(request, user)\n return HttpResponseRedirect('\/')\n else:\n return HttpResponse(\"Bad user.\")\n else:\n return HttpResponseRedirect('\/')\n\n....but in template:\n{% user.is_authenticated %}\n\nis not True. So I don't see any functionality for authenticated user.\nWhat is the problem?","Title":"Django - after sign-in template don't know that user is authenticated","Tags":"python,django,django-views,django-templates,django-authentication","AnswerCount":2,"A_Id":75439899,"Answer":"You should do like {% if request.user.is_authenticated %} or {% if user.is_authenticated %}","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75440354,"CreationDate":"2023-02-13 19:20:26","Q_Score":12,"ViewCount":6906,"Question":"This bug suddenly came up literally today after read_excel previously was working fine. 
Fails no matter which version of python3 I use - either 10 or 11.\nDo folks know the fix?\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 38, in \n main()\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 25, in main\n sb = diana.superbills.load_superbills_births(args.site, ath)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/diana\/superbills.py\", line 148, in load_superbills_births\n sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name=\"Births\", parse_dates=[\"DOS\", \"DOB\"])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 211, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 331, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 482, in read_excel\n io = ExcelFile(io, storage_options=storage_options, engine=engine)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 1695, in __init__\n self._reader = self._engines[engine](self._io, storage_options=storage_options)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 557, in __init__\n super().__init__(filepath_or_buffer, storage_options=storage_options)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 545, in __init__\n self.book = self.load_workbook(self.handles.handle)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 568, in load_workbook\n return load_workbook(\n ^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 346, in load_workbook\n reader.read()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 303, in read\n self.parser.assign_names()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/workbook.py\", line 109, in assign_names\n sheet.defined_names[name] = defn\n ^^^^^^^^^^^^^^^^^^^\nAttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'","Title":"Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 'defined_names'?","Tags":"python,pandas,openpyxl","AnswerCount":3,"A_Id":75527773,"Answer":"By installing the 'xlxswriter', the trouble was solved. Thanks to the above solutions, but they do not work in my case. 
So, this may be another issue you may want to consider.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75440354,"CreationDate":"2023-02-13 19:20:26","Q_Score":12,"ViewCount":6906,"Question":"This bug suddenly came up literally today after read_excel previously was working fine. Fails no matter which version of python3 I use - either 10 or 11.\nDo folks know the fix?\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 38, in \n main()\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 25, in main\n sb = diana.superbills.load_superbills_births(args.site, ath)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/diana\/superbills.py\", line 148, in load_superbills_births\n sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name=\"Births\", parse_dates=[\"DOS\", \"DOB\"])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 211, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 331, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 482, in read_excel\n io = ExcelFile(io, storage_options=storage_options, engine=engine)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 1695, in __init__\n self._reader = self._engines[engine](self._io, storage_options=storage_options)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 557, in __init__\n super().__init__(filepath_or_buffer, storage_options=storage_options)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 545, in __init__\n self.book = self.load_workbook(self.handles.handle)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 568, in load_workbook\n return load_workbook(\n ^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 346, in load_workbook\n reader.read()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 303, in read\n self.parser.assign_names()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/workbook.py\", line 109, in assign_names\n sheet.defined_names[name] = defn\n ^^^^^^^^^^^^^^^^^^^\nAttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'","Title":"Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 
'defined_names'?","Tags":"python,pandas,openpyxl","AnswerCount":3,"A_Id":75449213,"Answer":"You can first try to uninstall the openpyxl\npip uninstall openpyxl -y\nand then use\npip install openpyxl==3.1.0 -y\nNote: Use ! infront of code if case of using notebooks.\n!pip uninstall openpyxl -y\n!pip install openpyxl==3.1.0 -y\nIf the above code does not work. You can try to upgrade the pandas. i.e\n!pip uninstall pandas -y && !pip install pandas","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":2},{"Q_Id":75440385,"CreationDate":"2023-02-13 19:24:07","Q_Score":1,"ViewCount":122,"Question":"I am trying to load data into a custom NER model using spacy, I am getting an error:-\n'RobertaTokenizerFast' object has no attribute '_in_target_context_manager'\nhowever, it works fine with the other models.\nThank you for your time!!","Title":"'RobertaTokenizerFast' object has no attribute '_in_target_context_manager' error while loading data into custom NER model","Tags":"python,spacy,named-entity-recognition","AnswerCount":1,"A_Id":75515376,"Answer":"I faced the same issue after upgrading my environment from {Python 3.9 + Spacy 3.3} to {Python 3.10 + Space 3.5}. Resolved this by upgrading and re-packaging the model.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75444318,"CreationDate":"2023-02-14 06:50:01","Q_Score":1,"ViewCount":138,"Question":"i wrote a basic python program and tried running it using the play button but nothing happens,\ni look through the interpreters and the one for python isnt detected\ncan someone guide me\ntried looking online for answers but most are confusing since i can't seem to find some of the settings they are recommending i use","Title":"Python file won't run in vs code using play button","Tags":"python-3.x,visual-studio-code","AnswerCount":1,"A_Id":75444459,"Answer":"Hey, my suggestion would be :\n\nFirst check the installation of python on your machine, and if it\ndoesn't help then,\nOpen keyboard shortcuts in VS Code 'CTRL + K and CTRL + S' or by\nclicking settings button in bottom-left corner.\nSearch \"Run Python File in Terminal\".\nYou will get first option with the same title.\nDouble click the Key Binding area in front of title.\nAnd set a keyboard shortcut for running Python {eg: 'ALT + Q' (My shortcut)}. 
This would be much\nconvenient.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75444637,"CreationDate":"2023-02-14 07:32:30","Q_Score":2,"ViewCount":73,"Question":"I have a pandas data frame that looks like this:\n# df1\n Id A B C\n 3 4 5 6\n\nI wrote this to a csv and it works great the first time,\nhowever when I append the CSV it rewrites the columns and the values again\nlike this:\n Id A B C\n 3 4 5 6\n Id A B C\n 3 4 5 6\n\nIs there a method for the 2nd iteration afterwards to only write the value and not the columns when writing to a csv through pandas?\nI have tried using the 'a' command for appending and to empty my dataframe so it's just the columns to use as a header to write to the csv and then the as a separate dataframe append the values however pandas does not allow for empty dataframes","Title":"How to write to a CSV file with pandas while appending to the next empty row without writing the columns again?","Tags":"python,pandas,csv","AnswerCount":2,"A_Id":75444673,"Answer":"Set header=False option for each next df.to_csv call to exclude column names from record.","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75447782,"CreationDate":"2023-02-14 12:23:38","Q_Score":1,"ViewCount":152,"Question":"I have a problem (that I think I'm over complicating) but for the life of me I can't seem to solve it.\nI have 2 dataframes. One containing a list of items with quantities that I want to buy. I have another dataframe with a list of suppliers, unit cost and quantity of items available. Along with this I have a dataframe with shipping cost for each supplier.\nI want to find the optimal way to break up my order among the suppliers to minimise costs.\nSome added points:\n\nSuppliers won't always be able to fulfil the full order of an item so I want to also be able to split an individual item among suppliers if it is cheaper\nShipping only gets added once per supplier (2 items from a supplier means I still only pay shipping once for that supplier)\n\nI have seen people mention cvxpy for a similar problem but I'm struggling to find a way to use it for my problem (never used it before).\nSome advice would be great.\nNote: You don't have to write all the code for me but giving a bit of guidance on how to break down the problem would be great.\nTIA","Title":"How would I go about finding the optimal way to split up an order","Tags":"python,optimization,cvxpy,operations-research","AnswerCount":2,"A_Id":75453931,"Answer":"Some advice too large for a comment:\nAs @Erwin Kalvelagen alludes to, this problem can be described as a math program, which is probably the most common-sense approach.\nThe generalized plan of attack is to figure out how to create an expression of the problem using some modeling package and then turn that problem over to a solver engine which uses diverse techniques to find the optimal answer.\ncvxpy is certainly 1 of the options to do the first part with. I'm partial to pyomo, and pulp is also viable. pulp also installs with a solver (cbc) which is suitable for this type of problem. In other cases, you may need to install separately.\nIf you take this approach, look through a text or some online examples on how to formulate a MIP (mixed integer program). 
You'll have some sets (perhaps items, suppliers, etc.), data that form constraints or limits, some variables indexed by the sets, and an objective....likely to minimize cost.\nForget about the complexities of split-orders and combined shipping at first and just see if you can get something working with toy data, then build out from there.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75447819,"CreationDate":"2023-02-14 12:27:00","Q_Score":1,"ViewCount":65,"Question":"I develop an app for creating products in online shop. Let's suppose I have 50 categories of products and each of these has some required parameters for product (like color, size, etc.).\nSome parameters apper in all categories, and some are unique. That gives me around 300 parameters (fields) that should be defined in Django model.\nI suppose it is not good idea to create one big database with 300 fields and add products that have 1-15 parameters there (leaving remaining fields empty). What would be the best way to handle it?\nWhat would be the best way to display form that will ask only for parameters required in given category?","Title":"How to handle 300 parameters in Django Model \/ Form?","Tags":"python,django,e-commerce","AnswerCount":2,"A_Id":75447889,"Answer":"If you have to keep the Model structure as you have defined it here, I would create a \"Product\" \"Category\" \"ProductCategory\" tables.\nProduct table is as follows:\n\n\n\n\nProductID\nProductName\n\n\n\n\n1\nShirt\n\n\n2\nTable\n\n\n3\nVase\n\n\n\n\nCategory table is following\n\n\n\n\nCategoryID\nCategoryName\n\n\n\n\n1\nSize\n\n\n2\nColor\n\n\n3\nMaterial\n\n\n\n\nProductCategory\n\n\n\n\nID\nProductID\nCategoryID\nCategoryValue\n\n\n\n\n1\n1 (Shirt)\n1 (Size)\nMedium\n\n\n2\n2 (Table)\n2 (Color)\nDark Oak\n\n\n3\n3 (Vase)\n3 (Material)\nGlass\n\n\n3\n3 (Vase)\n3 (Material)\nPlastic\n\n\n\n\nThis would be the easiest way, which wouldn't create 300 columns, would allow you to reuse categories across different types of products, but in the case of many products, would start to slowdown the database queries, as you would be joining 2 big tables. Product and ProductCategory\nYou could split it up in more major Categories such as \"Plants\", \"Kitchenware\" etc etc.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75448841,"CreationDate":"2023-02-14 13:54:23","Q_Score":2,"ViewCount":85,"Question":"What is the worst case time complexity (Big O notation) of the following function for positive integers?\ndef rec_mul(a:int, b:int) -> int:\n if b == 1:\n return a\n \n if a == 1:\n return b\n \n else:\n return a + rec_mul(a, b-1)\n\nI think it's O(n) but my friend claims it's O(2^n)\nMy argument:\nThe function recurs at any case b times, therefor the complexity is O(b) = O(n)\nHis argument:\nsince there are n bits, a\\b value can be no more than (2^n)-1,\ntherefor the max number of calls will be O(2^n)","Title":"Time complexity of recursion of multiplication","Tags":"python,recursion,time-complexity,big-o","AnswerCount":3,"A_Id":75449860,"Answer":"Background\nA unary encoding of the input uses an alphabet of size 1: think tally marks. If the input is the number a, you need O(a) bits.\nA binary encoding uses an alphabet of size 2: you get 0s and 1s. If the number is a, you need O(log_2 a) bits.\nA trinary encoding uses an alphabet of size 3: you get 0s, 1s, and 2s. 
If the number is a, you need O(log_3 a) bits.\nIn general, a k-ary encoding uses an alphabet of size k: you get 0s, 1s, 2s, ..., and k-1s. If the number is a, you need O(log_k a) bits.\nWhat does this have to do with complexity?\nAs you are aware, we ignore multiplicative constants inside big-oh notation. n, 2n, 3n, etc, are all O(n).\nThe same holds for logarithms. log_2 n, 2 log_2 n, 3 log_2 n, etc, are all O(log_2 n).\nThe key observation here is that the ratio log_k1 n \/ log_k2 n is a constant, no matter what k1 and k2 are... as long as they are greater than 1. That means f(log_k1 n) = O(log_k2 n) for all k1, k2 > 1.\nThis is important when comparing algorithms. As long as you use an \"efficient\" encoding (i.e., not a unary encoding), it doesn't matter what base you use: you can simply say f(n) = O(lg n) without specifying the base. This allows us to compare the runtime of algorithms without worrying about the exact encoding you use.\nSo n = b (which implies a unary encoding) is typically never used. Binary encoding is simplest, and doesn't provide a non-constant speed-up over any other encoding, so we usually just assume binary encoding.\nThat means we almost always assume that n = lg a + lg b as the input size, not n = a + b. A unary encoding is the only one that suggests linear growth, rather than exponential growth, as the values of a and b increase.\n\nOne area, though, where unary encodings are used is in distinguishing between strong NP-completeness and weak NP-completeness. Without getting into the theory, if a problem is NP-complete, we don't expect any algorithm to have a polynomial running time, that is, one bounded by O(n**k) for some constant k when using an efficient encoding.\nBut some algorithms do become polynomial if we allow a unary encoding. If a problem that is otherwise NP-complete becomes polynomial when using a unary encoding, we call that a weakly NP-complete problem. It's still slow, but it is in some sense \"faster\" than an algorithm where the size of the numbers doesn't matter.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":3},{"Q_Id":75448841,"CreationDate":"2023-02-14 13:54:23","Q_Score":2,"ViewCount":85,"Question":"What is the worst case time complexity (Big O notation) of the following function for positive integers?\ndef rec_mul(a:int, b:int) -> int:\n if b == 1:\n return a\n \n if a == 1:\n return b\n \n else:\n return a + rec_mul(a, b-1)\n\nI think it's O(n) but my friend claims it's O(2^n)\nMy argument:\nThe function recurs at any case b times, therefor the complexity is O(b) = O(n)\nHis argument:\nsince there are n bits, a\\b value can be no more than (2^n)-1,\ntherefor the max number of calls will be O(2^n)","Title":"Time complexity of recursion of multiplication","Tags":"python,recursion,time-complexity,big-o","AnswerCount":3,"A_Id":75449172,"Answer":"Your friend and you can both be right, depending on what n is. Another way to say this is that your friend and you are both wrong, since you both forgot to specify what n was.\nYour function takes an input that consists of two variables, a and b. These variables are numbers. If we express the complexity as a function of these numbers, it is really O(b log(ab)), because it consists of b iterations, and each iteration requires an addition of numbers of size up to ab, which takes log(ab) operations.\nNow, you both chose to express the complexity as a function of n rather than of a or b. 
This is okay; we often do this; but an important question is: what is n?\nSometimes we think it's \"obvious\" what is n, so we forget to say it.\n\nIf you choose n = max(a, b) or n = a + b, then you are right, the complexity is O(n).\nIf you choose n to be the length of the input, then n is the number of bits needed to represent the two numbers a and b. In other words, n = log(a) + log(b). In that case, your friend is right, the complexity is O(2^n).\n\nSince there is an ambiguity in the meaning of n, I would argue that it's meaningless to express the complexity as a function of n without specifying what n is. So, your friend and you are both wrong.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":3},{"Q_Id":75448841,"CreationDate":"2023-02-14 13:54:23","Q_Score":2,"ViewCount":85,"Question":"What is the worst case time complexity (Big O notation) of the following function for positive integers?\ndef rec_mul(a:int, b:int) -> int:\n if b == 1:\n return a\n \n if a == 1:\n return b\n \n else:\n return a + rec_mul(a, b-1)\n\nI think it's O(n) but my friend claims it's O(2^n)\nMy argument:\nThe function recurs at any case b times, therefor the complexity is O(b) = O(n)\nHis argument:\nsince there are n bits, a\\b value can be no more than (2^n)-1,\ntherefor the max number of calls will be O(2^n)","Title":"Time complexity of recursion of multiplication","Tags":"python,recursion,time-complexity,big-o","AnswerCount":3,"A_Id":75449149,"Answer":"You are both right.\nIf we disregard the time complexity of addition (and you might discuss whether you have reason to do so or not) and count only the number of iterations, then you are both right because you define:\nn = b\nand your friend defines\nn = log_2(b)\nso the complexity is O(b) = O(2^log_2(b)).\nBoth definitions are valid and both can be practical. 
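To see the difference on concrete numbers (b = 1024, chosen purely for illustration): the function makes about 1024 recursive steps; with n = b that is n iterations, while with n = log_2(1024) = 10 it is 1024 = 2^10 = 2^n iterations. 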
You look at the input values, your friend at the lengths of the input, in bits.\nThis is a good demonstration why big-O expressions mean nothing if you don't define the variables used in those expressions.","Users Score":2,"is_accepted":false,"Score":0.1325487884,"Available Count":3},{"Q_Id":75449511,"CreationDate":"2023-02-14 14:47:36","Q_Score":1,"ViewCount":1250,"Question":"I recently came across this error while using \"pip install\" with python version 3.10 and pip version 22.3.1:\nERROR: Exception:\nTraceback (most recent call last):\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\cli\\base_command.py\", line 160, in exc_logging_wrapper\n status = run_func(*args)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\cli\\req_command.py\", line 247, in wrapper\n return func(self, options, args)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\commands\\download.py\", line 103, in run\n build_tracker = self.enter_context(get_build_tracker())\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\cli\\command_context.py\", line 27, in enter_context\n return self._main_context.enter_context(context_provider)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\contextlib.py\", line 492, in enter_context\n result = _cm_type.__enter__(cm)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\contextlib.py\", line 135, in __enter__\n return next(self.gen)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\operations\\build\\build_tracker.py\", line 46, in get_build_tracker\n root = ctx.enter_context(TempDirectory(kind=\"build-tracker\")).path\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\utils\\temp_dir.py\", line 125, in __init__\n path = self._create(kind)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\utils\\temp_dir.py\", line 164, in _create\n path = os.path.realpath(tempfile.mkdtemp(prefix=f\"pip-{kind}-\"))\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 357, in mkdtemp\n prefix, suffix, dir, output_type = _sanitize_params(prefix, suffix, dir)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 126, in _sanitize_params\n dir = gettempdir()\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 299, in gettempdir\n return _os.fsdecode(_gettempdir())\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 292, in _gettempdir\n tempdir = _get_default_tempdir()\n File \"C:\\Program 
Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 223, in _get_default_tempdir\n raise FileNotFoundError(_errno.ENOENT,\nFileNotFoundError: [Errno 2] No usable temporary directory found in ['C:\\\\Users\\\\leon\\\\AppData\\\\Local\\\\Temp', 'C:\\\\Users\\\\leon\\\\AppData\\\\Local\\\\Temp', 'C:\\\\Users\\\\leon\\\\AppData\\\\Local\\\\Temp', 'C:\\\\windows\\\\Temp', 'c:\\\\temp', 'c:\\\\tmp', '\\\\temp', '\\\\tmp', 'C:\\\\Users\\\\leon']\nWARNING: There was an error checking the latest version of pip.\n\nBefore that there was an access error with the console history which I had been able to solve, but no matter what I try this error always comes up. I also tried reinstalling python 3.10 and I also tried it with python 3.11 but it's always this error when using pip install. There also was this weird error in Pycharm where it couldn't set up the virtual env, but this is also fixed already.\nThanks in advance.","Title":"Error with pip version 22.3.1 and Python version 3.10","Tags":"python,python-3.x,pip","AnswerCount":1,"A_Id":75449728,"Answer":"If you read the code for tempfile.py shown in the trace, and particularly the _get_default_tempdir() implementation, you will see that the code does the following:\n\nGet the list of all possible temp directory locations (e.g. this list is shown in the actual Exception)\nIterate the list it got\nTry to write a small random file into a given directory\nIf that works, return the directory name to be used as the temporary path.\nIf not, iterate the rest of the list from 2.\nIf the list gets iterated to the end, you will get the exception you are now seeing.\n\nSo, essentially, your pip install will try to write to a bunch of different temporary locations, but each one of those fails.\nMost likely your user does not have write access to any of those locations, or your filesystem is full, or there could be some AV tool that blocks writes to these locations, or some other reason.\nDo check these directories:\n\nC:\\Users\\leon\\AppData\\Local\\Temp\nC:\\Users\\leon\\AppData\\Local\\Temp\nC:\\Users\\leon\\AppData\\Local\\Temp\nC:\\windows\\Temp\nc:\\temp\nc:\\tmp\nC:\\Users\\leon\n\nOR before you run pip, set the TMP and TEMP environment variables to point to a location where you can write.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75449803,"CreationDate":"2023-02-14 15:11:21","Q_Score":1,"ViewCount":74,"Question":"Is there a way to get the exact date\/time from the web rather than taking the PC date\/time?\nI am creating a website where the answer is time relevant. But I don't want someone cheating by putting their pc clock back. When I do:\ntoday = datetime.datetime.today()\n\nor\nnow = datetime.datetime.utcnow().replace(tzinfo=utc)\n\nI still get whatever time my pc is set to.\nIs there a way to get the correct date\/time?","Title":"Django Correct Date \/ Time not PC date\/time","Tags":"python,django,python-datetime","AnswerCount":1,"A_Id":75452728,"Answer":"datetime.today() takes its time information from the server your application is running on. If you currently run your application with python manage.py localhost:8000, the server is your local PC. In this scenario, you can tamper with the time setting of your PC and see different results.\nBut in a production environment, your hosting server will provide the time information. 
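If you really do need a clock that no client or host setting can influence, one option is to query an NTP server; a minimal sketch using the third-party ntplib package (assuming it is installed and the network allows NTP):\nimport ntplib\nfrom datetime import datetime, timezone\n\n# pool.ntp.org is a public NTP pool; any reachable NTP server would do\nresp = ntplib.NTPClient().request('pool.ntp.org', version=3)\nprint(datetime.fromtimestamp(resp.tx_time, tz=timezone.utc))\n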
Unless you have a security issue, no unauthorized user should be able to change that.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75450060,"CreationDate":"2023-02-14 15:34:27","Q_Score":1,"ViewCount":50,"Question":"I sometimes use jupyter console to try out things in python.\nI'm running arch linux and installed everything through the arch repos.\nI hadn't ran jupyter console in quite some time, but while trying to launch it, i can't get it to work anymore.\nHere is the error :\nJupyter console 6.5.1\n\nPython 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0]\nType 'copyright', 'credits' or 'license' for more information\nIPython 8.10.0 -- An enhanced Interactive Python. Type '?' for help.\n\nIn [1]: \nTask exception was never retrieved\nfuture: exception=TypeError(\"object int can't be used in 'await' expression\")>\nTraceback (most recent call last):\n File \"\/usr\/lib\/python3.10\/site-packages\/jupyter_console\/ptshell.py\", line 842, in handle_external_iopub\n poll_result = await self.client.iopub_channel.socket.poll(500)\nTypeError: object int can't be used in 'await' expression\nShutting down kernel\n\nI tried reinstalling everything through pacman in case I accidentally changed something I shouldn't, but it changed nothing.\nAny tips on what could be wrong ?","Title":"jupyter console doesn't work on my computer anymore","Tags":"python,archlinux,jupyter-console","AnswerCount":1,"A_Id":75456724,"Answer":"I don't have enough rep to comment but I do not have the same issue. I can launch Jupyter QT Console just fine, and I have the same python version and IPython version. Just thought I would share, even though I don't use Jupyter Console. I do all my .ipynb in vscode and all other coding in neovim. I don't know if there is a difference between the console you are talking about and QT console, but Jupyter QT Console works fine for me, just unbearably light theme :).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75453995,"CreationDate":"2023-02-14 22:55:53","Q_Score":9,"ViewCount":7767,"Question":"It was working perfectly earlier but for some reason now I am getting strange errors.\npandas version: 1.2.3\nmatplotlib version: 3.7.0\nsample dataframe:\ndf\n cap Date\n0 1 2022-01-04\n1 2 2022-01-06\n2 3 2022-01-07\n3 4 2022-01-08\n\ndf.plot(x='cap', y='Date')\nplt.show()\n\ndf.dtypes\ncap int64\nDate datetime64[ns]\ndtype: object\n\nI get a traceback:\nTraceback (most recent call last):\n File \"\/Library\/Developer\/CommandLineTools\/Library\/Frameworks\/Python3.framework\/Versions\/3.8\/lib\/python3.8\/code.py\", line 90, in runcode\n exec(code, self.locals)\n File \"\", line 1, in \n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_core.py\", line 955, in __call__\n return plot_backend.plot(data, kind=kind, **kwargs)\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_matplotlib\/__init__.py\", line 61, in plot\n plot_obj.generate()\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_matplotlib\/core.py\", line 279, in generate\n self._setup_subplots()\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_matplotlib\/core.py\", line 337, in _setup_subplots\n fig = self.plt.figure(figsize=self.figsize)\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/_api\/deprecation.py\", line 454, in wrapper\n return func(*args, **kwargs)\n File 
\"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 813, in figure\n manager = new_figure_manager(\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 382, in new_figure_manager\n _warn_if_gui_out_of_main_thread()\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 360, in _warn_if_gui_out_of_main_thread\n if _get_required_interactive_framework(_get_backend_mod()):\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 208, in _get_backend_mod\n switch_backend(rcParams._get(\"backend\"))\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 331, in switch_backend\n manager_pyplot_show = vars(manager_class).get(\"pyplot_show\")\nTypeError: vars() argument must have __dict__ attribute","Title":"Pandas plot, vars() argument must have __dict__ attribute?","Tags":"python,pandas,matplotlib","AnswerCount":2,"A_Id":75657421,"Answer":"The solution by NEStenerus did not work for me, because I don't have tkinter installed and did not want to change my package configuration.\nAlternative Fix\nInstead, you can disable the \"show plots in tool window\" option, by going to\nSettings | Tools | Python Scientific | Show plots in tool window and unchecking it.","Users Score":4,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75454498,"CreationDate":"2023-02-15 00:30:10","Q_Score":1,"ViewCount":28,"Question":"I have a n x n dimensional numpy array of eigenvectors as columns, and want to return the last v of them as another array. However, they are currently in ascending order, and I wish to return them in descending order.\nCurrently, I'm attempting to index as follows\neigenvector_array[:,-1:-v]\n\nBut this doesn't seem to be working. Is there a more efficient way to do this?","Title":"Reverse Index through a numPy ndarray","Tags":"python,numpy,indexing","AnswerCount":2,"A_Id":75454533,"Answer":"Lets re-write this to make it a little less confusing.\neigenvector_array[:,-1:-v]\nto:\neigenvector_array[:][-1:-v]\nNow remember how slicing works in python:\n[start:stop:step]\nIf you set step. 
to -1 it will return them in reverse, so:\neigenvector_array[:,-1:-v:-1] should be your answer.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75457859,"CreationDate":"2023-02-15 09:37:32","Q_Score":2,"ViewCount":42,"Question":"I am building a snakmake pipeline, in the final rule i have an existing files that i want the snakefile to append to:\nHere is the rule:\nrule Amend: \n input:\n Genome_stats = expand(\"global_temp_workspace\/result\/{sample}.Genome.stats.tsv\", sample= sampleID),\n GenomeSNV = expand(\"global_temp_workspace\/result\/{sample}.Genome.SNVs.tsv\", sample= sampleID),\n GenomesConsensus = expand(\"global_temp_workspace\/analysis\/{sample}.renamed.consensus.fasta\", sample= sampleID),\n output: \n Genome_stats=\"global_temp_workspace\/result\/Genome.stats.tsv\",\n GenomeSNV=\"global_temp_workspace\/result\/Genome.SNVs.tsv\",\n GenomesConsensus=\"global_temp_workspace\/result\/Genomes.consensus.fasta\"\n threads: workflow.cores\n shell: \n \"\"\"\n cat {input.Genome_stats} | tail -n +2 >> {output.Genome_stats} ;\\ \n cat {input.GenomesConsensus} >> {output.GenomesConsensus} ;\\ \n cat {input.GenomeSNV} | tail -n +2 >> {output.GenomeSNV} ;\\ \n \"\"\"\n\nhow can i solve it?\nThank you\nI tried to do the dynamic() in the output and adding the touch {output.Genome_stats} {output.GenomesConsensus} {output.GenomeSNV} at the end of the shell. but did not work.\nwhenevr i run the snakemake i get:\n$ time snakemake --snakefile V2.5.smk --cores all \nBuilding DAG of jobs...\nNothing to be done.\nComplete log: .snakemake\/log\/2023-02-15T123050.937009.snakemake.log\n\nreal 0m1.022s\nuser 0m2.744s\nsys 0m2.797s","Title":"How can I make snakefile rule append the results to the input file of the rule file?","Tags":"python,pipeline,snakemake","AnswerCount":1,"A_Id":75460060,"Answer":"This behaviour is not idempotent and is usually a recipe for trouble. What happens if the machine breaks down or the process is killed during the write stage? What happens if a rule is accidentally ran twice?\nAs advised by @Cornelius Roemer in the comment to the question, the safer way is to write to a new file. If the overwrite-like behaviour is desired, then the new file can be moved to the original file location, but some record\/checkpoint file should be created to make sure that Snakemake knows not to re-process the file.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75459812,"CreationDate":"2023-02-15 12:29:13","Q_Score":1,"ViewCount":84,"Question":"I am developing python projects under git control using poetry to manage my venvs.\nFrom my project's directory I issue a \"poetry shell\" command and my new shell command prompt becomes something like:\n(isagog-ai-py3.10) (base) bob@Roberts-Mac-mini isagog-ai %\n\nwhere the first part in bracket gives me the name pf the project and the python version I'm using, and the last part of the prompt is my current directory name.\nBut what is it that gives me the \"(base)\" part? I'm actually working on a \"dev\" branch.","Title":"Poetry shell command prompt: what gives the (base) part?","Tags":"git,shell,python-venv,python-poetry","AnswerCount":1,"A_Id":75463086,"Answer":"This is base environment from conda.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75462208,"CreationDate":"2023-02-15 15:46:09","Q_Score":1,"ViewCount":67,"Question":"I am trying to split my django settings into production and development. 
The biggest question that I have is how to use two different databases for the two environments? How to deal with migrations?\nI tried changing the settings for the development server to use a new empty database, however, I cannot apply the migrations to create the tables that I already have in the production database.\nAll the guides on multiple databases focus on the aspect of having different types of data in different databases (such as a users database, etc.) but not the way I am looking for.\nCould you offer some insights about what the best practices are and how to manage the two databases also in terms of migrations?\nEDIT:\nHere is what I get when I try to run python manage.py migrate on the new database:\nTraceback (most recent call last):\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 85, in _execute\n return self.cursor.execute(sql, params)\npsycopg2.errors.UndefinedTable: relation \"dashboard_posttags\" does not exist\nLINE 1: ...ags\".\"tag\", \"dashboard_posttags\".\"hex_color\" FROM \"dashboard...\n ^\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"\/......\/manage.py\", line 22, in \n main()\n File \"\/......\/manage.py\", line 18, in main\n execute_from_command_line(sys.argv)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/__init__.py\", line 425, in execute_from_command_line\n utility.execute()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/__init__.py\", line 419, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 373, in run_from_argv\n self.execute(*args, **cmd_options)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 417, in execute\n output = self.handle(*args, **options)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 90, in wrapped\n res = handle_func(*args, **kwargs)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/commands\/migrate.py\", line 75, in handle\n self.check(databases=[database])\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 438, in check\n all_issues = checks.run_checks(\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/checks\/registry.py\", line 77, in run_checks\n new_errors = check(app_configs=app_configs, databases=databases)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/checks\/urls.py\", line 13, in check_url_config\n return check_resolver(resolver)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/checks\/urls.py\", line 23, in check_resolver\n return check_method()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/urls\/resolvers.py\", line 446, in check\n for pattern in self.url_patterns:\n File 
\"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/utils\/functional.py\", line 48, in __get__\n res = instance.__dict__[self.name] = self.func(instance)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/urls\/resolvers.py\", line 632, in url_patterns\n patterns = getattr(self.urlconf_module, \"urlpatterns\", self.urlconf_module)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/utils\/functional.py\", line 48, in __get__\n res = instance.__dict__[self.name] = self.func(instance)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/urls\/resolvers.py\", line 625, in urlconf_module\n return import_module(self.urlconf_name)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/importlib\/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"\", line 1030, in _gcd_import\n File \"\", line 1007, in _find_and_load\n File \"\", line 986, in _find_and_load_unlocked\n File \"\", line 680, in _load_unlocked\n File \"\", line 850, in exec_module\n File \"\", line 228, in _call_with_frames_removed\n File \"\/......\/app\/urls.py\", line 11, in \n from main_platform.views.investor import AccountView, profile, app_home_redirect\n File \"\/......\/main_platform\/views\/investor.py\", line 118, in \n class PostFilter(django_filters.FilterSet):\n File \"\/......\/main_platform\/views\/investor.py\", line 124, in PostFilter\n for tag in PostTags.objects.all():\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/query.py\", line 280, in __iter__\n self._fetch_all()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/auto_prefetch\/__init__.py\", line 98, in _fetch_all\n super()._fetch_all()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/query.py\", line 1354, in _fetch_all\n self._result_cache = list(self._iterable_class(self))\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/query.py\", line 51, in __iter__\n results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/sql\/compiler.py\", line 1202, in execute_sql\n cursor.execute(sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 99, in execute\n return super().execute(sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/sentry_sdk\/integrations\/django\/__init__.py\", line 563, in execute\n return real_execute(self, sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 67, in execute\n return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 76, in _execute_with_wrappers\n return executor(sql, params, many, context)\n File 
\"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 85, in _execute\n return self.cursor.execute(sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/utils.py\", line 90, in __exit__\n raise dj_exc_value.with_traceback(traceback) from exc_value\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 85, in _execute\n return self.cursor.execute(sql, params)\ndjango.db.utils.ProgrammingError: relation \"dashboard_posttags\" does not exist\nLINE 1: ...ags\".\"tag\", \"dashboard_posttags\".\"hex_color\" FROM \"dashboard...","Title":"Separate databases for development and production in Djang","Tags":"python,django,postgresql","AnswerCount":2,"A_Id":75463997,"Answer":"If you have a new empty database, you can just run \"python manage.py migrate\" and all migrations will be executed on the new database. The already done migrations will be stored in a table in that database so that django always \"remembers\" the migrations state of each individual database. Of course that new database will only have the tables structure - there is not yet any data copied!\nDoes this answer your question?","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75462560,"CreationDate":"2023-02-15 16:15:04","Q_Score":1,"ViewCount":52,"Question":"I'm reading in a list of samples from a text file and in that list every now and then there is a \"channel n\" checkpoint. The file is terminated with the text eof. The code that works until it hits the eof which it obviously cant cast as a float\nlog = open(\"mq_test.txt\", 'r')\ndata = []\nfor count, sample in enumerate(log):\n if \"channel\" not in sample:\n data.append(float(sample))\n \nprint(count)\nlog.close()\n\nSo to get rid of the ValueError: could not convert string to float: 'eof\\n' I added an or to my if as so,\nlog = open(\"mq_test.txt\", 'r')\ndata = []\nfor count, sample in enumerate(log):\n if \"channel\" not in sample or \"eof\" not in sample:\n data.append(float(sample))\n \nprint(count)\nlog.close()\n\nAnd now I get ValueError: could not convert string to float: 'channel 00\\n'\nSo my solution has been to nest the ifs & that works.\nCould somebody explain to me why the or condition failed though?","Title":"Unexpected behavior using if .. or .. Python","Tags":"python,if-statement","AnswerCount":2,"A_Id":75462636,"Answer":"I think it's a logic issue which \"and\" might be used instead of \"or\"","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75463993,"CreationDate":"2023-02-15 18:25:05","Q_Score":1,"ViewCount":501,"Question":"I have two scripts:\nfrom fastapi import FastAPI\nimport asyncio\n\napp = FastAPI()\n\n@app.get(\"\/\")\nasync def root():\n a = await asyncio.sleep(10)\n return {'Hello': 'World',}\n\nAnd second one:\nfrom fastapi import FastAPI\nimport time\n \napp = FastAPI()\n\n@app.get(\"\/\")\ndef root():\n a = time.sleep(10)\n return {'Hello': 'World',}\n\nPlease note the second script doesn't use async. Both scripts do the same, at first I thought, the benefit of an async script is that it allows multiple connections at once, but when testing the second code, I was able to run multiple connections as well. The results are the same, performance is the same and I don't understand why would we use async method. 
Would appreciate your explanation.","Title":"What does async actually do in FastAPI?","Tags":"python-3.x,asynchronous,async-await,fastapi","AnswerCount":2,"A_Id":75464345,"Answer":"FastAPI Docs:\n\nYou can mix def and async def in your path operation functions as much as you need and define each one using the best option for you. FastAPI will do the right thing with them.\nAnyway, in any of the cases above, FastAPI will still work asynchronously and be extremely fast.\n\nBoth endpoints will be executed asynchronously, but if you define your endpoint function asynchronously, it will allow you to use await keyword and work with asynchronous third party libraries","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75464645,"CreationDate":"2023-02-15 19:33:13","Q_Score":2,"ViewCount":75,"Question":"I was going through twitter when i came across the function below\ndef func():\n d = {1: \"I\", 2.0: \"love\", 2: \"Python\"}\n return d[2.0]\nprint(func())\n\nWhen i ran the code, i got Python as the output and i expected it to be love. I know that you cannot have multiple key in a dictionary. However what i want to know is why Python Interpreter considers 2.0 and 2 as the same and returns the value of 2","Title":"Why does python interpreter consider 2.0 and 2 to be the same in an when used as a dictionary key","Tags":"python,function,dictionary","AnswerCount":2,"A_Id":75464741,"Answer":"In your example, the keys 2.0 and 2 are considered the same because their hash values are equal. This is because in Python, float and integer objects can be equal even if they have different types and representations. In particular, the integer 2 and the floating-point number 2.0 have the same value, so they are considered equal.\nThat's why you should always use consistent types for keys in dictionaries. Always remember to use integers or floats.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75466757,"CreationDate":"2023-02-16 00:35:55","Q_Score":1,"ViewCount":62,"Question":"I've installed flake 8 in the terminal, but when i try and select python linter on vs code in the command palette i get the following error: \"Command 'Python: Select Linter' resulted in an error (command 'python.setLinter' not found)\". I'm on a mac, version 11.5.2.\nI have seen other solutions for this problem for windows on stack but not sure how to proceed on mac, please advise","Title":"trying to open flake8 on vs code from command palette error on mac","Tags":"python,visual-studio-code,flake8","AnswerCount":1,"A_Id":75466977,"Answer":"There are many possibilities. You can try the following methods:\n\nReinstall Python extension or use Pre-release version.\nStart VsCode as administrator.\nTry to delete the.vscode folder in the project.","Users Score":-2,"is_accepted":false,"Score":-0.3799489623,"Available Count":1},{"Q_Id":75468479,"CreationDate":"2023-02-16 06:14:37","Q_Score":1,"ViewCount":250,"Question":"I'm using Python 3.7.4 in a venv environment.\nI ran pip install teradataml==17.0.0.3 which installs a bunch of dependent packages, including sqlalchemy.\nAt the time, it installed SQLAlchemy==2.0.2.\nI ran the below code, and received this error:\nArgumentError: Additional keyword arguments are not accepted by this function\/method. 
The presence of **kw is for pep-484 typing purposes\nfrom teradataml import create_context \n\nclass ConnectToTeradata:\n def __init__(self):\n \n host = 'AWESOME_HOST'\n username = 'johnnyMnemonic'\n password = 'keanu4life'\n\n self.connection = create_context(host = host, user = username, password = password)\n\n def __del__(self):\n print(\"Closing connection\")\n self.connection.dispose()\n\nConnectToTeradata()\n\nIf I install SQLAlchemy==1.4.26 before teradataml, I no longer get the error and successfully connect.\nThis suggests SQLAlchemy==2.0.2 is not compatible with teradataml==17.0.0.3.\nI expected that installing an older version of teradataml would also install older, compatible versions of dependent packages.\nWhen I install teradataml==17.0.0.3, can I force pip to install only compatible versions of the dependent packages?","Title":"When installing an old version of a package, can I install only compatible versions of dependent packages?","Tags":"python,python-3.x,sqlalchemy,teradata","AnswerCount":1,"A_Id":75525095,"Answer":"We are aware of the compatibility issues that were introduced in SQLAlchemy package 2.0.x versions. The new 2.0.x package directly affects the Teradata SQL dialect in the teradatasqlalchemy package. As a temporary measure, please downgrade SQLAlchemy to 1.4.46.\nTeradata Engineering is working on making the teradatasqlalchemy package compatible with the newer versions, and a new package is slated to be released in March 2023.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75471318,"CreationDate":"2023-02-16 11:02:36","Q_Score":19,"ViewCount":14233,"Question":"Whenever I try to read Excel using\npart=pd.read_excel(path,sheet_name = mto_sheet)\n\nI get this exception:\n\n 'ReadOnlyWorksheet' object has no attribute 'defined_names'\n\nThis happens if I use Visual Studio Code and Python 3.11. However, I don't have this problem when using Anaconda. Any reason for that?","Title":"'ReadOnlyWorksheet' object has no attribute 'defined_names'","Tags":"python,exception","AnswerCount":3,"A_Id":76009052,"Answer":"Possible workaround: create a new Excel file with a default worksheet name (\"Sheet1\" etc.) and copy and paste the data there.\n(tested on Python 3.10.9 + openpyxl==3.1.1)","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75472653,"CreationDate":"2023-02-16 12:59:08","Q_Score":1,"ViewCount":40,"Question":"The following JSON file (raw data as I get it back from an API call):\n{\n \"code\": \"200000\",\n \"data\": {\n \"A\": \"0.43221600\",\n \"B\": \"0.02311155\",\n \"C\": \"0.55057515\",\n \"D\": \"2.15957924\",\n \"E\": \"0.03818908\",\n \"F\": \"0.26853420\",\n \"G\": \"0.15007500\",\n \"H\": \"0.00685843\",\n \"I\": \"0.08500848\"\n }\n}\n\nwill create this output in Pandas when using this code (one column per entry in \"data\"). The result is a dataframe with many columns:\nimport pandas as pd\nimport json \nf = open('file.json', 'r')\nj1 = json.load(f)\npd.json_normalize(j1)\n\n code data.A data.B data.C data.D data.E data.F data.G data.H data.I\n0 200000 0.43221600 0.02311155 0.55057515 2.15957924 0.03818908 0.26853420 0.15007500 0.00685843 0.08500848\n\n\nI guess that Pandas should provide a built-in function with which the data set in the \"data\" attribute could be split into two new columns named \"name\" and \"value\", including a new index. 
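To show the kind of built-in I am imagining, my best guess was something along these lines (untested, the method names are just my assumption):\npd.Series(j1['data']).rename_axis('name').reset_index(name='value')\n\n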
But I cannot figure out how that works.\nI would need this output:\n name value\n0 A 0.43221600\n1 B 0.02311155\n2 C 0.55057515\n3 D 2.15957924\n4 E 0.03818908\n5 F 0.26853420\n6 G 0.15007500\n7 H 0.00685843\n8 I 0.08500848","Title":"pandas json dictionary to dataframe, reducing columns by creating new columns","Tags":"python,pandas,dataframe","AnswerCount":3,"A_Id":75472833,"Answer":"pd.DataFrame.from_dict(j1)\nshould give you most of the result you need: it yields the \"data\" values indexed by name, so taking that column and resetting the index produces the \"name\"\/\"value\" layout.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75475387,"CreationDate":"2023-02-16 16:42:50","Q_Score":1,"ViewCount":180,"Question":"I have a use case where messages from an input_topic get consumed and sent to a list of topics. I'm using producers[i].send_async(msg, callback=callback) where callback = lambda res, msg: consumer.acknowledge(msg). In this case, consumer is subscribed to the input_topic. I checked the backlog of input_topic and it has not decreased at all. I would appreciate it if you could point out how to deal with this. What would be the best alternative?\nThanks in advance!","Title":"Pulsar producer send_async() with callback function acknowledging the sent message","Tags":"apache-pulsar,pulsar,python-pulsar","AnswerCount":1,"A_Id":75485101,"Answer":"Have you checked that consumer.acknowledge(msg) has actually been called? One possibility is that the producer cannot write messages to the topic; if the producer is configured with an infinite send timeout, you will never get the callback.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75475397,"CreationDate":"2023-02-16 16:43:25","Q_Score":1,"ViewCount":151,"Question":"I have a numpy array with a shape of (3, 4096). However, I need its shape to be (4096, 3). How do I accomplish this?","Title":"How to reverse the shape of a numpy array","Tags":"python,python-3.x,numpy,numpy-ndarray","AnswerCount":1,"A_Id":75552015,"Answer":"Use:\narr=arr.T\n(or)\narr=np.transpose(arr)\nwhere arr is your array with shape (3,4096). Note that arr.reshape(4096, 3) is not equivalent: reshape keeps the elements in their original order instead of transposing, so its rows would not correspond to the original columns.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75476008,"CreationDate":"2023-02-16 17:38:22","Q_Score":1,"ViewCount":46,"Question":"I have a search program that helps users find files on their system. I would like to have it perform tasks such as opening the file in an editor, or changing the parent shell's directory to the file's parent folder when exiting my Python program.\nRight now I achieve this by running a bash wrapper that executes the commands the Python program writes to stdout. I was wondering if there was a way to do this without the wrapper.\nNote:\nsubprocess and os commands create a subshell and do not alter the parent shell. This is an acceptable answer for opening a file in the editor, but not for moving the current working directory of the parent shell to the desired location on exit.\nAn acceptable alternative might be to open a subshell in a desired directory, for example:\n#this opens a bash shell, but I can't send it to the right directory\nsubprocess.run(\"bash\")","Title":"Python execute code in parent shell upon exit","Tags":"python,posix","AnswerCount":1,"A_Id":75476539,"Answer":"This, if doable, will require quite a hack. 
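To make the limitation concrete, here is a minimal sketch (the target directory is just an example): opening a subshell in a chosen directory is the easy part,\nimport subprocess\n\n# the child bash starts in \/tmp, but the shell that launched\n# this Python program keeps its own working directory\nsubprocess.run([\"bash\"], cwd=\"\/tmp\")\n\nwhereas changing the directory of the invoking shell itself is not possible this way. 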
Because the PWD is passed from the shell into the subprocess - in this case, the Python process - as a subprocess-owned variable, and changing it won't modify what is in the parent program.\nOn Unix, it may be achievable by opening a detached sub-process that pipes keystrokes into the TTY after the main program exits - I find this more likely to succeed than any other approach.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75476135,"CreationDate":"2023-02-16 17:49:41","Q_Score":2,"ViewCount":9580,"Question":"I am receiving the following error while converting a Python file to .exe.\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue.\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Tags":"python,python-3.x,anaconda,conda,exe","AnswerCount":6,"A_Id":75640542,"Answer":"The error message you received suggests that the 'pathlib' package installed in your Anaconda environment is causing compatibility issues with PyInstaller. As a result, PyInstaller is unable to create a standalone executable from your Python script.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75476135,"CreationDate":"2023-02-16 17:49:41","Q_Score":2,"ViewCount":9580,"Question":"I am receiving the following error while converting a Python file to .exe.\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue.\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Tags":"python,python-3.x,anaconda,conda,exe","AnswerCount":6,"A_Id":75640516,"Answer":"I faced the same problem. I ran 'conda remove pathlib', but it didn't work - the result was that the package was not found. So I looked in the 'lib' directory, where there was a folder named 'path-list-....'; I deleted it, and it began working!","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":3},{"Q_Id":75476135,"CreationDate":"2023-02-16 17:49:41","Q_Score":2,"ViewCount":9580,"Question":"I am receiving the following error while converting a Python file to .exe.\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue.\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. 
Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Tags":"python,python-3.x,anaconda,conda,exe","AnswerCount":6,"A_Id":75687401,"Answer":"I've experienced the same problem. I managed to solve it by downgrading pyInstaller to 5.1 (from 5.8) without touching pathlib. An additional possibility to consider.","Users Score":6,"is_accepted":false,"Score":1.0,"Available Count":3},{"Q_Id":75478836,"CreationDate":"2023-02-16 23:12:36","Q_Score":1,"ViewCount":71,"Question":"The problem with this program is that the if\/else statements are not working properly. When the answer is \"yes\", the program also prints the question for when the answer is \"no\". Another problem is that it's not printing rate1 when it's supposed to.\n# This program calculates the shipping cost as shown in the slide\ninternational = input(\"Are you shipping internationally (yes or no)? \")\nrate1 = 5\nrate2 = 10\n\nif international.upper() == \"yes\":\n shippingRate = rate2\nelse:\n continental = input(\"Are you shipping continental (yes or no)? \")\n if continental.upper() == \"yes\":\n shippingRate = rate1\n else:\n shippingRate = rate2\n \nprint(\"The shipping rate is \" + (\"%.2f\" % shippingRate))","Title":"I am trying to test a program that prints a shipping rate based on yes or no answers","Tags":"python","AnswerCount":2,"A_Id":75478867,"Answer":"I notice you're comparing .upper() to \"yes\", which can never match, because upper() never returns lowercase letters.\nThe comparison will work with == \"YES\" (or compare international.lower() == \"yes\"). The same applies to the continental check, which is why the rate1 branch is never taken.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75479380,"CreationDate":"2023-02-17 01:10:27","Q_Score":1,"ViewCount":62,"Question":"I am trying to solve the differential equation 4(y')^3-y'=1\/x^2 in Python. I am familiar with the use of odeint to solve coupled ODEs and linear ODEs, but can't find much guidance on nonlinear ODEs such as the one I'm grappling with.\nI attempted to use odeint and scipy but can't seem to implement it properly.\nAny thoughts are much appreciated\nNB: y is a function of x","Title":"Solving nonlinear differential equations in python","Tags":"python,scipy,differential-equations,odeint","AnswerCount":1,"A_Id":75481202,"Answer":"The problem is that you get 3 valid solutions for the direction at each point of the phase space (counting double roots). But each selection criterion breaks down at double roots.\nOne way is to use a DAE solver (which does not exist in scipy) on the system y'=v, 4v^3-v=x^-2.\nThe second way is to take the derivative of the equation to get an explicit second-order ODE y''=-2\/(x^3*(12*y'^2-1)).\nBoth methods require selecting the initial direction from the 3 roots of the cubic at the initial point.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75479740,"CreationDate":"2023-02-17 02:30:10","Q_Score":1,"ViewCount":53,"Question":"While parsing file names of TV shows, I would like to extract information about them to use for renaming. I have a working model, but it currently uses 28 if\/elif statements covering every variant of filename I've seen over the last few years. 
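A representative pair of branches, heavily trimmed (the real patterns and variable names differ):\nif re.search(r'[Ss](\d+)[Ee](\d+)', filename):\n season, episode = re.search(r'[Ss](\d+)[Ee](\d+)', filename).groups()\nelif re.search(r'(\d+)x(\d+)', filename):\n season, episode = re.search(r'(\d+)x(\d+)', filename).groups()\n\n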
I'd love to be able to condense this to something that I'm not ashamed of, so any help would be appreciated.\nPhase one of this code repentance is to hopefully grab multiple episode numbers. I've gotten as far as the code below, but in the first entry it only displays the first episode number and not all three.\nimport re\n\ndef main():\n pattern = '(.*)\.S(\d+)[E(\d+)]+'\n strings = ['blah.s01e01e02e03', 'foo.s09e09', 'bar.s05e05']\n\n #print(strings)\n for string in strings:\n print(string)\n result = re.search(\"(.*)\.S(\d+)[E(\d+)]+\", string, re.IGNORECASE)\n print(result.group(2))\n\nif __name__== \"__main__\":\n main()\n\nThis outputs:\nblah.s01e01e02e03\n01\nfoo.s09e09\n09\nbar.s05e05\n05\n\nIt's probably trivial, but regular expressions might as well be Cuneiform most days. Thanks in advance!","Title":"Is there a way to find (potentially) multiple results with re.search?","Tags":"python,regex","AnswerCount":3,"A_Id":75479780,"Answer":"re.findall instead of re.search will return a list of all matches. Note also that [E(\d+)]+ is a character class, not a repeated group: it matches any run of the characters E, (, ), + and digits, so only the season group is captured. Something like re.findall('E(\d+)', string, re.IGNORECASE) on the part after the season will collect every episode number.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75480557,"CreationDate":"2023-02-17 05:28:01","Q_Score":1,"ViewCount":48,"Question":"I am new to working in Python. I'm not able to understand how I can send the correct input to the query.\n list_of_names = []\n\n for country in country_name_list.keys():\n list_of_names.append(getValueMethod(country))\n\n sql_query = f\"\"\"SELECT * FROM table1\n where name in (%s);\"\"\"\n \n\n db_results = engine.execute(sql_query, list_of_names).fetchone()\n\n\nGives the error \"not all arguments converted during string formatting\"","Title":"Receiving Error not all arguments converted during string formatting","Tags":"python,sqlalchemy","AnswerCount":2,"A_Id":75480709,"Answer":"If I understand it right, there is a simpler solution. If you write curly brackets {} instead of parentheses (), and place inside the brackets a variable that contains the %s value, it should work. I don't know how SQL works, but you should use one \" on each side, not three.\nSorry, I'm not a native English speaker, so I may not have understood the question correctly and this may not help.","Users Score":-2,"is_accepted":false,"Score":-0.1973753202,"Available Count":1},{"Q_Id":75485006,"CreationDate":"2023-02-17 13:36:35","Q_Score":1,"ViewCount":54,"Question":"I need to find elements on a page by looking for text(), so I use an xlsx file as a database with all the texts that will be searched.\nIt turns out that it is showing the error reported in the title of this post. This is my code:\n search_num = str(\"'\/\/a[contains(text(),\" + '\"' + row[1] + '\")' + \"]'\")\n print(search_num)\n xPathnum = self.chrome.find_element(By.XPATH, search_num)\n print(xPathnum.get_attribute(\"id\"))\n\nprint(search_num) returns = '\/\/a[contains(text(),\"0027341-66.2323.0124\")]'\nDoes anyone know where I'm going wrong? Despite there being similar posts on the forum, none of them solved my problem. 
Grateful for the attention.","Title":"TypeError: Failed to execute 'evaluate' on 'Document': The result is not a node set, and therefore cannot be converted to the desired type","Tags":"python,selenium-webdriver,xpath,selenium-chromedriver","AnswerCount":2,"A_Id":75485323,"Answer":"Looks like you have extra quotes here:\nstr(\"'\/\/a[contains(text(),\" + '\"' + row[1] + '\")' + \"]'\")\nTry changing it to f\"\/\/a[contains(text(),'{row[1]}')]\"","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75486770,"CreationDate":"2023-02-17 16:18:10","Q_Score":1,"ViewCount":31,"Question":"I have a Pandas dataframe equivalent to:\n 'A' 'B'\n'i1' 'i2' 'i3'\n 1 2 4 3 0\n 1 1 2 3 3\n 1 1 2 1 0\n 1 2 4 0 9\n 1 1 2 2 6\n 2 1 1 1 8\n\nwhere the ix are index columns and 'A' and 'B' are normal columns. I want to make sure that the indexes are strictly ordered and that, when indexes are duplicated, rows are ordered by column 'A':\n 'A' 'B'\n'i1' 'i2' 'i3'\n 1 1 2 1 0\n 1 1 2 2 6\n 1 1 2 3 3\n 1 2 4 0 9\n 1 2 4 3 0\n 2 1 1 1 8\n \n\nWould df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') do it? And if so, would it do it in a stable way? Or could the .sort_index() operation disrupt the previous .sort_values() operation in such a way that, for the duplicated indexes, the values of 'A' are no longer ordered?","Title":"Would df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') be a stable and valid way to sort by index and column?","Tags":"python,pandas,sorting","AnswerCount":1,"A_Id":75487303,"Answer":"When you chain sorts like this, only the last sort key is guaranteed to be globally ordered. Because mergesort is stable, rows that compare equal on the last key keep the order produced by the previous sort, and the non-key columns likewise remain in their earlier order within each group of ties.\nTo answer your question: yes, your method will keep 'A' ordered within duplicated index values.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75486790,"CreationDate":"2023-02-17 16:19:46","Q_Score":1,"ViewCount":94,"Question":"Good day. I'm trying to send a document generated on the server to the user at the click of a button, using Flask.\nMy task is this:\nCreate a document (without saving it on the server) and send it to the user.\nUsing JavaScript, I track the button click on the form and use fetch to make a request to the server. The server retrieves the necessary data and creates a Word document based on it. How can I form a response to the request so that the file starts downloading?\nCode from the creation of the document onward. (The text of the Word document has been replaced.)\nPython Flask:\ndocument = Document()\ndocument.add_heading(\"Some head-title\")\ndocument.add_paragraph('Some text')\nf = BytesIO()\ndocument.save(f)\nf.seek(0)\nreturn send_file(f, as_attachment=True, download_name='some.docx')\n\nHowever, the file does not start downloading.\nHow can I send a file from the server to the user?\nEdits\nThis is my JS request:\nfetch('\/getData', {\n method : 'POST',\n headers: {\n 'Accept': 'application\/json',\n 'Content-Type': 'application\/json'\n },\n body: JSON.stringify({\n someData: someData,\n })\n})\n.then(response => \n response.text()\n)\n.then(response =>{\n console.log(response);\n});\n\nThis is my html\n
\n