diff --git "a/validation.json" "b/validation.json" new file mode 100644--- /dev/null +++ "b/validation.json" @@ -0,0 +1 @@ +[{"Q_Id":74986918,"CreationDate":"2023-01-02 20:46:31","Q_Score":1,"ViewCount":49,"Question":"So i have a YAML file with the following parameters which I read into a dictionary\nThe part i don't get is how to connect it to the SQL engine, I have looked at the documentation and what i need is to break the creds as\ndialect+driver:\/\/username:password@host:port\/database\n\nbut i'm not sure what the dialect and drive is in this case\nRDS_HOST: XXXX.YYYYY.eu-west-1.rds.amazonaws.com\nRDS_PASSWORD: XXXXXX\nRDS_USER: XXXXXX\nRDS_DATABASE: postgres\nRDS_PORT: XXXX","Title":"Trying to connect to SQLalchemy engine","Tags":"python,postgresql,sqlalchemy","AnswerCount":2,"A_Id":74987014,"Answer":"The dialect can either be mysql or any relational database management system it supports.\nFor mysql the driver is mysqldb.\nFor postgresql the driver is psycopg2.\nNote: You may need to install the driver too","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":74986992,"CreationDate":"2023-01-02 20:56:34","Q_Score":1,"ViewCount":59,"Question":"# Label\n self.intro_label = Label(self, text = '\ud83d\udcb0Currency Convertor\ud83d\udcb0', fg = '#1C1075', relief = tk.RAISED, borderwidth = 3)\n self.intro_label.config(font = ('Courier',15,'bold'))\n\n self.date_label = Label(self, text = f\"Date : {self.currency_converter.data['date']}\", relief = tk.GROOVE, borderwidth = 5)\n\n self.intro_label.place(x = 10 , y = 5)\n self.date_label.place(x = 160, y= 50)\n\nI would like to center the title \"\ud83d\udcb0Currency Convertor\ud83d\udcb0\" in the GUI.\nuse the center function","Title":"how to center the title in python tkinter?","Tags":"python,tkinter","AnswerCount":2,"A_Id":74987042,"Answer":"As I think you have to set label configuration configure(anchor=\"center\")\nin your case self.intro_label.configure(anchor=\"center\")","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74987550,"CreationDate":"2023-01-02 22:26:13","Q_Score":2,"ViewCount":72,"Question":"I wrote an array-like class 'Vector' which behaves like an 'np.ndarray' but has a few extra attributes and methods to be used in a geometry engine (which are omitted here).\nThe MVP below overrides '__ array_function __()' to ensure that a Vector object is returned when using the np.dot function.\nWhen I benchmarked my code against plain-vanilla np.array objects, I noticed a severe performance hit:\nimport numpy as np\nfrom timeit import timeit\n\n\nclass Vector(np.ndarray):\n\n def __new__(cls, input_array):\n return np.array(input_array).view(cls)\n\n def __array_function__(self, func, types, args, kwargs):\n if func == np.dot:\n out = np.dot(np.asarray(args[0]), np.asarray(args[1]))\n return out.view(Vector)\n\nBenchmark:\nv = Vector([1, 1, 1])\nI = np.identity(3)\n\nprint(type(np.dot(I, v))) # Make sure it returns the correct type.\n\n# Create a np.array and Vector object.\nA = np.random.random((100, 3))\nV = A.view(Vector)\n\n# Compare np.dot speed.\nprint(timeit(lambda: np.dot(I, A.T)))\nprint(timeit(lambda: np.dot(I, V.T)))\n\nThe above code outputs:\n\n1.207045791001292\n2.063941927997803\n\nIndicating a 70 % performance hit. Is this expected? Am I doing something wrong? 
Is there a way around this (I'm only interested in np.dot and np.cross)?\nIf not, I'm afraid I'll have to abandon my custom classes.","Title":"Performance penalty when overriding Numpy's __array_function__() method","Tags":"python,arrays,numpy,subclassing","AnswerCount":1,"A_Id":74988345,"Answer":"This is expected since the target arrays are very small and the overhead of calling a pure-Python function is big compared to the computation time taken by np.dot on a basic array.\n\nIndeed, np.dot(I, A.T) takes just about few microseconds: 1.7 \u00b5s on my machine. A significant part of the time is lost in Numpy overheads and the actual computation should take just a fraction of this execution time. np.dot(I, V.T) has to call the pure-Python function __array_function__ and this function takes about 1.2 us. The overall runtime is thus 2.9 us, hence a 70% slower execution.\n__array_function__ is a bit slow because it is interpreted (assuming you use the standard CPython interpreter) while usual Numpy functions are written in C and so they are compiled to native code. Interpreted codes are significantly slower (due to nearly no optimizations, dynamic typing, many dynamic allocations, object wrapping, etc.) not to mention the 2 calls to np.asarray takes a significant additional time compared to just calling np.dot directly.\nOne solution to reduce the overhead is to use Cython. Cython can compile a pure-Python function to native code. The compiled code can be much faster if type annotation are present. That being said, the benefit of using Cython here is limited. Indeed, half the overhead comes from the Numpy internals when calling np.dot. This is certainly because Numpy has to create Python objects (eg. args) so to pass them to the pure-Python function and also because Numpy and CPython has to perform few check (eg. check the function __array_function__ is actually valid). AFAIK, there is not much you can do about this Numpy overhead.\nIn the end, since >75% of the execution time of np.dot(I, A.T) is already overheads it is certainly better to rewrite the code calling this expression so calls are vectorized. Indeed, calling __array_function__ once is not really a problem. This means you may need to write a class to manage many vectors. If the vectors are of different size, then the overhead can still be significant (Numpy is not great for that).","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":74989889,"CreationDate":"2023-01-03 06:51:14","Q_Score":1,"ViewCount":60,"Question":"What is the difference between cleaned_data and is_valid functions in django?, I just came across forms and immediately i got stuck there\ncan anyone play with some simple examples. I've read many documentation but i cant able to differentiate it.","Title":"What's the difference between Cleaned data and is valid in django","Tags":"python,django,django-models,django-forms,cleaned-data","AnswerCount":4,"A_Id":74991213,"Answer":"Cleaned data: Clean data are valid, accurate, complete, consistent, unique, and uniform. (Dirty data include inconsistencies and errors.)\nCleaned data valid in django: It uses uses a clean and easy approach to validate data. The is_valid() method is used to perform validation for each field of the form, it is defined in Django Form class. 
It returns True if data is valid and place all data into a cleaned_data attribute.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":74989889,"CreationDate":"2023-01-03 06:51:14","Q_Score":1,"ViewCount":60,"Question":"What is the difference between cleaned_data and is_valid functions in django?, I just came across forms and immediately i got stuck there\ncan anyone play with some simple examples. I've read many documentation but i cant able to differentiate it.","Title":"What's the difference between Cleaned data and is valid in django","Tags":"python,django,django-models,django-forms,cleaned-data","AnswerCount":4,"A_Id":74990208,"Answer":"The is_valid() method is used to perform validation for each field of the form, it is defined in Django Form class. It returns True if data is valid and place all data into a cleaned_data attribute.\nAny data the user submits through a form will be passed to the server as strings. It doesn't matter which type of form field was used to create the form.\nEventually, the browser would will everything as strings. When Django cleans the data it automatically converts data to the appropriate type.\nFor example IntegerField data would be converted to an integer\nIn Django, this cleaned and validated data is commonly known as cleaned data.\nWe can access cleaned data via cleaned_data dictionary:\nname = form.cleaned_data['name']","Users Score":2,"is_accepted":false,"Score":0.0996679946,"Available Count":3},{"Q_Id":74989889,"CreationDate":"2023-01-03 06:51:14","Q_Score":1,"ViewCount":60,"Question":"What is the difference between cleaned_data and is_valid functions in django?, I just came across forms and immediately i got stuck there\ncan anyone play with some simple examples. I've read many documentation but i cant able to differentiate it.","Title":"What's the difference between Cleaned data and is valid in django","Tags":"python,django,django-models,django-forms,cleaned-data","AnswerCount":4,"A_Id":74990113,"Answer":"is_valid() method is used to perform validation for each field of the form.\n\ncleaned_data is where all validated fields are stored.","Users Score":2,"is_accepted":false,"Score":0.0996679946,"Available Count":3},{"Q_Id":74990614,"CreationDate":"2023-01-03 08:14:47","Q_Score":1,"ViewCount":94,"Question":"I am trying to apply multiprocessing in the simplest way in Python 3 but it does not work on my laptop. I am using Windows.\nfrom multiprocessing import Process\n\n# a dummy function\ndef f(x):\n print(x)\n\nif __name__ == '__main__':\n p = Process(target=f, args=('some text',))\n p.start()\n p.join()\n\nprint('Done')\n\nIt did not end as it was expected. 
Instead, I got this error:\n\nTraceback (most recent call last):\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3437, in run_code\nexec(code_obj, self.user_global_ns, self.user_ns)\nFile \"\", line 1, in \nrunfile('C:\/Users\/Mahdi\/Mahdi Code\/test.py', wdir='C:\/Users\/Mahdi\/Mahdi Code')\nFile \"C:\\Program Files\\JetBrains\\PyCharm 2021.2.2\\plugins\\python\\helpers\\pydev\\_pydev_bundle\\pydev_umd.py\", line 198, in runfile\npydev_imports.execfile(filename, global_vars, local_vars) # execute the script\nFile \"C:\\Program Files\\JetBrains\\PyCharm 2021.2.2\\plugins\\python\\helpers\\pydev\\_pydev_imps\\_pydev_execfile.py\", line 18, in execfile\nexec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\nFile \"C:\/Users\/Mahdi\/Mahdi Code\/test.py\", line 25, in \np.start()\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\process.py\", line 121, in start\nself._popen = self._Popen(self)\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\context.py\", line 224, in _Popen\nreturn _default_context.get_context().Process._Popen(process_obj)\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\context.py\", line 327, in _Popen\nreturn Popen(process_obj)\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\popen_spawn_win32.py\", line 93, in __init__\nreduction.dump(process_obj, to_child)\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\reduction.py\", line 60, in dump\nForkingPickler(file, protocol).dump(obj)\n_pickle.PicklingError: Can't pickle : attribute lookup f on __main__ failed\nDone\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\nexitcode = _main(fd, parent_sentinel)\nFile \"C:\\Users\\Mahdi\\anaconda3\\lib\\multiprocessing\\spawn.py\", line 126, in _main\nself = reduction.pickle.load(from_parent)\nEOFError: Ran out of input\n\n\nDoes anyone know about this?","Title":"Multiprocessing in Python","Tags":"python,multiprocessing","AnswerCount":2,"A_Id":74990649,"Answer":"some interactive IDEs don't support multiprocessing such as jupyter lab, there are tricks around this involving non-standard multiprocessing modules, but the most straight-forward solution is to not use an interactive environment for multiprocessed code, and instead run python in script mode using VsCode or Pycharm or through the terminal. (Spyder also works but you have to run the code as a script)","Users Score":4,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":74991457,"CreationDate":"2023-01-03 09:36:57","Q_Score":1,"ViewCount":186,"Question":"I have trained a BERTopic model on colab and I am now trying to use it locally I get the IndexError.\nIndexError: Failed in nopython mode pipeline (step: analyzing bytecode)\npop from empty list\n\nThe code I used is:\nfrom sentence_transformers import SentenceTransformer\nsentence_model = SentenceTransformer('KBLab\/sentence-bert-swedish-cased')\n\nmodel = BERTopic.load('bertopic_model')\ntext = \"my text here for example\"\ntext = [text]\n\nembeddings = sentence_model.encode(text)\ntopic, _ = model.transform(text, embeddings)\n\nThe last line gives me the error.\nNoticeably, the same code works just fine on colab. 
Not sure whats going on mlocally.\nMy numba and other related libraries are up-to-date as it was on colab.\nFull Traceback:\nTraceback (most recent call last):\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/flask\/app.py\", line 2525, in wsgi_app\n response = self.full_dispatch_request()\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/flask\/app.py\", line 1822, in full_dispatch_request\n rv = self.handle_user_exception(e)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/flask\/app.py\", line 1820, in full_dispatch_request\n rv = self.dispatch_request()\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/flask\/app.py\", line 1796, in dispatch_request\n return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\n File \"app.py\", line 20, in reference_prediction\n preds = data_process(input_api)\n File \"data_process.py\", line 63, in data_process\n topic, _ = topic_model_mi.transform(text, embeddings)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/bertopic\/_bertopic.py\", line 423, in transform\n umap_embeddings = self.umap_model.transform(embeddings)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/umap\/umap_.py\", line 2859, in transform\n dmat = pairwise_distances(\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/sklearn\/metrics\/pairwise.py\", line 2022, in pairwise_distances\n return _parallel_pairwise(X, Y, func, n_jobs, **kwds)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/sklearn\/metrics\/pairwise.py\", line 1563, in _parallel_pairwise\n return func(X, Y, **kwds)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/sklearn\/metrics\/pairwise.py\", line 1607, in _pairwise_callable\n out[i, j] = metric(X[i], Y[j], **kwds)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/dispatcher.py\", line 487, in _compile_for_args\n raise e\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/dispatcher.py\", line 420, in _compile_for_args\n return_val = self.compile(tuple(argtypes))\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/dispatcher.py\", line 965, in compile\n cres = self._compiler.compile(args, return_type)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/dispatcher.py\", line 125, in compile\n status, retval = self._compile_cached(args, return_type)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/dispatcher.py\", line 139, in _compile_cached\n retval = self._compile_core(args, return_type)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/dispatcher.py\", line 152, in _compile_core\n cres = compiler.compile_extra(self.targetdescr.typing_context,\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler.py\", line 716, in compile_extra\n return pipeline.compile_extra(func)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler.py\", line 452, in compile_extra\n return self._compile_bytecode()\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler.py\", line 520, in _compile_bytecode\n return self._compile_core()\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler.py\", line 499, in _compile_core\n raise e\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler.py\", line 486, in 
_compile_core\n pm.run(self.state)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler_machinery.py\", line 368, in run\n raise patched_exception\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler_machinery.py\", line 356, in run\n self._runPass(idx, pass_inst, state)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler_lock.py\", line 35, in _acquire_compile_lock\n return func(*args, **kwargs)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler_machinery.py\", line 311, in _runPass\n mutated |= check(pss.run_pass, internal_state)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/compiler_machinery.py\", line 273, in check\n mangled = func(compiler_state)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/untyped_passes.py\", line 86, in run_pass\n func_ir = interp.interpret(bc)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/interpreter.py\", line 1321, in interpret\n flow.run()\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/byteflow.py\", line 107, in run\n runner.dispatch(state)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/byteflow.py\", line 282, in dispatch\n fn(state, inst)\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/byteflow.py\", line 1061, in _binaryop\n rhs = state.pop()\n File \"\/home\/vaibhav\/.local\/lib\/python3.10\/site-packages\/numba\/core\/byteflow.py\", line 1344, in pop\n return self._stack.pop()\nIndexError: Failed in nopython mode pipeline (step: analyzing bytecode)\npop from empty list","Title":"BERTopic: pop from empty list IndexError while Inferencing","Tags":"python-3.x,nlp,bert-language-model,topic-modeling","AnswerCount":1,"A_Id":75917500,"Answer":"I had the exact same problem. In the end, I realized that, even though I carefully compared all versions of the used libraries, I had been using two different python version: 3.9 on google colab and 3.10 locally.\nSwitching to 3.9 locally immediately solved the issue. Hence, I would advise you to check if all the library versions and the python versions are matching between the environment where you stored the model and the one where you try to load it.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74992153,"CreationDate":"2023-01-03 10:34:52","Q_Score":1,"ViewCount":39,"Question":"I want to control the simulation process like run-pause-resume-restart(mainly), import data, export data etc. just using the python script. Is it possible ? If it is possible then please give some direction or resources to proceed.\nI am just starting in this field.","Title":"How to control the OMNeT++ Simulation using Python script \/ Python code","Tags":"python,api,omnet++","AnswerCount":1,"A_Id":74992327,"Answer":"Exporting\/importing result data is available in OMNeT++ 6.0. Take a look at the samples\/results folder for examples and the Manual's Result Recording and Analysis chapter.\nAdditional python integration (like writing behavior in Python or controlling the simulation) is planned for the upcoming 7.0 version. You may try the current master branch from the omnetpp github repo. 
The functionality was already merged into the master branch.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74992320,"CreationDate":"2023-01-03 10:48:51","Q_Score":1,"ViewCount":43,"Question":"I'm trying to extract specific info from a text data. The text data include a name of a person and his\/her marks from school. The text data has this format:\nXxxxx Yyyyyy: B\nAaaaa Bbbbbb: A\nCcccc Dddddd: C\n.\n.\n.\nMmmmm Nnnnnn: B\n\nThis was a task in a data science course in Coursera where we need to extract the names of students with B marks only to a list using regex from python. I already did it using regex and currently trying to do an alternative way.\nI tried this:\ndef grades():\n with open (\".\/grades.txt\", \"r\") as file:\n grades = file.read()\n \n grades = grades.splitlines()\n matches = []\n for marks in grades:\n if \": B\" in marks:\n matches.append(marks)\n matches = [match.replace(': B', '') for match in matches]\n return matches\nprint(grades())\n\nSomehow it worked but it left some whitespace after some names. Can anyone explain to me why?","Title":"Why does .replace() leave whitespace at the end of some strings?","Tags":"python,replace","AnswerCount":3,"A_Id":74992367,"Answer":"It coud happen there is a space after the 'B'.\nmatch.replace(': B', '') you are only replacing ': B' with an empty string. Any leftover spaces after that are still there.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74995290,"CreationDate":"2023-01-03 15:21:28","Q_Score":1,"ViewCount":47,"Question":"I have the following dataframe:\n\n\n\n\ncountry\ncoin\n\n\n\n\nUSA\ncoin1\n\n\nUSA\ncoin2\n\n\nMexico\ncoin3\n\n\n\n\nEach coin is unique, and it can change the country. For example:\n\n\n\n\ncountry\ncoin\n\n\n\n\nUSA\ncoin1\n\n\nMexico\ncoin2\n\n\nMexico\ncoin3\n\n\n\n\nWhat I'm trying to find is a way to see which lines have changed. My desired output:\n\n\n\n\ncountry\ncoin\n\n\n\n\nMexico\nCoin2","Title":"Get the differences from two dataframes","Tags":"python,dataframe","AnswerCount":1,"A_Id":74995495,"Answer":"You could use concat to combine them, and then use drop_duplicates to get the difference. For example:\nconcat([df1,df2]).drop_duplicates(keep=False)\nEDIT:\nTo get just the one row, you can get the negation of everything common between the two dataframes by turning applying list to them and using .isin to find commonalities.\ndf1[~df1.apply(list,1).isin(df2.apply(list,1))]","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":74996550,"CreationDate":"2023-01-03 17:15:45","Q_Score":4,"ViewCount":6621,"Question":"I have just upgraded Python to 3.11 today. Pandas-profiling worked fine before, but now I cannot seem to import it due to the following error:\ncannot import name 'DataError' from 'pandas.core.base' (C:\\Users\\User_name\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pandas\\core\\base.py)\n\n\nAny help as to how I can fix this?\nThis is my code:\nimport pandas as pd\nfrom pandas_profiling import ProfileReport\n\nPandas version - 1.5.2\nPandas-profiling version - 3.2.0","Title":"Pandas profiling not able to import due to error 'cannot import name \"DataError' from 'pandas.core.base'\"","Tags":"python-3.x,pandas,jupyter-notebook,pandas-profiling","AnswerCount":3,"A_Id":75931055,"Answer":"Schedule for deprecation\n\nydata-profiling was launched in February 1st.\n\npip install pandas-profiling will still be supported until April 1st,\nbut a warning will be thrown. 
\"from pandas_profiling import\nProfileReport \" will be supported until April 1st.\n\nAfter April 1st, an error will be thrown if pip install\npandas-profiling is used. Use pip install ydata-profiling instead.\n\nAfter April 1st, an error will be thrown if from pandas_profiling\nimport ProfileReport is used. Use from ydata_profiling import\nProfileReport instead.","Users Score":3,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":74997541,"CreationDate":"2023-01-03 18:51:22","Q_Score":0,"ViewCount":51,"Question":"So I created a machine learning model to make predictions on future output at work. So far its 97% accurate.\nI wanted to predict the output using the date along with 2 other inputs and since you can't use datetime directly in regression models.\nI converted the date column using ordinal encoding, will I then be able to use the date as an input then?\nOr is there a better method?","Title":"Machine Learning predictions using dates","Tags":"python,pandas,machine-learning,regression","AnswerCount":1,"A_Id":75003657,"Answer":"Ordinal encoding is't the best approach for handling date\/time data, especially if in your data occurs seasonality or trends. Depending on your problem, you could extract a lot of different features from dates, e.q:\n\nyear, month, day ....\nhour, minute, second ....\nday of week\nseason\nholiday\netc ...\n\nWhat should you use exactly highly depends on your problem, you should first investigate your data, maybe plot your predicted variable against dates and search for patterns which can help you then achieve best prediction results.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74997545,"CreationDate":"2023-01-03 18:51:51","Q_Score":1,"ViewCount":96,"Question":"I have a subfolder that I want to delete from an S3 bucket.\nI want to delete the subfolder folder2\nmybucket\/folder1\/folder2\/folder3\nThe name of folder2 will vary, so I'm trying to loop through the objects in folder1 and somehow delete the second level folder. The current code I have works ONLY if the folder3 (subfolder of folder2) doesn't exist.\nbucket = s3.Bucket(mybucket)\nresult = client.list_objects_v2(Bucket=mybucket, Delimiter='\/', Prefix=\"folder1\/\")\nfor object in result.get('CommonPrefixes'):\n subfolder = object.get('Prefix')\n s3.Object(mybucket,subfolder).delete()","Title":"Delete subfolder within S3 bucket","Tags":"python,amazon-s3,aws-lambda","AnswerCount":1,"A_Id":74998896,"Answer":"The thing you have to remember about Amazon S3 is that folders, in the sense you think of them, don't exist. They're not real. S3 is object storage, which means it thinks in objects, not files. 
The fact that the console renders things that look like filepaths with subfolders and so forth is just for convenience.\nSo instead of trying to delete a folder, you want to delete all files whose names begin with that prefix.","Users Score":3,"is_accepted":false,"Score":0.537049567,"Available Count":1},{"Q_Id":74997709,"CreationDate":"2023-01-03 19:10:37","Q_Score":1,"ViewCount":48,"Question":"In the example code below, col1 and col2 are primary keys in the database!\nMy question is: should they be added in the part of the code after the ON DUPLICATE KEY UPDATE, as it is already in the code, or should they not be added?\nExample code:\nwith Dl.cursor() as cursor:\n for chunk in np.array_split(DataFrame, 10, axis=0):\n for i in chunk.index:\n cursor.execute(\"INSERT INTO table_example (col1, col2, col3, col4) VALUES (%s, %s, %s, %s) ON DUPLICATE KEY UPDATE col1 = col1, col2 = col2, col3 = col3, col4 = col4;\", (chunk['col1'][i], chunk['col2'][i], chunk['col3'][i], chunk['col4'][i]))\n # col3 = col3, col4 = col4; ... Which version is correct?\n Dl.commit()\n cursor.close()\nDl.close()","Title":"Should primary key columns be added in the UPDATE?","Tags":"python,mysql,insert-update,on-duplicate-key","AnswerCount":2,"A_Id":74997862,"Answer":"If you have no other unique keys that could cause the ON DUPLICATE to be executed, col1 and col2 won't change and you should leave them out.\nIf you do have other unique keys, you probably don't want to change col1 and col2 anyway.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74997960,"CreationDate":"2023-01-03 19:40:39","Q_Score":0,"ViewCount":40,"Question":"Today I've encountered a very strange problem in Microsoft Visual Studio Code 2022. When I press the 'play' button to run my python code, nothing happens. This is true also with debugging.\nThere is no activity in either the built-in cmd or powershell terminals, but through these terminals I can run my code.\nI have been using VSCode to write and execute Python code for months now with no issues - as recently as 10 hours ago! I have changed no settings or updated anything and I am at a loss.\nI've checked the VSCode Python plugin and last update was 3 weeks ago, so unlikely that, but rolled it back anyway with no luck. I have also made sure my default terminal is cmd prompt, tried reloading and opening a new terminal, restarting PC, all to no avail.\nPlease help!","Title":"VSCode 'Run Python file' does nothing","Tags":"python,windows,visual-studio-code","AnswerCount":1,"A_Id":75080065,"Answer":"You can try the following:\n\nuse the shortcuts F5\ninstall and use the code-runner extension\nreinstall vscode","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":74997987,"CreationDate":"2023-01-03 19:44:40","Q_Score":2,"ViewCount":49,"Question":"Ive been trying to figure out why this is happening. I'm fitting a DecisionTreeClassifier and the model determines that a few features are not informative for the prediction. 
Fitting the same model with the same hyperparameters using all of the informative features (i.e., features that have a weight > 0), now I get other features that have zero weights that had non-zero weights before?\nMy questions:\n\nIs this behavior expected?\n\nIf so, how can I use a while loop to remove features until none of the feature weights are zero?\n\n\nimport pandas as pd\nimport numpy as np\n\n# Data\ny = pd.Series({1: 'Negative', 2: 'Positive', 3: 'Positive', 4: 'Negative', 5: 'Positive', 6: 'Negative', 7: 'Negative', 8: 'Negative', 9: 'Negative', 10: 'Negative', 11: 'Negative', 12: 'Negative', 13: 'Negative', 14: 'Negative', 15: 'Negative', 16: 'Negative', 17: 'Negative', 18: 'Negative', 19: 'Negative', 20: 'Negative', 21: 'Negative', 22: 'Negative', 23: 'Negative', 24: 'Negative', 25: 'Negative', 26: 'Negative', 27: 'Negative', 28: 'Negative', 29: 'Negative', 30: 'Negative', 31: 'Negative', 32: 'Negative', 33: 'Negative', 34: 'Negative', 35: 'Negative', 36: 'Positive', 37: 'Negative', 38: 'Positive', 39: 'Positive', 40: 'Positive', 41: 'Positive', 42: 'Negative', 43: 'Negative', 44: 'Positive', 45: 'Positive', 46: 'Negative', 47: 'Negative', 48: 'Positive', 49: 'Positive', 50: 'Negative', 51: 'Negative', 52: 'Negative', 53: 'Positive', 54: 'Positive', 55: 'Positive', 56: 'Negative', 57: 'Positive', 58: 'Positive', 59: 'Positive', 60: 'Negative', 61: 'Negative', 62: 'Negative', 63: 'Positive', 64: 'Positive', 65: 'Positive', 66: 'Negative', 67: 'Positive', 68: 'Negative', 69: 'Negative', 70: 'Negative', 71: 'Positive', 72: 'Positive', 73: 'Negative', 74: 'Positive', 75: 'Positive', 76: 'Positive', 77: 'Positive', 78: 'Positive', 79: 'Positive', 80: 'Negative'})\nX = pd.DataFrame({'ASV019': {1: 0, 2: 0, 3: 0, 4: 344, 5: 0, 6: 1468, 7: 669, 8: 646, 9: 1192, 10: 169, 11: 801, 12: 793, 13: 147, 14: 27, 15: 34, 16: 1324, 17: 196, 18: 181, 19: 955, 20: 144, 21: 460, 22: 1563, 23: 253, 24: 1590, 25: 429, 26: 973, 27: 523, 28: 901, 29: 766, 30: 417, 31: 726, 32: 955, 33: 630, 34: 580, 35: 1002, 36: 0, 37: 696, 38: 0, 39: 20, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 87, 47: 162, 48: 0, 49: 0, 50: 173, 51: 215, 52: 634, 53: 0, 54: 40, 55: 0, 56: 17, 57: 0, 58: 0, 59: 0, 60: 787, 61: 503, 62: 439, 63: 0, 64: 25, 65: 0, 66: 365, 67: 0, 68: 252, 69: 382, 70: 1694, 71: 0, 72: 0, 73: 21, 74: 0, 75: 3069, 76: 0, 77: 2, 78: 80, 79: 0, 80: 0}, 'ASV552': {1: 0, 2: 0, 3: 0, 4: 81, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 15, 49: 16, 50: 0, 51: 0, 52: 13, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 0, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0}, 'ASV007': {1: 217, 2: 1673, 3: 4694, 4: 669, 5: 2734, 6: 388, 7: 210, 8: 213, 9: 568, 10: 329, 11: 703, 12: 677, 13: 776, 14: 505, 15: 987, 16: 400, 17: 334, 18: 133, 19: 0, 20: 405, 21: 475, 22: 740, 23: 766, 24: 364, 25: 705, 26: 1099, 27: 143, 28: 270, 29: 134, 30: 229, 31: 317, 32: 84, 33: 449, 34: 92, 35: 207, 36: 9288, 37: 461, 38: 135, 39: 342, 40: 464, 41: 1043, 42: 4693, 43: 2858, 44: 197, 45: 2083, 46: 223, 47: 822, 48: 1036, 49: 11656, 50: 0, 51: 348, 52: 1089, 53: 465, 54: 72, 55: 0, 56: 3885, 57: 2849, 58: 1000, 59: 4091, 60: 0, 61: 639, 62: 459, 63: 619, 64: 2563, 65: 919, 66: 
1266, 67: 3038, 68: 622, 69: 521, 70: 296, 71: 10603, 72: 828, 73: 4849, 74: 5995, 75: 1252, 76: 3165, 77: 682, 78: 4219, 79: 3732, 80: 1603}, 'ASV135': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 700, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 92, 52: 767, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 408, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0}, 'ASV122': {1: 0, 2: 0, 3: 0, 4: 0, 5: 1303, 6: 6, 7: 26, 8: 0, 9: 0, 10: 5, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 19, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 17, 35: 0, 36: 0, 37: 0, 38: 82, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 70, 58: 0, 59: 0, 60: 411, 61: 0, 62: 37, 63: 32, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 11, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 5, 76: 12, 77: 0, 78: 252, 79: 0, 80: 0}, 'ASV952': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 9, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 6, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 5, 64: 0, 65: 0, 66: 7, 67: 0, 68: 0, 69: 0, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0}, 'ASV156': {1: 0, 2: 26, 3: 0, 4: 3, 5: 72, 6: 2, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 12, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 22, 41: 0, 42: 0, 43: 2, 44: 4, 45: 0, 46: 9, 47: 0, 48: 11, 49: 15, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 35, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 7, 66: 8, 67: 88, 68: 67, 69: 15, 70: 0, 71: 0, 72: 0, 73: 76, 74: 1069, 75: 14, 76: 4, 77: 49, 78: 3, 79: 5, 80: 24}, 'ASV062': {1: 199, 2: 209, 3: 0, 4: 315, 5: 0, 6: 49, 7: 63, 8: 25, 9: 29, 10: 22, 11: 24, 12: 141, 13: 0, 14: 62, 15: 49, 16: 0, 17: 288, 18: 274, 19: 0, 20: 59, 21: 134, 22: 10, 23: 147, 24: 22, 25: 101, 26: 78, 27: 0, 28: 25, 29: 47, 30: 105, 31: 0, 32: 0, 33: 74, 34: 53, 35: 110, 36: 0, 37: 8, 38: 0, 39: 0, 40: 6, 41: 0, 42: 226, 43: 21, 44: 0, 45: 373, 46: 98, 47: 126, 48: 5, 49: 8, 50: 186, 51: 93, 52: 35, 53: 21, 54: 0, 55: 0, 56: 720, 57: 3, 58: 220, 59: 0, 60: 230, 61: 41, 62: 118, 63: 0, 64: 0, 65: 0, 66: 151, 67: 0, 68: 186, 69: 225, 70: 6, 71: 22, 72: 13, 73: 97, 74: 0, 75: 2, 76: 5, 77: 134, 78: 0, 79: 0, 80: 84}})\n\n# Model\nparams = {'ccp_alpha': 0.0, 'class_weight': None, 'criterion': 'entropy', 'max_depth': None, 'max_features': 'log2', 'max_leaf_nodes': None, 'min_impurity_decrease': 0.0, 'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'random_state': 0, 'splitter': 'best'}\nestimator=DecisionTreeClassifier(**params)\n\n# Fit model\nestimator.fit(X,y)\nestimator.feature_importances_\n# array([0.68181101, 0. , 0.10029598, 0. , 0.03051763,\n# 0. , 0. 
, 0.18737538])\n\n# Mask zero weighted features and refit\nX_1 = X.loc[:,estimator.feature_importances_ > 0]\nestimator.fit(X_1,y)\nestimator.feature_importances_\n# array([0.51290959, 0.11922515, 0. , 0.36786526])\n\n# One more time\nX_2 = X_1.loc[:,estimator.feature_importances_ > 0]\nestimator.fit(X_2,y)\nestimator.feature_importances_\n# array([0.38116661, 0.32724164, 0.29159175])","Title":"Why does Scikit-Learn's DecisionTreeClassifier return zero weighted features after removing zero weighted features and refitting?","Tags":"python,scikit-learn,classification,decision-tree,feature-selection","AnswerCount":1,"A_Id":74999527,"Answer":"I'd say this \"isn't unexpected\" (but wouldn't go so far as to say it's \"expected\").\nWith max_features!=1.0, the number of informative features chosen depends on the number of features available. After pruning out some (relatively-)uninformative features, your log2(n_features) changes, and so one of those remaining never out-competes the final three features for a split.\nEven if you don't perform feature subsetting, there's a (rarer) possibility of this phenomenon based on the random state affecting ordering features differently when they have different numbers.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":74998349,"CreationDate":"2023-01-03 20:27:32","Q_Score":1,"ViewCount":272,"Question":"I'm trying to access Historical data of API Interactive brokers but I can't get the data.\nMy code looks like this:\nfrom ibapi.client import EClient\nfrom ibapi.wrapper import EWrapper\nfrom ibapi.contract import Contract\nimport datetime\n\n\nclass TestApp(EClient, EWrapper):\n def __init__(self):\n EClient.__init__(self, self)\n\n def nextValidId(self, orderId: int):\n # Get the current year and month\n now = datetime.datetime.now()\n year = now.year\n month = now.month\n\n contract = Contract()\n contract.symbol = \"ES\"\n contract.secType = \"FUT\"\n contract.exchange = \"GLOBEX\"\n contract.currency = \"USD\"\n contract.localSymbol = \"ESZ7\" # Set the local symbol\n\n self.reqHistoricalData(orderId, contract, \"\", \"1 D\", \"1 hour\", \"TRADES\", 0, 1, True, [])\n\n def historicalData(self, reqId, bar):\n print(f\"Historical data: {bar}\")\n\n def historicalDataEnd(self, reqId, start, end):\n print(\"End of HistoricalData\")\n print(f\"Start: {start}, End: {end}\")\n\n\napp = TestApp()\napp.connect('127.0.0.1', 7497, 1)\napp.run()\n\n\nAnd I get the following error:\nERROR 1 200 No security definition has been found for the request\nI have real times running on the futures contracts, do I need to activate another authorization in addition?\nI would be very grateful if someone here could help me solve the problem.","Title":"Unable to get the Historical data from API Interactive brokers","Tags":"python,python-3.x,api,interactive,interactive-brokers","AnswerCount":1,"A_Id":74998782,"Answer":"The solution is:\n\nI had to change contract.exchange = \"GLOBEX\" to contract.exchange = \"CME\"\n\nI had to add the following line:\ncontract.lastTradeDateOrContractMonth = \"202303\"","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75000224,"CreationDate":"2023-01-04 01:43:39","Q_Score":1,"ViewCount":34,"Question":"I spent half an hour debugging on the slowness of the following code snippet\nimport time\nfeature_values = {'query': ['hello', 'world'], 'ctr': [0.1, 0.2]}\n\nmodel = tf.saved_model.load(model_path)\nstart = time.time()\noutput = 
model.prediction_step(feature_values)\nprint(time.time() - start)\n\nThe above took a few minutes to finish. Then I found out that I need to convert the input to tensors first, then it became very fast, as expected.\nfeature_values = {k: tf.constant(v) for k, v in feature_values.items()}\n\nMy question is why is there such a big latency difference and why the first approach didn't even raise an error?","Title":"Why is tensorflow prediction_step extremely slow when the input features are python primitives instead of tensors?","Tags":"python,tensorflow2.0","AnswerCount":1,"A_Id":75000266,"Answer":"Tensor supports vectorized operations which vanilla lists don't support (as to why see next two points).\nA Tensor can contain only objects of the same type, while vanilla list can contain all kinds of types of objects in them. When working with Tensor you have to do type checking only once while with lists you have to type check every object.\nTensor is stored in a single contiguous block of memory, while vanilla list is fragmented. Hence with Tensor you get less cache misses\/pointer dereferencings.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75000293,"CreationDate":"2023-01-04 02:02:10","Q_Score":1,"ViewCount":30,"Question":"I'm looking at an example from a book. The input is of shape (samples=128, timesteps=24, features=13). When defining two different networks both receiving the same input they have different input_shape on flatten and GRU layers.\nmodel 1:\nmodel = Sequential()\nmodel.add(layers.Flatten(input_shape=(24, 13)))\nmodel.add(layers.Dense(32, activation='relu'))\nmodel.add(layers.Dense(1))\n\nmodel 2:\nmodel = Sequential()\nmodel.add(layers.GRU(32, input_shape=(None, 13)))\nmodel.add(layers.Dense(1))\n\nI understand that input_shape represents the shape of a single input (not considering batch size), so on my understanding the input_shape on both cases should be (24, 13).\nWhy are the input_shapes differents between model 1 and model 2?","Title":"Keras Flatten and GRU input_shape difference receiving same inputs","Tags":"python,keras,deep-learning","AnswerCount":1,"A_Id":75010518,"Answer":"GRU is a recurrent unit (RNN), which takes a sequence of data as input. The expected input shape for GRU is (batch size, sequence length, feature size). In your case the sequence length is 24 and feature size is 13.\nAs usual, you don't need to specify a batch size for input_shape argument. Additionally, for recurrent units like GRU or LSTM you can use \"None\" instead of sequence length, so that it can accept sequences of any length. This is why \"input_shape=(None, 13)\" is allowed here.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75000683,"CreationDate":"2023-01-04 03:28:53","Q_Score":1,"ViewCount":61,"Question":"Happy new Year together,\nnormaly, i try to solve everything myself, to keep my grey Mushroom alive.\nBut in this case im really stuck...\nMy Task:\n\nRead from an USB Device\nCreate an Animation based in Data of an analog Axis\n\nProblem:\n\nIt works on one PC while in pyCharm, also as EXE\nIndex Out of Range on other PC\u00b4s\n\nInfo\n\nError belongs to Line 82 - Read_X2 = readout[2]\nConsole shows successful line of the USB List\nProgramm isnt frozen while Error (OK Print Button works)\nI started Coding 3 Days ago... So i made it like this, becouse im still to stupide\nto end a \"while\" without closing my Programm...\nAlso List-\"Drehung\" could be made more elegant. 
But my first intention for the error\nwas about my auto counting \"range\" creation. So i deleted it and made it this way.\nMeanwhile i know, its from my USB List.\n\nI bet, its a dead simple Problem. I just overlooking something.\nYoure also allowed to laugh about it if you tell me the solution afterwards.\nGreetings Emanresu\nMy Code Salad:\nfrom tkinter import *\nfrom PIL import ImageTk, Image\nimport hid\n\n# START Part to avoid Problems with \"One File\"\n\nimport sys\nimport os\n\n\ndef resource_path(relative_path):\n \"\"\" Get the absolute path to the resource, works for dev and for PyInstaller \"\"\"\n try:\n # PyInstaller creates a temp folder and stores path in _MEIPASS\n base_path = sys._MEIPASS\n except Exception:\n base_path = os.path.abspath(\".\")\n\n return os.path.join(base_path, relative_path)\n\n# END Part to avoid Problems with \"One File\"\n\n\nwin = Tk(\"Test\")\nwin.geometry(\"460x410\")\nwin.config(bg=\"grey\")\nBtn = Button(win, text=\"ok\", activebackground='gray', bg='grey', bd=0)\n\n# Choose a \"Path\"- Option for \"One Directory\" or \"One File\"\n\n# path = \".\/Animation\/\"\npath = (resource_path(\"Animation\/\"))\n\nList = ['000.png',\n '001.png', '002.png', '003.png', '004.png', '005.png', '006.png', '007.png', '008.png', '009.png', '010.png',\n '011.png', '012.png', '013.png', '014.png', '015.png', '016.png', '017.png', '018.png', '019.png', '020.png',\n '021.png', '022.png', '023.png', '024.png', '025.png', '026.png', '027.png', '028.png', '029.png', '030.png',\n '031.png', '032.png', '033.png', '034.png', '035.png', '036.png', '037.png', '038.png', '039.png', '040.png',\n '041.png', '042.png', '043.png', '044.png', '045.png', '046.png', '047.png', '048.png', '049.png', '050.png',\n '051.png', '052.png', '053.png', '054.png', '055.png', '056.png', '057.png', '058.png', '059.png', '060.png',\n '061.png', '062.png', '063.png', '064.png', '065.png', '066.png', '067.png', '068.png', '069.png', '070.png',\n '071.png', '072.png', '073.png', '074.png', '075.png', '076.png', '077.png', '078.png', '079.png', '080.png',\n '081.png', '082.png', '083.png', '084.png', '085.png', '086.png', '087.png', '088.png', '089.png', '090.png',\n '091.png', '092.png', '093.png', '094.png', '095.png', '096.png', '097.png', '098.png', '099.png', '100.png',\n '101.png', '102.png', '103.png', '104.png', '105.png', '106.png', '107.png', '108.png', '109.png', '110.png',\n '111.png', '112.png', '113.png', '114.png', '115.png', '116.png', '117.png', '118.png', '119.png', '120.png',\n '121.png', '122.png', '123.png', '124.png', '125.png', '126.png', '127.png', '128.png', '129.png', '130.png',\n '131.png', '132.png', '133.png', '134.png', '135.png', '136.png', '137.png', '138.png', '139.png', '140.png',\n '141.png', '142.png', '143.png', '144.png', '145.png', '146.png', '147.png', '148.png', '149.png', '150.png',\n '151.png', '152.png', '153.png', '154.png', '155.png', '156.png', '157.png', '158.png', '159.png', '160.png',\n '161.png', '162.png', '163.png', '164.png', '165.png', '166.png', '167.png', '168.png', '169.png', '170.png',\n '171.png', '172.png', '173.png', '174.png', '175.png', '176.png', '177.png', '178.png', '179.png', '180.png',\n '181.png', '182.png', '183.png', '184.png', '185.png', '186.png', '187.png', '188.png', '189.png', '190.png',\n '191.png', '192.png', '193.png', '194.png', '195.png', '196.png', '197.png', '198.png', '199.png', '200.png',\n '201.png', '202.png', '203.png', '204.png', '205.png', '206.png', '207.png', '208.png', '209.png', '210.png',\n '211.png', 
'212.png', '213.png', '214.png', '215.png', '216.png', '217.png', '218.png', '219.png', '220.png',\n '221.png', '222.png', '223.png', '224.png', '225.png', '226.png', '227.png', '228.png', '229.png', '230.png',\n '231.png', '232.png', '233.png', '234.png', '235.png', '236.png', '237.png', '238.png', '239.png', '240.png',\n '241.png', '242.png', '243.png', '244.png', '245.png', '246.png', '247.png', '248.png', '249.png', '250.png',\n '251.png', '252.png', '253.png', '254.png', '255.png', '256.png', '257.png', '258.png', '259.png', '260.png',\n ]\n\n\n# Creating Canvas for the Animation\ndef to_pil2(img, button, x, y, w, h):\n image = Image.open(img)\n image = image.resize((w, h))\n pic = ImageTk.PhotoImage(image)\n button['image'] = pic\n button.image = pic\n button.place(x=x, y=y)\n\n\n# Optional Visualisation of Readout\nlabel = Label(win, font=('Stencil', 30, 'bold'), bg='grey', fg='black')\nlabel.place(x=5, y=5)\n\n\n# Collecting Date from USB-Device\n# Col0=unknown, Col1+2= X-Axis, Col3+4= Y-Axis, Col5+6= Z-Axis, Col7+8+9= Analog unused, Col10= Buttons\ndef run():\n simpad = hid.device()\n simpad.open(0x2341, 0x8037)\n simpad.set_nonblocking(True)\n\n readout = simpad.read(11)\n # read_x1 = readout[1]\n read_x2 = readout[2]\n # read_multiply = (read_x2 * 256)\n # read_full = (read_multiply + read_X1)\n # animation = (read_full \/ 700) #Option For smoother Movement\n\n# Drawing Animation + Number on Canvas\n to_pil2(path + List[int(read_x2)], Btn, 5, 5, 450, 400)\n label['text'] = read_x2\n win.after(10, run)\n print(readout)\n\n\n# Pray\nrun()\nBtn[\"command\"] = lambda: print('Freeze-test')\nwin.mainloop()","Title":"Python - Unusual \"Index Out of Range\" while reading from USB","Tags":"python,indexing,usb,hid","AnswerCount":1,"A_Id":75010711,"Answer":"I'm not familiar with all the libraries you are using but I have done I\/O stuff before. Just glancing at the code I see you are doing a nonblocking read (you called set_nonblocking with True). Therefore, you either need to handle the case when no data or a smaller amount of data is read, or you need to change your code to do a blocking read, which will wait until all 11 bytes have been read.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75000688,"CreationDate":"2023-01-04 03:29:57","Q_Score":0,"ViewCount":24,"Question":"For example, in the range(0, 6) function, we only generate number from 0 to 5. 0 is included, but 6 is excluded.\nAlso I see this in list slicing. That mylist[:6], index 0-5 in included, but index 6 is excluded.\nWhat are the benefits of such indexing mechanisms? I find it strange because lower bound is included, while upper bound is excluded.","Title":"Python List Indexing: What's the advantage of using Inclusive index for lower bound, and Exclusive index for upper bound?","Tags":"python,list,indexing","AnswerCount":1,"A_Id":75000714,"Answer":"At heart it's simply elegant, and less error-prone when you're used to it. For example, for indices L <= R, the slice s[L:R] has R-L elements, while for any integer j with L <= j <= R, the slice can be decomposed as s[L:R] == s[L:j] + s[j:R].\nThose straightforward properties save the experienced from a world of off-by-1 errors.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75001056,"CreationDate":"2023-01-04 04:45:21","Q_Score":1,"ViewCount":83,"Question":"Rather than coding this is a question on how to correctly test a function.\nBackground\nI am using pytest to test a function. 
Now a bit about the background of this function\nOriginally the developers wrote a function like\ndef the_function(first_df, second_df)\n\nHowever, it seems that they realized something was not working with their function so they modified it to\ndef the_function(first_df, second_df, third_df)\n\nI have seen the code and in this new implementation they quit using second_df at all for their logic. Now the logical thing would have been to rewrite the function as def the_function(first_df,third_df) but they didn't. They just left second_df there unused\nSo now I have to write some unit tests for this function\nThe question\nSince second_df is not being used at all, I am thinking of preparing some data needed for first_df and third_df and just enter an empty dataframe for second_df (since it is not being used at all)\nWould this strategy be OK?\nI am a bit worried because since one of the goals of unit testing is to keep any rewriting or refactoring of the function to introduce errors, what if in the future someone refactors the_function actually using second_df...\nOn the other hand if someone does that, and then test the function, surely an empty dataframe will signal an error, but in that case, will the unit test have to be rewritten?","Title":"Correct way to test a function that has some unused parameter","Tags":"python,unit-testing","AnswerCount":1,"A_Id":75796726,"Answer":"If someone refactors the function, they can't change the behaviour of the function. That's the definition of a refactor.\nSo if someone does indeed do a \"change function signature\" refactoring to get rid of that unused second data frame, they'd also have to update the unit tests that call that function (and all other callers), ideally in a careful step-by-step way. Anyway, no issue there.\nIf someone changes the behaviour of the function so that now it does something with all three dataframes, then of course that will break the existing unit test, and that's good, because obviously if the function is now supposed to do something differently, the unit test must reflect that.\nAs for what you should do: Why not do the \"remove unused parameter\" refactoring right now? 
Following the \"leave the place better than you found it\" ethics of little-by-little refactoring.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75001090,"CreationDate":"2023-01-04 04:50:46","Q_Score":0,"ViewCount":52,"Question":"Assuming that I have monthly datasets showing like these:\ndf1\n\n\n\n\ncompany\ndate\nact_call\nact_visit\npo\n\n\n\n\nA\n2022-10-01\nYes\nNo\nNo\n\n\nB\n2022-10-01\nYes\nNo\nYes\n\n\nC\n2022-10-01\nNo\nNo\nNo\n\n\nB\n2022-10-02\nNo\nYes\nNo\n\n\nA\n2022-10-02\nNo\nYes\nYes\n\n\n\n\ndf2\n\n\n\n\ncompany\ndate\nact_call\nact_visit\npo\n\n\n\n\nD\n2022-11-01\nYes\nNo\nNo\n\n\nB\n2022-11-01\nYes\nNo\nYes\n\n\nC\n2022-11-01\nYes\nYes\nNo\n\n\nD\n2022-11-02\nNo\nYes\nNo\n\n\nA\n2022-11-02\nNo\nYes\nYes\n\n\n\n\nI want to compare the two dataframes and count several conditions:\n\nthe number of company that exists in both dataframes.\n\nthe number of company that exists in both dataframes that has at least one act_call as 'Yes' and act_visit as 'Yes' in df2, but has po as 'No' in df1.\n\n\nFor the 1st condition, I've tried using pandas.Dataframe.sum() and pandas.Dataframe.count_values() but they didn't give the results that I want.\nFor the 2nd condition, I tried using this code:\n(((df1[['act_calling', 'act_visit']].eq('yes'))&(df2['po'].eq('no'))).groupby(df2['company_name']).any().all(axis = 1).sum())\nbut, I'm not sure that the code above will only count the company that exists in both dataframes.\nThe expected output is this:\n\n3, (A, B, C)\n\n1, (C)\n\n\nI'm open to any suggestions. Thank u in advance!","Title":"Comparing and Count Values from 2 (or More) Different Pandas Dataframes Based on Certain Conditions","Tags":"python,pandas,dataframe,count,compare","AnswerCount":3,"A_Id":75002160,"Answer":"To See The Companies That Are In Both Data Frames\n1st part\ncombined_dataframe1=df1[df2['company'].isin(df1['company'])]\ncombined_dataframe1['company']\n2nd part\nTo see the company that satisfies your conditions\ncombined_dataframe2=df2[df2['company'].isin(df1['company'])]\njoined_dataframe=pd.merge(combined_dataframe1,combined_dataframe2, on='company',how='outer')\nAs per your condition\nfinal_dataframe=joined_dataframe[joined_dataframe.columns][joined_dataframe['po_x']=='n0'}[joined_dataframe['act_call_yes']=='yes'][joined_dataframe['act_visit_y']=='yes']\nprint(final_dataframe)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75001534,"CreationDate":"2023-01-04 06:10:38","Q_Score":0,"ViewCount":41,"Question":"I created an API, that take xlsx as input file for post method and give me edited xlsx file.\nProblem is:- File I got from link and I have to download the xlsx file every time and put in postman.\nWhat I want:- directly put link in postman for input file\nNote:- Everytime link contains only one xlsx file\nI Looked for the solutions in documentations , but I can't find a thing, of How to put link for inpt file.","Title":"How to upload a link of file in postman instead of downloading file in Django","Tags":"python,django,postman","AnswerCount":1,"A_Id":75015177,"Answer":"You can pass the link with a header or just do one thing create an environment variable in postman itself and try to create that variable updated after every hut of the API by getting a response from the other API.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75001633,"CreationDate":"2023-01-04 06:24:37","Q_Score":0,"ViewCount":34,"Question":"Hi ive been trying to install Tweepy but it doea not 
install on my anaconda prompt this is the command I enter\n''' conda install -c conda-forge tweepy'''\nthe message I get back is\n'''[WinError 87] The parameter is incorrect\n()'''","Title":"Unable to install tweepy on anaconda","Tags":"python,installation,package,anaconda,tweepy","AnswerCount":2,"A_Id":75001690,"Answer":"You can use these commands too-\n\nconda install -c \"conda-forge\/label\/cf201901\" tweepy\nconda install -c \"conda-forge\/label\/cf202003\" tweepy","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75001633,"CreationDate":"2023-01-04 06:24:37","Q_Score":0,"ViewCount":34,"Question":"Hi ive been trying to install Tweepy but it doea not install on my anaconda prompt this is the command I enter\n''' conda install -c conda-forge tweepy'''\nthe message I get back is\n'''[WinError 87] The parameter is incorrect\n()'''","Title":"Unable to install tweepy on anaconda","Tags":"python,installation,package,anaconda,tweepy","AnswerCount":2,"A_Id":75001701,"Answer":"Could you try running the command prompt as administrator and execute the conda command.\nIf that doesn't work, you might have to try running it on WSL\/WSL2 in your Win machine.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75002679,"CreationDate":"2023-01-04 08:20:55","Q_Score":0,"ViewCount":34,"Question":"I can't install mysql connector with below error, please help advise needed action to proceed installation of module..See below command\/errors:\nC:\\Users\\a0229010>python -m pip install mysql-connector-python==3.7.3\nCollecting mysql-connector-python==3.7.3\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nERROR: Could not find a version that satisfies the requirement mysql-connector-python==3.7.3 (from versions: none)\nERROR: No matching distribution found for mysql-connector-python==3.7.3","Title":"Unable to install mysql-connector-python","Tags":"python-3.x","AnswerCount":2,"A_Id":75003593,"Answer":"I suggest you change your Python Package Index\nOr you can use follow code to have a try:\npip --default-timeout=100 install -i --trusted-host \ne.g when i install pandas\npip --default-timeout=100 install pandas -i https:\/\/pypi.tuna.tsinghua.edu.cn\/simple --trusted-host pypi.tuna.tsinghua.edu.cn","Users Score":0,"is_accepted":false,"Score":0.0,"Available 
Count":1},{"Q_Id":75004868,"CreationDate":"2023-01-04 11:38:07","Q_Score":5,"ViewCount":118,"Question":"I had learned that n = n + v and n += v are the same. Until this;\ndef assign_value(n, v):\n n += v\n print(n)\n\nl1 = [1, 2, 3]\nl2 = [4, 5, 6]\n\nassign_value(l1, l2)\nprint(l1)\n\nThe output will be:\n[1, 2, 3, 4, 5, 6]\n[1, 2, 3, 4, 5, 6]\n\nNow when I use the expanded version:\ndef assign_value(n, v):\n n = n + v\n print(n)\n\nl1 = [1, 2, 3]\nl2 = [4, 5, 6]\n\nassign_value(l1, l2)\nprint(l1)\n\nThe output will be:\n[1, 2, 3, 4, 5, 6]\n[1, 2, 3]\n\nUsing the += has a different result with the fully expanded operation. What is causing this?","Title":"Interesting results with the '+=' increment operator","Tags":"python,increment","AnswerCount":4,"A_Id":75004991,"Answer":"This works on how python treats objects and passes variables into functions.\nBasically - in first example (with += )\nYou are passing n and v into function by \"pass-by-assignment\"\nSo n gets modified and it will be also modified out of function scope.\nIn second example - n is reassigned inside of the function to a new list. Which is not seen outside of the function.","Users Score":1,"is_accepted":false,"Score":0.049958375,"Available Count":2},{"Q_Id":75004868,"CreationDate":"2023-01-04 11:38:07","Q_Score":5,"ViewCount":118,"Question":"I had learned that n = n + v and n += v are the same. Until this;\ndef assign_value(n, v):\n n += v\n print(n)\n\nl1 = [1, 2, 3]\nl2 = [4, 5, 6]\n\nassign_value(l1, l2)\nprint(l1)\n\nThe output will be:\n[1, 2, 3, 4, 5, 6]\n[1, 2, 3, 4, 5, 6]\n\nNow when I use the expanded version:\ndef assign_value(n, v):\n n = n + v\n print(n)\n\nl1 = [1, 2, 3]\nl2 = [4, 5, 6]\n\nassign_value(l1, l2)\nprint(l1)\n\nThe output will be:\n[1, 2, 3, 4, 5, 6]\n[1, 2, 3]\n\nUsing the += has a different result with the fully expanded operation. What is causing this?","Title":"Interesting results with the '+=' increment operator","Tags":"python,increment","AnswerCount":4,"A_Id":75004960,"Answer":"Thats because in the first implementation you are editing the list n itself (and therefore the changes still apply when leaving the function), while on the other implementation you are creating a new temporary list with the same name, so when you leave the function the new list disappears and the variable n is linked to the original list.\nthe += operator works similarly to x=x+y for immutable objects (since they always create new objects), but for mutable objects such as lists they work differently. 
x=x+y creats a new object x while x+=y edits the current object.","Users Score":7,"is_accepted":false,"Score":1.0,"Available Count":2},{"Q_Id":75005652,"CreationDate":"2023-01-04 12:46:12","Q_Score":2,"ViewCount":102,"Question":"Is it possible to compare values between N columns, row by row, on the same dataframe and set a new column counting the repetitions, when the values from the 3 columns match with another row?\nFrom:\nid | column1 | column2 | column3\n0 | z | x | x \n1 | y | y | y \n2 | x | x | x \n3 | x | x | x \n4 | z | y | x \n5 | w | w | w \n6 | w | w | w \n7 | w | w | w \n\nTo:\nid | column1 | column2 | column3 | counter\n0 | z | x | x | 0\n1 | y | y | y | 1\n2 | x | x | x | 2\n3 | x | x | x | 2\n4 | z | y | x | 0\n5 | w | w | w | 3\n6 | w | w | w | 3\n7 | w | w | w | 3\n\nSomething like that: if(column1[someRow] == column1[anotherRow] & column2[someRow] == column2[anotherRow] & column3[someRow] == column3[anotherRow]) then counter[someRow]++","Title":"Counting repetitions and writing on a new column","Tags":"python,pandas,dataframe","AnswerCount":3,"A_Id":75036237,"Answer":"Answer:df['counter'] = df.groupby(['column1', 'column2', 'column3']).transform('size')","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75006304,"CreationDate":"2023-01-04 13:38:45","Q_Score":0,"ViewCount":49,"Question":"Please help. I have two tables: 1 report and 1 data file.\nThe data table is presented as follows:\n\n\n\n\nPATIENTS_ID\nPOL\nAge\nICD10\n\n\n\n\n10848754\n0\n22\nH52\n\n\n10848754\n0\n22\nR00\n\n\n10848754\n0\n22\nZ01\n\n\n10848754\n0\n22\nZ02\n\n\n10850478\n1\n26\nH52\n\n\n\n\nAnd etc.\nThe report file asks to collect the following data:\n\n\n\n\nICD10\nMale (20-29)\nMale (30-39)\nFemale (20-29)\nFemale (30-39)\n\n\n\n\nC00 - C97\n\n\n\n\n\n\nE10 - E14\n\n\n\n\n\n\nI00 - I99\n\n\n\n\n\n\n\n\nSo... I need to collect all \"ICD10\" data which include the gap between C00 to C99, and aggregate together with gender and age span. I know that in SQL there is a \"BETWEEN \" that will quite easily build a range and select values like this without additional conditions: \"C00, C01, C02\".\nIs there something similar in python\/pandas?\nLogical expressions like \">= C00 <= C99\" will include other letters, already tried. I would be grateful for help. Creating a separate parser\/filter seems too massive for such a job.","Title":"Selection of a condition by a range that includes strings (letter + numbers)","Tags":"python,sql,excel,pandas,report","AnswerCount":2,"A_Id":75014870,"Answer":"If there is only one letter as \"identifier\", like C02, E34, etc. you can split your column ICD10 into two columns, first one is the first character of ICD10, and second are the numbers.\ndf.loc[:, \"Letter_identifier\"] = df[\"ICD10\"].str[0]\ndf.loc[:, \"Number_identifier\"] = df[\"ICD10\"].str[1:].astype(int) \nThen you can create a masks like:\n(df[\"Letter_identifier\"] == \"C\") & (df[\"Number_identifier\"] > 0) & (df[\"Number_identifier\"] <= 99)\nYou can split your dataframe as shown, aggregate on those sub-dataframes and concat your result.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75006406,"CreationDate":"2023-01-04 13:47:50","Q_Score":0,"ViewCount":20,"Question":"I need to connect data frame and dict like this . 
the number of frames for each cell is different\n,so the number of \"0\",\"1\"and so on is different .Total number of cells 16.How can","Title":"How to connect pandas data frame and dict?","Tags":"python,pandas,dataframe","AnswerCount":1,"A_Id":75006507,"Answer":"To combine a pandas data frame with a dictionary, you can use the pandas.DataFrame.from_dict() function. This function takes a dictionary as input and returns a pandas data frame.\nFor example, you can create a dictionary with keys as column names and values as data for each column, and then pass this dictionary to the from_dict function to create a data frame:\nimport pandas as pd\ndata = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}\ndf = pd.DataFrame.from_dict(data)\nprint(df)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75007071,"CreationDate":"2023-01-04 14:43:18","Q_Score":1,"ViewCount":385,"Question":"Currently, I am using a backend with FastAPI (port 8000) and a frontend with Solid-JS (port 3000).\nI want to send a refresh token from the backend server to the client when they log in.\nFor login, I send a request from the client using Axios like below:\nconst onClickLogin = () => {\n axios({\n method: 'post',\n url: 'http:\/\/localhost:8000\/login',\n responseType: 'json',\n headers: {\n 'Content-Type': 'application\/x-www-form-urlencoded',\n },\n data: {\n username: inputUsername(),\n password: inputPw(),\n },\n }).then((response) => {\n props.setToken(response.data.access_token);\n props.updateUserinfo();\n props.setPageStatus('loggedin');\n });\n };\n\nWhen the FastAPI server receives the request, it sends an access token through content and I want to send a refresh token through httponly cookie for security like below.\n@app.post('\/login', summary='Create access and refresh tokens for user', response_model=TokenSchema)\nasync def login(form_data: OAuth2PasswordRequestForm = Depends()):\n ...\n\n response = JSONResponse({'access_token': create_access_token(user['id'])})\n response.set_cookie(key='refresh_token_test', value=create_refresh_token(user['id']),\n max_age=REFRESH_TOKEN_EXPIRE_MINUTES, httponly=False, samesite='none', domain='http:\/\/localhost:3000')\n\n return response\n\nIn this case, I just disabled the 'httponly' option to check the cookie more easily in Chrome developer tools.\nIt is very difficult to check what is the problem because the response is received successfully and does not return any error or warnings but there are just no cookies.\nI also set the CORS setting in FastAPI like below.\norigins = [\n 'http:\/\/localhost:3000'\n]\n\napp.add_middleware(\n CORSMiddleware,\n allow_origins=origins,\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n)\n\nIs there any reasons that the browser does not receive cookies? Is there any method to debug this case?","Title":"The browser cannot receive cookies using FastAPI's set_cookie method","Tags":"python,http,cookies,fastapi","AnswerCount":1,"A_Id":75007517,"Answer":"I solved the issue by changing the domain parameter from 'http:\/\/localhost:3000' to just 'localhost', as well as changing the samesite parameter to 'lax'.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75007547,"CreationDate":"2023-01-04 15:17:25","Q_Score":1,"ViewCount":141,"Question":"I am creating a module, henceforth called mymodule, which I distribute using a pyproject.toml. This file contains a version number. I would like to write this version number in the logfile of mymodule. 
In mymodule I use the following snippet (in __init__.py) to obtain the version:\nimport importlib.metadata\n\n__version__ = importlib.metadata.version(__package__)\n\ndel importlib.metadata\n\nHowever this version is wrong. This appears to be the highest version which I have ever installed. For reference the command python3 -m pip show mypackage does actually show the correct version after installing the module locally. I struggle to explain this difference. Can anyone think of a cause of this discrepancy?\nI also ran importlib.metadata.version(mypackage) which returned the same incorrect version.","Title":"Difference between version pip show and importlib.metadata.version","Tags":"python,pip,python-importlib","AnswerCount":1,"A_Id":75015859,"Answer":"The problem was related to left over build artifacts from using setup.py. importlib and pkg_resources will detect these artifacts in a local installation and pip will not. Deleting the mypackage.egg-info directory fixed the issue.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75008037,"CreationDate":"2023-01-04 15:56:53","Q_Score":1,"ViewCount":176,"Question":"I am new in micropython and testing it out, if it can fit the needs for my next project. I have set up a script to test it and there I run three async jobs in an endless loop. one of them is a tiny webserver, which should act as an API. The construct is working fine, I just need to know, how can I get the clients IP address, which is calling my API webservice (it will be only a local IP, so no worries about reverse proxies etc.)? So I would like to have the clients IP in the Method APIHandling, in this snippet just to print it out:\nasync def APIHandling(reader, writer):\n request_line = await reader.readline()\n # We are not interested in HTTP request headers, skip them\n while await reader.readline() != b\"\\r\\n\":\n pass\n request = str(request_line)\n try:\n request = request.split()[1]\n except IndexError:\n pass\n print(\"API request: \" + request + \" from IP: \")\n req = request.split('\/')\n #do some things here\n response = html % stateis\n writer.write(response)\n await writer.drain()\n await writer.wait_closed()\n\nasync def BusReader():\n #doing something here\n await asyncio.sleep(0)\n\nasync def UiHandling():\n #doing something else here\n await asyncio.sleep(0.5)\n\nasync def Main():\n set_global_exception()\n loop = asyncio.get_event_loop()\n loop.create_task(asyncio.start_server(APIHandling, Networking.GetIPAddress(), 80))\n loop.create_task(UiHandling())\n loop.create_task(BusReader())\n loop.run_forever()\n\ntry:\n asyncio.run(Main())\nfinally:\n asyncio.new_event_loop()\n\nThe only thing I found was this: Stream.get_extra_info(v) - but I do not have a Stream avaliable anywhere?\nNote: This is just a snippet with the essential parts of my actual script, so you will find references to other classes etc. 
which are not present in this code example.","Title":"Micropython: asyncio Server: get Client IP address","Tags":"python-asyncio,micropython","AnswerCount":1,"A_Id":75008255,"Answer":"Nevermind, I was too stupid to see that \"writer\" is actually a Stream, where I can get the clients IP with writer.get_extra_info('peername')[0]","Users Score":3,"is_accepted":false,"Score":0.537049567,"Available Count":1},{"Q_Id":75008445,"CreationDate":"2023-01-04 16:31:17","Q_Score":0,"ViewCount":75,"Question":"Currently, we have a table containing a varchar2 column with 4000 characters, however, it became a limitation as the size of the 'text' being inserted can grow bigger than 4000 characters, therefore we decided to use CLOB as the data type for this specific column, what happens now is that both the insertions and selections are way too slow compared to the previous varchar2(4000) data type.\nWe are using Python combined with SqlAlchemy to do both the insertions and the retrieval of the data. In simple words, the implementation itself did not change at all, only the column data type in the database.\nDoes anyone have any idea on how to tweak the performance?","Title":"Why CLOB slower than VARCHAR2 in Oracle?","Tags":"python,oracle","AnswerCount":3,"A_Id":75010399,"Answer":"You could also ask your DBA if possible to upgrade the DB to max_string_size=EXTENDED, then the max VARCHAR2 size would be 32K.","Users Score":-1,"is_accepted":false,"Score":-0.0665680765,"Available Count":1},{"Q_Id":75009650,"CreationDate":"2023-01-04 18:19:59","Q_Score":0,"ViewCount":92,"Question":"I want to store a numpy array to a file. This array contains thousands of float probabilities which all sum up to 1. But when I store the array to a CSV file and load it back, I realise that the numbers have been approximated, and their sum is now some 0.9999 value. How can I fix it?\n(Numpy's random choice method requires probabilities to sum up to 1)","Title":"How can I store float probabilities to a file so exactly that they sum up to 1?","Tags":"python,numpy,csv,floating-point,probability","AnswerCount":2,"A_Id":75023082,"Answer":"Due to floating point arithmetic errors, you can get tiny errors in what seem like ordinary calculations. However, in order to use the choice function, the probabilities don't need to be perfect.\nOn reviewing the code in the current version of Numpy as obtained from Github, I see that the tolerance for the sum of probabilities is that sum(p) is within sqrt(eps) of 1, where eps is the double precision floating point epsilon, which is approximately 1e-16. So the tolerance is about 1e-8. (See lines 955 and 973 in numpy\/random\/mtrand.pyx.)\nFarther down in mtrand.pyx, choice normalizes the probabilities (which are already almost normalized) to sum to 1; see line 1017.\nMy advice is to ensure that all 16 digits are stored in the csv, then when you read them back, the error in the sum will be much smaller than 1e-8 and choice will be happy. 
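Something along these lines should work (a rough sketch, the file name is arbitrary):\nimport numpy as np\np = np.random.dirichlet(np.ones(10000)) # some probabilities that sum to 1\nnp.savetxt('probs.csv', p, fmt='%.17g') # 17 significant digits round-trip a float64 exactly\nq = np.loadtxt('probs.csv')\nprint(abs(q.sum() - 1.0)) # comfortably below the ~1e-8 tolerance\nnp.random.choice(len(q), p=q) # accepted without renormalising by hand\n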
I think other people commenting here have posted some advice about how to print all digits.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75010631,"CreationDate":"2023-01-04 19:55:15","Q_Score":2,"ViewCount":455,"Question":"Is there any difference between the typing.cast function and the built-in cast function?\nx = 123\ny = str(x)\n\nfrom typing import cast\nx = 123\ny = cast(str, x)\n\nI expected that mypy might not like the first case and would prefer the typing.cast but this was not the case.","Title":"Python: typing.cast vs built in casting","Tags":"python,casting,mypy","AnswerCount":1,"A_Id":75010658,"Answer":"str(x) returns a new str object, independent of the original int. It's only an example of \"casting\" in a very loose sense (and one I don't think is useful, at least in the context of Python code).\ncast(str, x) simply returns x, but tells a type checker to pretend that the return value has type str, no matter what type x may actually have.\nBecause Python variables have no type (type is an attribute of a value), there's no need for casting in the sense that languages like C use it (where you can change how the contents of a variable are viewed based on the type you cast the variable to).","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75010637,"CreationDate":"2023-01-04 19:55:46","Q_Score":2,"ViewCount":113,"Question":"I've made following code that deciphers some byte-arrays into \"Readable\" text for a translation project.\nwith open(Path(cur_file), mode=\"rb\") as file:\n contents = file.read()\n file.close()\n\ntext = \"\"\nfor i in range(0, len(contents), 2): # Since it's encoded in UTF16 or similar, there should always be pairs of 2 bytes\n byte = contents[i]\n byte_2 = contents[i+1]\n if byte == 0x00 and byte_2 == 0x00:\n text+=\"[0x00 0x00]\"\n elif byte != 0x00 and byte_2 == 0x00:\n #print(\"Normal byte\")\n if chr(byte) in printable:\n text+=chr(byte)\n elif byte == 0x00:\n pass\n else:\n text+=\"[\" + \"0x{:02x}\".format(byte) + \"]\"\n else:\n #print(\"Special byte\")\n text+=\"[\" + \"0x{:02x}\".format(byte) + \" \" + \"0x{:02x}\".format(byte_2) + \"]\"\n# Some dirty replaces - Probably slow but what do I know - It works\ntext = text.replace(\"[0x0e]n[0x01]\",\"[USERNAME_1]\") # Your name\ntext = text.replace(\"[0x0e]n[0x03]\",\"[USERNAME_3]\") # Your name\ntext = text.replace(\"[0x0e]n[0x08]\",\"[TOWNNAME_8]\") # Town name\ntext = text.replace(\"[0x0e]n[0x09]\",\"[TOWNNAME_9]\") # Town name\ntext = text.replace(\"[0x0e]n[0x0a]\",\"[CHARNAME_A]\") # Character name\n\ntext = text.replace(\"[0x0a]\",\"[ENTER]\") # Generic enter\n\nlang_dict[emsbt_key_name] = text\n\nWhile this code does work and produce output like:\nCancel[0x00 0x00]\n\nAnd more complex ones, I've stumbled upon a performance problem when I loop it through 60000 files.\nI've read a couple of questions regarding += with large strings and people say that join is preferred with large strings. However, even with strings of just under 1000 characters, a single file takes about 5 seconds to store, which is a lot.\nI almost feel like it's starts fast and gets progressively slower and slower.\nWhat would be a way to optimize this code? 
I feel it's also abysmal.\nAny help or clue is greatly appreciated.\nEDIT: Added cProfile output:\n 261207623 function calls (261180607 primitive calls) in 95.364 seconds\n\n Ordered by: cumulative time\n\n ncalls tottime percall cumtime percall filename:lineno(function)\n 284\/1 0.002 0.000 95.365 95.365 {built-in method builtins.exec}\n 1 0.000 0.000 95.365 95.365 start.py:1()\n 1 0.610 0.610 94.917 94.917 emsbt_to_json.py:21(to_json)\n 11179 11.807 0.001 85.829 0.008 {method 'index' of 'list' objects}\n 62501129 49.127 0.000 74.146 0.000 pathlib.py:578(__eq__)\n125048857 18.401 0.000 18.863 0.000 pathlib.py:569(_cparts)\n 63734640 6.822 0.000 6.828 0.000 {built-in method builtins.isinstance}\n 160958 0.183 0.000 4.170 0.000 pathlib.py:504(_from_parts)\n 160958 0.713 0.000 3.942 0.000 pathlib.py:484(_parse_args)\n 68959 0.110 0.000 3.769 0.000 pathlib.py:971(absolute)\n 160959 1.600 0.000 2.924 0.000 pathlib.py:56(parse_parts)\n 91999 0.081 0.000 1.624 0.000 pathlib.py:868(__new__)\n 68960 0.028 0.000 1.547 0.000 pathlib.py:956(rglob)\n 68960 0.090 0.000 1.518 0.000 pathlib.py:402(_select_from)\n 68959 0.067 0.000 1.015 0.000 pathlib.py:902(cwd)\n 37 0.001 0.000 0.831 0.022 __init__.py:1()\n 937462 0.766 0.000 0.798 0.000 pathlib.py:147(splitroot)\n 11810 0.745 0.000 0.745 0.000 {method '__exit__' of '_io._IOBase' objects}\n 137918 0.143 0.000 0.658 0.000 pathlib.py:583(__hash__)\n\nEDIT: Upon further inspection with line_profiler, turns out that the culprit isn't even in above code. It's well outside that code where I read search over the indexes to see if there is +1 file (looking ahead of the index). This apparently consumes a whole lot of CPU time.","Title":"Replace and += is abismally slow","Tags":"python,string,replace","AnswerCount":3,"A_Id":75010725,"Answer":"Just in case it provides you pathways to search, if I was in your case I'd do two separate checks over 100 files for example timing:\n\nHow much time it takes to execute only the for loop.\nHow much it takes to do only the six replaces.\n\nIf any takes most of the total time, I'd try to find a solution just for that bit.\nFor raw replacements there are specific software designed for massive replacements.\nI hope it helps in some way.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75010792,"CreationDate":"2023-01-04 20:13:05","Q_Score":2,"ViewCount":78,"Question":"I intend to make a while loop inside a defined function. In addition, I want to return a value on every iteration. Yet it doesn't allow me to iterate over the loop.\nHere is the plan:\ndef func(x):\n n=3\n while(n>0): \n x = x+1 \n return x\n\nprint(func(6)) \n\nI know the reason to such issue-return function breaks the loop.\nYet, I insist to use a defined function. Therefore, is there a way to somehow iterate over returning a value, given that such script is inside a defined function?","Title":"returning value without breaking a loop","Tags":"python,loops,return","AnswerCount":3,"A_Id":75010871,"Answer":"Depending on your use case, you may simply use print(x) inside the loop and then return the final value.\nIf you actually need to return intermediate values to a caller function, you can use yield.","Users Score":-1,"is_accepted":false,"Score":-0.0665680765,"Available Count":1},{"Q_Id":75011235,"CreationDate":"2023-01-04 20:58:48","Q_Score":1,"ViewCount":367,"Question":"Im building with Docker-Compose a web app with Django backend and React frontend. 
All the code was given to me, cause its an exercise for a Trainee DevOps interview, so i only have to make the building and deploy.\nThe docker-compose build runs fine, but when i make the docker-compose up, i get the next error:\nfrontend_1 |\nfrontend_1 | > frontendpublic@0.1.0 start\nfrontend_1 | > node scripts\/start.js\nfrontend_1 |\nfrontend_1 | node:internal\/modules\/cjs\/loader:1042\nfrontend_1 | throw err;\nfrontend_1 | ^\nfrontend_1 |\nfrontend_1 | Error: Cannot find module 'chalk'\nfrontend_1 | Require stack:\nfrontend_1 | - \/frontend\/scripts\/start.js\nfrontend_1 | at Module._resolveFilename (node:internal\/modules\/cjs\/loader:1039:15)\nfrontend_1 | at Module._load (node:internal\/modules\/cjs\/loader:885:27)\nfrontend_1 | at Module.require (node:internal\/modules\/cjs\/loader:1105:19)\nfrontend_1 | at require (node:internal\/modules\/cjs\/helpers:103:18)\nfrontend_1 | at Object. (\/frontend\/scripts\/start.js:18:15)\nfrontend_1 | at Module._compile (node:internal\/modules\/cjs\/loader:1218:14)\nfrontend_1 | at Module._extensions..js (node:internal\/modules\/cjs\/loader:1272:10)\nfrontend_1 | at Module.load (node:internal\/modules\/cjs\/loader:1081:32)\nfrontend_1 | at Module._load (node:internal\/modules\/cjs\/loader:922:12)\nfrontend_1 | at Function.executeUserEntryPoint [as runMain] (node:internal\/modules\/run_main:82:12) {\nfrontend_1 | code: 'MODULE_NOT_FOUND',\nfrontend_1 | requireStack: [ '\/frontend\/scripts\/start.js' ]\nfrontend_1 | }\nfrontend_1 |\nfrontend_1 | Node.js v19.3.0\n\nThe start.js script is using functions from the chalk module, but i dont know how to install it. Can you help me?\n*START.JS:\n*\n'use strict';\n\n\/\/ Do this as the first thing so that any code reading it knows the right env.\nprocess.env.BABEL_ENV = 'development';\nprocess.env.NODE_ENV = 'development';\n\n\/\/ Makes the script crash on unhandled rejections instead of silently\n\/\/ ignoring them. 
In the future, promise rejections that are not handled will\n\/\/ terminate the Node.js process with a non-zero exit code.\nprocess.on('unhandledRejection', err => {\n throw err;\n});\n\n\/\/ Ensure environment variables are read.\nrequire('..\/config\/env');\n\nconst fs = require('fs');\nconst chalk = require('chalk');\nconst webpack = require('webpack');\nconst WebpackDevServer = require('webpack-dev-server');\nconst clearConsole = require('react-dev-utils\/clearConsole');\nconst checkRequiredFiles = require('react-dev-utils\/checkRequiredFiles');\nconst {\n choosePort,\n createCompiler,\n prepareProxy,\n prepareUrls,\n} = require('react-dev-utils\/WebpackDevServerUtils');\nconst openBrowser = require('react-dev-utils\/openBrowser');\nconst paths = require('..\/config\/paths');\nconst config = require('..\/config\/webpack.config.dev');\nconst createDevServerConfig = require('..\/config\/webpackDevServer.config');\n\nconst useYarn = fs.existsSync(paths.yarnLockFile);\nconst isInteractive = process.stdout.isTTY;\n\n\/\/ Warn and crash if required files are missing\nif (!checkRequiredFiles([paths.appHtml, paths.appIndexJs])) {\n process.exit(1);\n}\n\n\/\/ Tools like Cloud9 rely on this.\nconst DEFAULT_PORT = parseInt(process.env.PORT, 10) || 3000;\nconst HOST = process.env.HOST || '0.0.0.0';\n\nif (process.env.HOST) {\n console.log(\n chalk.cyan(\n `Attempting to bind to HOST environment variable: ${chalk.yellow(\n chalk.bold(process.env.HOST)\n )}`\n )\n );\n console.log(\n `If this was unintentional, check that you haven't mistakenly set it in your shell.`\n );\n console.log(`Learn more here: ${chalk.yellow('...')}`);\n console.log();\n}\n\n\/\/ We attempt to use the default port but if it is busy, we offer the user to\n\/\/ run on a different port. `choosePort()` Promise resolves to the next free port.\nchoosePort(HOST, DEFAULT_PORT)\n .then(port => {\n if (port == null) {\n \/\/ We have not found a port.\n return;\n }\n const protocol = process.env.HTTPS === 'true' ? 'https' : 'http';\n const appName = require(paths.appPackageJson).name;\n const urls = prepareUrls(protocol, HOST, port);\n \/\/ Create a webpack compiler that is configured with custom messages.\n const compiler = createCompiler(webpack, config, appName, urls, useYarn);\n \/\/ Load proxy config\n const proxySetting = require(paths.appPackageJson).proxy;\n const proxyConfig = prepareProxy(proxySetting, paths.appPublic);\n \/\/ Serve webpack assets generated by the compiler over a web sever.\n const serverConfig = createDevServerConfig(\n proxyConfig,\n urls.lanUrlForConfig\n );\n const devServer = new WebpackDevServer(compiler, serverConfig);\n \/\/ Launch WebpackDevServer.\n devServer.listen(port, HOST, err => {\n if (err) {\n return console.log(err);\n }\n if (isInteractive) {\n clearConsole();\n }\n console.log(chalk.cyan('Starting the development server...\\n'));\n openBrowser(urls.localUrlForBrowser);\n });\n\n ['SIGINT', 'SIGTERM'].forEach(function(sig) {\n process.on(sig, function() {\n devServer.close();\n process.exit();\n });\n });\n })\n .catch(err => {\n if (err && err.message) {\n console.log(err.message);\n }\n process.exit(1);\n });\n\n\nThanks!\nI was trying to run the services with docker-compose, but i have an error with the chalk module who is needed in script\/start.js","Title":"Cannot find module 'chalk'","Tags":"python,reactjs,docker,react-hooks,chalk","AnswerCount":1,"A_Id":75012208,"Answer":"SOLVED!\nAdded chalk to devDependencies on the file package.json. 
It was only on Dependencies.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75012468,"CreationDate":"2023-01-04 23:35:46","Q_Score":1,"ViewCount":104,"Question":"I have two versions of Python installed (OS: Windows 10). The original version is 3.8.2. I installed 3.11.1 and did not have it added to PYTHONPATH. I created a virtual env using py -m venv .env. Despite using py, the virtual environment runs both Python 3.8.2 and 3.11.1 depending on whether I type python or py. Inside the virtual environment I installed a newer version of Django (4.1.5) using py -m pip install django, which successfully installed Django within the Python311 folder on my system. However, no django-admin.py file was installed, just django-admin.exe. To ensure I created my project using the newer version of Django, I navigated to the folder where the django-admin.exe file exists and ran the following:\npy django-admin.exe startproject <*project_name*> <*full_path_to_project_folder*>\n\nThe settings.py file shows it was created using Django 4.1.5, but whenever I start my project it runs using Django 3.0.4 (the pre-existing version). I am starting it using py manage.py runserver, to ensure Python 3.11.1 is being used. I have tried it both inside and outside my virtual environment. I have added the python311\\Scripts folder at the top of my Path environment variables, and have uninstalled and reinstalled Django 4.1.5.\nAt this point I am at a loss as to why I cannot get the newer version of Django to run. I have tried numerous Google and SO searches and have not found any similar issues or anything to point me in the right direction. I wondered whether it might have to do with django-admin.py not being installed with Django 4.1.5, so I looked for the django-admin.py file in the Django 4.1.5 source code with no success. I then saved the django-admin.py file from the 3.0.4 version into the python311\/Scripts folder updated to reference the python311\\python.exe, but the system still loads the one from the python38\/Scripts folder. Hoping for some additional ideas to try.\nEDIT: After adding 3.11.1 to Path and reinstalling Django I deleted and recreated the Django project. The system still wouldn't find the django-admin.py I added to the python311\\Scripts folder, even using the full path, so I used the following command to create the project:\npy \\full\\path\\to\\django-admin.exe startproject project_name .\n\nSame issue: The project was created with Django 4.1.5, but runserver still uses 3.0.4.","Title":"Wrong Version of Django Running in Virtual Environment","Tags":"python-3.x,django,version,python-venv","AnswerCount":1,"A_Id":75023482,"Answer":"I was able to ultimately solve the problem by adding the \\python311 folder to the Path (leaving off \\Scripts). So I now have both \\python311 and \\python311\\Scripts on the Path in addition to the original \\python38 and \\python38\\Scripts locations. 
Runserver now runs with Django 4.1.5.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75013429,"CreationDate":"2023-01-05 02:56:56","Q_Score":0,"ViewCount":40,"Question":"I have a Python script that downloads some excel spreadsheets from a website, and then uploads these spreadsheets to a folder on OneDrive, at the moment I have to run this script on my machine every day, I would like to know if there is a way to run this script on a server or something, so I don't have to keep my computer on all the time.\nI thought about uploading the script to Heroku and using the platform's scheduling service, but I don't know how to integrate with OneDrive","Title":"How can I schedule a Python script to upload files to One drive?","Tags":"python,heroku,onedrive","AnswerCount":1,"A_Id":75013492,"Answer":"Yes, it is possible to schedule a python script to run without using your local machine. There are a few options for doing this:\nUse a cloud-based computing service, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP). These services allow you to set up virtual machines and run your python scripts on them.\nUse a scheduling service, such as Cron or Windows Task Scheduler. These services allow you to set up a schedule for your python script to run at specific intervals.\nUse a remote server or virtual private server (VPS). These allow you to access a machine remotely and run your python scripts on it.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75013540,"CreationDate":"2023-01-05 03:21:30","Q_Score":1,"ViewCount":54,"Question":"I have a matrix that looks like this in a txt file:\n[[0.26263508 0.89992943 0.62171512 0.20750958 0.21195397 0.97217826\n 0.61573457 0.05643889]\n [0.33188798 0.32016444 0.92051048 0.75572024 0.20247452 0.37400282\n 0.10935296 0.63343081]\n [0.87017165 0.7283508 0.80314653 0.80094718 0.74024014 0.16330332\n 0.76579785 0.75177055]\n [0.2629302 0.59727507 0.60866212 0.29746334 0.54587234 0.43876005\n 0.75007362 0.89742691]\n [0.05300406 0.83342629 0.19291691 0.83372532 0.98122163 0.7815009\n 0.59635085 0.9700382 ]\n [0.69259902 0.42779514 0.04766533 0.62205107 0.71423376 0.85045446\n 0.31985818 0.15338853]\n [0.26947509 0.41946874 0.87206754 0.35849082 0.94756447 0.59001803\n 0.41028535 0.85643487]\n [0.87299386 0.70986812 0.87212445 0.30309828 0.31214338 0.33387522\n 0.52875374 0.75712628]\n [0.51605143 0.64374971 0.37821579 0.77055732 0.12504581 0.75814223\n 0.87462081 0.97378988]\n [1.27346865 0.73175293 1.35820425 1.08405559 0.97660218 1.31912378\n 0.62859619 0.94765808]]\n\nWhen I try to read it into a program using\ninputMatrix = np.loadtxt(\"testing789.txt\", dtype = 'i' , delimiter=' ') \nprint(inputMatrix)\n\nMy problem is that the [ and ] in the file are strings that cannot be converted to int32. 
Is there an efficient way to read in this matrix?","Title":"How do I load a matrix from a .txt file in python?","Tags":"python,numpy","AnswerCount":1,"A_Id":75023523,"Answer":"Instead of writing the matrix to a file like this:\nmyFile.write(str(matrix)),\nWrite it like this to automatically have it formatted:\nnp.savetxt(fileName.txt, matrix)\nOne last thing: Load the matrix from the txt file like so:\ninputMatrix = np.loadtxt(\"testing789.txt\", dtype = 'f' , delimiter=' ')\nWhere dtype = 'f' is used instead of i so that the matrix values are not rounded.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75014600,"CreationDate":"2023-01-05 06:32:58","Q_Score":0,"ViewCount":22,"Question":"I am currently reading from dropbox offline using pyspark on my local machine using this code\npre_test_quiz_df = spark \\ .read \\ .option('header', 'true') \\ .csv('\/Users\/jamie\/Dropbox\/Moodle\/Course uptake\/data use\/UserDetails.csv')\nWhile working on from a server I am not able to read dropbox on my local machine. Is there a way to read the same file but from the dropbox on my browser.\nHave tried reading with pandas and converting to pyspark dataframe although it did not work.","Title":"How to read dropbox online using pyspark","Tags":"python,apache-spark,pyspark,apache-spark-sql","AnswerCount":1,"A_Id":75072952,"Answer":"I found a work around. I didn't find any direct way of doing this, so the next alternative was using the dropbox API, which works pretty well. You can check their documentation or youtube on how to set up the API.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75014692,"CreationDate":"2023-01-05 06:43:09","Q_Score":1,"ViewCount":21,"Question":"When we're not creating models and directly creating the fields inn serializers using serializer.Serializer , will the fields save to database? because we havent migrated to the database? also if i have created one existing database, i'm creating some additional fields in serializers? will that also saves to database? can anyone lighten me up? Im new to django api.\nhere i provide some example.\nlet my Model.Py be like,\n class Detail(models.Model):\n fname = models.Charfield(maxlength=20)\n lname = models.Charfield(maxlength=20)\n mobile = models.IntergerField(maxlength=20)\n email = models.EmailField(maxlength=20)\n\nlet my Serializer.Py be like,\n class DetailSerializer(serializer.Serializer):\n created_at = serializer.Charfield\n is_active = serializer.Booleanfield\n\nWill this serializer save in database that i created manually in serializer?\nanother question is if i create serializers without a model, will that save to database?\nif that saves in database how's it possible without migrating?","Title":"Will creating serializer.Serializer fields saves to the database?","Tags":"python,django,django-models,django-rest-framework,django-serializer","AnswerCount":1,"A_Id":75014775,"Answer":"No.\nDRF won't create a field in your model table if you declare a field that doesn't exists in your model.\nActually, when using ModelSerializer and declare a field that does not exists in your model, you will get an error like django.core.exceptions.ImproperlyConfigured: Field name created_at is not valid for model Detail. 
So you won't be able to use your serializer at all.\nAnd when you are using Serializer, you don't have create method so you won't be able to save data on db at all.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75015394,"CreationDate":"2023-01-05 08:03:16","Q_Score":1,"ViewCount":61,"Question":"I'm trying to log every active query which has been running for more than 2 minutes. But when I use,\nrunning_queries = session.run(\n \"CALL dbms.listQueries()\"\n )\n\nI have another script running in which there is an infinite loop calling a simple query in another session. But it only returns the queries that are running inside my session.","Title":"Get every running query from every sesssion using Neo4j and Python","Tags":"python,neo4j,cypher","AnswerCount":2,"A_Id":75184561,"Answer":"The problem was with the query running in the other script. It was finishing before being caught.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75015722,"CreationDate":"2023-01-05 08:37:48","Q_Score":1,"ViewCount":161,"Question":"Thanks for any reply in advance.\nI have the entrance program main.py:\nimport asyncio\nfrom loguru import logger\nfrom multiprocessing import Process\nfrom app.events import type_a_tasks, type_b_tasks, type_c_tasks\n\n\ndef run_task(task):\n loop = asyncio.get_event_loop()\n loop.run_until_complete(task())\n loop.run_forever()\n\n\ndef main():\n processes = list()\n processes.append(Process(target=run_task, args=(type_a_tasks,)))\n processes.append(Process(target=run_task, args=(type_b_tasks,)))\n processes.append(Process(target=run_task, args=(type_c_tasks,)))\n\n for process in processes:\n process.start()\n logger.info(f\"Started process id={process.pid}, name={process.name}\")\n\n for process in processes:\n process.join()\n\n\nif __name__ == '__main__':\n main()\n\nwhere the different types of tasks are similarly defined, for example type_a_tasks are:\nimport asyncio\nfrom . import business_1, business_2, business_3, business_4, business_5, business_6\n\n\nasync def type_a_tasks():\n tasks = list()\n tasks.append(asyncio.create_task(business_1.main()))\n tasks.append(asyncio.create_task(business_2.main()))\n tasks.append(asyncio.create_task(business_3.main()))\n tasks.append(asyncio.create_task(business_4.main()))\n tasks.append(asyncio.create_task(business_5.main()))\n tasks.append(asyncio.create_task(business_6.main()))\n\n await asyncio.wait(tasks)\n return tasks\n\nwhere the main() function of businesses(1-6) are Future objects provided by asyncio, in which I implemented my business code.\nIs my usage of multiprocessing and asyncio event loops above the correct way of doing it?\nI am doing so because I have a lot of asynchronous tasks to perform, but it doesn't seem appropriate to put them all in one event loop, so I divided them into three parts(a, b and c) accordingly, and I hope they can be run in three different processes to exert the capability of multiple CPU cores, in the meantime taking advantage of asyncio features.\nI tried running my code, where the log records show there actually are different processes but all are using the same thread\/event loop(knowing this by adding process_id and thread_id to loguru format)","Title":"Can I use multiple event loops in a program where I also use multiprocessing module","Tags":"multiprocessing,python-asyncio,event-loop","AnswerCount":1,"A_Id":75021153,"Answer":"this seens ok. 
Just use asyncio.run(task()) inside run_task - it is simpler and there is no need to call run_forever (also, with the run_forever` call, your processes will never join the base one.\nIDs for other objects across process may repeat - if you want, add to your logging the result of calling os.getpid() in the body of run_task.\n(if these are, by chance, the same, that means that somehow subprocessing is using a \"dummy\" backend due to some configuration in your project - should not happen anyway)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75016129,"CreationDate":"2023-01-05 09:14:45","Q_Score":1,"ViewCount":32,"Question":"I use an API that has ~30 endpoints and I have settings how often I need to send request to each endpoint. For some endpoints it's seconds and for some hours. I want to implement python app that will call each API endpoint (and execute some code) after every N seconds where N can be different for each endpoint. If one call is still in progress when second one kicks in, then that one should be added to queue (or something similar) and executed after the first one finishes.\nWhat would be the correct way to implement this using python?\nI have some experience with RabbitMQ but I think that might be overkill for this problem.","Title":"Sending requests to different API endpoints every N seconds","Tags":"python,python-3.x","AnswerCount":2,"A_Id":75016339,"Answer":"You could build your code in this way:\n\nstore somewhere the URL, method and parameters for each type of query. A dictionary would be nice: {\"query1\": {\"url\":\"\/a\",\"method\":\"GET\",\"parameters\":None} , \"query2\": {\"url\":\"\/b\", \"method\":\"GET\",\"parameters\":\"c\"}} but you can do this any way you want, including a database if needed.\n\nstore somewhere a relationship between query type and interval. Again, you could do this with a case statement, or with a dict (maybe the same you previously used), or an interval column in a database.\n\nEvery N seconds, push the corresponding query entry to a queue (queue.put)\n\nan HTTP client library such as requests runs continuously, removes an element from the queue, runs the HTTP request and when it gets a result it removes the following element.\n\n\nOf course if your code is going to be distributed across multiple nodes for scalability or high availability, you will need a distributed queue such as RabbitMQ, Ray or similar.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75016292,"CreationDate":"2023-01-05 09:29:55","Q_Score":2,"ViewCount":54,"Question":"I wrote a function to collect all values into a single list from a dictionary where each value is a list. However, when I later modified that list, I found that my original dictionary was modified too!\nfrom functools import reduce \n\nd = {'foo': [1,2,3]}\nall_vals = reduce(lambda x, y: x + y, d.values())\nall_vals.append(4)\nprint(d)\n# {'foo': [1, 2, 3, 4]}\n\nThis doesn't happen if the dictionary has multiple key\/values though:\nfrom functools import reduce \n\nd = {'foo': [1,2,3], 'bar': [9]}\nall_vals = reduce(lambda x, y: x + y, d.values())\nall_vals.append(4)\nprint(d)\n# {'foo': [1, 2, 3], 'bar': [9]}\n\nThe dictionary now stays unmodified. 
Can anybody explain why python has this behaviour?","Title":"Modifing the return value of reduce() expression modifies the input","Tags":"python","AnswerCount":3,"A_Id":75016363,"Answer":"Because in the first case your all_vals is simply d.values()[0] because the reduce lambda is never called because there are no two elements to reduce with each other.\nIn the second case you do have two elements which are combined to form a new list which no longer references the list originally in the dictionary.","Users Score":3,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75016854,"CreationDate":"2023-01-05 10:14:30","Q_Score":2,"ViewCount":33,"Question":"I have a dataframe (more than 1 million rows) that has an open text columns for customer can write whatever they want.\nMisspelled words appear frequently and I'm trying to group comments that are grammatically the same.\nFor example:\n\n\n\n\nID\nComment\n\n\n\n\n1\nI want to change my credit card\n\n\n2\nI wannt change my creditt card\n\n\n3\nI want change credit caurd\n\n\n\n\nI have tried using Levenshtein Distance but computationally it is very expensive.\nCan you tell me another way to do this task?\nThanks!","Title":"How can I resolve write errors that I have in my data?","Tags":"python,dataframe,nlp,misspelling,write-error","AnswerCount":2,"A_Id":75016968,"Answer":"Levenshtein Distance has time complexity O(N^2).\nIf you define a maximum distance you're interested in, say m, you can reduce the time complexity to O(Nxm). The maximum distance, in your context, is the maximum number of typos you accept while still considering two comments as identical.\nIf you cannot do that, you may try to parallelize the task.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75017364,"CreationDate":"2023-01-05 10:56:41","Q_Score":0,"ViewCount":54,"Question":"I have a Folder (python311) where I have all of the Python Components stored (\"Lib\", \"Scripts\", \"python.exe\" ...) which is on this Path: D:\\python311.\nNow I want to move this Folder (python311) into another Folder (Code) -> Path: D:\\Code\\python311.\nUsing VS Code it lets me choose the Interpreter which is fine, but when I want to intsll new modules with pip, it does not work. It tries to create an process between the Interpreter of the old Path (D:\\python311\\python.exe), which is no longer existent, and the new Path where pip is stored (D:\\Code\\python311\\Scripts\\pip.exe).\nSolutions that I can think of would be for example reinstalling Python. 
I don't know if it can be solved through environment variables but it won't work because I store the Python Components on an external Drive.","Title":"Change Path of Python Interpreter\/Compiler","Tags":"python,pip,path","AnswerCount":2,"A_Id":75017405,"Answer":"You can solve it using environment variables even if it is on an external drive.\nYou will need to remove the old entry in the PATH variable and add the new entry (the new python path) or edit the old entry to include the new path.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75018953,"CreationDate":"2023-01-05 13:12:27","Q_Score":0,"ViewCount":30,"Question":"I have a dataframe with timeseries data\n\n\n\n\nTimestamp\nValues\n\n\n\n\n10-26-22 10.00 AM\n1\n\n\n10-26-22 09.04 AM\n5\n\n\n10.26-22 10.06 AM\n6\n\n\n--------\n--------\n\n\n10-27-22 3.32 AM\n9\n\n\n10-27-22 3.36 PM\n5\n\n\n10-27-22 3.31 PM\n8\n\n\n--------\n--------\n\n\n10-27-22 3.37 AM\n8.23\n\n\n10-28-22 4.20 AM\n7.2\n\n\n\n\nI tried to sort the timestamp column into ascending order by :\ndf.sort_values(\"Timestamp\", ascending = True, inplace= True)\nbut this code is not working. I want to get the data like this:\n\n\n\n\nTimestamp\nValues\n\n\n\n\n10-26-22 09.04 AM\n1\n\n\n10-26-22 10.00 AM\n5\n\n\n10-26-22 10.06 AM\n6\n\n\n--------\n--------\n\n\n10-27-22 3.31 AM\n9\n\n\n10-27-22 3.32 PM\n5\n\n\n10-27-22 3.36 PM\n8\n\n\n------\n--------\n\n\n10-27-22 3.37 AM\n8.23\n\n\n10-28-22 4.20 AM\n7.2","Title":"How to arrange time series data into ascending order","Tags":"python,pandas,dataframe,sorting,time-series","AnswerCount":2,"A_Id":75019067,"Answer":"I guess you'll need to drill down to the timestamp then convert the format before using the sort_values function on the dataframe..\nYou should look through the documentation. This is scarcely implemented.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75019532,"CreationDate":"2023-01-05 13:55:43","Q_Score":1,"ViewCount":63,"Question":"This issue has plagued me for the last few months, I need a more experienced opinion. We have a CLI Python application that uses a gRPC server to communicate with other backend services. Its structured something like this:\napp\n - gRPC_service\n - __init__.py\n - service_related_files.py\n - service_tests\n - __init__.py\n - test_service.py\n \n - src\n - __init__.py\n - src_files.py\n\n - python-win\n - python37.dll\n\n - gRPC_service.spec\n\nA few notes about the application:\n\nThe src directory houses the lower level machine learning code. The gRPC_service directory acts as a wrapper around the src code and sends processed requests to a given client\n\nThe python-win directory is a specific version of the Python Interpreter. Many things throughout the code base are reliant on a specific version of Python (3.7.9). This was our solution to making the code base a bit more portable. A developer can clone the repository and immediately have the necessary version of Python installed along with the plethora of 3rd party dependencies after running a Create-VirtualEnvironment.ps1 script that uses the python.exe in the python-win directory.\n\n\nThe number one issue I have faced when developing this application is namespace and importing issues, and I'm not exactly sure what's causing it. We have a sub-application within the src directory that only imports modules from within the src package and uses 3rd party libraries. 
This works just fine with no ModuleNotFound errors.\nIssues begin to surface when importing src modules from within the gRPC_service package. Even though both packages have __init__.py files, ModuleNotFound errors will be thrown if the PYTHONPATH is not modified at runtime. A work-around solution to this is to collect all the different file paths to each package within app and add them to sys.path. It goes without saying this is inconvenient.\nThis works for importing a majority of the modules, but to add to the confusion, some of the modules from the src package can only be imported after modifying sys.path AND adding a src. prefix to all of the local imports within the src package. Adding this prefix to local imports breaks the sub-application in the src package that I was speaking of earlier. A 'src can't be found' error gets thrown in the sub-app when doing this.\nAdditionally, without adding the src. prefixes to corresponding imports, no ModuleNotFound errors are thrown when the gRPC_service is bundled as a PyInstaller .exe. I have modified the pathex within the .spec file. The app works just fine when bundled - how do I get equivalent behavior when just running Python from source?\nSo I am looking for some advice from Python devs who have worked on large code bases. Is this issue common? What can I do to alleviate the inconvenience of modifying the PYTHONPATH at runtime? Is there a fix-all solution that tends to the needs of both applications within this codebase?","Title":"Local imports work in bundled PyInstaller app but in Python source","Tags":"python,python-3.x,pyinstaller,python-import,python-internals","AnswerCount":1,"A_Id":75035023,"Answer":"I can think of two solutions,\n\nSplit gRPC_services into its own project and repository and add it as a dependency of this project and install and import it like any other third party library.\n\nmove the gRPC_services folder inside of the src folder as a subpackage.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75020025,"CreationDate":"2023-01-05 14:32:47","Q_Score":0,"ViewCount":15,"Question":"I am writing a query in which I want to Sum amount using annotate and Sum decimal field in a foreign key relationship.\nThe field is summed correctly but it returns the Sum field in integer instead of decimal. 
In the database the field is in decimal format.\nThe query is like:\n***models.objects.filter(SourceDeletedFlat=False).annotate(TotalAmount=Sum(\"RequestOrderList__PurchaseOrderAmount\")).all()\nI do not want to use aggregate because I don't need overall column sum.","Title":"Sum and Annotate does not returns a decimal field Sum as a decimal","Tags":"django,django-models,django-rest-framework,python-3.7","AnswerCount":1,"A_Id":75020795,"Answer":"Can you try this\n\n**models.objects.filter(SourceDeletedFlat=False).annotate(TotalAmount=Sum(\"RequestOrderList__PurchaseOrderAmount\", output_field=DecimalField())).all()","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75020192,"CreationDate":"2023-01-05 14:45:13","Q_Score":2,"ViewCount":501,"Question":"As far as my understanding:\n\nMultiThread is an ideal option for I\/O applications.\n\nTherefore, I test a \"for loop\" code without any I\/O.\n(As following code)\nHowerver, it can reduce the execution time from 6.3s to 3.7s.\nIs the result correct ?\nor any mistake in my suppose ?\nfrom multiprocessing.dummy import Pool as ThreadPool\nimport time\n\n# normal\nl = []\ns = time.time()\nfor i in range(0, 10000):\n for j in range(i):\n l.append(j * 10)\n\ne = time.time()\nprint(f\"case1: {e-s}\") # 6.3 sec\n\n# multiThread\ndef func(x):\n for i in range(x):\n l_.append(i * 10)\n\nwith ThreadPool(50) as pool:\n l_ = []\n s = time.time()\n\n pool.map(func, range(0, 10000))\n\n e = time.time()\n print(f\"case2: {e-s}\") # 3.7 sec","Title":"Python-MultiThreading: Can MultiThreading improve \"for loop\" performance?","Tags":"python,multithreading,python-multithreading","AnswerCount":3,"A_Id":75020491,"Answer":"Multi threading is ideal for I\/O applications because it allows a server\/host to accept multiple connections, and if a single request is slow or hangs, it can continue serving the other connections without blocking them.\nThat isn't mutually exclusive from speeding up a simple for loop execution, if there's no coordination between threads required like in your trivial example above. If each execution of loop is completely independent of any other executions, then it's also very well suited to multi threading, and that's why you're seeing a speed up.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75022663,"CreationDate":"2023-01-05 18:09:59","Q_Score":1,"ViewCount":143,"Question":"I'm trying to use this code in Webots for one of the universal robots. The code works well until I try to the \"ikResults\" in line 49. The program is trying to use the least_squares.py but I'm getting the error for \"'x0' is infeasible\". 
This is the code I'm using:\nimport sys\nimport tempfile\ntry:\n import ikpy\n from ikpy.chain import Chain\n\nimport math\nfrom controller import Supervisor\n\nIKPY_MAX_ITERATIONS = 4\n\nsupervisor = Supervisor()\ntimeStep = int(4 * supervisor.getBasicTimeStep())\n\nfilename = None\nwith tempfile.NamedTemporaryFile(suffix='.urdf', delete=False) as file:\n filename = file.name\n file.write(supervisor.getUrdf().encode('utf-8'))\narmChain = Chain.from_urdf_file(filename, active_links_mask=[False, True, True, True, True,\nTrue, True, False, True, True, True])\n\nmotors = []\nfor link in armChain.links:\n if 'joint' in link.name and link.name !=\"wrist_3_link_gripper_joint\":\n motor = supervisor.getDevice(link.name)\n motor.setVelocity(1.0)\n position_sensor = motor.getPositionSensor()\n position_sensor.enable(timeStep)\n motors.append(motor)\n \ntarget = supervisor.getFromDef('TARGET')\narm = supervisor.getSelf()\n\nwhile supervisor.step(timeStep) != -1:\n targetPosition = target.getPosition()\n armPosition = arm.getPosition()\n\n x = targetPosition[0] - armPosition[0]\n y = targetPosition[1] - armPosition[1]\n z = targetPosition[2] - armPosition[2]\n\n initial_position = [0] + [m.getPositionSensor().getValue() for m in motors] + [0]\n ikResults = armChain.inverse_kinematics([x, y, z], max_iter=IKPY_MAX_ITERATIONS,\n initial_position=initial_position)`\n\nI've tried incrementing the iterations, changing the target's position, changing the status for the links in armChain (true or false), but nothing seemed to solve this issue. Reading other similar forums, it seems to do something with the bounds, not sure how to check on this.","Title":"\"ValueError: `x0` is infeasible.\" for least_squares.py","Tags":"python,scipy-optimize,chain,webots,inverse-kinematics","AnswerCount":1,"A_Id":75933814,"Answer":"Remove the limits from the urdf. You'll get an unbound chain, but it will likely solve","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75023405,"CreationDate":"2023-01-05 19:22:29","Q_Score":1,"ViewCount":81,"Question":"Please explain how python decides on the size or capacity of lists, when they're created (with list(), using list comprehension ...),and when they're modified.\nI have these questions because i tried the following little program and got a bit confused and curious by the results. (Maybe my formulas are wrong!)\nfrom sys import getsizeof\n\nempty_list = []\nlist_base_size = getsizeof(empty_list)\nprint(f\"{list_base_size = } bytes.\")\n\nprint()\n\nlist_manually = [\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19\n ]\nlist_manually_size = getsizeof(list_manually)\nlist_manually_capacity = (list_manually_size - list_base_size) \/\/ 8\nprint(f\"{list_manually_size = } bytes.\")\nprint(f\"{list_manually_capacity = } elemnts.\")\n\nprint()\n\nlist_constructor = list(range(20))\nlist_constructor_size = getsizeof(list_constructor)\nlist_constructor_capacity = (list_constructor_size - list_base_size) \/\/ 8\n# list capacity should also be equal to :\n# from cpython\n# new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 
3 : 6);\nprint(f\"{list_constructor_size = } bytes.\")\nprint(f\"{list_constructor_capacity = } elemnts.\")\n\nprint()\n\nlist_comprehension = [i for i in range(20)]\nlist_comprehension_size = getsizeof(list_comprehension)\nlist_comprehension_capacity = (list_comprehension_size - list_base_size) \/\/ 8\nprint(f\"{list_comprehension_size = } bytes.\")\nprint(f\"{list_comprehension_capacity = } elemnts.\")\n\nOutput\nist_base_size = 56 bytes.\n\nlist_manually_size = 216 bytes.\nlist_manually_capacity = 20 elemnts. \n\nlist_constructor_size = 216 bytes. \nlist_constructor_capacity = 20 elemnts. \n\nlist_comprehension_size = 248 bytes. \nlist_comprehension_capacity = 24 elemnts.","Title":"Explain size (capacity) of lists in python","Tags":"python,list","AnswerCount":1,"A_Id":75023474,"Answer":"When you create a list from anything whose length is known (i.e. you can call len on it) the list will be initialized to the exact correct size. A comprehension size isn't known so the list grows in capacity every time the existing capacity gets filled. It grows by more than 1 to be efficient.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75023622,"CreationDate":"2023-01-05 19:47:38","Q_Score":9,"ViewCount":4001,"Question":"I am trying this code to get iam role programmatically.\nfrom sagemaker import get_execution_role\nget_execution_role()\n\nIt's giving the following error.\nUnknownServiceError Traceback (most recent call last)\n\/tmp\/ipykernel_8241\/4227035378.py in ()\n----> 1 get_execution_role()\n 2 role=\"arn:aws:iam::984132841759:role\/service-role\/AmazonSageMaker-ExecutionRole-20221129T111507\",\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/sagemaker\/session.py in get_execution_role(sagemaker_session)\n 5039 \"\"\"\n 5040 if not sagemaker_session:\n-> 5041 sagemaker_session = Session()\n 5042 arn = sagemaker_session.get_caller_identity_arn()\n 5043 \n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/sagemaker\/session.py in __init__(self, boto_session, sagemaker_client, sagemaker_runtime_client, sagemaker_featurestore_runtime_client, default_bucket, settings, sagemaker_metrics_client)\n 131 self.settings = settings\n 132 \n--> 133 self._initialize(\n 134 boto_session=boto_session,\n 135 sagemaker_client=sagemaker_client,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/sagemaker\/session.py in _initialize(self, boto_session, sagemaker_client, sagemaker_runtime_client, sagemaker_featurestore_runtime_client, sagemaker_metrics_client)\n 183 self.sagemaker_metrics_client = sagemaker_metrics_client\n 184 else:\n--> 185 self.sagemaker_metrics_client = self.boto_session.client(\"sagemaker-metrics\")\n 186 prepend_user_agent(self.sagemaker_metrics_client)\n 187 \n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/boto3\/session.py in client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)\n 297 \n 298 \"\"\"\n--> 299 return self._session.create_client(\n 300 service_name,\n 301 region_name=region_name,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/session.py in create_client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)\n 868 * path\/to\/cert\/bundle.pem - A filename of the CA cert bundle to\n 869 uses. 
You can specify this argument if you want to use a\n--> 870 different CA cert bundle than the one used by botocore.\n 871 \n 872 :type endpoint_url: string\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/client.py in create_client(self, service_name, region_name, is_secure, endpoint_url, verify, credentials, scoped_config, api_version, client_config)\n 85 loader,\n 86 endpoint_resolver,\n---> 87 user_agent,\n 88 event_emitter,\n 89 retry_handler_factory,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/client.py in _load_service_model(self, service_name, api_version)\n 152 'signatureVersion'\n 153 ),\n--> 154 )\n 155 client_args = self._get_client_args(\n 156 service_model,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/loaders.py in _wrapper(self, *args, **kwargs)\n 130 for this to be used, it must be used on methods on an\n 131 instance, and that instance *must* provide a\n--> 132 ``self._cache`` dictionary.\n 133 \n 134 \"\"\"\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/loaders.py in load_service_model(self, service_name, type_name, api_version)\n 375 def load_service_model(self, service_name, type_name, api_version=None):\n 376 \"\"\"Load a botocore service model\n--> 377 \n 378 This is the main method for loading botocore models (e.g. a service\n 379 model, pagination configs, waiter configs, etc.).\n\nUnknownServiceError: Unknown service: 'sagemaker-metrics'. Valid service names are: accessanalyzer, account, acm, acm-pca, alexaforbusiness, amp, amplify, amplifybackend, amplifyuibuilder, apigateway, apigatewaymanagementapi, apigatewayv2, appconfig, appconfigdata, appflow, appintegrations, application-autoscaling, application-insights, applicationcostprofiler, appmesh, apprunner, appstream, appsync, athena, auditmanager, autoscaling, autoscaling-plans, backup, backup-gateway, batch, braket, budgets, ce, chime, chime-sdk-identity, chime-sdk-meetings, chime-sdk-messaging, cloud9, cloudcontrol, clouddirectory, cloudformation, cloudfront, cloudhsm, cloudhsmv2, cloudsearch, cloudsearchdomain, cloudtrail, cloudwatch, codeartifact, codebuild, codecommit, codedeploy, codeguru-reviewer, codeguruprofiler, codepipeline, codestar, codestar-connections, codestar-notifications, cognito-identity, cognito-idp, cognito-sync, comprehend, comprehendmedical, compute-optimizer, config, connect, connect-contact-lens, connectparticipant, cur, customer-profiles, databrew, dataexchange, datapipeline, datasync, dax, detective, devicefarm, devops-guru, directconnect, discovery, dlm, dms, docdb, drs, ds, dynamodb, dynamodbstreams, ebs, ec2, ec2-instance-connect, ecr, ecr-public, ecs, efs, eks, elastic-inference, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, emr-containers, es, events, evidently, finspace, finspace-data, firehose, fis\n\nI tried multiple solution from the internet like upgrading sagemaker and boto3 to latest version without success.\nI am using conda_tensorflow2_py38 kernel in sagemaker notebook.","Title":"get_execution_role() sagemaker: UnknownServiceError: Unknown service: 'sagemaker-metrics'. 
Valid service names are: accessanalyzer","Tags":"python-3.x,amazon-iam,amazon-sagemaker,amazon-sagemaker-debugger,amazon-sagemaker-compilers","AnswerCount":4,"A_Id":75242072,"Answer":"update the boto3 and sagemaker %pip install --upgrade boto3 sagemaker in your notebook\nand do remember to RESTART your kernel","Users Score":1,"is_accepted":false,"Score":0.049958375,"Available Count":2},{"Q_Id":75023622,"CreationDate":"2023-01-05 19:47:38","Q_Score":9,"ViewCount":4001,"Question":"I am trying this code to get iam role programmatically.\nfrom sagemaker import get_execution_role\nget_execution_role()\n\nIt's giving the following error.\nUnknownServiceError Traceback (most recent call last)\n\/tmp\/ipykernel_8241\/4227035378.py in ()\n----> 1 get_execution_role()\n 2 role=\"arn:aws:iam::984132841759:role\/service-role\/AmazonSageMaker-ExecutionRole-20221129T111507\",\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/sagemaker\/session.py in get_execution_role(sagemaker_session)\n 5039 \"\"\"\n 5040 if not sagemaker_session:\n-> 5041 sagemaker_session = Session()\n 5042 arn = sagemaker_session.get_caller_identity_arn()\n 5043 \n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/sagemaker\/session.py in __init__(self, boto_session, sagemaker_client, sagemaker_runtime_client, sagemaker_featurestore_runtime_client, default_bucket, settings, sagemaker_metrics_client)\n 131 self.settings = settings\n 132 \n--> 133 self._initialize(\n 134 boto_session=boto_session,\n 135 sagemaker_client=sagemaker_client,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/sagemaker\/session.py in _initialize(self, boto_session, sagemaker_client, sagemaker_runtime_client, sagemaker_featurestore_runtime_client, sagemaker_metrics_client)\n 183 self.sagemaker_metrics_client = sagemaker_metrics_client\n 184 else:\n--> 185 self.sagemaker_metrics_client = self.boto_session.client(\"sagemaker-metrics\")\n 186 prepend_user_agent(self.sagemaker_metrics_client)\n 187 \n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/boto3\/session.py in client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)\n 297 \n 298 \"\"\"\n--> 299 return self._session.create_client(\n 300 service_name,\n 301 region_name=region_name,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/session.py in create_client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)\n 868 * path\/to\/cert\/bundle.pem - A filename of the CA cert bundle to\n 869 uses. 
You can specify this argument if you want to use a\n--> 870 different CA cert bundle than the one used by botocore.\n 871 \n 872 :type endpoint_url: string\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/client.py in create_client(self, service_name, region_name, is_secure, endpoint_url, verify, credentials, scoped_config, api_version, client_config)\n 85 loader,\n 86 endpoint_resolver,\n---> 87 user_agent,\n 88 event_emitter,\n 89 retry_handler_factory,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/client.py in _load_service_model(self, service_name, api_version)\n 152 'signatureVersion'\n 153 ),\n--> 154 )\n 155 client_args = self._get_client_args(\n 156 service_model,\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/loaders.py in _wrapper(self, *args, **kwargs)\n 130 for this to be used, it must be used on methods on an\n 131 instance, and that instance *must* provide a\n--> 132 ``self._cache`` dictionary.\n 133 \n 134 \"\"\"\n\n~\/anaconda3\/envs\/tensorflow2_p38\/lib\/python3.8\/site-packages\/botocore\/loaders.py in load_service_model(self, service_name, type_name, api_version)\n 375 def load_service_model(self, service_name, type_name, api_version=None):\n 376 \"\"\"Load a botocore service model\n--> 377 \n 378 This is the main method for loading botocore models (e.g. a service\n 379 model, pagination configs, waiter configs, etc.).\n\nUnknownServiceError: Unknown service: 'sagemaker-metrics'. Valid service names are: accessanalyzer, account, acm, acm-pca, alexaforbusiness, amp, amplify, amplifybackend, amplifyuibuilder, apigateway, apigatewaymanagementapi, apigatewayv2, appconfig, appconfigdata, appflow, appintegrations, application-autoscaling, application-insights, applicationcostprofiler, appmesh, apprunner, appstream, appsync, athena, auditmanager, autoscaling, autoscaling-plans, backup, backup-gateway, batch, braket, budgets, ce, chime, chime-sdk-identity, chime-sdk-meetings, chime-sdk-messaging, cloud9, cloudcontrol, clouddirectory, cloudformation, cloudfront, cloudhsm, cloudhsmv2, cloudsearch, cloudsearchdomain, cloudtrail, cloudwatch, codeartifact, codebuild, codecommit, codedeploy, codeguru-reviewer, codeguruprofiler, codepipeline, codestar, codestar-connections, codestar-notifications, cognito-identity, cognito-idp, cognito-sync, comprehend, comprehendmedical, compute-optimizer, config, connect, connect-contact-lens, connectparticipant, cur, customer-profiles, databrew, dataexchange, datapipeline, datasync, dax, detective, devicefarm, devops-guru, directconnect, discovery, dlm, dms, docdb, drs, ds, dynamodb, dynamodbstreams, ebs, ec2, ec2-instance-connect, ecr, ecr-public, ecs, efs, eks, elastic-inference, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, emr-containers, es, events, evidently, finspace, finspace-data, firehose, fis\n\nI tried multiple solution from the internet like upgrading sagemaker and boto3 to latest version without success.\nI am using conda_tensorflow2_py38 kernel in sagemaker notebook.","Title":"get_execution_role() sagemaker: UnknownServiceError: Unknown service: 'sagemaker-metrics'. Valid service names are: accessanalyzer","Tags":"python-3.x,amazon-iam,amazon-sagemaker,amazon-sagemaker-debugger,amazon-sagemaker-compilers","AnswerCount":4,"A_Id":75036447,"Answer":"Upgrade your boto3 installation in your notebook by running this -\n%pip install --upgrade boto3. 
Once that's upgraded, restart your kernel and run the cells above, it should work as expected.\nThe get_execution_role() function is looking for a SageMaker session and creates one if it doesn't exist, and with the later version of the sagemaker sdk, it is trying to create a client for sagemaker-metrics as well, which isn't supported with the older boto3 version.","Users Score":15,"is_accepted":false,"Score":1.0,"Available Count":2},{"Q_Id":75023941,"CreationDate":"2023-01-05 20:20:29","Q_Score":1,"ViewCount":25,"Question":"Let's say I have created two shared tasks:\nfrom celery import shared_task\n\n@shared_task\ndef taskA():\n #do something\n pass\n\n@shared_task\ndef taskB():\n #do something else\n pass\n\nI am using celery to perform certain tasks that will be invoked by the users of my Django project.\nI have no issue with taskA and taskB being executed at the same time.\nBut, if taskA is already being executed, and another user tries to invoke taskA again, I want to show them an error message.\nIs there a way to do that?","Title":"Ensuring one task of a kind is being executed at a time. (Multiple tasks of different kinds can be executed concurrentlly) celery python","Tags":"python,django,celery","AnswerCount":1,"A_Id":75030486,"Answer":"The only reliable way to do this that I can think of is to have a Celery worker with concurrency set to 1, subscribed to a dedicated queue. Then you send taskA to this particular queue.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75024328,"CreationDate":"2023-01-05 21:02:54","Q_Score":2,"ViewCount":99,"Question":"Assume that I am looking for the index of some_array where some_array is equal to target. I know that python has list comprehension and np.where(), both of which would operate well for my purposes. But also assume that I want to do it with if-elif-else statements or with a for loop. The implementations if the length of the array is 3 would look like this:\nif some_array[0]==target:\n return 0\nelif some_array[1]==target:\n return 1\nelse:\n return 2 \n\nfor i in range(3):\n if some_array[i]==target:\n return i\n\nSo, when is it better to use a for loop over if-elif-else statement? I am mostly interested in the applications of it in python and in C, i.e., switch-cases.\nMy subquestions would be:\n\nDo the compilers (or in Python's case, numba or cython) switch from a for loop to switch-cases or vice versa if it feels like the other approach is faster?\nIs there a generally accepted good-coding practice that suggests a maximum length for an if-elif-else statements for better readability?\nIs there a threshold for the length of the array or the number of iterations where one of them performs better than the other?\n\nI apologise if this is asked before. I tried to check suggested questions but there were not helpful for my purposes.\nThanks in advance!","Title":"When is it worth using loop over if-else statements?","Tags":"python,c,loops,if-statement,compilation","AnswerCount":2,"A_Id":75024490,"Answer":"So, when is it better to use a for loop over if-elif-else statement?\n\nA loop is always clearer than an if-else if-else chain for this particular purpose. 
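A minimal sketch for the Celery question above (Q_Id 75023941), following the answer's suggestion of a dedicated queue served by a single-concurrency worker; the queue name and the project module "proj" are illustrative assumptions.

from celery import shared_task

@shared_task
def taskA():
    # long-running work that must never overlap with itself
    pass

@shared_task
def taskB():
    pass

# Route taskA to its own queue; taskB keeps using the default queue:
#   taskA.apply_async(queue="taskA_serial")
#   taskB.delay()
# Serve that queue with exactly one worker slot so only one taskA runs at a time:
#   celery -A proj worker -Q taskA_serial --concurrency=1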
That is sufficient reason to prefer the loop, except possibly in the highly unlikely case that you trace a performance bottleneck to such a loop and find that it is relieved by changing to an if.\n\n\nDo the compilers (or in Python's case, numba or cython) switch from a for loop to switch-cases or vice versa if it feels like the\nother approach is faster?\n\n\nLoop unrolling is a standard optimization that many C compilers will perform when they think it will yield an improvement. A short loop might be unrolled completely out of existence, which is effectively the transformation you ask about.\nI am not aware of compilers performing the reverse transformation.\n\n\nIs there a generally expected good coding practice that suggests a maximum length for an if-elif-else statements to ensure ease of\nfollowing the code?\n\n\nNot as such.\nWrite clear code, and do not repeat yourself.\n\n\nIs there a threshold for the length of the array or the number of iterations where one of them performs better than the other?\n\n\nNot in particular. Performance has many factors, and few clear rules.\nIn general, first make it work, then, if necessary, make it fast.","Users Score":6,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75024816,"CreationDate":"2023-01-05 22:01:44","Q_Score":1,"ViewCount":74,"Question":"This is my second day using Python. I'm using Visual Studios Code with Python 3.\nI'm trying to make a madlib program in python. I'm having trouble with figuring out how to have the program automatically recognize whether to use a or an based on the variable they entered when I asked. For now I just have a(n) to be safe but I don't know how to just have it recognize that if they input the word apple then the program should say \"an apple\" or if they put grape then it should say \"a grape\"\nprint(timemin, \"minutes later.\",capitalize_string, \"used a(n)\", noun1, \"to luer it outside.\")\n\nprint(\"Suddenly, a(n)\", animal2, \"raced towards us and caused \")\n\ncapitalize_string, animal2, timein, noun1 are all variables.\nI tried googling my problem but have not been able to find any help. I just want to learn how to make my program automatically recognize if the variable needs an (a or an) so that when the madlib prints out, it doesn't say \"Jeff saw a(n) apple\" but instead says \"Jeff saw an apple\" because the program recognized the variable started with a vowel.","Title":"How do I make a program recognize whether to use A or An based on the variable the user inputs?","Tags":"python,string,variables,input,output","AnswerCount":2,"A_Id":75024987,"Answer":"\"An umbrella\", but \"a user\". Unfortunately, English uses the first sound, not the first letter, to determine whether to use \"an\" or \"a\".\nIn some dialects, you even distinguish between \"a history\" but \"an historian\" because accented \"h\" is treated differently than an unaccented one.\nIf you check to see if the lower-cased first letter is an \"a\", \"e\", \"i\", \"o\", or \"u\", you'll probably get about 95% accuracy. Beyond that is a lot of hard work.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75025366,"CreationDate":"2023-01-05 23:12:42","Q_Score":1,"ViewCount":35,"Question":"I am new to deep learning. I am trying to use depthwise separable convolutions on cancer skin dataset X contains 200 images with shape (299,299,3) and Y\"mask\" contains 200 images with same shape as X. 
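A minimal sketch for the a/an madlib question above (Q_Id 75024816), implementing the first-letter vowel heuristic the answer describes (roughly 95% right, with known exceptions such as "an hour" or "a user"):

def article_for(word):
    # crude heuristic: look only at the first letter, not the first sound
    return "an" if word[:1].lower() in "aeiou" else "a"

noun1 = "apple"
print(f"Jeff saw {article_for(noun1)} {noun1}")    # Jeff saw an apple
print(f"Jeff saw {article_for('grape')} grape")    # Jeff saw a grape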
and it is binary classification.\nhere is the error:\n\"ValueError: A target array with shape (160, 299, 299, 3) was passed for an output of shape (None, 1) while using as loss binary_crossentropy. This loss expects targets to have the same shape as the output.\"\nI dont know where have I gone wrong. Help needed in resolving the error.\ndef first(x, filters, kernel_size, strides=1):\n x = SeparableConv2D(filters=filters,\n kernel_size=kernel_size,\n strides=strides,\n padding='same',\n use_bias=False)(x)\n x = BatchNormalization()(x)\n x = ReLU()(x)\n x = MaxPool2D(pool_size=4, strides=1, padding='same')(x)\n return x\n\ndef sep_bn(x, filters, kernel_size, strides=1):\n x = SeparableConv2D(filters=filters,\n kernel_size=kernel_size,\n strides=strides,\n padding='same',\n use_bias=False)(x)\n x = BatchNormalization()(x)\n return x\n\ndef block(tensor,filters):\n \n x = sep_bn(tensor, filters=filters, kernel_size=3)\n x = ELU()(x)\n x = sep_bn(x, filters=filters, kernel_size=3)\n x = ELU()(x)\n x = sep_bn(x, filters=filters, kernel_size=3)\n\n tensor = Add()([tensor, x])\n return tensor\n\ninput = Input(shape=(299, 299, 3))\n\nmy_model = first(input, 16, 3, strides=2)\nmy_model = block(my_model,16)\n\nmy_model = first(my_model, 32, 3, strides=2)\nmy_model = block(my_model,32)\n\nmy_model = first(my_model, 48, 3, strides=2)\nmy_model = block(my_model,48)\n\nmy_model = first(my_model, 64, 3, strides=2)\nmy_model = block(my_model,64)\n\nmy_model = first(my_model, 96, 3, strides=2)\nmy_model = GlobalAveragePooling2D()(my_model)\n\nmy_model = Dense(units=512, activation='relu')(my_model)\noutput = Dense(units=1, activation='sigmoid')(my_model)\nXception_model = Model(inputs=input, outputs=output)\nprint(Xception_model.summary())\n\nXception_model.compile(optimizer= 'adam', loss= ['binary_crossentropy'], metrics=['accuracy'])\n\nhistory = Xception_model.fit(x_train,y_train,validation_data=(x_test, y_test) ,epochs=150, batch_size=32 ,verbose=1","Title":"ValueError: A target array with shape (160, 299, 299, 3) was passed for an output of shape (None, 1) while using as loss `binary_crossentropy`","Tags":"python,keras,deep-learning","AnswerCount":1,"A_Id":75032258,"Answer":"The model output is \"Dense(units=1, activation='sigmoid')\" layer, it produces a number between 0 and 1.\nHowever, your 'y_train.shape' is '(160, 299, 299, 3)', which is a 299x299x3 tensor for each input item. For 'binary_crossentropy' you should provide 0 or 1 values as y_train. I.e. the expected 'y_train.shape' is '(160, )' and 'y_train.shape' should be an array with 0s and 1s (ground truth labels). 
In your case it looks like y_train is an image (just as an input).","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75025388,"CreationDate":"2023-01-05 23:16:37","Q_Score":1,"ViewCount":160,"Question":"def _get_trace(self) -> None:\n \"\"\"Retrieves the stack trace via debug_traceTransaction and finds the\n return value, revert message and event logs in the trace.\n \"\"\"\n\n # check if trace has already been retrieved, or the tx warrants it\n if self._raw_trace is not None:\n return\n self._raw_trace = []\n if self.input == \"0x\" and self.gas_used == 21000:\n self._modified_state = False\n self._trace = []\n return\n\n if not web3.supports_traces:\n raise RPCRequestError(\"Node client does not support `debug_traceTransaction`\")\n try:\n trace = web3.provider.make_request( # type: ignore\n \"debug_traceTransaction\", (self.txid, {\"disableStorage\": CONFIG.mode != \"console\"})\n )\n except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:\n msg = f\"Encountered a {type(e).__name__} while requesting \"\n msg += \"`debug_traceTransaction`. The local RPC client has likely crashed.\"\n if CONFIG.argv[\"coverage\"]:\n msg += \" If the error persists, add the `skip_coverage` marker to this test.\"\n raise RPCRequestError(msg) from None\n\n if \"error\" in trace:\n self._modified_state = None\n self._trace_exc = RPCRequestError(trace[\"error\"][\"message\"])\n raise self._trace_exc\n\n self._raw_trace = trace = trace[\"result\"][\"structLogs\"]\n if not trace:\n self._modified_state = False\n return\n\n # different nodes return slightly different formats. its really fun to handle\n # geth\/nethermind returns unprefixed and with 0-padding for stack and memory\n # erigon returns 0x-prefixed and without padding (but their memory values are like geth)\n fix_stack = False\n for step in trace:\n if not step[\"stack\"]:\n continue\n check = step[\"stack\"][0]\n if not isinstance(check, str):\n break\n if check.startswith(\"0x\"):\n fix_stack = True\n\n> c:\\users\\xxxx\\appdata\\local\\programs\\python\\python310\\lib\\site-packages\\brownie\\network\\transaction.py(678)_get_trace()\n-> step[\"pc\"] = int(step[\"pc\"], 16)\n(Pdb)\n\nI am doing Patricks Solidity course and ran into this error. I ended up copying and pasting his code:\ndef test_only_owner_can_withdraw():\n if network.show_active() not in LOCAL_BLOCKCHAIN_ENVIRONMENTS:\n pytest.skip(\"only for local testing\")\n fund_me = deploy_fund_me()\n bad_actor = accounts.add()\n with pytest.raises(exceptions.VirtualMachineError):\n fund_me.withdraw({\"from\": bad_actor})\n\nPytest worked for my other tests however When I tried to do this one it wouldnt work.","Title":"pytest: TypeError: int() can't convert non-string with explicit base","Tags":"python,pytest,solidity,brownie","AnswerCount":1,"A_Id":75044007,"Answer":"Ok, So after looking at my scripts and contracts I found the issue. The was an issue with my .sol contract and instead of returning a variable, it was returning the error message from my retrieve function in the contract. 
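A minimal sketch for the Keras shape error above (Q_Id 75025366): a Dense(1, sigmoid) head with binary_crossentropy expects one 0/1 label per image, i.e. y_train of shape (N,). How a binary label should be derived from a mask image depends on the dataset; the threshold below is purely an illustrative assumption.

import numpy as np

masks = np.random.random((160, 299, 299, 3))  # stand-in for the (160, 299, 299, 3) mask array
y_train = (masks.mean(axis=(1, 2, 3)) > 0.5).astype("float32")  # one binary label per image
print(y_train.shape)  # (160,), which matches the model's (None, 1) output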
Its fixed and working now","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75025911,"CreationDate":"2023-01-06 00:53:32","Q_Score":1,"ViewCount":255,"Question":"When I try to run the debugger in VS CODE with SAM hello world for Python, it creates the following in lunch.json\n {\n \"type\": \"aws-sam\",\n \"request\": \"direct-invoke\",\n \"name\": \"API ocrSam:HelloWorldFunction\",\n \"invokeTarget\": {\n \"target\": \"api\",\n \"templatePath\": \"${workspaceFolder}\/ocrSam\/template.yaml\",\n \"logicalId\": \"HelloWorldFunction\"\n },\n \"api\": {\n \"path\": \"\/hello\",\n \"httpMethod\": \"post\",\n \"payload\": {\n \"json\": {\n \"tst\": \"ttt\"\n }\n }\n },\n \"lambda\": {\n \"runtime\": \"python3.8\"\n }\n }\n\nWhen I run it I get:\n2023-01-06 00:43:10 [INFO]: Command: (not started) [\/usr\/local\/bin\/sam local start-api --template \/tmp\/aws-toolkit-vscode\/vsctkDR7EPj\/output\/template.yaml --port 5858 --debug-port 5859 --debugger-path \/home\/ubuntu\/.vscode-server\/extensions\/amazonwebservices.aws-toolkit-vscode-1.60.0\/resources\/debugger --debug-args \/var\/lang\/bin\/python3.8 \/tmp\/lambci_debug_files\/py_debug_wrapper.py --listen 0.0.0.0:5859 --wait-for-client --log-to-stderr \/var\/runtime\/bootstrap.py]\n2023-01-06 00:43:10 [ERROR]: Error running command \"sam local start-api\": Timeout token cancelled\n\nWhen I copy the cmd and try to run it from CLI I get:\n\/usr\/local\/bin\/sam local start-api --template \/tmp\/aws-toolkit-vscode\/vsctkDR7EPj\/output\/template.yaml --port 5858 --debug-port 5859 --debugger-path \/home\/ubuntu\/.vscode-server\/extensions\/amazonwebservices.aws-toolkit-vscode-1.60.0\/resources\/debugger --debug-args \/var\/lang\/bin\/python3.8 \/tmp\/lambci_debug_files\/py_debug_wrapper.py --listen 0.0.0.0:5859 --wait-for-client --log-to-stderr \/var\/runtime\/bootstrap.py\n\nError: No such option: --listen\nHow can I by-pass this issue?\nNOTE: this issue happened on my old pc which was dying, I bought a new mac and it doesn't happen any more, so it might be a specific problem of the pc","Title":"VS code SAM Debugger - Error running command \"sam local start-api\": Timeout token cancelled","Tags":"python-3.x,visual-studio-code,sam","AnswerCount":1,"A_Id":75062714,"Answer":"Which sam cli version do you have installed and which OS are you using?\nCan you try increasing the aws.samcli.lambdaTimeout setting in the VSCode settings to something like 90000 (1.5 minutes) to see if the default timeout is too low","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75026270,"CreationDate":"2023-01-06 02:11:49","Q_Score":1,"ViewCount":45,"Question":"I have two python packages. Package A is a dependency of package B, and includes a CLI executable. This is the basic file structure of Package A:\nPackageA\n| CLIExecutable\n | executable.exe\n| __init__.py\nsetup.py\n\nThe CLI is then called using the subprocess module.\nI have these parameters set within Package A's setuptools.setup to include the CLI when packaged:\npackage_data={package.__name__: [\"CLIExecutable\/*\"]},\ninclude_package_data=True,\n\nPackage B imports Package A and uses it for some of its methods. When I run Package B as a python package, it works and is able to access the CLI that is included in Package A. 
However, when I package it using pyinstaller, it is unable to access the CLI and returns a FileNotFoundError.","Title":"How to package a python app that depends on another python package that includes a non-python CLI executable with pyinstaller","Tags":"python,package,pyinstaller,executable","AnswerCount":1,"A_Id":75026759,"Answer":"I figured it out. PyInstaller has a --collect-data flag. I just needed to add --collect-data=PackageA","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75028214,"CreationDate":"2023-01-06 07:43:40","Q_Score":0,"ViewCount":53,"Question":"The organization I work for relies solely on Windows Task Scheduler for running their daily Python scripts. However, this makes it hard to monitor all the scripts and can be a bit unreliable at times.\nAlso, I can't imagine that it is best practice for a medium-sized company to use Windows Task Scheduler to automatically run Python scripts.\nWhat is best practice in this case? I heard from others that Azure is frequently used, but this is not possible for us yet. I have heard of applications like cron, but it seems that these are mostly used for personal use.","Title":"For a medium sized company, what is the best, most consistent way to schedule Python scripts which have to run every day?","Tags":"python,automation,scheduling","AnswerCount":3,"A_Id":75028405,"Answer":"It depends on your needs and your constraints, and you must also consider costs.\nHowever, there are plenty of solutions that you can use:\n\nWindows Task Scheduler: it can be sufficient, unless you find it unreliable or difficult to manage.\n\nCloud: Azure or another provider such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), which also offer scheduled execution of scripts. For example, with Azure you can create a \"Logic App\" that is triggered on a specified schedule and runs your Python script. You can also use Azure Functions, which are a serverless computing platform. With AWS there is Lambda, which I have tested; it is a good option for handling parallelism and optimizing costs.\n\ncron: cron is a Unix utility that allows you to schedule scripts or other commands to run automatically. I personally use it to automate running some processes on an Ubuntu system. 
For example, to run your\nscript every day at 8:00 am you can use:\n0 8 * * * \/path\/to\/script.py\n\nThird-party scheduling tools: There are also a number of third-party\ntools available that allow you to schedule scripts like Jenkins.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75030277,"CreationDate":"2023-01-06 11:19:53","Q_Score":1,"ViewCount":29,"Question":"I wanna to save print result to txt file\nmy code for scaping website\nfrom selenium import webdriver\nfrom bs4 import BeautifulSoup\nfrom selenium.common.exceptions import ElementClickInterceptedException\nfrom selenium.webdriver.common.by import By\nfrom collections import Counter\n\ndriver = webdriver.Chrome()\nresult = driver.get(\" U RL \")\n\ncity = []\n\nwhile True:\n driver.implicitly_wait(10)\n page_source = driver.page_source\n soup = BeautifulSoup(page_source, 'lxml')\n\n cities = [x.get_text() for x in soup.find_all('span', attrs={'class': 'region d-inline-block mr-5'})]\n\n for i in range(len(cities)):\n city.append(cities[i])\n\n try:\n driver.find_element(by=By.LINK_TEXT, value='\u00bb').click()\n except ElementClickInterceptedException:\n break\n count = Counter(city)\n print(count)\n \n with open('example.txt', 'w', encoding='utf-8') as f:\n f.write(count)\n\ndriver.quit()\n\ncan you help me for save my result to txt file .\nthank you for helping","Title":"Save Print Counter To TXT File , PYthon","Tags":"python","AnswerCount":1,"A_Id":75030807,"Answer":"Before the print function, save the output to a variable and then save it in a new file.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75030847,"CreationDate":"2023-01-06 12:18:17","Q_Score":1,"ViewCount":46,"Question":"import email, imaplib\nimport re\nimport datetime\nimport smtplib\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nhost= \"imap.gmail.com\" \nusername= \"ab@gmail.com\"\npassword= \"123456\"\nmail= imaplib.IMAP4_SSL(host)\nmail.login(username, password)\n\ndestination_folder_name = \"Emails Processed Jan 2023\"\n try:\n mail.create(destination_folder_name)\n except Exception as e:\n print(\"Unable to create destination Folder: \",e)\n\nIs there any solution to create folder in gmail with spaces like folder name \"Emails Processed Jan 2023\". I am getting the error Unable to create destination Folder: CREATE command error: BAD [b'Could not parse command']","Title":"Imaplib create folder with spaces","Tags":"python-3.x","AnswerCount":1,"A_Id":75457809,"Answer":"To create a folder or to call a folder with spaces you just need to create the string like this: '\"Test folder\"'. Note that you need single quotes and double quotes","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75031941,"CreationDate":"2023-01-06 14:03:00","Q_Score":0,"ViewCount":44,"Question":"I want to open files from webpage. For example when we try to download a torrent file it redirects us to utorrent app and it continues it work. I also want to open a local file somehow using OS software. Like a video file using pot player. Is there any possible solution for me ,like making a autorun in pc to run that . 
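A minimal sketch for the save-Counter-to-txt question above (Q_Id 75030277): file.write() only accepts strings, so the Counter has to be converted (or written out entry by entry) before writing.

from collections import Counter

count = Counter(["Tehran", "Tabriz", "Tehran"])  # illustrative data in place of the scraped cities
with open("example.txt", "w", encoding="utf-8") as f:
    f.write(str(count))  # simplest fix: write the Counter's repr
    # or one "city,count" line per entry:
    # for city_name, n in count.most_common():
    #     f.write(f"{city_name},{n}\n")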
Anything it may be please help me.\ud83d\ude14\ud83d\ude14\nI searched and found a solution to open a software using protocol, but in this way I cannot open a file in that software.","Title":"Cannot open a local file from webpage","Tags":"javascript,python,html,protocols","AnswerCount":2,"A_Id":75032911,"Answer":"the link acts as a magnet so your torrent application is opened maybe delete torrent for sometime till you finish the project, i know how to open image in local files in html but it will only be visible to you, you can do audio and video files also using ","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75031941,"CreationDate":"2023-01-06 14:03:00","Q_Score":0,"ViewCount":44,"Question":"I want to open files from webpage. For example when we try to download a torrent file it redirects us to utorrent app and it continues it work. I also want to open a local file somehow using OS software. Like a video file using pot player. Is there any possible solution for me ,like making a autorun in pc to run that . Anything it may be please help me.\ud83d\ude14\ud83d\ude14\nI searched and found a solution to open a software using protocol, but in this way I cannot open a file in that software.","Title":"Cannot open a local file from webpage","Tags":"javascript,python,html,protocols","AnswerCount":2,"A_Id":75031988,"Answer":"Opening a specific file in a specific software would usually depend on passing some URL parameters to the protocol-URL of the app (e.g., opening a file in VSCode would use a URL like vscode:\/\/\/Users\/me\/file.html, but this functionality would have to be explicitly handled by the app itself though, so the solution for each app would be different).\nOtherwise, if the app doesn't support opening a specific file itself through a URL, you'd have to use some scripting software (e.g. AppleScript if you're on macOS) to dynamically click\/open certain programs on a user's computer.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75032045,"CreationDate":"2023-01-06 14:13:13","Q_Score":1,"ViewCount":181,"Question":"In my conftest.py I added following code\nimport pytest\nfrom selenium import webdriver\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n \"--browser_name\", action=\"store\", default=\"chrome\"\n )\n\n\n@pytest.fixture(scope=\"class\")\ndef setup(request):\n browser_name = request.config.getoption(\"--browser_name\")\n if browser_name == \"chrome\":\n driver = webdriver.Chrome()\n elif browser_name == \"firefox\":\n driver = webdriver.Firefox()\n elif browser_name == \"Edge\":\n driver = webdriver.Edge()\n driver.get(\"https:\/\/rahulshettyacademy.com\/angularpractice\/\")\n driver.maximize_window()\n request.cls.driver = driver\n yield\n driver.close()\n\nI want to choose browser name from command line, but pytest does not recognize it when I run my tests by pytest --browser_name firefox. What might be the problem?\nERROR message:\nERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]\npytest: error: unrecognized arguments: --browser_name\n inifile: None\n rootdir: \/home\/*****\/PycharmProjects\/SeleniumFramework","Title":"Pytest does not recognize new command line option","Tags":"python,selenium,testing,pytest","AnswerCount":2,"A_Id":75591888,"Answer":"The problem is resolved - I didn't have Firefox installed on my Desktop. 
After installation and environment restart everything works fine.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75033511,"CreationDate":"2023-01-06 16:24:42","Q_Score":0,"ViewCount":53,"Question":"I'm using Apache JMeter to test a tiny Flask app. The app performs some sort of CPU-bound task.\nSurprisingly, running the Flask app with --without-threads gives noticeably better results than running with --with-threads. How could that be?\nSome of the Apache JMeter settings and the respective results are:\n\n\n\n\nNumber of Threads (users)\nLoop Count\nTime taken without threads (seconds)\nTime taken with threads (seconds)\n\n\n\n\n5\n1000\n14\n17\n\n\n10\n500\n14\n18\n\n\n5\n3000\n43\n51\n\n\n10\n1500\n43\n56\n\n\n\n\nI'd expect that, in the case of a purely CPU-bound task, the multi-threaded version should be at least as fast as the single-threaded one. Let me explain:\nIn terms of executing the actual CPU task, I would expect both versions to perform the same. However, in terms of how quickly the next thread can be served, I'd expect the multi-threaded version to have a slight edge, because the request has already been served by Flask and it's only stuck waiting for the CPU.\nIn the single-threaded version (i.e. --without-threads), only one request gets served at a time, while all the other requests are waiting to be served by Flask. In other words, there's a certain \"serving overhead\" that Flask introduces.\nIn an ideal world, Flask could serve a new request instantly. In other words, the overhead of Flask serving an HTTP request would be 0. In that case, I would expect the single-threaded and multi-threaded versions to be equally as fast, because it would make no difference whether the threads are waiting to be served by Flask or waiting to get access to the CPU.\nI'm guessing that my understanding is incorrect. Where am I wrong?","Title":"Flask --without-threads gives better performance than --with-threads on CPU-bound tasks?","Tags":"python,multithreading,flask,jmeter","AnswerCount":2,"A_Id":75055508,"Answer":"As @Thomas suggested, I ran some more tests using a production-ready server. My server of choice was gunicorn, because it's easy to set up with Python 3.9.\ngunicorn accepts two command-line arguments pertaining to this topic:\n\n--workers - \"The number of worker processes for handling requests.\" The default value is 1.\n--threads - \"The number of worker threads for handling requests.\" The default value is also 1.\n\nIncreasing --workers up to what my CPU can handle did improve performance. Increasing --threads didn't. Furthermore, running 8 workers with 1 thread gave better results than running 8 workers with 4 threads.\nSo, I tried simulating some I\/O by sleeping for half a second. Finally, increasing the number of threads did improve performance.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75034073,"CreationDate":"2023-01-06 17:14:58","Q_Score":0,"ViewCount":47,"Question":"I need a way to play video with sound in tkinter which will not use many RAM.\nI tried many libraries and all of them use many RAM (about 10gb for 5 minute video). The only thing I found is bvPlayer, it dont use much RAM but it creates video in other window, and I need it to be possible to create other objects such as labels here as well.","Title":"Video with sound in tkinter python","Tags":"python,python-3.x,tkinter,tkinter-canvas","AnswerCount":1,"A_Id":75034173,"Answer":"How big is your video? 
If it's 1GB, then the RAM taken by your program will be around 1GB (more like 1.1GB). The size of your video will be roughly the amount of RAM taken. There's no practical way to use less RAM.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75035567,"CreationDate":"2023-01-06 19:59:11","Q_Score":1,"ViewCount":130,"Question":"I have a lot of files on parquet, i need jut jo take 3 columns of this files. Some times one of this columns can have different names. I have this code but, this is spending more than 3 hours to run. This is not good. i'm using pyspark.\ndf_list = []\n# I iterate all paths from a df which contains all file paths that i need\n\nfor index, row in df.iterrows():\n path = adl_gen2_full_url(DATALAKE,FILESYSTEM,'\/APPLICATION\/'+row['Ingested_Path'])\n \n \n \n try:\n \n spark_df = spark.read.parquet(path)\n # Here i select just the columns that i need, one of this columns have different name\n spark_df = spark_df.select(row['Data_referencia'] \\\n ,'data_upload' \\\n ,'data_processamento' \\\n )\n\n \n spark_df = spark_df.withColumn(\"nome_arquivo\",F.lit(row['Nome_Arquivo']))\n spark_df = spark_df.distinct()\n \n \n # Each file that i read i append on a list \n df_list.append(spark_df)\n \n print(\"\\n\")\n print(\"Sucesso \", row['Nome_Arquivo'])\n print(\"\\n\")\n \n except requests.exceptions.RequestException as e:\n print(\"Connection refused\")\n print(path)\n pass\n \n \n except Exception as e:\n print(\"Internal error\", e)\n pass\n\n \n# In the end, i reduce that list in a unique dataframe \ndfs = reduce(DataFrame.unionAll, df_list)\n\n\ndfs = dfs.filter(F.col('data_refer\u00eancia') != 'NaT')","Title":"How can I read and filter many parquet files with different column names without spending many hours","Tags":"python,for-loop,pyspark,parquet","AnswerCount":1,"A_Id":75036070,"Answer":"1) You are trying to filter the final combined set of dataframes by data_refer\u00eancia. If that field (with value NaT) has some share in most (or all) collected dataframes - your code will accumulate a lot of redundant data appended to df_list with further passing to union. So it makes sense to filter out those records from each dataframe: spark_df = spark_df.filter(F.col('data_refer\u00eancia') != 'NaT') (instead of filtering at the end).\n2) Note that DataFrame.unionAll is just an alias (some say it's deprecated) for DataFrame.union. According to your comment # ... reduce that list in a unique dataframe - union doesn't make a unique dataframe, it just combines a dataframes. Potentially, you could have a duplicate rows between combined dataframes, so perhaps reduce(DataFrame.unionAll, df_list).distinct(). 
Another related aspect is executing spark_df.distinct() on each dataframe: if you can monitor\/debug that the difference in size of one such filtered dataframe before distinct() and after it is in average very negligible - then try to omit distincting every df and call dictinct once on the final combined dataset as mentioned above.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75037007,"CreationDate":"2023-01-06 23:26:09","Q_Score":1,"ViewCount":379,"Question":"I'm getting this error while trying to import the cv2 module on a anaconda virtual enviroment:\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"C:\\anaconda3\\envs\\venv-1\\lib\\site-packages\\cv2\\__init__.py\", line 181, in \n bootstrap()\n File \"C:\\anaconda3\\envs\\venv-1\\lib\\site-packages\\cv2\\__init__.py\", line 153, in bootstrap\n native_module = importlib.import_module(\"cv2\")\n File \"C:\\anaconda3\\envs\\venv-1\\lib\\importlib\\__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\nImportError: DLL load failed while importing cv2: N\u00e3o foi poss\u00edvel encontrar o m\u00f3dulo especificado.\n\nBut the opencv-python is on the package list when I run pip list. And when I run pip install opencv-python I got this message:\nRequirement already satisfied: opencv-python in c:\\anaconda3\\envs\\venv-1\\lib\\site-packages (4.7.0.68)\nRequirement already satisfied: numpy>=1.17.0 in c:\\anaconda3\\envs\\venv-1\\lib\\site-packages (from opencv-python) (1.23.5)\n\n.\nWhen I try to import on the base environment, it works fine","Title":"\"ImportError: DLL load failed while importing cv2\" but \"Requirement already satisfied\"","Tags":"python,opencv,anaconda","AnswerCount":1,"A_Id":75236912,"Answer":"... reinstall the cv python package with this argument --ignore-installed.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75037943,"CreationDate":"2023-01-07 03:31:13","Q_Score":2,"ViewCount":100,"Question":"I was working on a problem and wanted to sort things based on a condition. I have an array of words and a hash that has a count of how many times each word appears in the array of words. The problem calls for you to return the elements based on descending order of frequency of each word in the initial array of words (most frequent words appear first and least frequent appear last in the return array).\nHowever if two words appear the same amount of times, then sort them alphabetically (lexographically) in the return array. In JavaScript, I've been able to write it like this:\n`let frequentWords = Object.keys(hash).sort((a, b) => {\n if (hash[b] === hash[a]) {\n return a.localeCompare(b);\n } else {\n return hash[b] - hash[a];\n }\n});`\n\nI wanted to know how to write this but equivalently in Python with sorted(list, key=lambda x:(some function here)), but I'm not sure how to do so. I want to be able to sort based on multiple conditions for any problem that needs sorting in the future, but I'm not sure how to write a lambda function for key that can take in multiple conditions.\nI've seen this as a solution:\nfreq_words = sorted(hash, key=lambda x: (-hash[x],x))\nAnd I've tried reading the documentation, but I'm not sure how this works and unsure what to do if I need to sort based on three conditions. 
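A minimal sketch for the Parquet-loop question above (Q_Id 75035567), applying the answer's two points (filter each file early, de-duplicate once at the end); the column names mirror the post, and the active SparkSession spark is assumed, as in the original snippet.

from functools import reduce
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def load_one(path, date_col, file_name):
    # read only the needed columns and drop 'NaT' rows before any union
    return (spark.read.parquet(path)
            .select(date_col, "data_upload", "data_processamento")
            .withColumn("nome_arquivo", F.lit(file_name))
            .filter(F.col(date_col) != "NaT"))

# df_list = [load_one(p, d, n) for p, d, n in file_metadata]  # built from the metadata dataframe
# dfs = reduce(DataFrame.unionAll, df_list).distinct()        # a single distinct on the combined result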
Whereas this is easy to do in a JS callback function, I'm unsure of the Python syntax.\nI'm coding in Python 3 and cmp no longer exists, so I'm trying to figure out how to write this with just the key parameter.\nThanks!","Title":"JavaScript .sort() to Python sorted(): How to Convert Callback in JS to key in Python","Tags":"javascript,python,arrays,python-3.x,sorting","AnswerCount":2,"A_Id":75038035,"Answer":"Essentially what is happening is that the lambda function is constructing tuples from each key, value pair in hash, and sorting those.\nSo, e.g., if you have:\nhsh = {'a': 10, 'ba': 8, 'bo': 8, 'c': 12, 'do': 12, 'da': 12}\nthen you can think of\nsorted(hsh, key=lambda x: (-hsh[x], x))\nas equivalent to sorting:\n[(-10, 'a'), (-8, 'ba'), (-8, 'bo'), (-12, 'c'), (-12, 'do'), (-12, 'da')]\nTuples are compared element-wise, so -12 comes first, and it then compares 'c', 'do', and 'da', which get ordered 'c', 'da', 'do', so our first elements are [(-12, 'c'), (-12, 'da'), (-12, 'do')], i.e. the elements at index 3, 5 and 4 respectively; the final output therefore starts with the original keys at index 3, 5, and 4, namely ['c', 'da', 'do', ...].\nIf you wanted to add more conditions, e.g.:\nhsh = {'a': (10, 12), 'bo': (10, 14), 'ba': (10, 14)} (maybe the first number is counts on Wikipedia and the second is counts in some text corpus), and you want to order by the second number, then the first, then alphabetically, you can do:\nsorted(hsh, key=lambda x: (-hsh[x][1], -hsh[x][0], x)).\nHopefully that's enough that you can generalize from here!","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75038916,"CreationDate":"2023-01-07 08:05:39","Q_Score":2,"ViewCount":341,"Question":"One of the subdependencies of my project is transformers. This only started happening when I upgraded transformers from 4.16.0 to the latest version, 4.25.1. When I try to compile the project with pyinstaller I get the following error:\nTraceback (most recent call last):\n File \"main.py\", line 14, in \n ...\n File \"transformers\\utils\\import_utils.py\", line 36, in \n File \"transformers\\utils\\logging.py\", line 123, in get_logger\n File \"transformers\\utils\\logging.py\", line 86, in _configure_library_root_logger\nAttributeError: 'NoneType' object has no attribute 'flush'\n\nUpon further inspection I found the following function in logging.py. 
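A minimal runnable sketch of the tuple-key idea explained in the answer above (Q_Id 75037943); the sample data is illustrative.

word_counts = {"pear": 3, "apple": 3, "fig": 5, "kiwi": 1}

# two conditions: frequency descending, then alphabetical for ties
by_freq = sorted(word_counts, key=lambda w: (-word_counts[w], w))
print(by_freq)   # ['fig', 'apple', 'pear', 'kiwi']

# three conditions: frequency descending, then word length ascending, then alphabetical
by_three = sorted(word_counts, key=lambda w: (-word_counts[w], len(w), w))
print(by_three)  # ['fig', 'pear', 'apple', 'kiwi']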
It seems that sys.stderr is being set as NoneType for some reason.\ndef _configure_library_root_logger() -> None:\n\n global _default_handler\n\n with _lock:\n if _default_handler:\n _default_handler = logging.StreamHandler()\n _default_handler.flush = sys.stderr.flush # Error on this line\n ...\n\nThis is the file I'm using to compile the project:\n# -*- mode: python ; coding: utf-8 -*-\nfrom PyInstaller.utils.hooks import collect_data_files\nfrom PyInstaller.utils.hooks import copy_metadata\n\ndatas = []\ndatas += copy_metadata('tqdm')\ndatas += copy_metadata('numpy')\n\na = Analysis(['.main.py'],\n pathex=['.'],\n binaries=[],\n datas=datas,\n hiddenimports=[],\n hookspath=[],\n hooksconfig={},\n runtime_hooks=[],\n excludes=[],\n win_no_prefer_redirects=False,\n win_private_assemblies=False,\n cipher=None,\n noarchive=False)\npyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)\n\nexe = EXE(pyz,\n a.scripts, \n [],\n exclude_binaries=True,\n name='MyApp',\n debug=False,\n bootloader_ignore_signals=False,\n strip=False,\n upx=True,\n console=False,\n disable_windowed_traceback=False,\n target_arch=None,\n codesign_identity=None,\n entitlements_file=None,\n icon=\"icon.ico\")\ncoll = COLLECT(exe,\n a.binaries,\n a.zipfiles,\n a.datas, \n strip=False,\n upx=True,\n upx_exclude=[],\n name='main')\n\nI have tried setting the paths paramater: pathex=['.', 'path\/to\/env\/Lib\/site-packages']. I also tried including it as a hidden import: hiddenimports=['sys', 'sys.stderr']. But none of these seem to work. I know I can just downgrade, but I want to use the latest version.","Title":"Include sys.stderr in a Pyinstaller project","Tags":"python-3.x,pyinstaller,huggingface-transformers,sys","AnswerCount":2,"A_Id":76447791,"Answer":"If you use the win10 system, you could try to run it on the terminal with cmd python main.py.\nI ran the same code on Ubuntu and the same transformers version was successful. So I run it on the terminal (win10 shell), rather than on Pycharm enviroment.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75039970,"CreationDate":"2023-01-07 11:31:20","Q_Score":1,"ViewCount":47,"Question":"image = driver.find_elementby_css_selector('#Sva75c > div.ZuT88e > div > div.dFMRD > div.pxAole > div.tvh9oe.BIB1wf > c-wiz > div.nIWXKc.JgfpDb > div.OUZ5W > div.zjoqD > div.qdnLaf.isv-id.b0vFpe > div > a > img')\n\nAs a beginner, I tried to follow the instructions in the book, but I got an error. Help","Title":"AttributeError: 'WebDriver' object has no attribute 'find_element_by_css_selector'","Tags":"python,attributeerror","AnswerCount":1,"A_Id":75040020,"Answer":"It looks like a typo - maybe you meant to use find_element_by_css_selector?","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75040517,"CreationDate":"2023-01-07 12:58:03","Q_Score":0,"ViewCount":37,"Question":"How can I change my Python scripts and simultaneously running bash script, without the bash script picking up on the new changes?\nFor example I run bash script.sh whose content is\n\npython train1.py\npython train2.py\n\nWhile train1.py is running, I edit train2.py. This means that train2.py will use the old code not the new one.\nHow to set up such that train2.py uses the old code?\nRunning and editing on two different PCs is not really a solution since I need the GPU to debug for editting. 
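A minimal sketch for the Selenium error above (Q_Id 75039970): besides the missing underscore, recent Selenium 4 releases removed the find_element_by_* helpers, so current code uses find_element with a By locator. The selector is shortened here for illustration.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
image = driver.find_element(By.CSS_SELECTOR, "#Sva75c img")  # illustrative, shortened selector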
Merging them is also not a good idea because of the abstraction.\nSpecs:\nRemote Server\nUbuntu 20.04\nPython - Pytorch\nI imagine there is some git solution but have not found one.","Title":"Editing Python scripts while running bash script that contains the Python scripts","Tags":"python,bash,deep-learning,version-control","AnswerCount":2,"A_Id":75044583,"Answer":"Any changes done to train2.py, which are commited to disk before the bash script executes train2.py will be used by the script.\nThere is no avoiding that because contents of train2.py are not loaded into memory until the shell attempts to execute train2.py . That behaviour is the same regardless of the OS distro or release.\nKeep the \"master\" for train2.py in a sub-directory, then have the bash script remove train2.done at the start of the script, and touch train2.done when it has completed that step.\nThen have a routine that only \"copies\" train2.py from the subdir to the production dir if it sees the file train2.done is present, and wait for it if it missing.\nIf you are doing this constantly during repeated runs of the bash script, you probably want to have the script that copies train2.py touch train2.update before copying the file and remove that after successful copy of train2.py ... then have the bash script check for the presence of train2.update and if present, go into a loop for a short sleep, then check for the presence again, before continuing with the script ONLY if that file has been removed.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75041140,"CreationDate":"2023-01-07 14:36:52","Q_Score":1,"ViewCount":644,"Question":"I want to share my friend's telegram contact via bot. But my friend does not have a telegram username. I know his id number. How can I share it?\nI use PyTelegramBotApi. And my code is as follows:\nfrom telebot import TeleBot, types\n\n\nbot = TeleBot(token=TOKEN)\n\n@bot.message_handler(commands=['start'])\ndef start_bot(message):\n text = \"My friend contact\"\n markup = types.InlineKeyboardMarkup()\n markup.add(types.InlineKeyboardButton(text='Contact', url=\"tg:\/\/user?id=<1427022865>\"))\n bot.send_message(message.chat.id, text=text, reply_markup=markup)\n\n\nbot.polling()\n\nI read on the internet how to use url. But I make mistakes. Because: url=\"tg:\/\/user?id=<1427022865>\"\nHow to use it properly? 
Or is there another way?","Title":"How to find a user in Telegram by id?","Tags":"python,telegram,py-telegram-bot-api","AnswerCount":1,"A_Id":75211546,"Answer":"A bot can only send a message to a profile that has already activated the bot.\nSo make your friend \"start\" the bot, than the code should work","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75042153,"CreationDate":"2023-01-07 16:56:24","Q_Score":4,"ViewCount":1342,"Question":"I'm trying to load tokenizer and seq2seq model from pretrained models.\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"ozcangundes\/mt5-small-turkish-summarization\")\n\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"ozcangundes\/mt5-small-turkish-summarization\")\n\nBut I got this error.\nFile ~\/.local\/lib\/python3.8\/site-packages\/google\/protobuf\/descriptor.py:1028, in FileDescriptor.__new__(cls, name, package, options, serialized_options, serialized_pb, dependencies, public_dependencies, syntax, pool, create_key)\n 1026 raise RuntimeError('Please link in cpp generated lib for %s' % (name))\n 1027 elif serialized_pb:\n-> 1028 return _message.default_pool.AddSerializedFile(serialized_pb)\n 1029 else:\n 1030 return super(FileDescriptor, cls).__new__(cls)\n\n TypeError: Couldn't build proto file into descriptor pool: duplicate file name (sentencepiece_model.proto)\n\nI tried updating or downgrading the protobuf version. But I couldn't fix","Title":"Can't load from AutoTokenizer.from_pretrained - TypeError: duplicate file name (sentencepiece_model.proto)","Tags":"python,nlp,protocol-buffers,huggingface","AnswerCount":2,"A_Id":76322515,"Answer":"ilyakam,\nI ran into the same problem with mrm8488\/t5-base-finetuned-wikiSQL, also in a notebook, in a virtual environment. Your solution did (almost) work, I had to add the line 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=\"python\"'\nSo in case your solution does not work 100%, try adding the line in the notebook\nAndreas\npython 3.10 on Ubuntu 22.04.2 LTS","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75042549,"CreationDate":"2023-01-07 17:51:20","Q_Score":1,"ViewCount":24,"Question":"My Folder Structure looks like the following\nperson-package\n|- __init__.py\n|- person.py\n|- person_manager.py\nmain.py\n\nperson_manager.py imports person.py\nimport person as x\n\nThe main.py imports person_manager.py\nimport person_package.person_manager as x\n\nWhen running main.py I get:\nModuleNotFoundError: No module named 'person'\n\nI know, I could solve that by changing the import of person_manager.py to the following\nfrom . import person as x\n\nHowever, when running now person_manager.py directly, I get:\nImportError: attempted relative import with no known parent package\n\nSo I can't test person_manager.py on its own.\nWhat is the most elegant way to solve that?","Title":"Python Package with inter-Modul Dependencies","Tags":"python-import,python-module,python-packaging","AnswerCount":1,"A_Id":75044106,"Answer":"1. I recommend to always use absolute imports (unless strictly impossible).\n2. person-package is not a valid Python name since it contains a dash -, if I were you I would rename to person_package with an underscore _.\n3. Since person_manager.py is part of a Python importable package (i.e. 
it is in a directory containing a __init__.py file), then it should not be run as python person_package\/person_manager.py, but as python -m person_package.person_manager.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75042895,"CreationDate":"2023-01-07 18:42:08","Q_Score":0,"ViewCount":120,"Question":"I have a dataset for indoor localization. The dataset contains about 520 columns, one per wireless access point, each holding that access point's RSSI value. The problem is that each row holds the values of a single scan of the signals a device can capture, and the device can capture at most about 20 access points per scan (the signal ranges from 0 dBm when the device is near the access point down to -100 dBm when the device is far away but can still capture the signal). The remaining access points, which are out of the device's scanning range, have been filled in with a default value of positive 100. These values (100 dBm) occupy about 500 columns in each row, and which columns they fall in changes whenever the location changes. The question is: how should I deal with them?","Title":"how to deal with out of range values in dataset (RSSI values)","Tags":"python-3.x,machine-learning,deep-learning,localization,data-processing","AnswerCount":1,"A_Id":75044928,"Answer":"One option to deal with this issue is to impute (change) the values that are out of range with a more reasonable value. There are several approaches you could take to do this:\n\nReplacing the out-of-range values with the mean or median of the in-range values\nUsing linear interpolation to estimate the missing values based on the surrounding values\n\nThe choice will depend on the goal of your machine learning model and what you want to achieve.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75043134,"CreationDate":"2023-01-07 19:17:44","Q_Score":1,"ViewCount":61,"Question":"I always struggle with Enum, IntEnum, etc. and have to revisit the documentation several times each time I use this Python feature. I think it would be useful to have a clearer understanding of the internals.\nFor instance, why can't I use named arguments in this example?\nclass MD_Fields(IntEnum):\n ACCOUNT = (0, **identifier=True**)\n M_DESCRIPT = (4, False)\n\n def __new__(cls, value: int, identifier: bool):\n obj = int.__new__(cls, value)\n obj.identifier = identifier\n return obj\n\nAnd of course, the main question: how do I pretend an Enum is an int? How do I tell Python that \"SOME.ENUM\" should be handled as if it was a 5?","Title":"How would I implement my own IntEnum in Python if one wasn't provided oob?","Tags":"python,enums","AnswerCount":2,"A_Id":75043314,"Answer":"In my experience, enums in Python create overhead with very little value. Creating static class variables with the values you need is lightweight and semantically equivalent. And they maintain their type, which is what you needed.","Users Score":-2,"is_accepted":false,"Score":-0.1973753202,"Available Count":1},{"Q_Id":75043401,"CreationDate":"2023-01-07 20:00:56","Q_Score":1,"ViewCount":44,"Question":"So I want to know the solution to my question. 
I already tried this\nimport re\n\n\nusername = \"\"\npassword = \"\"\nfull_name = \"\"\nbirth_date = \"\"\nphone_number = \"\"\naddress = \"\"\n\n\nwith open(\"file2.txt\", \"r\") as f:\n contents = f.read()\n\nlines = contents.split(\"\\n\")\nfor line in lines:\n if \": \" in line:\n key, value = re.split(\":\\s*\", line)\n \n if key == \"Username\":\n username = value\n elif key == \"Password\":\n password = value\n elif key == \"Nama Lengkap\":\n full_name = value\n elif key == \"Tanggal Lahir\":\n birth_date = value\n elif key == \"Nomor HP\":\n phone_number = value\n elif key == \"Alamat\":\n address = value\n\nprint(username)\nprint(password)\nprint(full_name)\nprint(birth_date)\nprint(phone_number)\nprint(address)\n\nBut the output is not what i expected. The username and password value not appearing, here when i run it\n\n\nkjdaskd\n10-20-1000\n+218112301231\ndsajh\nPress any key to continue . . .\n\nIt just printing 2 line of blank or whitespace. How to solve this?\nThis is inside file2.txt\nUsername : dsadj\nPassword : 12345\nNama Lengkap: kjdaskd\nTanggal Lahir: 10-20-1000\nNomor HP: +218112301231\nAlamat: dsajh\n\nThis is the output that i expect:\ndsadj\n12345\nkjdaskd\n10-20-1000\n+218112301231\ndsajh\nPress any key to continue . . .","Title":"How to extract data from file and assign each data to each variables on python?","Tags":"python,file,extract","AnswerCount":2,"A_Id":75043504,"Answer":"In file2.txt you have \"Username \" and \"Password \" with space in the end before \":\". So \"Username \" != \"Username\" and \"Password \" != \"Password\"","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75044048,"CreationDate":"2023-01-07 21:57:12","Q_Score":0,"ViewCount":72,"Question":"I am trying to get a list of instagram followers for a daily statistical tracker. I was using InstaLoader and using the login credentials of a Instagram account, but for obvious reasons it keeps getting flagged for suspicious activity. I would like to completely remove logging into an account from the program but I have not found any alternatives","Title":"Is there a way to be able to get a list of someones instagram followers without using InstaLoader in python?","Tags":"python,instagram","AnswerCount":2,"A_Id":75044379,"Answer":"Its a bit more hardware intense, but you can try to webscrape your follower number using something like selenium. If you use selenium-stealth or undetected_chromedriver I don't think you will get flagged. Im not sure but I hope this helps","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75044048,"CreationDate":"2023-01-07 21:57:12","Q_Score":0,"ViewCount":72,"Question":"I am trying to get a list of instagram followers for a daily statistical tracker. I was using InstaLoader and using the login credentials of a Instagram account, but for obvious reasons it keeps getting flagged for suspicious activity. I would like to completely remove logging into an account from the program but I have not found any alternatives","Title":"Is there a way to be able to get a list of someones instagram followers without using InstaLoader in python?","Tags":"python,instagram","AnswerCount":2,"A_Id":75044601,"Answer":"For some reason instagram doesn't load all followers in web version - selenium may be not helpful for scraping. Check out chrome extension IG Exporter, it stores all followers to CSV. 
As for logging into\/out issue, check using Chrome profile with selenium - it will allow to leave user logged in (if it suits you in terms of security)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75044362,"CreationDate":"2023-01-07 22:58:30","Q_Score":4,"ViewCount":178,"Question":"Lately I was doing some ML stuff with Python using scikit-learn package.\nI wanted to use make_blobs() function so I began writing code for example:\nX, y = make_blobs(n_samples=m, centers=2, n_features=2, center_box=(80, 100))\n\nand of course this is fine.\nHowever while coding next lines my Intellisense within Visual Studio Code (I have only Microsoft addons for Python installed just to be clear) started to showing weird error on that line I mentioned before.\nHere's full error message:\n\nExpression with type \"tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any], ndarray[Any, dtype[float64]] | Any] | tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any]]\" cannot be assigned to target tuple\n\u00a0\u00a0Type \"tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any], ndarray[Any, dtype[float64]] | Any]\" is incompatible with target tuple\n\u00a0\u00a0\u00a0\u00a0 Element size mismatch; expected 2 but received 3\n\nPlease notice the last sentence. Element size mismatch where make_blobs() function returned 3 elements. What???\nI've checked scikit-learn documentation for make_blobs() function and I've read that on default make_blobs() returns only 2 elements not 3.\n3 elements can be returned when return_centers is set to True, where I have not set that to true as you can see in my example.\nOk, maybe I'll try to expect those 3 elements, so I modified that line\nX, y, _ = make_blobs(n_samples=m, centers=2, n_features=2, center_box=(80, 100))\n\nand well... this is the error message...\n\nExpression with type \"tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any], ndarray[Any, dtype[float64]] | Any] | tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any]]\" cannot be assigned to target tuple\n\u00a0\u00a0Type \"tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any]]\" is incompatible with target tuple\n\u00a0\u00a0\u00a0\u00a0Element size mismatch; expected 3 but received 2\n\nNow it returns 2 elements?!\nWhat I have tried next is:\n\nreinstall scikit-learn package. Same effect\npurging Python with all it files. Same effetc\nreinstalling Microsoft python extension for vscode. Same effect\n\nClearly it is some kind of intellisense issue, because running the code works fine, but what cause this behaviour?\nPython I used was 3.10.9 and 3.11.1.\nRunning on Windows 10 22H2 19045.2364.\nVSCode up-to-date.\nFor completeness scikit-learn version is 1.2.0","Title":"Weird scikit-learn Python intellisense error message","Tags":"python,visual-studio-code,scikit-learn,intellisense,pyright","AnswerCount":2,"A_Id":75044689,"Answer":"I think I found a solution to my problem.\nChecking my settings.json I found setting python.analysis.typeCheckingMode which was set to basic.\nI've changed the value of that setting to strict and then back to basic and it kinda worked? 
Because I no longer have that error message I mention.\nHowever @Alex Bochkarev answer is also correct.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75045356,"CreationDate":"2023-01-08 03:40:16","Q_Score":1,"ViewCount":33,"Question":"I have a pyautogui code that repeats a order to click on a webpage but sometimes that webpage freezes and does not load, how could i detect that.\n\nthe webpage in not on selenium and chrome has been opened by pyautogui too\n\nUpdate 1:\nI have just realised that the website will realise that i have been on the website for a long time so it will not load certain elements. This usually happens evry 20 minutes.","Title":"Pyautogui inactivity detection","Tags":"python,pyautogui","AnswerCount":1,"A_Id":75046354,"Answer":"I finally solved the problem by simply reloading the page every 20 minutes which solved the problem.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75045739,"CreationDate":"2023-01-08 05:41:53","Q_Score":3,"ViewCount":105,"Question":"I'm trying to run two functions in Python3 in parallel. They both take about 30ms, and unfortunately, after writing a testing script, I've found that the startup-time to get the processes running in the background takes over 100ms which is a pretty high overhead that I would like to avoid. Is anybody aware of a faster way to run functions concurrently in Python3 (having a lower overhead -- ideally in the ones or tens of milliseconds) where I can still get the results of their functions in the main process. Any guidance on this would be appreciated, and if there is any information that I can provide, please let me know.\nFor hardware information, I'm running this on a 2019 MacBook Pro with Python 3.10.9 with a 2GHz Quad-Core Intel Core i5.\nI've provided the script that I've written below as well as the output that I typically get from it.\nimport multiprocessing as mp\nimport time\nimport numpy as np\n\ndef t(s):\n return (time.perf_counter() - s) * 1000\n\ndef run0(s):\n print(f\"Time to reach run0: {t(s):.2f}ms\")\n\n time.sleep(0.03)\n return np.ones((1,4))\n\ndef run1(s):\n print(f\"Time to reach run1: {t(s):.2f}ms\")\n\n time.sleep(0.03)\n return np.zeros((1,5))\n\ndef main():\n s = time.perf_counter()\n\n with mp.Pool(processes=2) as p:\n print(f\"Time to init pool: {t(s):.2f}ms\")\n\n f0 = p.apply_async(run0, args=(time.perf_counter(),))\n f1 = p.apply_async(run1, args=(time.perf_counter(),))\n\n r0 = f0.get()\n r1 = f1.get()\n print(r0, r1)\n\n print(f\"Time to run end-to-end: {t(s):.2f}ms\")\n\nif __name__ == \"__main__\":\n main()\n\nBelow is the output that I typically get from running the above script\nTime to init pool: 33.14ms\nTime to reach run0: 198.50ms\nTime to reach run1: 212.06ms\n[[1. 1. 1. 1.]] [[0. 0. 0. 0. 0.]]\nTime to run end-to-end: 287.68ms\n\nNote: I'm looking to decrease the quantities on the 2nd and 3rd line by a factor of 10-20x smaller. I know that that is a lot, and if it is not possible, that is perfectly fine, but I was just wondering if anybody more knowledgable would know any methods. 
Thanks!","Title":"Faster Startup of Processes Python","Tags":"python,multithreading","AnswerCount":2,"A_Id":75046054,"Answer":"you can switch to python 3.11+ as it has a faster startup time (and faster everything), but as your application grows you will get even slower startup times compared to your toy example.\none option, is to run your application inside a linux docker image so you can use fork to avoid the spawn overhead, (though the COW overhead will still be visible)\nthe ultimate solution ? don't write your application in python (or any other language with a VM or a garbage collector), python multiprocessing isn't made for small fast tasks but for long running tasks, if you need that low startup time then write it in C or C++.\nif you have to use python then you should reuse your workers to \"absorb\" this startup time in a much larger task time.","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75046165,"CreationDate":"2023-01-08 07:35:55","Q_Score":1,"ViewCount":16,"Question":"numpy.zeros((100,100,3))\nWhat does number 3 denotes in this tuple?\nI got the output but didn't totally understand the tuple argument.","Title":"what does the third number in the tuple argument denotes in numpy.zeros((100,100,3)) function?","Tags":"python,numpy","AnswerCount":1,"A_Id":75046408,"Answer":"This piece of code will create a 3D array with 100 rows, 100 columns, and in 3 dimensions.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75048459,"CreationDate":"2023-01-08 14:19:19","Q_Score":1,"ViewCount":89,"Question":"def cocktail_sort(seq: list):\n for i in range (len(list) -1, 0, -1):\n swapped = False\n for j in range(i, 0, -1):\n if list[j] < list[j-1]:\n temp = list[j]\n list[j] = list[j-1]\n list[j-1] = temp\n swapped = True\n for j in range(i):\n if list[j] > list[j+1]:\n temp2 = list[j]\n list[j] = list[j+1]\n list[j+1] = temp2\n swapped = True\n if not swapped:\n return list\nlst = [15, 4, 7, 2, 1, 20] \nprint(cocktail_sort(lst))\n\nTypeError: object of type 'type' has no len()\nI tried to find a solution to the problem on YouTube and forums, I sat several times and thought about what to do. I'm just a beginner and I don't really understand.","Title":"How can I fix this code, for shaker sorting","Tags":"python,typeerror","AnswerCount":2,"A_Id":75048492,"Answer":"You have accidentally (at the top) put len(list) instead of len(seq)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75048688,"CreationDate":"2023-01-08 14:47:42","Q_Score":10,"ViewCount":6801,"Question":"I initiated pyspark in cmd and performed below to sharpen my skills.\nC:\\Users\\Administrator>SUCCESS: The process with PID 5328 (child process of PID 4476) has been terminated.\nSUCCESS: The process with PID 4476 (child process of PID 1092) has been terminated.\nSUCCESS: The process with PID 1092 (child process of PID 3952) has been terminated.\npyspark\nPython 3.11.1 (tags\/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\nSetting default log level to \"WARN\".\nTo adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n23\/01\/08 20:07:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable\nWelcome to\n ____ __\n \/ __\/__ ___ _____\/ \/__\n _\\ \\\/ _ \\\/ _ `\/ __\/ '_\/\n \/__ \/ .__\/\\_,_\/_\/ \/_\/\\_\\ version 3.3.1\n \/_\/\n\nUsing Python version 3.11.1 (tags\/v3.11.1:a7a450f, Dec 6 2022 19:58:39)\nSpark context Web UI available at http:\/\/Mohit:4040\nSpark context available as 'sc' (master = local[*], app id = local-1673188677388).\nSparkSession available as 'spark'.\n>>> 23\/01\/08 20:08:10 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped\na = sc.parallelize([1,2,3,4,5,6,7,8,9,10])\n\nWhen I execute a.take(1), I get \"_pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range\" error and I am unable to find why. When same is run on google colab, it doesn't throw any error. Below is what I get in console.\n>>> a.take(1)\nTraceback (most recent call last):\n File \"C:\\Spark\\python\\pyspark\\serializers.py\", line 458, in dumps\n return cloudpickle.dumps(obj, pickle_protocol)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 73, in dumps\n cp.dump(obj)\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 602, in dump\n return Pickler.dump(self, obj)\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 692, in reducer_override\n return self._function_reduce(obj)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 565, in _function_reduce\n return self._dynamic_function_reduce(obj)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 546, in _dynamic_function_reduce\n state = _function_getstate(func)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 157, in _function_getstate\n f_globals_ref = _extract_code_globals(func.__code__)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle.py\", line 334, in _extract_code_globals\n out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle.py\", line 334, in \n out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}\n ~~~~~^^^^^^^\nIndexError: tuple index out of range\nTraceback (most recent call last):\n File \"C:\\Spark\\python\\pyspark\\serializers.py\", line 458, in dumps\n return cloudpickle.dumps(obj, pickle_protocol)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 73, in dumps\n cp.dump(obj)\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 602, in dump\n return Pickler.dump(self, obj)\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 692, in reducer_override\n return self._function_reduce(obj)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 565, in _function_reduce\n return self._dynamic_function_reduce(obj)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 546, in _dynamic_function_reduce\n state = _function_getstate(func)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle_fast.py\", line 157, in _function_getstate\n f_globals_ref = _extract_code_globals(func.__code__)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle.py\", line 334, in _extract_code_globals\n out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\cloudpickle\\cloudpickle.py\", line 334, in \n out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}\n ~~~~~^^^^^^^\nIndexError: tuple index out of range\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"C:\\Spark\\python\\pyspark\\rdd.py\", line 1883, in take\n res = self.context.runJob(self, takeUpToNumLeft, p)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\context.py\", line 1486, in runJob\n sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)\n ^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\rdd.py\", line 3505, in _jrdd\n wrapped_func = _wrap_function(\n ^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\rdd.py\", line 3362, in _wrap_function\n pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\rdd.py\", line 3345, in _prepare_for_python_RDD\n pickled_command = ser.dumps(command)\n ^^^^^^^^^^^^^^^^^^\n File \"C:\\Spark\\python\\pyspark\\serializers.py\", line 468, in dumps\n raise pickle.PicklingError(msg)\n_pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range\n\nIt should provide [1] as an answer but instead throws this error. Is it because of incorrect installation?\nPackage used - spark-3.3.1-bin-hadoop3.tgz, Java(TM) SE Runtime Environment (build 1.8.0_351-b10), Python 3.11.1\nCan anyone help in troubleshooting this? Many thanks in advance.","Title":"PicklingError: Could not serialize object: IndexError: tuple index out of range","Tags":"python,apache-spark,pyspark,rdd","AnswerCount":2,"A_Id":75338739,"Answer":"As of 3\/2\/23, I had the same identical problem, and as indicated above, I uninstalled python 3.11 and installed version 3.10.9 and it's solved!","Users Score":7,"is_accepted":false,"Score":1.0,"Available Count":1},{"Q_Id":75050694,"CreationDate":"2023-01-08 19:31:25","Q_Score":0,"ViewCount":50,"Question":"So basically I am just starting out coding and I need to install numpy on my computer and I want to have in on VScode cause that is my editor of choice. I have noticed though that unless I am in the specific folder in which I made the virtual environment for numpy I can't access the numpy module. 
Is there a way that I can just have it on my computer as a whole, or will I have to create a virtual environment for every library for every project I do?","Title":"Numpy on VScode Windows 11","Tags":"python,numpy,visual-studio-code","AnswerCount":2,"A_Id":75050772,"Answer":"It's best practice to use a different virtual environment for every project because other projects may require different versions of a package.\nInstalling numpy is easy: just use pip to install numpy in the terminal in VS Code.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75052341,"CreationDate":"2023-01-09 00:52:20","Q_Score":0,"ViewCount":35,"Question":"I am making a Python script that converts real-life dates into dates for a fantasy world.\nHowever, I cannot represent years smaller than 1 or bigger than 9999 using the datetime module, and they may occasionally appear. Is there any way to represent these dates in Python?\nI tried using the datetime module but it doesn't support years smaller than 1 or bigger than 9999.","Title":"How can I work with dates with years smaller than 1 or bigger than 9999 in python?","Tags":"python,date,python-datetime","AnswerCount":1,"A_Id":75052359,"Answer":"Your case might be a unique one, as it is not a common goal for most people using the datetime module. You might have to do this manually; my suggestion is as follows:\n\nDefine the rule of transformation\/conversion from real-life date into fake date (between 1 - 9999)\nWrite an algorithm to perform the transformation.\n\nHope it helps.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75053735,"CreationDate":"2023-01-09 06:14:44","Q_Score":0,"ViewCount":26,"Question":"I want to duplicate my environment. I cannot use conda or other tools for personal reasons. I also don't want to use requirements.txt because it takes too long.\nHow can I solve this problem?\nI just copied and pasted the original environment folder: myvenv1 to myvenv2.\nBut if I activate myvenv2, it shows myvenv1's name, like this.\nroot: > source .\/myvenv2\/bin\/activate\n(myvenv1) root: >","Title":"how can I duplicate venv environment without requirements.txt?","Tags":"python-venv","AnswerCount":1,"A_Id":75056189,"Answer":"Using requirements.txt is probably the fastest and safest solution.\nOtherwise, I am not 100% sure but... if you copy a virtual environment, then I think that you need to edit the content of some files. For example the activate script which will be in charge of setting the prompt. But note that virtual environments are not designed to be copied, renamed or moved around. Things will break if you are not careful.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75056007,"CreationDate":"2023-01-09 10:34:36","Q_Score":1,"ViewCount":52,"Question":"I'm really sorry about the title being so unclear. Here is my issue. 
Say I have code like this:\nfrom tkinter import *\n\nroot = Tk()\n\ndef callback(str):\n print(str)\n root.destroy()\n\nbtn = Button(root,text='destroy',command=callback('Hello world!'))\n\nbtn.pack()\n\nroot.mainloop()\n\nHowever, when I executed this, an error popped up:\nTraceback (most recent call last):\n File \"\/Users\/abc\/Documents\/test\/test.py\", line 8, in \n btn = Button(root,text='destroy',command=callback('Hello world!'))\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/tkinter\/__init__.py\", line 2647, in __init__\n Widget.__init__(self, master, 'button', cnf, kw)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/tkinter\/__init__.py\", line 2569, in __init__\n self.tk.call(\n_tkinter.TclError: can't invoke \"button\" command: application has been destroyed\n\nI figured out that I can't write command=callback() with the parentheses; instead I should write command=callback in order to make the program function correctly. However, it seems that this only works when no argument is required. If I need to pass an argument, the argument(s) have to go inside a pair of parentheses [e.g. \"callback('hello world!')\"]. How can I pass an argument without writing a pair of parentheses? Thanks.","Title":"How to add an argument to a function in python Tkinter","Tags":"python,function,tkinter,arguments,parameter-passing","AnswerCount":1,"A_Id":75056343,"Answer":"As TheLizzard has pointed out, change command = callback('Hello world!') into command = lambda: callback('Hello world!'). Writing callback('Hello world!') directly calls the function immediately, while the Button is still being constructed; wrapping the call in a lambda instead hands command a callable that Tkinter can invoke later when the button is clicked, which is what the command option expects.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75057529,"CreationDate":"2023-01-09 12:52:28","Q_Score":0,"ViewCount":47,"Question":"I want to know, is there any way to debug user-written modules, if they are used in a Jupyter notebook, using VSCode?\nI want it to work so that if I create a breakpoint in my module and when I call some function from this module in Jupyter notebook, it will stop and allow me to see some useful data. 
Default VSCode debugger works this way only if breakpoint is set in the file that I run.\nI tried to set breakpoints (like function or red dot on the left from the code) in module, but calling function with it from notebook doesn't trigger it.","Title":"Is there any way to debug user-written modules if they are used in Jupyter notebook?","Tags":"python,visual-studio-code,debugging,jupyter","AnswerCount":2,"A_Id":75057564,"Answer":"You can add import pdb; pdb.set_trace() from python pdb to add a breakpoint in your code.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75058447,"CreationDate":"2023-01-09 14:06:37","Q_Score":0,"ViewCount":71,"Question":"I'm trying to run TensorFlow on a Linux machine (ubuntu).\nI've created a Conda env and installed the required packages but I think that there's something wrong with my versions:\nUpdated versions\n\ncudatoolkit 11.6.0 cudatoolkit 11.2.0\ncudnn 8.1.0.77\ntensorflow-gpu 2.4.1\npython 3.9.15\n\nRunning nvcc -V results\n\nnvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA\nCorporation Built on Mon_Oct_24_19:12:58_PDT_2022 Cuda compilation\ntools, release 12.0, V12.0.76 Build\ncuda_12.0.r12.0\/compiler.31968024_0\n\nand running python3 -c \"import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))\" returns an empty list.\nSeems that release 12.0 is the problem here, but I'm not sure and it's not my machine that I'm running on so I don't want to make big changes on my own.\nAlso, from TensorFlow's site, it seems that tensorflow-2.4.0 should run with python 3.6-3.8 and CUDA 11.0 but the versions I mentioned are the versions that the Conda choose for me.\nI know that similar questions have been asked before, but I couldn't find an answer that works for me.","Title":"Tensorflow 2.4.1 can't find GPUs","Tags":"python,linux,tensorflow,conda","AnswerCount":1,"A_Id":75071243,"Answer":"What finally worked for me was to create a new env from scratch using conda create --name tensorflow-gpu and then adding the other deps to it. Creating a new env and then installing tensorflow-gpu didn't worked.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75060885,"CreationDate":"2023-01-09 17:23:48","Q_Score":2,"ViewCount":572,"Question":"I'm currently using FastAPI with Gunicorn\/Uvicorn as my server engine. Inside FastAPI GET method I'm using SentenceTransformer model with GPU:\n# ...\n\nfrom sentence_transformers import SentenceTransformer\n\nencoding_model = SentenceTransformer(model_name, device='cuda')\n\n# ...\napp = FastAPI()\n\n@app.get(\"\/search\/\")\ndef encode(query):\n return encoding_model.encode(query).tolist()\n\n# ...\n\ndef main():\n uvicorn.run(app, host=\"127.0.0.1\", port=8000)\n\n\nif __name__ == \"__main__\":\n main()\n\nI'm using the following config for Gunicorn:\nTIMEOUT 0\nGRACEFUL_TIMEOUT 120\nKEEP_ALIVE 5\nWORKERS 10\n\nUvicorn has all default settings, and is started in docker container casually:\nCMD [\"uvicorn\", \"app.main:app\", \"--host\", \"0.0.0.0\", \"--port\", \"8000\"]\n\nSo, inside docker container I have 10 gunicorn workers, each using GPU.\nThe problem is the following:\nAfter some load my API fails with the following message:\ntorch.cuda.OutOfMemoryError: CUDA out of memory. \nTried to allocate 734.00 MiB \n(GPU 0; 15.74 GiB total capacity; \n11.44 GiB already allocated; \n189.56 MiB free; \n11.47 GiB reserved in total by PyTorch) \nIf reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF","Title":"GPU out of memory when FastAPI is used with SentenceTransformers inference","Tags":"python,pytorch,fastapi,sentence-transformers","AnswerCount":1,"A_Id":75469807,"Answer":"The problem was that there were 10 replicas of my transformer model on GPU, as @Chris mentioned above.\nMy solution was to use celery as RPC manager (rabbitmq+redis backend setup) and a separate container for GPU-bound computations, so now there is only one instance of my model on GPU, and no race between different processes' models.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75063069,"CreationDate":"2023-01-09 21:23:49","Q_Score":5,"ViewCount":534,"Question":"I have the following folder structure..where _app, and _infra are two different projects. At the root of the workspace however are two files, the workspace project file itself and a .gitignore file.\nEach project has it's own .vscode folder and own .env files.\nThe entire workspace is a single repository in git.\nmy_app_workspace\n - proj1_app\/\n - .venv\/ (virtual environment)\n - vscode\/\n - settings.json\n - launch.json\n - task.json\n - src\/\n - config.py\n - .env\n - .env_linux\n - proj1_infra\/\n - vscode\/\n - settings.json\n - launch.json\n - task.json\n - src\/\n - config.py\n - .env\n - .env_linux\n - .git_ignore\n - my_app_workspace.code-workspace\n\nthe code-workspace file looks like this:\n{\n \"folders\": [\n {\n \"path\": \".\/proj1_app\"\n },\n {\n \"path\": \".\/proj1_infra\"\n }\n ],\n}\n\nThis is all good, but i want to include the .git_ignore and my_app_workspace.code-workspace files also into the vscode editor so that i can easy make modifications to them.\nI know i can add another folder with '\"path\": \".\"', but this will add a folder with the project folders again - which seems redundant and not efficient.\nIs there a way to add individual files to the workspace? Is the problem here i should simply split these up into two different repository in git? this way each has it's own .gitignore file as opposed to what im doing now is the entire workspace is a git repository","Title":"VSCode multi-project workspace: how to add individual files such as the .gitignore at the root of the workspace?","Tags":"python,git,visual-studio-code","AnswerCount":3,"A_Id":75132370,"Answer":"Since you are working on two entirely different projects, it is always preferred to have separate .gitignore files. The reasons for the same are:-\n\nThe presence of different dependency and config files for both projects, and putting all of them into a single .gitignore file will only make it unnecessarily bulky.\nIf you are planning to host the projects on a remote platform such as Github, Heroku, etc., then you would be able to better manage the ignored files for both, and it would also be better from the contributors' perspective.\nThe VS Code workspace should always be separate. I have recently encountered some IDE errors due to which I had to clean up the workspace file, in that case, all cached changes(and unstaged) were lost. 
So it's better to maintain different workspace files.\n\nIn my opinion, you should go for separate repositories approach.\nAlso, VS Code workspace currently doesn't provide any feature of including separate files in a workspace, as it maintains an array of folders only.\nHope it helps!","Users Score":-1,"is_accepted":false,"Score":-0.0665680765,"Available Count":1},{"Q_Id":75064349,"CreationDate":"2023-01-10 00:42:45","Q_Score":0,"ViewCount":33,"Question":"We have a Django 4.0.4 site running. Since upgrading from Python 3.10->3.11 and Psycopg2 from 2.8.6->2.9.3\/5 and gunicorn 20.0.4->20.1.0 we've been getting random InterfaceError: cursor already closed errors on random parts of our codebase. Rarely the same line twice. Just kind of happens once every 5-10k runs. So it feels pretty rare, but does keep happening a few times every day. I've been assuming it's related to the ugprade, but it may be something else. I don't have a full grap on why the cursor would be disconnecting and where I should be looking to figure out the true issue.\nPsycopg version: 2.9.5 & 2.9.3\nPython version: 3.11\nPostgreSQL version: 12.11\nGunicorn\nThe site had been running for 1-2 years without this error. Now it happens a few times every day after a recent upgrade.","Title":"InterfaceError: cursor already closed","Tags":"django,gunicorn,psycopg2,python-3.11,postgres-12","AnswerCount":1,"A_Id":75160751,"Answer":"We are having the same 'heisenbug' in our system and are attempting to solve it (unsuccessfully so far) ...","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75064656,"CreationDate":"2023-01-10 01:48:16","Q_Score":1,"ViewCount":955,"Question":"I'm starting Pytorch and still trying to understand the basic concepts.\nIf I have a network n on the GPU that produces an output tensor out, can it be printed to stdout directly? Or should it first be moved to the cpu, or be detached from the graph before printing?\nTried several combinations below involving .cpu() and .detach()\nimport torch.nn as nn\nimport torch\n\n\nclass Net(nn.Module):\n def __init__(self):\n super().__init__()\n self.layers = nn.Sequential(\n nn.Linear(5, 10),\n nn.ReLU(),\n nn.Linear(10, 10),\n nn.ReLU(),\n nn.Linear(10, 3),\n )\n\n def forward(self, x):\n return self.layers(x)\n\n\ndevice = torch.device(\"cuda:0\") # assume its available\nx = torch.rand(10, 5).to(device)\nnet = Net().to(device)\n\n# Pretend we are in a training loop iteration\n\nout = net(x)\nprint(f\"The output is {out.max()}\")\nprint(f\"The output is {out.max().detach()}\")\nprint(f\"The output is {out.max().cpu()}\")\nprint(f\"The output is {out.max().cpu().detach()}\")\n\n# continue training iteration and repeat more iterations in training loop\n\nI got the same output for all 4 methods. Which is the correct way?","Title":"Printing Pytorch Tensor from gpu, or move to cpu and\/or detach?","Tags":"python,pytorch","AnswerCount":1,"A_Id":75065361,"Answer":"You should not get surprised by the same value output. It shouldn't change anything value.\ncpu() transfers the tensor to cpu. And detach() detaches the tensor from the computation graph so that autograd does not track it for future backpropagations.\nUsually .detach().cpu() is what I do, since it detaches it from the computation graph and then it moves to the cpu for further processing. .cpu().detach() is also fine but in this case autograd takes into account the cpu() but in the previous case .cpu() operation won't be tracked by autograd which is what we want. That's it. 
It's only these little things that are different - value would be same in all cases.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75065381,"CreationDate":"2023-01-10 04:09:16","Q_Score":3,"ViewCount":73,"Question":"I have a network share that contains around 300,000 files on it and it's constantly changing (files added and removed). I want to search the directory for specific text to find certain files within this directory. I have trimmed my method down about as far as I can, but it still takes over 6 minutes to complete. I can probably do it manually around the same time, depending on the number of strings I'm searching for. I want to multithread or multiprocess it, but I'm uncertain how this can be done on a single call: i.e.,\n\nfor filename in os.scandir(sourcedir).\nCan anyone please help me figure this out?\n\ndef scan(sourcedir:str, oset:set[str]|str) -> set[str]:\n found = set()\n for filename in os.scandir(sourcedir):\n for ordr in oset:\n if ordr in filename.name:\n print(filename.name)\n found.add(filename.name)\n break\n\n\nRESULTS FROM A TYPICAL CALL:\n516 function calls in 395.033 seconds\nOrdered by: standard name\nncalls tottime percall cumtime percall filename:lineno(function)\n6 0.000 0.000 0.003 0.000 :39(isdir)\n6 0.000 0.000 1.346 0.224 :94(samefile)\n12 0.000 0.000 0.001 0.000 :103(join)\n30 0.000 0.000 0.000 0.000 :150(splitdrive)\n6 0.000 0.000 0.000 0.000 :206(split)\n6 0.000 0.000 0.000 0.000 :240(basename)\n6 0.000 0.000 0.000 0.000 :35(_get_bothseps)\n1 0.000 0.000 0.000 0.000 :545(normpath)\n1 0.000 0.000 0.000 0.000 :577(abspath)\n1 0.000 0.000 395.033 395.033 :1()\n1 0.000 0.000 395.033 395.033 CopyOrders.py:31(main)\n1 389.826 389.826 389.976 389.976 CopyOrders.py:67(scan)\n1 0.000 0.000 5.056 5.056 CopyOrders.py:88(copy)\n1 0.000 0.000 0.000 0.000 getopt.py:56(getopt)\n6 0.000 0.000 0.001 0.000 shutil.py:170(_copyfileobj_readinto)\n6 0.000 0.000 1.346 0.224 shutil.py:202(_samefile)\n18 0.000 0.000 1.493 0.083 shutil.py:220(_stat)\n6 0.001 0.000 4.295 0.716 shutil.py:226(copyfile)\n6 0.000 0.000 0.756 0.126 shutil.py:290(copymode)\n6 0.000 0.000 5.054 0.842 shutil.py:405(copy)\n6 0.000 0.000 0.000 0.000 {built-in method _stat.S_IMODE}\n6 0.000 0.000 0.000 0.000 {built-in method _stat.S_ISDIR}\n6 0.000 0.000 0.000 0.000 {built-in method _stat.S_ISFIFO}\n1 0.000 0.000 395.033 395.033 {built-in method builtins.exec}\n6 0.000 0.000 0.000 0.000 {built-in method builtins.hasattr}\n73 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}\n38 0.000 0.000 0.000 0.000 {built-in method builtins.len}\n6 0.000 0.000 0.000 0.000 {built-in method builtins.min}\n14 0.003 0.000 0.003 0.000 {built-in method builtins.print}\n12 2.180 0.182 2.180 0.182 {built-in method io.open}\n1 0.000 0.000 0.000 0.000 {built-in method nt._getfullpathname}\n1 0.000 0.000 0.000 0.000 {built-in method nt._path_normpath}\n6 0.012 0.002 0.012 0.002 {built-in method nt.chmod}\n49 0.000 0.000 0.000 0.000 {built-in method nt.fspath}\n1 0.149 0.149 0.149 0.149 {built-in method nt.scandir}\n36 2.841 0.079 2.841 0.079 {built-in method nt.stat}\n12 0.000 0.000 0.000 0.000 {built-in method sys.audit}\n12 0.019 0.002 0.019 0.002 {method 'exit' of '_io._IOBase' objects}\n6 0.000 0.000 0.000 0.000 {method 'exit' of 'memoryview' objects}\n6 0.000 0.000 0.000 0.000 {method 'add' of 'set' objects}\n1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}\n36 0.000 0.000 0.000 0.000 {method 'find' of 'str' objects}\n12 0.001 0.000 0.001 0.000 {method 
'readinto' of '_io.BufferedReader' objects}\n30 0.000 0.000 0.000 0.000 {method 'replace' of 'str' objects}\n6 0.000 0.000 0.000 0.000 {method 'rstrip' of 'str' objects}\n6 0.000 0.000 0.000 0.000 {method 'write' of '_io.BufferedWriter' objects}","Title":"Search a very large directory for a file containing text in it's name","Tags":"python,python-3.x,multithreading,io,multiprocessing","AnswerCount":3,"A_Id":75368232,"Answer":"I ended up finding that no matter how many files I scan for, it doesn't take more than a shorter list of files (by much). So I think that the long period of time that it was taking to gather the list of existing files to compare against is akin to indexing the directory. I am using the tool for larger sets of files. For the onsies and twosies, I search manually. I suppose it is what it is.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75069062,"CreationDate":"2023-01-10 11:08:51","Q_Score":16,"ViewCount":32334,"Question":"I am getting below error when running mlflow app\n\nraise AttributeError(\"module {!r} has no attribute \" AttributeError:\nmodule 'numpy' has no attribute 'object'\n\nCan someone help me with this","Title":"module 'numpy' has no attribute 'object'","Tags":"python,python-3.x,numpy,kubernetes,dockerfile","AnswerCount":5,"A_Id":76322209,"Answer":"Instead of numpy.object:\nyou should use object or numpy.object_.","Users Score":1,"is_accepted":false,"Score":0.0399786803,"Available Count":1},{"Q_Id":75069304,"CreationDate":"2023-01-10 11:28:38","Q_Score":1,"ViewCount":154,"Question":"For a few weeks now, my Code-OSS has not been running my python scripts. I am running a Garuda linux distribution and have my Code- OSS on version 1.74.2-1 and my python on version 3.10.9-1. Whenever I try to run previously working .py files or make new ones, an error pops up: \"Extension activation failed, run the 'Developer: Toggle Developer Tools' command for more information.\" as well as showing the python extension loading indefinetly. After toggling the developer tools and trying to create a new python file, the following error is shown:\nmainThreadExtensionService.ts:111 Activating extension 'ms-python.python' failed: Extension 'ms-python.python' CANNOT use API proposal: telemetryLogger.\nIts package.json#enabledApiProposals-property declares: but NOT telemetryLogger.\n The missing proposal MUST be added and you must start in extension development mode or use the following command line switch: --enable-proposed-api ms-python.python.\n$onExtensionActivationError @ mainThreadExtensionService.ts:111\nlistWidget.ts:803 List with id 'list_id_2' was styled with a non-opaque background color. 
This will break sub-pixel antialiasing.\nstyle @ listWidget.ts:803\nlog.ts:316 ERR command 'python.createNewFile' not found: Error: command 'python.createNewFile' not found\n at b.k (vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1669:3069)\n at b.executeCommand (vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1669:2985)\n at process.processTicksAndRejections (node:internal\/process\/task_queues:96:5)\n at async vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1740:4663\nlog.ts:316 ERR command 'python.createNewFile' not found: Error: command 'python.createNewFile' not found\n at b.k (vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1669:3069)\n at b.executeCommand (vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1669:2985)\n at async vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1740:4663\nlog.ts:316 ERR command 'python.createNewFile' not found: Error: command 'python.createNewFile' not found\n at b.k (vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1669:3069)\n at b.executeCommand (vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1669:2985)\n at async vscode-file:\/\/vscode-app\/usr\/lib\/code\/out\/vs\/workbench\/workbench.desktop.main.js:1740:4663\n\nI have been struggling with this for a bit now and wanted to ask if anyone knows what to do? Thank you very much in advance and please excuse potentially bad formatting, as I do not know how to do that correctly yet.\nI tried reinstalling the Code-OSS package several times, as well as reinstalling python and the extension, however none of it worked.","Title":"CodeOSS not running python files","Tags":"python,visual-studio","AnswerCount":1,"A_Id":75076911,"Answer":"try running code-OSS like this:\n\/usr\/bin\/code-oss --enable-proposed-api ms-python.python\nif you're using a desktop enviroment just edit the launcher. Hope this helps","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75070586,"CreationDate":"2023-01-10 13:18:48","Q_Score":1,"ViewCount":35,"Question":"I\u2019m making my first django based backend server and I registered the urls for each app like\nurlpatterns = [\n path('admin\/', admin.site.urls),\n path('backfo\/', include('backfo.urls')),\n path('commun\/', include('commun.urls')),\n path('starts\/', include('starts.urls')),\n path('travleres\/', include('travleres.urls')),\n path('starts\/', include('starts.urls')),\n path('new\/', include('new.urls')),\n path('joint\/', include('joint.urls')),\n]\n\nThen I get this error in cmd saying ->\nModuleNotFoundError: No module named 'backfo.urls'\n\nI don\u2019t get what went wrong and if you need more code I\u2019ll post it on.","Title":"Why is app.urls not found?(Django Python)","Tags":"python,django,backend","AnswerCount":1,"A_Id":75070601,"Answer":"In settings.py add module in INSTALLED_APPS","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75071221,"CreationDate":"2023-01-10 14:12:53","Q_Score":2,"ViewCount":85,"Question":"I am trying to get the network interfaces created in EC2, and due to it I'm using the \"describe_network_interfaces\" function from boto3. 
The output of this function is a struct like this:\n{\n 'NetworkInterfaces': [\n {\n 'Association': {\n 'AllocationId': 'string',\n 'AssociationId': 'string',\n 'IpOwnerId': 'string',\n 'PublicDnsName': 'string',\n 'PublicIp': 'string',\n 'CustomerOwnedIp': 'string',\n 'CarrierIp': 'string'\n },\n 'Attachment': {\n 'AttachTime': datetime(2015, 1, 1),\n 'AttachmentId': 'string',\n 'DeleteOnTermination': True|False,\n 'DeviceIndex': 123,\n 'NetworkCardIndex': 123,\n 'InstanceId': 'string',\n 'InstanceOwnerId': 'string',\n 'Status': 'attaching'|'attached'|'detaching'|'detached',\n 'EnaSrdSpecification': {\n 'EnaSrdEnabled': True|False,\n 'EnaSrdUdpSpecification': {\n 'EnaSrdUdpEnabled': True|False\n }\n }\n },\n 'AvailabilityZone': 'string',\n 'Description': 'string',\n 'Groups': [\n {\n 'GroupName': 'string',\n 'GroupId': 'string'\n },\n ],\n 'InterfaceType': 'interface'|'natGateway'|'efa'|'trunk'|'load_balancer'|'network_load_balancer'|'vpc_endpoint'|'branch'|'transit_gateway'|'lambda'|'quicksight'|'global_accelerator_managed'|'api_gateway_managed'|'gateway_load_balancer'|'gateway_load_balancer_endpoint'|'iot_rules_managed'|'aws_codestar_connections_managed',\n 'Ipv6Addresses': [\n {\n 'Ipv6Address': 'string'\n },\n ],\n 'MacAddress': 'string',\n 'NetworkInterfaceId': 'string',\n 'OutpostArn': 'string',\n 'OwnerId': 'string',\n 'PrivateDnsName': 'string',\n 'PrivateIpAddress': 'string',\n 'PrivateIpAddresses': [\n {\n 'Association': {\n 'AllocationId': 'string',\n 'AssociationId': 'string',\n 'IpOwnerId': 'string',\n 'PublicDnsName': 'string',\n 'PublicIp': 'string',\n 'CustomerOwnedIp': 'string',\n 'CarrierIp': 'string'\n },\n 'Primary': True|False,\n 'PrivateDnsName': 'string',\n 'PrivateIpAddress': 'string'\n },\n ],\n 'Ipv4Prefixes': [\n {\n 'Ipv4Prefix': 'string'\n },\n ],\n 'Ipv6Prefixes': [\n {\n 'Ipv6Prefix': 'string'\n },\n ],\n 'RequesterId': 'string',\n 'RequesterManaged': True|False,\n 'SourceDestCheck': True|False,\n 'Status': 'available'|'associated'|'attaching'|'in-use'|'detaching',\n 'SubnetId': 'string',\n 'TagSet': [\n {\n 'Key': 'string',\n 'Value': 'string'\n },\n ],\n 'VpcId': 'string',\n 'DenyAllIgwTraffic': True|False,\n 'Ipv6Native': True|False,\n 'Ipv6Address': 'string'\n },\n ],\n 'NextToken': 'string'\n}\n\nHow can I get just the value from \"NetworkInterfaceId\" and put it in a list? I was trying extract this value using regex, but I don't have great skills on that yet. May you guys help me, please?","Title":"Extracting strings from a dict","Tags":"python,list,dictionary,pyspark","AnswerCount":3,"A_Id":75071311,"Answer":"mydict['NetworkInterfaces'][0]['NetworkInterfaceId']","Users Score":-1,"is_accepted":false,"Score":-0.0665680765,"Available Count":1},{"Q_Id":75072200,"CreationDate":"2023-01-10 15:28:52","Q_Score":1,"ViewCount":44,"Question":"I have the following dataframe\n type_x Range myValname\n0 g1 0.48 600\n1 g2 0.30 600\n2 g3 0.62 890\n3 g4 0.75 890\n\nI would like to get the following dataframe\n type_x Range myValname newCol\n0 g1 0.48 600 c1\n1 g2 0.30 600 c1\n2 g3 0.62 890 c2\n3 g4 0.75 890 c2\n\nThe significance of c1 and c2 are that if the myValname is same for a type_x value then both the value can be treated as same value. 
I want generalized code.\nMy thinking is to convert it into dictionary and map some values, but unable to get the outcome.\n df3['newCol'] = df3.groupby('myValname').rank()","Title":"How to create a column based on the value of the other columns","Tags":"python,pandas,dataframe","AnswerCount":3,"A_Id":75072223,"Answer":"You can add\/append a new column to the DataFrame based on the values of another column using df. assign() , df. apply() , np. where() functions and return a new Dataframe after adding a new column.","Users Score":-1,"is_accepted":false,"Score":-0.0665680765,"Available Count":1},{"Q_Id":75072968,"CreationDate":"2023-01-10 16:24:33","Q_Score":0,"ViewCount":36,"Question":"I'm going to create API with Django Rest Framework for an existing Django project. I would like to use models of existing app (product\/models.py) in 'API\/models.py'.\nWill that work smoothly as of using models across multiple apps using\nfrom product.models import item,...\nAfter importing I'll be creating serializers.py .\nCan anyone answer me whether this will work?","Title":"Using models between an App and API app in Django Rest framework","Tags":"python,django,django-models,django-rest-framework,active-model-serializers","AnswerCount":1,"A_Id":75080646,"Answer":"Yes, this works, we can import and use models of other apps in this way.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75073143,"CreationDate":"2023-01-10 16:38:47","Q_Score":1,"ViewCount":29,"Question":"I have a Lambda function which calls a Python script, which in turn gives results in json format.\nThere is a possibility for the results of the script to tend to infinity, and we end up with \"inf\" values in the json. When this happens, the script can run locally, but encounters an error when run in Lambda:\nbotocore.errorfactory.InvalidRequestContentException: An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Could not parse payload into json: Non-standard token 'Infinity': enable JsonParser.Feature.ALLOW_NON_NUMERIC_NUMBERS to allow at... \namong the results when run locally, I do see:\n0.008559854691925183, inf, inf, inf, 0.0011680872601948522\nIt seems to be telling me to enable this feature of the json parser.... I have no idea how to do that. I have checked around and I see people running into a similar json error in different contexts,but found no examples for AWS\/Python. Are there a couple lines I can add to my Lambda function to ignore the error?\nAlternatively, maybe \"inf\" should just be replaced by the largest possible float value, or something like that?\nThe plotting of these results is handled by a separate lambda function, so it would be enough to shuttle the results along and handle the infinite value there, but the mere presence of this non-numeric value seems to throw a wrench in the gears. How would you handle this? 
Thanks.","Title":"Handling infinite values in returned JSON in AWS Lambda Python function","Tags":"python,json,amazon-web-services,aws-lambda,data-analysis","AnswerCount":1,"A_Id":75073312,"Answer":"The answer for me was to use the Numpy function np.nan_to_num() on the data before returning it from the python script","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75073571,"CreationDate":"2023-01-10 17:14:27","Q_Score":2,"ViewCount":74,"Question":"I'm trying to output a CSV file from Python and make one of the entries a Google sheet formula:\nThis is what the formula var would look like:\n strLink = \"https:\/\/xxxxxxx.xxxxxx.com\/Interact\/Pages\/Content\/Document.aspx?id=\" + strId + \"&SearchId=0&utm_source=interact&utm_medium=general_search&utm_term=*\"\n strLinkCellFormula = \"=HYPERLINK(\\\"\" + strLink + \"\\\", \\\"\" + strTitle + \"\\\")\"\n\nand then for each row of the CSV I have this:\n strCSV = strCSV + strId + \", \" + \"\\\"\" + strTitle + \"\\\", \" + strAuthor + \", \" + strDate + \", \" + strStatus + \", \" + \"\\\"\" + strSection + \"\\\", \\\"\" + strLinkCellFormula +\"\\\"\\n\"\n\nWhich doesn't quite work, the hyperlink formula for Google sheets is like so:\n=HYPERLINK(url, title)\n\nand I can't seem to get that comma escaped. So in my Sheet I am getting an additional column with the title in it and obviously the formula does not work. Any help would be appreciated.","Title":"How to add a Google formula containing commas and quotes to a CSV file?","Tags":"python,csv,google-sheets,formula","AnswerCount":2,"A_Id":75074109,"Answer":"Try using ; as the formula argument separator. It should work the same.","Users Score":3,"is_accepted":false,"Score":0.2913126125,"Available Count":1},{"Q_Id":75073954,"CreationDate":"2023-01-10 17:49:42","Q_Score":0,"ViewCount":27,"Question":"I get an error of no such file or directory while am deploying a streamlit app to streamlit cloud share, what could be the problem, i have all files in the same directory and as a standalone the app works perfectly the error comes only when am deploying the app, i need help\nLoaded_model = pickle.load(open(filepath here\/savfile)","Title":"Open() function while deploying streamlit app","Tags":"python,file,machine-learning,streamlit,deploying","AnswerCount":1,"A_Id":75076720,"Answer":"The files in a Streamlit app are stored in a folder called app \u2013 as a result, you'll usually need to adjust the file paths when you're deploying an app that you were previously running locally. I'd recommend doing os.getcwd() to return the file path and confirm that your file paths are correct.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75074871,"CreationDate":"2023-01-10 19:22:49","Q_Score":0,"ViewCount":33,"Question":"I have the Sensor.py file in the same folder as my Main.py\nI get this message: ImportError: no module named 'Sensor'\nHow can I import .py files?\nI've tried all import options.","Title":"MicroPython Visual Studio Code \"import *.py\" in Pico","Tags":"python,visual-studio-2012,raspberry-pi-pico","AnswerCount":2,"A_Id":75086248,"Answer":"I need to save the sensor.py first, then upload it to the pico, and then I can play the main.py.\nThe sensor.py was missing on the pico. 
Simply clicking main.py on run is not enough.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75076691,"CreationDate":"2023-01-10 22:57:36","Q_Score":0,"ViewCount":73,"Question":"I can't find how to set up or change the Webhook through API.\nIs it possible to change it, set it when I am buying a number, or select one Webhook URL for all numbers?\nI tried to find this info in the documentation but there was helpful to me","Title":"WebHook in Twilio API","Tags":"twilio,webhooks,twilio-api,twilio-python","AnswerCount":2,"A_Id":75084856,"Answer":"You will have to log into your Twilio console.\nFrom the Develop tab, select Phone Numbers, then Manage > Active Numbers.\nYou can set the default Webhook (and back-up alternate Webhook) by clicking on the desired number and entering it under the respective Phone or (if available) SMS fields. You will likely have to set the Webhook (takes 2 seconds) for each phone number purchased as the default is the Twilio Demo URL (replies back with Hi or something)\nThe nature of a Webhook should allow any change in functionality to be done externally (on your end) through your Webhook script's functionality and thus dynamically changing the Webhook URL through the API on a case-by-case basis is discouraged and frankly should not be necessary. Someone may correct me if mistaken.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75077008,"CreationDate":"2023-01-10 23:49:28","Q_Score":1,"ViewCount":96,"Question":"Situation:\non the linux PC, the global package version installed: x.y.z\nIn the project directory, requirements.txt specifies a.b.c version for package. a.b.c > x.y.z\nthere is a bash script in the directory that sets up a virtual environment,\ninstalls the packages from requirements.txt in that virtual environment, and then runs\npytest in the virtual environment.\nthe virtual environment is set up like so in the bash script:\n#!\/usr\/bin\/env bash\nset -x\npython3 -m pip install --user virtualenv\npython3 -m virtualenv .env\nsource .env\/bin\/activate\n\nAfter this, pytest is run in the script which runs a bunch of test scripts. In one of these test scripts, a python script is called like so:\ncommand=[\"\/usr\/bin\/python\", \"\/path\/to\/script\/script.py\", ...(bunch of args)]\nprocess = subprocess.Popen(command)\n\nWhen I run the bash script, I get an output that specifies that the requirement for package==a.b.c is satisfied in the virtual environment:\nRequirement already satisfied: package==a.b.c in .\/.env\/lib\/python3.8\/site-packages (from -r requirements.txt (line 42)) (a.b.c)\n\nHowever, when I get to the point in the test script that calls the above python script.py, I get an error related to the global package version x.y.z unable to find a hardware device. 
This error is specific to version x.y.z and is fixed by using an updated version a.b.c as specified in requirements.txt and is what I thought we were using in the virtual environment.\nThe error references the global package as well:\n File \"\/path\/to\/script\/script.py\", line 116, in \n run()\n File \"\/path\/to\/script\/script.py\", line 82, in run\n device = scan_require_one(config='auto')\n File \"**\/home\/jenkins\/.local\/lib\/python3.8\/site-packages\/package\/driver.py**\", line 1353, in scan_require_one\n raise RuntimeError(\"no devices found\")\nRuntimeError: no devices found\nSystem information\n\nwhereas it should use the driver.py that's in .env (or so I thought).\nHow should I get the test script to use the package from the virtual environment?","Title":"pytest using global package despite using virtual env","Tags":"python,package,pytest,virtualenv","AnswerCount":2,"A_Id":75077041,"Answer":"Maybe you are trying to run the script from an IDE where the default path is selected. Try to run the program from cmd after virtual environment activation. Or select the venv as the preferred path of your IDE.","Users Score":-1,"is_accepted":false,"Score":-0.0996679946,"Available Count":1},{"Q_Id":75077851,"CreationDate":"2023-01-11 02:45:11","Q_Score":1,"ViewCount":81,"Question":"The code below is working but I need help in summarizing and sorting the output to create a more readable output.\naccount = []\namount = []\n\npmtReceived = []\nfor payment in payments_received: \n print (\" ({:>15}) <- {}\".format(payment['amount'] ,payment['from']))\n pmtReceived.append(payment['amount'])\n\n account.append(payment['to'])\n amount.append(payment['amount'])\n\n df = pd.DataFrame(list(zip(lst1, lst2)), columns =['To', 'Amount'])\n\nprint (df)\n\nCurrent Output:\n Account Amount \n0 BD001ABC... 180.0000000 \n1 ACC011XY... 120.0000000\n2 ACC011XY... 444.0000000 \n3 012ABC1A... 190.0000000\n4 012ABC1A... 50.0000000\n5 012ABC1A... 110.0000000\n6 012ABC1A... 400.0000000 \n7 XY123AYT... 0.4900000\n\nNeeded Output:\nBD001ABC (1 Transaction \/ 180.0)\nACC011XY (2 Transaction \/ 564.0)\n012ABC1A (4 Transaction \/ 750.0)\nXY123AYT (1 Transaction \/ 0.49)","Title":"How to create summary in list and create total in python","Tags":"python,python-3.x,pandas,dataframe","AnswerCount":2,"A_Id":75077990,"Answer":"try\npayments_received.sort()\nthen run your loop. if you need descending order try payments_received.sort(reverse=True)\ncheers","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75078011,"CreationDate":"2023-01-11 03:16:00","Q_Score":1,"ViewCount":288,"Question":"I had the following problem when importing aiplatform package in executor of Vertex AI workbench. This issue did not occur when I manually run the code in the Vertex AI workbench. However, the error came when I set a executor to run my code on schedule. Here is the error message:\nContextualVersionConflict: (google-cloud-bigquery 3.4.1 (\/opt\/conda\/lib\/python3.7\/site-packages), Requirement.parse('google-cloud-bigquery<3.0.0dev,>=1.15.0'), {'google-cloud-aiplatform'})\nI have tried upgrading the aiplatform package(!pip install google-cloud-aiplatform --upgrade), but still had the same issue. It seems that even we downgrade the google-cloud-bigquery package (!pip install google-cloud-bigquery==2.34.2 --user) to the version less than 3.0.0. The executor container would resume to be 3.4.1. 
which leads to the same issue again.\nHere is the script in the vertex AI workbench:\n!pip install google-cloud-aiplatform --upgrade\n!pip install google-cloud-bigquery==2.34.2 --user\n\nfrom google.cloud import bigquery\nprint('bigquery version:', bigquery.__version__)\n\nfrom google.cloud import aiplatform\nprint('aiplatform version:',aiplatform.__version__)\n\n\nHere is the error message:\nCollecting google-cloud-aiplatform\n Downloading google_cloud_aiplatform-1.20.0-py2.py3-none-any.whl (2.3 MB)\n[2K \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 2.3\/2.3 MB 9.6 MB\/s eta 0:00:00\n[?25hRequirement already satisfied: proto-plus<2.0.0dev,>=1.22.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-aiplatform) (1.22.1)\nCollecting google-cloud-bigquery<3.0.0dev,>=1.15.0\n Downloading google_cloud_bigquery-2.34.4-py2.py3-none-any.whl (206 kB)\n[2K \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 206.6\/206.6 kB 16.6 MB\/s eta 0:00:00\n[?25hCollecting packaging<22.0.0dev,>=14.3\n Downloading packaging-21.3-py3-none-any.whl (40 kB)\n[2K \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40.8\/40.8 kB 6.2 MB\/s eta 0:00:00\n[?25hRequirement already satisfied: protobuf!=3.20.0,!=3.20.1,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.19.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-aiplatform) (3.19.6)\nCollecting google-cloud-resource-manager<3.0.0dev,>=1.3.3\n Downloading google_cloud_resource_manager-1.7.0-py2.py3-none-any.whl (235 kB)\n[2K \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 235.3\/235.3 kB 13.1 MB\/s eta 0:00:00\n[?25hRequirement already satisfied: google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-aiplatform) (1.34.0)\nRequirement already satisfied: google-cloud-storage<3.0.0dev,>=1.32.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-aiplatform) (2.7.0)\nRequirement already satisfied: google-auth<3.0dev,>=1.25.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (2.15.0)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.56.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (1.57.0)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (2.28.1)\nRequirement already satisfied: 
grpcio<2.0dev,>=1.33.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (1.51.1)\nRequirement already satisfied: grpcio-status<2.0dev,>=1.33.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (1.48.2)\nRequirement already satisfied: python-dateutil<3.0dev,>=2.7.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery<3.0.0dev,>=1.15.0->google-cloud-aiplatform) (2.8.2)\nRequirement already satisfied: google-resumable-media<3.0dev,>=0.6.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery<3.0.0dev,>=1.15.0->google-cloud-aiplatform) (2.4.0)\nRequirement already satisfied: google-cloud-core<3.0.0dev,>=1.4.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery<3.0.0dev,>=1.15.0->google-cloud-aiplatform) (2.3.2)\nCollecting grpc-google-iam-v1<1.0.0dev,>=0.12.4\n Downloading grpc_google_iam_v1-0.12.4-py2.py3-none-any.whl (26 kB)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from packaging<22.0.0dev,>=14.3->google-cloud-aiplatform) (3.0.9)\nRequirement already satisfied: rsa<5,>=3.1.4 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (4.9)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (0.2.8)\nRequirement already satisfied: six>=1.9.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (1.16.0)\nRequirement already satisfied: cachetools<6.0,>=2.0.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (5.2.0)\nRequirement already satisfied: google-crc32c<2.0dev,>=1.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-resumable-media<3.0dev,>=0.6.0->google-cloud-bigquery<3.0.0dev,>=1.15.0->google-cloud-aiplatform) (1.5.0)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (1.26.13)\nRequirement already satisfied: certifi>=2017.4.17 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (2022.12.7)\nRequirement already satisfied: charset-normalizer<3,>=2 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (2.1.1)\nRequirement already satisfied: idna<4,>=2.5 in 
\/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (3.4)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in \/opt\/conda\/lib\/python3.7\/site-packages (from pyasn1-modules>=0.2.1->google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.32.0->google-cloud-aiplatform) (0.4.8)\nInstalling collected packages: packaging, grpc-google-iam-v1, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform\n Attempting uninstall: packaging\n Found existing installation: packaging 22.0\n Uninstalling packaging-22.0:\n Successfully uninstalled packaging-22.0\n Attempting uninstall: google-cloud-bigquery\n Found existing installation: google-cloud-bigquery 3.4.1\n Uninstalling google-cloud-bigquery-3.4.1:\n Successfully uninstalled google-cloud-bigquery-3.4.1\nSuccessfully installed google-cloud-aiplatform-1.20.0 google-cloud-bigquery-2.34.4 google-cloud-resource-manager-1.7.0 grpc-google-iam-v1-0.12.4 packaging-21.3\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https:\/\/pip.pypa.io\/warnings\/venv\nCollecting google-cloud-bigquery==2.34.2\n Downloading google_cloud_bigquery-2.34.2-py2.py3-none-any.whl (206 kB)\n[2K \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 206.1\/206.1 kB 4.7 MB\/s eta 0:00:00\n[?25hRequirement already satisfied: google-resumable-media<3.0dev,>=0.6.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (2.4.0)\nRequirement already satisfied: google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (1.34.0)\nRequirement already satisfied: grpcio<2.0dev,>=1.38.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (1.51.1)\nRequirement already satisfied: python-dateutil<3.0dev,>=2.7.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (2.8.2)\nRequirement already satisfied: packaging>=14.3 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (21.3)\nRequirement already satisfied: protobuf>=3.12.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (3.19.6)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (2.28.1)\nRequirement already satisfied: google-cloud-core<3.0.0dev,>=1.4.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (2.3.2)\nRequirement already satisfied: proto-plus>=1.15.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-cloud-bigquery==2.34.2) (1.22.1)\nRequirement already satisfied: google-auth<3.0dev,>=1.25.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (2.15.0)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.56.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from 
google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (1.57.0)\nRequirement already satisfied: grpcio-status<2.0dev,>=1.33.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (1.48.2)\nRequirement already satisfied: google-crc32c<2.0dev,>=1.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-resumable-media<3.0dev,>=0.6.0->google-cloud-bigquery==2.34.2) (1.5.0)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from packaging>=14.3->google-cloud-bigquery==2.34.2) (3.0.9)\nRequirement already satisfied: six>=1.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from python-dateutil<3.0dev,>=2.7.2->google-cloud-bigquery==2.34.2) (1.16.0)\nRequirement already satisfied: certifi>=2017.4.17 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-bigquery==2.34.2) (2022.12.7)\nRequirement already satisfied: idna<4,>=2.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-bigquery==2.34.2) (3.4)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-bigquery==2.34.2) (1.26.13)\nRequirement already satisfied: charset-normalizer<3,>=2 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-bigquery==2.34.2) (2.1.1)\nRequirement already satisfied: rsa<5,>=3.1.4 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (4.9)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (0.2.8)\nRequirement already satisfied: cachetools<6.0,>=2.0.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (5.2.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in \/opt\/conda\/lib\/python3.7\/site-packages (from pyasn1-modules>=0.2.1->google-auth<3.0dev,>=1.25.0->google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery==2.34.2) (0.4.8)\nInstalling collected packages: google-cloud-bigquery\nSuccessfully installed google-cloud-bigquery-2.34.2\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https:\/\/pip.pypa.io\/warnings\/venv\nbigquery version: 3.4.1\n---------------------------------------------------------------------------\nContextualVersionConflict Traceback (most recent call last)\n\/tmp\/ipykernel_296\/4222064428.py in \n 8 print('bigquery version:', bigquery.__version__)\n 9 \n---> 10 from google.cloud import aiplatform\n 11 print('aiplatform version:',aiplatform.__version__)\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform\/__init__.py in \n 22 \n 23 \n---> 24 from google.cloud.aiplatform import initializer\n 25 \n 26 from google.cloud.aiplatform.datasets import (\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform\/initializer.py in \n 29 from google.auth.exceptions import GoogleAuthError\n 30 \n---> 31 from google.cloud.aiplatform import compat\n 32 from google.cloud.aiplatform.constants import base as constants\n 33 from google.cloud.aiplatform import utils\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform\/compat\/__init__.py in \n 16 #\n 17 \n---> 18 from google.cloud.aiplatform.compat import services\n 19 from google.cloud.aiplatform.compat import types\n 20 \n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform\/compat\/services\/__init__.py in \n 16 #\n 17 \n---> 18 from google.cloud.aiplatform_v1beta1.services.dataset_service import (\n 19 client as dataset_service_client_v1beta1,\n 20 )\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform_v1beta1\/__init__.py in \n 15 #\n 16 \n---> 17 from .services.dataset_service import DatasetServiceClient\n 18 from .services.dataset_service import DatasetServiceAsyncClient\n 19 from .services.deployment_resource_pool_service import (\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform_v1beta1\/services\/dataset_service\/__init__.py in \n 14 # limitations under the License.\n 15 #\n---> 16 from .client import DatasetServiceClient\n 17 from .async_client import DatasetServiceAsyncClient\n 18 \n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform_v1beta1\/services\/dataset_service\/client.py in \n 55 from google.protobuf import struct_pb2 # type: ignore\n 56 from google.protobuf import timestamp_pb2 # type: ignore\n---> 57 from .transports.base import DatasetServiceTransport, DEFAULT_CLIENT_INFO\n 58 from .transports.grpc import DatasetServiceGrpcTransport\n 59 from .transports.grpc_asyncio import DatasetServiceGrpcAsyncIOTransport\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform_v1beta1\/services\/dataset_service\/transports\/__init__.py in \n 17 from typing import Dict, Type\n 18 \n---> 19 from .base import DatasetServiceTransport\n 20 from .grpc import DatasetServiceGrpcTransport\n 21 from .grpc_asyncio import DatasetServiceGrpcAsyncIOTransport\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/google\/cloud\/aiplatform_v1beta1\/services\/dataset_service\/transports\/base.py in \n 40 DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(\n 41 gapic_version=pkg_resources.get_distribution(\n---> 42 \"google-cloud-aiplatform\",\n 43 ).version,\n 44 )\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/pkg_resources\/__init__.py in get_distribution(dist)\n 476 dist = Requirement.parse(dist)\n 477 if isinstance(dist, Requirement):\n--> 478 dist = get_provider(dist)\n 479 if not isinstance(dist, Distribution):\n 480 raise TypeError(\"Expected string, Requirement, or Distribution\", 
dist)\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/pkg_resources\/__init__.py in get_provider(moduleOrReq)\n 352 \"\"\"Return an IResourceProvider for the named module or requirement\"\"\"\n 353 if isinstance(moduleOrReq, Requirement):\n--> 354 return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]\n 355 try:\n 356 module = sys.modules[moduleOrReq]\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/pkg_resources\/__init__.py in require(self, *requirements)\n 907 included, even if they were already activated in this working set.\n 908 \"\"\"\n--> 909 needed = self.resolve(parse_requirements(requirements))\n 910 \n 911 for dist in needed:\n\n\/opt\/conda\/lib\/python3.7\/site-packages\/pkg_resources\/__init__.py in resolve(self, requirements, env, installer, replace_conflicting, extras)\n 798 # Oops, the \"best\" so far conflicts with a dependency\n 799 dependent_req = required_by[req]\n--> 800 raise VersionConflict(dist, req).with_context(dependent_req)\n 801 \n 802 # push the new requirements onto the stack\n\nContextualVersionConflict: (google-cloud-bigquery 3.4.1 (\/opt\/conda\/lib\/python3.7\/site-packages), Requirement.parse('google-cloud-bigquery<3.0.0dev,>=1.15.0'), {'google-cloud-aiplatform'})","Title":"Contexual Version Conflict When Importing Aiplatform Package in Executor of Vertex AI Workbench","Tags":"python-3.x,google-cloud-platform,google-cloud-vertex-ai,gcp-ai-platform-notebook,gcp-ai-platform-training","AnswerCount":1,"A_Id":75079947,"Answer":"Notebooks Executor will use the container you specify when you create the execution. It does not use the current Notebook environment.\nI would create a custom container and install the required versions you need and specify this container when you create the Executions.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75078230,"CreationDate":"2023-01-11 04:04:25","Q_Score":0,"ViewCount":36,"Question":"and sorry in advance if this question is duplicated.\nIf I want to create all installed python package list, I can easily create with pip list or pip freeze.\nSome of them are installed by pip install, and some of them are installed by OS's package manager like apt, yum, pacman.\nCan I create the package list separately? if yes please let me know, thanks.","Title":"how to create python package list installed by pip only","Tags":"python,pip","AnswerCount":1,"A_Id":75078673,"Answer":"On Debian based systems using dpkg -l you can list all the packages installed, with version, architecture, description. Based on the package manager this would change, if it is a Red hat based system rpm -qa.\nTo get the list of packages that were installed by pip pip list","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75078506,"CreationDate":"2023-01-11 04:59:37","Q_Score":0,"ViewCount":27,"Question":"Title is all.\nI think, as result, they are same function.\nAre \"driver.refresh()\" and \"driver.get(current_url)\" the perfecly same?","Title":"Are \"driver.refresh()\" and \"driver.get(current_url)\" the same?","Tags":"python,selenium","AnswerCount":1,"A_Id":75083476,"Answer":"Refresh is same as browser refresh. It just reloads the same url. The get('url') option is equivalent of typing out an url in urlbar and pressing enter. 
Selenium waits for the website to be loaded before executing next script.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75079321,"CreationDate":"2023-01-11 06:59:05","Q_Score":2,"ViewCount":49,"Question":"I have rating of new flavors:\nflavors = {\"cinnamon\": 4, \"pumpkin\": 2.5, \"apple pie\": 3}\nprint(\"New flavors:\")\nprint(flavors)\n\nfor i in flavors:\n if flavors[i] >= 3:\n flavors[i] = True\n else:\n flavors[i] = False\n\nprint(\"Results:\")\nprint(flavors)\n\nI want get list with winning flavors:\nfor i in flavors:\n if flavors[i] == False:\n flavors.pop(i)\n\nprint(\"Release:\")\nprint(flavors.keys())\n\nCan I get release list without .pop() or avoid RuntimeError?","Title":"How to avoid \"RuntimeError: dictionary changed size during iteration\" or get result without .pop()","Tags":"python,python-3.x,dictionary,for-loop","AnswerCount":2,"A_Id":75079597,"Answer":"You can't change the keys of the dictionary while iterating over it.\nA simple fix to you code is to use for i in list(flavors): instead of for i in flavors: (in the second loop that you remove things). Basically you create a list(new object) of keys, and instead of dictionary, you iterate through this list. Now it's OK to manipulate the original dict.\nAdditional Note: In previous versions of Python(like 3.7.4 for instance), It allowed to change the keys as long as the size of the dictionary is not changed. For example while you iterate through the dictionary, you pop something and then before the next iteration, you add another key. In general that was buggy and now in my current interpreter (3.10.6) it doesn't allow you to change keys.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75079430,"CreationDate":"2023-01-11 07:11:31","Q_Score":0,"ViewCount":35,"Question":"I am trying to fill a PDF with Arabic and English. English is fine but Arabic is not visible after writing. It's showing on click.\nI am translating the English into Arabic text using Google API.\nI have also tried appearance functionality, fillpdf, PyPDF2 and pdfrw libraries.\nNo luck.","Title":"PyPDF2 fill Arabic is not visible while English is fine","Tags":"python,pypdf","AnswerCount":1,"A_Id":75115400,"Answer":"The issue was resolved after adding the proper font for Arabic text. The Ubuntu Doc viewer was not showing the text but in Adobe reader its just perfect.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75081154,"CreationDate":"2023-01-11 09:54:19","Q_Score":0,"ViewCount":50,"Question":"I tried a 3D project with ursina engine.\nI put a login part for the first page. Inside it, there is an email field with InputField ursina. But @ character cannot be entered. If anybody has an idea about it? Thanks.\nI just put the default value, which is written @, in the input field. But I want the user to enter their email just by typing @ character.","Title":"Ursina inputfield @ character cannot be typed","Tags":"python,input-field,ursina","AnswerCount":1,"A_Id":75084459,"Answer":"This is a bug, but have been fixed now. Update ursina with pip install https:\/\/github.com\/pokepetter\/ursina\/archive\/master.zip --upgrade and it should work.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75081932,"CreationDate":"2023-01-11 10:55:40","Q_Score":1,"ViewCount":637,"Question":"I'm having a problem with Django response class StreamingHttpResponse. 
When I return a generator as response using StreamingHttpResponse and make a request I excepted to retrieve each data block one by one, instead of that i retrieve the full data at once when the generator loop has finished.\nMy Django View:\ndef gen_message(msg):\n return '\\ndata: {}\\n\\n'.format(msg)\n\n\ndef iterator():\n for i in range(100):\n yield gen_message('iteration ' + str(i))\n print(i)\n time.sleep(0.1)\n\nclass test_stream(APIView):\n def post(self, request):\n stream = iterator()\n response = StreamingHttpResponse(stream, status=200, content_type='text\/event-stream')\n response['Cache-Control'] = 'no-cache'\n return response\n\nAnd I make the request like that:\nr = requests.post('https:\/\/******\/test_stream\/', stream=True)\n\n\nfor line in r.iter_lines():\n\n if line:\n decoded_line = line.decode('utf-8')\n print(decoded_line)\n\nWhen I see the output of the Django server I can see the print every 0.1 seconds. But the response in the second code only shows when the for loop is finished.\nAm I misunderstanding the StreamingHttpResponse class or the request class or is there another problem?\nThanks:)","Title":"Problem with Django StreamingHttpResponse","Tags":"python,django,python-requests,streaming,streaminghttpresponse","AnswerCount":1,"A_Id":75082700,"Answer":"You will need to ensure that there is no middleware or webserver in between that first \"buffers\" the response.\nIn Nginx that is possible if the proxy_buffering setting is turned on. You will thus need to disable this with:\nproxy_buffering off;","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75082069,"CreationDate":"2023-01-11 11:07:45","Q_Score":4,"ViewCount":278,"Question":"TL;DR I'm trying to run Python embedded in OpenFOAM in C++, but including some Python modules is causing OpenFOAM to fail even though the Python script seems to work fine by itself. I've tried appending the module locations to the Python path but it hasn't worked.\nI have a long Python script I wrote to couple additional capabilities into an OpenFOAM application. I want to embed this Python script into the OpenFOAM C++ code so that it executes within the iteration loops of the application.\nThis is what I'm including in OpenFOAM C++ to initialise Python, embed my module and functions, and run it with two arguments (zone name and temperature) passed from OpenFOAM:\n\/\/ Set PYTHONPATH TO working directory\nsetenv(\"PYTHONPATH\",\"..\/\",1); \n\nPy_Initialize();\n\nPyRun_SimpleString(\"import sys; sys.path.append('\/some\/path')\\n\");\n\nPyObject* myModule = PyImport_ImportModule(\"couplingScript\");\n\nPyObject* myFunction = PyObject_GetAttrString(myModule,\"couplingFunction\");\n\nPyObject* pZone = PyUnicode_FromString(zonename.c_str());\n\nPyObject* pT = PyFloat_FromDouble(T);\n\nPyObject* pArgs = PyTuple_Pack(2,pZone, pT);\n\nPyObject* myResult = PyObject_CallObject(myFunction, pArgs);\n\nPyErr_Print();\n\nPy_Finalize();\n\nThis is a simplified version of my couplingScript.py file, which runs fine from within OpenFOAM:\nimport sys\n\ndef couplingFunction(zone, T):\n print(\"Zone = \" + zone)\n print(\"Temperature = \" + str(T))\n\ncouplingFunction('Zone 1', 273)\n\nOUTPUT:\n\nZone = Zone 1\nTemperature = 273\n\nHowever, I need to include two additional modules - numpy and pandas - to make the full couplingScript file run. 
This is a simplified example:\nimport sys\nimport numpy as np\nimport pandas as pd\n\ndef couplingFunction(zone, T):\n print(\"Zone = \" + zone)\n print(\"Temperature = \" + str(T))\n numpy_check = np.zeros(2)\n pandas_check = pd.DataFrame()\n print(numpy_check)\n print(pandas_check)\n\ncouplingFunction('Zone 1', 273)\n\nWith these additions, couplingScript.py itself runs fine BUT the embedded Python fails.\n OUTPUT running couplingScript.py from terminal via Python:\n\n Zone = Zone 1\n Temperature = 273\n Empty DataFrame\n Columns: []\n Index: []\n [0. 0.]\n\n OUTPUT running couplingScript.py from embedded OpenFOAM application:\n\n #0 Foam::error::printStack(Foam::Ostream&) at ??:?\n #1 Foam::sigFpe::sigHandler(int) at ??:?\n #2 ? in \/lib\/x86_64-linux-gnu\/libc.so.6\n #3 ? at ??:?\n #4 PyNumber_Multiply in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #5 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #6 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #7 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #8 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #9 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #10 PyEval_EvalCodeEx in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #11 PyEval_EvalCode in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #12 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #13 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #14 PyVectorcall_Call in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #15 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #16 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #17 _PyFunction_Vectorcall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #18 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #19 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #20 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #21 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #22 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #23 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #24 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #25 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #26 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #27 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #28 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #29 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #30 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #31 _PyObject_CallMethodIdObjArgs in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #32 PyImport_ImportModuleLevelObject in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #33 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #34 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #35 _PyObject_MakeTpCall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #36 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #37 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #38 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #39 PyEval_EvalCodeEx in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #40 PyEval_EvalCode in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #41 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #42 ? 
in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #43 PyVectorcall_Call in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #44 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #45 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #46 _PyFunction_Vectorcall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #47 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #48 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #49 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #50 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #51 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #52 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #53 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #54 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #55 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #56 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #57 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #58 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #59 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #60 _PyObject_CallMethodIdObjArgs in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #61 PyImport_ImportModuleLevelObject in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #62 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #63 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #64 PyEval_EvalCodeEx in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #65 PyEval_EvalCode in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #66 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #67 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #68 PyVectorcall_Call in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #69 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #70 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #71 _PyFunction_Vectorcall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #72 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #73 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #74 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #75 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #76 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #77 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #78 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #79 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #80 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #81 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #82 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #83 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #84 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #85 _PyObject_CallMethodIdObjArgs in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #86 PyImport_ImportModuleLevelObject in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #87 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #88 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #89 _PyObject_MakeTpCall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #90 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #91 PyObject_CallFunction in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #92 PyImport_Import in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #93 PyImport_ImportModule in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #94 ? 
in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #95 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #96 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #97 __libc_start_main in \/lib\/x86_64-linux-gnu\/libc.so.6\n #98 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n Floating point exception (core dumped)\n\nThings I have tried:\n\nInstalling numpy\/pandas via pip gives me:\n\nRequirement already satisfied: numpy in \/usr\/lib\/python3\/dist-packages (1.17.4)\nRequirement already satisfied: pandas in \/usr\/lib\/python3\/dist-packages (0.25.3)\n\n\nAppending this to the python path via sys.path.append('\/usr\/lib\/python3\/dist-packages') directly after import sys in couplingScript.py:\n\nThis gives me the same error as before when run via OpenFOAM\n\n\nImporting directly from the package location using:\n\nimport usr.lib.python3.dist-packages.numpy as np\nimport usr.lib.python3.dist-packages.pandas as pd\n\n\nI also tried:\n\nfrom .usr.lib.python3.dist-packages import numpy as np\nfrom .usr.lib.python3.dist-packages import pandas as pd\n\n\n\nThese last two give a slightly shorter error:\n #0 Foam::error::printStack(Foam::Ostream&) at ??:?\n #1 Foam::sigSegv::sigHandler(int) at ??:?\n #2 ? in \/lib\/x86_64-linux-gnu\/libc.so.6\n #3 PyObject_GetAttrString in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #4 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #5 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #6 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #7 __libc_start_main in \/lib\/x86_64-linux-gnu\/libc.so.6\n #8 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n Segmentation fault (core dumped)`\n\nI have also run print(sys.path) for both runs, which gives:\nPython run from terminal:\n ['\/home\/abc123\/Coupled_model\/GF', '\/usr\/lib\/python38.zip', '\/usr\/lib\/python3.8', '\/usr\/lib\/python3.8\/lib-dynload', '\/home\/abc123\/.local\/lib\/python3.8\/site-packages', '\/usr\/local\/lib\/python3.8\/dist-packages', '\/usr\/lib\/python3\/dist-packages']\n\nEmbedded python in OpenFOAM:\n ['\/home\/abc123\/Coupled_model\/GF', '\/usr\/lib\/python38.zip', '\/usr\/lib\/python3.8', '\/usr\/lib\/python3.8\/lib-dynload', '\/home\/abc123\/.local\/lib\/python3.8\/site-packages', '\/usr\/local\/lib\/python3.8\/dist-packages', '\/usr\/lib\/python3\/dist-packages']\n\nIf they're the same, why isn't the embedded python recognising numpy and pandas???","Title":"Error loading Python modules when embedding Python in C++ OpenFOAM","Tags":"python,c++,pandas,numpy,openfoam","AnswerCount":3,"A_Id":76405440,"Answer":"ChatGPT helped me out, since I had exactly the same problem as you had:\nThe issue you're experiencing could be related to a known problem when using OpenFOAM with embedded Python and NumPy. 
The error you mentioned in the link indicates that the Floating Point Unit (FPU) settings are causing conflicts between the OpenFOAM library and the Python interpreter.\nYou can avoid that by typing\nunset FOAM_SIGFPE\nin the command line.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75082069,"CreationDate":"2023-01-11 11:07:45","Q_Score":4,"ViewCount":278,"Question":"TL;DR I'm trying to run Python embedded in OpenFOAM in C++, but including some Python modules is causing OpenFOAM to fail even though the Python script seems to work fine by itself. I've tried appending the module locations to the Python path but it hasn't worked.\nI have a long Python script I wrote to couple additional capabilities into an OpenFOAM application. I want to embed this Python script into the OpenFOAM C++ code so that it executes within the iteration loops of the application.\nThis is what I'm including in OpenFOAM C++ to initialise Python, embed my module and functions, and run it with two arguments (zone name and temperature) passed from OpenFOAM:\n\/\/ Set PYTHONPATH TO working directory\nsetenv(\"PYTHONPATH\",\"..\/\",1); \n\nPy_Initialize();\n\nPyRun_SimpleString(\"import sys; sys.path.append('\/some\/path')\\n\");\n\nPyObject* myModule = PyImport_ImportModule(\"couplingScript\");\n\nPyObject* myFunction = PyObject_GetAttrString(myModule,\"couplingFunction\");\n\nPyObject* pZone = PyUnicode_FromString(zonename.c_str());\n\nPyObject* pT = PyFloat_FromDouble(T);\n\nPyObject* pArgs = PyTuple_Pack(2,pZone, pT);\n\nPyObject* myResult = PyObject_CallObject(myFunction, pArgs);\n\nPyErr_Print();\n\nPy_Finalize();\n\nThis is a simplified version of my couplingScript.py file, which runs fine from within OpenFOAM:\nimport sys\n\ndef couplingFunction(zone, T):\n print(\"Zone = \" + zone)\n print(\"Temperature = \" + str(T))\n\ncouplingFunction('Zone 1', 273)\n\nOUTPUT:\n\nZone = Zone 1\nTemperature = 273\n\nHowever, I need to include two additional modules - numpy and pandas - to make the full couplingScript file run. This is a simplified example:\nimport sys\nimport numpy as np\nimport pandas as pd\n\ndef couplingFunction(zone, T):\n print(\"Zone = \" + zone)\n print(\"Temperature = \" + str(T))\n numpy_check = np.zeros(2)\n pandas_check = pd.DataFrame()\n print(numpy_check)\n print(pandas_check)\n\ncouplingFunction('Zone 1', 273)\n\nWith these additions, couplingScript.py itself runs fine BUT the embedded Python fails.\n OUTPUT running couplingScript.py from terminal via Python:\n\n Zone = Zone 1\n Temperature = 273\n Empty DataFrame\n Columns: []\n Index: []\n [0. 0.]\n\n OUTPUT running couplingScript.py from embedded OpenFOAM application:\n\n #0 Foam::error::printStack(Foam::Ostream&) at ??:?\n #1 Foam::sigFpe::sigHandler(int) at ??:?\n #2 ? in \/lib\/x86_64-linux-gnu\/libc.so.6\n #3 ? at ??:?\n #4 PyNumber_Multiply in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #5 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #6 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #7 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #8 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #9 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #10 PyEval_EvalCodeEx in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #11 PyEval_EvalCode in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #12 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #13 ? 
in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #14 PyVectorcall_Call in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #15 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #16 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #17 _PyFunction_Vectorcall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #18 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #19 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #20 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #21 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #22 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #23 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #24 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #25 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #26 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #27 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #28 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #29 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #30 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #31 _PyObject_CallMethodIdObjArgs in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #32 PyImport_ImportModuleLevelObject in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #33 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #34 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #35 _PyObject_MakeTpCall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #36 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #37 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #38 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #39 PyEval_EvalCodeEx in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #40 PyEval_EvalCode in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #41 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #42 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #43 PyVectorcall_Call in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #44 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #45 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #46 _PyFunction_Vectorcall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #47 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #48 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #49 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #50 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #51 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #52 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #53 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #54 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #55 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #56 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #57 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #58 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #59 ? 
in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #60 _PyObject_CallMethodIdObjArgs in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #61 PyImport_ImportModuleLevelObject in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #62 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #63 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #64 PyEval_EvalCodeEx in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #65 PyEval_EvalCode in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #66 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #67 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #68 PyVectorcall_Call in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #69 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #70 _PyEval_EvalCodeWithName in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #71 _PyFunction_Vectorcall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #72 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #73 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #74 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #75 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #76 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #77 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #78 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #79 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #80 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #81 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #82 _PyEval_EvalFrameDefault in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #83 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #84 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #85 _PyObject_CallMethodIdObjArgs in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #86 PyImport_ImportModuleLevelObject in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #87 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #88 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #89 _PyObject_MakeTpCall in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #90 ? in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #91 PyObject_CallFunction in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #92 PyImport_Import in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #93 PyImport_ImportModule in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #94 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #95 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #96 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #97 __libc_start_main in \/lib\/x86_64-linux-gnu\/libc.so.6\n #98 ? 
in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n Floating point exception (core dumped)\n\nThings I have tried:\n\nInstalling numpy\/pandas via pip gives me:\n\nRequirement already satisfied: numpy in \/usr\/lib\/python3\/dist-packages (1.17.4)\nRequirement already satisfied: pandas in \/usr\/lib\/python3\/dist-packages (0.25.3)\n\n\nAppending this to the python path via sys.path.append('\/usr\/lib\/python3\/dist-packages') directly after import sys in couplingScript.py:\n\nThis gives me the same error as before when run via OpenFOAM\n\n\nImporting directly from the package location using:\n\nimport usr.lib.python3.dist-packages.numpy as np\nimport usr.lib.python3.dist-packages.pandas as pd\n\n\nI also tried:\n\nfrom .usr.lib.python3.dist-packages import numpy as np\nfrom .usr.lib.python3.dist-packages import pandas as pd\n\n\n\nThese last two give a slightly shorter error:\n #0 Foam::error::printStack(Foam::Ostream&) at ??:?\n #1 Foam::sigSegv::sigHandler(int) at ??:?\n #2 ? in \/lib\/x86_64-linux-gnu\/libc.so.6\n #3 PyObject_GetAttrString in \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n #4 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #5 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #6 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n #7 __libc_start_main in \/lib\/x86_64-linux-gnu\/libc.so.6\n #8 ? in ~\/OpenFOAM\/ud215-v1906\/platforms\/linux64GccDPInt32Opt\/bin\/GeN-Foam\n Segmentation fault (core dumped)`\n\nI have also run print(sys.path) for both runs, which gives:\nPython run from terminal:\n ['\/home\/abc123\/Coupled_model\/GF', '\/usr\/lib\/python38.zip', '\/usr\/lib\/python3.8', '\/usr\/lib\/python3.8\/lib-dynload', '\/home\/abc123\/.local\/lib\/python3.8\/site-packages', '\/usr\/local\/lib\/python3.8\/dist-packages', '\/usr\/lib\/python3\/dist-packages']\n\nEmbedded python in OpenFOAM:\n ['\/home\/abc123\/Coupled_model\/GF', '\/usr\/lib\/python38.zip', '\/usr\/lib\/python3.8', '\/usr\/lib\/python3.8\/lib-dynload', '\/home\/abc123\/.local\/lib\/python3.8\/site-packages', '\/usr\/local\/lib\/python3.8\/dist-packages', '\/usr\/lib\/python3\/dist-packages']\n\nIf they're the same, why isn't the embedded python recognising numpy and pandas???","Title":"Error loading Python modules when embedding Python in C++ OpenFOAM","Tags":"python,c++,pandas,numpy,openfoam","AnswerCount":3,"A_Id":75265942,"Answer":"Try deleting and reinstalling the packages.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75083359,"CreationDate":"2023-01-11 12:52:59","Q_Score":2,"ViewCount":692,"Question":"I'm trying to install torch_geometric in a conda environment but I'm getting the following werror whenever I try to:\nimport torch_geometric\n\nError:\nOSError: dlopen(\/Users\/psanchez\/miniconda3\/envs\/playbook\/lib\/python3.9\/site-packages\/libpyg.so, 0x0006): Library not loaded: \/usr\/local\/opt\/python@3.10\/Frameworks\/Python.framework\/Versions\/3.10\/Python\n Referenced from: <95F9BBA5-21FB-3EA5-9028-172B745E6ABA> \/Users\/psanchez\/miniconda3\/envs\/playbook\/lib\/python3.9\/site-packages\/libpyg.so\n Reason: tried: '\/usr\/local\/opt\/python@3.10\/Frameworks\/Python.framework\/Versions\/3.10\/Python' (no such file), '\/System\/Volumes\/Preboot\/Cryptexes\/OS\/usr\/local\/opt\/python@3.10\/Frameworks\/Python.framework\/Versions\/3.10\/Python' (no such file), '\/usr\/local\/opt\/python@3.10\/Frameworks\/Python.framework\/Versions\/3.10\/Python' 
(no such file), '\/Library\/Frameworks\/Python.framework\/Versions\/3.10\/Python' (no such file), '\/System\/Library\/Frameworks\/Python.framework\/Versions\/3.10\/Python' (no such file, not in dyld cache)\n\nThis is how I installed the conda envrionment:\nonda create --name playbook python=3.9.7 --no-default-packages\nconda activate playbook\n\npip install torch==1.13.1 torchvision==0.14.1\n\n\npip install pyg-lib torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https:\/\/data.pyg.org\/whl\/torch-1.13.0+cpu.html\n\n\nAny idea how to solve this error?\nThanks a lot in advance!","Title":"Error when importing torch_geometric in Python 3.9.7","Tags":"python,python-3.x,pytorch,pytorch-geometric","AnswerCount":3,"A_Id":75083548,"Answer":"If you check your error message, on the line \"Referenced from\" you can see Python version 3.9 but on the line \"Reason tried\", you have Python version 3.10.\nSo i think you are using a Python environment 3.10 with Python 3.9 or the opposite. You should recreate your environment cleanly.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75083505,"CreationDate":"2023-01-11 13:03:22","Q_Score":2,"ViewCount":179,"Question":"Disclaimer: I can wipe out the database anytime. So while answering this, please don't care about migrations and stuff.\nImagine me having a model with multiple values:\nclass Compound(models.Model):\n color = models.CharField(max_length=20, blank=True, default=\"\")\n brand = models.CharField(max_length=200, blank=True, default=\"\")\n temperature = models.FloatField(null=True, blank=True)\n melting_temp = models.FloatField(null=True, blank=True)\n # more (~20) especially numeric values as model fields\n\nNow I want to add a comment to be stored for every value of that model. For example I want to add a comment \"measured in winter\" to the temperature model field.\nWhat is the best approach to do that?\nMy brainstorming came up with:\n\nBy hand add 20 more model fields like temperature_comment = ... but that sounds not very DRY\nAdd one big json field which stores every comment. But how do I create a Form with such a json field? Because I want to separate each input field for related value. I would probably have to use javascript which I would want to avoid.\nAdd a model called Value for every value and connect them to Compound via OneToOneFields. But how do I then create a Form for Compound? Because I want to create a Compound utilizing one form. I do not want to create every Value on its own. Also it is not as easy as before, to access and play around with the values inside the Compound model.\n\nI guess this is a fairly abstract question for a usecase that comes up quite often. I do not know why I did not find resources on how to accomplish that.","Title":"How to add a new \"comment\" or \"flag\" field to every model field of existing model?","Tags":"python,django,django-models,django-forms","AnswerCount":6,"A_Id":75248801,"Answer":"Depends on the access pattern of the model.\nyou could have a model1 for values, model2 for comments, one2one relation.\nIf you access one model more than the other you don't have to load and resolve text or varchar each time.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75085016,"CreationDate":"2023-01-11 14:58:27","Q_Score":1,"ViewCount":363,"Question":"I deployed a Django project in Railway, and it uses Celery and Redis to perform an scheduled task. 
The project is successfully online, but the Celery tasks are not performed.\nIf I execute the Celery worker from my computer's terminal using the Railway CLI, the tasks are performed as expected, and the results are saved in the Railway's PostgreSQL, and thus those results are displayed in the on-line site. Also, the redis server used is also the one from Railway.\nHowever, Celery is operating in 'local'. This is the log on my local terminal showing the Celery is running local, and the Redis server is the one up in Railway:\n-------------- celery@MacBook-Pro-de-Corey.local v5.2.7 (dawn-chorus)\n--- ***** ----- \n-- ******* ---- macOS-13.1-arm64-arm-64bit 2023-01-11 23:08:34\n- *** --- * --- \n- ** ---------- [config]\n- ** ---------- .> app: suii:0x1027e86a0\n- ** ---------- .> transport: redis:\/\/default:**@containers-us-west-28.railway.app:7078\/\/\n- ** ---------- .> results: \n- *** --- * --- .> concurrency: 10 (prefork)\n-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)\n--- ***** ----- \n-------------- [queues]\n.> celery exchange=celery(direct) key=celery\n[tasks]\n. kansoku.tasks.suii_kakunin\n\nI included this line of code in the Procfile regarding to the worker (as I saw in another related answer):\nworker: python manage.py qcluster --settings=my_app_name.settings\n\nAnd also have as an environment variable CELERY_BROKER_REDIS_URL pointing to Railway's REDIS_URL. I also tried creating a 'Periodic task' from the admin of the live aplication, but it just doesn't get executed. What should I do in order to have the scheduled tasks be done automatically without my PC?","Title":"Run Celery tasks on Railway","Tags":"python,django,celery,django-deployment","AnswerCount":1,"A_Id":76203163,"Answer":"My understanding is that in order to run the celery worker on Railway, you need to create another service in your project and initiate the command you would ordinarily place in the Procfile to start the worker (in the service settings, go to Deploy >> Start command). This setup means you don't need Procfile (as a result, you need to re-arrange other commands (ex. 
\"web\") that were in the procfile into other service settings).","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75085244,"CreationDate":"2023-01-11 15:16:23","Q_Score":1,"ViewCount":101,"Question":"I am new to Django\/python and I am facing a problem with my models.py.\nI added some attributes, saved it -> py manage.py makemigrations -> py manage.py migrate\nbut the current attributes are not shown in the 0001_initial.py.\nAlso when I am opening the database in my DB Browser for SQLite I still get the old status.\nHere's my code:\nmodels.py\nfrom django.db import models\n\n\n# from django.contrib.auth.models import User\n\n# Create your models here.\n category_choice = (\n ('Allgemein', 'Allgemein'),\n ('Erk\u00e4ltung', 'Erk\u00e4ltung'),\n ('Salben & Verb\u00e4nde', 'Salben & Verb\u00e4nde'),\n )\n\nclass medicament(models.Model):\n PZN = models.CharField(max_length=5, primary_key=True) # Maxlaenge auf 5 aendern\n name = models.CharField('Medikament Name', max_length=100)\n description = models.CharField('Medikament Beschreibung', max_length=500)\n category = models.CharField(max_length=100, blank=True, null=True, choices=category_choice)\n instructionsForUse = models.CharField('Medikament Einnehmhinweise', max_length=400)\n productimage = models.ImageField(null=True, blank=True, upload_to=\"images\/\")\n stock = models.PositiveIntegerField(default='0')\n reorder_level = models.IntegerField(default='0', blank=True, null=True)\n price= models.DecimalField(default='0.0', max_digits=10, decimal_places=2)\n sold_amount = models.IntegerField(default='0', blank=True, null=True)\n sales_volume = models.DecimalField(default='0.0', max_digits=10, decimal_places=2)\n\n\n def __str__(self):\n return self.name\n\n\nAnd the 0001_initial.py\n# Generated by Django 3.2.16 on 2023-01-05 14:33\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='medicament',\n fields=[\n ('PZN', models.CharField(max_length=5, primary_key=True, serialize=False)),\n ('name', models.CharField(max_length=100, verbose_name='Medikament Name')),\n ('description', models.CharField(max_length=500, verbose_name='Medikament Beschreibung')),\n ('category', models.CharField(default='Allgemein', max_length=100)),\n ('instructionsForUse', models.CharField(max_length=400, verbose_name='Medikament Einnehmhinweise')),\n ('productimage', models.ImageField(blank=True, null=True, upload_to='images\/')),\n ('stock', models.IntegerField(default='0')),\n ],\n ),\n ]","Title":"Django: 0001_initial.py is not on current status after complementing models.py","Tags":"python,sql,django,database,django-models","AnswerCount":2,"A_Id":75085401,"Answer":"In any Django project, you only run makemigrations once (at the first initialization of your models), after that you run ONLY migrate for any updates.\nAs for your problems, you should delete your SQLite DB, and also delete all migrations files that end with .pyc.\nAfter that run makemigrations then migrate, (and don't run makemigrations again, the reason being that it will cause problems and collisions with older SQL migrations based on the previous makemigrations).","Users Score":-2,"is_accepted":false,"Score":-0.1973753202,"Available Count":1},{"Q_Id":75086148,"CreationDate":"2023-01-11 16:25:04","Q_Score":0,"ViewCount":26,"Question":"How do I edit\/remove feature definitions (name\/type) from my AWS Sagemaker Feature Group? 
From what I encounter in the Feature Store API, there are just options to delete Feature Group or record. I Tried to search the documentation for feature delete\/edit methods without success. The current solution I see is to delete the Feature Group and recreate it with the correct feature definitions.","Title":"Remove\/Edit feature definitions from AWS Sagemaker Feature Group","Tags":"python,amazon-web-services,amazon-sagemaker,aws-feature-store","AnswerCount":1,"A_Id":75092204,"Answer":"SageMaker Feature Store supports the ability to delete an entire feature record, but not a specific feature. More specifically, the current version of Feature Store supports only immutable feature groups. Once you create a feature group, its schema cannot be changed.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75086268,"CreationDate":"2023-01-11 16:34:54","Q_Score":0,"ViewCount":39,"Question":"How we can use custom mean and var in standard_scaler? I need to calculate mean and var for all data in the dataset (train set+test set) and then use these values to standardize the train set and test set (and later input data) separately. How can I do this?\nI couldn't find any example of it.","Title":"custom mean and var for standard_scaler","Tags":"python,machine-learning,deep-learning","AnswerCount":2,"A_Id":75100062,"Answer":"The simplest one is the best one!\nI found that the normal StandardScaler is the best answer to my question.\nStandardScaler(with_mean=False,with_std=False) that means mean=0 and var=1.\nThese values is fix for train set, test set and input data. so it's OK!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75087093,"CreationDate":"2023-01-11 17:46:36","Q_Score":0,"ViewCount":58,"Question":"During development, it would be nice to build the documentation and then serve it locally so that I can inspect the latest changes. What is the best way to serve locally?\nmkdocs has a built-in command mkdocs serve, but I don't see any such equivalent for sphinx.","Title":"How do I serve `sphinx` documentation locally?","Tags":"documentation,python-sphinx","AnswerCount":2,"A_Id":75304237,"Answer":"You don\u2019t really need that. Just find a generated web page (index page or not) and launch it in a web browser.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75089574,"CreationDate":"2023-01-11 22:15:03","Q_Score":1,"ViewCount":38,"Question":"I have PDF drawings of target locations on a map. Each target location has a constant value next to it. Let's say \"A\"\nI want to add an increasing value say \"101\"+1 next to each A so that I can give each location a unique identifier.\nThis way a crew member can say \"at location 103\" and I know where on the map he\/she is.\nright now I am manually editing PDFs to add these values which sucks, wondering if I can automate\nI am using PyPDF2 and reportlab but struggling to get the location of each \"A\" and to print the new values","Title":"PDF editing. Add an increasing number next to a specific value","Tags":"python","AnswerCount":1,"A_Id":75092776,"Answer":"Consider using PyMuPDF instead. 
Will let you find correct locations including whatever text font properties plus color.\nAt each identified location boundary box, append your unique id ..., or add an appropriate annotation as KJ indicated.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75089743,"CreationDate":"2023-01-11 22:40:24","Q_Score":1,"ViewCount":51,"Question":"I am trynig to create a matplotlib label where I can use both newline symbol and multiplication symbol. However, when I use them together then I only see multiplication symbol with '\\n' as a part of text. The code that i use to create the symbol is below.\nr\"L1+\\nL1$\\times$L2\"\n\nCan someone point where I am wrong.","Title":"How to add newline symbol and multiplication symbol together in matplotlib","Tags":"python,matplotlib","AnswerCount":1,"A_Id":75089864,"Answer":"Looks like the solution was easier than I thought. All I had to do was to make a string as a combination of individual strings like \"L1+\\n\"+r'L1$\\times$L2 and it works","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75090778,"CreationDate":"2023-01-12 01:50:42","Q_Score":2,"ViewCount":224,"Question":"I have a Python log handler that writes using asyncio (it's too much work to write to this particular service any other way). I also want to be able to log messages from background threads, since a few bits of code do that. So my code looks basically like this (minimal version):\nclass AsyncEmitLogHandler(logging.Handler):\n def __init__(self):\n self.loop = asyncio.get_running_loop()\n super().__init__()\n\n def emit(self, record):\n self.format(record)\n asyncio.run_coroutine_threadsafe(\n coro=self._async_emit(record.message),\n loop=self.loop,\n )\n\n async def _async_emit(message):\n await my_async_write_function(message)\n\nMostly it works fine but when processes exit I get a lot some warnings like this: \"coroutine 'AsyncEmitLogHandler._async_emit' was never awaited\"\nAny suggestions on a cleaner way to do this? Or some way to catch shutdown and kill pending writes? Or just suppress the warnings?\nNote: the full code is [here][1]\n[1]: https:\/\/github.com\/lsst-ts\/ts_salobj\/blob\/c0c6473f7ff7c71bd3c84e8e95b4ad7c28e67721\/python\/lsst\/ts\/salobj\/sal_log_handler.py","Title":"Making a logging.Handler with async emit","Tags":"python,logging,python-asyncio","AnswerCount":1,"A_Id":75096294,"Answer":"You could keep a reference to the coro, and override the handler's close() method to call close() on it. A general way to manage coros is to keep a list of them in the handler, and override the handler's close() method to call close() on each coro, or else create tasks from them and call cancel() on each of the tasks.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75091251,"CreationDate":"2023-01-12 03:26:14","Q_Score":1,"ViewCount":98,"Question":"I am trying to integrate Amibroker 6.41 (64-bit) with Python. Currently, I have Python 3.11.1 (64-bit) stand-alone installed along with NumPy, and pandas installed using pip in the python library.\nI have installed AmiPy.dll into the Amibroker plugin folder and Amibroker acknowledged its presence.\nNeed your advice on the following error received while trying to set up cointegration afl using python.\nError 99. 
Error occurred during Python execution: *ModuleNotFoundError: No Module named '_ctypes' *callback:\n\nIt seems to me that it is unable to import the following:\nfrom ctypes._endian import BigEndianStructure, LittleEndianStructure\nfrom ctypes._endian import BigEndianUnion, LittleEndianUnion\n\nOn further investigation, it seems that somehow my latest Python 3.11.1 doesn't have ctypes installed. Hence the AmiPy Dll is unable to import the above files.\nUnable to decide what should be my next step to resolve this issue.","Title":"Python Error 99 while integrating Amibroker with Python","Tags":"python,amibroker","AnswerCount":1,"A_Id":75105995,"Answer":"Finally solved the issue by uninstalling Python 3.11.1 and installing Python 3.10.8 as Python 3.11.1 is broken. The issue is highlighted in the bug report on GitHub (link given below)\nSo suggest all not to install or use Python version 3.11.1 due to DLLs not being added to sys.path in embedded startup on Windows.\n[[1]: https:\/\/github.com\/python\/cpython\/issues\/100320][1]","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75091456,"CreationDate":"2023-01-12 04:03:54","Q_Score":0,"ViewCount":28,"Question":"I loaded QtPy5 and QtPy5-tools successfully using pip on VSCode. But I cant for the life of me find it on my system.\nIs it an .exe program?\nwhere should I be looking please and what is it called\nTIA","Title":"where is the Qt designer app loaded on my system?","Tags":"python,qt-designer","AnswerCount":1,"A_Id":75091899,"Answer":"you just hold ctrl key and move your cursor on the QtPy5 and then click it it should take you the the file location of the QtPy5. i think this should work.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75092954,"CreationDate":"2023-01-12 07:38:45","Q_Score":2,"ViewCount":85,"Question":"I have a data like this\nresponse = [{'startdata': 'Aug 24, 2022 10:37:50 PM', 'enddata': 'Aug 24, 2022 10:37:50 PM', 'province': 'Mashonaland_Central', 'district': 'Guruve', 'on_consent': '', 'meta': ''}]\n\n\ndata_mod = [\"startdata\", \"enddata\"]\n\nneed to check data_mod fields and if the key is present in the response variable I need to update it otherwise just ignore it.\nwhat I'm doing\nfor data in response_json:\n try:\n dateformat_in = \"%b %d, %Y %I:%M:%S %p\"\n dateformat_out = \"%Y-%m-%dT%H:%M:%S+00:00\"\n data[\"starttime\"] = datetime.strptime(data[\"starttime\"], dateformat_in).strftime(dateformat_out)\n data[\"endtime\"] = datetime.strptime(data[\"endtime\"], dateformat_in).strftime(dateformat_out)\n data[\"CompletionDate\"] = datetime.strptime(data[\"CompletionDate\"], dateformat_in).strftime(dateformat_out)\n data[\"SubmissionDate\"] = datetime.strptime(data[\"SubmissionDate\"], dateformat_in).strftime(dateformat_out)\n yield data\n except Exception as e:\n raise e\n\n return data\n\nBut this is not working for me. need some help on this?","Title":"how do I update the key once its found in dictionary","Tags":"python","AnswerCount":4,"A_Id":75093221,"Answer":"Try something like\nresponse[i] = datetime.strptime(response[i], dateformat_in).strftime(dateformat_out).The original statement doesn't change the value it passes.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75093960,"CreationDate":"2023-01-12 09:16:48","Q_Score":0,"ViewCount":49,"Question":"I am trying to install and use python on Windows 11 for purposes of Meraki API calls. 
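[Editor's illustrative sketch for the preceding "update the key once found" answer — a minimal in-place update over the data_mod keys, reusing the date formats from the question's own code; treat it as a sketch, not the asker's final solution:]
from datetime import datetime

dateformat_in = "%b %d, %Y %I:%M:%S %p"
dateformat_out = "%Y-%m-%dT%H:%M:%S+00:00"
data_mod = ["startdata", "enddata"]

for data in response:            # response is the list of dicts from the question
    for key in data_mod:
        if data.get(key):        # only update keys that are present and non-empty
            data[key] = datetime.strptime(data[key], dateformat_in).strftime(dateformat_out)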
I have installed Python version 3.11 and am now trying to run\npip install --upgrade requests\npip install --upgrade meraki\nbut these command return the following error\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': \/simple\/requests\/\nERROR: Could not find a version that satisfies the requirement requests (from versions: none)\nERROR: No matching distribution found for requests\nWARNING: There was an error checking the latest version of pip.\nI don't think the firewall is blocking it but I am not sure what I need to look for in the firewall - does anyone know the addresses that need to be unblocked?\nOr is there another reason for this error?\nThanks!\nI tried adding a firewall rule but didn't know what I needed to add.","Title":"How to fix Python PIP update failing","Tags":"python,python-3.x,meraki-api","AnswerCount":1,"A_Id":75107000,"Answer":"Try to use another pip index-url\nFor example:\npip install --upgrade requests -i https:\/\/pypi.tuna.tsinghua.edu.cn\/simple\/ --trusted-host pypi.tuna.tsinghua.edu.cn","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75094244,"CreationDate":"2023-01-12 09:39:28","Q_Score":3,"ViewCount":6281,"Question":"I am using a SVC to predict a target. I am tryring to use shap to get features importance. but it fails.\nhere is my simple code that I copied from the official doc of shap :\nimport shap\nsvc_linear = SVC(C=1.2, probability=True)\nsvc_linear.fit(X_train, Y_train)\nexplainer = shap.KernelExplainer(svc_linear.predict_proba, X_train)\nshap_values = explainer.shap_values(X_test)\nshap.force_plot(explainer.expected_value[0], shap_values[0], X_test)\n\nbut I get this :\n---------------------------------------------------------------------------\nSystemError Traceback (most recent call last)\n~\\AppData\\Local\\Temp\\ipykernel_11012\\3923049429.py in \n----> 1 import shap\n 2 svc_linear = SVC(C=1.2, probability=True)\n 3 svc_linear.fit(X_train, Y_train)\n 4 explainer = shap.KernelExplainer(svc_linear.predict_proba, X_train)\n 5 shap_values = explainer.shap_values(X_test)\n\n~\\Anaconda3\\lib\\site-packages\\shap\\__init__.py in \n 10 warnings.warn(\"As of version 0.29.0 shap only supports Python 3 (not 2)!\")\n 11 \n---> 12 from ._explanation import Explanation, Cohorts\n 13 \n 14 # explainers\n\n~\\Anaconda3\\lib\\site-packages\\shap\\_explanation.py in \n 10 from slicer import Slicer, Alias, Obj\n 11 # from ._order import Order\n---> 12 from .utils._general import OpChain\n 13 from .utils._exceptions import DimensionError\n 14 \n\n~\\Anaconda3\\lib\\site-packages\\shap\\utils\\__init__.py in \n----> 1 from ._clustering import hclust_ordering, partition_tree, partition_tree_shuffle, delta_minimization_order, hclust\n 2 from ._general import approximate_interactions, potential_interactions, sample, safe_isinstance, assert_import, record_import_error\n 3 from ._general import shapley_coefficients, convert_name, format_value, ordinal_str, OpChain, suppress_stderr\n 4 from ._show_progress import show_progress\n 5 from ._masked_model import MaskedModel, make_masks\n\n~\\Anaconda3\\lib\\site-packages\\shap\\utils\\_clustering.py in \n 2 import scipy as sp\n 3 from scipy.spatial.distance import pdist\n----> 4 from numba import jit\n 5 import sklearn\n 6 import warnings\n\n~\\Anaconda3\\lib\\site-packages\\numba\\__init__.py in \n 40 \n 41 # Re-export vectorize decorators 
and the thread layer querying function\n---> 42 from numba.np.ufunc import (vectorize, guvectorize, threading_layer,\n 43 get_num_threads, set_num_threads)\n 44 \n\n~\\Anaconda3\\lib\\site-packages\\numba\\np\\ufunc\\__init__.py in \n 1 # -*- coding: utf-8 -*-\n 2 \n----> 3 from numba.np.ufunc.decorators import Vectorize, GUVectorize, vectorize, guvectorize\n 4 from numba.np.ufunc._internal import PyUFunc_None, PyUFunc_Zero, PyUFunc_One\n 5 from numba.np.ufunc import _internal, array_exprs\n\n~\\Anaconda3\\lib\\site-packages\\numba\\np\\ufunc\\decorators.py in \n 1 import inspect\n 2 \n----> 3 from numba.np.ufunc import _internal\n 4 from numba.np.ufunc.parallel import ParallelUFuncBuilder, ParallelGUFuncBuilder\n 5 \n\nSystemError: initialization of _internal failed without raising an exception\n\nI don't know why? does anyone knows why ?\nps :\npython version : 3.9.13\nshap version : 0.40.0","Title":"shap : SystemError: initialization of _internal failed without raising an exception","Tags":"python,machine-learning,jupyter-notebook,shap,svc","AnswerCount":2,"A_Id":75318442,"Answer":"As per Hiran's comment in the question, it also worked for me.\ninstall shap again after uninstall it.\n\npip uninstall shap\n\n\npip install shap","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75095519,"CreationDate":"2023-01-12 11:17:46","Q_Score":0,"ViewCount":34,"Question":"I have df1 with around 3,67,000 rows.\ndf2 has 30k rows.\nTheir common columns are first_name, middle_name and last_name, where first name and last name are exact matches, and middle_name has some constraints.\nThe matched df has 20k rows.\nI want to make a dataframe containing df2-matched (30k-20k= 10k rows).\nEssentially, I want to find the rows in df2 that were not a match to any rows in df1, but I cannot concat or merge because the columns are different.","Title":"Make a dataframe containing rows that were not matched after merging df1 and df2","Tags":"python,sql,pandas,join,merge","AnswerCount":3,"A_Id":75095698,"Answer":"new_df = df2[~df2.index.isin(matched.index)]\nExplanation: You are saying \"keep only the rows in df2 that are not in the matched data frame, and save this as a new dataframe\"","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75098768,"CreationDate":"2023-01-12 15:34:17","Q_Score":1,"ViewCount":827,"Question":"I am working on a bot which is supposed to send slash command in the Discord channel and those slash commands will be received by another bot in the same channel. But when I send a message formatted as a slash command, the other bot doesn't detect it as a command but as a simple text message. Here is my code;\nimport discord\nimport asyncio\n\nclient = discord.Client()\n\n@client.event\nasync def on_ready():\n print(\"Bot is ready.\")\n\n@client.event\nasync def on_message(message):\n async with message.channel.typing(): await asyncio.sleep(2)\n # Send a message after 5 seconds\n await message.channel.send(\"\/spoiler 'this is spoiler'\")\n return\n\nclient.run('My_Bot_Token')\n\nI tried the following to get it working\n\nI tried using typing() method but that didn't work.\nI read the discord.py docs but found nothing from there that can help.\nSearched the internet but again nothing about sending slash commands from a bot\n\nI'd be grateful if someone could help me. 
Thanks","Title":"Discord.py send slash command from a bot to a bot","Tags":"python,discord,discord.py,bots","AnswerCount":3,"A_Id":75099856,"Answer":"This is not supported by Discord.","Users Score":3,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75098860,"CreationDate":"2023-01-12 15:42:02","Q_Score":1,"ViewCount":243,"Question":"I begin with marshamallow and I try to validate a field. My schema is very simple.\nclass MySchema(Schema):\n pid = fields.String(required=true)\n visibility = fields.String(validate=OneOf(['public','private'])\n\n @validates('visibility')\n def visibility_changes(self, data, **kwargs):\n # Load data from DB (based on ID)\n db_record = load_data_from_db(self.pid) # <-- problem is here\n # Check visibility changed\n if db_record.get('visibility') != data:\n do_some_check_here()\n\nBut using self.pid doesn't work. It raises an error AttributeError: 'MySchema' object has no attribute 'pid'.\nWhat's the correct way to access to my \"pid\" field value into my @validates function ?\nI tried using self.fields, self.load_fields.get('pid').get_value(), ... no easy way to access it, but I suppose that Marshmallow has such magic method.\nThanks for your help.","Title":"Access to marshmallow field value","Tags":"python,marshmallow","AnswerCount":1,"A_Id":75638903,"Answer":"You should use @validates_schema to define a validator that works on several fields. It takes the whole input payload as argument so you can find pid in there.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75099182,"CreationDate":"2023-01-12 16:07:58","Q_Score":0,"ViewCount":938,"Question":"I am trying to install locally Stable Diffusion. I follow the presented steps but when I get to the last one \"run webui-use file\" it opens the terminal and it's saying \"Press any key to continue...\". If I do so the terminal instantly closes.\nI went to the SB folder, right-clicked open in the terminal and used .\/webui-user to run the file. The terminal does not longer close but nothing is happening and I get those two errors:\nCouldn't install torch,\nNo matching distribution found for torch==1.12.1+cu113\nI've researched online and I've tried installing the torch version from the error, also I tried pip install --user pipenv==2022.1.8 but I get the same errors.","Title":"Stable Diffusion Error: Couldn't install torch \/ No matching distribution found for torch==1.12.1+cu113","Tags":"python,pytorch,stable-diffusion","AnswerCount":1,"A_Id":75607570,"Answer":"if has some problems with a python, remove venv folder, this will be generated again by script, because if you have another version to python this config files will be replaced with your paths, everything if you change a python version, don't forgot delete this folder venv.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":2},{"Q_Id":75099182,"CreationDate":"2023-01-12 16:07:58","Q_Score":0,"ViewCount":938,"Question":"I am trying to install locally Stable Diffusion. I follow the presented steps but when I get to the last one \"run webui-use file\" it opens the terminal and it's saying \"Press any key to continue...\". If I do so the terminal instantly closes.\nI went to the SB folder, right-clicked open in the terminal and used .\/webui-user to run the file. 
The terminal does not longer close but nothing is happening and I get those two errors:\nCouldn't install torch,\nNo matching distribution found for torch==1.12.1+cu113\nI've researched online and I've tried installing the torch version from the error, also I tried pip install --user pipenv==2022.1.8 but I get the same errors.","Title":"Stable Diffusion Error: Couldn't install torch \/ No matching distribution found for torch==1.12.1+cu113","Tags":"python,pytorch,stable-diffusion","AnswerCount":1,"A_Id":75111984,"Answer":"I ran into the same problem, found out that I was using python 3.11, instead of the version from instructions - Python 3.10.6; You can uninstall other versions from Programs and Features\/ edit env vars","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75099759,"CreationDate":"2023-01-12 16:55:09","Q_Score":0,"ViewCount":30,"Question":"I don't have a concrete project yet, but in anticipation I would like to know if it is possible to fill a pdf with data stored in mysql?\nIt would be a question of a form with several lines and column history not to simplify the thing... If yes, what technology\/language to use?\nI found several tutorials which however start from a blank pdf. I have the constraint of having to place the data in certain specific places.","Title":"How to fill a pdf with python","Tags":"python,pdf,pdf-writer","AnswerCount":3,"A_Id":75099817,"Answer":"Try using PyFPDF or ReportLab to create and manipulate PDF documents in Python.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75100102,"CreationDate":"2023-01-12 17:23:32","Q_Score":2,"ViewCount":1088,"Question":"I am not very familiar with python, I only done automation with so I am a new with packages and everything.\nI am creating an API with Flask, Gunicorn and Poetry.\nI noticed that there is a version number inside the pyproject.toml and I would like to create a route \/version which returns the version of my app.\nMy app structure look like this atm:\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 __init__.py\n\u251c\u2500\u2500 poetry.lock\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 tests\n\u2502 \u2514\u2500\u2500 __init__.py\n\u2514\u2500\u2500 wsgi.py\n\nWhere wsgi.py is my main file which run the app.\nI saw peoples using importlib but I didn't find how to make it work as it is used with:\n __version__ = importlib.metadata.version(\"__package__\")\nBut I have no clue what this package mean.","Title":"Get app version from pyproject.toml inside python code","Tags":"python,python-packaging,python-poetry,python-importlib","AnswerCount":3,"A_Id":76582168,"Answer":"I had the same question.\nOne of the solution I could come up is to ship the pyproject.toml file together with the project, as a data file. This can be done by putting pyproject.toml inside your_package\/data, and put include = [{path = \"phuego\/data\/pyproject.toml\"}]under [tool.poetry] in pyproject.toml. Then, you can use toml package to access it.\nBut I'm not convinced by this solution. 
Perhaps there is a better idea out there.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75100102,"CreationDate":"2023-01-12 17:23:32","Q_Score":2,"ViewCount":1088,"Question":"I am not very familiar with python, I only done automation with so I am a new with packages and everything.\nI am creating an API with Flask, Gunicorn and Poetry.\nI noticed that there is a version number inside the pyproject.toml and I would like to create a route \/version which returns the version of my app.\nMy app structure look like this atm:\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 __init__.py\n\u251c\u2500\u2500 poetry.lock\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 tests\n\u2502 \u2514\u2500\u2500 __init__.py\n\u2514\u2500\u2500 wsgi.py\n\nWhere wsgi.py is my main file which run the app.\nI saw peoples using importlib but I didn't find how to make it work as it is used with:\n __version__ = importlib.metadata.version(\"__package__\")\nBut I have no clue what this package mean.","Title":"Get app version from pyproject.toml inside python code","Tags":"python,python-packaging,python-poetry,python-importlib","AnswerCount":3,"A_Id":75100875,"Answer":"You should not use __package__, which is the name of the \"import package\" (or maybe import module, depending on where this line of code is located), and this is not what importlib.metadata.version() expects. This function expects the name of the distribution package (the thing that you pip-install), which is the one you write in pyproject.toml as name = \"???\".","Users Score":5,"is_accepted":false,"Score":0.3215127375,"Available Count":2},{"Q_Id":75100915,"CreationDate":"2023-01-12 18:38:45","Q_Score":0,"ViewCount":27,"Question":"I have data regarding the years of birth and death of several people. I want to compute efficiently how many people are in each of a group of pre-defined epochs.\nFor example. If I have this list of data:\n\nPaul 1920-1950\nSara 1930-1950\nMark 1960-2020\nLennard 1960-1970\n\nand I define the epochs 1900-1980 and 1980-2023, I would want to compute the number of people alive in each period (not necessarily the whole range of the years). In this case, the result would be 4 people (Paul, Sara, Mark and Lennard) for the first epoch and 1 person (Mark) for the second epoch.\nIs there any efficient routine out there? I would like to know, as the only way I can think of now is to create a huge loop with a lot of ifs to start categorizing.\nI really appreciate any help you can provide.","Title":"Categorize birth-death data in epochs","Tags":"python,data-analysis","AnswerCount":2,"A_Id":75100984,"Answer":"Loop over all individuals.\nExpand \"birth .. 
death\" years into epochs.\nIf epoch granularity was 12 months,\nthen you would generate 30 rows for a 30-year old,\nand so on.\nYour granularity is much coarser,\nwith valid epoch labels being just {1900, 1980},\nso each individual will have just one or two rows.\nOne of your examples would have a \"1900, Mark\" row,\nand a \"1980, Mark\" row, indicating he was alive\nfor some portion of both epochs.\nNow just sort values and group by,\nto count how many 1900 rows and\nhow many 1980 rows there are.\nReport the per-epoch counts.\nOr report names of folks alive in each epoch,\nif that's the level of detail you need.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75101688,"CreationDate":"2023-01-12 19:59:50","Q_Score":1,"ViewCount":162,"Question":"i try the below code but the attribute error is driving me crazy !, any thing after message. gets me an attribute error, wtv prop. i use. i end up with attribute error.\nAttributeError: .Sender\n\nCode:\nimport win32com.client\n\noutlook = win32com.client.Dispatch(\"Outlook.Application\").GetNamespace(\"MAPI\")\ninbox = outlook.GetDefaultFolder(6) # \"6\" refers to the inbox\nmessages = inbox.Items\n\nsender_email = \"TDC@AE.Roco.COM\"\nrecipient_email = \"simple.invoice@net\"\n\nfor message in messages:\n if message.Sender.Address == sender_email:\n new_mail = message.Forward()\n new_mail.Recipients.Add(recipient_email)\n for attachment in message.Attachments:\n new_mail.Attachments.Add(attachment)\n new_mail.Save()\n\nBased on given answers:\nimport win32com.client\n\noutlook = win32com.client.Dispatch(\"Outlook.Application\")\nmapi = outlook.GetNamespace(\"MAPI\")\ninbox = mapi.GetDefaultFolder(6)\naccounts = mapi.Folders\n\nquery = '@SQL=\"urn:schemas:httpmail:from\" = ' + \"'TDC@AE.Roco.COM'\" + ' AND \"urn:schemas:httpmail:hasattachment\" = ' + \"'1'\"\nprint(query)\n\ntry:\n items = inbox.Items.Restrict(query)\n print(f'Number of items found : {items.count}')\n\n\n def check_subfolders(folder):\n items = folder.Items.Restrict(query)\n if items.count > 0:\n print(f'{items.count} emails found in {folder.name}')\n for subfolder in folder.Folders:\n check_subfolders(subfolder)\n\n\n check_subfolders(inbox)\n for folder in mapi.Folders:\n items = folder.Items.Restrict(query)\n if items.count > 0:\n print(f'{items.count} emails found in {folder.name}')\n for item in items:\n mail = item.Forward()\n mail.Recipients.Add(\"simple.invoice@net\")\n mail.Subject = \"Fwd: \" + item.Subject\n mail.Body = \"Please find the forwarded message with attachments below:\\n\\n\" + item.Body\n mail.Save()\nexcept Exception as e:\n print(f'An error occurred: {e}')\n\nNow I have no errors but the result returns zero, although I have mails from that specified sender!","Title":"Forward emails from a specifc sender in outlook via python","Tags":"python,email,outlook,win32com,office-automation","AnswerCount":3,"A_Id":75102386,"Answer":"Sender property is only exposed by the MailItem object, but you can also have ReportItem and MeetingItem objects in the Inbox folder. You need to check first that Class property == 43 (which is olMail)\nAlso, do not loop through all items in a folder - use Items.Find\/FindNext or Items.Restrict with a query like [SenderEmailAddress] = 'TDC@AE.Roco.COM'","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75101901,"CreationDate":"2023-01-12 20:21:36","Q_Score":0,"ViewCount":58,"Question":"I am trying to run this code for an LDA Topic Model for free form text responses. 
The path is referencing the raw text from the reviews. When I run this, the error is\nTypeError: pipe() got an unexpected keyword argument 'n_threads'\nAny possible solutions? This is my first time running a LDA Topic model from scratch. Let me know if more info is needed. thanks\nCODE:\nsw = stopwords.words('english')\nnlp = spacy.load('en_core_web_sm')\nimport time\nt0 = time.time()\nwrite_parsed_sentence_corpus(nlppath+'rawtext.txt', nlppath+'parsedtexts.txt', nlp, batch_size=1000, n_threads=2, sw=sw, exclusions = ['-PRON-'])\ntd = time.time()-t0\nprint('Took {:.2f} minutes'.format(td\/60))","Title":"pipe() got an unexpected keyword argument 'n_threads'","Tags":"python,pipe,lda,topic-modeling","AnswerCount":1,"A_Id":75158354,"Answer":"Change n_threads=2 to n_process=2 and it should work","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75102042,"CreationDate":"2023-01-12 20:36:28","Q_Score":3,"ViewCount":102,"Question":"I am trying to generate a log file to a specific folder and path in greengrass v2. however the log file is created at the current directory.\nThe current directory at which the logger file is generated is\n\/sim\/things\/t1_gateway_iotgateway_1234\/greengrass\/packages\/artifacts-unarchived\/com.data.iot.RulesEngineCore\/2.3.1-pp.38\/package\n\nCould you please help me where am I missing?\nThe following is my program.\nimport logging\nfrom datetime import datetime\nimport os, sys\nfrom logging.handlers import RotatingFileHandler\n\ndef getStandardStdOutHandler():\n\n formatter = logging.Formatter(\n fmt=\"[%(asctime)s][%(levelname)-7s][%(name)s] %(message)s (%(threadName)s[% (thread)d]:%(module)s:%(funcName)s:%(lineno)d)\"\n )\n\n filename = datetime.now().strftime(\"rule_engine_%Y_%m_%d_%H_%M.log\")\n path = \"\/sim\/things\/t1_gateway_iotgateway_1234\/greengrass\/logs\/\"\n\n _handler = RotatingFileHandler(path + filename, maxBytes=1000000, backupCount=5)\n _handler.setLevel(logging.DEBUG)\n _handler.setFormatter(formatter)\n return _handler\n\n\ndef getLogger(name: str):\n logger = logging.getLogger(name)\n\n logger.addHandler(getStandardStdOutHandler())\n\n return logger","Title":"Problem in generating logger file to a specific path in a greengrass","Tags":"python,logging","AnswerCount":2,"A_Id":75279254,"Answer":"Directing the link can be a hustle, I prefer to use this simple technique of adding r before the link in quotes.\ndo check this:\npath folders = (r\"\/sim\/things\/t1_gateway_iotgateway_1234\/greengrass\/logs\/\")","Users Score":-2,"is_accepted":false,"Score":-0.1973753202,"Available Count":1},{"Q_Id":75102134,"CreationDate":"2023-01-12 20:45:51","Q_Score":1,"ViewCount":2503,"Question":"I'm trying to build a neural network to predict per-capita-income for counties in US based on the education level of their citizens.\nX and y have the same dtype (I have checked this) but I'm getting an error.\nHere is my data:\n county_FIPS state county per_capita_personal_income_2019 \\\n0 51013 VA Arlington, VA 97629 \n\n per_capita_personal_income_2020 per_capita_personal_income_2021 \\\n0 100687 107603 \n\n associate_degree_numbers_2016_2020 bachelor_degree_numbers_2016_2020 \\\n0 19573 132394 \n \n\nAnd here is my network\nimport torch\nimport pandas as pd\ndf = pd.read_csv(\".\/input\/US counties - education vs per capita personal income - results-20221227-213216.csv\")\nX = torch.tensor(df[[\"bachelor_degree_numbers_2016_2020\", \"associate_degree_numbers_2016_2020\"]].values)\ny = 
torch.tensor(df[\"per_capita_personal_income_2020\"].values)\n\nX.dtype\ntorch.int64\n\ny.dtype\ntorch.int64\n\nimport torch.nn as nn\nclass BaseNet(nn.Module):\n def __init__(self, in_dim, hidden_dim, out_dim):\n super(BaseNet, self).__init__()\n self.classifier = nn.Sequential(\n nn.Linear(in_dim, hidden_dim, bias=True), \n nn.ReLU(), \n nn.Linear(feature_dim, out_dim, bias=True))\n \n def forward(self, x): \n return self.classifier(x)\n\nfrom torch import optim\nimport matplotlib.pyplot as plt\nin_dim, hidden_dim, out_dim = 2, 20, 1\nlr = 1e-3\nepochs = 40\nloss_fn = nn.CrossEntropyLoss()\nclassifier = BaseNet(in_dim, hidden_dim, out_dim)\noptimizer = optim.SGD(classifier.parameters(), lr=lr)\n\ndef train(classifier, optimizer, epochs, loss_fn):\n classifier.train()\n losses = []\n for epoch in range(epochs):\n out = classifier(X)\n loss = loss_fn(out, y)\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n losses.append(loss\/len(X))\n print(\"Epoch {} train loss: {}\".format(epoch+1, loss\/len(X)))\n \n plt.plot([i for i in range(1, epochs + 1)])\n plt.xlabel(\"Epoch\")\n plt.ylabel(\"Training Loss\")\n plt.show()\n\ntrain(classifier, optimizer, epochs, loss_fn)\n\nHere is the full stack trace of the error that I am getting when I try to train the network:\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\nInput In [77], in ()\n 36 plt.ylabel(\"Training Loss\")\n 37 plt.show()\n---> 39 train(classifier, optimizer, epochs, loss_fn)\n\nInput In [77], in train(classifier, optimizer, epochs, loss_fn)\n 24 losses = []\n 25 for epoch in range(epochs):\n---> 26 out = classifier(X)\n 27 loss = loss_fn(out, y)\n 28 loss.backward()\n\nFile ~\/opt\/anaconda3\/lib\/python3.9\/site-packages\/torch\/nn\/modules\/module.py:1194, in Module._call_impl(self, *input, **kwargs)\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\n 1191 # this function, and just call forward.\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\n-> 1194 return forward_call(*input, **kwargs)\n 1195 # Do not call functions when jit is used\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\n\nInput In [77], in BaseNet.forward(self, x)\n 10 def forward(self, x): \n---> 11 return self.classifier(x)\n\nFile ~\/opt\/anaconda3\/lib\/python3.9\/site-packages\/torch\/nn\/modules\/module.py:1194, in Module._call_impl(self, *input, **kwargs)\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\n 1191 # this function, and just call forward.\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\n-> 1194 return forward_call(*input, **kwargs)\n 1195 # Do not call functions when jit is used\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\n\nFile ~\/opt\/anaconda3\/lib\/python3.9\/site-packages\/torch\/nn\/modules\/container.py:204, in Sequential.forward(self, input)\n 202 def forward(self, input):\n 203 for module in self:\n--> 204 input = module(input)\n 205 return input\n\nFile ~\/opt\/anaconda3\/lib\/python3.9\/site-packages\/torch\/nn\/modules\/module.py:1194, in Module._call_impl(self, *input, **kwargs)\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\n 1191 # this function, and just call forward.\n 1192 if 
not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\n-> 1194 return forward_call(*input, **kwargs)\n 1195 # Do not call functions when jit is used\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\n\nFile ~\/opt\/anaconda3\/lib\/python3.9\/site-packages\/torch\/nn\/modules\/linear.py:114, in Linear.forward(self, input)\n 113 def forward(self, input: Tensor) -> Tensor:\n--> 114 return F.linear(input, self.weight, self.bias)\n\nRuntimeError: mat1 and mat2 must have the same dtype\n\nUpdates\nI have tried casting X and y to float tensors but this comes up with the following error: expected scalar type Long but found Float. If someone who knows PyTorch could try running this notebook for themselves that would be a great help. I'm struggling to get off the ground with Kaggle and ML.","Title":"mat1 and mat2 must have the same dtype","Tags":"python,machine-learning,pytorch,data-science","AnswerCount":2,"A_Id":75398605,"Answer":"I converted the input to np.float32 which solved a similar problem for me","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75103127,"CreationDate":"2023-01-12 22:50:03","Q_Score":2,"ViewCount":2985,"Question":"The full error:\nNotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective\/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https:\/\/fburl.com\/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].\n\nI get this when attempting to train a YOLOv8 model on a Windows 11 machine, everything works for the first epoch then this occurs.\n\nI also get this error immediately after the first epoch ends but I don't think it is relevant.\nError executing job with overrides: ['task=detect', 'mode=train', 'model=yolov8n.pt', 'data=custom.yaml', 'epochs=300', 'imgsz=160', 'workers=8', 'batch=4']\n\nI was trying to train a YOLOv8 image detection model utilizing CUDA GPU.","Title":"Getting \"NotImplementedError: Could not run 'torchvision::nms' with arguments from CUDA backend\" despite having all necessary libraries and imports","Tags":"python,pytorch,yolo","AnswerCount":2,"A_Id":76021232,"Answer":"I don't know what causes this error but I know installing the proper version of torchvision and cuda didn't fix it. The way I solved is by uninstalling all of my packages and then reinstalling. Works just fine now. 
In hindsight, it may have been better to just update all my packages, but problem fixed none the less.","Users Score":-1,"is_accepted":false,"Score":-0.0996679946,"Available Count":1},{"Q_Id":75103692,"CreationDate":"2023-01-13 00:32:08","Q_Score":2,"ViewCount":49,"Question":"How do I work around when my training image dataset have different number of classes than validation set.\nDirectory structure:\n- train\n - class1\n - class2\n - class3\n- test\n - class1\n - class3\n\nidg = ImageDataGenerator(\n preprocessing_function=preprocess_input\n)\ntrain_gen = idg.flow_from_directory(\n TRAIN_DATA_PATH,\n target_size=(ROWS, COLS),\n batch_size = 32\n)\n\nval_gen = idg.flow_from_directory(\n TEST_DATA_PATH,\n target_size=(ROWS, COLS),\n batch_size = 32\n)\n\ninput_shape = (ROWS, COLS, 3)\nnclass = len(train_gen.class_indices)\n\nbase_model = applications.InceptionV3(weights='imagenet', \n include_top=False, \n input_shape=(ROWS, COLS,3))\nbase_model.trainable = False\n\nmodel = Sequential()\nmodel .add(base_model)\nmodel .add(GlobalAveragePooling2D())\nmodel .add(Dropout(0.5))\nmodel .add(Dense(nclass, activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy', \n optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),\n metrics=['accuracy'])\nmodel.summary()\n\nmodel.fit(\n train_gen, \n epochs=20, \n verbose=True,\n validation_data=val_gen\n)\n\nThe error I get is related to the different number of classes in validation set.\nNode: 'categorical_crossentropy\/softmax_cross_entropy_with_logits'\nlogits and labels must be broadcastable: logits_size=[32,206] labels_size=[32,189]\n\nI have 206 classes in the train set and 189 in the validation set. Is it possible to have the same mapping as in train set (the names of the image folders are the same, I'm just missing some of them)","Title":"Handling class mismatch in ImageDataGenerator for training and validation sets","Tags":"python,tensorflow,keras","AnswerCount":2,"A_Id":75103717,"Answer":"If the validation set is small and\/or some classes are very rare, it may happen that some classes are completely absent from the validation dataset even if they are in the training dataset.\nThe simplest solution in your case is probably adding empty folders for the missing classes in the validation dataset directory by hand, such that all the classes will be present, even if some will have zero elements.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75103975,"CreationDate":"2023-01-13 01:26:31","Q_Score":1,"ViewCount":32,"Question":"for x in range(len(fclub1)-1):\n for y in range(x+1,len(fclub1)-1):\n if SequenceMatcher(None,fclub1[x], fclub1[y]).ratio() > 0.4:\n if SequenceMatcher(None,fclub2[x], fclub2[y]).ratio() > 0.4:\n if float(fbest_odds_1[x]) < float(fbest_odds_1[y]):\n fbest_odds_1[x] = fbest_odds_1[y]\n if float(fbest_odds_x[x]) < float(fbest_odds_x[y]):\n fbest_odds_x[x] = fbest_odds_x[y]\n if float(fbest_odds_2[x]) < float(fbest_odds_2[y]):\n fbest_odds_2[x] = fbest_odds_2[y]\n fclub1.pop(y)\n fclub2.pop(y)\n fbest_odds_1.pop(y)\n fbest_odds_x.pop(y)\n fbest_odds_2.pop(y)\n\nIt can't reliably match club names from different bookkeeps, for example Manchester United and Man. 
Utd.\nI tried fixing it with SequenceMatcher and making it recognize at least some part of the club name, but then it started to compare different clubs saying that they are the same:Aston Villa - Atherton Collieries and Leeds - Liversedge","Title":"Best way to recognize same club names that are written in a different way","Tags":"python,sequencematcher","AnswerCount":3,"A_Id":75104303,"Answer":"I ended up using the fuzzywuzzy library and fuzzy.partial_ratio() function","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75105492,"CreationDate":"2023-01-13 06:14:51","Q_Score":1,"ViewCount":82,"Question":"I have python 3.7.0 on windows 11 using vscode. I pip installed tensorflow and keras but when I tried to import them it gave me an error and said cannot import name OrderedDict\nTried uninstalling and reinstalling both tf and keras. Didn\u2019t work\nError Message:\nTraceback (most recent call last):\nFile \"c:\/Users\/Jai K\/CS Stuff\/test.py\", line 1, in \nimport tensorflow\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_init_.py\", line 37, in \nfrom tensorflow.python.tools import module_util as module_util\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python_init.py\", line 42, in \nfrom tensorflow.python import data\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data_init_.py\", line 21, in \nfrom tensorflow.python.data import experimental\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\experimental_init_.py\", line 96, in \nfrom tensorflow.python.data.experimental import service\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\service_init_.py\", line 419, in \nfrom tensorflow.python.data.experimental.ops.data_service_ops import distribute\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\ops\\data_service_ops.py\", line 25, in \nfrom tensorflow.python.data.ops import dataset_ops\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\ops\\dataset_ops.py\", line 29, in \nfrom tensorflow.python.data.ops import iterator_ops\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\ops\\iterator_ops.py\", line 34, in \nfrom tensorflow.python.training.saver import BaseSaverBuilder\nne 32, in from tensorflow.python.checkpoint import checkpoint_management\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\checkpoint_init_.py\", line 3, in from tensorflow.python.checkpoint import checkpoint_view\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\checkpoint\\checkpoint_view.py\", line 19, in from tensorflow.python.checkpoint import trackable_view\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\checkpoint\\trackable_view.py\", line 20, in from tensorflow.python.trackable import converter\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\trackable\\converter.py\", line 18, in from 
tensorflow.python.eager.polymorphic_function import saved_model_utils\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\polymorphic_function\\saved_model_utils.py\", line 36, in from tensorflow.python.trackable import resource\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\trackable\\resource.py\", line 22, in from tensorflow.python.eager import def_function\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\def_function.py\", line 20, in from tensorflow.python.eager.polymorphic_function.polymorphic_function import set_dynamic_variable_creation\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\polymorphic_function\\polymorphic_function.py\", line 76, in from tensorflow.python.eager.polymorphic_function import function_spec as function_spec_lib\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\polymorphic_function\\function_spec.py\", line 25, in from tensorflow.core.function.polymorphism import function_type as function_type_lib\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\core\\function\\polymorphism\\function_type.py\", line 19, in from typing import Any, Callable, Dict, Mapping, Optional, Sequence, Tuple, OrderedDict\nImportError: cannot import name 'OrderedDict' from 'typing' (C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\typing.py)","Title":"Cannot import tensorflow or keras: ordered dict","Tags":"python,tensorflow,keras","AnswerCount":1,"A_Id":75105563,"Answer":"so OrderedDict is from collections which should be on your pc anyway. it seems like some of python's dependencies are not on your system path. you should double-check check you have everything that needs to be there. I have anaconda\\scripts there\nif that fails:\ntry and pip install it (collections) anyway. 
then try and uninstall Tensorflow and Keras and everything related and then reinstall.\nfrom experience, I can tell you a lot of times this is something you need to do when modifying your Tensorflow installation since the resolver is just horrendous\nif it still doesn't work try to get a specific version of Tensorflow that is more stable.\nI hope this helps :)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75105652,"CreationDate":"2023-01-13 06:37:23","Q_Score":2,"ViewCount":3760,"Question":"ScraperException: 4 requests to https:\/\/api.twitter.com\/2\/search\/adaptive.json?include_profile_interstitial_type=1&include_blocking=1&include_blocked_by=1&include_followed_by=1&include_want_retweets=1&include_mute_edge=1&include_can_dm=1&include_can_media_tag=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_ext_alt_text=true&include_quote_count=true&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=true&include_ext_media_availability=true&send_error_codes=true&simple_quoted_tweets=true&q=%28from%3AZeeNewsEnglish%29+until%3A2023-01-12+since%3A2023-01-08+-filter%3Areplies&count=100&query_source=spelling_expansion_revert_click&pc=1&spelling_corrections=1&ext=mediaStats%2ChighlightedLabel failed, giving up.\n\nI tried following code :\nimport snscrape.modules.twitter as sntwitter\nimport time\n\nquery5 = \"(from:BBC) until:2023-01-12 since:2023-01-08 -filter:replies\"\n\nnews = [query5]\ntweets = []\n\nfor news_data in news:\n limit = 500\n for tweet in sntwitter.TwitterSearchScraper(news_data).get_items():\n\n # print(vars(tweet))\n # break\n if len(tweets) == limit:\n break\n else:\n tweets.append([tweet.date, tweet.username, tweet.content])\n \n time.sleep(2)","Title":"From yesterday i'm facing the issue of snscrape with twitter","Tags":"python,web-scraping,twitter-api-v2","AnswerCount":2,"A_Id":75132764,"Answer":"you have to install the last version of snscrape 0.5.0.20230113.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75105652,"CreationDate":"2023-01-13 06:37:23","Q_Score":2,"ViewCount":3760,"Question":"ScraperException: 4 requests to https:\/\/api.twitter.com\/2\/search\/adaptive.json?include_profile_interstitial_type=1&include_blocking=1&include_blocked_by=1&include_followed_by=1&include_want_retweets=1&include_mute_edge=1&include_can_dm=1&include_can_media_tag=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_ext_alt_text=true&include_quote_count=true&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=true&include_ext_media_availability=true&send_error_codes=true&simple_quoted_tweets=true&q=%28from%3AZeeNewsEnglish%29+until%3A2023-01-12+since%3A2023-01-08+-filter%3Areplies&count=100&query_source=spelling_expansion_revert_click&pc=1&spelling_corrections=1&ext=mediaStats%2ChighlightedLabel failed, giving up.\n\nI tried following code :\nimport snscrape.modules.twitter as sntwitter\nimport time\n\nquery5 = \"(from:BBC) until:2023-01-12 since:2023-01-08 -filter:replies\"\n\nnews = [query5]\ntweets = []\n\nfor news_data in news:\n limit = 500\n for tweet in sntwitter.TwitterSearchScraper(news_data).get_items():\n\n # print(vars(tweet))\n # break\n if len(tweets) == limit:\n break\n else:\n tweets.append([tweet.date, tweet.username, tweet.content])\n \n time.sleep(2)","Title":"From yesterday i'm facing the issue of snscrape with 
twitter","Tags":"python,web-scraping,twitter-api-v2","AnswerCount":2,"A_Id":75116684,"Answer":"Faced the same issue. I guess snscraper, made use of Twitter API (elevated access) in the backend. Twitter shut down all the bots which were fetching the API data. Twitter essentially wants you to make authentic use of their data. I suggest signing up on twitter's developer account and requesting the elevated environment. Notice the first line in error makes a call to Twitter API.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75106677,"CreationDate":"2023-01-13 08:36:40","Q_Score":1,"ViewCount":55,"Question":"Consider the following code snippet:\nimport abc\n\n\nclass Base(abc.ABC):\n @abc.abstractmethod\n def foo(self):\n pass\n\n\nclass WithAbstract(Base, abc.ABC):\n @abc.abstractmethod\n def bar(self):\n pass\n\n\nclass WithoutAbstract(Base):\n @abc.abstractmethod\n def bar(self):\n pass\n\n\nI have two questions regarding the code above:\n\nIs it necessary to inherit WithAbstract from abc.ABC as well, or is it sufficient to inherit WithoutAbstract only from Base?\nWhat is the pythonic way of going about it? What is the best practice?","Title":"Is it necessary to use abc.ABC for each base class in multiple inheritance?","Tags":"python,multiple-inheritance,abc","AnswerCount":2,"A_Id":75106842,"Answer":"WithAbstract inherits from Base which already inherits from abc.ABC so you don't have to inherit from abc.ABC again.\nUnless all of a sudden Base ceases to inherit from abc.ABC and your code breaks.\nI don't know about pythonic but I would tend to avoid multiple inheritance. True, it's not as problematic as in other languages like C++ but simple is better than complex.\nIf all the descendants of Base have to use @abc.abstractmethod decorator, then it's better to make it available from Base to avoid unnecessary copy\/paste when creating a new child class.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75107749,"CreationDate":"2023-01-13 10:14:31","Q_Score":2,"ViewCount":486,"Question":"When running poetry update, as well as other related commands, I get the process stuck at\nResolving dependencies...\n\nI'm using poetry version 1.2.2, so I wanted to upgrade it by running poetry self update -vvv\nThe process hangs indefinitely at this point\nSource (PyPI): Downloading sdist: msgpack-1.0.4.tar.gz\nCreating new session for files.pythonhosted.org\n\nIf it is a bug, is there a workaround to it?","Title":"Poetry self update hangs","Tags":"python,python-poetry","AnswerCount":2,"A_Id":75107993,"Answer":"Either your local network has issues or PyPi has problems.\nI suggest trying with a different Internet connection first, because diagnosing local network issues is very complicated on a discussion forum or remote generally.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75108331,"CreationDate":"2023-01-13 11:02:47","Q_Score":1,"ViewCount":69,"Question":"We're building a hardware device using a raspberry pi with a barcode scanner and a display. The barcode scanner functions like a USB keyboard, and sends keystrokes in quick succession after scanning a barcode. We're having a problem with pygame not detecting duplicate keys from the scanner in quick succession. When running the below code, pygame often misses duplicate keys. 
Example output from scanning the same barcode:\n5770857738\n5770857738\n570857738\n577085738\n57085738\n5770857738\n577085738\n5770857738\n5770857738\n5770857738\n5770857738\n577085738\n5770857738\n570857738\n5770857738\n\nIf I comment out the last three lines however (not updating the screen), the code is scanned successfully every time.\nWe're using pygame 2.1.2 with python 3.9.2 on a raspberry pi.\nimport sys, pygame\npygame.init()\nscreen = pygame.display.set_mode((800, 400))\nID = \"\"\nfont = pygame.font.SysFont(\"Arial\", 70)\nwhile True:\n text = font.render(\"testtext\", True, (255, 255, 255), (0, 0, 0))\n textRect = text.get_rect()\n textRect.center = (screen.get_width() \/\/ 2, screen.get_height() \/\/ 2)\n events = pygame.event.get()\n for event in events:\n if event.type == pygame.KEYDOWN:\n key = pygame.key.name(event.key)\n if key.isdigit():\n ID += key\n elif key == \"return\":\n print(ID)\n ID = \"\"\n elif key == \"left\":\n pygame.quit()\n sys.exit()\n screen.fill((0, 0, 0))\n screen.blit(text, textRect)\n pygame.display.flip()\n\nFrom testing it looks like there is about 3-4 miliseconds between each keystroke being sent by the scanner.\nWe've tried detecting keystrokes in a separate thread with different libraries, but have so far not found a workable solution.","Title":"Pygame not detecting duplicate keyboard input events","Tags":"python,pygame,raspberry-pi","AnswerCount":3,"A_Id":75113489,"Answer":"Your main loop is notorious for not adding any delay between screen frame updates - this will make the screen updtae use 100% CPU - and it is likely the event engine is doing the equivalent of \"frame skiping\" when it finally has a chance to run.\nJust add a pygame.time.delay(30) (pause 30 miliseconds) after the call to .display.flip() - that should give your O.S. a breath to catch up with the events. Since you said that commenting out any screen updates, it works, I am confident that given this space you should be fine.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75108348,"CreationDate":"2023-01-13 11:03:47","Q_Score":0,"ViewCount":24,"Question":"I have selenium find element line: element.find_elements(By.XPATH, '.\/following::input')\nBut somehow it took waaaay to long to search for all next elements on the page (around 2\/3 second !!!)\nIs it way around it (or I have done something wrong)???","Title":"Python \/ Selenium - search following elements too too long time","Tags":"python,selenium,xpath","AnswerCount":1,"A_Id":75108472,"Answer":"Just make it like this and it will sort this problem (will search for the following 10 elements instead of all)\n\nelement.find_elements(By.XPATH, '.\/\/following::input[position()<=10]')","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75108666,"CreationDate":"2023-01-13 11:36:53","Q_Score":1,"ViewCount":55,"Question":"So I'm trying to create a program that finds the shortest path from nodeA to nodeB, however, I want to block certain nodes so that it would find another path. I'm not really aiming for an optimal code here I'm just trying things out, exploring etc.\nIs it still considered a BFS if I modify it a little?","Title":"Is is still considered a BFS algorithm if I modify it A little bit?","Tags":"python,algorithm,search,artificial-intelligence,breadth-first-search","AnswerCount":1,"A_Id":75109706,"Answer":"Yes, it still is a BFS, with just a few constraints. 
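A minimal sketch of the idea (hypothetical names; it assumes the graph is an adjacency-list dict and blocked is a set of nodes to avoid):\nfrom collections import deque\n\ndef bfs_shortest_path(graph, start, goal, blocked):\n    queue = deque([[start]])\n    visited = {start}\n    while queue:\n        path = queue.popleft()\n        node = path[-1]\n        if node == goal:\n            return path\n        for neighbour in graph.get(node, []):\n            if neighbour in blocked or neighbour in visited: # the only extra constraint\n                continue\n            visited.add(neighbour)\n            queue.append(path + [neighbour])\n    return None\n\nThe membership test on blocked is the only change compared to a plain BFS; the queue, the visited set and the level-by-level expansion stay the same. 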
The essence of the BFS algorithm is the way it explores the graph; you are just exploring a subgraph (through filtering out a bit of it).","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75109865,"CreationDate":"2023-01-13 13:28:23","Q_Score":1,"ViewCount":1717,"Question":"I tried to work with kedro and started with the spaceflight tutorial.\nI installed the src\/requirements.txt in a .venv.\nWhen running kedro viz (or kedro run or even kedro --version), I get lots of Deprecation Warnings.\nOne of which is the following (relating to kedro viz)\nkedro_viz\\models\\experiment_tracking.py:16: MovedIn20Warning: [31mDeprecated API features warnings.py:109 detected! These feature(s) are not compatible with SQLAlchemy 2.0. [32mTo prevent incompatible upgrades prior to updating applications, ensure requirements files\n are pinned to \"sqlalchemy<2.0\". [36mSet environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable\n SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message.[0m (Background on SQLAlchemy 2.0 at: https:\/\/sqlalche.me\/e\/b8d9)\n Base = declarative_base()\n\nContext\nThis is a minor issue, but of course I would like to set up the project to be as clean as possible.\nSteps to Reproduce\n\nSetup a fresh kedro installation (Version 0.18.4)\nCreate a .venv and install the standard requirements\nRun any kedro command (e.g. kedro --version)\n\nWhat I've tried\nI tried to put sqlalchemy<=2.0 in the requirements.txt and again ran pip install -r src\/requirements.txt,\nbut that did not resolve it. Double-checked with pip freeze that the following version of SQLAlchemy is installed:\nSQLAlchemy==1.4.46","Title":"Python: kedro viz SQLAlchemy DeprecationWarning","Tags":"python,sqlalchemy,dependencies,kedro","AnswerCount":1,"A_Id":75109965,"Answer":"The deprecation warning is not an issue; it's just an announcement from the SQLAlchemy folks that 2.x.x is coming, and at the time of writing it has not been released.\nkedro-viz is pinned to sqlalchemy~=1.4 (and some of the datasets use \"SQLAlchemy~=1.2\"). The ~= operator is basically the same as saying sqlalchemy >= 1.4, <2. We will look to relax this once 0.2.x is released and we test if anything needs fixing.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75109914,"CreationDate":"2023-01-13 13:32:05","Q_Score":2,"ViewCount":114,"Question":"In creating a cleaning project through Python, I've found this code:\n# let's see if there is any missing data\n\nfor col in df.columns:\n pct_missing = np.mean(df[col].isnull())\n print('{} - {}%'.format(col, round(pct_missing,2)))\n\nWhich actually works fine, giving back the % of null values per column in the dataframe, but I'm a little confused on how it works:\nFirst we define a loop for each column in the dataframe, then we execute that mean, but exactly the mean of what? The mean for each column of the quantity of null cells, or what?\nJust for reference, I've worked around it with this:\nNullValues=df.isnull().sum()\/len(df)\nprint('{} - {}%'.format(col, round(NullValues,2)))\n\nthat gives me back basically the same results, but just to understand the mechanism...I'm confused about the first block of code...","Title":"What does np.mean(data.isnull()) do exactly?","Tags":"python,python-3.x","AnswerCount":3,"A_Id":75109992,"Answer":"It's something that's very intuitive once you're used to it. 
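A tiny made-up example (not from the question) shows what the expression returns:\nimport numpy as np\nimport pandas as pd\nprint(np.mean(pd.Series([1, None, 3]).isnull())) # 0.333..., i.e. one null out of three values\n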
The steps leading to this kind of code could be like the following:\n\nTo get the percentage of null values, we need to count all null rows, and divide the count by the total number of rows.\nSo, first we need to detect the null rows. This is easy, as there is a provided method: df[col].isnull().\nThe result of df[col].isnull() is a new column consisting of booleans -- True or False.\nNow we need to count the Trues. Here we can realize that counting Trues in a boolean array is the same as summing the array: True can be converted to 1, and False to zero.\nSo we would be left with df[col].isnull().sum() \/ len(df[col]).\nBut summing and dividing by the length is just the arithmetic mean! Therefore, we can shorten this to arrive at the final result: mean(df[col].isnull()).","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75110048,"CreationDate":"2023-01-13 13:45:57","Q_Score":1,"ViewCount":41,"Question":"I would like to ask some questions about lmfit accuracy (and possibly obtain better fit results by obtaining the answer).\nAll experimental spectra are limited by sampling, that is, by the distance between two points in the x-axis direction. I have noticed (so far) two instances when lmfit tries to overcome this limitation, and it is causing me problems:\n\nWhen FWHM of a peak tends to zero.\nI assume that if any two neighbor points are separated by around 0.013, then the fit result for the FWHM of 0.00000005 and multimillion percent error don't make much sense. I have solved this problem by putting a proper lower boundary on the FWHM of my peaks. I have also tried fitting some peaks with a Voigt profile, and whenever the Lorentzian width shows this kind of behavior, I convert it into a pure Gaussian. I think it makes no sense to keep it a Voigt in this condition. Is my reasoning correct?\n\nWhen the position of a peak tends to zero. 
I believe the reasoning is the same as what I mentioned above, but this time, I don't really know how to limit it \"from being too accurate\".\n\n\nHere is the code of the part that is causing actual problems:\n\n\nimport lmfit\nfrom lmfit import Model, Parameters\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx=[-0.3933, -0.38, -0.3667, -0.3533, -0.34, -0.3267, -0.3133, -0.3, -0.2867, -0.2733, -0.26, -0.2467, -0.2333, -0.22, -0.2067, -0.1933, -0.18, -0.1667, -0.1533, -0.14, -0.1267, -0.1133, -0.1, -0.0867, -0.0733, -0.06, -0.0467, -0.0333, -0.02, -0.0067, 0.0067, 0.02, 0.0333, 0.0467, 0.06, 0.0733, 0.0867, 0.1, 0.1133, 0.1267, 0.14, 0.1533, 0.1667, 0.18, 0.1933, 0.2067, 0.22, 0.2333, 0.2467, 0.26, 0.2733, 0.2867]\n\ny=[0.0048, 0.005, 0.0035, 0.0034, 0.0038, 0.004, 0.0034, 0.0036, 0.0038, 0.0046, 0.0038, 0.0039, 0.0054, 0.0065, 0.0073, 0.0086, 0.0079, 0.0102, 0.0105, 0.0141, 0.0192, 0.0259, 0.0275, 0.0279, 0.0257, 0.0247, 0.022, 0.0244, 0.0268, 0.0295, 0.0275, 0.0227, 0.0192, 0.0138, 0.0075, 0.0088, 0.0081, 0.005, 0.0041, 0.0034, 0.0023, 0.0019, 0.0021, 0.0019, 0.0016, 0.0013, 0.0022, 0.002, 0.0019, 0.0014, 0.0022, 0.0012]\n\ndef gfunction_norm(x, pos, gfwhm, int):\n gwid = gfwhm\/(2*np.sqrt(2*np.log(2)));\n gauss= (1\/(gwid*(np.sqrt(2*np.pi))))*(np.exp((-1.0\/2)*((((x-pos)\/gwid))**2)))\n return int*(gauss-gauss.min())\/(gauss.max()-gauss.min())\n \ndef final(x, a, b, int2, pos2, gfwhm2, int3, pos3, gfwhm3):\n return a*x+b + gfunction_norm(x, pos2, gfwhm2, int2) + gfunction_norm(x, pos3, gfwhm3, int3)\n \nparams1=Parameters()\nparams1.add('a', value=-2.8e-04)\nparams1.add('b', value=0.003)\n\nparams1.add('int2', value=0.04, min=0.01)\nparams1.add('pos2', value=0, min=-0.05, max=0.05)\nparams1.add('gfwhm2', value=0.05, min = 0.005, max=0.2)\n\nparams1.add('int3', value=0.04, min=0.01)\nparams1.add('pos3', value=-0.11, min=-0.13, max=-0.06)\nparams1.add('gfwhm3', value=0.090001, min=0.078, max=0.2)\n\n\nmodel1 = Model(final)\nresult1 = model1.fit(y, params1, x=x)\nprint(result1.fit_report())\n\nplt.plot(x, y, 'bo', markersize=4)\nplt.plot(x, result1.best_fit, 'r-', label='best fit', linewidth=2)\nplt.plot(x, gfunction_norm(x, result1.params['pos2'].value, result1.params['gfwhm2'].value, result1.params['int2'].value))\nplt.plot(x, gfunction_norm(x, result1.params['pos3'].value, result1.params['gfwhm3'].value, result1.params['int3'].value))\nplt.legend()\nplt.show()\n\n\n\nThis is what I obtain as result of the fit:\na: -0.00427895 +\/- 0.00102828 (24.03%) (init = -0.00028)\nb: 0.00331554 +\/- 2.6486e-04 (7.99%) (init = 0.003)\nint2: 0.02301220 +\/- 9.6324e-04 (4.19%) (init = 0.04)\npos2: 0.00175738 +\/- 0.00398305 (226.65%) (init = 0)\ngfwhm2: 0.08657191 +\/- 0.00708478 (8.18%) (init = 0.05)\nint3: 0.02261912 +\/- 8.7317e-04 (3.86%) (init = 0.04)\npos3: -0.09568096 +\/- 0.00432018 (4.52%) (init = -0.11)\ngfwhm3: 0.09304840 +\/- 0.00797209 (8.57%) (init = 0.090001)\n\nYou can see the huge error next to pos2, and I'm not sure how to fix it.\nThank you!","Title":"Is it possible to limit the accuracy of lmfit?","Tags":"python,gaussian,errorbar,lmfit","AnswerCount":1,"A_Id":75115250,"Answer":"As values tend to zero, the \"percent uncertainty\" will increase. 
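In the report above, that percentage is just the standard error divided by the best-fit value: 0.00398305 \/ 0.00175738 = 2.2665 (rounded), which is the 226.65% printed next to pos2. 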
That is, if the x-axis were shifted by +1, then your pos2 would have a value of 1.00176 with a standard error of 0.004, and the percent shown would be below 1% -- and the fit would be exactly the same.\nYou could interpret that as \"pos2 is consistent with 0\", but it is also true that the estimated standard error is 0.004, whereas the x-spacing of your data is around 0.01. So, yes, the value is close to 0, but the fit apparently pins it down to be pretty close to that very small best-fit value.\nThat is, don't get too concerned about the size of the standard error compared to the best-fit value.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75110412,"CreationDate":"2023-01-13 14:19:07","Q_Score":1,"ViewCount":88,"Question":"I'm trying to perform a math operation (specifically addition) with the values of integer fields on my Django models, but I kept getting this warning even before running the program:\n\"Class 'IntegerField' does not define 'add', so the '+' operator cannot be used on its instances\"\nThis is my Django model code:\nclass Applicants(models.Model):\n fname = models.CharField(max_length=255)\n lname = models.CharField(max_length=255)\n number = models.CharField(max_length=255)\n email = models.CharField(max_length=255)\n gender = models.CharField(max_length=255)\n p_course = models.CharField(max_length=255)\n o_course = models.CharField(max_length=255)\n grade1 = models.IntegerField(max_length=255)\n grade2 = models.IntegerField(max_length=255)\n grade3 = models.IntegerField(max_length=255)\n grade4 = models.IntegerField(max_length=255)\n grade5 = models.IntegerField(max_length=255)\n grade6 = models.IntegerField(max_length=255)\n total_grade = grade1 + grade2 + grade3 + grade4 + grade4 + grade5","Title":"I'm trying to perform a math operation (specifically addition) with the contents of integer fields on my Django models","Tags":"python,django,django-models,addition","AnswerCount":3,"A_Id":75151457,"Answer":"It turns out that the input received from HTML comes as a string.\nI had to convert each grade in the views.py file\nfrom request.POST['grade1']\nto int(request.POST['grade1']) for each of the grades and used the self.grade1\u2026 method in models.py","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75110954,"CreationDate":"2023-01-13 15:06:53","Q_Score":0,"ViewCount":49,"Question":"I'm creating a discord bot in Python and I would like to make my bot command the music bot to play music. For example, I want my bot to write \/play prompet:[SONG_NAME] in a chat room and let it be recognized and played by the other music bot. If someone has an idea to make it work, please help!\nI've been trying to just write a string with my own bot \"\/play prompet:[SONG_NAME]\" but the other bot is not reacting.","Title":"How to make a discord bot use other discord bot commands?","Tags":"python,discord,discord.py,bots","AnswerCount":1,"A_Id":75111110,"Answer":"You can't do this. Discord.py by default doesn't invoke commands on messages of other bots, unless you override on_message and call process_commands without checking the message author.\nConsequently, if the bot is not yours and you cannot control it, there's nothing you can do about it. 
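For completeness: if both bots were yours, the override mentioned above would look roughly like the sketch below (illustrative only; since process_commands itself ignores bot authors, the context is built directly):\nimport discord\nfrom discord.ext import commands\n\nintents = discord.Intents.default()\nintents.message_content = True\nbot = commands.Bot(command_prefix=\"!\", intents=intents)\n\n@bot.event\nasync def on_message(message):\n    if message.author == bot.user:\n        return\n    ctx = await bot.get_context(message) # no message.author.bot check here\n    await bot.invoke(ctx)\n\n# bot.run(\"YOUR_TOKEN\") would go here\n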
If the other bot allows it then it will work without you having to do anything.\nInvoking slash commands from chat will never work, as they're not made to be called by bots.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75110981,"CreationDate":"2023-01-13 15:09:53","Q_Score":5,"ViewCount":10889,"Question":"I am facing below issue while loading the pretrained BERT model from HuggingFace due to SSL certificate error.\nError:\n\nSSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: \/dslim\/bert-base-NER\/resolve\/main\/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1108)')))\n\nThe line that is causing the issue is:\ntokenizer = AutoTokenizer.from_pretrained(\"dslim\/bert-base-NER\")\n\nSource code:\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"dslim\/bert-base-NER\")\nmodel = AutoModelForTokenClassification.from_pretrained(\"dslim\/bert-base-NER\")\n\nI am expecting to download pre-trained models while running the code in jupyter lab on Windows.","Title":"SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: \/dslim\/bert-base-NER\/resolve\/main\/tokenizer_config.json","Tags":"python-3.x,huggingface-transformers,bert-language-model,huggingface-tokenizers,huggingface","AnswerCount":3,"A_Id":76632994,"Answer":"This could be due to a firewall issue. For example for some organisations this occurs when using LAN but not with WIFI.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75111910,"CreationDate":"2023-01-13 16:27:34","Q_Score":0,"ViewCount":43,"Question":"I'm trying to import a python script (called flask_router.py) from the same directory into another python script (import_requests.py) but am receiving the following ModuleNotFound error in both the terminal and VS Code.\nI've tried troubleshooting with pip install as well as the sys module to append the path. Also confirmed the directory is found in the PYTHONPATH and that the correct version of python is in use (v3.10.9).\nFeels like I've exhausted every option to this point. It seems so simple that I should be able to import a script that exists in the same folder, but clearly not. Does anyone have an idea?","Title":"Import *local python file* could not be resolved","Tags":"python,python-3.x,python-import,python-module","AnswerCount":1,"A_Id":75112099,"Answer":"What you can do is create a package for it (personal) and turn it into a.whl file","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75114038,"CreationDate":"2023-01-13 20:10:30","Q_Score":1,"ViewCount":38,"Question":"I'm new to Python and having troubles understanding Python MRO. 
Could somebody explain the question below as simple as possible?\nWhy this code throws TypeError: Cannot create a consistent method resolution:\nclass A:\n def method(self):\n print(\"A.method() called\")\n \nclass B:\n def method(self):\n print(\"B.method() called\")\n \nclass C(A, B):\n pass\n \nclass D(B, C):\n pass\n \nd = D()\nd.method()\n\nWhile this code works fine:\nclass A:\n def method(self):\n print(\"A.method() called\")\n \nclass B:\n def method(self):\n print(\"B.method() called\")\n \nclass C(A, B):\n pass\n \nclass D(C, B):\n pass\n \nd = D()\nd.method()","Title":"Python MRO in plain English","Tags":"python-3.x,multiple-inheritance,method-resolution-order","AnswerCount":1,"A_Id":75114107,"Answer":"When you resolve the hierarchy of the methods in your first example, you get\n\nB.method\nA.method\nB.method\n\nIn the second example, you get\n\nA.method\nB.method\nB.method\n\nThe first one doesn't work because it's inconsistent in regards to whether B.method comes before or after A.method.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75114334,"CreationDate":"2023-01-13 20:47:55","Q_Score":1,"ViewCount":1213,"Question":"I am attempting to use Flask's rate limiting library to rate limit an api based on the seconds.\nSo I have used this exact same format to limit requests to an API on an Apahce Server. However I am now using an NGINX. I do not thinks this makes a difference but when I run this code:\nimport api\n\napp = Flask(__name__, instance_relative_config=True)\n\nlimiter = Limiter(app, default_limits=[\"5\/second\"], key_func=lambda: get_remote_address)\n\nlimiter.limit(\"5\/second\", key_func=lambda: request.args.get('token') if 'token' in request.args else get_remote_address)(api.bp)\n\napp.register_blueprint(api.bp)\n\nAgain I have ran this exact same code on another server, but now it is giving this error:\n limiter = Limiter(app, \"5\/second\", key_func=lambda: request.args.get('token') if 'token' in request.args else get_remote_address)\n\nTypeError: Limiter.__init__() got multiple values for argument 'key_func'\n\nAny help would be great. I am using Flask-Limiter in python and running gevent on gunicorn server for NGINX. Thanks.","Title":"Flask Limiter error: TypeError: Limiter.__init__() got multiple values for argument 'key_func'","Tags":"python,api,flask,rate-limiting","AnswerCount":1,"A_Id":75905162,"Answer":"Your Limiter class instantiation is incorrect. Below is the correct one-\nlimiter = Limiter(get_remote_address, app=app, default_limits=[\"200 per day\", \"50 per hour\"])","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75114841,"CreationDate":"2023-01-13 22:01:50","Q_Score":11,"ViewCount":6144,"Question":"I created a new environment using conda and wanted to add it to jupyter-lab. I got a warning about frozen modules? (shown below)\nipython kernel install --user --name=testi2 \n\n0.00s - Debugger warning: It seems that frozen modules are being used, which may\n\n0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off\n\n0.00s - to python to disable frozen modules.\n\n0.00s - Note: Debugging will proceed. 
Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.\n\nInstalled kernelspec testi2 in \/home\/michael\/.local\/share\/jupyter\/kernels\/testi2\n\nAll I had installed were...\nipykernel, ipython, ipywidgets, jupyterlab_widgets, ipympl\nPython Version 3.11.0, Conda version 22.11.0\nAnd I used \"conda install nodejs -c conda-forge --repodata-fn=repodata.json\" to get the latest version of nodejs\nI also tried re-installing ipykernel to a previous version (6.20.1 -> 6.19.2)","Title":"Debugger Warning from ipython: Frozen Modules [python 3.11]","Tags":"python,anaconda,conda,jupyter-lab,python-3.11","AnswerCount":2,"A_Id":76113145,"Answer":"The main problem i noticed with this is that when you try to run python on visual studio without without a py extension you're most likely to run into an issue like this\nand Finally make sure python path is added to Environment Variable . I have done this and its working fine . I am Using Visual studio.\nPath is always found here on Windows\nC:\\Users\\NEW\\AppData\\Local\\Programs\\Python\\Python311","Users Score":-1,"is_accepted":false,"Score":-0.0996679946,"Available Count":1},{"Q_Id":75114867,"CreationDate":"2023-01-13 22:04:51","Q_Score":1,"ViewCount":63,"Question":"Firstly, we have a normal list:\ningredients = [\"hot water\", \"taste\"]\n\nTrying to print this list's hash will expectedly raise a TypeError:\nprint(hash(ingredients))\n\n>>> TypeError: unhashable type: 'list'\n\nwhich means we cannot use it as a dictionary key, for example.\nBut now suppose we have a Tea class which only takes one argument; a list.\nclass Tea:\n \n def __init__(self, ingredients: list|None = None) -> None:\n\n self.ingredients = ingredients\n if ingredients is None:\n self.ingredients = []\n\nSurprisingly, creating an instance and printing its hash will not raise an error:\ncup = Tea([\"hot water\", \"taste\"])\nprint(hash(cup))\n\n>>> 269041261\n\nThis hints at the object being hashable (although pretty much being identical to a list in its functionality). Trying to print its ingredients attribute's hash, however, will raise the expected error:\nprint(hash(cup.ingredients))\n\n>>> TypeError: unhashable type: 'list'\n\nWhy is this the case? Shouldn't the presence of the list \u2014 being an unhashable type \u2014 make it impossible to hash any object that 'contains' a list? For example, now it is possible to use our cup as a dictionary key:\ndct = {\n cup = \"test\"\n}\n\ndespite the fact that the cup is more or less a list in its functionality. So if you really want to use a list (or another unhashable type) as a dictionary key, isn't it possible do do it in this way? (not my main question, just a side consequence)\nWhy doesn't the presence of the list make the entire datatype unhashable?","Title":"If a list is unhashable in Python, why is a class instance with list attribute not?","Tags":"python,list,hashable","AnswerCount":1,"A_Id":75115184,"Answer":"Oops, it looks like you did not understand what a hash is. The rule is that a hash should never change over the whole life of an object, and should be compatible with equality. It does not matter whether the object changes or not.\n2 distinct list objects having same elements will compare equal. That is the actual reason for a list not being hashable. Suppose we manage to compute a hash for a class that would mimic a list including the equality part. Let us create two distinct instances with distinct hash values. No problem till here. 
Now let us create a third instance having the same elements as the first one. They will compare equal so their hash values should be the same. But if we change the elements of that third instance to be the elements of the second one, its hash should be the same as the one of the second instance - which is forbidden since a hash value shall not change over the lifetime of an object.\nBut you have only created a class that happens to contain a list. By default, 2 distinct instances will not compare equal even if they contain identical lists. Because of that, your class will be hashable and the hash will be the address of the object in CPython. It will only become non hashable if you add an __eq__ special method that would make two objects compare equal if the lists that they contain are, because the hash function will no longer be able to at the same time be compatible with equality and never change over the lifetime of the object.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75115453,"CreationDate":"2023-01-14 02:53:34","Q_Score":7,"ViewCount":187,"Question":"I've got a Django application with djongo as a database driver. The models are:\nclass Blog(models.Model):\n _id = models.ObjectIdField()\n name = models.CharField(max_length=100, db_column=\"Name\")\n tagline = models.TextField()\n\nclass Entry(models.Model):\n _id = models.ObjectIdField()\n blog = models.EmbeddedField(\n model_container=Blog\n )\n\nWhen I run this application, I got an error:\nFile \"\\.venv\\lib\\site-packages\\djongo\\models\\fields.py\", line 125, in _validate_container\n raise ValidationError(\ndjango.core.exceptions.ValidationError: ['Field \"m.Blog.name\" of model container:\"\" cannot be named as \"name\", different from column name \"Name\"']\n\nI want to keep the name of the field name in my model and database different because the database already exists, and I can't change it. The database uses camelCase for naming fields, whereas in the application, I want to use snake_case.\nHow to avoid this error?","Title":"Django EmbeddedField raises ValidationError because of renamed field","Tags":"python,django,mongodb,django-models,djongo","AnswerCount":4,"A_Id":75203651,"Answer":"you might need to do manage.py makemigrations and manage.py migrate","Users Score":1,"is_accepted":false,"Score":0.049958375,"Available Count":1},{"Q_Id":75116582,"CreationDate":"2023-01-14 08:05:46","Q_Score":0,"ViewCount":24,"Question":"I have created a custom module which I am trying to install but it shows me cancel installation button after I try to install it.\nThis issue is not seen in any other custom modules or other modules. But installation is not working for my module.","Title":"Cancel installation is shown after installing my module in Odoo","Tags":"python-3.x,odoo,odoo-15","AnswerCount":1,"A_Id":75116743,"Answer":"\"Cancel Installation\" button could be visible on these situations.\n\nAnother module is being installed.\nWhenever the server restarts.\n\nAccording to your query i guess that the cause of your issue might be 'Circular Dependency'.\nPlease re-check your 'depends' : [] inside __manifest__.py","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75119003,"CreationDate":"2023-01-14 15:18:51","Q_Score":2,"ViewCount":83,"Question":"I need to make my scheduler fire every X days at the specific time (e.g. 
every 7 days at 11:30)\nmy code:\ndef make_interval(record_date: str, record_time: str, record_title: str):\n hours, minutes = _get_hours_minutes(record_time)\n trigger = AndTrigger([IntervalTrigger(days=int(record_date)),\n CronTrigger(hour=hours, minute=minutes)])\n scheduler.add_job(_send_notification, trigger=trigger,\n kwargs={...},\n id=record_title,\n timezone=user_timezone)\n\nbut I got error: [Errno 22] Invalid argument on the third line.\nWhat's wrong? Can't get why it doesnt work\n_get_hours_minutes (just returns separately the value of hours and minutes from \"HH:MM\")\ndef _get_hours_minutes(user_time: str) -> (str, str):\n return user_time[:2], user_time[3:5]\n\ntracebacks:\nTraceback (most recent call last):\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\scheduler\\main.py\", line 92, in make_interval\n scheduler.add_job(_send_notification, trigger=trigger,\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\venv\\Lib\\site-packages\\apscheduler\\schedulers\\base.py\", line 447, in add_job\n self._real_add_job(job, jobstore, replace_existing)\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\venv\\Lib\\site-packages\\apscheduler\\schedulers\\base.py\", line 863, in _real_add_job\n replacements['next_run_time'] = job.trigger.get_next_fire_time(None, now)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\venv\\Lib\\site-packages\\apscheduler\\triggers\\combining.py\", line 55, in get_next_fire_time\n fire_times = [trigger.get_next_fire_time(previous_fire_time, now)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\venv\\Lib\\site-packages\\apscheduler\\triggers\\combining.py\", line 55, in \n fire_times = [trigger.get_next_fire_time(previous_fire_time, now)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\venv\\Lib\\site-packages\\apscheduler\\triggers\\interval.py\", line 68, in get_next_fire_time\n return normalize(next_fire_time)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\pizhlo21\\Desktop\\Folder\\python\\tg_bot_reminder\\venv\\Lib\\site-packages\\apscheduler\\util.py\", line 431, in normalize\n return datetime.fromtimestamp(dt.timestamp(), dt.tzinfo)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nOSError: [Errno 22] Invalid argument\n\nMy OS platform: Windows 10","Title":"apscheduler fire every X days at the specific time","Tags":"python,python-3.x,apscheduler","AnswerCount":1,"A_Id":75119122,"Answer":"I would take a closer look at \"record_time\" and the format that you pass to \"_get_hours_minutes\". It has to be in the format of \"HH:MM\" (assuming that the Errno 22 error is for that line. 
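If you want to check that quickly, a hypothetical snippet (not from the original code) that validates the string before slicing could be:\nfrom datetime import datetime\n\ndef _validate_time(user_time: str) -> None:\n    datetime.strptime(user_time, \"%H:%M\") # raises ValueError if user_time is not HH:MM\n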
There are other ways to use the datetime library so that you don't have to slice strings.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75119184,"CreationDate":"2023-01-14 15:48:08","Q_Score":1,"ViewCount":138,"Question":"i am trying to get the email address of the current user after oauth.\nI have found a solution on the web:\ndef get_user_info():\n flow = InstalledAppFlow.from_client_secrets_file(\n 'client_secrets.json',\n scopes=['openid',\n 'https:\/\/www.googleapis.com\/auth\/userinfo.email',\n 'https:\/\/www.googleapis.com\/auth\/userinfo.profile'])\n\n flow.run_local_server()\n credentials = flow.credentials\n\n # service = build('calendar', 'v3', credentials=credentials)\n\n # Optionally, view the email address of the authenticated user.\n user_info_service = build('oauth2', 'v2', credentials=credentials)\n user_info = user_info_service.userinfo().get().execute()\n user_email = user_info['email']\n return user_email\n\nFirst it was working on one machine, then i tried it on another:\nFirst the Authentification pop up comes up and is satified:\nThe authentication flow has completed. You may close this window.\nOn the second run however i get:\nTraceback (most recent call last):\n File \"\/home\/jakob\/PycharmProjects\/pywhatsapp2\/main.py\", line 313, in \n user_email = get_user_info()\n File \"\/home\/jakob\/PycharmProjects\/pywhatsapp2\/main.py\", line 290, in get_user_info\n flow.run_local_server()\n File \"\/home\/jakob\/.local\/lib\/python3.10\/site-packages\/google_auth_oauthlib\/flow.py\", line 499, in run_local_server\n local_server = wsgiref.simple_server.make_server(\n File \"\/usr\/lib\/python3.10\/wsgiref\/simple_server.py\", line 154, in make_server\n server = server_class((host, port), handler_class)\n File \"\/usr\/lib\/python3.10\/socketserver.py\", line 452, in __init__\n self.server_bind()\n File \"\/usr\/lib\/python3.10\/wsgiref\/simple_server.py\", line 50, in server_bind\n HTTPServer.server_bind(self)\n File \"\/usr\/lib\/python3.10\/http\/server.py\", line 136, in server_bind\n socketserver.TCPServer.server_bind(self)\n File \"\/usr\/lib\/python3.10\/socketserver.py\", line 466, in server_bind\n self.socket.bind(self.server_address)\nOSError: [Errno 98] Address already in use","Title":"Trying to get User (email) after google oath with python ->","Tags":"python,oauth-2.0","AnswerCount":1,"A_Id":75119688,"Answer":"The error \u201cAddress is already in use\u201d is thrown because the server that deals with oauth cannot be started because you already have another server currently running on that address. What it seems it happened is that in your first run you started a server and never stopped it, so when you tried to run it again, the server could not start because the address was already being used. Normally you want to always have a server running instead of always having to start one specifically for oauth in your case.\nIf the address is being used by another server try changing the port number.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75119308,"CreationDate":"2023-01-14 16:07:14","Q_Score":1,"ViewCount":434,"Question":"I changed the data type of a field \"max\" in my model from text to float and I got this error when I run python3 manage.py migrate after makemigrations. 
What's the solution please?\nRunning migrations:\npsycopg2.errors.InvalidTextRepresentation: invalid input syntax for type double precision: \"max\".\nThe above exception was the direct cause of the following exception:\ndjango.db.utils.DataError: invalid input syntax for type double precision: \"max\n my original model:\n class calculation(models.Model):\n fullname = models.TextField(blank=True, null=True)\n min = models.TextField(blank=True, null=True)\n max = models.TextField(blank=True, null=True)\n unit = models.TextField(blank=True, null=True)\n\n my model after the change::\n class calculation(models.Model):\n fullname = models.TextField(blank=True, null=True)\n min = models.FloatField(blank=True, null=True)\n max = models.FloatField(blank=True, null=True)\n unit = models.TextField(blank=True, null=True)","Title":"psycopg2.errors.InvalidTextRepresentation: invalid input syntax for type double precision: \"max\"","Tags":"python-3.x,django,postgresql,django-models","AnswerCount":1,"A_Id":75119387,"Answer":"The error message you are seeing is related to the data that is already in the database for the \"max\" field. When you changed the data type of the \"max\" field from text to float, the existing text data in the field cannot be automatically converted to a float data type.\nThe solution is to do this migration in two steps:\nFirst, create a new field (e.g. \"max_temp\") of type float in your model and update your code to use this new field instead of the old \"max\" field.\nThen, create a data migration to copy the data from the old \"max\" field to the new \"max_temp\" field.\nThen, remove the old \"max\" field and rename the new \"max_temp\" field to \"max\".\nYou can use the django built-in commands like python3 manage.py makemigrations --empty to add the empty migration, python3 manage.py add_field to add the field and then write the migration function to copy the data from the old field to the new field in the migration file.\nYou can also use 3rd party libraries like \"django-db-multitenant\" to handle the migration in a more easy way, it will take care of the data migration and renaming the columns in the database.\nIt's important to have a backup of your data before doing any migrations.\nPlease note that this process can be complex and it's important to test it in a development environment before deploying it to a production environment.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75120551,"CreationDate":"2023-01-14 19:03:36","Q_Score":1,"ViewCount":187,"Question":"For some reason, my code below is giving inconsistent results. The files in files do not ever change. However, the result of hasher.hexdigest() is giving different values each time this function runs. My goal with this code is to only generate a new settings file if and only if the checksum\/hash in the current settings file does not match the result of the three settings files hashed with hashlib. 
Does anyone see what I might be doing wrong?\ndef should_generate_new_settings(qt_settings_generated_path: Path) -> tuple[bool, str]:\n \"\"\" compare checksum of user_settings.json and the current ini file to what is stored in the currently generated settings file \"\"\"\n generate = False\n hasher = hashlib.new('md5')\n if not qt_settings_generated_path.exists():\n generate = True\n\n try:\n # if the file is corrupt, it may have a filesize of 0.\n generated_file = qt_settings_generated_path.stat()\n if generated_file.st_size < 1:\n generate = True\n\n files = [paths.user_settings_path, paths.settings_generated_path, Path(__file__)]\n for path in files:\n file_contents = path.read_bytes()\n hasher.update(file_contents)\n\n with qt_settings_generated_path.open('r') as file:\n lines = file.read().splitlines()\n\n checksum_prefix = '# checksum: '\n for line in lines:\n if line.startswith(checksum_prefix):\n file_checksum = line.lstrip(checksum_prefix)\n if file_checksum != hasher.hexdigest():\n generate = True\n break\n except FileNotFoundError:\n generate = True\n\n return (generate, hasher.hexdigest())","Title":"Python hashlib is giving different results","Tags":"python,md5,hashlib","AnswerCount":1,"A_Id":75120721,"Answer":"I figured out the issue. The solution was simply to store the hash digest in another file other than the file I'm generating the settings into.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75121677,"CreationDate":"2023-01-14 22:26:51","Q_Score":0,"ViewCount":48,"Question":"I am trying to access a site that is asking to verify that it is human accessing the site the tool used by the site is cloudflare\nI use the user-agent to access the sites and so far I haven't had any problems, but with the current site I'm facing this barrier and there's a detail I configured a 100 second sleep to do the recognition manually and even so the site recognizes that webdrive is a robot.\noptions.add_argument('--user-agent=\"Mozilla\/5.0 (Windows Phone 10.0; Android 4.2.1; Microsoft; Lumia 640 XL LTE) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/42.0.2311.135 Mobile Safari\/537.36 Edge\/12.10166\"')","Title":"How not to be detected by browser using selenium?","Tags":"python,selenium","AnswerCount":2,"A_Id":75273894,"Answer":"Maybe changing your public IP address would work. I had this issue before and struggled with headers and drivers.\nBut this varied from website to website though.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75121781,"CreationDate":"2023-01-14 22:49:14","Q_Score":0,"ViewCount":29,"Question":"I have a Orange python widget that I created. I would like to make it a standard widget in Orange canvas. I have reviewed several tutorials to do this so I understand the code that must be created but after that how do you import that code into the widget library in Canvas?\nNo problems at this point looking for a complete tutorial on widget creation and import into Orange Canvas.\nReviewed several tutorials both text and video but they fall short of successful importing the code into Canvas.\nWhen I followed the widget creation on Orange and ran the install command \"pip install -e .\" from the setup directory the command executed successfully but when I open Orange Canvas the Demo OWDataSampler was not present. 
Not sure how the setup tool knows how to update the Orange application to recognize where the application is installed.\nAny help would be appreciated.","Title":"Orange Data Mining Widget Creation in Canvas","Tags":"python,orange","AnswerCount":1,"A_Id":75140732,"Answer":"I was able to get it to work properly. Need to open the Orange command prompt, navigate to the directory that has the setup.py file, and run the pip install -e . command. The widget is listed properly in canvas.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75122083,"CreationDate":"2023-01-14 23:57:04","Q_Score":1,"ViewCount":54,"Question":"While going through the third part of a problem from one of MIT's OCW Problem sets, I encountered some doubts. The problem description is as follows:\n\nPart C: Finding the right amount to save away:\nIn Part B, you had a chance to explore how both the percentage of your salary that you save each month and your annual raise affect how long it takes you to save for a down payment. This is nice, but suppose you want to set a particular goal, e.g. to be able to afford the down payment in three years. How much should you save each month to achieve this? In this problem, you are going to write a program to answer that question. To simplify things, assume: 3\n\nYour semiannual raise is .07 (7%)\nYour investments have an annual return of 0.04 (4%)\nThe down payment is 0.25 (25%) of the cost of the house\nThe cost of the house that you are saving for is $1M.\n\nYou are now going to try to find the best rate of savings to achieve a down payment on a $1M house in 36 months. Since hitting this exactly is a challenge, we simply want your savings to be within $100 of the required down payment. In ps1c.py , write a program to calculate the best savings rate, as a function of your starting salary. You should use bisection search to help you do this efficiently. You should keep track of the number of steps it takes your bisections search to finish. You should be able to reuse some of the code you wrote for part B in this problem. Because we are searching for a value that is in principle a float, we are going to limit ourselves to two decimals of accuracy (i.e., we may want to save at 7.04% or 0.0704 in decimal \u2013 but we are not going to worry about the difference between 7.041% and 7.039%). This means we can search for an integer between 0 and 10000 (using integer division), and then convert it to a decimal percentage (using float division) to use when we are calculating the current_savings after 36 months. By using this range, there are only a finite number of numbers that we are searching over, as opposed to the infinite number of decimals between 0 and 1. This range will help prevent infinite loops. The reason we use 0 to 10000 is to account for two additional decimal places in the range 0% to 100%. Your code should print out a decimal (e.g., 0.0704 for 7.04%).\n\nThe problem description clearly states later on that this problem may be solved in various different ways by implementing bisection search in different styles, which would ultimately give different results and they are all correct, i.e., there are multiple rates that will allow for the savings to be in ~100 of the downpayment. 
However, the solution to the problem is no longer my concern, as I realize I already solved it; what I want to know now is what modifications do I have to make to my code so I can produce outputs with similar accuracy to that of the expected test output provided below:\n\nTest Case 1\n>>> Enter the starting salary: 150000\nBest savings rate: 0.4411\nSteps in bisection search: 12\n\n\nThis is my solution to the problem:\ndef calc_savings(startingSalary:int, nummonths:int, portion:float):\n \"\"\"\n Calculated total savings with fixed annual raise and r.o.i for x no. of months \n at 'portion' percentage of salary saved every month.\n \"\"\"\n savings = 0\n salary=startingSalary\n for months in range(1, nummonths+1):\n savings+= (salary\/12*portion)+(savings*(0.04\/12))\n if months%6==0:\n salary = salary+(0.07*salary)\n\n return savings\n\ncost = 1_000_000\ndownpayment = cost*0.25\nstartingsalary = int(input(\"Enter starting salary: \"))\nstep = 0\nhigh = 10000\nlow = 0\n\nif startingsalary*3 < downpayment:\n print(\"Saving the down payment in 36 months with this salary is not possible.\")\n\nelse:\n while True:\n portion = int((high+low)\/2)\/10000\n current_savings=calc_savings(startingsalary, 36, portion)\n\n if downpayment - current_savings < 100 and downpayment-current_savings>=0:\n break\n elif downpayment-current_savings>=100:\n low = portion*10000\n step+=1\n elif downpayment-current_savings < 0:\n high = portion*10000\n step+=1\n print(f\"Best savings rate: {portion}\")\n print(f\"Steps in bisection search: {step}\")\n\nAnd this is the result I'm getting:\n>>> Enter the starting salary: 150000\nBest savings rate: 0.441\nSteps in bisection search: 12\n\nI realized that this has something to do with the way I choose my limits for the bisection search and how I later convert the result I get from it back to the required number of significant digits.\nAfter playing around with the code for some time, I realized that the no. of significant digits in my result is the same as the expected results, I tested this by changing the no. of months to 40 from 36 and figured that it says 0.441 because it's actually 0.4410 which is super close to 0.4411.\nI'm just wondering if there's anything I can do to my code to hit that exact 0.4411.","Title":"What needs to be modified in the code to achieve a desired accuracy of floating point result?","Tags":"python,floating-accuracy","AnswerCount":1,"A_Id":75122176,"Answer":"First, you are not doing floating point optimization. Even though you use floating point operations for intermediate steps, you are saving your optimization variable in a fixed-point format, therefore doing fixed point optimization. When you use an integer and a constant scaling factor (100000) to represent a rational or real number that is fixed point, not floating point.\nSince you are working with a fixed point value, if you want to be sure of getting a result that's accurate to the nearest .0001, you simply have to change your exit condition. Instead of exiting as soon as your answer is correct to the nearest $100 in terms of dollars saved, wait until the answer is correct to the nearest .0001 as a fraction of the salary. 
Which, because of your fixed point representation means waiting until high and low are separated by 1 count, and then reporting whichever of the numbers gives the closest result to the desired final savings.\nSide note: Since high and low are always integers, you can use (high+low)\/\/2 to use integer operations to get the same result as int((high+low)\/2) without converting to floats and back again.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75122489,"CreationDate":"2023-01-15 01:59:39","Q_Score":0,"ViewCount":24,"Question":"I want to use my reminder bot on two servers at the same time, but the problem is whenever someone uses a command on one of the servers it will interact with the current loop.\nTo make things easier: my bot sends a msg for every X seconds to remind the user of the command to doing something. However when someone else uses it, it will overwrite the current loop function and result in not being able to stop but only the last loop.\nim using @tasks.loop(seconds=time) and task.start() and task.cancel() in order to control the loop.\nso i was wondering if there is a way to give a unique id to the loop so when i want to cancel it, it will search for that specific loop and cancel it (in case there are many reminders currently running).","Title":"Using the same bot on different servers","Tags":"python,discord,discord.py","AnswerCount":1,"A_Id":75123406,"Answer":"If your task is setup as a class - then you could theoretically just create a new instance of that class everytime you want to start a new reminder.\n\nThough, I would personally find a way to save state about data about the reminders elsewhere and use a single task. Either just using a dictionary, a file on the disk, or a DB of some kind.\nThe data saved for each reminder would need to be something like:\n\nguild ID: the server ID the reminder is for\nchannel ID: the channel ID we need to send the message to\nuser ID: the user that triggered the reminder\nactive: whether the reminder is active or not\nreminder(?): what we're reminding the user to do (if this is configurable)\n\nThat way:\n\nyour task can just loop every X minutes (however often you want the reminders to be triggered) and it will check if there's any active reminders (in your dict, file, db, etc). If there isn't it can just return until the next iteration\nusers in either server can start reminders. 
however you're starting them would need to add a 'reminder object' either to the dict, file, db, etc\nif there is active reminders, the task can loop over them and send the reminder to the relevant guild\/channel\/user as per the data\nin your on_message method, you can turn the reminder(s) on\/off as required\n\nHopefully that makes a bit of sense.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75122503,"CreationDate":"2023-01-15 02:04:30","Q_Score":1,"ViewCount":49,"Question":"I have conversations that look as follows:\ns = \"1) Person Alpha:\\nHello, how are you doing?\\n\\n1) Human:\\nGreat, thank you.\\n\\n2) Person Alpha:\\nHow is the weather?\\n\\n2) Human:\\nThe weather is good.\"\n\n1) Person Alpha:\nHello, how are you doing?\n\n1) Human:\nGreat, thank you.\n\n2) Person Alpha:\nHow is the weather?\n\n2) Human:\nThe weather is good.\n\nI would like to remove the enumeration at the beginning to get the following result:\ns = \"Person Alpha:\\nHello, how are you doing?\\n\\nHuman:\\nGreat, thank you.\\n\\nPerson Alpha:\\nHow is the weather?\\n\\nHuman:\\nThe weather is good.\"\n\nPerson Alpha:\nHello, how are you doing?\n\nHuman:\nGreat, thank you.\n\nPerson Alpha:\nHow is the weather?\n\nHuman:\nThe weather is good.\n\nMy idea is to search for 1), 2), 3),... in the text and replace it with an empty string. This might work but is inefficient (and can be a problem if e.g. 1) appears in the text of the conversation).\nIs there a better \/ more elegant way to do this?","Title":"Remove number patterns from string","Tags":"python,python-3.x,string,replace","AnswerCount":5,"A_Id":75122543,"Answer":"What do you mean by inefficient?\nDon't you want to use loops to avoid poor performance? Give more details of what you have tried and what you want and don't want to be done","Users Score":1,"is_accepted":false,"Score":0.0399786803,"Available Count":1},{"Q_Id":75123102,"CreationDate":"2023-01-15 05:17:50","Q_Score":0,"ViewCount":18,"Question":"I am new to SQLITE. I am trying to open a database through Sqlite3.exe shell. My database file path has hyphen in it..\non entering\n.open C:\\Users\\Admin\\OneDrive - batch\\db.sqlite3\ni am getting below error\nunknown option: -\ncan anyone help..\nI tried double quote around path but in that case I am getting\nError: unable to open database\nThanks in advance..","Title":"sqlite exe : database file path has hyphen : \"unknown option: -\"","Tags":"sqlite,sqlite3-python","AnswerCount":2,"A_Id":75200831,"Answer":"changing\n\nbackward slashes to forward\n\nadding double quotes\nworked...\n\n\nbelow is the solution\n.open \"C:\/Users\/Admin\/OneDrive - batch\/db.sqlite3\"","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75123240,"CreationDate":"2023-01-15 06:05:32","Q_Score":1,"ViewCount":32,"Question":"PyCharm 2022.3.1, Build #PY-223.8214.51, built on December 20, 2022\npython 3.10.6\nIf enum is decorated @unique and is declared in a separate file, pycharm will not find usages to refactor\/rename. 
Likewise, usages are not provided with context option to refactor\/rename.\nDeclaration:\n# file: pycharm_enum_dec.py\n\nfrom enum import Enum, unique\n\n\n@unique\nclass MyType(Enum):\n AAA = 'aaa'\n BBB = 'bbb'\n\nUsage:\n# file: pycharm_refac_enum.py\n\nfrom pycharm_enum_dec import MyType\n\nprint(MyType.AAA)\n\nIs this something inherent to the @unique decorator or a bug in PyCharm?","Title":"PyCharm unable to refactor python @unique decorated enum member - expected behavior?","Tags":"python,enums,pycharm","AnswerCount":1,"A_Id":75140812,"Answer":"IntelliJ has confirmed that this is a bug with PyCharm treatment of @unique decorator. They have assigned it for resolution. No ETA (at 2023-01-17).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75125071,"CreationDate":"2023-01-15 12:30:43","Q_Score":9,"ViewCount":513,"Question":"In python we can add lists to each other with the extend() method but it adds the second list at the end of the first list.\nlst1 = [1, 4, 5]\nlst2 = [2, 3]\n\nlst1.extend(lst2)\n\nOutput:\n[1, 4, 5, 2, 3]\n\nHow would I add the second list to be apart of the 1st element? Such that the result is this;\n[1, 2, 3, 4, 5 ]\n\nI've tried using lst1.insert(1, *lst2) and got an error;\nTypeError: insert expected 2 arguments, got 3","Title":"Extend list with another list in specific index?","Tags":"python,arrays,python-3.x,list","AnswerCount":3,"A_Id":75125927,"Answer":"If your only goal is to get the list sorted correctly, then you use .extend() and .sort() afterwards.","Users Score":-1,"is_accepted":false,"Score":-0.0665680765,"Available Count":1},{"Q_Id":75125280,"CreationDate":"2023-01-15 13:09:10","Q_Score":0,"ViewCount":53,"Question":"I am a Python Kivy learner. As part of learning, 2 months back i successfully build a calculator app through google collab. The app was working perfectly. Now when i build the same app, it builds successfully but the orientation doesnt work. It rotates to all direction.\norientation = portrait\ndoesnt seem to work now.\nI tried building other apps too recently through google collab, and the same orientation problem persist. Anyone recently built app through collab successfully without the orientation problem?","Title":"Buildozer - Orientation Portrait setting not working","Tags":"python,kivy,orientation,buildozer,portrait","AnswerCount":1,"A_Id":75199999,"Answer":"import os\nos.environ[\"KIVY_ORIENTATION\"] = \"Portrait\"\nadding this at the top of the main.py solves the issue.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75125424,"CreationDate":"2023-01-15 13:33:05","Q_Score":1,"ViewCount":74,"Question":"I'm trying to create an executable file for a simple 'Hello World' python code. 
I'm using an Ubuntu Subsystem in Windows 11 and I'm trying to create the .exe file with the command:\npyinstaller --onefile Test.py\n\nThe command runs and creates folders \"build\" and \"dist\", but it doesn't create the .exe file inside the \"dist\" folder as it should do.\nIn the terminal I have the fallowing message:\nmbseidel@Matheus-Seidel:\/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up$ pyinstaller --onefile Test.py\n156 INFO: PyInstaller: 5.7.0\n156 INFO: Python: 3.8.10\n166 INFO: Platform: Linux-4.4.0-25272-Microsoft-x86_64-with-glibc2.29\n173 INFO: wrote \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/Test.spec\n181 INFO: UPX is not available.\n183 INFO: Extending PYTHONPATH with paths\n['\/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up']\n648 INFO: checking Analysis\n648 INFO: Building Analysis because Analysis-00.toc is non existent\n648 INFO: Initializing module dependency graph...\n651 INFO: Caching module graph hooks...\n656 WARNING: Several hooks defined for module 'numpy'. Please take care they do not conflict.\n662 INFO: Analyzing base_library.zip ...\n1652 INFO: Loading module hook 'hook-heapq.py' from '\/home\/mbseidel\/.local\/lib\/python3.8\/site-packages\/PyInstaller\/hooks'...\n1802 INFO: Loading module hook 'hook-encodings.py' from '\/home\/mbseidel\/.local\/lib\/python3.8\/site-packages\/PyInstaller\/hooks'...\n2734 INFO: Loading module hook 'hook-pickle.py' from '\/home\/mbseidel\/.local\/lib\/python3.8\/site-packages\/PyInstaller\/hooks'...\n3599 INFO: Caching module dependency graph...\n3695 INFO: running Analysis Analysis-00.toc\n3808 INFO: Analyzing \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/Test.py\n3812 INFO: Processing module hooks...\n3823 INFO: Looking for ctypes DLLs\n3827 INFO: Analyzing run-time hooks ...\n3831 INFO: Looking for dynamic libraries\n4781 INFO: Looking for eggs\n4781 INFO: Python library not in binary dependencies. 
Doing additional searching...\n4958 INFO: Using Python library \/lib\/x86_64-linux-gnu\/libpython3.8.so.1.0\n4979 INFO: Warnings written to \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/build\/Test\/warn-Test.txt\n4995 INFO: Graph cross-reference written to \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/build\/Test\/xref-Test.html\n5054 INFO: checking PYZ\n5054 INFO: Building PYZ because PYZ-00.toc is non existent\n5054 INFO: Building PYZ (ZlibArchive) \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/build\/Test\/PYZ-00.pyz\n5204 INFO: Building PYZ (ZlibArchive) \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/build\/Test\/PYZ-00.pyz completed successfully.\n5220 INFO: checking PKG\n5221 INFO: Building PKG because PKG-00.toc is non existent\n5221 INFO: Building PKG (CArchive) Test.pkg\n7126 INFO: Building PKG (CArchive) Test.pkg completed successfully.\n7132 INFO: Bootloader \/home\/mbseidel\/.local\/lib\/python3.8\/site-packages\/PyInstaller\/bootloader\/Linux-64bit-intel\/run\n7133 INFO: checking EXE\n7133 INFO: Building EXE because EXE-00.toc is non existent\n7133 INFO: Building EXE from EXE-00.toc\n7134 INFO: Copying bootloader EXE to \/mnt\/c\/Users\/Matheus Seidel\/OneDrive\/NCEE Meus documentos\/Arquivos padr\u00e3o\/Up\/dist\/Test\n7138 INFO: Appending PKG archive to custom ELF section in EXE\n7184 INFO: Building EXE from EXE-00.toc completed successfully.\n\nIt is weird because I tried the same thing a week ago and the .exe was created normally. I even tried unistalling and reinstalling the pyinstaller library, but still got the same results.","Title":"Pyinstallar doesn't create .exe file","Tags":"python,pyinstaller,windows-subsystem-for-linux","AnswerCount":1,"A_Id":75125471,"Answer":"Since you're creating an executable under WSL, a Linux environment, your output executable would be that of a Linux binary instead of a Windows .exe file.\nLinux binary do not an extension, but they will still be properly run if you run the path to the binary file under your WSL shell (e.g. .\/path_to_binary\/binary_file)\nIf you wish to create an executable for Windows, the simplest way is to run the same code with pyinstaller on Windows itself rather than the WSL shell.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75125939,"CreationDate":"2023-01-15 14:47:14","Q_Score":0,"ViewCount":37,"Question":"I came across an issue that while I was able to resolve, I believe would benefit this platform. I will therefore pose the question here and answer it. When attempting to publish an app on binder, you are required to create a Requirements.txt file that outlines your dependencies. Mine was using pandas version 1.4.4.\nWhen attempting to launch binder using my github repo, I was getting:\nERROR: No matching distribution found for pandas==1.4.4","Title":"Binder - ERROR: No matching distribution found for pandas==1.X.X","Tags":"python,jupyter-notebook,android-binder","AnswerCount":1,"A_Id":75125940,"Answer":"Reading into the error further, it seems that binder only goes up to a certain version of pandas. If you read carefully it will list your pandas version option. 
Choose the latest one from that error list, and update your requirements.\nAlhamdulilah!","Users Score":-2,"is_accepted":false,"Score":-0.3799489623,"Available Count":1},{"Q_Id":75127357,"CreationDate":"2023-01-15 18:17:13","Q_Score":1,"ViewCount":159,"Question":"The enclosed Python code for Raspberry Pi 4 runs separately each function without any problem countdown() and RunThermalCam(), but when running both functions concurrently the timers stop and camera image freeze. My understanding is camera is heavily processor usage so I used multiprocessing, but it gives the following error which I couldn't figure out. The code should first runs GUI then once \"Start\" button is hit, modules (PWM) runs with countdown timers for each module along with thermal camera.\nXIO: fatal IO error 25 (Inappropriate ioctl for device) on X server \":0\"\n after 1754 requests (1754 known processed) with 38 events remaining.\n[xcb] Unknown sequence number while processing queue\n[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called\n[xcb] Aborting, sorry about that.\npython3: ..\/..\/src\/xcb_io.c:269: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.\n\nProcess ended with exit code -6.\nimport RPi.GPIO as GPIO\nfrom adafruit_blinka import Enum, Lockable, agnostic\nimport csv, datetime\nfrom tkinter import * \nfrom tkinter.filedialog import asksaveasfile\nimport time,board,busio\nimport numpy as np\nimport adafruit_mlx90640\nimport matplotlib.pyplot as plt\nimport multiprocessing \n\ndef first(): \n print(\"first new time is up \\n\") \n #stop module (1)\ndef second(): \n print(\"second new time is up \\n\") \n #stop module (2)\ndef third(): \n print(\"third new time is up \\n\") \n #stop module (3)\n \n#Create interface#\nroot = Tk()\nroot.geometry(\"1024x600\")\nroot.title(\"Countdown Timer\")\n\ndef modules():\n if (clockTime[0] == 0 or clockTime[0] == -1):\n first()\n if(clockTime[1] == 0 or clockTime[1] == -1):\n second()\n if(clockTime[2] == 0 or clockTime[2] == -1):\n third()\n \n#initialize timers lists\ntimers_number = 3\nhrString=[0]*timers_number\nminString=[0]*timers_number\nsecString=[0]*timers_number\ntotalSeconds = [0]*timers_number\ntotalMinutes = [0]*timers_number\ntotalHours = [0]*timers_number\n\nfor i in range(timers_number):\n hrString[i] = StringVar()\n hrString[i].set(\"00\")\n\nfor i in range(timers_number):\n minString[i] = StringVar()\n minString[i].set(\"00\")\n \nfor i in range(timers_number):\n secString[i] = StringVar()\n secString[i].set(\"00\")\n \n#Get User Input\nhourTextBox1 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"),textvariable=hrString[0]).place(x=170, y=100) \nminuteTextBox1 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"),textvariable=minString[0]).place(x=220, y=100) \nsecondTextBox1 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"),textvariable=secString[0]).place(x=270, y=100) \n\nhourTextBox2 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"), textvariable=hrString[1]).place(x=170, y=180) \nminuteTextBox2 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"), textvariable=minString[1]).place(x=220, y=180) \nsecondTextBox2 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"), textvariable=secString[1]).place(x=270, y=180) \n\nhourTextBox3 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"), textvariable=hrString[2]).place(x=170, y=260) \nminuteTextBox3 = Entry(root, width=3, font=(\"Calibri\", 20, \"\"), textvariable=minString[2]).place(x=220, y=260) \nsecondTextBox3 = Entry(root, width=3, 
font=(\"Calibri\", 20, \"\"), textvariable=secString[2]).place(x=270, y=260) \n\ndef RunThermalCam():\n thermal_mapfile = str(datetime.datetime.now().date()) + '_' + str(datetime.datetime.now().time()).replace(':', '.')\n thermal_mapfile = thermal_mapfile[:16] #limit thermal file name to 16 characters\n i2c = busio.I2C(board.SCL, board.SDA, frequency=800000) # setup I2C\n mlx = adafruit_mlx90640.MLX90640(i2c) # begin MLX90640 with I2C comm\n mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_2_HZ # set refresh rate 2Hz\n mlx_shape = (24,32)\n print(\"Initialized\")\n # setup the figure for plotting\n plt.ion() # enables interactive plotting\n fig,ax = plt.subplots(figsize=(12,7))\n therm1 = ax.imshow(np.zeros(mlx_shape),vmin=0,vmax=60) #start plot with zeros\n cbar = fig.colorbar(therm1) # setup colorbar for temps\n cbar.set_label('Temperature [$^{\\circ}$C]',fontsize=14) # colorbar label\n \n #frame = np.zeros((24*32,)) # setup array for storing all 768 temperatures\n \n t_array = []\n frame = [0] * 768\n while True:\n t1 = time.monotonic()\n try:\n mlx.getFrame(frame) # read MLX temperatures into frame var\n data_array = (np.reshape(frame,mlx_shape)) # reshape to 24x32\n therm1.set_data(np.fliplr(data_array)) # flip left to right\n therm1.set_clim(vmin=np.min(data_array),vmax=np.max(data_array)) # set bounds\n cbar.update_normal(therm1) # update colorbar range\n plt.title(f\"Max Temp: {np.max(data_array):.1f}C\")\n plt.pause(0.001) # required\n t_array.append(time.monotonic()-t1)\n print('Sample Rate: {0:2.1f}fps'.format(len(t_array)\/np.sum(t_array)))\n #except AttributeError:\n # continue \n except ValueError:\n continue # if error, just read again \n for h in range(24):\n for w in range(32):\n t = frame[h*32 + w]\n frame = list(np.around(np.array(frame),1)) #round array elements to one decimal point \n with open(\"\/home\/pi\/MOC\/Thermal_Camera\/\"+thermal_mapfile+\".csv\",\"a\") as thermalfile:\n writer = csv.writer(thermalfile,delimiter=\" \")\n writer.writerow([time.time(),frame]) \n \ndef countdown():\n for i in range (timers_number):\n \n if(clockTime[i] > -1):\n totalMinutes[i], totalSeconds[i] = divmod(clockTime[i], 60)\n if(totalMinutes[i]>60):\n totalHours[i], totalMinutes[i] = divmod(totalMinutes[i], 60)\n \n hrString[i].set(\"{0:2d}\".format(totalHours[i]))\n minString[i].set(\"{0:2d}\".format(totalMinutes[i]))\n secString[i].set(\"{0:2d}\".format(totalSeconds[i]))\n \n if(clockTime[i] == 0): #time is up\n hrString[i].set(\"00\")\n minString[i].set(\"00\")\n secString[i].set(\"00\")\n modules() \n \n if(clockTime[i] != -1): #timer is paused\n clockTime[i] -= 1 \n \n if(clockTime[i] != -1):\n root.after(1000, countdown) \n\np1 = multiprocessing.Process(target = RunThermalCam)\np2 = multiprocessing.Process(target = countdown)\n \ndef starttimer():\n #Start_modules()\n global clockTime\n clockTime = [0]*timers_number\n try:\n #global clockTime\n for i in range (timers_number):\n clockTime[i] = int(hrString[i].get())*3600 + int(minString[i].get())*60 + int(secString[i].get()) \n except:\n print(\"Incorrect values\")\n countdown()\n RunThermalCam()\n #p1.start()\n #p2.start()\n #p1.join()\n #p2.join() \n \ndef stop():\n for i in range (timers_number):\n clockTime[i] = 0 \ndef pause():\n for i in range (timers_number):\n clockTime[i] = -1\n modules()\ndef GUI(): \n setTimeButton = Button(root, text='START', bd='5', command=starttimer).place(x=200, y=500) \n setTimeButton = Button(root, text='STOP', bd='5', command=stop).place(x=350, y=500) \n setTimeButton = Button(root, 
text='PAUSE', bd='5', command=pause).place(x=500, y=500) \n root.mainloop() \n\nif __name__ == '__main__':\n GUI()","Title":"How to run thermal camera mlx90640 concurrently with countdown timers","Tags":"python-3.x,tkinter,python-multiprocessing,raspberry-pi4,adafruit","AnswerCount":2,"A_Id":75127983,"Answer":"Your code isn't really using multiprocessing.\nWhen you click the start button, tkinter processes the event, and calls the starttimer callback.\nThis in turn calls RunThermalCam in the current process.\nThis is a problem, because RunThermalCam has an infinite loop inside it.\nSo basically, since RunThermalCam runs forever, starttimer will never return. That means that tkinter's event processing grinds to a halt.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75127920,"CreationDate":"2023-01-15 19:41:31","Q_Score":4,"ViewCount":343,"Question":"Some context, I have some data that I'm doing some text analysis on, I have just tokenized them and I want to combine all the lists in the dataframe column for some further processing.\nMy df is as:\ndf = pd.DataFrame({'title': ['issue regarding app', 'graphics should be better'], 'text': [[\"'app'\", \"'load'\", \"'slowly'\"], [\"'interface'\", \"'need'\", \"'to'\", \"'look'\", \"'nicer'\"]]})`\n\nI want to merge all the lists in the 'text' column into one list, and also remove the open\/close inverted commas.\nSomething like this:\nlst = ['app', 'load', 'slowly', 'interface', 'need', 'to', 'look', 'nicer']`\n\nThank you for all your help!","Title":"How do I combine lists in column of dataframe to a single list","Tags":"python,pandas,list,dataframe,nlp","AnswerCount":3,"A_Id":75127942,"Answer":"We can also iterate through each list in the series and concatenate them using append() and finally use concat() to convert them to a list. 
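Returning to the thermal-camera answer above: a rough, untested sketch of what starttimer could look like if the camera loop ran in its own process while the countdown stayed on tkinter's own scheduler; it reuses the names from the question (hrString, timers_number, RunThermalCam, countdown) and is only meant to illustrate the idea, not serve as a drop-in fix.
import multiprocessing

def starttimer():
    global clockTime
    try:
        clockTime = [int(hrString[i].get()) * 3600 + int(minString[i].get()) * 60
                     + int(secString[i].get()) for i in range(timers_number)]
    except ValueError:
        print("Incorrect values")
        return
    cam_proc = multiprocessing.Process(target=RunThermalCam, daemon=True)
    cam_proc.start()   # the infinite camera loop now lives outside the GUI process
    countdown()        # countdown re-schedules itself with root.after, so this returns quickly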
Yields the same output as above.","Users Score":2,"is_accepted":false,"Score":0.1325487884,"Available Count":1},{"Q_Id":75127926,"CreationDate":"2023-01-15 19:42:59","Q_Score":1,"ViewCount":46,"Question":"I have a dictionary in which there are three conditions startwith, contains and endwith given below,\ndict1 = {'startwith':\"Raja, Bina\", 'contains':\"Tata\", \"endwith\":\"\"}\n\nIf user give value in dictionary with comma that means OR i.e \"Raja, Bina\" = Raja or Bina\nnow I have a list of name given below\nlist_of_names = [\"Raja Molli Jira\", \"Bina Tata Birla\", \"Fira Kiya Too\"]\n\nnow with above dictionary and list I have to find the name from list which satisfy the conditions given in dictionary, from above example result should be like (need result in list)\nrequired_list = [\"Raja Molli Jira\", \"Bina Tata Birla\"]\n\nthe name in required_list satisfy the condition given in dictionary which are startwith and contains.\nExample 2\nif dict1 and list_of_names are :\ndict1 = {'startwith':\"\", 'contains':\"Tata, Gola\", \"endwith\":\"Too\"}\nlist_of_names = [\"Raja Molli Jira\", \"Bina Tata Birla\", \"Fira Kiya Too\"]\n\nthe required_list list will be :\nrequired_list = [\"Bina Tata Birla\", \"Fira Kiya Too\"]\n\nthe name in required_list satisfy the condition given in dictionary which are contains and endwith\nCurrently using code\nI am able to handle problem if user give single value (without comma) with below code\ndict1 = {'startwith':\"Raja,Bina\", 'contains':\"Tata\", \"endwith\":\"\"}\nlist_of_names = [\"Raja Molli Jira\", \"Bina Tata Birla\", 'Fira Kiya Too']\nrequired_list = []\nfileopp = list(dict1.values())\n\nfor i in list_of_files:\n #startswith\n if ((fileopp[0] != \"\") and (fileopp[1] == \"\") and (fileopp[2] == \"\")):\n if i.startswith(fileopp[0]):\n listfilename.append(i)\n #containswith\n elif ((fileopp[0] == \"\") and (fileopp[1] != '') and (fileopp[2] == \"\")):\n if i.__contains__(fileopp[1]):\n listfilename.append(i)\n #endiswith\n elif ((fileopp[0] == \"\") and (fileopp[1] == '') and (fileopp[2] != \"\")):\n if i.endswith(fileopp[2]):\n listfilename.append(i)\n #startswith and contains with\n elif ((fileopp[0] != \"\") and (fileopp[1] != \"\") and (fileopp[2] == \"\")):\n if (i.startswith(fileopp[0])) and i.__contains__(fileopp[1]):\n listfilename.append(i)\n #startswith and endswith\n elif ((fileopp[0] != \"\") and (fileopp[1] == \"\") and (fileopp[2] != \"\")):\n if (i.startswith(fileopp[0])) and i.endswith(fileopp[2]):\n listfilename.append(i)\n #containswith and endswith\n elif ((fileopp[0] == \"\") and (fileopp[1] != \"\") and (fileopp[2] != \"\")):\n if (i.__contains__(fileopp[1])) and i.endswith(fileopp[2]):\n listfilename.append(i)\n\nQuestion\nIf user give Value with comma (Raja,Bina) then above code fails to give result.\nGiving the conditions and required result what I want,\nFirst\ndict1 = {'startwith':\"Fira\", 'contains':\"\", \"endwith\":\"Birla\"}\nlist_of_names = [\"Raja Molli Jira\", \"Bina Tata Birla\", \"Fira Kiya Too\"]\n\nrequired_list = [\"Bina Tata Birla\", \"Fira Kiya Too\"]\n\nSecond\ndict1 = {'startwith':\"Fira, Raja\", 'contains':\"\", \"endwith\":\"\"}\nlist_of_names = [\"Raja Molli Jira\", \"Bina Tata Birla\", \"Fira Kiya Too\"]\n\nrequired_list = [\"Raja Molli Jira\", \"Fira Kiya Too\"]","Title":"Get the list of names which satisfy the condition in dictionary from given list","Tags":"python,python-3.x,pandas,python-2.7,dictionary","AnswerCount":2,"A_Id":75128050,"Answer":"As the two names separated by a comma are in the same 
string, the code can not work the way you wish as it does not search for the individual names but the whole string \"Fira, Raja\" which is never given in the list_of_names as there are not even commas. Try making dict1 two-dimensional, so you can give several conditions for the different aspects (such as startswith).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75128068,"CreationDate":"2023-01-15 20:08:38","Q_Score":1,"ViewCount":48,"Question":"I have a node2vec embedding stored as a .csv file, values are a square symmetric matrix. I have two versions of this, one with node names in the first column and another with node names in the first row. I would like to cluster this data with DBSCAN, but I can't seem to figure out how to get the input right. I tried this:\nimport numpy as np\nimport pandas as pd\nfrom sklearn.cluster import DBSCAN\nfrom sklearn import metrics\n\ninput_file = \"node2vec-labels-on-columns.emb\"\n\n# for tab delimited use:\ndf = pd.read_csv(input_file, header = 0, delimiter = \"\\t\")\n\n# put the original column names in a python list\noriginal_headers = list(df.columns.values)\n\nemb = df.as_matrix()\ndb = DBSCAN(eps=0.3, min_samples=10).fit(emb)\nlabels = db.labels_\n\n# Number of clusters in labels, ignoring noise if present.\nn_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)\nn_noise_ = list(labels).count(-1)\n\nprint(\"Estimated number of clusters: %d\" % n_clusters_)\nprint(\"Estimated number of noise points: %d\" % n_noise_)\n\nThis leads to an error:\ndbscan.py:14: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n emb = df.as_matrix()\nTraceback (most recent call last):\n File \"dbscan.py\", line 15, in \n db = DBSCAN(eps=0.3, min_samples=10).fit(emb)\n File \"C:\\Python36\\lib\\site-packages\\sklearn\\cluster\\_dbscan.py\", line 312, in fit\n X = self._validate_data(X, accept_sparse='csr')\n File \"C:\\Python36\\lib\\site-packages\\sklearn\\base.py\", line 420, in _validate_data\n X = check_array(X, **check_params)\n File \"C:\\Python36\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 73, in inner_f\n return f(**kwargs)\n File \"C:\\Python36\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 646, in check_array\n allow_nan=force_all_finite == 'allow-nan')\n File \"C:\\Python36\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 100, in _assert_all_finite\n msg_dtype if msg_dtype is not None else X.dtype)\nValueError: Input contains NaN, infinity or a value too large for dtype('float64').\n\nI've tried other input methods that lead to the same error. All the tutorials I can find use datasets imported form sklearn so those are of not help figuring out how to read from a file. Can anyone point me in the right direction?","Title":"Can't get correct input for DBSCAN clustersing","Tags":"python,scikit-learn,dbscan","AnswerCount":1,"A_Id":75130671,"Answer":"The error does not come from the fact that you are reading the dataset from a file but on the content of the dataset.\nDBSCAN is meant to be used on numerical data. 
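Back on the comma-as-OR matching question (Q_Id 75127926): a hedged sketch of the splitting idea, where each comma-separated value is an alternative and a name counts as a match when any non-empty condition is satisfied (which is what the expected outputs in the question imply); the matches helper is made up for illustration.
def matches(name, conditions):
    checks = {'startwith': name.startswith,
              'contains': name.__contains__,
              'endwith': name.endswith}
    for key, raw in conditions.items():
        values = [v.strip() for v in raw.split(',') if v.strip()]
        if any(checks[key](v) for v in values):
            return True
    return False

dict1 = {'startwith': "Raja, Bina", 'contains': "Tata", "endwith": ""}
list_of_names = ["Raja Molli Jira", "Bina Tata Birla", "Fira Kiya Too"]
required_list = [name for name in list_of_names if matches(name, dict1)]
print(required_list)   # ['Raja Molli Jira', 'Bina Tata Birla']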
As stated in the error, it does not support NaNs.\nIf you are willing to cluster strings or labels, you should find some other model.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75128123,"CreationDate":"2023-01-15 20:16:29","Q_Score":1,"ViewCount":639,"Question":"When using PyTorch tensors, is there a point to initialize my data like so:\nX_tensor: torch.IntTensor = torch.IntTensor(X)\nY_tensor: torch.IntTensor = torch.IntTensor(Y)\n\nOr should I just do the 'standard':\nX_tensor: torch.Tensor = torch.Tensor(X)\nY_tensor: torch.Tensor = torch.Tensor(Y)\n\neven though I know X: list[list[int] and Y: list[list[int]","Title":"Is there a difference between torch.IntTensor and torch.Tensor","Tags":"python,types,tensor,torch","AnswerCount":2,"A_Id":75133935,"Answer":"torch.IntTensor(X): returns tensor of int32\ntorch.Tensor(X): returns tensor of float32\n\nWhat to use depends on what your forward function is expecting specially your loss function. Most loss functions operate on float tensors.","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75128500,"CreationDate":"2023-01-15 21:24:02","Q_Score":1,"ViewCount":88,"Question":"I am trying to import a util package one directory up from where my code is, but I get an ImportError which I don't understand.\nI have a number of different variations on the import syntax in Python, none of which are working.\nThere are a number of similar questions on Stack Overflow, but none have helped me understand or fix this issue.\nOf the top of my head, I have tried the following variations:\nimport util\nimport ..util\nfrom .. import util\nfrom ..util import parser\nfrom AdventOfCode2022 import util\nfrom ..AdventOfCode2022 import util\nfrom ...AdventOfCode2022 import util\n\nMost of these I guessed wouldn't work, but I tried them anyway to be sure.\nError message:\n\nImportError: attempted relative import with no known parent package\n\nDirectory structure:\n.\n\u251c\u2500\u2500 day03\n\u2502 \u251c\u2500\u2500 input.txt\n\u2502 \u251c\u2500\u2500 part1.py\n\u2502 \u251c\u2500\u2500 part2.py\n\u2502 \u2514\u2500\u2500 test_input.txt\n\u2514\u2500\u2500 util\n \u251c\u2500\u2500 __init__.py\n \u2514\u2500\u2500 parser.py\n\nI just want to import my util package from any \"day0*\/\" directory - not sure why Python makes it so hard!","Title":"Relative path ImportError when trying to import a shared module from a subdirectory in a script","Tags":"python,python-3.x,python-import","AnswerCount":1,"A_Id":75129500,"Answer":"Two options:\n\nAdd the full path to .\/util\/ to your PYTHONPATH environment variable.\n\nFor example on Bash, your ~\/.bashrc might have export PYTHONPATH=\"${PYTHONPATH}:\/Users\/foobar\/projects\/advent-of-code\/util\/\".\n\nAdd sys.path.append('\/path\/to\/application\/app\/folder') before the import.\n\nThe other solutions don't work because:\n\nday03 and the parent directory are not modules with their own __init__.py. 
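A two-line check of the dtype difference described in the IntTensor answer above:
import torch

X = [[1, 2, 3], [4, 5, 6]]
print(torch.IntTensor(X).dtype)   # torch.int32
print(torch.Tensor(X).dtype)      # torch.float32, which most loss functions expect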
Lines like from ..util import parser only work if everything involved is a module.\nYou are presumably running the code from within .\/day03\/.\n\nView this as 'I have a bunch of independent Python projects (day01, day02 etc) that all want to share a common piece of code I have living in a different project (util) that lives somewhere else on my computer.'","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75128585,"CreationDate":"2023-01-15 21:37:51","Q_Score":2,"ViewCount":542,"Question":"class sum:\n def fx(self, op, a, b, c, d):\n if(op == 1):\n self.output = self.addition(a, b, c, d)\n else:\n self.output = self.subtraction(a, b, c, d)\n\n def addition(self, a, b, c, d):\n return a+b+c+d\n\n def subtraction(self, a, b, c, d):\n return a-b-c-d\n\nx = sum.fx(1, 1, 2, 3, 4)\n\nThe above code gives an error\n\nx = sum.fx(1, 1, 2, 3, 4)\nTypeError: sum.fx() missing 1 required positional argument: 'd'\n\nI am clearly entering the value parameter 'd' but it says that i am not. It should give an output \"10\"","Title":"having an error : \"missing 1 required positional argument\", even though i am entering the argument","Tags":"python,typeerror","AnswerCount":2,"A_Id":75128601,"Answer":"It should be sum().fx(...). The first argument you passed is considered to be the instance of the class (self) but if we consider that then you are missing one of the arguments d that you need to pass?\nYou should instantiate in this case first to call the methods.\nNote: By using the self we can access the attributes and methods of the class in python. In your case, even if you provide extra random arguments .fx(1,1,2,3,4) it would run into error later. Because you don't have the instance of the class.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75129502,"CreationDate":"2023-01-16 01:18:24","Q_Score":0,"ViewCount":37,"Question":"EDIT: (User error, I wasn't scanning entire dataframe. Delete Question if needed )A page I found had a solution that claimed to drop all rows with NAN in a selected column. In this case I am interested in the column with index 78 (int, not string, I checked).\nThe code fragment they provided turns out to look like this for me:\ndf4=df_transposed.dropna(subset=[78])\nThat did exactly the opposite of what I wanted. df4 is a dataframe that has NAN in all elements of the dataframe. I'm not sure how to\nI tried the dropna() method as suggested on half a dozen pages and I expected a dataframe with no NAN values in the column with index 78. 
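To make the instantiation point from the sum.fx answer concrete (fx as written stores its result on the instance rather than returning it):
calc = sum()              # instantiate the question's class so 'self' is bound automatically
calc.fx(1, 1, 2, 3, 4)    # op=1 selects addition
print(calc.output)        # 10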
Instead every element was NAN in the dataframe.","Title":"How do I drop all rows in a DataFrame that have NAN in that row, in a specified column?","Tags":"python,dataframe,sorting,nan","AnswerCount":1,"A_Id":75129947,"Answer":"df_transposed.dropna(subset=[78], inplace=True) # drops the rows that have missing values in column 78, modifying df_transposed in place (note the keyword is inplace, not 'in place').","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75130317,"CreationDate":"2023-01-16 04:44:16","Q_Score":2,"ViewCount":396,"Question":"I recently set up a new version of Python 3.11 in my system using Homebrew, the pip3 that came with it seems to be installing into the wrong target (numpy package is used as an example):\npython3.11 -m pip install numpy\nCollecting numpy\n Using cached numpy-1.24.1-cp311-cp311-macosx_10_9_x86_64.whl (19.8 MB)\nInstalling collected packages: numpy\nSuccessfully installed numpy-1.24.1\nWARNING: Target directory \/usr\/local\/lib\/python3.11\/site-packages\/pip\/numpy-1.24.1.dist-info already exists. Specify --upgrade to force replacement.\nWARNING: Target directory \/usr\/local\/lib\/python3.11\/site-packages\/pip\/numpy already exists. Specify --upgrade to force replacement.\nWARNING: Target directory \/usr\/local\/lib\/python3.11\/site-packages\/pip\/bin already exists. Specify --upgrade to force replacement.\n\nThe Python3 pip list and uninstall command shows that the package is uninstalled:\npython3.11 -m pip list\nPackage Version\n---------- -------\npip 22.3.1\nsetuptools 65.6.3\nwheel 0.38.4\n\n---------- ----------------- ----------------- ----------------- ----------------- -----------\npython3.11 -m pip uninstall numpy\nWARNING: Skipping numpy as it is not installed.\n\nThe locations of my Python 3.11 and pip3.11 are:\nwhich python3.11\n\/usr\/local\/bin\/python3.11\n---------- ----------------- ----------------- ----------------- ----------------- -----------\nwhich pip3.11\n\/usr\/local\/bin\/pip3.11\n---------- ----------------- ----------------- ----------------- ----------------- -----------\npip3.11 --version\npip 22.3.1 from \/usr\/local\/lib\/python3.11\/site-packages\/pip (python 3.11)\n\nI've tried installing different versions of Python (3.9 & 3.10) but all of it suffers from the error shown, the pip3 command keeps targeting the wrong directory.\nI've also tried the following command:\npython3.11 -m pip install --upgrade --force-reinstall pip\n\nbut it doesn't seem to work.","Title":"pip3 version installing package into different directory","Tags":"python,pip,homebrew","AnswerCount":1,"A_Id":75131491,"Answer":"You can try reinstalling Python 3.11 using Homebrew, and then make sure that the python3.11 executable and the pip3.11 executable are pointing to the same location by running which python3.11 and which pip3.11 commands.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75130716,"CreationDate":"2023-01-16 06:08:55","Q_Score":1,"ViewCount":50,"Question":"MATERIAL Prod_Date Prod_Qty Status\n0 107LPY04 2022-12-01 0 Yes\n1 051DPY03 2022-12-01 4 Unavailable \n2 040LPY72 2022-12-01 0 Yes\n3 025LPY61 2022-12-01 0 Yes\n4 034LPY05 2022-12-01 0 Yes\n\nThe above table is my data. It is a 6251 rows data.\nI want to make a new dataframe in which I will get only rows with status \"Unavailable\".\nI used the command\ndf2 = (df[df.Status == \"Unavailable\"])\ndf2\n\nBut I get an empty dataframe.\nThere are total 6 Unavailable in Status Column. 
Hence I should get 6 rows as my output.","Title":"I am trying to make a new dataframe from an existing dataframe using pandas","Tags":"python,pandas,dataframe","AnswerCount":2,"A_Id":75131236,"Answer":"df_new = df[df[\"Status\"] == \"Unavailable\"]","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75131490,"CreationDate":"2023-01-16 07:57:34","Q_Score":1,"ViewCount":106,"Question":"I have basically the following code and want to embed it in an async coroutine:\ndef read_midi():\n midi_in = pygame.midi.Input(0)\n while True:\n if midi_in.poll():\n midi_data = midi_in.read(1)[0][0]\n # do something with midi_data, e.g. putting it in a Queue..\n\nFrom my understanding since pygame is not asynchronous I have two options here: put the whole function in an extra thread or turn it into an async coroutine like this:\nasync def read_midi():\n midi_in = pygame.midi.Input(1)\n while True:\n if not midi_in.poll():\n await asyncio.sleep(0.1) # very bad!\n continue\n midi_data = midi_in.read(1)[0][0]\n # do something with midi_data, e.g. putting it in a Queue..\n\nSo it looks like I have to either keep the busy loop and put it in a thread and waste lots of cpu time or put it into the (fake) coroutine above and introduce a tradeoff between time lags and wasting CPU time.\nAm I wrong?\nIs there a way to read MIDI without a busy loop?\nOr even a way to await midi.Input.read?","Title":"Is there a way to wrap pygame.midi.Input.read() in an asynchronous task without polling or an extra thread?","Tags":"python,pygame,python-asyncio,midi","AnswerCount":1,"A_Id":75192463,"Answer":"It is true that the pygame library is not asynchronous, so you must either utilize a distinct thread or an asynchronous coroutine to process the MIDI input.\nUsing a distinct thread will permit the other parts of the program to carry on running concurrently to the MIDI input being read, but it will also necessitate more CPU resources.\nEmploying an async coroutine with the asyncio.sleep(0.1) call will result in a holdup in the MIDI input, although it will also reduce the CPU utilization. The trade-off here is between responsiveness and resource usage.\nUsing asyncio.sleep(0.1) will not be optimal as it will cause a considerable lag and it might not be wise to incorporate sleep in the while loop, as this will introduce a lot of holdup and won't be responsive.\nAnother possible choice is to utilize a library that furnishes an asynchronous interface for MIDI input, such as rtmidi-python or mido. 
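One more hedged note on the 'Unavailable' filtering question above: a plausible reason the exact comparison comes back empty is stray whitespace in the Status column (the sample row shows a trailing space after 'Unavailable'), so normalising before comparing is worth a try:
df2 = df[df["Status"].str.strip() == "Unavailable"]
print(len(df2))   # should be 6 per the question, if whitespace was the culprit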
These libraries may offer an approach to wait for MIDI input asynchronously without using a blocking call.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75131818,"CreationDate":"2023-01-16 08:38:33","Q_Score":0,"ViewCount":43,"Question":"I have created a program in Pyspark\/python using spyder IDE.\nProgram is using Pyspark library and it runs perfectly file when i am running it from IDE SPYDER.\nI created exe of same program using pyinstaller.\nWhen i run exe from command prompt it gives error \u201cNo module name Pyspark\u201d.\nPlease help\/suggest.\nThank You.\nI have created a program in Pyspark\/python using spyder IDE.\nProgram is using Pyspark library and it runs perfectly file when i am running it from IDE SPYDER.\nI created exe of same program using pyinstaller.\nWhen i run exe from command prompt it gives error \u201cNo module name Pyspark","Title":"How to fix \u201cNo Module name pyspark\u201d from exe","Tags":"python,pyspark,spyder","AnswerCount":1,"A_Id":75132002,"Answer":"Have you installed it with pip? pip install pyspark Must be install 'global' not only in your enviroment if you have one.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75133008,"CreationDate":"2023-01-16 10:34:22","Q_Score":1,"ViewCount":130,"Question":"I have python3 program that acts as a Modbus Master. I start a ModbusSerialClient and then proceed to read register from the slave. This is working fine on Windows. The issue is that on Ubuntu I am seeing that the ModbusSerialClient keeps changing the baudrate which makes the communication inconsistent.\nI start the communication with:\nfrom pymodbus.client.sync import ModbusSerialClient as ModbusClient\n...\ntry:\n self.client = ModbusClient(\n method = 'rtu'\n ,port= self.port\n ,baudrate=int(115200)\n ,parity = 'N'\n ,stopbits=1\n ,bytesize=8\n ,timeout=3\n ,RetryOnEmpty = True\n ,RetryOnInvalid = True\n )\n self.connection = self.client.connect()\n # Some delay may be necessary between connect and first transmission\n time.sleep(2)\n\nWhere self.port = \"COM_X\" in Windows and self.port = \"\/dev\/ttyS1\" in Linux\nAnd then I read the registers using:\nrr = self.client.read_holding_registers(register_addr,register_block,unit=MODBUS_CONFIG_ID)\nif(rr.isError()):\n logger.debug(rr)\nelse:\n # Proceed with the processing\n\nThe error I log in some ocasions is:\nModbus Error: [Input\/Output] Modbus Error: [Invalid Message] No response received, expected at least 2 bytes (0 received)\n\nI have verified the baudrate change physically measuring the signals.\nI have verified that with a command line tools like cu the baudrate remains consistent.\nThe verions I am using are:\n\npymodbus 3.1.0 (error also present with 2.5.3)\npyserial 3.5\npython 3.8.10\nkubuntu 22.04 (same behaviour with ubuntu)","Title":"Why does Pymodbus change baudrate when running on linux without requesting a baudrate change?","Tags":"python,python-3.x,pymodbus","AnswerCount":1,"A_Id":75606342,"Answer":"The issue was confirmed to be caused by spike in the CPU caused by the dead SSD.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75134246,"CreationDate":"2023-01-16 12:28:54","Q_Score":2,"ViewCount":5207,"Question":"Tried to train the model but got the mmcv error\nNo module named 'mmcv._ext'\n\nmmcv library is already installed and imported\nmmcv version = 1.4.0\nCuda version = 10.0\n\n\nAny suggestions to fix the issue??","Title":"No module named 
'mmcv._ext'","Tags":"python,pytorch","AnswerCount":2,"A_Id":76329776,"Answer":"Try\nmmcv==0.6.2 and mmdet==2.3.0 this worked for me.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75136841,"CreationDate":"2023-01-16 16:10:39","Q_Score":1,"ViewCount":1226,"Question":"Update2\nOkay, I've rebuilt the Ubuntu server from scratch and the problem still exists. This is how I am doing it.\n\nCreate a virtual machine in ESXI with two disk volumes. The first is 50GB and the second is 250GB.\nRun the Ubuntu 22.04 LTS install\nCreate a static IP address\nCreate two LVM volumes, the 50GB is root and the 350GB is mounted as \/var\nSelect Docker and Prometheus to be installed along with Ubuntu\nLet the install run to completion.\nUnmount the CD rom when finished and reboot Ubuntu\nLogin and then sudo bash\ndocker pull ubuntu\ndocker run -it ubuntu\napt-get update\napt-get install -y python3\n\nYou should get the error\nI am running a new\/fresh Ubuntu Docker image on a 22.04 LTS Ubuntu server instance. Docker was installed during the Ubuntu 22.04 LTS install. It is a new Ubuntu 22.04 LTS install.\nI'm using docker version 20.10.17, build 100c70180f.\nI am having trouble getting python3 installed in the running docker container.\nTo start off, I get the Ubuntu image running in a container:\ndocker run -ti ubuntu\nIn the image I run (as the root user)\napt update\nThen I run\napt install python3\nThe installation fails with:\nroot@6bfb4be344d6:\/# apt-get install python3\nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nThe following additional packages will be installed:\n libexpat1 libmpdec3 libpython3-stdlib libpython3.10-minimal libpython3.10-stdlib libreadline8 libsqlite3-0 media-types python3-minimal python3.10 python3.10-minimal readline-common\nSuggested packages:\n python3-doc python3-tk python3-venv python3.10-venv python3.10-doc binutils binfmt-support readline-doc\nThe following NEW packages will be installed:\n libexpat1 libmpdec3 libpython3-stdlib libpython3.10-minimal libpython3.10-stdlib libreadline8 libsqlite3-0 media-types python3 python3-minimal python3.10 python3.10-minimal readline-common\n0 upgraded, 13 newly installed, 0 to remove and 0 not upgraded.\nNeed to get 6494 kB of archives.\nAfter this operation, 23.4 MB of additional disk space will be used.\nDo you want to continue? 
[Y\/n] y\nGet:1 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libpython3.10-minimal amd64 3.10.6-1~22.04.2 [810 kB]\nGet:2 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libexpat1 amd64 2.4.7-1ubuntu0.2 [91.0 kB] \nGet:3 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3.10-minimal amd64 3.10.6-1~22.04.2 [2251 kB] \nGet:4 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3-minimal amd64 3.10.6-1~22.04 [24.3 kB] \nGet:5 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 media-types all 7.0.0 [25.5 kB] \nGet:6 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 libmpdec3 amd64 2.5.1-2build2 [86.8 kB] \nGet:7 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 readline-common all 8.1.2-1 [53.5 kB] \nGet:8 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 libreadline8 amd64 8.1.2-1 [153 kB] \nGet:9 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libsqlite3-0 amd64 3.37.2-2ubuntu0.1 [641 kB] \nGet:10 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libpython3.10-stdlib amd64 3.10.6-1~22.04.2 [1832 kB] \nGet:11 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3.10 amd64 3.10.6-1~22.04.2 [497 kB] \nGet:12 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libpython3-stdlib amd64 3.10.6-1~22.04 [6910 B] \nGet:13 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3 amd64 3.10.6-1~22.04 [22.8 kB] \nFetched 6494 kB in 14s (478 kB\/s) \ndebconf: delaying package configuration, since apt-utils is not installed\nSelecting previously unselected package libpython3.10-minimal:amd64.\n(Reading database ... 4395 files and directories currently installed.)\nPreparing to unpack ...\/libpython3.10-minimal_3.10.6-1~22.04.2_amd64.deb ...\nUnpacking libpython3.10-minimal:amd64 (3.10.6-1~22.04.2) ...\nSelecting previously unselected package libexpat1:amd64.\nPreparing to unpack ...\/libexpat1_2.4.7-1ubuntu0.2_amd64.deb ...\nUnpacking libexpat1:amd64 (2.4.7-1ubuntu0.2) ...\nSelecting previously unselected package python3.10-minimal.\nPreparing to unpack ...\/python3.10-minimal_3.10.6-1~22.04.2_amd64.deb ...\nUnpacking python3.10-minimal (3.10.6-1~22.04.2) ...\nSetting up libpython3.10-minimal:amd64 (3.10.6-1~22.04.2) ...\nSetting up libexpat1:amd64 (2.4.7-1ubuntu0.2) ...\nSetting up python3.10-minimal (3.10.6-1~22.04.2) ...\n[Errno 13] Permission denied: '\/usr\/lib\/python3.10\/__pycache__\/__future__.cpython-310.pyc.139849676216832'dpkg: error processing package python3.10-minimal (--configure):\n installed python3.10-minimal package post-installation script subprocess returned error exit status 1\nErrors were encountered while processing:\n python3.10-minimal\nE: Sub-process \/usr\/bin\/dpkg returned an error code (1)\n\nLooking into the \/user\/lib\/python3.10\/__pychache__\/ all of the files in the directory are -rw-r--r--\nHow can the install complain of Permission denied when running as root and the user permissions for every file in the directory is rw?\nUpdate\nI upgraded to docker 20.10.22 build 3a2c30b but still am encountering this issue.","Title":"apt-get install Python3 in fresh Ubuntu Docker Image results in Error 13 Permission Denied","Tags":"python-3.x,docker,ubuntu,apt,failed-installation","AnswerCount":4,"A_Id":76099640,"Answer":"i had this same error but the answers i got were different, but finally making that same commands as a root user in docker, or you can try to look for enough permissions from the host system admin, otherwise your command 
will continue to fail, that possibly should solve your error","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75136841,"CreationDate":"2023-01-16 16:10:39","Q_Score":1,"ViewCount":1226,"Question":"Update2\nOkay, I've rebuilt the Ubuntu server from scratch and the problem still exists. This is how I am doing it.\n\nCreate a virtual machine in ESXI with two disk volumes. The first is 50GB and the second is 250GB.\nRun the Ubuntu 22.04 LTS install\nCreate a static IP address\nCreate two LVM volumes, the 50GB is root and the 350GB is mounted as \/var\nSelect Docker and Prometheus to be installed along with Ubuntu\nLet the install run to completion.\nUnmount the CD rom when finished and reboot Ubuntu\nLogin and then sudo bash\ndocker pull ubuntu\ndocker run -it ubuntu\napt-get update\napt-get install -y python3\n\nYou should get the error\nI am running a new\/fresh Ubuntu Docker image on a 22.04 LTS Ubuntu server instance. Docker was installed during the Ubuntu 22.04 LTS install. It is a new Ubuntu 22.04 LTS install.\nI'm using docker version 20.10.17, build 100c70180f.\nI am having trouble getting python3 installed in the running docker container.\nTo start off, I get the Ubuntu image running in a container:\ndocker run -ti ubuntu\nIn the image I run (as the root user)\napt update\nThen I run\napt install python3\nThe installation fails with:\nroot@6bfb4be344d6:\/# apt-get install python3\nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nThe following additional packages will be installed:\n libexpat1 libmpdec3 libpython3-stdlib libpython3.10-minimal libpython3.10-stdlib libreadline8 libsqlite3-0 media-types python3-minimal python3.10 python3.10-minimal readline-common\nSuggested packages:\n python3-doc python3-tk python3-venv python3.10-venv python3.10-doc binutils binfmt-support readline-doc\nThe following NEW packages will be installed:\n libexpat1 libmpdec3 libpython3-stdlib libpython3.10-minimal libpython3.10-stdlib libreadline8 libsqlite3-0 media-types python3 python3-minimal python3.10 python3.10-minimal readline-common\n0 upgraded, 13 newly installed, 0 to remove and 0 not upgraded.\nNeed to get 6494 kB of archives.\nAfter this operation, 23.4 MB of additional disk space will be used.\nDo you want to continue? 
[Y\/n] y\nGet:1 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libpython3.10-minimal amd64 3.10.6-1~22.04.2 [810 kB]\nGet:2 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libexpat1 amd64 2.4.7-1ubuntu0.2 [91.0 kB] \nGet:3 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3.10-minimal amd64 3.10.6-1~22.04.2 [2251 kB] \nGet:4 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3-minimal amd64 3.10.6-1~22.04 [24.3 kB] \nGet:5 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 media-types all 7.0.0 [25.5 kB] \nGet:6 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 libmpdec3 amd64 2.5.1-2build2 [86.8 kB] \nGet:7 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 readline-common all 8.1.2-1 [53.5 kB] \nGet:8 http:\/\/archive.ubuntu.com\/ubuntu jammy\/main amd64 libreadline8 amd64 8.1.2-1 [153 kB] \nGet:9 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libsqlite3-0 amd64 3.37.2-2ubuntu0.1 [641 kB] \nGet:10 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libpython3.10-stdlib amd64 3.10.6-1~22.04.2 [1832 kB] \nGet:11 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3.10 amd64 3.10.6-1~22.04.2 [497 kB] \nGet:12 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 libpython3-stdlib amd64 3.10.6-1~22.04 [6910 B] \nGet:13 http:\/\/archive.ubuntu.com\/ubuntu jammy-updates\/main amd64 python3 amd64 3.10.6-1~22.04 [22.8 kB] \nFetched 6494 kB in 14s (478 kB\/s) \ndebconf: delaying package configuration, since apt-utils is not installed\nSelecting previously unselected package libpython3.10-minimal:amd64.\n(Reading database ... 4395 files and directories currently installed.)\nPreparing to unpack ...\/libpython3.10-minimal_3.10.6-1~22.04.2_amd64.deb ...\nUnpacking libpython3.10-minimal:amd64 (3.10.6-1~22.04.2) ...\nSelecting previously unselected package libexpat1:amd64.\nPreparing to unpack ...\/libexpat1_2.4.7-1ubuntu0.2_amd64.deb ...\nUnpacking libexpat1:amd64 (2.4.7-1ubuntu0.2) ...\nSelecting previously unselected package python3.10-minimal.\nPreparing to unpack ...\/python3.10-minimal_3.10.6-1~22.04.2_amd64.deb ...\nUnpacking python3.10-minimal (3.10.6-1~22.04.2) ...\nSetting up libpython3.10-minimal:amd64 (3.10.6-1~22.04.2) ...\nSetting up libexpat1:amd64 (2.4.7-1ubuntu0.2) ...\nSetting up python3.10-minimal (3.10.6-1~22.04.2) ...\n[Errno 13] Permission denied: '\/usr\/lib\/python3.10\/__pycache__\/__future__.cpython-310.pyc.139849676216832'dpkg: error processing package python3.10-minimal (--configure):\n installed python3.10-minimal package post-installation script subprocess returned error exit status 1\nErrors were encountered while processing:\n python3.10-minimal\nE: Sub-process \/usr\/bin\/dpkg returned an error code (1)\n\nLooking into the \/user\/lib\/python3.10\/__pychache__\/ all of the files in the directory are -rw-r--r--\nHow can the install complain of Permission denied when running as root and the user permissions for every file in the directory is rw?\nUpdate\nI upgraded to docker 20.10.22 build 3a2c30b but still am encountering this issue.","Title":"apt-get install Python3 in fresh Ubuntu Docker Image results in Error 13 Permission Denied","Tags":"python-3.x,docker,ubuntu,apt,failed-installation","AnswerCount":4,"A_Id":75167468,"Answer":"I reinstalled Ubuntu 22.04 and this time did not select Docker as one of the packages installed along with Ubuntu. 
I installed Docker manually after the Ubuntu install completed and after a reboot.\nNow it works fine.\nI reinstalled ubuntu 22.04 again, just to make sure I could reproduce the problem and indeed, if I select Docker to be installed with Ubuntu, the problem resurfaces.","Users Score":1,"is_accepted":false,"Score":0.049958375,"Available Count":2},{"Q_Id":75137717,"CreationDate":"2023-01-16 17:32:09","Q_Score":10,"ViewCount":3987,"Question":"I know that someone will face this problem. I had this problem today, but I could fix it promptly, and I want to share my solution:\nProblem:\nfrom flask_socketio import SocketIO\n\nYou will receive an output error with something like:\n\nAttribute Error: module \"dns.rdtypes\" has no attribute ANY\n\nThis only happens if you have installed eventlet, because it install dnspython with it.\nThe solution is simple, just reinstall dnspython for previous realease:\n\npython3 -m pip install dnspython==2.2.1\n\nThe problem should disappear","Title":"Eventlet + DNS Python Attribute Error: module \"dns.rdtypes\" has no attribute ANY","Tags":"python,flask-socketio,eventlet,dnspython","AnswerCount":2,"A_Id":75137718,"Answer":"The solution is simple, just reinstall dnspython for previous realease:\n\npython3 -m pip install dnspython==2.2.1\n\nThe problem should disappear","Users Score":13,"is_accepted":false,"Score":1.0,"Available Count":2},{"Q_Id":75137717,"CreationDate":"2023-01-16 17:32:09","Q_Score":10,"ViewCount":3987,"Question":"I know that someone will face this problem. I had this problem today, but I could fix it promptly, and I want to share my solution:\nProblem:\nfrom flask_socketio import SocketIO\n\nYou will receive an output error with something like:\n\nAttribute Error: module \"dns.rdtypes\" has no attribute ANY\n\nThis only happens if you have installed eventlet, because it install dnspython with it.\nThe solution is simple, just reinstall dnspython for previous realease:\n\npython3 -m pip install dnspython==2.2.1\n\nThe problem should disappear","Title":"Eventlet + DNS Python Attribute Error: module \"dns.rdtypes\" has no attribute ANY","Tags":"python,flask-socketio,eventlet,dnspython","AnswerCount":2,"A_Id":75359056,"Answer":"I suggest taking the opposite route, i.e. upgrading eventlet (to 0.33.3 at the time of this writing) rather than downgrading dnspython.","Users Score":3,"is_accepted":false,"Score":0.2913126125,"Available Count":2},{"Q_Id":75138708,"CreationDate":"2023-01-16 19:12:21","Q_Score":1,"ViewCount":99,"Question":"def changeList(k):\n first = k.pop(0)\n last = k.pop(-1)\n k.insert(0, last)\n k.insert(-1, first)\n return k\n\nk = [9, 0, 4, 5, 6]\nprint(changeList(k))\n\nMy goal is to interchange the first and last elements in my list. But when I run the code, it switches to this list [6, 0, 4, 9, 5], instead of [6, 0, 4, 5, 9].\nI tried popping the first and last elements in my list, and then tried inserting the new elements in my list.","Title":"Interchange First and Last Elements in a list in Python","Tags":"python,list,function","AnswerCount":3,"A_Id":75138800,"Answer":"The way list.insert(index, element) works is that it will put that element in that specific index. Your list right before the last insert is [6, 0, 4, 5]. Index -1 refers to the last index at the time prior to the index being inserted, which is index 3. 
So it puts 9 at index 3, resulting in [6, 0, 4, 9, 5].\nYou can use the swaps as shown in earlier answers, or you can use list.append() to add to the very end.","Users Score":2,"is_accepted":false,"Score":0.1325487884,"Available Count":1},{"Q_Id":75139060,"CreationDate":"2023-01-16 19:48:56","Q_Score":2,"ViewCount":241,"Question":"(I'm new to stack overflow, but I will try to write my problem the best way I can)\nFor my thesis, I need to do the optimization for a mean squares error problem as fast as possible. For this problem, I used to use the scipy.optimize.minimize method (with and without jacobian). However; the optimization was still too slow for what we wanted to do. (This program is running on mac with python 3.9)\nSo first, this is the function to minimize (I already tried to simplify the formula, but it didn't change the speed of the program\n def _residuals_mse(coef, unshimmed_vec, coil_mat, factor):\n \"\"\" Objective function to minimize the mean squared error (MSE)\n\n Args:\n coef (numpy.ndarray): 1D array of channel coefficients\n unshimmed_vec (numpy.ndarray): 1D flattened array (point) \n coil_mat (numpy.ndarray): 2D flattened array (point, channel) of masked coils\n (axis 0 must align with unshimmed_vec)\n factor (float): Devise the result by 'factor'. This allows to scale the output for the minimize function to avoid positive directional linesearch\n\n Returns:\n scalar: Residual for least squares optimization \n \"\"\"\n\n # MSE regularized to minimize currents\n return np.mean((unshimmed_vec + np.sum(coil_mat * coef, axis=1, keepdims=False)) ** 2) \/ factor + \\ (self.reg_factor * np.mean(np.abs(coef) \/ self.reg_factor_channel))\n\n\nThis is the jacobian of the function ( There is maybe a way to make it faster but I didn't succeed to do it)\n def _residuals_mse_jacobian( coef, unshimmed_vec, coil_mat, factor):\n \"\"\" Jacobian of the function that we want to minimize, note that normally b is calculates somewhere else \n Args:\n coef (numpy.ndarray): 1D array of channel coefficients\n unshimmed_vec (numpy.ndarray): 1D flattened array (point) of the masked unshimmed map\n coil_mat (numpy.ndarray): 2D flattened array (point, channel) of masked coils\n (axis 0 must align with unshimmed_vec)\n factor (float): integer\n\n Returns:\n jacobian (numpy.ndarray) : 1D array of the gradient of the mse function to minimize\n \"\"\"\n b = (2 \/ (unshimmed_vec.size * factor))\n jacobian = np.array([\n self.b * np.sum((unshimmed_vec + np.matmul(coil_mat, coef)) * coil_mat[:, j]) + \\\n np.sign(coef[j]) * (self.reg_factor \/ (9 * self.reg_factor_channel[j]))\n for j in range(coef.size)\n ])\n\n return jacobian\n\nAnd so this is the \"main\" program\nimport numpy as np \nimport scipy.optimize as opt\nfrom numpy.random import default_rng\nrand = default_rng(seed=0)\nreg_factor_channel = rand.integers(1, 10, size=9) \ncoef = np.zeros(9)\nunshimmed_vec = np.random.randint(100, size=(150))\ncoil_mat = np.random.randint(100, size=(150,9))\nfactor = 2 \nself.reg_factor = 5 \ncurrents_sp = opt.minimize(_residuals_mse, coef,\n args=(unshimmed_vec, coil_mat, factor),\n method='SLSQP',\n jac = _residuals_mse_jacobian,\n options={'maxiter': 1000})\n\nOn my computer, the optimization takes around 40 ms for a dataset of this size.\nThe matrices in the example are usually obtained after some modifications and can be way way bigger, but here to make it clear and easy to test, I choose some arbitrary ones. 
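For the first/last swap question above (Q_Id 75138708), the usual tuple-unpacking swap that the answer alludes to looks like this:
k = [9, 0, 4, 5, 6]
k[0], k[-1] = k[-1], k[0]   # swap first and last in place, no pop/insert needed
print(k)                    # [6, 0, 4, 5, 9]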
In addition, this optimization is done many times (Sometimes up to 50 times), so, we are already doing multiprocessing (To do different optimization at the same time). However on mac, mp is slow to start because of the spawning method (because fork is not stable on python 3.9). For this reason, I am trying to make the optimization as fast as possible to maybe remove multiprocessing.\nDo any of you know how to make this code faster in python ? Also, this code will be available in open source for users, so I can only free solver (unlike MOSEK)\nEdit : I tried to run the code by using the CVXPY model, with this code after the one just above:\n m = currents_0.size\n n = unshimmed_vec.size\n coef = cp.Variable(m)\n unshimmed_vec2 = cp.Parameter((n))\n coil_mat2 = cp.Parameter((n,m))\n unshimmed_vec2.value = unshimmed_vec\n coil_mat2.value = coil_mat\n\n x1 = unshimmed_vec2 + cp.matmul(coil_mat2,coef)\n x2 = cp.sum_squares(x1) \/ (factor*n)\n x3 = self.reg_factor \/ self.reg_factor_channel@ cp.abs(coef) \/ m\n obj = cp.Minimize(x2 + x3)\n prob = cp.Problem(obj)\n\n prob.solve(solver=SCS)\n\nHowever, this is slowing even more my code, and it gives me a different value than with scipy.optimize.minimize, so does anyone see a problem in this code ?","Title":"How to do the optimization for a mean squares error in a Python code faster","Tags":"optimization,mathematical-optimization,python-3.9,scipy-optimize-minimize,mean-square-error","AnswerCount":2,"A_Id":75152477,"Answer":"I would suggest trying the library NLOpt. It also has SLSQP as nonlinear solver (among many others), and I found it to be faster in many instances than SciPy optimize.\nHowever, you\u2019re talking 50 ms per run, you won\u2019t get down to 5 ms.\nIf you\u2019re looking to squeeze as much performance as possible, I would probably go to the metal and re-implement the objective function and Jacobian in Fortran (or C) and then use f2py (or Cython) to bridge them to Python. Looks a bit of an overkill to me though.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75139482,"CreationDate":"2023-01-16 20:44:39","Q_Score":0,"ViewCount":28,"Question":"I am very beginner to Linux as I recently started using it. I installed different libraries like numpy, pandas etc.\nimport numpy as np\nimport pandas as pd\nIt raises a ModuleNotFoundError in VS Code. But when I run the same code in Terminal, there's no issue.\nNote: I installed these libraries with\npip3 install package\nOS: Ubuntu 22.04\nI tried to uninstall the package and reinstall but still not working. I also tried to install by\nsudo apt-get install python3-pandas.\nNothing works out.","Title":"import in python working on my Linux Terminal but raising ModuleNotFoundError on VS code","Tags":"python,linux,visual-studio-code,modulenotfounderror,ubuntu-22.04","AnswerCount":2,"A_Id":75141177,"Answer":"It seems that you have two or more interpreter of python.\nYou can use shortcuts \"Ctrl+Shift+P\" and type \"Python: Select Interpreter\" to choose the correct python interpreter in VsCode.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75139482,"CreationDate":"2023-01-16 20:44:39","Q_Score":0,"ViewCount":28,"Question":"I am very beginner to Linux as I recently started using it. I installed different libraries like numpy, pandas etc.\nimport numpy as np\nimport pandas as pd\nIt raises a ModuleNotFoundError in VS Code. 
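One aside of my own on the MSE-optimisation question (Q_Id 75139060), separate from the solver suggestions in the answer: the per-channel Python loop in the Jacobian can be replaced by matrix products; treat this as an untested sketch that keeps the question's variable names but passes the regularisation terms in as plain arguments instead of attributes.
import numpy as np

def residuals_mse_jacobian(coef, unshimmed_vec, coil_mat, factor, reg_factor, reg_factor_channel):
    b = 2 / (unshimmed_vec.size * factor)
    residual = unshimmed_vec + coil_mat @ coef
    # coil_mat.T @ residual gives, for each channel j, sum_i residual_i * coil_mat[i, j]
    return b * (coil_mat.T @ residual) + np.sign(coef) * (reg_factor / (9 * reg_factor_channel))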
But when I run the same code in Terminal, there's no issue.\nNote: I installed these libraries with\npip3 install package\nOS: Ubuntu 22.04\nI tried to uninstall the package and reinstall but still not working. I also tried to install by\nsudo apt-get install python3-pandas.\nNothing works out.","Title":"import in python working on my Linux Terminal but raising ModuleNotFoundError on VS code","Tags":"python,linux,visual-studio-code,modulenotfounderror,ubuntu-22.04","AnswerCount":2,"A_Id":75139734,"Answer":"Without all the context, it sounds like you have a few different python environments.\nIn terminal check which python you are using which python\nIn VSCode settings check Python: Default Interpreter Path\nThat might help you understand what is going on. Make sure that the VSCode python path is the same path that your terminal prints out.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75139619,"CreationDate":"2023-01-16 20:56:57","Q_Score":1,"ViewCount":79,"Question":"How would i go about testing the following class and its functions?\nimport yaml\nfrom box import Box\nfrom yaml import SafeLoader\n\n\nclass Config:\n def set_config_path(self):\n self.path = r\".\/config\/datasets.yaml\"\n return self.path\n\n def create_config(self):\n with open(r\".\/config\/datasets.yaml\") as f:\n self.config = Box(yaml.load(f, Loader=SafeLoader))\n return self.config\n\nThese are the current tests I have created so far, but i am struggling with the final function:\nimport unittest\nfrom unittest.mock import mock_open, patch\nfrom src.utils.config import Config\n\n\nclass TestConfig(unittest.TestCase):\n def setUp(self):\n self.path = r\".\/config\/datasets.yaml\"\n\n def test_set_config_path(self):\n assert Config.set_config_path(self) == self.path\n\n @patch(\"builtins.open\", new_callable=mock_open, read_data=\"data\")\n def test_create_config(self, mock_file):\n assert open(self.path).read() == \"data\"\n\nHow would i go about testing\/mocking the Box() and yaml.load() methods.\nI have tried mocking where the Box and yaml.load() functions are used in the code - however i dont fully understand how this works.\nIdeally I'd want to be able to pass a fake file to the with open() as f:, which then is read by Box and yaml.load to output a fake dictionary config.\nThanks!","Title":"How do i mock an external libraries' classes\/functions such as yaml.load() or Box() in python","Tags":"python,unit-testing,mocking,with-statement","AnswerCount":1,"A_Id":75139769,"Answer":"The thing to remember about unit tests is that the goal is to test public interfaces of YOUR code. So to mock a third parties code is not really a good thing to do though in python it can be done but would be alot of monkey patching and other stuff.\nAlso creating and deleting files in a unit test is fine to do as well. So you could just create a test version of the yaml file and store it in the unit tests directory. During a test load the file and then do assertions to check that it was loaded properly and returned.\nYou wouldn't do a unit test checking if Box was initialized properly cause that should be in another test or test case. Unless its a third party then you would have to make sure it was initialized properly cause it's not your code.\nSo create a test file, open it and load it as yaml then pass it into Box constructor. Do assertions to make sure those steps completed properly. 
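As a rough illustration of that suggestion (a sketch only -- it assumes pytest's tmp_path fixture and that the python-box package is installed; the sample YAML content and names are made up):
import yaml
from box import Box
from yaml import SafeLoader


def test_yaml_loads_into_box(tmp_path):
    # Write a small, known YAML file instead of mocking open()/yaml/Box
    config_file = tmp_path / "datasets.yaml"
    config_file.write_text("dataset:\n  name: iris\n  rows: 150\n")

    with open(config_file) as f:
        config = Box(yaml.load(f, Loader=SafeLoader))

    # Assert that the real libraries produced the structure we expect
    assert config.dataset.name == "iris"
    assert config.dataset.rows == 150
If the hard-coded ".\/config\/datasets.yaml" path gets in the way, letting create_config accept the path as an argument makes it testable against such a temporary file.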
No need to mock yaml or Box.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75140683,"CreationDate":"2023-01-16 23:32:22","Q_Score":0,"ViewCount":27,"Question":"I'm trying to create a model that predicts customer status change.\nTo give context, there are 4 statuses a customer can have: [A, B, C, D]\nEach customer must have one status, and that status can change. I'm making a model with the current status as one of the features and the next status as the label.\nIs there a way to hardcode a rule into SVM (or other classifiers) that prevents the model from classifying the label as the current status? In other words, if a customer's current status is A, its next status cannot be A, it has to be either B, C, or D.\nIf anyone knows whether sklearn has similar capabilities that would help.","Title":"scikit-learn adding rules to classification model","Tags":"python,machine-learning,scikit-learn","AnswerCount":1,"A_Id":75140872,"Answer":"As far as I know, there are two ways to solve this problem but it is not inside an SVM.\nFirst Way - series\nImplementing a rule-based classifier first then applying SVM...\nSecond way - Parallel\nImplementing a rule-based classifier and SVM parallel and choosing the best one in the end layer combining together.\ne.x Ensemble learning\nboth ways probably work in some cases, but you should try and see the results to choose the best way I guess the second one might work better.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75142154,"CreationDate":"2023-01-17 05:05:41","Q_Score":3,"ViewCount":60,"Question":"I was able to save the data to a Django model without any errors, but data not reflected in db. But after a sleep time I was able to save the data again with same method. 
What might be causing this ?\nI suspect use of the Google API, but was able to print the data before performing the save operation.\ndef update_channel():\n client = Client.objects.get(name=\"name\")\n print(f\"Existing channel: {data['id']}\") # 123\n\n # fetch channel data from google api\n data = google_drive.subscribe_new_channel()\n\n client.channel_id = data[\"id\"]\n client.channel_resource_id = data[\"resourceId\"]\n client.save()\n\n client.refresh_from_db()\n print(f\"New channel: {data['id']}\") # 456 \n print(f\"New channel in db: {client.channel_id}\") # 456\n\n time.sleep(5)\n client.refresh_from_db() \n print(f\"channel in db: {client.channel_id}\") # 123\n\n\nSample Output:\nExisting channel: 123\nNew channel: 456\nNew channel in db: 456 \nchannel in db: 123","Title":"Refrehing the Django model after save and 5 second sleep get me old state, what's wrong?","Tags":"python,django,google-api","AnswerCount":1,"A_Id":75142247,"Answer":"This can happen if another process has already fetched the same client object and saved the object after your save operation.\nIn this case, the data in the second process still be the old one and overwrites your change when it saves.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75142891,"CreationDate":"2023-01-17 07:00:05","Q_Score":1,"ViewCount":45,"Question":"Hi I have been trying to improve the db performance and had done some basic research regarding having a db partition and db sharding and also having 2 dbs one for write and other for read .\nHowever i found out that the db sharding is the best way out of all as the mapping provided by sharding is dynamic that is one of the requirement to put it bluntly i have provided the 2 cases below\nCase 1:- we need to get all the transaction of a user (which is huge)\nCase 2:- we need all the data for a particular time interval for all the user (which is again huge)\nBecause of the above scenerios I'm looking to implement db sharding\nNote:- I have already segregated some db into multiple databases already and they sit on different machines so i want it to be applied to all those multiple databases\nWhat I'm Looking for :\n\nAny link that could be helpful\nAny snippet code that could be helpful\n\nDjango==3.2.13\nMySql == 5.7","Title":"Data Base Sharding In django using MySQL","Tags":"mysql,python-3.x,django,sharding","AnswerCount":1,"A_Id":75163095,"Answer":"Let me define some terms so that were are \"on the same page\":\nReplication or Clustering -- Multiple servers having identical datasets. They are kept in sync by automatically transferring all writes from one server to the others. One main use is for scaling reads; it allows many more clients to connect simultaneously.\nPARTITION -- This splits one table into several, based on date or something else. This is done in a single instance of MySQL. There are many myths about performance. The main valid use is for purging old data in a huge dataset.\nSharding -- This involves splitting up a dataset across multiple servers. A typical case is splitting by user_id (or some other column in the data). The use case is to scale writes. (On pure MySQL, the developer has to develop a lot of code to implement Sharding. There are add-ons, especially in MariaDB, that help.)\nYour use case\nYour \"2 dbs one for write and other for read\" sounds like Replication with 2 servers. It may not give you as much benefit as you hope for.\nYou are talking about SELECTs that return millions of rows. 
None of the above inherently benefits such, even if you have several simultaneous connections doing such.\nPlease provide some numbers -- RAM size, setting of innodb_buffer_pool_size, and dataset size (in GB) of the big SELECTs. With those numbers, I can discuss \"caching\" and I\/O and performance. Performing multiple queries on the same dataset may benefit from caching on a single server.\nReplication and Sharding cannot share the caching; Partitioning has essentially no impact. That is, I will try to dissuade you from embarking on a technique that won't help and could hurt.\nPlease further describe your task; maybe one of the techniques will help.\nP.S., Replication, Partitioning, and Sharding are mostly orthogonal. That is any combination of them can be put together. (But rarely is.)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75144059,"CreationDate":"2023-01-17 09:03:34","Q_Score":2,"ViewCount":45,"Question":"I have a problem starting Playwright in Python maximized. I found some articles for other languages but doesn't work in Python, also nothing is written about maximizing window in Python in the official documentation.\nI tried browser = p.chromium.launch(headless=False, args=[\"--start-maximized\"])\nAnd it starts maximized but then automatically restores back to the default small window size.\nAny ideas?\nThanks","Title":"Python Playwright start maximized window","Tags":"python,playwright,playwright-python","AnswerCount":1,"A_Id":75144132,"Answer":"I just found the answer:\nI need to set also the following and it works: browser.new_context(no_viewport=True)","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75144329,"CreationDate":"2023-01-17 09:30:22","Q_Score":1,"ViewCount":79,"Question":"I am using transformer to do a speech classification task.\nI used two methods to split my_dataset into training set and test set.\nThe first is torch.utils.data.random_split:\ntrain_len = int(0.9 * len(my_dataset))\n\nlengths = [train_len , len(my_dataset) - train_len]\n\ntrain_set, valid_set = random_split(my_dataset, lengths)\n\nThe second is sklearn.model_selection.train_test_split:\ntrain_set, valid_set = train_test_split(my_dataset, test_size=0.1)\n\nI have tries many times. When I use the first method, the accuracy rate is always 60%, but when I use the second method, the accuracy rate is only 55%.\nSo what is the difference between sklearn.model_selection.train_test_split and torch.utils.data.random_split?\nThe two methods are only different in the way the data set is divided, and the others are the same.","Title":"what is the difference between sklearn.model_selection.train_test_split and torch.utils.data.random_split?","Tags":"python,scikit-learn,pytorch,transformer-model","AnswerCount":1,"A_Id":75144624,"Answer":"It might be a different split. There are many ways to spilt 90:10. If the dataset is not large enough, the accuracy will depend on the actual split. You could compare the entries of the splits.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75144722,"CreationDate":"2023-01-17 10:00:50","Q_Score":1,"ViewCount":89,"Question":"I am running a Pyspark AWS Glue Job that includes a Python UDF. 
In the logs I see this line repeated.\nINFO [Executor task launch worker for task 15765] python.PythonUDFRunner (Logging.scala:logInfo(54)): \nTimes: total = 268103, boot = 21, init = 2187, finish = 265895\n\nDoes anyone know what this logInfo (total\/boot\/init\/finish) means??\nI have looked at the Spark code and I am none the wiser and there isn't a mention of this info anywhere else I have looked for","Title":"AWS Glue Pyspark Python UDFRunner timing info total\/boot\/init\/finish","Tags":"python,apache-spark,pyspark,user-defined-functions,aws-glue","AnswerCount":1,"A_Id":76584191,"Answer":"Ok so this is what it all means:\n\ntotal: This is the total time taken to execute the Python UDF, measured in milliseconds.\nboot: This is the time taken to boot up the Python interpreter process that runs the UDF. This typically includes loading Python interpreter, libraries, and modules.\ninit: This is the time taken to initialize the UDF in the Python interpreter process. This typically includes time taken to deserialize and initialize the Python UDF and its dependencies.\nfinish: This is the time taken by the Python UDF to finish execution after the initialization is complete. It is computed by subtracting boot and init time from total.\n\nNow hopefully it makes more sense.\nAnd remember: if possible do not use Python UDFs but try to create a PandasUDF instead.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75146531,"CreationDate":"2023-01-17 12:38:50","Q_Score":1,"ViewCount":62,"Question":"I have the following code in Python 3.9:\nfirst_entries = [r[0] for r in result]\nseconds_entries = [r[1] for r in result]\nthird_entries = [r[2] for r in result]\n\nwhere result is a list of tuples of the following form:\nresult = [(x1,x2,x3),(y1,y2,y3),...]\n\nIs there a way to write this into one line and iterate over result only once?","Title":"Multiple list comprehensions in one line in python","Tags":"python,list","AnswerCount":2,"A_Id":75146609,"Answer":"first_entries, seconds_entries, third_entries = zip(*result)\nworks as expected","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75146804,"CreationDate":"2023-01-17 12:58:38","Q_Score":1,"ViewCount":621,"Question":"I installed psycopg2 using pip but when I import psycopg2 I get this error\nImportError: dlopen(\/Users\/lce21\/Documents\/GitHub\/hazen-web- \napp\/hazen-web-app\/lib\/python3.8\/site- \npackages\/psycopg2\/_psycopg.cpython-38-darwin.so, 0x0002): Library \nnot loaded: \/usr\/local\/opt\/postgresql\/lib\/libpq.5.dylib\n\n Referenced from: \/Users\/lce21\/Documents\/GitHub\/hazen-web- \napp\/hazen-web-app\/lib\/python3.8\/site- \npackages\/psycopg2\/_psycopg.cpython-38-darwin.so\n\n Reason: tried: '\/usr\/local\/opt\/postgresql\/lib\/libpq.5.dylib' \n(no such file), '\/usr\/local\/lib\/libpq.5.dylib' (no such file), \n'\/usr\/lib\/libpq.5.dylib' (no such file), \n'\/usr\/local\/Cellar\/postgresql@14\/14.6\/lib\/libpq.5.dylib' (no such \nfile), '\/usr\/local\/lib\/libpq.5.dylib' (no such file), \n'\/usr\/lib\/libpq.5.dylib' (no such file)\n\nThings tried:\npip install psycopg2-binary\nMacOS pip install psycopg2 with sudo and in the venv. No errors when I installed. 
Postgres installed.\nI might need to change location of files but I don't know how to do that","Title":"psycopg2 library not loaded","Tags":"python,psycopg2","AnswerCount":1,"A_Id":75244152,"Answer":"I found the problem was that I had installed in my system a different version of postgresql than the version on my virtual env.\nSo I had to unistall from the system postgresql and then reinstall it.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75147334,"CreationDate":"2023-01-17 13:43:14","Q_Score":0,"ViewCount":49,"Question":"I have a numpy array r and I need to evaluate a scalar function, let's say np.sqrt(1-x**2) on each element x of this array. However, I want to return the value of the function as zero, whenever x>1, and the value of the function on x otherwise.\nThe final result should be a numpy array of scalars.\nHow could I write this the most pythonic way?","Title":"Evaluate scalar function on numpy array with conditionals","Tags":"python,arrays,numpy,if-statement","AnswerCount":3,"A_Id":75147438,"Answer":"You can use like numpy.where(condition,if condition holds,otherwise) so np.where(x>1,0,np.sqrt(1-x**2)) will be answer","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75147699,"CreationDate":"2023-01-17 14:09:26","Q_Score":1,"ViewCount":57,"Question":"Suppose I have a number of items that I put in a queue for other processes to deal with. The items are rather large in memory, therefore I limit the queue size. At some point I will have no more things to put in the queue. How can I signal the other processes that the queue is closed?\nOne option would be to close the child processes when the queue is empty, but this relies on the queue being emptied slower than it is being filled.\nThe documentation of multiprocessing.Queue talks about the following method:\n\nclose()\nIndicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.\n\nIs it safe to call close while there are still items in the queue? Are these items guaranteed to be processed? 
How can a another processes know that the queue is closed?","Title":"Multiprocessing queue closing signal","Tags":"python,multiprocessing,queue","AnswerCount":3,"A_Id":75149188,"Answer":"a multiprocessing queue is simply a pipe with a lock to avoid concurrent reads\/writes from different processes.\na pipe typically has 2 sides, a read and a write, when a process tries to read from a pipe, the OS will first serve things that are in the pipe, but if the pipe is empty, the OS will suspend this process, and check if any process can write to the write end, if the answer is yes, then the OS just keeps this process suspended till someone else writes to the pipe, and if there is no one else that can write to the pipe, then the OS will send an end-of-file to the reader, which wakes him up and tells him \"don't wait on a message, none can send a message on this pipe\".\nin the case of a queue, it is different, as the reading process has both a read and a write ends of this pipe, the number of processes that can write to the queue is never zero, so reading from a queue that no other process can write to will result in the program being paused indefinitely, the reader has no direct way of knowing that the queue was closed by the other processes when they do.\nthe way multiprocessing library itself handles it in its pools is to send a message on the queue that will terminate the workers, for example the reader can terminate once it sees None on the pipe or some predefined object or string like \"END\" or \"CLOSE\", since this will be the last item on the queue, there should be no items after it, and once the reader reads it he will terminate, and if you have multiple readers then you should send multiple end messages on the queue.\nbut what if the child process crashes or for some reason doesn't send it ? your main process will be stuck on the get and will be suspended indefinitely .... so if you are manually using a queue you should take all precautions to make sure this doesn't happen (like setting a timeout, and monitoring the other writers in another thread, etc.)","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75148831,"CreationDate":"2023-01-17 15:41:30","Q_Score":1,"ViewCount":86,"Question":"I created a virtual environment inside the www\/mysite\/venv folder and have a python script inside the folder that I'm trying to execute from the web browser. The PHP function I'm using is shell_exec().\n\n\nThe second line in the script runs but doesn't work properly because the required pip libraries are in the virtual environment and the environment does not get activate\nI've also tried:\n\n\/bin\/bash\/source\n\n\/bin\/sh\/source\n\nsource bin\/activate","Title":"How do I get shell_exec() to change the environment before running a python script","Tags":"python,php,linux","AnswerCount":2,"A_Id":75148916,"Answer":"You need to find the path to the python executable for your virtual environment (like ~\/.venv\/path\/to\/python or something similar). You can find out when your python venv is active, just do a which python3 to see it.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75148831,"CreationDate":"2023-01-17 15:41:30","Q_Score":1,"ViewCount":86,"Question":"I created a virtual environment inside the www\/mysite\/venv folder and have a python script inside the folder that I'm trying to execute from the web browser. 
The PHP function I'm using is shell_exec().\n\n\nThe second line in the script runs but doesn't work properly because the required pip libraries are in the virtual environment and the environment does not get activate\nI've also tried:\n\n\/bin\/bash\/source\n\n\/bin\/sh\/source\n\nsource bin\/activate","Title":"How do I get shell_exec() to change the environment before running a python script","Tags":"python,php,linux","AnswerCount":2,"A_Id":75159350,"Answer":"shell_exec(\"\/home\/www\/mysite\/venv\/bin\/python3 \/home\/www\/mysite\/venv\/python-script.py\"); worked without having to activate the virtual environment. I had to give the full path to the python version installed in the venv and the full path to the location of the script","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75148949,"CreationDate":"2023-01-17 15:50:49","Q_Score":1,"ViewCount":111,"Question":"I am currently trying to establish connection from server (64b windows) to Avaya CMS DB - which is Informix engine - using IfxPy. IfxPy 3.0.5 comes with HCL ONEDB ODBC DRIVER (version 1.0.0.0), which is copied into site-packages.\nI think I've properly installed the IfxPy module, pointed ODBC registers to iclit09b.dll, setup system environment \"INFORMIXDIR\". I am able to import IfxPy, but IfxPy.connect returns error from db server. The error I got suggests, that ODBC driver adds server hostname with \"@\" to the user id and Informix is not understanding that.\nIs there anybody with experience with connecting to Avaya CMS DB from Python?\nimport os\nif 'INFORMIXDIR' in os.environ: #add dll lookup folder to BIN folder\n os.add_dll_directory(os.path.join(os.environ['INFORMIXDIR'],\"bin\"))\nimport IfxPy\nconStr=\"SERVER=cms_net;HOST=10.10.10.10;SERVICE=50001;PROTOCOL=olsoctcp;DATABASE=cms;UID=myuser;PWD=mypassw;CLIENT_LOCALE=en_US.UTF8;\"\n\nconn = IfxPy.connect(conStr, \"\", \"\") \nIfxPy.close(conn)\n\nException I get: [OneDB][OneDB ODBC Driver][OneDB]Incorrect password or user myuser@myserver[myserver full domain path] is not known on the database server. SQLCODE=-951\nAny ideas?\nI worked with our Avaya guy to add user into proper group and grant DB access. But we still can not figure this one out.\nNB: DB is set to accept protocol olsctcp on port 50001.\nThank you.\nI did research on internet, with not much luck. I am considering to use Informix Client SDK from IBM, hopefully that will work.\nI've also tested to establish connection from ODBC 64b windows, it allows proper setup, but test fails with exactly same error.","Title":"IfxPy connection to Informix DB problem - user with added hostname","Tags":"python,odbc,informix,avaya","AnswerCount":1,"A_Id":75155522,"Answer":"So, for everyone dealing with connection to Informix, error -951 really means that user is not created on DB or that user has one time password and cannot login.\nThat was our case, after we've tried to logon directly on database server.\nIn other news, IfxPy can be just pointed to ODBC iclit09b.dll (from IBM Informix ODBC driver, or the one comming in with IfxPy 3.0.5 installation) and it will work without INFORMIXDIR system variable, it will also work without setting up ODBC driver in Windows registers.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75149016,"CreationDate":"2023-01-17 15:55:37","Q_Score":1,"ViewCount":290,"Question":"I've trained my object detection model based on YOLOV7 and YOLOV5. 
Now, for some reason i need to change name of classes.\nIs it possible to change classes names and save it again as Pytorch ML model file (.pt) I've searched but there is no clear solution for this.\nFor example: If i load the model like this;\nmodel = torch.load('model file path', map_location=map_location)\n\nand then set the new class names;\nmodel.names = ['face', 'head', 'helmet']\n\nafter that how can i save this new model with it's class names as best.pt file.","Title":"how to rename classes of trained model in Pytorch","Tags":"python-3.x,machine-learning,pytorch,object-detection,yolo","AnswerCount":1,"A_Id":76596360,"Answer":"I've solved the issue in a very unexpected way;\nTo solve the proble follow these steps;\n\nExtract .pt file(it's actually a zip file) to a directory\nUse some data file reader application(i.e, Hex Editor Neo, HxD Hex Editor) in HexaDecimal format to change data.pkl file which class names are in.\nFind your class names by searching in the application.\nBe carefull!(If you change wrong place, you may damage the file) Change only the corresponding characters to your class names.\nSave it\nZip the files back, together with file you changed, which you extracted\nChange the file extension back as .pt\nThat's it!","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75149248,"CreationDate":"2023-01-17 16:15:24","Q_Score":1,"ViewCount":80,"Question":"It is unclear for me if this snippet of merge sort has space complexity of O(n log n) or O(n).\ndef mergeSort(L): \n N = len(L)\n \n if N <= 1:\n return L\n \n mid = N \/\/ 2\n L1 = mergeSort(L[: mid])\n L2 = mergeSort(L[mid :])\n return merge(L1, L2)\n \n\nAssuming that merge(L1, L2) uses auxiliar memory of O(len(L)) (i.e O(n)), doesn't every level of the recursion tree use O(n) auxiliary memory. And as long the tree has like O(log n) levels wouldn't it be O(n log n) ? A lot of sources on the internet use the exact same implementation and they say that the space complexity is O(n), and I do not understand why?","Title":"Space complexity for mergesort","Tags":"python,sorting","AnswerCount":2,"A_Id":75149456,"Answer":"The space complexity is O(N), which is not just expected case, but also best and worst case. While O(log(N)) levels of recursion may be active, a merge can be in progress only on the current level, and is not itself recursive. When the memory for the merge is done, it's released - it doesn't remain in use. All merges can (re)use the same chunk of N locations.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75149248,"CreationDate":"2023-01-17 16:15:24","Q_Score":1,"ViewCount":80,"Question":"It is unclear for me if this snippet of merge sort has space complexity of O(n log n) or O(n).\ndef mergeSort(L): \n N = len(L)\n \n if N <= 1:\n return L\n \n mid = N \/\/ 2\n L1 = mergeSort(L[: mid])\n L2 = mergeSort(L[mid :])\n return merge(L1, L2)\n \n\nAssuming that merge(L1, L2) uses auxiliar memory of O(len(L)) (i.e O(n)), doesn't every level of the recursion tree use O(n) auxiliary memory. And as long the tree has like O(log n) levels wouldn't it be O(n log n) ? 
A lot of sources on the internet use the exact same implementation and they say that the space complexity is O(n), and I do not understand why?","Title":"Space complexity for mergesort","Tags":"python,sorting","AnswerCount":2,"A_Id":75149392,"Answer":"The time complexity is O(nlog(n))","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75150564,"CreationDate":"2023-01-17 18:12:42","Q_Score":1,"ViewCount":63,"Question":"import pandas as pd\n import numpy as np\n import tensorflow as tf\n\n data = pd.read_csv(\"Amex.csv\")\n\n data.head()\n\n X = data.iloc[:, :-1].values\n Y = data.iloc[:, -1].values\n\n from sklearn.model_selection import train_test_split\n\n x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=1234)\n\n from sklearn.preprocessing import StandardScaler\n sc = StandardScaler()\n x_train = sc.fit_transform(x_train)\n x_test = sc.fit_transform(x_test)\n\n\n ann = tf.keras.models.Sequential()\n\n ann.add(tf.keras.layers.Dense(units=1000, activation='sigmoid'))\n ann.add(tf.keras.layers.Dense(units=1280, activation='sigmoid'))\n\n ann.add(tf.keras.layers.Dense(units=10, activation='softmax'))\n ann.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n\n ann.fit(x_train, y_train, batch_size=32, epochs=200)\n\n print(ann.predict(sc.transform([[3,7,9,8,8,1,4,4,7,0,4,5,2,6]])))`\n\nI have trained the model with an accuracy of 0.9994 The answer should be 1, but I get an array list\noutput\n [[8.7985291e-06 2.5825528e-04 2.8821041e-03 1.0145088e-04 1.5824498e-04 8.1912667e-06 1.9685100e-03 9.9447292e-01 6.3032545e-05 7.8425743e-05]]","Title":"Neural network to verify amex check digit","Tags":"python,keras,deep-learning,neural-network,tensorflow2.0","AnswerCount":1,"A_Id":75414830,"Answer":"Thanks @Dr. Snoopy for the answer and @AlphaTK for confirming that the issue got resolved. Adding this comment into the answer section for the community benefit.\n\nThis is just an array of probabilities output by the model and an\nargmax should be applied to obtain a class index.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75152920,"CreationDate":"2023-01-17 22:34:48","Q_Score":0,"ViewCount":14,"Question":"I am working on a learning how to fill in NaN in a Python DataFrame. DataFrame called data containing an age column and only one row has an NaN. I applied the following:\ndata.fillna(data.mean(),inplace=True)\nI ask to print out data and I receive a recursion msg.\nMy DataFrame only contains 4 rows if that is important.\nI was expecting the DataFrame to come back with the NaN filled in with the mean value. I also tried replacing data.mean() with a number ex. 2. 
Same error message.","Title":"Python DataFrame .fillna() Recursion Error","Tags":"python-3.x,recursion,fillna","AnswerCount":1,"A_Id":75152978,"Answer":"Not sure if this was the correct thing todo or not but I cleared out the Kernel in Jupyter Notebook and ran it just fine.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75153403,"CreationDate":"2023-01-17 23:53:22","Q_Score":1,"ViewCount":362,"Question":"everyone\nI'm trying to write a little code using the Interactive brokers API\nI opened a trade using the API of Interactive brokers and now let's say after it is profitable I want to sell it\nWhat code do I need to write in Python to sell the open position.\nAnd not to open another position in its place, but I emphasize - to sell the open position.\nMy code looks like this:\n def make_order(self):\n # create a contract for the ES futures\n contract = Future(symbol='ES', exchange='CME', currency='USD', lastTradeDateOrContractMonth='202303')\n\n # place a market order to buy or sell contract of ES\n order = MarketOrder(action=self.position, totalQuantity=1)\n trade = self.ib.placeOrder(contract, order)\n print(trade.orderStatus.status)\n\n return order\n\n**\n\nThen I call this function like this:\n\n**\ncontract = InteractiveAPI(ib, duration, interval, position, stop_price_fake)\norder_trade = contract.make_order()\n\nib.closeTrade(order_trade) # **This line doesn't work**\n\nI would appreciate it if someone knows how to fix the last line in the code.\nThank you very much everyone.","Title":"How to close an open trade using API of Interactive brokers","Tags":"python,python-3.x,interactive-brokers,python-interactive,ib-insync","AnswerCount":2,"A_Id":75179883,"Answer":"If you want to use the concept of trades and not positions then you must keep track of executions (reqExecutions) internally, daily. Outside of that IB is only aware of positions. 
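For what it is worth, closing at the position level can be sketched with the same ib_insync calls the question already uses. This is only a sketch: the connection settings are placeholders, the contract-matching logic is an assumption, and it simply sends an offsetting market order for whatever quantity is currently held.
from ib_insync import IB, Future, MarketOrder

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)  # placeholder TWS\/Gateway settings

contract = Future(symbol='ES', exchange='CME', currency='USD',
                  lastTradeDateOrContractMonth='202303')

# Look up the currently held quantity for this contract (0 if none)
held = 0
for pos in ib.positions():
    if pos.contract.symbol == 'ES' and \
       pos.contract.lastTradeDateOrContractMonth.startswith('202303'):
        held = pos.position
        break

if held != 0:
    # Send a market order in the opposite direction for the same size
    action = 'SELL' if held > 0 else 'BUY'
    order = MarketOrder(action, abs(held))
    trade = ib.placeOrder(contract, order)
    print(trade.orderStatus.status)
This works purely at the position level; it does not know which original order opened the position.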
Separating strategies is achieved with orderRefs.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75154315,"CreationDate":"2023-01-18 03:01:21","Q_Score":1,"ViewCount":1098,"Question":"For the longest time, I've been using\ndfi.export(df1, \"test.png\")\n\nto export a dataframe styler (df1) with type pandas.io.formats.style.Styler into a .png.\nToday I get the following error:\nTraceback (most recent call last):\n\n File ~\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3369 in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)\n\n Input In [48] in \n dfi.export(df1, \"ice_cotton_\" + str(dashboard_date).split(\" \")[0])\n\n File ~\\Anaconda3\\lib\\site-packages\\dataframe_image\\_pandas_accessor.py:48 in export\n return _export(\n\n File ~\\Anaconda3\\lib\\site-packages\\dataframe_image\\_pandas_accessor.py:117 in _export\n img_str = converter(html)\n\n File ~\\Anaconda3\\lib\\site-packages\\dataframe_image\\_screenshot.py:188 in run\n img = self.take_screenshot()\n\n File ~\\Anaconda3\\lib\\site-packages\\dataframe_image\\_screenshot.py:140 in take_screenshot\n img = mimage.imread(buffer)\n\n File ~\\Anaconda3\\lib\\site-packages\\matplotlib\\image.py:1560 in imread\n with img_open(fname) as image:\n\n File ~\\Anaconda3\\lib\\site-packages\\PIL\\ImageFile.py:112 in __init__\n self._open()\n\n File ~\\Anaconda3\\lib\\site-packages\\PIL\\PngImagePlugin.py:676 in _open\n raise SyntaxError(\"not a PNG file\")\n\n File \nSyntaxError: not a PNG file\n\nHow do I fix this?","Title":"SyntaxError: not a PNG file","Tags":"python,png","AnswerCount":1,"A_Id":75800695,"Answer":"This is a set of possible solutions and only the last one worked for me:\n\na previous version of matplotlib, e.g. 3.4.3\nuse a backend, e.g. \u2018mpl.use('ps')\u2019\nprevious versions of dataframe-image (tested 0.1.3, 0.1.2 and 0.1.0)\ntable_conversion, e.g. dfi.export(table, table_conversion='firefox'): this gives no error but formatting is lost\nprevious version of Chrome (i.e. January 27; 109.0.5414.119): this works!","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75155648,"CreationDate":"2023-01-18 07:02:31","Q_Score":1,"ViewCount":80,"Question":"So, I have been trying to find optimum solution for the question, but I can not find a solution which is less than o(n3).\nThe problem statemnt is :-\nfind total number of triplet in an array such that sum of a[i],a[j],a[k] is divisible by a given number d and i insert date at the final tables\nIs this best practice?\nAny other suggestions? maybe use a specific tool\/data solution?\nI prefer using python scripts since it is part of a wider project.\nThank you!","Title":"DWH primary key conflict between staging tables and DWH tables","Tags":"python,etl,data-warehouse","AnswerCount":1,"A_Id":75196474,"Answer":"Instead of a straight INSERT use an UPSERT pattern. Either the MERGE statement if your database has it, or UPDATE the existing rows, followed by INSERTing the new ones.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75161984,"CreationDate":"2023-01-18 16:02:39","Q_Score":0,"ViewCount":32,"Question":"I have a python script which is executed from terminal as\nscript.py 0001\nwhere 0001 indicates the subcase to be run. If I have to run different subcases, then I use\nscript.py 0001 0002\nQuestion is how to specify a range as input? Lets say I want to run 0001..0008. I got to know seq -w 0001 0008 outputs what I desire. 
How to pipe this to Python as input from terminal? Or is there a different way to get this done?","Title":"Passing range of numbers from terminal to Python script","Tags":"python,bash,terminal,sequence","AnswerCount":2,"A_Id":75162059,"Answer":"Tried the following already but did not work earlier as I did not have the subcases pulled in the script repo. The following works:\nscript.py 000{1..8}","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75162883,"CreationDate":"2023-01-18 17:13:42","Q_Score":1,"ViewCount":114,"Question":"Firstly, I know that similar questions have been asked before, but mainly for classification problems. Mine is a regression-style problem.\nI am trying to train a neural network using keras to evaluate chess positions using stockfish evaluations. The input is boards in a (12,8,8) array (representing piece placement for each individual piece) and output is the evaluation in pawns. When training, the loss stagnates at around 500,000-600,000. I have a little over 12 million boards + evaluations and I train on all the data at once. The loss function is MSE.\nThis is my current code:\nmodel = Sequential()\nmodel.add(Dense(16, activation = \"relu\", input_shape = (12, 8, 8)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(16, activation = \"relu\"))\nmodel.add(Dense(10, activation = \"relu\"))\nmodel.add(Dropout(0.2))\nmodel.add(Flatten())\nmodel.add(Dense(1, activation = \"linear\"))\nmodel.compile(optimizer = \"adam\", loss = \"mean_squared_error\", metrics = [\"mse\"])\nmodel.summary()\n# model = load_model(\"model.h5\")\n\nboards = np.load(\"boards.npy\")\nevals = np.load(\"evals.npy\")\nperf = model.fit(boards, evals, epochs = 10).history\nmodel.save(\"model.h5\")\nplt.figure(dpi = 600)\nplt.title(\"Loss\")\nplt.plot(perf[\"loss\"])\nplt.show()\n\nThis is the output of a previous epoch:\n145856\/398997 [=========>....................] - ETA: 26:23 - loss: 593797.4375 - mse: 593797.4375\n\nThe loss will remain at 570,000-580,000 upon further fitting, which is not ideal. The loss should decrease by a few more orders of magnitude if I am not wrong.\nWhat is the problem and how can I fix it to make the model learn better?","Title":"Keras loss value very high and not decreasing","Tags":"python,keras,loss-function,chess","AnswerCount":1,"A_Id":75163794,"Answer":"I would suspect that your evaluation data contains very big values, like 100000 pawns if one of sides forcefully wins. Than, if your model predicts something like 0 in the same position, then squared error is very high and this pushes MSE high as well. You might want to check your evaluation data and ensure they are in some limited range like [-20..20].\nFurthermore, evaluating a chess position is a very complex problem. It looks like your model has too few parameters for the task. Possible improvements:\n\nIncrease the numbers of neurons in your dense layers (say to 300,\n200, 100).\nIncrease the numbers of hidden layers (say to 10).\nUse convolutional layers.\n\nBesides this, you might want to create a simple \"baseline model\" to better evaluate the performance of your neural network. This baseline model could be just a python function, which runs on input data and does position evaluation based on material counting (like bishop - 3 pawns, rook - 5 etc.) Than you can run this function on your dataset and see MSE for it. 
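A minimal sketch of such a material-count baseline (the plane ordering and the 0\/1 occupancy encoding of the (12, 8, 8) boards are assumptions -- adjust them to match your own encoding):
import numpy as np

# Assumed plane order: planes 0-5 are white P, N, B, R, Q, K and
# planes 6-11 the same pieces for black, encoded as 0\/1 occupancy.
PIECE_VALUES = np.array([1, 3, 3, 5, 9, 0], dtype=np.float32)

def material_baseline(boards):
    """boards has shape (n, 12, 8, 8); returns material balance in pawns."""
    counts = boards.reshape(len(boards), 12, -1).sum(axis=2)  # pieces per plane
    white = counts[:, :6] @ PIECE_VALUES
    black = counts[:, 6:] @ PIECE_VALUES
    return white - black

boards = np.load("boards.npy")
evals = np.load("evals.npy")
baseline_mse = np.mean((material_baseline(boards) - evals) ** 2)
print(baseline_mse)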
If your neural network produces a smaller MSE than this baseline model, than it is really learning some useful patterns.\nI also recommend the following book: \"Neural Networks For Chess: The magic of deep and reinforcement learning revealed\" by Dominik Klein. The book contains a description of network architecture used in AlphaZero chess engine and a neural network used in Stockfish.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75165410,"CreationDate":"2023-01-18 21:20:04","Q_Score":1,"ViewCount":36,"Question":"I'm trying to debug this issue I'm having with pyarrow. See this code snippet:\npa_execution_date = Z['execution_date'][i]\npy_execution_date = pa_execution_date.as_py()\npa_report_date = Z['report_date'][i]\npy_report_date = pa_report_date.as_py()\nprint(pa_execution_date)\nprint(pa_report_date)\nprint(py_execution_date)\nprint(py_report_date)\nassert (pc.less_equal(pa_execution_date, pa_report_date))\nassert (py_execution_date <= py_report_date)\n\nWhat I'm seeing is that the second assertion is failing but not the first (in some cases). This is really odd because the two comparison operations should be equivalent...\nHere's the output from the printout when this happens:\n1591303729000000000\n1591303728000000000\n1591303729000000000\n1591303728000000000\n\nAny ideas about what I'm doing wrong?\nI was expecting the first assertion to fail before the second assertion has a chance to execute and fail. I was not expecting the second assertion to fail without the first assertion failing first.","Title":"less_equal not working like I expect (pyarrow.compute.less_equal)","Tags":"python,python-3.x,pyarrow","AnswerCount":2,"A_Id":75165476,"Answer":"I found the problem. The expression pc.less_equal(pa_execution_date, pa_report_date) actually returns a BooleanScalar, a pyarrow object, rather than a python bool. Adding .as_py() does the trick.","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75165821,"CreationDate":"2023-01-18 22:15:17","Q_Score":1,"ViewCount":28,"Question":"I initially had this dataframe:\n df = pd.DataFrame({'slide': [0, 0, 1, 1, 2, 2, 0, 0],\n 'time': [1673, 17892, 1132, 61730, 2323, 8491, 3958, 3432],\n 'frame': ['-1', '0', '-1', '0', '-1', '0', '-1', '0'],\n 'id': [1111, 1111, 1132, 1132, 4636, 4636, 7711, 7711],\n 'name': ['foo', 'foo', 'bar', 'bar', 'zoo', 'zoo', 'baz', 'baz']})\n df\n slide time frame id name\n 0 0 1673 -1 1111 foo\n 1 0 17892 0 1111 foo\n 2 1 1132 -1 1132 bar\n 3 1 61730 0 1132 bar\n 4 2 2323 -1 4636 zoo\n 5 2 8491 0 4636 zoo\n 6 0 3958 -1 7711 baz\n 7 0 3432 0 7711 baz\n\nI did pivot_table to get the following:\n\n pd.pivot_table(df,index = ['id','name'], values = 'time',columns = ['frame']).astype(int)\n\n df\n frame -1 0\n id name \n 1111 foo 1673 17892\n 1132 bar 1132 61730\n 4636 zoo 2323 8491\n 7711 baz 3958 3432\n\nI want to make the resulting dataframe as the following way (see below) but I don't know how to go about this. I am new to pandas and just found out that multiindexing exist.. I tried accessing the columns with df['0']['id'] and also with df['0','id'] but both would throw me errors..... Any help will be appreciated! 
Thanks!\n df\n id name ON OFF \n 1111 foo 1673 17892\n 1132 bar 1132 61730\n 4636 zoo 2323 8491\n 7711 baz 3958 3432","Title":"How to access the columns after pivot_table operation (multiIndex dataframes)","Tags":"python,pandas,dataframe,pivot-table,multi-index","AnswerCount":1,"A_Id":75165945,"Answer":"To get your desired output df = df.reset_index().set_index('id', drop = True).rename(columns = {'-1': 'ON', '0': 'OFF'})\nFor reference, if you want to access a multiindex, you have to use tuples and df.loc. You can use print(df.index) to see how the indices are written. For example, df.loc[(1111, 'foo'), '-1'] gets the value with id: 1111, name: foo at column '-1'.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75165937,"CreationDate":"2023-01-18 22:31:28","Q_Score":0,"ViewCount":21,"Question":"I recently installed Python on my work computer and I am having SO MANY issues with the packages, I can't use any of them.\nRunning simple matplotlib, numpy, or pandas code gives me the below error.\n\nINTEL MKL ERROR: The specified module could not be found. mkl_intel_thread.2.dll.\nIntel MKL FATAL ERROR: Cannot load mkl_intel_thread.2.dll.\n\nHere are the versions of the installed packages.\nNumpy: 1.23.2 , Pandas: 1.4.4 , Matplotlib: 3.5.3 , Python: 3.10.6\nWhen I attempt to update any of the with \"pip install numpy --upgrade\" it tells me that the requirement is already satisfied. Then, when I try to install with \"pip install numpy --upgrade --ignore-installed\" it tells me that it could not find a version that satisfies the requirement for numpy and no matching distribution for numpy.\nAnything helps\nThanks","Title":"INTEL MKL ERROR when attempting to use Packages","Tags":"python,pip,package","AnswerCount":1,"A_Id":75166003,"Answer":"Numpy and other scientific libraries internally rely on certain numeric libraries (BLAS, LAPACK) for which there's a highly optimized version from Intel, which your python packages apparently cannot find (that's where that error around the dll comes from. These dlls aren't part of the python packages themselves.\nYou could look into a) installing the Intel MKL from scratch and see if that works or b) check if they're there and if you're missing a setting around a library path, some environment variable maybe.\nIf I may ask, how are you installing Python? On Windows in particular (I assume you're on windows because of the dll error message...) I'd recommend using Anaconda to install python. With such a package manager you might be able to avoid such dependency \/ path issues.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75166118,"CreationDate":"2023-01-18 22:55:24","Q_Score":1,"ViewCount":101,"Question":"I am documenting a small package using mkdocs, mkdocs-smae-dir, mkdocs-simple, mkdocstrings and mkdocstrings-python-legacy. When I try and view my documentation using mkdocs-serve it produces the following attribute error,\nAttributeError: module 'mkdocstrings_handlers.python' has no attribute 'get_handler'\n\nThe contents of my mkdocs.yml is,\nsite_name: TOLIMAN\ndocs_dir: .\nextra_css:\n - extra.css\n\nplugins:\n - search\n - same-dir\n - simple\n - mkdocstrings:\n default_handler: python \n - spellcheck\n\ntheme: \n name: material \n\nAnd poetry show --only=docs produces,\ncertifi 2022.12.7 Python package for providing Mozilla's CA Bundle.\ncharset-normalizer 3.0.1 The Real First Universal Charset Detector. 
Open, modern and actively maintained alternative to Chardet.\nclick 8.1.3 Composable command line interface toolkit\ncodespell 2.2.2 Codespell\ncolorama 0.4.6 Cross-platform colored terminal text.\ndocstring-parser 0.15 Parse Python docstrings in reST, Google and Numpydoc format\neditdistpy 0.1.3 Fast Levenshtein and Damerau optimal string alignment algorithms.\nghp-import 2.1.0 Copy your docs directly to the gh-pages branch.\nidna 3.4 Internationalized Domain Names in Applications (IDNA)\njinja2 3.1.2 A very fast and expressive template engine.\nmarkdown 3.3.7 Python implementation of Markdown.\nmarkupsafe 2.1.2 Safely add untrusted strings to HTML\/XML markup.\nmergedeep 1.3.4 A deep merge function for \ud83d\udc0d.\nmkdocs 1.4.2 Project documentation with Markdown.\nmkdocs-autorefs 0.4.1 Automatically link across pages in MkDocs.\nmkdocs-material 9.0.5 Documentation that simply works\nmkdocs-material-extensions 1.1.1 Extension pack for Python Markdown and MkDocs Material.\nmkdocs-same-dir 0.1.2 MkDocs plugin to allow placing mkdocs.yml in the same directory as documentation\nmkdocs-simple-plugin 2.1.2 Plugin for adding simple wiki site creation from markdown files interspersed within your code with MkDocs.\nmkdocs-spellcheck 1.0.0 A spell checker plugin for MkDocs.\nmkdocstrings 0.19.1 Automatic documentation from sources, for MkDocs.\nmkdocstrings-python-legacy 0.2.3 A legacy Python handler for mkdocstrings.\npackaging 23.0 Core utilities for Python packages\npygments 2.14.0 Pygments is a syntax highlighting package written in Python.\npymdown-extensions 9.9.1 Extension pack for Python Markdown.\npython-dateutil 2.8.2 Extensions to the standard Python datetime module\npytkdocs 0.16.1 Load Python objects documentation.\npyyaml 6.0 YAML parser and emitter for Python\npyyaml-env-tag 0.1 A custom YAML tag for referencing environment variables in YAML files. \nregex 2022.10.31 Alternative regular expression module, to replace re.\nrequests 2.28.2 Python HTTP for Humans.\nsix 1.16.0 Python 2 and 3 compatibility utilities\nsymspellpy 6.7.7 Python SymSpell\nurllib3 1.26.14 HTTP library with thread-safe connection pooling, file post, and more.\nwatchdog 2.2.1 Filesystem events monitoring\n\nHow do I resolve this error?\nRegards\nJordan","Title":"`mkdocstrings-python-legacy` Produces `AttributeError` Using `mkdocs serve`","Tags":"python,mkdocs","AnswerCount":1,"A_Id":75217161,"Answer":"Rather boringly I just had to reinstall the environment using poetry install.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75166391,"CreationDate":"2023-01-18 23:43:01","Q_Score":1,"ViewCount":32,"Question":"Given a dynamic list, that has potential to grow, is it possible to order the list such that no matter how it's otherwise sorted, a particular value is first?\nlist = ['bunny','cow','pig','apple','xerox']\nlist.sort()\n\nex. I want 'cow' to always be first on this list, and the rest can be ordered however. Or even better, 'cow' is always first and then they're alphabetically sorted after following that rule.","Title":"Assigning only the first value in a list","Tags":"python,list,sorting","AnswerCount":2,"A_Id":75166432,"Answer":"I would suggest create a class yourself that always use list[1:] for sort, but returns the appended list[0]+sorted(list[1:]). 
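For illustration, here is a short sketch of one other way to pin a chosen value to the front while sorting the rest alphabetically, using a sort key instead of a custom class (this is an alternative to the subclass idea above, not the same thing):
animals = ['bunny', 'cow', 'pig', 'apple', 'xerox']

# False sorts before True, so 'cow' always comes first;
# every other item falls back to plain alphabetical order.
animals.sort(key=lambda item: (item != 'cow', item))

print(animals)  # ['cow', 'apple', 'bunny', 'pig', 'xerox']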
Or, you can just create an individual parameter to store \"cow\", like list.First = \"cow\", list.Rest = ['bunny','pig','apple','xerox']","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75167298,"CreationDate":"2023-01-19 02:48:00","Q_Score":0,"ViewCount":34,"Question":"when using pip install pandas\nAn error occurs as follows:\nCollecting pandas\nUsing cached pandas-1.5.2.tar.gz (5.2 MB)\nInstalling build dependencies ... done\nGetting requirements to build wheel ... error\nerror: subprocess-exited-with-error\n\u00d7 Getting requirements to build wheel did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> [28 lines of output]\nTraceback (most recent call last):\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 351, in \nmain()\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 333, in main\njson_out['return_val'] = hook(**hook_input['kwargs'])\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 112, in get_requires_for_build_wheel\nbackend = _build_backend()\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 77, in build_backend\nobj = import_module(mod_path)\nFile \"d:\\py\\lib\\importlib_init.py\", line 126, in import_module\nreturn _bootstrap._gcd_import(name[level:], package, level)\nFile \"\", line 1030, in _gcd_import\nFile \"\", line 1007, in _find_and_load\nFile \"\", line 972, in _find_and_load_unlocked\nFile \"\", line 228, in _call_with_frames_removed\nFile \"\", line 1030, in _gcd_import\nFile \"\", line 1007, in _find_and_load\nFile \"\", line 986, in _find_and_load_unlocked\nFile \"\", line 680, in _load_unlocked\nFile \"\", line 790, in exec_module\nFile \"\", line 228, in call_with_frames_removed\nFile \"C:\\Users\\zijie\\AppData\\Local\\Temp\\pip-build-env-kqsd82rz\\overlay\\Lib\\site-packages\\setuptools_init.py\", line 18, in \nfrom setuptools.dist import Distribution\nFile \"C:\\Users\\zijie\\AppData\\Local\\Temp\\pip-build-env-kqsd82rz\\overlay\\Lib\\site-packages\\setuptools\\dist.py\", line 47, in \nfrom . import _entry_points\nFile \"C:\\Users\\zijie\\AppData\\Local\\Temp\\pip-build-env-kqsd82rz\\overlay\\Lib\\site-packages\\setuptools_entry_points.py\", line 43, in \ndef validate(eps: metadata.EntryPoints):\nAttributeError: module 'importlib.metadata' has no attribute 'EntryPoints'\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: subprocess-exited-with-error\n\u00d7 Getting requirements to build wheel did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> See above for output.\nnote: This error originates from a subprocess, and is likely not a problem with pip.\npy:3.10.0\nos:windows11\nDoes anyone know how to solve the problem? 
Thanks!\nI tried several times but it doesn't work.","Title":"Cannot use pip install pandas","Tags":"python,python-3.x,pandas,pip","AnswerCount":1,"A_Id":75167312,"Answer":"Have you tried:\npip3 install pandas?","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75167763,"CreationDate":"2023-01-19 04:27:50","Q_Score":1,"ViewCount":569,"Question":"I tried to upscale an image using the trained model EDSR_x4 but got an error message:\nerror: OpenCV(4.7.0) \/Users\/runner\/work\/opencv-python\/opencv-python\/opencv\/modules\/dnn\/src\/layers\/fast_convolution\/winograd_3x3s1_f63.cpp:147: error: (-215:Assertion failed) \\_FX_WINO_IBLOCK == 3 && \\_FX_WINO_KBLOCK == 4 in function '\\_fx_winograd_accum_f32'\nThe code used to work half a year ago but gives an error message now. The image path is ok, as it can show properly. I updated opencv-python and opencv-contrib-pthon, so they are the latest version. I use macOS 13.1 M1 chip. The version of my python is 3.9.12\nThe code I used was:\nimport cv2\nimport os\n\n# load EDSR model\nsr = cv2.dnn_superres.DnnSuperResImpl_create()\npath = \"EDSR_x4.pb\"\nsr.readModel(path)\nsr.setModel(\"edsr\",4)\n\n# set path\nimg = cv2.imread(r'test.jpg')\n\n# check image path\n#cv2.imshow('image',img)\n#cv2.waitKey(0)\n\n# upscale\nresult = sr.upsample(img) ### error happened here !!!\n\ncv2.imwrite(\"test1.jpg\",result)","Title":"fail to process .upsample when using cv2.dnn_superres function to upscale an image","Tags":"python,opencv","AnswerCount":1,"A_Id":75197140,"Answer":"I had exactly the same issue and solved it by downgrading the opencv and its contrib package from 4.7.0 to 4.6.0.66. They added Winograd optimization in 4.7.0 and somehow it breaks the code in superres module. Perhaps, you should report it to opencv folks","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75167963,"CreationDate":"2023-01-19 05:08:38","Q_Score":5,"ViewCount":693,"Question":"I was thinking about using polars in place of numpy in a parsing problem where I turn a structured text file into a character table and operate on different columns. However, it seems that polars is about 5 times slower than numpy in most operations I'm performing. I was wondering why that's the case and whether I'm doing something wrong given that polars is supposed to be faster.\nExample:\nimport requests\nimport numpy as np\nimport polars as pl\n\n# Download the text file\ntext = requests.get(\"https:\/\/files.rcsb.org\/download\/3w32.pdb\").text\n\n# Turn it into a 2D array of characters\nchar_tab_np = np.array(file.splitlines()).view(dtype=(str,1)).reshape(-1, 80)\n\n# Create a polars DataFrame from the numpy array\nchar_tab_pl = pl.DataFrame(char_tab_np)\n\n# Sort by first column with numpy\nchar_tab_np[np.argsort(char_tab_np[:,0])]\n\n# Sort by first column with polars\nchar_tab_pl.sort(by=\"column_0\")\n\nUsing %%timeit in Jupyter, the numpy sorting takes about 320 microseconds, whereas the polars sort takes about 1.3 milliseconds, i.e. about five times slower.\nI also tried char_tab_pl.lazy().sort(by=\"column_0\").collect(), but it had no effect on the duration.\nAnother example (Take all rows where the first column is equal to 'A'):\n# with numpy\n%%timeit\nchar_tab_np[char_tab_np[:, 0] == \"A\"]\n\n# with polars\n%%timeit\nchar_tab_pl.filter(pl.col(\"column_0\") == \"A\")\n\nAgain, numpy takes 226 microseconds, whereas polars takes 673 microseconds, about three times slower.\nUpdate\nBased on the comments I tried two other things:\n1. 
Making the file 1000 times larger to see whether polars performs better on larger data.\nResults: numpy was still about 2 times faster (1.3 ms vs. 2.1 ms). In addition, creating the character array took numpy about 2 seconds, whereas polars needed about 2 minutes to create the dataframe, i.e. 60 times slower.\nTo re-produce, just add text *= 1000 before creating the numpy array in the code above.\n2. Casting to integer.\nFor the original (smaller) file, casting to int sped up the process for both numpy and polars. The filtering in numpy was still about 5 times faster than polars (30 microseconds vs. 120), wheres the sorting time became more similar (150 microseconds for numpy vs. 200 for polars).\nHowever, for the large file, polars was marginally faster than numpy, but the huge instantiation time makes it only worth if the dataframe is to be queried thousands of times.","Title":"polars slower than numpy?","Tags":"python,numpy,python-polars","AnswerCount":1,"A_Id":75168994,"Answer":"Polars does extra work in filtering string data that is not worth it in this case. Polars uses arrow large-utf8 buffers for their string data. This makes filtering more expensive than filtering python strings\/chars (e.g. pointers or u8 bytes).\nSometimes it is worth it, sometimes not. If you have homogeneous data, numpy is a better fit than polars. If you have heterogenous data, polars will likely be faster. Especially if you consider your whole query instead of these micro benchmarks.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75170423,"CreationDate":"2023-01-19 09:54:15","Q_Score":0,"ViewCount":28,"Question":"Refused to apply style from '' because its MIME type ('text\/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.\ni dont have any problem in vscode but when i hosted into pythonanywhere i cant even login into admin panel and the css from admin panel is not working","Title":"Refused to apply style from '' because its MIME type ('text\/html') pythonanywhere","Tags":"python,django,admin","AnswerCount":1,"A_Id":75171290,"Answer":"You're probably getting that because it's a 404 page because your are not serving the file that you are trying to access. Search for \"static\" in the PythonAnywhere forums to find out how to configure static files.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75170806,"CreationDate":"2023-01-19 10:24:36","Q_Score":2,"ViewCount":285,"Question":"I'm trying to write a script that finds duplicate rows in a spreadsheet. I'm using the Pandas library. This is the initial dataframe:\nimport pandas as pd\n\ndf = pd.DataFrame({'title': [1, 2, 3, 4, 5, 6, 7, 8],\n 'val1': [1.1, 1.1, 2.1, 8.8, 1.1, 1.1, 8.8, 8.8],\n 'val2': [2.2, 3.3, 5.5, 6.2, 2.2, 3.3, 6.2, 6.2],\n 'val3': [3.4, 4.4, 5.5, 8.4, 0.5, 3.4, 1.9, 3.7]\n })\n\nprint(df)\n\ntitle val1 val2 val3\n 1 1.1 2.2 3.4\n 2 1.1 3.3 4.4\n 3 2.1 5.5 5.5\n 4 8.8 6.2 8.4\n 5 1.1 2.2 0.5 \n 6 1.1 3.3 3.4\n 7 8.8 6.2 1.9\n 8 8.8 6.2 3.7\n\nI have found all duplicate rows using the duplicated method based on the indicated columns and marked them by adding a new column e.g.\ndf['duplicate'] = df.duplicated(keep=False, subset=['val1', 'val2'])\n\nprint(df)\n\ntitle val1 val2 duplicated\n 1 1.1 2.2 true\n 2 1.1 3.3 true\n 3 2.1 5.5 false\n 4 8.8 6.2 true\n 5 1.1 2.2 true\n 6 1.1 3.3 true\n 7 8.8 6.2 true\n 8 8.8 6.2 true\n\nIn the last step, I would like to mark all duplicate rows by adding information with the title of the first occurrence. 
This way I want to make it easier to sort and group them later. This is what the result would look like:\ntitle val1 val2 first_occurence\n 1 1.1 2.2 true\n 2 1.1 3.3 true \n 3 2.1 5.5 false\n 4 8.8 6.2 true\n 5 1.1 2.2 title1\n 6 1.1 3.3 title2\n 7 8.8 6.2 title4\n 8 8.8 6.2 title4\n\nI tried to find a similar topic, but was unsuccessful. Does anyone have an idea how to do it?","Title":"How to mark duplicate rows with the index of the first occurrence in Pandas?","Tags":"python,excel,pandas,dataframe","AnswerCount":3,"A_Id":75170973,"Answer":"You can't do in Pandas. That's a possible solution:\n\nGet a list of duplicate rows\nIterate this list and generate a new row with a new column like \"duplicate_index\" and insert in this column the title number of the first equal row for each duplicated rows\nInsert all rows (original with empty value in \"duplicate_index\") in a new df\nSave the new df","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75170811,"CreationDate":"2023-01-19 10:24:52","Q_Score":1,"ViewCount":128,"Question":"At the end of some processing in Python in Windows I am trying to eject an USB SD card.\nResearching here it seems there are two ways in which to do it; call a PowerShell program or run PowerShell within Python.\nCan anyone offer me any guidance. Please keep it simple; learning Python is my new year project.\nSo I have written a PowerShell script (ejectusb.ps1) which works perfectly:\n$drive = New-Object -ComObject Shell.Application\n$drive.Namespace(17).Parsename(\"J:\").InvokeVerb(\"Eject\")\n\nI then call it from Python using subprocess:\nsubprocess.run([\"E:\\Data\\Computing\\Software\\MicroSoft\\Programming\\Powershell\\ejectusb.ps1\"])\n\nThe SD card is not ejected and I get the error messages:\nTraceback (most recent call last):\n File \"E:\/Data\/Computing\/Software\/Scripts\/SdCardPlayEachVideo06.py\", line 91, in \nsubprocess.run([\"E:\\Data\\Computing\\Software\\MicroSoft\\Programming\\Powershell\\ejectusb.ps1\"])\nFile \"C:\\Users\\David\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\subprocess.py\", line 548, in run\n with Popen(*popenargs, **kwargs) as process:\n File \"C:\\Users\\David\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\subprocess.py\", line 1024, in __init__\n self._execute_child(args, executable, preexec_fn, close_fds,\n File \"C:\\Users\\David\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\subprocess.py\", line 1493, in _execute_child\n hp, ht, pid, tid = _winapi.CreateProcess(executable, args,\nOSError: [WinError 193] %1 is not a valid Win32 application\n\nI don't understand this error message.\nSo I tried running PowerShell inside Python using:\nos.system('powershell $driveEject = New-Object -comObject Shell.Application; \n $driveEject.Namespace(17).ParseName(\"J:\").InvokeVerb(\"Eject\")')\n\nAn empty PowerShell screen and also what looks like a Windows command screen briefly flash up, but the SD card is not ejected. No error messages.\nCan anyone offer me any guidance. Please keep it simple; learning Python is my new year project.","Title":"Run Powershell script from Python?","Tags":"python,powershell,usb","AnswerCount":1,"A_Id":75192672,"Answer":"Hamed's solution uses the argument 'powershell' to launch PowerShell. 
i.e\nsubprocess.run([\"powershell\", \"-File\", \"E:\\Data\\Computing\\Software\\MicroSoft\\Programming\\Powershell\\ejectusb.ps1\"])\nI have been using the full path to PowerShell as the argument, i.e\nsubprocess.run([\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\", etc\nThis path is correct (if I type it into the address bar of Windows File Explorer it launches PowerShell. But is causes a 'file not found' error.\nSo I dont know what the problem is with the full path but I am grateful for Hamed's workaround.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75170927,"CreationDate":"2023-01-19 10:34:19","Q_Score":1,"ViewCount":88,"Question":"I'm trying to use the shgo algorithm to run simulations (black box problem) and maximize the output parameter of the simulation. The objective functions runs and evaluates the simulation.\nI have 5 variables as input. I need to define boundaries and constraints, which is needed to limit the geometry of the simulation.\nAs this is a problem with a lot of variables I needed a global optimizer, which accepts boundaries and constraints. Therefore shgo seemed perfectly suitable.\nHowever, I am struggling to get the optimizer algorithm to accept my boundaries and constraints and to converge.\nThis is my code for the optimization:\nbnds = [(50*1e-9,500*1e-9), (50*1e-9,500*1e-9), (1,20), (20*1e-9,80*1e-9), (250*1e-9,800*1e-9)]\n\ndef constraint1(x):\n return x[4]-50*1e-9-2*x[0] # x[4]<=2*x[0]-50nm(threshold) \ndef constraint2(x):\n return x[1]-x[3]-20*1e-9 # x[1]-x[3]>=20nm(threshold) \ndef constraint3(x):\n return x[0]-(x[1]\/2)*(2.978\/x[2])-20*1e-9\n\ncons = ({'type': 'ineq', 'fun': constraint1},\n {'type': 'ineq', 'fun': constraint2},\n {'type': 'ineq', 'fun': constraint3})\n\nminimizer_kwargs = {'method':'COBYLA',\n 'bounds': bnds,\n 'constraints':cons} \n\nopts = {'disp':True}\n\nres_shgo = shgo(objective, \n bounds=bnds, \n constraints=cons, \n sampling_method='sobol', \n minimizer_kwargs=minimizer_kwargs, \n options=opts)\n\nThe global algorithm runs for 33 rounds to complete the evaluations and starts the minimiser pool:\nEvaluations completed.\nSearch for minimiser pool\n--- Starting minimization at [3.3828125e-07 4.6484375e-07 1.1984375e+01 6.7812500e-08 7.5703125e-07]...\n\nNow, the COBYLA Alorithm is used within the minimiser pool for the minimization. However, after a few rounds it exceeds the boundaries with the result, that the input parameter cause my simulation to crash.\n\nI have also tried 'L-BFGS-B' algorithm for the minimizer pool.\nminimizer_kwargs = {'method':'L-BFGS-B'}\n\nThe algo converged with the following statment:\nlres = fun: -20.247226776119533\n hess_inv: <5x5 LbfgsInvHessProduct with dtype=float64>\n jac: array([ 1.70730429e+09, 1.22968297e+09, 0.00000000e+00, -1.82566323e+09,\n 1.83071706e+09])\n message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'\n nfev: 6\n nit: 0\n njev: 1\n status: 0\n success: True\n x: array([2.43359375e-07, 2.99609375e-07, 1.48046875e+01, 7.01562500e-08,\n 6.23828125e-07])\nMinimiser pool = SHGO.X_min = []\nSuccessfully completed construction of complex.\n\nThe result was not the global minimum though.\nHow can I make shgo terminate successfully preferably with the COBYLA.","Title":"Optimization boundaries and constraints are not accepted","Tags":"python,optimization,scipy-optimize,shgo","AnswerCount":3,"A_Id":75173833,"Answer":"I think intermediate (infeasible) solutions may not obey bounds. 
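For illustration, a minimal guard of this shape (a sketch only, assuming the bnds list and the objective function defined in the question) keeps the simulator from being called at such points:\nimport numpy as np\n\n# bnds and objective are assumed to be the ones defined in the question\nlo = np.array([b[0] for b in bnds])\nhi = np.array([b[1] for b in bnds])\n\ndef guarded_objective(x):\n    x = np.asarray(x, dtype=float)\n    if np.any(x < lo) or np.any(x > hi):\n        return 1e12  # large penalty, the simulator is never called here\n    # or project onto the box instead: x = np.clip(x, lo, hi)\n    return objective(x)\n\nPassing guarded_objective to shgo in place of objective covers both of the options listed below. 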
(Other NLP solvers actually never call function evaluations without bounds being observed; that is a better approach. This means we can protect against bad evaluations using bounds.) Given that you have these out-of-bound function evaluations, you can try two things:\n\nProject variables onto their bounds before calling the simulator.\nIf bounds are not obeyed, immediately return a large value and don't even call the simulator.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75170927,"CreationDate":"2023-01-19 10:34:19","Q_Score":1,"ViewCount":88,"Question":"I'm trying to use the shgo algorithm to run simulations (black box problem) and maximize the output parameter of the simulation. The objective functions runs and evaluates the simulation.\nI have 5 variables as input. I need to define boundaries and constraints, which is needed to limit the geometry of the simulation.\nAs this is a problem with a lot of variables I needed a global optimizer, which accepts boundaries and constraints. Therefore shgo seemed perfectly suitable.\nHowever, I am struggling to get the optimizer algorithm to accept my boundaries and constraints and to converge.\nThis is my code for the optimization:\nbnds = [(50*1e-9,500*1e-9), (50*1e-9,500*1e-9), (1,20), (20*1e-9,80*1e-9), (250*1e-9,800*1e-9)]\n\ndef constraint1(x):\n return x[4]-50*1e-9-2*x[0] # x[4]<=2*x[0]-50nm(threshold) \ndef constraint2(x):\n return x[1]-x[3]-20*1e-9 # x[1]-x[3]>=20nm(threshold) \ndef constraint3(x):\n return x[0]-(x[1]\/2)*(2.978\/x[2])-20*1e-9\n\ncons = ({'type': 'ineq', 'fun': constraint1},\n {'type': 'ineq', 'fun': constraint2},\n {'type': 'ineq', 'fun': constraint3})\n\nminimizer_kwargs = {'method':'COBYLA',\n 'bounds': bnds,\n 'constraints':cons} \n\nopts = {'disp':True}\n\nres_shgo = shgo(objective, \n bounds=bnds, \n constraints=cons, \n sampling_method='sobol', \n minimizer_kwargs=minimizer_kwargs, \n options=opts)\n\nThe global algorithm runs for 33 rounds to complete the evaluations and starts the minimiser pool:\nEvaluations completed.\nSearch for minimiser pool\n--- Starting minimization at [3.3828125e-07 4.6484375e-07 1.1984375e+01 6.7812500e-08 7.5703125e-07]...\n\nNow, the COBYLA Alorithm is used within the minimiser pool for the minimization. However, after a few rounds it exceeds the boundaries with the result, that the input parameter cause my simulation to crash.\n\nI have also tried 'L-BFGS-B' algorithm for the minimizer pool.\nminimizer_kwargs = {'method':'L-BFGS-B'}\n\nThe algo converged with the following statment:\nlres = fun: -20.247226776119533\n hess_inv: <5x5 LbfgsInvHessProduct with dtype=float64>\n jac: array([ 1.70730429e+09, 1.22968297e+09, 0.00000000e+00, -1.82566323e+09,\n 1.83071706e+09])\n message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'\n nfev: 6\n nit: 0\n njev: 1\n status: 0\n success: True\n x: array([2.43359375e-07, 2.99609375e-07, 1.48046875e+01, 7.01562500e-08,\n 6.23828125e-07])\nMinimiser pool = SHGO.X_min = []\nSuccessfully completed construction of complex.\n\nThe result was not the global minimum though.\nHow can I make shgo terminate successfully preferably with the COBYLA.","Title":"Optimization boundaries and constraints are not accepted","Tags":"python,optimization,scipy-optimize,shgo","AnswerCount":3,"A_Id":75809490,"Answer":"Ok...I solved the problem.\nThe problem was the boundaries with values very close to zero (10^-9). 
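A minimal sketch of such a rescaling (assuming the bnds, constraints and objective from the question; the actual change in the script may differ) keeps the optimizer working with order-one numbers and converts back to metres only inside the objective:\nSCALE = 1e-9\n\n# bounds in nanometres instead of metres; x[2] already has order-one bounds (1, 20)\nbnds_scaled = [(50, 500), (50, 500), (1, 20), (20, 80), (250, 800)]\n\ndef objective_scaled(x):\n    x = list(x)\n    for i in (0, 1, 3, 4):\n        x[i] = x[i] * SCALE  # convert back to metres before running the simulator\n    return objective(x)\n\nThe constraint functions need the same conversion. 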
So I removed the 10^-9 and simply added it elsewhere in the script.\nHowever, now the next problem has popped up:\nThe algo does a rough global search with 8-10 iterations before starting the local minimization. I find this not quite enough as there are 5 input parameters.\nFurthermore the local minimization routine keeps 'digging' in the same spot for 20+ iterations, only adjusting the input parameters by less than 0.5 at a time.\nMy aim is to increase the number of global iterations to better cover the parameter range and therefore reduce the number of local iterations, where only small and therefore negligible changes in the output occur. Or otherwise increase the step size for the local minimizer rounds.\nI have tried different input variables of the shgo-algorithm, such as 'n', 'iters', 'maxfev', 'maxev' and 'f_tol'. None of them showed the desired result.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75170936,"CreationDate":"2023-01-19 10:34:40","Q_Score":4,"ViewCount":264,"Question":"I am currently trying to add charts for the graphical part with React in an Electron software. Except that I added interactions with buttons (sections) to insert different data in the graphs depending on the click on one of the sections by the user (variable selectedSection). So I added in the dependencies of the useEffect() function the chartPMS and chartPFS functions to have access at the selectedSection variable.\nThe useEffect() function receives data continuously through a websocket from a python program. The problem is that when I run the code via the npm start command, I get a data display with a very high frequency and this error continuously in the console : WebSocket connection to 'ws:\/' failed: WebSocket is closed before the connection is established. 
But the functions did receive changes to the selectedSection variable based on clicks on the different sections.\nI should point out that I used the useEffect() function in this way before, it worked but I didn't have access to the updated version after clicking on one of the sections of the selectedSection variable:\n useEffect(() => {\n const socket = new WebSocket('ws:\/\/localhost:8000');\n\n socket.addEventListener('message', (event) => {\n setData(JSON.parse(event.data));\n\n chartPFS(JSON.parse(event.data));\n chartPMS(JSON.parse(event.data));\n });\n\n }, []);\n\nI added selectedSection to the dependencies except that it refreshes both panels after clicking on one of the section buttons.\nHere are the code:\nApp.js with 2 panels :\nimport React, { useState, useEffect, useRef, useSyncExternalStore } from 'react';\nimport Modal from '.\/Modal\/Modal'\nimport {Chart as ChartJS,LinearScale,PointElement,LineElement,Tooltip,Legend,Title,CategoryScale,elements} from 'chart.js';\nimport {Scatter, Line } from 'react-chartjs-2';\nimport { handleDataClick } from '.\/Modal\/Modal';\nimport { LineChart } from 'recharts';\nimport 'chart.js\/auto';\n\nChartJS.register(\n CategoryScale,\n LinearScale,\n PointElement,\n LineElement,\n Tooltip,\n Legend,\n Title);\n\n\/\/--------------------------- OPTIONS GRAPHIQUE ----------------------------------\/\/\n\n export const options5 = {\n elements: {\n line: {\n tension: 0.3,\n },\n },\n responsive: true,\n maintainAspectRatio:false,\n plugins: {\n showLine:true,\n legend: false\n },\n };\n\n\/\/--------------------------- FUNCTION APP() ----------------------------------\/\/\nexport default function App() {\n let da;\n const [data, setData] = useState(null);\n const [show,setShow] = useState(false);\n const [lastSelectedSection, setLastSelectedSection] = useState(null);\n const h2f5Ref = useRef(null);\n const h2f4Ref = useRef(null);\n const h2f3Ref = useRef(null);\n const h2f2Ref = useRef(null);\n const h2f1Ref = useRef(null);\n\n const h2m5Ref = useRef(null);\n const h2m4Ref = useRef(null);\n const h2m3Ref = useRef(null);\n const h2m2Ref = useRef(null);\n const h2m1Ref = useRef(null);\n\n const [selectedDataType, setSelectedDataType] = useState({id:\"fs-sec-1\",selected:\"twist\"});\n const [sectionData, setSectionData] = useState({\n \"fs-sec-1\": { selectedDataType: 'twist' },\n \"fs-sec-2\": { selectedDataType: 'twist' },\n \"fs-sec-3\": { selectedDataType: 'twist' },\n \"fs-sec-4\": { selectedDataType: 'twist' },\n \"fs-sec-5\": { selectedDataType: 'twist' },\n \"ms-sec-1\": { selectedDataType: 'twist' },\n \"ms-sec-2\": { selectedDataType: 'twist' },\n \"ms-sec-3\": { selectedDataType: 'twist' },\n \"ms-sec-4\": { selectedDataType: 'twist' },\n \"ms-sec-5\": { selectedDataType: 'twist' }\n });\n\n const [selectedSection, setSelectedSection] = useState(\"s1\");\n const [selectedSailP3,setSelectedSailP3]=useState(\"fs\");\n\n \/\/----------------------- Graphiques Variables initiales -------------------\/\/\n\n\n const [chartDataPFS,setChartDataPFS]=useState({\n datasets: [\n {\n label: 'Draft',\n showLine:true,\n data: [{x:3,y:1},{x:3.5,y:2},{x:5.5,y:3},{x:5.25,y:4},{x:5,y:5}],\n backgroundColor: '#df9305',\n borderColor: '#df9305'\n }]\n });\n const [chartDataPMS,setChartDataPMS]=useState({\n labels:[\"0\",\"1\",\"2\",\"3\",\"4\"],\n datasets: [\n {\n label: 'Draft',\n showLine:true,\n data: [0,2,3,2,0],\n backgroundColor: '#df9305',\n borderColor: '#df9305'\n }]\n });\n \n \/\/----------------------- Graphiques Fonctions mise \u00e0 jour 
-------------------\/\/\n const chartPFS=(d) =>{\n let dataToUse;\n console.log(selectedSection)\n dataToUse=[{x:0,y:0},\n {x:3.3\/2,y:d[\"fs\"][selectedSection][\"camber\"]*0.75},\n {x:3.3,y:d[\"fs\"][selectedSection][\"draft\"]},\n {x:(10-3.3)\/2+3.3,y:d[\"fs\"][selectedSection][\"draft\"]*0.55},\n {x:10,y:0}];\n setChartDataPFS({\n datasets: [\n {\n label: 'Profile',\n showLine:true,\n maintainAspectRatio:false,\n fill:false,\n data: dataToUse,\n backgroundColor: '#000000',\n borderColor: '#000000'\n }]\n });\n };\n const chartPMS=(d) =>{\n let dataToUse;\n dataToUse=[0,\n d[\"ms\"][selectedSection][\"camber\"],\n d[\"ms\"][selectedSection][\"draft\"],\n d[\"ms\"][selectedSection][\"draft\"],\n 0];\n setChartDataPMS({\n labels:[0,1,2,3,4],\n datasets: [\n {\n label: 'Profile',\n maintainAspectRatio:false,\n fill:false,\n data: dataToUse,\n borderColor: '#000000'\n }]\n });\n };\n\n \/\/----------------------- Fonctions R\u00e9cup\u00e9ration donn\u00e9es au clic -------------------\/\/\n\n const handleClick = (id,h2Text) => {\n const sectionId = id;\n setSelectedDataType({id:sectionId,selected:h2Text});\n };\n const handleSectionClick=(section) =>{\n setSelectedSection(section);\n };\n const handleSailP3Click=(sail) =>{\n setSelectedSailP3(sail);\n };\n\n \/\/----------------------- Mise \u00e0 jour donn\u00e9es -------------------\/\/\n useEffect(() => {\n const socket = new WebSocket('ws:\/\/localhost:8000');\n\n const handler = (event) => {\n\n setData(JSON.parse(event.data));\n chart1(JSON.parse(event.data));\n chart2(JSON.parse(event.data));\n chart3(JSON.parse(event.data));\n chart4(JSON.parse(event.data));\n chartPFS(JSON.parse(event.data));\n chartPMS(JSON.parse(event.data));\n };\n\n socket.addEventListener('message', handler);\n\n return () => {\n socket.removeEventListener('message', handler);\n socket.close();\n };\n }, [selectedSection]);\n \n \n return (\n
\n
\n
\n
\n
\n

FORESAIL data<\/h1>\n <\/i>\n <\/div>\n
\n
{handleClick(\"fs-sec-5\",h2f5Ref.current.textContent);setShow(true);}} >\n {data && sectionData[\"fs-sec-5\"].selectedDataType ? {data[\"fs\"][\"s5\"][sectionData[\"fs-sec-5\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-5\"].selectedDataType ? sectionData[\"fs-sec-5\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s5<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-4\",h2f4Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-4\"].selectedDataType ? {data[\"fs\"][\"s4\"][sectionData[\"fs-sec-4\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-4\"].selectedDataType ? sectionData[\"fs-sec-4\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s4<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-3\",h2f3Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-3\"].selectedDataType ? {data[\"fs\"][\"s3\"][sectionData[\"fs-sec-3\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-3\"].selectedDataType ? sectionData[\"fs-sec-3\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s3<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-2\",h2f2Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-2\"].selectedDataType ? {data[\"fs\"][\"s2\"][sectionData[\"fs-sec-2\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-2\"].selectedDataType ? sectionData[\"fs-sec-2\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s2<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-1\",h2f1Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-1\"].selectedDataType ? {data[\"fs\"][\"s1\"][sectionData[\"fs-sec-1\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-1\"].selectedDataType ? sectionData[\"fs-sec-1\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s1<\/h3>\n <\/div>\n <\/div>\n <\/div>\n
\n
\n

SAILS sections<\/h1>\n <\/i>\n <\/div>\n
\n
\n
\n \n <\/div>\n
\n \n <\/div>\n <\/div>\n
\n {handleSectionClick(\"s5\")}}\/>\n {handleSectionClick(\"s4\")}}\/>\n {handleSectionClick(\"s3\")}}\/>\n {handleSectionClick(\"s2\")}}\/>\n {handleSectionClick(\"s1\")}}\/>\n <\/div>\n
\n \n \n <\/div>\n <\/div>\n <\/div>\n <\/div>\n setShow(false)} show={show} Data={selectedDataType} sectionData={sectionData} setSectionData={setSectionData}\/>\n <\/div>\n <\/div> \n );\n}\n\n\nPython :\nimport asyncio\nimport random\nimport datetime\nimport websockets\nimport json\n\nsv={\"fs\":{\n \"s5\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s4\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s3\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s2\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s1\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n },\n \"ms\":{\n \"s5\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s4\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s3\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s2\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s1\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10}, \n }}\n\nasync def handler(websocket, path):\n while True:\n #log_decoder()\n for key1 in sv:\n for key2 in sv[key1]:\n sv[key1][key2][\"entry\"] = random.randint(1, 10)\n sv[key1][key2][\"cfwd\"] = random.randint(1, 10)\n sv[key1][key2][\"camber\"] = random.randint(1, 10)\n sv[key1][key2][\"draft\"] = random.randint(1, 4)\n sv[key1][key2][\"caft\"] = random.randint(1, 10)\n sv[key1][key2][\"exit\"] = random.randint(1, 10)\n sv[key1][key2][\"twist\"] = random.randint(1, 10)\n sv[key1][key2][\"saglat\"] = random.randint(1, 10)\n sv[key1][key2][\"saglong\"] = random.randint(1, 10) \n #data = [random.randint(0, 20) for _ in range(10)]\n await websocket.send(json.dumps(sv))\n await asyncio.sleep(1)\n\nstart_server = websockets.serve(handler, \"localhost\", 8000)\n\nasyncio.get_event_loop().run_until_complete(start_server)\nasyncio.get_event_loop().run_forever()\n\nRegards,","Title":"Error with websocket connection when trying to add dependencies","Tags":"javascript,python,node.js,reactjs,websocket","AnswerCount":2,"A_Id":75186387,"Answer":"If you set the dependency to be selectedSection it should solve your problem. This is because that when you set the dependency to the functions it will rerender basically as fast as your computer allows, but if you set it to selectedSection it only rerender when that is updated and the correct value is included.","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":2},{"Q_Id":75170936,"CreationDate":"2023-01-19 10:34:40","Q_Score":4,"ViewCount":264,"Question":"I am currently trying to add charts for the graphical part with React in an Electron software. Except that I added interactions with buttons (sections) to insert different data in the graphs depending on the click on one of the sections by the user (variable selectedSection). 
So I added in the dependencies of the useEffect() function the chartPMS and chartPFS functions to have access at the selectedSection variable.\nThe useEffect() function receives data continuously through a websocket from a python program. The problem is that when I run the code via the npm start command, I get a data display with a very high frequency and this error continuously in the console : WebSocket connection to 'ws:\/' failed: WebSocket is closed before the connection is established. But the functions did receive changes to the selectedSection variable based on clicks on the different sections.\nI should point out that I used the useEffect() function in this way before, it worked but I didn't have access to the updated version after clicking on one of the sections of the selectedSection variable:\n useEffect(() => {\n const socket = new WebSocket('ws:\/\/localhost:8000');\n\n socket.addEventListener('message', (event) => {\n setData(JSON.parse(event.data));\n\n chartPFS(JSON.parse(event.data));\n chartPMS(JSON.parse(event.data));\n });\n\n }, []);\n\nI added selectedSection to the dependencies except that it refreshes both panels after clicking on one of the section buttons.\nHere are the code:\nApp.js with 2 panels :\nimport React, { useState, useEffect, useRef, useSyncExternalStore } from 'react';\nimport Modal from '.\/Modal\/Modal'\nimport {Chart as ChartJS,LinearScale,PointElement,LineElement,Tooltip,Legend,Title,CategoryScale,elements} from 'chart.js';\nimport {Scatter, Line } from 'react-chartjs-2';\nimport { handleDataClick } from '.\/Modal\/Modal';\nimport { LineChart } from 'recharts';\nimport 'chart.js\/auto';\n\nChartJS.register(\n CategoryScale,\n LinearScale,\n PointElement,\n LineElement,\n Tooltip,\n Legend,\n Title);\n\n\/\/--------------------------- OPTIONS GRAPHIQUE ----------------------------------\/\/\n\n export const options5 = {\n elements: {\n line: {\n tension: 0.3,\n },\n },\n responsive: true,\n maintainAspectRatio:false,\n plugins: {\n showLine:true,\n legend: false\n },\n };\n\n\/\/--------------------------- FUNCTION APP() ----------------------------------\/\/\nexport default function App() {\n let da;\n const [data, setData] = useState(null);\n const [show,setShow] = useState(false);\n const [lastSelectedSection, setLastSelectedSection] = useState(null);\n const h2f5Ref = useRef(null);\n const h2f4Ref = useRef(null);\n const h2f3Ref = useRef(null);\n const h2f2Ref = useRef(null);\n const h2f1Ref = useRef(null);\n\n const h2m5Ref = useRef(null);\n const h2m4Ref = useRef(null);\n const h2m3Ref = useRef(null);\n const h2m2Ref = useRef(null);\n const h2m1Ref = useRef(null);\n\n const [selectedDataType, setSelectedDataType] = useState({id:\"fs-sec-1\",selected:\"twist\"});\n const [sectionData, setSectionData] = useState({\n \"fs-sec-1\": { selectedDataType: 'twist' },\n \"fs-sec-2\": { selectedDataType: 'twist' },\n \"fs-sec-3\": { selectedDataType: 'twist' },\n \"fs-sec-4\": { selectedDataType: 'twist' },\n \"fs-sec-5\": { selectedDataType: 'twist' },\n \"ms-sec-1\": { selectedDataType: 'twist' },\n \"ms-sec-2\": { selectedDataType: 'twist' },\n \"ms-sec-3\": { selectedDataType: 'twist' },\n \"ms-sec-4\": { selectedDataType: 'twist' },\n \"ms-sec-5\": { selectedDataType: 'twist' }\n });\n\n const [selectedSection, setSelectedSection] = useState(\"s1\");\n const [selectedSailP3,setSelectedSailP3]=useState(\"fs\");\n\n \/\/----------------------- Graphiques Variables initiales -------------------\/\/\n\n\n const 
[chartDataPFS,setChartDataPFS]=useState({\n datasets: [\n {\n label: 'Draft',\n showLine:true,\n data: [{x:3,y:1},{x:3.5,y:2},{x:5.5,y:3},{x:5.25,y:4},{x:5,y:5}],\n backgroundColor: '#df9305',\n borderColor: '#df9305'\n }]\n });\n const [chartDataPMS,setChartDataPMS]=useState({\n labels:[\"0\",\"1\",\"2\",\"3\",\"4\"],\n datasets: [\n {\n label: 'Draft',\n showLine:true,\n data: [0,2,3,2,0],\n backgroundColor: '#df9305',\n borderColor: '#df9305'\n }]\n });\n \n \/\/----------------------- Graphiques Fonctions mise \u00e0 jour -------------------\/\/\n const chartPFS=(d) =>{\n let dataToUse;\n console.log(selectedSection)\n dataToUse=[{x:0,y:0},\n {x:3.3\/2,y:d[\"fs\"][selectedSection][\"camber\"]*0.75},\n {x:3.3,y:d[\"fs\"][selectedSection][\"draft\"]},\n {x:(10-3.3)\/2+3.3,y:d[\"fs\"][selectedSection][\"draft\"]*0.55},\n {x:10,y:0}];\n setChartDataPFS({\n datasets: [\n {\n label: 'Profile',\n showLine:true,\n maintainAspectRatio:false,\n fill:false,\n data: dataToUse,\n backgroundColor: '#000000',\n borderColor: '#000000'\n }]\n });\n };\n const chartPMS=(d) =>{\n let dataToUse;\n dataToUse=[0,\n d[\"ms\"][selectedSection][\"camber\"],\n d[\"ms\"][selectedSection][\"draft\"],\n d[\"ms\"][selectedSection][\"draft\"],\n 0];\n setChartDataPMS({\n labels:[0,1,2,3,4],\n datasets: [\n {\n label: 'Profile',\n maintainAspectRatio:false,\n fill:false,\n data: dataToUse,\n borderColor: '#000000'\n }]\n });\n };\n\n \/\/----------------------- Fonctions R\u00e9cup\u00e9ration donn\u00e9es au clic -------------------\/\/\n\n const handleClick = (id,h2Text) => {\n const sectionId = id;\n setSelectedDataType({id:sectionId,selected:h2Text});\n };\n const handleSectionClick=(section) =>{\n setSelectedSection(section);\n };\n const handleSailP3Click=(sail) =>{\n setSelectedSailP3(sail);\n };\n\n \/\/----------------------- Mise \u00e0 jour donn\u00e9es -------------------\/\/\n useEffect(() => {\n const socket = new WebSocket('ws:\/\/localhost:8000');\n\n const handler = (event) => {\n\n setData(JSON.parse(event.data));\n chart1(JSON.parse(event.data));\n chart2(JSON.parse(event.data));\n chart3(JSON.parse(event.data));\n chart4(JSON.parse(event.data));\n chartPFS(JSON.parse(event.data));\n chartPMS(JSON.parse(event.data));\n };\n\n socket.addEventListener('message', handler);\n\n return () => {\n socket.removeEventListener('message', handler);\n socket.close();\n };\n }, [selectedSection]);\n \n \n return (\n
\n
\n
\n
\n
\n

FORESAIL data<\/h1>\n <\/i>\n <\/div>\n
\n
{handleClick(\"fs-sec-5\",h2f5Ref.current.textContent);setShow(true);}} >\n {data && sectionData[\"fs-sec-5\"].selectedDataType ? {data[\"fs\"][\"s5\"][sectionData[\"fs-sec-5\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-5\"].selectedDataType ? sectionData[\"fs-sec-5\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s5<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-4\",h2f4Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-4\"].selectedDataType ? {data[\"fs\"][\"s4\"][sectionData[\"fs-sec-4\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-4\"].selectedDataType ? sectionData[\"fs-sec-4\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s4<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-3\",h2f3Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-3\"].selectedDataType ? {data[\"fs\"][\"s3\"][sectionData[\"fs-sec-3\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-3\"].selectedDataType ? sectionData[\"fs-sec-3\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s3<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-2\",h2f2Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-2\"].selectedDataType ? {data[\"fs\"][\"s2\"][sectionData[\"fs-sec-2\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-2\"].selectedDataType ? sectionData[\"fs-sec-2\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s2<\/h3>\n <\/div>\n
{handleClick(\"fs-sec-1\",h2f1Ref.current.textContent);setShow(true);}}>\n {data && sectionData[\"fs-sec-1\"].selectedDataType ? {data[\"fs\"][\"s1\"][sectionData[\"fs-sec-1\"].selectedDataType]}<\/span> : --<\/span>}\n

{sectionData[\"fs-sec-1\"].selectedDataType ? sectionData[\"fs-sec-1\"].selectedDataType.toUpperCase() : \"TWIST\"}<\/h2>\n

s1<\/h3>\n <\/div>\n <\/div>\n <\/div>\n
\n
\n

SAILS sections<\/h1>\n <\/i>\n <\/div>\n
\n
\n
\n \n <\/div>\n
\n \n <\/div>\n <\/div>\n
\n {handleSectionClick(\"s5\")}}\/>\n {handleSectionClick(\"s4\")}}\/>\n {handleSectionClick(\"s3\")}}\/>\n {handleSectionClick(\"s2\")}}\/>\n {handleSectionClick(\"s1\")}}\/>\n <\/div>\n
\n \n \n <\/div>\n <\/div>\n <\/div>\n <\/div>\n setShow(false)} show={show} Data={selectedDataType} sectionData={sectionData} setSectionData={setSectionData}\/>\n <\/div>\n <\/div> \n );\n}\n\n\nPython :\nimport asyncio\nimport random\nimport datetime\nimport websockets\nimport json\n\nsv={\"fs\":{\n \"s5\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s4\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s3\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s2\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s1\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n },\n \"ms\":{\n \"s5\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s4\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s3\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s2\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10},\n \"s1\":{\"entry\":2,\"cfwd\":3,\"camber\":2,\"draft\":3,\"caft\":5,\"exit\":5,\"twist\":15,\"saglat\":10,\"saglong\":10}, \n }}\n\nasync def handler(websocket, path):\n while True:\n #log_decoder()\n for key1 in sv:\n for key2 in sv[key1]:\n sv[key1][key2][\"entry\"] = random.randint(1, 10)\n sv[key1][key2][\"cfwd\"] = random.randint(1, 10)\n sv[key1][key2][\"camber\"] = random.randint(1, 10)\n sv[key1][key2][\"draft\"] = random.randint(1, 4)\n sv[key1][key2][\"caft\"] = random.randint(1, 10)\n sv[key1][key2][\"exit\"] = random.randint(1, 10)\n sv[key1][key2][\"twist\"] = random.randint(1, 10)\n sv[key1][key2][\"saglat\"] = random.randint(1, 10)\n sv[key1][key2][\"saglong\"] = random.randint(1, 10) \n #data = [random.randint(0, 20) for _ in range(10)]\n await websocket.send(json.dumps(sv))\n await asyncio.sleep(1)\n\nstart_server = websockets.serve(handler, \"localhost\", 8000)\n\nasyncio.get_event_loop().run_until_complete(start_server)\nasyncio.get_event_loop().run_forever()\n\nRegards,","Title":"Error with websocket connection when trying to add dependencies","Tags":"javascript,python,node.js,reactjs,websocket","AnswerCount":2,"A_Id":75279105,"Answer":"Briefly scanning over your code, it seems as if the problem may be caused by the way your useEffect() method handles the WebSocket connection. When dealing with websockets, the error message \"WebSocket connection failed: WebSocket being closed before the connection is established\" typically refers to a websocket that is being closed before it has a chance to establish a connection.\nOne other thing that might be causing this is that you're initiating a new WebSocket connection each time the component re-renders, which could also be the culprit. I've encountered this issue a few times in one of my projects; however, putting the WebSocket connection outside of useEffect() so that it's only instantiated once fixed the issue.\nOne last thing I would try is using useEffect() with an empty array as the dependencies if you only intend to run the connection once on mount. 
So when you want to update selectedSection, you can use setSelectedSection to update the state, and then use that state value in useEffect() to determine how to handle the data coming in over the websocket.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75171845,"CreationDate":"2023-01-19 11:53:40","Q_Score":1,"ViewCount":171,"Question":"#type: ignore\nimport os\nfrom keep_alive import keep_alive\nfrom discord.ext import commands\nimport discord\nimport asyncio\nimport datetime\nimport re\n\nDC_TOK = os.environ['DC_TOK']\n\nbot = commands.Bot(command_prefix='!', intents=discord.Intents(4194303))\n\n\n@bot.event\nasync def on_message(msg: discord.Message):\n if msg.author == bot.user:\n return\n guild = msg.guild\n if msg.content == '!quack':\n await msg.channel.send('''!ticket open subject - Creates a new ticket\n\n**Ticket Support Team++**\n!ticket close - Closes on-going ticket\n\n!invite ping_message - Invites user to ticket\n!remove ping_message - Removes user from ticket\n\n!mark rank - Makes the ticket exclusive for specific rank\n!category TITLE - Briefly categorises the ticket\n''')\n elif msg.content[:12] == '!ticket open':\n chn = msg.channel\n ticket_title = msg.content[13:]\n if ticket_title.strip() == '':\n ticket_title = 'None'\n ticketer_acc = msg.author\n ticketer = ticketer_acc.display_name\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n tcc = discord.utils.get(guild.channels, name=\"ticket-creation-logs\")\n elem = None\n _ = None\n async for mg in tcc.history():\n if mg.content.startswith('CATEG'):\n continue\n elem, _ = mg.content.split(' ')\n elem = int(elem)\n _ = int(_)\n break\n assert (elem is not None)\n elem += 1\n await tcc.send(str(elem) + ' ' + str(msg.author.id))\n tck_channel = await guild.create_text_channel(f'{elem}-{ticketer}',\n category=category)\n await tck_channel.set_permissions(ticketer_acc,\n read_messages=True,\n send_messages=True)\n await chn.send(\n f'**TICKET {elem}**\\n\\nYour ticket has been created. <#{tck_channel.id}>'\n )\n await tck_channel.send(\n f'<@{ticketer_acc.id}> Hello emo! Your ticket has been created, subject: `{ticket_title}`.\\nOur support team will be with you soon! Meanwhile please address your problem because those emos are really busy!!!'\n )\n elif msg.content == '!ticket close':\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n if not (discord.utils.get(guild.roles, name='Ticket support team')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Administrator') in msg.author.roles\n or discord.utils.get(guild.roles,\n name='Co-owners (A.K.A. 
super admin)')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Owner') in msg.author.roles):\n return\n closed_cat = discord.utils.get(guild.categories, name=\"Tickets 3\")\n nam = msg.channel.name.lstrip('\ud83d\udd31\ud83d\udee1\ufe0f')\n tick_id = int(nam[:nam.find('-')])\n tcc = discord.utils.get(guild.channels, name=\"ticket-creation-logs\")\n elem = None\n creator = None\n async for mg in tcc.history():\n if mg.content.startswith('CATEG'):\n continue\n elem, creator = mg.content.split(' ')\n elem = int(elem)\n creator = int(creator)\n if elem == tick_id:\n break\n assert (elem is not None)\n await msg.channel.send('Closing ticket...')\n counter = {}\n async for mg in msg.channel.history():\n if mg.author.bot or mg.author.id == creator:\n continue\n if mg.author.id not in counter.keys():\n counter[mg.author.id] = 1\n else:\n counter[mg.author.id] += 1\n max_num = 0\n max_authors = []\n for key, value in counter.items():\n if value > max_num:\n max_num = value\n max_authors = [key]\n elif value == max_num:\n max_authors.append(key)\n user_ping_list = ' '.join([f'<@{usr}>' for usr in max_authors\n ]) + ' contributed the most.'\n if user_ping_list == ' contributed the most.':\n user_ping_list = 'No one contributed.'\n await msg.channel.send(user_ping_list)\n await msg.channel.send('We hope we were able to solve your problem.')\n await asyncio.sleep(3)\n tick_creator = discord.utils.get(guild.members, id=creator)\n assert (tick_creator is not None)\n await msg.channel.set_permissions(tick_creator, overwrite=None)\n await msg.channel.edit(category=closed_cat)\n dms = discord.utils.get(guild.members, id=creator)\n assert (dms is not None)\n DM = dms\n dms = DM._user.dm_channel\n if dms is None:\n dms = await DM._user.create_dm()\n del DM\n cr = msg.channel.created_at\n assert (cr is not None)\n tick_created: datetime.datetime = cr.replace(tzinfo=None)\n time_cur = datetime.datetime.utcnow().replace(tzinfo=None)\n td = time_cur - tick_created\n mm, ss = divmod(td.seconds, 60)\n hh, mm = divmod(mm, 60)\n timestr = ''\n if td.days > 0:\n timestr += f'{td.days}d '\n if hh > 0:\n timestr += f'{hh}h '\n if mm > 0:\n timestr += f'{mm}m '\n if ss > 0:\n timestr += f'{ss}s'\n await dms.send(f'''Your ticket has been closed by <@{msg.author.id}>\nYour ticket lasted `{timestr}`.\nWe hope we were able to solve your problem. :partying_face:''')\n elif msg.content == '!mark co':\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n if not (discord.utils.get(guild.roles, name='Ticket support team')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Administrator') in msg.author.roles\n or discord.utils.get(guild.roles,\n name='Co-owners (A.K.A. 
super admin)')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Owner') in msg.author.roles):\n return\n await msg.channel.send('Requesting to mark this ticket for: `Co-owner`')\n nam = msg.channel.name\n if nam.startswith('\ud83d\udee1\ufe0f'):\n nam = nam[1:]\n assert (nam is not None)\n if nam.startswith('\ud83d\udd31'):\n await msg.channel.send('Ticket already marked for `Co-owner`')\n else:\n await msg.channel.edit(name='\ud83d\udd31' + nam)\n await msg.channel.send(\n f'<@{msg.author.id}> marked this ticket for: `Co-owner`')\n await msg.channel.send(':trident:')\n elif msg.content == '!mark admin':\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n if not (discord.utils.get(guild.roles, name='Ticket support team')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Administrator') in msg.author.roles\n or discord.utils.get(guild.roles,\n name='Co-owners (A.K.A. super admin)')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Owner') in msg.author.roles):\n return\n await msg.channel.send(\n 'Requesting to mark this ticket for: `Administrator`')\n nam = msg.channel.name\n assert (nam is not None)\n if nam.startswith('\ud83d\udee1\ufe0f'):\n await msg.channel.send('Ticket already marked for `Administrator`')\n elif nam.startswith('\ud83d\udd31'):\n await msg.channel.send('Ticket already marked for `Co-owner`')\n else:\n await msg.channel.edit(name='\ud83d\udee1\ufe0f' + nam)\n await msg.channel.send(\n f'<@{msg.author.id}> marked this ticket for: `Adiministrator`')\n await msg.channel.send(':shield:')\n elif msg.content[:7] == '!invite':\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n if not (discord.utils.get(guild.roles, name='Ticket support team')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Administrator') in msg.author.roles\n or discord.utils.get(guild.roles,\n name='Co-owners (A.K.A. super admin)')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Owner') in msg.author.roles):\n return\n usr_ping = msg.content[8:]\n if not (usr_ping.startswith('<@') and usr_ping.endswith('>')\n and usr_ping[2:-1].isdigit()):\n return\n invited_usr = discord.utils.get(guild.members, id=int(usr_ping[2:-1]))\n assert (invited_usr is not None)\n await msg.channel.set_permissions(invited_usr,\n read_messages=True,\n send_messages=True)\n await msg.channel.send(f'{usr_ping} was invited into the ticket.')\n elif msg.content[:7] == '!remove':\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n if not (discord.utils.get(guild.roles, name='Ticket support team')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Administrator') in msg.author.roles\n or discord.utils.get(guild.roles,\n name='Co-owners (A.K.A. 
super admin)')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Owner') in msg.author.roles):\n return\n usr_ping = msg.content[8:]\n if not (usr_ping.startswith('<@') and usr_ping.endswith('>')\n and usr_ping[2:-1].isdigit()):\n return\n invited_usr = discord.utils.get(guild.members, id=int(usr_ping[2:-1]))\n assert (invited_usr is not None)\n await msg.channel.set_permissions(invited_usr, overwrite=None)\n await msg.channel.send(f'{usr_ping} was removed from the ticket.')\n elif msg.content[:9] == '!category':\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n if not (discord.utils.get(guild.roles, name='Ticket support team')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Administrator') in msg.author.roles\n or discord.utils.get(guild.roles,\n name='Co-owners (A.K.A. super admin)')\n in msg.author.roles or discord.utils.get(\n guild.roles, name='Owner') in msg.author.roles):\n return\n categ = msg.content[10:]\n tcc = discord.utils.get(guild.channels, name=\"ticket-creation-logs\")\n nam = msg.channel.name.lstrip('\ud83d\udd31\ud83d\udee1\ufe0f')\n tick_id = int(nam[:nam.find('-')])\n async for mg in tcc.history():\n if mg.content.startswith(f'CATEG {tick_id}'):\n await msg.channel.send(\n f'''This ticket is already marked with category **{mg.content[len(f'CATEG{tick_id}')+2:]}**'''\n )\n return\n await tcc.send(f'CATEG {tick_id} {categ}')\n await msg.channel.send(\n f'''<@{msg.author.id}> marked this ticket with category **{categ}**''')\n else:\n category = discord.utils.get(guild.categories, name=\"Tickets 2\")\n if msg.channel.category != category:\n return\n PING_PTN = '<@[0-9]+>'\n pings = re.findall(PING_PTN, msg.content, re.UNICODE)\n usrs = [\n discord.utils.get(guild.members, id=int(ping[2:-1])) for ping in pings\n ]\n remainder = msg.content\n for ping in pings:\n remainder = remainder.replace(str(ping), '')\n for usr in usrs:\n assert (usr is not None)\n DM = usr\n dms = DM._user.dm_channel\n if dms is None:\n dms = await DM._user.create_dm()\n del DM\n await dms.send(\n f'''`{msg.author.name}#{msg.author.discriminator}`Pinged you in a ticket: <#{msg.channel.id}>\n\n`{remainder}`''')\n\n\nkeep_alive()\nbot.run(DC_TOK)\n\nmain.py\nfrom flask import Flask\nfrom threading import Thread\n\napp = Flask('')\n\n\n@app.route('\/')\ndef home():\n return \"Hello. I am alive!\"\n\n\ndef run():\n app.run(host='0.0.0.0', port=8080)\n\n\ndef keep_alive():\n t = Thread(target=run)\n t.start()\n\nkeep_alive.py\nI am hosting my project on repl.it and using uptimerobot to ping it every 10 minutes so that it does not go dormant. But discord cloudflare bans it. I hear some say it's because of uptimerobot but how does that even affect the discord API? Also, the plan you give me must not require payment or payment methods of any type. I am a minor.","Title":"Discord API cloudflare banning my repl.it repo","Tags":"python,discord,discord.py,replit","AnswerCount":1,"A_Id":75174128,"Answer":"As moinierer3000 said,\nReplit uses shared IPs to host your Discord bots.\nThis means that when you run your bots, it sends requests to Discord servers, that are proxies by Cloudflare that rate-limits how much requests an IP can make to defend the app from DDoS attacks.\nI'd recommend you to just shutdown the bot, wait 15-30 minutes and try again later. 
I've done this numerous times and it worked great.\nHope this answer helped you.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75171894,"CreationDate":"2023-01-19 11:58:50","Q_Score":1,"ViewCount":43,"Question":"I'm searching a way how to check the current value of Couchbase cluster timeout, and how to set up a desired timeout using the Python SDK.\nI know the method to set up a timeout using ClusterTimeoutOptions but it doesn't work.\nThere are no problems with timeouts if I disable it using couchbase-cli:\ncouchbase-cli setting-query --set --timeout -1","Title":"How to check and set up the Couchbase timeout using the Python SDK?","Tags":"python,timeout,couchbase","AnswerCount":1,"A_Id":75244354,"Answer":"I resolve it and it works as I expected. I converted cURL commands which I found at the Couchbase documentation website to Python's requests, and I was able to check timeout and update it.\nTo check:\nrequests.get('http:\/\/localhost:8093\/admin\/settings', auth=(user, password))\nTo update:\nrequests.post('http:\/\/localhost:8091\/settings\/querySettings', headers=headers, data=data, auth=(user, password))","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75172833,"CreationDate":"2023-01-19 13:16:53","Q_Score":2,"ViewCount":41,"Question":"The way iam trying to get the week number\nimport pendulum\nfrom datetime import date\n\ndt = pendulum.parse(str(date.today())) \n\nweek = dt.week_of_month\n\nprint(dt)\n\nprint(week)\n\nResult\n\n2023-01-19T00:00:00+00:00\n-48\n\nThe week number is -48 here, please help me to get the correct week number of the month","Title":"Python pendulum module returning wrong week number","Tags":"python,python-3.x,week-number,pendulum","AnswerCount":2,"A_Id":75173097,"Answer":"As @God is One mentioned in the above answer, there's an issue with the current version and the rest of the versions in the 2.1.x series. But then when I tried to downgrade it to 2.0.5, this worked fine and it returned the expected value.\nMaybe that's the only option as of now if you're to go with this library.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75173423,"CreationDate":"2023-01-19 14:01:26","Q_Score":1,"ViewCount":91,"Question":"I have a xlsx file\n\n\n\n\nCountry name\nCountry code\n\n\n\n\nIN\nIndia\n\n\nSL\nSri Lanka\n\n\n\n\nI want to convert this to a json in the format\njson = {\n {\"Name\":\"India\",\n \"Code\":\"IN\"},\n {\"Name\":\"Sri Lanka\",\n \"Code\":\"SL\"}\n }\n\n\nI tried load the excel file using the pandas and convert them to json but i am getting\njson = {\n \"India\":\"IN\",\n \"Sri Lanka\":\"SL\"\n }","Title":"Convert excel file (.xlsx) to json","Tags":"python,json,pandas","AnswerCount":2,"A_Id":75173549,"Answer":"try:\ndf.to_json(orient=\"records\")","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75176798,"CreationDate":"2023-01-19 18:31:01","Q_Score":0,"ViewCount":58,"Question":"We have a solution written in C#\/.NET Framework 4.7. It has a lot of infrastructure code related to environment configurations, database access, logging, exception handling etc.\nOur co-workers are eager to contribute to the project with Python code that makes a lot of special calculations. Ideally we want to pass configuration plus (big amount of) input data to their code and get back (big amount of) results without resorting to database integration. Is there a viable way to do so? 
Main goals are: 1) not to rewrite Python code to C# 2) not to duplicate configuration\/database related code in Python to make future maintenance easier","Title":"Is there a good way to use C# (.NET Framework) and Python code together?","Tags":"python,c#,.net,integration","AnswerCount":2,"A_Id":75176886,"Answer":"Yes this is exactly what Unix (e.g. Gnu\/Linux) dose. The Unix philosophy is about creating many (usually small) programs (that usually do one thing well), and connecting them to create a system that is greater than the parts. To do this we use inter-process communication, usually pipelines \/ streams.\nAn alternate approach is to compile the C# into a library, that can be called form the python.","Users Score":-1,"is_accepted":false,"Score":-0.0996679946,"Available Count":1},{"Q_Id":75177273,"CreationDate":"2023-01-19 19:19:26","Q_Score":2,"ViewCount":98,"Question":"module constants\n \n implicit none\n \n DOUBLE PRECISION,parameter,public::MJ=9.552538964304e-4\n DOUBLE PRECISION,parameter,public::pi=3.1415926536\n DOUBLE PRECISION,parameter,public::MS=0.44\n DOUBLE PRECISION,parameter,public::RS=0.002045615583096031\n \n end module constants\n \n \n module photdynh\n \n use constants\n\n INTEGER,parameter,public::NBOD=2,NDGL=6*NBOD, &\n n=7+8*(nbod-1),nmax=77970,nparam=3*nbod\n INTEGER,public::NOBS,ntransit(nbod),phottype\n DOUBLE PRECISION,public::mc(nbod),mp(nbod),xstart, &\n ystart(ndgl),per(nbod)\n DOUBLE PRECISION,public::a(NBOD),e(NBOD),inc(NBOD)\n DOUBLE PRECISION,public::g(NBOD),node(NBOD),l(NBOD),p(NBOD), &\n ex(NBOD),ey(NBOD)\n double precision,public::lb(n),ub(n),flux_obs(nmax), &\n jd(nmax),normerr(nmax),tmids(nbod),u1,u2,rearth\n logical::ipar(n),iorbel\n\n end module photdynh \n\nThe code above defines fortran modules that are used by another module photdyn_model with use statements like use photdynh. After wrapping the modules with f2py,\nf2py -c -m photdyn_model.f90 -m photdyn_model and importing into python, I get an error message\nFile \"photdynmodel.py\", line 4, in \nimport photdyn_model\nImportError: \/mnt\/d\/TESS\/TOI2095\/photdyn_model.cpython-38-x86_64-linux-gnu.so: undefined symbol: __photdynh_MOD_ms\nWhy is f2py having trouble with this publicly defined constant ? Is there something wrong with this code\/approach ?","Title":"problem with defining constants in a Fortran module which is wrapped into Python using f2py","Tags":"python,fortran,constants,f2py","AnswerCount":1,"A_Id":75182201,"Answer":"You don't say which Fortran compiler you're using, but at least with GFortran a named constant (a variable declared with the parameter attribute) doesn't necessarily result in a corresponding symbol in the object file. It's been a very long time since I've used f2py, but if f2py expects that each Fortran variable has a corresponding symbol in the object file (as suggested by the error message you're getting), then that sounds like a bug in f2py.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75178311,"CreationDate":"2023-01-19 21:08:04","Q_Score":1,"ViewCount":410,"Question":"I am in the process of setting up my m1 pro max laptop. I have downloaded python, and I am installing all the required libs through pip. 
I am having problems installing open 3d lib.\nWhen I run this:\nimport sys\nprint(sys.version)\n\nimport platform\nprint(platform.platform())\n\nimport numpy as np\nimport open3d as o3d\n\n\nprint(\"Load a ply point cloud, print it, and render it\")\npcd = o3d.io.read_point_cloud(\"data\/bun315.ply\")\nprint(pcd)\nprint(np.asarray(pcd.points))\no3d.visualization.draw_geometries([pcd])\n\n\nI get this error:\n3.9.1 (v3.9.1:1e5d33e9b9, Dec 7 2020, 12:44:01) \n[Clang 12.0.0 (clang-1200.0.32.27)]\nmacOS-12.5.1-arm64-arm-64bit\n\nTraceback (most recent call last):\n File \"\/Users\/abdelnasser\/Desktop\/point clouds\/bunny\/hello.py\", line 8, in \n import open3d as o3d\n File \"\/Users\/abdelnasser\/Library\/Python\/3.9\/lib\/python\/site-packages\/open3d\/__init__.py\", line 97, in \n from open3d.cpu.pybind import (camera, data, geometry, io, pipelines,\nImportError: dlopen(\/Users\/abdelnasser\/Library\/Python\/3.9\/lib\/python\/site-packages\/open3d\/cpu\/pybind.cpython-39-darwin.so, 0x0002): Library not loaded: '\/opt\/homebrew\/opt\/libomp\/lib\/libomp.dylib'\n Referenced from: '\/Users\/abdelnasser\/Library\/Python\/3.9\/lib\/python\/site-packages\/open3d\/cpu\/pybind.cpython-39-darwin.so'\n Reason: tried: '\/opt\/homebrew\/opt\/libomp\/lib\/libomp.dylib' (no such file), '\/usr\/lib\/libomp.dylib' (no such file)\n\n\nI have searched up the error but nothing has worked. Not sure why its trying homebrew, I downloaded it to see try some things but ended up removing it from my laptop.\nWhen trying to install the open 3d lib I have had no issues with intel and m2 air laptop, but for some reason its not working on this laptop.","Title":"Installed open3d lib but the library is not loading","Tags":"python,apple-m1,open3d","AnswerCount":1,"A_Id":75187111,"Answer":"brew install libomp solves the problem.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75178538,"CreationDate":"2023-01-19 21:37:03","Q_Score":1,"ViewCount":68,"Question":"I found that -1 \/\/ 2 is equal to -1 (Why not 0?), but int(-1 \/ 2) is equal to 0 (as I expected).\nIt's not the case with 1 instead of -1, so both 1 \/\/ 2 and int(1 \/ 2) is equal to 0.\nWhy the results are different for -1?","Title":"Why -1\/\/2 = -1 but int(-1\/2) = 0?","Tags":"python","AnswerCount":1,"A_Id":75178610,"Answer":"In Python, the division operator \/ and the floor division operator \/\/ have different behavior.\nThe division operator \/ returns a floating-point number that represents the exact quotient of the division. In the case of -1\/2, the quotient is -0.5. When you cast it to int, it rounds the number up to 0.\nThe floor division operator \/\/ returns the quotient of the division rounded down to the nearest integer. In the case of -1\/\/2, the quotient is -1, because -1 divided by 2 is -0.5, which is rounded down to -1.\nThat's why -1\/\/2 = -1 and int(-1\/2) = 0 in python.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75179766,"CreationDate":"2023-01-20 01:18:29","Q_Score":0,"ViewCount":28,"Question":"So I know how to create topics on Confluent Cloud with the confluent_kafka AdminClient instance but I\u2019m not sure how to set the topic\u2019s message schema programmatically? 
To clarify, I have the schema I want to use saved locally in an avro schema file(.avsc)","Title":"How do I tell a topic on confluent cloud to use a specific schema programmatically?","Tags":"apache-kafka,avro,confluent-schema-registry,confluent-kafka-python","AnswerCount":1,"A_Id":75179798,"Answer":"Use the AdminClient to create the topic and then use the SchemaRegistryClient to register the schema for the topic.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75181204,"CreationDate":"2023-01-20 07:58:46","Q_Score":2,"ViewCount":69,"Question":"i am getting response from request.post() as this:\n{'total': 3,\n 'files': [{'fileName': 'abc.mp4', 'size': '123'},\n {'fileName': 'def.mp4', 'size': '456'},\n {'fileName': 'ghi.mp4', 'size': '789'}]\n}\n\ni just want the filename value from this response and store it in an str list.\ni have tried the following loop to do the same but it is showing some error:\n fileNames = []\n for files in response.json()[\"files\"]:\n fileNames.append(files[\"filename\"])\n\ni expected the list of filenames but got some error","Title":"How to append value of a JSON response in a list (python)?","Tags":"python,python-3.x,http","AnswerCount":4,"A_Id":75181296,"Answer":"I was getting the KeyError, I just changed the key value to fileName instead of filename and it solved the problem.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75183706,"CreationDate":"2023-01-20 11:57:58","Q_Score":1,"ViewCount":27,"Question":"I am using robot framework as the main test framework with selenium (+ python libraries) to test web ui. I'm having issues with continuous integration in Jenkins and need to change the screenshot name (that is being assigned automatically with index (selenium-screenshot-{index}.png) to more unique name for several test cases eq.:\n${TEST NAME}-screen-{index}.png\n\nI know how to access automatic varibales, but how do I set the automatic generation name to something other than selenium-screenshot on Suite Setup\/ beggining of the tests level?\n\nTried using Set Screenshot Directory to make it more unique for test suites but filenames are still the issue. Also using keyword to capture screenshot and setting the name there is not enough, as some keywords make screenshots on failure and they are still being named with selenium-screenshot convention.","Title":"Robot framework and selenium with python - screenshot automatic name change","Tags":"python,selenium,robotframework","AnswerCount":1,"A_Id":75184305,"Answer":"Also using keyword to capture screenshot and setting the name there is not enough, as some keywords make screenshots on failure and they are still being named with selenium-screenshot convention.\n\nYou could create your own custom keyword that would handle naming and run on failure. You could use Register Keyword To Run On Failure in Suite Setup to specify which keyword to run on failure.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75184534,"CreationDate":"2023-01-20 13:18:44","Q_Score":2,"ViewCount":58,"Question":"I'm trying to loop trough a dictionnary to create a simple table with the keys in a column and the values in the other.\nSo in my view I create the dictionnary vegetables_dict. I loop trhough the \"items\" of a \"cartitem\". If the item doesn't alrady exists I created a key with the name of a ForignKey of \"item\" and a value with its attribute quantity. 
Otherwise I increment the already existing key with the corresponding quantity\ndef dashboard_view(request):\n\n carts = Cart.objects.filter(cart_user__isnull = False)\n cartitems = CartItem.objects.all()\n\n vegetables_dict = {}\n for item in cartitems:\n if item.stock_item.product_stockitem.name in vegetables_dict:\n vegetables_dict[item.stock_item.product_stockitem.name] += item.quantity\n else :\n vegetables_dict[item.stock_item.product_stockitem.name] = item.quantity\n\n context = {\n 'carts' : carts,\n 'cartitems' : cartitems,\n 'items' : vegetables_dict\n }\n return render(request, \"maraicher\/dashboard.html\", context)\n\nIn the template i Tried :\n \n \n \n \n \n \n
<table>\n  <thead>\n    <tr>\n      <th>R\u00e9capitulatif de r\u00e9cole<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n
      <th>Produit<\/th>\n      <th>Quantit\u00e9<\/th>\n    <\/tr>\n
    <div>{{items}}<\/div>\n\n    {% for key, value in items.item %}\n    <tr>\n
      <td>{{key}}<\/td>\n      <td>{{value}}<\/td>\n    <\/tr>\n    {% endfor %}\n\n  <\/tbody>\n<\/table>\n\n{{items}} renders the dictionary but the table is empty.\nAny idea what is happening?\nI also tried\n{% for item in items %}\n
{{item}}<\/td>\n {{item.item}}<\/td> \n <\/tr>\n{% endfor %}","Title":"I can't loop trough a dictionnary in my Django Template","Tags":"python,django,html-table,django-templates","AnswerCount":3,"A_Id":75184799,"Answer":"As mentionned by @user2390182 a \"s\" was missing, the correct syntaxe is\n\n{% for key, value in items.items %}","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75184791,"CreationDate":"2023-01-20 13:42:18","Q_Score":0,"ViewCount":40,"Question":"I am wondering why the start and end of time windows have to be integers, for example for a node who has 7am-10am window, it is (7, 10)? There could be case where a time window is between 7:30am-10:30am, which could be 7.5-11.5. Why the code doesn't allow decimal values for time windows?\nWhen I have decimals, like (7.5-10.5) for the time window tuples, I got error saying, time windows are expected to be integers. While I can modify window to make it integers like (7-10), but that is not what I want if possible. How can we go about implementing it?","Title":"In OR tools, more specifically in VRPTW, why start and end times for a time window should only be integers?","Tags":"python,routes,or-tools","AnswerCount":1,"A_Id":75184857,"Answer":"why not count in minutes ? The solver is scale agnostic.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75185612,"CreationDate":"2023-01-20 14:49:30","Q_Score":2,"ViewCount":175,"Question":"Consider the example below, since I'm initializing the driver in setUp method and using it in test_login, the browser will open twice, the first time during setUp and then it will be closed and the tests will begin.\nIf I remove the logic from setUp and put it in test_login, the driver will be undefined in test_profile and tearDown\nWhat's the correct way to initialize the driver and use it throughout the class while not causing the browser to open twice?\nfrom selenium import webdriver\nimport unittest\nfrom selenium.webdriver.chrome.service import Service\nfrom webdriver_manager.chrome import ChromeDriverManager\n\n\nclass Test(unittest.TestCase):\n def setUp(self):\n self.driver = webdriver.Chrome(\n service=Service(ChromeDriverManager().install()))\n self.driver.get('https:\/\/example.com\/login')\n self.current_url = self.driver.current_url\n self.dashboard_url = 'https:\/\/example.com\/dashboard'\n\n def test_login(self):\n self.assertEqual(self.dashboard_url, self.current_url)\n \n def test_profile(self):\n self.driver.get('https:\/\/example.com\/profile')\n \n def tearDown(self):\n self.driver.close()","Title":"Python Unittest: How to initialize selenium in a class and avoid having the browser opening twice?","Tags":"python,selenium,python-unittest","AnswerCount":3,"A_Id":75186143,"Answer":"Your code works just fine. Please add the decorator @classmethod before the setUp and tearDown methods.\nAlso, issue is with the line self.driver.get('https:\/\/example.com\/login') in the setUp method. Just remove it from there and maybe create a new function to hold that code.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75186316,"CreationDate":"2023-01-20 15:46:57","Q_Score":2,"ViewCount":96,"Question":"I'm playing around with movement in Turtle, I'm trying to get basic 2D WASD movement working; what I mean by consistent is the same speed, no lag spikes and\/or random speed boosts. 
This is my current code: (I mapped the keys to a dict to prevent key press delay)\nimport turtle\n\nkeys = {\n \"w\": False,\n \"s\": False,\n \"a\": False,\n \"d\": False\n}\n\nturtle.setup(800, 590)\n\nturtle.delay(0)\nturtle.tracer(0, 0)\n\nwn = turtle.Screen()\n\nplayer = turtle.Turtle()\nplayer.speed(4)\n\ndef movement():\n if keys[\"w\"]:\n player.goto(player.xcor(), player.ycor() + 3)\n if keys[\"s\"]:\n player.goto(player.xcor(), player.ycor() - 3)\n if keys[\"a\"]:\n player.goto(player.xcor() - 3, player.ycor())\n if keys[\"d\"]:\n player.goto(player.xcor() + 3, player.ycor())\n turtle.update()\n\ndef c_keys(key, value):\n keys[key] = value\n\nwn.onkeypress(lambda: c_keys(\"w\", True), \"w\")\nwn.onkeyrelease(lambda: c_keys(\"w\", False), \"w\")\nwn.onkeypress(lambda: c_keys(\"s\", True), \"s\")\nwn.onkeyrelease(lambda: c_keys(\"s\", False), \"s\")\nwn.onkeypress(lambda: c_keys(\"a\", True), \"a\")\nwn.onkeyrelease(lambda: c_keys(\"a\", False), \"a\")\nwn.onkeypress(lambda: c_keys(\"d\", True), \"d\")\nwn.onkeyrelease(lambda: c_keys(\"d\", False), \"d\")\n\nwn.listen()\n\nwhile True:\n movement()\n\nAny help is appreciated, thanks!","Title":"Smooth and consistent WASD movement using turtle","Tags":"python,turtle-graphics,python-turtle","AnswerCount":1,"A_Id":75189078,"Answer":"(Thanks to ggorlen) The issue was the while True:. Using ontimer fixed the problem and made the movement stable and non laggy.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75186331,"CreationDate":"2023-01-20 15:48:14","Q_Score":2,"ViewCount":291,"Question":"I'm trying to query postgres from an MWAA instance of airflow. I'm not sure if there is a conflict due to airflow itself having a different version of postgres for its metadata or what, but I get this error when connecting to postgres:\n File \"\/usr\/local\/airflow\/dags\/transactions\/transactions.py\", line 62, in load_ss_exposures_to_s3\n ss_conn = psycopg2.connect(\n File \"\/usr\/local\/airflow\/.local\/lib\/python3.10\/site-packages\/psycopg2\/__init__.py\", line 122, in connect\n conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\npsycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above\n\nLocally I have psycopg2 version 2.9.5 and libpq version 140005. MWAA is using psycopg2 2.9.5 and libpq 90224. Is there a way for me to force MWAA to use another version? Maybe through airflow plugins? Airflow version is 2.4.3.","Title":"MWAA Airflow job getting SCRAM error when connecting to postgres","Tags":"python,postgresql,amazon-web-services,airflow,mwaa","AnswerCount":2,"A_Id":75942876,"Answer":"In case anyone else encounters this issue when upgrading to MWWA Airflow 2.4.3, I managed to resolve this issue by adding psycopg2-binary to the requirements.txt file. I didn't specify a version.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75186697,"CreationDate":"2023-01-20 16:19:47","Q_Score":1,"ViewCount":183,"Question":"I want to compare the price of coconut on two websites. there are two stores (websites) called laughs and glomark.\nNow,I have two files main.py and comparison.py. I think the problem is in the Laughs price scrapping part. This cord is running without error. 
I will put my output and expected output bellow after the code.\nmain.py\nfrom compare_prices import compare_prices \nlaughs_coconut = 'https:\/\/scrape-sm1.github.io\/site1\/COCONUT%20market1super.html'\nglomark_coconut = 'https:\/\/glomark.lk\/coconut\/p\/11624'\ncompare_prices(laughs_coconut,glomark_coconut) \n\ncomparison.py\nimport requests\nimport json\nfrom bs4 import BeautifulSoup\n\n#Imitate the Mozilla browser.\nuser_agent = {'User-agent': 'Mozilla\/5.0'}\n\ndef compare_prices(laughs_coconut,glomark_coconut):\n # Aquire the web pages which contain product Price\n laughs_coconut = requests.get(laughs_coconut)\n glomark_coconut = requests.get(glomark_coconut)\n\n # LaughsSuper supermarket website provides the price in a span text.\n soup_laughs = BeautifulSoup(laughs_coconut.text, 'html.parser')\n price_laughs = soup_laughs.find('span',{'class': 'price'}).text\n \n \n # Glomark supermarket website provides the data in jason format in an inline script.\n soup_glomark = BeautifulSoup(glomark_coconut.text, 'html.parser')\n script_glomark = soup_glomark.find('script', {'type': 'application\/ld+json'}).text\n data_glomark = json.loads(script_glomark)\n price_glomark = data_glomark['offers'][0]['price']\n\n \n #TODO: Parse the values as floats, and print them.\n price_laughs = price_laughs.replace(\"Rs.\",\"\")\n price_laughs = float(price_laughs)\n price_glomark = float(price_glomark)\n print('Laughs COCONUT - Item#mr-2058 Rs.: ', price_laughs)\n print('Glomark Coconut Rs.: ', price_glomark)\n \n # Compare the prices and print the result\n if price_laughs > price_glomark:\n print('Glomark is cheaper Rs.:', price_laughs - price_glomark)\n elif price_laughs < price_glomark:\n print('Laughs is cheaper Rs.:', price_glomark - price_laughs) \n else:\n print('Price is the same')\n\n\nMy code is running without error and as an output, it shows.\nLaughs COCONUT - Item#mr-2058 Rs.: 0.0\n\nGlomark Coconut Rs.: 110.0\n\nLaughs is cheaper Rs.: 110.0\n\nbut the expected output is:\nLaughs COCONUT - Item#mr-2058 Rs.: 95.0\n\nGlomark Coconut Rs.: 110.0\n\nLaughs is cheaper Rs.: 15.0\n\nnote:- Rs.95.00<\/span> this is the element of Laughs coconut price","Title":"while using python web-scraping faced error","Tags":"json,python-3.x,web-scraping,beautifulsoup,python-requests","AnswerCount":2,"A_Id":75187235,"Answer":"Because there are two items with 'span',{'class': 'price'} . Since find() method returns first value, in this case we will use findAll() method and return second one. So in your code if you change to this price_laughs = soup_laughs.findAll('span',{'class': 'price'})[1].text problem will be solved.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75186701,"CreationDate":"2023-01-20 16:20:02","Q_Score":2,"ViewCount":1290,"Question":"I have a question: how can I create requirements.txt file inside my Docker build, so I don't have to update it manually at project's directory, while releasing new versions of the app?\nSo, what I want is basically to construct the requirements.txt file inside the Docker build and install it then.\nMy Dockerfile\nFROM --platform=arm64 python:3.9-buster\n\n# Initializing Project Directory\nCMD mkdir \/project\/dir\/ \n\n# Setting up working directory\nWORKDIR \/project\/dir\/\n\nENV PYTHONUNBUFFERED=1\n\nRUN pip install --upgrade pip \n\nRUN pip freeze > requirements.txt\nADD .\/requirements.txt .\/requirements.txt # error occurs at this line\n\n\nCOPY . 
.\n\nRUN pip install -r requirements.txt \n\n\nRUN chmod +x .\/run.sh\n\nENTRYPOINT [\"sh\", \".\/run.sh\"]\n\n\nBut unfortunately there is an error occured: failed to compute cache key: \"\/requirements.txt\" not found: not found.\nDo you have any tips for implementation?","Title":"How to automatically create and install requirements.txt file inside the Docker Build","Tags":"python,docker","AnswerCount":1,"A_Id":75186867,"Answer":"I believe you have to install dependencies into your environment before freeze can actually freeze them.\nSo either,\nIn the project directory run a pip freeze (preferred)\nThen in the dockerfile do a pip install -r requirements.txt instead of pip freeze\nOr add pip install x where X is each of your dependencies, then freeze.\nDoing the second option would be a bit \"odd\" considering you usually want to build the dep list first, then provide that to your build env. Not build the dep list and build sequentially.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75187059,"CreationDate":"2023-01-20 16:52:26","Q_Score":0,"ViewCount":71,"Question":"I am facing below error message while running the python code(ML model) in the python databricks notebook\nConnectException: Connection refused (Connection refused) Error while obtaining a new communication channel\nConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.\nSpark tip settings","Title":"ConnectException: Connection refused (Connection refused) Error while obtaining a new communication channel. error in databricks notebook","Tags":"python,apache-spark,pyspark,databricks,azure-databricks","AnswerCount":1,"A_Id":75284443,"Answer":"The driver may be experiencing a memory bottleneck, which is a frequent cause of this issue. When this occurs, the driver has an out of memory (OOM) crash, restarts often, or loses responsiveness. Any of the following factors might be the memory bottleneck's cause:\n\nFor the load placed on the driver, the driver instance type is not ideal.\nMemory-intensive procedures are carried out on the driver.\nThe same cluster is hosting a large number of concurrent notebooks or processes.\n\nPlease try below options\n\nTry increasing driver-side memory and then retry.\nYou can look at the spark job dag which give you more info on data flow.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75187563,"CreationDate":"2023-01-20 17:40:48","Q_Score":0,"ViewCount":25,"Question":"square = shape.add_textbox(Inches(3), Inches(3), Inches(1), Inches(1))\nI want this textbox to have bullets. I tried adding a textframe instead, but the size of the texframe cannot be adjusted and I want multiple boxes with bullets.\nI tried adding a textframe instead, but the size of the texframe cannot be adjusted and I want multiple boxes with bullets.","Title":"How do I add a bullet to a textbox? (I tried and it says shapes have to paragraph attribute?","Tags":"python,python-pptx","AnswerCount":2,"A_Id":75192047,"Answer":"In the textframe resizing question, you set the size of the shape. 
Not the textframe that is part of it.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75188349,"CreationDate":"2023-01-20 19:01:10","Q_Score":1,"ViewCount":55,"Question":"I am attempting to remove key-value pairs from a dict when a sub-dictionary matches values from another dictionary.\nExample set-up:\ne = {'a':{'aa':'yes'}, 'b':{'ac':'no'}, 'a':{'aa':'yes'}}\nf = {'a':{'aa':'yes'}, 'e':{'ab':'no'}, 'a':{'aa':'yes'}}\n\nfor keys, values in e.items():\n for k, v in f.items():\n if values.get('aa') == v.get('aa'):\n e.pop(keys)\n\n\nRuntimeError: dictionary changed size during iteration\n\nExpected result:\n#from\ne = {'a':{'aa':'yes'}, 'b':{'ac':'no'}, 'a':{'aa':'yes'}}\n\n#to\ne = {'b':{'ac':'no'}}","Title":"Dictionary sized change due to iteration of dict","Tags":"python,dictionary","AnswerCount":3,"A_Id":75188441,"Answer":"In general, you should not add or remove items from iterables that you are currently iterating over.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75188722,"CreationDate":"2023-01-20 19:44:01","Q_Score":1,"ViewCount":466,"Question":"Consider some library with an interface like this:\n\nRemoteTask.start()\nRemoteTask.cancel()\nRemoteTask.get_id()\nRemoteTask.get_result()\nRemoteTask.is_done()\n\nFor example, concurrent.futures.Future implements an API like this, but I don't want to assume the presence of a function like concurrent.futures.wait.\nIn traditional Python code, you might need to poll for results:\ndef foo():\n task = RemoteTask()\n while not task.is_done():\n time.sleep(2)\n return task.get_result()\n\nIs there some general recommended best-practice technique for wrapping this in an Awaitable interface?\nThe desired usage would be:\nasync def foo():\n task = RemoteTask()\n return await run_remote_task()\n\nI understand that the implementation details might differ across async libraries, so I am open to both general strategies for solving this problem, and specific solutions for Asyncio, Trio, AnyIO, or even Curio.\nAssume that this library cannot be easily modified, and must be wrapped.","Title":"Wrapping a polling-based asynchronous API as an Awaitable","Tags":"python,asynchronous,python-asyncio,python-trio,python-anyio","AnswerCount":2,"A_Id":75189142,"Answer":"First possibility: If the library has a way to block until completed (preferably one that doesn't just call while not task.is_done(): in a tight loop), you can use anyio.to_thread.run_sync to avoid blocking your main loop. Disadvantage: Handling cancellations is nontrivial; if the remote library doesn't expect that call from another thread at random times things might break.\nSecond: If the library has a way to hook up a completion callback, you can set an event from it, which your anyio\/trio\/asyncio task await evt.wait()s for. Arthur's answer shows how to handle cancellation.\nThird: If neither of these is true, you might try asking its author to add at least one of those. Busy waiting is not nice!\nThe fourth method is to fork the remote code's source code and liberally sprinkle it with import anyio, async def and await keywords until calling await task.run() Just Works. Cancellations are trivial and most likely involve zero work you shouldn't already be doing anyway (like try: \u2026 finally: blocks that clean up any left-over state)\nMethod five would be to split the library into a core that does all the work but doesn't do any I\/O by itself, and a wrapper that tells you when you should do some possibly-asynchronous work. 
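(Roughly, and as a sketch of the pattern rather than anything this particular library promises: the protocol core only consumes bytes or events and hands back the bytes to send, while a thin outer layer owns the socket and does the actual waiting, whether blocking or async.)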
This way is called \"sans I\/O\". Several nontrivial popular libraries work that way, e.g. Trio's SSL implementation, or the httpx handler for the HTTP protocol. Upside, this way is most useful because you can combine the protocol with others most easily, including writing a simple front-end that behaves just like the original module. Downside, if you start from an existing codebase it's the most work. Cancelling is easy because the sans-io part of the code doesn't wait and thus cannot be cancelled in the first place.\nMethod four often is way easier than it sounds, I've done it for a lot of not-too-complex libraries.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75189084,"CreationDate":"2023-01-20 20:36:00","Q_Score":3,"ViewCount":185,"Question":"I've been working on a project using flask, flask-socketio and redis\nI have a server, and some modules I would like to be able to emit from outside of the server file.\nserver.py\nfrom flask import Flask, Response, request, json\nfrom flask_socketio import SocketIO, join_room, leave_room, emit\n\napp = Flask(__name__)\n\nsocketio = SocketIO()\n\nsocketio.init_app(\n app, \n cors_allowed_origins=\"*\",\n message_que=\"redis:\/\/127.0.0.1:6379\/0\"\n)\n\n@socketio.on('ready')\ndef ready(data):\n socketio.emit('rollCall', { 'message': 'Ive got work for you' }, room='Ready')\n...\n\njobque.py\nfrom modules.server.server import socketio\n\n...\n\nsocketio.emit('rollCall', { 'message': 'Ive got work for you' }, room='Ready')\n\nAs it's currently configured, emits from the server file all work, the clients respond and they can talk back and forth. But when jobque goes to emit the same message, nothing happens. There's no error, and the client doesn't hear it.\nI'm also using redis for things other than the websockets, I can get and set from it with no problem, in both files.\nWhat do I need to do to get this external process to emit? I've looked through the flask-socketio documentation and this is exactly how they have it setup.\nI've also tries creating a new SocketIO object inside jobque.py instead of importing the one form the server, but the results are the same\nsocket = SocketIO(message_queue=\"redis:\/\/127.0.0.1:6379\/0\")\nsocketio.emit('rollCall', { 'message': 'Ive got work for you' }, room='Ready')\n\nI also went and checked if I could see the websocket traffic in redis with the message que setup using redis-cli > MONITOR, but I don't see any. I only see the operations I'm using redis for directly with the redis module. This makes me think the message que isn't actually being used, but I can't know for sure.","Title":"Flask-Socketio emitting from an external process","Tags":"python,flask,redis,flask-socketio","AnswerCount":1,"A_Id":75238678,"Answer":"Unfortunately spelled message_queue as message_que was the issue.\nCreating a new SocketIO instance without the app works now.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75189350,"CreationDate":"2023-01-20 21:07:38","Q_Score":0,"ViewCount":24,"Question":"there may very well be an answer to this question, but it's really hard to google for.\nyou can add commands to gdb by writing them in python. I am interested in debugging one of those python scripts that's running in gdb session.\nmy best guess is to run gdb on gdb and execute the user added command and somehow magically break on the python program code?\nhas anybody done anything like this before? 
I don't know the mechanism by which gdb calls python code, so if it's not in the same process space as the gdb that's calling it, I don't see how I'd be able to set breakpoints in the python program.\nor do I somehow get pdb to run in gdb? I guess I can put pdb.set_trace() in the python program, but here's the extra catch: I'd like to be able to do all this from vscode.\nso I guess my question is: what order of what things do I need to run to be able to vscode debug a python script that was initiated by gdb?\nanybody have any idea?\nthanks.","Title":"How do I debug through a gdb helper script written in python?","Tags":"python,gdb,gdb-python","AnswerCount":1,"A_Id":75190767,"Answer":"so I figured it out. it's kinda neat.\nyou run gdb to debug your program as normal, then in another window you attach to a running python program.\nin this case the running python program is the gdb process.\nonce you attach, you can set breakpoints in the python program, and then when you run commands in the first window where the gdb session is, if it hits a breakpoint in the python code, it will pop up in the second window.\nthe tipoff was that when you run gdb there does not appear to be any other python process that's a child of gdb or related anywhere, so I figured gdb must dynamically link to some python library so that the python compiler\/interpreter must be running in the gdb process space, so I figured I'd try attaching to that, and it worked.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75189505,"CreationDate":"2023-01-20 21:32:20","Q_Score":0,"ViewCount":28,"Question":"I am making an algorithm that performs certain edits to a PDF using the fitz module of PyMuPDF, more precisely inside widgets. The font size 0 has a weird behaviour, not fitting in the widget, so I thought of calculating the distance myself.\nBut searching how to do so only led me to innate\/library functions in other programming languages.\nIs there a way in PyMuPDF to get the optimal\/maximal font size given a rectangle, the text and the font?","Title":"PyMuPDF get optimal font size given a rectangle","Tags":"python,pymupdf","AnswerCount":1,"A_Id":75191243,"Answer":"As @Seon wrote, there is rc = page.insert_textbox(), which does nothing if the text does not fit. This is indicated by a negative float rc - the rectangle height deficite.\nIf positive however, the text has been written and it is too late for optimizing the font size.\nYou can of course create a Font object for your font and check text length beforehand using tl = font.text_length(text, fontsize=fs). Dividing tl \/ rect.width gives you an approximate number of lines in the rectangle, which you can compare with the rectangle height: rect.height \/ (fs * factor) in turn is a good estimate for the number of available lines in the rect.\nThe fontsize fs alone does not take the actual line height into account: the \"natural\" line height of a font is computed using its ascender and decender values lh = (font.ascender - font.descender) * fs. 
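(A rough worked example with assumed numbers: for a font whose ascender is 0.8 and descender is -0.2, at fontsize fs = 12 this gives lh = (0.8 - (-0.2)) * 12 = 12 points per line.)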
So the above computation should better be rect.height \/ lh for the number of fitting lines.\n.insert_textbox() has a lineheight parameter: a factor overriding the default (font.ascender - font.descender).\nDecent visual appearances can usually be achieved by setting lineheight=1.2.\nTo get a good fit for your text to fit in a rectangle in one line, choose fs = rect.width \/ font.text_length(text, fontsize=1) for the fontsize.\nAll this however is no guarantee for how a specific PDF viewer will react WRT text form fields. They have their own idea about necessary text borders, so you will need some experimenting.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75189581,"CreationDate":"2023-01-20 21:42:33","Q_Score":1,"ViewCount":63,"Question":"In my password manager prroject, I am trying to code a login function.\nIn this functtion, if the user's username and password match an account stored in this dictionary, it allows access to their object which has the following attributes: username, password, password_coll.\n(The password_coll is a dictionary\/collection of the users passwords as values to the website of use as keys).\nSo as a little stem from my original question, how would I also reference my\nThis is my first time using OOP approach and it is really frying my brain hahaha.\nSo I thought of using usernames as keys and the object as the value. But how do I structure this in code?\nAny examples would be greatly appreciated.\nI did try checking existing questions but they didn't answer my question closely enough. So here we are haha:)\nThe code block at the bottom is my attempt at testing the output of those methods to see if they return the data in the object. But the result was this message:\n\">\"\nimport random\nimport secrets\nimport string\n\nclass User:\n def __init__(self, username, password, password_dict=None) -> None:\n self.username = username\n self.password = password\n self.password_dict = {}\n \n def return_pass(self, password):\n return self.password \n def __str__(self, password) -> str:\n return self.password\n\n\n def get_creds(self, username, password):\n usern = input('Enter username: ')\n pwd = input('Enter password: ')\n self.username = usern\n self.password = pwd\n def passGen(self, password_dict): # random password generator\n n = int(input('Define password length. Longer passwords are safer.'))\n\n source = string.ascii_letters + string.digits\n password = ''.join((secrets.choice(source)) for i in range(n))\n print('Password has been generated!')\n\n print('Would you like to save this password? 
Type y or n: ')\n yon = input()\n\n if yon == 'y':\n site = input('Please enter the site password is to be used:')\n self.password_dict[site] = password\n\n return self.password_dict\n\n\n\nu1 = User('dave', 'pass', {})\nuser_logins = {'dave': u1}\nprint(user_logins['dave'].return_pass)","Title":"How do I store an object in a dictionary in Python?","Tags":"python,python-3.x,dictionary","AnswerCount":2,"A_Id":75189668,"Answer":"User.return_pass is a function, it has to be called:\nprint(user_logins['dave'].return_pass(\"password\")) where the text \"password\" is the arg required in the function.\nHope this helps","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75190181,"CreationDate":"2023-01-20 23:21:46","Q_Score":1,"ViewCount":214,"Question":"import os\nimport yahoo_fin.stock_info as si\n\ntickers = [\"aapl\",\"msft\",\"fb\"]\nfor ticker in tickers:\n try:\n quote = si.get_quote_table(ticker)\n price = (quote[\"Quote Price\"])\n print (ticker, price)\n \n except:\n pass\n\nWhen running this piece of code I get this error:\nFutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.\nCan someone tell me how to adapt the code?","Title":"frame.append() method deprecated","Tags":"python,pandas,yahoo-finance","AnswerCount":2,"A_Id":75190226,"Answer":"I have never used yahoo_fin but based on your code and the warning in question, this appears to be something the developers of that library need to change (by using the concat method instead of append). In the meantime, you can continue to use it as is and ignore the warning or you could always contribute to their codebase, or fork it and make the change for yourself.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75191124,"CreationDate":"2023-01-21 04:26:53","Q_Score":0,"ViewCount":24,"Question":"Are there tools for Python that minimize the size of the imports used in a Python package, something similar to esbuild for JavaScript? Having a tool that extracts only the used methods of imported packages, uglifying them, and putting them into a single file for efficiency purposes would be very useful. I need something like that to package my Python code into a Lambda. I am having trouble finding a tool that does so beyond linting.\nI tried tools like Black, flake8, and pyright, however none fulfill the purpose of minimizing the file\/package size.","Title":"Are there Python tools that optimize for package size and performance?","Tags":"python,bundler,packaging","AnswerCount":1,"A_Id":75191149,"Answer":"A few tools are available to assist you in\u00a0packaging\u00a0your code for usage in a Lambda function and reducing\u00a0the size of Python imports.\nPipenv, a package manager that lets you manage dependencies and virtual environments for your Python applications, is one well-liked solution. You can deploy your project without relying on external dependencies by using pipenv to \"vendor\" your dependencies, which means that it will copy all the required packages into a vendor directory in your project.\nAnother tool that can help with this is pyinstaller. pyinstaller is a tool that can be used to package Python code into a standalone executable. 
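(Assuming a standard setup, pyinstaller is installed with pip install pyinstaller and pointed at the entry-point script from the command line; by default the bundled output ends up in a dist folder in the working directory.)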
It can also be used to package a Python script as a single executable file, which can be useful for deployment in environments like Lambda where you have limited control over the environment.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75192128,"CreationDate":"2023-01-21 08:51:50","Q_Score":1,"ViewCount":42,"Question":"I am aware that there are many questions regarding Django and virtual environments, but I cannot wrap my head around the use of virtual environments with respect to deploying my Django app (locally) via uwsgi\/nginx.\nMy setup includes a virtual environment (with Django and uwsgi), my Django app, nginx and PostgreSQL. The app was created before the virtual environment, and I applied only a single change to manage.py:\n#!\/Users\/snafu\/virtualdjango\/bin\/python3\n\nWhen I start up the uwsgi located in the virtual environment (with the appropriate .ini file), everything works right away, but I wonder why. I did not need to fiddle around with the $PYTHONPATH, or append the site packages directory to the system path in manage.py, or activate the virtual environment at any point (apart from the initial installation of packages), although the boilerplate comment in manage.py explicitly mentions an inactive virtual environment as a possible reason for an import error.","Title":"Django deployment and virtual environment","Tags":"python,django,virtualenv","AnswerCount":1,"A_Id":75192649,"Answer":"Activating a virtual environment does nothing but prepend the virtual environment's bin\/ to the $PATH thus making python and pip without explicit paths running from the virtual environment. Everything else related to virtual environments is implemented inside Python \u2014 it automatically changes sys.path and other paths (sys.prefix, sys.exec_prefix, etc).\nThis means that when you run python with an absolute path from a virtual environment Python automatically activates the virtual environment for this particular Python session. So you don't need to activate the virtual environment explicitly.\nThere is a minor warning sign on the road though: to run any Python script from a non-activated virtual environment you must set the shebang for all scripts to point to the virtual environment or use sys.executable. Do not use explicit python because that could be a different Python from the $PATH.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75192505,"CreationDate":"2023-01-21 10:02:23","Q_Score":0,"ViewCount":56,"Question":"I am taking intro to ML on Coursera offered by Duke, which I recommend if you are interested in ML. The instructors of this course explained that \"We typically include nonlinearities between layers of a neural network.There's a number of reasons to do so.For one, without anything nonlinear between them, successive linear transforms (fully connected layers) collapse into a single linear transform, which means the model isn't any more expressive than a single layer. On the other hand, intermediate nonlinearities prevent this collapse, allowing neural networks to approximate more complex functions.\" I am curious that, if I apply ReLU, aren't we losing information since ReLU is transforming every negative value to 0? Then how is this transformation more expressive than that without ReLU?\nIn Multilayer Perceptron, I tried to run MLP on MNIST dataset without a ReLU transformation, and it seems that the performance didn't change much (92% with ReLU and 90% without ReLU). 
But still, I am curious why this tranformation gives us more information rather than lose information.","Title":"Why ReLU function after every layer in CNN?","Tags":"python,machine-learning,pytorch,activation-function","AnswerCount":2,"A_Id":75192607,"Answer":"Neural networks are inspired by the structure of brain. Neurons in the brain transmit information between different areas of the brain by using electrical impulses and chemical signals. Some signals are strong and some are not. Neurons with weak signals are not activated.\nNeural networks work in the same fashion. Some input features have weak and some have strong signals. These depend on the features. If they are weak, the related neurons aren't activated and don't transmit the information forward. We know that some features or inputs aren't crucial players in contributing to the label. For the same reason, we don't bother with feature engineering in neural networks. The model takes care of it. Thus, activation functions help here and tell the model which neurons and how much information they should transmit.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75192505,"CreationDate":"2023-01-21 10:02:23","Q_Score":0,"ViewCount":56,"Question":"I am taking intro to ML on Coursera offered by Duke, which I recommend if you are interested in ML. The instructors of this course explained that \"We typically include nonlinearities between layers of a neural network.There's a number of reasons to do so.For one, without anything nonlinear between them, successive linear transforms (fully connected layers) collapse into a single linear transform, which means the model isn't any more expressive than a single layer. On the other hand, intermediate nonlinearities prevent this collapse, allowing neural networks to approximate more complex functions.\" I am curious that, if I apply ReLU, aren't we losing information since ReLU is transforming every negative value to 0? Then how is this transformation more expressive than that without ReLU?\nIn Multilayer Perceptron, I tried to run MLP on MNIST dataset without a ReLU transformation, and it seems that the performance didn't change much (92% with ReLU and 90% without ReLU). But still, I am curious why this tranformation gives us more information rather than lose information.","Title":"Why ReLU function after every layer in CNN?","Tags":"python,machine-learning,pytorch,activation-function","AnswerCount":2,"A_Id":75192942,"Answer":"the first point is that without nonlinearities, such as the ReLU function, in a neural network, the network is limited to performing linear combinations of the input. In other words, the network can only learn linear relationships between the input and output. This means that the network can't approximate complex functions that are not linear, such as polynomials or non-linear equations.\nConsider a simple example where the task is to classify a 2D data point as belonging to one of two classes based on its coordinates (x, y). A linear classifier, such as a single-layer perceptron, can only draw a straight line to separate the two classes. However, if the data points are not linearly separable, a linear classifier will not be able to classify them accurately. 
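(The classic toy example is XOR-style data: put the points (0,0) and (1,1) in one class and (0,1) and (1,0) in the other, and no single straight line can separate the two classes.)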
A nonlinear classifier, such as a multi-layer perceptron with a nonlinear activation function, can draw a curved decision boundary and separate the two classes more accurately.\nReLU function increases the complexity of the neural network by introducing non-linearity, which allows the network to learn more complex representations of the data. The ReLU function is defined as f(x) = max(0, x), which sets all negative values to zero. By setting all negative values to zero, the ReLU function creates multiple linear regions in the network, which allows the network to represent more complex functions.\nFor example, suppose you have a neural network with two layers, where the first layer has a linear activation function and the second layer has a ReLU activation function. The first layer can only perform a linear transformation on the input, while the second layer can perform a non-linear transformation. By having a non-linear function in the second layer, the network can learn more complex representations of the data.\nIn the case of your experiment, it's normal that the performance did not change much when you removed the ReLU function, because the dataset and the problem you were trying to solve might not be complex enough to require a ReLU function. In other words, a linear model might be sufficient for that problem, but for more complex problems, ReLU can be a critical component to achieve good performance.\nIt's also important to note that ReLU is not the only function to introduce non-linearity and other non-linear activation functions such as sigmoid and tanh could be used as well. The choice of activation function depends on the problem and dataset you are working with.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75192898,"CreationDate":"2023-01-21 11:15:56","Q_Score":1,"ViewCount":3903,"Question":"there\nWith automatic1111 stable diffuison, I need to re-draw 100 images. Thanks to clip-interrogator, I've generated prompt text for each one of them. Next, I should to run img2img. However, I noticed that you cannot sigh the prompt for each image specifically with img2img batch tab.\nAny suggestions about the task? 
What's the best and easy way to do it?\nI tried to write a custom script about it as below:\nimport modules.scripts as scripts\nimport gradio as gr\nimport os\nimport random\nfrom PIL import Image, ImageOps\n\nfrom modules import images\nfrom modules.processing import process_images, Processed\nfrom modules.shared import opts, cmd_opts, state\n\n\ndef load_prompt_file(file):\n if file is None:\n lines = []\n else:\n lines = [x.strip() for x in file.decode('utf8', errors='ignore').split(\"\\n\")]\n return None, \"\\n\".join(lines), gr.update(lines=3)\n\ndef list_files(dirname):\n filenames = [os.path.join(dirname, x) for x in sorted(os.listdir(dirname)) if not x.startswith(\".\")]\n return [file for file in filenames if os.path.isfile(file)]\n\nclass Script(scripts.Script):\n\n def title(self):\n return \"1_img\/1_prompt img2img\"\n\n def show(self, is_img2img):\n return is_img2img\n\n def ui(self, is_img2img):\n different_seeds = gr.Checkbox(label='Use different seed for each picture', value=True, elem_id=self.elem_id(\"different_seeds\"))\n\n prompt_txt = gr.Textbox(label=\"List of prompt inputs\", lines=1, elem_id=self.elem_id(\"prompt_txt\"))\n prompt_file = gr.File(label=\"Upload prompt inputs\", type='binary', elem_id=self.elem_id(\"prompt_file\"))\n prompt_file.change(fn=load_prompt_file, inputs=[prompt_file], outputs=[prompt_file, prompt_txt, prompt_txt])\n prompt_txt.change(lambda tb: gr.update(lines=3) if (\"\\n\" in tb) else gr.update(lines=2), inputs=[prompt_txt], outputs=[prompt_txt])\n\n img_input_dir = gr.Textbox(label=\"Input directory\", elem_id=self.elem_id(\"img2img_batch_input_dir\"))\n img_output_dir = gr.Textbox(label=\"Output directory\", elem_id=self.elem_id(\"img2img_batch_output_dir\"))\n return [different_seeds, prompt_txt, img_input_dir, img_output_dir]\n\n def run(self, p, different_seeds, prompt_txt, img_input_dir, img_output_dir):\n prompt_lines = [x.strip() for x in prompt_txt.splitlines()]\n prompt_lines = [x for x in prompt_lines if len(x) > 0]\n print(prompt_lines)\n print(img_input_dir)\n print(img_output_dir)\n images_list = []\n seeds_list = []\n all_prompts = []\n all_negative_prompts = []\n save_normally = img_output_dir == ''\n p.do_not_save_grid = True\n p.do_not_save_samples = not save_normally\n\n run_per_image = p.n_iter * p.batch_size\n image_files = list_files(img_input_dir)\n image_files_num = len(image_files)\n print(f\"Will process {image_files_num} images, creating {run_per_image} new images for each.\")\n\n assert len(prompt_lines)==image_files_num\n\n state.job_count = image_files_num * run_per_image\n for n in range(image_files_num):\n i_image_file = image_files[n]\n i_img = Image.open(i_image_file)\n i_img = ImageOps.exif_transpose(i_img)\n i_prompt = prompt_lines[n]\n i_neg_prompt = \"\"\n for m in range(run_per_image):\n state.job = f\"Iteration {n + 1}image, {n + 1}\/{run_per_image} batch\"\n images_list.append(i_img)\n seeds_list.append(int(random.randrange(4294967294)))\n all_prompts.append(i_prompt)\n all_negative_prompts.append(i_neg_prompt)\n\n processed = Processed(p, images_list, seeds_list, \"\", all_prompts=all_prompts, all_negative_prompts=all_negative_prompts)\n return processed\n\nBut it shows error as:\nError completing request\nArguments: ('task(uhmsex0ftvz0mjn)', 0, '', '', [], None, None, None, None, None, None, None, 20, \n0, 4, 0, 1, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', 1, True, False, True, 'a couple of kids sitting at a table with shoes, a hyperrealistic painting, 
cg society contest winner, hyperrealism, ferrari f 4 0, trending on youtube, surprised expression, official product photo, based on a puma, color restoration, adorable!!!, 5 years old\\na couple \nof young boys sitting next to each other, computer graphics, trending on pinterest, beakers full of liquid, spitting cushions from his mouth, blond hair green eyes, orange color, transparent water, plexus, puppets, three head one body, \u2018luca\u2019, tutorial, drinks, 12, kids, face shown, supersampled, mental alchemy\\n', 'D:\\\\ad0116\\\\style-real', 'D:\\\\ad0116\\\\', '
    \\n
  • CFG Scale<\/code> should be 2 or lower.<\/li>\\n<\/ul>\\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '

    Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8<\/p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '

    Will upscale the image by the selected scale factor; use width and height sliders to set tile size<\/p>', 64, 0, 2, 1, '', 0, '', True, False, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', \n'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', '

    Deforum v0.5-webui-beta<\/p>', '

    This script is deprecated. Please use the full Deforum extension instead.
    \\nUpdate instructions:<\/p>', '

    github.com\/deforum-art\/deforum-for-automatic1111-webui\/blob\/automatic1111-webui\/README.md<\/p>', '

    discord.gg\/deforum<\/p>', False, 0, True, 384, 384, False, 2, True, True, False, False) {}\nTraceback (most recent call last):\n File \"E:\\_Ai\\stable-diffusion-webui\\modules\\call_queue.py\", line 56, in f\n res = list(func(*args, **kwargs))\n File \"E:\\_Ai\\stable-diffusion-webui\\modules\\call_queue.py\", line 37, in f\n res = func(*args, **kwargs)\n File \"E:\\_Ai\\stable-diffusion-webui\\modules\\img2img.py\", line 66, in img2img\n image = init_img.convert(\"RGB\")\nAttributeError: 'NoneType' object has no attribute 'convert'\n\nbtw, I'm using python on windows. So I used \"D:\\ad0116\\style-real\" as the image input folder style.\nANY suggestions about how to fix or deal with the problem?\nThanks for your attention and help~\nBest,\nZack","Title":"How to do batch img2img job of many img\/prompt pairs -- automatic1111 stable diffusion","Tags":"python,batch-processing,prompt,webui,stable-diffusion","AnswerCount":1,"A_Id":75277527,"Answer":"You need an inital image, as the first run through has no img to img2img, hence the can't convert None.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75193045,"CreationDate":"2023-01-21 11:42:03","Q_Score":1,"ViewCount":151,"Question":"I am making an API request and want to add the response to a JSON. Then subsequent request responses adding to the same JSON file.\nI have separated out the block of code that isn't working, adding just one API call and dealing with the request. The issue is I cannot write the JSON file with this info. When trying I get the error \"AttributeError: 'dict' object has no attribute 'append'\" I, therefore, presumed my result from the API request is a dictionary. I then tried, in about 4 ways, to convert this into a list to allow the append. Obviously, none of these methods worked.\nimport json\nimport requests\n\nfname = \"NewdataTest.json\"\n\nrequest_API = requests.get(\"https:\/\/api.themoviedb.org\/3\/movie\/745?api_key=***\")\nprint(request_API)\n# Check Reponse from API\n#print(request_API.json())\nnewData = (request_API.json())\n\n# function to add to JSON\ndef write_json(data, fname):\n with open(fname, \"w\") as f:\n json.dump(data, f, indent = 4)\n\nwith open (fname) as json_file:\n data = json.load(json_file)\n temp = data[0]\n #print(newData) \n y = newData\n temp.append(y)\n \nwrite_json(data) \n\nJSON I am trying to add data too\n[\n {\n \"adult\": false,\n \"backdrop_path\": \"\/e1cC9muSRtAHVtF5GJtKAfATYIT.jpg\",\n \"belongs_to_collection\": null,\n \"budget\": 0,\n \"genres\": [\n {\n \"id\": 10749,\n \"name\": \"Romance\"\n },\n {\n \"id\": 35,\n \"name\": \"Comedy\"\n }\n ],\n \"homepage\": \"\",\n \"id\": 1063242,\n \"imdb_id\": \"tt24640474\",\n \"original_language\": \"fr\",\n \"original_title\": \"Disconnect: The Wedding Planner\",\n \"overview\": \"After falling victim to a scam, a desperate man races the clock as he attempts to plan a luxurious destination wedding for an important investor.\",\n \"popularity\": 34.201,\n \"poster_path\": \"\/tGmCxGkVMOqig2TrbXAsE9dOVvX.jpg\",\n \"production_companies\": [],\n \"production_countries\": [\n {\n \"iso_3166_1\": \"KE\",\n \"name\": \"Kenya\"\n },\n {\n \"iso_3166_1\": \"NG\",\n \"name\": \"Nigeria\"\n }\n ],\n \"release_date\": \"2023-01-13\",\n \"revenue\": 0,\n \"runtime\": 107,\n \"spoken_languages\": [\n {\n \"english_name\": \"English\",\n \"iso_639_1\": \"en\",\n \"name\": \"English\"\n },\n {\n \"english_name\": \"Afrikaans\",\n \"iso_639_1\": \"af\",\n \"name\": \"Afrikaans\"\n }\n ],\n \"status\": 
\"Released\",\n \"tagline\": \"\",\n \"title\": \"Disconnect: The Wedding Planner\",\n \"video\": false,\n \"vote_average\": 5.8,\n \"vote_count\": 3\n }\n]\n\nExample of print(request_API.json())\n{'adult': False, 'backdrop_path': '\/paUKxrbN2ww0JeT2JtvgAuaGlPf.jpg', 'belongs_to_collection': None, 'budget': 40000000, 'genres': [{'id': 9648, 'name': 'Mystery'}, {'id': 53, 'name': 'Thriller'}, {'id': 18, 'name': 'Drama'}], 'homepage': '', 'id': 745, 'imdb_id': 'tt0167404', 'original_language': 'en', 'original_title': 'The Sixth Sense', 'overview': 'Following an unexpected tragedy, a child psychologist named Malcolm Crowe meets an nine year old boy named Cole Sear, who is hiding a dark secret.', 'popularity': 32.495, 'poster_path': '\/4AfSDjjCy6T5LA1TMz0Lh2HlpRh.jpg', 'production_companies': [{'id': 158, 'logo_path': '\/jSj8E9Q5D0Y59IVfYFeBnfYl1uB.png', 'name': 'Spyglass Entertainment', 'origin_country': 'US'}, {'id': 862, 'logo_path': '\/udTjbqPmcTbfrihMuLtLcizDEM1.png', 'name': 'The Kennedy\/Marshall Company', 'origin_country': 'US'}, {'id': 915, 'logo_path': '\/4neXXpjSJDZPBGBnfWtqysB5htV.png', 'name': 'Hollywood Pictures', 'origin_country': 'US'}, {'id': 17032, 'logo_path': None, 'name': 'Barry Mendel Productions', 'origin_country': 'US'}], 'production_countries': [{'iso_3166_1': 'US', 'name': 'United States of America'}], 'release_date': '1999-08-06', 'revenue': 672806292, 'runtime': 107, 'spoken_languages': [{'english_name': 'Latin', 'iso_639_1': 'la', 'name': 'Latin'}, {'english_name': 'Spanish', 'iso_639_1': 'es', 'name': 'Espa\u00f1ol'}, {'english_name': 'English', 'iso_639_1': 'en', 'name': 'English'}], 'status': 'Released', 'tagline': 'Not every gift is a blessing.', 'title': 'The Sixth Sense', 'video': False, 'vote_average': 7.94, 'vote_count': 10125}","Title":"Taking API response and adding it to json, AttributeError: 'dict' object has no attribute 'append'","Tags":"python,list,api,dictionary","AnswerCount":1,"A_Id":75193496,"Answer":"There are two problems with your code\n\nYour json file contains an array of objects [{...}], so data is an array of objects and data[0] is an object. What would you expect someobject.append(someotherobject) to do? You probably want to do data.append(y)\n\nYou define your def write_json(data, fname): function to take two parameters. But when calling it like write_json(data) you are only passing one parameter.\n\n\nThe second error occured only after you have fixed the previous one. Because as long as the append was throwing an error, it didn't even reach the write_json so it had no chance to throw an error there ...","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75193903,"CreationDate":"2023-01-21 14:03:59","Q_Score":1,"ViewCount":61,"Question":"The code closes after clicking the first proceed when it is supposed to continue to the next page and can click the procceed button again, it should 4 times after clicking the Proceed button again. When I tried not to compile the code with other kivy files, it runs very well and accurate but when I try to compiled it again, it closes right away after clicking the Proceed button, the carousel itself is working but also in only one slide. It's not continue going to the next page. I have try to add and change the code but it shows the same error. 
Can someone please help me.\nHere is the entire code fot python file:\nfrom kivy.clock import Clock\nfrom kivy.uix.gridlayout import GridLayout\nfrom kivymd.uix.widget import Widget\nfrom kivy.core.window import Window\nfrom kivy.utils import rgba\nfrom kivy.lang import Builder\nfrom kivymd.app import MDApp\nfrom kivy.core.text import LabelBase\nfrom kivy.uix.screenmanager import ScreenManager\nfrom kivy.uix.scrollview import ScrollView\nWindow.size = (310, 580)\\`\n\nclass Scrolling(ScrollView):\npass\n\nclass OnBoarding(MDApp):\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n \n def build(self):\n global screen_manager\n screen_manager = ScreenManager()\n screen_manager.add_widget(Builder.load_file(\"one.kv\"))\n screen_manager.add_widget(Builder.load_file(\"two.kv\"))\n screen_manager.add_widget(Builder.load_file(\"three.kv\"))\n self.root = Builder.load_file(\"main.kv\")\n self.theme_cls.primary_palette = \"Green\"\n self.theme_cls.theme_style = \"Light\"\n return screen_manager\n \n def on_start(self):\n Clock.schedule_once(self.change_screen, 5)\n \n def change_screen(self,dt):\n screen_manager.current = \"two\"\n \n def current_slide(self, index):\n for i in range(4):\n if index != i:\n self.root.ids[f\"slide{i}\"].color = rgba(131, 173, 97)\n else:\n self.root.ids[f\"slide{i}\"].color = rgba(79, 121, 47)\n \n def next(self):\n self.root.ids.carousel.load_next(mode=\"next\")\n\nif __name__ == '__main__':\nOnBoarding().run()\n\nHere is the kv file:\nMDScreen:\n name:\"main\"\n MDFloatLayout:\n md_bg_color:1, 1, 1, 1\n Image:\n source:\"background.png\"\n size_hint: .7, .7\n pos_hint: {\"center_x\": .5, \"center_y\": .65}\n MDFloatLayout:\n id: parent_widget\n md_bg_color:1, 1, 1, 1\n Carousel:\n id: carousel\n on_current_slide: app.current_slide(self.index)\n MDFloatLayout:\n Image:\n source:\"first.png\"\n size_hint: .8, .9\n pos_hint: {\"center_x\": .5, \"center_y\": .70}\n MDLabel:\n text: \"First Page\"\n font_size: \"20sp\"\n pos_hint: {\"center_y\": .45}\n color: rgba(34, 34, 34, 255)\n MDLabel:\n text: \"First Definition\"\n pos_hint: {\"center_x\": .5, \"center_y\": .37}\n size_hint_x: .85\n color: rgba(34, 34, 34, 255)\n MDFloatLayout:\n Image:\n source:\"second.png\"\n size_hint: .8, .8\n pos_hint: {\"center_x\": .5, \"center_y\": .70}\n MDLabel:\n text: \"Second Page\"\n pos_hint: {\"center_y\": .45}\n halign: \"center\"\n color: rgba(34, 34, 34, 255)\n MDLabel:\n text: \"Second Definition\"\n pos_hint: {\"center_x\": .5, \"center_y\": .37}\n size_hint_x: .85\n color: rgba(34, 34, 34, 255)\n Button:\n text: \"Proceed\"\n background_color: 0, 0, 0, 0\n font_size: \"18sp\"\n size_hint: .8, .070\n pos_hint: {\"center_x\": .5, \"center_y\": .2}\n border: 0, 32, 0, 32\n canvas.before:\n Color:\n rgb: rgba(79, 121, 47)\n RoundedRectangle:\n size: self.size\n pos: self.pos\n radius: [20]\n on_release:\n app.next()\n MDLabel:\n id: slide0\n text: \".\"\n halign: \"center\"\n font_size: \"80sp\"\n color: rgba(79, 121, 47)\n pos_hint: {\"center_x\": .40, \"center_y\": .31}\n MDLabel:\n id: slide1\n text: \".\"\n halign: \"center\"\n font_size: \"80sp\"\n color: rgba(131,173,97)\n pos_hint: {\"center_x\": .47, \"center_y\": .31}\n MDLabel:\n id: slide2\n text: \".\"\n halign: \"center\"\n font_size: \"80sp\"\n color: rgba(131,173,97)\n pos_hint: {\"center_x\": .55, \"center_y\": .31}\n\nThe code closes after clicking the first proceed when it is supposed to continue to the next page and can click the procceed button again, it should 4 times after clicking the Proceed 
button again. When I tried not to compile the code with other kivy files, it runs very well and accurate but when I try to compiled it again, it closes right away after clicking the Proceed button, the carousel itself is working but also in only one slide. It's not continue going to the next page. I have try to add and change the code but it shows the same error. Can someone please help me.","Title":"Carousel and self.root code problem. I am developing an Application using Python Kivy. My code closes after clicking the button","Tags":"python,button,kivy,carousel,kivymd","AnswerCount":1,"A_Id":75203380,"Answer":"You did not add Builder.load_file(\"main.kv\") to ScreenManager object. Replace self.root = Builder.load_file(\"main.kv\") with screen_manager.add_widget(Builder.load_file(\"main.kv\"))\n\nYou did not provided one.kv, two.kv and three.kv content\n\nReplace all occurences of self.root.ids with self.root.get_screen('main').ids\n\nYou don't have any Screen with name \"two\" specified within change_screen method which is invoked after 5 seconds from start.\n\nYou are trying to set color attribute within loop for i in range(4): for ids[slide3] ids[f\"slide{i}\"] while there is no widget with id: slide3 within your main.kv\n\n\nFix your code, and post console output if you want to get quality help.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75195106,"CreationDate":"2023-01-21 17:06:33","Q_Score":1,"ViewCount":826,"Question":"What is the best way to write a gzip archive csv in python polars?\nThis is my current implementation:\nimport polars as pl\nimport gzip\n\n# create a dataframe\ndf = pl.DataFrame({\n \"foo\": [1, 2, 3, 4, 5],\n \"bar\": [6, 7, 8, 9, 10],\n \"ham\": [\"a\", \"b\", \"c\", \"d\", \"e\"]\n})\n\n# collect dataframe to memory and write to gzip file\nfile_path = 'compressed_dataframe.gz'\nwith gzip.open(file_path, 'wb') as f:\n df.collect().write_csv(f)","Title":"Write python polars lazy_frame to csv gzip archive after collect()","Tags":"python,gzip,lazy-evaluation,python-polars","AnswerCount":2,"A_Id":76472077,"Answer":"Right now you're applying collect() to a pl.DataFrame, which does not have a collect(). If you're working with a pl.LazyFrame you can apply collect() to it.\nThe implementation you're using to write to a gzip file is the standard way of doing it in python, and works well!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75197711,"CreationDate":"2023-01-22 01:34:48","Q_Score":1,"ViewCount":65,"Question":"I tried this code from a Python tutorial:\ndef func1(a):\n return a ** a\ndef func2(a):\n return func1(a) * func1(a)\nprint(func2(2))\n\nIt displays 16, and I am trying to understand how this works.\nDoes func1 get called when the return statement starts to run?\nCan return call functions?\nI tried to understand how it works by adding a print:\ndef func1(a):\n print(\"Hello World\")\n return a ** a\ndef func2(a):\n return func1(a) * func1(a)\nprint(func2(2))\n\nI see that the Hello World message is printed two times, so I assume that func1 is getting called twice. How exactly does this work? Is the * in this line related to how it works?","Title":"Can return call functions?","Tags":"python","AnswerCount":2,"A_Id":75197897,"Answer":"return is executed last. 
The expression will be evaluated first.\nfunc1(2) returns 2 ** 2, which is 4.\nIt is getting called twice, which is why Hello World is being printed twice.\n4 * 4 is 16.\nNow that the expression is evaluated, func2 will now return 16.\nreturn is a keyword that stops the function(and returns the value you put after), so everything else in that line will have to be executed first.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75199083,"CreationDate":"2023-01-22 08:31:47","Q_Score":0,"ViewCount":30,"Question":"I am using rain as an intrumental variable, so I need to pull hisotry probablity of rain given location and time to each row.\nPrefer python since I clean most of my data on python.\n\n\n\n\nCounty\nState\nDate\nRain\n\n\n\n\nFulton\nSC\n2019-1-1\n?\n\n\nChatham\nGA\n2017-9-3\n?\n\n\n\n\nProbably looking for some python library and code to find the date and create the column.\nAny help would be appreciated! Thank you!","Title":"How to return historical probability of rain given location and date","Tags":"python,r,weather,meteostat","AnswerCount":1,"A_Id":75223296,"Answer":"The obvious answer is a probability in historical \/ observed datasets does not exist. The probability is derived from probabilistic weather forecasts. When the weather went through, you can say if there was rain or not, means 1 or 0.\nBut from a data science perspective there can be alternative to that. E.g. you can build up a similarity tree or an Analog Ensemble to determine probability for rain on certain weather patterns.\nBut you need more information about the weather and weather regime.\nAt the your information will be independent from the date. The probability information will be a function on the day of year e.g.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75200396,"CreationDate":"2023-01-22 12:27:54","Q_Score":0,"ViewCount":20,"Question":"In Kmeans clustering we can define number of cluster. But is it possible to define that cluster_1 will contain 20% data, cluster_2 will have 30% and cluster_3 will have rest of the data points?\nI try to do it by python but couldn't.","Title":"How to manipulate cluster data point of Kmeans clustering algorithm","Tags":"python,cluster-analysis,analysis,abc,pareto-chart","AnswerCount":2,"A_Id":75200603,"Answer":"Using K-means clustering, as you said we specify the number of clusters but it's not actually possible to specify the percentage of data points. I would recommend using Fuzzy-C if you want to specify a exact percentage of data points alloted for each cluster","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75201283,"CreationDate":"2023-01-22 14:36:00","Q_Score":2,"ViewCount":62,"Question":"Consider this very simple code snippet:\nimport tkinter as tk\n\nclass GUI:\n def __init__(self):\n self.top_level_window = tk.Tk()\n \n\nGUI()\nGUI().top_level_window.mainloop()\n\nIt creates two top-level windows on my screen. Why?\nI thought the first instance would be immediately garbage collected, so that I would only get one window.\nI have also tried slightly modified version, which I was sure for would create two separate objects, and thus only one window:\na=GUI()\nb=GUI()\nb.top_level_window.mainloop()\n\nbut I was wrong. 
And I can't think of a reason.\nAny help?","Title":"Why does this simple Tkinter code create two top-level windows?","Tags":"python,class,tkinter,instance","AnswerCount":4,"A_Id":75201418,"Answer":"I think that with tkinter, the framework itself keeps hold of instances of GUI objects that you create. This defeats any garbage collection that you might assume is going to happen.\nYou would need to call .destroy() on any elements you want tkinter to forget.","Users Score":1,"is_accepted":false,"Score":0.049958375,"Available Count":2},{"Q_Id":75201283,"CreationDate":"2023-01-22 14:36:00","Q_Score":2,"ViewCount":62,"Question":"Consider this very simple code snippet:\nimport tkinter as tk\n\nclass GUI:\n def __init__(self):\n self.top_level_window = tk.Tk()\n \n\nGUI()\nGUI().top_level_window.mainloop()\n\nIt creates two top-level windows on my screen. Why?\nI thought the first instance would be immediately garbage collected, so that I would only get one window.\nI have also tried slightly modified version, which I was sure for would create two separate objects, and thus only one window:\na=GUI()\nb=GUI()\nb.top_level_window.mainloop()\n\nbut I was wrong. And I can't think of a reason.\nAny help?","Title":"Why does this simple Tkinter code create two top-level windows?","Tags":"python,class,tkinter,instance","AnswerCount":4,"A_Id":75202924,"Answer":"I thought the first instance would be immediately garbage collected\n\nThe python object that is the instance of GUI is garbage-collected. However, tkinter creates objects inside of an embedded tcl interpreter, and the tcl interpreter doesn't know anything about python objects. So, while the object is removed from python, the widgets still exist inside the tcl interpreter.\nPut another way, garbage collect of a python object doesn't guarantee that the underlying tcl object is deleted. 
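A minimal sketch of the destroy() fix recommended in the answers to the tkinter question, reusing the GUI class from that question: tear the first root window down explicitly before starting the second.

import tkinter as tk

class GUI:
    def __init__(self):
        self.top_level_window = tk.Tk()

first = GUI()
first.top_level_window.destroy()    # explicitly remove the first Tk root; garbage collection alone won't

second = GUI()
second.top_level_window.mainloop()  # only one window appears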
If you want the first window to be destroyed, you must call destroy() on the instance.","Users Score":1,"is_accepted":false,"Score":0.049958375,"Available Count":2},{"Q_Id":75202590,"CreationDate":"2023-01-22 17:44:41","Q_Score":1,"ViewCount":55,"Question":"I'm trying to create a new column on a dataset (csv file) that combines contents of pre-existing columns .\nimport numpy as np\nimport pandas as pd\n\ndf = pd.read_csv('books.csv', encoding='unicode_escape', error_bad_lines=False)\n\n#List of columns to keep\ncolumns =['title', 'authors', 'publisher']\n\n#Function to combine the columns\/features\ndef combine_features(data):\n features = []\n for i in range(0, data.shape[0]):\n features.append( data['title'][i] +' '+data['authors'][i]+' '+data['publisher'][i])\n return features\n\n#Column to store the combined features\ndf['combined_features'] =combine_features(df)\n\n#Show data\ndf\n\nI was expecting to find that the new column would be created with the title, author and publisher all in one, however I received the error \"ValueError: Length of values (1) does not match length of index (11123)\".\nTo fix this tried to use the command \"df.reset_index(inplace=True,drop=True)\" which was a suggested solution but that did not work and I am still receiving the same error.\nBelow is the whole error message:\nValueError Traceback (most recent call last)\n in \n 1 #Create a column to store the combined features\n----> 2 df['combined_features'] =combine_features(df)\n 3 df\n\n3 frames\n\/usr\/local\/lib\/python3.8\/dist-packages\/pandas\/core\/frame.py in __setitem__(self, key, value)\n 3610 else:\n 3611 # set column\n-> 3612 self._set_item(key, value)\n 3613 \n 3614 def _setitem_slice(self, key: slice, value):\n\n\/usr\/local\/lib\/python3.8\/dist-packages\/pandas\/core\/frame.py in _set_item(self, key, value)\n 3782 ensure homogeneity.\n 3783 \"\"\"\n-> 3784 value = self._sanitize_column(value)\n 3785 \n 3786 if (\n\n\/usr\/local\/lib\/python3.8\/dist-packages\/pandas\/core\/frame.py in _sanitize_column(self, value)\n 4507 \n 4508 if is_list_like(value):\n-> 4509 com.require_length_match(value, self.index)\n 4510 return sanitize_array(value, self.index, copy=True, allow_2d=True)\n 4511 \n\n\/usr\/local\/lib\/python3.8\/dist-packages\/pandas\/core\/common.py in require_length_match(data, index)\n 529 \"\"\"\n 530 if len(data) != len(index):\n--> 531 raise ValueError(\n 532 \"Length of values \"\n 533 f\"({len(data)}) \"\n\nValueError: Length of values (1) does not match length of index (11123)","Title":"ValueError: Length of values (1) does not match length of index (11123)","Tags":"python,csv,valueerror","AnswerCount":1,"A_Id":75202658,"Answer":"The reason is the return statement in the function should not be inside the for loop. Because it is, it returns already after 1 iteration, so the length of values is one, rather than 11123. Unindent the return once.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75204825,"CreationDate":"2023-01-23 00:19:16","Q_Score":1,"ViewCount":119,"Question":"I am trying to make a minimal Moderngl example of about <150 lines so I can better understand how it works. The issue here is that when I try to render this texture (it is an image of a mushroom), it instead draws a fully white screen. What am I doing wrong?\nBelow is all of the code for the project (<200 lines). 
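Returning to the pandas ValueError question above: a short sketch of the corrected helper, with the return statement dedented so it runs after the loop instead of on the first iteration.

def combine_features(data):
    features = []
    for i in range(0, data.shape[0]):
        features.append(data['title'][i] + ' ' + data['authors'][i] + ' ' + data['publisher'][i])
    return features   # outside the loop: one combined string per row, so the length matches the index

df['combined_features'] = combine_features(df)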
You can easily copy it and run it, however you do need an image to load into the texture\nmain.py\nimport moderngl as mg\nimport pygame\nimport sys\nimport texture\n\nclass Context:\n\n def __init__(self, size):\n self.window_size = size\n pygame.display.set_mode(size, pygame.OPENGL | pygame.DOUBLEBUF)\n self.ctx = mg.create_context()\n self.clock = pygame.time.Clock()\n m = pygame.image.load(\"images\/Mushroom1.png\").convert()\n self.tex = texture.Texture(self.ctx, m)\n\n def get_events(self):\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n self.tex.destroy()\n pygame.quit()\n sys.exit()\n\n def render(self):\n self.ctx.clear(color=(1.0,0.0,0.0))\n self.tex.render()\n pygame.display.flip()\n\n def run(self):\n while True:\n self.get_events()\n self.render()\n self.clock.tick(60)\n\nc = Context((800,600))\nc.run()\n\ntexture.py\nimport moderngl as mg\nimport numpy as np\nimport pygame\n\nclass Texture:\n\n def __init__(self, ctx, img):\n self.ctx = ctx\n self.img = img\n self.vbo = self.create_vbo()\n self.texture = self.get_texture()\n self.shader = self.get_shader_program(\"default\")\n self.vao = self.get_vao()\n\n def render(self):\n self.shader[\"Texture_0\"] = 0\n self.texture.use(location=0)\n self.vao.render()\n\n def destroy(self):\n self.vbo.release()\n self.texture.release()\n self.shader.release()\n self.vao.release()\n\n def get_texture(self):\n texture = self.ctx.texture((self.img.get_width(), self.img.get_height()), 4, pygame.image.tostring(self.img, \"RGBA\"))\n return texture\n\n def get_vao(self):\n vao = self.ctx.vertex_array(self.shader, [(self.vbo, \"2f 3f\", \"in_coords\", \"in_position\")])\n return vao\n\n def create_vbo(self):\n vertices = [(-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), (1.0, 1.0, 0.0), (-1.0, 1.0, 0.0)]\n tex_coords = [(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8)]\n indices = [(0, 1, 2), (0, 2, 3)]\n\n vertices = self.get_data(vertices, indices)\n tex_coords = self.get_data(tex_coords, indices)\n\n vertices = np.hstack([tex_coords, vertices])\n\n vbo = self.ctx.buffer(vertices)\n return vbo\n\n @staticmethod\n def get_data(vertices, indices):\n data = [vertices[ind] for t in indices for ind in t]\n return np.array(data, dtype=\"f4\")\n\n def get_shader_program(self, shader_name):\n with open(f\"shaders\/{shader_name}.vert\") as vert:\n v_shader = vert.read()\n\n with open(f\"shaders\/{shader_name}.frag\") as frag:\n f_shader = frag.read()\n\n program = self.ctx.program(vertex_shader = v_shader, fragment_shader = f_shader)\n return program\n\nvertex shader source:\n#version 330\n\nlayout (location = 0) in vec2 in_coords;\nlayout (location = 1) in vec3 in_position;\n\nout vec2 uv_0;\n\nvoid main() {\n vec2 uv_0 = in_coords;\n gl_Position = vec4(in_position, 1.0);\n}\n\nand the fragment shader source:\n#version 330\n\nuniform sampler2D Texture_0;\n\nlayout (location = 0) out vec4 fragColor;\n\nin vec2 uv_0;\n\nvoid main() {\n vec3 color = vec3(texture(Texture_0, uv_0));\n fragColor = vec4(color, 1.0);\n}","Title":"Screen appears completely white when using ModernGL and Pygame to render a texture","Tags":"python,python-moderngl","AnswerCount":1,"A_Id":75357052,"Answer":"In your vertex shader, you are redeclaring uv_0\nvec2 uv_0 = in_coords;\ninstead of assigning the value to the output value defined above.\nChange vec2 uv_0 = in_coords; to uv_0 = in_coords;","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75205890,"CreationDate":"2023-01-23 05:23:20","Q_Score":1,"ViewCount":51,"Question":"def permute(self, 
nums: List[int]) -> List[List[int]]:\n res = []\n perm = []\n\n def dfs(i):\n if len(nums) == 0:\n res.append(perm[:])\n return\n \n for j in range(len(nums)):\n perm.append(nums[j])\n n = nums.pop(j)\n dfs(i+1)\n perm.pop()\n nums.insert(j,n)\n \n dfs(0)\n return res\n\nThis function gives the correct result for some reason even though it shouldn't as nums[j] should be out of bounds. Can someone explain why this works?","Title":"Why does the following code for finding out all possible permutations of an integer array work?","Tags":"python,permutation,backtracking","AnswerCount":1,"A_Id":75206656,"Answer":"To get an index out of bounds error, you would have to call nums[j], with j >= len(nums). Here's why that never happens in your code.\nYou use j to identify a location within nums three times:\n\nOn first entering the for loop, you call nums[j] to append to perm. So long as j is a value between 0 and len(nums)-1 (because range(n) = (0, 1, 2, ..., n-2, n-1)), then you can't be out of range. The only way you'd end up being out of range is if the next time you called nums[j], the length of nums had changed. Let's see if that happens.\nYour next opportunity to get an index out of bounds error is on the next line - n = nums.pop(j). Well there can be no problem here - you were able to grab the value of nums at j on the previous line, so why shouldn't you be able to pop that value out of nums now? But now you've mutated nums, and if you call nums[j] again, AND if j is greater than len(nums) - 2, you'd be in trouble - let's see if that happens.\nNext, you call dfs(i+1), which opens up a new invocation of dfs, and an entirely new for loop, with a new j, that restarts at 0, and a new nums that's one shorter than it was in the previous function call. If nums is empty, you never enter the for loop, and if nums isn't empty, you enter the for loop with j = 0 - so no risk of being out of bounds here.\nThe above recursive process continues, restarting the for loop with j = 0 and with nums truncated by one, until you get to the bottom of the recursion hole and nums == 0. At this point, you climb back up out of the recursion hole and carry on where you left off - after the last dfs(i+1) call. When you jumped into the recursion hole, you'd just mutated nums, and were in imminent danger of an index out of bounds error...\nThe next line (perm.pop()) kinda doesn't count in the nums\/j saga we're on - so we'll skip past it.\nYou then call nums.insert(j, n). nums.insert(j, n) places the value assigned to n at the index assigned to j - 1. e.g: taking a list lst with 4 items in it, calling lst.insert(4, ) would take and place it at the end of nums - making your nums.insert(j, n) equivalent to saying nums.append(n). From the time you popped n out of nums, to now, you've basically just shuffled it from where it was to the end of nums. You WERE sailing perilously close to an index out of bounds error, but you've now undone the mutation you had previously inflicted on nums, so as you exit the for loop, nums is exactly as long as it was when you entered it. Going back to the start of the for loop and incrementing j before reentering the for loop poses no risk to you of an index out of bounds error.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75206293,"CreationDate":"2023-01-23 06:36:52","Q_Score":1,"ViewCount":74,"Question":"I have a very simple Python code for bind or connect to a port. it works without any error on Ubuntu and CentOs but I have an error on Windows 10. 
I turned off the firewall and antivirus but it didn't help.\nmy code:\nimport socket\n\nport = 9999\n\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\nhost = socket.gethostname()\n\n\ntry:\n s.connect((host,port))\nexcept:\n s.bind((host, port))\n s.listen(1)\n print(\"I'm a server\")\n clientsocket, address = s.accept()\nelse:\n print(\"I'm a client\")\n\nerror on windows 10:\nTraceback (most recent call last):\n File \"win.py\", line 11, in \n s.connect((host,port))\nConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it\n\nDuring the handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"win.py\", line 13, in \n s.bind((host, port))\nOSError: [WinError 10022] An invalid argument was supplied\n\nEdit:\nI found my problem is in Try... Except part, if I put this code in two files my problem will solve. But Why? try except don't work correctly in Windows?","Title":"WinError 10061 and WinError 10022 in socket programming only on Windows","Tags":"python,linux,sockets,window,try-except","AnswerCount":1,"A_Id":75213279,"Answer":"The connect() fails because there is no server socket listening at (host,port).\nThe bind() fails because you can't bind to a hostname, only to an IP address. Unless you want to listen on just a specific network interface, you should bind to 0.0.0.0 to listen on all interfaces.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75209261,"CreationDate":"2023-01-23 12:06:43","Q_Score":1,"ViewCount":117,"Question":"Is there a way to install a python package without rebuilding the docker image? I have tried in this way:\ndocker compose run --rm web sh -c \"pip install requests\"\n\nbut if I list the packages using\ndocker-compose run --rm web sh -c \"pip freeze\"\n\n\nI don't get the new one.\nIt looks like that is installed in the container but not in the image.\nMy question is what is the best way to install a new python package after building the docker image?\nThanks in advance","Title":"How to install a Python package inside a docker image?","Tags":"python,docker","AnswerCount":2,"A_Id":75209342,"Answer":"I don't know too much about docker but if you execute your commands, the docker engine will spin up a new container based on your web image and runs the pip install requests command. After it executed the command, the container has nothing more to do and will stop. Since you specified the --rm flag, the docker engine will remove your new container after it has stopped such that the whole container and thus also the installed packages are removed.\nAFAIK you cannot add packages without rebuilding the image.\nI know that you can run the command without removing the container and that you can also make images from your containers. (Those images should include the packages then).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75209261,"CreationDate":"2023-01-23 12:06:43","Q_Score":1,"ViewCount":117,"Question":"Is there a way to install a python package without rebuilding the docker image? 
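For the socket question above, a hedged sketch of the answer's suggestion: bind to 0.0.0.0 (all interfaces) rather than a hostname. Creating a fresh socket for the listening role, instead of reusing the one whose connect() just failed, goes slightly beyond the answer but keeps the server path independent of the failed client attempt.

import socket

PORT = 9999

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    probe.connect(("127.0.0.1", PORT))
    print("I'm a client")
except ConnectionRefusedError:
    probe.close()
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", PORT))   # all interfaces; use "127.0.0.1" to stay local only
    server.listen(1)
    print("I'm a server")
    clientsocket, address = server.accept()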
I have tried in this way:\ndocker compose run --rm web sh -c \"pip install requests\"\n\nbut if I list the packages using\ndocker-compose run --rm web sh -c \"pip freeze\"\n\n\nI don't get the new one.\nIt looks like that is installed in the container but not in the image.\nMy question is what is the best way to install a new python package after building the docker image?\nThanks in advance","Title":"How to install a Python package inside a docker image?","Tags":"python,docker","AnswerCount":2,"A_Id":75210781,"Answer":"docker-compose is used to run multi-container applications with Docker.\nIt seems that in your case you use Docker image with python installed as entrypoint to do some further work.\nAfter building docker image you can run it:\n$ docker run -dit -name my_container_name image_name\nAnd then run:\n$ docker exec -ti my_container_name bash or\n$ docker exec -ti my_container_name sh\nin case there is no bash in the docker image.\nThis will give you shell access to the container you just created. Then if there is pip installed inside your container you can install whatever python package you need like you would do on your OS.\nTake note that everything you install is only persisted inside the container you created. If you delete this container, all the things you installed manually will be gone.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":2},{"Q_Id":75211830,"CreationDate":"2023-01-23 15:49:32","Q_Score":1,"ViewCount":48,"Question":"so the problem is that pypi.org hase been filtered by iranian government(yes , i know it's ridiculous!). i tried to install some python modules from Github downloaded files:\npip install moduleName\nbut every module has it's own dependencies and try to connect to pipy.org to reach them. then there will be an error during installation.\nis there any solution?\nyour help will be much appreciated.","Title":"how to install python modules where pipy.org is is not accessible from iran?","Tags":"python,installation,pip,module,pypi","AnswerCount":2,"A_Id":75211968,"Answer":"Try and use a VPN this will bypass any block on certain sites. Just google VPN for the top results.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75211830,"CreationDate":"2023-01-23 15:49:32","Q_Score":1,"ViewCount":48,"Question":"so the problem is that pypi.org hase been filtered by iranian government(yes , i know it's ridiculous!). i tried to install some python modules from Github downloaded files:\npip install moduleName\nbut every module has it's own dependencies and try to connect to pipy.org to reach them. then there will be an error during installation.\nis there any solution?\nyour help will be much appreciated.","Title":"how to install python modules where pipy.org is is not accessible from iran?","Tags":"python,installation,pip,module,pypi","AnswerCount":2,"A_Id":75212211,"Answer":"I live in a country that also blocks services, mostly streaming platforms. In theory, the way behind this is the same whether to watch Netflix or download python and its dependencies. That is you'll probably need to use a VPN.\nAs said by d-dutchveiws, there's tons of videos and resources on how to set up a VPN. If you do end up using a paid VPN service I would just like to add that I lived in the UAE for a while and I found that some VPN services were blocked by the country themselves. I know NordVPN did not work\/was blocked by the UAE so I ended up finding expressVPN and that worked. 
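Back to the Docker question above: a hedged sketch of the second answer's keep-the-container-running approach, adapted to docker compose since that is what the question already uses. Anything installed this way persists only in that running container, not in the image.

docker compose up -d web                       # start the service and keep the container running
docker compose exec web pip install requests   # install inside the running container
docker compose exec web pip freeze             # the new package is now listed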
In other words, I'd be sure not to commit to any payment plan\/only use free trials because even the VPN services can be blocked. Hope I helped a bit!","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75212158,"CreationDate":"2023-01-23 16:11:41","Q_Score":1,"ViewCount":66,"Question":"I am importing several libraries in a .py file using VScode.\nsomehow it always orders the imports when I am saving the file.\nIt is important for me that a certain order is maintained, for example:\nimport os\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\" \n\nshould be before:\nimport tensorflow.compat.v1 as tf\n\nwhich in turn should be before\nimport keras.backend as K\nimport keras\n\nbut even if I press option+shift+o, this order is lost after saving.\nHow can I force the order I need in this case, while generally keep VScode setting the order alphabetically?","Title":"VScode order of python imports: how to force tensorflow to import before keras?","Tags":"python,tensorflow,visual-studio-code,keras","AnswerCount":1,"A_Id":75212599,"Answer":"You need to disable Format on Save in VSC's settings.\n\nClick Files > Preferences > Settings\nType format in the search box\nDisable Format on Save","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75216111,"CreationDate":"2023-01-23 23:29:59","Q_Score":1,"ViewCount":32,"Question":"I downloaded pygame on the website. And went to my command prompt to pip install pygame. But it gave me this error message. metadata-generation-failed. How do I uninstall pygame and install a new version? and my version of python installed is 3.11.1\ni tried to uninstall and reinstall my version of python.","Title":"I am trying to install pygame","Tags":"python,pygame","AnswerCount":1,"A_Id":75216149,"Answer":"To uninstall Pygame, you can use the command pip uninstall pygame in your command prompt. To install a new version of Pygame, you can use the command pip install pygame==x.x.x where x.x.x is the version number you want to install.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75216312,"CreationDate":"2023-01-24 00:11:55","Q_Score":0,"ViewCount":39,"Question":"Working on a data transfer program, to move data from an oracle database to another\napplication that I cant see or change. I have to create several text files described below and drop them off on sftp site.\nI am converting from a 20+ year old SQR report. (yes SQR) :(\nI have to create text files that have a format as such an_alpa_code:2343,34533,4442,333335,.....can be thousands or numbers separated by comma.\nThe file may have only 1 line, but the file might be 48k in size.\nThere is no choice on the file format, it is required this way.\nTried using Oracle UTL_FILE, but that cannot deal with a line over 32k in length, so looking for an alterative. Python is a language my company has approved for use, so I am hoping it could do this","Title":"In Python, how can i write multiple times to a file and keep everything on 1 long line line? 
(40k plus characters)","Tags":"python,oracle,text","AnswerCount":2,"A_Id":75224268,"Answer":"This gave me one long line\nfile_obj = open(\"writing.txt\", \"w\")\nfor i in range(0,10000):\nfile_obj.write(\"mystuff\"+str(i)+\",\")\n# file_obj.write('\\n')\nfile_obj.close()","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75217175,"CreationDate":"2023-01-24 03:37:48","Q_Score":1,"ViewCount":1793,"Question":"i am running into the following error with the code provided here...\ni have tried changing timeout, delay, headers, etc. - nothing has solved it.\ncode below - please help with a fix if you know of anything i might be able to try...\nalso to note - the script should check t.me as well as fragment - not either or... not sure if i've structured it correctly here\n\n for word in words:\n if progress_bar == True:\n bar.next() # next % in progress bar\n\n if isinstance(words_array, str):\n word = word.replace(\"\\n\", \"\") # remove line breaks\n elif isinstance(words_array, list):\n word = word.replace(\" \", \"\") # remove spaces\n\n # Checking symbols in word\n symbols = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l','m', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '_']\n word_symbols = list(word)\n valid_symbols = 0\n\n for symbol in word_symbols:\n if symbol in symbols:\n valid_symbols += 1\n # \/. # Checking symbols in word\n if len(word_symbols) == valid_symbols:\n # Set a retry limit and delay\n retry_limit = 25 # increased from 5\n delay = 5.0 # increased from 0.5\n\n # Create a for loop that will run until the retry limit is reached\n try:\n # Make the request\n time.sleep(0.01)\n telegram_web = requests.get(\n f'https:\/\/t.me\/{word.lower()}', headers=headers, timeout=60)\n soup = BeautifulSoup(telegram_web.text, 'html.parser')\n elements_list = soup.find_all(\n \"div\", {\"class\": \"tgme_page_extra\"})\n\n except ConnectionResetError:\n # Delay for a certain amount of time before making the request again\n # so that the server isn't overloaded\n time.sleep(delay)\n delay *= 2\n\n else:\n # Execute code if there is no error\n if len(elements_list) == 0:\n # Check word for sale\/sold in Fragment.com\n time.sleep(0.01)\n fragment = requests.get(\n f'https:\/\/fragment.com\/username\/{word.lower()}', headers=headers, timeout=30)\n soup = BeautifulSoup(fragment.text, 'html.parser')\n avail_status = soup.find_all(\n \"tr\", {\"class\": \"tm-section-header-status tm-status-avail\"})\n sold_status = soup.find_all(\n \"tr\", {\"class\": \"tm-section-header-status tm-status-unavail\"})\n # \/. 
Check word for sale\/sold in Fragment.com\n if len(avail_status) == 0 and len(sold_status) == 0:\n checked_words.append(word)\n continue\n\n if progress_bar == True:\n bar.finish() # remove progress bar","Title":"Python Request Aborted - ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))","Tags":"python,exception,connection,telegram","AnswerCount":1,"A_Id":75266063,"Answer":"I faced this problem when connect python with firebase, using spyder IDE, I was stuck in 4 hours =)))\nSolution is.....I create new python file, copy old code paste in this new file and run again then it will work for me,\nHope it will help you too (I saw your topic while I was struggling with my problem (same error sentence))\nPredict problem: I think maybe it because the library set oauth connection error for this python file","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75217316,"CreationDate":"2023-01-24 04:10:28","Q_Score":0,"ViewCount":37,"Question":"I named my virtual environment .venv.\nI went into .venv\/Scripts\/ and there were activate.bat and Activate.ps1. My guess is that I need to run activate.bat or Activate.ps1 to access the virtual environment, but I don't know the difference between activate.bat and Activate.ps1.\nAnd I want more. what is difference between activate.???and deactivate.bat?\nAlso, why isn't deactivate.ps1 there?\nAnd I am using powershell.\nLooking on the internet, it says to use activate.bat, but they seem to use cmd. I want powershell.","Title":"python venv \"activate.bat vs activate.ps1 vs deactivate.bat\"","Tags":"python-venv","AnswerCount":1,"A_Id":75217365,"Answer":"ps1 is for powershell and bat is for cmd.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75218078,"CreationDate":"2023-01-24 06:26:07","Q_Score":0,"ViewCount":31,"Question":"I want to alter my model but before doing so I want to delete all the records from my database, is there any dajngo ORM query for doing that cuz I don't want to do it manually.\nThanks.\nI tried to alter my model but when I migrated the changes an error occured.\nit was a long error but the last line was this.\nFile \"C:\\\\Users\\\\ALI SHANAWER.virtualenvs\\\\PiikFM-App-Backend-O_dKS6jY\\\\Lib\\\\site-packages\\\\MySQLdb\\\\connections.py\", line 254, in query \\_mysql.connection.query(self, query) django.db.utils.OperationalError: (3140, 'Invalid JSON text: \"Invalid value.\" at position 0 in value for column '#sql-45_2d01.qbo_class'.')\nany one knows what this is?","Title":"Is there any way I can delete all the rows from database before altering the model in django?","Tags":"python,django,django-models,orm","AnswerCount":2,"A_Id":75218297,"Answer":"You can simple delete db.sqlite ie, database file \nthen run python manage.py makemigration and then python manage.py migrate. 
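For the Django question above, if the goal is only to empty the tables rather than delete the whole database file, a hedged ORM sketch (app and model names are hypothetical):

# inside:  python manage.py shell
from myapp.models import MyModel    # hypothetical app/model

MyModel.objects.all().delete()      # removes every row for this one model

# or, to truncate every table managed by Django:
#   python manage.py flush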
\n\nI hope this is what you were looking for","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75218078,"CreationDate":"2023-01-24 06:26:07","Q_Score":0,"ViewCount":31,"Question":"I want to alter my model but before doing so I want to delete all the records from my database, is there any dajngo ORM query for doing that cuz I don't want to do it manually.\nThanks.\nI tried to alter my model but when I migrated the changes an error occured.\nit was a long error but the last line was this.\nFile \"C:\\\\Users\\\\ALI SHANAWER.virtualenvs\\\\PiikFM-App-Backend-O_dKS6jY\\\\Lib\\\\site-packages\\\\MySQLdb\\\\connections.py\", line 254, in query \\_mysql.connection.query(self, query) django.db.utils.OperationalError: (3140, 'Invalid JSON text: \"Invalid value.\" at position 0 in value for column '#sql-45_2d01.qbo_class'.')\nany one knows what this is?","Title":"Is there any way I can delete all the rows from database before altering the model in django?","Tags":"python,django,django-models,orm","AnswerCount":2,"A_Id":75220062,"Answer":"If you want to truncate only a single table then use {ModelName}.objects.all().delete() otherwise use can use \"python manage.py flush\" for truncate database.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75218218,"CreationDate":"2023-01-24 06:46:09","Q_Score":0,"ViewCount":67,"Question":"I have a list of expressions (+ - *):\n[\"2 + 3\", \"5 - 1\", \"3 * 4\", ...]\nand I need to convert every expresion to expression = answer like this 2 + 3 = 5.\nI tried just doing print(listt[0]) but it outputs 2 + 3, not 5. So how do i get the answer of this expression? I know that there is a long way by doing .split() with every expression, but is there any other faster way of doing this?\nUPD: I need to use only built-in functions","Title":"How to solve expressions from a list?","Tags":"python","AnswerCount":3,"A_Id":75219112,"Answer":"Use eval() function. The eval() function evaluates the specified expression, if the expression is a legal Python statement, it will be executed.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75218609,"CreationDate":"2023-01-24 07:36:52","Q_Score":2,"ViewCount":76,"Question":"I am trying to implement a neural network. I am using CNN model for classifying. 
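For the expression-list question above, a small sketch of the eval() approach from the answer; eval is only reasonable here because the strings are known to contain simple arithmetic, not untrusted input.

exprs = ["2 + 3", "5 - 1", "3 * 4"]
results = [f"{e} = {eval(e)}" for e in exprs]
print(results)   # ['2 + 3 = 5', '5 - 1 = 4', '3 * 4 = 12']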
First I split the dataset into train and test.\nCode Snippet:\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42, shuffle=True, stratify=Y)\nthen I built a CNN model and used stratified cross-validation to fit the model.\nCode Snippet:\nfrom statistics import mean, stdev\n# Loop through the splits\nlst_accu_stratified = []\nfor train_index, val_index in skf.split(X_train, y_train):\n X_train_fold, X_val_fold = X_train[train_index], X_train[val_index]\n y_train_fold, y_val_fold = y_train[train_index], y_train[val_index]\n # print('Fold :')\n ResNet50 = model.fit(X_train_fold, y_train_fold, batch_size=16, epochs=20, verbose=1)\n val_loss, val_acc = model.evaluate(X_val_fold, y_val_fold, verbose=0)\n print(\"Validation Loss: \", val_loss, \"Validation Accuracy: \", val_acc)\n lst_accu_stratified.append(val_acc)\n\n# Print the output.\nprint('List of possible accuracy:', lst_accu_stratified)\nprint('\\nMaximum Accuracy That can be obtained from this model is:',\n max(lst_accu_stratified)*100, '%')\nprint('\\nMinimum Accuracy:',\n min(lst_accu_stratified)*100, '%')\nprint('\\nOverall Accuracy:',\n mean(lst_accu_stratified)*100, '%')\nprint('\\nStandard Deviation is:', stdev(lst_accu_stratified))\n\nOutput:\nEpoch 1\/20\n30\/30 [==============================] - 9s 102ms\/step - loss: 1.3490 - accuracy: 0.5756\nEpoch 2\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.4620 - accuracy: 0.8466\nEpoch 3\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.1818 - accuracy: 0.9412\nEpoch 4\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.1106 - accuracy: 0.9727\nEpoch 5\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0643 - accuracy: 0.9811\nEpoch 6\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0438 - accuracy: 0.9895\nEpoch 7\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0371 - accuracy: 0.9916\nEpoch 8\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0212 - accuracy: 0.9958\nEpoch 9\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0143 - accuracy: 1.0000\nEpoch 10\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0149 - accuracy: 0.9958\nEpoch 11\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0158 - accuracy: 0.9958\nEpoch 12\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0134 - accuracy: 0.9958\nEpoch 13\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0072 - accuracy: 1.0000\nEpoch 14\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0031 - accuracy: 1.0000\nEpoch 15\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0024 - accuracy: 1.0000\nEpoch 16\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0016 - accuracy: 1.0000\nEpoch 17\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0016 - accuracy: 1.0000\nEpoch 18\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0019 - accuracy: 1.0000\nEpoch 19\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0088 - accuracy: 0.9979\nEpoch 20\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0031 - accuracy: 1.0000\nValidation Loss: 0.8360670208930969 Validation Accuracy: 0.800000011920929\nEpoch 1\/20\n30\/30 [==============================] - 3s 106ms\/step - loss: 0.5129 - accuracy: 0.8700\nEpoch 2\/20\n30\/30 
[==============================] - 2s 71ms\/step - loss: 0.4789 - accuracy: 0.8784\nEpoch 3\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.2724 - accuracy: 0.9224\nEpoch 4\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.2108 - accuracy: 0.9308\nEpoch 5\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.1081 - accuracy: 0.9706\nEpoch 6\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.1010 - accuracy: 0.9748\nEpoch 7\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0481 - accuracy: 0.9895\nEpoch 8\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0316 - accuracy: 0.9874\nEpoch 9\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0483 - accuracy: 0.9811\nEpoch 10\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0167 - accuracy: 0.9937\nEpoch 11\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0129 - accuracy: 0.9937\nEpoch 12\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0023 - accuracy: 1.0000\nEpoch 13\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0024 - accuracy: 1.0000\nEpoch 14\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0093 - accuracy: 0.9979\nEpoch 15\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0389 - accuracy: 0.9895\nEpoch 16\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0293 - accuracy: 0.9895\nEpoch 17\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0016 - accuracy: 1.0000\nEpoch 18\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 6.7058e-04 - accuracy: 1.0000\nEpoch 19\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0011 - accuracy: 1.0000\nEpoch 20\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 6.7595e-04 - accuracy: 1.0000\nValidation Loss: 0.5674645304679871 Validation Accuracy: 0.8571428656578064\nEpoch 1\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.1533 - accuracy: 0.9518\nEpoch 2\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0978 - accuracy: 0.9686\nEpoch 3\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0702 - accuracy: 0.9790\nEpoch 4\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0754 - accuracy: 0.9811\nEpoch 5\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0362 - accuracy: 0.9874\nEpoch 6\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0174 - accuracy: 0.9916\nEpoch 7\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0144 - accuracy: 0.9916\nEpoch 8\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0089 - accuracy: 0.9958\nEpoch 9\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0017 - accuracy: 1.0000\nEpoch 10\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0044 - accuracy: 0.9979\nEpoch 11\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0033 - accuracy: 1.0000\nEpoch 12\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 5.9884e-04 - accuracy: 1.0000\nEpoch 13\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 3.7875e-04 - accuracy: 1.0000\nEpoch 14\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 4.7657e-04 - accuracy: 1.0000\nEpoch 15\/20\n30\/30 
[==============================] - 2s 73ms\/step - loss: 2.8062e-04 - accuracy: 1.0000\nEpoch 16\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 4.5594e-04 - accuracy: 1.0000\nEpoch 17\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 2.3471e-04 - accuracy: 1.0000\nEpoch 18\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 2.5190e-04 - accuracy: 1.0000\nEpoch 19\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 1.5143e-04 - accuracy: 1.0000\nEpoch 20\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 2.4174e-04 - accuracy: 1.0000\nValidation Loss: 0.002929181093350053 Validation Accuracy: 1.0\nEpoch 1\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0035 - accuracy: 1.0000\nEpoch 2\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0048 - accuracy: 0.9979\nEpoch 3\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 7.1234e-04 - accuracy: 1.0000\nEpoch 4\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0100 - accuracy: 0.9937\nEpoch 5\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0041 - accuracy: 1.0000\nEpoch 6\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 0.0016 - accuracy: 1.0000\nEpoch 7\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 6.2473e-04 - accuracy: 1.0000\nEpoch 8\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 4.5511e-04 - accuracy: 1.0000\nEpoch 9\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0015 - accuracy: 1.0000\nEpoch 10\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0132 - accuracy: 0.9979\nEpoch 11\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 0.0106 - accuracy: 0.9958\nEpoch 12\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0032 - accuracy: 0.9979\nEpoch 13\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0022 - accuracy: 0.9979\nEpoch 14\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0039 - accuracy: 0.9979\nEpoch 15\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0023 - accuracy: 1.0000\nEpoch 16\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 2.7678e-04 - accuracy: 1.0000\nEpoch 17\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0022 - accuracy: 1.0000\nEpoch 18\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 0.0034 - accuracy: 0.9979\nEpoch 19\/20\n30\/30 [==============================] - 2s 73ms\/step - loss: 4.1879e-04 - accuracy: 1.0000\nEpoch 20\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 4.0388e-04 - accuracy: 1.0000\nValidation Loss: 0.003368004923686385 Validation Accuracy: 1.0\nEpoch 1\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 5.1283e-04 - accuracy: 1.0000\nEpoch 2\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 8.4923e-04 - accuracy: 1.0000\nEpoch 3\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 3.2774e-04 - accuracy: 1.0000\nEpoch 4\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 1.3468e-04 - accuracy: 1.0000\nEpoch 5\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 1.4561e-04 - accuracy: 1.0000\nEpoch 6\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 1.6656e-04 - accuracy: 1.0000\nEpoch 7\/20\n30\/30 
[==============================] - 2s 71ms\/step - loss: 1.2794e-04 - accuracy: 1.0000\nEpoch 8\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 6.7647e-05 - accuracy: 1.0000\nEpoch 9\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 1.7325e-04 - accuracy: 1.0000\nEpoch 10\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 6.5071e-05 - accuracy: 1.0000\nEpoch 11\/20\n30\/30 [==============================] - 2s 72ms\/step - loss: 6.1966e-05 - accuracy: 1.0000\nEpoch 12\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 5.9293e-05 - accuracy: 1.0000\nEpoch 13\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 3.1360e-04 - accuracy: 1.0000\nEpoch 14\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 1.0051e-04 - accuracy: 1.0000\nEpoch 15\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 1.7242e-04 - accuracy: 1.0000\nEpoch 16\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 5.6384e-05 - accuracy: 1.0000\nEpoch 17\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 8.4639e-05 - accuracy: 1.0000\nEpoch 18\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 6.7929e-04 - accuracy: 1.0000\nEpoch 19\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 1.6557e-04 - accuracy: 1.0000\nEpoch 20\/20\n30\/30 [==============================] - 2s 71ms\/step - loss: 4.6414e-04 - accuracy: 1.0000\nValidation Loss: 8.931908087106422e-05 Validation Accuracy: 1.0\nList of possible accuracy: [0.800000011920929, 0.8571428656578064, 1.0, 1.0, 1.0]\n\nMaximum Accuracy That can be obtained from this model is: 100.0 %\n\nMinimum Accuracy: 80.0000011920929 %\n\nOverall Accuracy: 93.1428575515747 %\n\nStandard Deviation is: 0.09604420178372833\n\nhere the val accuracy of each fold is pretty high but when I test the model with test dataset, the accuracy is very low.\nCode snippet:\nmodel.evaluate(X_test, y_test,batch_size=32)\n\noutput:\n5\/5 [==============================] - 1s 222ms\/step - loss: 2.3315 - accuracy: 0.6913\n[2.3314528465270996, 0.6912751793861389]\n\nMy question is,\n\nIs my method correct?\nWhat can be the reason for low test accuracy?","Title":"Validation acc is very high in each fold but Test acc is very low","Tags":"python,tensorflow,keras,deep-learning,conv-neural-network","AnswerCount":2,"A_Id":75221765,"Answer":"I totally agree with overfitting situations but wanted to state one more possible reason. Your training is really fast so imagine your inputs are kinda small. If model has BatchNorm (as far as I know ResNet50 has), I would recommend to increase batch size as BatchNorm goes a bit crazy in these kind of situations.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75218788,"CreationDate":"2023-01-24 08:01:03","Q_Score":1,"ViewCount":433,"Question":"I am loading Linear SVM model and then predicting new data using the stored trained SVM Model. 
I used TFIDF while training such as:\nvector = TfidfVectorizer(ngram_range=(1, 3)).fit(data['text'])\n\n**when i apply new data than I am getting error at the time of Prediction.\n**\nValueError: X has 2 features, but SVC is expecting 472082 features as input.\nCode for the Prediction of new data\nLinear_SVC_classifier = joblib.load(\"\/content\/drive\/MyDrive\/dataset\/Classifers\/Linear_SVC_classifier.sav\")\ntest_data = input(\"Enter Data for Testing: \")\nnewly_testing_data = vector.transform(test_data)\nSVM_Prediction_NewData = Linear_SVC_classifier.predict(newly_testing_data)\n\nI want to predict new data using stored SVM model without applying TFIDF on training data when I give data to model for prediction. When I use the new data for prediction than the prediction line gives error. Is there any way to remove this error?","Title":"ValueError: X has 2 features, but SVC is expecting 472082 features as input","Tags":"python,machine-learning,svm","AnswerCount":1,"A_Id":75219161,"Answer":"The problem is due to your creation of a new TfidfVectorizer by fitting it on the test dataset. As the classifier has been trained on a matrix generated by the TfidfVectorier fitted on the training dataset, it expects the test dataset to have the exact same dimensions.\nIn order to do so, you need to transform your test dataset with the same vectorizer that was used during training rather than initialize a new one based on the test set.\nThe vectorizer fitted on the train set can be pickled and stored for later use to avoid any re-fitting at inference time.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75218794,"CreationDate":"2023-01-24 08:01:50","Q_Score":0,"ViewCount":24,"Question":"HI I am writing a server script in Frappe Cloud where I am trying to update a particular doctype(which is NOT THE DOCTYPE I HAVE CHOSEN IN DOCTYPE EVENT) using frappe.db.set_value(), then in order to save it i use frappe.db.commit().\nBut when the script tries to run I get the following error\nAttributeError: module has no attribute 'commit'\nAny ideas to whats wrong\nchange in the saved document data","Title":"does frappe.db.commit() not work in server script in Frappe Cloud?","Tags":"python,erpnext,server-side-scripting,frappe","AnswerCount":1,"A_Id":75262689,"Answer":"Use of frappe.db.commit mid transaction can lead to unintended side effects like partial updates.\nYou don't need to explicitly commit in your Server Script, Frappe handles those bits for you.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75220339,"CreationDate":"2023-01-24 10:42:10","Q_Score":1,"ViewCount":260,"Question":"I was trying to go through this code and constantly getting an error while importing import rioxarray as rio in python. 
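A hedged sketch of the fix from the SVC/TF-IDF answer above: persist the fitted vectorizer alongside the model and reuse it at prediction time. File names are illustrative, `data` is the training frame from the question, and the label column is an assumption.

import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# --- training time ---
vector = TfidfVectorizer(ngram_range=(1, 3)).fit(data['text'])
X_train = vector.transform(data['text'])
clf = LinearSVC().fit(X_train, data['label'])        # 'label' column assumed
joblib.dump(vector, "tfidf_vectorizer.sav")
joblib.dump(clf, "Linear_SVC_classifier.sav")

# --- prediction time ---
vector = joblib.load("tfidf_vectorizer.sav")
clf = joblib.load("Linear_SVC_classifier.sav")
new_text = [input("Enter Data for Testing: ")]       # transform expects an iterable of documents
prediction = clf.predict(vector.transform(new_text))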
The details of code in below.....\n outfilename = os.path.join(output_folder,'Runoff_monthly_%s.%02s.%02s.tif' %(Date.strftime('%Y'), Date.strftime('%m'), '01'))\n x = pr.rio.to_raster(outfilename) \n print(\"IMD\",ncfile['time'][i])\n i+=1\n\nthe error i am getting in below....\nFile \"rioxarray.py\", line 26, in \nfrom rioxarray.exceptions import (\nImportError: No module named exceptions\nI am trying to solve this error while i am executing this code..\nFile \"rioxarray.py\", line 26, in \nfrom rioxarray.exceptions import (\nImportError: No module named exceptions","Title":"How to solve the problem \"No module named exceptions\"?","Tags":"python,python-2.7,module,importerror","AnswerCount":3,"A_Id":75220589,"Answer":"I have already checked the list and the \"rioxarray\" already installed in my environment.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75222558,"CreationDate":"2023-01-24 13:58:42","Q_Score":2,"ViewCount":102,"Question":"I have a MindsDB model named hrd and I intended to delete the model by running the below command as per the documentation.\ndb.models.deleteOne({name: \"hrd\"}) \n\nWhile running the command and I got hit with the below error.\nMongoServerError 'unsupported operand type(s) for >>: 'NoneType' and 'int'\n\nHow can I delete specific models from a MongoDB database integration in MindsDB?","Title":"Error when deleting MindsDB model from MongoDB","Tags":"python,machine-learning,mindsdb","AnswerCount":2,"A_Id":75288338,"Answer":"This open issue is being resolved by the MindsDB engineering team, and will be fixed in the next release (24-48hrs).","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75223506,"CreationDate":"2023-01-24 15:15:44","Q_Score":0,"ViewCount":51,"Question":"Just for reference I am coming from AWS so any comparisons would be welcome.\nI need to create a function which detects when a blob is placed into a storage container and then downloads the blob to perform some actions on the data in it.\nI have created a storage account with a container in, and a function app with a python function in it. I have then set up a event grid topic and subscription so that blob creation events trigger the event. I can verify that this is working. This gives me the URL of the blob which looks something like https:\/\/.blob.core.windows.net\/\/. However then when I try to download this blob using BlobClient I get various errors about not having the correct authentication or key. Is there a way in which I can just allow the function to access the container in the same way that in AWS I would give a lambda an execution role with S3 permissions, or do I need to create some key to pass through somehow?\nEdit: I need this to run ASAP when the blob is put in the container so as far as I can tell I need to use EventGrid triggers not the normal blob triggers","Title":"Access blob in storage container from function triggered by Event Grid","Tags":"python,amazon-web-services,azure,azure-functions,azure-blob-storage","AnswerCount":2,"A_Id":75231879,"Answer":"The answer lied somewhere between @rickvdbosch's answer and Abdul's comment. I first had to assign an identity to the function giving it permission to access the storage account. 
Then I was able to use the azure.identity.DefaultAzureCredential class to automatically handle the credentials for the BlobClient","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75224012,"CreationDate":"2023-01-24 15:54:44","Q_Score":0,"ViewCount":26,"Question":"I learned the hard way that tkinter is not thread-safe when starting independent threads with tkinter functionality from the main tkinter thread. I got error messages in a (for me) non-reproducible way, mostly \"main thread is not in main loop\" in connection to internal del calls after I stopped my application. Sometimes the kernel crashed during or after execution, often everything just ran smoothly.\nThese independent threads should run data acquisitions (DAQ) at a couple of instruments, with different GUIs depending on the type of instrument. Threading seems to be feasible as it is not known from start which instrument will be needed at some time, DAQ tasks should be queued up if an instrument is busy etc.\nSo, my idea now is to start the DAQ threads without any tkinter functionality from the main thread. The specific DAQ thread knows which specific GUI to use and puts this specific GUI class into a queue which is handled in the main GUI\/tkinter thread. The instance of the GUI class will then be created in the GUI\/tkinter thread.\nWill this approach still violate thread-safety or is everything ok, as long as the GUI instances are created in the main tkinter thread?","Title":"Does a GUI-class argument violate thread-safety in tkinter?","Tags":"python,multithreading,class,tkinter,instance","AnswerCount":1,"A_Id":75224146,"Answer":"As long as you only access tkinter widgets and functions from a single thread, it should work just fine. One exception, as far as I understand, is that it's safe to call the event_genereate method from other threads. You can push data on a queue and then generate an event, then the event can be handled in the main thread where data can be pulled off the queue and processed.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75224450,"CreationDate":"2023-01-24 16:30:39","Q_Score":1,"ViewCount":23,"Question":"For classes:\nclass Base(ABC):\n\n def __init__(self, param1):\n self.param1 = param1\n \n @abstractmethod\n def some_method1(self):\n pass\n\n # @abstractmethod\n # def potentially_shared_method(self):\n # ????\n\nclass Child(Base):\n \n def __init__(self, param2):\n self.param1 = param1\n self.param2 = param2\n \n def some_method1(self):\n self.object1 = some_lib.generate_object1(param1, param2)\n\n def potentially_shared_method(self):\n return object1.process()\n\nI want to move the potentially_shared_method to be shared in abstract calss, however it uses object1 that is initialized in some_method1 and needs to stay there.","Title":"How to refer to subclass property in abstract shared implementation in abstract class method","Tags":"python,python-3.x","AnswerCount":1,"A_Id":75224560,"Answer":"If it's only potentially shared, it doesn't belong in the base class. You'd be breaking a few design principles.\nWhat is a child class supposed to do for which the sharing doesn't make sense?\nAlso, you're introducing some temporal coupling; you can only call potentially_shared_method after some_method1 has been called. 
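A minimal sketch of the managed-identity approach from the Azure answer above; the blob URL normally arrives in the Event Grid event payload, and the names below are placeholders.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob_url = "https://<account>.blob.core.windows.net/<container>/<blob>"   # from the event data
credential = DefaultAzureCredential()             # picks up the function app's managed identity when deployed
blob_client = BlobClient.from_url(blob_url, credential=credential)
data = blob_client.download_blob().readall()      # bytes of the blob, ready for processing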
That's not ideal because the users of your class might not realize that.\nAlso, if the method is shared, you probably don't want it to be abstract in your base class; with an abstract method you're really only sharing the signature; but it seems you'll want to share functionality.\nAnyway. Here's some options:\n\nUsing Python's multiple inheritance, move potentially_shared_method into a SharedMixin class and have those children who share it inherit from Base and from SharedMixin. You can then also move some_method1 into that SharedMixin class because it seems to me that those go together. Or maybe not...\nHide the access to object1 behind a getter. Make the getter have a dummy implementation in the base class and a proper implementation in those child classes who actually create an object1. Then potentially_shared_method can be moved to Base and just refer to the getter.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75224588,"CreationDate":"2023-01-24 16:41:19","Q_Score":2,"ViewCount":218,"Question":"I have pictures that I want to resize as they are currently quite big. The pictures are supposed to be going to Power BI and Power BI has the maximum limitation of around 32k base64 string. I created a function to resize the image but the image has become blurry and less visible after resizing. The length of the base64 image of 1 picture was around 150,000 which came down to around 7000.\n # Converting into base64\n outputBuffer = BytesIO() \n img2.save(outputBuffer, format='JPEG')\n bgBase64Data = outputBuffer.getvalue()\n # Creating a new column for highlighted picture as base64\n #image_base64_highlighted = base64.b64encode(bgBase64Data).decode() ## http:\/\/stackoverflow.com\/q\/16748083\/2603230\n \n #print(img2)\n resize_factor = 30000\/len(base64.b64encode(bgBase64Data))\n im = Image.open(io.BytesIO(bgBase64Data))\n out = im.resize( [int(resize_factor * s) for s in im.size] )\n output_byte_io = io.BytesIO()\n out.save(output_byte_io, 'JPEG')\n final = output_byte_io.getvalue()\n image_base64_highlighted = base64.b64encode(final).decode()\n\nI think it is shrinking the image too much. Is there anyway I can improve the visibility of the image. I want to be able to see at least the text in the image. I cannot post the images due to PII. Any idea?","Title":"Resizing the base64 image in Python","Tags":"python,base64,python-imaging-library","AnswerCount":2,"A_Id":75224645,"Answer":"I think you can do that with pygame itself. But its recommended for you to try open-cv python for this. I think you should use cv2.resize(). And the parameters are;\n\nsource : Input Image array (Single-channel, 8-bit or floating-point)\n\n\ndsize : Size of the output array\n\n\ndest : Output array (Similar to the dimensions and type of Input image array)\n\n\nfx : Scale factor along the horizontal axis\n\n\nfy : Scale factor along the vertical axis\ninterpolation: One of the above interpolation methods","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75224636,"CreationDate":"2023-01-24 16:46:16","Q_Score":1,"ViewCount":5096,"Question":"I am trying to import sklearn library by writing code like from sklearn.preprocessing import MinMaxScaler but it kept showing same error.\nI tried uninstalling and reinstalling but no change. Command prompt is also giving same error. Recently I installed some python libraries but that never affected my enviroment.\nI also tried running the code in jupyter notebook. 
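A hedged sketch of the SharedMixin option described in the abstract-class answer above; since some_lib in the question is a placeholder, the object1 factory is stubbed out here so the example actually runs.

from abc import ABC, abstractmethod

class _FakeObject1:
    # Stand-in for whatever some_lib.generate_object1() returns in the question.
    def __init__(self, a, b):
        self.a, self.b = a, b
    def process(self):
        return (self.a, self.b)

class Base(ABC):
    def __init__(self, param1):
        self.param1 = param1
    @abstractmethod
    def some_method1(self):
        ...

class SharedMixin:
    # Only children that actually create object1 opt in to this behaviour.
    def potentially_shared_method(self):
        return self.object1.process()

class Child(Base, SharedMixin):
    def __init__(self, param1, param2):
        super().__init__(param1)
        self.param2 = param2
    def some_method1(self):
        self.object1 = _FakeObject1(self.param1, self.param2)

c = Child(1, 2)
c.some_method1()                        # must run before the shared method (the temporal coupling the answer notes)
print(c.potentially_shared_method())    # (1, 2)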
When I tried to import numpy like import numpy as np, it ran successfully. So the problem is only with sklearn.\nAlso, I have worked with sklearn before but have never seen such an error.","Title":"ImportError: cannot import name 'int' from 'numpy'","Tags":"python,scikit-learn,importerror","AnswerCount":2,"A_Id":75272610,"Answer":"You have to read into the error message. For me sklearn was importing something from scipy which uses the outdated np.int, so updating scipy solved the issue for me.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75224636,"CreationDate":"2023-01-24 16:46:16","Q_Score":1,"ViewCount":5096,"Question":"I am trying to import sklearn library by writing code like from sklearn.preprocessing import MinMaxScaler but it kept showing same error.\nI tried uninstalling and reinstalling but no change. Command prompt is also giving same error. Recently I installed some python libraries but that never affected my enviroment.\nI also tried running the code in jupyter notebook. When I tried to import numpy like import numpy as np, it ran successfully. So the problem is only with sklearn.\nAlso, I have worked with sklearn before but have never seen such an error.","Title":"ImportError: cannot import name 'int' from 'numpy'","Tags":"python,scikit-learn,importerror","AnswerCount":2,"A_Id":75525007,"Answer":"Run pip3 install --upgrade scipy\nOR upgrade whatever tool that tried to import np.int and failed\nnp.int is same as normal int of python and scipy was outdated for me","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":2},{"Q_Id":75224714,"CreationDate":"2023-01-24 16:54:03","Q_Score":0,"ViewCount":23,"Question":"I have a dataset with 15 different meteorological stations (providing T, rh, wind direction through time).\nHow should I implement them in a machine learning model? As independent inputs or can I combine them?\nIf you could provide me with some references or hints to start this project, that would very helpful !\nI have so far cleaned the data and separate each meteorological station.\nI believe that I should try to perform a single prediction on each station and then combine the prediction of each station together ?","Title":"how to input the data of several meteorological stations into a machine learning model?","Tags":"python,database,machine-learning,deep-learning,data-science","AnswerCount":1,"A_Id":75226178,"Answer":"There are different ways to implement multiple meteorological stations in a machine learning model depending on the specific problem you are trying to solve and the characteristics of the data. Here are a few options to consider:\n\nIndependent models: One option is to train a separate model for each meteorological station, using the data for that station as input. This approach is useful if the stations have different characteristics or if you want to make predictions for each station independently.\n\nCombined model: Another option is to combine the data from all stations and train a single model to make predictions for all of them at once. This approach is useful if the stations are similar and the relationship between the input variables and the output variable is the same across all stations.\n\nMulti-task learning: You can also consider using multi-task learning, where you train a single model to perform multiple tasks, one for each meteorological station. 
This approach is useful if the stations are similar but have different characteristics and you want to make predictions for all of them at once.\n\n\nRegarding how to combine the predictions, it depends on the problem you are trying to solve. If you want to make a prediction for each station independently you don't need to combine the predictions. But if you want to make a prediction for all the stations you can use an ensemble method like a majority vote or a weighted average to combine the predictions.\nYou can find more information about these approaches and examples of their implementation in papers and tutorials about multi-task learning, multi-output regression and ensemble methods.\nAlso, it might be helpful to explore the correlation between the meteorological stations. You can use the correlation matrix and heatmap to explore the correlation between the different meteorological stations. If they are highly correlated you can combine them in a single model, otherwise, you can consider them as independent inputs.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75224878,"CreationDate":"2023-01-24 17:06:54","Q_Score":1,"ViewCount":178,"Question":"I have a numpy array phase of floats dtype=np.float32 that I convert to integers out ,dtype=np.uint8.\nSince speed is an issue, this should happen in-place.\nI work with code from a previous student and the code doesn't work\nphase = np.arange(0, 4, dtype=np.float32).reshape(2, 2)\n\nout = np.empty((2, 2), dtype=np.uint8)\n\n# Prepare the 2pi -> integer conversion factor and convert.\nfactor = -(256 \/ 2 \/ np.pi)\nphase *= factor\n\nprint(\"array phase with dtype float \\n \", phase)\n\n# There is some randomness involved in casting positive floats to integers.\n# Avoid this by going all negative.\nmaximum = np.amax(phase)\nif maximum >= 0:\n toshift = 256 * 2 * np.ceil(maximum \/ 256)\n phase -= toshift\n\n# Copy and cast the data to the output\nnp.copyto(out, phase, casting=\"unsafe\")\nprint(\"phase array dtype unsigned integer\", out)\n\n\n# This part (along with the choice of type), implements modulo much faster than np.mod().\nbw = int(256 - 1)\nnp.bitwise_and(out, bw, out=out)\nprint(\"array module bit depth \\n\", out) \n\nThe output is\narray phase with dtype float \n [[ -0. -162.97466]\n [-325.9493 -488.92395]]\nphase array dtype unsigned integer [[ 0 94]\n [187 24]]\narray module bit depth \n [[ 0 94]\n [187 24]]\n\nExecuting this program yields results that I don't understand:\n\nWhy does e.g. -162 get mapped to 94?\nI am aware of the flag casting=unsafe but it is required to to in-place conversion.\nI am also aware that 300 > 256 and hence the np.uint8 data-type is too small. I guess i should increase it to np.uint16?\nWhy is there some randomness involved when casting positive floats to integer?\n\nI have also tried np.astype(np.uint8) but the results are similarly disappointing.","Title":"Numpy in-place type casting","Tags":"python,arrays,numpy,type-conversion","AnswerCount":1,"A_Id":75225589,"Answer":"Since speed is an issue, this should happen in-place.\n\nIn-place operations are not always necessary faster. This is dependent of the target platform and the way Numpy is compiled (a lot of low-level effects needs to be considered). They are generally not slower though. Reusing buffers is sufficient in some cases (to avoid page-faults). Did you profile your code and found this to be a bottleneck?\n\nWhy does e.g. 
-162 get mapped to 94?\n\nThis is because the range of the destination type (0..255 included) does not supports the number -162 nor any negative numbers actually since it is an unsigned integer of 8 bits. As a result, a wraparound happens : 256-162=94. That being said, AFAIK, doing this cause an undefined behaviour. The result from one platform to another can change (and actually did so based on past Numpy questions and issues). Thus, I strongly advise to use a bigger type or to change your code so the values fit in the target output type range.\n\nI am aware of the flag casting=unsafe but it is required to to in-place conversion.\n\ncasting=unsafe is pretty explicit. It basically means : \"I know exactly what I am doing and accept the risks and the consequence\". Use it at your own risk ;) .\n\nI am also aware that 300 > 256 and hence the np.uint8 data-type is too small. I guess i should increase it to np.uint16?\n\nSince numbers are negative, you should rather use np.int16 instead. Beside this, yes, this is a good idea.\n\nWhy is there some randomness involved when casting positive floats to integer?\n\nIt is not really random. Such operation is deterministic, but the result is dependent of the target platform and the input numbers (and possibly the low-level state of the processor regarding the specific target platform). In practice, as long as the input numbers fit in the target range and there is no special numbers like NaN, +Inf, -Inf values, it should be fine.\n\nI have also tried np.astype(np.uint8) but the results are similarly disappointing.\n\nThis is normal. The problem is the same and the same conversion function is called in both cases.\n\nNote the operation you do is not really an in-place operation, except the np.bitwise_and(out, bw, out=out). That being said, it is useless for an np.uint8 type since the range is bounded to 255 anyway.\n\nimplements modulo much faster than np.mod()\n\nThis is true for positive number but not for negative numbers. For negative numbers, this is dependent of the underlying representation of integers on the target platform. This does not work for processors using the C1 representation. 
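As a concrete illustration of that advice (make the values fit the target range before the unsafe cast), one possible rework of the snippet from the question; this is a sketch, not the only way to do it:

import numpy as np

phase = np.arange(0, 4, dtype=np.float32).reshape(2, 2)
out = np.empty((2, 2), dtype=np.uint8)

factor = -(256 / 2 / np.pi)
phase *= factor  # now holds negative values

# Wrap into [0, 256) while the data is still float, so the later cast to
# uint8 is well defined on every platform, then copy into the preallocated buffer.
np.mod(phase, 256, out=phase)
np.copyto(out, phase, casting="unsafe")
print(out)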
That being said, all mainstream processors use the C2 representation these days.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75225101,"CreationDate":"2023-01-24 17:26:12","Q_Score":2,"ViewCount":104,"Question":"I have this list of countries:\ncountries = ['Estonia', 'Finland', 'Sweden', 'Denmark', 'Norway', 'Iceland']\nI need to resolve following exersice: Use reduce to concatenate all the countries and to produce this sentence: Estonia, Finland, Sweden, Denmark, Norway, and Iceland are north European countries\ndef sentece(pais,pais_next):\n\n if pais_next=='Iceland':\n return pais+' and '+pais_next + ' are north European countries'\n else: return pais+', '+pais_next\n\ncountries_reduce=reduce(sentece,countries)\nprint(countries_reduce)\n\nThe code run perfect, but if I want to do in general, How I know what is the last element?.","Title":"Last item on the reduce Method","Tags":"python,function,reduce","AnswerCount":2,"A_Id":75225194,"Answer":"The reduce function doesn't have a way to tell it what to do about the last item, only what to do about the initialization.\nThere's two general ways to go about it:\n\nJust do simple concatenation with a comma and a space, but only on the first n-1 items of the list, then manually append the correct format for the last item\nChange the last item from Iceland to and Iceland are north European countries, then do the concatenation for the full list.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75226495,"CreationDate":"2023-01-24 19:43:22","Q_Score":0,"ViewCount":31,"Question":"I have installed tqdm using pip install tqdm\nbut I still got an error that ModuleNotFoundError: No module named 'tqdm',\nhow can I fix this?\nmy code looks like this from tqdm import tqdm","Title":"how can I fix ModuleNotFoundError: No module named 'tqdm' after installation 'tqdm'","Tags":"python,machine-learning,deep-learning,computer-vision","AnswerCount":2,"A_Id":75226555,"Answer":"Here are some options I can advise:\n\nCheck that you have tdqm with pip show tdqm\nCheck that you're using the correct virtual environment.\nYou can try uninstall and then reinstall it again.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75226530,"CreationDate":"2023-01-24 19:46:17","Q_Score":0,"ViewCount":23,"Question":"Someone setup for us cmake to use pybind to create a .pyd module we package, together with some pure python files, into a wheel.\nWe are switching from an old 3.7 python to a modern one, so we want to support wheels for both the old and new python version, at least for now.\nI've read the pybind documentation and, due to my unfamiliarity with cmake, I found it unclear. So I'm looking for clarification.\nMy understanding is that you would have to compile twice, one time \"targeting\" 3.7 and another time targeting the newer version. But I wouldn't expect this to matter at all (if you were to handcode wrapping to python), or at most I'd expect it to matter if we were targeting two different major version (i.e. python2 vs python3).\nMy question is if this is really needed. Can I just avoid a second compilation and slam the .pyd I get when compiling \"for python 3.7\" into the wheel we build for the newer python too?","Title":"pybind c++ for multiple python versions","Tags":"c++,python-3.x,pybind11,python-wheel","AnswerCount":1,"A_Id":75230484,"Answer":"Yes, it is necessary. 
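To make the first suggestion from the reduce answer above concrete (join all but the last country, then append the last one explicitly):

from functools import reduce

countries = ['Estonia', 'Finland', 'Sweden', 'Denmark', 'Norway', 'Iceland']

sentence = reduce(lambda a, b: a + ', ' + b, countries[:-1])
sentence += ', and ' + countries[-1] + ' are north European countries'
print(sentence)
# Estonia, Finland, Sweden, Denmark, Norway, and Iceland are north European countries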
The CPython ABI changes from version to version, often in incompatible ways, so you have to compile for each version separately.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75227813,"CreationDate":"2023-01-24 22:12:06","Q_Score":1,"ViewCount":23,"Question":"i have problem with speed of MapReduce . Is there any faster library instead of this ?\ni tried this for many time but not work as good as we want.","Title":"Running a job on mapreduce produces error code 2","Tags":"python,python-imaging-library","AnswerCount":1,"A_Id":75227842,"Answer":"You can use Apache Spark Mlib It's 100x faster than MapReduce.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75228138,"CreationDate":"2023-01-24 22:56:33","Q_Score":1,"ViewCount":44,"Question":"I have a set of scripts and utility modules that were written for a recent version of Python 3. Now suddenly, I have a need to make sure that all this code works properly under an older version of Python 3. I can't get the user to update to a more recent Python version -- that's not an option. So I need to identify all the instances where I've used some functionality that was introduced since the old version they have installed, so I can remove it or develop workarounds.\nApproach #1: eyeball all the code and compare against documentation. Not ideal when there's this much code to look at.\nApproach #2: create a virtual environment locally based on the old version in question using pyenv, run everything, see where it fails, and make fixes. I'm doing this anyway, because backporting to the older Python will also mean going backwards in a number of needed third-party modules from PyPi, and I'll need to make sure that the suite still functions properly. But I don't think it's a good way to identify all my version incompatibilities, because much of the code is only exercised based on particular characteristics of input data, and it'd be hard to make sure I exercise all the code (I don't yet have good unit tests that ensure every line will get executed).\nApproach #3: in my virtual environment based on the older version, I used pyenv to install the pylint module, then used this pylint module to check my code. It ran; but it didn't identify issues with standard library calls. For example, I know that several of my functions call subprocess.run() with the \"check_output=\" Boolean argument, which didn't become available until version 3.7. I expected the 3.6 pylint run to spot this and yell at me; but it didn't. Does pylint not check standard library calls against definitions?\nAnyway, this is all I've thought of so far. Any ideas gratefully appreciated. Thanks.","Title":"Checking Python standard library function\/method calls for compatibility with old Python versions","Tags":"python,static-analysis,pylint","AnswerCount":2,"A_Id":75295235,"Answer":"As noted in the comments, the real issue is that you do not have a proper test suite, so the question is how can you get one cheaply.\nAdding unit test can be time consuming. Before doing that, you can add actual end-to-end tests (which will take some computational time and longer feedback time, but that will be easier to implement), by simply running the program with the current version of python that it is working with and storing the results and then adding a test to show you reproduce the same results.\nThis kind of test is usually expensive to maintain (as each time you are updating the behavior, you have to update the results). 
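For illustration, a minimal end-to-end test of the kind described above might look like the sketch below; the script name, the input file and the golden output file are hypothetical placeholders:

import pathlib
import subprocess
import sys

def test_end_to_end():
    expected = pathlib.Path("tests/golden/expected_output.txt").read_text()
    result = subprocess.run(
        [sys.executable, "my_script.py", "tests/data/sample_input.txt"],
        stdout=subprocess.PIPE,
        text=True,  # use universal_newlines=True instead on Python < 3.7
        check=True,
    )
    assert result.stdout == expected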
However, there are a safeguard against regression, and allow you to perform some heavy refactoring on legacy code in order to move to a more testable structure.\nIn your case, these end-to-end test will allow you to test against several versions of python the actual application (not only parts of it).\nOnce you have a better test suite, you can them decide if this heavy end-to-end tests are worth keeping based on the maintenance burden of the test suite (let's not forget that the test suite should not slow you down in your development, so if it is the bottleneck, that means you should rethink your testing)\nWhat will take time is to generate good input data to your end-to-end tests, to help you with that, you should use some coverage tool (you might even spot unreachable code thanks to that). If there are part of your code that you don't manage to reach, I would not bother at first about it, as it means it will be unlikely to be reached by your client (and if it is the case and it fails at your client, be sure to have proper logging implemented to be able to add the test case to your test suite)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75229798,"CreationDate":"2023-01-25 04:50:43","Q_Score":4,"ViewCount":62,"Question":"Python gives me a different result if I assign one of the intermediate steps to a variable, like this:\n>>> -0.207 ** 0.66 - 1\n-1.3536229379434348\n>>> a = -0.207\n>>> a ** 0.66 - 1\n(-1.1703591496008927+0.30988214273656856j)\n\nFor this simple calculation, if I assign -0.207 to a temporary variable a, then the result of a ** 0.66 - 1 evaluates to a complex number.\nWhy does this happen, and how do I stop Python from doing that?","Title":"Assigning intermediate value to a variable makes the computation result complex","Tags":"python,operators","AnswerCount":1,"A_Id":75229897,"Answer":"The correct answer to your statement (according to python) is a negative number. -0.207**0.66-1 evaluates to about -1.35.\nThe reason this is happening is that you're miscalculating in the one-liner:\n-0.207 ** 0.66 - 1 actually evaluates to -(0.207 ** 0.66) - 1 and not to (-0.207) ** 0.66 - 1 like you'd expect.\nWhen you separate the lines, you're changing the calculation to the second statement.\nTo stop this from happening, use explicit parentheses where there might be any ambiguity.","Users Score":5,"is_accepted":false,"Score":0.761594156,"Available Count":1},{"Q_Id":75230007,"CreationDate":"2023-01-25 05:33:45","Q_Score":2,"ViewCount":141,"Question":"`Fatal error from pip prevented installation. 
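To spell out the precedence point from the exponentiation answer above:

print(-0.207 ** 0.66 - 1)    # parsed as -(0.207 ** 0.66) - 1  ->  about -1.3536
a = -0.207
print(a ** 0.66 - 1)         # (-0.207) ** 0.66 - 1  ->  a complex number
print(-(0.207 ** 0.66) - 1)  # explicit parentheses reproduce the first result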
Full pip output in file:\nC:\\Users\\arman.local\\pipx\\logs\\cmd_2023-01-24_23.27.56_pip_errors.log\npip failed to build packages:\nbitarray\ncytoolz\nyarl\nSome possibly relevant errors from pip install:\nerror: subprocess-exited-with-error\nerror: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2\ncytoolz\/dicttoolz.c(209): fatal error C1083: No se puede abrir el archivo incluir: 'longintrepr.h': No such file or directory\nyarl\/_quoting_c.c(196): fatal error C1083: No se puede abrir el archivo incluir: 'longintrepr.h': No such file or directory\nError installing eth-brownie.`\nAfter I run the line-code above, it outputs this error and I've tried uninstalling and installing pipx again but this just doesn\u00b4t work.","Title":"Error installing eth-brownie with `pipx install eth-brownie`","Tags":"python,solidity","AnswerCount":1,"A_Id":75773803,"Answer":"This error is generated due to an incompatibility between Python 3.11 and Cython. While it will most likely get fixed in later builds, downgrading to an earlier Python version usually does the trick. Here are a few steps I would recommend:\n\nInstall Python 3.10 or lower. This can run concurrently with your current Python version, but you need to change the priority version in PATH, use a virtual environment, or directly call it in the shell using py -3.10.\nUninstall pipx (run pip uninstall pipx) and reinstall it using the lower Python version py -3.10 -m pip install --user pipx. You might need to clean up earlier attempts to install brownie by deleting the eth-brownie folder under users\/your-username\/.local\/pipx\/venvs.\nAlso, remember to call pipx ensurepath after reinstallation\nUninstall Cython (run pip uninstall cython) and reinstall it using the lower Python version (run pip install cython).\nReattempt installing eth-brownie pipx install eth-brownie.\n\nIf brownie installation still doesn't work, try one or both of the following:\n\nForget pipx altogether and pip install brownie instead.\nDownload the Visual Studio Build tools 2019, and install all the dependencies.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75230130,"CreationDate":"2023-01-25 05:52:23","Q_Score":0,"ViewCount":31,"Question":"I was trying to load multipage image file formats in Opencv by imreadmulti() function. Apart from .Tiff files, what are the other file types supported by imreadmulti()?\nI tried loading Pdf, Docx, and Dicom files too with imreadmulti(), as it accepts multipage file types, according to the official documentation. But they didn't work. 
Could somebody help me know the other file types supported by imreadmulti().","Title":"File types supported by imreadmulti() function in Opencv","Tags":"python,python-3.x,image,opencv","AnswerCount":1,"A_Id":75242325,"Answer":"BMP (Windows bitmap)\nDIB (Device-independent bitmap)\nJPEG (Joint Photographic Experts Group)\nJPG (Joint Photographic Experts Group)\nJP2 (JPEG 2000)\nPNG (Portable Network Graphics)\nPBM (Portable bitmap)\nPGM (Portable graymap)\nPPM (Portable pixmap)\nSR (Sun raster)\nRAS (Sun raster)\nTIFF (Tagged Image File Format)\nTIF (Tagged Image File Format)\nEXR (OpenEXR)\nJXR (JPEG XR)\nPFM (Portable float map)\nPDS (NASA Planetary Data System)\nPFM (Portable float map)\nVIFF (Khoros Visualization image file format)\nXBM (X11 bitmap)\nXPM (X11 pixmap)\nDDS (DirectDraw Surface)\nEIS (Encapsulated image file)\nMNG (Multiple-image Network Graphics)\nWEBP\nHEIF\nHEIC\nAVIF\nNote\nnote that some of the file types given below may require additional libraries or codecs to be installed on your system in order to be read by OpenCV. They're:\nSGI (Silicon Graphics Image)\nCUR (Windows cursor)\nICO (Windows icon)\nGIF (Graphics Interchange Format)\nDJVU (DjVu image format)\nPDF (Portable Document Format)\nWMF (Windows Metafile)\nEMF (Enhanced Metafile)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75230297,"CreationDate":"2023-01-25 06:18:54","Q_Score":0,"ViewCount":18,"Question":"Is there any existing python package that can be used to handle METEOR RADAR data sets which are in .hwd file format?\nI want to work on atmoshpereic science project on tide analysis in the MLT region using python.So, the source of the data is METEOR RADAR which stores data in .hwd file format(height width depth).\nI tried searching the internet for specific packages that could help me file handle .hwd files but ended up finding no packages or libraries that are currently active.\nCould you please help me?\nThank you.","Title":"Any python package that can be used to handle METEOR RADAR data sets?","Tags":"python-3.x,machine-learning,data-analysis,physics,atmosphere","AnswerCount":1,"A_Id":75288717,"Answer":"I figured this out!\nThere is no need for external packages to work on hwd files in python.\nhwd files stand for Horizontal Wind Data files. So, METEOR radar stores data in hwd file format, which can be treated as a normal text(.txt) file for file handling in python.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75232687,"CreationDate":"2023-01-25 10:34:30","Q_Score":1,"ViewCount":43,"Question":"How do I change the text input in IDLE's terminal to green?\nimport sys\n\ntry:\n color = sys.stdout.shell\nexcept AttributeError:\n raise RuntimeError(\"Use IDLE\")\n\nfull_name = input('What is your name?')\n\ncolor.write(\"My name is \",\"DEFINITION\")\ncolor.write(full_name,\"DEFINITION\")","Title":"How to colourise user input in IDLE's terminal?","Tags":"python,python-idle","AnswerCount":1,"A_Id":75249007,"Answer":"On the Highlights tab of the Options dialog, one can define colors separately for \"Normal Code or Text\" and \"Shell User Output\". The former includes both shell code input at the '>>>' prompt and responses to input() prompts. (That input() responses get syntax colorized is a low-priority but that I will eventual fix.)\nI am not sure what you want to do, but one cannot currently define user colors from user code. 
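As a usage note for the imreadmulti() answer above, a minimal call looks like this; the file name is a placeholder, and a multipage TIFF is the most reliably supported case:

import cv2

ok, pages = cv2.imreadmulti("scan.tif", flags=cv2.IMREAD_ANYCOLOR)
if ok:
    print(f"loaded {len(pages)} page(s), first page shape: {pages[0].shape}")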
The only influence is whether one sends output to sys.stdout or sys.stderr.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75233043,"CreationDate":"2023-01-25 11:03:37","Q_Score":3,"ViewCount":63,"Question":"I found a code snippet, which is a custom metric for tensorboard (pytorch training)\ndef specificity(output, target, t=0.5):\n \n tp, tn, fp, fn = tp_tn_fp_fn(output, target, t)\n\n if fp == 0:\n return 1\n s = tn \/ (tn + fp)\n\n if s != s:\n s = 1\n\n return s\n\ndef tp_tn_fp_fn(output, target, t):\n with torch.no_grad():\n preds = output > t # torch.argmax(output, dim=1)\n preds = preds.long()\n num_true_neg = torch.sum((preds == target) & (target == 0), dtype=torch.float).item()\n num_true_pos = torch.sum((preds == target) & (target == 1), dtype=torch.float).item()\n num_false_pos = torch.sum((preds != target) & (target == 1), dtype=torch.float).item()\n num_false_neg = torch.sum((preds != target) & (target == 0), dtype=torch.float).item()\n\n return num_true_pos, num_true_neg, num_false_pos, num_false_neg\n\n\nIn terms of the calculation itself it is easy enough to understand.\nWhat I don't understand is s != s. What does that check do, how can the two s even be different?","Title":"When can \"s != s\" occur in a method?","Tags":"python,pytorch","AnswerCount":1,"A_Id":75233093,"Answer":"Since it's ML-related, I'll assume the data are all numbers. The only number where s != s is true is the special not-a-number value nan. Any comparison with nan is always false, so from that follows that nan is not equal to itself.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75235221,"CreationDate":"2023-01-25 14:10:09","Q_Score":1,"ViewCount":144,"Question":"using solana library from pip\npip install solana\n\nand then trying to perform withdraw_from_vote_account\ntxn = txlib.Transaction(fee_payer=wallet_keypair.pubkey())\n# txn.recent_blockhash = blockhash\ntxn.add(\n vp.withdraw_from_vote_account(\n vp.WithdrawFromVoteAccountParams(\n vote_account_from_pubkey=vote_account_keypair.pubkey(),\n to_pubkey=validator_keypair.pubkey(),\n withdrawer=wallet_keypair.pubkey(),\n lamports=2_000_000_000,\n )\n )\n )\ntxn.sign(wallet_keypair)\ntxn.serialize_message()\nsolana_client.send_transaction(txn).value\n\nThis throw me an error\nTraceback (most recent call last):\n File \"main.py\", line 119, in \n solana_client.send_transaction(txn).value\n File \"venv\/lib\/python3.8\/site-packages\/solana\/rpc\/api.py\", line 1057, in send_transaction\n txn.sign(*signers)\n File \"venv\/lib\/python3.8\/site-packages\/solana\/transaction.py\", line 239, in sign\n self._solders.sign(signers, self._solders.message.recent_blockhash)\nsolders.SignerError: not enough signers\n\nI tried to workaround with adding more keypair to sign\ntxn.sign(wallet_keypair,validator_keypair)\n\nDoing this it throws me an error on the sign method\nself._solders.sign(signers, self._solders.message.recent_blockhash)\nsolders.SignerError: keypair-pubkey mismatch\n\nnot sure how to resolve this any help appreciated","Title":"Unable to Sign Solana Transaction using solana-py throws not enough signers","Tags":"python,solana,anchor-solana","AnswerCount":2,"A_Id":75237842,"Answer":"when you are trying to send a transaction that's withdrawing money from a wallet, you need the wallet that's holding the assets to sign a transaction to send the assets to the withdrawer.\nanyway, from what i see you are trying to withdraw from vote_account_keypair, and withdrawer os wallet_keypair, in the 
code that you wrote you have only one signer which is wallet_keypair but you also need vote_account_keypair to sign the transaction becuz you are withdrawing from their account.\ni hope this helps","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75236933,"CreationDate":"2023-01-25 16:25:04","Q_Score":2,"ViewCount":219,"Question":"i work on a google cloud environment where i don't have internet access. I'm trying to launch a dataflow job passing it the sdk like this:\npython wordcount.py --no_use_public_ip --sdk_location \"\/dist\/package-import-0.0.2.tar.gz\"\n\nI generated package-import-0.0.2.tar.gz with this setup.py\n import setuptools\n\n setuptools.setup(\n name='package-import',\n version='0.0.2',\n install_requires=[\n 'apache-beam==2.43.0',\n 'cachetools==4.2.4',\n 'certifi==2022.12.7',\n 'charset-normalizer==2.1.1',\n 'cloudpickle==2.2.0',\n 'crcmod==1.7',\n 'dill==0.3.1.1',\n 'docopt==0.6.2',\n 'fastavro==1.7.0',\n 'fasteners==0.18',\n 'google-api-core==2.11.0',\n 'google-apitools==0.5.31',\n 'google-auth==2.15.0',\n 'google-auth-httplib2==0.1.0',\n 'google-cloud-bigquery==3.4.1',\n 'google-cloud-bigquery-storage==2.13.2',\n 'google-cloud-bigtable==1.7.3',\n 'google-cloud-core==2.3.2',\n 'google-cloud-datastore==1.15.5',\n 'google-cloud-dlp==3.10.0',\n 'google-cloud-language==1.3.2',\n 'google-cloud-pubsub==2.13.11',\n 'google-cloud-pubsublite==1.6.0',\n 'google-cloud-recommendations-ai==0.7.1',\n 'google-cloud-spanner==3.26.0',\n 'google-cloud-videointelligence==1.16.3',\n 'google-cloud-vision==1.0.2',\n 'google-crc32c==1.5.0',\n 'google-resumable-media==2.4.0',\n 'googleapis-common-protos==1.57.1',\n 'grpc-google-iam-v1==0.12.4',\n 'grpcio==1.51.1',\n 'grpcio-status==1.51.1',\n 'hdfs==2.7.0',\n 'httplib2==0.20.4',\n 'idna==3.4',\n 'numpy==1.22.4',\n 'oauth2client==4.1.3',\n 'objsize==0.5.2',\n 'orjson==3.8.3',\n 'overrides==6.5.0',\n 'packaging==22.0',\n 'proto-plus==1.22.1',\n 'protobuf==3.20.3',\n 'pyarrow==9.0.0',\n 'pyasn1==0.4.8',\n 'pyasn1-modules==0.2.8',\n 'pydot==1.4.2',\n 'pymongo==3.13.0',\n 'pyparsing==3.0.9',\n 'python-dateutil==2.8.2',\n 'pytz==2022.7',\n 'regex==2022.10.31',\n 'requests==2.28.1',\n 'rsa==4.9',\n 'six==1.16.0',\n 'sqlparse==0.4.3',\n 'typing-extensions==4.4.0',\n 'urllib3==1.26.13',\n 'zstandard==0.19.0'\n ],\n packages=setuptools.find_packages(),\n )\n\nbut in dataflow log worker i have this error: Could not install Apache Beam SDK from a wheel: could not find a Beam SDK wheel among staged files, proceeding to install SDK from source tarball.\nAnd then he tries to download it but since he doesn't have internet he can't\nmy biggest problem is that the google cloud environment doesn't access the internet so dataflow can't download what it needs. Do you know of a way to pass it an sdk_location?","Title":"Could not install Apache Beam SDK from a wheel: could not find a Beam SDK wheel among staged files, proceeding to install SDK from source tarball","Tags":"python,google-cloud-platform,google-cloud-dataflow,apache-beam,python-3.8","AnswerCount":2,"A_Id":75312136,"Answer":"I solved using an internal proxy that allowed me to access the internet. 
In the command I added --no_use_public_ip and set no_proxy=\"metadata.google.internal,www.googleapis.com,dataflow.googleapis.com,bigquery.googleapis.com\". Thanks","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75237213,"CreationDate":"2023-01-25 16:45:22","Q_Score":0,"ViewCount":56,"Question":"I'm trying to find unique combinations of ~70,000 IDs.\nI'm currently doing an itertools.combinations([list name], 2) to get unique 2 ID combinations but it's been running for more than 800 minutes.\nIs there a faster way to do this?\nI tried converting the IDs into a matrix where the IDs are both the index and the columns and populating the matrix using itertools.product.\nI tried doing it the manual way with loops too.\nBut after more than a full day of letting them run, none of my methods have actually finished running.\nFor additional information, I'm storing these into a data frame, to later run a function that compares each of the unique sets of IDs.","Title":"More optimized way to do itertools.combinations","Tags":"python,combinations,python-itertools","AnswerCount":2,"A_Id":75237466,"Answer":"(70_000 * 69_999) \/ 2 is about 2.4 billion - that is not such a large number as to be uncomputable in a few hours (update: a dry run of itertools.product(range(70000), 2) took less than 70 seconds on a 2017-era i7 @ 3 GHz, naively using a single core). But if you are trying to keep all of this data in memory at once, then it won't fit - and if your system is configured to swap memory to disk before raising a MemoryError, this may slow the program down by 2 or more orders of magnitude, and that is where your problem comes from.\nitertools.combinations does the right thing in this respect, and there is no need to replace it with something else: it yields one combination at a time. What you do with the result, however, does change things: if you are streaming the combinations to a file and not keeping them in memory, it should be fine, and then it is just computational time you cannot speed up anyway.\nIf, on the other hand, you are collecting the combinations into a list or other data structure: there is your problem - don't do it.\nNow, 
going a step further than your question, since these combinations are check-able and predictable, maybe trying to generate these is not the right approach at all - you don't give details on how these are to be used, but if used in a reactive form, or on a lazy form, you might have an instantaneous workflow instead.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75237476,"CreationDate":"2023-01-25 17:08:49","Q_Score":1,"ViewCount":33,"Question":"I wanted to run this script that saves images generated by a TDW script but the TDW script is definitely not running.\nimport glob\nimport os\nimport cv2\nimport subprocess\ni = 0\nframeSize = (512, 512)\npath = 'CRIPP\/annotations\/icqa_dataset\/IID\/json'\nfor i in range(0, 1):\n new_path = path + f'\/example_{i}.json'\n cmd = \"recreate.py -json={new_path}\"\n os.system(cmd)\n #subprocess.call(['recreate.py', '-json='+new_path])","Title":"executing a python script within another python script","Tags":"python,scripting,operating-system,subprocess,dataset","AnswerCount":1,"A_Id":75250455,"Answer":"I think you forgot to run script using python.\nChange cmd line to cmd = \"python recreate.py -json={new_path}\"","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75238215,"CreationDate":"2023-01-25 18:19:29","Q_Score":0,"ViewCount":21,"Question":"I have a question regarding Python\/cx-Oracle.\nThe Oracle SQLcl and SQL*Developer tools, both support proxy server connections (not to be confused with proxy users).\nFor example, on SQLcl their is a command line option, \"--proxy\", which is nothing to do with proxy users.\nI can't say that I know exactly how they work, but the options are there, and I assume that there is an option in an API in there to support it.\nIs this something which cx-Oracle supports?\nThanks,\nClive\nI tried looking at the cx-Oracle docs, but couldn't spot anything which might help.","Title":"Proxy Server Connections via Python cx-Oracle","Tags":"python,api,server,proxy,cx-oracle","AnswerCount":1,"A_Id":75264569,"Answer":"I had another through the docs and it appears that you are expected to make changes to oracle config files (sqlnet.ora and testament.ora). That said, it also appears that newer EZconnect string syntax supports the proxy server requirement.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75238512,"CreationDate":"2023-01-25 18:45:54","Q_Score":0,"ViewCount":33,"Question":"Is it possible to download a video with controlslist=\"nodownload\" and if so how? There is a poster tag and a src tag with urls, but when I tried to open them it only said Bad URL hash.\nthe whole thing looks like this: