Q_CreationDate,Title,Question,Answer,Score,Is_accepted,N_answers,Q_Id 2020-05-16 20:10:07.260,Pandas :Record count inserted by Python TO_SQL funtion,"I am using Python to_sql function to insert data in a database table from Pandas dataframe. I am able to insert data in database table but I want to know in my code how many records are inserted . How to know record count of inserts ( i do not want to write one more query to access database table to get record count)? Also, is there a way to see logs for this function execution. like what were the queries executed etc.","There is no way to do this, since python cannot know how many of the records being inserted were already in the table.",0.0,False,1,6756 2020-05-18 08:41:40.830,Understanding the sync method from the python shelve library,"The python documentation says this about the sync method: Write back all entries in the cache if the shelf was opened with writeback set to True. Also empty the cache and synchronize the persistent dictionary on disk, if feasible. This is called automatically when the shelf is closed with close(). I am really having a hard time understanding this. How does accessing data from cache differ from accessing data from disk? And does emptying the cache affect how we can access the data stored in a shelve?","For whoever is using the data in the Shelve object, it is transparent whether the data is cached or is on disk. If it is not on the cache, the file is read, the cache filled, and the value returned. Otherwise, the value as it is on the cache is used. If the cache is emptied on calling sync, that means only that on the next value fetched from the same Shelve instance, the file will be read again. Since it is all automatic, there is no difference. The documentation is mostly describing how it is implemented. If you are trying to open the same ""shelve"" file with two concurrent apps, or even two instances of shelve on the same program, chances are you are bound to big problems. Other than that, it just behaves as a ""persistent dictionary"" and that is it. This pattern of writing to disk and re-reading from a single file makes no difference for a workload of a single user in an interactive program. For a Python program running as a server with tens to thousands of clients, or even a single big-data processing script, where this could impact actual performance, Shelve is hardly a usable thing anyway.",0.0,False,1,6757 2020-05-18 09:36:32.503,How two Django applications use same database for authentication,"previously we implemented one django application call it as ""x"" and it have own database and it have django default authentication system, now we need to create another related django application call it as ""y"", but y application did n't have database settings for y application authentication we should use x applications database and existing users in x application, so is it possible to implement like this?, if possible give the way how can we use same database for two separated django applications for authentication system. Sorry for my english Thanks for spending time for my query","So, to achieve this. In your second application, add User model in the models.py and remember to keep managed=False in the User model's Meta class. Inside your settings.py have the same DATABASES configuration as of your first application. 
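A rough sketch of that unmanaged model could look like this (the field list below is only illustrative and must mirror the columns of application x's existing auth_user table):

from django.db import models

class User(models.Model):
    username = models.CharField(max_length=150, unique=True)
    password = models.CharField(max_length=128)
    email = models.EmailField(blank=True)
    is_active = models.BooleanField(default=True)

    class Meta:
        managed = False          # Django will not create or migrate this table
        db_table = 'auth_user'   # the table already created by application x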
By doing this, you can achieve the User model related functionality with ease in your new application.",0.0,False,1,6758 2020-05-18 12:02:43.860,The real difference between MEDIA_ROOT (media files) and STATIC_ROOT (static files) in python django and how to use them correctly,"The real difference between MEDIA_ROOT and STATIC_ROOT in python django and how to use them correctly? I just was looking for the answer and i'm still confused about it, in the end of the day i got two different answers: - First is that the MEDIA_ROOT is for storing images and mp3 files maybe and the STATIC_ROOT for the css, js... and so on. -Second answer is that they were only using MEDIA_ROOT in the past for static files, and it caused some errors so eventually we are only using STATIC_ROOT. is one of them right if not be direct and simple please so everybody can understand and by how to use them correctly i mean what kind of files to put in them exactly","Understanding the real difference between MEDIA_ROOT and STATIC_ROOT can be confusing sometimes as both of them are related to serving files. To be clear about their differences, I could point out their uses and types of files they serve. STATIC_ROOT, STATIC_URL and STATICFILES_DIRS are all used to serve the static files required for the website or application. Whereas, MEDIA_URL and MEDIA_ROOT are used to serve the media files uploaded by a user. As you can see that the main difference lies between media and static files. So, let's differentiate them. Static files are files like CSS, JS, JQuery, scss, and other images(PNG, JPG, SVG, etc. )etc. which are used in development, creation and rendering of your website or application. Whereas, media files are those files that are uploaded by the user while using the website. So, if there is a JavaScript file named main.js which is used to give some functionalities like show popup on button click then it is a STATIC file. Similarly, images like website logo, or some static images displayed in the website that the user can't change by any action are also STATIC files. Hence, files(as mentioned above) that are used during the development and rendering of the website are known as STATIC files and are served by STATIC_ROOT, STATIC_URL or STATICFILES_DIRS(during deployment) in Django. Now for the MEDIA files: any file that the user uploads, for example; a video, or image or excel file, etc. during the normal usage of the website or application are called MEDIA files in Django. MEDIA_ROOT and MEDIA_URL are used to point out the location of MEDIA files stored in your application. Hope this makes you clear.",1.2,True,1,6759 2020-05-18 22:30:37.343,Python not starting: IDLE's subprocess didn't make connection,"When I try to open Python it gives me an error saying: IDLE's subprocess didn't make connection. See the 'startup failure' section of the IDLE doc online I am not sure how to get it to start. I am on the most recent version of windows, and on the most recent version of python.",Open cmd and type python to see if python was installed. If so fix you IDE. If not download and reinstall python.,0.0,False,2,6760 2020-05-18 22:30:37.343,Python not starting: IDLE's subprocess didn't make connection,"When I try to open Python it gives me an error saying: IDLE's subprocess didn't make connection. See the 'startup failure' section of the IDLE doc online I am not sure how to get it to start. I am on the most recent version of windows, and on the most recent version of python.","I figured it out, thanks. 
All I needed to do was uninstall random.py.",0.0,False,2,6760 2020-05-19 04:10:54.070,Python backend -Securing REST APIs With Client Certificates,"We have a small website with API connected using AJAX. We do not ask for usernames and passwords or any authentication like firebase auth. So it's like open service and we want to avoid the service to be misused. OAuth 2 is really effective when we ask for credentials to the user. Can you suggest the security best practice and how it can be implemented in this context using python? Thanks","Use a firewall Allow for third-party identity providers if possible Separate the concept of user identity and user account",0.3869120172231254,False,1,6761 2020-05-19 13:54:18.343,How to add pylint for Django in vscode manually?,"I have created a Django project in vscode. Generally, vscode automatically prompts me to install pylint but this time it did not (or i missed it). Even though everything is running smoothly, I am still shown import errors. How do I manually install pytlint for this project? Also,in vscode i never really create a 'workspace'. I just create and open folders and that works just fine. ps. Im using pipenv. dont know how much necessary that info was.","Hi you must active your venv at the first then install pylint (pip install pylint) In vscode: ctrl+shift+P then type linter (choose ""python:select linter"") now you can choose your linter (pylint) I hope it helps you",0.3869120172231254,False,1,6762 2020-05-19 20:35:03.707,Can I execute 1 python script by 3 different caller process at same time with respective arguments,"I have situation in centos where 3 different/Independent caller will try to execute same python script with respective command line args. eg: python main.py arg1, python main.py arg2, python main.py arg3 at same time. My question is - Is it possible in the first place or I need to copy that python script, 3 times with 3 different names to be called by each process. If it is possible then how it should be done so that these 3 processes will not interfare and python script execution will be independent from each other.","All the python processes will run entirely isolated from each other, even if executing the same source file. If they interact with any external resource other than process memory (such as files on disk), then you may need to take measures to make sure the processes don't interfere (by making sure each instance uses a different filename, for example).",0.3869120172231254,False,1,6763 2020-05-19 20:36:14.493,How to interpose RabbitMQ between REST client and (Python) REST server?,"If I develop a REST service hosted in Apache and a Python plugin which services GET, PUT, DELETE, PATCH; and this service is consumed by an Angular client (or other REST interacting browser technology). Then how do I make it scale-able with RabbitMQ (AMQP)? Potential Solution #1 Multiple Apache's still faces off against the browser's HTTP calls. Each Apache instance uses an AMQP plugin and then posts message to a queue Python microservices monitor a queue and pull a message, service it and return response Response passed back to Apache plugin, in turn Apache generates the HTTP response Does this mean the Python microservice no longer has any HTTP server code at all. This will change that component a lot. Perhaps best to decide upfront if you want to use this pattern as it seems it would be a task to rip out any HTTP server code. Other potential solutions? 
I am genuinely puzzled as to how we're supposed to take a classic REST server component and upgrade it to be scale-able with RabbitMQ/AMQP with minimal disruption.","I would recommend switching wsgi to asgi(nginx can help here), Im not sure why you think rabbitmq is the solution to your problem, as nothing you described seems like that would be solved by using this method. asgi is not supported by apache as far as I know, but it allows the server to go do work, and while its working it can continue to service new requests that come in. (gross over simplification) If for whatever reason you really want to use job workers (rabbitmq, etc) then I would suggest returning to the user a ""token"" (really just the job_id) and then they can call with that token, and it will report back either the current job status or the result",1.2,True,1,6764 2020-05-20 07:41:11.573,Create package with dependencies,"Do you know how to create package from my python application to be installable on Windows without internet connection? I want, for example, to create tar.gz file with my python script and all dependencies. Then install such package on windows machine with python3.7 already installed. I tried setuptools but i don't see possibility to include dependencies. Can you help me?",Their are several Java tutorials on how to make installers that are offline. You have your python project and just use a preprogrammed Java installer to then put all of the 'goodies' inside of. Then you have an installer for windows. And its an executable.,-0.3869120172231254,False,1,6765 2020-05-20 08:14:01.817,Debug function not appearing in the menu bar in VS Code. I am using it for Python,"I am new at learning Python and i am trying to trying to set up the environment on VS code. However, the Debug icon and function is not on the menu bar. Please how do I rectify this?",right click on the menu bar. you can select which menus are active. it's also called run i believe.,0.0,False,1,6766 2020-05-20 10:08:58.617,How can i solve AttributeError: module 'dis' has no attribute 'COMPILER_FLAG_NAMES' in anaconda3/envs/untitled/lib/python3.7/inspect.py,"i am trying implement from scipy.spatial import distance as dist library however it gives me File ""/home/afeyzadogan/anaconda3/envs/untitled/lib/python3.7/inspect.py"", line 56, in for k, v in dis.COMPILER_FLAG_NAMES.items(): AttributeError: module 'dis' has no attribute 'COMPILER_FLAG_NAMES' error how can i solve it? ''' for k, v in dis.COMPILER_FLAG_NAMES.items(): mod_dict[""CO_"" + v] = k '''","We ran across this issue in our code with the same exact AttributeError. Turns out it was a totally unrelated file in the current directory called dis.py.",0.3869120172231254,False,1,6767 2020-05-20 13:37:42.400,save a figure with a precise pixels size with savefig,"How can I save a plot in a 750x750 px using savefig? The only useful parameter is DPI, but I don't understand how can I use it for setting a precise size","I added plt.tight_layout() before savefig(), and it solved the trimming issue I had. Maybe it will help yours as well. I also set the figure size at the begining rcParams['figure.figsize'] = 40, 12(you can set your own width and height)",0.0,False,1,6768 2020-05-20 19:33:34.343,Call function when new result has been returned from API,"There is an API that I am using from another company that returns the ID-s of the last 100 purchases that have been made in their website. I have a function change_status(purchase_id) that I would like to call whenever a new purchase has been made. 
I know a workaround on how to do it, do a while True loop, keep an index last_modified_id for the last modified status of a purchase and loop all purchases from the latest to the earliest and stop once the current id is the same as last_modified_id and then put a sleeper for 10 seconds after each iteration. Is there a better way on how to do it using events in python? Like calling the function change_status(purchase_id) when the result of that API has been changed. I have been searching around for a few days but could not find about about an event and an API. Any suggestion or idea helps. Posting what I have done is usually good in stackoverflow, but I don't have anything about events. The loop solution is totally different from the events solution. Thank you","The only way to do this is to keep calling the API and watching for changes from the previous response, unless... The API provider might have an option to call your API when something is updated on their side. It is a similar mechanism to push notifications. If they provide a method to do that, you can create an endpoint on your side to do whatever you need to do when a new purchase is made, and provide them the endpoint. However, as far as I know, most API providers do not do this, and the first method is your only option. Hope this helps!",1.2,True,1,6769 2020-05-20 19:55:21.393,Tips to practice matplotlib,"I've been studying python for data science for about 5 months now. But I get really stucked when it comes to matplotlib. There's always so many options to do anything, and I can't see a well defined path to do anything. Does anyone have this problem too and knows how to deal with it?","in programming in general "" There's always so many options to do anything"". i recommend to you that read library and understand their functions and classes in a glance, then go and solve some problems from websites or give a real project if you can. if your code works do not worry and go ahead. after these try and error you have a lot of real idea about various problems and you recognize difference between these options and pros and cons of them. like me three years ago.",0.0,False,2,6770 2020-05-20 19:55:21.393,Tips to practice matplotlib,"I've been studying python for data science for about 5 months now. But I get really stucked when it comes to matplotlib. There's always so many options to do anything, and I can't see a well defined path to do anything. Does anyone have this problem too and knows how to deal with it?","I think your question is stating that you are bored and do not have any projects to make. If that is correct, there are many datasets available on sites like Kaggle that have open-source datasets for practice programmers.",0.0,False,2,6770 2020-05-21 08:14:07.720,OnetoOne (primary_key=Tue) to ForeignKey in Django,"I have a OnetoOne field with primary_key=True in a model. Now I want to change that to a ForeignKey but cannot since there is no 'id'. From this: user = models.OneToOneField(User, primary_key=True, on_delete=models.CASCADE) To this: user1 = models.ForeignKey(User, related_name='questionnaire', on_delete=models.CASCADE) Showing this while makemigrations: You are trying to add a non-nullable field 'id' to historicaluserquestionnaire without a default; we can't do that (the database needs something to populate existing rows). Please select a fix: 1) Provide a one-off default now (will be set on all existing rows with a null value for this column) 2) Quit, and let me add a default in models.py So how to do that? 
Thanks!","The problem is that your trying to remove the primary key, but Django is then going to add a new primary key called ""id"". This is non-nullable and unique, so you can't really provide a one-off default. The easiest solution is to just create a new model and copy your table over in a SQL migration, using the old user_id to populate the id field. Be sure to reset your table sequence to avoid collisions.",0.1352210990936997,False,1,6771 2020-05-23 16:28:37.970,Deploy python flask project into a website,"So I recently finished my python project, grabbing values from an API and put it into my website. Now I have no clue how I actually start the website (finding a host) and making it accessible to other people, I thought turning to here might find the solution. I have done a good amount of research, tried ""pythonanywhere"" and ""google app engine"" but seem to not really find a solution. I was hoping to be able to use ""hostinger"" as a host, as they have a good price and a good host. Contacted them but they said that they couldn't, though I could upload it to a VPS (which they have). Would it work for me to upload my files to this VPS and therefor get it to a website? or should I use another host?","A VPS would work, but you'll need to understand basic linux server admin to get things setup properly. Sounds like you don't have any experience with server admin, so something like App Engine would be great for you. There are a ton of tutorials on the internet for deploying flask to GAE.",0.0,False,1,6772 2020-05-24 19:16:38.693,"How can i change dtype from object to float64 in a column, using python?","I extracted some data from investing but columns values are all dtype = object, so i cant work with them... how should i convert object to float? (2558 6.678,08 2557 6.897,23 2556 7.095,95 2555 7.151,21 2554 7.093,34 ... 4 4.050,38 3 4.042,63 2 4.181,13 1 4.219,56 0 4.223,33 Name: Alta, Length: 2559, dtype: object) What i want is : 2558 6678.08 2557 6897.23 2556 7095.95 2555 7151.21 2554 7093.34 ... 4 4050.38 3 4042.63 2 4181.13 1 4219.56 0 4223.33 Name: Alta, Length: 2559, dtype: float Tried to use the a function which would replace , for . def clean(x): x = x.replace(""."", """").replace("","",""."") but it doesnt work cause dtype is object Thanks!","That is because there is a comma between the value Because a float cannot have a comma, you need to first replace the comma and then convert it into float result[col] = result[col].str.replace("","","""").astype(float)",0.0,False,1,6773 2020-05-25 14:54:36.873,Secure password store for Python CGI (Windows+IIS+Windows authentification),"I need to develope a python cgi script for a server run on Windows+IIS. The cgi script is run from a web page with Windows authentification. It means the script is run under different users from Windows active directory. I need to use login/passwords in the script and see no idea how to store the passwords securely, because keyring stores data for a certain user only. Is there a way how to access password data from keyring for all active OS users? I also tried to use os.environ variables, but they are stored for one web session only.",The only thing I can think of here is to run your script as a service account (generic AD account that is used just for this service) instead of using windows authentication. 
Then you can log into the server as that service account and setup the Microsoft Credential Manager credentials that way.,0.3869120172231254,False,1,6774 2020-05-26 05:06:08.297,How do i add a PATH variable in the user variables of the environment variables?,"I have a path variable in the system variables but how do i add a path variable in the user variables section since i don't have any at the moment. If there isn't a path variable in the user variables will it affect in any way? How much will values of the path variables differ from the one in environment variables to the one in user variables if there is only one user present?","to add a new variable in users variable click one new button below the user variables. 2.Then a pop window will appear asking you to type new variable name and its value, click ok after entering name and value. Thats how you can add a new variable in user variables. You should have a path variable in user variables also because ,for example while installing python you have a choice to add python path to variables here the path will be added in user variable 'path'.",0.0,False,1,6775 2020-05-26 18:44:36.240,Best way to load a Pillow Image object from binary data in Python?,I have a program that modifies PNG files with Python's Pillow library. I was wondering how I could load binary data into a PNG image from PIL's Image object. I receive the PNG over a network as binary data (e.g. the data looks like b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR...'). What is the best way to accomplish this task?,I'd suggest receiving the data into a BytesIO object from the io standard library package. You can then treat that as a file-like object for the purposes of Pillow.,0.3869120172231254,False,1,6776 2020-05-27 01:06:20.607,Clear all text in separate file,"I want to know how to delete/clear all text in a file inside another python file, I looked through stack overflow and could not find a answer, all help appreciated. Thanks!","Try: open('yourfile.txt', 'w').close()",0.1352210990936997,False,1,6777 2020-05-27 08:12:04.827,Loss function and data format for training a ''categorical input' to 'categorical output' model?,"I am trying to train a model for autonomous driving that converts input from the front camera, to a bird's eye view image. The input and output, both are segmentation masks with shape (96, 144) where each pixel has a range from 0 to 12 (each number represents a different class). Now my question is how should i preprocess my data and which loss function should i use for the model (I am trying to use a Fully convolutional Network). I tried to convert input and outputs to shape (96, 144, 13) using keras' to_categorical utility so each channel has 0s and 1s of representing a specific mask of a category. I used binary_crossentropy ad sigmoid activation for last layer with this and the model seemed to learn and loss started reducing. But i am still unsure if this is the correct way or if there are any better ways. what should be the: input and ouptput data format activation of last layer loss function","I found the solution, use categorical crossentropy with softmax activation at last layer. Use the same data format as specified in the question.",1.2,True,1,6778 2020-05-27 12:05:54.603,how to compile python kivy app for ios on Windows 10 using buildozer?,"I succesfully compiled app for android, and now I want to compile python kivy app for ios using buildozer. My operation system is Windows 10, so I don't know how to compile file for ios. 
I downloaded ubuntu console from microsoft store, that helped me to compile apk file. How to compile file for ios? I hope you help me...",You can only deploy to iOS if you're working on a MacOS machine.,0.0,False,1,6779 2020-05-27 12:06:03.667,How to copy and paste dataframe rows into a web page textarea field,"I have a dataframe with a single column ""Cntr_Number"" with x no of rows. What i trying to achieve is using selenium to copy and paste the data into the web page textarea. The constraint is that the web page text area only accept 20 rows of data per submission. So how can i impplment it using while loop or other method. Copy and paste the first 20 rows of data and click on the ""Submit"" button Copy and paste the next 20 rows of data and click on the ""Submit"" button repeat the cycle until the last row. Sorry i dont have any sample code to show but this is what I'm trying to achieve. Appreciate if could have some sample code on how to do the implmentation.","The better approach will be capture all the the data in a List, Later while pasting it you can check the length of the list, and later iterate through the list and paste the data 20 at a time in the text area. I hope this will solve your problem.",0.3869120172231254,False,1,6780 2020-05-27 12:11:19.710,"Convert the string ""%Y-%M-%D"" to ""YYYY-MM-DD"" for use in openpyxl NamedStyle number_format","TLDR: This is not a question about how to change the way a date is converted to a string, but how to convert between the two format types - This being ""%Y"" and ""YYYY"", the first having a % and the second having 4 x Y. I have the following date format ""%Y-%M-%D"" that is used throughout an app. I now need to use this within a openpyxl NamedStyle as the number_format option. I cant use it directly as it doesn't like the format, it needs to be in ""YYYY-MM-DD"" (Excel) format. Do these two formats have names? (so I can Google a little more) Short of creating a lookup table for each combination of %Y or %M to Y and M is there a conversion method? Maybe in openpyxl? I'd prefer not to use an additional library just for this! TIA!","Sounds like you are looking for a mapping between printf-style and Excel formatting. Individual date formats don't have names. And, due to the way Excel implements number formats I can't think of an easy way of covering all the possibilities. NamedStyles generally refer to a collection of formatting options such as font, border and not just number format.",0.3869120172231254,False,1,6781 2020-05-27 14:20:48.347,How do iterators know what item comes next?,"As far as I understood it, iterators use lazy evaluation, meaning that they don't actually save each item in memory, but just contain the instructions on how to generate the next item. However, let's say I have some list [1,2,3,4,5] and convert it into an iterator doing a = iter([1,2,3,4,5]). Now, if iterators are supposed to save memory space because as said they contain the instructions on how to generate the next item that is requested, how do they do it in this example? How is the iterator a we created supposed to know what item comes next, without saving the entire list to memory?","Just think for a moment about this scenario ... You have a file of over a million elements, loading the memory of the whole list of elements would be really expensive. By using an iterator, you can avoid making the program heavy by opening the file once and extracting only one element for the computation. 
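As a small illustration of that scenario (the file name is made up), a generator function reads and hands out one value at a time instead of building the whole list first:

def numbers(path):
    with open(path) as f:
        for line in f:
            yield int(line)   # only the current value is held in memory

total = sum(numbers('big_file.txt'))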
You would save a lot of memory.",0.0,False,1,6782 2020-05-27 15:21:31.810,How do modules installation work in Python?,"[On a mac] I know I can get packages doing pip install etc. But I'm not entirely sure how all this works. Does it matter which folder my terminal is in when I write this command? What happens if I write it in a specific folder? Does it matter if I do pip/pip3? I'm doing a project, which had a requirements file. So I went to the folder the requirements txt was in and did pip install requirements, but there was a specific tensorflow version, which only works for python 3.7. So I did """"""python3.7 -m pip install requirements"""""" and it worked (i'm not sure why). Then I got jupyter with brew and ran a notebook which used one of the modules in the requirements file, but it says there is no such module. I suspect packages are linked to specific versions of python and I need to be running that version of python with my notebook, but I'm really not sure how. Is there some better way to be setting up my environment than just blindley pip installing stuff in random folders? I'm sorry if this is not a well formed question, I will fix it if you let me know how.","There may be a difference between pip and pip3, depending on what you have installed on your system. pip is likely the pip used for python2 while pip3 is used for python3. The easiest way to tell is to simply execute python and see what version starts. python will run typically run the older version 2.x python and python3 is required to run python version 3.x. If you install into the python2 environment (using pip install or python -m pip install the libraries will be available to the python version that runs when you execute python. To install them into a python3 environment, use pip3 or python3 -m pip install. Basically, pip is writing module components into a library path, where import can find them. To do this for ALL users, use python3 or pip3 from the command line. To test it out, or use it on an individual basis, use a virtual environment as @Abhishek Verma said.",0.0,False,1,6783 2020-05-27 16:15:33.287,How to display text on gmaps in Jupyter Python Notebook?,"Background: I'm using the gmaps package in Jupyter Python notebook. I have 2 points A (which is a marker) and B (which is a symbol) which is connected by a line. Question: I want to somehow display text on this line that represents the distance between A and B. I have already calculated the distance between A and B but cannot display the text on the map. Is there any way to display text on the line?",I found that gmaps doesn't have this feature so I switched to folium package which has labels and popups to display text on hover and clicking the line.,1.2,True,1,6784 2020-05-28 11:22:01.147,Python ValueError if running on different laptop,"I've just built a function that is working fine on my laptop (Mac, but I'm working on a Windows virtual machine of the office laptop), but when I pass it to a colleague o'mine, it raises a ValueError: ""You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat"" The line of the code that raises the error is a simple merge that on my laptop works perfectly: df = pd.merge(df1, df2, on = ""x"", how = ""outer) The input files are exactly the same (taken directly from the same remote folder). 
I totally don't know how to fix the problem, and I don't understand why on my laptop it works (even if I open a new script or I restart the kernel, so no stored variables around) and in the one of my colleague it doesn't. Thanks for your help!","my guess (a wild guess) is that the data from the 2 tab-separated CSV files (i.e., TSV files) is somehow converted using different locales on your computer and your colleague's computer. Check if you have locale-dependent operations that could cause a number with the ""wrong"" decimal separator not to be recognized as a number. This should not happen in pd.read_csv() because the decimal parameter has a well-defined default value of ""."". But from an experience I had with timestamps in another context, one timestamp with a ""bad"" format can cause the whole column to be of the wrong type. So if just one number of just one of the two files, in the column you are merging on, has a decimal separator, and this decimal separator is only recognized as such on your machine, only on your machine the join will succeed (I'm supposing that pandas can join numeric columns even if they are of different type).",0.0,False,1,6785 2020-05-28 19:55:54.440,"Can terraform run ""apply"" for multiple infrastructure/workspace in parallel?","We have one terraform instance and script which could create infra in azure. We would like to use same scripts to create/update/destroy isolated infra for each one of our customers on azure . We have achieved this by assigning one workspace for each client,different var files and using backend remote state files on azure. Our intend is to create a wrapper python program that could create multiple threads and trigger terraform apply in parallel for all workspaces. This seems to be not working as terraform runs for one workspace at a time. Any suggestions/advice on how we can achieve parallel execution of terraform apply for different workspaces?","It's safe to run multiple Terraform processes concurrently as long as: They all have totally distinct backend configurations, both in terms of state storage and in terms of lock configuration. (If they have overlapping lock configuration then they'll mutex each other, effectively serializing the operations in spite of you running multiple copies.) They work with an entirely disjoint set of remote objects, including those represented by both managed resources (resource blocks) and data resources (data blocks). Most remote APIs do not support any sort of transaction or mutex concept directly themselves, so Terraform cannot generally offer fine-grained mutual exclusion for individual objects. However, multiple runs that work with entirely separate remote objects will not interact with one another. Removing a workspace (using terraform workspace delete) concurrently with an operation against that workspace will cause undefined behavior, because it is likely to delete the very objects Terraform is using to track the operation. There is no built-in Terraform command for running multiple operations concurrently, so to do so will require custom automation that wraps Terraform.",0.9950547536867304,False,1,6786 2020-05-28 20:40:14.903,How do you request device connection string in azure using python and iotHub library?,I am wondering how can you get device connection string from IotHub using python in azure? any ideas? the device object produced by IoTHubRegisterManager.Create_device_with_sas(...) 
doesn't seem to contain the property connection string.,"You can get a device connection string from the device registry. However, it is not recommended that you do that on a device. The reason being is that you will need the IoT hub connection string to authenticate with your hub so that you can read the device registry. If your device is doing that and it is compromised then the perpetrator now has your IoT hub connection string and could cause all kinds of mayhem. You should specifically provide each device instance with its connection string. Alternatively, you could research the Azure DPS service which will provide you with device authentication details in a secure manner.",0.0,False,1,6787 2020-05-29 21:43:13.640,I am not allowed to run a python executable on other pcs,"I was doing a game in tkinter, then I make it executable with PyInstaller and sent it to my friends so they can run it and tell me how it feels. It seems that they could download the file, but can't open it because windows forbade them telling that it's not secure and not letting them choose to assume the risk or something. They tried to run as administrator and still nothing changed. What should I do or what I should add to my code so that windows can open it without problem and why windows opens other executable files without saying that(current error that my executable gets)?","compress it as a .zip file and then it will most probably work or install NSIS and create a windows installer for it.",0.0,False,1,6788 2020-05-30 06:09:20.403,how to implement csrf without csrf token in django,"In django, if I want to use csrf token, I need to imbed a form with csrf token in django template. However as a backend-engineer I am co-working with a front-end engineer whose code is not available for me. So I caanot use the template. In this case, if I want still the csrf function. what should I do?","you should ask the coworker to embed the csrf token in the form he is sending you you can get it from document.Cookies if he doesnt want to or cannot use the {% csrf %} tag",0.0,False,1,6789 2020-05-30 08:51:11.993,How to analyze crawl results,"I crawled and saved the user's website usage lists. I want to analyze the results of the crawl, but I wonder how there is a way. First of all, what I thought was Word Cloud. I am looking for a way to track user's personal preferences with user's computer history. I want a way to visualize personal tendencies, etc. at a glance. Or I'm looking for a way to find out if there's no risk of suicide or addiction as a result of the search. thank you.","If you want to visualize data and make analysis on it matplotlib would be good start , again it depends a lot on your data. Matplotlib and seaborn are plotting libraries that are good for representing quantitative data and get some basic analysis at least.",0.0,False,1,6790 2020-06-01 16:31:56.840,Surfaces or Sprites in Pygame?,"Good evening, I'm making a platformer and would like to know when you should use one of the both. For example for: 1)The player controlled character 2)The textured tiles that make up the level 3)The background Should/Could you make everything with sprites ? I just want to know how you would do it if you were to work on a pygame project. I ask this because I see lots of pygame tutorials that explain adding textures by using surfaces but then in other tutorials, they use sprite objects instead.","Yes you could make everything including the background with sprites. 
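For the player-controlled character, for example, a minimal sprite could look like this (the size, colour and start position are arbitrary):

import pygame

class Player(pygame.sprite.Sprite):
    def __init__(self, x, y):
        super().__init__()
        self.image = pygame.Surface((32, 32))   # or pygame.image.load('player.png')
        self.image.fill((255, 0, 0))
        self.rect = self.image.get_rect(topleft=(x, y))

all_sprites = pygame.sprite.Group(Player(100, 100))
# in the game loop: all_sprites.update() and all_sprites.draw(screen)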
It usually does not make sense for the background though (unless you;re doing layers of some form). The rest often make senses as sprite, but that depends on your situation.",1.2,True,1,6791 2020-06-01 22:09:24.457,"Threading in Python, ""communication"" between threads","I have two functions: def is_updated_database(): is checking if database is updated and the other onedef scrape_links(database): is scraping through set of links(that it downloaded from aforementioned database). So what I want do is when def is_updated_database(): finds that the updated is downloaded, I want to stop def scrape_links(database): and reload it with a new function parameter(database which would be a list of new links). My attempt: I know how to run two threads, but I have no idea how to ""connect"" them, so that if something happens to one then something should happen to another one.","Well, one way to solve this problem, may be the checking of database state, and if something new appears there, you could return the new database object, and after that scrape the links, probably this is losing it's multithreading functionality, but that's the way it works. I don't think that any code examples are required here for you to understand what I mean.",0.0,False,1,6792 2020-06-02 05:00:54.747,"Given the dataset, how to select the learning algorithm?","I've to build an ML model to classify sentences into different categories. I have a dataset with 2 columns (sentence and label) and 350 rows i.e. with shape (350, 2). To convert the sentences into numeric representation I've used TfIdf vectorization, and so the transformed dataset now has 452 columns (451 columns were obtained using TfIdf, and 1 is the label) i.e. with shape (350, 452). More generally speaking, I have a dataset with a lot more features than training samples. In such a scenario what's the best classification algorithm to use? Logistic Regression, SVM (again what kernel?), neural networks (again which architecture?), naive Bayes or is there any other algorithm? How about if I get more training samples in the future (but the number of columns doesn't increase much), say with a shape (10000, 750)? Edit: The sentences are actually narrations from bank statements. I have around 10 to 15 labels, all of which I have labelled manually. Eg. Tax, Bank Charges, Loan etc. In future I do plan to get more statements and I will be labelling them as well. I believe I may end up having around 20 labels at most.","With such a small training set, I think you would only get any reasonable results by getting some pre-trained language model such as GPT-2 and fine tune to your problem. That probably is still true even for a larger dataset, a neural net would probably still do best even if you train your own from scratch. Btw, how many labels do you have? What kind of labels are those?",0.0,False,1,6793 2020-06-02 06:45:38.810,What is the most efficient way to push and pop a list in Python?,"In Python how do I write code which shifts off the last element of a list and adds a new one to the beginning - to run as fast as possible at execution? There are good solutions involving the use of append, rotate etc but not all may translate to fast execution.","Don't use a list. A list can do fast inserts and removals of items only at its end. You'd use pop(-1) and append, and you'd end up with a stack. Instead, use collections.deque, which is designed for efficient addition and removal at both ends. Working on the ""front"" of a deque uses the popleft and appendleft methods. 
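For the exact operation in the question (drop the last element, push a new one onto the front) that looks like this, and both calls are O(1):

from collections import deque

d = deque([1, 2, 3, 4, 5])
d.pop()           # remove the last element
d.appendleft(0)   # add a new first element
# d is now deque([0, 1, 2, 3, 4])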
Note, ""deque"" means ""double ended queue"", and is pronounced ""deck"".",0.9950547536867304,False,1,6794 2020-06-02 16:26:04.147,How to set tkinter Entry Border Radius,"This is my first question to here. I don't know how to set Border Radius for Tkinter Entry, Thanks for your Help!","There is no option to set a border radius on the tkinter or ttk Entry widgets, or any of the other widgets in those modules. Tkinter doesn't support the concept of a border radius.",1.2,True,1,6795 2020-06-02 18:46:29.100,A new table for each user created,I am using Django 3.0 and I was wondering how to create a new database table linked to the creation of each user. In a practical sense: I want an app that lets users add certain stuff to a list but each user to have a different list where they can add their stuff. How should I approach this as I can't seem to find the right documentation... Thanks a lot !!!,"This is too long for a comment. Creating a new table for each user is almost never the right way to solve a problem. Instead, you just have a userStuff table that maintains the lists. It would have columns like: userId stuffId And, if you want the stuff for a given user, just use a where clause.",1.2,True,1,6796 2020-06-02 19:12:03.813,How to enable PyCharm autocompletion for imported library (Discord.py),How do I enable method autocompletion for discord.py in PyCharm? Until now I've been doing it the hard way by looking at the documentation and I didn't even know that autocomplete for a library existed. So how do I enable it?,"The answer in my case was to first create a new interpreter as a new virtual environment, copy over all of the libraries I needed (there is an option to inherit all of the libraries from the previous interpreter while setting up the new one) and then follow method 3 from above. I hope this helps anyone in the future!",1.2,True,1,6797 2020-06-03 18:20:40.193,How to install turicreate on windows 7?,"Can anyone tell me how to install turicreate on windows 7? I am using python of version 3.7. I have tried using pip install -U turicreate to install but failed. Thanks in advance","I am quoting from Turicreate website: Turi Create supports: macOS 10.12+ Linux (with glibc 2.12+) Windows 10 (via WSL) System Requirements Python 2.7, 3.5, or 3.6 Python 3.7 macOS only x86_64 architecture So Windows 7 is not supported in this case.",0.0,False,1,6798 2020-06-04 04:50:55.740,Identify domain related important keywords from a given text,"I am relatively new to the field of NLP/text processing. I would like to know how to identify domain-related important keywords from a given text. For example, if I have to build a Q&A chatbot that will be used in the Banking domain, the Q would be like: What is the maturity date for TRADE:12345 ? From the Q, I would like to extract the keywords: maturity date & TRADE:12345. From the extracted information, I would frame a SQL-like query, search the DB, retrieve the SQL output and provide the response back to the user. Any help would be appreciated. Thanks in advance.","So, this is where the work comes in. Normally people start with a stop word list. There are several, choose wisely. But more than likely you'll experiment and/or use a base list and then add more words to that list. Depending on the list it will take out ""what, is, the, for, ?"" Since this a pretty easy example, they'll all do that. But you'll notice that what is being done is just the opposite of what you wanted. 
You asked for domain-specific words but what is happening is the removal of all that other cruft (to the library). From here it will depend on what you use. NLTK or Spacy are common choices. Regardless of what you pick, get a real understanding of concepts or it can bite you (like pretty much anything in Data Science). Expect to start thinking in terms of linguistic patterns so, in your example: What is the maturity date for TRADE:12345 ? 'What' is an interrogative, 'the' is a definite article, 'for' starts a prepositional phrase. There may be other clues such as the ':' or that TRADE is in all caps. But, it might not be. That should get you started but you might look at some of the other StackExchange sites for deeper expertise. Finally, you want to break a question like this into more than one question (assuming that you've done the research and determined the question hasn't already been asked -- repeatedly). So, NLTK and NLP are decently new, but SQL queries are usually a Google search.",0.0,False,1,6799 2020-06-04 12:37:35.410,Devpi REST API - How to retrieve versions of packages,"I'm trying to retrieve versions of all packages from specific index. I'm trying to sending GET request with /user/index/+api suffix but it not responding nothing intresting. I can't find docs about devpi rest api :( Has anyone idea how could I do this? Best regards, Matt.",Simply add header Accept: application/json - it's working!,1.2,True,1,6800 2020-06-04 13:32:52.410,Use HTML interface to control a running python script on a lighttpd server,"I am trying to find out what the best tool is for my project. I have a lighttpd server running on a raspberry pi (RPi) and a Python3 module which controls the camera. I need a lot of custom control of the camera, and I need to be able to change modes on the fly. I would like to have a python script continuously running which waits for commands from the lighttpd server which will ultimately come from a user interacting with an HTML based webpage through an intranet (no outside connections). I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. I can't separate them into multiple functions because only one script can control the camera at a time. Is the right solution to set up a Flask app and have the lighttpd send requests there, or is there a better tool for this?","You have several questions merged into one, and some of them are opion based questions as such I am going to avoid answering those. These are the opinion based questions. I am trying to find out what the best tool is for my project. Is the right solution to set up a Flask app and have the lighttpd send requests there Is there a better tool for this? The reason I point this out is not because your question isnn't valid but because often times questions like these will get flagged and/or closed. Take a look at this for future referece. Now to answer this question: "" I don't know how to interact with the script once it is actually running to execute individual functions"" Try doing it this way: Modify your script to use threads and/or processes. You will have for example a continously running thread which would be the camera. You would have another non blocking thread listening to IO commands. 
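Very roughly, and only as a sketch (how the commands actually arrive is described next), the two parts could share a queue:

import threading, queue

commands = queue.Queue()

def camera_loop():
    mode = 'idle'
    while True:
        try:
            mode = commands.get_nowait()   # switch behaviour when a command arrives
        except queue.Empty:
            pass
        # ... capture / process a frame according to mode ...

threading.Thread(target=camera_loop, daemon=True).start()
# the listener thread calls commands.put('some-command') for each request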
Your IO commands would be comming through command line arguments. Your IO thread upon recieving an IO command would redirect your running camera thread to a specific function as needed. Hope that helps and good luck!!",0.0,False,2,6801 2020-06-04 13:32:52.410,Use HTML interface to control a running python script on a lighttpd server,"I am trying to find out what the best tool is for my project. I have a lighttpd server running on a raspberry pi (RPi) and a Python3 module which controls the camera. I need a lot of custom control of the camera, and I need to be able to change modes on the fly. I would like to have a python script continuously running which waits for commands from the lighttpd server which will ultimately come from a user interacting with an HTML based webpage through an intranet (no outside connections). I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. I can't separate them into multiple functions because only one script can control the camera at a time. Is the right solution to set up a Flask app and have the lighttpd send requests there, or is there a better tool for this?","I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. Given your experience, one solution is to do what you know. lighttpd can execute your script via FastCGI. Python3 supports FastCGI with Flask (or other frameworks). A python3 app which serially processes requests will have one process issuing commands to the camera. I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. Configure your Flask app to run as a FastCGI app instead of as a standalone webserver.",1.2,True,2,6801 2020-06-04 17:52:29.977,How to prevent direct access to cert files when connecting MQTT client with Python,"I am using the pho MQTT client library successfully to connect to AWS. After the mqtt client is created, providing the necessary keys and certificates is done with a call to client.tls_set() This method requires file paths to root certificate, own certificate and private key file. All is well and life is good except that I now need to provide this code to external contractors whom should not have direct access to these cert and key files. The contractors have a mix of PC and macOS systems. On macOS we have keychain I am familiar with but do not know how to approach this with python - examples/library references would be great. On the PC I have no idea which is the prevalent mechanism to solve this. To add to this, I have no control over the contractor PCs/Macs - i.e., I have no ability to revoke an item in their keychain. How do I solve this? Sorry for being such a noob in security aspects. No need to provide complete examples, just references to articles to read, courses to follow and keywords to search would be great - though code examples will be happily accepted also of course.","Short answer: you don't. Longer answer: If you want them to be able connect then you have no choice but to give them the cert/private key that identifies that device/user. 
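On the client side that is just the tls_set call you already use, pointed at that contractor's own pair (file names and endpoint are placeholders):

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.tls_set(ca_certs='aws-root-ca.pem',
               certfile='contractor-a.cert.pem',    # unique per contractor
               keyfile='contractor-a.private.key')
client.connect('your-endpoint.iot.eu-west-1.amazonaws.com', 8883)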
The control you have is issue each contractor with their own unique key/cert and if you believe key/cert has been miss used, revoke the cert at the CA and have the broker check the revocation list. You can protect the private key with a password, but again you have to either include this in the code or give it to the contractor. Even if the contractors were using a device with a hardware keystore (secure element) that you could securely store the private key in, all that would do is stop the user from extracting the key and moving it to a different machine, they would still be able to make use of the private key for what ever they want on that machine. The best mitigation is to make sure the certificate has a short life and control renewing the certificate, this means if a certificate is leaked then it will stop working quickly even if you don't notice and explicitly revoke it.",0.3869120172231254,False,1,6802 2020-06-04 20:38:18.423,Importing module to VS code,"im very new in programming and i learn Python. I'm coding on mac btw. I'd like to know how can i import some modules in VS code. For exemple, if i want to use the speedtest module i have to download it (what i did) and then import it to my code. But it never worked and i always have the error no module etc. I used pip to install each package, i have them on my computer but i really don't know to import them on VS code. Even with the terminal of the IDE. I know it must be something very common for u guys but i will help me a lot. Thx","Quick Summary This might not be an issue with VS Code. Problem: The folder to which pip3 installs your packages is not on your $PATH. Fix: Go to /Applications/Python 3.8 in Finder, and run the Update Shell Profile.command script. Also, if you are using pip install , instead of pip3 install that might be your problem. Details Your Mac looks for installed packages in several different folders on your Mac. The list of folders it searches is stored in an environment variable called $PATH. Paths like /Library/Frameworks/Python.framework/Versions/3.8/bin should be in the $PATH environment variable, since that's where pip3 installs all packages.",1.2,True,1,6803 2020-06-05 09:05:11.073,How to install pip and python modules with a single batch file?,"I really don't understand how batch files work. But I made a python script for my father to use in his work. And I thought installing pip and necessary modules with a single batch file would make it a lot easier for him. So how can I do it? The modules I'm using in script are: xlrd, xlsxwriter and tkinter.","You can create a requirements.txt file then use pip install -r requirements.txt to download all modules, if you are working on a virtual environment and you only have the modules your project uses, you can use pip3 freeze >> requirements.txt This is not a batch file but it will work just fine and it is pretty easy",0.296905446847765,False,1,6804 2020-06-05 12:25:21.517,Python Contour Plot/HeatMap,"I have x and y coordinates in a df from LoL matches and i want to create a contour plot or heat map to show where the player normally moves in a match. Does any one know how can I do it?","A contour plot or heat map needs 3 values. You have to provide x, y and z values in order to plot a contour since x and y give the position and z gives the value of the variable you want to show the contour of as a variable of x and y. If you want to show the movement of the players as a function of time you should look at matplotlib's animations. 
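A minimal FuncAnimation sketch for that case could look like this (assuming x and y are arrays of one player's positions ordered by time; the axis limits are made up):

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
point, = ax.plot([], [], 'ro')
ax.set_xlim(0, 15000)
ax.set_ylim(0, 15000)

def update(i):
    point.set_data([x[i]], [y[i]])   # move the marker to the i-th position
    return point,

anim = FuncAnimation(fig, update, frames=len(x), interval=50)
plt.show()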
Or if you want to show the ""players density field"" you have to calculate it.",0.0,False,1,6805 2020-06-06 13:00:36.307,Login required in django,"I am developing ecommerce website in django . I have view ( addToCart) I want sure before add to cart if user logged in or not so that i use @login_required('login') before view but when click login it show error (can't access to page ). Note that: normal login is working","Please check the following 1. Add login url on settings 2. Add redirect url on login required decorator 3. If you create a custom login view make sure to check next kwargs",0.0,False,1,6806 2020-06-06 23:06:30.737,Running all Python scripts with the same name across many directories,"I have a file structure that looks something like this: Master: First train.py other1.py Second train.py other2.py Third train.py other3.py I want to be able to have one Python script that lives in the Master directory that will do the following when executed: Loop through all the subdirectories (and their subdirectories if they exist) Run every Python script named train.py in each of them, in whatever order necessary I know how to execute a given python script from another file (given its name), but I want to create a script that will execute whatever train.py scripts it encounters. Because the train.py scripts are subject to being moved around and being duplicated/deleted, I want to create an adaptable script that will run all those that it finds. How can I do this?","If you are using Windows you could try running them from a PowerShell script. You can run two python scripts at once with just this: python Test1.py python Folder/Test1.py And then add a loop and or a function that goes searching for the files. Because it's Windows Powershell, you have a lot of power when it comes to the filesystem and controlling Windows in general.",0.1352210990936997,False,2,6807 2020-06-06 23:06:30.737,Running all Python scripts with the same name across many directories,"I have a file structure that looks something like this: Master: First train.py other1.py Second train.py other2.py Third train.py other3.py I want to be able to have one Python script that lives in the Master directory that will do the following when executed: Loop through all the subdirectories (and their subdirectories if they exist) Run every Python script named train.py in each of them, in whatever order necessary I know how to execute a given python script from another file (given its name), but I want to create a script that will execute whatever train.py scripts it encounters. Because the train.py scripts are subject to being moved around and being duplicated/deleted, I want to create an adaptable script that will run all those that it finds. How can I do this?","Which OS are you using ? If Ubuntu/CentOS try this combination: import os //put this in master and this lists every file in master + subdirectories and then after the pipe greps train.py train_scripts = os.system(""find . 
-type d | grep train.py "") //next execute them python train_scripts",0.1352210990936997,False,2,6807 2020-06-07 13:31:11.780,How to transfer data from Quantopian to Excel,"Anyone know how you get a dataframe from Quantopian to excel - I try - results.to_excel results are the name of my dataframe","Try this : Name of your DataFrame: Result.to_csv(""result.csv"") here Result is your DataFrame Name , while to_csv() is a function",0.0,False,1,6808 2020-06-07 15:39:09.570,How do i delete instances of a class from within it,"So in my case, I have a class Gnome for example and I want to destroy each object of this class when its variable health reaches 0. Is there a way for me to delete each instance of Gnome when its hp is 0 or should I ""mark it for death"" and delete everything that was marked? Either way, how can I do this?","Unfortunately, there isn't a way to do what you're wanting. Every Python object maintains a record of how many references there are to it. Once the reference count reaches 0, the Python garbage collector will clean it up. As long as you still have references to the instances, they will persist.",0.0,False,1,6809 2020-06-07 19:41:32.177,Use Pycharm and Spyder Together,"Recently i read this comment I like Spyder for interacting with my variables and PyCharm for editing my scripts. Alternative Solution: use both simultaneously. As I edit in PyCharm (on Mac OS), the script updates live in spyder. Best of both worlds! i want to understand how to use them together and live update the script in Spyder ?","After some research, I find that there is no variable explorer like Sypder option in PyCharm. To work with PyCharm and Spyder together, we need to use the two IDEs parallelly i.e., to write the code we can use the PyCharm and to view the Spyder we can just alt tab to the Spyder window and re run the code in Spyder. It will not take much time to re run the code again. We just need to press Ctrl + A and Ctrl + Enter, then the variables will get updated in the variable explorer. Spyder variable view is amazing especially data frames. Only thing we need to remember is, we need to install the packages in both PyCharm and Sypder. If we install the package in PyCharm, it will not reflect in Spyder. So we need to install through Conda Prompt.",0.0,False,1,6810 2020-06-08 04:26:50.710,How can i make a button in my web that when it is being clicked. it can send a data in my python script,"I have a project and i was wondering how can i make a button in my web that when it is being clicked it can display a string in my python terminal Thank you in advance","You need to create an http server in your Python, and call it with fetch in JavaScript. You can pass data in the query parameters.",0.0,False,1,6811 2020-06-08 13:35:29.377,"Camcapture on Notebook with showing the video on Pepper. (Choreographe, Python)","I have a question about a programm with Python. I must capture my Notebookcam with Pepper and show it on the Display from Pepper. Now I have the Problem, the programming with Choreograph is a little bit different and I don't know how I can handle this Programm. I would be happy if you could answer. Thanks.","You cannot use Choregraphe to retrieve the video remotely because the applications made using Choregraphe are run on the robot, not on your PC. 
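(A side note on the train.py answer a little further up: the find/os.system snippet there is not valid Python as written — // is not a Python comment, and os.system returns an exit status rather than the list of files. A runnable sketch of the same idea, walking the directory tree and executing every train.py it finds, could look like this:)

import os
import subprocess
import sys

# Walk everything below the Master directory and run each train.py in place
for dirpath, dirnames, filenames in os.walk("."):
    if "train.py" in filenames:
        print("Running", os.path.join(dirpath, "train.py"))
        subprocess.run([sys.executable, "train.py"], cwd=dirpath, check=False)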
You need to write separate program on your PC to retrieve the video.",0.0,False,1,6812 2020-06-08 20:28:02.807,Dunders no longer combined in Pycharm?,"I recently switched to the new Pycharm version and in the contrary to the previous versions it seems like two underscores are no longer combined like this: __ Does someone know how to switch it back, so the IDE combines them?",Please try to enable: File - Settings - Editor - Font - Enable font ligatures,1.2,True,1,6813 2020-06-09 14:36:13.823,How to bring a web browser with .ipybn link opened in my Jupyter?,"I have received a link with .ipynb link. I am new to Python and Jupyter and I need to open the link to work on the details inside. The link opens in my internet browser and I couldn't properly see the contents and bring it in to a Jupyter notebook. Could anyone please give me a tip how to handle such links for Python/Jupyter?","Adding to Vinzee's answer: jupyter notebook starts in your home folder and you can't move up from there; only down into subfolders. Open jupyter to see what folder it starts in, and make sure that you put the .ipynb file in that folder or one of its subfolders.",1.2,True,1,6814 2020-06-09 22:36:41.577,Project directory accidentally in sys.path - how to remove it?,"I don't know how it happened, but my sys.path now apparently contains the path to my local Python project directory, let's call that /home/me/my_project. (Ubuntu). echo $PATH does not contain that path and echo $PYTHONPATH is empty. I am currently preparing distribution of the package and playing with setup.py, trying to always work in an virtualenv. Perhaps I messed something up while not having a virtualenv active. Though I trying to re-install using python3 setup.py --record (in case I did an accidental install) fails with insufficient privileges - so I probably didn't accidentally install it into the system python. Does anyone have an idea how to track down how my module path got to the sys.path and how to remove that?","I had the same problem. I don't have the full understanding of my solution, but here it is nonetheless. My solution Remove my package from site-packages/easy-install.pth (An attempt at) explanation The first hurdle is to understand that PYTHONPATH only gets added to sys.path, but is not necessarily equal to it. We are thus after what adds the package into sys.path. The variable sys.path is defined by site.py. One of the things site.py does is automatically add packages from site-packages into sys.path. In my case, I incorrectly installed my package as a site-package, causing it to get added to easy-install.pth in site-packages and thus its path into sys.path.",0.0,False,1,6815 2020-06-10 14:46:18.217,Low-latecy response with Ray on large(isch) dataset,"TL;DR What's the fasted way to get near-zero loading time for a pandas dataset I have in memory, using ray? Background I'm making an application which uses semi-large datasets (pandas dataframes between 100MB to 700MB) and are trying to reduce each query time. For a lot of my queries the data loading is the majority of the response times. The datasets are optimized parquet files (categories instead of strings, etc) which only reads the columns it needs. Currently I use a naive approach that per-requests loads the require dataset (reading the 10-20 columns out of 1000 I need from the dataset) and then filter out the rows I need. 
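(To make that load step concrete, a rough sketch — the file name and column names are invented — of reading only the needed columns from a parquet file with pandas and then filtering the rows:)

import pandas as pd

# Read just the columns this request needs, then keep only the relevant rows
df = pd.read_parquet("datasets/assets.parquet", columns=["asset_id", "price", "region"])
subset = df[df["region"] == "north"]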
A typical request: Read and parse the contract (~50-100ms) Load the dataset (10-20 columns) (400-1200ms) Execute pandas operations (~50-100ms) Serialise the results (50-100ms) I'm now trying to speed this up (reduce or remove the load dataset step). Things I have tried: Use Arrow's new row-level filtering on the dataset to only read the rows I need as well. This is probably a good way in the future, but for now the new Arrow Dataset API which is relies on is significantly slower than reading the full file using the legacy loader. Optimize the hell out of the datasets. This works well to a point, where things are in categories, the data types is optimized. Store the dataframe in Ray. Using ray.put and ray.get. However this doesn't actually improve the situation since the time consuming part is deserialization of the dataframe. Put the dataset in ramfs. This doesn't actually improve the situation since the time consuming part is deserialization of the dataframe. Store the object in another Plasma store (outside of ray.put) but obviously the speed is the same (even though I might get some other benefits) The datasets are parquet files, which is already pretty fast for serialization/deserialization. I typically select about 10-20 columns (out of 1000) and about 30-60% of the rows. Any good ideas on how to speed up the loading? I haven't been able to find any near zero-copy operations for pandas dataframes (i.e without the serialization penalty). Things that I am thinking about: Placing the dataset in an actor, and use one actor per thread. That would probably give the actor direct access to the dataframe without any serialization, but would require me to do a lot of handling of: Making sure I have an actor per thread Distribute requests per threads ""Recycle"" the actors when the dataset gets updated Regards, Niklas","After talking to Simon on Slack we found the culprit: simon-mo: aha yes objects/strings are not zero copy. categorical or fixed length string works. for fixed length you can try convert them to np.array first Experimenting with this (categorical values, fixed length strings etc) allows me not quite get zero-copy but at least fairly low latency (~300ms or less) when using Ray Objects or Plasma store.",1.2,True,1,6816 2020-06-10 15:08:46.340,linking web application's backend in python and frontend in flutter,I am making a CRM web application. I am planning to do its backend in python(because I only know that language better) and I have a friend who uses flutter for frontend. Is it possible to link these two things(flutter and python backend)? If yes how can it be done...and if no what are the alternatives I have?,I used $.ajax() method in HTML pages and then used request.POST['variable_name_used_in_ajax()'] in the views.py,1.2,True,2,6817 2020-06-10 15:08:46.340,linking web application's backend in python and frontend in flutter,I am making a CRM web application. I am planning to do its backend in python(because I only know that language better) and I have a friend who uses flutter for frontend. Is it possible to link these two things(flutter and python backend)? If yes how can it be done...and if no what are the alternatives I have?,"Yes you both can access same Django rest framework Backend. Try searching for rest API using Django rest framework and you are good to go. Other alternatives are Firebase or creating rest API with PHP. You would need to define API endpoints for different functions of your app like login,register etc. Django rest framework works well with Flutter. 
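As a rough idea of what such an endpoint can look like on the Django side — a minimal sketch only, with an invented view name and route, assuming djangorestframework is installed:

# views.py
from rest_framework.views import APIView
from rest_framework.response import Response

class PingView(APIView):
    def get(self, request):
        # The Flutter app can call this route with the http package
        return Response({"status": "ok"})

# urls.py
# from django.urls import path
# from .views import PingView
# urlpatterns = [path("api/ping/", PingView.as_view())]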
I have tried it. You could also host it in Heroku Use http package in flutter to communicate with the Django server.",0.0,False,2,6817 2020-06-10 16:24:39.130,Building Tensorflow 1.5,"I have an old Macbook Pro 3,1 running ubuntu 20.04 and python 3.8. The mac CPU doesn't have support for avx (Advanced Vector Extensions) which is needed for tensorflow 2.2 so whilst tensorflow installs, it fails to run with the error: illegal instruction (core dumped) I've surfed around and it seems that I need to use tensorflow 1.5 however there is no wheel for this for my configuration and I have the impression that I need to build one for myself. So here's my question... how do I even start to do that? Does anyone have a URL to Building-Stuff-For-Dummies or something similar please? (Any other suggestions also welcome) Thanks in advance for your help",Update: I installed python 3.6 alongside the default 3.8 and then installed tensorflow 1.5 and it looks like it works now (albeit with a few 'future warnings'.),0.0,False,2,6818 2020-06-10 16:24:39.130,Building Tensorflow 1.5,"I have an old Macbook Pro 3,1 running ubuntu 20.04 and python 3.8. The mac CPU doesn't have support for avx (Advanced Vector Extensions) which is needed for tensorflow 2.2 so whilst tensorflow installs, it fails to run with the error: illegal instruction (core dumped) I've surfed around and it seems that I need to use tensorflow 1.5 however there is no wheel for this for my configuration and I have the impression that I need to build one for myself. So here's my question... how do I even start to do that? Does anyone have a URL to Building-Stuff-For-Dummies or something similar please? (Any other suggestions also welcome) Thanks in advance for your help",Usually there are instructions for building in the repository's README.md. Isn't there such for TensorFlow? It would be odd.,0.0,False,2,6818 2020-06-10 17:22:00.360,xgboost how to copy model,"In the xgboost documentation they refer to a copy() method, but I can't figure out how to use it since if foo is my model, neither bar = foo.copy() nor bar=xgb.copy(foo) works (xgboost can't find a copy() attribute of either the module or the model). Any suggestions?","It turns out that copy() is a method of the Booster object, but a (say) XGBClassifier is not one, so if using the sklearn front end, you do bar = foo.get_booster().copy()",0.2012947653214861,False,1,6819 2020-06-11 02:21:57.910,Need help getting data using Selenium,"I'm trying to get Python and selenium to store the ""1292"" in the following html script and cant figure out why it won't work. I've tried using find_element_by_xpath as well as placing a wait before it and I keep getting this error ""Message: no such element: Unable to locate element:"" Any ideas on how else I can accomplish this? Thanks 1292 ","You can try: driver.find_element_by_xpath(""//tspan[text()='1292']"").text to obtain the string ""1292"".",0.0,False,1,6820 2020-06-11 07:07:02.407,Alternatives for interaction between C# and Python application -- Pythonnet vs DLL vs shared memory vs messaging,"We have a big C# application, would like to include an application written in python and cython inside the C# Operating system: Win 10 Python: 2.7 .NET: 4.5+ I am looking at various options for implementation here. (1) pythonnet - embed the python inside the C# application, if I have abc.py and inside the C#, while the abc.py has a line of ""import numpy"", does it know how to include all python's dependencies inside C#? 
(2) Convert the python into .dll - Correct me if i am wrong, this seems to be an headache to include all python files and libraries inside clr.CompileModules. Is there any automatically solution? (and clr seems to be the only solution i have found so far for building dll from python. (3) Convert .exe to .dll for C# - I do not know if i can do that, all i have is the abc.exe constructed by pyinstaller (4) shared memory seems to be another option, but the setup will be more complicated and more unstable? (because one more component needs to be taken care of?) (5) Messaging - zeromq may be a candidate for that. Requirements: Both C# and python have a lot of classes and objects and they need to be persistent C# application need to interact with Python Application They run in real-time, so performance for communication does matter, in milliseconds space. I believe someone should have been through a similar situation and I am looking for advice to find the best suitable solution, as well as pros and cons for above solution. Stability comes first, then the less complex solution the better it is.",For variant 1: in my TensorFlow binding I simply add the content of a conda environment to a NuGet package. Then you just have to point Python.NET to use that environment instead of the system Python installation.,0.0,False,1,6821 2020-06-11 15:48:31.190,Test interaction between flask apps,"I have a flask app that is intended to be hosted on multiple host. That is, the same app is running on different hosts. Each host can then send a request to the others host to take some action on the it's respective system. For example, assume that there is systems A and B both running this flask app. A knows the IP address of B and the port number that the app is hosted on B. A gets a request via a POST intended for B. A then needs to forward this request to B. I have the forwarding being done in a route that simply checks the JSON attached to the POST to see if it is the intended host. If not is uses python's requests library to make a POST request to the other host. My issue is how do I simulate this environment (two different instance of the same app with different ports) in a python unittest so I can confirm that the forwarding is done correctly? Right now I am using the app.test_client() to test most of the routes but as far as I can tell the app.test_client() does not contain a port number or IP address associated with it. So having the app POST to another app.test_client() seems unlikely. I tried hosting the apps in different threads but there does not seem to be a clean and easy way to kill the thread once app.run() starts, can't join as app.run() never exits. In addition, the internal state of the app (app.config) would be hidden. This makes verifying that A does not do the request and B does hard. Is there any way to run two flask app simultaneously on different port numbers and still get access to both app's app.config? Or am I stuck using the threads and finding some other way to make sure A does not execute the request and B does? Note: these app do not have any forums so there is no CSRF.","I ended up doing two things. One, I started using patch decorator from the mock library to fake the response form systems B. More specifically I use the @patch('requests.post') then in my code I set the return value to ""< Response [200]>"". However this only makes sure that requests.post is called, not that the second system processed it correctly. 
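A minimal sketch of that first approach (the /forward route, the payload and the create_app factory are placeholders, not the real app's names):

from unittest.mock import Mock, patch

def test_forwarding_calls_system_b():
    app = create_app()                   # assumed application factory for the Flask app
    client = app.test_client()

    fake_reply = Mock(status_code=200)   # pretend system B answered 200 OK
    with patch("requests.post", return_value=fake_reply) as mock_post:
        resp = client.post("/forward", json={"target": "B", "value": 123})

    assert resp.status_code == 200
    mock_post.assert_called_once()       # A really tried to forward the request to B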
The second thing I did was write a separate test that makes the request that should have been sent by A and sends it to the system to check if it processes it correctly. In this manner systems A and B are never running at the same time. Instead the tests just fake there responses/requests. In summery, I needed to use @patch('requests.post') to fake the reply from B saying it got the request. Then, in a different test, I set up B and made a request to it.",0.0,False,1,6822 2020-06-11 23:53:40.030,How do I perform crosscorelation between two time series and what transformations should I perform in python?,"I have two-time series datasets i.e. errors received and bookings received on a daily basis for three years (a few million rows). I wish to find if there is any relationship between them.As of now, I think that cross-correlation between these two series might help. I order to so, should I perform any transformations like stationarity, detrending, deseasonality, etc. If this is correct, I'm thinking of using ""scipy.signal.correlate¶"" but really want to know how to interpret the result?","scipy.signal.correlate is for the correlation of time series. For series y1 and y2, correlate(y1, y2) returns a vector that represents the time-dependent correlation: the k-th value represents the correlation with a time lag of ""k - N + 1"", so that the N+1 th element is the similarity of the time series without time lag: close to one if y1 and y2 have similar trends (for normalized data), close to zero if the series are independent. numpy.corrcoef takes two arrays and aggregates the correlation in a single value (the ""time 0"" of the other routine), the Pearson correlation, and does so for N rows, returning a NxN array of correlations. corrcoef normalizes the data (divides the results by their rms value), so that he diagonal is supposed to be 1 (average self correlation). The questions about stationarity, detrending, and deseasonality depend on your specific problem. The routines above consider ""plain"" data without consideration for their signification.",1.2,True,1,6823 2020-06-12 19:17:12.797,How to remove superuser on the system in Django?,"I was doing some project by using django and I realized that I forgot to activate virtualenv. I already made some changes and applied it not on the venv, and created superuser on the system. How to find any changes on the system? how to remove superuser that I made on the system and what are the cmd commands for that?","If you haven't setup an additional database for your project and you have used django-admin startproject you'll just have a standard django setup, and you will be using sqlite. With this setup, your database is stored in a file in your root directory (for the project) called db.sqlite3. This is where the super-user you have created will be stored. So it does not matter if the virtualenv was activated or not. Your superuser will have been created in the right place. TLDR: No need to worry, the superuser you created will most likely be in the right place.",1.2,True,1,6824 2020-06-12 19:21:05.707,How to get python to search for whole numbers in a string-not just digits,"Okay please do not close this and send me to a similar question because I have been looking for hours at similar questions with no luck. Python can search for digits using re.search([0-9]) However, I want to search for any whole number. It could be 547 or 2 or 16589425. I don't know how many digits there are going to be in each whole number. 
Furthermore I need it to specifically find and match numbers that are going to take a form similar to this: 1005.2.15 or 100.25.1 or 5.5.72 or 1102.170.24 etc. It may be that there isn't a way to do this using re.search but any info on what identifier I could use would be amazing.","Assuming that you're looking for whole numbers only, try re.search(r""[0-9]+"")",0.0,False,1,6825 2020-06-12 20:05:50.403,Dynamic Select Statement In Python,"I'm using Python with cx_Oracle, and I'm trying to do an INSERT....SELECT. Some of the items in the SELECT portion are variable values. I'm not quite sure how to accomplish this. Do I bind those variables in the SELECT part, or just concatenate a string? v_insert = (""""""\ INSERT INTO editor_trades SELECT "" + v_sequence + "", "" + issuer_id, UPPER("" + p_name + ""), "" + p_quarter + "", "" + p_year + "", date_traded, action, action_xref, SYSDATE FROM "" + p_broker.lower() + ""_tmp"") """""") Many thanks!","With Oracle DB, binding only works for data, not for SQL statement text (like column names) so you have to do concatenation. Make sure to allow-list or filter the variables (v_sequence etc) so there is no possibility of SQL injection security attacks. You probably don't need to use lower() on the table name, but that's not 100% clear to me since your quoting currently isn't valid.",0.0,False,1,6826 2020-06-14 05:35:32.993,Heroku won't run latest python file,"I use Heroku to host my discord.py bot, and since I've started using sublime merge to push to GitHub (I use Heroku GitHub for it), Heroku hasn't been running the latest file. The newest release is on GitHub, but Heroku runs an older version. I don't think it's anything to do with sublime merge, but it might be. I've already tried making a new application, but same problem. Anyone know how to fix this? Edit: I also tried running Heroku bash and running the python file again","1) Try to deploy branch (maybe another branch) 2) Enable automatic deploy",0.3869120172231254,False,1,6827 2020-06-14 09:54:51.873,Is it faster and more memory efficient to manipulate data in Python or PostgreSQL?,"Say I had a PostgreSQL table with 5-6 columns and a few hundred rows. Would it be more effective to use psycopg2 to load the entire table into my Python program and use Python to select the rows I want and order the rows as I desire? Or would it be more effective to use SQL to select the required rows, order them, and only load those specific rows into my Python program. By 'effective' I mean in terms of: Memory Usage. Speed. Additionally, how would these factors start to vary as the size of the table increases? Say, the table now has a few million rows?","Actually, if you are comparing data that is already loaded into memory to data being retrieved from a database, then the in-memory operations are often going to be faster. Databases have overhead: They are in separate processes on the same server or on a different server, so data and commands needs to move between them. Queries need to be parsed and optimized. Databases support multiple users, so other work may be going on using up resources. Databases maintain ACID properties and data integrity, which can add additional overhead. The first two of these in particular add overhead compared to equivalent in-memory operations for every query. That doesn't mean that databases do not have advantages, particularly for complex queries: They implement multiple different algorithms and have an optimizer to choose the best one. 
They can take advantage of more resources -- particularly by running in parallel. They can (sometimes) cache results saving lots of time. The advantage of databases is not that they provide the best performance all the time. The advantage is that they provide good performance across a very wide range of requests with a simple interface (even if you don't like SQL, I think you need to admit that it is simpler, more concise, and more flexible than writing code in a 3rd generation language). In addition, databases protect data, via ACID properties and other mechanisms to support data integrity.",1.2,True,1,6828 2020-06-15 04:21:10.657,Creating a stop in a While loop - Python,"I am working on a code that is supposed to use a while loop to determine if the number inputted by the user is the same as the variable secret_number = 777. the following criteria are: will ask the user to enter an integer number; will use a while loop; will check whether the number entered by the user is the same as the number picked by the magician. If the number chosen by the user is different than the magician's secret number, the user should see the message ""Ha ha! You're stuck in my loop!"" and be prompted to enter a number again. If the number entered by the user matches the number picked by the magician, the number should be printed to the screen, and the magician should say the following words: ""Well done, muggle! You are free now."" if you also have any tips how to use the while loop that would be really helpful. Thank you!","You can use while True: to create the loop (in Python the keyword is True, not true). Inside it, use an if/else to compare the input value with secret_number. If they match, print(""Well done, muggle! You are free now."") and break. Otherwise, print(""Ha ha! You're stuck in my loop!"") and continue",0.0,False,1,6829 2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed). When i run which python on terminal, it returns /Users/myname/anaconda3/bin/python (when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) When i run which idle on terminal, it returns /usr/bin/idle (im not even sure how to find this directory from the terminal) When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE.
I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","First of all, a quick description of anaconda: Anaconda is meant to help you manage multiple python ""environments"", each one potentially having its own python version and installed packages (with their own respective versions). This is really useful in cases where you would like multiple python versions for different tasks or when there is some conflict in versions of packages, required by other ones. By default, anaconda creates a ""base"" environment with a specific python version, IDLE and pip. Also, anaconda provides an improved way (with respect to pip) of installing and managing packages via the command conda install . For the rest, I will be using the word ""vanilla"" to refer to the python/installation that you manually set up, independent of anaconda. Explanation of the problem: Now, the problem arises since you also installed python independently. The details of the problem depend on how exactly you set up both python and anaconda, so I cannot tell you exactly what went wrong. Also, I am not an OSX user, so I have no idea how python is installed and what it downloads/sets alongside. By your description however, it seems that the ""vanilla"" python installation did not overwrite neither your anaconda python nor anaconda's pip, but it did install IDLE and set it up to use this new python. So right now, when you are downloading something via pip, only the python from anaconda is able to see that and not IDLE's python. Possible solutions: 1. Quick fix: Just run IDLE via /Users/myname/anaconda3/bin/idle3 every time. This one uses anaconda's python and should be able to see all packages installed via conda install of pip install (*). I get this is tiresome, but you don't have to delete anything. You can also set an ""alias"" in your ~/.bashrc file to make the command idle specifically linking you there. Let me know with a comment if you would like me to explain how to do that, as this answer will get too long and redundant. 2. Remove conda altogether (not recommended) You can search google on how to uninstall anaconda along with everything that it has installed. What I do not know at this point is whether your ""vanilla"" python will become the default, whether you will need to also manually install pip again and whether there is the need to reinstall python in order for everything to work properly. 3. Remove your python ""vanilla"" installation and only use anaconda Again, I do not know how python installation works in OSX, but it should be reasonably straightforward to uninstall it. The problem now is that probably you will not have a launcher for IDLE (since I am guessing anaconda doesn't provide one on OSX) but you will be able to use it via the terminal as described in 1.. 4. Last resort: If everything fails, simply uninstall both your vanilla python (which I presume will also uninstall IDLE) and anaconda which will uninstall its own python, pip and idle versions. The relevant documentation should not be difficult to follow. Then, reinstall whichever you want anew. Finally: When you solve your problems, any IDE you choose, being VScode (I haven't use that either), pycharm or something else, will probably be able to integrate with your installed python. There is no need to install a new python ""bundle"" with every IDE. 
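One quick check that cuts through all of the above: ask the running interpreter who it is. This snippet can be pasted into IDLE, a terminal session or any IDE console, and installing against that exact interpreter guarantees the package lands where that interpreter can see it:

import sys
print(sys.executable)   # full path of the Python that is actually running
print(sys.path)         # directories this interpreter searches for imports

# then, from a terminal, install against that same interpreter:
#   /path/shown/above -m pip install pandas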
(*): Since you said that after typing pip install pandas your anaconda's python can import pandas while IDLE cannot, I am implying in my answer that pip is also the one that comes with anaconda. You can make sure this is the case by typing which pip which should point to an anaconda directory, probably /Users/myname/anaconda3/bin/pip",1.2,True,3,6830 2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed). When i run which python on terminal, it returns /Users/myname/anaconda3/bin/python (when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) When i run which idle on terminal, it returns /usr/bin/idle (im not even sure how to find this directory from the terminal) When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE. I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","First: This would be a comment if I had enough reputation. Second: I would just delete python. Everything. And reinstall it.",0.1352210990936997,False,3,6830 2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed). 
When i run which python on terminal, it returns /Users/myname/anaconda3/bin/python (when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) When i run which idle on terminal, it returns /usr/bin/idle (im not even sure how to find this directory from the terminal) When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE. I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","To repeat and summarized what has been said on various other question answers: 1a. 3rd party packages are installed for a particular python(3).exe binary. 1b. To install multiple packages to multiple binaries, see the option from python -m pip -h. To find out which python binary is running, execute import sys; print(sys.executable). 3a. For 3rd party package xyz usually installed in some_python/Lib/site-packages, IDLE itself has nothing to do with whether import xyz works. It only matters whether xyz is installed for 'somepython' (see 1a). 3b. To run IDLE with 'somepython', run somepython -m idlelib in a terminal or console. somepython can be a name recognized by the OS or a path to a python executable.",0.0,False,3,6830 2020-06-15 16:46:12.930,Why does os.system('cls') print 0,"hello before I say anything I would like to let you know that I tried searching for the answer but I found nothing. whenever I use os.system('cls') it clears the screen but it prints out a zero. is this normal, if not how do I stop it from doing that?","I guess you running in inside an interpreter os.system will return: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero) So it just print the value it got, the return value of the command cls in the command line, which is 0 cause the command run successfully",0.2012947653214861,False,1,6831 2020-06-15 22:24:45.037,VS Code - pylint is not running,"I have a workspace setup in VS Code where I do python development. I have linting enabled, pylint enabled as the provider, and lint on save enabled, but I continue to see no errors in the Problems panel. When I run pylint via the command line in the virtual environment i see a bunch of issues - so I know pylint works. I am also using black formatting(on save) which works without issue. I have tried using both the default pylint path as well as updating it manually to the exact location and still no results. When I look at the Output panel for python it looks like pylint is never even running (i.e. I see the commands for black running there but nothing for pylint). My pylint version is 2.4.4 and VS Code version 1.46 Any idea how to get this working?","Uninstall Python Extension Reinstall Python Extension And with that there will will be one more extension of ""Python Extension"" named - ""PYLANCE"" don't forget to install that too. Reload VS Code DONE !!",0.0,False,1,6832 2020-06-16 06:08:53.517,Saving a File in an Atom Text Editor Folder,"This is my first time on stack overflow. 
I am a beginner python coder and I use the Atom text editor. I am currently learning from a book called PythonCrashCourse by Eric Matthes (second edition) and is developing a practice-project called Alien Invasion. I am currently stuck on saving a file of a spaceship image into a folder named ""images"" within my text editor. I have an ASUS chromebook. The file I am trying to save is called ship.bmp and the book instructions say ""Make a folder called images inside your main alien_invasion project folder. Save the file ship.bmp in the images folder."" I have the ship.bmp file saved but I just don't know how to transport it into a file within my text editor ""images"" folder. I have been stuck on this for quite a while and I would really appreciate it if someone could give me some advice. Thanks!","First of all you need to have the ship.bmp file downloaded somewhere on your computer. You then would need to move it into your project folder. I think that the easiest way for you to navigate through the files you have is to go to your ""Files"" app in the Chromebook. You should look through your Downloads folder for the ship.bmp after you download it and manually move it into the project folder that you are working on. You should be able to open your project folder and place the ship.bmp file inside the ""images"" folder.",0.0,False,1,6833 2020-06-16 10:20:24.730,How does Python compare two lists of unequal length?,"I am aware of the following: [1,2,3]<[1,2,4] is True because Python does an element-wise comparison from left to right and 3 < 4 [1,2,3]<[1,3,4] is True because 2 < 3 so Python never even bothers to compare 3 and 4 My question is how does Python's behavior change when I compare two lists of unequal length? [1,2,3]<[1,2,3,0] is True [1,2,3]<[1,2,3,4] is True This led me to believe that the longer list is always greater than the shorter list. But then: [1,2,3]<[0,0,0,0] is False Can someone please explain how these comparisons are being done by Python? My hunch is that element-wise comparisons are first attempted and only if the first n elements are the same in both lists (where n is the number of elements in the shorter list) does Python consider the longer list to be greater. If someone could kindly confirm this or shed some light on the reason for this behavior, I'd be grateful.","The standard comparisons (<, <=, >, >=, ==, !=, in , not in ) work exactly the same among lists, tuples and strings. The lists are compared element by element. If they are of variable length, it happens till the last element of the shorter list If they are same from start to the length of the smaller one, the length is compared i.e. shorter is smaller",1.2,True,1,6834 2020-06-16 18:37:38.617,Cannot install older versions of tensorflow: No matching distribution found for tensorflow==1.9.0,"I need to install older versions of tensorflow to get the deepface library to work properly, however whenever I run pip install tensorflow==1.9.0, I get: ERROR: Could not find a version that satisfies the requirement tensorflow==1.9.0 (from versions: 2.2.0rc1, 2.2.0rc2, 2.2.0rc3, 2.2.0rc4, 2.2.0) Anyone else run into this issue/know how to fix it? Thanks!",You can install TensorFlow 1.9.0 with the following Python versions: 2.7 and 3.4 to 3.6.,0.6730655149877884,False,1,6835 2020-06-17 20:14:07.813,Remove character '\xa0' while reading CSV file in python,I want to remove the non-ASCII Character '\xa0' while reading my CSV file using read_csv into a dataframe with python. 
Can someone tell me how to achieve this?,"You can use x = txt.replace(u'\xa0', u'') for text you're reading.",1.2,True,1,6836 2020-06-17 21:33:19.697,"How to scrape over 50,000 data points from dynamically loading webpage in under 24 hours?","I am using selenium python and was wondering how one effectively scrapes over 50,000 data points in under 24 hours. For example, when I search for products on the webpage 'insight.com' it takes about 3.5 seconds for the scraper to search for the product and grab its price, meaning that with large amounts of data it takes the scraper several days. A part from using threads to simultaneously look up several products at the same time, how else can I speed up this process? I only have one laptop and will have to simultaneously scrape six other similar websites so therefore do not want too many threads and the speed at which the computer operates will slow down significantly. How do people achieve to scrape large amounts of data in such short periods of time?","If you stop using the selenium module, and rather work with a much more sleek and elegant module, like requests, you could get the job done in a matter of mere minutes. If you manage to reverse engineer the requests being handled, and send them yourself, you could pair this with threading to scrape at some 50 'data points' per second, more or less (depending on some factors, like processing and internet connection speed).",0.3869120172231254,False,2,6837 2020-06-17 21:33:19.697,"How to scrape over 50,000 data points from dynamically loading webpage in under 24 hours?","I am using selenium python and was wondering how one effectively scrapes over 50,000 data points in under 24 hours. For example, when I search for products on the webpage 'insight.com' it takes about 3.5 seconds for the scraper to search for the product and grab its price, meaning that with large amounts of data it takes the scraper several days. A part from using threads to simultaneously look up several products at the same time, how else can I speed up this process? I only have one laptop and will have to simultaneously scrape six other similar websites so therefore do not want too many threads and the speed at which the computer operates will slow down significantly. How do people achieve to scrape large amounts of data in such short periods of time?","Find an API and use that!!! The goal of both web scraping and APIs is to access web data. Web scraping allows you to extract data from any website through the use of web scraping software. On the other hand, APIs give you direct access to the data you’d want. As a result, you might find yourself in a scenario where there might not be an API to access the data you want, or the access to the API might be too limited or expensive. In these scenarios, web scraping would allow you to access the data as long as it is available on a website. For example, you could use a web scraper to extract product data information from Amazon since they do not provide an API for you to access this data. However, if you had access to an API, you could grab all the data you want, super, super, super fast!!! It's analogous to doing a query in a database on prem, which is very fast and very efficient, vs. 
refreshing a webpage, waiting for ALL elements to load, and you can't use the data until all elements have been loaded, and then.....do what you need to do.",0.2012947653214861,False,2,6837 2020-06-18 02:49:23.653,How to efficiently query a large database on a hourly basis?,"Background: I have multiple asset tables stored in a redshift database for each city, 8 cities in total. These asset tables display status updates on an hourly basis. 8 SQL tables and about 500 mil rows of data in a year. (I also have access to the server that updates this data every minute.) Example: One market can have 20k assets displaying 480k (20k*24 hrs) status updates a day. These status updates are in a raw format and need to undergo a transformation process that is currently written in a SQL view. The end state is going into our BI tool (Tableau) for external stakeholders to look at. Problem: The current way the data is processed is slow and inefficient, and probably not realistic to run this job on an hourly basis in Tableau. The status transformation requires that I look back at 30 days of data, so I do need to look back at the history throughout the query. Possible Solutions: Here are some solutions that I think might work, I would like to get feedback on what makes the most sense in my situation. Run a python script that looks at the most recent update and query the large history table 30 days as a cron job and send the result to a table in the redshift database. Materialize the SQL view and run an incremental refresh every hour Put the view in Tableau as a datasource and run an incremental refresh every hour Please let me know how you would approach this problem. My knowledge is in SQL, limited Data Engineering experience, Tableau (Prep & Desktop) and scripting in Python or R.","So first things first - you say that the data processing is ""slow and inefficient"" and ask how to efficiently query a large database. First I'd look at how to improve this process. You indicate that the process is based on the past 30 days of data - is the large tables time sorted, vacuumed and analyzed? It is important to take maximum advantage of metadata when working with large tables. Make sure your where clauses are effective at eliminating fact table block - don't rely on dimension table where clauses to select the date range. Next look at your distribution keys and how these are impacting the need for your critical query to move large amounts of data across the network. The internode network has the lowest bandwidth in a Redshift cluster and needlessly pushing lots of data across it will make things slow and inefficient. Using EVEN distribution can be a performance killer depending on your query pattern. Now let me get to your question and let me paraphrase - ""is it better to use summary tables, materialized views, or external storage (tableau datasource) to store summary data updated hourly?"" All 3 work and each has its own pros and cons. Summary tables are good because you can select the distribution of the data storage and if this data needs to be combined with other database tables it can be done most efficiently. However, there is more data management to be performed to keep this data up to data and in sync. Materialized views are nice as there is a lot less management action to worry about - when the data changes, just refresh the view. 
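For illustration, the hourly refresh itself can be a very small piece of Python — this is only a sketch, with an invented view name and connection string, assuming psycopg2 and some scheduler (cron, Airflow, etc.) to call it every hour:

import psycopg2

def refresh_hourly_summary():
    # Redshift speaks the Postgres protocol, so psycopg2 can connect to it
    conn = psycopg2.connect("dbname=analytics host=my-cluster user=etl password=***")
    try:
        with conn.cursor() as cur:
            cur.execute("REFRESH MATERIALIZED VIEW hourly_asset_status;")
        conn.commit()
    finally:
        conn.close()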
The data is still in the database so is is easy to combine with other data tables but since you don't have control over storage of the data these action may not be the most efficient. External storage is good in that the data is in your BI tool so if you need to refetch the results during the hour the data is local. However, it is not locked into your BI tool and far less efficient to combine with other database tables. Summary data usually isn't that large so how it is stored isn't a huge concern and I'm a bit lazy so I'd go with a materialized view. Like I said at the beginning I'd first look at the ""slow and inefficient"" queries I'm running every hour first. Hope this helps",1.2,True,1,6838 2020-06-18 03:50:06.060,How to send a HTML file as a table through outlook?,"I now have an HTML file and I want to send it as a table, not an attachment by using outlook. The code that I found online only sends the file as an attachment. Can anyone give me ideas on how to do it properly?",You can use the HTMLBody property of the MailItem class to set up the message body.,1.2,True,1,6839 2020-06-18 03:51:03.357,Python idle to python.exe,"So I've made a script/code in python idle and want to run it on python.exe but whenever I do this the you can see the python window pop up briefly for a second before closing, and I want to run my code using python instead of idle, how can I do this?","since I cant comment yet: go to the command line and open the file location directory and type: python filename.py",1.2,True,1,6840 2020-06-18 04:35:58.813,Using Selenium without using any browser,"I have been trying to do web automation using selenium,Is there any way to use browser like chrome,firefox without actually installing then, like using some alternate options, or having portable versions of them.If I can use portable versions how do i tell selenium to use it?","If you install pip install selenium it comes with the protable chrome browser, no need to install any browser for this. the chrome has a tag ""chrome is controlled by automated test software"" near search bar",0.0,False,1,6841 2020-06-18 06:04:06.180,Tkinter: How do I handle menus with many items?,"If I have a menu with too many items to fit on the screen, how do I get one of those 'more' buttons with a downward arrow at the bottom of the menu? Is that supported?","I solved my problem with cascading menus. I already had some, but I didn't want to use more for these particular menus items—but after closer inspection, I think it's better this way. I'm still interested in other solutions, for scenarios where cascading menus are not a practical option, however (like if the screen is too narrow to cascade that far or something). So, I don't plan to mark this as the accepted answer anytime soon (even though in most circumstances, it's probably the best solution).",-0.2012947653214861,False,1,6842 2020-06-18 10:18:20.563,How to check if a QThread is alive or killed and restart it if it is killed in PyQt5?,"I have an PyQt5 application to update database collections one by one using QThread and send updation signal to main thread as each collection gets updated to reflect it on GUI. It runs continuously 24X7. But somehow the data stops getting updated and also GUI stops getting signals. But the application is still running as other part are accessible and functioning properly. Also no errors are found in log file. 
Mostly the application runs fine but after some random period this problem arises(first time after approximately a month, then after 2 weeks and now after 23 days). However restarting the application solves the problem. I tried using isRunning() method and isFinished() method but no change found. Can anyone tell what is the problem?? Thank you in advance. Also tell how to check weather the QThread is stuck or killed?","If any exception occur in the thread, then thread can be finished soon. so You should use settimeout function to calling any third party library(data update) in the thread. That will solve your problem.",0.0,False,1,6843 2020-06-18 12:22:55.510,Ngrok hostname SSL Certificate,"I am running a Flask API application, and I have an SSL Certificate. When I run flask server on localhost the certificate is applied from Flask successfully. But when I use Ngrok to deploy the localhost on a custom domain, the certificate is changed to *.ngrok.com, how can I change that to my certificate?. EDIT #1: I already have a certificate for the new hostname and I have already applied it on Flask, but ngrok is changing it.","You expose your service through the URL *.ngrok.com. A browser or other client will make a request to *.ngrok.com. The certificate presented there must be valid for *.ngrok.com. If *.ngrok.com presents a certificate for example.com, any valid HTTPS client would reject it because the names do not match, which by definition makes it an invalid certificate and is a flag for a potential security problem, exactly what HTTPS is designed to mitigate. If you want to present your certificate for example.com to the client, you need to actually host your site at example.com",0.0,False,1,6844 2020-06-18 14:48:27.287,Record sound without blocking Pygame UI,"I am making a simple Python utility that shows the tempo of a song (BPM) that is playing. I record short fragments of a few seconds to calculate the tempo over. The problem is that now I want to show this on a display using a Pygame UI, but when I'm recording sound, the UI does not respond. I want to make it so that the UI will stay responsive during the recording of the sound, and then update the value on the screen once the tempo over a new fragment has been calculated. How can I implement this? I have looked at threading but I'm not sure this is the appropriate solution for this.","I'd use the python threading library. Use the pygame module in the main thread (just the normal python shell, effectively) an create a separate thread for the function that determines BPM. This BPM can then be saved to a global variable that can be accessed by PyGame for displaying.",1.2,True,1,6845 2020-06-18 18:48:51.503,Text classification using Word2Vec,"I am in trouble to understand Word2Vec. I need to do a help desk text classification, based on what users complain in the help desk system. Each sentence has its own class. I've seen some pre-trained word2vec files in the internet, but I don't know if is the best way to work since my problem is very specific. And my dataset is in Portuguese. I'm considering that I will have to create my own model and I am in doubt on how to do that. Do I have to do it with the same words as the dataset I have with my sentences and classes? In the frst line, the column titles. Below the first line, I have the sentence and the class. Could anyone help me? I saw Gensin to create vector models, and sounds me good. But I am completely lost. 
: chamado,classe 'Prezados não estou conseguindo gerar uma nota fiscal do módulo de estoque e custos.','ERP GESTÃO', 'Não consigo acessar o ERP com meu usuário e senha.','ERP GESTÃO', 'Médico não consegue gerar receituário no módulo de Medicina e segurança do trabalho.','ERP GESTÃO', 'O produto 4589658 tinta holográfica não está disponível no EIC e não consigo gerar a PO.','ERP GESTÃO',","Your inquiry is very general, and normally StackOverflow will be more able to help when you've tried specific things, and hit specific problems - so that you can provide exact code, errors, or shortfalls to ask about. But in general: You might not need word2vec at all: there are many text-classification approaches that, with sufficient training data, may assign your texts to helpful classes without using word-vectors. You will likely want to try those first, then consider word-vectors as a later improvement. For word-vectors to be helpful, they need to be based on your actual language, and also ideally your particular domain-of-concern. Generic word-vectors from news articles or even Wikipedia may not include the important lingo, and word-senses for your problem. But it's not too hard to train your own word-vectors – you just need a lot of varied, relevant texts that use the words in realistic, relevant contexts. So yes, you'd ideally train your word-vectors on the same texts you eventually want to classify. But mostly, if you're ""totally lost"", start with more simple text-classification examples. As you're using Python, examples based on scikit-learn may be most relevant. Adapt those to your data & goals, to familiarize yourself with all the steps & the ways of evaluating whether your changes are improving your end results or not. Then investigate techniques like word-vectors.",0.0,False,1,6846 2020-06-19 18:05:21.573,Pyqt5 widget style similar to tkinter style,"I want to create a qwidgets one with raised/sunkin/groove/ridge relief similar to tkinter. I know how to do this in tkinter, but don't know the style sheet option in Pyqt5 for each one. Please find the tkinter option Widget = Tkinter.Button(top, text =""FLAT"", relief=raised ). Hope you can help to translate to Pyqt5",You can do this with QFrame. you can set QFrame.setFrameShadow(QFrame.Sunken). But I couldn't find for a QWidget one.,0.0,False,1,6847 2020-06-20 13:21:05.770,How to program NVIDIA's tensor cores in RTX GPU with python and numba?,"I am interested in using the tensor cores from NVIDIA RTX GPUs in python to benefit from its speed up in some scientific computations. Numba is a great library that allows programming kernels for cuda, but I have not found how to use the tensor cores. Can it be done with Numba? If not, what should I use?",".... I have not found how to use the tensor cores. Can it be done with Numba? No. Numba presently doesn't have half precision support or tensor core intrinsic functions available in device code. If not, what should I use? I think you are going to be stuck with writing kernel code in the native C++ dialect and then using something like PyCUDA to run device code compiled from that C++ dialect.",1.2,True,1,6848 2020-06-20 19:06:57.617,is it possible to run multiple http servers on one machine?,"can i run multiple python http servers on one machine to receive http post request from a webpage? 
currently i am running an http server on port 80 and on the web page there is a HTML form which sends the http post request to the python server and in the HTML form i am using the my server's address like this : ""http://123.123.123.123"" and i am receiving the requests but i want to run multiple servers on the same machine with different ports for each server. if i run 2 more servers on port 21200 and 21300 how do i send the post request from the HTML form on a specified port , so that the post request is received and processed by correct server?? do i need to define the server address like this : ""http://123.123.123.123:21200"" and ""http://123.123.123.123:21300"" ?","Yes can run multiple webservers on one machine. use following commands to run on different ports: python3 -m http.server 4000 4000 is the port number, you can replace it with any port number here.",1.2,True,1,6849 2020-06-21 01:52:26.577,How to change API level when using buildozer?,"I just finished my app and made a release version with buildozer and signed it but when I tried to upload my apk file to Google Play Console...It said that the API level of the app was 27 and it should be level 28. So how can I do this? Thanks in advance",Find the line that says android.api = 27 in your buildozer.spec file and change it to 28.,0.0,False,2,6850 2020-06-21 01:52:26.577,How to change API level when using buildozer?,"I just finished my app and made a release version with buildozer and signed it but when I tried to upload my apk file to Google Play Console...It said that the API level of the app was 27 and it should be level 28. So how can I do this? Thanks in advance","It should be edited in buildozer.spec file. If you scroll down it's default to 27, change it to specification",1.2,True,2,6850 2020-06-21 11:00:41.357,Is there a plugin similar to gitlens for pycharm or other products?,"My question is very simple , as you read the title I want plugin similar to GitLens that I found in vscode. As you know with GitLens you can easily see the difference between two or multiple commits. I searched it up and I found GitToolBox but I don't know how to install it as well and I don't think that's like GitLens...","Open Settings on jetbrains IDE. Go to plugins and look for git toolbox. Install it and boom, its done!",0.0,False,1,6851 2020-06-21 14:24:22.560,Sending Information from one Python file to another,"I would like to know how to perform the below mentioned task I want to upload a CSV file to a python script 1, then send file's path to another python script in file same folder which will perform the task and send the results to python script 1. A working code will be very helpful or any suggestion is also helpful.","You can import the script editing the CSV to the python file and then do some sort of loop that edits the CSV file with your script 1 then does whatever else you want to do with script 2. 
This is an advantage of OOP: it makes these sorts of tasks very easy, as you keep your functions in a module Python file, create a main Python file, and run a bunch of CSV-editing functions this way (a short sketch follows below).",0.0,False,1,6852 2020-06-21 14:56:48.953,I'm trying to figure out how to install this lib on python (time),"I'm new to Python and I was trying to install the ""time"" library. I typed pip install time but pip said this: C:\Users\Giuseppe\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Python 3.6>pip install time ERROR: Could not find a version that satisfies the requirement time (from versions: none) ERROR: No matching distribution found for time I don't know how to resolve this; can anyone help me? Please keep it as simple as you can, because I'm not too good at Python yet; as I said, I'm new. Thanks to everyone! P.S. the Python version is 3.6","time is a module that comes built in with Python, so there is no need to install anything; just import it: import time",0.1352210990936997,False,1,6853 2020-06-21 15:07:21.757,how can I use a Chrome extension in my Selenium Python program?,"I'm just trying to use a VPN extension with Selenium. I have the extension running, but I need to click the button and enable the VPN so it works. Is there a way to do that with Selenium? I'm thinking of using another similar option like Scrapy or pyautogui...","No, there is no way to enable the VPN on your extension. If you want to use your VPN extension you have to set a profile (otherwise Selenium will create a new profile without the installed extension).",1.2,True,1,6854 2020-06-21 15:10:15.180,I have completely messed up my Python Env and need help to start fresh,"Long story short, I messed with my Python environment too much (moving files around, creating new folders, trying to reinstall packages, deleting files etc.) My google package doesn't work anymore. Every time I try to import the package, it says it can't find the module, even though I did a pip install. I was wondering how I could do a hard reset/delete Python off my computer and reinstall it. Thanks.","I figured it out. My pip was installing to a site-packages folder inside a local folder, while my Jupyter notebook was trying to pull from the Anaconda site-packages folder.",1.2,True,1,6855 2020-06-22 19:54:41.410,Getting back cells after being deleted in Colab,"I often delete code in Colab by accident, and for some reason when I try to undo the code deletion it does not work. So basically when I do this I want to get my cells back somehow. Is there any way to do this, like taking a look at the code that Colab is running, because my cells are probably still there. Another option would be to somehow see cells that have been previously deleted. Please help me. Any other solutions would be nice.",You can undo deleting a cell in Google Colab simply by typing Ctrl + M Z,0.2012947653214861,False,1,6856 2020-06-22 21:33:39.547,"Replace string with quotes, brackets, braces, and slashes in python","I have a string where I am trying to replace [""{\"" with [{"" and all \"" with "". I am struggling to find the right syntax in order to do this; does anyone have a solid understanding of how to do this? I am working with JSON, and I am inserting a string into the JSON properties. This caused it to put single quotes around my inserted data from my variable, and I need those single quotes gone. I tried to do json.dumps() on the data and do a string replace, but it does not work. Any help is appreciated.
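(A minimal sketch of the import-based layout described in the two-script answer above; the file names script1.py/script2.py and the function name are hypothetical.)
    # script2.py -- does the actual work on the CSV
    import pandas as pd

    def process(csv_path):
        df = pd.read_csv(csv_path)
        # ... transform df here ...
        return len(df)            # e.g. hand a result back to the caller

    # script1.py -- receives the uploaded file and delegates to script2
    import script2

    result = script2.process('upload.csv')   # hypothetical path of the uploaded CSV
    print('rows processed:', result)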
Thank you.","if its two characters you want to replace then you have to first check for first character and then the second(which should be present just after the first one and so on) and shift(shorten the whole array by 3 elements in first case whenever the condition is satisfied and in the second case delete \ from the array. You can also find particular substring by using inbuilt function and replace it by using replace() function to insert the string you want in its place",0.0,False,2,6857 2020-06-22 21:33:39.547,"Replace string with quotes, brackets, braces, and slashes in python","I have a string where I am trying to replace [""{\"" with [{"" and all \"" with "". I am struggling to find the right syntax in order to do this, does anyone have a solid understanding of how to do this? I am working with JSON, and I am inserting a string into the JSON properties. This caused it to put a single quotes around my inserted data from my variable, and I need those single quotes gone. I tried to do json.dumps() on the data and do a string replace, but it does not work. Any help is appreciated. Thank you.","I would recommend maybe posting more of your code below so we can suggest a better answer. Just based on the information you have provided, I would say that what you are looking for are escape characters. I may be able to help more once you provide us with more info!",0.0,False,2,6857 2020-06-23 15:32:04.937,How to calculate percentage in Python with very simple formula,"I've seen similar questions but it's shocking that I didn't see the answer I was, in fact, looking for. So here they are, both the question and the answer: Q: How to calculate simply the percentage in Python. Say you need a tax calculator. To put it very simple, the tax is 18% of earnings. So how much tax do I have to pay if I earn, say, 18 342? The answer in math is that you divide by 100 and multiply the result by 18 (or multiply with 18 divided by 100). But how do you put that in code? tax = earnings / 100 * 18 Would that be quite right?","The answer that best fitted me, especially as it implied no import, was this: tax = earnings * 0.18 so if I earned 18 342, and the tax was 18%, I should write: tax = 18 342 * 0.18 which would result in 3 301.56 This seems trivial, I know, and probably some code was expected, moreover this form might be applicable not only in Python, but again, I didn't see the answer anywhere and I thought that it is, after all, the simplest.",0.0,False,1,6858 2020-06-23 17:34:09.277,"In P4, how do i check if a change submitted to one branch is also submitted to another branch using command","I want to find out there is a p4 command that can find cl submitted in a depot branch from a cl submitted in another depot branch. like - if CL 123 was submitted to branch //code/v1.0/files/... and same code changes were also submitted to another branch //code/v5.0/files/... can i find out cl in 2nd branch from cl 123?","There are a few different methods; which one is easiest will depend on the exact context/requirements of what you're doing. If you're interested in the specific lines of code rather than the metadata, p4 annotate is the best way. Use p4 describe 123 to see the lines of code changed in 123, and then p4 annotate -c v5.0/(file) to locate the same lines of code in v5.0 and see which changelist(s) introduced them into that branch. This method will work even if the changes were copied over manually instead of using Perforce's merge commands. If you want to track the integration history (i.e. 
the metadata) rather than the exact lines of code (which may have been edited in the course of being merged between codelines, making the annotate method not work), the easiest method is to use the Revision Graph tool in P4V, which lets you visually inspect a file's branching history; you can select the revision from change 123 and use the ""highlight ancestors and descendants"" tool to see which revisions/changelists it is connected to in other codelines. This makes it easy to see the context of how many integration steps were involved, who did them, when they happened, whether there were edits in between, etc. If you want to use the metadata but you're trying for a more automated solution, changes -i is a good tool. This will show you which changelists are included in another changelist via integration, so you can do p4 changes -i @123,123 to see the list of all the changes that contributed to change 123. On the other side (finding changelists in v5.0 that 123 contributed to), you could do this iteratively; run p4 changes -i @N,N for each changelist N in the v5.0 codeline, and see which of them include 123 in the output (it may be more than one).",0.6730655149877884,False,1,6859 2020-06-24 01:35:15.063,Alpha_Vantage ts.get_daily ending with [0],"I am learning how to use Alpha_Vantage api and came across this line of code. I do not understand what is the purpose of [0]. SATS = ts.get_daily('S58.SI', outputsize = ""full"")[0]","ts.get_daily() appears to return an array. SATS is getting the 0 index of the array (first item in the array)",0.0,False,1,6860 2020-06-24 06:47:05.090,how do I run two separate deep learning based model together?,"I trained a deep learning-based detection network to detect and locate some objects. I also trained a deep learning-based classification network to classify the color of the detected objects. Now I want to combine these two networks to detect the object and also classify color. I have some problems with combining these two networks and running them together. How do I call classification while running detection? They are in two different frameworks: the classifier is based on the Keras and TensorFlow backend, the detection is based on opencv DNN module.","I have read your question and from that, I can infer that your classification network takes the input from the output of your first network(object locator). i.e the located object from your first network is passed to the second network which in turn classifies them into different colors. The entire Pipeline you are using seems to be a sequential one. Your best bet is to first supply input to the first network, get its output, apply some trigger to activate the second network, feed the output of the first net into the second net, and lastly get the output of the second net. You can run both of these networks in separate GPUs. The Trigger that calls the second function can be something as simple as cropping the located object in local storage and have a function running that checks for any changes in the file structure(adding a new file). 
If this function returns true you can grab that cropped object and run the network with this image as input.",0.0,False,1,6861 2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: ModuleNotFoundError: No module named 'pandas' [11084] Failed to execute script test1 Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.",Are you trying pip install pandas?,0.2655860252697744,False,3,6862 2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: ModuleNotFoundError: No module named 'pandas' [11084] Failed to execute script test1 Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.","For cx_freeze, inlcude pandas explicitly in the packages. Like in the example below - build_exe_options = {'packages': ['os', 'tkinter', 'pandas']} This should include the pandas module in you build.",0.1352210990936997,False,3,6862 2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: ModuleNotFoundError: No module named 'pandas' [11084] Failed to execute script test1 Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.","A script that runs in your IDE but not outside may mean you are actually working in a virtual environment. Pandas probably is not installed globally in your system. Try remembering if you had created a virtual environment and then installed pandas inside this virtual environment. Hope it helped, Vijay.",1.2,True,3,6862 2020-06-25 05:00:30.313,Is there a python code that I can add to my program that will add it to start in windows 10?,"Currently, I have been scouring the internet for a code that will either add this program (something.exe) to the windows task scheduler or if that is not even an option how to add it to the windows reg key for a startup. I cannot find anything in terms of Python3, and I really hope it is not an answer that is right in front of my face. 
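(A rough sketch of the file-watching trigger described in the two-network answer above; the crops folder and classify() are hypothetical placeholders for the asker's own detector output and Keras classifier.)
    import os
    import time

    CROPS_DIR = 'crops'                 # hypothetical folder where the detector saves cropped objects
    seen = set(os.listdir(CROPS_DIR))

    while True:
        current = set(os.listdir(CROPS_DIR))
        for new_file in current - seen:                            # the trigger: a new crop appeared
            label = classify(os.path.join(CROPS_DIR, new_file))    # hypothetical Keras colour classifier
            print(new_file, '->', label)
        seen = current
        time.sleep(0.5)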
Thanks!","Open the windows scheduler -> select ""create basic task"" -> fill out the desired times -> input the path to the script you want to execute.",0.0,False,1,6863 2020-06-25 06:15:17.920,How do I run a downloaded repository's config in Python?,"I am trying to use sunnyportal-py. Relatively new to python, I do not understand step 2 in the README: How to run Clone or download the repository. Enter the directory and run: PYTHONPATH=. ./bin/sunnyportal2pvoutput --dry-run sunnyportal.config Enter the requested information and verify that the script is able to connect to Sunny Portal. The information is saved in sunnyportal.config and can be edited/deleted if you misstype anything. Once it works, replace --dry-run with e.g. --output to upload the last seven days output data to pvoutput or --status to upload data for the current day. Add --quiet to silence the output. Could anyone help me? I have gone into a cmd.exe in the folder I have downloaded, I don't know how to correctly write the python path in the correct location. What should I paste into the command line? Thanks! Edit : I would like to be able to do this on Windows, do tell me if this is possible.","The command at bullet 2 is to be typed at the commandline (You need to be in windows: cmd or powershell, Linux: bash, etc.. to be able to do this). PYTHONPATH=. ./bin/sunnyportal2pvoutput --dry-run sunnyportal.config The first part of the command code above indicates where your program is located. Go to the specific folder via commandline (windows: cd:... ; where .. is your foldername) and type the command. The second part is the command to be executed. Its behind the ""--"" dashes. The program knows what to do. In this case: --dry-run sunnyportal.config running a validation/config file to see if the program code itself works; as indicated by ""dry run"". In your case type at the location (while in cmd): ""sunnyportal2pvoutput --dry-run sunnyportal.config"" or ""sunnyportal2pvoutput.py --dry-run sunnyportal.config"" (without the environment variables (python path) set). Note: the pythonpath is an environment variable. This can be added via: Control Panel\All Control Panel Items\System\ --> bullet Advanced System Settings --> button ""environment variables"". Then you can select to add it to ""Variables for user ""username"""" or ""system variables"". Remember to reboot thereafter to make the change effective immediately. Update 1 (pip install sunnyportal): go to cmd. type ""pip search sunnyportal"" Result: Microsoft Windows [Version 10.0.18363.836] (c) 2019 Microsoft Corporation. All rights reserved. C:\Windows\System32>pip search sunnyportal sunnyportal-py (0.0.4) - A Python client for SMA sunny portal C:\Windows\System32> If found, then type: ""pip install sunnyportal""",0.0,False,1,6864 2020-06-25 08:51:15.257,Run one file among multiple files in azure webjobs,"I am trying to run a contionus azure webjob for python. i have 6 files where main.py is the main file, other files internally importing each other and finally everything is being called from main.py, now when i am trying to run only the first python file is getting executed, but i want that when the webjob will start only main.py will be executed not anything else. how to achieve that ?","This is quite simple. In azure webjob, if the file name starts with run, then this file has the highest priority to execute. So the most easiest way is just renaming the main.py to run.py. 
Or add an run.py, then call the main.py within it.",1.2,True,1,6865 2020-06-25 10:32:25.647,How do you download online libraries on python?,I am trying to download youtube videos using python and for the code to work I need to install pytube3 library but I am very new to coding so I am not sure how to do it.,"You could use python3 -m pip install pytube3",0.1352210990936997,False,1,6866 2020-06-25 16:48:51.563,How to check if image contains text or not?,"Given any image of a scanned document, I want to check if it's not empty page. I know I can send it to AWS Textract - but it will cost money for nothing. I know I can use pytesseract but maybe there is more elegant and simple solution? Or given a .html file that represents the text of the image - how to check it shows a blank page?","We can use pytesseract for this application by thresholding the image and passing it to tesseract. However if you have a .html file that represents text of image, you can use beautifulsoup for extracting text from it and check if it is empty.Still this is a round way approach.",0.2012947653214861,False,1,6867 2020-06-26 15:06:57.583,How to profile my APIs for concurrent requests?,"Scenario Hi, I have a collection of APIs that I run on Postman using POST requests. The flask and redis servers are set up using docker. What I'm trying to do I need to profile my setup/APIs in a high traffic environment. So, I need to create concurrent requests calling these APIs The profiling aims to get the system conditions with respect to memory (total memory consumed by the application), time (total execution time taken to create and execute the requests) and CPU-time (or the percentage of CPU consumption) What I have tried I am familiar with some memory profilers like mprof and time profiler like line_profiler. But I could not get a profiler for the CPU consumption. I have run the above two profilers (mprof and line_profiler) on a single execution to get the line-by-line profiling results for my code. But this focuses on the function wise results.I have also created parallel requests earlier using asyncio,etc but that was for some simple API-like programs without POSTMAN. My current APIs work with a lot of data in the body section of POSTMAN Where did I get stuck With docker, this problem gets trickier for me. Firstly, I am unable to get concurrent requests I do not know how to profile my APIs when using POSTMAN (perhaps there is an option to do it without POSTMAN) with respect to the three parameters: time, memory and CPU consumption.","I suppose that you've been using the embbed flask server(dev server) that is NOT production ready and,by default, it supports only on request per time. For concurrent requests should be looking to use gunicorn or some other wsgi server like uWsgi. Postman is only a client of your API, i don't see it's importance here. If you want to do a stress test or somethin like that, you can write your own script or use known tools, like jmetter. Hope it helps!",0.0,False,1,6868 2020-06-26 17:16:07.260,How to send clickable link and Mail in Chatterbot flask app,"I am using chatterbot, I want to send clickable link and Mail as per message sent by the user. I cant find any link or reference on how to do this",Try using linkify.... pip install autolink... linkify (bot.get_response(usr_text)),1.2,True,1,6869 2020-06-26 17:58:13.033,How to train a model for recognizing two objects?,"Ive got two separate models, one for mask recognition and another for face recognition. 
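(A minimal sketch of the pytesseract check from the blank-page answer above, assuming Tesseract is installed and the scan is an ordinary image file; the thresholding step is left out for brevity.)
    from PIL import Image
    import pytesseract

    def is_blank_page(path):
        text = pytesseract.image_to_string(Image.open(path))
        return text.strip() == ''        # no recognizable text -> treat the page as empty

    print(is_blank_page('scan_001.png'))   # hypothetical file name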
The problem now is: how do I combine both of these models so that they perform in unison as a single model which is able to: recognize whether or not a person is wearing a mask; simultaneously recognize who that person is if they aren't wearing a mask, apart from warning about the missing mask. What possibilities do I have to solve this problem?",You don't have to combine the two models and train them together; you have to train them separately. After training, first check with the mask-detection model what the probability/confidence score is that a mask is detected, and if the probability is low, say 40%-45%, then use the other model that recognises the person (a short sketch follows below).,0.2012947653214861,False,1,6870 2020-06-26 20:38:05.160,model for hand written text recognition,"I have been attempting to create a model that, given an image, can read the text from it. I am attempting to do this by implementing a CNN, an RNN, and CTC. I am doing this with TensorFlow and Keras. There are a couple of things I am confused about. For reading single digits, I understand that the last layer in the model should have 9 nodes, since those are the options. However, for reading words, aren't there infinitely many options, so how many nodes should I have in my last layer? Also, I am confused as to how I should add my CTC to my Keras model. Is it as a loss function?","I see two options here: You can construct your model to recognize separate letters of those words; then there are as many nodes in the last layer as there are letters and symbols in the alphabet that your model will read. You can make the output of your model a vector and then ""decode"" this vector using some other tool that can encode/decode words as vectors. One such tool I can think of is word2vec. Or there's an option to download some database of possible words and create such a tool yourself. The description of your model is very vague. If you want to get more specific help, then you should provide more info, e.g. some model architecture.",0.0,False,1,6871 2020-06-27 04:24:24.573,creating an api to update postgres db with a json or yaml payload,"I decided to ask this here after googling for hours. I want to create my own API endpoint on my own server. Essentially I want to be able to just send a yaml payload to my server; when it is received I want to kick off my Python scripts to parse the data and update the database. I'd also like to be able to retrieve data with a different call. I can code the back-end stuff, I just don't know how to make the bridge between hitting the server from outside and having the server do the things in the back-end in Python. Is Django the right way? I've spent a couple of days doing Django tutorials, really cool stuff, but I don't really need a website right now, yet whenever I search for web and Python together, Django pretty much always comes up. I don't need any Python code help, just some direction on how to create that bridge. Thanks.",DRF was what I was looking for. As suggested.,1.2,True,1,6872 2020-06-28 12:56:23.870,PySimpleGui: how to remove event-delay in Listboxes?,"When reading events from a simple button in PySimpleGui, spamming this button with mouse clicks will generate an event for each of the clicks. When you try to do the same with Listboxes (by setting enable_events to True for this element) it seems like there is a timeout after each generated event. If you click once every second, it will generate all the events. But if you spam-click it like before, it will only generate the first event.
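(A sketch of the cascade described in the mask/person answer above; mask_model and face_model stand for the asker's two separately trained models, and 0.45 is just the example threshold from the answer.)
    def handle_frame(image):
        mask_prob = mask_model.predict(image)    # hypothetical: confidence that a mask is worn
        if mask_prob >= 0.45:
            return 'mask on'                     # no need to identify the person
        person = face_model.predict(image)       # hypothetical: identity recognizer
        return 'no mask - ' + person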
I'm not sure if this behavior is intended (only started learning PySimpleGui today), but is there a way to get rid of this delay? I tried checking the docs but can't find it mentioned anywhere.","I think the reason is that a Listbox reacts to click events, but also to double click events. A Button does not. This behavior looks like consistent.",0.0,False,1,6873 2020-06-28 19:56:59.520,How to start multiple py files (2 discord bots) from one file at once,"I'm wondering how would I run my 2 discord bots at once from main, app.py, file. And after I kill that process (main file process), they both would stop. Tried os.system, didn't work. Tried multiple subprocess.Popen, didn't work. Am I doing something wrong? How would I do that?",I think the good design is to have one bot per .py file. If they both need code that is in app.py then they should 'import' the common code. Doing that you can just run both bot1.py and bot2.py.,0.0,False,1,6874 2020-06-28 21:34:07.527,pip3 install of Jupyter and Notebook problem when running,"I have tried all of the things here on stack and on other sites with no joy... I'd appreciate any suggestions please. I have installed Jupyter and Notebook using pip3 - please note that I have updated pip3 before doing so. However when trying to check the version of both jupyter --version and notebook --version my terminal is returning no command found. I have also tried to run jupyter, notebook and jupyter notebook and I am still getting the same message. I have spent nearly two days now trying to sort this out... I'm on the verge of giving up. I have a feeling it has something to do with my PATH variable maybe not pointing to where the jupyter executable is stored but I don't know how to find out where notebook and jupyter are stored on my system. many thanks in advance Bobby","So to summarise this is what I have found on this issue (in my experience): to run the jupyter app you can use the jupyter-notebook command and this works, but why? This is because, the jupyter-notebook is stored in usr/local/bin which is normally always stored in the PATH variable. I then discovered that the jupyter notebook or jupyter --version command will now work if I did the following: open my ./bash_profile file add the following to the bottom of the file: export PATH=$PATH:/Users/your-home-directory/Library/Python/3.7/bin this should add the location of where jupyter is located to your path variable. Alternatively, as suggested by @HackLab we can also do the following: python3 -m jupyter notebook Hopefully, this will give anyone else having the same issues I had an easier time resolving this issue.",1.2,True,2,6875 2020-06-28 21:34:07.527,pip3 install of Jupyter and Notebook problem when running,"I have tried all of the things here on stack and on other sites with no joy... I'd appreciate any suggestions please. I have installed Jupyter and Notebook using pip3 - please note that I have updated pip3 before doing so. However when trying to check the version of both jupyter --version and notebook --version my terminal is returning no command found. I have also tried to run jupyter, notebook and jupyter notebook and I am still getting the same message. I have spent nearly two days now trying to sort this out... I'm on the verge of giving up. I have a feeling it has something to do with my PATH variable maybe not pointing to where the jupyter executable is stored but I don't know how to find out where notebook and jupyter are stored on my system. 
many thanks in advance Bobby","have you tried locate Jupiter? It may tell you where jupyter is on your system. Also, why not try installing jupyter via anaconda to avoid the hassle?",0.0814518047658113,False,2,6875 2020-06-30 01:16:27.903,How do I use a cron job in order to insert events into google calendar?,"I wrote a Python script that allows me to retrieve calendar events from an externally connected source and insert them into my Google Calendar thanks to the Google Calendar's API. It works locally when I execute the script from my command line, but I would like to make it happen automatically so that the externally added events pop up in my Google Calendar automatically. It appears that a cron job is the best way to do this, and given I used Google Calendar's API, I thought it might be helpful to use Cloud Functions with Cloud Scheduler in order to make it happen. However, I really don't know where to start and if this is even possible because accessing the API requires OAuth with Google to my personal Google account which is something I don't think a service account (which I think I need) can do on my behalf. What are the steps I need to take in order to allow the script which I manually run and authenticates me with Google Calendar run every 60 seconds ideally in the cloud so that I don't need to have my computer on at all times? Things I’ve tried to do: I created a service account with full permissions and tried to create an http-trigger event that would theoretically run the script when the created URL is hit. However, it just returns an HTTP 500 Error. I tried doing Pub/Sub event targets to listen and execute the script, but that doesn’t work either. Something I’m confused about: with either account, there needs to be a credentials.json file in order to login; how does this file get “deployed” alongside the main function? Along with the token.pickle file that gets created when the authentication happens for the first time.","The way a service account works is that it needs to be preauthorized. You would take the service account email address and share a calendar with it like you would with any other user. The catch here being that you should only be doing this with calendars you the developer control. If these are calendars owned by others you shouldnt be using a service account. The way Oauth2 works is that a user is displayed a consent screen to grant your application access to their data. Once the user has granted you access and assuming you requested offline access you should have a refresh token for that users account. Using the refresh token you can request a new access token at anytime. So the trick here would be storing the users refresh tokens in a place that your script can access it then when the cron job runs the first thing it needs to do is request a new access token using its refresh token. So the only way you will be able to do this as a cron job is if you have a refresh token stored for the account you want to access. Other wise it will require it to open a web browser to request the users consent and you cant do that with a cron job.",0.6730655149877884,False,1,6876 2020-06-30 08:51:32.650,Python FBX SDK – How to enable auto-complete?,"I am using Pycharm to code with Python FBX SDK, but I don't how to enable auto-complete. I have to look at the document for function members. It's very tedious. So, does anyone know how to enable auto-complete for Python FBX SDK in editor? 
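(A rough sketch of the stored-refresh-token flow described in the Calendar answer above, using the google-auth and google-api-python-client packages; the client id/secret values and load_refresh_token() are hypothetical placeholders.)
    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    creds = Credentials(
        token=None,
        refresh_token=load_refresh_token(),   # hypothetical: read the refresh token saved at first consent
        token_uri='https://oauth2.googleapis.com/token',
        client_id='CLIENT_ID',
        client_secret='CLIENT_SECRET',
    )
    service = build('calendar', 'v3', credentials=creds)
    event = {'summary': 'Imported event',
             'start': {'dateTime': '2020-07-01T10:00:00Z'},
             'end': {'dateTime': '2020-07-01T11:00:00Z'}}
    service.events().insert(calendarId='primary', body=event).execute()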
Thanks!","Copy these two files [PATH_TO_YOUR_MOBU]\bin\config\Python\pyfbsdk_gen_doc.py [PATH_TO_YOUR_MOBU]\bin\config\Python\pyfbsdk_additions.py to another folder like d:\pyfbsdk_autocomplete for instance. rename the file pyfbsdk_gen_doc.py to pyfbsdk.py add the folder to your interpreter paths in PyCharm. (Interpreter Settings, Show All, Show paths for interpreter)",1.2,True,1,6877 2020-07-01 02:37:30.927,I must install django for every single project i make?,"i am new to Python programming language and Django. I am learning about web development with Django, however, each time I create a new project in PyCharm, it doesn´t recognize django module, so i have to install it again. Is this normal? Because i´ve installed django like 5 times. It doesn´t seem correct to me, there must be a way to install Django once and for all and not have the necessity of using 'pip install django' for each new project I create, I am sure there must be a way but I totally ignore it, I think I have to add django to path but I really don´t know how (just guessing). I will be thankful if anyone can help me :)","pycharm runs in a venv. A venv is an isolated duplicate (sort of) of python (interpreter) and other scripts. To use your main interpreter, change your interpreter location. The three folders (where your projects is, along with your other files) are just that. I think there is an option to inherit packages. I like to create a file called requirements.txt and put all my modules there. Comment for further help. In conclusion, this is normal.",1.2,True,1,6878 2020-07-01 22:53:41.403,How to show messages in Python?,"I am new to Django and trying to create an Application. My scenario is: I have a form on which there are many items and user can click on Add to Cart to add those item to Cart. I am validating if the user is logged in then only item should be added to Cart else a message or dialogue box must appear saying please login or sign up first. Although I was able to verify the authentication but the somehow not able to show the message if user is not logged in. For now I tried the below things: Using session messages, but somehow it needs so many places to take care when to delete or when to show the message Tried using Django Messages Framework, I checked all the configuration in settings.py and everything seems correct but somehow not showing up on HTML form Does anyone can help me here? I want to know a approach where I can authenticate the user and if user is not logged in a dialogue box or message should appear saying Please login or Signup. It should go when user refreshes the page.","If you are using render() for views.py you could add a boolean value to the context i.e render(request ""template_name.html"", {""is_auth"": True}) Assumedly you are doing auth in the serverside so you could tackle it this way. Not a great fix but might help.",0.0,False,1,6879 2020-07-02 20:11:13.097,installing Opencv on Mac Catalina,"I have successfully installed opencv 4.3.0 on my Mac OS Catalina, python 3.8 is installed also, but when I try to import cv2, I get the Module not found error. Please how do I fix this? thanks in advance.",Can you try pip install opencv-python?,0.0,False,2,6880 2020-07-02 20:11:13.097,installing Opencv on Mac Catalina,"I have successfully installed opencv 4.3.0 on my Mac OS Catalina, python 3.8 is installed also, but when I try to import cv2, I get the Module not found error. Please how do I fix this? 
thanks in advance.","I was having issue with installing opencv in my Macbook - python version 3.6 ( i downgraded it for TF 2.0) and MacOs Mojave 10.14. Brew , conda and pip - none of the three seemed to work for me. So i went to [https://pypi.org/project/opencv-python/#files] and downloaded the .whl that was suitable for my combo of python and MacOs versions. Post this navigated to the folder where it was downloaded and executed pip install ./opencv_python-4.3.0.36-cp36-cp36m-macosx_10_9_x86_64.whl",0.0,False,2,6880 2020-07-02 22:37:28.507,DIY HPC cluster to run Jupyter/Python notebooks,"I recently migrated my Python / Jupyter work from a macbook to a refurbrished Gen 8 HP rackmounted server (192GB DDR3 2 x 8C Xeon E5-2600), which I got off amazon for $400. The extra CPU cores have dramatically improved the speed of fitting my models particularly for decision tree ensembles that I tend to use a lot. I am now thinking of buying additional servers from that era (early-mid 2010s) (either dual or quad-socket intel xeon E5, E7 v1/v2) and wiring them up as a small HPC cluster in my apartment. Here's what I need help deciding: Is this a bad idea? Am I better off buying a GPU (like a gtx 1080). The reason I am reluctant to go the GPU route is that I tend to rely on sklearn a lot (that's pretty much the only thing I know and use). And from what I understand model training on gpus is not currently a part of the sklearn ecosystem. All my code is written in numpy/pandas/sklearn. So, there will be a steep learning curve and backward compatibility issues. Am I wrong about this? Assuming (1) is true and CPUs are indeed better for me in the short term. How do I build the cluster and run Jupyter notebooks on it. Is it as simple as buying an additional server. Designating one of the servers as the head node. Connecting the servers through ethernet. Installing Centos / Rocks on both machines. And starting the Jupyter server with IPython Parallel (?). Assuming (2) is true, or at least partly true. What other hardware / software do I need to get? Do I need an ethernet switch? Or if I am connecting only two machines, there's no need for it? Or do I need a minimum of three machines to utilize the extra CPU cores and thus need a switch? Do I need to install Centos / Rocks? Or are there better, more modern alternatives for the software layer. For context, right now I use openSUSE on the HP server, and I am pretty much a rookie when it comes to operating systems and networking. How homogeneous should my hardware be? Can I mix and match different frequency CPUs and memory across the machines? For example, having 1600 MHz DDR3 memory in one machine, 1333 MHz DDR3 in another? Or using 2.9 GHz E5-2600v1 and 2.6 GHz E5-2600v2 CPUs? Should I be worried about power? I.e. can I safely plug three rackmounted servers in the same power strip in my apartment? There's one outlet that I know if I plug my hairdryer in, the lights go out. So I should probably avoid that one :) Seriously, how do I run 2-3 multi-CPU machines under load and avoid tripping the circuit breaker? Thank you.","Nvidia's rapids.ai implements a fair bit of sklearn on gpus. Whether that is the part you use, only you can say. Using Jupiter notebooks for production is known to be a mistake. You don't need a switch unless latency is a serious issue, it rarely is. Completely irrelevant. For old hardware of the sort you are considering, you will be having VERY high power bills. 
But worse, since you will have many not-so-new machines, the probability of some component failing at any given time is high, so unless you seek a future in computer maintenance, this is not a great idea. A better idea is: develop your idea on your macbook/existing cluster, then rent an AWS spot instance (or two or three) for a couple of days. Cheaper, no muss, no fuss. everything just works.",1.2,True,1,6881 2020-07-03 10:03:02.433,How to reformat the date text in each individual box of a column?,"I currently converted a list of roughly 1200 items (1200 rows) and a problem arised when i looked at the date of each individual item and realised that the day and month was before the year which meant that ordering them by date would be useless. Is there any way I can reorder over 1200 dates so that they can be formatted correctly with me having to manually do it. Would I have to use python. I am very new to that and I don't know how to use it really. Here's an example of what I get: September 9 2016 And this is what i want: 2016 September 9 I am also using the microsoft excel if anyone was asking.","it must be date format. you can split date parts in other cells and re-merge them in preferred format...",0.0,False,1,6882 2020-07-03 15:06:50.723,How to convert py file to apk?,"I have created a calculator in Python using Tkinter module,though I converted it to exe but I am not able to convert it to apk.please tell me how to do so?",I personally haven't seen anyone do that. I think it would be best to try and re-make you calculator in the Kivy framework if you want to later turn it into an APK using bulldozer. Tkinter is decent for beginners but if you want to have nice Desktop UI's use PyQT5 and if you're interested in making mobile apps use Kivy. Tkinter is just a way to dip into using GUIs in python.,0.3869120172231254,False,1,6883 2020-07-04 03:40:27.593,How to diagnose inconsistent S3 permission errors,"I'm running a Python script in an AWS Lambda function. It is triggered by SQS messages that tell the script certain objects to load from an S3 bucket for further processing. The permissions seem to be set up correctly, with a bucket policy that allows the Lambda's execution role to do any action on any object in the bucket. And the Lambda can access everything most of the time. The objects are being loaded via pandas and s3fs: pandas.read_csv(f's3://{s3_bucket}/{object_key}'). However, when a new object is uploaded to the S3 bucket, the Lambda can't access it at first. The botocore SDK throws An error occurred (403) when calling the HeadObject operation: Forbidden when trying to access the object. Repeated invocations (even 50+) of the Lambda over several minutes (via SQS) give the same error. However, when invoking the Lambda with a different SQS message (that loads different objects from S3), and then re-invoking with the original message, the Lambda can suddenly access the S3 object (that previously failed every time). All subsequent attempts to access this object from the Lambda then succeed. I'm at a loss for what could cause this. This repeatable 3-step process (1) fail on newly-uploaded object, 2) run with other objects 3) succeed on the original objects) can happen all on one Lambda container (they're all in one CloudWatch log stream, which seems to correlate with Lambda containers). So, it doesn't seem to be from needing a fresh Lambda container/instance. Thoughts or ideas on how to further debug this?","Amazon S3 is an object storage system, not a filesystem. 
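(A small sketch of the split-and-re-merge idea from the date answer above, done in Python instead of extra Excel cells; the format strings assume dates written like 'September 9 2016'.)
    from datetime import datetime

    def reorder(date_text):
        d = datetime.strptime(date_text, '%B %d %Y')   # parse 'September 9 2016'
        return d.strftime('%Y %B %d')                  # -> '2016 September 09'

    print(reorder('September 9 2016'))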
It is accessible via API calls that perform actions like GetObject, PutObject and ListBucket. Utilities like s3fs allow an Amazon S3 bucket to be 'mounted' as a file system. However, behind the scenes s3fs makes normal API calls like any other program would. This can sometimes (often?) lead to problems, especially where files are being quickly created, updated and deleted. It can take some time for s3fs to update S3 to match what is expected from a local filesystem. Therefore, it is not recommended to use tools like s3fs to 'mount' S3 as a filesystem, especially for Production use. It is better to call the AWS API directly.",1.2,True,1,6884 2020-07-06 20:18:01.003,Spyder - how to execute python script in the current console?,"I've updated conda and spyder to the latest versions. I want to execute python scripts (using F5 hotkey) in the current console. However, the new spyder behaves unexpectedly, for example, if I enter in a console a=5 and then run test.py script that only contains a command print(a), there is an error: NameError: name 'a' is not defined. In the configuration options (command+F6) I've checked ""Execute in current console"" option. I am wondering why is this happening? Conda 4.8.2, Spyder 4.0.1","In the preferences, run settings, there is a ""General settings"", in which you can (hopefully still) deactivate ""Remove all variables before execution"". I even think to remember that this is new, so it makes sense.",0.0,False,2,6885 2020-07-06 20:18:01.003,Spyder - how to execute python script in the current console?,"I've updated conda and spyder to the latest versions. I want to execute python scripts (using F5 hotkey) in the current console. However, the new spyder behaves unexpectedly, for example, if I enter in a console a=5 and then run test.py script that only contains a command print(a), there is an error: NameError: name 'a' is not defined. In the configuration options (command+F6) I've checked ""Execute in current console"" option. I am wondering why is this happening? Conda 4.8.2, Spyder 4.0.1","I figured out the answer: In run configuration (command+F6) there is another option that needs to be checked: ""Run in console's namespace instead of empty one""",1.2,True,2,6885 2020-07-06 20:45:20.950,Resampling data from 1280 Hz to 240 Hz in python,"I have a python list of force data that was sampled at 1280 Hz, I have to get it do exactly 240 Hz in order to match it exactly with a video that was filmed at 240 Hz. I was thinking about downsampling to 160 Hz and then upsampling through interpolation to 240 Hz. Does anyone have any ideas on how to go about doing this? Exact answers not needed, just an idea of where to look to find out how.","Don't downsample and that upsample again; that would lead to unnecessary information loss. Use np.fft.rfft for a discrete Fourier transform; zero-pad in the frequency domain so that you oversample 3x to a sampling frequency of 3840 Hz. (Keep in mind that rfft will return an odd number of frequencies for an even number of input samples.) You can apply a low-pass filter in the frequency domain, making sure you block everything at or above 120 Hz (the Nyqvist frequency for 240 Hz sampling rate). Now use np.fft.irfft to transform back to a time-domain signal at 3840 Hz sampling rate. 
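(A minimal sketch of calling the S3 API directly instead of going through s3fs, as the answer above recommends; s3_bucket and object_key are the placeholders from the question.)
    import boto3
    import pandas as pd

    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket=s3_bucket, Key=object_key)   # a plain GetObject call
    df = pd.read_csv(obj['Body'])                           # Body is a streaming file-like object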
Because 240 Hz is exactly 16x lower than 3840 Hz and because the low-pass filter guarantees that there is no content above the Nyquist frequency, you can safely take every 16th sample.",1.2,True,1,6886 2020-07-07 09:52:29.370,how does one normalize a TensorFlow `Dataset` pipeline?,"I have my dataset in a TensorFlow Dataset pipeline and I am wondering how I can normalize it. The problem is that in order to normalize you need to load your entire dataset, which is the exact opposite of what the TensorFlow Dataset is for. So how exactly does one normalize a TensorFlow Dataset pipeline? And how do I apply it to new data? (i.e. data used to make a new prediction)","You do not need to normalise the entire dataset at once. Depending on the type of data you work with, you can use a .map() function whose sole purpose is to normalise the specific batch of data you are working with (for instance, divide each pixel within an image by 255.0). You can use, for instance, map(preprocess_function_1).map(preprocess_function_2).batch(batch_size), where preprocess_function_1 and preprocess_function_2 are two different functions that preprocess a Tensor. If you use .batch(batch_size), then the preprocessing functions are applied sequentially on batch_size number of elements; you do not need to alter the entire dataset prior to using tf.data.Dataset() (a short sketch follows below).",0.2012947653214861,False,1,6887 2020-07-07 11:19:47.523,Python Selenium bot to view Instagram stories | How can I click the profiles of people that have active stories?,"I have an Instagram bot that is made using Python and Selenium. It logs into Instagram, goes to a profile, selects the last post and selects the ""other x people liked this photo"" link to show the complete list of the people that liked the post (it can be done with the followers of the page too). Now I am stuck because I don't know how I can make the bot click only the profiles that have active stories, and how to make it scroll down (the problem is that the way I found to click on the profiles works only for the first profile, because when I click on the profile it opens the stories and closes the post, so when I reopen the post and the list of likes on this post it will re-click the same profile whose stories I have already seen). Does someone know how to do that, or a similar thing, maybe something even better that I didn't think of? I don't think code is needed, but if you need it I will post it, just let me know.","Have you tried using the ""back"" button on your browser window? Or open the page in a new tab, so you still have the old one to go back to.",0.3869120172231254,False,1,6888 2020-07-08 04:22:54.717,How do we get the output when 1 filter convolutes over 3 images?,"Imagine that I have a 28 x 28 grayscale image. Now if I apply a Keras convolutional layer with 3 filters of 3x3 size and 1x1 stride, I will get 3 images as output. Now if I again apply a Keras convolutional layer with only 1 filter of 3x3 size and 1x1 stride, how will this one 3x3 filter convolve over these 3 images, and how will we then get one image? What I think is that the one filter will convolve over each of the 3 images, resulting in 3 images, and then it adds all three images to get the one output image. I am using the TensorFlow backend of Keras. Please excuse my grammar, and please help me.","Answering my own question: I figured out that when the one filter convolves over the 3 images it results in 3 images, but then these images' pixel values are added together to get one resultant image.
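(A minimal sketch of the per-element .map() normalisation described in the tf.data answer above, assuming dataset is an existing tf.data.Dataset of (image, label) pairs with pixel values in 0-255.)
    import tensorflow as tf

    def normalise(image, label):
        return tf.cast(image, tf.float32) / 255.0, label   # per-element scaling, no full pass over the data

    dataset = dataset.map(normalise).batch(32)
    # apply the very same function to new data before calling model.predict on it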
You can indeed check by outputting 3 images for 3 filters on 1 image. when you add these 3 images yourself (matrix addition), and plot it, the resultant image makes a lot of sense.",1.2,True,1,6889 2020-07-08 09:52:48.397,How to rank images based on pairs of comparisons with SVM?,"I'm working on a neural network to predict scores on how ""good"" the images are. The images are the inputs to another machine learning algorithm, and the app needs to tell the user how good the image they are taking is for that algorithm. I have a training dataset, and I need to rank these images so I can have a score for each one for the regression neural network to train. I created a program that gives me 2 images from the training set at a time and I will decide which one wins (or ties). I heard that the full rank can be obtained from these comparisons using SVM Ranking. However, I haven't really worked with SVMs before. I only know the very basics of SVMs. I read a few articles on SVM Ranking and it seems like the algorithm turns the ranking problem to a classification problem, but the maths really confuses me. Can anyone explain how it works in simple terms and how to implement it in Python?","I did some more poking around on the internet, and found the solution. The problem was how to transform this ranking problem to a classification problem. This is actually very simple. If you have images (don't have to be images though, can be anything) A and B, and A is better than B, then we can have (A, B, 1). If B is better, then we have (A, B, -1) And we just need a normal SVM to take the names of the 2 images in and classify 1 or -1. That's it. After we train this model, we can give it all the possible pairs of images from the dataset and generating the full rank will be simple.",1.2,True,1,6890 2020-07-08 11:14:08.523,Efficient way to remove half of the duplicate items in a list,"If I have a list say l = [1, 8, 8, 8, 1, 3, 3, 8] and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times. So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. So the new list will be something like k=[1,8,8,3] What is the fastest way to do this? I did list.count() for every element but it was very slow.","I like using a trie set, as you need to detect duplicates to remove them, or a big hash set (lots of buckets). The trie does not go unbalanced and you do not need to know the size of the final set. An alternative is a very parallel sort -- brute force.",0.0340004944420038,False,2,6891 2020-07-08 11:14:08.523,Efficient way to remove half of the duplicate items in a list,"If I have a list say l = [1, 8, 8, 8, 1, 3, 3, 8] and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times. So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. So the new list will be something like k=[1,8,8,3] What is the fastest way to do this? I did list.count() for every element but it was very slow.","Instead of using a counter, which keeps track of an integer for each possible element of the list, try mapping elements to booleans using a dictionary. 
Map to true the first time they're seen, and then every time after that flip the bit, and if it's true skip the element.",0.2336958171850616,False,2,6891 2020-07-08 16:42:47.570,how to get position of thumb (in pixels) inside of vertical scale widget relatively upper right corner?,"Is there a way to get a position of thumb in pixels in vertical scale widget relative to upper right corner of widget? I want a label with scale value to pop up next to thumb when mouse pointer hovering over it, for this I need thumb coordinates.","The coords method returns the location along the trough corresponding to a particular value. This is from the canonical documentation for the coords method: Returns a list whose elements are the x and y coordinates of the point along the centerline of the trough that corresponds to value. If value is omitted then the scale's current value is used. Note: you asked for coordinates relative to upper-right corner. These coordinates are relative to the upper-left. You can get the width of the widget with winfo_width() and do a simple transformation.",1.2,True,1,6892 2020-07-09 10:59:20.653,user interaction with django,"I'm working on a question and answer system with django. my problem : I want the app to get a question from an ontology and according the user's answer get the next question. how can I have all the questions and user's answers displayed. i'm new to django, I don't know if I can use session with unauthenticated user and if I need to use websocket with the django channels library.","Given that you want to work with anonymous users the simplest way to go is to add a hidden field on the page and use it to track the user progress. The field can contain virtual session id that will point at a model record in the backend, or the entire Q/A session(ugly but fast and easy). Using REST or sockets would require similar approach. I can't tell from the top of my mind if you can step on top of the built in session system. It will work for registered users, but I do believe that for anonymous users it gets reset on refresh(may be wrong here).",0.3869120172231254,False,1,6893 2020-07-09 22:14:52.293,How do I use external applications to scrape data from a mobile app?,"I am trying to scrape data from a mobile application (Pokemon HOME). The app shows usage statistics and other useful statistics that I want to scrape. I want to scrape this on my computer using python. I am having trouble determining how to scrape data from a mobile application. I tried using Fiddler and an Android emulator to intercept server data but I am unfamiliar with the software to be able to understand what exactly to do. Any help would be very beneficial. Even just suggestions for resources where I can learn how to do this on my own. Thank you!","It's possible but it's really a hard nut to break. There's a huge difference between Mobile app and web app Web app is accessible through WAN ,v.i.z World area network. Scraping is fairly and squarely easier. In Python, you can bs4 to do it. But in Mobile app, essentially and effectively, it's more about LAN. It's installed locally. Install an app to remote control your device from another device (usually required root) However, whole data might not be available.",0.0,False,1,6894 2020-07-09 23:28:48.100,How does python collections accept multiple data types?,"The most popular python version is CPython, written in C. 
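(A short sketch of the boolean-flip idea from the answer above; it keeps every other occurrence of each value in a single O(n) pass.)
    l = [1, 8, 8, 8, 1, 3, 3, 8]

    keep = {}
    k = []
    for x in l:
        keep[x] = not keep.get(x, False)   # flip the bit on every occurrence
        if keep[x]:                        # True on the 1st, 3rd, 5th ... occurrence
            k.append(x)

    print(k)   # [1, 8, 8, 3]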
What i want to know is how is it possible to write a python collection using C when C arrays can only store on type of data at the same time?","This is not how python does it in C, but I've written a small interpreted language in Java (which also only allows arrays/lists with 1 data type) and implemented mixed type lists. I had a Value interface and a class for each type of value and those classes implemented the Value interface. I had FunctionValue class, a StringValue class, a BooleanValue class, and a ListValue class, all of which implemented the value interface. The ListValue class has a field of type List which contains the list's elements. All methods on the Value interface and its implementing classes which do stuff like numeric addition, string appending, list access, function calling, etc. initially take in Value objects and do different things based on which actual kind of Value it is. You could do something similar in C, albeit at a lower level since it doesn't have interfaces and classes to help you manage your types.",0.0,False,1,6895 2020-07-10 20:37:35.990,Python same Network Card Game,"So I'm doing this python basics course and my final project is to create a card game. At the bottom of the instructions I get this For extra credit, allow 2 players to play on two different computers that are on the same network. Two people should be able to start identical versions of your program, and enter the internal IP address of the user on the network who they want to play against. The two applications should communicate with each other, across the network using simple HTTP requests. Try this library to send requests: http://docs.python-requests.org/en/master/ http://docs.python-requests.org/en/master/user/quickstart/ And try Flask to receive them: http://flask.pocoo.org/ The 2-player game should only start if one person has challenged the other (by entering their internal IP address), and the 2nd person has accepted the challenge. The exact flow of the challenge mechanism is up to you. I already investigated how flask works and kind of understand how python-requests works too. I just can't figure out how to make those two work together. If somebody could explain what should I do or tell me what to watch or read I would really appreciate it.","it would be nice to see how far you've come before answer (as hmm suggested you in a comment), but i can tell you something theorical about this. What you are talking about is a client-server application, where server need to elaborate the result of clients actions. What i can suggest is to learn about REST API, that you can use to let client and server to communicate in a easy way. Your clients will send http requests to server exposed APIs. From what you wrote, you have a basically constraints that should be respected during client and server communication, here reasumed: Someone search for your ip and send you a challenge request You have received a challenge that you refuse or accept; only if you accept the challenge you can start the game As you can see from the project specifications the entire challenge mechanism is up to you, so you can decide the best for you. I would begin start thinking to a possible protocol that make use of REST API to start initial communication between client and server and let you define a basic challenge mechanism. Enjoy programming :).",0.0,False,1,6896 2020-07-11 14:03:16.807,Putting .exe file in windows autorun with python,"I'm writing installer for my program with python. 
When everything is extracted, how can i make my program .exe file to run with Windows startup? I want to make it fully automatic, without any user input. Thanks.","You don't need to use Python for this. You can copy your .exe file and paste it in this directory: C:\Users\YourUsername\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup It will run automatically when your computer starts.",0.0,False,1,6897 2020-07-12 16:23:15.963,"What's the difference between calling pip as a command line command, and calling it as a module of the python command?","When installing python modules, I seem to have two possible command line commands to do so. pip install {module} and py -{version} -m pip install {module} I suppose this can be helpful for selecting which version of python has installed which modules? But there's rarely a case where I wouldn't want a module installed for all possible versions. Also the former method seems to have a pesky habit of being out-of-date no matter how many times I call: pip install pip --upgrade So are these separate? Does the former just call the latest version of the latter?","TLDR: Prefer ... -m pip to always install modules for a specific Python version/environment. The pip command executes the equivalent of ... -m pip. However, bare pip does not allow to select which Python version/environment to install to – the first match in your executable search path is selected. This may be the most recent Python installation, a virtual environment, or any other Python installation. Use the ... -m pip variant in order to select the Python version/environment for which to install a module.",0.5457054096481145,False,2,6898 2020-07-12 16:23:15.963,"What's the difference between calling pip as a command line command, and calling it as a module of the python command?","When installing python modules, I seem to have two possible command line commands to do so. pip install {module} and py -{version} -m pip install {module} I suppose this can be helpful for selecting which version of python has installed which modules? But there's rarely a case where I wouldn't want a module installed for all possible versions. Also the former method seems to have a pesky habit of being out-of-date no matter how many times I call: pip install pip --upgrade So are these separate? Does the former just call the latest version of the latter?","So the pip install module is callable if you have already installed the pip. The pip install pip --upgrade upgrades the pip and if you replace the pip into a module name it will upgrade that module to the most recent one. the py -{version} -m pip install {module} is callable if you have installed many versions of python - for example most of the Linux servers got installed python 2, so when you install the Python 3, and you want to install a module to version 3, you will have to call that command.",0.0,False,2,6898 2020-07-13 03:40:35.067,how to get names of all detected models from existing tensorflow lite instance?,"I'm looking to build a system that alerts me when there's a package at my front door. I already have a solution for detecting when there's a package (tflite), but I don't know how to get the array of detected objects from the existing tflite process and then pull out an object's title through the array. Is this even possible, or am I doing this wrong? Also, the tflite model google gives does not know how to detect packages, but I'll train my own for that",I've figured out a solution. 
I can just use the same array that the function that draws labels uses (labels[int(classes[i])) to get the name of the object in place i of the array (dunno if I'm using the correct terminology but whatever). hopefully this will help someone,0.0,False,1,6899 2020-07-13 04:19:48.737,Upgrading pycharm venv python version,"I have python 3.6 in my venv on PyCharm. However, I want to change that to Python 3.8. I have already installed 3.8, so how do I change my venv python version? I am on windows 10. Changing the version on the project intepreter settings seems to run using the new venv not my existing venv with all the packages I have installed. Attempting to add a new intepreter also results in the ""OK"" button being greyed out, possibly due to the current venv being not empty.","In pycharm you can do further steps: Go in File-->Settings-->Python Interpreter Select different python environment if already available from the drop down, If not click on ""Add"". Select New Environment option, then in Base interpreter you can select 3.8 version",0.2012947653214861,False,1,6900 2020-07-13 11:13:34.620,How to embed my python chatbot to a website,"I am very new to python, and I am trying to create a chatbot with python for a school project. I am almost done with creating my chatbot, but I don't know how to create a website to display it, I know how to create a website with Flask but how can I embed the chatbot code into the website?","In your flask code you can also embed the chatbot predict-functions into specific routes of your flask app. This would require following steps: Just before you start the flask server you train the chatbot to ensure its predict function works propperly. After that you can specifiy some more route-functions to your flask app. In those functions you grab input from the user (from for example route parameters), send it through the chatbots predict function and then send the respons (probably with postprocessing if you wish) back to the requester. Sending to the requester can be done through many different ways. Two examples just of my head would be via display (render_template) to the webpage (if the request came in over GET-Request via usual browser site-opening request) or by sending a request to the users ip itself. As a first hand experience i coupled the later mechanism to a telegram bot on my home-automation via post-request which itself then sends the response to me via telegram.",0.0,False,1,6901 2020-07-13 12:20:28.610,two versions of python installed at two places,"I had uninstalled python 3.8 from my system and installed 3.7.x But after running the command where python and where python3 in the cmd I get two different locations. I was facing issues regarding having two versions of python. So I would like to know how i can completely remove python3 located files.","To delete a specific python version, you can use which python and remove the python folder using sudo rm -rf . You might also have to modify the PATH env variable to the location which contains the python executables of the version you want. Or you can install Anaconda [https://www.anaconda.com/products/individual] which helps to manage multiple versions of python for you.",0.0,False,1,6902 2020-07-14 20:41:18.337,How to encrypt data using the password from User,"I have a flask site. It's specifically a note app. At the moment I am storing the user notes as plaintext. That means that anyone with access to the server which is me has access to the notes. 
I want to encrypt the data with the user password, so that only the user can access it using their password, but that would require the user to input his/her password each time they save their notes, retrive the notes or even updates them. I am hashing the password obviously. Anyone has any idea how this could be done?","Use session to store user information, the Flask-Login extension would be a good choice for you.",-0.2012947653214861,False,1,6903 2020-07-15 03:10:47.947,I have a visual studio code terminal problem how do i fix it so that i have the integrated one instead of external?,"I'm using VS Code on Windows 10. I had no problems until a few hours ago (at the time of post), whenever I want to run a python program, it opens terminals outside of VS Code like Win32 and Git Bash. How do I change it back to the integrated terminal I usually had?","With your Python file open in VS Code: Go to Run > Open Configurations, if you get prompted select ""Python File"" In the launch.json file, change the value of ""console"" to ""integratedTerminal""",0.3869120172231254,False,1,6904 2020-07-15 12:26:42.943,How can I remove/delete a virtual python environment created with virtualenv in Windows 10?,"I want to learn how to remove a virtual environment using the windows command prompt, I know that I can easily remove the folder of the environment. But I want to know if there is a more professional way to do it.","There is no command to remove virtualenv, you can deactivate it or remove the folder but unfortunately virtualenv library doesn't contain any kind of removal functionality.",1.2,True,1,6905 2020-07-16 07:00:18.590,"In NumPy, how to use a float that is larger than float64's max value?","I have a calculation that may result in very, very large numbers, that won fit into a float64. I thought about using np.longdouble but that may not be large enough either. I'm not so interested in precision (just 8 digits would do for me). It's the decimal part that won't fit. And I need to have an array of those. Is there a way to represent / hold an unlimited size number, say, only limited by the available memory? Or if not, what is the absolute max value I can place in an numpy array?","Can you rework the calculation so it works with the logarithms of the numbers instead? That's pretty much how the built-in floats work in any case... You would only convert the number back to linear for display, at which point you'd separate the integer and fractional parts; the fractional part gets exponentiated as normal to give the 8 digits of precision, and the integer part goes into the ""×10ⁿ"" or ""×eⁿ"" or ""×2ⁿ"" part of the output (depending on what base logarithm you use).",1.2,True,1,6906 2020-07-16 15:46:39.480,Why does the dimensions of Kivy app changes after deployment?,"As mentioned in the question, I build a kivy app and deploy it to my android phone. The app works perfectly on my laptop but after deploying it the font size changes all of a sudden and become very small. I can't debug this since everything works fine. The only problem is this design or rather the UI. Does anyone had this issue before? Do you have a suggestion how to deal with it? PS: I can't provide a reproducible code here since everything works fine. I assume it is a limitation of the framework but I'm not sure.","It sounds like you coded everything in terms of pixel sizes (the default units for most things). The difference on the phone is probably just that the pixels are smaller. 
Use the kivy.metrics.dp helper function to apply a rough scaling according to pixel density. You'll probably find that if you currently have e.g. width: 50, on the desktop then width: dp(50) will look the same while on the phone it will be twice as big as before. PS: I can't provide a reproducible code here since everything works fine. Providing a minimal runnable example would, in fact, have let the reader verify whether you were attempting to compensate for pixel density.",1.2,True,1,6907 2020-07-16 16:58:29.950,Adding files to gitignore in Visual Studio Code,"In Visual Studio Code, with git extensions installed, how do you add files or complete folders to the .gitignore file so the files do not show up in untracked changes. Specifically, using Python projects, how do you add the pycache folder and its contents to the .gitignore. I have tried right-clicking in the folder in explorer panel but the pop-menu has no git ignore menu option. Thanks in advance. Edit: I know how to do it from the command line. Yes, just edit the .gitignore file. I was just asking how it can be done from within VS Code IDE using the git extension for VS Code.","So after further investigation, it is possible to add files from the pycache folder to the .gitignore file from within VS Code by using the list of untracked changed files in the 'source control' panel. You right-click a file and select add to .gitignore from the pop-up menu. You can't add folders but just the individual files.",1.2,True,1,6908 2020-07-17 06:35:43.907,how to get proper formatted string?,"if I print the string in command prompt I I'm getting it i proper structure ""connectionstring""."""".""OT"".""ORDERS"".""SALESMAN_ID"" but when I write it to json, I'm getting it in below format \""connectionstring\"".\""\"".\""OT\"".\""ORDERS\"".\""SALESMAN_ID\"" how to remove those escape characters? when It's happening?","What is happening? Json serialization and de-serialization is happening. From wikipedia: In the context of data storage, serialization (or serialisation) is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later. [...] The opposite operation, extracting a data structure from a series of bytes, is deserialization. In console, you de-serialize the json but when storing in file, you serialize the json.",0.0,False,1,6909 2020-07-17 11:57:33.973,how do we check similarity between hash values of two audio files in python?,"About the data : we have 2 video files which are same and audio of these files is also same but they differ in quality. that is one is in 128kbps and 320kbps respectively. we have used ffmpeg to extract the audio from video, and generated the hash values for both the audio file using the code : ffmpeg -loglevel error -i 320kbps.wav -map 0 -f hash - the output was : SHA256=4c77a4a73f9fa99ee219f0019e99a367c4ab72242623f10d1dc35d12f3be726c similarly we did it for another audio file to which we have to compare , C:\FFMPEG>ffmpeg -loglevel error -i 128kbps.wav -map 0 -f hash - SHA256=f8ca7622da40473d375765e1d4337bdf035441bbd01187b69e4d059514b2d69a Now we know that these audio files and hash values are different but we want to know how much different/similar they are actually , for eg: like some distance in a-b is say 3 can someone help with this?","You cannot use a SHA256 hash for this. This is intentional. It would weaken the security of the hash if you could. 
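You can see that quickly with hashlib: two inputs that differ by a single byte give completely unrelated digests, so there is no meaningful "distance" to compute between them (the input strings below are only stand-ins for your audio bytes):
import hashlib

a = hashlib.sha256(b"almost identical audio bytes").hexdigest()
b = hashlib.sha256(b"almost identical audio bytez").hexdigest()
print(a)  # the two digests share no usable similarity,
print(b)  # even though the inputs differ by one character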
what you suggest is akin to differential cryptoanalysis. SHA256 is a modern cryptographic hash, and designed to be safe against such attacks.",0.2012947653214861,False,1,6910 2020-07-17 19:42:44.647,Add Kivy Widgets Gradually,"I would like to ask how could I add dynamically some widgets in my application one by one and not all at once. Those widgets are added in a for loop which contains the add_widget() command, and is triggered by a button. So I would like to know if there is a way for the output to be shown gradually, and not all at once, in the end of the execution. Initially I tried to add a delay inside the for loop, but I'm afraid it has to do with the way the output is built each time. EDIT: Well, it seems that I hadn't understood well the use of Clock.schedule_interval and Clock.schedule_once, so what I had tried with them (or with time.sleep) didn't succeed at all. But obviously, this was the solution to my problem.",Use Clock.schedule_interval or Clock.schedule_once to schedule each iteration of the loop at your desired time spacing.,1.2,True,1,6911 2020-07-18 01:31:21.407,Why isn't lst.sort().reverse() valid?,"Per title. I do not understand why it is not valid. I understand that they mutate the object, but if you call the sort method, after it's done then you'd call the reverse method so it should be fine. Why is it then that I need to type lst.sort() then on the line below, lst.reverse()? Edit: Well, when it's pointed out like that, it's a bit embarrassing how I didn't get it before. I literally recognize that it mutated the object and thus returns a None, but I suppose it didn't register that also meant that you can't reverse a None-type object.","When you call lst.sort(), it does not return anything, it changes the list itself. So the result of lst.sort() is None, thus you try to reverse None which is impossible.",1.2,True,1,6912 2020-07-18 05:52:32.897,Converting numpy boolean array to binary array,"I have a boolean numpy array which I need to convert it to binary, therefore where there is true it should be 255 and where it is false it should be 0. Can someone point me out how to write the code?","Let x be your data in numpy array Boolean format. Try np.where(x,255,0)",0.0,False,1,6913 2020-07-18 16:00:43.153,"df['colimn_name'] vs df.loc[:, 'colimn_name']","I would like more info. on the answer to the following question: df[‘Name’] and 2. df.loc[:, ‘Name’], where: df = pd.DataFrame(['aa', 'bb', 'xx', 'uu'], [21, 16, 50, 33], columns = ['Name', 'Age']) Choose the correct option: 1 is the view of original dataframe and 2 is a copy of original dataframe 2 is the view of original dataframe and 1 is a copy of original dataframe Both are copies of original dataframe Both are views of original dataframe I found more than one answer online but not sure. I think the answer is number 2 but when i tried x = df['name'] then x[0] = 'cc' then print(df) I saw that the change appeared in the original dataframe. So how the changed appeared in the original dataframe although I also got this warining: A value is trying to be set on a copy of a slice from a DataFrame I just want to know more about the difference between the two and weather one is really a copy of the original dataframe or not. 
Thank you.","Both are the views of original dataframe One can be used to add more columns in dataframe and one is used for specifically getting a view of a cell or row or column in dataframe.",0.0,False,1,6914 2020-07-19 11:57:34.290,In-memory database and programming language memory management / garbage collection,"I've been reading about in-memory databases and how they use RAM instead of disk-storage. I'm trying to understand the pros and cons of building an in-memory database with different programming languages, particularly Java and Python. What would each implementation offer in terms of speed, efficiency, memory management and garbage collection? I think I could write a program in Python faster, but I'm not sure what additional benefits it would generate. I would imagine the language with a faster or more efficient memory management / garbage collection algorithm would be a better system to use because that would free up resources for my in-memory database. From my basic understanding I think Java's algorithm might be more efficient that Python's at freeing up memory. Would this be a correct assumption? Cheers","You choose an in-memory database for performance, right? An in-memory database written in C/C++ and that provides an API for Java and/or Python won't have GC issues. Many (most?) financial systems are sensitive to latency and 'jitter'. GC exacerbates jitter.",0.0,False,1,6915 2020-07-20 08:27:36.160,How to know the response data type of API using requests,"I have one simple question, is there a easy way to know the type of API's response? Fox example: Using requests post method to send api requests, some apis will return data format as .xml type or .json type, how can i know the response type so i can choose not to convert to .json use json() when response type is .xml?",Use r.headers.get('content-type') to get the response type,1.2,True,1,6916 2020-07-20 14:58:08.290,Calculating how much area of an ellipsis is covered by a certain pixel in Python,"I am working with Python and currently trying to figure out the following: If I place an ellipsis of which the semi-axes, the centre's location and the orientation are known, on a pixel map, and the ellipsis is large enough to cover multiple pixels, how do I figure out which pixel covers which percentage of the total area of the ellipsis? As an example, let's take a map of 10*10 pixels (i.e. interval of [0,9]) and an ellipsis with the centre at (6.5, 6.5), semi-axes of (0.5, 1.5) and an orientation angle of 30° between the horizontal and the semi-major axis. I have honestly no idea, so any help is appreciated. edit: To clarify, the pixels (or cells) have an area. I know the area of the ellipsis, its position and its orientation, and I want to find out how much of its area is located within pixel 1, how much it is within pixel 2 etc.","This is math problem. Try math.exchange rather than stackoverflow. I suggest you to transform the plane: translation to get the center in the middle, rotation to get the ellipsis's axes on the x-y ones and dilatation on x to get a circle. And then work with a circle on rhombus tiles. Your problem won't be less or more tractable in the new formulation but the math and code you have to work on will be slightly lighter.",0.0,False,1,6917 2020-07-20 17:32:32.860,How to dinamically inject HTML code in Django,"In a project of mine I need to create an online encyclopedia. 
In order to do so, I need to create a page for each entry file, which are all written in Markdown, so I have to covert it to HTML before sending them to the website. I didn't want to use external libraries for this so I wrote my own python code that receives a Markdown file and returns a list with all the lines already formatted in HTML. The problem now is that I don't know how to inject this code to the template I have in Django, when I pass the list to it they are just printed like normal text. I know I could make my function write to an .html file but I don't think it's a great solution thinking about scalability. Is there a way to dynamically inject HTML in Django? Is there a ""better"" approach to my problem?","You could use the safe filter in your template! So it would look like that. Assuming you have your html in a string variable called my_html then in your template just write {{ my_html | safe }} And don’t forget to import it!",1.2,True,1,6918 2020-07-21 09:12:16.213,EnvironmentNotWritableError on Windows 10,"I am trying to get python-utils package and utils module work in my anaconda3. However, whenever I open my Anaconda Powershell and try to install the package it fails with the comment EnvironmentNotWritableError: The current user does not have write permissions to the target environment. environment location: C:\ProgramData\Anaconda3 I searched for solutions and was advised that I update conda. However, when I ran the comment below conda update -n base -c defaults conda it also failed with EnvironmentNotWritableError showing. Then I found a comment that says maybe my conda isn't installed at some places, so I tried conda install conda which got the same error. Then I tried conda install -c conda-forge python-utils which also failed with the same error. Maybe it's the problem with setting paths? but I don't know how to set them. All I know about paths is that I can type sys.path and get where Anaconda3 is running.","I have got the same non writable error in anaconda prompt for downloading pandas,then sorted the the error by running anaconda prompt as administrator. it worked for me since i already had that path variable in environment path",0.3869120172231254,False,2,6919 2020-07-21 09:12:16.213,EnvironmentNotWritableError on Windows 10,"I am trying to get python-utils package and utils module work in my anaconda3. However, whenever I open my Anaconda Powershell and try to install the package it fails with the comment EnvironmentNotWritableError: The current user does not have write permissions to the target environment. environment location: C:\ProgramData\Anaconda3 I searched for solutions and was advised that I update conda. However, when I ran the comment below conda update -n base -c defaults conda it also failed with EnvironmentNotWritableError showing. Then I found a comment that says maybe my conda isn't installed at some places, so I tried conda install conda which got the same error. Then I tried conda install -c conda-forge python-utils which also failed with the same error. Maybe it's the problem with setting paths? but I don't know how to set them. All I know about paths is that I can type sys.path and get where Anaconda3 is running.",Run the PowerShell as Administrator. Right Click on the PowerShell -> Choose to Run as Administrator. 
Then you'll be able to install the required packages.,1.2,True,2,6919 2020-07-21 19:42:40.367,"Selenium(Python): After clicking button, wait until all the new elements (which can have different attributes) are loaded","How do I wait for all the new elements that appear on the screen to load after clicking a specific button? I know that I can use the presence_of_elements_located function to wait for specific elements, but how do I wait until all the new elements have loaded on the page? Note that these elements might not necessarily have one attribute value like class name or id.","Well in reality you can't, but you can run a script to check for that. However be wary that this will not work on javascript/AJAX elements. self.driver.execute_script(""return document.readyState"").equals(""complete""))",1.2,True,1,6920 2020-07-22 10:14:37.227,Scipy Differential Evolution initial solution(s) input,"Does anyone know how to feed in an initial solution or matrix of initial solutions into the differential evolution function from the Scipy library? The documentation doesn't explain if its possible but I know that initial solution implementation is not unusual. Scipy is so widely used I would expect it to have that type of functionality.","Ok, after review and testing I believe I now understand it. There are a set of parameters that the scipy.optimize.differential_evolution(...) function can accept, one is the init parameter which allows you to upload an array of solutions. Personally I was looking at a set of coordinates so enumerated them into an array and fed in 99 other variations of it (100 different solutions) and fed this matrix into the inti parameter. I believe it needs to have more than 4 solutions or your are going to get a tuple error. I probably didn't need to ask/answer the question though it may help others that got equally confused.",1.2,True,1,6921 2020-07-22 18:39:12.457,How do i check if it should be an or a in python?,"so im making a generator (doesn't really matter what one it is) and im trying to make the a/ans appear before nouns correctly. for example: ""an apple plays rock paper scissors with a banana"" and not: ""a apple plays rock paper scissors with an banana"" the default thing for the not-yet determined a/an is """" so i need to replace the """" with either a or an depending on if the letter after it is a vowel or not. how would i do this?","Pseudo code first find letter 'a' or 'an' in string and keep track of it then find first word after it if word starts with vowel: make it 'an' Else: make it 'a' this rules breaks with words like 'hour' or 'university' so also make exception rule(find a list of words if u can)",0.0,False,1,6922 2020-07-23 02:51:14.593,Schoology API understanding,I can get to the user information using the API but I cannot access course information. Can someone explain what I need to do to make the correct call for course information?,The easiest way to answer these questions is to try it in Postman. Highly recommended.,0.0,False,1,6923 2020-07-23 08:31:12.210,Is an abstract class without any implementation and variables effectively interface?,"I'm reviewing the concepts of OOP, reading . Here the book defines interface as The set of all signatures defined by an object’s operations is called the interface to the object. (p.39) And the abstract class as An abstract class is one whose main purpose is to define a common interface for its subclasses. 
An abstract class will defer some or all of its implementation to operations defined in subclasses; hence an abstract class cannot be instantiated. The operations that an abstract class declares but doesn’t implement are called abstract operations. Classes that aren’t abstract are called concrete classes. (p.43) And I wonder, if I define an abstract class without any internal data (variables) and concrete operations, just some abstract operations, isn't it effectively just a set of signatures? Isn't it then just an interface? So this is my first question: Can I say an abstract class with only abstract functions is ""effectively (or theoretically)"" an interface? Then I thought, the book also says something about types and classes. An object’s class defines how the object is implemented. The class defines the object’s internal state and the implementation of its operations. In contrast, an object’s type only refers to its interface—the set of requests to which it can respond. An object can have many types, and objects of different classes can have the same type. (p.44) Then I remembered that some languages, like Java, does not allow multiple inheritance while it allows multiple implementation. So I guess for some languages (like Java), abstract class with only abstract operations != interfaces. So this is my second question: Can I say an abstract class with only abstract functions is ""generally equivalent to"" an interface in languages that support multiple inheritance? My first question was like checking definitions, and the second one is about how other languages work. I mainly use Java and Kotlin so I'm not so sure about other languages that support multiple inheritance. I do not expect a general, comprehensive review on current OOP languages, but just a little hint on single language (maybe python?) will be very helpful.","No. In Java, every class is a subclass of Object, so you can't make an abstract class with only abstract methods. It will always have the method implementations inherited from Object: hashCode(), equals(), toString(), etc. Yes, pretty much. In C++, for example, there is no specific interface keyword, and an interface is just a class with no implementations. There is no universal base class in C++, so you can really make a class with no implementations. Multiple inheritance is not really the deciding feature. Java has multiple inheritance of a sort, with special classes called ""interfaces"" that can even have default methods. It's really the universal base class Object that makes the difference. interface is the way you make a class that doesn't inherit from Object.",1.2,True,1,6924 2020-07-23 11:53:33.000,How to control Django with Javascript?,"I am building a web application with Django and I show the graphs in the website. The graphs are obtained from real time websites and is updated daily. I want to know how can I send graphs using matplotlib to template and add refresh option with javascript which will perform the web scraping script which I have written. The main question is which framework should I use? AJAX, Django REST, or what?","You're better off using a frontend framework and calling the backend for the data via JS. separating the front and backend is a more contemporary approach and has some advantages over doing it all in the backend. From personal experience, it gets really messy mixing Python and JS in the same system. 
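To make the "call the backend for the data via JS" part concrete, here is a minimal sketch of a Django view that only hands the chart data back as JSON (the view name and the numbers are invented for illustration):
from django.http import JsonResponse

def chart_data(request):
    # in a real project this would come from your scraping job or database
    data = {"labels": ["Mon", "Tue", "Wed"], "values": [3, 7, 5]}
    return JsonResponse(data)
A frontend framework can then request that endpoint whenever the data is refreshed and draw the chart itself.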
Use Django as a Rest-ful backend, and try not to use AJAX in the frontend, then pick a frontend of your choice to deliver the web app.",0.3869120172231254,False,1,6925 2020-07-23 15:56:17.107,How can I deploy a streamlit application in repl.it?,"I installed/imported streamlit, numpy, and pandas but I do not know how I can see the charts I have made. How do I deploy it on repl.it?","You can not deploy streamlit application within repl.it because In order to protect against CSRF attacks, we send a cookie with each request. To do so, we must specify allowable origins, which places a restriction on cross-origin resource sharing. One solution is push your code from repl.it to GitHub. Then deploy from GitHub on share.streamlit.io.",0.2012947653214861,False,1,6926 2020-07-23 17:07:46.247,How to get jupyter notebook theme in vscode,I am a data scientist use jupyter notebook a lot and also have started to do lot of development work and use Vscode for development. so how can I get Jupyter notebook theme in vscode as well? I know how to open a Jupyter notebook in vscode by installing an extension but I wanted to know how to get Jupyter notebook theme for vs code. so it gets easier to switch between both ide without training eyes,"You can edit your VScode's settings by: 1- Go to your Jupyter extension => Extension settings => and check ""Ignore Vscode Theme"". 2- Click on File => preference=> color Theme 3- Select the theme you need. You can download the theme extension from VSCode's extension store, for example: Markdown Theme Kit; Material Theme Kit. Note: You need to restart or reload VSCode to see the changes.",0.296905446847765,False,1,6927 2020-07-24 18:18:58.150,KivyMD MDFlatButton not clickable & Kivy ScreenManager not working,"So I'm making this game with Kivy and it's a game where there's a start screen with an MDToolbar, an MDNavigationDrawer, two Images, three MDLabels and a OneLineIconListItem that says 'Start Game' and when you click on it the game is supposed to start. The game screen contains: Viruses Masked man Soap which you use to hit the viruses Current score in an MDLabel A button to go back to the start screen Issues: The background music for the game starts playing before the game screen is shown (When the start screen is shown) - ScreenManager issue When I click the button to go back to the start screen, the button doesn't get clicked - MDFlatButton issue I used on_touch_down, on_touch_move, and on_touch_up for this game and I know that's what's causing the MDFlatButton issue. So does anyone know how I'm supposed to have the on_touch_* methods defined AND have clickable buttons? And I don't know how to fix the ScreenManager issue either. I know I haven't provided any code here, but that's because this post is getting too long. I already got a post deleted because people thought the post was too long and I was providing too much code and too less details. And I don't want that to happen again. If anyone needs to view the code of my project, I will leave a Google Docs link to it. Thanks in advance!","I fixed my app. Just in case anyone had the same question, I'm gonna post the answer here. To get a clickable button, you have to create a new Screen or Widget and add the actual screen as a widget to the new class. Then, you can add buttons to the new class. This works because the button is on top of the actual screen. So when you click anywhere in the button's area, the button gets clicked and the on_touch_* methods of the actual screen don't get called. 
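A rough sketch of that layering trick (every class and widget name here is invented, it is not the original project code):
from kivy.uix.floatlayout import FloatLayout
from kivymd.uix.button import MDFlatButton

class GameHolder(FloatLayout):
    # wraps the real game widget and puts the button on top of it
    def __init__(self, game_widget, **kwargs):
        super().__init__(**kwargs)
        self.add_widget(game_widget)  # the widget whose on_touch_* methods run the game
        self.add_widget(MDFlatButton(
            text="Back",
            pos_hint={"right": 0.98, "top": 0.98},
            on_release=self.go_back,  # added last, so it receives the touch first and the game never sees it
        ))

    def go_back(self, *args):
        pass  # switch the ScreenManager back to the start screen here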
And to fix the ScreenManager issue, you just have to expirement.",1.2,True,1,6928 2020-07-25 22:12:31.897,Tkinter pickle save and load,help me please how can I use the pickle save if I have a lot of entry and I want to save all in one file and load form the file for each entry separately?,"You can't pickle tkinter widgets. You will have to extract the data and save just the data. Then, on restart you will have to unpickle the data and insert it back into the widgets.",0.0,False,1,6929 2020-07-26 07:50:11.350,Windows desktop application read session data from browser,"I'm writing a desktop and web app, Just need to know how can i authorize this desktop application with same open web app browser after installed?","if you mean to authorize your desktop app via the login of user from any web browser, you can use TCP/UDP socket or also for example , call an api every 2 seconds to check is user is loged in or not. in web browser , if user had be loged in , you can set login state with its ip or other data in database to authorize the user from desktop app.",0.0,False,1,6930 2020-07-26 13:19:22.760,How to add a python matplotlib interactive figure to vue.js web app?,"I have a plot made using Python matplotlib that updates every time new sensor data is acquired. I also have a web GUI using vue. I'd like to incorporate the matplotlib figure into the web GUI and have it update as it does when running it independently. This therefore means not just saving plot and loading it as an image. Can anyone advise how to achieve this?","In my opinion it's not reasonable way, There are very good visualizing tools powered by javascript, for example chart.js. you can do your computation with python in back-end and pass data to front-end by API and plot every interactive diagrams you want using javascript.",1.2,True,1,6931 2020-07-27 06:36:07.150,How to instal python packages for Spyder,"I am using the IDE called Spyder for learning Python. I would like to know in how to go about in installing Python packages for Spyder? Thank you","Spyder is a package too, you can install packages using pip or conda, and spyder will access them using your python path in environment. Spyder is not a package manager like conda,, but an IDE like jupyter notebook and VS Code.",0.1618299653758019,False,2,6932 2020-07-27 06:36:07.150,How to instal python packages for Spyder,"I am using the IDE called Spyder for learning Python. I would like to know in how to go about in installing Python packages for Spyder? Thank you","I have not checked if the ways described by people here before me work or not. I am running Spyder 5.0.5, and for me below steps worked: Step 1: Open anaconda prompt (I had my Spyder opened parallelly) Step 2: write - ""pip install package-name"" Note: I got my Spyder 5.0.5 up and running after installing the whole Anaconda Navigator 2.0.3.",0.0,False,2,6932 2020-07-28 16:08:13.623,What is the difference between sys.stdin.read() and sys.stdin.readline(),"Specifically, I would like to know how to give input in the case of read(). I tried everywhere but couldn't find the differences anywhere.","read() recognizes each character and prints it. But readline() recognizes the object line by line and prints it out.",0.2012947653214861,False,2,6933 2020-07-28 16:08:13.623,What is the difference between sys.stdin.read() and sys.stdin.readline(),"Specifically, I would like to know how to give input in the case of read(). 
I tried everywhere but couldn't find the differences anywhere.",">>> help(sys.stdin.read) Help on built-in function read: read(size=-1, /) method of _io.TextIOWrapper instance Read at most n characters from stream. Read from underlying buffer until we have n characters or we hit EOF. If n is negative or omitted, read until EOF. (END) So you need to send EOF when you are done (*nix: Ctrl-D, Windows: Ctrl-Z+Return): >>> sys.stdin.read() asd 123 'asd\n123\n' The readline is obvious. It will read until newline or EOF. So you can just press Enter when you are done.",0.3869120172231254,False,2,6933 2020-07-28 17:13:22.017,"Is there any simple way to pass arguments based on their position, rather than kwargs. Like a positional version of kwargs?","Is there a generic python way to pass arguments to arbitrary functions based on specified positions? While it would be straightforward to make a wrapper that allows positional argument passing, it would be incredibly tedious for me considering how frequently I find myself needing to pass arguments based on their position. Some examples when such would be useful: when using functools.partial, to partially set specific positional arguments passing arguments with respect to a bijective argument sorting key, where 2 functions take the same type of arguments, but where their defined argument names are different An alternative for me would be if I could have every function in my code automatically wrapped with a wrapper that enables positional argument passing. I know several ways this could be done, such as running my script through another script which modifies it, but before resorting to that I'd like to consider simpler pythonic solutions.",For key arguments use **kwargs but for positional arguments use *args.,0.0,False,1,6934 2020-07-28 22:24:48.747,NaN values with Pandas Spearman and Kendall correlations,"I am attempting to calculate Kendall's tau for a large matrix of data stored in a Pandas dataframe. Using the corr function, with method='kendall', I am receiving NaN for a row that has only one value (repeated for the length of the array). Is there a way to resolve it? The same issue happened with Spearman's correlation as well, presumably because Python doesn't know how to rank an array that has a single repeated value, which leaves me with Pearson's correlation -- which I am hesitant to use due to its normality and linearity assumptions. Any advice is greatly appreciated!","I decided to abandon the complicated mathematics in favor of intuition. Because the NaN values arose only on arrays with constant values, it occurred to me that there is no relationship between it and the other data, so I set its Spearman and Kendall correlations to zero.",0.0,False,1,6935 2020-07-28 23:02:11.343,Cannot find Python 3.8.2 path on Windows 10,"I have Windows 10 on my computer and when I use the cmd and check python --version, I get python 3.8.2. But when I try to find the path for it, I am unable to find it through searching on my PC in hidden files as well as through start menu. I don't seem to have a python 3.8 folder on my machine. Anybody have any ideas how to find it?","If you're using cmd (ie Command Prompt), and typing python works, then you can get the path for it by doing where python. 
It will list all the pythons it finds, but the first one is what it'll be using.",0.1352210990936997,False,1,6936 2020-07-29 02:33:18.637,Pygame how to let balls collide,I want to make a script in pygame where two balls fly towards each other and when they collide they should bounce off from each other but I don't know how to do this so can you help me?,"Its pretty easy you just check if the x coordinate is in the same spot as the other x coordinate. For example if you had one of the x coordinated called x, and another one called i(there are 2 x coordinates for both of the balls) then you could just say if oh and before I say anything esle this example is fi your pygame window is a 500,500. You could say if x == 250: x -= 15. And the other way around for i. If i == 250: i += 15. Ther you go!. Obviously there are a few changes you have to do, but this is the basic code, and I think you would understand this",0.0,False,1,6937 2020-07-29 08:54:18.833,How to set intervals between multiple requests AWS Lambda API,"I have created an API using AWS Lambda function (using Python). Now my react js code hits this API whenever an event fire. So user can request API as many times the events are fired. Now the problem is we are not getting the response from lambda API sequentially. Sometime we are getting the response of our last request faster than the previous response of previous request. So we need to handle our response in Lambda function sequentially, may be adding some delay between 2 request or may be implementing throttling. So how can I do that.","Did you check the concurrency setting on Lambda? You can throttle the lambda there. But if you throttle the lambda and the requests being sent are not being received, the application sending the requests might be receiving an error unless you are storing the requests somewhere on AWS for being processed later. I think putting an SQS in front of lambda might help. You will be hitting API gateway, the requests get sent to SQS, lambda polls requests concurrently (you can control the concurrency) and then send the response back.",0.1352210990936997,False,2,6938 2020-07-29 08:54:18.833,How to set intervals between multiple requests AWS Lambda API,"I have created an API using AWS Lambda function (using Python). Now my react js code hits this API whenever an event fire. So user can request API as many times the events are fired. Now the problem is we are not getting the response from lambda API sequentially. Sometime we are getting the response of our last request faster than the previous response of previous request. So we need to handle our response in Lambda function sequentially, may be adding some delay between 2 request or may be implementing throttling. So how can I do that.","You can use SQS FIFO Queue as a trigger on the Lambda function, set Batch size to 1, and the Reserved Concurrency on the Function to 1. The messages will always be processed in order and will not concurrently poll the next message until the previous one is complete. SQS triggers do not support Batch Window - which will 'wait' until polling the next message. 
This is a feature for Stream based Lambda triggers (Kinesis and DynamoDB Streams) If you want to streamlined process, Step Function will let you manage states using state machines and supports automatic retry based off the outputs of individual states.",1.2,True,2,6938 2020-07-29 11:03:18.770,"Is it possible to store an image with a value in a way similar to an array, in a database (Firebase or any other)?","Would it be possible to store an image and a value together in a database? Like in a array? So it would be like [image, value]. I’m just trying to be able to access the image to print that and then access the value later (for example a image if a multi-choice question and its answer is the value). Also how would I implement and access this? I’m using Firebase with the pyrebase wrapper for python but if another database is more suitable I’m open to suggestions.","you can set your computer as a server and in database you can store like [image_path, value].",0.0,False,1,6939 2020-07-29 11:45:40.760,How to change the Anaconda environment of a jupyter notebook?,"I have created a new Anaconda environnement for Python. I managed to add it has an optional environnement you can choose when you create a new Notebook. Hovewer, I'd like to know how can I change the environnement of an already existing Notebook.","open your .ipynb file on your browser. On top, there is Kernel tab. You can find your environments under Change Kernel part.",0.2012947653214861,False,1,6940 2020-07-29 13:58:51.300,"'pychattr' library in Python, 'n_simulations' parameter","Does anyone know if it is possible to use n_simulation = None in 'MarkovModel' algorithm in 'pychhatr' library in Python? It throws me an error it must be an integer, but in docsting i have information like that: 'n_simulations : one of {int, None}; default=10000' I`d like to do something like nsim = NULL in 'markov_model' in 'ChannelAttribution' package in R, these two algorithms are similarly implemented. I don`t know how does it works exactly, how many simulations from a transition matrix I have using NULL. Could anyone help with this case? Regards, Sylwia","Out of curiosity I spent some minutes staring intensely at the source code of both pychattr module and ChannelAttribution package. I'm not really familiar with the model, but are you really able to call this in R with ""nsim=NULL""? Unless I missed something if you omit this parameter it will use value 100000 as the default and if parameter exists, the R wrapper will complain if it's not a positive number. Regards, Maciej",0.0,False,2,6941 2020-07-29 13:58:51.300,"'pychattr' library in Python, 'n_simulations' parameter","Does anyone know if it is possible to use n_simulation = None in 'MarkovModel' algorithm in 'pychhatr' library in Python? It throws me an error it must be an integer, but in docsting i have information like that: 'n_simulations : one of {int, None}; default=10000' I`d like to do something like nsim = NULL in 'markov_model' in 'ChannelAttribution' package in R, these two algorithms are similarly implemented. I don`t know how does it works exactly, how many simulations from a transition matrix I have using NULL. Could anyone help with this case? Regards, Sylwia","I checked that 'pychattr' (Python) doesn`t support value None but it supports n_simulations = 0 and it sets n_simulations to 1e6 (1 000 000). 'ChannelAttribution' (R) replaces nsim = NULL and nsim = 0 to nsim = 1e6 (1 000 000) too. 
In latest version of 'ChannelAttribution' (27.07.2020) we have nsim_start parameter instead of nsim and it doesn`t support 0 or NULL value anymore. Important: default value of nsim_start is 1e5 (100 000) and from my experience it`s not enough in many cases. Regards, Sylwia",0.0,False,2,6941 2020-07-29 16:10:55.583,How to know the alpha or critical value of your t test analysis?,"How do you decide the critical values(alpha) and analyze with the p value example: stats.ttest_ind(early['assignment1_grade'], late['assignment1_grade']) (2 series with score of their assignments) I understand the concept that if the p value is greater than the alpha value then the null hypothesis cant be neglected. Im doing a course and instructor said that the alpha value here is 0.05 but how do you determine it.","The alpha value cannot be determined in the sense that there were a formula to calculate it. Instead, it is arbitrarily chosen, ideally before the study is conducted. The value alpha = 0.05 is a common choice that goes back to a suggestion by Ronald Fisher in his influential book Statistical Methods for Research Workers (first published in 1925). The only particular reason for this value is that if the test statistic has a normal distribution under the null hypothesis, then for a two-tailed test with alpha = 0.05 the critical values of the test statistic will be its mean plus/minus 2 (more exactly, 1.96) times its standard deviation. In fact, you don't need alpha when you calculate the p value, because you can just publish the p value and then every reader can decide whether to consider it low enough for any given purpose or not.",0.0,False,1,6942 2020-07-31 14:50:10.383,Giving interactive control of a Python program to the user,"I need my Python program to do some stuff, and at a certain point give control to the user (like a normal Python shell when you run python3 or whatever) so that he can interact with it via command line. I was thinking of using pwntools's interactive() method but I' m not sure how I would use that for the local program instead of a remote. How would I do that? Any idea is accepted, if pwntools is not needed, even better.","Use IPython If you haven't already, add the package IPython using pip, anaconda, etc. Add to your code: from IPython import embed Then where you want a ""breakpoint"", add: embed() I find this mode, even while coding to be very efficient.",0.3869120172231254,False,1,6943 2020-07-31 15:51:48.670,Python Coverage how to generate Unittest report,"In python I can get test coverage by coverage run -m unittest and the do coverage report -m / coverage html to get html report. However, it does not show the actual unit test report. The unit test result is in the logs, but I would like to capture it in a xml or html, so I can integrate it with Jenkins and publish on each build. This way user does not have to dig into logs. I tried to find solution to this but could not find any, please let me know, how we can get this using coverage tool. I can get this using nose2 - nose2 --html-report --with-coverage --coverage-report html - this will generate two html report - one for unit test and other for coverage. 
But for some reason this fails when I run with actual project (no coverage data collected / reported)","Ok for those who end up here , I solved it with - nose2 --html-report --with-coverage --coverage-report html --coverage ./ The issue I was having earlier with 'no coverage data' was fixed by specifying the the directory where the coverage should be reported, in the command above its with --coverage ./",1.2,True,1,6944 2020-08-01 13:20:07.317,Rename hundred or more column names in pandas dataframe,"I am working with the John Hopkins Covid data for personal use to create charts. The data shows cumulative deaths by country, I want deaths per day. Seems to me the easiest way is to create two dataframes and subtract one from the other. But the file has column names as dates and the code, e.g. df3 = df2 - df1 subtracts the columns with the matching dates. So I want to rename all the columns with some easy index, for example, 1, 2, 3, .... I cannot figure out how to do this?","Thanks for the time and effort but I figured out a simple way. for i, row in enumerate(df): df.rename(columns = { row : str(i)}, inplace = True) to change the columns names and then for i, row in enumerate(df): df.rename(columns = { row : str( i + 43853)}, inplace = True) to change them back to the dates I want.",0.0,False,1,6945 2020-08-02 09:58:49.600,JWT authorization and token leaks,"I need help understanding the security of JWT tokens used for login functionality. Specifically, how does it prevent an attack from an attacker who can see the user's packets? My understanding is that, encrypted or not, if an attacker gains access to a token, they'll be able to copy the token and use it to login themselves and access a protected resource. I have read that this is why the time-to-live of a token should be short. But how much does that actually help? It doesn't take long to grab a resource. And if the attacker could steal a token once, can't they do it again after the refressh? Is there no way to verify that a token being sent by a client is being sent from the same client that you sent it to? Or am I missing the point?","how does it prevent an attack from an attacker who can see the user's packets? Just because you can see someone's packets doesn't mean that you can see the contents. HTTPS encrypts the traffic so even if someone manages to capture your traffic, they will no be able to extract JWT out of it. Every website that is using authentication should only run through HTTPS. If someone is able to perform man-in-the-middle attack then that is a different story. they'll be able to copy the token and use it to login themselves and access a protected resource Yes but only as the user they stole the token from. JWT are signed which means that you can't modify their content without breaking the signature which will be detected by the server (at least it is computationally infeasible to find the hash collision such that you could modify the content of the JWT). For highly sensitive access (bank accounts, medical data, enterprise cloud admin accounts...) you will need at least 2-factor authentication. And if the attacker could steal a token once, can't they do it again after the refressh? Possibly but that depends on how the token has been exposed. If the attacked sits on the unencrypted channel between you and the server then sure they can repeat the same process but this exposure might be a result of a temporary glitch/human mistake which might be soon repaired which will prevent attack to use the token once it expires. 
Is there no way to verify that a token being sent by a client is being sent from the same client that you sent it to? If the attacker successfully performs man-in-the-middle attack, they can forge any information that you might use to verify the client so the answer is no, there is no 100% reliable way to verify the client. The biggest issue I see with JWTs is not JWTs themselves but the way they are handled by some people (stored in an unencrypted browser local storage, containing PII, no HTTPS, no 2-factor authentication where necessary, etc...)",1.2,True,1,6946 2020-08-02 12:15:56.920,Python runs in Docker but not in Kubernetes hosted in Raspberry Pi cluster running Ubuntu 20,"Here is the situation. Trying to run a Python Flask API in Kubernetes hosted in Raspberry Pi cluster, nodes are running Ubuntu 20. The API is containerized into a Docker container on the Raspberry Pi control node to account for architecture differences (ARM). When the API and Mongo are ran outside K8s on the Raspberry Pi, just using Docker run command, the API works correctly; however, when the API is applied as a Deployment on Kubernetes the pod for the API fails with a CrashLoopBackoff and logs show 'standard_init_linux.go:211: exec user process caused ""exec format error""' Investigations show that the exec format error might be associated with problems related to building against different CPU architectures. However, having build the Docker image on a Raspberry Pi, and are successfully running the API on the architecture, I am unsure this could the source of the problem. It has been two days and all attempts have failed. Can anyone help?","Fixed; however, something doesn't seem right. The Kubernetes Deployment was always deployed onto the same node. I connected to that node and ran the Docker container and it wouldn't run; the ""exec format error"" would occur. So, it looks like it was a node specific problem. I copied the API and Dockerfile onto the node and ran Docker build to create the image. It now runs. That does not make sense as the Docker image should have everything it needs to run. Maybe it's because a previous image build against x86 (the development machine) remained in that nodes Docker cache/repository. Maybe the image on the node is not overwritten with newer images that have the same name and version number (the version number didn't increment). That would seem the case as the spin up time of the image on the remote node is fast suggesting the new image isn't copied on the remote node. That likely to be what it is. I will post this anyway as it might be useful. Edit: allow me to clarify some more, the root of this problem was ultimately because there was no shared image repository in the cluster. Images were being manually copied onto each RPI (running ARM64) from a laptop (not running ARM64) and this manual process caused the problem. An image build on the laptop was based from a base image incompatible with ARM64; this was manually copied to all RPI's in the cluster. This caused the Exec Format error. Building the image on the RPI pulled a base image that supported ARM64; however, this build had to be done on all RPI because there was no central repository in the cluster that Kubernetes could pull newly build ARM64 compatible images to other RPI nodes in the cluster. Solution: a shared repository Hope this helps.",0.6730655149877884,False,1,6947 2020-08-02 12:29:32.010,Getting json from html with same name,"I have issue with scraping page and getting json from it. 
= 3.8, DLLs are no longer imported from the PATH. If gdalXXX.dll is in the PATH, then set the USE_PATH_FOR_GDAL_PYTHON=YES environment variable to feed the PATH into os.add_dll_directory(). I've been looking for a solution to this but can't seem to figure out how to fix this. Anybody has a solution?","use: from osgeo import gdal instead of: import gdal",0.0,False,1,7107 2020-11-06 04:17:49.740,How to Get coordinates of detected area in opencv using python,"I have been able to successfully detect an object(face and eye) using haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get coordinates of mid point of the two eyes. and want to store them in a array. Can any one help me? how can i do this. any guide","I suppose you have the coordinates for the bounding boxes of both eyes. Something like X1:X2 Y1:Y2 for both boxes. You just have to find the center of these boxes: (X2-X1)/2+X1 and (Y2-Y1)/2+Y1 You'll get two XY coordinates from this, basically just do the above again with these coordinates, and you'll get the center point",0.0,False,2,7108 2020-11-06 04:17:49.740,How to Get coordinates of detected area in opencv using python,"I have been able to successfully detect an object(face and eye) using haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get coordinates of mid point of the two eyes. and want to store them in a array. Can any one help me? how can i do this. any guide","So you already detected the eye? You also have a bounding box around the eye? So your question comes down to calculatiing the distance between 2 bounding boxes and then dividing it by 2? Or do I misunderstand? If you need exact the center between the two eyes a good way to go about that would be to take the center of the 2 boxes bounding the 2 eyes. Calculate the distance between those two points and divide it by 2. If you're willing to post your code I'm willing to help more with writing code.",0.0,False,2,7108 2020-11-06 12:51:43.300,How to search on Google with Selenium in Python?,I'm really new to web scraping. Is there anyone that could tell me how to search on google.com with Selenium in Python?,Selenium probably isn't the best. other libraries/tools would work better. BeautifulSoup is the first one that comes to mind,0.1352210990936997,False,1,7109 2020-11-06 18:15:31.933,Download cloudtrail event,"I need some advise in one of my usecase regarding Cloudtrail and Python boto3. I have some cloudtrail events like configured and i need to send the report of all those events manually by downloading the file of events. I am planning to automate this stuff using python boto3. Can you please advise how can i use boto3 to get the cloudtrail events for some specific date i should paas at runtime along with the csv or json files downloaded and sent over the email. As of now i have created a python script which shows the cloudtrail event but not able to download the files. Please advise","My suggestions is to simply configure the deliver of those events to an S3 bucket, and you have there the file of events. This configuration is part of your trail configuration and doesn't need boto3. 
You can then access events files stored on S3 using boto3 (personally the best way to interact with AWS resources), and manipulate those files as you prefer.",0.0,False,1,7110 2020-11-07 02:37:34.713,Saving Tensorflow models with custom layers,"I read through the documentation, but something wasn't clear for me: if I coded a custom layer and then used it in a model, can I just save the model as SavedModel and the custom layer automatically goes within it or do I have to save the custom layer too? I tried saving just the model in H5 format and not the custom layer. When I tried to load the model, I had an error on the custom layer not being recognized or something like this. Reading through the documentation, I saw that saving to custom objects to H5 format is a bit more involved. But how does it work with SavedModels?","If I understand your question, you should simply use tf.keras.models.save_model(,'file_name',save_format='tf'). My understanding is that the 'tf' format automatically saves the custom layers, so loading doesn't require all libraries be present. This doesn't extend to all custom objects, but I don't know where that distinction lies. If you want to load a model that uses non-layer custom objects you have to use the custom_objects parameter in tf.keras.models.load_model(). This is only necessary if you want to train immediately after loading. If you don't intend to train the model immediately, you should be able to forego custom_objects and just set compile=False in load_model. If you want to use the 'h5' format, you supposedly have to have all libraries/modules/packages that the custom object utilizes present and loaded in order for the 'h5' load to work. I know I've done this with an intializer before. This might not matter for layers, but I assume that it does. You also need to implement get_config() and save_config() functions in the custom object definition in order for 'h5' to save and load properly.",0.0,False,1,7111 2020-11-07 06:19:25.707,How to determine whether function returns an iterable object which calculates results on demand?,"How can one surelly tell that function retuns an iterable object, which calculates results on demand, and not an iterator, which returns already calculated results? For e.g. function filter() from python's documentation says: Construct an iterator from those elements of iterable for which function returns true Reading that I cat tell that this function returns an object which implements iterable protocol but I can't be sure it won't eat up all my memory if use it with generator which reads values from 16gb file untill I read further and see the Note: Note that filter(function, iterable) is equivalent to the generator expression (item for item in iterable if function(item)) So, how does one can tell that function calculates returned results on demand and not just iterating over temporary lists which holds already calculated values? I have to inspect sources?","If the doc says that a function returns an iterator, it's pretty safe to assume it calculates items on the fly to save memory. If it did calculate all its items at once, it would almost certainly return a list.",1.2,True,1,7112 2020-11-07 12:40:31.890,How to get only the whole number without rounding-off?,"how do you get only the whole number of a non-integer value without the use of rounding-off? I have searched for it and I seem to be having a hard time. 
For example:
w = 2.20  w = 2.00
x = 2.50  x = 2.00
y = 3.70  y = 3.00
z = 4.50  z = 4.00
Is it as simple as this, or might that go wrong for some values?
x = 2.6 or x = 2.5 or x = 2.4
x = int(x)
x = 2
Is it really as simple as that? Thanks for answering this stupid question.","You can just divide it by 1, but use floor division (//), like this: x = x // 1",0.6730655149877884,False,1,7113 2020-11-08 15:39:33.647,How to install OpenCV in Docker (CentOS)?,"I am trying to install OpenCV in a docker container (CentOS). I tried installing python first and then tried yum install opencv-contrib but it doesn't work. Can someone help me out as to how to install OpenCV in Docker (CentOS)?","To install OpenCV use the command: sudo yum install opencv opencv-devel opencv-python And when the installation is completed, use this command to verify: pkg-config --modversion opencv",0.0,False,1,7114 2020-11-10 12:41:13.300,How can I bypass the 429-error from www.instagram.com?,"i'm solliciting you today because i've a problem with selenium. my goal is to make a full automated bot that create an account with parsed details (mail, pass, birth date...) So far, i've managed to almost create the bot (i just need to access to gmail and get the confirmation code). My problem is here, because i've tried a lot of things, i have a Failed to load resource: the server responded with a status of 429 ()
I know how to read the whole csv from s3. sqlContext.read.csv(s3_path, header=True) but how do I exclude these 2 values from the file and read the rest of the file.","You don't. A file is a sequential storage medium. A CSV file is a form of text file: it's character-indexed. Therefore, to exclude columns, you have to first read and process the characters to find the column boundaries. Even if you could magically find those boundaries, you would have to seek past those locations; this would likely cost you more time than simply reading and ignoring the characters, since you would be interrupting the usual, smooth block-transfer instructions that drive most file buffering. As the comments tell you, simply read the file as is and discard the unwanted data as part of your data cleansing. If you need the file repeatedly, then cleanse it once, and use that version for your program.",0.2012947653214861,False,1,7117 2020-11-12 19:06:49.007,python on windows 10 cannot upgrade modules in virtual environment,"I has been forced to develop python scripts on Windows 10, which I have never been doing before. I have installed python 3.9 using windows installer package into C:\Program Files\Python directory. This directory is write protected against regular user and I don't want to elevate to admin, so when using pip globally I use --user switch and python installs modules to C:\Users\AppData\Roaming\Python\Python39\site-packages and scripts to C:\Users\AppData\Roaming\Python\Python39\Scripts directory. I don't know how he sets this weird path, but at least it is working. I have added this path to %Path% variable for my user. Problems start, when I'm trying to use virtual environment and upgrade pip: I have created new project on local machine in C:\Users\Projects and entered the path in terminal. python -m venv venv source venv\Scrips\activate pip install --upgrade pip But then I get error: ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access denied: 'C:\Users\\AppData\Local\Temp\pip-uninstall-7jcd65xy\pip.exe' Consider using the --user option or check the permissions. So when I try to use --user flag I get: ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv. So my questions are: why it is not trying to install everything inside virtual enviroment (venv\Scripts\pip.exe)? how I get access denied, when this folder suppose to be owned by my user? When using deprecated easy_install --upgrade pip everything works fine.",I recently had the same issue for some other modules. My solution was simply downgrade from python 3.9 to 3.7. Or make an virtual environment for 3.7 and use that and see how it works.,0.3869120172231254,False,1,7118 2020-11-13 07:25:48.307,How to show a variable value on the webcam video stream? (python OpenCV),"I coded to open webcam video on a new window using OpenCV cv2.VideoCapture(0). You can display text on webcam video using cv2.putText() command. But it displays string values only. How to put varying values in the webcam video that is being displayed on a new window? For example, if value of variable p is changing all the time, you can easily display it on the command window by writing print(p). But how can we display values of p over the webcam video?","You can also show changing variables using cv2.putText() method. Just need to convert the variable into string using str() method. Suppose you want to show variable x that is for example an integer and it is always changing. 
You can use cv2.putText(frame, str(x), org, font, fontScale, color, thickness, cv2.LINE_AA) to do it (You should fill org,font, etc.).",1.2,True,1,7119 2020-11-13 09:59:06.667,Is there any solution regarding to PyQt library doesn't work in Mac OS Big Sur?,"I've done some project using PyQt library for my class assignmnet. And I need to check my application working before I submit it. Today, 3 hours ago I updated my Mac book OS to Big Sur. And I found out that PyQt library doesn't work. It doesn't show any GUI. Are there someone know how to fix it?","Related to this, after upgrading to BigSur my app stopped launching its window...I am using the official Qt supported binding PySide2/shiboken2 Upgrading from PySide2 5.12 to 5.15 fixed the issue. Steps: Remove PySide2/shiboken2 pip3 uninstall PySide2 pip3 uninstall shiboken2 Reinstall pip3 install PySide2",0.0,False,2,7120 2020-11-13 09:59:06.667,Is there any solution regarding to PyQt library doesn't work in Mac OS Big Sur?,"I've done some project using PyQt library for my class assignmnet. And I need to check my application working before I submit it. Today, 3 hours ago I updated my Mac book OS to Big Sur. And I found out that PyQt library doesn't work. It doesn't show any GUI. Are there someone know how to fix it?","Rolling back to PyQt5==5.13.0 fixed the issue for me! you should uninstall PyQt5 and then install it using pip install PyQt5==5.13.0",0.5457054096481145,False,2,7120 2020-11-13 23:42:07.327,access methods on one socketio namespace from a different one,"I have a flask application that uses flask-socketio and python-socketio to facilitate communication between a socketio server in the cloud and a display device via a hardware device. I have a display namespace which exposes the display facing events, and also uses a separate client class which connects and talks to the server in the cloud. This works well as designed, but now I want to trigger the connection method in my client class from a different namespace. So far I have not been able to get this to work. What I have tried is adding the display namespace class to the flask context, then passing that into the socketio.on_namespace() method. Then from the other namespace I am grabbing it from current_app and trying to trigger the connection to the cloud server. This returns a 'RuntimeError: working outside of application context' error. So at this point I'm still researching how to do this correctly, but I was hoping someone has dealt with something like this before, and knows how to access methods on one namespace from a different one.","I found a solution. Instead of instantiating my client class from the display namespace, I instantiate it before I add the namespaces to socketio. Then I pass the client object into both namespaces when I call the socketio.on_namespace() method.",0.0,False,1,7121 2020-11-15 07:45:39.937,pypi package imports python file instead of package,"After pip install package_name from my recently uploaded pypi package It imports python filename directly after installing, I wanted to use like below import package_name or from package_name import python_file but this doesnt work instead this works import python_file even package is installed name is package_name pypi package name package_name and My directory structure is below package_name setup.py folder1 python_file In setup.py , i've used package_dir={'': 'folder_1'} but even import folder_1 or from folder_1 import python_file didnt worked. I tried if adding __init__.py inside folder_1, it didnt solved. 
I've been following Mark Smith - Publish a (Perfect) Python Package on PyPI, which told this way , but any idea what is happening, how can i solve it??","So what you actual did is to tell python that the root folder is folder_1. This is not what you want. You just need to tell that folder_1 (or actually replace it by package_name, see below) is a package and to declare it using: packages = {'folder1'}. Usually, people don't do it but let the function find_packages() to do the work for them by packages=find_packages() In addition package folder should contain a __init__.py. to conclude you need a folder structure like below and use find_packages(). It is OK and even popular choice that the project name and it single main package have the same name. project_name setup.py package_name __init__.py python_file.py",1.2,True,1,7122 2020-11-15 11:43:31.347,Checkers board in kivy,"What it is the best way to make a chessboard for checkers using Kivy framework? I have board.png, white.png, black.png, white_q.png, black_q.png files already. I wonder how to assign to each black tile on my board.png its own coordinate. Should I create 32 transparent widgets placed on black tiles of board.png or it is impossible? And what widget to use for 24 checkers? Any ideas or it is too complicated using Kivy and I should use tkinter?","There are many ways you could do this. It isn't complicated, it's very easy. The best way depends more on how you want to structure your app than anything else. I wonder how to assign to each black tile on my board.png its own coordinate Set the pos attribute of a widget to control its position, or better in this case use a layout that does what you want. For instance, adding your squares to a GridLayout with the right number of columns will have the right effect without you needing to worry more about positioning them. Should I create 32 transparent widgets placed on black tiles of board.png or it is impossible? I don't understand what you're asking here. You can make transparent widgets if you want but I don't know why you'd want to. And what widget to use for 24 checkers? The real question is, what do you want the widget to do? e.g. if you want it to display an image then inherit from Image. Overall this answer is very generic because your question is very generic. I suggest that if you're stuck, try to ask a more specific question about a task you're struggling with, and give a code example showing where you are now.",0.3869120172231254,False,1,7123 2020-11-15 20:51:24.707,How to change the value of a variable at run time from another script at remote machine?,"I have a local computer A and remote computer B. Computer A has script client.py Computer B has server.py Script client.py has a variable port. Let's say port = 5535. I am running client.py on Computer A, which is using the port number for socket communication. I need to change the port number to another port number while the client.py is running so it will switch to another server at runtime after notifying the client to change the port number. I am using pyzmq to send data from the client to the server sending a continuous stream of data. Is this scenario possible and how can I do it?","Yes, it's possible. You may design / modify the (so far unseen) code so as to PUSH any such need to change a port# on-the-fly to the PULL-side, to release the 5535 and use another one. 
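A minimal sketch of that notification on the data-sender's (PUSH) side (the control-channel port 5600 and the message format here are my own assumptions for illustration, not something given in the question):

import zmq  # pyzmq

ctx = zmq.Context.instance()
control = ctx.socket(zmq.PUSH)      # a separate side-channel used only for notifications
control.bind('tcp://*:5600')        # assumed control port, distinct from the data port 5535
# tell the PULL-side which data port it should switch to
control.send_json({'cmd': 'switch_port', 'new_port': 5536})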
The PULL-side shall then call .disconnect() and .close() methods, so as to release the said port 5535 ( plus notify that it has done so, perhaps by another PUSH/PULL to the .bind()-locked party, that it can now unbind and close the .bind()-locked port# 5535 too) and next setup a new connection to an announced ""new_port#"", received from the initial notification ( which ought have been already .bind()-locked on the PUSH-side, ought it not? :o) ). That easy.",1.2,True,1,7124 2020-11-16 09:47:54.700,without Loops to Sum Range of odd numbers,is there any way to sum odd numbers from 1 to n but without any loops and if there isn't a way how can i create this by fast algorithm to do this task in less than n loops.,"You can try the one below, which loop through from 1 to n, stepping 2 sum(range(1,n,2))",0.0,False,1,7125 2020-11-17 04:00:00.753,How do I activate python virtual environment from a different repo?,"So am working in a group project, we are using python and of the code is on GitHub. My question is how do I activate the virtual environment? Do I make one on my own using the ""python virtual -m venv env"" or the one that's on the repo, if there is such a thing. Thanks","Yes, you'll want to create your own with something like: python -m venv venv. The final argument specifies where your environment will live; you could put it anywhere you like. I often have a venv folder in Python projects, and just .gitignore it. After you have the environment, you can activate it. On Linux: source venv/bin/activate. Once activated, any packages you install will go into it; you can run pip install -r requirements.txt for instance.",0.0,False,2,7126 2020-11-17 04:00:00.753,How do I activate python virtual environment from a different repo?,"So am working in a group project, we are using python and of the code is on GitHub. My question is how do I activate the virtual environment? Do I make one on my own using the ""python virtual -m venv env"" or the one that's on the repo, if there is such a thing. Thanks","virtual env is used to make your original env clean. you can pip install virtualenv and then create a virtual env like virtualenv /path/to/folder then use source /path/to/folder/bin/activate to activate the env. then you can do pip install -r requirements.txt to install dependencies into the env. then everything will be installed into /path/to/folder/lib alteratively, you can use /path/to/folder/bin/pip install or /path/to/folder/bin/python without activating the env.",0.2012947653214861,False,2,7126 2020-11-17 12:28:03.713,Maintaining label encoding across different files in pandas,"I know how to use scikit-learn and pandas to encode my categorical data. I've been using the category codes in pandas for now which I later will transform into an OneHot encoded format for ML. My issues is that I need to create a pre-processing pipeline for multiple files with the same data format. I've discovered that using the pandas category codes encoding is not consistent, even if the categories (strings) in the data are identical across multiple files. Is there a way to do this encoding lexicographically so that it's done the same way across all files or is there any specific method that can be used which would result in the same encoding when applied on multiple files?","The LabelEncoder like all other Sklearn-Transformers has three certain methods: fit(): Creates the labels given some input data transform(): Transforms data to the labels of the encoder instance. 
It must have called fit() before or will throw an error fit_transform(): That's a convenience-method that will create the labels and transform the data directly. I'm guessing you are calling fit_transform everywhere. To fix this, just call the fit-method once (on a superset of all your data because it will throw an error if it encounters a label that was not present in the data you called fit on) and than use the transform method.",0.0,False,1,7127 2020-11-18 17:55:34.847,Using Python to access DirectShow to create and use Virtual Camera(Software Only Camera),"Generally to create a Virtual Camera we need to create a C++ application and include DirectShow API to achieve this. But with the modules such as win32 modules and other modules we can use win32 api which lets us use these apis in python. Can anyone Help sharing a good documentation or some Sample codes for doing this?","There is no reliable way to emulate a webcam on Windows otherwise than supplying a driver. Many applications take simpler path with DirectShow, and emulate a webcam for a subset of DirectShow based applications (in particular, modern apps will be excluded since they don't use DirectShow), but even in this case you have to develop C++ camera enumation code and connect your python code with it.",0.3869120172231254,False,1,7128 2020-11-19 19:45:23.240,No module names xlrd,"I am working out of R Studio and am trying to replicate what I am doing in R in Python. On my terminal, it is saying that I have xlrd already installed but when I try to import the package (import xlrd) in R Studio, it tells me: ""No module named 'xlrd'"". Does anyone know how to fix this?","I have solved this on my own. In your terminal, go to ls -a and this will list out applications on your laptop. If Renviron is there, type nano .Renviron to write to the Renviron file. Find where Python is stored on your laptop and type RETICULATE_PYTHON=(file path where Python is stored). ctrl + x to exit, y to save and then hit enter. Restart R studio and this should work for you.",0.3869120172231254,False,1,7129 2020-11-20 13:54:11.863,How to Order a fraction of a Crypto (like Bitcoin) in zipline?,"Basically as you all know we can backtest our strategies in Zipline, the problem is that Zipline is developed for stock markets and the minimum order of an asset that can be ordered is 1 in those markets but in crypto markets we are able to order a fraction of a Crypto currency. So how can I make zipline to order a fraction of Bitcoin base on the available capital?","You can simulate your test on a smaller scale, e.g. on Satoshi level (1e8). I can think of two methods: Increase your capital to the base of 1e8, and leave the input as is. This way you can analyse the result in Satoshi, but you need to correct for the final portfolio value and any other factors that are dependent on the capital base. Scale the input to Satoshi or any other level and change the handle_data method to either order on Satoshi level or based on your portfolio percentage using order_target_percent method. NOTE: Zipline rounds the inputs to 3 decimal points. So re-scaling to Satoshi turns prices that are lower than 5000 to NaN (not considering rounding errors for higher prices). My suggestion is to either use 1e5 for Bitcoin or log-scale.",0.0,False,1,7130 2020-11-21 23:14:34.487,"Pandas, find and delete rows","Been searching for a while in order to understand how to do this basic task without any success which is very strange. 
I have a dataset where some of the rows contain '-', I have no clue under which columns these values lie. How do I search in the whole dataset (including all columns) for '-' and drop the rows containing this value? thank you!","This is a bit more robust than wwnde's answer, as it will work if some of the columns aren't originally strings: df.loc[~df.apply(lambda x: any('-' in str(col) for col in x), axis = 1)] If you have data that's stored as datetime, it will display as having -, but will return an error if you check for inclusion without converting to str first. Negative numbers will also return True once converted to str. If you want different behavior, you'll have to do something more complicated, such as df.loc[~df.apply(lambda x: any('-' in col if isinstance(col, str) else False for col in x), axis = 1)]",1.2,True,1,7131 2020-11-22 09:23:23.773,"How to resize a depth map from size [400,400] into size [60,60]?","I have a depth map image which was obtained using a kinect camera. In that image I have selected a region of size [400,400] and stored it as another image. Now, I would like to know how to resize this image into a size of [x,y] in python.","I don't recommend to reduce resolution of depth map the same way like it is done for images. Imagine a scene with a small object 5 m before the wall: Using bicubic/bilinear algorithms you will get depth of something between the object and the wall. In reality there is just a free space in between. Using nearest-neighbor interpolation is better but you are ignoring a lot of information and in some cases it may happed that the object just disappears. The best approach is to use the Mode function. Divide the original depth map into windows. Each window will represent one pixel in the downsized map. For each of them calculate the most frequent depth value. You can use Python's statistics.mode() function.",0.0,False,1,7132 2020-11-22 16:19:49.853,Raspberry pi python editor,"I was writing code to make a facial recognition, but my code did not work because I was writing on verison 3, do you know how to download python 3 on the raspberry pi?","Linux uses package managers to download packages or programing languages ,raspberry pi uses apt(advanced package tool) This is how you use APT to install python3: sudo apt-get install python3 OR sudo apt install python3 and to test if python3 installed correctly type: python3 If a python shell opens python3 has been installed properly",1.2,True,1,7133 2020-11-23 15:05:15.013,how to authorize only flutter app in djano server?,"While I'm using Django as my backend and flutter as my front end. I want only the flutter app to access the data from django server. Is there any way to do this thing? Like we use allowed host can we do something with that?",You can use an authentication method for it. Only allow for the users authenticated from your flutter app to use your backend.,0.3869120172231254,False,1,7134 2020-11-23 17:14:54.653,pymongo getTimestamp without ObjectId,"in my mongodb, i have a collection where the docs are created not using ObjectId, how can I get the timestamp (generation_time in pymongo) of those docs? Thank you","If you don't store timestamps in documents, they wouldn't have any timestamps to retrieve. If you store timestamps in some other way than via ObjectId, you would retrieve them based on how they are stored.",1.2,True,1,7135 2020-11-24 05:55:23.327,using a pandas dataframe without headers to write to mysql with to_sql,"I have a dataframe created from an excel sheet (the source). 
The excel sheet will not have a header row. I have a table in mysql that is already created (the target). It will always be the exact same layout as the excel sheet. source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None) db_engine = [function the returns my mysql engine] source_data.to_sql(name=table_name, con=db_engine, schema=schema_name, if_exists='append', index=False) This fails with an error due to pandas using numbers as column names in the insert statement.. [SQL: INSERT INTO [tablename] (0, 1) VALUES (%(0)s, %(1)s)] error=(pymysql.err.OperationalError) (1054, ""Unknown column '0' in 'field list' how can i get around this? Is there a different insert method i can use? do i really have to load up the dataframe with the proper column names from the table?","Maybe after importing the data into Pandas, you can rename the columns to something that is not a number, e.g. ""First"", ""Second"", etc. or [str(i) for i in range(len(source_data))] This would resolve the issue of SQL being confused by the numerical labels.",0.0,False,2,7136 2020-11-24 05:55:23.327,using a pandas dataframe without headers to write to mysql with to_sql,"I have a dataframe created from an excel sheet (the source). The excel sheet will not have a header row. I have a table in mysql that is already created (the target). It will always be the exact same layout as the excel sheet. source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None) db_engine = [function the returns my mysql engine] source_data.to_sql(name=table_name, con=db_engine, schema=schema_name, if_exists='append', index=False) This fails with an error due to pandas using numbers as column names in the insert statement.. [SQL: INSERT INTO [tablename] (0, 1) VALUES (%(0)s, %(1)s)] error=(pymysql.err.OperationalError) (1054, ""Unknown column '0' in 'field list' how can i get around this? Is there a different insert method i can use? do i really have to load up the dataframe with the proper column names from the table?","Found no alternatives.. went with adding the column names to the data frame during the read.. So first i constructed the list of column names sql = (""select [column_name] from [table i get my metadata from];"") db_connection = [my connection for sqlalchemy] result = db_connection.execute(sql) column_names = [] for column in result: column_names.append(column[0]) And then i use that column listing in the read command: source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows,header=None, names=column_names) the to_sql statement then runs without error.",0.0,False,2,7136 2020-11-24 18:45:59.360,Getting skeletal data in pykinect (xbox 360 version),"I'm having trouble finding any sort of documentation or instruction for pykinect, specifically for the xbox 360 version of the kinect. how do I get skeletal data or where do I find the docs?? if I wasn't clear here please let me know!","To use python with the kinect 360 you need the follwing: python 2.7 windows kinect sdk 1.8 pykinect - NOT pykinect2",-0.3869120172231254,False,1,7137 2020-11-25 09:51:23.410,How to implement a MIDI keyboard into python,"Looking to create a GUI based 25-key keyboard using PYQT5, which can support MIDI controller keyboards. However, I don’t know where to start (What libraries should I use and how do I go about finding a universal method to supporting all MIDI controller keyboards). 
I plan to potentially use the Mido Library, or PyUSB but I am still confused as to how to make this all function. Any starting guides would be much appreciated.","MIDI is a universal standard shared by all manufacturers, so you don't have to worry about ""supporting all MIDI controller keyboards"", you just have to worry about supporting the MIDI studio of your system. You'll have to scan your environment to get the existing MIDI ports. With the list of existing ports you can let the user choose to which port he wants to send the events generated by your keyboard and/or from which port he wants to receive events that will animate the keyboard (for instance from a physical MIDI keyboard connected to your computer), possibly all available input ports. To support input events, you'll need a kind of callback prepared to receive the incoming notes on and off (which are the main relevant messages for a keyboard) at any time. That also means that you have to filter the received events that are not of those types because, in MIDI, a stream of events is subject to contain many kinds of other events mixed with the notes (pitch bend, controllers, program change, and so on). Finally notice that MIDI doesn't produce any sound by itself. So if you plane to hear something when you play on your keyboard, the produced MIDI events should be send to a device subject to produce the sound (for instance a synthesizer or virtual instrument) via a port that this device receives. For the library, Mido seems to be a pretty good choice : it has all the features needed for such a project.",0.6730655149877884,False,1,7138 2020-11-25 11:44:47.813,flask / flask_restful : calling routes in one blueprint from another route in a different blueprint,"I'm working on a very basic Web Application (built using flask and flask_restful) with unrelated views split into different blueprints. Different blueprints deal with a different instance of a class. Now I want to design a page with status(properties and value) of all the classes these blueprints are dealing with. The page is a kind of a control panel of sorts. For this I want to call all the status routes (defined by me) in different blueprints from a single route(status page route) in a different blueprint. I have been searching for a while on how to make internal calls in Flask / Flask_restful, but haven't found anything specifically for this. So.... I would love to find out how to make these internal calls. Also, is there any problem or convention against making internal calls. I also thought of making use of the requests calls using Requests module, but that feels more like a hack. Is this the only option I got??? If yes, is there a way I dont have to hard code the url in them like using something close to url_for() in flask?? Thanks.. :)","I would love to find out how to make these internal calls. Ans: use url_for() or Requests module, as u do for any other post or get method. Also, is there any problem or convention against making internal calls ? Ans: I didn't find any even after intensive searching. I also thought of making use of the requests calls using Requests module, but that feels more like a hack. Is this the only option I got??? If yes, is there a way I don't have to hard code the url in them like using something close to url_for() in flask?? Ans: If you don't wanna use Requests module, url_for() is the simplest and cleanest option there is. 
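For example, a minimal sketch that combines the two (the blueprint name 'display' and its 'status' endpoint are my own placeholders, not names from the question):

from flask import url_for
import requests

# inside a request handler of the control-panel blueprint
status_url = url_for('display.status', _external=True)  # no hard-coded path
status = requests.get(status_url).json()

If url_for() cannot be used in your setup: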
Hard coded path is the only option.",1.2,True,1,7139 2020-11-25 19:10:03.817,"When doing runserver, keep getting new data loaded in my database","Every time I do a: python manage.py runserver And I load the site, python gets data and puts this in my database. Even when I already filled some info in the database. Enough to get a view of what I am working on. Now it is not loading the information I want and instead putting in new information to add to the database so it can work with some data. What is the reason my data in the database is not being processed? And how do I stop new data being loaded into the database.","May be it is happening due to migration file first sometimes when you migrate models into database query language with same number python manage.py makemigrations 0001 This ""0001"" has to be changed everytime To solve your problem once delete the migrations file and then again migrate all models and then try Tell if this work",0.0,False,1,7140 2020-11-26 13:38:11.537,How to find the stitch (seam) position between two images with OpenCV?,"I find many examples of passing a list of images, and returning a stitched image, but not much information about how these images have beeen stitched together. In a project, we have a camera fixed still, pointing down, and coveyers pass underneath. The program detects objects and start recording images. However some objects do not enter completely in the image, so we need to capture multiple images and stich then together, but we need to know the position of the stitched image because there are other sensors synchronized with the captured image, and we need to also synchronize their readings within the stitched image (i.e. we know where the reading is within each single capture, but not if captures are stitched together). In short, given a list of images, how can we find the coordinates of each images relative to each other?","Basically while stiching correspondence between two (or more) images are setup. This is done with some constant key points. After finding those key points the images are warped or transformed & put together, i.e. stitched. Now those key points could be set/ noted as per a global coordinate system (containing all images). Then one can get the position after stitching too.",0.0,False,1,7141 2020-11-27 03:21:57.860,Unable to change data types of certain columns read from xslx and by Pandas,"I import an Excel file with pandas and when I try to convert all columns to float64 for further manipulation. I have several columns that have a type like: 0 column_name_1 float64 column_name_1 float64 dtype: object and it is unable to do any calculations. May I ask how I could change this column type to float64?",I just solved it yesterday and it is because I have two same columns in the Data frame and it causes that when I try to access pd['something'] it automatically combine two columns together and then it becomes an object instead of float64,0.0,False,1,7142 2020-11-28 07:18:31.807,How to update an py made exe file from my pc to people I have sent it to?,What I mean is that I have a py file which I have converted to an exe file. So I wanted to know in case I decide to update the py file then how do I make it if I have sent it to someone the same changes occur in his file as well whether the exe or py file.,"Put your version of the program on a file share, or make it otherwise available in the internet and build in an update check in the program. So that it checks the URL for a new version everytime it is started. 
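A minimal sketch of such a startup check (the URL and the version string are made-up placeholders):

import urllib.request

CURRENT_VERSION = '1.0.0'
VERSION_URL = 'https://example.com/myapp/latest_version.txt'  # hypothetical location

def update_available():
    try:
        with urllib.request.urlopen(VERSION_URL, timeout=5) as resp:
            latest = resp.read().decode().strip()
        return latest != CURRENT_VERSION
    except OSError:
        return False  # offline or server unreachable: keep running the current version

If it returns True, the program can point the user at the newer build (or download it).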
I guess this is the most common way to do something like that.",0.0,False,1,7143 2020-11-29 05:47:22.927,Is there any way to return the turtle object that is clicked?,"I'm making a matching game where there are several cards faced upside down and the user has to match the right pairs. The cards faced upside down are all turtle objects. For eg. if there are 8 faced down cards, there are 8 turtle objects. I'm having some trouble figuring out how to select the cards since I don't know which turtle is associated with the particular card selected by the user. I do have a nested list containing all turtles and those with similar images are grouped together. Is there any way to return the turtle object selected by the user?","If i got your question, one way to do so is that you should provide some id attribute to each turtle which will identify it. Then you can check easily which turtle was selected by the user.",0.0,False,1,7144 2020-11-29 10:11:25.313,Nativescript can't find six,"I installed Nativescript successfully and it works when running ns run android. However, when I try to use ns run ios I get the ominous WARNING: The Python 'six' package not found.-error Same happens, when I try to use ns doctor. I tried EVERYTHING that I found on the web. Setting PATH, PYTHONPATH, re-install python, six and everything - nothing helped. Re-install of six tells me Requirement already satisfied. Any ideas how to make this work??? I'm on MacOS Catalina.","It seems I have a total mess with paths and python installations on my Mac. I found like 6 different pip-paths and like 4 different python paths. Since I have no idea which ones I can delete, I tried installing six with all pip-versions I found and that helped. How to clean up this mess is likely a subject for another thread :)",1.2,True,1,7145 2020-12-01 03:18:45.860,I have different excel files in the same folder,"I have different excel files in the same folder, in each of them there are the same sheets. I need to select the last sheet of each file and join them all by the columns (that is, form a single table). The columns of all files are named the same. I think it is to identify the dataframe of each file and then paste them. But I do not know how","Just do what Recessive said and use a for loop to read the excel file one by one and do the following: excel_files = os.listdir(filepath) for file in excel_files: read excel file sheet save specific column to variable end of loop concatenate each column from different variables to one dataframe",0.0,False,1,7146 2020-12-01 16:05:05.240,Added more parameters to smtplib.SMTP in python,"Im trying to make a script that sent an email with python using smtp.smtplib , almost of examples i found while googling shows how to call this function with only smtpserver and port parameters. i want to added other paramaters : domain and binding IP i tried this : server = smtplib.SMTP(smtpserver, 25,'mydomain.com',5,'myServerIP') I got this as error : TypeError: init() takes at most 5 arguments (6 given) Can you suggest a way to do this?",This error is likely because the parameters are invalid (there is one too many). Try looking at the smtplib docs to see what parameters are valid,0.0,False,1,7147 2020-12-02 00:32:33.690,How could i delete several lines of code at the same time in Jupiter notebook?,I want to delete/tab several lines of code at the same time in Jupiter notebook. how could i do that? 
Are there hotkeys for that?","While in the notebook, click to the left of the grey input box where it says In []: (you'll see the highlight color go from green to blue). While it's blue, hold down Shift and use your up or down arrow keys to select the rows above or below. Press D twice. Click back into the cell and the highlight will turn back to green.",0.3869120172231254,False,1,7148 2020-12-02 03:27:38.637,python : Compute columns of data frames and add them to new columns,"I want to make a new column by calculating existing columns. For example, df:
no data1 data2
1  10   15
2  51   46
3  36   20
......
I want to make this new_df:
no data1 data2 data1/-2 data1/2 data2/-2 data2/2
1  10   15   -5     5     -7.5   7.5
2  51   46   -25.5  25.5  -23    23
3  36   20   -18    18    -9     9
but I don't know how to make this as efficient as possible","To create a new df column based on the calculations of two or more other columns, you would have to define a new column and set it equal to your equation. For example: df['new_col'] = df['col_1'] * df['col_2']",0.0,False,1,7149 2020-12-02 08:33:53.520,How to decrypt django pbkdf2_sha256 algorithm password?,"I need the user_password plaintext using Django. I tried many ways to get the plaintext of user_password, but it's not working. So, I analyzed how the Django user password is generated: it's using the make_password method in the Django core model. This method generates the hashed code using the pbkdf2_sha256 algorithm. Is it possible to decrypt the password? Example: pbkdf2_sha256$150000$O9hNDLwzBc7r$RzJPG76Vki36xEflUPKn37jYI3xRbbf6MTPrWbjFrgQ=","As you have already seen, Django uses a hashing method, SHA256 in this case. Hashing mechanisms are basically lossy, one-way methods, so there is no way to decrypt hashed messages as they are irreversible. It is not encryption and there is no backward method like decryption. It is safe to store the password in hashed form, as only the creator of the password should know the original password and the backend system just compares the hashes. This is the normal situation for most backend frameworks, and it is done for security reasons. Passwords are hashed and saved in the database so that even if a malicious user gets access to the database, they can't find useful information there, or it will be really hard to crack the hashes even with some huge dictionary of words.",1.2,True,1,7150 2020-12-02 10:02:42.763,Find answer to tcp packet in PCAP with scapy,"I parse a pcap file with scapy in Python, and there is a TCP packet in that pcap for which I want to know the answer. How can I do that? For example: client and server TCP stream client-> server : ""hi"" server-> client : ""how are you"" When I get the ""hi"" packet (with scapy), how can I get ""how are you""?","Look at the TCP sequence number of the message from the client. Call this SeqC. Then look for the first message from the server whose TCP acknowledgement sequence is higher than SeqC (usually it will be equal to SeqC plus the size of the client's TCP payload). Call this PacketS1. Starting with PacketS1, collect the TCP payloads from all packets until you see a packet sent by the server with the TCP PSH (push) flag set. This suggests the end of the application-layer message. Call these payloads PayloadS1 to PayloadSN. Concatenate PayloadS1 to PayloadSN.
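A rough scapy sketch of those steps (the capture filename and the way the client packet is picked out are assumptions for illustration):

from scapy.all import rdpcap, IP, TCP, Raw

def response_payload(pcap_path, client_pkt):
    # client_pkt: the client's TCP packet carrying the request, e.g. the 'hi' packet
    seq_c = client_pkt[TCP].seq
    req_len = len(client_pkt[TCP].payload)
    parts, collecting = [], False
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            continue
        # keep only packets going back to the client, i.e. sent by the server
        if pkt[IP].dst != client_pkt[IP].src or pkt[TCP].dport != client_pkt[TCP].sport:
            continue
        if not collecting and pkt[TCP].ack >= seq_c + req_len:
            collecting = True            # this is PacketS1
        if collecting and pkt.haslayer(Raw):
            parts.append(bytes(pkt[Raw].load))
            if pkt[TCP].flags & 0x08:    # PSH flag: likely end of the reply
                break
    return b''.join(parts)               # PayloadS1..PayloadSN concatenated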
This is the likely application-layer response to the client message.",0.6730655149877884,False,1,7151 2020-12-02 14:42:06.810,How do I keep changes made within a python GUI?,"For, example If a button click turns the background blue, or changes the button's text, how do I make sure that change stays even after i go to other frames?",One way to go is to create a configuration file (e.g. conf.ini) where you store your changes or apply them to other dialogs. It will allow you to keep changes after an app restarted.,0.0,False,1,7152 2020-12-04 09:56:10.630,raspberry pi using a webcam to output to a website to view,"I am currently working on a project in which I am using a webcam attached to a raspberry pi to then show what the camera is seeing through a website using a client and web server based method through python, However, I need to know how to link the raspberry pi to a website to then output what it sees through the camera while then also outputting it through the python script, but then i don't know where to start If anyone could help me I would really appreciate it. Many thanks.","So one way to do this with python would be to capture the camera image using opencv in a loop and display it to a website hosted on the Pi using a python frontend like flask (or some other frontend). However as others have pointed out, the latency on this would be so bad any processing you wish to do would be nearly impossible. If you wish to do this without python, take a look at mjpg-streamer, that can pull a video feed from an attached camera and display it on a localhost website. The quality is fairly good on localhost. You can then forward this to the web (if needed) using port forwarding or an application like nginx. If you want to split the recorded stream into 2 (to forward one to python and to broadcast another to a website), ffmpeg is your best bet, but the FPS and quality would likely be terrible.",0.0,False,1,7153 2020-12-04 10:21:30.123,"Does python mne raw object represent a single trail? if so, how to average across many trials?","I'm new to python MNE and EEG data in general. From what I understand, MNE raw object represent a single trial (with many channels). Am I correct? What is the best way to average data across many trials? Also, I'm not quite sure what the mne.Epochs().average() represents. Can anyone pls explain? Thanks a lot.","From what I understand, MNE raw object represent a single trial (with many channels). Am I correct? An MNE raw object represents a whole EEG recording. If you want to separate the recording into several trials, then you have to transform the raw object into an ""epoch"" object (with mne.Epochs()). You will receive an object with the shape (n_epochs, n_channels and n_times). What is the best way to average data across many trials? Also, I'm not quite sure what the mne.Epochs().average() represents. Can anyone pls explain? About ""mne.Epochs().average()"": if you have an ""epoch"" object and want to combine the data of all trials into one whole recording again (for example, after you performed certain pre-processing steps on the single trials or removed some of them), then you can use the average function of the class. Depending on the method you're choosing, you can calculate the mean or median of all trials for each channel and obtain an object with the shape (n_channels, n_time). Not quite sure about the best way to average the data across the trials, but with mne.epochs.average you should be able to do it with ease. 
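For instance, a minimal sketch (it assumes you already have a Raw recording and an events array, e.g. from mne.find_events()):

import mne

def average_trials(raw, events, tmin=-0.2, tmax=0.5):
    # cut the continuous recording into trials, then average them per channel
    epochs = mne.Epochs(raw, events, tmin=tmin, tmax=tmax, preload=True)
    return epochs.average()  # an Evoked object with data of shape (n_channels, n_times)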
(Personally, I always calculated the mean for all my trials for each channel. But I guess that depends on the problem you try to solve)",1.2,True,1,7154 2020-12-05 19:15:10.533,How can i have 2D bounding box on a sequence of RGBD frames from a 3D bounding box in point clouds?,"i have a 3d point clouds of my object by using Open3d reconstruction system ( makes point clouds by a sequence of RGBD frames) also I created a 3d bounding box on the object in point clouds my question is how can I have 2d bounding box on all of the RGB frames at the same coordinates of 3d bounding box? my idea Is to project 3d bb to 2d bb but as it is clear, the position of the object is different in each frame, so I do not know how can i use this approach? i appreciate any help or solution, thanks","calculate points for the eight corners of your box transform those points from the world frame into your chosen camera frame project the points, apply lens distortion if needed. OpenCV has functions for some of these operations and supports you with matrix math for the rest. I would guess that Open3d gives you pose matrices for all the cameras. you use those to transform from the world coordinate frame to any camera's frame.",1.2,True,1,7155 2020-12-05 23:26:35.533,Create a schedule where a group of people all talk to each other - with restrictions,"Problem statement I would like to achieve the following: (could be used for example to organize some sort of a speeddating event for students) Create a schedule so people talk to each other one-on-one and this to each member of the group. but with restrictions. Input: list of people. (eg. 30 people) Restrictions: some of the people should not talk to each other (eg. they know each other) Output: List of pairs (separated into sessions) just one solution is ok, no need to know all of the possible outcomes Example eg. Group of 4 people John Steve Mark Melissa Restrictions: John - Mellisa -> NO Outcome Session one John - Steve Mark - Melissa Session two John - Mark Steve - Melissa Session three Steve - Mark John and Mellisa will not join session three as it is restriction. Question Is there a way to approach this using Python or even excel? I am especially looking for some pointers how this problem is called as I assume this is some Should I look towards some solver? Dynamic programming etc?","Your given information is pretty generous, you have a set of all the students, and a set of no-go pairs (because you said it yourself, and it makes it easy to explain, just say this is a set of pairs of students who know each other). So we can iterate through our students list creating random pairings so long as they do not exist in our no-go set, then expand our no-go set with them, and recurse on the remaining students until we can not create any pairs that do not exist already in the no-go set (we have pairings so that every student has met all students).",0.0,False,1,7156 2020-12-06 10:22:21.857,Is there any way to know the command-line options available for a separate program from Python?,"I am relatively new to the python's subprocess and os modules. So, I was able to do the process execution like running bc, cat commands with python and putting the data in stdin and taking the result from stdout. Now I want to first know that a process like cat accepts what flags through python code (If it is possible). Then I want to execute a particular command with some flags set. I googled it for both things and it seems that I got the solution for second one but with multiple ways. 
So, if anyone know how to do these things and do it in some standard kind of way, it would be much appreciated.","In the context of processes, those flags are called arguments, hence also the argument vector called argv. Their interpretation is 100% up to the program called. In other words, you have to read the manpages or other documentation for the programs you want to call. There is one caveat though: If you don't invoke a program directly but via a shell, that shell is the actual process being started. It then also interprets wildcards. For example, if you run cat with the argument vector ['*'], it will output the content of the file named * if it exists or an error if it doesn't. If you run /bin/sh with ['-c', 'cat *'], the shell will first resolve * into all entries in the current directory and then pass these as separate arguments to cat.",1.2,True,1,7157 2020-12-06 10:45:49.563,Pandas: How to calculate the percentage of one column against another?,"I am just trying to calculate the percentage of one column against another's total, but I am unsure how to do this in Pandas so the calculation gets added into a new column. Let's say, for argument's sake, my data frame has two attributes: Number of Green Marbles Total Number of Marbles Now, how would I calculate the percentage of the Number of Green Marbles out of the Total Number of Marbles in Pandas? Obviously, I know that the calculation will be something like this: (Number of Green Marbles / Total Number of Marbles) * 100 Thanks - any help is much appreciated!",df['percentage columns'] = (df['Number of Green Marbles']) / (df['Total Number of Marbles'] ) * 100,0.0,False,1,7158 2020-12-06 15:58:58.593,int to str in python removes leading 0s,"So right now, I'm making a sudoku solver. You don't really need to know how it works, but one of the checks I take so the solver doesn't break is to check if the string passed (The sudoku board) is 81 characters (9x9 sudoku board). An example of the board would be: ""000000000000000000000000000384000000000000000000000000000000000000000000000000002"" this is a sudoku that I've wanted to try since it only has 4 numbers. but basically, when converting the number to a string, it removes all the '0's up until the '384'. Does anyone know how I can stop this from happening?","There is no way to prevent it from happening, because that is not what is happening. Integers cannot remember leading zeroes, and something that does not exist cannot be removed. The loss of zeroes does not happen at conversion of int to string, but at the point where you parse the character sequence into a number in the first place. The solution: keep the input as string until you don't need the original formatting any more.",1.2,True,1,7159 2020-12-06 18:29:12.933,How does urllib3 determine which TLS extensions to use?,"I'd like to modify the Extensions that I send in the client Hello packet with python. I've had a read of most of the source code found on GitHub for urllib3 but I still don't know how it determines which TLS extensions to use. I am aware that it will be quite low level and the creators of urllib3 may just import another package to do this for them. If this is the case, which package do they use? If not, how is this determined? Thanks in advance for any assistance.",The HTTPS support in urllib3 uses the ssl package which uses the openssl C-library. ssl does not provide any way to directly fiddle with the TLS extension except for setting the hostname in the TLS handshake (i.e. 
server_name extension aka SNI).,1.2,True,1,7160 2020-12-07 22:29:46.250,tkinter in Pycharm (python version 3.8.6),"I'm using Pycharm on Windows 10. Python version: 3.8.6 I've checked using the CMD if I have tkinter install python -m tkinter. It says I have version 8.6 Tried: import tkinter. I get ""No module named 'tkinter' "" from tkinter import *. I get ""Unresolved reference 'tkinter'"" Installed future package but that didn't seem to change the errors. Any suggestions on how to fix this issue? Thank you!","You just verify in the project settings, sometimes Pycharm doesn't use the same interpreter.",-0.2012947653214861,False,2,7161 2020-12-07 22:29:46.250,tkinter in Pycharm (python version 3.8.6),"I'm using Pycharm on Windows 10. Python version: 3.8.6 I've checked using the CMD if I have tkinter install python -m tkinter. It says I have version 8.6 Tried: import tkinter. I get ""No module named 'tkinter' "" from tkinter import *. I get ""Unresolved reference 'tkinter'"" Installed future package but that didn't seem to change the errors. Any suggestions on how to fix this issue? Thank you!","You can try ""pip install tkinter"" in cmd",-0.2012947653214861,False,2,7161 2020-12-07 23:17:05.743,how to convert a string to list I have a string how to convert it to a list?,"I have a string like: string = ""[1, 2, 3]"" I need to convert it to a list like: [1, 2, 3] I've tried using regular expression for this purpose, but to no avail","Try [int(x) for x in arr.strip(""[]"").split("", "")], or if your numbers are floats you can do [float(x) for x in arr.strip(""[]"").split("", "")]",0.2655860252697744,False,1,7162 2020-12-08 14:02:34.340,2D numpy array showing as 1D,"I have a numpy ndarray train_data of length 200, where every row is another ndarray of length 10304. However when I print np.shape(train_data), I get (200, 1), and when I print np.shape(train_data[0]) I get (1, ), and when I print np.shape(train_data[0][0]) I get (10304, ). I am quite confused with this behavior as I supposed the first np.shape(train_data) should return (200, 10304). Can someone explains to me why this is happening, and how could I get the array to be in shape of (200, 10304)?","I'm not sure why that's happening, try reshaping the array: B = np.reshape(A, (-1, 2))",0.0,False,1,7163 2020-12-08 16:51:13.820,Multiple threads sending over one socket simultaneously?,"I have two python programs. Program 1 displays videos in a grid with multiple controls on it, and Program 2 performs manipulations to the images and sends it back depending on the control pressed in Program 1. Each video in the grid is running in its own thread, and each video has a thread in Program 2 for sending results back. I'm running this on the same machine though and I was unable to get multiple socket connections working to and from the same address (localhost). If there's a way of doing that - please stop reading and tell me how! I currently have one socket sitting independent of all of my video threads in Program 1, and in Program 2 I have multiple threads sending data to the one socket in an array with a flag for which video the data is for. The problem is when I have multiple threads sending data at the same time it seems to scramble things and stop working. Any tips on how I can achieve this?","Regarding If there's a way of doing that - please stop reading and tell me how!. There's a way of doing it, assuming you are on Linux or using WSL on Windows, you could use the hostname -I commend which will output an IP that looks like 192.168.X.X. 
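If you would rather discover that address from Python itself, one common trick (just a sketch; the 8.8.8.8 target is arbitrary and a UDP connect transmits nothing) is:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('8.8.8.8', 80))   # only picks a route; no packets are sent
lan_ip = s.getsockname()[0]  # e.g. 192.168.X.X
s.close()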
You can use that IP in your python program by binding your server to that IP instead of localhost or 127.0.0.1.",0.0,False,1,7164 2020-12-08 20:00:28.467,"Grabbing values (Name, Address, Phone, etc.) from directory websites like TruePeopleSearch.com with Chrome Developer Tool","Good day everybody. I'm still learning parsing data with Python. I'm now trying to familiarize myself with Chrome Developer Tools. My question is when inspecting a directory website like TruePeopleSearch.com, how do I copy or view the variables that holds the data such as Name, Phone, and Address? I tried browsing the tool, but since I'm new with the Developer tool, I'm so lost with all the data. I would appreciate if the experts here points me to the right direction. Thank you all!","Upon further navigating the Developer Console, I learned that these strings are located in these variables, by copying the JS paths. NAME & AGE document.querySelector(""#personDetails > div:nth-child(1)"").innerText ADDRESS document.querySelector(""#personDetails > div:nth-child(4)"").innerText PHONE NUMBERS document.querySelector(""#personDetails > div:nth-child(6)"").innerText STEP 1 From the website, highlight are that you need to inspect and click ""Inspect Element"" STEP 2 Under elements, right-click the highlighted part and copy the JS path STEP 3 Navigate to console and paste the JS path and add .innerText and press Enter",0.0,False,1,7165 2020-12-09 07:30:40.480,Can you plot the accuracy graph of a pre-trained model? Deep Learning,"I am new to Deep Learning. I finished training a model that took 8 hours to run, but I forgot to plot the accuracy graph before closing the jupyter notebook. I need to plot the graph, and I did save the model to my hard-disk. But how do I plot the accuracy graph of a pre-trained model? I searched online for solutions and came up empty. Any help would be appreciated! Thanks!","What kind of framework did you use and which version? In the future problem, you may face, this information can play a key role in the way we can help you. Unfortunately, for Pytorch/Tensorflow the model you saved is likely to be saved with only the weights of the neurons, not with its history. Once Jupyter Notebook is closed, the memory is cleaned (and with it, the data of your training history). The only thing you can extract is the final loss/accuracy you had. However, if you regularly saved a version of the model, you can load them and compute manually the accuracy/loss that you need. Next, you can use matplotlib to reconstruct the graph. I understand this is probably not the answer you were looking for. However, if the hardware is yours, I would recommend you to restart training. 8h is not that much to train a model in deep learning.",0.0,False,1,7166 2020-12-09 13:03:41.490,"How do I handle communication between object instances, or between modules?","I appear to be missing some fundamental Python concept that is so simple that no one ever talks about it. I apologize in advance for likely using improper description - I probably don't know enough to ask the question correctly. Here is a conceptual dead end I have arrived at: I have an instance of Class Net, which handles communicating with some things over the internet. I have an instance of Class Process, which does a bunch of processing and data management I have an instance of Class Gui, which handles the GUI. The Gui instance needs access to Net and Process instances, as the callbacks from its widgets call those methods, among other things. 
The Net and Process instances need access to some of the Gui instances' methods, as they need to occasionally display stuff (what it's doing, results of queries, etc) How do I manage it so these things talk to each other? Inheritance doesn't work - I need the instance, not the class. Besides, inheritance is one way, not two way. I can obviously instantiate the Gui, and then pass it (as an object) to the others when they are instantiated. But the Gui then won't know about the Process and Net instances. I can of course then manually pass the Net and Process instances to the Gui instance after creation, but that seems like a hack, not like proper practice. Also the number of interdependencies I have to manually pass along grows rather quickly (almost factorially?) with the number of objects involved - so I expect this is not the correct strategy. I arrived at this dead end after trying the same thing with normal functions, where I am more comfortable. Due to their size, the similarly grouped functions lived in separate modules, again Net, Gui, and Process. The problem was exactly the same. A 'parent' module imports 'child' modules, and can then can call their methods. But how do the child modules call the parent module's methods, and how do they call each other's methods? Having everything import everything seems fraught with peril, and again seems to explode as more objects are involved. So what am I missing in organizing my code that I run into this problem where apparently all other python users do not?","The answer to this is insanely simple. Anything that needs to be globally available to other modules can be stored its own module, global_param for instance. Every other module can import global_param, and then use and modify its contents as needed. This avoids any issues with circular importing as well. Not sure why it took me so long to figure this out...",0.3869120172231254,False,1,7167 2020-12-09 18:38:18.553,"On single gpu, can TensorFlow train a model which larger than GPU memory?","If I have a single GPU with 8GB RAM and I have a TensorFlow model (excluding training/validation data) that is 10GB, can TensorFlow train the model? If yes, how does TensorFlow do this? Notes: I'm not looking for distributed GPU training. I want to know about single GPU case. I'm not concerned about the training/validation data sizes.","No you can not train a model larger than your GPU's memory. (there may be some ways with dropout that I am not aware of but in general it is not advised). Further you would need more memory than even all the parameters you are keeping because your GPU needs to retain the parameters along with the derivatives for each step to do back-prop. Not to mention the smaller batch size this would require as there is less space left for the dataset.",0.0,False,1,7168 2020-12-09 19:13:03.913,How would I use a bot to send multiple reactions on one message? Discord.py,this is kind of a dumb question but how would I make a discord.py event to automatically react to a message with a bunch of different default discord emojis at once. I am new to discord.py,You have to use on_message event. Its a default d.py function. It is an automatic function.,0.0,False,1,7169 2020-12-10 05:08:39.017,How can I get my server to UDP multicast to clients across the internet? Do I need a special multicast IP address?,"I am creating a multiplayer game and I would like the communication between my server program (written in python) and the clients (written in c# - Unity) to happen via UDP sockets. 
I recently came across the concept of UDP Multicast, and it sounds like it could be much better for my use case as opposed to using UDP Unicast , because my server needs to update all of the clients (players) with the same content every interval. So, rather than sending multiple identical packets to all the clients with UDP unicast, I would like to be able to only send one packet to all the clients using multicast, which sounds much more efficient. I am new to multicasting and my questions are: How can I get my server to multicast to clients across the internet? Do I need my server to have a special public multicast IP address? If so how do I get one? Is it even possible to multicast across the internet? or is multicasting available only within my LAN? And what are the pros and cons with taking the multicast approach? Thank you all for your help in advance!!","You can't multicast on the Internet. Full stop. Basically, multicast is only designed to work when there's someone in charge of the whole network to set it up. As you noted, that person needs to assign the multicast IP addresses, for example.",1.2,True,1,7170 2020-12-10 07:37:54.630,Create symlink on a network drive to a file on same network drive (Win10),"Problem statement: I have a python 3.8.5 script running on Windows 10 that processes large files from multiple locations on a network drive and creates .png files containing graphs of the analyzed results. The graphs are all stored in a single destination folder on the same network drive. It looks something like this Source files: \\drive\src1\src1.txt \\drive\src2\src2.txt \\drive\src3\src3.txt Output folder: \\drive\dest\out1.png \\drive\dest\out2.png \\drive\dest\out3.png Occasionally we need to replot the original source file and examine a portion of the data trace in detail. This involves hunting for the source file in the right folder. The source file names are longish alphanumerical strings so this process is tedious. In order to make it less tedious I would like to creaty symlinks to the orignal source files and save them side by side with the .png files. The output folder would then look like this Output files: \\drive\dest\out1.png \\drive\dest\out1_src.txt \\drive\dest\out2.png \\drive\dest\out2_src.txt \\drive\dest\out3.png \\drive\dest\out3_src.txt where \\drive\dest\out1_src.txt is a symlink to \\drive\src1\src1.txt, etc. I am attempting to accomplish this via os.symlink('//drive/dest/out1_src.txt', '//drive/src1/src1.txt') However no matter what I try I get PermissionError: [WinError 5] Access is denied I have tried running the script from an elevated shell, enabling Developer Mode, and running fsutil behavior set SymlinkEvaluation R2R:1 fsutil behavior set SymlinkEvaluation R2L:1 but nothing seems to work. There is absolutely no problem creating the symlinks on a local drive, e.g., os.symlink('C:/dest/out1_src.txt', '//drive/src1/src1.txt') but that does not accomplish my goals. I have also tried creading links on the local drive per above then then copying them to the network location with shutil.copy(src, dest, follow_symlinks = False) and it fails with the same error message. Attempts to accomplish the same thing directly in the shell from an elevated shell also fail with the same ""Access is denied"" error message mklink \\drive\dest\out1_src.txt \\drive\src1\src1.txt It seems to be some type of a windows permission error. However when I run fsutil behavior query SymlinkEvaluation in the shell I get Local to local symbolic links are enabled. 
Local to remote symbolic links are enabled. Remote to local symbolic links are enabled. Remote to remote symbolic links are enabled. Any idea how to resolve this? I have been googling for hours and according to everything I am reading it should work, except that it does not.","Open secpol.msc on PC where the newtork share is hosted, navigate to Local Policies - User Rights Assignment - Create symbolic links and add account you use to connect to the network share. You need to logoff from shared folder (Control Panel - All Control Panel Items - Credential Manager or maybe you have to reboot both computers) and try again.",0.0,False,1,7171 2020-12-11 11:57:46.063,How to downgrade python from 3.9.0 to 3.6,"I'm trying to install PyAudio but it needs a Python 3.6 installation and I only have Python 3.9 installed. I tried to switch using brew and pyenv but it doesn't work. Does anyone know how to solve this problem?","You may install multiple versions of the same major python 3.x version, as long as the minor version is different in this case x here refers to the minor version, and you could delete the no longer needed version at anytime since they are kept separate from each other. so go ahead and install python 3.6 since it's a different minor from 3.9, and you could then delete 3.9 if you would like to since it would be used over 3.6 by the system, unless you are going to specify the version you wanna run.",1.2,True,1,7172 2020-12-11 16:40:32.080,Running functions siultaneoulsy in python,"I am making a small program in which I need a few functions to check for something in the background. I used module threading and all those functions indeed run simultaneously and everything works perfectly until I start adding more functions. As the threading module makes new threads, they all stay within the same process, so when I add more, they start slowing each other down. The problem is not with the CPU as it's utilization never reaches 100% (i5-4460). I also tried the multiprocessing module which creates a new process for function, but then it seems that variables can't be shared between different processes or I don't know how. (newly started process for each function seems to take existing variables with itself, but my main program cannot access any of the changes that function in the separate process makes or even new variables it creates) I tried using the global keyword but it seems to have no effect in multiprocessing as it does in threading. How could I solve this problem? I am pretty sure that I have to create new processes for those background functions but I need to get some feedback from them and that part I don't know to solve.",I ended up using multiprocessing Value,1.2,True,1,7173 2020-12-11 21:06:25.180,Python not using proper pip,"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, python command is unknown to the system, while commands python3 and python3.7 both use Python 3.7. All good until now, but I can't seem to get pip working. Command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. Only pip command that works is pip3. Now whatever package I install via pip3 does not seem to install properly. Example given, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. 
Not the same thing can be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7. Please tell me how can I use pip with Python 3.7, thank you.","It looks like your python3.7 does not have pip. Install pip for your specific python by running python3.7 -m easy_install pip. Then, install packages by python3.7 -m pip install Another option is to create a virtual environment from your python3.7. The venv brings pip into it by default. You create venv by python3.7 -m venv ",1.2,True,2,7174 2020-12-11 21:06:25.180,Python not using proper pip,"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, python command is unknown to the system, while commands python3 and python3.7 both use Python 3.7. All good until now, but I can't seem to get pip working. Command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. Only pip command that works is pip3. Now whatever package I install via pip3 does not seem to install properly. Example given, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. Not the same thing can be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7. Please tell me how can I use pip with Python 3.7, thank you.","I think the packages you install will be installed for the previous version of Python. I think you should update the native OS Python like this: Install the python3.7 package using apt-get sudo apt-get install python 3.7 Add python3.6 & python3.7 to update-alternatives: sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1 sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 2 Update python3 to point to Python 3.7: `sudo update-alternatives --config python3 Test the version: python3 -V",0.0,False,2,7174 2020-12-13 14:28:27.847,How to communicate with Cylon BMS controller,"I try to communicate with Cylon device (UC32) by Bacnet protocol (BAC0) but I can not discover any device. And I try with Yabe and it does not have any result. Is there any document describing how to create my communication driver? Or any technique which can be uswed to connect with this device?","(Assuming you've set the default gateway address - for it to know where to return it's responses, but only if necessary.) If we start with the assumption that maybe the device is not (by default) listening for broadcasts or having some issue sending it - a bug maybe (although probably unlikely), then you could send a unicast/directed message, e.g. use the Read-Property service to read back the (already known) BOIN (BACnet Object Instance Number), but you would need a (BACnet) client (application/software) that provides that option, like possibly one of the 'BACnet stack' cmd-line tools or maybe via the (for most part) awesome (but advanced-level) 'VTS (Visual Test Shell)' tool. As much as it might be possible to discover what the device's BOIN (BACnet Object Instance Number) is, it's better if you know it already (- as a small few device's might not make it easy to discover - i.e. 
you might have to resort to using a round-robin bruteforce approach, firing lots of requests - one after the other with only the BOIN changed/incremented by 1, until you receive/see a successful response).",0.3869120172231254,False,1,7175 2020-12-13 15:07:08.070,Create PM2 Ecosystem File from current processes,"I'm running a few programs (NodeJS and python) in my server (Ubuntu 20.04). I use PM2 CLI to create and manage processes. Now I want to manage all process through an echo system file. But when I run pm2 ecosystem, it just creates a sample file. I want to save my CURRENT PROCESSES to the echo system file and modify it. Anyone know how to save pm2 current process as echo system file?","If you use pm2 startup pm2 creates a file named ~/.pm2/dump.pm2 with all running processes (with too many parameters, as it saves the whole environment in the file) Edit: This file is similar to the output of the command pm2 prettylist",1.2,True,1,7176 2020-12-13 20:33:50.563,"Git, heroku, pre-receive hook declined","So I was trying to host a simple python script on Heroku.com, but encountered this error. After a little googling, I found this on the Heroku's website: git, Heroku: pre-receive hook declined, Make sure you are pushing a repo that contains a proper supported app ( Rails, Django etc.) and you are not just pushing some random repo to test it out. Problem is I have no idea how these work, and few tutorials I looked up were for more detailed use of those frameworks. What I need to know is how can i use them with a simple 1 file python script. Thanks in advance.","Okay I got it. It was about some unused modules in requirements.txt, I'm an idiot for not reading the output properly ‍♂️",0.0,False,1,7177 2020-12-13 23:30:31.457,How to get author's Discord Tag shown,"How do I display the user's Name + Discord Tag? As in: I know that; f""Hello, <@{ctx.author.id}>"" will return the user, and being pinged. (@user) And that; f""Hello, {ctx.author.name}"" will return the user's nickname, but WITHOUT the #XXXX after it. (user) But how do I get it to display the user's full name and tag? (user#XXXX)",To get user#XXXX you can just do str(ctx.author) (or just put it in your f-string and it will automatically be converted to a string). You can also do ctx.author.discriminator to get their tag (XXXX).,0.2012947653214861,False,1,7178 2020-12-14 15:50:01.883,How to scrape data from multiple unrelated sections of a website (using Scrapy),"I have made a Scrapy web crawler which can scrape Amazon. It can scrape by searching for items using a list of keywords and scrape the data from the resulting pages. However, I would like to scrape Amazon for large portion of its product data. I don't have a preferred list of keywords with which to query for items. Rather, I'd like to scrape the website evenly and collect X number of items which is representative of all products listed on Amazon. Does anyone know how scrape a website in this fashion? Thanks.","I'm putting my comment as an answer so that others looking for a similar solution can find it easier. One way to achieve this is to going through each category (furniture, clothes, technology, automotive, etc.) and collecting a set number of items there. Amazon has side/top bars with navigation links to different categories, so you can let it run through there. 
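In Scrapy terms, a bare-bones skeleton of that idea could look something like this (the selectors and the per-category cap are purely illustrative placeholders you would have to adapt):
import scrapy

class CategorySpider(scrapy.Spider):
    name = 'amazon_categories'
    start_urls = ['https://www.amazon.com/']
    items_per_category = 100  # illustrative cap per category

    def parse(self, response):
        # placeholder selector: follow the links found in the category navigation
        for href in response.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse_category)

    def parse_category(self, response):
        # placeholder selector: collect product blocks on the category page
        for product in response.css('div.s-result-item')[:self.items_per_category]:
            yield {'title': product.css('h2 ::text').get()}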
The process would be as follows: Follow category urls from initial Amazon.com parse Use a different parse function for the callback, one that will scrape however many items from that category Ensure that data is writing to a file (it will probably be a lot of data) However, such an approach would not be representative in the proportions of each category in the total Amazon products. Try looking for a ""X number of results"" label for each category to compensate for that. Good luck with your project!",1.2,True,1,7179 2020-12-16 08:12:51.783,How to change colors of pip error messages in windows powershell,"The error messages printed by pip in my Windows PowerShell are dark red on dark blue (default PowerShell background). This is quite hard to read and I'd like to change this, but I couldn't find any hint to how to do this. Even not, if this is a default in Python applied to all stderr-like output, or if it's specific to pip. My configuration: Windows 10, Python 3.9.0, pip 20.2.3. Thanks for your help!","Coloring in pip is done via ANSI escape sequences. So the solution to this problem would be, to either change the way, PowerShell displays ANSI colors or the color scheme pip uses. Pip provides though a command-line switch '--no-color' which can be used to deactivate coloring the output.",0.0,False,1,7180 2020-12-16 12:06:31.327,python api verified number usinf firebase,"I will create python api using Django now I trying to verify phone number using firebase authentication end send SMS to user but I don't know how I will do","The phone number authentication in Firebase is only available from it's client-side SDKs, so the code that runs directly in your iOS, Android or Web app. It is not possible to trigger sending of the SMS message from the server. So you can either find another service to send SMS messages, or to put the call to send the SMS message into the client-side code and then trigger that after it calls your Django API.",1.2,True,1,7181 2020-12-16 16:21:38.647,ImportError: No module named 'sklearn.compose' with scikit-learn==0.23.2,"I'm fully aware of the previous post regarding this error. That issue was with scikit-learn < 0.20. But I'm having scikit-learn 0.23.2 and I've tried uninstall and reinstall 0.22 and 0.23 and I still have this error. Followup: Although pip list told me the scikit-learn version is 0.23.2, but when I ran sklearn.__version__, the real version is 0.18.1. Why and how to resolve this inconsistency? (Uninstall 0.23.2 didn't work)","[RESOLVED] It turned out that my Conda environment has different sys.path as my jupyter environment. The jupyter environment used the system env, which is due to the fact that I installed the ipykernel like this: python -m ipykernel install without use --user flag. 
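(For reference, a kernel registered from inside the environment would normally be installed with something like python -m ipykernel install --user --name my_conda_env, where my_conda_env is just a placeholder for the environment's name.)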
The correct way should be to do so within the Conda env and run pip install jupyter",0.0,False,1,7182 2020-12-17 08:39:49.780,How can I transform a list to array quickly in the framework of Mxnet?,"I have a list which has 8 elements and all of those elements are arrays whose shape is (3,480,364). Now I want to transform this list to an array of shape (8,3,480,364). When I use the command array=nd.array(list), it takes me a lot of time and sometimes it raises an 'out of memory' error. When I try to use the command array=np.stack(list, axis=0) and debug the code, it gets stuck at this step and never produces a result. So I wonder how I can transform a list to an array quickly when I use the Mxnet framework?","Your method of transforming a list of lists into an array is correct, but an 'out of memory' error means you are running out of memory, which would also explain the slowdown. How to check how much RAM you have left: on Linux, you can run free -mh in the terminal. How to check how much memory a variable takes: The function sys.getsizeof tells you memory size in bytes. You haven't said what data type your arrays have, but, say, if they're float64, that's 8 bytes per element, so your array of 8 * 3 * 480 * 364 = 4193280 elements should only take up 4193280 * 8 bytes = about 32 MB. So, unless you have very little RAM left, you probably shouldn't be running out of memory. So, I'd first check your assumptions: does your list really only have 8 elements, do all the elements have the same shape of (3, 480, 364), what is the data type, do you create this array once or a thousand times? You can also check the size of a list element: sys.getsizeof(list[0]). Most likely this will clear it up, but what if your array is really just too big for your RAM? What to do if an array doesn't fit in memory: One solution is to use a smaller data type (dtype=np.float32 for floating point, np.int32 or even np.uint8 for small integer numbers). This will sacrifice some precision for floating point calculations. If almost all elements in the array are zero, consider using a sparse matrix. For training a neural net, you can use a batch training algorithm and only load data into memory in small batches.",0.0,False,1,7183 2020-12-18 05:07:49.150,How do you set up a python project to be able to send to others without them having to manually copy and paste the code into an editor,"I made a cool little project for my friend, basically a timer using tkinter, but I am confused about how to let them access this project without having vscode or pycharm. Is it possible for them to just see the Tkinter window or something like that? Is there an application for this? Sorry if this is a stupid question.","You can just build an .exe (application) of your project. Then just share the application file and anyone can use the application through the .exe. You can use pyinstaller to convert your python code to an exe: pip install pyinstaller, then cd to the project folder and run the following command: pyinstaller --onefile YourFileName.py. If you want to make the exe without the console showing up, use this command instead: pyinstaller --onefile YourFileName.py --noconsole",0.6730655149877884,False,1,7184 2020-12-18 06:28:45.840,Deploy Python Web Scraping files on Azure cloud (function apps),"I have 2 python files that do Web scraping using Selenium and Beautifulsoup and store the results in separate CSV files, say file1.csv and file2.csv. Now, I want to deploy these files on the Azure cloud. I know Azure function apps will be ideal for this.
But, I don't know how Functions app will support Selenium driver on it. Basically, I want to time trigger my 2 web scraping files and store the results in two separate files file1.csv and file2.csv that will be stored in blob storage on Azure cloud. Can someone help me with this task? How can I use the selenium driver on Azure functions app?","Deploying on virtual machines or EC2 is the only option that one can use to achieve this task. Also, with Heroku, we will be able to run selenium on the cloud by adding buildpacks. But when it comes to storing the files, we will not be able to store files on heroku as heroku does not persist the files. So, VMs or EC2 instances are the only options for this task.",1.2,True,1,7185 2020-12-18 19:17:18.420,Do I have to sort dates chronologically to use pandas.DataFrame.ewm?,"I need to calculate EMA for a set of data from csv file where dates are in descending order. When I apply pandas.DataFrame.ewm I get EMA for the latest (by date) equal to the value. This is because ewm starts observation from top to bottom in DataFrame. So far, I could not find option to make it reverse for ewm. So I guess, I will have to reverse all my dataset. Maybe somebody knows how to make ewm start from bottom values? Or is it recommended to always use datetimeindex sorted chronologically? From oldest values on top to newest on the bottom?","From pandas' documentation: Times corresponding to the observations. Must be monotonically increasing and datetime64[ns] dtype. I guess, datetimeindex must be chronological..",1.2,True,1,7186 2020-12-19 15:35:48.737,How should I handle a data set with around 300000 small groups of data tables?,"I have a data science project in Python and I wonder how to manage my data. Some details about my situation: My data consists of a somewhat larger number of football matches, currently around 300000, and it is supposed to grow further as time goes on. Attached to each match are a few tables with different numbers of rows/columns (but similar column formats across different matches). Now obviously I need to iterate through this set of matches to do some computations. So while I don’t think that I can hold the whole database in memory, I guess it would make sense to have at least chunks in memory, do computations on that chunk, and release it. At the moment I have split everything up into single matches, gave each match an ID and created a folder for each match with the ID as folder name. Then I put the corresponding tables as small individual csv files into the folder that belongs to a given match. Additionally, I have an „overview“ DataFrame with some „metadata“ columns, one row per match. I put this row as a small json into each match folder for convenience as well. I guess there would also be other ways to split the whole data set into chunks than match-wise, but for prototyping/testing my code with a small number of matches, it actually turned out to be quite handy to be able to go to a specific match folder in a file manager and look at one of these tables in a spreadsheet program (although similar inspections could obviously also be made from code in appropriate settings). But now I am at the point where this huge number of quite small files/folders slows down the OS so much that I need to do something else. Just to be able to deal with the data at all right now, I simply created an additional layer of folder hierarchy like „range-0“ contains folders 0-9999, „range-1“ contains 10000-19999 etc. But I‘m not sure if this is the way to go forward. 
Maybe I could simply save one chunk - whatever one chunk is - as a json file, but would lose some of the ease of the manual inspection. At least everything is small enough, so that I can do my statistical analyses on a single machine, such that I think I can avoid map/reduce-type algorithms. On another note, I have close to zero knowledge about database frameworks (I have written a few lines of SQL in my life), and I guess I would be the only person making requests to my database, so I am in doubt that this makes sense. But in case it does, what are the advantages of such an approach? So, to the people out there having some experience with handling data in such projects - what kind of way to manage my data, on a conceptual level, would you suggest or recommend to use in such a setting (independent of specific tools/libraries etc.)?","Your arrangement is not bad at all. We are not used to think of it this way, but modern filesystems are themselves very efficient (noSQL) databases. All you have to do is having auxiliary files to work as indexes and metadata so your application can find its way. From your post, it looks like you already have that done to some degree. Since you don't give more specific details of the specific files and data you are dealing with, we can't suggest specific arrangements. If the data is proper to be arranged in an SQL tabular representation, you could get benefits from putting all of it in a database and use some ORM - you'd also have to write adapters to get the Python object data into Pandas for your numeric analysis if you that, and it might end up being a superfluous layer if you are already getting it to work. So, just be sure that whatever adaptations you do to get the files easier to deal with by hand (like the extra layer of folders you mention), don't get in the way of your code - i.e., make your code so that it automatically find its way across this, or any extra layers you happen to create (this can be as simple as having the final game match folder full path as a column in your ""overview"" dataframe)",1.2,True,1,7187 2020-12-19 18:54:45.050,pip install a specific version of PyQt5,"I am using spyder & want to install finplot. However when I did this I could not open spyder and had to uninstall & reinstall anconda. The problem is to do with PyQt5 as I understand. The developer of finplot said that one solution would be to install PyQt5 version 5.9. Error: spyder 4.1.3 has requirement pyqt5<5.13; python_version >= ""3"", but you'll have pyqt5 5.13.0 which is incompatible My question is how would I do this? To install finplot I used the line below, pip install finplot Is there a way to specify that it should only install PyQt5?","As far as I understand you just want to install PyQT5 version 9.0.You can try this below if you got pip installed on your machine pip install PyQt5==5.9 Edit: First you need to uninstall your pyQT5 5.13 pip uninstall PyQt5",0.6730655149877884,False,1,7188 2020-12-19 22:58:12.080,Running another script while sharing functions and variable as in jupyter notebook,"I have a notebook that %run another notebook under JupyterLab. They can call back and forth each other functions and share some global variables. I now want to convert the notebooks to py files so it can be executed from the command line. I follow the advice found on SO and imported the 2nd file into the main one. However, I found out that they can not call each other functions. 
This is a major problem because the 2nd file is a service to the main one, but it uses continuously functions that are part of the main one. Essentially, the second program is non-GUI and it is driven by the main one which is a GUI program. Thus whenever the service program needs to print, it checks to see if a flag is set that tells it that it runs in a GUI mode, and then instead of simple printing it calls a function in the main one which knows how to display it on the GUI screen. I want to keep this separation. How can I achieve it without too much change to the service program?","I ended up collecting all the GUI functions from the main GUI program, and putting them into a 3rd file in a class, including the relevant variables. In the GUI program, just before calling the non GUI program (the service one) I created the class and set all the variables, and in the call I passed the class. Then in the service program I call the functions that are in the class and got the variables needed from the class as well. The changes to the service program were minor - just reading the variables from the class and change the calls to the GUI function to call the class functions instead.",0.0,False,1,7189 2020-12-19 23:06:07.333,How to evaluate trained model Average Precison and Mean AP with IOU=0.3,"I trained a model using Tensorflow object detection API using Faster-RCNN with Resnet architecture. I am using tensorflow 1.13.1, cudnn 7.6.5, protobuf 3.11.4, python 3.7.7, numpy 1.18.1 and I cannot upgrade the versions at the moment. I need to evaluate the accuracy (AP/mAP) of the trained model with the validation set for the IOU=0.3. I am using legacy/eval.py script on purpose since it calculates AP/mAP for IOU=0.5 only (instead of mAP:0.5:0.95) python legacy/eval.py --logtostderr --pipeline_config_path=training/faster_rcnn_resnet152_coco.config --checkpoint_dir=training/ --eval_dir=eval/ I tried several things including updating pipeline config file to have min_score_threshold=0.3: eval_config: { num_examples: 60 min_score_threshold: 0.3 .. Updated the default value in the protos/eval.proto file and recompiled the proto file to generate new version of eval_pb2.py // Minimum score threshold for a detected object box to be visualized optional float min_score_threshold = 13 [default = 0.3]; However, eval.py still calculates/shows AP/mAP with IOU=0.5 The above configuration helped only to detect objects on the images with confidence level < 0.5 in the eval.py output images but this is not what i need. Does anybody know how to evaluate the model with IOU=0.3?",I finally could solve it by modifing hardcoded matching_iou_threshold=0.5 argument value in multiple method arguments (especially def __init) in the ../object_detection/utils/object_detection_evaluation.py,1.2,True,1,7190 2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300 Can someone please explain?","train_test_split splits arrays or matrices into random train and test subsets. That means that everytime you run it without specifying random_state, you will get a different result, this is expected behavior. 
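A tiny sketch (X and y here stand for whatever feature matrix and label vector you already have):
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
# fixing the seed makes both the split and the forest reproducible from run to run
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)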
When you use random_state=any_value, your code will show exactly the same behaviour every time you run it.",0.0,False,3,7191 2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300 Can someone please explain?",Random forests introduce stochasticity by randomly sampling data and features. Running RF on the exact same data may produce different outcomes for each run due to these random samplings. Fixing the seed to a constant (e.g. 1) will eliminate that stochasticity and will produce the same results for each run.,0.0,False,3,7191 2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300 Can someone please explain?","In addition, many people use the number 42 for random_state. For example, random_state = 42, and there's a reason for that. Below is the background. The number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the ""Answer to the Ultimate Question of Life, the Universe, and Everything"", calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. Unfortunately, no one knows what the question is.",0.0,False,3,7191 2020-12-20 23:15:46.737,Get the number of boosts in a server discord.py,"I am trying to make a server info command and I want it to display the server name, boost count, boosting members and some other stuff as well. The only problem is I have looked at the docs and searched online and I can't find out how to get the boost information. I don't have any code as I've not found any code to try and use for myself. Is there any way to get this information?","Guild name - guild_object.name; boost count - guild_object.premium_subscription_count; boosters, the people who boosted the server - guild_object.premium_subscribers. If you're doing this in a command, as I assume, use ctx.guild instead of guild_object. For anything further, you can re-read the docs as all of the above information is in it under discord.Guild",1.2,True,1,7192 2020-12-21 17:02:29.590,find frequency of an int appearing in a list of intervals,"I was given a list of intervals, for example [[10,40],[20,60]], and a list of positions [5,15,30]; we should return how often each position is covered by the intervals. The answer would be [[5,0],[15,1],[30,2]] because 5 isn't covered by any interval, 15 is covered once, and 30 is covered twice. If I just do a for loop the time complexity would be O(m*n), where m is the number of intervals and n is the number of positions. Can I preprocess the intervals and make it faster? I was thinking of sorting the intervals first and using binary search, but I am not sure how to implement it in python. Can someone give me a hint? Or can I use a hashtable to store intervals? What would be the time complexity for that?","You can use a frequency array to preprocess all interval data and then query for any value to get the answer. Specifically, create an array able to hold the min and max possible end-points of all the intervals. Then, for each interval, increment the frequency of the starting interval point and decrease the frequency of the value just after the end interval.
At the end, accumulate this data for the array and we will have the frequency of occurrence of each value between the min and max of the interval. Each query is then just returning the frequency value from this array. freq[] --> larger than max-min+1 (min: minimum start value, max: maximum end value) For each [L,R] --> freq[L]++, freq[R+1] = freq[R+1]-1 freq[i] = freq[i]+freq[i-1] For any query V, answer is freq[V] Do consider tradeoffs when range is very large compared to simple queries, where simple check for all may suffice.",0.0,False,1,7193 2020-12-22 08:56:10.800,"Convert Json format String to Link{""link"":""https://i.imgur.com/zfxsqlk.png""}","I try to convert this String to only the link: {""link"":""https://i.imgur.com/zfxsqlk.png""} I'm trying to create a discord bot, which sends random pictures from the API https://some-random-api.ml/img/red_panda. With imageURL = json.loads(requests.get(redpandaurl).content) I get the json String, but what do I have to do that I only get the Link like this https://i.imgur.com/zfxsqlk.png Sorry if my question is confusingly written, I'm new to programming and don't really know how to describe this problem.","What you get from json.loads() is a Python dict. You can access values in the dict by specifying their keys. In your case, there is only one key-value pair in the dict: ""link"" is the key and ""https://i.imgur.com/zfxsqlk.png"" is the value. You can get the link and store it in the value by appending [""link""] to your line of code: imageURL = json.loads(requests.get(redpandaurl).content)[""link""]",0.0,False,1,7194 2020-12-23 07:39:41.123,Finding or building a python security profiler,"I want a security profiler for python. Specifically, I want something that will take as input a python program and tell me if the program tries to make system calls, read files, or import libraries. If such a security profiler exists, where can I find it? If no such thing exists and I were to write one myself, where could I have my profiler 'checked' (that is, verified that it works). If you don't find this question appropriate for SO, let me know if there is another SE site I can post this on, or if possible, how I can change/rephrase my question. Thanks","Usually, python uses an interpreter called CPython. It is hard to say for python code by itself if it opens files or does something special, due a lot of python libraries and interpreter itself are written in C, and system calls/libc calls can happen only from there. Also python syntax by itself can be very obscure. So, by answering your suspect: I suspect this would need specific knowledge of the python programming language, it does not look like that, due it is about C language. You can think it is possible to patch CPython itself. Well it is not correct too as I guess. A lot of shared libraries use C/C++ code as CPython itself. Tensorflow, for example. Going further, I guess it is possible to do following things: patch the compiler which compiles C/C++ code for CPython/modules, which is hard I guess. just use an usual profiler, and trace which files, directories and calls are used by python itself for operation, and whitelist them, due they are needed, which is the best option by my opinion (AppArmor for example). maybe you can be interested in the patching of CPython itself, where it is possible to hook needed functions and calls to external C libraries, but it can be annoying due you will have to revise every added library to your project, and also C code is often used for performance (e.g. 
json module), which doesn't open too much things.",1.2,True,1,7195 2020-12-23 23:02:35.663,How can I let the user of an Django Admin Page control which list_display fields are visible?,"I have an ModelAdmin with a set of fields in list_display. I want the user to be able to click a checkbox in order to add or remove these fields. Is there a straightforward way of doing this? I've looked into Widgets but I'm not sure how they would change the list_display of a ModelAdmin","To do this I had to Override an admin template (and change TEMPLATES in settings.py). I added a form with checkboxes so user can set field Add a new model and endpoint to update it (the model stores the fields to be displayed, the user submits a set of fields in the new admin template) Update admin.py, overriding get_list_display so it sets fields based on the state of the model object updated",1.2,True,1,7196 2020-12-24 16:49:26.270,What is the difference between a+=1 and a=+1..?,"how to understand difference between a+=1 and a=+1 in Python? it seems that they're different. when I debug them in Python IDLE both were having different output.","It really depends on the type of object that a references. For the case that a is another int: The += is a single operator, an augmented assignment operator, that invokes a=a.__add__(1), for immutables. It is equivalent to a=a+1 and returns a new int object bound to the variable a. The =+ is parsed as two operators using the normal order of operations: + is a unary operator working on its right-hand-side argument invoking the special function a.__pos__(), similar to how -a would negate a via the unary a.__neg__() operator. = is the normal assignment operator For mutables += invokes __iadd__() for an in-place addition that should return the mutated original object.",0.1016881243684853,False,2,7197 2020-12-24 16:49:26.270,What is the difference between a+=1 and a=+1..?,"how to understand difference between a+=1 and a=+1 in Python? it seems that they're different. when I debug them in Python IDLE both were having different output.","a+=1 is a += 1, where += is a single operator meaning the same as a = a + 1. a=+1 is a = + 1, which assigns + 1 to the variable without using the original value of a",0.2012947653214861,False,2,7197 2020-12-24 19:05:39.640,different python files sharing the same variables,"I would like to know please, how can I define variables in a python file and share these variables with their values with multiple python files?","You can create a python module Create a py file inside that module define variables and import that module in the required places.",0.0,False,2,7198 2020-12-24 19:05:39.640,different python files sharing the same variables,"I would like to know please, how can I define variables in a python file and share these variables with their values with multiple python files?","To do this, you can create a new module specifically for storing all the global variables your application might need. For this you can create a function that will initialize any of these globals with a default value, you only need to call this function once from your main class, then you can import the globals file from any other class and use those globals as needed.",1.2,True,2,7198 2020-12-25 13:44:08.990,How to connect a Python Flask backend to a React front end ? How does it work together?,I am making a website. And I want to know how to connect React js to my Flask back end. I have tried searching online but unfortunately it was not what I am looking for. 
If you know how to do it please recomend me some resources. And I also want to know the logic of how Flask and React work together.,"Flask is a backend micro-service and react is a front-end framework. Flask communicates with the database and makes the desired API hit points. The backend listens for any API request and sends the corresponding response as a JSON format. So using React you can make HTTP requests to the backend. For testing purposes have the backend and frontend separated and communicate only using the REST APIs. For production, use the compiled js of React as static files and render only the index.html of the compiled react from the backend. P.S: I personally recommend Django rest framework over flask if you are planning to do huge project.",1.2,True,1,7199 2020-12-26 19:08:35.663,AES 128 bit encryption of bitstream data in python,"I am trying to encrypt a bitstream data or basically a list of binary data like this [1,0,1,1,1,0,0,1,1,0,1,1,0,1] in python using AES encryption with block size of 128bit, the problem is that i want the output to be binary data as well and the same size as the original binary data list, is that possible?how do i do that?","Yes, there are basically two ways: You have a unique value tied to the data (for instance if they are provided in sequence then you can create a sequence number) then you can simply use the unique value as nonce and then use AES encryption in counter mode. Counter mode doesn't expand the data but it is insecure if no nonce is supplied. Note that you do need the nonce when decrypting. You use format preserving encryption or FPE such as FF1 and FF3 defined by NIST. There are a few problems with this approach: there are issues with these algorithms if the amount of input data is minimal (as it seems to be in your case); the implementations of FF1 and FF3 are generally hard to find; if you have two unique bit values then they will result in identical ciphertext. Neither of these schemes provide integrity or authenticity of the data obviously, and they by definition leak the size of the plaintext.",1.2,True,1,7200 2020-12-26 21:26:15.483,Running encrypted python code using RSA or AES encryption,"As I was working on a project the topic of code obfuscation came up, as such, would it be possible to encrypt python code using either RSA or AES and then de-code it on the other side and run it?. And if it's possible how would you do it?. I know that you can obfuscate code using Base64, or XOR, but using AES or RSA would be an interesting application. This is simply a generic question for anyone that may have an idea on how to do it. I am just looking to encrypt a piece of code from point A, send it to point B, have it decrypted at point B and run there locally using either AES or RSA. It can be sent by any means, as long as the code itself is encrypted and unreadable.","Yes this is very possible but would require some setup to work. First off Base64 is an encoder for encoding data from binary/bytes to a restricted ascii/utf subset for transmission usually over http. Its not really an obfuscator, more like a packager for binary data. So here is what is needed for this to work. A pre-shared secret key that both point A and point B have. This key cannot be transmitted along with the code since anyone who gets the encrypted code would also get the key to decrypt it. There would need to be an unencrypted code/program that allows you to insert that pre-shared key to use to decrypt the encrypted code that was sent. 
You can't hardcode the key into the decryptor, since then anyone with the decryptor could decrypt the code, and if the secret key is leaked you would have to send out a new decryptor to use a different key. Once it's decrypted, the ""decryptor"" could save that code to a file for you to run, run the code itself using console commands, or, if it's a Python program, call eval or use importlib to import that code and call the functions within. WARNING: eval is known to be dangerous since it will execute whatever code it reads. If you use eval with code you don't trust, it can download a virus, grab info from your computer, or anything really. DO NOT RUN UNTRUSTED CODE. Also, there is a difference between AES and RSA. One is a symmetric cipher and the other is asymmetric. Both will work for what you want, but they require different things for encryption and decryption. One uses a single key for both, while the other uses one key for encryption and one for decryption. So something to think about.",1.2,True,1,7201 2020-12-29 07:50:36.320,How to send and receive data (and/or data structures) from a C++ program to a Python script?,"I am working on a project that needs to do the following: [C++ Program] Checks a given directory, extracts all the names (full paths) of the found files and records them in a vector. [C++ Program] ""Send"" the vector to a Python script. [Python Script] ""Receive"" the vector and transform it into a List. [Python Script] Compares the elements of the List (the paths) against the records of a database and removes the matches from the List (removes the paths already registered). [Python Script] ""Sends"" the processed List back to the C++ Program. [C++ Program] ""Receives"" the List, transforms it into a vector and continues its operations with this processed data. I would like to know how to send and receive data structures (or data) between a C++ program and a Python script. For this case I used the example of a vector transforming into a List; however, I would like to know how to do it for any structure or data in general. Obviously I am a beginner, which is why I would like your help on what documentation to read, what concepts I should start with, what technique I should use (maybe there is some implicit standard), and what links I could review to learn how to communicate data between scripts of the languages I just mentioned. Any help is useful to me.","If the idea is to execute the Python script from the C++ process, then the easiest approach would be to design the Python script to accept input_file and output_file as arguments; the C++ program should write the input_file, start the script and read the output_file. For simple structures like a list of strings, you can simply write them as text files and share those, but for more complex types you can use Google Protocol Buffers to do the marshalling/unmarshalling. If the idea is to send/receive data between two already started processes, then you can use the same Protocol Buffers to encode the data and send/receive it via sockets between each other. Check gRPC.",0.0,False,1,7202 2020-12-30 17:33:11.363,Unable to get LabJack U3 module loaded into PyCharm properly,I am trying to use a LabJack U3 device with Python and I am using PyCharm for development of my code. I am new to both Python and PyCharm. In the LabJack documentation they say to run python setup.py install in the directory where I downloaded their Python library for using their device. 
I did this and when running under a plain Python console I can get import u3 to run and am able to access the U3 device. Yet when I run this in PyCharm I cannot get it to run. It always tells me the module was not found. I have asked LabJack for help but they do not know PyCharm. I have looked on the net but I can't seem to see how to get the module installed properly under PyCharm. Could I please get some help on how to do this properly?,First you'll want to install that module from inside PyCharm's settings; if it's still not working then try importing the module in PyCharm's terminal and then run your Python script.,0.0,False,1,7203 2020-12-31 05:11:06.240,Hyperparameter tuning and classification algorithm comparison,"I have a doubt about classification algorithm comparison. I am doing a project regarding hyperparameter tuning and classification model comparison for a dataset. The goal is to find the best-fitting model with the best hyperparameters for my dataset. For example: I have 2 classification models (SVM and Random Forest), my dataset has 1000 rows and 10 columns (9 columns are features) and the last column is the label. First of all, I split the dataset into 2 portions (80-20) for training (800 rows) and testing (200 rows) respectively. After that, I use Grid Search with CV = 10 to tune the hyperparameters on the training set with these 2 models (SVM and Random Forest). When the hyperparameters are identified for each model, I use them to compute the Accuracy_score on the training and testing sets again in order to find out which model is the best one for my data (conditions: Accuracy_score on training set < Accuracy_score on testing set (not overfitting), and whichever model has the higher Accuracy_score on the testing set is the best model). However, SVM shows an accuracy_score on the training set of 100 and an accuracy_score on the testing set of 83.56, which means the SVM with tuned hyperparameters is overfitting. On the other hand, Random Forest shows an accuracy_score on the training set of 72.36 and an accuracy_score on the testing set of 81.23. It is clear that the accuracy_score on the testing set for SVM is higher than that for Random Forest, but SVM is overfitting. I have some questions as below: 1. Is my method correct when I compare the accuracy_score on the training and testing sets as above instead of using cross-validation? (If I should use cross-validation, how do I do it?) 2. It is clear that the SVM above is overfitting, but its accuracy_score on the testing set is higher than that of Random Forest; could I conclude that SVM is the best model in this case? Thank you!","I would suggest splitting your data into three sets, rather than two: training, validation, and testing. Training is used to train the model, as you have been doing. The validation set is used to evaluate the performance of a model trained with a given set of hyperparameters. The optimal set of hyperparameters is then used to generate predictions on the test set, which wasn't part of either training or hyperparameter selection. You can then compare performance on the test set between your classifiers. The large decrease in performance of your SVM model on your validation dataset does suggest overfitting, though it is common for a classifier to perform better on the training dataset than on an evaluation or test dataset.",0.0,False,1,7204 2020-12-31 06:41:56.733,Equivalent gray value of a color given the LAB values,"I have an RGB image and I converted it to Lab colorspace. 
Now, I want to convert the image in LAB space to a grayscale one. I know that L is NOT the same as Luminance. So, any idea how to get the equivalent gray value of a specific color in Lab space? I'm looking for a formula or algorithm to determine the equivalent gray value of a color given the LAB values.","The conversion from Luminance Y to Lightness L* is defined by the CIE 1976 Lightness Function. Put another way, L* transforms linear values into non-linear values that are perceptually uniform for the Human Visual System (HVS). With that in mind, your question now depends on what kind of gray you are looking for: if perceptually uniform and thus non-linear, the Lightness channel from CIE L*a*b* is actually that of CIE 1976 and is appropriate. If you need something linear, you would have to convert back to CIE XYZ tristimulus values and use the Y channel.",0.3869120172231254,False,1,7205 2020-12-31 13:28:50.363,"Creating a JSON file in python, where they are not separated by commas","I'm looking to create the below JSON file in python. I do not understand how I can have multiple dictionaries that are not separated by commas, so that when I use the JSON library to save the dictionary to disk I get the below JSON; {""text"": ""Terrible customer service."", ""labels"": [""negative""], ""meta"": {""wikiPageID"": 1}} {""text"": ""Really great transaction."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 2}} {""text"": ""Great price."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 3}} instead of a list of dictionaries like below; [{""text"": ""Terrible customer service."", ""labels"": [""negative""], ""meta"": {""wikiPageID"": 1}}, {""text"": ""Really great transaction."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 2}}, {""text"": ""Great price."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 3}}] The difference is that in the first example each line is a dictionary, and they are not in a list or separated by commas. Whereas the second example, which is what I'm able to come up with, is a list of dictionaries, with each dictionary separated by a comma. I'm sorry if this is a stupid question; I have been breaking my head over this for weeks and have not been able to come up with a solution. Any help is appreciated. And thank you in advance.","The way you want to store the data in one file isn't possible as a single JSON document. A JSON file can only contain one top-level object. This means that you can either have one object defined within curly braces, or an array of objects as you mentioned. If you want to store each object as a JSON object you should use separate files each containing a single object. (What you are describing, with one object per line, is often called the JSON Lines format, where each line is produced by its own json.dumps call.)",0.0,False,1,7206 2020-12-31 21:45:40.700,save user input data in kivy and store for later use/analysis python,"I am a kivy n00b, using python, and am not sure if this is the right place to ask. Can someone please explain how a user can input data in an Android app, and how/where it is stored (SQL table, csv, xml?). I am also confused as to how it can be extended/used for further analysis. I think it should be held as a SQL table, but do not understand how to save/set up a SQL table in an android app, nor how to access it. Similarly, I do not understand how to save/append/access a csv/xml document, nor, if these are made, how they are kept secure from accidental deletion, overwriting, etc. In essence, I want to save only the timestamp at which a user enters some data, and the corresponding values (max 4). 
User input would consist of 4 variables, x1, x2, x3, x4, and I would write a SQL statement along the lines of insert into data.table timestamp, x1, x2, x3, x4, and then to access the data something along the lines of select * from data.table, and then do/show stuff. Can someone offer suggestions on what resources to read? How do I set up a SQL Server table in an android app?","This works basically the same way on Android as on the desktop: you have access to the local filesystem to create/edit files (at least within the app directory), so you can read and write whatever data storage format you like. If you want to use a database, sqlite is the simplest and most obvious option.",1.2,True,1,7207 2021-01-01 02:54:19.350,"Django: Channels and Web Socket, how to make group chats exclusive","E.g. I have a chat application; however, I realised that for my application, as long as you have the link to the chat, you can enter. How do I prevent that, and make it such that only members of the group chat can access the chat? Something like password-protecting the URL to the chat, or perhaps something like WhatsApp. Does anyone have any suggestions and reference material as to how I should build this and implement the function? Thank you!","I am in the exact same situation as you. What I am thinking of doing is: store the group_url and the respective user_ids (which we get from Django's authentication) in a table (with two columns, group_url and allowed_user_ids) or in Redis. Then when a client connects to a channel, say chat/1234 (where 1234 is the group_url), we get the id of that user using self.scope['user'].id and check it in the table. If the user_id is allowed for the respective group_url, we accept the connection; else we reject the connection. I am new to this too. Let me know if you find a better approach.",1.2,True,1,7208 2021-01-01 21:31:38.310,Discord.py get user with Name#0001,"How do I get the user/member object in discord.py with only the Name#Discriminator? I searched for a few hours now and didn't find anything. I know how to get the object using the id, but is there a way to convert Name#Discriminator to the id? The user may not be in the server.","There's no way to do it if you aren't sure they're in the server. If you are, you can search through the server's members, but otherwise it wouldn't make sense. Usernames/discriminators change all the time, while IDs remain unique, so it would become a huge headache trying to implement that. Try doing what you want by ID, or search the server.",0.0,False,1,7209 2021-01-03 12:30:31.743,Get embed footer from reaction message,"I want the person who used the command to be able to delete the result. I have put the user's ID in the footer of the embed, and my question is: how do I get that data from the message that the user reacted to? reaction.message.embed.footer doesn't work. I currently don't have code as I was trying to get that ID first. Thanks in advance!","The discord.Message object has no attribute embed, but it has embeds, which returns a list of the embeds that the message has. So you can simply do: reaction.message.embeds[0].footer.",1.2,True,1,7210 2021-01-03 19:40:29.317,How to do auto login in python with sql database?,How can I make a login form that remembers the user so that they do not have to log in next time?,"Some more information would be nice, but if you want to use a database for this then you would have to create an entry for the user information last entered. Then on reopening the program you would check if there are any entries and if so load them. 
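A minimal sketch of that database approach, assuming a local sqlite3 file and a table whose names (app_data.db, saved_login) are just placeholders; it remembers only the last username, not a password, since storing a password like this is unsafe as discussed next:
import sqlite3
conn = sqlite3.connect('app_data.db')
conn.execute('CREATE TABLE IF NOT EXISTS saved_login (username TEXT)')
def remember(username):
    # keep only the most recent login
    conn.execute('DELETE FROM saved_login')
    conn.execute('INSERT INTO saved_login (username) VALUES (?)', (username,))
    conn.commit()
def last_user():
    # returns the remembered username, or None if nobody has logged in yet
    row = conn.execute('SELECT username FROM saved_login').fetchone()
    return row[0] if row else None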
But I think that writing the login information to a file on your PC would be a lot easier. So you run the steps from above, just writing to a file instead of a database. I am not sure how you would make this secure, because you can't really encrypt the password: you would need a password or key of some type, and that password or key would be easy to find in the source code, especially in Python. It would be harder to find in other, compiled programming languages, but it would still be there somewhere. And if you were to use a database you would have a password for that, but that would also sit on the hard drive if not otherwise encrypted, and there we are back where we started. So as mentioned above, a database would be quite useless for a task like this, because it doesn't improve anything and is a hassle for beginners to set up.",0.0,False,1,7211 2021-01-04 08:15:55.150,CloudWatch Alarm for Aurora Data Dump Automation to S3 Bucket,"I need your advice on something that I'm working on as a part of my work. I'm working on automating the Aurora dump to an S3 bucket every midnight. As a part of it, I have created an EC2 instance that generates the dump, and I have written a Python script using boto3 which moves the dump to the S3 bucket every night. I need to notify a list of developers if the data dump doesn't take place for some reason. As of now, I'm posting a message to an SNS topic which notifies the developers if the backup doesn't happen. But I need to do this with CloudWatch and I'm not sure how to do it. Your help will be much appreciated. Thanks!",I have created a custom metric to which I have attached a CloudWatch alarm; it gets triggered if there's an issue in the data backup process.,0.0,False,1,7212 2021-01-04 20:54:14.400,Installations on WSL?,"I use Python Anaconda and Visual Studio Code for Data Science and Machine Learning projects. I want to learn how to use Windows Subsystem for Linux, and I have seen that tools such as Conda or Git can be installed directly there, but I don't quite understand the difference between a common Python Anaconda installation and a Conda installation in WSL. Is one better than the other? Or should I have both? How should I integrate WSL into my work with Anaconda, Git, and VS Code? What advantages or disadvantages does it have? Help please, I hate not installing my tools properly and then having a mess of folders, environment variables, etc.","If you use conda it's better to install it directly on Windows rather than in WSL. Think of WSL as a virtual machine on your current PC, but much faster than you would think. Its most useful role would be as an alternate base for Docker. You can run a whole lot of stuff with Windows integration from WSL, which includes VS Code. You can launch VS Code as if it were run from within that OS, with all native extension and app support. You can also access the entire Windows filesystem from WSL and vice versa, so integrating Git with it won't be a bad idea.",1.2,True,1,7213 2021-01-04 23:27:42.213,discord.py get all permissions a bot has,So I am developing a bot using discord.py and I want to get all the permissions the bot has in a specific guild. I already have the Guild object but I don't know how to get the permissions the bot has. I already looked through the documentation but couldn't find anything in that direction...,"From a Member object, like guild.me (a Member object similar to Bot.user, essentially a Member object representing your bot), you can get the permissions that member has from the guild_permissions attribute.",1.2,True,1,7214
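For illustration only, a minimal sketch of reading those permissions inside a command, assuming a commands.Bot instance named bot is already set up (the command name myperms is made up for this example):
from discord.ext import commands
@bot.command()
async def myperms(ctx):
    # guild.me is the bot's own Member object in this guild
    perms = ctx.guild.me.guild_permissions
    # a discord.Permissions object iterates as (permission_name, granted) pairs
    granted = [name for name, value in perms if value]
    await ctx.send('I have: ' + ', '.join(granted))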